Multi-Device Control System and Method and Non-Transitory Computer-Readable Medium Storing Component for Executing the Same (Patent Application)

U.S. patent application number 16/487368 was filed with the patent office on April 19, 2019, and published on 2021-10-28 as publication number 20210335354 for a multi-device control system and method and non-transitory computer-readable medium storing a component for executing the same. This patent application is currently assigned to LG ELECTRONICS INC. The applicant listed for this patent is LG ELECTRONICS INC. Invention is credited to Jisoo PARK.

United States Patent Application 20210335354
Application Number: 16/487368
Family ID: 1000005707576
Kind Code: A1
Publication Date: October 28, 2021
Inventor: PARK; Jisoo

MULTI-DEVICE CONTROL SYSTEM AND METHOD AND NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING COMPONENT FOR EXECUTING THE SAME

Abstract

Disclosed is a multi-device control method including: performing a voice recognition operation on a voice command generated from a sound source; identifying distances between each of a plurality of devices and the sound source; assigning response rankings to the devices by combining a context-specific correction score of each device corresponding to the voice command and the distances; and selecting a device to respond to the voice command from among the devices according to the response rankings.

Inventors: PARK; Jisoo (Seoul, KR)

Applicant:
Name: LG ELECTRONICS INC.
City: Seoul
Country: KR

Assignee: LG ELECTRONICS INC. (Seoul, KR)
Family ID: 1000005707576
Appl. No.: 16/487368
Filed: April 19, 2019
PCT Filed: April 19, 2019
PCT No.: PCT/KR2019/004728
371 Date: August 20, 2019
Current U.S. Class: 1/1
Current CPC Class: G10L 15/32 20130101; G10L 15/10 20130101; G10L 15/14 20130101; G10L 15/22 20130101; G10L 15/063 20130101; G10L 15/16 20130101
International Class: G10L 15/22 20060101 G10L 015/22; G10L 15/10 20060101 G10L 015/10; G10L 15/14 20060101 G10L 015/14; G10L 15/16 20060101 G10L 015/16; G10L 15/32 20060101 G10L 015/32; G10L 15/06 20060101 G10L 015/06

Claims

1. A multi-device control method comprising: performing a voice recognition operation on a voice command generated from a sound source; identifying distances between each of a plurality of devices and the sound source; assigning response rankings to the devices by combining a context-specific correction score of each device corresponding to the voice command and the distances; and selecting a device to respond to the voice command from among the devices according to the response rankings.

2. The method of claim 1, wherein the context-specific correction score is determined on the basis of score base information related to each of the devices for the voice command.

3. The method of claim 2, wherein the score base information comprises at least one of always-on characteristic information, device on/off information, device control state information, user usage pattern information for a device, and usage environment information.

4. The method of claim 1, wherein the identifying of the distances between each of the devices and the sound source comprises calculating the distances on the basis of decibel information which is output from each of the devices and which corresponds to a magnitude of the voice command.

5. The method of claim 1, wherein the identifying of the distances between each of the devices and the sound source comprises: identifying a voice signal metric value received from each of the devices; and determining the distances on the basis of the voice signal metric values.

6. The method of claim 5, wherein the voice signal metric values comprise a signal-to-noise ratio, a voice spectrum, and a voice energy for the voice command.

7. The method of claim 1, wherein the context-specific correction score of each device corresponding to the voice command is defined in a database.

8. The method of claim 1, wherein the context-specific correction score of each device corresponding to the voice command is updated according to a specific voice command and a device context.

9. The method of claim 7, wherein the database is constructed through a deep learning-based base learning model.

10. The method of claim 8, wherein the context-specific correction score of each device corresponding to the voice command is updated through an artificial intelligence (AI) agent training module according to a specific voice command and a device context.

11. A non-transitory computer-readable medium storing a computer-executable component configured to be executed in at least one processor of a computing device, wherein the computer-executable component performs a voice recognition operation on a voice command generated from a sound source; identifies distances between each of a plurality of devices and the sound source; assigns response rankings to the devices by combining a context-specific correction score of each device corresponding to the voice command and the distances; and selects a device to respond to the voice command from among the devices according to the response rankings.

12. A multi-device control system comprising: a voice recognition module performing a voice recognition operation on a voice command generated from a sound source; a distance identification module identifying distances between each of a plurality of devices and the sound source; and a processor assigning response rankings to the devices by combining a context-specific correction score of each device corresponding to the voice command and the distances, and selecting a device to respond to the voice command from among the devices according to the response rankings.

13. The multi-device control system of claim 12, wherein the voice recognition module, the distance identification module, and the processor are mounted in a cloud server connected to the devices by a communication network.

14. The multi-device control system of claim 12, wherein the voice recognition module, the distance identification module, and the processor are mounted in an internal server of any one of the devices connected to each other by a communication network.

15. The multi-device control system of claim 12, further comprising: a storage unit including a database in which a context-specific correction score of each device corresponding to the voice command is defined.

16. The multi-device control system of claim 15, wherein the context-specific correction score is determined on the basis of score base information related to each of the devices for the voice command.

17. The multi-device control system of claim 16, wherein the score base information comprises at least one of always-on characteristic information, device on/off information, device control state information, user usage pattern information for a device, and usage environment information.

18. The multi-device control system of claim 15, further comprising: an artificial intelligence (AI) agent module updating the context-specific correction score of each device corresponding to the voice command according to a specific voice command and a device context.

19. The multi-device control system of claim 12, wherein each of the devices includes a decibel sensor generating decibel information corresponding to a magnitude of the voice command, and the distance identification module calculates the distance on the basis of the decibel information.

20. The multi-device control system of claim 12, wherein the distance identification module identifies a voice signal metric value received from each of the devices, and calculates the distance on the basis of the voice signal metric value.

21. The multi-device control system of claim 20, wherein the voice signal metric value comprises a signal-to-noise ratio for the voice command, a voice spectrum, and voice energy.

Description

TECHNICAL FIELD

[0001] The present invention relates to a multi-device control system, and more particularly, to a multi-device control system and method for a plurality of devices controlled according to a user's voice command and a non-transitory computer-readable medium storing a component for executing the same.

BACKGROUND ART

[0002] Smart home refers to a new type of housing that provides various types of automation services on the basis of communication. In the smart home, a user may communicate with various home appliances, and the home appliances may be controlled according to the user's voice command.

[0003] If there are several home appliances, it may be difficult to obtain a desired control result with a user's voice alone. To obtain a desired control result, a first method of including a main keyword in a voice command and a second method of preferentially controlling the home appliance at the shortest distance from the user may be considered.

[0004] In the first method, the user may specify a control target by including a product name (TV, air-conditioner, etc.) of a home appliance in a voice command such as "Turn off TV!" or "Turn off air-conditioner!". With the first method, it may be difficult to obtain a desired control result unless the product name of the home appliance is included in the voice command, which degrades user convenience.

[0005] Regarding the second method, it may be difficult to obtain a desired control result from only the distance of the user to a home appliance. For example, if a user located near the air-conditioner to cool off issues a voice command "Turn off" because the user wants to turn off the TV, the air-conditioner may be turned off instead.

DISCLOSURE

Technical Problem

[0006] The present invention aims to address the above-mentioned needs and/or problems.

[0007] It is an object of the present invention to cause a home appliance corresponding to a user's intention to be controlled by voice even if a main keyword specifying a response target is not included in a voice command.

Technical Solution

[0008] According to an aspect of the present invention, there is provided a multi-device control method including: performing a voice recognition operation on a voice command generated from a sound source; identifying distances between each of a plurality of devices and the sound source; assigning response rankings to the devices by combining a context-specific correction score of each device corresponding to the voice command and the distances; and selecting a device to respond to the voice command from among the devices according to the response rankings.
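
The combining step lends itself to a compact illustration. The following Python sketch is not part of the patent: the Device fields, the closeness transform, and the additive weighting are illustrative assumptions showing how a context-specific correction score can outrank raw proximity.

    # Illustrative sketch only; the patent does not prescribe this formula.
    from dataclasses import dataclass

    @dataclass
    class Device:
        name: str
        distance_m: float        # estimated distance from the sound source
        correction_score: float  # context-specific correction score for this command

    def response_ranking(devices):
        # Turn distance into a closeness score, then add the correction score;
        # the additive combination is an assumed stand-in for "combining".
        def combined(d):
            return 1.0 / (1.0 + d.distance_m) + d.correction_score
        return sorted(devices, key=combined, reverse=True)

    devices = [
        Device("air-conditioner", distance_m=1.0, correction_score=0.0),
        Device("TV", distance_m=4.0, correction_score=0.9),  # context favors the TV
    ]
    print(response_ranking(devices)[0].name)  # TV responds despite being farther away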

[0009] The context-specific correction score may be determined on the basis of score base information related to each of the devices for the voice command.

[0010] The score base information may include at least one of always-on characteristic information, device on/off information, device control state information, user usage pattern information for a device, and usage environment information.

[0011] The identifying of the distances between each of the devices and the sound source may include: calculating the distances on the basis of decibel information which is output from each of the devices and which corresponds to a magnitude of the voice command.
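
As a rough illustration of how decibel information can yield a distance: for a point source, sound pressure level falls by about 6 dB per doubling of distance. The sketch below inverts that relation; the free-field propagation model and the 60 dB-at-1-m reference level are assumptions for illustration, not the patent's method.

    # Hedged sketch: assumes free-field inverse-square propagation,
    # L = L_ref - 20*log10(d / d_ref), solved for d.
    def estimate_distance_m(measured_db, ref_db=60.0, ref_distance_m=1.0):
        return ref_distance_m * 10 ** ((ref_db - measured_db) / 20.0)

    print(round(estimate_distance_m(48.0), 1))  # ~4.0 m if the command is 60 dB at 1 m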

[0012] The identifying of the distances between each of the devices and the sound source may include: identifying a voice signal metric value received from each of the devices; and determining the distances on the basis of the voice signal metric values.

[0013] The voice signal metric values may include a signal-to-noise ratio, a voice spectrum, and a voice energy for the voice command.
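
Where absolute distances are not required, the metric values can order the devices directly, since a device that hears the command with higher SNR and voice energy is presumably closer. A minimal sketch follows; the equal weighting and field names are illustrative assumptions.

    def order_by_presumed_distance(metrics):
        # metrics: device name -> {"snr_db": ..., "voice_energy_db": ...};
        # returns device names from presumed nearest to farthest.
        def strength(name):
            m = metrics[name]
            return 0.5 * m["snr_db"] + 0.5 * m["voice_energy_db"]
        return sorted(metrics, key=strength, reverse=True)

    print(order_by_presumed_distance({
        "TV": {"snr_db": 18.0, "voice_energy_db": -20.0},
        "air-conditioner": {"snr_db": 9.0, "voice_energy_db": -31.0},
    }))  # ['TV', 'air-conditioner']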

[0014] The context-specific correction score of each device corresponding to the voice command may be defined in a database.

[0015] The context-specific correction score of each device corresponding to the voice command may be updated according to a specific voice command and a device context.

[0016] The database may be constructed through a deep learning-based base learning model.

[0017] The context-specific correction score of each device corresponding to the voice command may be updated through an artificial intelligence (AI) agent training module according to a specific voice command and a device context.
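
One simple way such a training module could adjust the scores is an incremental update toward observed outcomes. The sketch below is a hypothetical update rule (the patent leaves the training algorithm to the AI agent module); it nudges the score for a (command, device, context) triple up when the selected device matched the user's intention and down otherwise.

    # Hypothetical incremental update; the learning rate and the 0..1 score range are assumptions.
    def update_correction_score(scores, command, device, context, matched_intention, lr=0.1):
        key = (command, device, context)
        current = scores.get(key, 0.5)  # start from a neutral score
        target = 1.0 if matched_intention else 0.0
        scores[key] = current + lr * (target - current)
        return scores[key]

    scores = {}
    update_correction_score(scores, "Turn off", "TV", "TV is on", matched_intention=True)
    print(scores)  # {('Turn off', 'TV', 'TV is on'): 0.55}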

[0018] According to another aspect of the present invention, there is provided a multi-device control system including: a voice recognition module performing a voice recognition operation on a voice command generated from a sound source; a distance identification module identifying distances between each of a plurality of devices and the sound source; and a processor assigning response rankings to the devices by combining a context-specific correction score of each device corresponding to the voice command and the distances, and selecting a device to respond to the voice command from among the devices according to the response rankings.

Advantageous Effects

[0019] Effects of the multi-device control system and method according to an embodiment of the present invention are as follows.

[0020] According to the present invention, a response target is selected by combining a context-specific correction score of each device and a distance, instead of selecting a response target by simple distance alone. Therefore, the present invention may increase user convenience by enabling the home appliance intended by the user to be controlled even if a main keyword specifying the response target is not included in a voice command.

[0021] Also, according to the present invention, by updating the context-specific correction score through a training algorithm, it is possible to select a response target according to the user's intention even as situations change in various ways.

[0022] The effects according to the embodiments of the present invention are not limited to those exemplified above, and further effects are described throughout the specification.

DESCRIPTION OF DRAWINGS

[0023] FIG. 1 is a block diagram of a wireless communication system to which the methods proposed herein may be applied.

[0024] FIG. 2 shows an example of a basic operation of a user equipment and a 5G network in a 5G communication system.

[0025] FIG. 3 illustrates an example of an application operation of a user equipment and a 5G network in a 5G communication system.

[0026] FIGS. 4 to 7 show an example of an operation of a user equipment using 5G communication.

[0027] FIG. 8 is a diagram illustrating an example of a 3GPP signal transmission/reception method.

[0028] FIG. 9 illustrates an SSB structure and FIG. 10 illustrates SSB transmission.

[0029] FIG. 11 illustrates an example of a random access procedure.

[0030] FIG. 12 shows an example of an uplink grant.

[0031] FIG. 13 shows an example of a conceptual diagram of uplink physical channel processing.

[0032] FIG. 14 shows an example of an NR slot in which a PUCCH is transmitted.

[0033] FIG. 15 is a block diagram of a transmitter and a receiver for hybrid beamforming.

[0034] FIG. 16 shows an example of beamforming using an SSB and a CSI-RS.

[0035] FIG. 17 is a flowchart illustrating an example of a DL BM process using an SSB.

[0036] FIG. 18 shows another example of a DL BM process using a CSI-RS.

[0037] FIG. 19 is a flowchart illustrating an example of a process of determining a reception beam of a UE.

[0038] FIG. 20 is a flowchart illustrating an example of a transmission beam determining process of a BS.

[0039] FIG. 21 shows an example of resource allocation in time and frequency domains related to an operation of FIG. 21.

[0040] FIG. 22 shows an example of a UL BM process using an SRS.

[0041] FIG. 23 is a flowchart illustrating an example of a UL BM process using an SRS.

[0042] FIG. 24 is a diagram showing an example of a method of indicating a pre-emption.

[0043] FIG. 25 shows an example of a time/frequency set of a pre-emption indication.

[0044] FIG. 26 shows an example of a narrowband operation and frequency diversity.

[0045] FIG. 27 is a diagram illustrating physical channels that may be used for MTC and a general signal transmission method using the same.

[0046] FIG. 28 is a diagram illustrating an example of scheduling for each of MTC and legacy LTE.

[0047] FIG. 29 shows an example of a frame structure when a subcarrier spacing is 15 kHz.

[0048] FIG. 30 shows an example of a frame structure when a subcarrier spacing is 3.75 kHz.

[0049] FIG. 31 shows an example of a resource grid for NB-IoT uplink.

[0050] FIG. 32 shows an example of an NB-IoT operation mode.

[0051] FIG. 33 is a diagram illustrating an example of physical channels that may be used for NB-IoT and a general signal transmission method using the same.

[0052] FIG. 34 is a schematic block diagram of a multi-device control system according to the present invention.

[0053] FIG. 35 is a block diagram showing an embodiment for implementing the multi-device control system of FIG. 34.

[0054] FIG. 36 is a block diagram showing another embodiment for implementing the multi-device control system of FIG. 34.

[0055] FIG. 37 is a block diagram showing a configuration of a cloud server of FIG. 35 and a master server of FIG. 36.

[0056] FIG. 38 is a block diagram showing a schematic configuration of a voice processing apparatus in the communication system of FIG. 35.

[0057] FIG. 39 is a block diagram showing a schematic configuration of a voice processing apparatus in the multi-device control system of FIG. 36.

[0058] FIG. 40 is a block diagram showing a schematic configuration of an artificial intelligence (AI) agent module of FIGS. 38 and 39.

[0059] FIG. 41 is a flowchart of a multi-device control method according to an embodiment of the present invention.

[0060] FIG. 42 is a view illustrating a way in which a response ranking is determined by combining a context-specific correction score and a distance in a multi-device control method according to an embodiment of the present invention.

[0061] FIG. 43 is a view illustrating an example of a plurality of devices having different distances from a sound source.

[0062] FIGS. 44 and 45 are views illustrating an example of determining a response ranking by correcting a distance with a correction score according to device characteristics.

[0063] FIGS. 46 and 47 are views illustrating an example of determining a response ranking by correcting a distance with a correction score according to a device context.

[0064] FIGS. 48, 49, and 50 are views illustrating an example of determining a response ranking by correcting a distance with a correction score according to a device usage pattern of a user.

[0065] FIGS. 50 and 51 are views illustrating an example of determining a response ranking by correcting a distance with a correction score according to a usage pattern and an environment.

[0066] FIGS. 52, 54, and 55 are views illustrating an example of determining the response ranking by correcting a distance with a correction score according to a usage environment.

[0067] FIGS. 56 to 59 are views showing context-specific correction scores for each device corresponding to voice commands.

[0068] FIG. 60 is a diagram showing an example of an operation progress of a device as a training target (or learning target) according to each situation.

MODE FOR INVENTION

[0069] The advantages and features of the present invention, and a method of achieving them, will become apparent with reference to the embodiments described in detail below together with the accompanying drawings. However, the present invention is not limited to the embodiments described below and may be implemented in various other forms. The embodiments are provided so that the disclosure of the present invention is complete and so that those skilled in the art will fully appreciate the scope of the invention; the present invention is defined only by the scope of the claims.

[0070] The shapes, sizes, ratios, angles, numbers, and the like disclosed in the drawings for describing embodiments of the present invention are illustrative, and thus the present invention is not limited thereto. Like reference numerals designate like elements throughout the specification. When terms such as "comprising", "having", or "including" are used in the present invention, other parts may be added unless the term "only" is used. Unless the context clearly indicates otherwise, words used in the singular include the plural, and words used in the plural include the singular.

[0071] In interpreting components, each component is construed to include an error range even if there is no explicit description thereof.

[0072] In the case of a description of a positional relationship, for example, when the positional relationship between two parts is described using "on", "above", "under", or "next to", one or more other parts may be located between the two parts unless "immediate" or "direct" is used.

[0073] Terms such as "first" and "second" may be used to describe various components, but these components are not limited by these terms. These terms are used only to distinguish one component from another. Therefore, a first component described below may be a second component within the scope of the present invention.

[0074] Unless the context clearly indicates otherwise, words used in the singular include the plural, and words used in the plural include the singular.

[0075] Hereinafter, embodiments of the present invention will be described in detail with reference to the attached drawings. In the following description, detailed descriptions of well-known functions and structures incorporated herein may be omitted to avoid obscuring the subject matter of the present invention.

[0076] A. Example of Autonomous Vehicle and 5G Network

[0077] FIG. 1 is a block diagram of a wireless communication system to which methods proposed in the disclosure are applicable.

[0078] Referring to FIG. 1, a device including an autonomous driving module is defined as a first communication device (910 of FIG. 1; see paragraph N for a detailed description), and a processor 911 may perform detailed autonomous driving operations.

[0079] Another vehicle or a 5G network communicating with the autonomous driving device is defined as a second communication device (920 of FIG. 1; see paragraph N for details), and a processor 921 may perform detailed autonomous driving operations.

[0080] For details of the wireless communication system, which is defined as including a first communication device, which is an autonomous vehicle, and a second communication device, which is a 5G network, refer to paragraph N.

[0081] B. AI Operation Using 5G Communication

[0082] FIG. 2 shows an example of a basic operation of a user equipment and a 5G network in a 5G communication system.

[0083] The UE transmits specific information to the 5G network (S1).

[0084] Then, the 5G network performs 5G processing on the specific information (S2).

[0085] In this connection, the 5G processing may include AI processing.

[0086] Then, the 5G network transmits a response including the AI processing result to the UE (S3).

[0087] FIG. 3 shows an example of an application operation of a user terminal and a 5G network in a 5G communication system.

[0088] The UE performs an initial access procedure with the 5G network (S20). The initial access procedure will be described in more detail in paragraph F.

[0089] Then, the UE performs a random access procedure with the 5G network (S21). The random access procedure will be described in more detail in paragraph G.

[0090] The 5G network transmits a UL grant for scheduling transmission of specific information to the UE (S22). The process of the UE receiving the UL grant will be described in more detail in the UL transmission/reception operation in paragraph H.

[0091] Then, the UE transmits specific information to the 5G network based on the UL grant (S23).

[0092] Then, the 5G network performs 5G processing on the specific information (S24).

[0093] In this connection, the 5G processing may include AI processing.

[0094] Then, the 5G network transmits a DL grant for scheduling transmission of the 5G processing result of the specific information to the UE (S25).

[0095] Then, the 5G network transmits a response including the AI processing result to the UE based on the DL grant (S26).

[0096] In FIG. 3, an example in which the AI operation is combined with the initial access process, the random access process, and the DL grant reception process has been described using S20 to S26. However, the present invention is not limited thereto.

[0097] For example, the initial access process and/or the random access process may be performed using the process of S20, S22, S23, S24, and S26. In addition, the initial access process and/or the random access process may be performed using, for example, the process of S21, S22, S23, S24, and S26. Further, the AI operation and the downlink grant reception procedure may be combined with each other using the process of S23, S24, S25, and S26.

[0098] C. UE Operation Using 5G Communication

[0099] FIGS. 4 to 7 show an example of the operation of the UE using 5G communication.

[0100] Referring first to FIG. 4, the UE performs an initial access procedure with the 5G network based on an SSB to obtain DL synchronization and system information (S30).

[0101] Then, the UE performs a random access procedure with the 5G network for UL synchronization acquisition and/or UL transmission (S31).

[0102] Then, the UE receives a UL grant from the 5G network to transmit specific information (S32).

[0103] Then, the UE transmits the specific information to the 5G network based on the UL grant (S33).

[0104] Then, the UE receives a DL grant for receiving a response to the specific information from the 5G network (S34).

[0105] Then, the UE receives a response including the AI processing result from the 5G network based on the DL grant (S35).

[0106] A beam management (BM) process may be added to S30. A beam failure recovery process may be added to S31. A quasi-co-location (QCL) relationship may be added to S32 to S35. A more detailed description thereof is provided in paragraph I.

[0107] Next, referring to FIG. 5, the UE performs an initial access procedure with the 5G network based on an SSB to obtain DL synchronization and system information (S40).

[0108] Then, the UE performs a random access procedure with the 5G network for UL synchronization acquisition and/or UL transmission (S41).

[0109] Then, the UE transmits the specific information to the 5G network based on a configured grant (S42). A procedure for configuring the grant in place of receiving the UL grant from the 5G network will be described in more detail in paragraph H.

[0110] Then, the UE receives a DL grant for receiving a response to the specific information from the 5G network (S43).

[0111] Then, the UE receives the response including the AI processing result from the 5G network based on the DL grant (S44).

[0112] Next, referring to FIG. 6, the UE performs an initial access procedure with the 5G network based on the SSB to obtain DL synchronization and system information (S50).

[0113] Then, the UE performs a random access procedure with the 5G network for UL synchronization acquisition and/or UL transmission (S51).

[0114] Then, the UE receives a DownlinkPreemption IE from the 5G network (S52).

[0115] The UE receives a DCI format 2_1 including a pre-emption indication from the 5G network based on the DownlinkPreemption IE (S53).

[0116] Then, the UE does not perform (or expect or assume) reception of eMBB data using a resource (PRB and/or OFDM symbol) indicated by the pre-emption indication (S54).

[0117] The operation related to the pre-emption indication is described in more detail in paragraph J.

[0118] Then, the UE receives a UL grant from the 5G network to transmit the specific information (S55).

[0119] Then, the UE transmits the specific information to the 5G network based on the UL grant (S56).

[0120] Then, the UE receives a DL grant for receiving a response to the specific information from the 5G network (S57).

[0121] Then, the UE receives a response including the AI processing result from the 5G network based on the DL grant (S58).

[0122] Next, referring to FIG. 7, the UE performs an initial access procedure with the 5G network based on an SSB to obtain DL synchronization and system information (S60).

[0123] Then, the UE performs a random access procedure with the 5G network for UL synchronization acquisition and/or UL transmission (S61).

[0124] Then, the UE receives a UL grant from the 5G network to transmit the specific information (S62).

[0125] The UL grant includes information on the number of repetitions of transmission of the specific information. The specific information is repeatedly transmitted based on the information on the repetition number (S63).

[0126] The UE transmits the specific information to the 5G network based on the UL grant.

[0127] Then, the repeated transmission of the specific information is performed using frequency hopping. The first transmission of the specific information may be performed using a first frequency resource, and the second transmission of the specific information may be performed using a second frequency resource.

[0128] The specific information may be transmitted over a narrow band of 6 RBs (resource blocks) or 1 RB.

[0129] Then, the UE receives a DL grant for receiving a response to the specific information from the 5G network (S64).

[0130] Then, the UE receives a response including the AI processing result from the 5G network based on the DL grant (S65).

[0131] The mMTC described in FIG. 7 will be described in more detail in paragraph K.

[0132] D. Introduction

[0133] Hereinafter, downlink (DL) refers to communication from a base station (BS) to user equipment (UE), and uplink (UL) refers to communication from a UE to a BS. In the downlink, a transmitter may be part of the BS and a receiver may be part of the UE. In the uplink, a transmitter may be part of the UE and a receiver may be part of the BS. Herein, the UE may be represented as a first communication device and the BS may be represented as a second communication device. The BS may be replaced with a term such as a fixed station, a Node B, an evolved NodeB (eNB), a next generation nodeB (gNB), a base transceiver system (BTS), an access point (AP), a network, a 5G (5th generation) network, an artificial intelligence (AI) system, a road side unit (RSU), a robot, and the like. Also, the UE may be replaced with a terminal, a mobile station (MS), a user terminal (UT), a mobile subscriber station (MSS), a subscriber station (SS), an advanced mobile station (AMS), a wireless terminal (WT), a machine-type communication (MTC) device, a machine-to-machine (M2M) device, a device-to-device (D2D) device, a vehicle, a robot, an AI module, and the like.

[0134] Techniques described herein may be used in a variety of wireless access systems such as Code Division Multiple Access (CDMA), Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Orthogonal Frequency Division Multiple Access (OFDMA), Single Carrier Frequency Division Multiple Access (SC-FDMA), etc. CDMA may be implemented as a radio technology such as Universal Terrestrial Radio Access (UTRA) or CDMA2000. TDMA may be implemented as a radio technology such as Global System for Mobile communications (GSM)/General Packet Radio Service (GPRS)/Enhanced Data Rates for GSM Evolution (EDGE). OFDMA may be implemented as a radio technology such as IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Evolved-UTRA (E-UTRA), etc. UTRA is a part of Universal Mobile Telecommunications System (UMTS). 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE) is a part of Evolved UMTS (E-UMTS) using E-UTRA. LTE-Advanced (LTE-A)/LTE-A pro is an evolution of 3GPP LTE. 3GPP NR (New Radio or New Radio Access Technology) is an evolution of 3GPP LTE/LTE-A/LTE-A pro.

[0135] For clarity, the following description focuses on a 3GPP communication system (e.g., LTE-A, NR), but technical features of the present invention are not limited thereto. LTE refers to technology after 3GPP TS 36.xxx Release 8. In detail, LTE technology after 3GPP TS 36.xxx Release 10 is referred to as LTE-A, and LTE technology after 3GPP TS 36.xxx Release 13 is referred to as LTE-A pro. 3GPP 5G (5th generation) technology refers to technology after TS 36.xxx Release 15 and technology after TS 38.xxx Release 15. The technology after TS 38.xxx Release 15 may be referred to as 3GPP NR, and technology after TS 36.xxx Release 15 may be referred to as enhanced LTE. "xxx" refers to a standard document detail number. LTE/NR may be collectively referred to as a 3GPP system.

[0136] In this disclosure, a node refers to a fixed point capable of transmitting/receiving a radio signal through communication with a UE. Various types of BSs may be used as nodes irrespective of the terms thereof. For example, a BS, a node B (NB), an e-node B (eNB), a pico-cell eNB (PeNB), a home eNB (HeNB), a relay, a repeater, etc. may be a node. In addition, the node may not be a BS. For example, the node may be a remote radio head (RRH) or a remote radio unit (RRU). The RRH or RRU generally has a power level lower than a power level of a BS. At least one antenna is installed per node. The antenna may refer to a physical antenna or refer to an antenna port, a virtual antenna, or an antenna group. A node may be referred to as a point.

[0137] In this specification, a cell refers to a prescribed geographical area to which one or more nodes provide a communication service. A "cell" of a geographic region may be understood as coverage within which a node can provide a service using a carrier, and a "cell" of a radio resource is associated with bandwidth (BW), which is a frequency range configured by the carrier. Since DL coverage, which is a range within which the node is capable of transmitting a valid signal, and UL coverage, which is a range within which the node is capable of receiving a valid signal from the UE, depend upon the carrier carrying the signal, coverage of the node may be associated with coverage of the "cell" of a radio resource used by the node. Accordingly, the term "cell" may be used to indicate service coverage of the node at some times, a radio resource at other times, or a range that a signal using a radio resource can reach with valid strength at still other times.

[0138] In this specification, communicating with a specific cell may refer to communicating with a BS or a node which provides a communication service to the specific cell. In addition, a DL/UL signal of a specific cell refers to a DL/UL signal from/to a BS or a node which provides a communication service to the specific cell. A node providing UL/DL communication services to a UE is called a serving node, and a cell to which UL/DL communication services are provided by the serving node is especially called a serving cell. Furthermore, channel status/quality of a specific cell refers to channel status/quality of a channel or communication link formed between a BS or node which provides a communication service to the specific cell and a UE.

[0139] Meanwhile, a "cell" associated with a radio resource may be defined as a combination of DL resources and UL resources, that is, a combination of a DL component carrier (CC) and a UL CC. A cell may be configured to be a DL resource alone or a combination of DL resources and UL resources. If carrier aggregation is supported, a linkage between a carrier frequency of a DL resource (or DL CC) and a carrier frequency of a UL resource (or UL CC) may be indicated by system information transmitted through a corresponding cell. Here, the carrier frequency may be the same as or different from a center frequency of each cell or CC. Hereinafter, a cell operating at a primary frequency will be referred to as a primary cell (Pcell) or a PCC, and a cell operating at a secondary frequency will be referred to as a secondary cell (Scell) or an SCC. The Scell may be configured after the UE performs a radio resource control (RRC) connection establishment with the BS to establish an RRC connection therebetween, that is, after the UE is RRC_CONNECTED. Here, the RRC connection may refer to a channel through which an RRC of the UE and an RRC of the BS may exchange RRC messages with each other. The Scell may be configured to provide additional radio resources to the UE. Depending on the capabilities of the UE, the Scell may form a set of serving cells for the UE together with the Pcell. In the case of a UE which is in the RRC_CONNECTED state but is not configured in carrier aggregation or does not support carrier aggregation, there is only one serving cell, which is configured as the Pcell.

[0140] Cells support unique wireless access technologies. For example, transmission/reception according to LTE radio access technology (RAT) is performed on an LTE cell, and transmission/reception according to 5G RAT is performed on a 5G cell.

[0141] A carrier aggregation (CA) system refers to a system for supporting a wide bandwidth by aggregating a plurality of carriers each having a narrower bandwidth than a target bandwidth. A CA system is different from OFDMA technology in that DL or UL communication is performed using a plurality of carrier frequencies each of which forms a system bandwidth (or a channel bandwidth), whereas an OFDM system carries a base frequency band divided into a plurality of orthogonal subcarriers on a single carrier frequency to perform DL or UL communication. For example, in the case of OFDMA or orthogonal frequency division multiplexing (OFDM), one frequency band having a constant system bandwidth is divided into a plurality of subcarriers having a certain subcarrier spacing, information/data is mapped in the plurality of subcarriers, and the frequency band to which the information/data is mapped is upconverted and transmitted at a carrier frequency of the frequency band. In the case of wireless carrier aggregation, frequency bands each having their own system bandwidth and carrier frequency may be simultaneously used for communication, and each frequency band used for carrier aggregation may be divided into a plurality of subcarriers having a predetermined subcarrier spacing.

[0142] The 3GPP-based communication standard defines DL physical channels corresponding to resource elements carrying information derived from a higher layer of a physical layer (e.g., a medium access control (MAC) layer, a radio link control (RLC) layer, a packet data convergence protocol (PDCP) layer, a radio resource control (RRC) layer, a service data adaptation protocol (SDAP) layer, and a non-access stratum (NAS) layer) and DL physical signals corresponding to resource elements which are used by a physical layer but which do not carry information derived from a higher layer. For example, a physical downlink shared channel (PDSCH), a physical broadcast channel (PBCH), a physical multicast channel (PMCH), a physical control format indicator channel (PCFICH), and a physical downlink control channel (PDCCH) are defined as the DL physical channels, and a reference signal and a synchronization signal are defined as the DL physical signals. A reference signal (RS), also called a pilot, refers to a special waveform of a predefined signal known to both a BS and a UE. For example, a cell-specific RS (CRS), a UE-specific RS, a positioning RS (PRS), a channel state information RS (CSI-RS), and a demodulation reference signal (DMRS) may be defined as DL RSs. Meanwhile, the 3GPP-based communication standards define UL physical channels corresponding to resource elements carrying information derived from a higher layer and UL physical signals corresponding to resource elements which are used by a physical layer but which do not carry information derived from a higher layer. For example, a physical uplink shared channel (PUSCH), a physical uplink control channel (PUCCH), and a physical random access channel (PRACH) are defined as the UL physical channels, and a demodulation reference signal (DM RS) for a UL control/data signal and a sounding reference signal (SRS) used for UL channel measurement are defined as the UL physical signals.

[0143] In this specification, a physical downlink control channel (PDCCH) and a physical downlink shared channel (PDSCH) may refer to a set of time-frequency resources or a set of resource elements carrying downlink control information (DCI) and downlink data, respectively. In addition, a physical uplink control channel (PUCCH), a physical uplink shared channel (PUSCH), and a physical random access channel (PRACH) refer to a set of time-frequency resources or a set of resource elements carrying uplink control information (UCI), uplink data, and random access signals, respectively. Hereinafter, a UE's transmitting an uplink physical channel (e.g., PUCCH, PUSCH, or PRACH) means transmitting UCI, uplink data, or a random access signal on or through the corresponding uplink physical channel. A BS's receiving an uplink physical channel may refer to receiving DCI, uplink data, or a random access signal on or through the uplink physical channel. A BS's transmitting a downlink physical channel (e.g., PDCCH and PDSCH) has the same meaning as transmitting DCI or downlink data on or through the corresponding downlink physical channel. A UE's receiving a downlink physical channel may refer to receiving DCI or downlink data on or through the corresponding downlink physical channel.

[0144] In this specification, a transport block is a payload for a physical layer. For example, data given to a physical layer from an upper layer or a medium access control (MAC) layer is basically referred to as a transport block.

[0145] In this specification, HARQ (hybrid automatic repeat request) is a kind of error control method. A HARQ acknowledgement (HARQ-ACK) transmitted through the downlink is used for error control on uplink data, and a HARQ-ACK transmitted on the uplink is used for error control on downlink data. A transmitter that performs the HARQ operation transmits data (e.g., a transport block, a codeword) and waits for an acknowledgment (ACK). A receiver that performs the HARQ operation sends an acknowledgment (ACK) only when data is properly received, and sends a negative acknowledgment (NACK) if an error occurs in the received data. The transmitter may transmit (new) data if an ACK is received, and retransmit the data if a NACK is received. After the BS transmits scheduling information and data according to the scheduling information, a time delay occurs until the ACK/NACK is received from the UE and retransmission data is transmitted. This time delay occurs due to the channel propagation delay and the time taken for data decoding/encoding. Therefore, if new data is sent only after the current HARQ process is finished, a blank space occurs in the data transmission due to the time delay. Therefore, a plurality of independent HARQ processes are used to prevent generation of a blank space in data transmission during the time delay period. For example, if there are seven transmission occasions between an initial transmission and a retransmission, the communication device may operate seven independent HARQ processes to perform data transmission without a blank space. Utilizing the plurality of parallel HARQ processes, UL/DL transmissions may be performed continuously while waiting for HARQ feedback for a previous UL/DL transmission.
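
The seven-process example can be made concrete with a toy scheduler. In the sketch below, a simplified model in which feedback always arrives after a fixed seven-slot round trip and errors are ignored, one transport block is sent in every slot because a process always becomes free exactly when its feedback returns.

    NUM_PROCESSES = 7  # independent HARQ processes
    RTT_SLOTS = 7      # slots from transmission until ACK/NACK arrives

    def simulate(num_slots=28):
        busy_until = [0] * NUM_PROCESSES  # slot at which each process is free again
        sent = 0
        for slot in range(num_slots):
            pid = slot % NUM_PROCESSES
            if busy_until[pid] <= slot:             # feedback received; process is free
                busy_until[pid] = slot + RTT_SLOTS  # transmit a new transport block
                sent += 1
        print(f"{sent} transport blocks in {num_slots} slots, with no idle slots")

    simulate()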

[0146] In this specification, channel state information (CSI) refers to information indicating the quality of a radio channel (or a link) formed between a UE and an antenna port. The CSI may include at least one of a channel quality indicator (CQI), a precoding matrix indicator (PMI), a CSI-RS resource indicator (CRI), an SSB resource indicator (SSBRI), a layer indicator (LI), a rank indicator (RI), or a reference signal received power (RSRP).

[0147] In this specification, frequency division multiplexing (FDM) may refer to transmission/reception of signals/channels/users at different frequency resources, and time division multiplexing (TDM) may refer to transmission/reception of signals/channels/users at different time resources.

[0148] In the present invention, frequency division duplex (FDD) refers to a communication scheme in which uplink communication is performed on an uplink carrier and downlink communication is performed on a downlink carrier linked to the uplink carrier, and time division duplex (TDD) refers to a communication scheme in which uplink and downlink communications are performed by dividing time on the same carrier.

[0149] For background information, terms, abbreviations, etc. used in the present specification, reference may be made to those described in standard documents published before the present invention. For example, the following documents may be referred to:

[0150] 3GPP LTE
[0151] 3GPP TS 36.211: Physical channels and modulation
[0152] 3GPP TS 36.212: Multiplexing and channel coding
[0153] 3GPP TS 36.213: Physical layer procedures
[0154] 3GPP TS 36.214: Physical layer; Measurements
[0155] 3GPP TS 36.300: Overall description
[0156] 3GPP TS 36.304: User Equipment (UE) procedures in idle mode
[0157] 3GPP TS 36.314: Layer 2 - Measurements
[0158] 3GPP TS 36.321: Medium Access Control (MAC) protocol
[0159] 3GPP TS 36.322: Radio Link Control (RLC) protocol
[0160] 3GPP TS 36.323: Packet Data Convergence Protocol (PDCP)
[0161] 3GPP TS 36.331: Radio Resource Control (RRC) protocol
[0162] 3GPP TS 23.303: Proximity-based services (ProSe); Stage 2
[0163] 3GPP TS 23.285: Architecture enhancements for V2X services
[0164] 3GPP TS 23.401: General Packet Radio Service (GPRS) enhancements for Evolved Universal Terrestrial Radio Access Network (E-UTRAN) access
[0165] 3GPP TS 23.402: Architecture enhancements for non-3GPP accesses
[0166] 3GPP TS 23.286: Application layer support for V2X services; Functional architecture and information flows
[0167] 3GPP TS 24.301: Non-Access-Stratum (NAS) protocol for Evolved Packet System (EPS); Stage 3
[0168] 3GPP TS 24.302: Access to the 3GPP Evolved Packet Core (EPC) via non-3GPP access networks; Stage 3
[0169] 3GPP TS 24.334: Proximity-services (ProSe) User Equipment (UE) to ProSe function protocol aspects; Stage 3
[0170] 3GPP TS 24.386: User Equipment (UE) to V2X control function; protocol aspects; Stage 3
3GPP NR
[0171] 3GPP TS 38.211: Physical channels and modulation
[0172] 3GPP TS 38.212: Multiplexing and channel coding
[0173] 3GPP TS 38.213: Physical layer procedures for control
[0174] 3GPP TS 38.214: Physical layer procedures for data
[0175] 3GPP TS 38.215: Physical layer measurements
[0176] 3GPP TS 38.300: NR and NG-RAN Overall Description
[0177] 3GPP TS 38.304: User Equipment (UE) procedures in idle mode and in RRC inactive state
[0178] 3GPP TS 38.321: Medium Access Control (MAC) protocol
[0179] 3GPP TS 38.322: Radio Link Control (RLC) protocol
[0180] 3GPP TS 38.323: Packet Data Convergence Protocol (PDCP)
[0181] 3GPP TS 38.331: Radio Resource Control (RRC) protocol
[0182] 3GPP TS 37.324: Service Data Adaptation Protocol (SDAP)
[0183] 3GPP TS 37.340: Multi-connectivity; Overall description
[0184] 3GPP TS 23.287: Application layer support for V2X services; Functional architecture and information flows
[0185] 3GPP TS 23.501: System Architecture for the 5G System
[0186] 3GPP TS 23.502: Procedures for the 5G System
[0187] 3GPP TS 23.503: Policy and Charging Control Framework for the 5G System; Stage 2
[0188] 3GPP TS 24.501: Non-Access-Stratum (NAS) protocol for 5G System (5GS); Stage 3
[0189] 3GPP TS 24.502: Access to the 3GPP 5G Core Network (5GCN) via non-3GPP access networks
[0190] 3GPP TS 24.526: User Equipment (UE) policies for 5G System (5GS); Stage 3

[0191] E. 3GPP Signal Transmission/Reception Method

[0192] FIG. 8 is a diagram illustrating an example of a 3GPP signaltransmission/reception method.

[0193] Referring to FIG. 8, when a UE is powered on or enters a new cell, the UE performs an initial cell search operation such as synchronization with a BS (S201). For this operation, the UE can receive a primary synchronization channel (P-SCH) and a secondary synchronization channel (S-SCH) from the BS to synchronize with the BS and acquire information such as a cell ID. In LTE and NR systems, the P-SCH and S-SCH are respectively called a primary synchronization signal (PSS) and a secondary synchronization signal (SSS). The initial cell search procedure is described in detail in paragraph F below.

[0194] After the initial cell search, the UE can acquire broadcast information in the cell by receiving a physical broadcast channel (PBCH) from the BS. Further, the UE can receive a downlink reference signal (DL RS) in the initial cell search step to check the downlink channel state.

[0195] After the initial cell search, the UE can acquire more detailed system information by receiving a physical downlink shared channel (PDSCH) according to a physical downlink control channel (PDCCH) and information included in the PDCCH (S202).

[0196] Meanwhile, when the UE initially accesses the BS or has no radio resource for signal transmission, the UE can perform a random access procedure (RACH) with the BS (steps S203 to S206). To this end, the UE can transmit a specific sequence as a preamble through a physical random access channel (PRACH) (S203 and S205) and receive a random access response (RAR) message for the preamble through a PDCCH and a corresponding PDSCH (S204 and S206). In the case of a contention-based RACH, a contention resolution procedure may be additionally performed. The random access procedure is described in detail in paragraph G below.

[0197] After the UE performs the above-described process, the UE can perform PDCCH/PDSCH reception (S207) and physical uplink shared channel (PUSCH)/physical uplink control channel (PUCCH) transmission (S208) as normal uplink/downlink signal transmission processes. Particularly, the UE receives downlink control information (DCI) through the PDCCH.

[0198] The UE monitors a set of PDCCH candidates in monitoring occasions set for one or more control resource sets (CORESETs) on a serving cell according to corresponding search space configurations. A set of PDCCH candidates to be monitored by the UE is defined in terms of search space sets, and a search space set may be a common search space set or a UE-specific search space set. A CORESET includes a set of (physical) resource blocks having a duration of one to three OFDM symbols. A network can configure the UE such that the UE has a plurality of CORESETs. The UE monitors PDCCH candidates in one or more search space sets. Here, monitoring means attempting decoding of PDCCH candidate(s) in a search space. When the UE has successfully decoded one of the PDCCH candidates in a search space, the UE determines that a PDCCH has been detected from the PDCCH candidate and performs PDSCH reception or PUSCH transmission on the basis of the DCI in the detected PDCCH.

[0199] The PDCCH can be used to schedule DL transmissions over a PDSCH and UL transmissions over a PUSCH. Here, the DCI in the PDCCH includes a downlink assignment (i.e., a downlink grant (DL grant)) related to a physical downlink shared channel and including at least a modulation and coding format and resource allocation information, or an uplink grant (UL grant) related to a physical uplink shared channel and including a modulation and coding format and resource allocation information.

[0200] F. Initial Access (IA) Process

[0201] Synchronization signal block (SSB) transmission and related operation

[0202] FIG. 9 illustrates an SSB structure. The UE can perform cell search, system information acquisition, beam alignment for initial access, and DL measurement on the basis of an SSB. The term SSB is used interchangeably with synchronization signal/physical broadcast channel (SS/PBCH) block.

[0203] Referring to FIG. 9, the SSB includes a PSS, an SSS, and a PBCH. The SSB is configured in four consecutive OFDM symbols, and a PSS, a PBCH, an SSS/PBCH, or a PBCH is transmitted in each OFDM symbol. Each of the PSS and the SSS includes one OFDM symbol and 127 subcarriers, and the PBCH includes 3 OFDM symbols and 576 subcarriers. The PBCH is encoded/decoded on the basis of a polar code and modulated/demodulated according to quadrature phase shift keying (QPSK). The PBCH in an OFDM symbol includes data resource elements (REs) to which a complex modulation value of the PBCH is mapped and DMRS REs to which a demodulation reference signal (DMRS) for the PBCH is mapped. There are three DMRS REs per resource block of the OFDM symbol, and there are three data REs between the DMRS REs.

[0204] Cell Search

[0205] Cell search refers to a process in which a UE acquires time/frequency synchronization of a cell and detects a cell identifier (ID) (e.g., physical layer cell ID (PCI)) of the cell. The PSS is used to detect a cell ID within a cell ID group, and the SSS is used to detect a cell ID group. The PBCH is used to detect an SSB (time) index and a half-frame.

[0206] The cell search procedure of the UE may be summarized as shown in Table 1 below.

TABLE 1
  Step       Type of Signals   Operations
  1st step   PSS               SS/PBCH block (SSB) symbol timing acquisition;
                               cell ID detection within a cell ID group (3 hypotheses)
  2nd step   SSS               Cell ID group detection (336 hypotheses)
  3rd step   PBCH DMRS         SSB index and half-frame (HF) index
                               (slot and frame boundary detection)
  4th step   PBCH              Time information (80 ms, System Frame Number (SFN), SSB index, HF);
                               Remaining Minimum System Information (RMSI);
                               control resource set (CORESET) / search space configuration
  5th step   PDCCH and PDSCH   Cell access information; RACH configuration

[0207] There are 336 cell ID groups, and there are 3 cell IDs per cell ID group. A total of 1008 cell IDs are present. Information on the cell ID group to which the cell ID of a cell belongs is provided/acquired through the SSS of the cell, and information on the cell ID among the 3 cell IDs in the cell ID group is provided/acquired through the PSS.
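
These counts follow directly from the NR cell ID construction in 3GPP TS 38.211, which the small helper below reproduces (the function name is ours):

    def physical_cell_id(n_id_1, n_id_2):
        # N_ID_cell = 3 * N_ID1 + N_ID2, where N_ID1 (0..335) comes from the SSS
        # and N_ID2 (0..2) comes from the PSS, giving 3 * 336 = 1008 cell IDs.
        assert 0 <= n_id_1 < 336 and 0 <= n_id_2 < 3
        return 3 * n_id_1 + n_id_2

    print(physical_cell_id(335, 2))  # 1007, the largest of the 1008 cell IDs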

[0208] FIG. 10 illustrates SSB transmission.

[0209] The SSB is periodically transmitted in accordance with the SSB periodicity. A default SSB periodicity assumed by the UE during initial cell search is defined as 20 ms. After cell access, the SSB periodicity can be set to one of {5 ms, 10 ms, 20 ms, 40 ms, 80 ms, 160 ms} by the network (e.g., a BS). An SSB burst set is configured at a start portion of the SSB period. The SSB burst set includes a 5 ms time window (i.e., half-frame), and the SSB may be transmitted up to L times within the SSB burst set. The maximum transmission number L of the SSB may be given as follows according to the frequency band of the carrier (a small helper mapping these ranges is sketched after the list). One slot includes a maximum of two SSBs.

[0210] For frequency range up to 3 GHz, L=4

[0211] For frequency range from 3 GHz to 6 GHz, L=8

[0212] For frequency range from 6 GHz to 52.6 GHz, L=64
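
The ranges above encode directly as a lookup; in this sketch the function name and the handling of carriers exactly at the 3 GHz and 6 GHz boundaries are assumptions, since the text does not state on which side of each boundary the limits fall.

    def max_ssb_transmissions(carrier_ghz):
        # Maximum number L of SSBs in an SSB burst set for a given carrier frequency.
        if carrier_ghz <= 3.0:
            return 4
        if carrier_ghz <= 6.0:
            return 8
        if carrier_ghz <= 52.6:
            return 64
        raise ValueError("carrier frequency outside the listed ranges")

    print(max_ssb_transmissions(3.5))  # 8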

[0213] A time position of an SSB candidate in the SSB burst set may be defined according to the subcarrier spacing. The SSB candidate time positions are indexed from 0 to L-1 (SSB index) in time order within the SSB burst set (i.e., half-frame).

[0214] A plurality of SSBs may be transmitted within the frequency span of a carrier. The physical layer cell identifiers of these SSBs need not be unique, and different SSBs may have different physical layer cell identifiers.

[0215] The UE may acquire DL synchronization by detecting the SSB. The UE may identify the structure of the SSB burst set on the basis of the detected SSB (time) index and thus detect a symbol/slot/half-frame boundary. The frame/half-frame to which the detected SSB belongs may be identified using system frame number (SFN) information and half-frame indication information.

[0216] Specifically, the UE may acquire the 10-bit SFN of the frame to which the PBCH belongs from the PBCH. Next, the UE may acquire 1-bit half-frame indication information. For example, if the UE detects a PBCH with a half-frame indication bit set to 0, it may determine that the SSB carrying the PBCH belongs to the first half-frame in the frame, and if the UE detects a PBCH with a half-frame indication bit set to 1, it may determine that the SSB carrying the PBCH belongs to the second half-frame in the frame. Finally, the UE may acquire the SSB index of the SSB to which the PBCH belongs on the basis of the DMRS sequence and the PBCH payload carried by the PBCH.
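
The timing facts in this paragraph compose as follows. The sketch simply validates and combines them into a position; the actual PBCH payload packing is more involved, and the helper name is ours.

    def locate_ssb(sfn, half_frame_bit, ssb_index, l_max):
        # Combine the 10-bit SFN, the half-frame bit, and the SSB index
        # recovered from the PBCH into a human-readable position.
        assert 0 <= sfn < 1024           # 10-bit system frame number
        assert half_frame_bit in (0, 1)  # 0: first 5 ms half-frame, 1: second
        assert 0 <= ssb_index < l_max    # SSB index within the burst set
        half = "first" if half_frame_bit == 0 else "second"
        return f"SSB #{ssb_index} in the {half} half-frame of frame {sfn}"

    print(locate_ssb(sfn=512, half_frame_bit=1, ssb_index=3, l_max=8))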

[0217] Acquisition of System Information (SI)

[0218] SI is divided into a master information block (MIB) and a plurality of system information blocks (SIBs). The SI other than the MIB may be referred to as remaining minimum system information (RMSI). Details thereof are as follows.

[0219] The MIB includes information/parameters for monitoring the PDCCH scheduling the PDSCH carrying system information block 1 (SIB1) and is transmitted by the BS through the PBCH of the SSB. For example, the UE may check whether a control resource set (CORESET) exists for the Type 0-PDCCH common search space on the basis of the MIB. The Type 0-PDCCH common search space is a kind of PDCCH search space and is used to transmit a PDCCH for scheduling an SI message. If the Type 0-PDCCH common search space is present, the UE may determine (i) a plurality of contiguous resource blocks and one or more consecutive symbols constituting a CORESET on the basis of information in the MIB (e.g., pdcch-ConfigSIB1) and (ii) a PDCCH occasion (e.g., a time domain position for PDCCH reception). If no Type 0-PDCCH common search space exists, pdcch-ConfigSIB1 provides information on the frequency location where the SSB/SIB1 exists and information on the frequency range where the SSB/SIB1 does not exist.

[0220] SIB1 includes information related to the availability and scheduling (e.g., transmission periodicity and SI-window size) of the remaining SIBs (hereinafter, SIBx, where x is an integer equal to or greater than 2). For example, SIB1 may indicate whether the SIBx is periodically broadcast or provided at the request of the UE on an on-demand basis. If SIBx is provided on an on-demand basis, SIB1 may include information necessary for the UE to perform the SI request. SIB1 is transmitted through the PDSCH, the PDCCH for scheduling SIB1 is transmitted through the Type 0-PDCCH common search space, and SIB1 is transmitted through the PDSCH indicated by the PDCCH.

[0221] The SIBx is included in the SI message and transmitted via the PDSCH. Each SI message is transmitted within a periodically occurring time window (i.e., SI-window).

[0222] G. Random Access Procedure

[0223] The random access procedure of the UE may be summarized as shown in Table 2 and FIG. 11.

TABLE 2
Step          Signal type                       Acquired operation/information
First step    PRACH preamble in UL              Acquire initial beam;
                                                random selection of random access preamble ID
Second step   Random access response on PDSCH   Timing advance information;
                                                random access preamble ID;
                                                initial UL grant, temporary C-RNTI
Third step    UL transmission on PUSCH          RRC connection request;
                                                UE identifier
Fourth step   Contention resolution on DL       Temporary C-RNTI on PDCCH for initial access;
                                                C-RNTI on PDCCH for UE in RRC_CONNECTED

[0224] The random access procedure is used for various purposes. For example, the random access procedure can be used for network initial access, handover, and UE-triggered UL data transmission. A UE can acquire UL synchronization and UL transmission resources through the random access procedure. The random access procedure is classified into a contention-based random access procedure and a contention-free random access procedure.

[0225] FIG. 11 illustrates an example of a random access procedure. In particular, FIG. 11 illustrates a contention-based random access procedure.

[0226] First, a UE can transmit a random access preamble through the PRACH as Msg1 of the random access procedure in UL.

[0227] Random access preamble sequences of two different lengths are supported. A long sequence of length 839 is applied to subcarrier spacings of 1.25 kHz and 5 kHz, and a short sequence of length 139 is applied to subcarrier spacings of 15 kHz, 30 kHz, 60 kHz, and 120 kHz.

[0228] Multiple preamble formats are defined by one or more RACH OFDM symbols and different cyclic prefixes (and/or guard times). The RACH configuration for a cell is included in the system information of the cell and is provided to the UE. The RACH configuration includes information on the subcarrier spacing of the PRACH, available preambles, the preamble format, and the like. The RACH configuration also includes association information between SSBs and RACH (time-frequency) resources. The UE transmits a random access preamble in the RACH time-frequency resource associated with the detected or selected SSB.

[0229] A threshold value of the SSB for RACH resource association may be set by the network, and the RACH preamble is transmitted or retransmitted on the basis of an SSB whose measured reference signal received power (RSRP) satisfies the threshold value. For example, the UE may select one of the SSB(s) satisfying the threshold value and may transmit or retransmit the RACH preamble on the RACH resource associated with the selected SSB.
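
The following Python sketch outlines this SSB selection logic for illustration; the threshold, the measured values, and the SSB-to-RACH-resource mapping are hypothetical assumptions, not values defined by this disclosure.

    # Illustrative sketch: select an SSB whose measured RSRP satisfies the
    # network-configured threshold and return its associated RACH resource.
    def select_rach_resource(rsrp_per_ssb: dict, threshold_dbm: float,
                             ssb_to_rach: dict):
        candidates = [s for s, rsrp in rsrp_per_ssb.items() if rsrp >= threshold_dbm]
        if not candidates:
            return None  # no SSB satisfies the threshold; fallback is UE-specific
        best = max(candidates, key=lambda s: rsrp_per_ssb[s])
        return ssb_to_rach[best]

    rsrp = {0: -95.0, 1: -88.5, 2: -101.2}   # assumed RSRP per SSB index [dBm]
    print(select_rach_resource(rsrp, -100.0, {0: "ro#0", 1: "ro#1", 2: "ro#2"}))
    # -> "ro#1" (SSB 1 has the best RSRP among SSBs above the threshold)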

[0230] When the BS receives the random access preamble from the UE, the BS transmits a random access response (RAR) message (Msg2) to the UE. The PDCCH that schedules the PDSCH carrying the RAR is CRC-masked with a random access radio network temporary identifier (RA-RNTI) and transmitted. Upon detection of the PDCCH masked with the RA-RNTI, the UE can receive the RAR from the PDSCH scheduled by the DCI carried on that PDCCH. The UE checks whether the RAR includes random access response information with respect to the preamble it transmitted, that is, Msg1. The presence or absence of random access information with respect to the transmitted Msg1 can be determined according to the presence or absence of a random access preamble ID corresponding to the transmitted preamble. If there is no response to Msg1, the UE can retransmit the RACH preamble up to a predetermined number of times while performing power ramping. The UE calculates the PRACH transmission power for preamble retransmission on the basis of the most recent path loss and a power ramping counter.
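
As a rough illustration of the power ramping just described (a sketch under assumed parameter names, not the normative computation), the PRACH transmission power can be modeled as a configured target received power, increased by a ramping step per retransmission and compensated by the estimated path loss, capped at the UE's maximum power:

    # Illustrative PRACH power ramping sketch; parameter names are assumptions.
    def prach_tx_power_dbm(target_rx_power: float, ramp_step_db: float,
                           ramp_counter: int, pathloss_db: float,
                           p_cmax_dbm: float) -> float:
        # Target preamble received power grows with each retransmission attempt.
        ramped_target = target_rx_power + (ramp_counter - 1) * ramp_step_db
        # Compensate the estimated path loss; never exceed the UE maximum power.
        return min(p_cmax_dbm, ramped_target + pathloss_db)

    # 3rd attempt, -104 dBm target, 2 dB step, 100 dB path loss, 23 dBm max power
    print(prach_tx_power_dbm(-104.0, 2.0, 3, 100.0, 23.0))  # -> 0.0 dBm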

[0231] The random access response information includes timing advance information for UL synchronization, an initial UL grant, and a temporary cell RNTI (C-RNTI). When the UE receives random access response information regarding itself on the PDSCH, the UE learns the timing advance information for UL synchronization, the initial UL grant, and the temporary C-RNTI. The timing advance information is used to control the uplink signal transmission timing. In order to ensure that PUSCH/PUCCH transmission by the UE is better aligned with the subframe timing at the network end, the network (e.g., the BS) may measure the time difference between PUSCH/PUCCH/SRS reception and the subframe timing and send timing advance information on the basis of this difference. The UE can perform UL transmission through Msg3 of the random access procedure over the physical uplink shared channel on the basis of the random access response information. Msg3 can include an RRC connection request and a UE ID. The network can transmit Msg4 as a response to Msg3, and Msg4 can be handled as a contention resolution message on DL. By receiving Msg4, the UE can enter the RRC connected state.

[0232] Meanwhile, the contention-free random access procedure may be performed when the UE performs handover to another cell or BS, or when it is requested by a BS command. The basic process of the contention-free random access procedure is similar to the contention-based random access procedure. However, unlike the contention-based random access procedure in which the UE randomly selects a preamble from among a plurality of random access preambles, in the contention-free random access procedure, the preamble to be used by the UE (hereinafter referred to as a dedicated random access preamble) is allocated by the BS to the UE. Information on the dedicated random access preamble may be included in an RRC message (e.g., a handover command) or may be provided to the UE via a PDCCH order. When the random access procedure is started, the UE transmits the dedicated random access preamble to the BS. When the UE receives a random access response from the BS, the random access procedure is completed.

[0233] As mentioned above, the UL grant in the RAR schedules PUSCH transmission to the UE. The PUSCH carrying the initial UL transmission based on the UL grant in the RAR is referred to as the Msg3 PUSCH. The content of the RAR UL grant starts at the MSB and ends at the LSB and is given in Table 3.

TABLE 3
RAR UL grant field                             Number of bits
Frequency hopping flag                         1
Msg3 PUSCH frequency resource allocation       12
Msg3 PUSCH time resource allocation            4
Modulation and coding scheme (MCS)             4
Transmit power control (TPC) for Msg3 PUSCH    3
CSI request                                    1

[0234] The TPC command is used to determine the transmission power of the Msg3 PUSCH and is interpreted, for example, according to Table 4.

TABLE 4
TPC command    Value [dB]
0              -6
1              -4
2              -2
3              0
4              2
5              4
6              6
7              8

[0235] In the contention-free random access procedure, the CSI request field in the RAR UL grant indicates whether the UE is to include an aperiodic CSI report in the corresponding PUSCH transmission. The subcarrier spacing for the Msg3 PUSCH transmission is provided by an RRC parameter. The UE transmits the PRACH and the Msg3 PUSCH on the same uplink carrier of the same serving cell. A UL BWP for the Msg3 PUSCH transmission is indicated by SIB1 (SystemInformationBlock1).
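
To make the field layout of Tables 3 and 4 concrete, the following Python sketch unpacks the RAR UL grant bits (MSB first, 25 bits in total per Table 3) and interprets the TPC command per Table 4; it is an illustration, and all names are assumptions.

    # Illustrative sketch: parse the RAR UL grant fields of Table 3 (MSB first)
    # and map the TPC command to dB per Table 4. Names are assumptions.
    FIELDS = [("freq_hopping_flag", 1), ("freq_resource_alloc", 12),
              ("time_resource_alloc", 4), ("mcs", 4), ("tpc", 3), ("csi_request", 1)]
    TPC_DB = {0: -6, 1: -4, 2: -2, 3: 0, 4: 2, 5: 4, 6: 6, 7: 8}  # Table 4

    def parse_rar_ul_grant(grant: int) -> dict:
        pos = sum(width for _, width in FIELDS)  # 25 bits per Table 3
        out = {}
        for name, width in FIELDS:
            pos -= width
            out[name] = (grant >> pos) & ((1 << width) - 1)
        out["tpc_db"] = TPC_DB[out["tpc"]]
        return out

    print(parse_rar_ul_grant(0b1_000000000011_0010_0101_100_1))
    # -> hopping=1, freq alloc=3, time alloc=2, MCS=5, TPC=4 (i.e., +2 dB), CSI=1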

[0236] H. DL and UL Transmitting/Receiving Operations

[0237] DL Transmitting/Receiving Operation

[0238] A downlink grant (also referred to as a downlink assignment) may be divided into (1) a dynamic grant and (2) a configured grant. The dynamic grant, which is intended to maximize resource utilization, refers to a method of data transmission/reception based on dynamic scheduling by the BS.

[0239] The BS schedules downlink transmission through a DCI. The UE receives, on the PDCCH, the DCI for downlink scheduling (i.e., including scheduling information of the PDSCH) from the BS. DCI format 1_0 or 1_1 may be used for downlink scheduling. The DCI format 1_1 for downlink scheduling may include, for example, the following information: an identifier for the DCI format, a bandwidth part indicator, a frequency domain resource assignment, a time domain resource assignment, and an MCS.

[0240] The UE may determine the modulation order, target code rate, and transport block size for the PDSCH on the basis of the MCS field in the DCI. The UE may receive the PDSCH in the time-frequency resources indicated by the frequency domain resource allocation information and the time domain resource allocation information.
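
As an illustration of how an MCS field value maps to a modulation order and target code rate, the sketch below uses a few entries in the style of the 3GPP TS 38.214 PDSCH MCS index tables; the entries shown are an abbreviated, illustrative excerpt and should be checked against the specification.

    # Illustrative excerpt of an MCS index table: index -> (modulation order Qm,
    # target code rate R x 1024). Abbreviated for illustration; see TS 38.214.
    MCS_TABLE = {0: (2, 120), 4: (2, 308), 9: (2, 679),
                 10: (4, 340), 16: (4, 658), 17: (6, 438), 28: (6, 948)}

    def decode_mcs(mcs_index: int) -> dict:
        qm, r1024 = MCS_TABLE[mcs_index]
        return {"modulation_order": qm, "target_code_rate": r1024 / 1024}

    print(decode_mcs(16))  # -> {'modulation_order': 4, 'target_code_rate': 0.642...}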

[0241] The configured DL grant is also referred to as semi-persistent scheduling (SPS). The UE may receive an RRC message including a resource configuration for the transmission of DL data from the BS. In the case of DL SPS, the actual DL configured grant is provided by the PDCCH and is activated or deactivated by the PDCCH. If DL SPS is configured, at least the following parameters are provided to the UE via RRC signaling from the BS: a configured scheduling RNTI (CS-RNTI) for activation, deactivation, and retransmission; and a periodicity. The actual DL grant of the DL SPS is provided to the UE by the DCI in the PDCCH addressed to the CS-RNTI. The UE activates the SPS associated with the CS-RNTI if specific fields of the DCI in the PDCCH addressed to the CS-RNTI are set to specific values for scheduling activation. The UE may receive downlink data through the PDSCH on the basis of the SPS.

[0242] UL Transmitting/Receiving Operation

[0243] The BS transmits a DCI including uplink scheduling information to the UE. The UE receives, on the PDCCH, the DCI for uplink scheduling (i.e., including scheduling information of the PUSCH) from the BS. DCI format 0_0 or 0_1 may be used for uplink scheduling. The DCI format 0_1 for uplink scheduling may include the following information: an identifier for the DCI format, a bandwidth part indicator, a frequency domain resource assignment, a time domain resource assignment, and an MCS.

[0244] The UE transmits uplink data on the PUSCH on the basis of the DCI. For example, when the UE detects a PDCCH including DCI format 0_0 or 0_1, it transmits the PUSCH according to the instructions in that DCI. Two transmission schemes are supported for PUSCH transmission: codebook-based transmission and non-codebook-based transmission.

[0245] When the UE receives an RRC message in which the RRC parameter `txConfig` is set to `codebook`, the UE is configured for codebook-based transmission. Meanwhile, when an RRC message in which the RRC parameter `txConfig` is set to `nonCodebook` is received, the UE is configured for non-codebook-based transmission. The PUSCH may be scheduled dynamically by DCI format 0_0 or DCI format 0_1, or semi-statically by RRC signaling.

[0246] The uplink grant may be divided into (1) a dynamic grant and (2) a configured grant.

[0247] FIG. 12 shows an example of an uplink grant. FIG. 12(a) illustrates a UL transmission process based on the dynamic grant, and FIG. 12(b) illustrates a UL transmission process based on the configured grant.

[0248] The dynamic grant, which is intended to maximize the utilization of resources, refers to a data transmission/reception method based on dynamic scheduling by the BS. This means that when the UE has data to be transmitted, it requests an uplink resource allocation from the BS and transmits the data using only the uplink resources allocated by the BS. In order to use the uplink radio resources efficiently, the BS must know how much data each UE has to transmit on the uplink. Therefore, the UE may directly transmit information on the uplink data to be transmitted to the BS, and the BS may allocate uplink resources to the UE on the basis of this information. In this case, the information on the uplink data transmitted from the UE to the BS is referred to as a buffer status report (BSR), and the BSR relates to the amount of uplink data stored in the buffer of the UE.

[0249] Referring to FIG. 12(a), an uplink resource allocation process for actual data is illustrated for the case in which the UE has no uplink radio resource available for transmission of the BSR. Since a UE that has no UL grant available for UL data transmission cannot transmit the BSR through a PUSCH, the UE must request resources for the uplink data by first transmitting a scheduling request via a PUCCH; in this case, a five-step uplink resource allocation process is used.

[0250] Referring to FIG. 12(a), if there is no PUSCH resource for transmitting a BSR, the UE first transmits a scheduling request (SR) to the BS in order to be allocated a PUSCH resource. The SR is used by the UE to request PUSCH resources for uplink transmission from the BS when a reporting event occurs but no PUSCH resource is available to the UE. Depending on whether there is a valid PUCCH resource for the SR, the UE transmits the SR via the PUCCH or initiates a random access procedure. When the UE receives a UL grant from the BS, it transmits the BSR to the BS via the PUSCH resource allocated by the UL grant. The BS checks the amount of data to be transmitted by the UE on the uplink on the basis of the BSR and transmits a UL grant to the UE. The UE receiving that UL grant transmits the actual uplink data to the BS through the PUSCH on the basis of the UL grant.
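
The decision logic of this five-step process can be summarized by the following Python sketch (an illustrative abstraction; the step labels and names are assumptions, not signaling defined by this disclosure):

    # Illustrative sketch of the UE-side uplink resource request logic.
    def request_ul_resources(has_pusch_for_bsr: bool, has_valid_pucch_sr: bool):
        if has_pusch_for_bsr:
            return ["send BSR on PUSCH", "receive UL grant", "send data on PUSCH"]
        if has_valid_pucch_sr:
            # Five-step process: SR -> grant -> BSR -> grant -> data.
            return ["send SR on PUCCH", "receive UL grant", "send BSR on PUSCH",
                    "receive UL grant", "send data on PUSCH"]
        # No valid PUCCH resource for the SR: fall back to random access.
        return ["initiate random access procedure"]

    print(request_ul_resources(False, True))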

[0251] Referring to FIG. 12(b), the UE receives an RRC message including a resource configuration for the transmission of UL data from the BS. There are two types of UL configured grants in the NR system: Type 1 and Type 2. In the case of UL configured grant Type 1, an actual UL grant (e.g., time resource, frequency resource) is provided by RRC signaling, whereas in the case of Type 2, an actual UL grant is provided by the PDCCH and is activated or deactivated by the PDCCH. If configured grant Type 1 is configured, at least the following parameters are provided to the UE via RRC signaling from the BS: a CS-RNTI for retransmission; a periodicity of configured grant Type 1; information about a start symbol index S and a symbol length L for the intra-slot PUSCH; a time domain offset representing the offset of the resource with respect to SFN=0 in the time domain; and an MCS index indicating the modulation order, target code rate, and transport block size. If configured grant Type 2 is configured, at least the following parameters are provided to the UE via RRC signaling from the BS: a CS-RNTI for activation, deactivation, and retransmission; and a periodicity of configured grant Type 2. The actual UL grant of configured grant Type 2 is provided to the UE by the DCI in the PDCCH addressed to the CS-RNTI. If the specific fields of the DCI in the PDCCH addressed to the CS-RNTI are set to specific values for scheduling activation, the UE activates the configured grant Type 2 associated with the CS-RNTI.

[0252] The UE may perform uplink transmission via the PUSCH on the basis of a Type 1 or Type 2 configured grant.
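
For illustration, the following Python sketch checks whether a given slot is a Type 1 configured-grant occasion using a time domain offset and a periodicity, in the spirit of the parameters listed above (a simplified, slot-granularity model with assumed units; the normative formula additionally involves symbol-level granularity):

    # Illustrative sketch: is slot (sfn, slot) a configured-grant occasion?
    # Simplified to slot granularity; names and units are assumptions.
    def is_cg_occasion(sfn: int, slot: int, slots_per_frame: int,
                       time_offset_slots: int, periodicity_slots: int) -> bool:
        absolute_slot = sfn * slots_per_frame + slot
        return (absolute_slot - time_offset_slots) % periodicity_slots == 0

    # 30 kHz SCS (20 slots/frame), offset of 2 slots, period of 10 slots:
    occasions = [(f, s) for f in range(2) for s in range(20)
                 if is_cg_occasion(f, s, 20, 2, 10)]
    print(occasions)  # -> [(0, 2), (0, 12), (1, 2), (1, 12)]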

[0253] Resources for initial transmissions by a configured grant may or may not be shared among one or more UEs.

[0254] FIG. 13 shows an example of a conceptual diagram of uplink physical channel processing.

[0255] Each of the blocks shown in FIG. 13 may be performed in a corresponding module in the physical layer block of a transmission device. More specifically, the uplink signal processing in FIG. 13 may be performed in the processor of the UE/BS described in this specification. Referring to FIG. 13, the uplink physical channel processing may be performed through scrambling, modulation mapping, layer mapping, transform precoding, precoding, resource element mapping, and SC-FDMA signal generation. Each of the above processes may be performed separately or together in the respective modules of the transmission device. The transform precoding spreads the UL data in a special way to reduce the peak-to-average power ratio (PAPR) of the waveform and is a kind of discrete Fourier transform (DFT). OFDM that uses a CP together with transform precoding performing DFT spreading is called DFT-s-OFDM, and OFDM using a CP without DFT spreading is called CP-OFDM. Transform precoding may optionally be applied if it is enabled for the UL in the NR system. That is, the NR system supports two options for the UL waveform: one is CP-OFDM and the other is DFT-s-OFDM. Whether the UE must use CP-OFDM or DFT-s-OFDM as the UL transmit waveform is indicated by the BS to the UE via RRC parameters. FIG. 13 is a conceptual diagram of uplink physical channel processing for DFT-s-OFDM. In the case of CP-OFDM, the transform precoding among the processes of FIG. 13 is omitted.

[0256] More specifically, the transmission device scrambles the coded bits in a codeword by a scrambling module and then transmits them through a physical channel. Here, the codeword is acquired by encoding a transport block. The scrambled bits are modulated by a modulation mapping module into complex-valued modulation symbols. The modulation mapping module may modulate the scrambled bits according to a predetermined modulation scheme and arrange them as complex-valued modulation symbols representing positions on a signal constellation. pi/2-BPSK (pi/2-binary phase shift keying), m-PSK (m-phase shift keying), or m-QAM (m-quadrature amplitude modulation) may be used for modulating the coded data. The complex-valued modulation symbols may be mapped to one or more transport layers by a layer mapping module. The complex-valued modulation symbols on each layer may be precoded by a precoding module for transmission on the antenna ports. If transform precoding is enabled, the precoding module performs precoding after performing transform precoding on the complex-valued modulation symbols, as shown in FIG. 13. The precoding module may process the complex-valued modulation symbols in a MIMO manner according to the multiple transmission antennas to output antenna-specific symbols and distribute the antenna-specific symbols to the corresponding resource element mapping modules. An output z of the precoding module may be acquired by multiplying an output y of the layer mapping module by an N×M precoding matrix W, i.e., z = Wy, where N is the number of antenna ports and M is the number of layers. The resource element mapping module maps the complex-valued modulation symbols for each antenna port to appropriate resource elements in the resource blocks allocated for transmission. The resource element mapping module may map the complex-valued modulation symbols to appropriate subcarriers and multiplex them according to users. The SC-FDMA signal generation module (a CP-OFDM signal generation module if transform precoding is disabled) modulates the complex-valued modulation symbols according to a specific modulation scheme, for example, the OFDM scheme, to generate complex-valued time domain OFDM (orthogonal frequency division multiplexing) symbol signals. The signal generation module may perform an inverse fast Fourier transform (IFFT) on the antenna-specific symbols, and a CP may be inserted into the time domain symbols on which the IFFT has been performed. The OFDM symbols undergo digital-to-analog conversion, upconversion, and the like, and are transmitted to a reception device through each transmission antenna. The signal generation module may include an IFFT module, a CP inserter, a digital-to-analog converter (DAC), and a frequency up-converter.
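
A minimal numerical sketch of this chain for a single layer and a single antenna port is given below (illustrative only; it assumes QPSK, omits scrambling details and the full resource grid, and is not the normative processing):

    # Illustrative single-layer DFT-s-OFDM sketch (NumPy). Not normative.
    import numpy as np

    def dfts_ofdm_symbol(bits: np.ndarray, n_sc: int, n_fft: int, cp_len: int):
        # Modulation mapping: QPSK, 2 bits -> 1 complex symbol.
        b = bits.reshape(-1, 2)
        syms = ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)
        # Transform precoding: DFT spreading over the allocated subcarriers.
        spread = np.fft.fft(syms, n_sc) / np.sqrt(n_sc)
        # Resource element mapping: place onto n_sc contiguous subcarriers.
        grid = np.zeros(n_fft, dtype=complex)
        grid[:n_sc] = spread
        # Signal generation: IFFT plus cyclic prefix insertion.
        time = np.fft.ifft(grid) * np.sqrt(n_fft)
        return np.concatenate([time[-cp_len:], time])

    rng = np.random.default_rng(0)
    sig = dfts_ofdm_symbol(rng.integers(0, 2, 24), n_sc=12, n_fft=64, cp_len=5)
    print(sig.shape)  # -> (69,): 64 time samples plus a 5-sample CP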

[0257] The signal processing procedure of a reception device may be the reverse of the signal processing procedure of the transmission device. For details, refer to the description above and FIG. 13.

[0258] Next, the PUCCH will be described.

[0259] The PUCCH supports a plurality of formats, and the PUCCH formats may be classified according to symbol duration, payload size, multiplexing, and the like. Table 5 below illustrates the PUCCH formats.

TABLE 5
Format   Length in OFDM symbols   Number of bits   Usage   Etc.
0        1-2                      ≤2               1       Sequence selection
1        4-14                     ≤2               2       Sequence modulation
2        1-2                      >2               4       CP-OFDM
3        4-14                     >2               8       DFT-s-OFDM (no UE multiplexing)
4        4-14                     >2               16      DFT-s-OFDM (pre-DFT orthogonal cover code (OCC))

[0260] The PUCCH formats shown in Table 5 may be divided into (1) short PUCCH and (2) long PUCCH. PUCCH formats 0 and 2 belong to the short PUCCH, and PUCCH formats 1, 3, and 4 belong to the long PUCCH.

[0261] FIG. 14 shows an example of an NR slot in which a PUCCH is transmitted.

[0262] The UE transmits one or two PUCCHs through a serving cell in different symbols within one slot. When the UE transmits two PUCCHs in one slot, at least one of the two PUCCHs has the short PUCCH structure.

[0263] I. eMBB (Enhanced Mobile Broadband Communication)

[0264] In the case of the NR system, a massive multiple-input multiple-output (MIMO) environment in which the numbers of transmit/receive antennas are significantly increased may be considered. That is, as the massive MIMO environment is considered, the number of transmit/receive antennas may increase to several tens or hundreds or more. Meanwhile, the NR system supports communication in bands above 6 GHz, that is, in the millimeter frequency band. However, the millimeter frequency band has the characteristic that signal attenuation with distance is very sharp because of the high frequency. Therefore, an NR system using a band of 6 GHz or higher uses a beamforming technique in which energy is collected and transmitted in a specific direction, rather than in all directions, in order to compensate for the sharp propagation attenuation. In a massive MIMO environment, a hybrid beamforming technique combining an analog beamforming technique and a digital beamforming technique is required, depending on the position to which the beamforming weight vector/precoding vector is applied, in order to reduce the complexity of hardware implementation, increase the performance obtained from multiple antennas, obtain flexibility of resource allocation, and facilitate beam control for each frequency.

[0265] Hybrid Beamforming

[0266] FIG. 15 illustrates an example of a block diagram of a transmitter and a receiver for hybrid beamforming.

[0267] As a method of forming a narrow beam in the millimeter frequency band, a beamforming scheme in which energy is increased only in a specific direction by transmitting the same signal with suitable phase differences across a large number of antennas at a BS or UE is mainly considered. Such beamforming schemes include digital beamforming, which creates the phase differences in the digital baseband signal; analog beamforming, which creates the phase differences in the modulated analog signal using time delays (i.e., cyclic shifts); and hybrid beamforming, which uses both digital and analog beamforming. If each antenna element has an RF unit (or transceiver unit (TXRU)) to adjust transmission power and phase, independent beamforming is possible for each frequency resource. However, installing an RF unit in each of, for example, 100 antenna elements is not cost-effective. That is, since the millimeter frequency band requires a large number of antennas to compensate for the sharp attenuation characteristics, and digital beamforming requires RF components per antenna (e.g., a digital-to-analog converter (DAC), a mixer, a power amplifier, a linear amplifier, and the like), implementing digital beamforming in the millimeter frequency band increases the price of the communication device. Therefore, when a large number of antennas is required, as in the millimeter frequency band, the use of analog beamforming or hybrid beamforming is considered. In the analog beamforming scheme, a plurality of antenna elements are mapped to one TXRU, and the direction of the beam is adjusted by analog phase shifters. Such an analog beamforming scheme can generate only one beam direction in the entire band and thus cannot perform frequency selective beamforming (BF). Hybrid BF is an intermediate form between digital BF and analog BF and has B RF units, fewer than the Q antenna elements. In the case of hybrid BF, the number of beam directions that can be transmitted at the same time is limited to B or fewer, although the details depend on the method of connecting the B RF units and the Q antenna elements.
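
A toy numerical model of this hybrid structure is sketched below (illustrative assumptions only: Q antenna elements, B RF units, a phase-only analog stage, and randomly chosen rather than optimized weights):

    # Illustrative hybrid beamforming sketch (NumPy): x = F_RF @ F_BB @ s,
    # where F_RF is a Q x B phase-only analog stage and F_BB a B x M digital stage.
    import numpy as np

    rng = np.random.default_rng(1)
    Q, B, M = 16, 4, 2          # antenna elements, RF units, layers (assumed sizes)

    # Analog stage: unit-modulus entries only (phase shifters, no gain control).
    F_RF = np.exp(1j * rng.uniform(0, 2 * np.pi, (Q, B))) / np.sqrt(Q)
    # Digital stage: unconstrained complex weights per RF chain.
    F_BB = (rng.normal(size=(B, M)) + 1j * rng.normal(size=(B, M))) / np.sqrt(2 * B)

    s = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)  # layer symbols
    x = F_RF @ F_BB @ s          # per-antenna transmit signal
    print(x.shape)               # -> (16,)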

[0268] Beam Management (BM)

[0269] The BM process includes processes for acquiring and maintaining a set of BS (or transmission and reception point (TRP)) and/or UE beams that may be used for downlink (DL) and uplink (UL) transmission/reception, and may include the following processes and terms.

[0270] Beam measurement: an operation for the BS or UE to measure the characteristics of a received beamformed signal.

[0271] Beam determination: an operation for the BS or UE to select its own Tx beam/Rx beam.

[0272] Beam sweeping: an operation of covering the spatial domain using transmission and/or reception beams during a predetermined time interval in a predetermined manner.

[0273] Beam report: an operation for the UE to report information on a beamformed signal on the basis of beam measurement.

[0274] The BM process may be classified into (1) a DL BM process using the SSB or CSI-RS and (2) a UL BM process using the SRS (sounding reference signal). Also, each BM process may include Tx beam sweeping to determine the Tx beam and Rx beam sweeping to determine the Rx beam.

[0275] DL BM Process

[0276] The DL BM process may include (1) transmission of beamformed DL RSs (e.g., CSI-RS or SSB) by the BS and (2) beam reporting by the UE.

[0277] Here, the beam report may include preferred DL RS ID(s) and the corresponding reference signal received power (RSRP). The DL RS ID may be an SSBRI (SSB resource indicator) or a CRI (CSI-RS resource indicator).

[0278] FIG. 16 shows an example of beamforming using SSB and CSI-RS.

[0279] As shown in FIG. 16, the SSB beam and the CSI-RS beam may be used for beam measurement. The measurement metric is the RSRP per resource/block. The SSB may be used for coarse beam measurement, and the CSI-RS may be used for fine beam measurement. The SSB may be used for both Tx beam sweeping and Rx beam sweeping. Rx beam sweeping using the SSB may be performed by attempting to receive the SSB while the UE changes its Rx beam for the same SSBRI across multiple SSB bursts. Here, one SS burst may include one or more SSBs, and one SS burst set includes one or more SSB bursts.

[0280] 1. DL BM Using SSB

[0281] FIG. 17 is a flowchart illustrating an example of a DL BM process using SSB.

[0282] A configuration for beam reporting using the SSB is performed at the time of the channel state information (CSI)/beam configuration in RRC_CONNECTED.

[0283] The UE receives, from the BS, a CSI-ResourceConfig IE including a CSI-SSB-ResourceSetList for the SSB resources used for the BM (S410). The RRC parameter csi-SSB-ResourceSetList represents a list of SSB resources used for beam management and reporting in one resource set. Here, the SSB resource set may be configured as {SSBx1, SSBx2, SSBx3, SSBx4}. The SSB index may be defined from 0 to 63.

[0284] The UE receives signals on the SSB resources from the BS on the basis of the CSI-SSB-ResourceSetList (S420).

[0285] If a CSI-RS reportConfig associated with reporting of the SSBRI and reference signal received power (RSRP) is configured, the UE reports the best SSBRI and its corresponding RSRP to the BS (S430). For example, if the reportQuantity of the CSI-RS reportConfig IE is set to `ssb-Index-RSRP`, the UE reports the best SSBRI and the corresponding RSRP to the BS.

[0286] When a CSI-RS resource is configured in the same OFDM symbol(s) as an SSB and `QCL-TypeD` is applicable, the UE may assume that the CSI-RS and the SSB are quasi co-located (QCL-ed) in terms of `QCL-TypeD`. Here, QCL-TypeD may mean that antenna ports are QCL-ed in terms of the spatial Rx parameter. The same receive beam may be applied when the UE receives the signals of a plurality of DL antenna ports in the QCL-TypeD relationship. For details of QCL, refer to section "4. QCL (Quasi-Co Location)" below.

[0287] 2. DL BM Using CSI-RS

[0288] Regarding the use of the CSI-RS: i) if a repetition parameter is set for a specific CSI-RS resource set and TRS_info is not configured, the CSI-RS is used for beam management; ii) if the repetition parameter is not set and TRS_info is set, the CSI-RS is used as a tracking reference signal (TRS); iii) if the repetition parameter is not set and TRS_info is not set, the CSI-RS is used for CSI acquisition.

[0289] (RRC parameter) If repetition is set to `ON`, it relates to the Rx beam sweeping process of the UE. If repetition is set to `ON` and an NZP-CSI-RS-ResourceSet is configured, the UE may assume that the signals of at least one CSI-RS resource in the NZP-CSI-RS-ResourceSet are transmitted with the same downlink spatial domain filter. That is, the at least one CSI-RS resource in the NZP-CSI-RS-ResourceSet is transmitted through the same Tx beam. Here, the signals of the at least one CSI-RS resource in the NZP-CSI-RS-ResourceSet may be transmitted in different OFDM symbols.

[0290] Meanwhile, if repetition is set to `OFF`, it relates to the Tx beam sweeping process of the BS. If repetition is set to `OFF`, the UE does not assume that the signals of at least one CSI-RS resource in the NZP-CSI-RS-ResourceSet are transmitted with the same downlink spatial domain transmission filter. That is, the signals of the at least one CSI-RS resource in the NZP-CSI-RS-ResourceSet are transmitted through different Tx beams.

FIG. 18 shows another example of the DL BM process using CSI-RS.

[0291] FIG. 18(a) shows the Rx beam determination (or refinement) process of the UE, and FIG. 18(b) shows the Tx beam sweeping process of the BS. FIG. 18(a) shows the case where the repetition parameter is set to `ON`, and FIG. 18(b) shows the case where the repetition parameter is set to `OFF`.

[0292] The process of determining the Rx beam of the UE will be described with reference to FIGS. 18(a) and 19.

[0293] FIG. 19 is a flowchart illustrating an example of a process of determining the reception beam of the UE.

[0294] The UE receives an NZP CSI-RS resource set IE including the RRC parameter `repetition` from the BS through RRC signaling (S610). Here, the RRC parameter `repetition` is set to `ON`.

[0295] The UE repeatedly receives signals on the resource(s) in the CSI-RS resource set in which the RRC parameter `repetition` is set to `ON`, in different OFDM symbols, through the same Tx beam (or DL spatial domain transmission filter) of the BS (S620).

[0296] The UE determines its own Rx beam (S630).

[0297] The UE omits CSI reporting (S640). That is, the UE may omit CSI reporting when the RRC parameter `repetition` is set to `ON`.

[0298] The Tx beam determining process of the BS will be described with reference to FIGS. 18(b) and 20.

[0299] FIG. 20 is a flowchart illustrating an example of the transmission beam determining process of the BS.

[0300] The UE receives an NZP CSI-RS resource set IE including the RRC parameter `repetition` from the BS through RRC signaling (S710). Here, the RRC parameter `repetition` is set to `OFF` and is related to the Tx beam sweeping process of the BS.

[0301] The UE receives signals on the resources in the CSI-RS resource set in which the RRC parameter `repetition` is set to `OFF`, through different Tx beams (DL spatial domain transmission filters) of the BS (S720).

[0302] The UE selects (or determines) the best beam (S730).

[0303] The UE reports an ID (e.g., CRI) for the selected beam and related quality information (e.g., RSRP) to the BS (S740). That is, the UE reports the CRI and the corresponding RSRP to the BS when the CSI-RS is transmitted for the BM.

[0304] FIG. 21 shows an example of resource allocation in the time and frequency domains related to the operation of FIG. 18.

[0305] When repetition `ON` is set in the CSI-RS resource set, a plurality of CSI-RS resources are used repeatedly by applying the same transmission beam, and when repetition `OFF` is set in the CSI-RS resource set, different CSI-RS resources may be transmitted on different transmission beams.

[0306] 3. DL BM-Related Beam Indication

[0307] The UE may receive a list of up to M candidate transmission configuration indication (TCI) states, at least for the purpose of quasi co-location (QCL) indication, via RRC signaling. Here, M depends on the UE capability and may be 64.

[0308] Each TCI state may be configured with one reference signal (RS) set. Table 6 shows an example of a TCI-State IE. The TCI-State IE is associated with a quasi co-location (QCL) type corresponding to one or two DL reference signals (RSs).

TABLE 6
-- ASN1START
-- TAG-TCI-STATE-START
TCI-State ::=        SEQUENCE {
   tci-StateId       TCI-StateId,
   qcl-Type1         QCL-Info,
   qcl-Type2         QCL-Info          OPTIONAL,   -- Need R
   ...
}
QCL-Info ::=         SEQUENCE {
   cell              ServCellIndex     OPTIONAL,   -- Need R
   bwp-Id            BWP-Id            OPTIONAL,   -- Cond CSI-RS-Indicated
   referenceSignal   CHOICE {
      csi-rs         NZP-CSI-RS-ResourceId,
      ssb            SSB-Index
   },
   qcl-Type          ENUMERATED {typeA, typeB, typeC, typeD},
   ...
}
-- TAG-TCI-STATE-STOP
-- ASN1STOP

[0309] In Table 6, `bwp-Id` denotes the DL BWP where the RS is located, `cell` denotes the carrier where the RS is located, and `referenceSignal` denotes the reference antenna port(s) serving as the QCL source for the target antenna port(s), or a reference signal including the same. The target antenna port(s) may be for the CSI-RS, PDCCH DMRS, or PDSCH DMRS.

[0310] 4. QCL (Quasi-Co Location)

[0311] The UE may receive a list including up to M TCI-state configurations in order to decode the PDSCH according to a detected PDCCH carrying a DCI intended for the UE and a given cell. Here, M depends on the UE capability.

[0312] As illustrated in Table 6, each TCI-State includes a parameter for establishing the QCL relationship between one or two DL RSs and the DM-RS ports of the PDSCH. The QCL relationship is configured with the RRC parameter qcl-Type1 for the first DL RS and qcl-Type2 (if set) for the second DL RS.

[0313] The QCL type corresponding to each DL RS is given by the parameter `qcl-Type` in QCL-Info and may have one of the following values:

[0314] `QCL-TypeA`: {Doppler shift, Doppler spread, average delay, delay spread}

[0315] `QCL-TypeB`: {Doppler shift, Doppler spread}

[0316] `QCL-TypeC`: {Doppler shift, average delay}

[0317] `QCL-TypeD`: {Spatial Rx parameter}

[0318] For example, when the target antenna port is for a specific NZP CSI-RS, the corresponding NZP CSI-RS antenna ports may be indicated/configured to be QCL-ed with a specific TRS in terms of QCL-TypeA and with a specific SSB in terms of QCL-TypeD. The UE thus indicated/configured may receive the corresponding NZP CSI-RS using the Doppler and delay values measured on the QCL-TypeA TRS and may apply the reception beam used for receiving the QCL-TypeD SSB to the reception of that NZP CSI-RS.
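
For illustration, the mapping from QCL type to the channel properties a UE may reuse from the source RS can be expressed as a simple lookup (an illustrative Python sketch; the dictionary merely restates the list above):

    # Illustrative sketch: which channel properties each QCL type lets the UE
    # reuse from the source RS when receiving the target antenna port(s).
    QCL_PROPERTIES = {
        "typeA": {"Doppler shift", "Doppler spread", "average delay", "delay spread"},
        "typeB": {"Doppler shift", "Doppler spread"},
        "typeC": {"Doppler shift", "average delay"},
        "typeD": {"spatial Rx parameter"},
    }

    def inherited_properties(qcl_types: list) -> set:
        props = set()
        for t in qcl_types:
            props |= QCL_PROPERTIES[t]
        return props

    # The example of paragraph [0318]: TRS as the typeA source, SSB as the typeD source.
    print(inherited_properties(["typeA", "typeD"]))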

[0319] UL BM Process

[0320] In UL BM, Tx beam-Rx beam reciprocity (or beam correspondence) may or may not hold depending on the UE implementation. If Tx beam-Rx beam reciprocity holds at both the BS and the UE, a UL beam pair may be matched through a DL beam pair. However, if Tx beam-Rx beam reciprocity does not hold at either the BS or the UE, a UL beam pair determination process is required separately from the DL beam pair determination.

[0321] In addition, even when the BS and the UE maintain beam correspondence, the BS may use the UL BM process for DL Tx beam determination without requesting the UE to report a preferred beam.

[0322] UL BM may be performed through beamformed UL SRS transmission, and whether UL BM is applied to an SRS resource set is configured by the (RRC parameter) usage. If the usage is configured as `BeamManagement (BM)`, only one SRS resource may be transmitted for each of a plurality of SRS resource sets at a given time instant.

[0323] The UE may be configured (through RRC signaling, etc.) with one or more sounding reference signal (SRS) resource sets set by the (RRC parameter) SRS-ResourceSet. For each SRS resource set, K ≥ 1 SRS resources may be set for the UE, where K is a natural number and the maximum value of K is indicated by SRS_capability.

[0324] Like the DL BM, the UL BM process may also be divided into Tx beam sweeping of the UE and Rx beam sweeping of the BS.

[0325] FIG. 22 shows an example of a UL BM process using SRS.

[0326] FIG. 22(a) shows the Rx beamforming determination process of the BS, and FIG. 22(b) shows the Tx beam sweeping process of the UE.

[0327] FIG. 23 is a flowchart illustrating an example of a UL BM process using SRS.

[0328] The UE receives RRC signaling (e.g., an SRS-Config IE) including an (RRC parameter) usage parameter set to `beam management` from the BS (S1010). The SRS-Config IE is used for the configuration of SRS transmission. The SRS-Config IE includes a list of SRS-Resources and a list of SRS-ResourceSets. Each SRS resource set refers to a set of SRS-resources.

[0329] The UE determines Tx beamforming for the SRS resource to be transmitted on the basis of the SRS-SpatialRelationInfo included in the SRS-Config IE (S1020). Here, SRS-SpatialRelationInfo is configured for each SRS resource and indicates whether to apply the same beamforming as that used for the SSB, CSI-RS, or SRS for each SRS resource.

[0330] If SRS-SpatialRelationInfo is configured for the SRS resource, the same beamforming as that used for the SSB, CSI-RS, or SRS is applied and transmitted. However, if SRS-SpatialRelationInfo is not configured for the SRS resource, the UE determines the Tx beamforming on its own and transmits the SRS through the determined Tx beamforming (S1030).

[0331] More specifically, regarding a P-SRS for which `SRS-ResourceConfigType` is set to `periodic`:

[0332] i) If SRS-SpatialRelationInfo is set to `SSB/PBCH`, the UE transmits the corresponding SRS by applying the same spatial domain transmission filter as the spatial domain Rx filter used for receiving the SSB/PBCH (or a filter generated from the corresponding filter); or

[0333] ii) If SRS-SpatialRelationInfo is set to `CSI-RS`, the UE transmits the SRS by applying the same spatial domain transmission filter used for receiving the CSI-RS; or

[0334] iii) If SRS-SpatialRelationInfo is set to `SRS`, the UE transmits the corresponding SRS by applying the same spatial domain transmission filter used for transmitting that SRS.

[0335] In addition, the UE may or may not receive feedback on the SRS from the BS, as in the following three cases (S1040).

[0336] i) When Spatial_Relation_Info is set for all SRS resources in the SRS resource set, the UE transmits the SRS on the beam indicated by the BS. For example, if Spatial_Relation_Info indicates the same SSB, CRI, or SRI for all SRS resources, the UE repeatedly transmits the SRS on the same beam.

[0337] ii) Spatial_Relation_Info may be set for none of the SRS resources in the SRS resource set. In this case, the UE may transmit freely while changing the SRS beamforming.

[0338] iii) Spatial_Relation_Info may be set for only some SRS resources in the SRS resource set. In this case, the SRS is transmitted on the indicated beam for the configured SRS resources, and for an SRS resource for which Spatial_Relation_Info is not set, the UE may transmit the SRS by randomly applying Tx beamforming.

[0339] A Beam Failure Recovery (BFR) Process

[0340] In a beamformed system, a radio link failure (RLF) may occur frequently due to rotation or movement of the UE or beamforming blockage. Therefore, BFR is supported in NR to prevent frequent occurrence of RLFs. BFR is similar to the radio link failure recovery process and may be supported if the UE knows the new candidate beam(s).

[0341] For beam failure detection, the BS configures beam failure detection reference signals for the UE, and the UE declares beam failure if the number of beam failure indications from the physical layer of the UE reaches a threshold set by RRC signaling within a period set by the RRC signaling of the BS.

[0342] After the beam failure is detected, the UE triggers beam failure recovery by initiating a random access procedure on the PCell and performs beam failure recovery by selecting a suitable beam (if the BS provides dedicated random access resources for certain beams, these are prioritized by the UE). Upon completion of the random access procedure, the beam failure recovery is considered complete.

[0343] J. URLLC (Ultra-Reliable and Low Latency Communication)

[0344] URLLC transmission as defined in NR may refer to transmission for (1) a relatively small traffic size, (2) a relatively low arrival rate, (3) an extremely low latency requirement (e.g., 0.5 ms, 1 ms), (4) a relatively short transmission duration (e.g., 2 OFDM symbols), and (5) urgent services/messages.

[0345] In the case of UL, transmission for a particular type of traffic (e.g., URLLC) may need to be multiplexed with other, previously scheduled transmissions (e.g., eMBB) to meet a more stringent latency requirement. In this regard, one method is to give information indicating that a scheduled UE will be preempted on a specific resource and to allow the URLLC UE to use that resource for UL transmission.

[0346] Pre-Emption Indication

[0347] In the case of NR, dynamic resource sharing between eMBB and URLLC is supported. eMBB and URLLC services may be scheduled on non-overlapping time/frequency resources, and URLLC transmission may occur on resources already scheduled for ongoing eMBB traffic. The eMBB UE may not know whether its PDSCH transmission has been partially punctured, and the UE may not be able to decode the PDSCH due to the corrupted coded bits. In consideration of this, NR provides a preemption indication.

[0348] The preemption indication may also be referred to as an interrupted transmission indication.

[0349] With respect to the preemption indication, the UE receives a DownlinkPreemption IE through RRC signaling from the BS. Table 7 below shows an example of the DownlinkPreemption IE.

TABLE 7
-- ASN1START
-- TAG-DOWNLINKPREEMPTION-START
DownlinkPreemption ::=   SEQUENCE {
   int-RNTI              RNTI-Value,
   timeFrequencySet      ENUMERATED {set0, set1},
   dci-PayloadSize       INTEGER (0..maxINT-DCI-PayloadSize),
   int-ConfigurationPerServingCell
                         SEQUENCE (SIZE (1..maxNrofServingCells))
                            OF INT-ConfigurationPerServingCell,
   ...
}
INT-ConfigurationPerServingCell ::= SEQUENCE {
   servingCellId         ServCellIndex,
   positionInDCI         INTEGER (0..maxINT-DCI-PayloadSize-1)
}
-- TAG-DOWNLINKPREEMPTION-STOP
-- ASN1STOP

[0350] If the UE is provided with the DownlinkPreemption IE, the UE is configured with an INT-RNTI, provided by the parameter int-RNTI in the DownlinkPreemption IE, for monitoring a PDCCH conveying DCI format 2_1. The UE is further configured with a set of serving cells and the corresponding set of positions of fields in DCI format 2_1, by positionInDCI in INT-ConfigurationPerServingCell including a set of serving cell indices provided by servingCellId; with an information payload size for DCI format 2_1 by dci-PayloadSize; and with the granularity of the time-frequency resources by timeFrequencySet.

[0351] The UE receives DCI format 2_1 from the BS on the basis of the DownlinkPreemption IE.

[0352] If the UE detects DCI format 2_1 for a serving cell in the configured set of serving cells, the UE may assume that there is no transmission to the UE in the PRBs and symbols indicated by the DCI format 2_1, among the sets of PRBs and sets of symbols in the last monitoring period before the monitoring period to which the DCI format 2_1 belongs. For example, referring to FIG. 9A, the UE determines that a signal in the time-frequency resources indicated by the preemption is not a DL transmission scheduled for itself and decodes the data on the basis of the signals received in the remaining resource area.

[0353] FIG. 24 is a diagram showing an example of a preemption indication method.

[0354] The combination {M, N} is set by the RRC parameter timeFrequencySet: {M, N} is either {14, 1} or {7, 2}.

[0355] FIG. 25 shows an example of a time/frequency set of a preemption indication.

[0356] A 14-bit bitmap for the preemption indication indicates one or more frequency parts (N>=1) and/or one or more time domain parts (M>=1). In the case of {M, N}={14, 1}, as shown in FIG. 25(a), 14 parts in the time domain correspond one-to-one to the 14 bits of the 14-bit bitmap, and a part corresponding to a bit set to 1 among the 14 bits is a part including pre-empted resources. In the case of {M, N}={7, 2}, as shown in FIG. 25(b), the time-frequency resources of the monitoring period are divided into seven parts in the time domain and two parts in the frequency domain, i.e., into a total of 14 time-frequency parts. The 14 time-frequency parts correspond one-to-one to the 14 bits of the 14-bit bitmap, and a part corresponding to a bit set to 1 among the 14 bits includes the pre-empted resources.
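
A small Python sketch of this bitmap interpretation is given below (illustrative; the bit-to-part indexing convention, MSB = part 0 and time-first ordering, is an assumption made for the example):

    # Illustrative sketch: map a 14-bit preemption bitmap (assumed MSB = part 0)
    # to the pre-empted (time, frequency) parts for {M, N} = {14, 1} or {7, 2}.
    def preempted_parts(bitmap14: int, m: int, n: int):
        assert (m, n) in ((14, 1), (7, 2)) and m * n == 14
        parts = []
        for i in range(14):
            if (bitmap14 >> (13 - i)) & 1:
                parts.append((i % m, i // m))  # (time part, frequency part)
        return parts

    # Bits 0 and 7 set with {M, N} = {7, 2}: same time part in both frequency parts.
    print(preempted_parts(0b10000001000000, 7, 2))  # -> [(0, 0), (0, 1)]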

[0357] K. MMTC (Massive MTC)

[0358] Massive machine type communication (mMTC) is one of the 5G scenarios for supporting hyper-connection services that communicate simultaneously with a large number of UEs. In this environment, a UE communicates intermittently with a very low transfer rate and low mobility. Accordingly, mMTC aims at minimizing the cost of the UE and allowing the UE to operate for as long as possible. In this regard, MTC and NB-IoT, which are dealt with in 3GPP, will be described.

[0359] Hereinafter, a case where the transmission time interval of a physical channel is a subframe will be described as an example, that is, a case where the minimum time interval from the start of transmission of one physical channel (e.g., MPDCCH, PDSCH, PUCCH, PUSCH) to the start of transmission of the next physical channel is one subframe. In the following description, the subframe may be replaced by a slot, a mini-slot, or multiple slots.

[0360] MTC (Machine Type Communication)

[0361] MTC (machine type communication), which is an application that does not require high throughput and is applicable to M2M (machine-to-machine) or IoT (Internet-of-things) use cases, refers to a communication technology adopted in 3GPP (3rd Generation Partnership Project) to meet the requirements of IoT services.

[0362] MTC may be implemented to meet the criteria of (1) low cost and low complexity, (2) enhanced coverage, and (3) low power consumption.

[0363] In 3GPP, MTC has been applied since Release 10 (3GPP standard document version 10.x.x), and the features of MTC added in each 3GPP release will be briefly described.

[0364] First, the MTC described in 3GPP Release 10 and Release 11 relates to a load control method. The load control method is intended to prevent IoT (or M2M) devices from suddenly loading the BS. More specifically, 3GPP Release 10 relates to a method of controlling the load by disconnecting IoT devices when the load occurs, and Release 11 relates to a method of preventing connections of UEs in advance by informing a UE, through the system information of a cell, that connection to the cell should be attempted later. In Release 12, features for low-cost MTC were added, for which UE category 0 was newly defined. The UE category is an indicator of how much data a UE can handle in its communication modem. A UE of UE category 0 is a UE with a reduced peak data rate and relaxed radio frequency (RF) requirements, reducing baseband and RF complexity. In Release 13, a technology called eMTC (enhanced MTC) was introduced, which allows the UE to operate in only 1.08 MHz, the minimum frequency bandwidth supported by legacy LTE, thereby lowering the price and power consumption of the UE.

[0365] The contents described hereinafter are features mainly related to eMTC, but they may be equally applicable to MTC, eMTC, and MTC applied to 5G (or NR) unless otherwise mentioned. Hereinafter, for convenience of explanation, these will be collectively described as MTC.

[0366] Therefore, the MTC described below may also be referred to as eMTC (enhanced MTC), LTE-M1/M2, BL (bandwidth reduced low complexity)/CE (coverage enhanced), non-BL UE (in enhanced coverage), NR MTC, enhanced BL/CE, and the like. That is, the term MTC may be replaced with a term to be defined in a future 3GPP standard.

[0367] MTC General Features

[0368] (1) MTC operates only within a specific system bandwidth (or channel bandwidth).

[0369] MTC may use six resource blocks (RBs) in the system band of legacy LTE, as shown in FIG. 26, or a specific number of RBs in the system band of the NR system. The frequency bandwidth in which MTC operates may be defined in consideration of the frequency range and subcarrier spacings of NR. Hereinafter, the specific system or frequency bandwidth in which MTC operates is referred to as the MTC narrowband (NB). In NR, MTC may operate in at least one bandwidth part (BWP) or in a specific band of a BWP.

[0370] MTC follows a narrowband operation to transmit and receive physical channels and signals, and the maximum channel bandwidth in which an MTC UE is operable is reduced to 1.08 MHz or six (LTE) RBs.

[0371] The narrowband may be used as a reference unit in the resource allocation units of some downlink and uplink channels, and the physical location of each narrowband in the frequency domain may be defined differently depending on the system bandwidth.

[0372] The bandwidth of 1.08 MHz is defined in MTC so that the MTC UE can follow the same cell search and random access procedures as a legacy UE.

[0373] MTC may be supported by cells having a bandwidth (e.g., 10 MHz) much larger than 1.08 MHz, but the physical channels and signals transmitted and received in MTC are always limited to 1.08 MHz. Systems having a much larger bandwidth may be legacy LTE, NR, 5G systems, and the like.

[0374] A narrowband is defined as six non-overlapping consecutive physical resource blocks in the frequency domain.

[0375] FIG. 26(a) is a diagram showing an example of a narrowband operation, and FIG. 26(b) is a diagram showing an example of repetition with RF retuning.

[0376] Frequency diversity by RF retuning will be described with reference to FIG. 26(b).

[0377] Due to the narrowband RF, single antenna, and limited mobility, MTC supports limited frequency, spatial, and time diversity. In order to reduce fading and outage, frequency hopping between different narrowbands by RF retuning is supported in MTC.

[0378] In MTC, frequency hopping is applied to the different uplink and downlink physical channels when repetition is possible. For example, if 32 subframes are used for PDSCH transmission, the first 16 subframes may be transmitted on a first narrowband. The RF front end is then retuned to another narrowband, and the remaining 16 subframes are transmitted on the second narrowband.
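
For illustration, the narrowband used in each subframe of such a repeated transmission can be computed as in the following Python sketch (a simplified model with assumed narrowband indices and hop interval, not the normative hopping pattern):

    # Illustrative sketch: pick a narrowband per subframe for a repeated MTC
    # transmission, hopping between two narrowbands every `hop_interval` subframes.
    def narrowband_for_subframe(i: int, narrowbands=(2, 5), hop_interval=16):
        return narrowbands[(i // hop_interval) % len(narrowbands)]

    # 32-subframe PDSCH repetition: first 16 on NB 2, remaining 16 on NB 5.
    print([narrowband_for_subframe(i) for i in range(32)])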

[0379] The narrowband used by MTC may be indicated to the UE via system information or downlink control information (DCI) transmitted by the BS.

[0380] (2) MTC operates in half-duplex mode and uses limited (or reduced) maximum transmit power. The half-duplex mode refers to a mode in which a communication device operates in only the uplink or the downlink at one frequency at any one time point and operates in the downlink or the uplink at another frequency at another time point. For example, when a communication device operates in the half-duplex mode, it performs communication using an uplink frequency and a downlink frequency but may not use both at the same time; it divides time so as to perform uplink transmission on the uplink frequency for a predetermined time and downlink reception by retuning to the downlink frequency for another predetermined time.

[0381] (3) MTC does not use channels (defined in legacy LTE or NR) that must be distributed over the entire system bandwidth of legacy LTE or NR. For example, the PDCCH of legacy LTE is not used in MTC because it is distributed over the entire system bandwidth. Instead, a new control channel, the MPDCCH (MTC PDCCH), is defined for MTC. The MPDCCH is transmitted/received within a maximum of 6 RBs in the frequency domain.

[0382] (4) MTC uses newly defined DCI formats. For example, DCI formats 6-0A, 6-0B, 6-1A, 6-1B, 6-2, etc., may be used as DCI formats for MTC (see 3GPP TS 36.212).

[0383] (5) In the case of MTC, the physical broadcast channel (PBCH), physical random access channel (PRACH), MTC physical downlink control channel (MPDCCH), physical downlink shared channel (PDSCH), physical uplink control channel (PUCCH), and physical uplink shared channel (PUSCH) may be repeatedly transmitted. Owing to this MTC repeated transmission, an MTC channel may be decoded even when signal quality or power is very poor, such as in a harsh environment like a basement, thereby increasing the cell radius and the penetration effect.

[0384] (6) In MTC, the PDSCH scheduling (DCI) and the PDSCH transmission based on that scheduling may occur in different subframes (cross-subframe scheduling).

[0385] (7) In the LTE system, the PDSCH carrying a general SIB1 is scheduled by the PDCCH, whereas in MTC all of the resource allocation information (e.g., subframe, transport block size, narrowband index) for SIB1 decoding is determined by a parameter of the MIB, and no control channel is used for SIB1 decoding.

[0386] (8) All of the resource allocation information (subframe, TBS, subband index) for SIB2 decoding is determined by several SIB1 parameters, and no control channel is used for SIB2 decoding in MTC.

[0387] (9) MTC supports an extended paging (DRX) cycle. Here, the paging cycle refers to the period with which the UE must wake up to check whether there is paging from the network, even in the discontinuous reception (DRX) mode in which the UE does not attempt to receive downlink signals, for power saving.

[0388] (10) MTC may use the same PSS (primary synchronization signal)/SSS (secondary synchronization signal)/CRS (common reference signal) used in legacy LTE or NR. In the case of NR, the PSS/SSS is transmitted on an SSB basis, and the tracking RS (TRS) is a cell-specific RS and may be used for frequency/time tracking.

[0389] MTC Operation Mode and Level

[0390] Next, the MTC operation modes and levels will be described. MTC is classified into two operation modes (first mode and second mode) and four different levels for coverage improvement, as shown in Table 8 below.

[0391] The MTC operation mode is referred to as a CE (coverage enhancement) mode. In this case, the first mode may be referred to as CE mode A, and the second mode may be referred to as CE mode B.

TABLE 8
Mode     Level     Description
Mode A   Level 1   No repetition for PRACH
         Level 2   Small number of repetitions for PRACH
Mode B   Level 3   Medium number of repetitions for PRACH
         Level 4   Large number of repetitions for PRACH

[0392] The first mode is defined for small coverage enhancement in which full mobility and CSI (channel state information) feedback are supported, and in which there is no repetition or a small number of repetitions. The second mode is defined for UEs with extremely poor coverage conditions, supporting CSI feedback and limited mobility, and a large number of repeated transmissions is defined for it. The second mode provides a coverage improvement of up to 15 dB. Each level of MTC is defined differently for the random access procedure and the paging process.

[0393] The MTC operation mode is determined by the BS, and each level is determined by the MTC UE. Specifically, the BS transmits RRC signaling including information on the MTC operation mode to the UE. Here, the RRC signaling may be an RRC connection setup message, an RRC connection reconfiguration message, or an RRC connection reestablishment message.

[0394] Thereafter, the MTC UE determines a level within its operation mode and transmits the determined level to the BS. Specifically, the MTC UE determines the level in the operation mode on the basis of measured channel quality (e.g., reference signal received power (RSRP), reference signal received quality (RSRQ), or signal-to-interference-plus-noise ratio (SINR)) and transmits an RACH preamble using the PRACH resource (e.g., frequency, time, and preamble resources for the PRACH) corresponding to the determined level, thereby informing the BS of the determined level.
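
An illustrative Python sketch of this level selection follows; the thresholds and PRACH resource names are assumptions made for the example, whereas in practice the thresholds and resources come from system information.

    # Illustrative sketch: choose a CE level from measured RSRP and signal the
    # level implicitly via the associated PRACH resource. Thresholds assumed.
    def select_ce_level(rsrp_dbm: float, thresholds=(-100.0, -110.0, -120.0)):
        # RSRP at or above the first threshold -> level 1; below all -> level 4.
        for level, threshold in enumerate(thresholds, start=1):
            if rsrp_dbm >= threshold:
                return level
        return len(thresholds) + 1

    PRACH_RESOURCE = {1: "prach-ce1", 2: "prach-ce2", 3: "prach-ce3", 4: "prach-ce4"}
    level = select_ce_level(-113.5)
    print(level, PRACH_RESOURCE[level])  # -> 3 prach-ce3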

[0395] MTC Guard Period

[0396] As discussed above, MTC operates in a narrowband. The location of the narrowband used in MTC may be different for each particular time unit (e.g., subframe or slot), and the MTC UE may tune to a different frequency depending on the time unit. A certain amount of time is required for the frequency retuning, and this amount of time is defined as the guard period of MTC. That is, a guard period is required when frequency retuning is performed while transitioning from one time unit to the next, and transmission and reception do not occur during the guard period.

[0397] MTC Signal Transmission/Reception Method

[0398] FIG. 27 is a diagram illustrating physical channels that may be used for MTC and a general signal transmission method using the same.

[0399] In step S1001, the MTC UE, which is powered on again or enters a new cell, performs an initial cell search operation such as synchronizing with the BS. To this end, the MTC UE receives a primary synchronization signal (PSS) and a secondary synchronization signal (SSS) from the BS, synchronizes with the BS, and acquires information such as the cell ID. The PSS/SSS used in the initial cell search operation of MTC may be the PSS/SSS of legacy LTE, a resynchronization signal (RSS), or the like.

[0400] Thereafter, the MTC UE may receive a physical broadcast channel (PBCH) signal from the BS to acquire the broadcast information in the cell.

[0401] Meanwhile, the MTC UE may receive a downlink reference signal (DL RS) in the initial cell search step to check the downlink channel state. The broadcast information transmitted through the PBCH is the master information block (MIB), and in LTE the MIB is repeated every 10 ms.

[0402] Among the bits of the MIB of the legacy LTE, reserved bitsare used in MTC to transmit scheduling for a new SIB1-BR (systeminformation block for bandwidth reduced device) including atime/frequency location and a transport block size. The SIB-BR istransmitted directly on the PDSCH without any control channel(e.g., PDCCH, MPDDCH) associated with the SIB-BR.

[0403] Upon completion of the initial cell search, the MTC UE mayreceive an MPDCCH and a PDSCH according to the MPDCCH informationto acquire more specific system information in step S1002. TheMPDCCH may be transmitted only once or repeatedly. The maximumnumber of repetitions of the MPDCCH is set to the UE by RRCsignaling from the BS.

[0404] Thereafter, the MTC UE may perform a random access proceduresuch as steps S1003 to S1006 to complete the connection to the BS.A basic configuration related to the RACH process of the MTC UE istransmitted by SIB2. In addition, SIB2 includes parameters relatedto paging. In the 3GPP system, a paging occasion (PO) refers to atime unit in which the UE may attempt to receive paging. The MTC UEattempts to receive the MPDCCH on the basis of a P-RNTI in the timeunit corresponding to its PO on the narrowband (PNB) set forpaging. The UE that has successfully decoded the MPDCCH on thebasis of the P-RNTI may receive a PDSCH scheduled by the MPDCCH andcheck a paging message for itself. If there is a paging message foritself, the UE performs a random access procedure to access anetwork.

[0405] For the random access procedure, the MTC UE transmits apreamble through a physical random access channel (PRACH) (S1003),and receives a response message (RAR) for the preamble through theMPDCCH and the corresponding PDSCH. (S1004). In the case of acontention-based random access, the MTC UE may perform a contentionresolution procedure such as transmission of an additional PRACHsignal (S1005) and reception of the MPDCCH signal and correspondingPDSCH signal (S1006). The signals and/or messages Msg 1, Msg 2, Msg3, and Msg 4 transmitted in the RACH process in the MTC may berepeatedly transmitted, and the repeat pattern is set to bedifferent according to the CE level. Msg1 denotes a PRACH preamble,Msg2 denotes a random access response (RAR), Msg3 denotes ULtransmission on the basis of a UL grant included in the RAR, andMsg4 denotes a DL transmission of the BS to Msg3.

[0406] For random access, PRACH resources for the different CE levels are signaled by the BS. This provides some control of the near-far effect on the PRACH by grouping together UEs experiencing similar path loss. Up to four different PRACH resources may be signaled to the MTC UE.

[0407] The MTC UE estimates RSRP using a downlink RS (e.g., CRS, CSI-RS, TRS, and the like), and selects one of the different PRACH resources (e.g., frequency, time, and preamble resources for PRACH) for random access on the basis of the measurement result. The search spaces for the RAR for the PRACH and for the contention resolution messages are also signaled by the BS via system information.

[0408] The MTC UE that has performed the above-described process may then receive an MPDCCH signal and/or a PDSCH signal (S1007) and transmit a physical uplink shared channel (PUSCH) signal and/or a physical uplink control channel (PUCCH) signal (S1008) as a general uplink/downlink signal transmission process. The MTC UE may transmit uplink control information (UCI) to the BS through the PUCCH or PUSCH. The UCI may include HARQ-ACK/NACK, a scheduling request (SR), and/or CSI.

[0409] When an RRC connection for the MTC UE is established, the MTC UE monitors the MPDCCH in a search space set to acquire uplink and downlink data allocations and attempts to receive the MPDCCH.

[0410] In the case of MTC, the MPDCCH and the PDSCH scheduled by the MPDCCH are transmitted/received in different subframes. For example, the MPDCCH having its last repetition in subframe #n schedules the PDSCH starting at subframe #n+2. The DCI transmitted by the MPDCCH provides information on how many times the MPDCCH is repeated so that the MTC UE may know when the PDSCH transmission starts. For example, when the DCI in the MPDCCH whose transmission starts in subframe #n indicates that the MPDCCH is repeated 10 times, the last subframe in which the MPDCCH is transmitted is subframe #n+9, and transmission of the PDSCH may start at subframe #n+11.

[0411] The PDSCH may be scheduled in the same narrowband as, or a different narrowband from, the narrowband in which the MPDCCH scheduling the PDSCH is present. If the MPDCCH and the corresponding PDSCH are located in different narrowbands, the MTC UE needs to retune its frequency to the narrowband in which the PDSCH is present before decoding the PDSCH.

[0412] For uplink data transmission, scheduling may follow the same timing as legacy LTE. For example, the MPDCCH whose last transmission is in subframe #n may schedule a PUSCH transmission starting at subframe #n+4.
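
As a worked example of the timing rules in [0410] and [0412], the sketch below computes the PDSCH and PUSCH start subframes from the subframe carrying the last MPDCCH repetition. The function names are illustrative; the offsets (+2 for downlink, +4 for uplink) simply restate the text above.

```python
# Worked example of the cross-subframe scheduling timing described above.
# DL rule: PDSCH starts 2 subframes after the last MPDCCH repetition.
# UL rule (legacy LTE timing): PUSCH starts 4 subframes after the MPDCCH.
def pdsch_start(mpdcch_start: int, mpdcch_repetitions: int) -> int:
    last_repetition = mpdcch_start + mpdcch_repetitions - 1
    return last_repetition + 2

def pusch_start(last_mpdcch_subframe: int) -> int:
    return last_mpdcch_subframe + 4

n = 0  # MPDCCH transmission starts in subframe #n
print(pdsch_start(n, 10))  # 11: last repetition at #n+9, PDSCH starts at #n+11
print(pusch_start(n))      # 4: MPDCCH ending at #n schedules PUSCH at #n+4
```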

[0413] FIG. 28 shows an example of scheduling for MTC and legacy LTE, respectively.

[0414] In legacy LTE, the PDSCH is scheduled using the PDCCH, which uses the first OFDM symbol(s) in each subframe, and the PDSCH is scheduled in the same subframe as the subframe in which the PDCCH is received.

[0415] In contrast, the MTC PDSCH is cross-subframe scheduled, and one subframe between the MPDCCH and the PDSCH is used as a time period for MPDCCH decoding and RF retuning. The MTC control channel and data channel may be repeated over a large number of subframes, including up to 256 subframes for the MPDCCH and up to 2048 subframes for the PDSCH, so that they may be decoded under extreme coverage conditions.

[0416] NB-IoT (Narrowband-Internet of Things)

[0417] The NB-IoT may refer to a system for supporting low complexity and low power consumption through a system bandwidth (system BW) corresponding to one resource block (RB) of a wireless communication system.

[0418] Here, NB-IoT may be referred to by other terms such as NB-LTE, NB-IoT enhancement, enhanced NB-IoT, further enhanced NB-IoT, or NB-NR. That is, NB-IoT may be replaced with a term defined or to be defined in the 3GPP standard, and hereinafter, it will be collectively referred to as `NB-IoT` for convenience of explanation.

[0419] The NB-IoT is a system for supporting a device (or UE) such as machine-type communication (MTC) in a cellular system so as to be used as a communication method for implementing IoT (i.e., the Internet of Things). Here, one RB of the existing system band is allocated for the NB-IoT, so that the frequency may be used efficiently. Also, in the case of NB-IoT, each UE recognizes a single RB as a respective carrier, so that RB and carrier referred to in connection with NB-IoT in the present specification may be interpreted to have the same meaning.

[0420] Hereinafter, a frame structure, a physical channel, a multi-carrier operation, an operation mode, and general signal transmission/reception related to the NB-IoT in the present specification are described in consideration of the case of the legacy LTE system, but may also be extendedly applied to a next-generation system (e.g., an NR system, etc.). In addition, the contents related to NB-IoT in this specification may be extendedly applied to MTC (Machine Type Communication), which is oriented toward similar technical purposes (e.g., low power, low cost, coverage enhancement, etc.).

[0421] Hereinafter, a case where the transmission time interval of a physical channel is a subframe is described as an example. For example, a case where the minimum time interval from the start of transmission of one physical channel (e.g., NPDCCH, NPDSCH, NPUCCH, NPUSCH) to the start of transmission of the next physical channel is one subframe will be described, but in the following description, the subframe may be replaced by a slot, a mini-slot, or multiple slots.

[0422] Frame Structure and Physical Resource of NB-IoT

[0423] First, the NB-IoT frame structure may be configured differently according to subcarrier spacing. Specifically, FIG. 29 shows an example of a frame structure when the subcarrier spacing is 15 kHz, and FIG. 30 shows an example of a frame structure when the subcarrier spacing is 3.75 kHz. However, the NB-IoT frame structure is not limited thereto, and NB-IoT for other subcarrier spacings (e.g., 30 kHz) may be considered with different time/frequency units.

[0424] In addition, although the NB-IoT frame structure based on the LTE system frame structure has been exemplified in the present specification, this is merely for convenience of explanation and the present invention is not limited thereto. The method described in this disclosure may also be extendedly applied to NB-IoT based on a frame structure of a next-generation system (e.g., the NR system).

[0425] Referring to FIG. 29, the NB-IoT frame structure for a 15 kHz subcarrier spacing may be configured to be the same as the frame structure of the legacy system (e.g., the LTE system) described above. For example, a 10 ms NB-IoT frame may include ten 1 ms NB-IoT subframes, and each 1 ms NB-IoT subframe may include two 0.5 ms NB-IoT slots. Further, each 0.5 ms NB-IoT slot may include 7 OFDM symbols.

[0426] Alternatively, referring to FIG. 30, the 10 ms NB-IoT frame may include five 2 ms NB-IoT subframes, and each 2 ms NB-IoT subframe may include seven OFDM symbols and one guard period (GP). Also, the 2 ms NB-IoT subframe may be represented as an NB-IoT slot or an NB-IoT RU (resource unit).
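
The two frame structures just described can be summarized with a small arithmetic check; the sketch below merely restates the numbers from [0425] and [0426].

```python
# Numeric restatement of the two NB-IoT frame structures described above:
# both span a 10 ms frame, with different subframe/slot granularity.
FRAME_MS = 10

structures = {
    "15 kHz":   {"subframe_ms": 1, "slots_per_subframe": 2, "symbols_per_slot": 7},
    "3.75 kHz": {"subframe_ms": 2, "slots_per_subframe": 1, "symbols_per_slot": 7},  # plus one GP per slot
}

for scs, s in structures.items():
    subframes = FRAME_MS // s["subframe_ms"]
    slots = subframes * s["slots_per_subframe"]
    print(scs, "->", subframes, "subframes/frame,", slots, "slots/frame,",
          s["symbols_per_slot"], "OFDM symbols/slot")
# 15 kHz -> 10 subframes/frame, 20 slots/frame, 7 OFDM symbols/slot
# 3.75 kHz -> 5 subframes/frame, 5 slots/frame, 7 OFDM symbols/slot (+ guard period)
```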

[0427] Next, physical resources of the NB-IoT for each of the downlink and the uplink will be described.

[0428] First, the physical resources of the NB-IoT downlink may be configured by referring to the physical resources of other wireless communication systems (e.g., the LTE system, the NR system, etc.), except that the system bandwidth is limited to a certain number of RBs (e.g., one RB, i.e., 180 kHz). For example, when the NB-IoT downlink supports only the 15 kHz subcarrier spacing as described above, the physical resources of the NB-IoT downlink may be configured as a resource region limiting the resource grid of the LTE system shown in FIG. 31 to one RB in the frequency domain.

[0429] Next, in the case of the NB-IoT uplink physical resources, the system bandwidth may be limited to one RB as in the case of the downlink. For example, if the NB-IoT uplink supports 15 kHz and 3.75 kHz subcarrier spacings as described above, a resource grid for the NB-IoT uplink may be expressed as shown in FIG. 31. In this case, the number of subcarriers NULsc and the slot period Tslot in the uplink band in FIG. 31 may be given as shown in Table 9 below.

TABLE-US-00009 TABLE 9
Subcarrier spacing    NULsc    Tslot
Δf = 3.75 kHz         48       61440 Ts
Δf = 15 kHz           12       15360 Ts

[0430] In NB-IoT, resource units (RUs) are used for mapping the PUSCH for NB-IoT (hereinafter referred to as the NPUSCH) to resource elements. An RU may include NULsymb*NULslots SC-FDMA symbols in the time domain and NRUsc consecutive subcarriers in the frequency domain. For example, NRUsc and NULsymb may be given by Table 10 below for frame structure type 1, which is the frame structure for FDD, and by Table 11 below for frame structure type 2, which is the frame structure for TDD.

TABLE-US-00010 TABLE 10
NPUSCH format    Δf          NRUsc    NULslots    NULsymb
1                3.75 kHz    1        16          7
                 15 kHz      1        16          7
                             3        8           7
                             6        4           7
                             12       2           7
2                3.75 kHz    1        4           7
                 15 kHz      1        4           7

TABLE-US-00011 TABLE 11
NPUSCH format    Δf          Supported uplink-downlink configurations    NRUsc    NULslots    NULsymb
1                3.75 kHz    1, 4                                        1        16          7
                 15 kHz      1, 2, 3, 4, 5                               1        16          7
                                                                         3        8           7
                                                                         6        4           7
                                                                         12       2           7
2                3.75 kHz    1, 4                                        1        4           7
                 15 kHz      1, 2, 3, 4, 5                               1        4           7
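
A minimal sketch of how the RU definitions above might be looked up in software follows, using the FDD rows of Table 10 and the slot periods of Table 9, with Ts = 1/(15000*2048) s as the LTE basic time unit. The dictionary layout and function names are illustrative assumptions, and the 3.75 kHz slot period of 61440 Ts (2 ms) is taken to be consistent with the 2 ms slots described in [0426].

```python
# Sketch of an RU lookup built from the FDD rows of Table 10, with slot
# periods from Table 9; Ts is the LTE basic time unit.
TS = 1.0 / (15000 * 2048)
SLOT_SEC = {"3.75kHz": 61440 * TS, "15kHz": 15360 * TS}

# (NPUSCH format, subcarrier spacing, NRUsc) -> NULslots; NULsymb is 7 throughout
RU_SLOTS = {
    (1, "3.75kHz", 1): 16,
    (1, "15kHz", 1): 16, (1, "15kHz", 3): 8, (1, "15kHz", 6): 4, (1, "15kHz", 12): 2,
    (2, "3.75kHz", 1): 4,
    (2, "15kHz", 1): 4,
}

def ru_duration_ms(fmt: int, scs: str, n_sc: int) -> float:
    """Time-domain length of one resource unit in milliseconds."""
    return RU_SLOTS[(fmt, scs, n_sc)] * SLOT_SEC[scs] * 1000.0

print(ru_duration_ms(1, "3.75kHz", 1))   # about 32.0 ms: single-tone format 1 RU
print(ru_duration_ms(1, "15kHz", 12))    # about 1.0 ms: 12-subcarrier format 1 RU
```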

[0431] Physical Channel of NB-IoT

[0432] A BS and/or a UE supporting the NB-IoT may be configured to transmit/receive physical channels and/or physical signals configured separately from those of the legacy system. Hereinafter, specific contents related to the physical channels and/or physical signals supported by the NB-IoT will be described.

[0433] An orthogonal frequency division multiple access (OFDMA) scheme may be applied to the NB-IoT downlink on the basis of a subcarrier spacing of 15 kHz. Through this, co-existence with other systems (e.g., the LTE system, the NR system) may be efficiently supported by providing orthogonality between subcarriers. A downlink physical channel/signal of the NB-IoT system may be denoted by adding `N (Narrowband)` to distinguish it from that of the legacy system. For example, a downlink physical channel may be referred to as an NPBCH (narrowband physical broadcast channel), an NPDCCH (narrowband physical downlink control channel), or an NPDSCH (narrowband physical downlink shared channel), and a downlink physical signal may be referred to as an NPSS (narrowband primary synchronization signal), an NSSS (narrowband secondary synchronization signal), an NRS (narrowband reference signal), an NPRS (narrowband positioning reference signal), an NWUS (narrowband wake-up signal), and the like. Generally, the downlink physical channels and physical signals of the NB-IoT may be configured to be transmitted on the basis of a time-domain multiplexing scheme and/or a frequency-domain multiplexing scheme. In the case of the NPBCH, NPDCCH, NPDSCH, etc., which are the downlink channels of the NB-IoT system, repetition transmission may be performed for coverage enhancement. In addition, the NB-IoT uses newly defined DCI formats. For example, the DCI formats for NB-IoT may be defined as DCI format N0, DCI format N1, DCI format N2, and the like.

[0434] In the NB-IoT uplink, a single-carrier frequency division multiple access (SC-FDMA) scheme may be applied on the basis of a subcarrier spacing of 15 kHz or 3.75 kHz. As mentioned in the downlink section, a physical channel of the NB-IoT system may be expressed by adding `N (Narrowband)` to distinguish it from that of the existing system. For example, the uplink physical channel may be represented by a narrowband physical random access channel (NPRACH) or a narrowband physical uplink shared channel (NPUSCH), and the uplink physical signal may be represented by a narrowband demodulation reference signal (NDMRS), or the like. The NPUSCH may be divided into NPUSCH format 1 and NPUSCH format 2. In one example, NPUSCH format 1 may be used for uplink shared channel (UL-SCH) transmission (or transport), and NPUSCH format 2 may be used for uplink control information transmission such as HARQ ACK signaling. In the case of the NPRACH, which is an uplink channel of the NB-IoT system, repetition transmission may be performed for coverage enhancement. In this case, repetition transmission may be performed by applying frequency hopping.

[0435] Multi-Carrier Operation of NB-IoT

[0436] Next, a multi-carrier operation of the NB-IoT will be described. The multi-carrier operation may mean that multiple carriers set for different uses (i.e., of different types) are used for transmitting/receiving channels and/or signals between the BS and/or the UE in the NB-IoT.

[0437] The NB-IoT may operate in a multi-carrier mode. Here, in the NB-IoT, a carrier may be classified as an anchor type carrier (i.e., an anchor carrier, an anchor PRB) or a non-anchor type carrier (i.e., a non-anchor carrier).

[0438] The anchor carrier may refer to a carrier that transmits the NPSS, NSSS, NPBCH, and NPDSCH for a system information block (N-SIB) for initial access from the point of view of the BS. That is, in NB-IoT, the carrier for initial access may be referred to as an anchor carrier, and the other(s) may be referred to as non-anchor carrier(s). Here, only one anchor carrier may exist in the system, or there may be a plurality of anchor carriers.

[0439] Operation Mode of NB-IoT

[0440] Next, an operation mode of the NB-IoT will be described. In the NB-IoT system, three operation modes may be supported. FIG. 32 shows an example of the operation modes supported in the NB-IoT system. Although the operation mode of the NB-IoT is described herein on the basis of an LTE band, this is for convenience of explanation and may be extendedly applied to other system bands (e.g., an NR system band).

[0441] Specifically, FIG. 32(a) shows an example of an in-band system, FIG. 32(b) shows an example of a guard-band system, and FIG. 32(c) shows an example of a stand-alone system. In this case, the in-band system may be expressed as an in-band mode, the guard-band system may be expressed as a guard-band mode, and the stand-alone system may be expressed as a stand-alone mode.

[0442] The in-band system may refer to a system or mode that uses a specific RB in the (legacy) LTE band. The in-band system may be operated by allocating some resource blocks of the LTE system carrier.

[0443] A guard-band system may refer to a system or mode that uses NB-IoT in a space reserved for the guard-band of the (legacy) LTE band. The guard-band system may be operated by allocating a guard-band of an LTE carrier that is not used as a resource block in the LTE system. For example, the (legacy) LTE band may be configured to have a guard-band of at least 100 kHz at the end of each LTE band, and two non-contiguous guard-bands may be used to secure the 200 kHz needed for NB-IoT.

[0444] As described above, the in-band system and the guard-band system may be operated in a structure in which NB-IoT coexists within the (legacy) LTE band.

[0445] By contrast, the stand-alone system may refer to a system or mode configured independently of the (legacy) LTE band. The stand-alone system may be operated by separately allocating frequency bands used in a GERAN (GSM EDGE radio access network) (e.g., GSM carriers to be reassigned in the future).

[0446] The three operation modes described above may be operated independently of each other, or two or more of the operation modes may be operated in combination.

[0447] NB-IoT Signal Transmission/Reception Process

[0448] FIG. 33 is a diagram illustrating an example of physical channels that may be used for NB-IoT and a general signal transmission method using the same. In a wireless communication system, an NB-IoT UE may receive information from a BS through a downlink (DL), and the NB-IoT UE may transmit information to the BS through an uplink (UL). In other words, in the wireless communication system, the BS may transmit information to the NB-IoT UE through the downlink, and the BS may receive information from the NB-IoT UE through the uplink.

[0449] The information transmitted/received by the BS and the NB-IoT UE includes data and various control information, and various physical channels may exist depending on the type/purpose of the information transmitted/received by the BS and the NB-IoT UE. The signal transmission/reception method of the NB-IoT may be performed by the above-described wireless communication devices (e.g., the BS and the UE).

[0450] The NB-IoT UE, which is powered on again or enters a new cell, may perform an initial cell search operation such as adjusting synchronization with the BS (S11). To this end, the NB-IoT UE receives the NPSS and NSSS from the BS, performs synchronization with the BS, and acquires cell identity information. Also, the NB-IoT UE may receive the NPBCH from the BS and acquire the in-cell broadcast information. In addition, the NB-IoT UE may receive a DL RS (downlink reference signal) in the initial cell search step to check the downlink channel state.

[0451] After completion of the initial cell search, the NB-IoT UE may receive the NPDCCH and the corresponding NPDSCH to acquire more specific system information (S12). In other words, the BS may transmit more specific system information by transmitting the NPDCCH and the corresponding NPDSCH to the NB-IoT UE after completion of the initial cell search.

[0452] Thereafter, the NB-IoT UE may perform a random access procedure to complete the connection to the BS (S13 to S16).

[0453] Specifically, the NB-IoT UE may transmit a preamble to the BS via the NPRACH (S13). As described above, the NPRACH may be configured to be repeatedly transmitted on the basis of frequency hopping or the like to enhance coverage. In other words, the BS may (repeatedly) receive the preamble through the NPRACH from the NB-IoT UE.

[0454] Thereafter, the NB-IoT UE may receive a random access response (RAR) for the preamble from the BS through the NPDCCH and the corresponding NPDSCH (S14). In other words, the BS may transmit the RAR for the preamble to the NB-IoT UE through the NPDCCH and the corresponding NPDSCH.

[0455] Thereafter, the NB-IoT UE transmits the NPUSCH to the BS using scheduling information in the RAR (S15), and may perform a contention resolution procedure involving reception of the NPDCCH and the corresponding NPDSCH (S16). In other words, the BS may receive the NPUSCH from the UE according to the scheduling information in the NB-IoT RAR, and perform the contention resolution procedure.

[0456] The NB-IoT UE that has performed the above-described process may then perform NPDCCH/NPDSCH reception (S17) and NPUSCH transmission (S18) as a general uplink/downlink signal transmission process. In other words, after performing the above-described processes, the BS may perform NPDCCH/NPDSCH transmission and NPUSCH reception with respect to the NB-IoT UE as a general signal transmission/reception process.

[0457] In the case of NB-IoT, as mentioned above, the NPBCH, NPDCCH, NPDSCH, and the like may be repeatedly transmitted for coverage improvement. In the case of NB-IoT, the UL-SCH (i.e., general uplink data) and uplink control information may be transmitted through the NPUSCH. Here, the UL-SCH and the uplink control information (UCI) may be configured to be transmitted through different NPUSCH formats (e.g., NPUSCH format 1, NPUSCH format 2, etc.).

[0458] Also, the UCI may include HARQ ACK/NACK (Hybrid Automatic Repeat reQuest Acknowledgement/Negative-ACK), SR (Scheduling Request), CSI (Channel State Information), and the like. As described above, the UCI in the NB-IoT may generally be transmitted via the NPUSCH. Also, in response to a request/instruction from the network (e.g., the BS), the UE may transmit the UCI via the NPUSCH in a periodic, aperiodic, or semi-persistent manner.

[0459] Hereinafter, the wireless communication system block diagram shown in FIG. 1 will be described in detail.

[0460] N. Wireless Communication Device

[0461] Referring to FIG. 1, a wireless communication system includes a first communication device 910 and/or a second communication device 920. `A and/or B` may be interpreted to have the same meaning as `includes at least one of A or B.` The first communication device may represent a BS and the second communication device may represent a UE (alternatively, the first communication device may represent a UE and the second communication device may represent a BS).

[0462] The first and second communication devices may include processors 911 and 921, memories 914 and 924, one or more Tx/Rx RF modules 915 and 925, Tx processors 912 and 922, Rx processors 913 and 923, and antennas 916 and 926, respectively. The Tx/Rx module is also called a transceiver. The processor implements the functions, procedures, and/or methods discussed above. More specifically, in the DL (communication from the first communication device to the second communication device), a higher layer packet from the core network is provided to the processor 911. The processor implements the functionality of the layer 2 (i.e., L2) layer. In the DL, the processor multiplexes a logical channel and a transport channel, provides radio resource allocation to the second communication device 920, and is responsible for signaling to the second communication device. The transmission (TX) processor 912 implements various signal processing functions for the L1 layer (i.e., the physical layer). The signal processing functions facilitate forward error correction (FEC) in the second communication device and include coding and interleaving. The encoded and interleaved signals are scrambled and modulated into complex-valued modulation symbols. For modulation, BPSK (binary phase shift keying), QPSK (quadrature phase shift keying), 16QAM (quadrature amplitude modulation), 64QAM, 256QAM, and the like may be used. The complex-valued modulation symbols (hereinafter referred to as modulation symbols) are divided into parallel streams, each stream being mapped to an OFDM subcarrier, multiplexed with a reference signal (RS) in the time and/or frequency domain, and combined together using an IFFT (Inverse Fast Fourier Transform) to create a physical channel carrying a time-domain OFDM symbol stream. The OFDM symbol stream is spatially precoded to produce multiple spatial streams. Each spatial stream may be provided to a different antenna 916 via a separate Tx/Rx module (or transceiver) 915. Each Tx/Rx module may upconvert each spatial stream onto an RF carrier for transmission. In the second communication device, each Tx/Rx module (or transceiver) 925 receives a signal of the RF carrier via each antenna 926 of each Tx/Rx module. Each Tx/Rx module restores the RF carrier signal to a baseband signal and provides it to the reception (RX) processor 923. The RX processor implements various signal processing functions of the L1 layer (i.e., the physical layer). The RX processor may perform spatial processing on the information to recover any spatial stream directed to the second communication device. If multiple spatial streams are directed to the second communication device, they may be combined into a single OFDM symbol stream by multiple RX processors. The RX processor transforms the OFDM symbol stream, which is a time-domain signal, into a frequency-domain signal using a fast Fourier transform (FFT). The frequency-domain signal includes a separate OFDM symbol stream for each subcarrier of the OFDM signal. The modulation symbols and the reference signal on each subcarrier are recovered and demodulated by determining the most likely signal constellation points sent by the first communication device. These soft decisions may be based on channel estimate values. The soft decisions are decoded and deinterleaved to recover the data and control signals originally transmitted by the first communication device on the physical channel. The corresponding data and control signals are provided to the processor 921.
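
To make the IFFT/FFT steps above concrete, here is a minimal numpy sketch of a noiseless OFDM round trip: QPSK symbols are mapped to subcarriers, converted to a time-domain OFDM symbol by an IFFT, converted back by an FFT, and demodulated by a hard decision. Spatial precoding, reference signals, scrambling, coding, and channel effects are all omitted; the subcarrier count is an arbitrary assumption.

```python
# Minimal OFDM modulation/demodulation round trip (no channel, no precoding).
import numpy as np

rng = np.random.default_rng(0)
n_subcarriers = 64

# QPSK modulation: 2 bits -> one complex symbol on each subcarrier
bits = rng.integers(0, 2, size=2 * n_subcarriers)
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

tx_time = np.fft.ifft(symbols)   # transmitter: frequency domain -> time domain
rx_freq = np.fft.fft(tx_time)    # receiver: time domain -> frequency domain

# Hard decision per subcarrier (nearest constellation point)
rx_bits = np.empty_like(bits)
rx_bits[0::2] = (rx_freq.real < 0).astype(int)
rx_bits[1::2] = (rx_freq.imag < 0).astype(int)
assert np.array_equal(bits, rx_bits)  # the noiseless round trip is exact
```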

[0463] The UL (communication from the second communication device to the first communication device) is processed in the first communication device 910 in a manner similar to that described in connection with the receiver function in the second communication device 920. Each Tx/Rx module 925 receives a signal via its antenna 926. Each Tx/Rx module provides an RF carrier and information to the RX processor 923. The processor 921 may be associated with the memory 924 that stores program code and data. The memory may be referred to as a computer-readable medium.

[0464] The 5G communication technology discussed above may be applied in combination with the methods proposed in the present disclosure to be described later with reference to FIGS. 34 to 60, or may be used as a supplement to embody or clarify the technical features of the methods proposed in the present disclosure.

[0465] FIG. 34 is a schematic block diagram of a multi-device control system according to the present invention.

[0466] Referring to FIG. 34, a multi-device control system 1 according to the present invention may include a plurality of devices 10, 20, 30, and 40 connected via a network 5 in a specific environment such as a home, a building, an office, and the like.

[0467] The devices 10, 20, 30, and 40 may be home appliances such as a refrigerator, a TV, a smartphone, an audio set, a computer, a washing machine, an electric oven, a lighting lamp, an air-conditioner, an automobile, or the like.

[0468] A wireless communication interface may include, for example, Internet of Things (IoT) connectivity. As another example, the wireless communication interface may include cellular communication using at least one of long term evolution (LTE), LTE advanced (LTE-A), code division multiple access (CDMA), wideband CDMA (WCDMA), universal mobile telecommunications system (UMTS), global system for mobile communications (GSM), and the like. As another example, the wireless communication interface may include at least one of Wi-Fi, Bluetooth, Bluetooth low energy (BLE), Zigbee, near field communication (NFC), magnetic secure transmission, radio frequency (RF), or a body area network (BAN).

[0469] The devices 10, 20, 30, and 40 may be connected to a cloud server via the network 5. In this case, a voice command of a user may be processed through a voice recognition module (or a speech recognition module) in the cloud server. In selecting a device to respond to a voice command from among the devices 10, 20, 30, and 40, the cloud server may cause a device intended by the user (i.e., a device which corresponds to the user's intention) to be controlled by voice even if a main keyword specifying a response target is not included in the voice command, by further considering a context-specific correction score of each device corresponding to the voice command, as well as the distances between each of the devices 10, 20, 30, and 40 and a sound source (user).

[0470] Alternatively, the devices 10, 20, 30, and 40 may not be connected to the cloud server via the network 5. In this case, one of the devices 10, 20, 30, and 40 may be a master device responsible for signal processing and response control related to a voice command, and the remaining devices except for the master device may be slave devices under the control of the master device. The voice command of the user may be processed through a voice recognition module mounted on the master device. In selecting a device to respond to a voice command from among the devices 10, 20, 30, and 40, the master device may cause a device intended by the user (i.e., a device which corresponds to the user's intention) to be controlled by voice even if a main keyword specifying a response target is not included in the voice command, by further considering a context-specific correction score of each device corresponding to the voice command, as well as the distances between each of the devices 10, 20, 30, and 40 and a sound source (user).

[0471] FIG. 35 is a block diagram illustrating an embodiment for implementing the multi-device control system of FIG. 34.

[0472] Referring to FIG. 35, a multi-device control system 1A according to an embodiment of the present invention may include first to fourth devices 10A, 20A, 30A, and 40A connected to each other via a network 5 and a cloud server 100.

[0473] The cloud server 100 performs a voice recognition operation, an operation of identifying the distances between each of the devices 10A, 20A, 30A, and 40A and a sound source, an operation of assigning a response ranking to each device by combining a context-specific correction score and a distance, and an operation of selecting a device to respond to a voice command from among the devices 10A, 20A, 30A, and 40A according to the response ranking.

[0474] The first device 10A may include a first controller 11A, a first communication unit 12A, a first measurement unit 13A, and a first driving unit 14A.

[0475] The first controller 11A may control the overall operation of each component of the first device 10A, such as the first communication unit 12A, the first measurement unit 13A, the first driving unit 14A, and the like. The first controller 11A may be implemented as a control board including a central processing unit (CPU), a micro processing unit (MPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a micro-controller, a microprocessor, or the like.

[0476] The first controller 11A may provide score base information, including always-on characteristic information, device on/off information, device control state information, and the like, to the first communication unit 12A according to a request from the cloud server 100. The score base information is used as a base for determining a context-specific correction score. The first controller 11A may obtain decibel information (or a voice signal metric value) corresponding to a magnitude of the voice command from the first measurement unit 13A and provide the obtained decibel information to the first communication unit 12A at the request of the cloud server 100. The first controller 11A may drive the first driving unit 14A to perform an operation corresponding to the voice command in the first device 10A in response to a response request from the cloud server 100.

[0477] The first communication unit 12A is connected to the cloud server 100 via the communication network 5 to transmit and receive various data such as decibel information (or a voice signal metric value), device selection information, and the like. The first communication unit 12A may include a wireless Internet module for mobile communication such as 2G, 3G, 4G and long term evolution (LTE), wireless broadband (WiBro), world interoperability for microwave access (WiMAX), high speed downlink packet access (HSDPA), and the like, and a short-range communication module such as radio frequency identification (RFID), infrared data association (IrDA), ultra wideband (UWB), ZigBee, and the like.

[0478] The first measurement unit 13A may include an acoustoelectric transducer (e.g., a microphone) for converting a sound wave such as sound or voice into an electrical signal, and may further include a decibel meter (or a decibel measurement sensor). The microphone receives a user's voice signal and generates an electrical signal (voice signal metric value) according to the vibration of the sound wave or ultrasonic wave. The decibel meter may generate decibel information corresponding to a magnitude of the voice signal. The microphone and the decibel meter may be integrated. Either the microphone or the decibel meter may be omitted.
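
As an illustration of the decibel information such a measurement unit might report, the sketch below computes the RMS level of one captured audio frame in dB relative to full scale (dBFS). This is a hedged stand-in: an actual decibel meter would be calibrated to sound pressure level (dB SPL), and the frame format and function name are assumptions.

```python
# Illustrative per-frame level computation for a measurement unit.
import numpy as np

def frame_dbfs(samples: np.ndarray) -> float:
    """RMS level of one audio frame in dB relative to full scale (1.0)."""
    rms = np.sqrt(np.mean(np.square(samples, dtype=np.float64)))
    return 20.0 * np.log10(max(rms, 1e-12))  # floor avoids log(0) on silence

# A 440 Hz tone at 0.1 amplitude, 1 s at 16 kHz, as a stand-in for voice:
tone = 0.1 * np.sin(2 * np.pi * 440 * np.arange(16000) / 16000.0)
print(round(frame_dbfs(tone), 1))  # about -23.0 dBFS
```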

[0479] The first driving unit 14A performs an operation corresponding to a voice command under the control of the first controller 11A. For example, the first driving unit 14A may perform various operations including a turn-on/off operation. The first driving unit 14A may further include an output unit such as a display, a speaker, and the like, to provide a service processing result to the user.

[0480] The second device 20A may include a second controller 21A, a second communication unit 22A, a second measurement unit 23A, and a second driving unit 24A. The third device 30A may include a third controller 31A, a third communication unit 32A, a third measurement unit 33A, and a third driving unit 34A. The fourth device 40A may include a fourth controller 41A, a fourth communication unit 42A, a fourth measurement unit 43A, and a fourth driving unit 44A.

[0481] The second to fourth controllers 21A, 31A, and 41A may be implemented to be substantially the same as the first controller 11A. The second to fourth communication units 22A, 32A, and 42A may be implemented to be substantially the same as the first communication unit 12A. The second to fourth measurement units 23A, 33A, and 43A may be implemented to be substantially the same as the first measurement unit 13A. The second to fourth driving units 24A, 34A, and 44A may be implemented to be substantially the same as the first driving unit 14A.

[0482] FIG. 36 is a block diagram showing another embodiment for implementing the multi-device control system of FIG. 34.

[0483] Referring to FIG. 36, a multi-device control system 1B according to another embodiment of the present invention may include first to fourth devices 10B, 20B, 30B, and 40B connected to each other via a network 5. The first device 10B, which is one of the first to fourth devices 10B, 20B, 30B, and 40B, may be a master device, and the other devices 20B, 30B, and 40B except for the first device 10B may be slave devices. Here, it should be noted that the first device 10B is the master device only as an example, and any one of the devices 20B, 30B, and 40B may become the master device.

[0484] The first device 10B may further include a master server 200, in addition to the first controller 11B, the first communication unit 12B, the first measurement unit 13B, and the first driving unit 14B.

[0485] The master server 200 performs a voice recognition operation, an operation of identifying the distances between each of the devices 10B, 20B, 30B, and 40B and a sound source, an operation of assigning a response ranking to each device by combining a context-specific correction score and a distance, and an operation of selecting a device to respond to a voice command from among the devices 10B, 20B, 30B, and 40B according to the response ranking.

[0486] The first controller 11B may control the overall operation of each component of the first device 10B, such as the first communication unit 12B, the first measurement unit 13B, the first driving unit 14B, and the like. The first controller 11B may be implemented as a control board including a central processing unit (CPU), a micro processing unit (MPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a micro-controller, a microprocessor, or the like.

[0487] The first controller 11B may provide score base information, including always-on characteristic information, device on/off information, device control state information, and the like, to the master server 200 according to a request from the master server 200. The score base information is used as a base for determining a context-specific correction score. The first controller 11B may obtain decibel information (or a voice signal metric value) corresponding to a magnitude of the voice command from the first measurement unit 13B and provide the obtained decibel information to the master server 200 at the request of the master server 200. The first controller 11B may drive the first driving unit 14B to perform an operation corresponding to the voice command in the first device 10B in response to a response request from the master server 200.

[0488] The first communication unit 12B is connected to the other devices 20B, 30B, and 40B via the communication network 5, receives decibel information (or a voice signal metric value) from the other devices 20B, 30B, and 40B and transfers the received decibel information to the master server 200, and transmits device selection information or the like from the master server 200 to the other devices 20B, 30B, and 40B. The first communication unit 12B may include a wireless Internet module for mobile communication such as 2G, 3G, 4G and long term evolution (LTE), wireless broadband (WiBro), world interoperability for microwave access (WiMAX), high speed downlink packet access (HSDPA), and the like, and a short-range communication module such as radio frequency identification (RFID), infrared data association (IrDA), ultra wideband (UWB), ZigBee, and the like.

[0489] The first measurement unit 13B may include an acoustoelectric transducer (e.g., a microphone) for converting a sound wave such as sound or voice into an electrical signal, and may further include a decibel meter (or a decibel measurement sensor). The microphone receives a user's voice signal and generates an electrical signal (voice signal metric value) according to the vibration of the sound wave or ultrasonic wave. The decibel meter may generate decibel information corresponding to a magnitude of the voice signal. The microphone and the decibel meter may be integrated. Either the microphone or the decibel meter may be omitted.

[0490] The first driving unit 14B performs an operation corresponding to a voice command under the control of the first controller 11B. For example, the first driving unit 14B may perform various operations including a turn-on/off operation. The first driving unit 14B may further include an output unit such as a display, a speaker, and the like, to provide a service processing result to the user.

[0491] The second device 20B may include a second controller 21B, a second communication unit 22B, a second measurement unit 23B, and a second driving unit 24B. The third device 30B may include a third controller 31B, a third communication unit 32B, a third measurement unit 33B, and a third driving unit 34B. The fourth device 40B may include a fourth controller 41B, a fourth communication unit 42B, a fourth measurement unit 43B, and a fourth driving unit 44B.

[0492] The second to fourth controllers 21B, 31B, and 41B may be implemented to be substantially the same as the first controller 11B. The second to fourth communication units 22B, 32B, and 42B may be implemented to be substantially the same as the first communication unit 12B. The second to fourth measurement units 23B, 33B, and 43B may be implemented to be substantially the same as the first measurement unit 13B. The second to fourth driving units 24B, 34B, and 44B may be implemented to be substantially the same as the first driving unit 14B.

[0493] Since the multi-device control system 1B of FIG. 36 selects a device to respond to a voice command by utilizing the master server 200 mounted in at least one of the devices 10B, 20B, 30B, and 40B, the transmission path of the signals transmitted and received for voice control may be shortened, which advantageously increases the accuracy and reliability of the control operation as compared with FIG. 35. In addition, since the multi-device control system 1B of FIG. 36 does not need to communicate with a cloud server for voice processing, the multi-device control system 1B may perform an immediate, real-time voice processing operation.

[0494] FIG. 37 is a block diagram showing a configuration of the cloud server of FIG. 35 and the master server of FIG. 36. The server of FIG. 37 refers to the cloud server 100 in FIG. 35 or the master server 200 in FIG. 36.

[0495] Referring to FIG. 37, the server may include a voice recognition module 410, a distance identification module 420, a processor 430, and a storage unit 440.

[0496] The voice recognition module 410 receives a user input, i.e., a voice command which has undergone a preprocessing process or the like in each device, and performs a voice recognition operation on the voice command. A voice processing process including voice recognition will be described later with reference to FIGS. 38 to 40.

[0497] The distance identification module 420 may identify the distances between each of the devices and a sound source by receiving decibel information from each of the plurality of devices, or by identifying a voice signal metric value received from each of the devices. Here, the voice signal metric value may include a signal-to-noise ratio, a voice spectrum, voice energy, and the like. When the method of identifying the distance by utilizing the voice signal metric value is used, there is an advantage in that a decibel meter is not required in each device.
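
One plausible way such a module could turn per-device decibel readings into relative distances is sketched below under a free-field, inverse-square-law assumption: each 6 dB drop corresponds roughly to a doubling of distance. The device names and dB values are hypothetical, and the specification does not prescribe this particular model.

```python
# Hedged sketch: relative distances from per-device decibel readings,
# assuming free-field propagation (L1 - L2 = 20*log10(d2/d1)).
def relative_distance(db_at_device: float, db_reference: float) -> float:
    """Distance of a device as a ratio to the reference (loudest) device."""
    return 10 ** ((db_reference - db_at_device) / 20.0)

readings = {"tv": 62.0, "aircon": 56.0, "speaker": 68.0}  # hypothetical dB values
ref = max(readings.values())  # the loudest reading is treated as the closest
ranking = sorted(readings, key=lambda d: relative_distance(readings[d], ref))
print(ranking)  # ['speaker', 'tv', 'aircon'] -- nearest device first
```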

[0498] The storage unit 440 may include a database in which a context-specific correction score of each device corresponding to a voice command is defined. The context-specific correction score may be determined on the basis of score base information related to each of the devices regarding voice commands.

[0499] The processor 430 may receive the distances from the distance identification module 420, correct the distances on the basis of the context-specific correction scores read out from the storage unit 440, and assign response rankings to the devices. Thereafter, the processor 430 may select a device to respond to the voice command according to the response rankings, whereby a device intended by the user can be controlled by voice even if a main keyword that specifies a response target is not included in the voice command.
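
A minimal sketch of this ranking step follows: each device's distance is converted into a base score, the context-specific correction score is added, and the devices are sorted into a response ranking from which the top-ranked device is selected. The linear combination rule and all numeric values are illustrative assumptions, not values from the specification.

```python
# Minimal sketch of combining distance with a context-specific correction
# score to produce a response ranking. All values are illustrative.
from dataclasses import dataclass

@dataclass
class DeviceState:
    name: str
    distance_m: float   # from the distance identification module
    correction: float   # context-specific correction score for this command

def response_ranking(devices: list[DeviceState]) -> list[DeviceState]:
    # Closer devices score higher; the correction score can override pure
    # proximity when the command context points at another device.
    return sorted(devices, key=lambda d: -d.distance_m + d.correction, reverse=True)

devices = [
    DeviceState("tv", 1.5, 0.0),
    DeviceState("aircon", 4.0, 3.5),   # e.g., the command was "make it cooler"
    DeviceState("speaker", 1.8, 0.5),
]
ranked = response_ranking(devices)
print([d.name for d in ranked])      # ['aircon', 'speaker', 'tv']
print("responder:", ranked[0].name)  # device selected to respond
```

Note that in this toy example the air-conditioner responds despite being the farthest device, which is exactly the behavior described in [0499]: the context correction lets the intended device win even without an explicit keyword.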

[0500] The voice control processes performed in the processor 430 may be implemented by one or more signal processing and/or application-specific integrated circuits, hardware, software instructions for execution by one or more processors, firmware, or a combination thereof.

[0501] Referring to FIG. 37, the server may further include an artificial intelligence (AI) agent module 450 for updating the context-specific correction scores defined in the database through training (or learning). This will be described later with reference to FIGS. 38 to 40.

[0502] FIG. 38 shows an example in which a voice processing process is performed in a cloud environment (or a server environment) in the multi-device control system of FIG. 35. FIG. 39 shows an example of on-device processing in which a voice processing process is performed in a device 70 in the multi-device control system of FIG. 36.

[0503] In FIGS. 38 and 39, the device environments 50 and 70 may be referred to as client devices, and the cloud environments 60 and 80 may be referred to as cloud servers. The client device 50 in FIG. 38 is a device not including a master server, and the client device 70 in FIG. 39 may be a device including a master server.

[0504] Referring to FIG. 38, various components are required to process a voice event in an end-to-end voice UI environment. A sequence for processing a voice event may include a plurality of processes such as signal acquisition and playback, speech preprocessing, voice activation, voice recognition, natural language processing, and speech synthesis.

[0505] The client device 50 may include an input module. The input module may receive a user input from a user. For example, the input module may receive a user input from a connected external device (e.g., a keyboard, a headset, etc.). Further, for example, the input module may include a touch screen. Further, for example, the input module may include a hardware key located in the user terminal.

[0506] According to an embodiment, the input module may include at least one microphone capable of receiving a user's utterance (or speech) as a voice signal (or speech signal). The input module may include a speech input system and receive a user's speech as a voice signal through the speech input system. The at least one microphone may generate an input signal for audio input, thereby determining a digital input signal for the user utterance. According to an embodiment, a plurality of microphones may be implemented as an array. The array may be arranged in a geometric pattern, for example, a linear geometric shape, a circular geometric shape, or any other configuration. For example, around a certain point, an array of four sensors may be arranged in a circular pattern, separated by 90 degrees, to receive sound from four directions. In some implementations, the microphone may include spatially different arrays of sensors in data communication, and a networked array of sensors may be included therein. The microphone may include omnidirectional or directional (e.g., shotgun) microphones, and the like.

[0507] The client device 50 may include a preprocessing module 51 for pre-processing the user input (voice signal) received via the input module (e.g., the microphone).

[0508] The preprocessing module 51 may include an adaptive echo canceller (AEC) function to remove an echo included in the user voice signal input through the microphone. The preprocessing module 51 may include a noise suppression (NS) function to remove background noise included in the user input. The preprocessing module 51 may include an end-point detection (EPD) function to detect an end point of the user's voice and thus locate only the portion where the user's voice is present. In addition, the preprocessing module 51 may include an automatic gain control (AGC) function to adjust the volume of the user input to be suitable for recognizing and processing the user input.
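
A skeleton of this preprocessing chain (AEC, then NS, then EPD, then AGC) might look as follows. This is a sketch only: the AEC and NS stages are stubs standing in for real DSP algorithms, and the energy threshold and target level are arbitrary assumptions.

```python
# Skeleton of the AEC -> NS -> EPD -> AGC chain described above.
import numpy as np

def acoustic_echo_cancel(frame: np.ndarray, playback_ref: np.ndarray) -> np.ndarray:
    return frame  # stub: a real AEC subtracts an adaptive estimate of the echo

def noise_suppress(frame: np.ndarray) -> np.ndarray:
    return frame  # stub: e.g., spectral subtraction of a noise estimate

def endpoint_detect(frame: np.ndarray, threshold: float = 0.01) -> bool:
    return float(np.mean(frame ** 2)) > threshold  # crude energy gate

def auto_gain_control(frame: np.ndarray, target_rms: float = 0.1) -> np.ndarray:
    rms = float(np.sqrt(np.mean(frame ** 2)))
    if rms < 1e-12:
        rms = 1e-12  # avoid dividing by zero on silent frames
    return frame * (target_rms / rms)

def preprocess(frame: np.ndarray, playback_ref: np.ndarray):
    frame = noise_suppress(acoustic_echo_cancel(frame, playback_ref))
    if not endpoint_detect(frame):
        return None  # silence: nothing to pass on to recognition
    return auto_gain_control(frame)

frame = 0.2 * np.random.default_rng(0).normal(size=160)  # 10 ms at 16 kHz
out = preprocess(frame, playback_ref=np.zeros(160))
print(round(float(np.sqrt(np.mean(out ** 2))), 2))  # 0.1: normalized level
```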

[0509] The client device 50 may include a voice activation module 52. The voice activation module 52 may recognize a wake-up command to recognize a user's call. The voice activation module 52 may detect a certain keyword (e.g., Hi LG) from a user input which has undergone the preprocessing process. The voice activation module 52 may be in a standby state and perform an always-on keyword detection function.

[0510] The client device 50 may transmit a user voice input to the cloud server 60. Automatic speech recognition (ASR) and natural language understanding (NLU) operations, which are key components for processing a user voice, have traditionally been performed in the cloud server 60 due to computing, storage, and power constraints.

[0511] The cloud server 60 may include an automatic speech recognition (ASR) module 61, an artificial intelligence (AI) agent 62, a natural language understanding (NLU) module 63, a text-to-speech (TTS) module 64, and a service manager 65.

[0512] The ASR module 61 may convert the user voice input received from the client device 50 into text data.

[0513] The ASR module 61 includes a front-end speech preprocessor. The front-end speech preprocessor extracts representative features from a speech input. For example, the front-end speech preprocessor performs a Fourier transform on the speech input to extract spectral features that characterize the speech input as a sequence of representative multidimensional vectors. Further, the ASR module 61 may include one or more voice recognition models (e.g., an acoustic model and/or a language model) and may implement one or more voice recognition engines. Examples of the voice recognition models include hidden Markov models, Gaussian mixture models, deep neural network models, n-gram language models, and other statistical models. Examples of the voice recognition engines include dynamic time warping-based engines and weighted finite-state transducer (WFST)-based engines. The one or more voice recognition models and the one or more voice recognition engines may be used to process the representative features extracted by the front-end speech preprocessor to produce intermediate recognition results (e.g., phonemes, phoneme strings, and sub-words) and, ultimately, text recognition results (e.g., words, word strings, or a sequence of tokens).
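
The front-end feature extraction just described can be sketched as follows: the waveform is split into overlapping frames, each frame is windowed and Fourier-transformed, and the log-magnitude spectrum of each frame forms one multidimensional feature vector. The frame length and hop size are typical values assumed for 16 kHz audio, not values from the specification.

```python
# Sketch of a front-end speech preprocessor: framed, windowed,
# Fourier-transformed log-magnitude spectra as feature vectors.
import numpy as np

def spectral_features(signal: np.ndarray, frame_len: int = 400, hop: int = 160):
    """Return one log-magnitude spectrum per 25 ms frame (16 kHz audio)."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        spectrum = np.abs(np.fft.rfft(frame))
        frames.append(np.log(spectrum + 1e-10))  # small floor avoids log(0)
    return np.array(frames)  # shape: (num_frames, frame_len // 2 + 1)

audio = np.random.default_rng(0).normal(size=16000)  # 1 s of noise as a stand-in
print(spectral_features(audio).shape)                # (98, 201)
```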

[0514] When the ASR module 61 generates a recognition result including a text string (e.g., words, a sequence of words, or a sequence of tokens), the recognition result is delivered to the NLU module 63. In some instances, the ASR module 61 generates a plurality of candidate text representations of the speech input. Each candidate text representation is a sequence of words or tokens corresponding to the speech input.

[0515] The NLU module 63 may perform a syntactic analysis or a semantic analysis to recognize the user's intention. The syntactic analysis may divide the input into grammar units (e.g., words, phrases, morphemes, etc.) and recognize what grammatical elements the divided units have. The semantic analysis may be performed using semantic matching, rule matching, formula matching, or the like. Accordingly, the NLU module 63 may obtain a domain, an intent, or a parameter necessary for representing the intent of the user input.

[0516] The NLU module 63 may further include a natural language generating module (not shown). The natural language generating module may change designated information into a text form. The information changed into the text form may be in the form of a natural language utterance. The designated information may include, for example, information about an additional input, information indicating completion of an operation corresponding to a user input, or information requesting an additional input from the user. The information changed into the text form may be transmitted to the client device and displayed on a display, or may be transmitted to the TTS module and changed into a voice form.

[0517] The TTS module 64 may change information in text form into information in voice form. The TTS module 64 may receive information in text form from the natural language generating module of the NLU module 63, convert the information in text form into information in voice form, and transmit the same to the client device 50. The client device 50 may output the information in voice form through a speaker.

[0518] Meanwhile, the cloud server 60 may further include an artificial intelligence (AI) agent 62. The AI agent 62 may be designed to perform at least some of the functions performed by the ASR module 61, the NLU module 63, and/or the TTS module 64 described above. The AI agent 62 may also contribute to performing the independent functions of the ASR module 61, the NLU module 63, and/or the TTS module 64.

[0519] The AI agent 62 may perform the above-described functions through deep learning. Deep learning refers to a type of learning in which certain data is represented in a form that a computer can recognize (e.g., in the case of an image, pixel information is represented as a column vector) and the representation is applied to learning. Many studies (regarding how to create better representation techniques and how to build models to learn them) have been conducted for deep learning, and as a result of these efforts, various deep learning techniques such as deep neural networks (DNN), convolutional neural networks (CNN), recurrent neural networks (RNN), restricted Boltzmann machines (RBM), deep belief networks (DBN), and deep Q-networks may be applied to various fields such as computer vision, voice recognition, natural language processing, and voice/signal processing. Currently, all major commercial voice recognition systems (MS Cortana, Skype Translator, Google Now, Apple Siri, etc.) are based on deep learning techniques.

[0520] The AI agent 62 may perform various natural language processing processes, including machine translation, emotion analysis, and information retrieval, using a deep artificial neural network structure in the natural language processing field.

[0521] In particular, the AI agent 62 may use the deep artificial neural network structure to update, through training, the correction scores defined in the database described above.

[0522] Meanwhile, the cloud server 60 may include the service manager 65, which may collect various personalized information and support the functions of the AI agent 62. The personalized information obtained by the service manager 65 may include at least one piece of data (calendar application, messaging service, music application usage, etc.) that the client device 50 uses through the cloud environment, at least one piece of sensing data (camera, microphone, temperature, humidity, gyro sensor, C-V2X, pulse, ambient light, iris scan, etc.) collected by the client device 50 and/or the cloud server 60, and off-device data that is not directly related to the client device 50. For example, the personalized information may include maps, SMS, news, music, stock, weather, and Wikipedia information.

[0523] Although the AI agent 62 is illustrated as a separate block to be distinguished from the ASR module 61, the NLU module 63, and the TTS module 64 for convenience of explanation, the AI agent 62 may perform at least some or all of the functions of the modules 61, 63, and 64.

[0524] Although an example in which the AI agent 62 is implemented in a cloud environment due to computing, storage, and power constraints has been described with reference to FIG. 38, the present invention is not limited thereto. For example, FIG. 39 is the same as FIG. 38 except that an AI agent module 74 is included in the client device 70.

[0525] The client device 70 and the cloud environment 80 shown in FIG. 39 may correspond to the client device 50 and the cloud environment 60 of FIG. 38, except for differences in some configurations and functions. For the specific functions of the corresponding blocks, reference may accordingly be made to FIG. 38.

[0526] Referring to FIG. 39, the client device 70 may include a preprocessing module 71, a voice activation module 72, an ASR module 73, an AI agent module 74, an NLU module 75, and a TTS module 76. In addition, the client device 70 may include an input module (at least one microphone) and at least one output module.

[0527] Further, the cloud environment may include a cloud knowledge 80 storing personalized information in the form of knowledge.

[0528] The function of each module shown in FIG. 39 may be understood with reference to FIG. 38. However, since the ASR module 73, the NLU module 75, and the TTS module 76 are included in the client device 70, communication with the cloud may be unnecessary for speech processing such as voice recognition and speech synthesis, and thus an immediate, real-time voice processing operation may be performed.

[0529] The modules shown in FIGS. 38 and 39 are only an example for explaining a voice processing process, and more or fewer modules than those shown in FIGS. 38 and 39 may be provided. It should also be noted that two or more modules may be combined, or different modules or different arrangements of modules may be provided. The various modules shown in FIGS. 38 and 39 may be implemented by one or more signal processing and/or application-specific integrated circuits, hardware, software instructions for execution by one or more processors, firmware, or a combination thereof.

[0530] FIG. 40 is a block diagram showing a schematic configurationof the AI agent module of FIGS. 38 and 6.

[0531] Referring to FIG. 40, the AI agent module may support aninteractive operation with a user, in addition to performing theASR operation, the NLU operation, and the TTS operation in thevoice processing process described above with reference to FIGS. 38and 6. Further, the AI agent module may contribute to performing anoperation of further clarifying, supplementing, or additionallydefining the information included in the textual representationsreceived by the NLU module 63 from the ASR module 61 using contextinformation.

[0532] Here, the context information may include a preference of a user of the client device, hardware and/or software states of the client device, various kinds of sensor information collected before, during, or immediately after a user input, previous interactions (e.g., dialogue) between the AI agent and the user, and the like. The context information in this document is dynamic and may vary depending on time, location, content of the dialogue, and other factors.

[0533] The AI agent module may further include a context fusion and learning module 91, a local knowledge 92, and a dialogue management 93.

[0534] The context fusion and learning module 91 may learn the user's intention on the basis of at least one piece of data. The at least one piece of data may include at least one piece of sensing data obtained in the client device or in the cloud environment. The at least one piece of data may relate to speaker identification, acoustic event detection, gender and age detection of a speaker, voice activity detection (VAD), and emotion classification.

[0535] The speaker identification may refer to specifying, by voice, a person who speaks in a registered dialogue group. The speaker identification may include identifying a previously registered speaker or registering a new speaker. The acoustic event detection recognizes a sound itself, beyond voice recognition technology, thereby recognizing the type of the sound and the location where the sound is generated. The voice activity detection (VAD) is a speech processing technology for detecting the presence or absence of human speech in an audio signal that may include music, noise, or other sounds. According to an example, the AI agent 74 may check whether there is speech in the input audio signal. According to an example, the AI agent 74 may distinguish between speech data and non-speech data using a deep neural network (DNN) model.
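
As an illustrative sketch only: the patent does not disclose a concrete VAD implementation, so the following substitutes a simple frame-energy gate of the kind often used ahead of a DNN speech/non-speech classifier. The function name and the threshold value are hypothetical.

```python
import numpy as np

def frame_energy_vad(signal, sample_rate, frame_ms=20, threshold_db=-35.0):
    """Label each frame of `signal` (a 1-D numpy array of float
    samples) as speech (True) or non-speech (False).

    A simplified stand-in for the DNN-based speech/non-speech
    classifier described in paragraph [0535]; a real system would
    feed per-frame features to a trained model instead of
    thresholding frame energy.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    labels = []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        # Root-mean-square energy of the frame, in dB (relative to
        # full scale); 1e-12 guards against log of zero on silence.
        rms = np.sqrt(np.mean(frame ** 2)) + 1e-12
        labels.append(20 * np.log10(rms) > threshold_db)
    return labels
```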

[0536] The context fusion and learning module 91 may include a DNN model to perform the above-described operations, and the intention of the user input may be checked on the basis of the DNN model and the sensing information collected in the client device or the cloud environment.

[0537] The at least one piece of data is merely an example and may include any data that may be referred to in checking the user's intention in the voice processing process. The at least one piece of data may be obtained through the DNN model described above.

[0538] The AI agent module may include a local knowledge 92. The local knowledge 92 may include user data. The user data may include a user's preference, a user's address, a user's initial setting language, a user's contact list, and the like. According to an example, the AI agent 74 may further define the user's intention by supplementing the information included in the user's voice input using specific information of the user. For example, in response to a user's request "Invite my friends to my birthday party," the AI agent 74 may use the local knowledge 92, without requiring more specific information from the user, to determine who the "friends" are and when and where the "birthday party" will be held.

[0539] The AI agent module may further include a dialogue management 93. The AI agent 74 may provide a dialogue interface to enable a voice dialogue with the user. The dialogue interface may refer to a process of outputting a response to the user's voice input through a display or a speaker. Here, a final outcome output through the dialogue interface may be based on the ASR operation, the NLU operation, and the TTS operation described above.

[0540] FIG. 41 is a flowchart of a multi-device control method according to an embodiment of the present invention. FIG. 42 is a view illustrating a way in which a response ranking is determined by combining a context-specific correction score and a distance in a multi-device control method according to an embodiment of the present invention.

[0541] The multi-device control method of the present invention according to FIG. 41 is performed in the cloud server or the master server described above.

[0542] Referring to FIGS. 41 and 42, in the multi-device control method of the present invention, a voice command which has undergone a preprocessing process in each device is received from each device, and a voice recognition operation is performed on the voice command (S0, S1).

[0543] In the multi-device control method of the present invention, distances between each of the devices and a sound source may be identified by receiving decibel information from each device or by identifying a voice signal metric value received from each device (S2). The voice signal metric value may include a signal-to-noise ratio, a voice spectrum, a voice energy, and the like, obtained in the preprocessing process of each device.

[0544] In the multi-device control method of the present invention, a context-specific correction score stored in a database of a storage unit is identified, and response rankings are assigned to the plurality of devices by combining the correction scores and the distances (S3, S4). In other words, in the multi-device control method of the present invention, the distances between each of the devices and the sound source are corrected on the basis of the context-specific correction scores, the corrected distances are compared with each other, and a higher response ranking is assigned as a corrected distance is shorter. Here, the context-specific correction score may be set for each device according to the score base information and may be updated through learning. The score base information may include at least one of always-on characteristic information, device on/off information, device control state information, user usage pattern information for a device, and usage environment information.
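
As a minimal, non-authoritative sketch of this ranking step (S3, S4), assuming the correction scores have already been converted into distance corrections in meters as described in paragraph [0553] below, the selection logic might look as follows; all names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    device_id: str
    distance_m: float     # identified distance to the sound source (S2)
    correction_m: float   # context-specific distance correction (S3)

def rank_devices(candidates):
    """Assign response rankings: the shorter the corrected
    distance, the higher the response ranking (S4)."""
    return sorted(candidates, key=lambda c: c.distance_m + c.correction_m)

def select_responder(candidates):
    """Select the device with the highest response ranking (S5)."""
    return rank_devices(candidates)[0].device_id
```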

[0545] In the multi-device control method of the present invention, a device to perform the voice command is selected according to the response rankings (S5). In other words, in the multi-device control method of the present invention, the device with the highest response ranking may be selected as the device to perform the voice command.

[0546] The multi-device control method of the present invention may be implemented by one or more signal processing and/or application-specific integrated circuits, hardware, software instructions executed by one or more processors, firmware, or a combination thereof.

[0547] FIG. 43 is a view illustrating a plurality of devices having different distances from a sound source.

[0548] Both the intensity of a sound and the intensity of a wave are inversely proportional to the square of the distance from the point (sound source) that generated the wave. This is because a wave propagates from its point of origin at the same velocity in all directions, so the initial energy is spread over the surface of a sphere whose area grows with the square of the distance; the total energy reaching the sphere's surface remains equal to the initial energy.
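
Under this inverse-square relation, the level received at a device falls by about 20 log10(r/r0) dB relative to a reference distance r0, which suggests one way the decibel information of paragraph [0543] could be turned into a distance estimate. A hedged sketch, assuming a known reference level at 1 m (the names and example values are hypothetical, and real rooms add reflections and noise):

```python
def estimate_distance_m(received_db, reference_db_at_1m):
    """Invert the free-field inverse-square attenuation law:
    received_db = reference_db_at_1m - 20 * log10(distance_m).
    Only a first-order estimate; indoor acoustics deviate from it."""
    return 10 ** ((reference_db_at_1m - received_db) / 20.0)

# Example: a command measured at 58 dB by a device, with the same
# utterance known to measure 70 dB at 1 m, implies roughly 4 m.
print(round(estimate_distance_m(58.0, 70.0), 1))  # -> 4.0
```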

[0549] In the case of FIG. 43, the distances from a sound source SS are set such that a robot cleaner DV1 is the closest, a TV DV2 is the second closest, a refrigerator DV3 is the third closest, an air-conditioner DV4 is the fourth closest, and a washing machine DV5 is the farthest.

[0550] According to the simple distance-based multi-device control method of the related art, the robot cleaner DV1, which is the closest in distance, responds to a voice command "turn off". That is, although the user wants to turn off the TV DV2, the robot cleaner DV1 is turned off.

[0551] In contrast, in the multi-device control method of the present invention, by prioritizing responses through a combination of the context-specific correction scores and the distances for each of the devices DV1 to DV5, rather than by simple distance, the device intended by the user may be allowed to respond. The context-specific correction scores may be mapped to the score base information input from each of the devices DV1 to DV5 and read out.

[0552] As an example of the context-specific correction scores for the devices DV1 to DV5, the refrigerator DV3 may have (-)3 points because a refrigerator is an always-on home appliance (score base information); the air-conditioner DV4 may have 0 points in a case where the current temperature is higher than the optimal temperature (score base information); the robot cleaner DV1 may have 0 points when the achievement rate for a recognized cleaning map is 80% or less (score base information); the TV DV2 may have (+)2 points while broadcasting an advertisement and (+)1 point while broadcasting a program other than an advertisement (score base information); and the washing machine DV5 may have 0 points in a case where 5 or more minutes have passed since washing started (score base information).

[0553] The context-specific correction scores may be converted into distance correction values. For example, a score of (-)3 points may correspond to (+)max m, a score of (-)2 points may correspond to (+)3 m, a score of (-)1 point may correspond to 0 m, a score of 0 points may correspond to (-)1 m, a score of (+)1 point may correspond to (-)2 m, and a score of (+)2 points may correspond to (-)3 m.

[0554] As the context-specific correction score increases, the response ranking according to the result of combining the correction score and the distance may become higher. According to the above example, the response ranking of the TV DV2 may be the highest, the robot cleaner DV1 the second highest, the refrigerator DV3 the third highest, the air-conditioner DV4 the fourth highest, and the washing machine DV5 the lowest. Therefore, in response to the voice command "turn off", the TV DV2 intended by the user may be turned off.
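
A compact sketch of the score-to-correction mapping of paragraph [0553], applied to the "turn off" scenario of FIG. 43. The raw distances are hypothetical (the paragraphs order the devices but give no meter values), and `MAX_M` stands in for the "(+)max m" sentinel, so only the top-ranked device is asserted here:

```python
# Mapping from context-specific correction score to a distance
# correction in meters, per paragraph [0553].
MAX_M = 1000.0  # stand-in for the "(+)max m" sentinel
SCORE_TO_CORRECTION_M = {-3: MAX_M, -2: 3.0, -1: 0.0, 0: -1.0, 1: -2.0, 2: -3.0}

# Hypothetical raw distances (m) and scores for FIG. 43's "turn off".
devices = {
    "robot_cleaner_DV1":   (1.0, 0),   # cleaning map at 80% -> 0 points
    "tv_DV2":              (2.0, 2),   # broadcasting an ad -> (+)2 points
    "refrigerator_DV3":    (3.0, -3),  # always-on appliance -> (-)3 points
    "air_conditioner_DV4": (4.0, 0),
    "washing_machine_DV5": (5.0, 0),
}

# Rank by corrected distance (raw distance plus correction).
ranked = sorted(
    devices,
    key=lambda d: devices[d][0] + SCORE_TO_CORRECTION_M[devices[d][1]],
)
print(ranked[0])  # -> tv_DV2 (corrected 2 - 3 = -1 m, the shortest)
```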

[0555] FIGS. 44 and 45 are views illustrating an example of determining a response ranking by correcting a distance with a correction score according to device characteristics.

[0556] Referring to FIGS. 44 and 45, in the multi-device control method of the present invention, distances between each of the devices DV1 and DV2 and a sound source SS are identified (S111).

[0557] In the multi-device control method of the present invention, the distances between each of the devices DV1 and DV2 and the sound source SS are corrected with correction scores according to the device characteristics (S112).

[0558] In the multi-device control method of the present invention, response rankings are assigned on the basis of the corrected distances, and a device to respond is selected according to the response rankings (S113, S114).

[0559] For example, when a voice command "turn off" is uttered from the sound source SS (the intention of the user is to turn off the TV) in a state in which both the TV DV1 and the refrigerator DV2 are turned on, the distance between the TV DV1 and the sound source SS may be 3 m and the distance between the refrigerator DV2 and the sound source SS may be 1 m. In this case, the distance between the refrigerator DV2 and the sound source SS may be corrected, by the correction score of (-)3 points according to its always-on product characteristic, to be larger than 3 m, which is the distance of the TV DV1. Therefore, the response ranking of the TV DV1 is assigned to be higher than that of the refrigerator DV2, and the TV DV1 may be turned off according to the user intention.

[0560] FIGS. 46 and 47 are views illustrating an example of determining a response ranking by correcting a distance with a correction score according to a device context.

[0561] Referring to FIGS. 46 and 47, in the multi-device control method of the present invention, distances between each of the devices DV1, DV2, and DV3 and a sound source SS are identified (S131).

[0562] In the multi-device control method of the present invention, the distances between each of the devices DV1, DV2, and DV3 and the sound source SS are corrected with correction scores according to the device context (on/off state, control state of temperature, humidity, or the like) (S132).

[0563] In the multi-device control method of the present invention, response rankings are assigned on the basis of the corrected distances, and a device to respond is selected according to the response rankings (S133, S134).

[0564] For example, when a voice command "turn on" is issued by the sound source SS (the user intends to turn on the air-conditioner) in a state in which the TV DV1 is turned on and an air purifier DV2 and an air-conditioner DV3 are turned off, the distance between the TV DV1 and the sound source SS may be 4 m, the distance between the air purifier DV2 and the sound source SS may be 2 m, and the distance between the air-conditioner DV3 and the sound source SS may be 2 m. In this case, since the TV DV1 is already turned on, the distance between the TV DV1 and the sound source SS may be corrected to be greater than 4 m by the correction score of (-)2 points. Also, the distance between the air purifier DV2 and the sound source SS may be recognized as the original 2 m due to the correction score of (-)1 point according to a specific condition (e.g., "agreeable") in which the current air condition meets a suitable environment. Also, the distance between the air-conditioner DV3 and the sound source SS may be corrected, by the correction score of (+)2 points due to the state in which the current temperature is higher than the appropriate temperature, to be smaller than 2 m, which is the distance of the air purifier DV2. Accordingly, a response ranking of the air-conditioner DV3 is assigned to be higher than those of the TV DV1 and the air purifier DV2, and thus the air-conditioner DV3 may be turned on according to the user intention.

[0565] FIGS. 48, 49, and 50 are views illustrating an example of determining a response ranking by correcting a distance with a correction score according to a device usage pattern of a user.

[0566] Referring to FIGS. 48, 49, and 50, in the multi-device control method of the present invention, a user is identified through voice recognition (S150). The user identification may be performed in the AI agent module as described above. The user identification may refer to specifying, by voice, a person who speaks in a registered dialogue group. The user identification may include a process of identifying a previously registered speaker or registering a new speaker.

[0567] In the multi-device control method of the present invention, distances between each of the devices DV1 and DV2 and the sound source SS are identified (S151).

[0568] In the multi-device control method of the present invention, the distances between each of the devices DV1 and DV2 and the sound source SS are corrected with correction scores according to a device usage pattern for each user (S152). The device usage pattern for each user may be stored in advance or updated through learning.

[0569] In the multi-device control method of the present invention, response rankings are assigned on the basis of the corrected distances, and a device to respond is selected according to the response rankings (S153, S154).

[0570] For example, when a voice command "turn on" is issued by the sound source SS in a state in which both the TV DV1 and the air purifier DV2 are turned off, the distance between the TV DV1 and the sound source SS may be 7 m and the distance between the air purifier DV2 and the sound source SS may be 5 m. The sound source SS may be a first user SS1 or a second user SS2, and the correction score may vary according to the device usage pattern of each user. It is assumed that, when the distances to the devices are 5 m to 10 m, the first user SS1 uses the air purifier DV2 relatively frequently and the second user SS2 uses the TV DV1 relatively frequently.

[0571] Here, when the first user SS1 issues a voice command "turn on", the distance between the air purifier DV2 and the sound source SS1 is corrected from the original 5 m to 2 m by the correction score of (+)2 points according to the device usage pattern of the first user SS1, and the distance between the TV DV1 and the sound source SS1 may be recognized as the original 7 m by the correction score of 0 points. Therefore, the response ranking of the air purifier DV2 is assigned to be higher than that of the TV DV1, and the air purifier DV2 may be turned on according to the normal usage pattern of the first user SS1.

[0572] Meanwhile, when the second user SS2 issues a voice command "turn on", the distance between the air purifier DV2 and the sound source SS2 is recognized as the original 5 m by the correction score of 0 points according to the device usage pattern of the second user SS2, and the distance between the TV DV1 and the sound source SS2 may be corrected from the original 7 m to 4 m by the correction score of (+)2 points. Therefore, the response ranking of the TV DV1 is assigned to be higher than that of the air purifier DV2, and the TV DV1 may be turned on according to the normal usage pattern of the second user SS2.
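
A hedged sketch of how the per-user correction of FIGS. 48 to 50 could be tabulated. The table contents mirror the example above (a score of (+)2 corresponds to the (-)3 m correction of paragraph [0553]); the (user, device) key structure and the names are assumptions, not something the patent specifies:

```python
# Hypothetical per-user correction scores, keyed by (user, device),
# mirroring the FIGS. 48-50 example: SS1 favors the air purifier,
# SS2 favors the TV, at distances of 5 m to 10 m.
USAGE_PATTERN_SCORES = {
    ("SS1", "air_purifier_DV2"): 2,
    ("SS1", "tv_DV1"): 0,
    ("SS2", "air_purifier_DV2"): 0,
    ("SS2", "tv_DV1"): 2,
}

def usage_correction_score(user_id, device_id, distance_m):
    """Return the usage-pattern correction score for the identified
    user, applying the pattern only within the 5-10 m band used in
    the example; unknown pairs default to 0 points."""
    if 5.0 <= distance_m <= 10.0:
        return USAGE_PATTERN_SCORES.get((user_id, device_id), 0)
    return 0
```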

[0573] FIGS. 51 and 52 are views illustrating an example of determining a response ranking by correcting a distance with a correction score according to a usage pattern and an environment.

[0574] Referring to FIGS. 51 and 52, in the multi-device control method of the present invention, distances between each of the devices DV1 and DV2 and a sound source SS are identified (S171).

[0575] In the multi-device control method of the present invention, the distances between each of the devices DV1 and DV2 and the sound source SS are corrected with correction scores according to usage patterns and environments (S172).

[0576] In the multi-device control method of the present invention, response rankings are assigned on the basis of the corrected distances, and a device to respond is selected according to the response rankings (S173, S174).

[0577] For example, when a voice command "turn on" is issued by the sound source SS in a state in which both the TV DV1 and an air washer DV2 are turned off, the distance between the TV DV1 and the sound source SS may be 4 m and the distance between the air washer DV2 and the sound source SS may be 1 m. Here, it is assumed that the user SS uses the TV DV1 relatively frequently in the morning and uses the air washer DV2 relatively frequently at night.

[0578] When the user SS issues a voice command "turn on" in the morning, the distance between the TV DV1 and the sound source SS may be corrected from the original 4 m to 1 m by the correction score of (+)2 points, and the distance between the air washer DV2 and the sound source SS may be corrected from the original 1 m to 2 m by the correction score of (-)1 point, according to the device usage patterns and the humidity. Accordingly, the response ranking of the TV DV1 may be assigned to be higher than that of the air washer DV2, and the TV DV1 that matches the intention of the user SS may be turned on.

[0579] Meanwhile, when the user SS issues a voice command "turn on" at night, the distance between the TV DV1 and the sound source SS may be recognized as the original 4 m due to the correction score of 0 points, and the distance between the air washer DV2 and the sound source SS may be corrected from the original 1 m to (-)2 m by the correction score of (+)2 points, according to the device usage pattern and the humidity. Accordingly, the response ranking of the air washer DV2 is assigned to be higher than that of the TV DV1, and the air washer DV2 that matches the intention of the user SS may be turned on.

[0580] FIGS. 53, 54, and 55 are views illustrating an example of determining a response ranking by correcting a distance with a correction score according to a usage environment.

[0581] Referring to FIGS. 53, 54, and 55, in the multi-device control method of the present invention, distances between each of the devices DV1 and DV2 and the sound source SS are identified (S191).

[0582] In the multi-device control method of the present invention, the distances between each of the devices DV1 and DV2 and the sound source SS are corrected with correction scores according to usage environments such as ratings, fine dust concentration, weather, and the like (S192).

[0583] In the multi-device control method of the present invention, response rankings are assigned on the basis of the corrected distances, and a device to respond is selected according to the response rankings (S193, S194).

[0584] For example, when the voice command "I'm bored" is issued by the sound source SS in a state in which both the TV DV1 and a vehicle DV2 are turned off, the distance between the TV DV1 and the sound source SS may be 1 m and the distance between the vehicle DV2 and the sound source SS may be 15 m.

[0585] If the user SS issues the voice command "I'm bored" on a cloudy day with heavy fine dust, the distance between the TV DV1 and the sound source SS may be corrected to a minimum value smaller than the original 1 m by the correction score of (+)2 points, and the distance between the vehicle DV2 and the sound source SS may be corrected to a maximum value larger than the original 15 m by the correction score of (-)3 points, according to the usage environments. Therefore, the response ranking of the TV DV1 may be assigned to be higher than that of the vehicle DV2, and the TV DV1 matching the intention of the user SS may be turned on.

[0586] Meanwhile, when the user SS issues the voice command "I'm bored" on a clear day with little fine dust, the distance between the TV DV1 and the sound source SS may be corrected to a maximum value larger than the original 1 m by the correction score of (-)3 points, and the distance between the vehicle DV2 and the sound source SS may be corrected to a minimum value smaller than that maximum value by the correction score of (+)3 points, according to the usage environments. Therefore, the response ranking of the vehicle DV2 is assigned to be higher than that of the TV DV1, and an engine, a heater, or an air-conditioner of the vehicle DV2, which matches the intention of the user SS, may be turned on.

[0587] FIGS. 56 to 59 are views showing context-specific correction scores for each device corresponding to voice commands. FIG. 60 is a view showing an example of an operation progress of a device as a training target (or learning target) according to each situation.

[0588] Referring to FIGS. 56 to 59, the context-specific correction scores according to voice commands may be defined in a database format. Such a database may be prepared separately for each device (home appliance) on the basis of a base learning model in a big data format. For example, the base learning model may reflect a turn-off transition of the TV according to each situation, as shown in FIG. 60.
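
As a non-authoritative sketch of such a per-device score database (the patent presents it only in figures, so the schema and entries below are assumptions drawn from the examples above), keyed by voice command and context:

```python
import sqlite3

# Hypothetical schema for the per-device correction-score database
# of FIGS. 56 to 59: one row per (device, command, context) triple.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE correction_scores ("
    " device TEXT, command TEXT, context TEXT, score INTEGER)"
)
conn.execute(
    "INSERT INTO correction_scores VALUES"
    " ('tv', 'turn off', 'broadcasting_advertisement', 2),"
    " ('tv', 'turn off', 'broadcasting_program', 1),"
    " ('refrigerator', 'turn off', 'always_on', -3)"
)

def lookup_score(device, command, context):
    """Read out the context-specific correction score, defaulting
    to 0 points when no row matches."""
    row = conn.execute(
        "SELECT score FROM correction_scores"
        " WHERE device=? AND command=? AND context=?",
        (device, command, context),
    ).fetchone()
    return row[0] if row else 0

print(lookup_score("tv", "turn off", "broadcasting_advertisement"))  # -> 2
```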

[0589] The context-specific correction score of each device defined in the database may be updated, through an AI agent training module, according to a specific voice command and a device context, thereby significantly contributing to the enhancement of user convenience.
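
The patent does not specify the update rule, but one plausible sketch is a small feedback adjustment: nudge a score upward when the selected device matched the user's intention and downward when the user immediately overrode the choice. All names and the step size are hypothetical:

```python
def update_score(current_score, user_confirmed, step=1, lo=-3, hi=2):
    """Feedback-style update of a context-specific correction score,
    clamped to the (-)3 to (+)2 range used in paragraph [0553].
    This update rule is an editorial assumption, not the patent's
    disclosed training method."""
    delta = step if user_confirmed else -step
    return max(lo, min(hi, current_score + delta))

# Example: the TV responded to "turn off" and the user did not
# override it, so its score for this context is nudged upward.
print(update_score(1, user_confirmed=True))  # -> 2
```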

[0590] The present disclosure described above may be implemented as computer-readable code in a medium in which a program is recorded. The computer-readable medium includes any type of recording device in which data that can be read by a computer system is stored. The computer-readable medium may be, for example, a hard disk drive (HDD), a solid state disk (SSD), a silicon disk drive (SDD), a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like. The computer-readable medium also includes implementations in the form of carrier waves (e.g., transmission via the Internet). Also, the computer may include the controller 180 of the terminal. Thus, the foregoing detailed description should not be interpreted restrictively in every aspect and should be considered illustrative. The scope of the present invention should be determined by reasonable interpretation of the attached claims, and every modification within the equivalent range is included in the scope of the present invention.

* * * * *
