Patent Summary 2891672

(12) Patent: (11) CA 2891672
(54) French title: PROCEDE ET APPAREIL POUR OBTENIR DE L'INFORMATION TEMPORELLE SUR LE MOUVEMENT INTER-IMAGE DE SOUS-UNITE DE PREDICTION
(54) English title: METHOD AND APPARATUS FOR DERIVING TEMPORAL INTER-VIEW MOTION INFORMATION OF SUB-PREDICTION UNIT
Status: Granted
Bibliographic data
(51) International Patent Classification (IPC):
  • H04N 19/597 (2014.01)
  • H04N 19/159 (2014.01)
  • H04N 19/176 (2014.01)
  • H04N 19/52 (2014.01)
(72) Inventors:
  • PARK, GWANG HOON (Republic of Korea)
  • LEE, MIN SEONG (Republic of Korea)
  • HEO, YOUNG SU (Republic of Korea)
  • LEE, YOON JIN (Republic of Korea)
(73) Owners:
  • UNIVERSITY-INDUSTRY COOPERATION GROUP OF KYUNG HEE UNIVERSITY (Republic of Korea)
(71) Applicants:
  • UNIVERSITY-INDUSTRY COOPERATION GROUP OF KYUNG HEE UNIVERSITY (Republic of Korea)
(74) Agent: SMART & BIGGAR LP
(74) Associate agent:
(45) Issued: 2018-11-27
(86) PCT filing date: 2015-01-05
(87) Open to public inspection: 2015-07-03
Examination requested: 2015-02-24
Licence available: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT application number: PCT/KR2015/000050
(87) PCT publication number: WO2015/102443
(85) National entry: 2015-02-24

(30) Application priority data:
Application No.        Country/Territory        Date
10-2014-0001531        Republic of Korea        2014-01-06
10-2015-0000578        Republic of Korea        2015-01-05
10-2014-0000527        Republic of Korea        2014-01-03

Abstracts

English abstract


According to the present invention, there is provided a method of encoding a three-dimensional (3D) image, the method comprising: determining a prediction mode for a current block as an inter prediction mode; determining whether a reference block corresponding to the current block in a reference picture has motion information; when the reference block has the motion information, deriving motion information on the current block for each sub prediction block in the current block; and deriving a prediction sample for the current block based on the motion information on the current block.

Claims

Note: The claims are shown in the official language in which they were submitted.


CLAIMS:

1. A method of encoding an image, the method comprising:
determining a prediction mode for a current block as an inter prediction mode;
deriving motion information of the current block; and
deriving a prediction sample for the current block based on the motion information of the current block,
wherein the step of the deriving motion information of the current block comprises:
determining whether a center sub prediction block corresponding to a center position of the current block within a reference block has motion information;
deriving motion information of sub prediction blocks within the current block when the center sub prediction block within the reference block has motion information; and
deriving motion information of the current block when the center sub prediction block within the reference block does not have motion information.

2. The method of claim 1, wherein the step of the deriving motion information of sub prediction blocks within the current block comprises:
determining whether a sub prediction block within the reference block has motion information, the sub prediction block within the reference block corresponding to a current sub prediction block within the current block;
deriving motion information of the current sub prediction block within the current block from the sub prediction block within the reference block when the sub prediction block within the reference block has motion information; and
deriving motion information of the current sub prediction block within the current block from the center sub prediction block within the reference block when the sub prediction block within the reference block does not have motion information.

3. The method of claim 1, wherein the reference block is located in a reference picture having a temporal order different from a current picture including the current block.

4. The method of claim 1, wherein the reference block is located in a reference picture belonging to the same access unit as the current picture including the current block.

5. The method of claim 4, wherein the reference block is specified based on a position of the current block and a disparity vector of the current block.

6. An apparatus for encoding an image, the apparatus comprising:
a storage module configured to determine a prediction mode for a current block as an inter prediction mode; and
a deriving module configured to derive motion information of the current block and to derive a prediction sample for the current block based on the motion information of the current block,
wherein the deriving module determines whether a center sub prediction block corresponding to a center position of the current block within a reference block has motion information,
derives motion information of sub prediction blocks within the current block when the center sub prediction block within the reference block has motion information and
derives motion information of the current block when the center sub prediction block within the reference block does not have motion information.

7. The apparatus of claim 6, wherein the deriving module determines whether a sub prediction block within the reference block has motion information, the sub prediction block within the reference block corresponding to a current sub prediction block within the current block,
derives motion information of the current sub prediction block within the current block from the sub prediction block within the reference block when the sub prediction block within the reference block has motion information and
derives motion information of the current sub prediction block within the current block from the center sub prediction block within the reference block when the sub prediction block within the reference block does not have motion information.

8. The apparatus of claim 6, wherein the reference block is located in a reference picture having a temporal order different from a current picture including the current block.

9. The apparatus of claim 6, wherein the reference block is located in a reference picture belonging to the same access unit as the current picture including the current block.

10. The apparatus of claim 9, wherein the reference block is specified based on a position of the current block and a disparity vector of the current block.

11. A method of decoding an image, the method comprising:
determining a prediction mode for a current block as an inter prediction mode;
deriving motion information of the current block; and
deriving a prediction sample for the current block based on the motion information of the current block,
wherein the step of the deriving motion information of the current block comprises:
determining whether a center sub prediction block corresponding to a center position of the current block within a reference block has motion information;
deriving motion information of sub prediction blocks within the current block when the center sub prediction block within the reference block has motion information; and
deriving motion information of the current block when the center sub prediction block within the reference block does not have motion information.

12. The method of claim 11, wherein the step of the deriving motion information of sub prediction blocks within the current block comprises:
determining whether a sub prediction block within the reference block has motion information, the sub prediction block within the reference block corresponding to a current sub prediction block within the current block;
deriving motion information of the current sub prediction block within the current block from the sub prediction block within the reference block when the sub prediction block within the reference block has motion information; and
deriving motion information of the current sub prediction block within the current block from the center sub prediction block within the reference block when the sub prediction block within the reference block does not have motion information.

13. The method of claim 11, wherein the reference block is located in a reference picture having a temporal order different from a current picture including the current block.

14. The method of claim 11, wherein the reference block is located in a reference picture belonging to the same access unit as the current picture including the current block.

15. The method of claim 14, wherein the reference block is specified based on a position of the current block and a disparity vector of the current block.

16. An apparatus for decoding an image, the apparatus comprising:
a storage module configured to determine a prediction mode for a current block as an inter prediction mode; and
a deriving module configured to derive motion information of the current block and to derive a prediction sample for the current block based on the motion information of the current block,
wherein the deriving module determines whether a center sub prediction block corresponding to a center position of the current block within a reference block has motion information,
derives motion information of sub prediction blocks within the current block when the center sub prediction block within the reference block has motion information and
derives motion information of the current block when the center sub prediction block within the reference block does not have motion information.

17. The apparatus of claim 16, wherein the deriving module determines whether a sub prediction block within the reference block has motion information, the sub prediction block within the reference block corresponding to a current sub prediction block within the current block,
derives motion information of the current sub prediction block within the current block from the sub prediction block within the reference block when the sub prediction block within the reference block has motion information and
derives motion information of the current sub prediction block within the current block from the center sub prediction block within the reference block when the sub prediction block within the reference block does not have motion information.

18. The apparatus of claim 16, wherein the reference block is located in a reference picture having a temporal order different from a current picture including the current block.

19. The apparatus of claim 16, wherein the reference block is located in a reference picture belonging to the same access unit as the current picture including the current block.

20. The apparatus of claim 18, wherein the reference block is specified based on a position of the current block and a disparity vector of the current block.

Description

Note: The descriptions are shown in the official language in which they were submitted.


METHOD AND APPARATUS FOR DERIVING TEMPORAL INTER-VIEW MOTION INFORMATION OF SUB-PREDICTION UNIT

[TECHNICAL FIELD]
[1] The present invention relates to apparatuses and methods of encoding/decoding 3D images, and more specifically, to image encoding/decoding methods and apparatuses that derive inter-view motion information in parallel according to sub prediction units.
[2]
[BACKGROUND ART]
[3] The growing IT industry has spread HD (high definition) broadcast services worldwide, and more and more users are getting used to HD images.
[4] Accordingly, users are demanding higher-quality and higher-resolution images, and a number of organizations are working to develop next-generation imaging devices to live up to such expectations. As a result, users may experience full HD (FHD) and ultra HD (UHD) supportive images.
[5] Users' demand goes one step further, to 3D images that may offer a 3D feel or effect. Various organizations have developed 3D images to meet such demand.
[6] However, 3D images include depth map information as well as a true image (texture), and thus have more data than 2D images. Accordingly, encoding/decoding 3D images with existing image encoding/decoding processes cannot exhibit sufficient encoding/decoding efficiency.
[7]
[DETAILED DESCRIPTION OF INVENTION]
[8] An aspect of the present disclosure is directed to the provision of a device and method for deriving motion information of a block targeted for encoding/decoding.
[9] Another aspect of the present disclosure is directed to the provision of a device and method for removing data dependency in deriving motion information of a block targeted for encoding/decoding.
[10] Another aspect of the present disclosure is directed to the provision of a device and method for increasing image encoding/decoding efficiency by removing data dependency in deriving motion information of a block targeted for encoding/decoding on a per-sub prediction unit basis.
[11] Another aspect of the present disclosure is directed to the provision of a device and method for increasing image encoding/decoding efficiency using motion information of a reference block when deriving motion information of a block targeted for encoding/decoding on a per-sub prediction unit basis.
[12] According to an aspect of the present invention, there is provided a method of encoding an image, the method comprising: determining a prediction mode for a current block as an inter prediction mode; deriving motion information of the current block; and deriving a prediction sample for the current block based on the motion information of the current block, wherein the step of the deriving motion information of the current block comprises: determining whether a center sub prediction block corresponding to a center position of the current block within a reference block has motion information; deriving motion information of sub prediction blocks within the current block when the center sub prediction block within the reference block has motion information; and deriving motion information of the current block when the center sub prediction block within the reference block does not have motion information.
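To make the control flow of this derivation concrete, here is a minimal sketch in Python of the decision logic just described. The data structure and function names are illustrative assumptions, not identifiers from the patent; a real codec operates on its own block and motion structures.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class SubBlock:
    index: int
    motion: Optional[Tuple[int, int]] = None  # (mv_x, mv_y); None = no motion info

def derive_sub_block_motion(current: List[SubBlock],
                            reference: List[SubBlock]) -> Optional[Dict[int, Tuple[int, int]]]:
    """Per-sub-prediction-block motion derivation as described above.

    Returns a map from current sub block index to derived motion, or None when
    the center sub block of the reference block has no motion information (in
    which case motion is derived for the current block as a whole instead)."""
    center = reference[len(reference) // 2]   # sub block covering the center position
    if center.motion is None:
        return None                           # fall back to block-level derivation
    derived: Dict[int, Tuple[int, int]] = {}
    for cur, ref in zip(current, reference):
        # Take the co-located reference sub block's motion when available;
        # otherwise substitute the center sub block's motion. No sub block
        # depends on a previously processed one, so this loop can run in parallel.
        derived[cur.index] = ref.motion if ref.motion is not None else center.motion
    return derived

# Example: the second reference sub block lacks motion info, so it inherits
# the center sub block's motion.
ref = [SubBlock(0, (1, 0)), SubBlock(1, None), SubBlock(2, (3, -1)), SubBlock(3, (0, 2))]
cur = [SubBlock(i) for i in range(4)]
print(derive_sub_block_motion(cur, ref))
```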
[12a] According to another aspect of the present invention, there is provided an apparatus for encoding an image, the apparatus comprising: a storage module configured to determine a prediction mode for a current block as an inter prediction mode; and a deriving module configured to derive motion information of the current block and to derive a prediction sample for the current block based on the motion information of the current block, wherein the deriving module determines whether a center sub prediction block corresponding to a center position of the current block within a reference block has motion information, derives motion information of sub prediction blocks within the current block when the center sub prediction block within the reference block has motion information and derives motion information of the current block when the center sub prediction block within the reference block does not have motion information.
[12b] According to another aspect of the present invention, there is provided a method of decoding an image, the method comprising: determining a prediction mode for a current block as an inter prediction mode; deriving motion information of the current block; and deriving a prediction sample for the current block based on the motion information of the current block, wherein the step of the deriving motion information of the current block comprises: determining whether a center sub prediction block corresponding to a center position of the current block within a reference block has motion information; deriving motion information of sub prediction blocks within the current block when the center sub prediction block within the reference block has motion information; and deriving motion information of the current block when the center sub prediction block within the reference block does not have motion information.
[12c] According to another aspect of the present invention, there is provided an apparatus for decoding an image, the apparatus comprising: a storage module configured to determine a prediction mode for a current block as an inter prediction mode; and a deriving module configured to derive motion information of the current block and to derive a prediction sample for the current block based on the motion information of the current block, wherein the deriving module determines whether a center sub prediction block corresponding to a center position of the current block within a reference block has motion information, derives motion information of sub prediction blocks within the current block when the center sub prediction block within the reference block has motion information and derives motion information of the current block when the center sub prediction block within the reference block does not have motion information.
[13] According to an embodiment of the present invention, there may be provided a method of encoding a three-dimensional (3D) image, the method comprising: determining a prediction mode for a current block as an inter prediction mode; determining whether a reference block corresponding to the current block in a reference picture has motion information; when the reference block has the motion information, deriving motion information on the current block for each sub prediction block in the current block; and deriving a prediction sample for the current block based on the motion information on the current block.
[14] In some embodiments, the current block and the reference block may be prediction blocks.
[15] In some embodiments, the motion information on the reference block may be positioned at a center of the reference block.
[16] In some embodiments, in the step of the deriving the motion information on the current block for each sub prediction block in the current block, if a sub prediction block in the reference block corresponding to a sub prediction block in the current block has motion information, the motion information on the sub prediction block of the current block may be derived as the motion information present in the sub prediction block of the reference block.
[17] In some embodiments, if a sub prediction block in the reference block corresponding to a sub prediction block in the current block does not have motion information, the motion information on the sub prediction block of the current block may be derived as the motion information of the reference block.
[18] According to another embodiment of the present invention, there may be provided an apparatus of encoding a three-dimensional (3D) image, the apparatus comprising: a storage module determining a prediction mode for a current block as an inter prediction mode and determining whether a reference block corresponding to the current block in a reference picture has motion information; and a deriving module, when the reference block has the motion information, deriving motion information on the current block for each sub prediction block in the current block and deriving a prediction sample for the current block based on the motion information on the current block.
[19] In some embodiments, the current block and the reference block may be prediction blocks.
[20] In some embodiments, the motion information on the reference block may be positioned at a center of the reference block.
[21] In some embodiments, in the deriving module, if a sub prediction block in the reference block corresponding to a sub prediction block in the current block has motion information, the motion information on the sub prediction block of the current block may be derived as the motion information present in the sub prediction block of the reference block.
[22] In some embodiments, if a sub prediction block in the reference block corresponding to a sub prediction block in the current block does not have motion information, the motion information on the sub prediction block of the current block may be derived as the motion information of the reference block.
[23] According to still another embodiment of the present invention, there may be provided a method of decoding a three-dimensional (3D) image, the method comprising: determining a prediction mode for a current block as an inter prediction mode; determining whether a reference block corresponding to the current block in a reference picture has motion information; when the reference block has the motion information, deriving motion information on the current block for each sub prediction block in the current block; and deriving a prediction sample for the current block based on the motion information on the current block.
[24] In some embodiments, the current block and the reference block may be prediction blocks.

[25] In some embodiments, the motion information on the reference block may be positioned at a center of the reference block.
[26] In some embodiments, in the step of the deriving the motion information on the current block for each sub prediction block in the current block, if a sub prediction block in the reference block corresponding to a sub prediction block in the current block has motion information, the motion information on the sub prediction block of the current block may be derived as the motion information present in the sub prediction block of the reference block.
[27] In some embodiments, if a sub prediction block in the reference block corresponding to a sub prediction block in the current block does not have motion information, the motion information on the sub prediction block of the current block may be derived as the motion information of the reference block.
[28] According to yet still another embodiment of the present invention, there may be provided an apparatus of decoding a three-dimensional (3D) image, the apparatus comprising: a storage module determining a prediction mode for a current block as an inter prediction mode and determining whether a reference block corresponding to the current block in a reference picture has motion information; and a deriving module, when the reference block has the motion information, deriving motion information on the current block for each sub prediction block in the current block and deriving a prediction sample for the current block based on the motion information on the current block.
[29] In some embodiments, the current block and the reference block may be prediction blocks.
[30] In some embodiments, the motion information on the reference block may be positioned at a center of the reference block.
[31] In some embodiments, in the deriving module, if a sub prediction block in the reference block corresponding to a sub prediction block in the current block has motion information, the motion information on the sub prediction block of the current block may be derived as the motion information present in the sub prediction block of the reference block.

[32] In some embodiments, if a sub prediction block in the reference block corresponding to a sub prediction block in the current block does not have motion information, the motion information on the sub prediction block of the current block may be derived as the motion information of the reference block.
[33]
[34] Some embodiments may derive motion information of a block targeted for encoding/decoding.
[35] Some embodiments may remove data dependency in deriving motion information of a block targeted for encoding/decoding.
[36] Some embodiments may increase image encoding/decoding efficiency by removing data dependency in deriving motion information of a block targeted for encoding/decoding on a per-sub prediction unit basis.
[37] Some embodiments may increase image encoding/decoding efficiency using motion information of a reference block by removing data dependency in deriving motion information of a block targeted for encoding/decoding on a per-sub prediction unit basis.
[38]
[BRIEF DESCRIPTION OF DRAWINGS]
[39] Fig. 1 is a view schematically illustrating a basic structure of a 3-dimensional (3D) image system.
[40] Fig. 2 is a view illustrating an example of a "balloons" image and an example of a depth information map image.
[41] Fig. 3 is a view schematically illustrating a structure in which an image is split upon encoding and decoding the image.
[42] Fig. 4 illustrates prediction units that may be included in a coding unit (CU).
[43] Fig. 5 illustrates an example of an inter view prediction structure in a 3D image codec.
[44] Fig. 6 illustrates an example of a process of encoding and/or decoding a true image (texture view) and a depth information map (depth view) in a 3D image encoder and/or decoder.
[45] Fig. 7 is a block diagram illustrating a configuration of an image encoder according to an embodiment of the present invention.
[46] Fig. 8 is a block diagram illustrating a configuration of an image decoder according to an embodiment of the present invention.
[47] Fig. 9 is a view illustrating an exemplary prediction structure for a 3D image codec.
[48] Fig. 10 illustrates an example in which neighboring blocks are used to configure a merge candidate list for a current block.
[49] Fig. 11 is a view illustrating an exemplary process of deriving motion information on a current block using motion information at a neighboring view.
[50] Fig. 12 is a view illustrating an example in which one prediction unit (PU) is split into several sub prediction units.
[51] Fig. 13 is a view illustrating an exemplary process of deriving motion information on a current block using a reference block.
[52] Fig. 14 is a view illustrating an exemplary reference block used to derive motion information on a current block.
[53] Figs. 15a to 15e are views schematically illustrating an exemplary process of deriving motion information using motion information stored in a storage space.
[54] Figs. 16a to 16g are views schematically illustrating another exemplary process of deriving motion information using motion information stored in a storage space.
[55] Fig. 17 is a flowchart illustrating a method of deriving motion information on a sub prediction unit of a current block using a sub prediction unit of a reference block, according to an embodiment of the present invention.
[56] Fig. 18 is a view illustrating an exemplary process of deriving in parallel motion information on a sub prediction unit of a current block using a sub prediction unit of a reference block.
[57] Fig. 19 is a view illustrating an exemplary process of discovering an available sub prediction unit when the available sub prediction unit is positioned at the rightmost and lowermost end of a reference block.
[58] Fig. 20 is a view schematically illustrating times required to derive motion information on a per-sub prediction unit basis.
[59] Fig. 21 is a block diagram illustrating a configuration of an inter prediction module to which an embodiment of the present invention applies.
[60] Fig. 22 is a flowchart schematically illustrating a method of deriving motion information on a sub prediction unit of a current block using a reference block, according to an embodiment of the present invention.
[61] Fig. 23 is a flowchart schematically illustrating a method of deriving motion information on a sub prediction unit of a current block, according to another embodiment of the present invention.
[62] Fig. 24 is a view illustrating an exemplary process of deriving motion information on a sub prediction unit of a current block using motion information at a position.
[63] Fig. 25 is a flowchart illustrating a method of deriving motion information on a sub prediction unit of a current block using a motion information value, according to another embodiment of the present invention.

[64] Fig. 26 is a view illustrating an exemplary process of deriving motion information on a sub prediction unit of a current block using some motion information.
[65] Fig. 27 is a view schematically illustrating times required to derive motion information according to an embodiment of the present invention.
[66]
[DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS]
[67] Hereinafter, embodiments of the present invention are described in detail with reference to the accompanying drawings. Where a detailed description of relevant known configurations or functions would make the subject matter of the present disclosure unclear, that description is omitted.
[68] When a component is "connected to" or "coupled to" another component, the component may be directly connected or coupled to the other component, or other components may intervene. When the present invention is said to "include" or "comprise" a particular component, this does not exclude other components; rather, additional components may also be included in the technical spirit of the present invention or embodiments of the present invention.
[69] The terms "first" and "second" may be used to describe various components, but the components are not limited by the terms. These terms are used only to distinguish one component from another. For example, without departing from the scope of the present invention, a first component may be denoted a second component, and a second component may be denoted a first component.

[70] The components as used herein may be independently shown to represent their respective distinct features, but this does not mean that each component should be configured as a separate hardware or software unit. In other words, the components are shown separately from each other for ease of description. At least two of the components may be combined to configure a single component, or each component may be split into a plurality of components to perform a function. Such combination or separation also belongs to the scope of the present invention without departing from the gist of the present invention.
[71] Some components may be optional components for enhancing performance rather than inevitable components for performing essential functions of the present invention. The present invention may be implemented only with essential components to realize the gist of the present invention, excluding components used to enhance performance, and such configuration also belongs to the scope of the present invention.
[72]
[73] A 3D image offers a stereoscopic effect through a 3D stereoscopic display, as if the user were seeing and feeling the real-life world. In this connection, a joint standardization group, JCT-3V (The Joint Collaborative Team on 3D Image Coding Extension Development) of MPEG (Moving Picture Experts Group) in ISO/IEC and VCEG (Video Coding Experts Group) in ITU-T, is working on 3D image standardization.
[74]
[75] Fig. 1 is a view schematically illustrating a basic structure of a 3-dimensional (3D) image system.
[76] Referring to Fig. 1, the 3D video (3DV) system may include a sender and a receiver. In this case, the 3D video system of Fig. 1 may be a basic 3D video system as considered in 3D image standards, which may include standards regarding advanced data formats and their related technologies that may support playback of autostereoscopic images as well as stereoscopic images using a texture and its corresponding depth information map.
[77] The sender may generate multi-view image content. Specifically, the sender may generate image information using a stereo camera and a multi-view camera, and a depth information map (or depth view) using a depth information camera. The sender may convert a 2D image into a 3D image using a transforming device. The sender may generate an N (>2)-view (i.e., multi-view) image content using the generated image information and the depth information map. In this case, the N-view image content may contain N-view image information, its depth map information, and camera-related additional information. The N-view image content may be compressed by a 3D image encoder using a multi-view image encoding scheme, and the compressed image content (a bit stream) may be transmitted through a network to a terminal of the receiver.
[78] The receiver may decode the image content received from the sender and may provide the multi-view image. Specifically, an image decoder (e.g., a 3D image decoder, a stereo image decoder, or a 2D image decoder) of the receiver may decode the received bit stream using a multi-view image decoding scheme to restore the bit stream into the N-view image. In this case, it may generate N (or more)-view virtual view images using the restored N-view image and a depth image-based rendering (DIBR) process. The generated N (or more)-view virtual view images are played by various 3D displays (e.g., an N-view display, a stereo display, or a 2D display), providing the user with a 3D effect.
[79]
[80] Fig. 2 is a view illustrating an example of a "balloons" image and an example of a depth information map image.
[81] Fig. 2(a) illustrates a "balloons" image that is adopted in an MPEG (an international standardization organization) 3D image encoding standard. Fig. 2(b) illustrates a depth information map image corresponding to the "balloons" image shown in Fig. 2(a). The depth information map image is the one obtained by representing depth information shown on the screen in eight bits per pixel.
[82] The depth information map is used for generating virtual view images, and the depth information map is the one obtained by representing the distance between a camera and a true object in the real-life world (depth information corresponding to each pixel at the same resolution as the texture) in a predetermined number of bits. In this case, the depth information map may be obtained using the depth information map camera or using a true common image (texture).
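As a concrete illustration of representing depth in a fixed number of bits, a common convention in 3D video quantizes inverse depth linearly between a near and a far clipping distance. The patent text does not spell out a formula, so treat the one below as an assumption used only for illustration:

```python
def quantize_depth(z: float, z_near: float, z_far: float, bits: int = 8) -> int:
    """Map a real-world distance z (z_near <= z <= z_far) to an integer depth
    value. Inverse depth is quantized linearly, so nearer objects get finer
    precision; 8 bits gives values 0..255 as in the image of Fig. 2(b)."""
    levels = (1 << bits) - 1
    t = (1.0 / z - 1.0 / z_far) / (1.0 / z_near - 1.0 / z_far)
    return round(levels * t)

# Example: with a 1 m .. 10 m depth range, a point 2 m away maps to 113.
print(quantize_depth(2.0, 1.0, 10.0))  # -> 113
```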
[83] The depth information map obtained using the depth information map camera offers highly reliable depth information, primarily for a stationary object or scene, but the depth information map camera operates only within a predetermined distance. In this case, the depth information map camera may utilize a measuring scheme using a laser beam or structured light or based on time-of-flight of light (TFL).
[84] The depth information map may be generated using a true common image (texture) and a disparity vector as well. The disparity vector means information representing the difference in view between two common images. The disparity vector may be obtained by comparing a pixel at the current view with pixels at other views to discover the one most similar to the current view pixel and measuring the distance between the current view pixel and the most similar pixel.
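The paragraph above describes, in effect, a correspondence search. The sketch below is a hypothetical illustration rather than the patent's method: it estimates the disparity of one pixel by scanning a horizontal range in the other view and picking the most similar candidate (absolute-difference cost on a grayscale row):

```python
from typing import List

def estimate_disparity(cur_row: List[int], other_row: List[int],
                       x: int, max_disp: int = 64) -> int:
    """Find the disparity of pixel x in the current view by locating the most
    similar pixel along the same row of another view (1D search, 1-pixel cost)."""
    best_d, best_cost = 0, float("inf")
    for d in range(min(max_disp, x) + 1):          # candidate shifts to the left
        cost = abs(cur_row[x] - other_row[x - d])  # similarity measure
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Toy rows: the bright pixel at x=5 in the current view appears at x=3 in the
# other view, so the measured disparity is 2.
cur   = [10, 10, 10, 10, 10, 200, 10, 10]
other = [10, 10, 10, 200, 10, 10, 10, 10]
print(estimate_disparity(cur, other, 5))  # -> 2
```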

[85] The texture and its depth information map may be an image(s) obtained by one or more cameras. The images obtained by several cameras may be independently encoded and may be encoded/decoded using a typical 2D encoding/decoding codec. The images obtained by several cameras have a correlation between their views and, for higher encoding efficiency, may thus be encoded using prediction between the different views.
[86]
[87] Fig. 3 is a view schematically illustrating a structure in which an image is split upon encoding and decoding the image.
[88] For efficient splitting, an image may be encoded and decoded for each coding unit (CU). The term "unit" refers to a block including a syntax element and image samples. Saying that a "unit is split" means that the block corresponding to the unit is split.
[89] Referring to Fig. 3, an image 300 is sequentially split into largest coding units (LCU), and the split structure of each LCU is determined. As used herein, "LCU" may mean a coding tree unit (CTU). The split structure may mean a distribution of coding units (CU) for efficiently encoding the image in each LCU 310, and such distribution may be determined depending on whether to split one CU into four CUs, each reduced in size by 1/2 the size of the CU in the horizontal and vertical directions. In the same manner, each split CU may be recursively split into four CUs, each with its size reduced by 1/2 in the horizontal and vertical directions.
[90] In this case, the splitting of a CU may be recursively performed to a predetermined depth. Depth information refers to information indicating the size of a CU and may be stored for each CU. For example, the depth of an LCU may be 0, and the depth of a smallest coding unit (SCU) may be a predetermined largest depth. Here, the LCU is a coding unit with the largest size as mentioned above, and the SCU is a coding unit with the smallest size.

[91] Whenever an LCU 310 is split by half in the horizontal and vertical directions, the depth of the CU is increased by one. For example, if the size of a CU is 2Nx2N at a certain depth L, the CU, if not split, has a size of 2Nx2N, and if split, its size is reduced to NxN. In this case, the depth of the NxN-sized CU becomes L+1. In other words, N, corresponding to the size of the CU, is reduced by half each time the depth is increased by one.
[92] Referring to Fig. 3, the size of an LCU with a smallest depth of 0 may be 64x64 pixels, and the size of an SCU with a largest depth of 3 may be 8x8 pixels. In this case, the depth of a CU (LCU) with 64x64 pixels may be represented as 0, a CU with 32x32 pixels as 1, a CU with 16x16 pixels as 2, and a CU (SCU) with 8x8 pixels as 3.
[93] Further, information as to whether to split a particular CU may be represented through one-bit split information of the CU. The split information may be contained in all CUs other than SCUs. For example, if a CU is not split, 0 may be retained in the split information of the CU, and if split, 1 may be retained in the split information of the CU.
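A short sketch of the size-depth relation and the one-bit split flag just described. The quadtree walk below is a generic illustration of this scheme, not code from any codec:

```python
def cu_size(lcu_size: int, depth: int) -> int:
    """Each extra depth halves the CU in both directions: size = LCU size >> depth."""
    return lcu_size >> depth

def walk_quadtree(split_flags, depth=0, max_depth=3):
    """Yield the depths of leaf CUs. split_flags(depth) returns the one-bit
    split information: 1 = split into four sub-CUs, 0 = leaf. SCUs (at
    max_depth) carry no split flag and are always leaves."""
    if depth == max_depth or split_flags(depth) == 0:
        yield depth
        return
    for _ in range(4):                     # four quadrants, each half-sized
        yield from walk_quadtree(split_flags, depth + 1, max_depth)

# With a 64x64 LCU: depth 0 -> 64x64, 1 -> 32x32, 2 -> 16x16, 3 -> 8x8 (SCU).
print([cu_size(64, d) for d in range(4)])                 # [64, 32, 16, 8]
# Split only at depth 0: one LCU becomes four 32x32 leaves.
print(list(walk_quadtree(lambda d: 1 if d == 0 else 0)))  # [1, 1, 1, 1]
```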
[94]
[95] Fig. 4 illustrates prediction units that may be included in a coding unit (CU).
[96] Among the CUs split from an LCU, a CU that is subjected to no further splitting may be split or partitioned into one or more prediction units.
[97] A prediction unit (hereinafter, "PU") is a basic unit in which prediction is conducted. A prediction unit is encoded and decoded in skip mode, inter mode, or intra mode. A prediction unit may be partitioned in various manners depending on the modes.
[98] Referring to Fig. 4, the skip mode may support a 2Nx2N mode 410 having the same size as a CU, without splitting the CU.

[99] The inter mode may support eight partition types for a CU, for example, a 2Nx2N mode 410, a 2NxN mode 415, an Nx2N mode 420, an NxN mode 425, a 2NxnU mode 430, a 2NxnD mode 435, an nLx2N mode 440, and an nRx2N mode 445.
[100] The intra mode may support a 2Nx2N mode 410 and an NxN mode 425 for a CU.
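For reference, the eight inter partition types map to concrete sub-block geometries. The helper below is an illustrative sketch only; the quarter-split proportions for the asymmetric modes are an assumption based on common practice, not stated in this text:

```python
def pu_partitions(mode: str, cu: int):
    """Return the (width, height) list of PUs carved out of a 2Nx2N CU of side
    `cu`. Asymmetric modes (2NxnU/2NxnD/nLx2N/nRx2N) split at one quarter."""
    q, h = cu // 4, cu // 2
    table = {
        "2Nx2N": [(cu, cu)],
        "2NxN":  [(cu, h)] * 2,
        "Nx2N":  [(h, cu)] * 2,
        "NxN":   [(h, h)] * 4,
        "2NxnU": [(cu, q), (cu, cu - q)],
        "2NxnD": [(cu, cu - q), (cu, q)],
        "nLx2N": [(q, cu), (cu - q, cu)],
        "nRx2N": [(cu - q, cu), (q, cu)],
    }
    return table[mode]

# A 32x32 CU in 2NxnU mode: a 32x8 PU sitting on top of a 32x24 PU.
print(pu_partitions("2NxnU", 32))  # [(32, 8), (32, 24)]
```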
[101]
[102] Fig. 5 illustrates an example of an inter view prediction structure in a 3D image codec.
[103] Inter-view prediction for view 1 and view 2 may be conducted using view 0 as a reference image, and view 0 should be encoded earlier than view 1 and view 2.
[104] In this case, view 0 may be encoded independently from other views, and thus view 0 is referred to as an independent view. In contrast, view 1 and view 2, which should use view 0 as a reference image, are referred to as dependent views. An independent view image may be encoded using a typical 2D image codec. On the contrary, dependent view images need to go through inter view prediction, and thus these views may be encoded using a 3D image codec equipped with an inter view prediction process.
[105] For increased encoding efficiency, view 1 and view 2 may be encoded using a depth information map. For example, a texture and a depth information map, when encoded, may be encoded and/or decoded independently from each other. Or, a texture and a depth information map, when encoded, may be encoded and/or decoded dependently upon each other, as shown in Fig. 6.
[106]
[107] Fig. 6 illustrates an example of a process of encoding and/or decoding a true image (texture view) and a depth information map (depth view) in a 3D image encoder and/or decoder.
[108] Referring to Fig. 6, the 3D image encoder may include a texture encoder for encoding a true image (texture view) and a depth information map encoder (depth encoder) for encoding a depth information map (depth view).
[109] In this case, the texture encoder may encode the texture using the depth information map encoded by the depth information map encoder. In contrast, the depth information map encoder may encode the depth information map using the texture encoded by the texture encoder.
[110] The 3D image decoder may include a true image decoder (texture decoder) for decoding a texture and a depth information map decoder for decoding a depth information map.
[111] In this case, the texture decoder may decode the texture using the depth information map decoded by the depth information map decoder. In contrast, the depth information map decoder may decode the depth information map using the texture decoded by the texture decoder.
[112]
[113] Fig. 7 is a block diagram illustrating a configuration of an image encoder according to an embodiment of the present invention.
[114] Fig. 7 illustrates an example image encoder applicable to a multi-view structure that may be implemented by extending a single view-structured image encoder. In this case, the image encoder of Fig. 7 may be used in a texture encoder and/or depth information map encoder as shown in Fig. 6, and the encoder may mean an encoding device.
[115] Referring to Fig. 7, the image encoder 700 includes an inter prediction module 710, an intra prediction module 720, a switch 715, a subtractor 725, a transform module 730, a quantization module 740, an entropy encoding unit 750, a dequantization module 760, an inverse transform module 770, an adder 775, a filter 780, and a reference picture buffer 790.
[116] The image encoder 700 may perform encoding on an input image in intra mode or inter mode to output a bitstream.
[117] Intra prediction means intra picture prediction, and inter prediction means inter picture or inter view prediction. In intra mode, the switch 715 switches to intra mode, and in inter mode, the switch 715 switches to inter mode.
[118] The image encoder 700 may generate a prediction block for a block (current block) of the input picture and then encode a differential between the current block and the prediction block.
[119] In intra mode, the intra prediction module 720 may use as its reference pixel a pixel value of an already encoded neighboring block of the current block. The intra prediction module 720 may generate prediction samples for the current block using the reference pixel.
[120] In inter mode, the inter prediction module 710 may obtain a motion vector specifying a reference block corresponding to the input block (current block) in a reference picture stored in the reference picture buffer 790. The inter prediction module 710 may generate the prediction block for the current block by performing motion compensation using the reference picture stored in the reference picture buffer 790 and the motion vector.
[121] In a multi-view structure, inter prediction applying to inter mode may include inter view prediction. The inter prediction module 710 may configure an inter view reference picture by sampling a reference view picture. The inter prediction module 710 may conduct inter view prediction using a reference picture list including the inter view reference picture. A reference relation between views may be signaled through information specifying inter view dependency.
[122] Meanwhile, in case the current view picture and the reference view picture have the same size, sampling applied to the reference view picture may mean generation of a reference sample by sample copying or interpolation from the reference view picture. In case the current view picture and the reference view picture have different sizes, sampling applied to the reference view picture may mean upsampling or downsampling. For example, in case views have different resolutions, a restored picture of the reference view may be upsampled to configure an inter view reference picture.
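When the views differ in resolution, the reference view must be resampled before it can serve as an inter view reference. A minimal nearest-neighbor upsampler is sketched below purely as an illustration; real codecs use specified interpolation filters:

```python
def upsample_nearest(pic, out_w: int, out_h: int):
    """Upsample a 2D picture (list of rows of samples) to out_w x out_h by
    nearest-neighbor mapping, e.g. to build an inter view reference picture
    from a lower-resolution restored reference view picture."""
    in_h, in_w = len(pic), len(pic[0])
    return [[pic[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
            for y in range(out_h)]

# A 2x2 restored reference view picture upsampled to 4x4.
print(upsample_nearest([[1, 2], [3, 4]], 4, 4))
```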
[123] Which view picture is to be used to configure an inter view reference picture may be determined considering, e.g., encoding costs. The encoder may send to a decoding device information specifying a view to which a picture to be used as an inter view reference picture belongs.
[124] A picture used to predict the current block in a view referenced in inter view prediction--that is, the reference view--may be the same as a picture of the same access unit (AU) as the current picture (the picture targeted for prediction in the current view).
[125] The subtractor 725 may generate a residual block (residual signal) by a differential between the current block and the prediction block.
[126] The transform module 730 transforms the residual block into a transform coefficient. In transform skip mode, the transform module 730 may skip the transform of the residual block.
[127] The quantization module 740 quantizes the transform coefficient into a quantized coefficient according to quantization parameters.
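Paragraphs [125] to [127] describe the classic residual pipeline. A toy sketch follows; the identity transform and uniform quantizer here are simplifying assumptions standing in for the codec's real transform and parameter-driven quantization:

```python
def encode_residual(current, prediction, q_step: int = 10):
    """Subtract (subtractor 725), 'transform' and quantize (modules 730/740)."""
    residual = [c - p for c, p in zip(current, prediction)]   # residual signal
    coeffs = residual                                          # transform skip, for brevity
    return [round(c / q_step) for c in coeffs]                 # quantized coefficients

print(encode_residual([100, 98, 90, 91], [96, 96, 96, 96]))    # [0, 0, -1, 0]
```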

[128] The entropy encoding unit 750 entropy-encodes the values obtained by the quantization module 740, or encoding parameters obtained in the course of encoding, into a bitstream according to a probability distribution. The entropy encoding unit 750 may also entropy-encode information (e.g., syntax elements) for image decoding in addition to the pixel information of the image.
[129] The encoding parameters may include, as information necessary for encoding and decoding, information inferable in the course of encoding or decoding, as well as information such as syntax elements encoded by the encoder and transferred to the decoding device.
[130] The residual signal may mean a difference between the original signal and the prediction signal, a signal obtained by transforming the difference between the original signal and the prediction signal, or a signal obtained by transforming the difference between the original signal and the prediction signal and quantizing the transformed difference. From a block perspective, the residual signal may be denoted a residual block.
[131] In case entropy encoding applies, symbols may be represented in such a way that a symbol with a higher chance of occurrence is assigned fewer bits while another with a lower chance of occurrence is assigned more bits, and accordingly, the size of a bitstream for symbols targeted for encoding may be reduced. As such, image encoding may have an increased compression capability through entropy encoding.
[132] Entropy encoding may employ an encoding scheme such as exponential Golomb, context-adaptive variable length coding (CAVLC), or context-adaptive binary arithmetic coding (CABAC). For example, the entropy encoding unit 750 may perform entropy encoding using a variable length coding/code (VLC) table. The entropy encoding unit 750 may derive a binarization method and a target symbol and a probability model of the target symbol/bin and may perform entropy encoding using the derived binarization method and probability model.
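Exponential Golomb, named in [132], is a simple instance of the shorter-codes-for-likelier-symbols principle of [131]. A sketch of the standard order-0 encoder:

```python
def exp_golomb(n: int) -> str:
    """Order-0 exponential Golomb code for an unsigned symbol n: write n+1 in
    binary, then prefix it with (bit length - 1) zeros. Small (more probable)
    symbols get shorter codewords."""
    code = bin(n + 1)[2:]              # binary of n+1, e.g. n=4 -> '101'
    return "0" * (len(code) - 1) + code

for n in range(5):
    print(n, exp_golomb(n))            # 0->'1', 1->'010', 2->'011', 3->'00100', 4->'00101'
```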
[133] The quantized coefficient may be inverse-quantized by the dequantization module 760 and may be inverse-transformed by the inverse transform module 770. The inverse-quantized and inverse-transformed coefficient is added to the prediction block by the adder 775, thus producing a restored block.
[134] The restored block goes through the filter 780. The filter 780 may apply at least one or more of a deblocking filter, a sample adaptive offset (SAO), and an adaptive loop filter (ALF) to the restored block or restored picture. The restored block, after having gone through the filter 780, may be stored in the reference picture buffer 790.
[135]
[136] Fig. 8 is a block diagram illustrating a configuration of an image decoder according to an embodiment of the present invention.
[137] Fig. 8 illustrates an example image decoder applicable to a multi-view structure that may be implemented by extending a single view-structured image decoder.
[138] In this case, the image decoder of Fig. 8 may be used in a texture decoder and/or depth information map decoder as shown in Fig. 6. For ease of description, as used herein, the terms "decrypting" and "decoding" may be interchangeably used, and the terms "decoding device" and "decoder" may be interchangeably used.
[139] Referring to Fig. 8, the image decoder 800 includes an entropy decoding unit 810, a dequantization module 820, an inverse-transform module 830, an intra prediction module 840, an inter prediction module 850, a filter 860, and a reference picture buffer 870.
[140] The image decoder 800 may receive the bitstream from the encoder, decode the bitstream in intra mode or inter mode, and output a reconstructed image.
[141] In intra mode, the switch may switch to intra prediction, and in inter mode, the switch may switch to inter prediction.
[142] The image decoder 800 may obtain a residual block restored from the received bitstream, generate a prediction block, and add the restored residual block and the prediction block to generate a reconstructed block, i.e., a restored block.
[143] The entropy decoding unit 810 may entropy-decode the received bitstream according to a probability distribution into information such as quantized coefficients and syntax elements.
[144] The quantized coefficient is inverse-quantized by the dequantization module 820 and is inverse-transformed by the inverse transform module 830. The quantized coefficient may be inverse-quantized/inverse-transformed into a restored residual block.
[145] In intra mode, the intra prediction module 840 may generate a prediction block for the current block using a pixel value of an already encoded neighboring block of the current block.
[146] In inter mode, the inter prediction module 850 may generate the prediction block for the current block by performing motion compensation using the reference picture stored in the reference picture buffer 870 and the motion vector.
[147] In a multi-view structure, inter prediction applying to inter mode may include inter view prediction. The inter prediction module 850 may configure an inter view reference picture by sampling a reference view picture. The inter prediction module 850 may conduct inter view prediction using a reference picture list including the inter view reference picture. A reference relation between views may be signaled through information specifying inter view dependency.
[148] Meanwhile, in case the current view picture (current picture) and the reference view picture have the same size, sampling applied to the reference view picture may mean generation of a reference sample by sample copying or interpolation from the reference view picture. In case the current view picture and the reference view picture have different sizes, sampling applied to the reference view picture may mean upsampling or downsampling.
[149] For example, in case inter view prediction applies to views with different resolutions, a restored picture of the reference view may be upsampled to configure an inter view reference picture.
[150] In this case, information specifying a view to which a picture to be used as an inter view reference picture belongs may be transmitted from the encoder to the decoder.
[151] A picture used to predict the current block in a view referenced in inter view prediction--that is, the reference view--may be the same as a picture of the same access unit (AU) as the current picture (the picture targeted for prediction in the current view).
[152] The restored residual block and the prediction block are added by the adder 855 into a restored block. In other words, the residual sample and the prediction sample are added to each other into a restored sample or restored picture.
[153] The restored picture is filtered by the filter 860. The filter 860 may apply at least one or more of a deblocking filter, an SAO, and an ALF to the restored block or restored picture. The filter 860 outputs a reconstructed (modified) or filtered restored picture (reconstructed picture). The reconstructed image is stored in the reference picture buffer 870 for use in inter prediction.
[154] Although in the embodiment described in connection with Figs. 7 and 8 the modules perform their respective functions differently from each other, the present invention is not limited thereto. For example, one module may perform two or more functions. For example, the respective operations of the intra prediction module and the inter prediction module as shown in Figs. 7 and 8 may be carried out by one module (a predicting unit).
[155] Meanwhile, as described above in connection with Figs. 7 and 8, one encoder/decoder performs encoding/decoding on all of the multiple views. However, this is merely for ease of description, and separate encoders/decoders may be configured for the multiple views, respectively.
[156] In such case, the encoder/decoder for the current view may perform encoding/decoding on the current view using information regarding another view. For example, the predicting unit (inter prediction module) for the current view may perform intra prediction or inter prediction on the current block using the pixel information or restored picture information of another view.
[157] Although inter view prediction is described herein, a current layer may be encoded/decoded using information on another view regardless of whether an encoder/decoder is configured for each view or one device processes multiple views.
[158] The description of views according to the present invention may apply likewise to layers supporting scalability. For example, the view as described herein may be a layer.
[159]
[160] Fig. 9 is a view illustrating an exemplary prediction structure for a 3D image codec. For ease of description, Fig. 9 illustrates a prediction structure for encoding textures obtained by three cameras and depth information maps respectively corresponding to the textures.
[161] As shown in Fig. 9, the three textures respectively obtained from the three cameras are denoted T0, T1, and T2 according to views, and the three depth information maps respectively corresponding to the three textures are denoted D0, D1, and D2 according to the views. Here, T0 and D0 are images obtained at view 0, T1 and D1 at view 1, and T2 and D2 at view 2. In this case, the squares shown in Fig. 9 are images (pictures).
[162] The images (pictures) are classified into I pictures (intra pictures), P pictures (uni-prediction pictures), and B pictures (bi-prediction pictures) depending on encoding/decoding types, and each picture may be encoded/decoded depending on its encoding/decoding type. For I pictures, the images themselves are encoded without going through inter prediction. For P pictures, only uni-directionally present reference images may be subjected to inter prediction, and for B pictures, bi-directionally present reference images may be subjected to inter prediction. In this case, the arrows shown in Fig. 9 denote directions of prediction. In other words, a texture and its depth information map may be co-dependently encoded/decoded depending on prediction directions.
[163]
[164] Motion information on the current block is needed to encode/decode an image through inter prediction. To infer the motion information on the current block, the following may come in use: a method using motion information on a block adjacent to the current block, a method using a temporal correlation within the same view, and a method using an inter-view correlation at a neighboring view. The above-described inter prediction methods may be used in combination for one picture. Here, the current block refers to a block where prediction is

performed. The motion information may mean a motion vector, a reference image
number,
and/or a prediction direction (e.g., whether it is uni-directional prediction
or bi-directional
prediction, whether it uses a temporal correlation, or whether an inter-view
correlation is used,
etc.).
[165] In this case, the prediction direction may be typically classified into uni-directional prediction or bi-directional prediction depending on how the reference picture lists (RefPicList) are used. Uni-directional prediction is classified into forward prediction (Pred_L0: Prediction L0) using the forward reference picture list (LIST 0, L0) and backward prediction (Pred_L1: Prediction L1) using the backward reference picture list (LIST 1, L1). Further, bi-directional prediction (Pred_BI: Prediction BI), using both the forward reference picture list (LIST 0) and the backward reference picture list (LIST 1), indicates that there is both forward prediction and backward prediction. Even the case where the forward reference picture list (LIST 0) is copied to the backward reference picture list (LIST 1) so that two processes of forward prediction are present may also belong to the category of bi-directional prediction.
[166] A prediction direction may be defined using predFlagL0 and predFlagL1. In this case, predFlagL0 is an indicator indicating whether the forward reference picture list (List 0) is used, and predFlagL1 is an indicator indicating whether the backward reference picture list (List 1) is used. For example, in the case of uni-directional forward prediction, predFlagL0 may be '1' and predFlagL1 '0'; in the case of uni-directional backward prediction, predFlagL0 '0' and predFlagL1 '1'; and in the case of bi-directional prediction, predFlagL0 '1' and predFlagL1 '1.'
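As a rough illustration, and not part of the standard itself, the mapping from these two flags to a prediction direction can be sketched in Python as follows (the function name and return strings are illustrative):

    def prediction_direction(pred_flag_l0: int, pred_flag_l1: int) -> str:
        # Map (predFlagL0, predFlagL1) to a prediction direction, following
        # the description above: each flag indicates whether the forward
        # (List 0) or backward (List 1) reference picture list is used.
        if pred_flag_l0 == 1 and pred_flag_l1 == 0:
            return "uni-directional forward prediction (Pred_L0)"
        if pred_flag_l0 == 0 and pred_flag_l1 == 1:
            return "uni-directional backward prediction (Pred_L1)"
        if pred_flag_l0 == 1 and pred_flag_l1 == 1:
            return "bi-directional prediction (Pred_BI)"
        return "no inter prediction"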
[167]

[168] Fig. 10 illustrates an example in which neighboring blocks are used to configure a merge candidate list for a current block.
[169] Merge mode is a method for performing inter prediction. Merge mode
may
employ motion information on neighboring blocks of a current block as motion
information on
the current block (for example, at least one of a motion vector, a reference
picture list, and a
reference picture index). In this case, the use of the motion information on
the neighboring
blocks as motion information on the current block is referred to as merging,
motion merging, or
merging motion.
[170] In merge mode, per-coding unit (CU) merging motion and per-prediction
unit
(PU) merging motion are possible.
[171] The case where merging motion is performed on a per-block (e.g., CU or PU) basis (for ease of description, hereinafter "block") requires information regarding whether the merging motion is performed per block partition and information regarding which one of the neighboring blocks of the current block the merging motion is done with.
[172] A merge candidate list may be configured to perform merging motion.
[173] The merge candidate list refers to a list of pieces of motion
information, and this
may be generated before merge mode is performed. Here, the motion information
of the merge
candidate list may be motion information on the neighboring blocks of the
current block or
motion information newly created by combining the pieces of motion information
already
present in the merge candidate list. The motion information on the neighboring
blocks (for
example, a motion vector and/or reference picture index) may be motion
information specified
by the neighboring blocks or motion information stored in the neighboring
blocks (or used to
decode the neighboring blocks).

[174] In this case, the neighboring blocks, as shown in Fig. 10, may include neighboring blocks A, B, C, D and E positioned spatially adjacent to the current block and a co-located candidate block H or M temporally corresponding to the current block. The co-located candidate block refers to a block located at a corresponding position in a co-located picture temporally corresponding to the current picture including the current block. If the H block is
available in the co-located picture, the H block may be determined as the co-
located candidate
block, and if unavailable, the M block in the co-located picture may be
determined as the co-
located candidate block.
[175] Upon configuring the merge candidate list, it is determined whether the motion information on the neighboring blocks (A, B, C, D, and E) and the co-located candidate block (H or M) may be used as merge candidates to configure the merge candidate list of the current block. In other words, motion information on blocks available for inter prediction of the current block may be added to the merge candidate list as merge candidates.
[176] For example, as a method for configuring a merge candidate list for an X block, 1) in case a neighboring block A is available, the neighboring block A is added to the merge candidate list. 2) Thereafter, only when the motion information on neighboring block B is not the same as the motion information on neighboring block A, neighboring block B is added to the merge candidate list. 3) In the same manner, only when the motion information on neighboring block C differs from the motion information on neighboring block B, neighboring block C is added to the merge candidate list, and 4) only when the motion information on neighboring block D differs from the motion information on neighboring block C, neighboring block D is added to the merge candidate list. Further, 5) only when the motion information on neighboring block E is different from the motion information on neighboring block D, neighboring block E may be added to the merge candidate list, and 6) finally, neighboring block H (or M) is added to the merge candidate list. In sum, the neighboring blocks may be added to the merge candidate list in the order of A→B→C→D→E→H (or M), as sketched below. Here, the same motion information may mean using the same motion vector, the same reference picture, and the same prediction direction (uni-directional or bi-directional).
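The ordering and pairwise pruning of steps 1) to 6) may be sketched in Python as follows; MotionInfo and the equality test are simplified stand-ins for the motion vector, reference picture, and prediction direction comparison described above, and all names are illustrative:

    from typing import List, NamedTuple

    class MotionInfo(NamedTuple):
        mv: tuple        # motion vector, e.g. (x, y)
        ref_pic: int     # reference picture index
        direction: str   # uni-directional or bi-directional

    def build_merge_list(a, b, c, d, e, h_or_m) -> List[MotionInfo]:
        # Each argument is the MotionInfo of the corresponding neighboring
        # block, or None when that block is unavailable. Note that B, C, D
        # and E are compared only against their immediate predecessor, not
        # against every candidate already in the list.
        candidates: List[MotionInfo] = []
        if a is not None:
            candidates.append(a)
        for prev, cur in ((a, b), (b, c), (c, d), (d, e)):
            if cur is not None and cur != prev:
                candidates.append(cur)
        if h_or_m is not None:  # co-located (temporal) candidate comes last
            candidates.append(h_or_m)
        return candidates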
[177] The phrases "adding a neighboring block to a merge candidate list as a merge candidate" and "adding motion information to a merge candidate list as a merge candidate" are used interchangeably herein for ease of description, as the two phrases are substantially the same in meaning. For example, a neighboring block as a merge candidate may mean the motion information on the block.
[178] Fig. 11 is a view illustrating an exemplary process of deriving motion information on a current block using motion information at a neighboring view.
[179] In connection with Fig. 11, only one view is used to derive the motion information on the current block merely for ease of description. However, there may be two or more neighboring views.
[180] Referring to Fig. 11, a 3D video system may use motion information at a neighboring view in order to efficiently encode/decode motion information. Specifically, for the current block shown in Fig. 11 (the block at current location X), a target block (reference location XR) located at a neighboring view is searched in order to derive the motion information on the current block. In this case, the target block at the neighboring view means a block corresponding to the current block. Since the only difference between the current picture at the current view and that at the reference view lies in the position of the cameras, the target block at the neighboring view may be derived from the disparity vector (DV) as described above.

[181]
[182] Fig. 12 is a view illustrating an example in which one prediction unit (PU) is split into several sub prediction units.
[183] In the example illustrated in Fig. 12, a prediction unit with a size of 64x64 is divided into sub prediction units each with a size of 8x8. For ease of description in connection with Fig. 12, the size of the prediction unit is 64x64, but it is not limited thereto; the size may be 32x32, 16x16, 8x8, or 4x4. In a 3D video system, one prediction unit may be split into a number of sub prediction units. In this case, derivation of motion information using a disparity vector is carried out on a per-sub prediction unit basis. The sub prediction unit may have a predetermined size (e.g., 4x4, 8x8, or 16x16), and the size of the sub prediction unit may be designated upon encoding. Information on the size of the sub prediction unit may be included and signaled in a video parameter set (VPS) extension syntax.
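A minimal sketch of this partitioning, assuming the sub prediction unit size has already been decoded from the parameter set (all names illustrative):

    def split_into_sub_pus(pu_w: int, pu_h: int, sub_pb_size: int):
        # Yield (x, y, w, h) for each sub prediction unit inside a PU.
        # The effective sub-PU dimensions are clamped to the PU dimensions,
        # mirroring nSbW = Min(nPbW, SubPbSize) used later in the text.
        sub_w = min(pu_w, sub_pb_size)
        sub_h = min(pu_h, sub_pb_size)
        for y in range(0, pu_h, sub_h):
            for x in range(0, pu_w, sub_w):
                yield (x, y, sub_w, sub_h)

    # The example of Fig. 12: a 64x64 PU with 8x8 sub prediction units.
    assert len(list(split_into_sub_pus(64, 64, 8))) == 64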
[184]
[185] Fig. 13 is a view illustrating an exemplary process of deriving
motion
information on a current block using a reference block.
[186] The process of deriving motion information on a current block means setting the motion information present in the reference block as the motion information on the current block. However, a 3D video system may derive motion information on a per-sub prediction unit basis for the current block X positioned in the current picture at the current view in order to efficiently encode/decode motion information.
[187] In other words, the 3D video system may set the motion
information present in
the sub prediction unit of the reference block XR to the motion information on
the sub prediction
unit of the current block X. In this case, the reference block XR may mean a
reference block XR

positioned in the current picture at the reference view. A specific process of
deriving motion
information is described below.
[188]
[189] Fig. 14 is a view illustrating an exemplary reference block used to derive motion information on a current block.
[190] Referring to Fig. 14, the reference block may mean a PU, and one reference
one reference
block may include a total of 16 sub prediction units. In this case, motion
information on each sub
prediction unit in the current block may be derived from motion information
present in the sub
prediction units of the reference block.
[191] Now described is a method of deriving motion information on sub prediction units of a current block using a reference block with reference to Figs. 15a to 15e and Figs. 16a to 16g.
[192]
[193] Figs. 15a to 15e are views schematically illustrating an exemplary process of deriving motion information using motion information stored in a storage space. In this case, the reference block used in Figs. 15a to 15e may be a reference block as shown in Fig. 14.
[194] When the sub prediction unit of the current block brings the motion information on the sub prediction units of the reference block, not all of the sub prediction units of the reference block necessarily have motion information. In other words, there might be some sub prediction units of the reference block from which motion information cannot be brought up. Accordingly, in case there are sub prediction units from which motion information cannot be obtained, the motion information on a previous or subsequent sub prediction unit of the currently referenced sub prediction unit may be put to use in order to make up for the failure to derive motion information from the sub prediction unit of

the current block. For example, the motion information on a sub prediction
unit available in the
reference block may be previously stored in preparation for the case where
there is some other
sub prediction unit of the reference block from which motion information
cannot be derived, so
that the previously stored motion information may be inserted into the sub
prediction unit of the
current block to derive the motion information on the current block.
[195] For a better understanding of the above-described method, each step of an exemplary method for deriving motion information on a sub prediction unit of a current block when the first sub prediction unit of a reference block has its motion information while the second or a subsequent sub prediction unit of the reference block does not is described below with reference to the drawings.
[196]
[197] Fig. 15a is a view illustrating the initial state of sub prediction units of a current block and a storage space.
[198] Referring to Fig. 15a, Ref denotes a reference block, and Ref 0, 1, 2, and 3 respectively denote sub prediction units in the reference block. That is, Ref 0 means sub prediction unit 0 of the reference block (a first sub prediction unit of the reference block), Ref 1 sub prediction unit 1 of the reference block (a second sub prediction unit of the reference block), Ref 2 sub prediction unit 2 of the reference block (a third sub prediction unit of the reference block), and Ref 3 sub prediction unit 3 of the reference block (a fourth sub prediction unit of the reference block). Cur denotes the current block, and Cur 0, 1, 2, and 3 respectively denote sub prediction units in the current block. That is, Cur 0 means sub prediction unit 0 of the current block (a first sub prediction unit of the current block), Cur 1 sub prediction unit 1 of the current block (a second sub prediction unit of the current block), Cur 2 sub prediction unit 2 of the current block (a third sub prediction unit of the current block), and Cur 3 sub prediction unit 3 of the current block (a fourth sub prediction unit of the current block).
[199] In this case, 'X' marked in Ref 2 of Fig. 15a denotes that motion information cannot be derived using sub prediction unit 2 of the reference block.
[200]
[201] Fig. 15b shows a first step of deriving motion information from a sub prediction unit of the reference block.
[202] Referring to Fig. 15b, motion information is derived from sub
prediction unit 0
of the reference block for sub prediction unit 0 of the current block. In this
case, since motion
information may be derived from sub prediction unit 0 of the reference block,
motion
information on sub prediction unit 0 of the reference block is stored in the
storage space. In this
case, the motion information stored in the storage space may be defined as
motion information 0,
which is used when motion information cannot be derived from some other sub
prediction units
of the reference block.
[203]
[204] Fig. 15c shows a second step of deriving motion information from a sub
from a sub
prediction unit of the reference block.
[205] Referring to Fig. 15c, motion information is derived from sub
prediction unit 1
of the reference block for sub prediction unit 1 of the current block. In this
case, since motion
information may be derived from sub prediction unit 1 of the reference block,
motion
information on sub prediction unit 1 of the reference block is stored in the
storage space. In this
case, the stored motion information on sub prediction unit 1 may be defined as
motion
information 1, and motion information 1 may be stored in the storage space
instead of motion

information 0. Motion information 1 may be used when motion information cannot
be derived
from some other sub prediction unit of the reference block.
[206]
[207] Fig. 15d shows a third step of deriving motion information from a sub
prediction
unit of the reference block.
[208] Referring to Fig. 15d, an attempt is made to derive motion
information from sub
prediction unit 2 of the reference block for sub prediction unit 2 of the
current block. However,
since no motion information can be derived from sub prediction unit 2 of the
reference block,
motion information on sub prediction unit 2 of the current block is derived
from the motion
information stored in the storage space. In this case, the motion information
stored in the storage
space may be motion information 1.
[209]
[210] Fig. 15e shows a fourth step of deriving motion information
from a sub
prediction unit of the reference block.
[211] Referring to Fig. 15e, motion information is derived from sub
prediction unit 3
of the reference block for sub prediction unit 3 of the current block. In this
case, since motion
information may be derived from sub prediction unit 3 of the reference block,
motion
information on sub prediction unit 3 of the reference block is stored in the
storage space. In this
case, the stored motion information on sub prediction unit 3 may be defined as
motion
information 3, and motion information 3 may be stored in the storage space
instead of motion
information 1. Motion information 3 may be used when motion information cannot
be derived
from some other sub prediction unit of the reference block.
[212]
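The behavior walked through in Figs. 15a to 15e can be condensed into the following Python sketch; ref_motion stands for the motion information of the reference block's sub prediction units (None where derivation is impossible), and the storage space is modeled as a single local variable. All names are illustrative:

    def derive_with_storage(ref_motion):
        # Derive per-sub-prediction-unit motion information, falling back
        # to the value last stored in the storage space when a reference
        # sub prediction unit has none. This sketch assumes, as in
        # Figs. 15a to 15e, that the first sub prediction unit has motion
        # information; the opposite case is covered in Figs. 16a to 16g.
        storage = None                     # the storage space, initially empty
        cur_motion = [None] * len(ref_motion)
        for idx, mi in enumerate(ref_motion):
            if mi is not None:
                storage = mi               # store/update the motion information
                cur_motion[idx] = mi
            elif storage is not None:
                cur_motion[idx] = storage  # reuse the stored motion information
        return cur_motion

    # Fig. 15: Ref 2 has no motion information, so Cur 2 reuses motion
    # information 1.
    # derive_with_storage(["mi0", "mi1", None, "mi3"])
    # -> ['mi0', 'mi1', 'mi1', 'mi3']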

[213] Figs. 16a to 16g are views schematically illustrating another
exemplary process
of deriving motion information using motion information stored in a storage
space.
[214] Figs. 16a to 16g illustrate an exemplary process of deriving motion information in the case where sub prediction units of the reference block from which motion information cannot be derived come first, followed by a sub prediction unit of the reference block from which motion information can be derived.
[215]
[216] Fig. 16a is a view illustrating the initial state of sub prediction
units of a current
block and a storage space.
[217] Referring to Fig. 16a, Ref denotes a reference block, and Ref 0, 1,
2, and 3
respectively denote sub prediction units in the reference block. That is, Ref
0 means sub
prediction unit 0 of the reference block, Ref 1 sub prediction unit 1 of the
reference block, Ref 2
sub prediction unit 2 of the reference block, and Ref 3 sub prediction unit 3
of the reference
block. Cur denotes the current block, and Cur 0, 1, 2, and 3 respectively denote sub prediction units in the current block. That is, Cur 0 means sub prediction unit 0 of the current block, Cur 1 sub prediction unit 1 of the current block, Cur 2 sub prediction unit 2 of the current block, and Cur 3 sub prediction unit 3 of the current block. In this case, 'X' marked in Ref 0 and Ref 1 of Fig. 16a denotes that motion information cannot be derived using sub prediction unit 0 or sub prediction unit 1 of the reference block.
[218]
[219] Fig. 16b shows a first step of deriving motion information from a sub
prediction
unit of the reference block.
[220] Referring to Fig. 16b, an attempt is made to derive motion
information from sub

prediction unit 0 of the reference block for sub prediction unit 0 of the
current block. However,
as described above, no motion information can be derived from sub prediction
unit 0 of the
reference block, nor is there motion information stored in the storage space.
Accordingly, a
second step is performed.
[221]
[222] Fig. 16c shows a second step of deriving motion information from a
sub
prediction unit of the reference block.
[223] Referring to Fig. 16c, an attempt is made to derive motion
information from sub
prediction unit 1 of the reference block for sub prediction unit 1 of the
current block. However,
as described above, no motion information can be derived from sub
prediction unit 1 of the
reference block, nor is there motion information stored in the storage space.
Accordingly, a
third step is performed.
[224]
[225] Fig. 16d shows a third step of deriving motion information from a sub prediction unit of the reference block.
[226] Referring to Fig. 16d, motion information is derived from sub
prediction unit 2
of the reference block for sub prediction unit 2 of the current block. In this
case, since motion
information may be derived from sub prediction unit 2 of the reference block,
motion
information on sub prediction unit 2 of the reference block is stored in the
storage space. In this
case, the motion information stored in the storage space may be defined
as motion information 2,
which is used when motion information cannot be derived from some other sub
prediction units
of the reference block.
[227]

[228] Fig. 16e shows a fourth step of deriving motion information from a
sub
prediction unit of the reference block.
[229] Referring to Fig. 16e, motion information is derived using motion
information 2
stored in the storage space for sub prediction unit 0 of the current block.
[230]
[231] Fig. 16f shows a fifth step of deriving motion information from a sub
prediction
unit of the reference block.
[232] Referring to Fig. 16f, motion information is derived using motion
information 2
stored in the storage space for sub prediction unit 1 of the current block.
[233]
[234] Fig. 16g shows a sixth step of deriving motion information from a sub prediction unit of the reference block.
[235] Referring to Fig. 16g, motion information is derived from sub
prediction unit 3
of the reference block for sub prediction unit 3 of the current block. In this
case, since motion
information may be derived from sub prediction unit 3 of the reference block,
motion
information on sub prediction unit 3 of the reference block is stored in the
storage space. In this
case, the stored motion information on sub prediction unit 3 may be defined as
motion
information 3, and motion information 3 may be stored in the storage space
instead of motion
information 2. Motion information 3 may be used when motion information cannot
be derived
from some other sub prediction unit of the reference block.
[236]
[237] Fig. 17 is a flowchart illustrating a method of deriving motion
information on a
sub prediction unit of a current block using a sub prediction unit of a
reference block, according

to an embodiment of the present invention. Each operation in the process of Fig. 17 may be performed by an encoder and/or a decoder, or by an inter prediction module in the encoder and/or decoder, for example, the inter prediction module 720 of Fig. 7 or the inter prediction module 850 of Fig. 8.
[238] A process when a sub prediction unit of a reference block has its
motion
information is first described with reference to Fig. 17. The inter prediction
module determines
whether the sub prediction unit of the reference block has motion information
(S1700).
[239] The inter prediction module, if the sub prediction unit of the
reference block has
motion information, inserts the motion information present in the sub
prediction unit of the
reference block into a sub prediction unit of a current block which is
targeted for deriving motion
information.
[240] Thereafter, the inter prediction module determines whether the storage space stores motion information (S1720). If the storage space stores motion information, step S1750 is performed. In this case, the storage space has been described above in detail, and so has the motion information.
[241] Unless the storage space stores motion information, the inter
prediction module
determines whether the sub prediction unit of the current block, which is
targeted for deriving
motion information, is the first sub prediction unit of the current block
(S1730). If the sub
prediction unit of the current block targeted for deriving motion information
is the first sub
prediction unit of the current block, the inter prediction module performs
step S1750.
[242] In step S1730, unless the sub prediction unit of the current block is the first sub prediction unit, the inter prediction module inserts the motion information present in the sub prediction unit of the reference block into the sub prediction unit(s) of the current block that are positioned ahead of the current sub prediction unit. For example, if the sub prediction unit of the current block, which is targeted for deriving motion information, is the third sub prediction unit, the inter prediction module inserts the motion information on the sub prediction unit of the reference block into the first and second sub prediction units of the current block.
[243] The inter prediction module stores (and updates the existing
information in the
storage space with) the motion information on the sub prediction unit of the
reference block in
the storage space (S1750). In this case, a specific description of storing and
updating motion
information has been given above.
[244] The inter prediction module determines whether the sub prediction
unit of the
reference block which is targeted for deriving motion information is the last
sub prediction unit
of the reference block (S1790). If the sub prediction unit of the reference
block which is targeted
for deriving motion information is the last sub prediction unit of the
reference block, the inter
prediction module terminates the motion information deriving process. Unless
the sub prediction
unit of the reference block which is targeted for deriving motion information
is the last sub
prediction unit of the reference block, the inter prediction module goes to a
next sub prediction
unit of the reference block for processing (S1780). Thereafter, the inter
prediction module
repeats steps S1700 to S1790.
[245]
[246] If no sub prediction unit of the reference block has motion
information, the
following process proceeds.
[247] The inter prediction module determines whether a sub prediction
unit of the
reference block has motion information (S1700).

[248] If the sub prediction unit of the reference block does not have
motion
information, the inter prediction module determines whether the storage space
retains motion
information (S1770). Unless the storage space retains motion information, the
inter prediction
module performs step S1790.
[249] In case the storage space retains motion information, the inter
prediction module
inserts the motion information stored in the storage space into the sub
prediction unit of the
reference block which is targeted for deriving motion information (S1750).
[250] After performing the above steps, the inter prediction module
determines
whether the sub prediction unit of the reference block which is targeted for
deriving motion
information is the last sub prediction unit of the reference block (S1790). If
the sub prediction
unit of the reference block which is targeted for deriving motion information
is the last sub
prediction unit of the reference block, the inter prediction module terminates
the motion
information deriving process. Unless the sub prediction unit of the reference
block which is
targeted for deriving motion information is the last sub prediction unit of
the reference block, the
inter prediction module goes to a next sub prediction unit of the reference
block for processing
(S1780). Thereafter, the inter prediction module repeats steps S1700 to S1790.
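Putting both branches of Fig. 17 together, the flow of steps S1700 to S1790 might be condensed as follows (same illustrative conventions as the earlier sketch, with back-filling of the sub prediction units that precede the first available one):

    def derive_motion_fig17(ref_motion):
        # Propagate the last available motion information forward, and
        # back-fill the sub prediction units that precede the first
        # available one (a simplified sketch of steps S1700 to S1790).
        storage = None
        cur_motion = [None] * len(ref_motion)
        for idx, mi in enumerate(ref_motion):
            if mi is not None:
                if storage is None and idx > 0:
                    # First available motion information found: back-fill
                    # the earlier sub prediction units (cf. step S1730).
                    for k in range(idx):
                        cur_motion[k] = mi
                storage = mi
                cur_motion[idx] = mi
            elif storage is not None:
                cur_motion[idx] = storage
            # If neither the reference sub-PU nor the storage space has
            # motion information, nothing can be derived yet.
        return cur_motion

    # Fig. 16: Ref 0 and Ref 1 lack motion information; Cur 0 and Cur 1
    # are back-filled once motion information 2 is found.
    # derive_motion_fig17([None, None, "mi2", "mi3"])
    # -> ['mi2', 'mi2', 'mi2', 'mi3']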
[251] Then, the inter prediction module derives a prediction sample
for the current
block based on the motion information on the current block derived by the
above steps. The
prediction sample may mean the above-described prediction signal, and the
prediction signal
may mean a difference between the original signal and the residual signal as
described above.
[252] The above-described process of deriving motion information on a sub prediction unit of a current block may specifically apply to 3D images as shown in Table 1. As described above, the operation shown in Table 1 may be performed by an encoder/decoder or an inter

prediction module of the encoder/decoder.

[253] [Table 1]
This process has the following inputs.
- Position (xPb, yPb) of left and upper end of current prediction unit
- Width (nPbW) and height of current prediction unit
- Reference view index refViewIdx
- Disparity vector mvDisp
This process has the following outputs.
- Flag availableFlagLXInterView for determining whether temporal inter-view motion candidate is available, where LX may be reference picture lists L0 and L1.
As used herein, 'temporal inter-view' means that a picture at a different view from that of the current picture may be referenced as a picture at another time (i.e., another POC) at the same view as the current picture.
- Temporal inter-view motion vector candidate mvLXInterView, where LX may be reference picture lists L0 and L1.
- Reference index refIdxLXInterView designating a reference picture present in reference picture list RefPicListLX, where LX may be reference picture lists L0 and L1.
LX may be reference picture lists L0 and L1. The following applies to LX.
- Flag availableFlagLXInterView is initialized as 0.
- Motion vector mvLXInterView is initialized as (0,0).

- Reference index refIdxLXInterView is initialized as -1.
Variables nSbW and nSbH are initialized as follows.
nSbW=Min(nPbW, SubPbSize)
nSbH=Min(nPbH, SubPbSize)
Variable ivRefPic is initialized as a picture having the same ViewIdx as
refViewIdx in the
current access unit. Variable
curSubBlockIdx is initialized as 0, and variable
lastAvailableFlag is initialized as 0.
The following applies to yBlk ranging from 0 to (nPbH/nSbH-1) and xBlk ranging
from 0 to
(nPbW/nSbW-1).
- Variable curAvailableFlag is initialized as 0.
- The following applies to X ranging from 0 to 1.
- Flag spPredFlagLX[xBlk][yBlk] is initialized as 0.
- Motion vector spMvLX is initialized as (0,0).
- Reference index spRefIdxLX[xBlk][yBlk] is initialized as -1.
- Reference block position (xRef, yRef) is derived as follows.
xRef = Clip3(0, PicWidthInSamplesL-1, xPb+xBlk*nSbW+nSbW/2+((mvDisp[0]+2)>>2))
yRef = Clip3(0, PicHeightInSamplesL-1, yPb+yBlk*nSbH+nSbH/2+((mvDisp[1]+2)>>2))
- Variable ivRefPb refers to luma prediction block at (xRef, yRef) in the
inter-view
reference picture indicated by ivRefPic.
- (xIvRefPb, yIvRefPb) refers to the left and upper position of the reference
block
indicated by ivRefPb.
- Unless ivRefPb has been encoded in intra mode, the following is performed on
X
ranging from 0 to 1.
- When X is 0 or current slice is slice B, the following is performed on Y
ranging from X to (1-X).
- refPicListLYIvRef, predFlagLYIvRef[x][y], mvLYIvRef[x][y], and refIdxLYIvRef[x][y], respectively, are set to RefPicListLY, PredFlagLY[x][y], MvLY[x][y], and RefIdxLY[x][y] in the picture indicated by ivRefPic.
- If predFlagLYIvRef[xIvRefPb][yIvRefPb] is 1, the following is performed on i ranging from 0 to num_ref_idx_lX_active_minus1 (the number of reference pictures in reference picture list).
- If POC of refPicListLYIvRef[refIdxLYIvRef[xIvRefPb][yIvRefPb]] is the same as RefPicListLX[i] and spPredFlagLX[xBlk][yBlk] is 0, the following applies.
spMvLX[xBlk][yBlk]=mvLYIvRef[xIvRefPb][yIvRefPb]
spRefIdxLX[xBlk][yBlk]=i
spPredFlagLX[xBlk][yBlk]=1
curAvailableFlag=1

The following applies according to curAvailableFlag.
- If curAvailableFlag is 1, the following order applies.
1. If lastAvailableFlag is 0, the following applies.
- The following applies to X ranging from 0 to 1.
mvLXInterView=spMvLX[xBlk][yBlk]
refIdxLXInterView=spRefIdxLX[xBlk][yBlk]
availableFlagLXInterView=spPredFlagLX[xBlk][yBlk]
- When curSubBlockIdx is larger than 0, the following applies to k ranging from 0 to (curSubBlockIdx-1).
- Variables i and j are derived as follows.
i=k%(nPSW/nSbW)
j=k/(nPSW/nSbW)
- The following applies to X ranging from 0 to 1.
spMvLX[i][j]=spMvLX[xBlk][yBlk]
spRefIdxLX[i][j]=spRefIdxLX[xBlk][yBlk]
spPredFlagLX[i][j]=spPredFlagLX[xBlk][yBlk]
2. Variable lastAvailableFlag is replaced with 1.
3. xBlk and yBlk are stored in variables xLastAvail and yLastAvail, respectively.
- If curAvailableFlag is 0, and lastAvailableFlag is 1, the following applies to X ranging from 0 to 1.
spMvLX[xBlk][yBlk]=spMvLX[xLastAvail][yLastAvail]
spRefIdxLX[xBlk][yBlk]=spRefIdxLX[xLastAvail][yLastAvail]
spPredFlagLX[xBlk][yBlk]=spPredFlagLX[xLastAvail][yLastAvail]
- Variable curSubBlockIdx is set to curSubBlockIdx+1.
[259]
[260] Table 1 is now described in detail.
[261] Referring to Table 1, the position of the left and upper end of the current prediction block, the width and height of the current prediction block, a reference view index, and a disparity vector are input to the inter prediction module. In this case, the position of the left and upper end of the current prediction block may be denoted (xPb, yPb), where 'xPb' may refer to the x-axis coordinate of the current prediction block, and 'yPb' the y-axis coordinate of the current prediction block. The width of the current prediction block may be denoted 'nPbW,' and the height of the current prediction block 'nPbH.' The reference view index may be denoted 'refViewIdx,' and the disparity vector 'mvDisp.' In this case, the inter prediction module may correspond to the above-described inter prediction module of the image encoder/decoder.
[262] Referring to Fig. 17, after finishing the process of deriving the motion information on the sub prediction unit of the current block using the sub prediction unit of the reference block, the inter prediction module outputs a flag for determining whether a temporal inter-view motion candidate is available, a temporal inter-view motion vector candidate, and a reference index designating a reference picture present in a reference picture list. In this case, the flag for determining whether a temporal inter-view motion candidate is available may be defined as 'availableFlagLXInterView,' and the temporal inter-view motion candidate may be defined as 'mvLXInterView.' The reference picture list may be denoted 'RefPicListLX,' and the reference index designating a reference picture present in the reference picture list may be defined as 'refIdxLXInterView.' In 'availableFlagLXInterView,' 'mvLXInterView,' 'RefPicListLX,' and 'refIdxLXInterView,' 'LX' may be reference picture list 0 (List 0, L0) or reference picture list 1 (List 1, L1).
[263] Now described is a method of deriving motion information on a sub prediction unit of a current block using a sub prediction unit of a reference block in order for an inter prediction module to derive the above-described outputs from the above-described inputs.
[264] The inter prediction module performs initialization before deriving motion information on a sub prediction unit of a current block using a sub prediction unit of a reference block. In this case, availableFlagLXInterView is set to 0, mvLXInterView to (0,0), and refIdxLXInterView to -1. When the inter prediction module performs initialization, the width and height of the sub prediction unit are initialized as well. In this case, the width of the sub prediction unit may be denoted 'nSbW,' and the height of the sub prediction unit 'nSbH.' A specific method of initializing variables nSbW and nSbH is given as Equation 1 below.
[265] [Equation 1]
nSbW = Min(nPbW, SubPbSize[nuh_layer_id])
[266] nSbH = Min(nPbH, SubPbSize[nuh_layer_id])
[267] In this case, SubPbSize denotes the size (including the height and width) of the sub prediction unit designated by the video parameter set (VPS), and nuh_layer_id denotes an index for identifying a layer (e.g., which reference view it is). Min() may be defined as in Equation 2 to output the smaller of two input variables.

Equation 2 to output the smaller of input variables.
[268] [Equation 2]
Min(x, y) = x, if x <= y; y, otherwise
[269]
[270] The inter prediction module may initialize not only the above-
described
variables but also information for identifying a sub prediction unit of the
current block and the
luma prediction block at (xRef, yRef) in the inter-view reference picture and
information for
identifying whether the motion information stored in the storage space is
available.
[271] In this case, the luma prediction block at (xRef, yRef) in the inter-view reference picture is set as a block in the picture having the same view index as the reference view index in the current access unit. This inter-view reference picture is defined as 'ivRefPic,' and the access unit means a unit in which an image is encoded/decoded. The access unit includes images with different views, which have the same picture order count (POC). For example, if there are three views, one access unit may include a common image and/or depth information image of the first view, a common image and/or depth information image of the second view, and a common image and/or depth information image of the third view. The reference view index may be defined as 'refViewIdx,' and the view index 'ViewIdx.' In this case, ViewIdx may mean the view of the current picture.
[272] In this case, the information for identifying a sub prediction unit of the current block is initialized to 0, and the information for identifying the sub prediction unit of the current block may be defined as 'curSubBlockIdx.' The information for identifying whether the motion information stored in the storage space is available is also set and initialized to 0, and the information for identifying whether the motion information stored in the storage space is available may be defined as 'lastAvailableFlag.'
[273]
[274] After initializing the above-described variables, the inter prediction module performs the following process on yBlk that ranges from 0 to (nPbH/nSbH-1) and xBlk that ranges from 0 to (nPbW/nSbW-1). Here, xBlk means the x coordinate of the block, and yBlk means the y coordinate of the block.
[275] First, the inter prediction module initializes the information for identifying whether to predict motion information from a sub prediction unit of the reference block, the sub prediction unit prediction flag, the motion information on the sub prediction unit, and the reference index of the sub prediction unit. Specifically, the information for identifying whether to predict the motion information from the sub prediction unit of the reference block may be set to 0. In this case, the information for identifying whether to predict motion information from the sub prediction unit of the reference block may be defined as 'curAvailableFlag.' The sub prediction unit prediction flag may be set to 0, and the sub prediction unit prediction flag may be defined as 'spPredFlagLX.' To represent the coordinates of the block, the sub prediction unit prediction flag may be defined as 'spPredFlagLX[ xBlk ][ yBlk ].' The motion vector of the sub prediction unit is set to (0, 0), and the motion vector of the sub prediction unit may be defined as 'spMvLX.' The reference index of the sub prediction unit may be set to -1, and the reference index of the sub prediction unit may be defined as 'spRefIdxLX.' To represent the coordinates of the block, the reference index of the sub prediction unit may be defined as 'spRefIdxLX[ xBlk ][ yBlk ].'
[276] The position (xRef, yRef) of the reference block may be set as
in the following
Equation 3.
[277] [Equation 3]
xRef = Clip3(0, PicWidthInSamplesL-1, xPb + xBlk*nSbW + nSbW/2 + ((mvDisp[0]+2)>>2))
[278] yRef = Clip3(0, PicHeightInSamplesL-1, yPb + yBlk*nSbH + nSbH/2 + ((mvDisp[1]+2)>>2))
[279] Here, xRef means the x coordinate of the position of the reference block, and yRef means the y coordinate of the position of the reference block. PicWidthInSamplesL means the width of the current picture, and PicHeightInSamplesL means the height of the current picture. Clip3() may be defined as in the following Equation 4.
[280] [Equation 4]
Clip3(x, y, z) = x, if z < x; y, if z > y; z, otherwise
[281]
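Equations 3 and 4 transcribe directly into Python; the function and parameter names below are illustrative, and the ((mvDisp + 2) >> 2) terms assume the quarter-sample disparity vector convention used above:

    def clip3(x, y, z):
        # Equation 4: clamp z into the range [x, y].
        return x if z < x else (y if z > y else z)

    def ref_block_position(x_pb, y_pb, x_blk, y_blk, n_sb_w, n_sb_h,
                           mv_disp, pic_w, pic_h):
        # Equation 3: position of the reference sub-block pointed to by
        # the disparity vector mvDisp (given in quarter-sample units).
        x_ref = clip3(0, pic_w - 1,
                      x_pb + x_blk * n_sb_w + n_sb_w // 2 + ((mv_disp[0] + 2) >> 2))
        y_ref = clip3(0, pic_h - 1,
                      y_pb + y_blk * n_sb_h + n_sb_h // 2 + ((mv_disp[1] + 2) >> 2))
        return x_ref, y_ref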
[282]
[283] Unless the inter-view reference block has been encoded in intra mode, the following process is performed on X that ranges from 0 to 1. The inter-view reference block refers to a luma prediction block at (xRef, yRef) in the inter-view reference picture indicated by ivRefPic, and the inter-view reference block may be defined as 'ivRefPb.' That is, ivRefPb denotes the luma prediction block at (xRef, yRef) in the inter-view reference picture indicated by ivRefPic, and ivRefPic denotes the inter-view reference picture. The position of the left and upper end of the reference block indicated by ivRefPb may be set to (xIvRefPb, yIvRefPb).
[284] When X is 0 or the current slice is slice B, each variable is reset for Y (Y ranges from X to (1-X)) as follows. refPicListLYIvRef is set to RefPicListLY in the picture indicated by ivRefPic, where RefPicListLY means a reference picture list. predFlagLYIvRef[ x ][ y ] is set to PredFlagLY[ x ][ y ] in the picture indicated by ivRefPic, where PredFlagLY means an identifier indicating a reference picture list. mvLYIvRef[ x ][ y ] is set to MvLY[ x ][ y ] in the picture indicated by ivRefPic, where MvLY means a motion vector. Likewise, refIdxLYIvRef[ x ][ y ] is set to RefIdxLY[ x ][ y ] in the picture indicated by ivRefPic, where RefIdxLY means a reference index.
[285] In this case, if predFlagLYIvRef[ xIvRefPb ][ yIvRefPb ] is 1, the following Equation 5 may apply to i ranging from 0 to num_ref_idx_lX_active_minus1 (the number of reference pictures in the reference picture list).
[286] [Equation 5]
spMvLX[xBlk][yBlk] = mvLYIvRef[xIvRefPb][yIvRefPb]
spRefIdxLX[xBlk][yBlk] = i
spPredFlagLX[xBlk][yBlk] = 1
[287] curAvailableFlag = 1
[288]
[289] Meanwhile, referring to Table 1, the following processes respectively apply to the case where curAvailableFlag is 1 and the case where curAvailableFlag is 0.
[290] If curAvailableFlag is 1, the inter prediction module performs the following process.
[291] If lastAvailableFlag is 0, the following Equation 6 may apply to X ranging from 0 to 1.
[292] [Equation 6]
mvLXInterView = spMvLX[xBlk][yBlk]
refIdxLXInterView = spRefIdxLX[xBlk][yBlk]
[293] availableFlagLXInterView = spPredFlagLX[xBlk][yBlk]

[294] If lastAvailableFlag is 0, and curSubBlockIdx is larger than 0, the
following
Equation 7 may apply to variables i and j for k ranging from 0 to
(curSubBlockIdx - 1).
[295] [Equation 7]
i = k % (nPSW/nSbW)
[296] j = k / (nPSW/nSbW)
[297] In this case, the following Equation 8 applies to X ranging from 0
to 1.
[298] [Equation 8]
spMvLX[i][j] = spMvLX[xBlk][yBlk]
spRefIdxLX[i][j] = spRefIdxLX[xBlk][yBlk]
[299] spPredFlagLX[i][j] = spPredFlagLX[xBlk][yBlk]
[300] 2. After the above-described process, the inter prediction module
replaces
lastAvailableFlag with 1.
[301] 3. Thereafter, the inter prediction module stores xBlk and yBlk in
variables
xLastAvail and yLastAvail, respectively.
[302]
[303] If curAvailableFlag is 0, and lastAvailableFlag is 1, the inter prediction module applies the following Equation 9 to X ranging from 0 to 1.
[304] [Equation 9]
spMvLX[xBlk][yBlk] = spMvLX[xLastAvail][yLastAvail]
spRefIdxLX[xBlk][yBlk] = spRefIdxLX[xLastAvail][yLastAvail]
[305] spPredFlagLX[xBlk][yBlk] = spPredFlagLX[xLastAvail][yLastAvail]
[306] After performing all of the above-described processes, variable curSubBlockIdx is set to curSubBlockIdx+1.

[307]
[308] The method of deriving motion information on a sub prediction unit of a current block described above in connection with Fig. 17, when unable to derive motion information from a sub prediction unit of a reference block, uses the motion information on a sub prediction unit of the reference block which has been referenced before (or afterwards). As such, the method of deriving motion information according to Fig. 17 must necessarily reference other sub prediction units of the reference block, and this method is thus dependent. A dependent motion information deriving method is an obstacle to parallel designs, which is described in detail with reference to Fig. 18.
[309]
[310] Fig. 18 is a view illustrating an exemplary process of deriving, in parallel, motion information on sub prediction units of a current block using sub prediction units of a reference block.
[311] Referring to Fig. 18, Ref means a reference block, and Refs 0, 1, 2, 3, 4, 5, 6, and 7 are sub prediction units 0, 1, 2, 3, 4, 5, 6, and 7, respectively, of the reference block. Cur means a current block, and Curs 0, 1, 2, 3, 4, 5, 6, and 7 mean sub prediction units 0, 1, 2, 3, 4, 5, 6, and 7, respectively, of the current block. The 'X' marks in Refs 2, 3, 4, and 5 mean that sub prediction units 2, 3, 4, and 5 of the reference block are unavailable upon deriving motion information.
[312] In an embodiment according to Fig. 18, the inter prediction module detects a sub
detects a sub
prediction unit from which motion information may be derived as described
above, in order to
derive motion information from a sub prediction unit from which motion
information cannot be
derived. Accordingly, the inter prediction module cannot independently derive
motion

information for each sub prediction unit of the current block, and the above-
described motion
information deriving process is difficult to perform in parallel.
[313]
[314] Fig. 19 is a view illustrating an exemplary process of discovering an available sub prediction unit when the available sub prediction unit is positioned at the rightmost and lowermost end of a reference block.
[315] Referring to Fig. 19, each square means a sub prediction unit, where the bold solid-lined one means a sub prediction unit available upon deriving motion information while the thinner solid-lined ones mean sub prediction units unavailable upon deriving motion information. The dash-line arrow indicates the order of discovering motion information.
[316] In case a sub prediction unit from which motion information may be derived is
be derived is
positioned only at the rightmost and lowermost end of the reference block as
shown in Fig. 19,
the sub prediction units should be sequentially subject to discovery of a sub
prediction unit from
which motion information may be derived along the dash-line arrow from the
leftmost and
uppermost end of the reference block. In a typical case, it is not known which
sub prediction unit
in what reference block may be put to use for deriving motion information.
Accordingly, the
sub prediction units of the reference block are subject to sequential
discovery from the first sub
prediction unit of the reference block to determine a sub prediction unit that
may be used for
deriving motion information.
[317] However, the approach of deriving motion information as shown in Fig.
19
requires all of the sub prediction units in the reference block to discover an
available sub
prediction unit, thus causing frequent access to the memory. In this case, if
only a few among the
sub prediction units of the reference block have motion information,
unnecessary sub prediction

unit discovery occurs. In particular, if none of the sub prediction units in
the reference block are
used to derive motion information, the process of discovering available sub
prediction units of
the reference block only brings about unnecessary memory access without any
benefit. In this
case, "having no motion information" means that the current block failed to
discover a similar
region in the reference block of a neighboring frame.
[318] Accordingly, in case only a few or none of the sub prediction units in a reference block have motion information, encoding/decoding the current block using inter prediction may lead to more efficiency. In other words, in such a case where only a few or none of the sub prediction units in a reference block have motion information, it may be more efficient to discover a similar region in a neighboring pixel of the current block to perform encoding/decoding on the current block.
[319]
[320] Fig. 20 is a view schematically illustrating times required to derive
motion
information on a per-sub prediction unit basis.
[321] Referring to Fig. 20, when the time taken to derive motion information from one sub prediction unit is T, and the number of sub prediction units in a reference block is N, the time taken to derive all the motion information from the reference block is NxT. The above-mentioned motion information deriving method brings about data dependency and frequent memory access. Data-dependent motion information deriving methods cannot independently derive motion information from each sub prediction unit, and in order to derive motion information from one sub prediction unit, they must wait until motion information is derived from another sub prediction unit. Therefore, the data-dependent motion information deriving methods may cause an encoding/decoding delay.

[322] As a result, the above-described motion information deriving method cannot achieve data parallelization for simultaneously deriving motion information, and from its design architecture, the method may cause frequent memory access, which deteriorates memory use efficiency.
[323]
[324] An apparatus and method for removing dependency when deriving motion information is proposed herein to address the above issues. Fig. 21 illustrates an exemplary configuration of an inter prediction module to which the present invention applies. A method of deriving motion information is described in detail with reference to Figs. 22 to 26, according to an embodiment of the present invention.
[325]
[326] Fig. 21 is a block diagram illustrating a configuration of an inter prediction module 2100 to which the present invention applies.
[327] Referring to Fig. 21, the inter prediction module 2100 may include a storage module 2110 and a deriving module 2120. The inter prediction module 2100 may mean the
may mean the
above-described inter prediction module 710 in the 3D image encoder or the
inter prediction
module 850 in the 3D image decoder. The inter prediction module 2100 of Fig.
21 may apply to
the above-described image encoding/decoding process.
[328] The storage module 2110 designates motion information and stores it in a storage space. The storage module 2110 may use motion information
present at a position of
the reference block in order to obtain the motion information stored. Here,
the position may be
the center of the reference block or a (sub) prediction unit covering the
center of the reference
block. The motion information stored in the storage module 2110 may be set to
an initial value.

Unless the motion information can be stored in the storage space, the process
of deriving motion
information on a per-sub prediction unit basis may be omitted. When omitting
the process of
deriving motion information on a per-sub prediction unit basis, inter
prediction may be carried
out as described supra. The storage module 2110 is described below in greater
detail.
[329] The deriving module 2120 performs a process of deriving motion information from a sub prediction unit of the current block. In this case, the deriving module 2120 may basically perform the above-described motion information deriving process. However, the deriving module 2120 proposed herein, unless the sub prediction unit of the reference block corresponding to the first sub prediction unit of the current block has motion information, may search onward for a sub prediction unit of the reference block having motion information, and instead of deriving motion information on the first sub prediction unit of the current block from the sub prediction unit of the reference block having motion information, may then derive motion information on the first sub prediction unit of the current block from the motion information stored in the storage module. The deriving module 2120 is described below in greater detail.
[330]
[331] Embodiments of the present invention are now described in detail with reference to the drawings.
[332]
[333] Embodiment 1
[334] Fig. 22 is a flowchart schematically illustrating a method of
deriving motion
information on a sub prediction unit of a current block using a reference
block, according to an
embodiment of the present invention.

[335] In embodiment 1, motion information on a sub prediction unit of a
current block
(current sub unit) is derived based on motion information for the center
position of a reference
block. Embodiment 1 may be performed in an encoder and decoder or a predicting
unit or inter
prediction module of the encoder and decoder. For ease of description herein,
the inter prediction
module 2100 of Fig. 21 performs the operation of embodiment 1.
[336] Referring to Fig. 22, the inter prediction module 2100 may derive the
center
position of the reference block (S2200). The center position of the reference
block may be
derived from Equation 10 below. Here, the reference block may be a block
present at the same
position as the current block in the reference picture, and the reference
block may have the same
size as the current block.
[337] [Equation 10]
[338] X position = xPb + (nPbW >> 1)
Y position = yPb + (nPbH >> 1)
[339] Here, xPb and yPb refer to the left and upper position of the current PU, nPbW the width of the current PU, and nPbH the height of the current PU.
[340] The inter prediction module 2100 may determine whether there is
motion
information at the center position of the reference block (S2210). The center
position of the
reference block may be specified as described above.
[341] Unless there is motion information available at the center position
of the
reference block, the inter prediction module 2100 may terminate the process of
deriving motion
information. For example, without available motion information at the
center of the
reference block, the inter prediction module 2100 might not derive motion
information on the
current block.

[342] If motion information is present at the center position of the
reference block, the
inter prediction module 2100 may store the motion information present at the
center position of
the reference block in the storage space (S2220). The motion information
present at the center
position of the reference block may be motion information on the prediction
block including a
full sample position most adjacent to the center of the reference block. A
specific process of
storing motion information by the inter prediction module 2100 has been
described above. The
inter prediction module 2100 may derive motion information on a current sub
prediction unit
based on the stored motion information on the reference block.
[343] The inter prediction module 2100 may determine whether the sub
prediction
unit of the reference block corresponding to the current sub prediction unit
has motion
information (S2240).
[344] In case the sub prediction unit of the reference block has
motion information,
the inter prediction module 2100 may insert into the current sub prediction
unit the motion
information on the sub prediction unit of the reference block (S2250). In
other words, the inter
prediction module 2100 may set the motion information on the sub prediction
unit of the
reference block (for example, motion vector, reference picture index) as the
motion information
on the corresponding current sub prediction unit.
[345] Unless the sub prediction unit of the reference block has
available motion
information, the inter prediction module 2100 inserts into the current sub
prediction unit the
motion information of the reference block stored in the storage space (S2260).
In other words, in
case the motion information on the sub prediction unit of the reference block
corresponding to
the current sub prediction unit is unavailable, the inter prediction module
2100 may set the
motion information on the center of the reference block stored in step S2220
as the motion

information on the current sub prediction unit.
[346] The inter prediction module 2100 may determine whether the sub
prediction
unit of the reference block corresponding to the current sub prediction unit
is the last sub
prediction unit in the reference block (or in the same meaning whether the
current sub prediction
unit is the last sub prediction unit in the current block) (S2270). The inter
prediction module
2100 may terminate the process of deriving motion information in case the sub
prediction unit of
the reference block is the last sub prediction unit.
[347] Unless the sub prediction unit of the reference block is the last sub prediction unit, the inter prediction module 2100 goes on with deriving motion information on a next sub prediction unit of the current block in order to continue to derive motion information (S2230).
[348]
[349] The above-described motion information deriving process according to embodiment 1 may apply to 3D image decoding as in Table 2.

[350] [Table 2]
This process has the following inputs.
- Position (xPb, yPb) of left and upper end of current prediction unit
- Width (nPbW) and height (nPbH) of current prediction unit
- Reference view index refViewIdx
- Disparity vector mvDisp
This process has the following outputs.
- Flag availableFlagLXInterView for determining whether temporal inter-view motion candidate is available, where LX may be reference picture lists L0 and L1.
- Temporal inter-view motion vector candidate mvLXInterView, where LX may be reference picture lists L0 and L1.
- Reference index refIdxLXInterView designating a reference picture present in reference picture list RefPicListLX, where LX may be reference picture lists L0 and L1.
LX may be reference picture lists L0 and L1. The following applies to LX.
- Flag availableFlagLXInterView is initialized as 0.
- Motion vector mvLXInterView is initialized as (0,0).
- Reference index refIdxLXInterView is initialized as -1.
[351]
Variables nSbW and nSbH are initialized as follows.
nSbW = Min( nPbW, SubPbSize )
nSbH = Min( nPbH, SubPbSize )
where, SubPbSize is the size including height and width of the sub prediction unit designated by VPS.
Variable ivRefPic is initialized as a picture having the same ViewIdx as refViewIdx in the current access unit.
Variable curSubBlockIdx is initialized as 0.
Reference position (xRef, yRef) may be derived as follows.
xRefFull = xPb + ( nPbW >> 1 ) + ( ( mvDisp[ 0 ] + 2 ) >> 2 )
yRefFull = yPb + ( nPbH >> 1 ) + ( ( mvDisp[ 1 ] + 2 ) >> 2 )
xRef = Clip3( 0, PicWidthInSamplesL - 1, ( xRefFull >> 3 ) << 3 )
yRef = Clip3( 0, PicHeightInSamplesL - 1, ( yRefFull >> 3 ) << 3 )
ivRefPic is set to the picture having the same ViewIdx as refViewIdx in the current access unit. The motion information in the reference picture may be stored in units of 8x8 pixel blocks. xRefFull and yRefFull may be the position of the center full sample of the reference block specified using mvDisp.
ivRefPb may be a prediction block covering position (xRef, yRef) in ivRefPic.
(xIvRefPb, yIvRefPb) specifies the left and upper position of ivRefPb.
[352]

CA 02891672 2015-02-24
62
Unless ivRefPb has been encoded in intra mode, the following may apply to Y ranging from X to (1-X).
- refPicListLYIvRef, predFlagLYIvRef[x][y], mvLYIvRef[x][y], and refIdxLYIvRef[x][y], respectively, are set to their respective corresponding variables, i.e., RefPicListLY, PredFlagLY[x][y], MvLY[x][y], and RefIdxLY[x][y] in the inter-view reference picture ivRefPic.
- If predFlagLYIvRef[xIvRefPb][yIvRefPb] is 1, the following may apply to i ranging from 0 to num_ref_idx_lX_active_minus1 (the number of reference pictures in reference picture list).
If POC (Picture Order Count) of refPicListLYIvRef[refIdxLYIvRef[xIvRefPb][yIvRefPb]] is the same as RefPicListLX[i] and availableFlagLXInterView is 0, the following applies.
availableFlagLXInterView = 1
mvLXInterView = mvLYIvRef[ xIvRefPb ][ yIvRefPb ]
refIdxLX = i
If availableFlagL0InterView or availableFlagL1InterView is 1, the following is performed.
- The following applies to yBlk ranging from 0 to (nPbH/nSbH-1) and xBlk ranging from 0 to (nPbW/nSbW-1).
[353]
- Variable curAvailableFlag is initialized as 0.
- The following applies to X ranging from 0 to 1.

- Flag spPredFlagLX[xBlk][yBlk] is initialized as 0.
- Motion vector spMvLX is initialized as (0,0).
- Reference index spRefIdxLX[xBlk][yBlk] is initialized as -1.
- Reference block position (xRef, yRef) is derived as follows.
xRef = Clip3( 0, PicWidthInSamplesL - 1, xPb + xBlk * nSbW + nSbW / 2 + ( ( mvDisp[0] + 2 ) >> 2 ) )
yRef = Clip3( 0, PicHeightInSamplesL - 1, yPb + yBlk * nSbH + nSbH / 2 + ( ( mvDisp[1] + 2 ) >> 2 ) )
- Variable ivRefPb refers to the luma prediction block at (xRef, yRef) in the inter-view reference picture indicated by ivRefPic.
- (xIvRefPb, yIvRefPb) refers to the left and upper position of the reference block indicated by ivRefPb.
- Unless ivRefPb has been encoded in intra mode, the following is performed on X ranging from 0 to 1.
- When X is 0 or current slice is slice B, the following is performed on Y ranging from X to (1-X).
- refPicListLYIvRef, predFlagLYIvRef[x][y], mvLYIvRef[x][y], and refIdxLYIvRef[x][y], respectively, are set to RefPicListLY, PredFlagLY[x][y], MvLY[x][y], and RefIdxLY[x][y] in the picture indicated by ivRefPic.
[354]
- If predFlagLYIvRef[xIvRefPb][yIvRefPb] is 1, the following is performed on i ranging from 0 to num_ref_idx_lX_active_minus1 (the number of reference pictures in reference picture list).
- If POC of refPicListLYIvRef[refIdxLYIvRef[xIvRefPb][yIvRefPb]] is the same as RefPicListLX[i] and spPredFlagLX[xBlk][yBlk] is 0, the following applies.
spMvLX[xBlk][yBlk] = mvLYIvRef[xIvRefPb][yIvRefPb]
spRefIdxLX[xBlk][yBlk] = i
spPredFlagLX[xBlk][yBlk] = 1
curAvailableFlag = 1
- The following applies according to curAvailableFlag.
- If curAvailableFlag is 0, the following applies to X ranging from 0 to 1.
spMvLX[ xBlk ][ yBlk ] = mvLXInterView
spRefIdxLX[ xBlk ][ yBlk ] = refIdxLX
spPredFlagLX[ xBlk ][ yBlk ] = availableFlagLXInterView
Variable curSubBlockIdx is set to curSubBlockIdx+1.
If availableFlagL0InterView and availableFlagL1InterView are 0, the process is terminated.
[355]
[356] Embodiment 1 is described again based on Table 2.
[357] Referring to Table 2, the position of the left and upper end of
the current
prediction block, the width and height of the current prediction block, a
reference view index,
and a disparity vector are input to the inter prediction module 2100. Here,
the position of the left
and upper end of the current prediction block may be defined as (xPb, yPb).
The width of the
current prediction block may be defined as 'nPbW,' and the height of the
current prediction block
'nPbH.' The reference view index may be defined as 'refViewIdx,' and the
disparity vector

'mvDisp.'
[358] After finishing the process of deriving motion information on the sub prediction unit of the current block using the sub prediction unit of the reference block, the inter prediction module 2100 may output a flag for determining whether inter-view prediction is possible, an inter-view motion vector, and a reference index designating a reference picture present in a reference picture list. In this case, the flag for determining whether a temporal inter-view motion candidate is available may be defined as 'availableFlagLXInterView,' and the temporal inter-view motion candidate may be defined as 'mvLXInterView.' The reference picture list may be denoted 'RefPicListLX,' and the reference index designating a reference picture present in the reference picture list may be defined as 'refIdxLXInterView.' In 'availableFlagLXInterView,' 'mvLXInterView,' 'RefPicListLX,' and 'refIdxLXInterView,' 'LX' may be reference picture list 0 (List 0, L0) or reference picture list 1 (List 1, L1).
[359] Now described is a method of deriving motion information on a
sub prediction
unit of a current block by obtaining the above-described outputs from the
inputs.
[360] First, the inter prediction module 2100 performs initialization before deriving motion information on a sub prediction unit of a current block using a sub prediction unit of a reference block. In this case, availableFlagLXInterView may be set to 0, mvLXInterView to (0,0), and refIdxLXInterView to -1. When the inter prediction module 2100 performs initialization, the width and height of the sub prediction unit may be initialized also. In this case, the width of the sub prediction unit may be denoted 'nSbW,' and the height of the sub prediction unit 'nSbH.'
Equation 11 represents an example of a method for initializing variables nSbW
and nSbH.
[361] [Equation 11]

nSbW = Min( nPbW, SubPbSize[ nuh_layer_id ] )
[362] nSbH = Min( nPbH, SubPbSize[ nuh_layer_id ] )
[363] In this case, SubPbSize denotes the size (including the height and
width) of the
sub prediction unit designated by a VPS, and nuh_layer_id denotes an index for
identifying a
layer (e.g., which reference view it is). Min() is an operator that outputs the smaller of its input variables.
[364] The inter prediction module 2100 may initialize not only the above-
described
variables but also information for identifying a sub prediction unit of the
current block and the
luma prediction block at (xRef, yRef) in the inter-view reference picture and
information for
identifying whether the motion information stored in the storage space is
available.
[365] In this case, the inter-view reference picture may be set to a picture having the same view index as the reference view index in the current access unit. Here, the inter-view reference picture may be denoted 'ivRefPic,' and the luma prediction block at (xRef, yRef) in the inter-view reference picture may be denoted 'ivRefPb.' One access unit includes
images with
different views, which have the same picture order count (POC). The reference
view index may
be defined as 'refViewIdx,' and the view index 'ViewIdx.'
[366]
[367] The reference position may be a position specifying a prediction
block covering
the center of the reference block according to embodiment 1. The motion
information on the
reference position may be stored in order to derive motion information on the
current sub
prediction unit. Equation 12 shows an exemplary method of deriving the
reference position
(xRef, yRef).
[368] [Equation 12]

xRefFull = xPb + ( nPbW >> 1 ) + ( ( mvDisp[0] + 2 ) >> 2 )
yRefFull = yPb + ( nPbH >> 1 ) + ( ( mvDisp[1] + 2 ) >> 2 )
xRef = Clip3( 0, PicWidthInSamplesL - 1, ( xRefFull >> 3 ) << 3 )
[369] yRef = Clip3( 0, PicHeightInSamplesL - 1, ( yRefFull >> 3 ) << 3 )
[370] Here, xRefFull and yRefFull denote the position of the full sample
close to the
center of the reference block. That is, xRefFull and yRefFull respectively
denote the x coordinate
and the y coordinate of the sample at an integer position.
[371]
[372] ivRefPb may be a sub prediction unit or prediction block covering
(xRef, yRef).
The position (xIvRefPb, yIvRefPb) of the luma sample may specify the left and
upper end of
ivRefPb.
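As a rough illustration only, Equation 12 can be sketched in Python as follows. The helper names (clip3, motion_at) and the snake_case variables are hypothetical stand-ins for the spec variables (Clip3, mvDisp, and so on), not taken from any reference implementation; mvDisp is assumed to be in quarter-sample units, as stated above, and the reference picture is modeled as a simple dict of 8x8 motion blocks.

    def clip3(lo, hi, v):
        # Clip3 as used throughout this description: bound v to [lo, hi].
        return max(lo, min(hi, v))

    def motion_at(pic, x, y):
        # Assumed accessor (hypothetical): pic is modeled as a dict keyed by
        # 8x8 block coordinates; returns the stored motion information, or
        # None where none is available (e.g., intra-coded areas).
        return pic.get((x >> 3, y >> 3))

    def center_ref_position(x_pb, y_pb, n_pb_w, n_pb_h, mv_disp, pic_w, pic_h):
        # Full-sample position closest to the center of the reference block,
        # shifted by the rounded quarter-sample disparity vector.
        x_ref_full = x_pb + (n_pb_w >> 1) + ((mv_disp[0] + 2) >> 2)
        y_ref_full = y_pb + (n_pb_h >> 1) + ((mv_disp[1] + 2) >> 2)
        # Motion information is stored per 8x8 block, so the position is
        # aligned down to a multiple of 8 and clipped into the picture.
        x_ref = clip3(0, pic_w - 1, (x_ref_full >> 3) << 3)
        y_ref = clip3(0, pic_h - 1, (y_ref_full >> 3) << 3)
        return x_ref, y_ref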
[373]
[374] Unless ivRefPb has been encoded/decoded in intra mode, the following
processes (1) and (2) may apply to Y ranging from X to (1-X).
[375] refPicListLYIvRef is set to RefPicListLY in the inter-view reference picture ivRefPic, predFlagLYIvRef[ x ][ y ] to PredFlagLY[ x ][ y ] in the inter-view reference picture ivRefPic, and refIdxLYIvRef[ x ][ y ] to RefIdxLY[ x ][ y ] in the inter-view reference picture ivRefPic.
[376] If predFlagLYIvRef[ xIvRefPb ][ yIvRefPb ] is 1, the following process applies to i ranging from 0 to num_ref_idx_lX_active_minus1 (the number of reference pictures in the reference picture list X). If POC (Picture Order Count) of refPicListLYIvRef[ refIdxLYIvRef[ xIvRefPb ][ yIvRefPb ] ] is RefPicListLX[ i ], and availableFlagLXInterView is 0, Equation 13

may apply.
[377] [Equation 13]
availableFlagLXInterView = 1
mvLXInterView = mvLYIvRef[ xIvRefPb ][ yIvRefPb ]
[378] refIdxLX = i
[379]
[380] In case availableFlagL0InterView or availableFlagL1InterView is 1, the inter prediction module 2100 performs the following process on yBlk that ranges from 0 to (nPbH/nSbH-1) and xBlk that ranges from 0 to (nPbW/nSbW-1). Here, xBlk means the x coordinate, and yBlk means the y coordinate. In other words, if motion information available at the center of the reference block is derived, the inter prediction module 2100 may derive motion information on a per-sub prediction unit basis.
[381] First, the inter prediction module 2100 may initialize the information for identifying whether to predict motion information from a sub prediction unit of the reference block, the sub prediction unit prediction flag, motion information on the sub prediction unit, and the reference index of the sub prediction unit.
[382] In this case, the information for identifying whether to predict motion information from a sub prediction unit of the reference block may be defined as 'curAvailableFlag,' the sub prediction unit prediction flag 'spPredFlagLX,' the sub prediction unit flag 'spPredFlagLX[xBlk][yBlk],' the motion vector of the sub prediction unit 'spMvLX,' the reference index of the sub prediction unit 'spRefIdxLX,' and the reference index of the sub prediction unit 'spRefIdxLX[xBlk][yBlk].'
[383] The position (xRef, yRef) of the reference block is reset on a
per-sub prediction

unit basis as in the following Equation 14.
[384] [Equation 14]
xRef = Clip3( 0, PicWidthInSamplesL - 1, xPb + xBlk * nSbW + nSbW / 2 + ( ( mvDisp[0] + 2 ) >> 2 ) )
[385] yRef = Clip3( 0, PicHeightInSamplesL - 1, yPb + yBlk * nSbH + nSbH / 2 + ( ( mvDisp[1] + 2 ) >> 2 ) )
[386]
[387] PicWidthInSamplesL means the width of the current picture, and PicHeightInSamplesL means the height of the current picture. Further, Clip3() has been described above.
[388]
[389] Thereafter, unless the inter-view reference block has been encoded in intra mode, the following process is performed on X that ranges from 0 to 1.
[390]
[391] When X is 0 or the current slice is slice B, each variable is reset for Y (Y ranges from X to (1-X)) as follows. refPicListLYIvRef may be set to reference picture list RefPicListLY for a picture specified by ivRefPic (i.e., the inter-view reference picture). predFlagLYIvRef[ x ][ y ] is set to PredFlagLY[ x ][ y ]. PredFlagLY[ x ][ y ] indicates the reference picture list that applies at (x, y) in the picture specified by ivRefPic. mvLYIvRef[ x ][ y ] is set to MvLY[ x ][ y ]. MvLY[ x ][ y ] means the motion vector at (x, y) in the picture specified by ivRefPic. refIdxLYIvRef[ x ][ y ] is set to RefIdxLY[ x ][ y ]. RefIdxLY[ x ][ y ] indicates the reference picture index at (x, y) in the picture indicated by ivRefPic.
[392] In case predFlagLYIvRef[ xIvRefPb ][ yIvRefPb ] is 1, the following Equation 15 may apply to i ranging from 0 to num_ref_idx_lX_active_minus1 (the number of reference pictures in the reference picture list) if POC of refPicListLYIvRef[ refIdxLYIvRef[ xIvRefPb ][ yIvRefPb ] ] is RefPicListLX[ i ] and spPredFlagLX[ xBlk ][ yBlk ] is 0.
[393] [Equation 15]
spMvLX[xBlk][yBlk] = mvLYIvRef[ xIvRefPb ][ yIvRefPb ]
spRefIdxLX[xBlk][yBlk] = i
spPredFlagLX[xBlk][yBlk] = 1
[394] curAvailableFlag = 1
[395] Even after the above-described process has been performed, if curAvailableFlag as set is 0 (i.e., unless spRefIdxLX = i (e.g., spRefIdxLX = -1) and spPredFlagLX = 1 (e.g., spPredFlagLX = 0)), it may be said that no motion information can be derived on a per-sub prediction unit basis. Accordingly, the inter prediction module 2100 may apply Equation 16 to X ranging from 0 to 1.
[396] In other words, in case motion information cannot be
derived from the sub
prediction unit of the reference block, the inter prediction module 2100 may
derive motion
information on the sub prediction unit of the current block from the motion
information on the
center position of the reference block.
[397]
[398] [Equation 16]
spMvLX[xBlk][yBlk] = mvLXInterView
spRefIdxLX[xBlk][yBlk] = refIdxLX
[399] spPredFlagLX[xBlk][yBlk] = availableFlagLXInterView
[400]

[401] Finally, after all of the above-described processes have been done, the variable curSubBlockIdx is set to curSubBlockIdx + 1, and if availableFlagL0InterView and availableFlagL1InterView are 0, the process of deriving motion information according to embodiment 1 is ended.
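Summing up, embodiment 1 can be outlined in Python as below. This is a hedged sketch rather than the normative process: it reuses the hypothetical clip3, motion_at, and center_ref_position helpers from the earlier sketch, and all names are illustrative.

    def derive_sub_pu_motion_emb1(ref_pic, x_pb, y_pb, n_pb_w, n_pb_h,
                                  n_sb_w, n_sb_h, mv_disp, pic_w, pic_h):
        # Step 1 (S2210/S2220): motion information at the center of the
        # reference block; if unavailable, the whole process terminates.
        center = motion_at(ref_pic, *center_ref_position(
            x_pb, y_pb, n_pb_w, n_pb_h, mv_disp, pic_w, pic_h))
        if center is None:
            return None
        # Step 2 (S2230-S2270): per sub prediction unit, copy the co-located
        # motion information when present, else fall back to the stored
        # center motion information (Equation 14 gives the per-sub-PU position).
        derived = {}
        for y_blk in range(n_pb_h // n_sb_h):
            for x_blk in range(n_pb_w // n_sb_w):
                x_ref = clip3(0, pic_w - 1, x_pb + x_blk * n_sb_w +
                              n_sb_w // 2 + ((mv_disp[0] + 2) >> 2))
                y_ref = clip3(0, pic_h - 1, y_pb + y_blk * n_sb_h +
                              n_sb_h // 2 + ((mv_disp[1] + 2) >> 2))
                sub = motion_at(ref_pic, x_ref, y_ref)
                derived[(x_blk, y_blk)] = sub if sub is not None else center
        return derived

Because the fallback value is fixed before the loop starts, no iteration depends on any other, which is what permits the parallel derivation discussed below.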
[402]
[403] Embodiment 2
[404] Fig. 23 is a flowchart schematically illustrating a method of deriving motion
deriving motion
information on a sub prediction unit of a current block, according to another
embodiment of the
present invention. In the example illustrated in Fig. 23, motion information
on a sub prediction
unit of a current block may be derived using a sub prediction unit present at
a position of a
reference block.
[405] In embodiment 2, the motion information on the sub prediction
unit of the
current block may be derived based on the motion information on the sub
prediction unit
covering the center of the reference block.
[406] The example shown in Fig. 23 may be performed in an encoder and
decoder or a
predicting unit of the encoder and decoder or the inter prediction module 2100
shown in Fig. 21.
Here, for ease of description, the inter prediction module 2100 performs each
step as shown in
Fig. 23.
[407] Referring to Fig. 23, the inter prediction module 2100 may
derive the position of
the sub prediction unit positioned at the center of the reference block
(center sub prediction unit)
(S2300). The center sub prediction unit positioned in the reference block
means a sub prediction
unit located at the center of the reference block, and the center of the
reference block has been
described above. Equation 17 represents an example of deriving the position of
the center sub

prediction unit of the reference block.
[408] [Equation 17]
Center sub prediction unit's X value = xPb + ( nPbW / nSbW / 2 ) * nSbW + nSbW / 2
Center sub prediction unit's Y value = yPb + ( nPbH / nSbH / 2 ) * nSbH + nSbH / 2
[409]
[410] Here, xPb and yPb refer to the left and upper position of the
current prediction
unit, nPbW the width of the current prediction unit, and nPbH the height of
the current prediction
unit.
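For a concrete feel for Equation 17, take the assumed values nPbW = 64 and nSbW = 8 (integer division throughout): the X value is xPb + (64/8/2)*8 + 8/2 = xPb + 36, which falls inside the sub prediction unit column covering the horizontal center of the block. A one-line illustration:

    # Illustrative numbers only: Equation 17 for the X coordinate.
    x_pb, n_pb_w, n_sb_w = 0, 64, 8
    x_center = x_pb + (n_pb_w // n_sb_w // 2) * n_sb_w + n_sb_w // 2
    print(x_center)  # 36, inside the 8-wide sub PU spanning [32, 40)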
[411] The inter prediction module 2100 determines whether the center
sub prediction
unit of the reference block has motion information (S2310), and the position
of the center sub
prediction unit of the reference block has been described above. If no motion
information is
present at the position of the center sub prediction unit of the reference
block, the inter prediction
module 2100 may terminate the motion information deriving process.
[412] In case motion information is present in the center sub prediction
unit of the
reference block, the inter prediction module 2100 may store the motion
information present at
the center position (S2320). A specific process of storing motion information
by the inter
prediction module 2100 has been described above.
[413] The inter prediction module 2100 derives motion information on the
current sub
prediction unit. The inter prediction module 2100 may determine whether the
sub prediction unit
of the reference block corresponding to the current sub prediction unit has
motion information
(S2340).
[414] In case the sub prediction unit of the reference block has motion
information,
the inter prediction module 2100 may insert into the current sub prediction
unit the motion

information present in the sub prediction unit of the reference block (S2350).
Unless the sub
prediction unit of the reference block has motion information, the inter
prediction module 2100
may insert the motion information stored in step S2320 into the current sub
prediction unit
(S2360).
[415] The inter prediction module 2100 may determine whether the sub
prediction
unit of the reference block which is targeted for deriving motion information
is the last sub
prediction unit (S2370). In case the sub prediction unit of the reference
block is the last sub
prediction unit, the inter prediction module 2100 may terminate the process of
deriving motion
information on the current block. Unless the sub prediction unit of the
reference block is the last
sub prediction unit, it goes to a next sub prediction unit of the current
block to continue to derive
motion information (S2330).
[416] The above-described motion information deriving process
according to
embodiment 2 may apply to 3D images as in Table 3.

[417] [Table 3]
This process has the following inputs.
- Position (xPb, yPb) of left and upper end of the current prediction unit
- Width (nPbW) and height (nPbH) of the current prediction unit
- Reference view index refViewIdx
- Disparity vector mvDisp
This process has the following outputs.
- Flag availableFlagLXInterView for determining whether temporal inter-view motion candidate is available, where LX may be reference picture lists L0 and L1.
- Temporal inter-view motion vector candidate mvLXInterView, where LX may be reference picture lists L0 and L1.
- Reference index refIdxLXInterView designating a reference picture present in reference picture list RefPicListLX, where LX may be reference picture lists L0 and L1.
LX may be reference picture lists L0 and L1. The following applies to LX.
- Flag availableFlagLXInterView is initialized as 0.
- Motion vector mvLXInterView is initialized as (0, 0).
- Reference index refIdxLXInterView is initialized as -1.
Variables nSbW and nSbH are initialized as follows.
[418]
- nSbW = Min( nPbW, SubPbSize )
- nSbH = Min( nPbH, SubPbSize )
where, SubPbSize is the size including height and width of the sub prediction unit designated by VPS.
Variable ivRefPic is initialized as a picture having the same ViewIdx as refViewIdx in the current access unit.
Variable curSubBlockIdx is initialized as 0.
Reference position (xRef, yRef) may be derived as follows.
xRef = Clip3( 0, PicWidthInSamplesL - 1, xPb + ( nPbW / nSbW / 2 ) * nSbW + nSbW / 2 )
yRef = Clip3( 0, PicHeightInSamplesL - 1, yPb + ( nPbH / nSbH / 2 ) * nSbH + nSbH / 2 )
ivRefPic is set to the picture having the same ViewIdx as refViewIdx in the current access unit.
ivRefPb is set to the prediction block covering position (xRef, yRef) in ivRefPic.
(xIvRefPb, yIvRefPb) is set to the left and upper position of the reference block indicated by ivRefPb.
[419]
Unless ivRefPb has been encoded in intra mode, the following may apply to X ranging from 0 to 1.
- When X is 0 or current slice is slice B, the following applies to Y ranging from X to (1-X).
- refPicListLYIvRef, predFlagLYIvRef[x][y], mvLYIvRef[x][y], and refIdxLYIvRef[x][y], respectively, are set to their respective corresponding variables, i.e., RefPicListLY, PredFlagLY[x][y], MvLY[x][y], and RefIdxLY[x][y] in the inter-view reference picture ivRefPic.
- If predFlagLYIvRef[xIvRefPb][yIvRefPb] is 1, the following may apply to i ranging from 0 to num_ref_idx_lX_active_minus1 (the number of reference pictures in reference picture list).
- If POC (Picture Order Count) of refPicListLYIvRef[refIdxLYIvRef[xIvRefPb][yIvRefPb]] is the same as RefPicListLX[i] and centerPredFlagLX is 0, the following applies.
centerAvailableFlag = 1
centerMvLX = mvLYIvRef[ xIvRefPb ][ yIvRefPb ]
centerRefIdxLX = i
centerPredFlagLX = 1
If centerAvailableFlag is 1, the following is performed.
- The following applies to yBlk ranging from 0 to (nPbH/nSbH-1) and xBlk ranging from 0 to (nPbW/nSbW-1).
[420]
- Variable curAvailableFlag is initialized as 0.
- The following applies to X ranging from 0 to 1.
- Flag spPredFlagLX[xBlk][yBlk] is initialized as 0.

- Motion vector spMvLX is initialized as (0,0).
- Reference index spRefIdxLX[xBlk][yBlk] is initialized as -1.
- Reference block position (xRef, yRef) is derived as follows.
xRef = Clip3( 0, PicWidthInSamplesL - 1, xPb + xBlk * nSbW + nSbW / 2 + ( ( mvDisp[0] + 2 ) >> 2 ) )
yRef = Clip3( 0, PicHeightInSamplesL - 1, yPb + yBlk * nSbH + nSbH / 2 + ( ( mvDisp[1] + 2 ) >> 2 ) )
- Variable ivRefPb refers to the luma prediction block at (xRef, yRef) in the inter-view reference picture indicated by ivRefPic.
- (xIvRefPb, yIvRefPb) refers to the left and upper position of the reference block indicated by ivRefPb.
- Unless ivRefPb has been encoded in intra mode, the following is performed on X ranging from 0 to 1.
- When X is 0 or current slice is slice B, the following is performed on Y ranging from X to (1-X).
- refPicListLYIvRef, predFlagLYIvRef[x][y], mvLYIvRef[x][y], and refIdxLYIvRef[x][y], respectively, are set to RefPicListLY, PredFlagLY[x][y], MvLY[x][y], and RefIdxLY[x][y] in the picture indicated by ivRefPic.
[421]
- If predFlagLYIvRef[xIvRefPb][yIvRefPb] is 1, the following is performed on i ranging from 0 to num_ref_idx_lX_active_minus1 (the number of reference pictures in reference picture list).
- If POC of refPicListLYIvRef[refIdxLYIvRef[xIvRefPb][yIvRefPb]] is the same as RefPicListLX[i] and spPredFlagLX[xBlk][yBlk] is 0, the following applies.
spMvLX[ xBlk ][ yBlk ] = mvLYIvRef[ xIvRefPb ][ yIvRefPb ]
spRefIdxLX[ xBlk ][ yBlk ] = i
spPredFlagLX[ xBlk ][ yBlk ] = 1
curAvailableFlag = 1
- The following applies according to curAvailableFlag.
- If curAvailableFlag is 0, the following applies to X ranging from 0 to 1.
spMvLX[ xBlk ][ yBlk ] = centerMvLX
spRefIdxLX[ xBlk ][ yBlk ] = centerRefIdxLX
spPredFlagLX[ xBlk ][ yBlk ] = centerPredFlagLX
Variable curSubBlockIdx is set to curSubBlockIdx+1.
Otherwise, i.e., if centerAvailableFlag is 0, the process is terminated.
[422]
[423]
[424]
[425] Embodiment 2 is described again based on Table 3.
[426] The variables in Table 3 are the same as those in Table 2.
[427] The inter prediction module 2100 performs initialization before
deriving motion
information on a current sub prediction unit using a sub prediction unit of a
reference block. The
initialization is the same as that described above in connection with Table 2.

[428]
[429] The inter prediction module may specify the position of the center
sub
prediction unit of the reference block. The position of the center sub prediction unit
may be determined
based on the reference position, and reference position (xRef, yRef) is
derived as in Equation 18.
[430] [Equation 18]
xRef = Clip3( 0, PicWidthInSamplesL - 1, xPb + ( nPbW / nSbW / 2 ) * nSbW + nSbW / 2 )
[431] yRef = Clip3( 0, PicHeightInSamplesL - 1, yPb + ( nPbH / nSbH / 2 ) * nSbH + nSbH / 2 )
[432]
[433] ivRefPic is a picture having the same ViewIdx as refViewIdx in the current access unit, and ivRefPb is a prediction block or sub prediction unit covering the position (xRef, yRef) derived by Equation 18 in ivRefPic.
[434] (xIvRefPb, yIvRefPb) specifies the left and upper position of ivRefPb.
[435]
[436] In case ivRefPb has not been encoded/decoded in intra mode, and X is
0 or the
current slice is slice B, the following process applies to Y ranging from X to
(1-X).
[437] As described above in connection with Table 2, refPicListLYIvRef is set to RefPicListLY, predFlagLYIvRef[ x ][ y ] to PredFlagLY[ x ][ y ], and refIdxLYIvRef[ x ][ y ] to RefIdxLY[ x ][ y ].
[438] If predFlagLYIvRef[ xIvRefPb ][ yIvRefPb ] is 1, Equation 19 applies to i ranging from 0 to num_ref_idx_lX_active_minus1 (the number of reference pictures in the reference picture list X) in case POC (Picture Order Count) of refPicListLYIvRef[ refIdxLYIvRef[ xIvRefPb ][ yIvRefPb ] ] is RefPicListLX[ i ], and availableFlagLXInterView is 0.
[439] [Equation 19]
centerAvailableFlag = 1
centerMvLX = mvLYIvRef[ xIvRefPb ][ yIvRefPb ]
centerRefIdxLX = i
[440] centerPredFlagLX = 1
[441] In Equation 19, centerAvailableFlag denotes whether the center sub prediction unit of the reference block is available, and centerMvLX means the motion vector for the center sub prediction unit of the reference block. Further, centerRefIdxLX refers to the reference index for the center sub prediction unit of the reference block, and centerPredFlagLX refers to the reference picture list of the center sub prediction unit. Here, centerAvailableFlag, centerMvLX, centerRefIdxLX, and/or centerPredFlagLX mean motion information of the center sub prediction unit. In other words, the inter prediction module 2100 may store in the storage space the motion information on the center sub prediction unit of the reference block set in Equation 19.
[442]
[443] After the variables have been set as described above, in case centerAvailableFlag is 1, the inter prediction module 2100 performs the following process on yBlk that ranges from 0 to (nPbH/nSbH-1) and xBlk that ranges from 0 to (nPbW/nSbW-1). Here, xBlk means the x coordinate of the block, and yBlk means the y coordinate of the block. In
other words, if motion information available from the sub block at the center
of the reference
block is derived, the inter prediction module 2100 may derive motion
information on the current
block on a per-sub prediction unit basis.

[444] First, the inter prediction module 2100 initializes the information
for identifying
whether to predict motion information from a sub prediction unit of the
reference block, the sub
prediction unit prediction flag, motion information on the sub prediction
unit, and reference
index of the sub prediction unit. The initialization is the same as that
described above in
connection with Table 2.
[445] The position (xRef, yRef) of the reference block is reset as shown in
Equation
20 on a per-sub prediction unit basis.
[446] [Equation 20]
xRef = Clip3( 0, PicWidthInSamplesL - 1, xPb + xBlk * nSbW + nSbW / 2 + ( ( mvDisp[0] + 2 ) >> 2 ) )
[447] yRef = Clip3( 0, PicHeightInSamplesL - 1, yPb + yBlk * nSbH + nSbH / 2 + ( ( mvDisp[1] + 2 ) >> 2 ) )
[448]
[449] Here, xRef means the x coordinate of the position of the reference
block, and
yRef means the y coordinate of the position of the reference block.
PicWidthInSamplesL means
the width of the current picture, and PicHeightInSamplesL means the height of
the current
picture. Clip3() has been described above.
[450]
[451] Unless the inter-view reference block has been encoded in intra mode, the inter prediction module 2100 performs the following process on X that ranges from 0 to 1.
[452] When X is 0 or the current slice is slice B, each variable is reset
for Y (Y ranges
from X to (1-X)) as follows. The initialization is the same as that described
above in connection
with Table 2.

[453] In case predFlagLYIvRef[ xIvRefPb ][ yIvRefPb ] is 1, the following Equation 21 may apply to i ranging from 0 to num_ref_idx_lX_active_minus1 (the number of reference pictures in the reference picture list) if POC of refPicListLYIvRef[ refIdxLYIvRef[ xIvRefPb ][ yIvRefPb ] ] is RefPicListLX[ i ] and spPredFlagLX[ xBlk ][ yBlk ] is 0.
[454] [Equation 21]
spMvLX[xBlk][yBlk] = mvLYIvRef[ xIvRefPb ][ yIvRefPb ]
spRefIdxLX[xBlk][yBlk] = i
spPredFlagLX[xBlk][yBlk] = 1
[455] curAvailableFlag = 1
[456]
[457] Even after the above-described process has been performed, if curAvailableFlag as set is 0 (i.e., unless spRefIdxLX = i (e.g., spRefIdxLX = -1) and spPredFlagLX = 1 (e.g., spPredFlagLX = 0)), it may be said that no motion information can be derived on a per-sub prediction unit basis. Accordingly, the inter prediction module 2100 may apply Equation 22 to X ranging from 0 to 1.
[458] In other words, in case motion information cannot be derived from the sub prediction unit of the reference block, the inter prediction module 2100 may derive motion information on the sub prediction unit of the current block from the motion information on the center sub prediction unit.
[459] [Equation 22]

spMvLX[xBlk][yBlk] = centerMvLX
spRefIdxLX[xBlk][yBlk] = centerRefIdxLX
[460] spPredFlagLX[xBlk][yBlk] = centerPredFlagLX
[461]
[462] Finally, after all of the above-described processes have been done, the variable curSubBlockIdx is set to curSubBlockIdx + 1, and if availableFlagL0InterView and availableFlagL1InterView are 0, the process of deriving motion information according to embodiment 2 is ended.
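In code terms, the only substantive difference from the embodiment 1 sketch above is the anchor position: embodiment 2 anchors on the center sub prediction unit (Equation 18) rather than on the 8-aligned center sample. A minimal sketch, reusing the assumed clip3 helper and illustrative names:

    def center_sub_pu_position(x_pb, y_pb, n_pb_w, n_pb_h,
                               n_sb_w, n_sb_h, pic_w, pic_h):
        # Equation 18: a position inside the sub prediction unit that covers
        # the center of the reference block (integer division throughout).
        x_ref = clip3(0, pic_w - 1,
                      x_pb + (n_pb_w // n_sb_w // 2) * n_sb_w + n_sb_w // 2)
        y_ref = clip3(0, pic_h - 1,
                      y_pb + (n_pb_h // n_sb_h // 2) * n_sb_h + n_sb_h // 2)
        return x_ref, y_ref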
[463]
[464] Fig. 24 is a view illustrating an exemplary process of deriving
motion
information on a sub prediction unit of a current block using motion
information at a position.
[465] Referring to Fig. 24, the blocks positioned at the upper end of Fig.
24 mean sub
prediction units of the reference block, and the blocks positioned at the
lower end of Fig. 24
mean sub prediction units of the current block. X denotes a position, and
motion information at
X is stored in a storage space. Here, the motion information at the position
of Fig. 24 may mean
motion information at the center position of the reference block as in
embodiment 1, and the
motion information at the position of Fig. 24 may mean the motion information
on the center sub
prediction unit of the reference block as in embodiment 2.
[466] Upon deriving the motion information on the sub prediction unit
of the current
block using the motion information at the position, each sub prediction unit
in the reference
block may utilize the motion information at the position. In other words,
motion information on
the plurality of sub prediction units of the current block may be
simultaneously derived using the
motion information at the position, and deriving motion information using the
motion

information at the position may address the issue of data dependency.
Accordingly, upon use of
motion information at the position, the inter prediction module 2100 may
derive motion
information in parallel.
[467] As described above, embodiments 1 and 2 derive motion
information using
motion information present at any position. Accordingly, the motion
information deriving
methods according to embodiments 1 and 2 enable independent derivation of
motion information
on each sub prediction unit in the reference block. In other words,
embodiments 1 and 2 do not
require sequential discovery of sub prediction units from which motion
information may be
derived in order to find sub prediction units from which motion information
may be derived, and
in case the first sub prediction unit of the reference block is impossible to
use for deriving
motion information, embodiments 1 and 2 derive motion information on the sub
prediction unit
of the current block using predetermined motion information. As such, the
motion information
derivation according to embodiments 1 and 2 removes data dependency, enabling parallelized derivation of motion information on each sub prediction unit. Further, the motion information derivation according to embodiments 1 and 2 prevents the additional memory accesses incurred by existing motion information deriving methods, thus reducing the number of times the memory is accessed.
[468]
[469] Embodiment 3
[470] Fig. 25 is a flowchart illustrating a method of deriving motion
information on a
sub prediction unit of a current block using a motion information value
according to another
embodiment of the present invention.
[471] Referring to Fig. 25, embodiment 3 provides a method of setting
default motion

information and deriving motion information on a current sub prediction unit
from the default
motion information in case motion information is impossible to derive from a
sub prediction unit
of a reference block. Here, the default motion information may mean a zero
vector. A specific
method of deriving motion information according to embodiment 3 is described
below.
[472] The inter prediction module 2100 may store the default
motion information in a
storage space (S2500). A specific process of storing motion information by the
inter prediction
module 2100 has been described above.
[473] Subsequently, the inter prediction module 2100 may derive motion
information
on the current sub prediction unit. The inter prediction module 2100 may
determine whether the
sub prediction unit of the reference block corresponding to the current
sub prediction unit has
motion information (S2520).
[474] In case the sub prediction unit of the reference block has motion
information,
the inter prediction module 2100 may insert into the current sub prediction
unit the motion
information on the sub prediction unit of the reference block (S2530). Unless
the sub prediction
unit of the reference block has motion information, the inter prediction
module 2100 may insert
the motion information stored in the storage space into the current sub
prediction unit (S2540).
[475] The inter prediction module 2100 may determine whether the sub
prediction
unit of the reference block which is targeted for deriving motion information
is the last sub
prediction unit (S2550). In case the sub prediction unit of the reference
block is the last sub
prediction unit, the inter prediction module 2100 may terminate the
process of deriving motion
information. Unless the sub prediction unit of the reference block is the last
sub prediction unit,
the inter prediction module 2100 may discover motion information on a next sub
prediction unit
of the reference block in order to continue to derive motion information
(S2510).

[476]
[477] The above-described motion information deriving process according to embodiment 3 may apply to 3D-HEVC Draft Text 2 as in Table 4.

[478] [Table 4]
This process has the following inputs.
- Position (xPb, yPb) of left and upper end of current prediction unit
- Width (nPbW) and height (nPbH) of current prediction unit
- Reference view index refViewIdx
- Disparity vector mvDisp
This process has the following outputs.
- Flag availableFlagLXInterView for determining whether temporal inter-view motion candidate is available, where LX may be reference picture lists L0 and L1.
- Temporal inter-view motion vector candidate mvLXInterView, where LX may be reference picture lists L0 and L1.
- Reference index refIdxLXInterView designating a reference picture present in reference picture list RefPicListLX, where LX may be reference picture lists L0 and L1.
LX may be reference picture lists L0 and L1. The following applies to LX.
- Flag availableFlagLXInterView is initialized as 0.
- Motion vector mvLXInterView is initialized as (0,0).
- Reference index refIdxLXInterView is initialized as -1.
Variables nSbW and nSbH are initialized as follows.
[479]
- nSbW = Min( nPbW, SubPbSize )
- nSbH = Min( nPbH, SubPbSize )
where, SubPbSize is the size including height and width of the sub prediction unit designated by VPS.
Variable ivRefPic is initialized as a picture having the same ViewIdx as refViewIdx in the current access unit.
Variable curSubBlockIdx is initialized as 0.
Variables availableFlagLXZero, mvLXZero, and refIdxLXZero are initialized as follows.
availableFlagL0Zero = 1
mvL0Zero = (0, 0)
refIdxL0Zero = 0
If current slice is slice B,
availableFlagL1Zero = 1
mvL1Zero = (0, 0)
refIdxL1Zero = 0
The following applies to yBlk ranging from 0 to (nPbH/nSbH-1) and xBlk ranging from 0 to (nPbW/nSbW-1).
[480]
- Variable curAvailableFlag is initialized as 0.
- The following applies to X ranging from 0 to 1.
- Flag spPredFlagLX[xBlk][yBlk] is initialized as 0.
- Motion vector spMvLX is initialized as (0, 0).
- Reference index spRefIdxLX[xBlk][yBlk] is initialized as -1.
- Reference block position (xRef, yRef) is derived as follows.
xRef = Clip3( 0, PicWidthInSamplesL - 1, xPb + xBlk * nSbW + nSbW / 2 + ( ( mvDisp[ 0 ] + 2 ) >> 2 ) )
yRef = Clip3( 0, PicHeightInSamplesL - 1, yPb + yBlk * nSbH + nSbH / 2 + ( ( mvDisp[ 1 ] + 2 ) >> 2 ) )
- Variable ivRefPb refers to the luma prediction block at (xRef, yRef) in the inter-view reference picture indicated by ivRefPic.
- (xIvRefPb, yIvRefPb) refers to the left and upper position of the reference block indicated by ivRefPb.
- Unless ivRefPb has been encoded in intra mode, the following may apply to X ranging from 0 to 1.
- When X is 0 or current slice is slice B, the following applies to Y ranging from X to (1-X).
- refPicListLYIvRef, predFlagLYIvRef[x][y], mvLYIvRef[x][y], and refIdxLYIvRef[x][y], respectively, are set to their respective corresponding variables, i.e., RefPicListLY, PredFlagLY[x][y], MvLY[x][y], and RefIdxLY[x][y] in the inter-view reference picture ivRefPic.
[481]
- If predFlagLYIvRef[xIvRefPb][yIvRefPb] is 1, the following may apply to i ranging from 0 to num_ref_idx_lX_active_minus1 (the number of reference pictures in reference picture list).
If POC (Picture Order Count) of refPicListLYIvRef[refIdxLYIvRef[xIvRefPb][yIvRefPb]] is the same as RefPicListLX[i] and spPredFlagLX[xBlk][yBlk] is 0, the following applies.
spMvLX[ xBlk ][ yBlk ] = mvLYIvRef[ xIvRefPb ][ yIvRefPb ]
spRefIdxLX[ xBlk ][ yBlk ] = i
spPredFlagLX[ xBlk ][ yBlk ] = 1
curAvailableFlag = 1
The following applies according to curAvailableFlag.
If curAvailableFlag is 0, the following applies to X ranging from 0 to 1.
spMvLX[ xBlk ][ yBlk ] = mvLXZero
spRefIdxLX[ xBlk ][ yBlk ] = refIdxLXZero
spPredFlagLX[ xBlk ][ yBlk ] = availableFlagLXZero
Variable curSubBlockIdx is set to curSubBlockIdx+1.
[482]
[483] Embodiment 3 is described again based on Table 4. The variables in Table 4 are the same as those in Table 2.
[484] The inter prediction module 2100 performs initialization before
deriving motion

information on a current sub prediction unit using a sub prediction unit of a
reference block. The
initialization is the same as that described above in connection with Table 2.
[485]
[486] Further, the variables availableFlagLXZero, mvLXZero, and refIdxLXZero are set as in Equations 23 and 24. Here, X is 0 or 1.
[487] [Equation 23]
availableFlagL0Zero = 1
mvL0Zero = (0, 0)
[488] refIdxL0Zero = 0
[489] [Equation 24]
availableFlagL1Zero = 1
mvL1Zero = (0, 0)
[490] refIdxL1Zero = 0
[491] Here, availableFlagLXZero means an identifier regarding whether the default motion information is available, mvLXZero the default motion information, and refIdxLXZero the reference index of the default motion information.
[492]
[493] After setting the variables as above, the inter prediction module
2100 performs
the following process on yBlk that ranges from 0 to (nPbH/nSbH-1) and xBlk
that ranges from 0
to (nPbW/nSbW-1). Here, xBlk means the x coordinate of the block, and yBlk
means the y
coordinate of the block.
[494] First, the inter prediction module 2100 initializes the information
for identifying
whether to predict motion information from a sub prediction unit of the
reference block, the sub

prediction unit prediction flag, motion information on the sub prediction
unit, and reference
index of the sub prediction unit. The initialization is the same as that
described above in
connection with Table 2.
[495] The position (xRef, yRef) of the reference block is reset as
shown in Equation
25 on a per-sub prediction unit basis.
[496] [Equation 25]
xRef = Clip3( 0, PicWidthInSamplesL - 1, xPb + xBlk * nSbW + nSbW / 2 + ( ( mvDisp[0] + 2 ) >> 2 ) )
[497] yRef = Clip3( 0, PicHeightInSamplesL - 1, yPb + yBlk * nSbH + nSbH / 2 + ( ( mvDisp[1] + 2 ) >> 2 ) )
[498]
[499] Unless the inter-view reference block has been encoded in intra mode, the inter prediction module 2100 may perform the following process on X that ranges from 0 to 1.
[500] When X is 0 or the current slice is slice B, each variable is reset
for Y (Y ranges
from X to (1-X)) as described above in connection with Table 2.
[501] In this case, if predFlagLYIvRef[ xIvRefPb ][ yIvRefPb ] is 1, the following Equation 26 may apply to i ranging from 0 to num_ref_idx_lX_active_minus1 (the number of reference pictures in the reference picture list).
[502] [Equation 26]
spMvLX[xBlk][yBlk] = mvLYIvRef[ xIvRefPb ][ yIvRefPb ]
spRefIdxLX[xBlk][yBlk] = i
spPredFlagLX[xBlk][yBlk] = 1
[503] curAvailableFlag = 1

[504]
[505] After performing the above-described process, in case
curAvailableFlag is 0, the
inter prediction module 2100 may apply Equation 27 to X ranging from 0 to 1.
[506] In other words, in case motion information cannot be
from the sub
prediction unit of the reference block, the inter prediction module 2100 may
derive motion
information on the sub prediction unit of the current block from the
arbitrarily set default motion
information.
[507] [Equation 27]
spMvLX[xBlk][yBlk] = mvLXZero
spRefIdxLX[xBlk][yBlk] = refIdxLXZero
[508] spPredFlagLX[xBlk][yBlk] = availableFlagLXZero
[509]
[510] Finally, after all of the above-described processes have been done, the variable curSubBlockIdx is set to curSubBlockIdx + 1, and if availableFlagL0InterView and availableFlagL1InterView are 0, the process of deriving motion information according to embodiment 3 is ended.
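A compact sketch of embodiment 3, under the same assumptions as the earlier sketches (motion_at is the assumed accessor, and the per-sub-PU positions from Equation 25 are precomputed by the caller): the fallback is now the preset default (zero motion vector, reference index 0) rather than anything read from the reference block, so no up-front availability check is needed.

    DEFAULT_MOTION = {"mv": (0, 0), "ref_idx": 0, "available": True}

    def derive_sub_pu_motion_emb3(ref_pic, sub_pu_positions):
        # sub_pu_positions: list of per-sub-PU (xRef, yRef) positions.
        derived = []
        for x_ref, y_ref in sub_pu_positions:
            sub = motion_at(ref_pic, x_ref, y_ref)
            # Every sub PU either copies co-located motion information or
            # takes the preset default, so iterations are fully independent.
            derived.append(sub if sub is not None else DEFAULT_MOTION)
        return derived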
[511]
[512] Fig. 26 is a view illustrating an exemplary process of deriving
motion
information on a sub prediction unit of a current block using some motion
information.
[513] Referring to Fig. 26, the blocks positioned at the upper end of
Fig. 26 mean sub
prediction units of the reference block, and the blocks positioned at the
lower end of Fig. 26
mean sub prediction units of the current block. Further, default motion
information is stored in a
storage space. Here, the default motion information shown in Fig. 26 may mean
default motion

information arbitrarily set according to embodiment 3.
[514] Upon deriving the motion information on the sub prediction unit
of the current
block using the default motion information, each sub prediction unit in the
reference block may
utilize the default motion information that is arbitrarily set. In other
words, motion information
on the plurality of sub prediction units of the current block may be
simultaneously derived using
the default motion information, which addresses the issue of data dependency among the sub prediction units of the current block. Accordingly, upon use of default motion information with a preset value, the inter prediction module 2100 may derive motion information in parallel.
[515] As described above, according to embodiment 3, the inter prediction module 2100 derives motion information using the default motion information with a preset value.
Accordingly, the motion information deriving method according to embodiment 3
enables
independent derivation of motion information on each sub prediction unit in
the reference block.
In other words, embodiment 3 does not require sequential discovery of sub
prediction units from
which motion information may be derived in order to find sub prediction units
from which
motion information may be derived, and in case the first sub prediction unit
of the reference
block is impossible to use for deriving motion information, embodiment 3
derives motion
information on the sub prediction unit of the current block using
predetermined motion
information. As such, the motion information derivation according to
embodiment 3 removes
data dependency, enabling parallelized derivation of motion information on
each sub prediction
unit. Further, the motion information derivation according to embodiment 3
prevents additional
memory access in contrast to existing motion information deriving methods,
thus reducing the
number of times of accessing the memory.
[516]

[517] Fig. 27 is a view schematically illustrating times required to
derive motion
information according to the present invention.
[518] Referring to Fig. 27, when the time taken to derive motion
information from one
sub prediction unit is T, and the number of sub prediction units in a
reference block is N, the
time taken to derive all the motion information from the reference block
is NxT. However, upon
deriving motion information according to an embodiment of the present
invention, the motion
information derivation may be parallelized, and thus, the time of deriving
motion information
corresponds to T and a 3D image encoding/decoding delay is reduced.
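Because each sub prediction unit now depends only on the reference picture and a fixed fallback, the per-sub-PU loop can be distributed across workers. A toy illustration of this independence (not codec code; motion_at is the assumed accessor from the earlier sketches):

    from concurrent.futures import ThreadPoolExecutor

    def derive_in_parallel(ref_pic, sub_pu_positions, fallback):
        # With independent iterations, total latency approaches T (one
        # derivation) rather than N x T for N sub prediction units.
        def derive_one(pos):
            sub = motion_at(ref_pic, *pos)
            return sub if sub is not None else fallback
        with ThreadPoolExecutor() as pool:
            return list(pool.map(derive_one, sub_pu_positions))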
[519]
[520] The above-described embodiments may have different applicable ranges depending on block sizes, coding unit (CU) depths, or transform unit (TU)
depths. As the
variable for determining an applicable range, a value predetermined in the
encoder/decoder or a
value determined according to a profile or level may be used, or if the
encoder specifies a
variable value in the bitstream, the decoder may obtain the variable value
from the bitstream.
[521] For example, in case different applicable ranges apply
depending on CU depths,
there may be a scheme (method A) in which it applies only to a given depth or
more, a scheme
(method B) in which it applies only to the given depth or less, or a scheme
(method C) in which
it applies to the given depth only. In case the methods according to the
present invention apply to
none of the depths, an indicator (flag) may be used to indicate the same, or
it may be indicated
with a CU depth that the methods according to the present invention do
not apply, where the CU
depth may be set to be larger than the maximum depth that the CU may have.
[522] [Table 5]

Depth of CU (or PU or TU)
representing applicable range    Method A    Method B    Method C
0                                X           O           X
1                                X           O           X
2                                O           O           O
3                                O           X           X
4 or more                        O           X           X
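Methods A, B, and C of Table 5 reduce to a comparison of the current CU depth against the signaled depth (here assumed to be 2, matching the table). A hedged sketch with assumed names:

    def method_applies(cu_depth, given_depth, method):
        # Method A: apply at the given depth or deeper.
        # Method B: apply at the given depth or shallower.
        # Method C: apply only at exactly the given depth.
        if method == "A":
            return cu_depth >= given_depth
        if method == "B":
            return cu_depth <= given_depth
        return cu_depth == given_depth

    # With given_depth = 2 this reproduces the O/X pattern of Table 5.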
[523]
[524] In the above-described embodiments, the methods are described based
on the
flowcharts with a series of steps or units, but the present invention is not
limited to the order of
the steps, and rather, some steps may be performed simultaneously or in
different order with
other steps. It should be appreciated by one of ordinary skill in the art that
the steps in the
flowcharts do not exclude each other and that other steps may be added to the
flowcharts or some
of the steps may be deleted from the flowcharts without influencing the scope
of the present
invention.
[525] Further, the above-described embodiments include various
aspects of examples.
Although all possible combinations to represent various aspects cannot be
described, it may be
appreciated by those skilled in the art that any other combination may be
possible. Accordingly,
the present invention includes all other changes, modifications, and
variations belonging to the
following claims.
[526] The above-described methods according to the present invention
may be
prepared in a computer executable program that may be stored in a computer
readable recording
medium, examples of which include a ROM, a RAM, a CD-ROM, a magnetic tape, a
floppy
disc, or an optical data storage device, or may be implemented in the form of
a carrier wave (for
example, transmission through the Internet).
[527] The computer readable recording medium may be distributed in
computer

systems connected over a network, and computer readable codes may be stored
and executed
in a distributive way. The functional programs, codes, or code segments for
implementing the
above-described methods may be easily inferred by programmers in the art to
which the
present invention pertains.
[528] Although the present invention has been shown and described in
connection
with preferred embodiments thereof, the present invention is not limited
thereto, and various
changes may be made thereto without departing from the scope of the present
invention
defined in the following claims, and such changes should not be individually
construed from
the scope of the present invention.
[529]
