Joint Video Exploration Team (JVET)
of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11
7th Meeting: Torino, IT, 13–21 July 2017

Document: JVET-G_Notes_d9
Title: Meeting Report of the 7th meeting of the Joint Video Exploration Team (JVET), Torino, IT, 13–21 July 2017
Status: Report document from responsible coordinators of JVET
Purpose: Report
Author(s) or Contact(s):
	Gary Sullivan
	Microsoft Corp.
	1 Microsoft Way
	Redmond, WA 98052 USA
	Tel: +1 425 703 5308
	Email: garysull@microsoft.com

	Jens-Rainer Ohm
	Institute of Communication Engineering
	RWTH Aachen University
	Melatener Straße 23
	D-52074 Aachen
	Tel: +49 241 80 27671
	Email: ohm@ient.rwth-aachen.de
Source: Responsible coordinators
Summary
The Joint Video Exploration Team (JVET) of ITU-T WP3/16 and ISO/IEC JTC 1/ SC 29/ WG 11 held its seventh meeting during 13–21 July 2017 at the Politecnico di Torino, Torino, IT. The JVET meeting was held under the leadership of Dr Gary Sullivan (Microsoft/USA) and Dr Jens-Rainer Ohm (RWTH Aachen/Germany) as responsible coordinators of the two organizations. For rapid access to particular topics in this report, a subject categorization is found (with hyperlinks) in section 1.14 of this document.
The JVET meeting sessions began at approximately 0900 hours on Thursday 13 July 2017. Meeting sessions were held on all days (including weekend days) until the meeting was closed at approximately 1313 hours on Friday 21 July 2017. Approximately 180 people attended the JVET meeting, and approximately 121 input documents were discussed. The meeting took place in a collocated fashion with a meeting of WG 11 – one of the two parent bodies of the JVET. The subject matter of the JVET meeting activities consisted of studying future video coding technology with a compression capability that significantly exceeds that of the current HEVC standard, or gives better support regarding the requirements of newly emerging application domains of video coding. As a primary goal, the JVET meeting performed an evaluation of compression technology designs proposed in this area, which had been received in response to the Call for Evidence (CfE) as issued by the previous meeting.
Another important goal of the meeting was to review the work that was performed in the interim period since the sixth JVET meeting in producing the Joint Exploration Test Model 6 (JEM6). Video coding results produced with JEM6 had also been included as an anchor in the CfE. Furthermore, results from four exploration experiments conducted in the JEM6 framework were reviewed, and other technical input was considered. On this basis, modifications towards JEM7 were planned.
The JVET produced 7 output documents from the meeting:
Algorithm description of Joint Exploration Test Model 7 (JEM7)
Draft Joint Call for Proposals on video compression with capability beyond HEVC
Algorithm descriptions of projection format conversion and video quality metrics in 360Lib Version 4 
Results of the Call for Evidence on Video Compression with Capability beyond HEVC
Description of Exploration Experiments on coding tools
Common test conditions and evaluation procedures for HDR/WCG video 
Common test conditions and evaluation procedures for 360° video 
Subjective testing method for 360° Video projection formats using HEVC
For the organization and planning of its future work, the JVET established 10 ad hoc groups (AHGs) to progress the work on particular subject areas. Three Exploration Experiments (EEs) were defined on particular subject areas of coding tool testing. The next four JVET meetings are planned for Wed. 18 – Wed. 25 Oct. 2017 under ITU-T auspices in Macao, CN, during Fri. 19 – Fri. 26 Jan. 2018 under WG 11 auspices in Gwangju, KR, during 11 – 20 April 2018 in San Diego, US, and during 10 – 18 July 2018 under ITU-T auspices in Ljubljana, SI.
The document distribution site http://phenix.it-sudparis.eu/jvet/ was used for distribution of all documents.
The reflector to be used for discussions by the JVET and all its AHGs is the JVET reflector jvet@lists.rwth-aachen.de hosted at RWTH Aachen University. For subscription to this list, see https://mailman.rwth-aachen.de/mailman/listinfo/jvet.
Administrative topics
Organization
The ITU-T/ISO/IEC Joint Video Exploration Team (JVET) is a group of video coding experts from the ITU-T Study Group 16 Visual Coding Experts Group (VCEG) and the ISO/IEC JTC 1/ SC 29/ WG 11 Moving Picture Experts Group (MPEG). The parent bodies of the JVET are ITU-T WP3/16 and ISO/IEC JTC 1/SC 29/WG 11.
The Joint Video Exploration Team (JVET) of ITU-T WP3/16 and ISO/IEC JTC 1/ SC 29/ WG 11 held its seventh meeting during 13–21 July 2017 at the Politecnico di Torino, Torino, IT. The JVET meeting was held under the leadership of Dr Gary Sullivan (Microsoft/USA) and Dr Jens-Rainer Ohm (RWTH Aachen/Germany) as responsible coordinators of the two organizations.
Meeting logistics
The JVET meeting sessions began at approximately 0900 hours on Thursday 13 July 2017. Meeting sessions were held on all days (including weekend days) until the meeting was closed at approximately 1313 hours on Friday 21 July 2017. Approximately 180 people attended the JVET meeting, and approximately 121 input documents were discussed. The meeting took place in a collocated fashion with a meeting of WG 11 – one of the two parent bodies of the JVET. The subject matter of the JVET meeting activities consisted of studying future video coding technology with a compression capability that significantly exceeds that of the current HEVC standard, or gives better support regarding the requirements of newly emerging application domains of video coding. The JVET meeting also performed an evaluation of compression technology designs proposed in this area; in particular, responses to the Call for Evidence were evaluated in this context.
Information regarding logistics arrangements for the meeting had been provided via the email reflector jvet@lists.rwth-aachen.de and at http://wftp3.itu.int/av-arch/jvet-site/2017_07_G_Torino/.
Primary goals
As a primary goal, the JVET meeting performed an evaluation of compression technology designs proposed in this area, which had been received in response to the Call for Evidence (CfE) as issued by the previous meeting. As this unveiled significant evidence about the existence of promising technology, a Draft Call for Proposals was issued, which is intended to be finalized by the October 2017 meeting. Another important goal of the meeting was to review the work that was performed in the interim period since the sixth JVET meeting in producing the Joint Exploration Test Model 6 (JEM6). Video coding results produced with JEM6 had also been included as an anchor in the CfE. Furthermore, results from four exploration experiments conducted in the JEM6 framework were reviewed, and other technical input was considered. On this basis, modifications towards JEM7 were planned.
Documents and document handling considerations
General
The documents of the JVET meeting are listed in Annex A of this report. The documents can be found at http://phenix.it-sudparis.eu/jvet/.
Registration timestamps, initial upload timestamps, and final upload timestamps are listed in Annex A of this report.
The document registration and upload times and dates listed in Annex A and in headings for documents in this report are in Paris/Geneva time. Dates mentioned for purposes of describing events at the meeting (other than as contribution registration and upload times) follow the local time at the meeting facility.
Highlighting of recorded decisions in this report:
Decisions made by the group that might affect the normative content of a future standard are identified in this report by prefixing the description of the decision with the string "Decision:".
Decisions that affect the JEM software but have no normative effect are marked by the string "Decision (SW):".
Decisions that fix a bug in the JEM description (an error, oversight, or messiness) or in the software are marked by the string "Decision (BF):".
This meeting report is based primarily on notes taken by the responsible leaders. The preliminary notes were also circulated publicly by ftp during the meeting on a daily basis. It should be understood by the reader that 1) some notes may appear in abbreviated form, 2) summaries of the content of contributions are often based on abstracts provided by contributing proponents without an intent to imply endorsement of the views expressed therein, and 3) the depth of discussion of the content of the various contributions in this report is not uniform. Generally, the report is written to include as much information about the contributions and discussions as is feasible (in the interest of aiding study), although this approach may not result in the most polished output report.
Late and incomplete document considerations
The formal deadline for registering and uploading non-administrative contributions had been announced as Wednesday, 5 July 2017. Any documents uploaded after 2359 hours Paris/Geneva time on Thursday 6 July were considered officially late, giving a grace period of 24 hrs to those living in different time zones of the world.
All contribution documents with registration numbers JVET-G0113 and higher were registered after the officially late deadline (and therefore were also uploaded late). However, some documents in the G0113+ range might include break-out activity reports that were generated during the meeting, and are therefore better considered as report documents rather than as late contributions.
In many cases, contributions were also revised after the initial version was uploaded. The contribution document archive website retains publicly-accessible prior versions in such cases. The timing of late document availability for contributions is generally noted in the section discussing each contribution in this report.
One suggestion to assist with the issue of late submissions was to require the submitters of late contributions and late revisions to describe the characteristics of the late or revised (or missing) material at the beginning of discussion of the contribution. This was agreed to be a helpful approach to be followed at the meeting.
The following technical design proposal contribution was registered on time but was uploaded late:
JVET-G0025 (a proposal contribution responding to the Call for Evidence), uploaded 07-07.
The following technical design proposal contributions were both registered late and uploaded late:
JVET-G0113 (a proposal contribution on EE1 results), uploaded 07-10,
JVET-G0123 (a contribution on local QP adaptation in coding of HLG sequences), uploaded 07-07,
JVET-G0146 (a contribution on additional results related to EE1), uploaded 07-11,
JVET-G0156 (a contribution on frame packing for ISP), uploaded 07-14,
JVET-G0157 (a contribution on padding for ISP), uploaded 07-14,
JVET-G0159 (a proposal on block shape dependent intra mode coding), uploaded 07-16.
The following other documents not proposing normative technical content were registered on time but were uploaded late:
JVET-G0055 (an information document on 360° test sequences), uploaded 07-11,
JVET-G0063 (an information document on HLG test sequences), uploaded 07-07,
JVET-G0096 (an information document on evaluation of drone test sequences), uploaded 07-11.
The following cross-verification reports were registered on time but were uploaded late: JVET-G0086 [uploaded 07-11], JVET-G0087 [uploaded 07-11], JVET-G0094 [uploaded 07-08], JVET-G0102 [uploaded 07-10], JVET-G0105 [uploaded 07-11].
(Documents that were both registered late and uploaded late, other than technical proposal documents, are not listed in this section, in the interest of brevity.)
The following contribution registrations were later cancelled, withdrawn, never provided, were cross-checks of a withdrawn contribution, or were registered in error: JVET-G0135, JVET-G0139.
Placeholder contribution documents that were basically empty of content, with perhaps only a brief abstract and some expression of an intent to provide a more complete submission as a revision, were rejected in the document management system. (This case did not happen at the current meeting.)
As a general policy, missing documents were not to be presented, and late documents (and substantial revisions) could only be presented when sufficient time for studying was given after the upload. Again, an exception is applied for AHG reports, EE summaries, and other such reports which can only be produced after the availability of other input documents. There were no objections raised by the group regarding presentation of late contributions, although there was some expression of annoyance and remarks on the difficulty of dealing with late contributions and late revisions.
It was remarked that documents that are substantially revised after the initial upload are also a problem, as this becomes confusing, interferes with study, and puts an extra burden on synchronization of the discussion. This is especially a problem in cases where the initial upload is clearly incomplete, and in cases where it is difficult to figure out what parts were changed in a revision. For document contributions, revision marking is very helpful to indicate what has been changed. Also, the comments field on the web site can be used to indicate what is different in a revision.
A few contributions may have had some problems relating to IPR declarations in the initial uploaded versions (missing declarations, declarations saying they were from the wrong companies, etc.). These issues were corrected by later uploaded versions in a reasonably timely fashion in all cases (to the extent of the awareness of the responsible coordinators).
Some other errors were noticed in other initial document uploads (wrong document numbers in headers, etc.) which were generally sorted out in a reasonably timely fashion. The document web site contains an archive of each upload.
Outputs of the preceding meeting
The output documents of the previous meeting, particularly the meeting report JVET-F1000, JEM6 algorithm description JVET-F1001, the Joint Call for Evidence JVET-F1002, the algorithm descriptions of projection format conversion and video quality metrics in 360Lib JVET-F1003, the document Subjective testing method for comparison of 360° video projection formats using HEVC JVET-F1004, the description of exploration experiments JVET-F1011, the JVET common test conditions and evaluation procedures for HDR/WCG video JVET-F1020, and the JVET common test conditions and evaluation procedures for 360° video JVET-F1030, were approved. The JEM6 software implementation (version 6.0), and the 360Lib software implementation (version 3.0) were also approved.
The group had initially been asked to review the prior meeting report for finalization. The meeting report was later approved without modification.
All output documents of the previous meeting and the software had been made available in a reasonably timely fashion.
Attendance
The list of participants in the JVET meeting can be found in Annex B of this report.
The meeting was open to those qualified to participate either in ITU-T WP3/16 or ISO/IEC JTC 1/ SC 29/ WG 11 (including experts who had been personally invited as permitted by ITU-T or ISO/IEC policies).
Participants had been reminded of the need to be properly qualified to attend. Those seeking further information regarding qualifications to attend future meetings may contact the responsible coordinators.
Agenda
The agenda for the meeting was as follows:
IPR policy reminder and declarations
Contribution document allocation
Review of results of previous meeting
Review of AHG reports
Review of inputs to the Call for Evidence, subjective testing of submitted material, and analysis of results
Reports of exploration experiments
Consideration of contributions and communications on project guidance
Consideration of video technology proposal contributions
Consideration of information contributions
Coordination activities
Future planning: Determination of next steps, discussion of working methods, communication practices, establishment of coordinated experiments, establishment of AHGs, meeting planning, refinement of expected standardization timeline, other planning issues
Other business as appropriate for consideration
IPR policy reminder
Participants were reminded of the IPR policy established by the parent organizations of the JVET and were referred to the parent body websites for further information. The IPR policy was summarized for the participants.
The ITU-T/ITU-R/ISO/IEC common patent policy shall apply. Participants were particularly reminded that contributions proposing normative technical content shall contain a non-binding informal notice of whether the submitter may have patent rights that would be necessary for implementation of the resulting standard. The notice shall indicate the category of anticipated licensing terms according to the ITU-T/ITU-R/ISO/IEC patent statement and licensing declaration form.
This obligation is supplemental to, and does not replace, any existing obligations of parties to submit formal IPR declarations to ITU-T/ITU-R/ISO/IEC.
Participants were also reminded of the need to formally report patent rights to the top-level parent bodies (using the common reporting form found on the database listed below) and to make verbal and/or document IPR reports within the JVET as necessary in the event that they are aware of unreported patents that are essential to implementation of a standard or of a draft standard under development.
Some relevant links for organizational and IPR policy information are provided below:
http://www.itu.int/ITU-T/ipr/index.html (common patent policy for ITU-T, ITU-R, ISO, and IEC, and guidelines and forms for formal reporting to the parent bodies)
http://ftp3.itu.int/av-arch/jvet-site (JVET contribution templates)
http://www.itu.int/ITU-T/dbase/patent/index.html (ITU-T IPR database)
http://www.itscj.ipsj.or.jp/sc29/29w7proc.htm (JTC 1/ SC 29 Procedures)
It is noted that the ITU TSB Director's AHG on IPR had issued a clarification of the IPR reporting process for ITU-T standards, as follows, per SG 16 TD 327 (GEN/16):
"TSB has reported to the TSB Director's IPR Ad Hoc Group that they are receiving Patent Statement and Licensing Declaration forms regarding technology submitted in Contributions that may not yet be incorporated in a draft new or revised Recommendation. The IPR Ad Hoc Group observes that, while disclosure of patent information is strongly encouraged as early as possible, the premature submission of Patent Statement and Licensing Declaration forms is not an appropriate tool for such purpose.
In cases where a contributor wishes to disclose patents related to technology in Contributions, this can be done in the Contributions themselves, or informed verbally or otherwise in written form to the technical group (e.g. a Rapporteurs group), disclosure which should then be duly noted in the meeting report for future reference and record keeping.
It should be noted that the TSB may not be able to meaningfully classify Patent Statement and Licensing Declaration forms for technology in Contributions, since sometimes there are no means to identify the exact work item to which the disclosure applies, or there is no way to ascertain whether the proposal in a Contribution would be adopted into a draft Recommendation.
Therefore, patent holders should submit the Patent Statement and Licensing Declaration form at the time the patent holder believes that the patent is essential to the implementation of a draft or approved Recommendation."
The responsible coordinators invited participants to make any necessary verbal reports of previously-unreported IPR in technology that might be considered as prospective candidate for inclusion in future standards, and opened the floor for such reports: No such verbal reports were made.
Software copyright disclaimer header reminder
It was noted that, as had been agreed at the 5th meeting of the JCT-VC and approved by both parent bodies at their collocated meetings at that time, the JEM software uses the HEVC reference software copyright license header language, which is the BSD license with a preceding sentence declaring that other contributor or third party rights, including patent rights, are not granted by the license, as recorded in N10791 of the 89th meeting of ISO/IEC JTC 1/ SC 29/ WG 11. Both ITU and ISO/IEC will be identified in the corresponding tags in the header. This software is used in the process of designing the JEM software, and for evaluating proposals for technology to be included in the design. This software or parts thereof might be published by ITU-T and ISO/IEC as an example implementation of a future video coding standard and for use as the basis of products to promote adoption of such technology.
Different copyright statements shall not be committed to the committee software repository (in the absence of subsequent review and approval of any such actions). As noted previously, it must be further understood that any initially-adopted such copyright header statement language could further change in response to new information and guidance on the subject in the future.
Note: This applies to the 360Lib video conversion software as well as to the JEM and HM.
Communication practices
The documents for the meeting can be found at http://phenix.it-sudparis.eu/jvet/.
A reminder was given to send notice to the chairs in cases of changes to document titles, authors, etc.
JVET email lists are managed through the site https://mailman.rwth-aachen.de/mailman/options/jvet, and to send email to the reflector, the email address is jvet@lists.rwth-aachen.de. Only members of the reflector can send email to the list. However, membership of the reflector is not limited to qualified JVET participants.
It was emphasized that reflector subscriptions and email sent to the reflector must use real names when subscribing and sending messages and subscribers must respond to inquiries regarding the nature of their interest in the work. The current number of subscribers was 741.
For distribution of test sequences, a password-protected ftp site had been set up at RWTH Aachen University, with a mirror site at FhG-HHI. Accredited members of JVET may contact the responsible JVET coordinators to obtain the password information (but the site is not open for use by others).
Terminology
Some terminology used in this report is explained below:
ACT: Adaptive colour transform.
AI: All-intra.
AIF: Adaptive interpolation filtering.
ALF: Adaptive loop filter.
AMP: Asymmetric motion partitioning – a motion prediction partitioning for which the sub-regions of a region are not equal in size (in HEVC, being N/2x2N and 3N/2x2N or 2NxN/2 and 2Nx3N/2 with 2N equal to 16 or 32 for the luma component).
AMVP: Adaptive motion vector prediction.
AMT: Adaptive multi-core transform.
AMVR: (Locally) adaptive motion vector resolution.
APS: Active parameter sets.
ARC: Adaptive resolution conversion (synonymous with DRC, and a form of RPR).
ARSS: Adaptive reference sample smoothing.
ATMVP: Advanced temporal motion vector prediction.
AU: Access unit.
AUD: Access unit delimiter.
AVC: Advanced video coding – the video coding standard formally published as ITU-T Recommendation H.264 and ISO/IEC 14496-10.
BA: Block adaptive.
BC: See CPR or IBC.
BD: Bjøntegaard-delta – a method for measuring percentage bit rate savings at equal PSNR or decibels of PSNR benefit at equal bit rate (e.g., as described in document VCEG-M33 of April 2001).
BIO: Bi-directional optical flow.
BL: Base layer.
BoG: Break-out group.
BR: Bit rate.
BV: Block vector (used for intra BC prediction).
CABAC: Context-adaptive binary arithmetic coding.
CBF: Coded block flag(s).
CC: May refer to context-coded, common (test) conditions, or cross-component.
CCLM: Cross-component linear model.
CCP: Cross-component prediction.
CG: Coefficient group.
CGS: Colour gamut scalability (historically, coarse-grained scalability).
CL-RAS: Cross-layer random-access skip.
CPMVP: Control-point motion vector prediction (used in affine motion model).
CPR: Current-picture referencing, also known as IBC – a technique by which sample values are predicted from other samples in the same picture by means of a displacement vector called a block vector, in a manner conceptually similar to motion-compensated prediction.
CTC: Common test conditions.
CVS: Coded video sequence.
DCT: Discrete cosine transform (sometimes used loosely to refer to other transforms with conceptually similar characteristics).
DCTIF: DCT-derived interpolation filter.
DF: Deblocking filter.
DMVR: Decoder-side motion vector refinement.
DRC: Dynamic resolution conversion (synonymous with ARC, and a form of RPR).
DT: Decoding time.
ECS: Entropy coding synchronization (typically synonymous with WPP).
EE: Exploration Experiment – a coordinated experiment conducted toward assessment of coding technology.
EMT: Explicit multiple-core transform.
EOTF: Electro-optical transfer function – a function that converts a representation value to a quantity of output light (e.g., light emitted by a display).
EPB: Emulation prevention byte (as in the emulation_prevention_byte syntax element).
ECV: Extended Colour Volume (up to WCG).
EL: Enhancement layer.
ET: Encoding time.
FRUC: Frame rate up conversion (pattern matched motion vector derivation).
HDR: High dynamic range.
HEVC: High Efficiency Video Coding – the video coding standard developed and extended by the JCT-VC, formalized by ITU-T as Rec. ITU-T H.265 and by ISO/IEC as ISO/IEC 23008-2.
HLS: High-level syntax.
HM: HEVC Test Model – a video coding design containing selected coding tools that constitutes our draft standard design – now also used especially in reference to the (non-normative) encoder algorithms (see WD and TM).
HyGT: Hyper-cube Givens transform (a type of NSST).
IBC (also Intra BC): Intra block copy, also known as CPR – a technique by which sample values are predicted from other samples in the same picture by means of a displacement vector called a block vector, in a manner conceptually similar to motion-compensated prediction.
IBDI: Internal bit-depth increase – a technique by which lower bit-depth (8 bits per sample) source video is encoded using higher bit-depth signal processing, ordinarily including higher bit-depth reference picture storage (ordinarily 12 bits per sample).
IBF: Intra boundary filtering.
ILP: Inter-layer prediction (in scalable coding).
IPCM: Intra pulse-code modulation (similar in spirit to IPCM in AVC and HEVC).
JEM: Joint exploration model – the software codebase for future video coding exploration.
JM: Joint model – the primary software codebase that has been developed for the AVC standard.
JSVM: Joint scalable video model – another software codebase that has been developed for the AVC standard, which includes support for scalable video coding extensions.
KLT: Karhunen-Loève transform.
LB or LDB: Low-delay B – the variant of the LD conditions that uses B pictures.
LD: Low delay – one of two sets of coding conditions designed to enable interactive real-time communication, with less emphasis on ease of random access (contrast with RA). Typically refers to LB, although also applies to LP.
LIC: Local illumination compensation.
LM: Linear model.
LP or LDP: Low-delay P – the variant of the LD conditions that uses P frames.
LUT: Look-up table.
LTRP: Long-term reference pictures.
MC: Motion compensation.
MDNSST: Mode dependent non-separable secondary transform.
MMLM: Multi-model (cross component) linear model.
MPEG: Moving picture experts group (WG 11, the parent body working group in ISO/IEC JTC 1/ SC 29, one of the two parent bodies of the JVET).
MPM: Most probable mode (in intra prediction).
MV: Motion vector.
MVD: Motion vector difference.
NAL: Network abstraction layer (as in AVC and HEVC).
NSQT: Non-square quadtree.
NSST: Non-separable secondary transform.
NUH: NAL unit header.
NUT: NAL unit type (as in AVC and HEVC).
OBMC: Overlapped block motion compensation (e.g., as in H.263 Annex F).
OETF: Opto-electronic transfer function – a function that converts input light (e.g., light input to a camera) to a representation value.
OOTF: Optical-to-optical transfer function – a function that converts input light (e.g., light input to a camera) to output light (e.g., light emitted by a display).
PDPC: Position dependent (intra) prediction combination.
PMMVD: Pattern-matched motion vector derivation.
POC: Picture order count.
PoR: Plan of record.
PPS: Picture parameter set (as in AVC and HEVC).
QM: Quantization matrix (as in AVC and HEVC).
QP: Quantization parameter (as in AVC and HEVC, sometimes confused with quantization step size).
QT: Quadtree.
QTBT: Quadtree plus binary tree.
RA: Random access – a set of coding conditions designed to enable relatively-frequent random access points in the coded video data, with less emphasis on minimization of delay (contrast with LD).
RADL: Random-access decodable leading.
RASL: Random-access skipped leading.
R-D: Rate-distortion.
RDO: Rate-distortion optimization.
RDOQ: Rate-distortion optimized quantization.
ROT: Rotation operation for low-frequency transform coefficients.
RPLM: Reference picture list modification.
RPR: Reference picture resampling (e.g., as in H.263 Annex P), a special case of which is also known as ARC or DRC.
RPS: Reference picture set.
RQT: Residual quadtree.
RRU: Reduced-resolution update (e.g. as in H.263 Annex Q).
RVM: Rate variation measure.
SAO: Sample-adaptive offset.
SD: Slice data; alternatively, standard-definition.
SDT: Signal dependent transform.
SEI: Supplemental enhancement information (as in AVC and HEVC).
SH: Slice header.
SHM: Scalable HM.
SHVC: Scalable high efficiency video coding.
SIMD: Single instruction, multiple data.
SPS: Sequence parameter set (as in AVC and HEVC).
STMVP: Spatial-temporal motion vector prediction.
TBA/TBD/TBP: To be announced/determined/presented.
TGM: Text and graphics with motion – a category of content that primarily contains rendered text and graphics with motion, mixed with a relatively small amount of camera-captured content.
UCBDS: Unrestricted center-biased diamond search.
UWP: Unequal weight prediction.
VCEG: Visual coding experts group (ITU-T Q.6/16, the relevant rapporteur group in ITU-T WP3/16, which is one of the two parent bodies of the JVET).
VPS: Video parameter set – a parameter set that describes the overall characteristics of a coded video sequence – conceptually sitting above the SPS in the syntax hierarchy.
WCG: Wide colour gamut.
WG: Working group, a group of technical experts (usually used to refer to WG 11, a.k.a. MPEG).
WPP: Wavefront parallel processing (usually synonymous with ECS).
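As the QP entry above notes, QP is sometimes confused with the quantization step size; in AVC and HEVC the step size approximately doubles for every increase of 6 in QP. A minimal sketch of this relationship (an illustrative floating-point model, not the integer arithmetic the codecs actually use):

```python
def qstep(qp: int) -> float:
    """Approximate AVC/HEVC quantization step size for a given QP.

    The step size roughly doubles for every increase of 6 in QP,
    with QP 4 corresponding to a step size of about 1.0.
    Illustrative model only, not the codecs' integer arithmetic.
    """
    return 2.0 ** ((qp - 4) / 6.0)
```

For example, the common test condition QPs 22, 27, 32, and 37 correspond to step sizes of roughly 8, 14, 25, and 45.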
Block and unit names in HEVC:
CTB: Coding tree block (luma or chroma) – unless the format is monochrome, there are three CTBs per CTU.
CTU: Coding tree unit (containing both luma and chroma, synonymous with LCU), with a size of 16x16, 32x32, or 64x64 for the luma component.
CB: Coding block (luma or chroma), a luma or chroma block in a CU.
CU: Coding unit (containing both luma and chroma), the level at which the prediction mode, such as intra versus inter, is determined in HEVC, with a size of 2Nx2N for 2N equal to 8, 16, 32, or 64 for luma.
PB: Prediction block (luma or chroma), a luma or chroma block of a PU, the level at which the prediction information is conveyed or the level at which the prediction process is performed in HEVC.
PU: Prediction unit (containing both luma and chroma), the level of the prediction control syntax within a CU, with eight shape possibilities in HEVC:
2Nx2N: Having the full width and height of the CU.
2NxN (or Nx2N): Having two areas that each have the full width and half the height of the CU (or having two areas that each have half the width and the full height of the CU).
NxN: Having four areas that each have half the width and half the height of the CU, with N equal to 4, 8, 16, or 32 for intra-predicted luma and N equal to 8, 16, or 32 for inter-predicted luma – a case only used when 2Nx2N is the minimum CU size.
N/2x2N paired with 3N/2x2N or 2NxN/2 paired with 2Nx3N/2: Having two areas that are different in size – cases referred to as AMP, with 2N equal to 16 or 32 for the luma component.
TB: Transform block (luma or chroma), a luma or chroma block of a TU, with a size of 4x4, 8x8, 16x16, or 32x32.
TU: Transform unit (containing both luma and chroma), the level of the residual transform (or transform skip or palette coding) segmentation within a CU (which, when using inter prediction in HEVC, may sometimes span across multiple PU regions).
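The PU geometry rules above can be made concrete with a small sketch that enumerates the (width, height) pairs of each HEVC PU partitioning for a given CU size. Partition names such as 2NxnU follow the usual HEVC convention; the sketch lists shapes only and does not model all legality conditions (e.g. NxN being restricted to the minimum CU size):

```python
def pu_partitions(cu_size: int) -> dict:
    """Illustrative (width, height) lists for the HEVC PU
    partitionings of a cu_size x cu_size CU. AMP shapes apply only
    for luma CU sizes 16 and 32, as noted in the description above."""
    n = cu_size // 2
    parts = {
        "2Nx2N": [(cu_size, cu_size)],
        "2NxN": [(cu_size, n)] * 2,
        "Nx2N": [(n, cu_size)] * 2,
        "NxN": [(n, n)] * 4,
    }
    if cu_size in (16, 32):  # AMP: asymmetric motion partitioning
        q = cu_size // 4
        parts.update({
            "2NxnU": [(cu_size, q), (cu_size, cu_size - q)],
            "2NxnD": [(cu_size, cu_size - q), (cu_size, q)],
            "nLx2N": [(q, cu_size), (cu_size - q, cu_size)],
            "nRx2N": [(cu_size - q, cu_size), (q, cu_size)],
        })
    return parts
```

Every partitioning covers the full CU area; for a 32x32 CU, for instance, the 2NxnU case splits it into a 32x8 and a 32x24 region.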
Block and unit names in JEM:
CTB: Coding tree block (luma or chroma) – there are three CTBs per CTU in a P/B slice, and one CTB per luma CTU and two CTBs per chroma CTU in an I slice.
CTU: Coding tree unit (synonymous with LCU, containing both luma and chroma in P/B slice, containing only luma or chroma in I slice), with a size of 16x16, 32x32, 64x64, or 128x128 for the luma component.
CB: Coding block, a luma or chroma block in a CU.
CU: Coding unit (containing both luma and chroma in P/B slice, containing only luma or chroma in I slice), a leaf node of a QTBT. It is the level at which the prediction process and residual transform are performed in JEM. A CU can be square or rectangular.
PB: Prediction block, a luma or chroma block of a PU.
PU: Prediction unit, which has the same size as its CU.
TB: Transform block, a luma or chroma block of a TU.
TU: Transform unit, which has the same size as its CU.
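The QTBT structure described above (a quadtree whose leaves can be further split by a binary tree, yielding square or rectangular CUs) can be sketched as a simple recursion. Here `split_decision` is a hypothetical callback standing in for the encoder's RD search, and the sketch omits the real QTBT constraints such as the MinQTSize/MaxBTSize/MaxBTDepth limits:

```python
def qtbt_leaves(x, y, w, h, split_decision):
    """Sketch of QTBT partitioning: each block is either a leaf CU,
    quad-split into four square sub-blocks, or binary-split into two
    halves (horizontally or vertically). split_decision(x, y, w, h)
    returns 'leaf', 'quad', 'hor', or 'ver'. Quad splits are only
    honoured on square blocks, mirroring the QT-then-BT structure."""
    mode = split_decision(x, y, w, h)
    if mode == "quad" and w == h:
        hw, hh = w // 2, h // 2
        leaves = []
        for dx, dy in ((0, 0), (hw, 0), (0, hh), (hw, hh)):
            leaves += qtbt_leaves(x + dx, y + dy, hw, hh, split_decision)
        return leaves
    if mode == "hor":  # split into top and bottom halves
        return (qtbt_leaves(x, y, w, h // 2, split_decision)
                + qtbt_leaves(x, y + h // 2, w, h // 2, split_decision))
    if mode == "ver":  # split into left and right halves
        return (qtbt_leaves(x, y, w // 2, h, split_decision)
                + qtbt_leaves(x + w // 2, y, w // 2, h, split_decision))
    return [(x, y, w, h)]  # leaf: a square or rectangular CU
```

The resulting leaves tile the CTU exactly, and a leaf serves simultaneously as CU, PU, and TU, as described above.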
Opening remarks
Reviewed logistics, agenda, working practices, policies, document allocation
Results of previous meeting: JEM, meeting report, etc.
Goals of the meeting: Evaluation of the results of the joint Call for Evidence (CfE), production of a new version of the JEM algorithm description and software, evaluation of progress in EEs and new proposals, selection of test sequences and common test conditions for evaluation testing, expert viewing assessment of JEM status, improved 360Lib software, and definition of new EEs.
Pending adequate results of the joint CfE, produce preliminary joint Call for Proposals (to be issued by parent bodies)
Scheduling of discussions
Scheduling: Generally, meeting time was scheduled during 0900-2000 hours, with coffee and lunch breaks as convenient. Ongoing scheduling refinements were announced on the group email reflector as needed. Some particular scheduling notes are shown below, although not necessarily 100% accurate or complete:
Thu. 13 Jul, 1st day
0900-1300 Opening, AHG reports (chaired by JRO and GJS)
CfE responses
Fri. 14 July, 2nd day
EEs
CfE response FastVDO G0021
EE related
Sat. 15 July, 3rd day
0900-XXX BoG on 360° video (chaired by J. Boyce)
Sun. 16 July, 4th day
Morning: 
Analysis and Development of JEM
Non-EE Technology Proposals
3,7
Afternoon: 
Extended Colour Volume Coding
Complexity Analysis
Encoder Optimization
8,10,11
Mon. 17 July, 5th day
1800-2000 BoG on extended colour (chaired by A. Segall)
Tue. 18 July, 6th day
1400-1700 JVET plenary (chaired by JRO): Remaining documents, revisits, EE planning
1715- BoG on extended colour (chaired by A. Segall)
1800- BoG on 360° video (chaired by J. Boyce)
Afternoon: CfE viewing
Wed. 19 July, 7th day
1500-1800 JVET plenary (chaired by JRO&GJS): BoG review, CfE, CfP
Afternoon: CfE viewing
Thu. 20 July, 8th day
0900-1000 JVET plenary (chaired by JRO): CfE viewing results, CfP preparation
1000-1140 Joint meeting with parent bodies
1200-1245 JVET plenary: Revisits, late documents (chaired by JRO&GJS)
1400-1630 JVET plenary: Draft CfP, planning of AHGs (chaired by JRO&GJS)
1645-1900 BoG on 360 (Chaired by J. Boyce)
1800-1900 BoG on extended colour (chaired by A. Segall)
Fri. 21 July, 9th day
0900-1313 JVET plenary (chaired by JRO&GJS): Document approval, revisits, AHG establishment, future planning, any other business.
Contribution topic overview
The approximate subject categories and quantity of contributions per category for the meeting were summarized as follows:
AHG reports (9) (section  REF _Ref400626869 \n \h 2)
Analysis, development and improvement of JEM (3) (section  REF _Ref383632975 \r \h 3)
Test material (7) (section  REF _Ref443720177 \r \h 4)
Call for Evidence (12) (section  REF _Ref475640122 \r \h 5)
Exploration experiments (47) (section  REF _Ref451632240 \r \h 6)
EE1 and related: Intra Prediction (24)
EE2 and related: Decoder-side motion vector derivation (6)
EE3 and related: Adaptive QP for 360° video (8)
EE4 and related: 360° projection modifications and padding (8)
Non-EE technology proposals (13) (section  REF _Ref487322293 \r \h 7)
Extended colour volume coding (8) (section  REF _Ref471468020 \r \h 8)
Coding of 360° video projection formats (20) (section  REF _Ref471468028 \r \h 9)
Complexity analysis (1) (section  REF _Ref451632402 \r \h 10)
Encoder optimization (2) (section  REF _Ref487322369 \r \h 11)
Metrics and evaluation criteria (0) (section  REF _Ref464029002 \r \h 12)
Withdrawn (2) (section  REF _Ref487322392 \r \h 13)
Joint meetings, plenary discussions, BoG reports, Summary of actions (section  REF _Ref432847868 \r \h 14)
Project planning (section  REF _Ref354594526 \r \h 15)
Output documents, AHGs (section  REF _Ref451632559 \r \h 16)
AHG reports (9)
These reports were discussed Thursday 13 July 1000-1215 (chaired by GJS and JRO).
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3259" JVET-G0001 JVET AHG report: Tool evaluation (AHG1) [M. Karczewicz, E. Alshina]
This document reports the work of the JVET ad hoc group on tool evaluation (AHG1) between the 6th JVET meeting at Hobart, Australia (31 March – 7 April 2017) and the 7th meeting at Turin, Italy (13–21 July 2017).
More than 50 e-mails related to AHG1 and EE activities were sent to the JVET reflector, including EE test scheduling and EE summary discussions.
The algorithms included in JEM6.0 are described in JVET-F1001. A list of tools is given below; tools modified at the JVET-F meeting are marked in bold. The biggest change is the addition of the division-free bilateral filter after the inverse transform.
JEM6.0 tools:
Block structure
Larger Coding Tree Unit (up to 256x256) and transforms (up to 64x64) 
Quadtree plus binary tree (QTBT) block structure 
Intra prediction improvements
65 intra prediction directions 
4-tap interpolation filter for intra prediction 
Boundary filter applied to other directions in addition to horizontal and vertical ones 
Cross-component linear model (CCLM) prediction 
Position dependent intra prediction combination (PDPC) 
Adaptive reference sample smoothing
Inter prediction improvements
Sub-PU level motion vector prediction 
Locally adaptive motion vector resolution (AMVR) 
1/16 pel motion vector storage accuracy
Overlapped block motion compensation (OBMC) 
Local illumination compensation (LIC) 
Affine motion prediction 
Pattern matched motion vector derivation (modified in JEM6.0)
Bi-directional optical flow (BIO) (modified in JEM6.0)
Decoder-Side Motion Vector Refinement (DMVR) 
Transform
Explicit multiple core transform
Mode dependent non-separable secondary transforms
Signal dependent transform (SDT), disabled by default
In-loop filter
Bilateral filter
Adaptive loop filter (ALF) 
Content adaptive clipping 
Enhanced CABAC design 
Context model selection for transform coefficient levels
Multi-hypothesis probability estimation
Initialization for context models
Performance progress for JEM (HM-KTA) in terms of BD-rate gain vs. encoder time increase in the random access test configuration is demonstrated in Figure 1. Results are based on the Software Development AHG reports. Some encoder run-time reduction is observed for JEM6.0 compared to JEM5.0, but encoder run time is still much higher than that of the HM (more than 10×).
Screen content coding tools were enabled for the HEVC anchor at the last meeting for class F (screen content), which is optional (not included in the averaging). It should be noted that the SCM on HM-16.15 outperforms JEM in the all-intra (19%) and random access (7%) configurations, even though JEM's encoder is much slower.
Figure 1: Progress of JEM performance in the RA test configuration.
Summary of coding performance compared to HEVC:
JEM6.0 (6th meeting)
| Test configuration | BD-rate Y | BD-rate U | BD-rate V | Enc. time | Dec. time |
| All Intra | −20% | −28% | −28% | ×63 | ×2 |
| Random Access | −29% | −36% | −35% | ×12 | ×10 |
| Low Delay-B | −22% | −28% | −29% | ×10 | ×8 |
| Low Delay-P | −26% | −31% | −32% | ×7 | ×5 |
JEM5.0 (5th meeting)
| Test configuration | BD-rate Y | BD-rate U | BD-rate V | Enc. time | Dec. time |
| All Intra | −20% | −28% | −28% | ×63 | ×2 |
| Random Access | −29% | −35% | −34% | ×12 | ×10 |
| Low Delay-B | −22% | −29% | −29% | ×10 | ×8 |
| Low Delay-P | −26% | −31% | −32% | ×7 | ×5 |

Significant gain is observed in all three colour components. In the random access test, the highest gain over HEVC is observed for the DaylightRoad test sequence (39.6%), while the lowest gain is shown for the ToddlerFountain video (only 15.2%).
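The BD-rate numbers in these tables are Bjøntegaard deltas; as a minimal sketch of the computation (fit log-rate as a cubic polynomial in PSNR for each curve, integrate the gap over the overlapping quality range, and convert back to an average percentage rate difference; simplified relative to the commonly used reference implementations):

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta-rate sketch: negative values mean the test
    codec needs fewer bits than the anchor for the same quality."""
    # Fit log10(rate) as a cubic polynomial in PSNR for each curve.
    p_a = np.polyfit(psnr_anchor, np.log10(rate_anchor), 3)
    p_t = np.polyfit(psnr_test, np.log10(rate_test), 3)
    # Integrate both fits over the overlapping PSNR interval.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    # Average log-rate difference, converted to a percentage.
    avg_log_diff = (int_t - int_a) / (hi - lo)
    return (10.0 ** avg_log_diff - 1.0) * 100.0
```

For example, a test curve that matches the anchor's PSNR at exactly half the bit rate yields a BD-rate of −50%.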
The JVET common test conditions use integer QP settings (QP = 22, 27, 32, 37) for all videos in the test set. For the CfE, bit-rate matching was performed. The table below shows the BD-rate performance of JEM6.0 for the video test set from the CfE.
JEM5.0.1 vs HM under CfE test conditions (RA conditions)
| Resolution | Sequence | Y | U | V |
| 4K | CrossWalk1 | −37.9% | −43.9% | −47.6% |
| 4K | FoodMarket3 | −34.9% | −46.5% | −48.9% |
| 4K | Tango1 | −36.4% | −55.0% | −49.5% |
| 4K | CatRobot1 | −40.2% | −52.4% | −45.5% |
| 4K | DaylightRoad1 | −40.7% | −53.5% | −38.1% |
| 4K | BuildingHall1 | −33.3% | −41.4% | −46.5% |
| 4K | ParkRunning2 | −31.6% | −26.2% | −29.3% |
| 4K | CampfireParty | −37.9% | −35.7% | −56.6% |
| 2K | BQTerrace | −30.4% | −50.5% | −61.2% |
| 2K | RitualDance | −27.8% | −37.9% | −41.7% |
| 2K | TimeLapse | −26.8% | −61.3% | −67.1% |
| 2K | BasketballDrive | −32.1% | −46.9% | −43.3% |
| 2K | Cactus | −36.0% | −49.1% | −45.1% |
| | 4K All | −36.6% | −44.3% | −45.2% |
| | 2K All | −30.6% | −49.1% | −51.7% |
| | All | −34.3% | −46.2% | −47.7% |

In the SDR category (most relevant to this AHG), two CfE responses were submitted; even higher performance than the JEM was demonstrated.
The Exploration Experiments (EE) practice was established at the 2nd JVET meeting. At the 6th JVET meeting, 4 EEs were created. A dedicated software branch was created for each new coding tool under consideration, and the implementation of each tool was announced via the JVET reflector. Input contributions to this meeting were submitted for all 4 EEs. A summary of the exploration experiments is provided in JVET-F1001.
In total, 20 contributions on coding tools were noted to have been submitted, in the following categories:
Structure (0)
Intra (13)
Inter (4)
Transform (0)
Entropy coding (1)
In-loop filter (2)
The AHG recommended to:
Consider encoder complexity as one of the criteria when evaluating the tools. Encourage further encoder and decoder complexity reduction.
Review all related contributions.
Continue the Exploration Experiments practice.
In the discussion, it was remarked that the primary difference between the results with the CfE test conditions (34%) and the usual JEM CTC measured performance (29%) appears to be due to the bit rate range that was tested and the selection of test sequences (2K and 4K only, without screen content, and not including the most difficult test sequence).
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3236" JVET-G0002 JVET AHG report: JEM algorithm description editing (AHG2) [J. Chen, E. Alshina, J. Boyce]
This document reports the work of the JVET ad hoc group on JEM algorithm description editing (AHG2) between the 6th meeting at Hobart, AU (31 March – 7 April 2017) and the 7th JVET meeting at Torino, IT (July 13-21, 2017).
During the editing period, on top of JVET-E1001 Algorithm Description of Joint Exploration Test Model 5, the editors worked on the following two aspects to produce the final version of JVET-F1001 Algorithm Description of Joint Exploration Test Model 6.
Integrate the following adoptions from the 6th JVET meeting, which change the encoding or decoding process:
JVET-F0028, BIO without block extension
JVET-F0031, removal of redundant syntax signalling for transform skip
JVET-F0032, enhanced FRUC Template Matching Mode, aspect 2
JVET-F0096, division-free bilateral in-loop filter
Editorial improvements by editors
The document also incorporates the fix of a text and software bug related to transform coefficient zero-out for large transforms (Ticket #44), which was confirmed at the Hobart JVET meeting.
Currently the document contains the algorithm description as well as encoding logic description for all new coding features in JEM6.0 beyond HEVC. Compared to HEVC, the following new coding features are included in JEM6.
Block structure
Quadtree plus binary tree (QTBT) block structure with larger CTUs (software supports 256×256, CTC use 128×128)
Intra prediction
65 intra prediction directions with improved intra mode coding 
4-tap interpolation filter for intra prediction
Boundary filter applied to other directions in addition to horizontal and vertical ones 
Cross-component linear model (CCLM) prediction
Position dependent intra prediction combination (PDPC)
Adaptive reference sample smoothing
Inter prediction
Sub-PU level motion vector prediction
Locally adaptive motion vector resolution (AMVR)
1/16 pel motion vector storage accuracy
Overlapped block motion compensation (OBMC)
Local illumination compensation (LIC)
Affine motion prediction
Pattern matched motion vector derivation
Bi-directional optical flow (BIO)
Decoder-side motion vector refinement
Transform
Large block-size transforms with high-frequency zeroing
Adaptive multiple core transform
Mode dependent non-separable secondary transforms
Signal dependent transform (SDT, disabled by default)
In-loop filter
Bilateral filter
Adaptive loop filter (ALF)
Content adaptive clipping
Enhanced CABAC design
Context model selection for transform coefficient levels
Multi-hypothesis probability estimation
Initialization for context models
Among all of these, the bilateral filter was newly adopted at the 6th JVET meeting. The pattern matched motion vector derivation method was enhanced by adding the ability to switch between uni-prediction and bi-prediction in the template matching merge mode. BIO was modified so that the additional memory access is eliminated.
The AHG recommended to:
Continue to edit the algorithm description of Joint Exploration Model document to ensure that all agreed elements of JEM are described 
Continue to improve the editorial quality of the algorithm description of Joint Exploration Model document and address issues relating to mismatches between software and text.
Identify and improve the algorithm description for critically important parts of JEM design for better understanding of complexity.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3174" JVET-G0003 JVET AHG report: JEM software development (AHG3) [X. Li, K. Suehring]
This report summarizes the activities of the AhG3 on JEM software development that has taken place between the 6th and 7th JVET meetings.
Software development was continued based on the HM-16.6-JEM-5.1 version. A branch was created in the software repository to implement the JEM-6 tools based on the decisions noted in section 12.4 in the notes of 6th JVET meeting. All integrated tools were included in macros to highlight the changes in the software related to that specific tool.
HM-16.6-JEM-6.0 was released on Apr. 27th, 2017. 
Several branches were created for exploration experiments on top of HM-16.6-JEM-6.0. Note that these branches are maintained by the proponents of exploration experiments.
During the 6th JVET meeting, it was requested to clean up the bug-fix-related macros in JEM. This cleanup has been done in the branch HM-16.6-JEM-6.0-dev. A list of the macros that have been removed was provided in the report.
The JEM software is developed using a subversion repository located at:
 HYPERLINK "https://jvet.hhi.fraunhofer.de/svn/svn_HMJEMSoftware/" https://jvet.hhi.fraunhofer.de/svn/svn_HMJEMSoftware/
The implementation of JEM-6 tools has been performed on the branch
 HYPERLINK "https://jvet.hhi.fraunhofer.de/svn/svn_HMJEMSoftware/branches/HM-16.6-JEM-5.1-dev" https://jvet.hhi.fraunhofer.de/svn/svn_HMJEMSoftware/branches/HM-16.6-JEM-5.1-dev
The released version of HM-16.6-JEM-6.0 can be found at
 HYPERLINK "https://jvet.hhi.fraunhofer.de/svn/svn_HMJEMSoftware/tags/HM-16.6-JEM-6.0" https://jvet.hhi.fraunhofer.de/svn/svn_HMJEMSoftware/tags/HM-16.6-JEM-6.0 
The branches of exploration experiments can be found at 
 HYPERLINK "https://jvet.hhi.fraunhofer.de/svn/svn_HMJEMSoftware/branches/candidates" https://jvet.hhi.fraunhofer.de/svn/svn_HMJEMSoftware/branches/candidates
The performance of HM-16.6-JEM-6.0 over HM-16.6-JEM-5.0.1 and HM-16.15 under the test conditions defined in JVET-B1010 is summarized as follows. As agreed at the 6th JVET meeting, HM-16.15-SCM-8.4 is used as the HM anchor for class F sequences. Note that 8-bit internal bit depth was used for HM-16.15-SCM-8.4 when testing class F.
[Add table from report - paste behaving strangely]
The JEM bug tracker is located at
 HYPERLINK "https://hevc.hhi.fraunhofer.de/trac/jem" https://hevc.hhi.fraunhofer.de/trac/jem
It uses the same accounts as the HM software bug tracker. For spam fighting reasons account registration is only possible at the HM software bug tracker at 
 HYPERLINK "https://hevc.hhi.fraunhofer.de/trac/hevc" https://hevc.hhi.fraunhofer.de/trac/hevc
Please file all issues related to the JEM into the bug tracker. Try to provide all the details necessary to reproduce the issue. Patches for solving issues and improving the software are always appreciated.
The AHG recommends
To continue software development on the HM-16.6 based version
Encourage people to test JEM software more extensively outside of common test conditions.
Encourage people to report all (potential) bugs that they are finding.
Encourage people to submit bitstreams/test cases that trigger bugs in the JEM.
Clarify the internal bit-depth settings for HM-16.15-SCM-8.4 when testing class F.
It was noted that the prior JCT-VC SCC CTCs use 8 bit encoding and the class F test sequences are 8 bit. As the coding performance difference was small, it was agreed to always use 10 bit encoding for consistency in comparisons.
SCC and 4:4:4 remain key issues. No progress was reported on SCC support.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3258" JVET-G0004 JVET AHG report: Test material (AHG4) [T. Suzuki, V. Baroncini, J. Chen, J. Boyce, A. Norkin]
The test sequences used for CfE (JVET-E1002) were available on  HYPERLINK "ftp://jvet@ftp.ient.rwth-aachen.de" ftp://jvet@ftp.ient.rwth-aachen.de in directory /jvet-cfe (please contact the JCT-VC chairs for login information).
HM/JEM anchors (defined in JVET-F1002) were generated and verified by cross-checkers.
HM anchors:
 HYPERLINK "ftp://jvet@ftp.ient.rwth-aachen.de/jvet-cfe/anchors-hm" ftp://jvet@ftp.ient.rwth-aachen.de/jvet-cfe/anchors-hm
JEM anchors:
 HYPERLINK "ftp://jvet@ftp.ient.rwth-aachen.de/jvet-cfe/anchors-jem" ftp://jvet@ftp.ient.rwth-aachen.de/jvet-cfe/anchors-jem
Four additional rate points for HM were discussed and defined as in the following table (the columns with non-integer rate-point identifiers).
SDR target bit rates
Target bit rates [kbps]:

| Sequences | Rate 1 | Rate 1.5 | Rate 2 | Rate 2.5 | Rate 3 | Rate 3.5 | Rate 4 | Rate 4.5 |
| UHD1, UHD2 | 1000 | 1250 | 1500 | 1950 | 2400 | 3200 | 4000 | 5200 |
| UHD3, UHD4, UHD5 | 1500 | 1950 | 2400 | 3200 | 4000 | 5500 | 7000 | 9100 |
| UHD6 | 800 | 1000 | 1200 | 1600 | 2000 | 2650 | 3300 | 4300 |
| UHD7, UHD8 | 2000 | 2650 | 3300 | 4650 | 6000 | 8000 | 10000 | 13000 |
| HD1 | 400 | 500 | 600 | 800 | 1000 | 1350 | 1700 | 2300 |
| HD2 | 900 | 1200 | 1500 | 2050 | 2600 | 2950 | 4300 | 5600 |
| HD3 | 180 | 230 | 280 | 380 | 480 | 640 | 800 | 1050 |
| HD4 | 800 | 1000 | 1200 | 1600 | 2000 | 2750 | 3500 | 4600 |
| HD5 | 500 | 650 | 800 | 1000 | 1200 | 1600 | 2000 | 2750 |
Colour primary and matrix conversion issue
The sequences provided by SJTU were converted to YUV assuming SMPTE ST 170 (ITU-R BT.601). When ffmpeg is used with its default settings, a YUV file converted from RGB will be in SMPTE ST 170 or ST 240. These sequences needed to be revised; SJTU is working to revise them.
The B-COM test sequence (CatRobot) was converted by ffmpeg with its default settings (JCTVC-V0086), which means the sequence may need to be revised (the 4:4:4 to 4:2:0 conversion was done by HDRTools).
The other SDR test sequences use a BT.709 container, not BT.2020 (some test sequence owners said they were captured as raw material from a Sony F65).
Chroma sampling position issue
The default ffmpeg chroma subsampling setting assumes chroma location type 1 for 4:2:0 data (the sampled chroma data are placed in the middle of the 4 corresponding luma samples). However, most current HDTV systems use chroma location type 0 (horizontally co-sited, centered vertically) or chroma location type 2 (co-sited with the top-left luma sample). The BT.2020 and BT.2100 specifications even mandate the use of location type 2, whereas most software assumes type 0 or type 2, depending on colour primaries, when converting back to 4:4:4.
The Huawei test sequences seem to have been converted by ffmpeg with its default settings (chroma location centered among the surrounding luma samples, i.e., type 1). These should be revised.
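The chroma location types discussed above correspond to fixed offsets of the chroma sample relative to the top-left luma sample of its 2x2 luma group; as a small lookup sketch (offsets in luma-sample units, following the H.273 chroma_sample_loc_type numbering for 4:2:0):

```python
# (x_offset, y_offset) of the chroma sample relative to the top-left
# luma sample of each 2x2 luma group, in luma-sample units (4:2:0).
CHROMA_LOC_OFFSETS = {
    0: (0.0, 0.5),  # horizontally co-sited, vertically centered (common HDTV)
    1: (0.5, 0.5),  # centered among the four luma samples (ffmpeg default)
    2: (0.0, 0.0),  # co-sited with the top-left luma sample (BT.2020/BT.2100)
    3: (0.5, 0.0),
    4: (0.0, 1.0),
    5: (0.5, 1.0),
}
```

Resampling with the wrong assumed type shifts chroma by half a luma sample, which is the source of the mismatch described above.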
Full range issue
Cactus uses the full range of the video signal. If the original 4:4:4 file is available, it would be better to regenerate the sequence.
Contributions
Relevant contributions to this meeting included submissions on CfE anchor generation, a new test sequence, studies for the current test materials, and non-normative encoding techniques for adaptive quantization and denoising pre-processing.
Conclusions and discussions
The AHG recommended:
To review all related contributions. 
To evaluate visual quality of HM/JEM anchors.
To perform viewing of new test sequences
To discuss further actions to select new test materials for JVET activity, toward CfP.
In the discussion, it was noted that the use of a BT.601 assumption for conversion to YUV should be fixed; BT.709 seems OK to use.
It was commented that if we modify a test sequence to fix the chroma location or narrow range, we should use a different name for the sequence to avoid confusion over which version to use.
It was commented that there may be other problems with colour conversion relating to the use of ffmpeg as a colour conversion tool. HDRtools includes format conversion support, and using it should help avoid such problems.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3175" JVET-G0005 JVET AHG report: Memory bandwidth consumption of coding tools (AHG5) [X. Li, E. Alshina, T. Ikai, H. Yang]
The document summarizes activities of AhG on memory bandwidth consumption of coding tools between the 6th and the 7th JVET meetings.
One relevant contribution was noted (JVET-G0061).
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3204" JVET-G0006 JVET AHG Report: 360° video conversion software development (AHG6) [Y. He, V. Zakharchenko]
The document summarizes activities on 360-degree video content conversion software development between the 6th and the 7th JVET meetings.
The 360Lib-3.0 software package integrated all adoptions related to projection formats and metrics calculation:
Metrics:
Improved cross-format S-PSNR-NN (JVET-F0042);
Projection formats and frame packing:
Adjusted cubemap projection (JVET-F0025);
Rotated sphere projection (JVET-F0036);
Compact octahedron projection with padding (JVET-F0053);
Software:
Platform independent floating point to integer conversion (JVET-F0041);
Modification for spherical rotation (JVET-F0056);
Fixes for bug tickets #47 and #49
360Lib-3.0 related release:
360Lib-3.0rc1, with support of HM-16.15 and JEM-6.0rc1, was released on 21 April 2017;
360Lib-3.0, with support of HM-16.15 and JEM-6.0, was released on 1 May 2017;
360Lib-3.0 based HM-16.15 testing results were released on 1 May 2017;
360Lib-3.0 based JEM-6.0 testing results were released on 16 May 2017.
Bug fix after 360Lib-3.0 release:
Tickets #51 and #52 were submitted after the 360Lib-3.0 release. They do not affect the CTC and EE tests. The fixes were provided in the development branch (360Lib-3.1-dev).
The 360Lib software is developed using a Subversion repository located at:
 HYPERLINK "https://jvet.hhi.fraunhofer.de/svn/svn_360Lib/" https://jvet.hhi.fraunhofer.de/svn/svn_360Lib/
The released version of 360Lib_3.0 can be found at:
 HYPERLINK "https://jvet.hhi.fraunhofer.de/svn/svn_360Lib/tags/360Lib-3.0/" https://jvet.hhi.fraunhofer.de/svn/svn_360Lib/tags/360Lib-3.0/
360Lib-3.0 testing results can be found at:
 HYPERLINK "ftp://ftp.ient.rwth-aachen.de/testresults/360Lib-3.0" ftp.ient.rwth-aachen.de/testresults/360Lib-3.0
360Lib bug tracker
 HYPERLINK "https://hevc.hhi.fraunhofer.de/trac/jem/newticket?component=360Lib" https://hevc.hhi.fraunhofer.de/trac/jem/newticket?component=360Lib
The tables below summarize performance characteristics for random access encoding comparisons.
The table below lists the HM-16.15 based coding performance with different projection formats according to the 360° video CTC (JVET-F1030), compared to ERP coding.
HM-16.15-360Lib-3.0 testing (HM ERP coding as anchor)
| Projection format | E2E WS-PSNR: Y | U | V | 8K: Y | U | V | 4K: Y | U | V |
| CMP | −3.5% | −2.5% | −2.5% | −4.6% | −2.7% | −3.2% | −1.8% | −0.9% | −0.2% |
| EAP | 11.5% | −2.3% | −3.2% | 14.1% | −2.1% | −2.6% | 5.4% | −1.1% | −3.1% |
| OHP | −2.2% | 1.5% | 0.4% | −4.7% | 0.2% | −0.9% | 3.3% | 6.8% | 5.3% |
| ISP | −5.2% | −0.4% | −1.3% | −7.2% | −1.1% | −2.4% | −0.9% | 3.3% | 2.8% |
| SSP | −9.7% | −2.9% | −3.5% | −10.7% | −3.8% | −4.2% | −7.6% | 1.5% | 0.1% |
| ACP | −11.0% | −6.1% | −6.3% | −12.0% | −6.0% | −6.1% | −9.5% | −5.3% | −5.5% |
| RSP | −9.9% | −5.1% | −5.2% | −11.3% | −5.9% | −6.3% | −7.3% | −1.5% | −1.5% |
The table below lists the JEM-6.0 based coding performance with different projection formats compared to ERP coding.
JEM-6.0-360Lib-3.0 testing (JEM ERP coding as anchor)
| Projection format | E2E WS-PSNR: Y | U | V | 8K: Y | U | V | 4K: Y | U | V |
| CMP | −4.2% | −1.6% | −2.5% | −5.5% | −2.2% | −3.4% | −1.8% | 1.0% | 0.2% |
| EAP | 13.1% | −5.3% | −7.6% | 16.1% | −4.2% | −6.9% | 6.6% | −5.7% | −8.4% |
| OHP | −3.0% | 3.6% | 0.4% | −5.7% | 0.7% | −1.2% | 3.2% | 11.8% | 5.4% |
| ISP | −5.6% | 1.4% | −1.5% | −8.1% | −0.5% | −2.5% | −0.3% | 7.0% | 2.2% |
| SSP | −11.6% | −4.4% | −6.6% | −11.9% | −4.4% | −6.0% | −10.4% | 0.1% | −4.3% |
| ACP | −12.6% | −7.3% | −8.8% | −13.7% | −7.3% | −8.5% | −10.7% | −6.0% | −8.7% |
| RSP | −11.8% | −8.2% | −9.0% | −12.5% | −8.4% | −9.1% | −10.4% | −4.9% | −6.5% |
The table below compares the JEM-6.0 ERP coding with HM-16.15 ERP coding. 
JEM-ERP vs HM-ERP coding (HM ERP coding as anchor)
| Sequence | E2E SPSNR-NN: Y | U | V | E2E SPSNR-I: Y | U | V | E2E CPP-PSNR: Y | U | V | E2E WS-PSNR: Y | U | V |
| Trolley | −15.6% | −31.2% | −33.7% | −15.6% | −31.0% | −33.5% | −15.7% | −30.9% | −33.5% | −15.7% | −31.2% | −33.8% |
| GasLamp | −20.7% | −45.2% | −42.2% | −20.7% | −45.0% | −41.9% | −20.7% | −45.0% | −41.9% | −20.6% | −45.3% | −42.2% |
| Sb_in_lot | −25.0% | −34.3% | −44.8% | −25.0% | −34.1% | −44.6% | −25.0% | −34.1% | −44.7% | −25.0% | −34.3% | −44.8% |
| Chairlift | −30.3% | −48.7% | −45.8% | −30.3% | −48.6% | −45.7% | −30.3% | −48.5% | −45.8% | −30.3% | −48.7% | −45.9% |
| KiteFlite | −16.8% | −35.8% | −39.1% | −16.8% | −35.4% | −38.9% | −16.8% | −35.4% | −38.8% | −16.8% | −35.7% | −39.1% |
| Harbor | −19.4% | −42.1% | −42.4% | −19.4% | −41.9% | −42.1% | −19.5% | −41.9% | −42.2% | −19.5% | −42.2% | −42.5% |
| PoleVault | −18.2% | −19.1% | −20.8% | −18.1% | −18.3% | −20.2% | −18.1% | −18.4% | −20.3% | −18.2% | −19.1% | −21.0% |
| AerialCity | −25.3% | −45.2% | −31.3% | −25.2% | −44.5% | −30.5% | −25.2% | −44.5% | −30.5% | −25.3% | −45.3% | −31.3% |
| DrivingInCity | −29.0% | −42.8% | −37.6% | −28.8% | −42.0% | −36.9% | −28.8% | −41.8% | −36.8% | −29.0% | −42.8% | −37.6% |
| DrivingInCountry | −28.8% | −35.5% | −39.5% | −28.7% | −34.8% | −39.0% | −28.7% | −34.8% | −39.0% | −28.8% | −35.5% | −39.5% |
| Overall | −22.9% | −38.0% | −37.7% | −22.9% | −37.6% | −37.3% | −22.9% | −37.5% | −37.3% | −22.9% | −38.0% | −37.8% |
| 8K | −21.3% | −39.6% | −41.3% | −21.3% | −39.3% | −41.1% | −21.3% | −39.3% | −41.1% | −21.3% | −39.6% | −41.4% |
| 4K | −27.7% | −41.2% | −36.1% | −27.6% | −40.4% | −35.5% | −27.6% | −40.4% | −35.4% | −27.7% | −41.2% | −36.1% |
The table below lists the conversion-only end-to-end WS-PSNR for different projection formats compared to that of the ERP format.
Conversion only results (ERP format as anchor)
| Projection format | E2E WS-PSNR: Y | U | V | 8K: Y | U | V | 4K: Y | U | V |
| ERP | 45.63 | 56.69 | 56.52 | 45.60 | 58.22 | 57.81 | 45.68 | 54.39 | 54.59 |
| CMP | 0.68 | 0.34 | 0.31 | 0.81 | 0.46 | 0.46 | 0.47 | 0.15 | 0.08 |
| EAP | −1.49 | −0.34 | −0.09 | −1.89 | −0.48 | −0.09 | −0.90 | −0.14 | −0.09 |
| OHP | 1.29 | 0.52 | 0.51 | 1.49 | 0.56 | 0.54 | 0.99 | 0.46 | 0.45 |
| ISP | 2.17 | 0.72 | 0.69 | 2.25 | 0.63 | 0.60 | 2.05 | 0.86 | 0.82 |
| SSP | 3.80 | 2.22 | 2.17 | 2.19 | 0.64 | 0.62 | 6.22 | 4.60 | 4.50 |
| ACP | 2.92 | 0.92 | 0.89 | 2.83 | 0.86 | 0.85 | 3.06 | 1.01 | 0.95 |
| RSP | 3.19 | 1.67 | 1.64 | 2.28 | 0.66 | 0.64 | 4.56 | 3.18 | 3.13 |

(The ERP row gives absolute values in dB; the other rows are relative to the ERP anchor.)
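WS-PSNR, used throughout these tables, weights each sample's squared error by its share of spherical surface area; for ERP frames this reduces to a per-row cosine weight. As a minimal sketch (illustrative, following the weighting used in 360Lib; `ref` and `rec` are equal-size 2-D arrays of samples):

```python
import math

def wspsnr_erp(ref, rec, max_val=255.0):
    """WS-PSNR sketch for equirectangular projection: row j of an
    H-row frame is weighted by cos((j + 0.5 - H/2) * pi / H), so the
    oversampled polar rows contribute less than the equator."""
    h, w = len(ref), len(ref[0])
    num = den = 0.0
    for j in range(h):
        wj = math.cos((j + 0.5 - h / 2.0) * math.pi / h)
        for i in range(w):
            num += wj * (ref[j][i] - rec[j][i]) ** 2
            den += wj
    wmse = num / den  # weighted mean squared error
    if wmse == 0.0:
        return float("inf")
    return 10.0 * math.log10(max_val * max_val / wmse)
```

With a uniform error the weights cancel and the result equals conventional PSNR; the metric differs from PSNR only when the error is unevenly distributed over latitude.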
The AHG recommended:
To continue software development of the 360Lib software package.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3270" JVET-G0007 JEM coding of HDR/WCG material (AHG7) [A. Segall, E. François, D. Rusanovskyy]
This document summarizes the activity of AHG7 (JEM coding of HDR/WCG material) between the 6th meeting in Hobart, AU (31 March – 7 April 2017) and the 7th meeting in Torino, IT (13–21 July 2017).
Accomplishments by the AhG reportedly included:
HM anchor bit-streams for the CfE HDR category were prepared and made available
JEM anchor bit-streams for the CfE HDR category were prepared and made available
HM and JEM rate points and bit-streams for the HLG test material were prepared and made available 
Note that more information on these activities is provided in the input documents (listed below).
Twelve contributions related to the AHG were identified, including two CfE responses, a document about the JEM anchor for HDR, a contribution of a new test sequence, and eight other contributions (on QP delta, HLG issues, HDR metrics, SDR content in HDR containers).
The AHG recommended the following:
Review all input contributions
Review responses to the HDR CfE category and prepare the Summary of Responses as appropriate
Review new HDR test material and rate point information; discuss if sequences and configurations for the HDR CTC or CfP should be modified.
Prepare the HDR video section of the Draft Call for Proposals
In the discussion, the presenter noted that a BoG recommendation at the previous meeting was to reduce the number of metrics used for HDR evaluation. It was commented that the various metrics tend to perform roughly consistently for SDR but inconsistently for HDR.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3248" JVET-G0008 JVET AHG report: 360° video coding tools and test conditions [J. Boyce, A. Abbas, E. Alshina, G. v. d. Auwera, Y. Ye]
This document summarizes the activity of AHG8 (360° video coding tools and test conditions) between the 6th meeting in Hobart, AU (31 March – 7 April 2017) and the 7th meeting in Torino, IT (13–21 July 2017).
Output documents JVET-F1003, JVET-F1004, and JVET-F1030 were made available.
HM anchors for the CfE 360 video category were prepared by InterDigital, cross-checked by Samsung and Qualcomm, and made available on May 19. JEM anchors for the CfE 360 video category were prepared by Samsung, cross-checked by InterDigital, and made available on June 10. Dynamic viewport trajectories for CfE subjective viewing were selected by Mathias Wien, Minhua Zhou and Jill Boyce, and documented in JVET-G0066.
Anchors were prepared for the OMAF projections subjective viewing (by Samsung, cross-checked by Intel) and provided with a reporting template in an updated version of JVET-F1004 on May 18.
Email activity on the reflector was limited to the kickoff message and announcements about availability of anchors.
There were 37 contributions noted as related to 360° video coding, which were classified as follows, with non-final document counts as listed.
Video CfE responses (4)
EE3 and related (7)
EE4 and related: 360 projection modifications and padding (7)
Quality assessment and metrics (4)
Projection formats and padding (9)
Coding tools (1)
Use cases (2)
Test content (3)
The AHG recommends the following:
Review input contributions
Review 360 video CfE responses, and contribute towards preparation of Summary of Responses to CfE document
Assist MPEG OMAF activity with subjective evaluation of projection formats
Review new 360 video test material, and consider adding or replacing test sequences for common test conditions and/or CfP
Refine common test conditions for 360° video, including objective metrics
Prepare a 360° video section of a preliminary Call for Proposals
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3249" JVET-G0009 JVET AHG report: 4:4:4 support in JEM (AHG9) [A. M. Tourapis, X. Li]
The document summarizes activities of the AhG on 4:4:4 support in the JEM software that took place between the 6th and the 7th JVET meetings.
One relevant contribution was noted: JVET-G0148.
An email initiating the activities for this AHG was circulated on May 16th. The email reintroduced the mandates of the activity and provided a test plan for performing the evaluation of the software. In particular, the following plan was suggested:
First, identify a few sequences for running our tests with. We would suggest using very low resolution sequences (e.g. QCIF and/or CIF) since the intent is not to run comparison and true video coding tests (at least at this stage) but instead examine whether the software supports (or not) 4:4:4 coding. Low resolution sequences could permit us to perform more and faster tests.
Sequences should be split into two groups, one sequence (e.g. the 4:4:4 QCIF foreman sequence) for conducting an early fail/pass test, and a second group with more (very short) sequences for which we will need to run a more thorough test, if and only if the first test passes. The fail/pass test would include a simple encoding run, and if that passes (without a crash) a decoding run that is required to fully match the output of the encoder. If any of these steps fail, then the test would fail. We are not at all concerned about RD performance here. If the fail/pass test passes, then the secondary test is performed so as to determine whether there are other problems that are not immediately identifiable. Again the result of this would be a fail or pass.
Given these tests, create an experiment that would help us understand the state of the software. The following was suggested:
Identify all macros in the software. Then perform the fail/pass test on the following two cases:
all macros enabled
all macros disabled
If the second case passes, then also run the longer test to see if the second case also passes that. If the second case fails, then we would have a more serious problem and we may wish to sit down and identify what is broken in the software. A more thorough analysis experiment would need to be considered at that stage. If any tool passes, then combinations of tools could be considered. A possible solution would be to first test if enabling all would pass the test, since then we would avoid having to try all possible combinations.
If the second case passes both tests then start enabling one by one all macros in the software and run the fail/pass test on all of them independently. If any of them again pass the fail/pass test, the more thorough test could be run. Here we may wish to identify which tools we should be enabling first based on some type of priority, or whether we should just enable tools based on some predefined order (e.g. alphabetic or order defined in the software). 
An encoding/decoding pass, by the way, does not mean that everything works fine. We should also compare (if all goes well) RD performance vs the HM. RD performance vs the 4:2:0 JEM could also be of interest. But likely we will not have much time to do this at this stage.
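The staged procedure above can be sketched as a small harness. The encode/decode callables and the toy codec below are hypothetical stand-ins (the real activity builds and runs the JEM binaries with different macro settings); only the pass/fail logic mirrors the plan.

```python
def fail_pass_test(encode, decode, sequence):
    """Quick fail/pass check: encoding must not crash, and decoding the
    bitstream must exactly reproduce the encoder-side reconstruction."""
    try:
        bitstream, recon = encode(sequence)
        decoded = decode(bitstream)
    except Exception:
        return False
    return decoded == recon

def macro_sweep(macros, make_codec, sequence):
    """Stage 1: test with all macros disabled, then all enabled.
    Stage 2: if 'all disabled' passes, enable macros one at a time."""
    results = {}
    for label, enabled in (("all_off", set()), ("all_on", set(macros))):
        results[label] = fail_pass_test(*make_codec(enabled), sequence)
    if results["all_off"]:
        for m in macros:
            results[m] = fail_pass_test(*make_codec({m}), sequence)
    return results

# Toy demonstration: a "codec" in which one hypothetical macro corrupts decoding.
def make_codec(enabled):
    encode = lambda seq: (list(seq), list(seq))  # (bitstream, reconstruction)
    decode = lambda bs: [v + 1 for v in bs] if "BROKEN_TOOL" in enabled else list(bs)
    return encode, decode

results = macro_sweep(["BROKEN_TOOL", "GOOD_TOOL"], make_codec, [1, 2, 3])
```

Per the plan, a sweep result like this would localize the failure to one macro before any more thorough RD testing is attempted.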
One additional company (Mediatek) showed interest in participating and assisting with these efforts. Several new, low resolution 4:4:4 sequences, at a resolution of 320x180, were generated and circulated amongst the interested parties. These sequences were used for the above tests as well as some of the existing 4:4:4 sequences. As part of the tests, a few bugs were identified and resolved.
Given the performed tests, it was reported that the software currently seems to operate correctly for 4:4:4 coding, with, however, all RExt tools of HEVC disabled (e.g. the cross-component transform). It is also not known whether the JEM coding tools behave as intended; however, no mismatches were reported when encoding and decoding.
The AHG recommended
To review all related contributions.
Analysis, development and improvement of JEM (3)
Contributions in this category were discussed Sunday 15th 0920-0945 (chaired by JRO).
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3197" JVET-G0090 Unified adaptive search range setting in JEM and HM [T. Ikai, Y. Yasugi (Sharp)]
This contribution is a follow-up of JVET-F0044 and JCTVC-AA0043. It proposes a common adaptive search range of 96–384 for RA, aiming to unify the current JEM's 256–256 search range and HM's 64–256 adaptive search range.
Under the JEM 6.0 RA condition, the suggested range of 96–384 shows 0.05% BD-rate gain with 2% encoding time reduction.
Recommendation: JCT-VC to use the range 96–384 as CTC for HM; HM anchors should follow the CTC. (This was adopted in JCT-VC.)
Decision (CTC): Change search range in JEM to 96–384.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3261" JVET-G0144 Cross-check of Unified adaptive search range setting in JEM and HM (JVET-G0090) [Y.-H. Ju, P.-H. Lin, C.-C. Lin, C.-L. Lin (ITRI)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3210" JVET-G0101 On internal QP increase for bitrate matching [P. Hanhart, Y. He, Y. Ye (InterDigital), X. Ma, H. Chen, H. Yang, M. Sychev (Huawei)]
In the current HM/JEM implementation, the internal QP can be increased by one starting from a specified absolute Picture Order Count (POC) when encoding a sequence to meet a target bitrate. However, the QP increment operation is performed after all other QP adjustments, such as adjustment based on temporal level. This means that it is the frame level QP (calculated from base QP) instead of the base QP that is increased. As a result, depending on the base QP and QP offset model parameters, the resulting QP may be different when compared to increasing the base QP directly. To facilitate the QP tuning for rate matching when using parallel encoding in RA configuration, this contribution proposes to increase the base QP by one instead of increasing the frame QP by one, starting from the QP switching point.
Generally, there is wide consensus that this method should be used to achieve better rate matching. It was however mentioned that a consequence could be a variation of QP larger than one (up to 2) in the higher temporal layers, due to rounding in the a·QP+b equation. Therefore, it should be checked whether this is consistent with the conditions of the CfE/CfP.
Further, it is desirable to use the same approach for both HM and JEM; JCT-VC has decided that the same encoder modification will be integrated in the HM software.
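The rounding effect discussed above can be shown numerically. The model below adds a fixed per-layer offset plus a rounded term round(a·QP + b), in the spirit of the HM/JEM QP offset model; the parameter values are illustrative only, not the actual CTC configuration.

```python
def frame_qp(base_qp, a=0.26, b=-6.5, layer_offset=1):
    # Frame-level QP for a hierarchical layer: fixed per-layer offset plus
    # a rounded model term round(a*QP + b), in the spirit of the HM/JEM
    # QP offset model (illustrative a, b values, not the CTC parameters).
    return base_qp + layer_offset + round(a * base_qp + b)

# Increasing the *frame* QP directly always moves the result by exactly 1.
# Increasing the *base* QP instead feeds through the rounding, so near a
# rounding switch point the derived frame QP can jump by 2:
jump = frame_qp(31) - frame_qp(30)          # crosses a rounding boundary
no_jump = frame_qp(33) - frame_qp(32)       # does not
```

This is exactly why increasing the base QP (as proposed) and increasing the frame QP (as currently implemented) can give different per-frame QPs for the same nominal +1 adjustment.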
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3282" JVET-G0164 On Improvement of Test Conditions [Maksim Sychev (Huawei)] [late]
It is proposed to enhance the current CTC/CfE/CfP test conditions by defining a supplementary, temporary set of source materials, selected from the existing database at each meeting, for optional tests that check whether a particular coding tool avoids performance losses on non-CTC content.
Was presented Thu 20, 1220-1245. The proponent suggests that only coding losses should be taken into consideration here, as a kind of "sanity" check.
The general idea is good, but it may not be practical to conduct these additional tests regularly. This could produce too much testing overhead, and very late results.
May also be difficult to select appropriate sequences for the large set. For example, there were good reasons for not using certain test sequences that had been proposed, because they were too easy to code, etc. So they might not provide the information that is sought. 
It is remarked that it would only make sense if such additional test is mandatory. One expert notes that it might also have the side effect that overly complex technology is not passing it due to time restrictions.
Further thoughts necessary if such procedure could be implemented in a later standardization phase. No action at this moment.
Test material (7)
Contributions in this category were discussed Tuesday 18th afternoon XX00-XX00 (chaired by JRO) unless otherwise noted.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3153" JVET-G0053 Test Sequences for Virtual Reality Video Coding from LetinVR [R. Guo, W. Sun (LetinVR)]
Was reviewed in 360 BoG.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3155" JVET-G0055 Test Sequences for Virtual Reality Video Coding from InterDigital [E. Asbun, Y. He, P. Hanhart, Y. Ye (InterDigital)] [late]
Was reviewed in 360 BoG.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3205" JVET-G0096 AhG4: Evaluation on drone test sequences [X. Zheng, W. Li (DJI)] [late]
This document provides the evaluation on drone sequences that were proposed at Hobart meeting. The problems related to YUV conversion for those sequences are addressed at the document, and the new converted YUV versions and the test results are also provided at this document.
Luma bit rate reduction of JEM over HM is 17%/22% for the BeachMountain and MountainBay sequences. However, as the PSNR ranges are non-overlapping, BD values are not valid.
Confirmed that conversion is correct.
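The validity caveat follows from how the Bjøntegaard metric is defined: the rate difference is integrated over the overlap of the two curves' PSNR ranges, so without overlap there is no integration interval. A minimal pure-Python sketch of BD-rate with this check (fitting a cubic through four (bitrate, PSNR) points):

```python
import math

def _poly_fit(xs, ys):
    """Exact cubic through four points: solve the 4x4 Vandermonde system
    by Gauss-Jordan elimination with partial pivoting (no numpy needed)."""
    n = len(xs)
    A = [[x ** j for j in range(n)] + [y] for x, y in zip(xs, ys)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

def bd_rate(anchor, test):
    """Bjontegaard delta rate in percent. Inputs are lists of
    (bitrate, psnr) points. Returns None when the PSNR ranges do not
    overlap, in which case the metric is not valid."""
    p1 = sorted(anchor, key=lambda p: p[1])
    p2 = sorted(test, key=lambda p: p[1])
    lo, hi = max(p1[0][1], p2[0][1]), min(p1[-1][1], p2[-1][1])
    if lo >= hi:
        return None  # no common PSNR interval to integrate over

    def integral(points):
        xs = [p[1] for p in points]          # PSNR
        ys = [math.log(p[0]) for p in points]  # log bitrate
        x0 = sum(xs) / len(xs)  # centre the fit for numerical stability
        c = _poly_fit([x - x0 for x in xs], ys)
        F = lambda x: sum(cj * (x - x0) ** (j + 1) / (j + 1)
                          for j, cj in enumerate(c))
        return F(hi) - F(lo)

    avg_log_ratio = (integral(p2) - integral(p1)) / (hi - lo)
    return (math.exp(avg_log_ratio) - 1.0) * 100.0
```

For example, a test curve at exactly half the anchor bitrate for the same PSNR yields −50%, while curves with disjoint PSNR ranges (as observed for the drone sequences) yield None.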
Viewing session Tuesday evening (BoG T. Suzuki) – see report
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3262" JVET-G0145 AHG4: Evaluation report of drone test sequences [Y.-H. Ju, C.-C. Lin, P.-H. Lin, C.-L. Lin (ITRI)] [late]
This contribution crosschecked the encoding results from JVET-G0096. The two drone test sequences, BeachMountain2 and MountainBay2, were encoded by HM16.15 and JEM6.0. In addition, subjective evaluations are also provided in this document. 
For the two drone test sequences, it is observed that the differences between the uncompressed and compressed video are not obvious when the QP value is small (i.e. the rate point is high). Therefore, the contribution focuses on evaluating the encoding performance at rate points 2 and 3.
It is also reported that flickering is observed on the water surface of BeachMountain in the HM output. An attempt should be made to identify whether this could be due to some inappropriate setting of the HM anchors. Similar effects were observed in two of the HLG sequences.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3264" JVET-G0147 New Test Sequences for Spherical Video Coding from GoPro [A. Abbas, D. Newman (GoPro)] [late]
Was reviewed in 360 BoG.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3273" JVET-G0155 Selected medical imaging sequences for JVET development [D. Nicholson (Vitec), J.-M. Moureaux, A. Chaabouni (CRAN/CNRS), J. Lambert (ParisTech)] [late]
This contribution proposes new test sequences from the medical domain for the development of JVET, and the future video coding standard to be developed from it. The proposed set contains 8-bit YCbCr 4:2:2 video content.
Relevant use case. What are the quality requirements? The contributor says that QP32 may still be OK, but better quality, up to at least visually lossless, is usually required.
Is it appropriate to use only YUV 8 bit? RGB 444 could be provided as well.
Further study on bit rates necessary, not an urgent issue to put them in the test set.
Call for Evidence (12)
General discussions
Discussed Thursday 12:20 and afternoon (GJS & JRO).
The G0022 HDR-category proposal raises the question of whether out-of-loop processing is intended to be allowed in the CfE/CfP or not, and whether sequence-by-sequence customization is intended to be allowed. The proponent of G0022 said they considered this technique to be part of the decoding process, although it lies outside the prediction loop (as it is a process that is necessary for producing properly viewable pictures if the corresponding pre-processing has been applied at the encoder side). We need to consider and clarify this point.
(For that matter, the deblocking and SAO filtering of a non-reference picture, and even other aspects of the decoding process of a non-reference picture, is also outside of the prediction loop.)
It was commented that the CfE is silent on the question of multi-pass and hand-tuning per-sequence encoding techniques. We need to consider and try to clarify this point. It might be difficult to find an appropriate formulation.
Some further consideration of quantization and luma-dependent processing issues may also be desirable for finalizing the CfP.
For a CfP, the 360 video proposals should also be required to provide software that renders a viewport into a 10 bit 4K video (1800x1800).
Subsequent actions and following aspects were discussed Wed. afternoon
- Output document, providing a high-level description of the responses received, testing method, objective criteria (probably with some comment that they may not be fully conclusive), results of subjective viewing.
- Develop a Draft CfP, using as basis a merge of HEVC CfP (of Jan. 2010) and the CfE (A. Segall takes care of initial version)
- Dates tbd
- Reduce the number of sequences to 4-5 for each class for subjective testing
Classes:
- SDR UHD 3840x2160 (other sequences should be cropped to that size)
- SDR HD
- HDR HD PQ
- HDR UHD HLG? (requires more investigation of sequences, not in draft CfP)
- 360 Video 8K (or 6K), rendered to dynamic viewports 75°x75°, 1800x1800; investigate to generate alternative viewports approx. 100°x60°, 1920x1080 (not in draft CfP), and test both of them 
- Include low delay test cases for HD?, possibility of downsampling additional UHD sequences
- Potentially add more sequences only for objective criteria (could be shorter) without rate matching: PSNR range matching would be necessary, e.g. for each sequence request results that match the PSNR of HEVC anchors within a range +/-0.x dB
- SDR WVGA 832x480 (downsampled from UHD)
Decision(CTC): Crop Rollercoaster, Tango, Toddlerfountain to width of 3840 (new names)
Responses
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3177" JVET-G0021 FastVDO Response to JVET CfE for HDR [P. Topiwala, M. Krishnan, W. Dai (FastVDO)]
Discussed Friday 14:45 (GJS & JRO).
The approach described in this contribution is based on applying data-adaptive transfer functions to the input video. Input video was provided in Y′CbCr 4:2:0 format after (static) PQ conversion, which results in non-linear light. The proposed approach reverts the video to linear light, and adapts the transfer function on the basis of luminance. Data adaptation is automatic, and triggered by changes in mean luminance. Results in the DE100 and PSNRL100 metrics reportedly show gains of 12% and 3% respectively vs the JEM anchors, and 41.3% and 30% vs the HM anchors.
The spirit of this is to remap the data using a different transfer function: two parameters – the maximum and mean brightness – are used to adjust the transfer function.
This uses a parametric model, incorporating mean and maximum luminance (minimum always assumed zero). Luminance here is not the same as source data luma value, as a conversion is used to convert to linear light in RGB444. This is adapted dynamically per frame.
As described, the modified transfer function is applied in the RGB (4:4:4) domain and then the data is converted back to the 4:2:0 Y′CbCr domain for encoding.
The contributor used the JEM6.0 codebase.
The proponent asserts that this is different from reshaping, and reshaping might be beneficial in addition. The proponent also believes that it could also be combined with HLG material (not only PQ as demonstrated here).
It was commented that the peak brightness may be unknown for HLG data; the proponent responded that something along these lines may still be beneficial.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3216" JVET-G0022 CfE response to the HDR category from Technicolor [E. François, F. Le Leannec (Technicolor)]
Discussed Thursday 12:20 (GJS & JRO).
This contribution describes the Technicolor response to the Joint Call for Evidence on Video Compression with Capability beyond HEVC, for the HDR category. The proposed technology is directly based on the Exploratory Test Model mapping (a.k.a. reshaping with luma-dependent chroma scaling) previously tested in MPEG and JCT-VC. The proposed mapping, applied directly to the input Y′CbCr 4:2:0 BT.2100 PQ content, is based on a single scaling table used for both luma and chroma. The scaling table is used to build a 1D (piecewise-linear) mapping look-up table applied to the luma component, and to perform a luma-based cross-component scaling of the chroma components. One scaling table per shot was used in the reported experiments.
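As a rough illustration of the mechanism (not Technicolor's actual tables or band structure, which the contribution does not fully specify), a per-band scaling table can drive both a piecewise-linear luma LUT and luma-dependent chroma scaling. The toy below assumes 10-bit samples and co-located luma/chroma for simplicity:

```python
def build_luma_lut(scales, band_size=64, bit_depth=10):
    """Accumulate per-band slopes into a piecewise-linear forward-mapping
    LUT for luma (toy version; bands of 'band_size' code values)."""
    lut, acc = [], 0.0
    for v in range(2 ** bit_depth):
        lut.append(round(acc))
        acc += scales[min(v // band_size, len(scales) - 1)]
    return lut

def reshape(luma, cb, cr, scales, band_size=64):
    """Apply the luma LUT, and scale each chroma sample around neutral
    (512 for 10 bit) by the luma-mapping slope at the co-located luma."""
    lut = build_luma_lut(scales, band_size)
    slope = lambda y: scales[min(y // band_size, len(scales) - 1)]
    mapped_y = [lut[y] for y in luma]
    mapped_cb = [round(512 + slope(y) * (c - 512)) for y, c in zip(luma, cb)]
    mapped_cr = [round(512 + slope(y) * (c - 512)) for y, c in zip(luma, cr)]
    return mapped_y, mapped_cb, mapped_cr
```

With all scales equal to 1.0 the mapping is the identity; scales above 1 in dark bands expand shadow detail, which is the kind of luma-dependent sample-value redistribution the contribution targets.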
The described scheme is something not supported by the existing CRI SEI message.
In previous work, a non-normative encoding approach using adaptive QP selection based on luma level was developed, and a version of this has been used in the anchors. (As a non-normative approach, this requires overhead bits for signalling the QP).
The described scheme has a somewhat similar spirit to the non-normative technique (but on a sample-wise basis rather than block-wise basis). It involves sending information to specify the mapping function. The encoder chooses the function parameters based on an analysis of the video content. This could be done based on statistics of IRAP pictures, for example, or by pre-analysis (and possibly shot-by-shot analysis of video scenes). As tested, the encoder considered only the first frame of each test sequence for setting the parameter values.
The encoder process for deciding what mapping function to send to the decoder was not precisely described in the contribution.
The JEM was used for the core encoding-decoding process (with QP adaptation disabled).
HDR metrics using JEM coding with HM anchors as reference
HDR metrics using JEM coding with JEM anchors as reference
Note: for wPSNRV of Cosmos1, the RD curves do not overlap. 
As discussed in previous work, this brings up the question of whether to consider this as normative or as a pre-/post-processing technique. The proponent remarked that the pictures are not really reasonably viewable if the technique is not applied at the decoder side.
With out-of-order decoding, one may need to store two copies of each picture or to incorporate the post-processing within a display/output process.
The gain shown in the objective metrics is not especially large when considering the effect of the proposed feature individually, depending on the metric. The proponent acknowledged that the visual effect is also not especially large.
In previous work, a similar process was shown to be possible to use to provide a form of SDR backward compatibility, where the non-remapped signal is interpreted as an SDR picture. Such a usage is not really relevant in the context of a new coding design as primarily considered in JVET.
For JVET purposes, it also does not seem to especially matter whether something is affecting the core decoding process or not, since the entire core decoding process would presumably be new.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3176" JVET-G0023 Qualcomm's response to Joint CfE in 360-degree video category [M. Coban, G. Van der Auwera, M. Karczewicz (Qualcomm)]
Discussed Thursday 15:00 (GJS & JRO).
The interior of the encoding/decoding processes is not modified. This contribution is only about a projection mapping.
An adjusted cubemap projection (ACP) with padding is submitted in response to JVET's Call for Evidence on 360-degree video coding. ACP was previously described in F0025, but without padding. The padded ACP reportedly offers efficient compression of 360-degree video and reduced seam artefacts in the rendered viewports. ACP is an enhancement of CMP that adjusts the sampling on the cube faces to be nearly uniform. With the proposed padding scheme for ACP, the RA coding gain is reportedly 10.7% (E2E WS-PSNR) compared with the equirectangular projection under the JVET common test conditions for 360-degree video, while under CfE conditions the reported coding gain is 8.1% (E2E WS-PSNR).
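For reference, the end-to-end WS-PSNR numbers quoted here weight each ERP row by the cosine of its latitude, following the 360Lib definition, so over-sampled polar regions contribute less than the equator. A minimal sketch for one 10-bit plane (frames as lists of rows) might look like:

```python
import math

def ws_psnr_erp(ref, rec, max_val=1023):
    """Weighted-spherical PSNR for equirectangular frames (lists of rows).
    Row j gets weight cos((j + 0.5 - H/2) * pi / H), i.e. the cosine of
    its latitude; max_val = 1023 for 10-bit content."""
    h = len(ref)
    num = den = 0.0
    for j in range(h):
        w = math.cos((j + 0.5 - h / 2) * math.pi / h)
        for a, b in zip(ref[j], rec[j]):
            num += w * (a - b) ** 2
        den += w * len(ref[j])
    wmse = num / den
    if wmse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / wmse)
```

With a uniform per-sample error the weighting cancels out and WS-PSNR equals ordinary PSNR; the metrics differ only when the error is concentrated at particular latitudes.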
The scheme is included in 360Lib version 3.
F0025 had slightly higher objective coding gain.
A 3x2 layout of the cubemap is used, with padding around 3x1 regions. The padding is done by extending the rectangular faces, not by duplicating samples from spatially adjacent faces.
To compensate for the increased number of samples used for the padding without increasing the total coded picture size, the number of active samples is reduced.
The padding was 4 luma samples wide.
Blending is not used for the duplicated regions. The padding areas are discarded after decoding.
It was remarked that padding should mainly be useful with regard to subjective boundary artefacts. In discussions it was noted that geometry-aware operation of motion compensation could be an alternative to padding.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3180" JVET-G0024 InterDigital's Response to the 360° Video Category in Joint Call for Evidence on Video Compression with Capability beyond HEVC [P. Hanhart, X. Xiu, F. Duanmu, Y. He, Y. Ye (InterDigital)]
Discussed Thursday 15:30 (GJS & JRO).
This proposal is the response to the joint Call for Evidence (CfE) on video compression with capability beyond HEVC by InterDigital in the 360° video category.
Four technologies are highlighted:
Hybrid cubemap projection (similar in concept to ACP, with a generalization of the warping parameters) without padding, with a 3x2 layout slightly different than 360Lib default in order to facilitate prediction order. The encoder can select an appropriate set of projection format parameters to code the input video content.
Intra prediction using face continuity: reference sample derivation for intra prediction is modified to consider the spherical nature of the 360º video;
Inter prediction with geometry padding (which would involve limiting the MV range to prevent excessive padding unless implemented on the fly in the decoder – currently allowing half the face width/height of off-picture region): reference sample derivation for inter prediction is modified to consider the spherical nature of the 360º video;
In-loop filtering with face continuity: spherical neighbours are used to perform in-loop filtering.
Parameters of HCP (one for horizontal and one for vertical correction, controlling a second-order polynomial) are determined per face. The parameters are adapted for each IRAP and subsequent GOP; the criterion is minimization of the squared sample error between the original and reconstructed ERP (an iterative algorithm with convergence after 3–4 iterations).
It was suggested to possibly show some video examples later during the week to get some understanding about variation of parameters.
Average bit rate saving was reported as 31% compared to HM anchors, 11.5% compared to JEM (further detailed below). Subjective results reportedly indicate less visibility of face boundaries.
For padding and loop filtering, the decoder requires knowledge about the projection. The padding extends each face by half of its size.
The table below reports the BD-rate of the response compared to the HM 360° anchors.
Comparison of the response with the HM 360° anchors in terms of coding performance.
Sequence           E2E WS-PSNR Y   E2E WS-PSNR U   E2E WS-PSNR V
SkateboardInLot    −39.64%         −63.18%         −70.08%
Chairlift          −45.95%         −66.27%         −60.73%
KiteFlite          −23.02%         −56.60%         −63.16%
Harbor             −25.92%         −56.11%         −56.50%
Trolley            −21.56%         −44.27%         −50.52%
Average            −31.22%         −57.29%         −60.20%
Comparison with the JEM-based 360° anchors
The table below reports the BD-rate of the response compared to the JEM 360° anchors. As reported in the table, the response can effectively improve the coding performance of the current JEM by providing an average BD-rate savings of 11.5% based on E2E WS-PSNR Y. The maximum bit rate savings of the response can reach 22% (sequence Chairlift).
Comparison of the response with the JEM 360° anchors in terms of coding performance.
Sequence           E2E WS-PSNR Y   E2E WS-PSNR U   E2E WS-PSNR V
SkateboardInLot    −15.38%         −25.27%         −24.29%
Chairlift          −22.06%         −17.77%         −17.60%
KiteFlite          −6.35%          −9.09%          −14.69%
Harbor             −7.41%          −12.71%         −12.91%
Trolley            −6.41%          −6.88%          −11.25%
Average            −11.52%         −14.35%         −16.15%
ACP is a special case of the HCP scheme.
The parameters of the warping were calculated from the first picture of each IRAP segment using an optimization to minimize the conversion error.
In the discussion, it was commented that the interaction with tiles should be considered, as the alignment of tile boundaries with the faces may be desirable.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3223" JVET-G0025 Samsung's response to Joint CfE on Video Compression with Capability beyond HEVC (360° category) [E. Alshina, K. Choi, V. Zakharchenko, S. N. Akula, A. Dsouza, C. Pujara, K. K. Ramkumaar, A. Singh] [late]
Discussed Thursday 16:15 (GJS & JRO).
The interior of the encoding/decoding processes is replaced by a general-purpose coding engine described in JVET-G0029 (a CfE response in the SDR category). This contribution focuses on the projection mapping.
The contribution uses a new layout for compact icosahedral projection and doesn't use any 360°-specific changes at the block level.
The projection mapping has two aspects:
A new layout for compact icosahedral projection (ISP), with padding of 8 luma samples on vertical & horizontal discontinuous boundaries and 12 luma samples on diagonal discontinuous boundaries. No special padding was used around the exterior picture boundaries.
Sphere rotation prior to conversion to ISP.
On average, 30% (Y), 43% (Cb) and 47% (Cr) BD-rate gain over the HEVC CfE anchor was reported (as a mixture of the effects of the projection mapping and the general-purpose coding engine modifications).
Compared to the JEM CfE anchor, the benefit was estimated roughly to be 7%, split roughly into 2% from the coding engine and 5% from the projection mapping changes.
BD-rate performance relative to the HM 360° CfE anchor
Test sequence          E2E WS-PSNR BD-rate
                       Y        Cb       Cr
Trolley                −21.2%   −28.0%   −38.7%
Skateboarding_in_lot   −36.4%   −45.5%   −55.9%
Chairlift              −41.9%   −51.3%   −43.0%
KiteFlite              −22.8%   −42.1%   −52.0%
Harbor                 −26.5%   −45.9%   −46.9%
Average                −29.8%   −42.6%   −47.3%
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3182" JVET-G0026 Polyphase subsampling applied to 360-degree video sequences in the context of the Joint Call for Evidence on Video Compression [A. Gabriel, E. Thomas (TNO)]
Discussed Thursday 16:45 (GJS & JRO).
Bitstreams were not submitted for this contribution. The proponent said that the results were varying substantially from sequence to sequence, so there is some lack of maturity in the technology at this point.
This contribution presents an analysis of the compression efficiency of 360° video sequences transformed via the polyphase subsampling technique presented in JVET-B0043. This may be conceptually similar to the H.261 Annex D graphics sampling scheme.
This proposed technique decomposes the input video signal (luma and chroma components) into lower-resolution descriptions using polyphase subsampling. The multiple lower-resolution versions of the signal are then encoded in the same video bitstream via temporal multiplexing. This enables the decoder to select and decode the appropriate number of resolution components for the desired output resolution. Polyphase subsampling without low-pass filtering is known to potentially create artefacts such as aliasing when high spatial frequency is present in the signal.
The first reported experiment compares 360 4K video sequences encoded with HM against the polyphase-subsampled sequences encoded with HM. The results reportedly show that the best Y-PSNR BD-rate impact is −67.7% for the sequence SkateboardingInLot, while the worst is 86.9% for the sequence Trolley; the average Y-PSNR BD-rate impact over all sequences is 14.3%. The second reported experiment is a simulcast comparison, i.e. 360 4K video sequences encoded with HM plus the 360 video sequences downscaled to HD and encoded with HM, against the polyphase-subsampled sequences encoded with HM. The results reportedly show that the best Y-PSNR BD-rate impact is −76.50% for the sequence SkateboardingInLot, while the worst is 31.30% for the sequence Trolley; the average Y-PSNR BD-rate impact over all sequences is reportedly −19.10%.
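The decomposition can be sketched directly: a 2×2 polyphase split of a frame (stored here as a list of rows) into four quarter-resolution phases, which would then be temporally multiplexed into one bitstream. No low-pass filter is applied before subsampling, which is what makes aliasing possible.

```python
def polyphase_split(frame):
    """2x2 polyphase decomposition: four quarter-resolution phases,
    taken on the interleaved sampling grids (no low-pass filtering)."""
    phases = []
    for dy in (0, 1):
        for dx in (0, 1):
            phases.append([row[dx::2] for row in frame[dy::2]])
    return phases

def polyphase_merge(phases):
    """Reassemble the full-resolution frame from all four decoded phases
    (a decoder wanting only quarter resolution would keep just one)."""
    p00, p01, p10, p11 = phases
    h, w = 2 * len(p00), 2 * len(p00[0])
    frame = [[0] * w for _ in range(h)]
    for dy, dx, p in ((0, 0, p00), (0, 1, p01), (1, 0, p10), (1, 1, p11)):
        for j, row in enumerate(p):
            for i, v in enumerate(row):
                frame[2 * j + dy][2 * i + dx] = v
    return frame
```

Since the four phases replace each source frame with four coded pictures, effects such as the quadrupled I-frame frequency noted in the discussion follow directly from this structure.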
It was remarked that some of the inconsistency in the results may be from bit rate matching effects.
A participant highlighted that the measured performance is reported from measurements made directly in the ERP domain, not considering the spherical domain.
It was commented that since the I frame refresh is not altered to compensate for the 4 coded pictures used per high-resolution frame, the I frame frequency is effectively four times higher than for the anchors, which could adversely affect the results. The QP variation would also affect different spatial positions differently.
The contributor remarked that the encoding time was faster with the scheme than without it. A participant commented that this may be due to the motion search early termination. Each MV value is effectively doubled.
It was commented that the aliasing in the sampling structure might have subjective quality issues.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3200" JVET-G0028 InterDigital's Response to the SDR Category in Joint Call for Evidence on Video Compression with Capability beyond HEVC [X. Xiu, Y. He, Y. Ye (InterDigital)]
Discussed Thursday 17:45 (GJS & JRO).
This is basically a simplification proposal relative to the JEM.
The goal of this response is to provide evidence of compression efficiency improvement over the HEVC Main 10 Profile for standard dynamic range (SDR) content. The response is based on the Joint Exploration Model (JEM) 6.0 reference platform with a couple of simplifications of the bi-directional optical flow (BIO) coding tool. Compared to the HM-based SDR anchors, it reportedly achieves on average a 34.23% BD-rate reduction, with 9.9× and 8.7× increases in encoding and decoding time. Compared to the JEM-based SDR anchors, it reportedly shows a 0.14% BD-rate increase with 8% and 25% reductions in encoding and decoding time, respectively.
Comparison with the HM-based SDR anchors
The table below shows the BD-rate and complexity performance of the proposed solution compared to the HM-based SDR anchors.
Comparison of the proposal with the HM-based SDR anchors in terms of both coding performance and encoding/decoding complexity
Sequence         Y        U        V        Encoding  Decoding
Class A
Crosswalk1       −37.80%  −43.84%  −47.46%  1120%     1098%
FoodMarket3      −34.86%  −46.58%  −48.91%  883%      963%
Tango1           −36.28%  −55.06%  −49.63%  1302%     1088%
CatRobot1        −40.01%  −52.31%  −45.41%  1045%     1096%
DaylightRoad1    −40.64%  −53.53%  −38.09%  1123%     991%
BuildingHall1    −33.29%  −41.42%  −46.50%  654%      865%
ParkRunning2     −31.54%  −26.14%  −29.23%  1193%     1068%
CampfireParty    −37.90%  −35.74%  −56.57%  1888%     765%
Class B
BQTerrace        −30.37%  −50.58%  −61.19%  759%      925%
RitualDance      −27.66%  −38.00%  −41.68%  1692%     907%
Timelapse        −26.70%  −61.40%  −67.03%  720%      939%
BasketballDrive  −32.04%  −46.87%  −43.39%  1468%     996%
Cactus           −35.87%  −49.10%  −45.11%  1140%     1010%
Average          −34.23%  −46.20%  −47.71%  1091%     971%
Comparison with the JEM-based SDR anchors
The table below shows the BD-rate and complexity performance of the proposed solution compared to the JEM-based SDR anchors. As shown in the table, the proposed solution can effectively reduce the complexity of the current JEM, providing 8% and 25% reductions in encoding and decoding time, respectively. The maximum reduction of the decoding time reaches 40% (sequence Timelapse). In terms of compression performance, the proposed solution has a minor impact, with an average 0.14% BD-rate increase.
Comparison of the proposal with the JEM-based SDR anchors in terms of both coding performance and encoding/decoding complexity
Sequence            Y       U        V        Encoding  Decoding
Class A
  Crosswalk1       0.17%    0.20%    0.22%    93%       75%
  FoodMarket3      0.06%    0.02%    0.13%    92%       76%
  Tango1           0.14%   -0.06%   -0.07%    93%       73%
  CatRobot1        0.43%    0.17%    0.14%    91%       69%
  DaylightRoad1    0.17%   -0.04%   -0.01%    91%       82%
  BuildingHall1    0.08%   -0.01%   -0.05%    92%       66%
  ParkRunning2     0.04%    0.03%    0.03%    89%       81%
  CampfireParty    0.04%    0.00%    0.01%    93%       83%
Class B
  BQTerrace        0.07%   -0.14%    0.00%    91%       79%
  RitualDance      0.14%   -0.16%   -0.04%    92%       84%
  Timelapse        0.13%   -0.19%   -0.76%    92%       60%
  BasketballDrive  0.15%    0.02%   -0.08%    91%       82%
  Cactus           0.15%   -0.06%   -0.09%    90%       64%
Average            0.14%   -0.02%   -0.04%    92%       75%
It was asked whether it is proposed to modify the JEM in response to this. The proponent said this was not proposed to be done for purposes of this meeting.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3162" JVET-G0029 Samsung's response to Joint CfE on Video Compression with Capability beyond HEVC (SDR category) [E. Alshina, K. Choi (Samsung)]
Discussed Thursday 18:00 (GJS & JRO).
This is basically a simplification proposal relative to the JEM.
The primary goal of this contribution is to demonstrate the possibility of achieving higher performance gain using a smaller number of coding tools compared to JEM 6.0. On average, 36% (Y), 42% (Cb) and 44% (Cr) BD-rate gain over the CfE anchor was achieved.
For this CfE response, some JEM tools were disabled and others were replaced or modified. The changes are relatively small, since the goal of this contribution is just to illustrate the video codec design principles Samsung applies in preparing its CfP response. In choosing tools to replace or modify, the proponents tried to resolve several issues that make the JEM unfriendly and impractical. In particular:
Initialization for context models (2.6.3 in [3])
Instead of slice-type- and QP-determined initialization, the initial probability states of context models for inter-coded slices can (but need not always) be initialized by copying states from previously coded pictures. The map of contexts to be alternatively initialized is signalled in the slice header, and the encoder must decide on the context initialization. The overhead associated with signalling this map is not critical under JVET test conditions (slice size equal to picture), but it becomes quite heavy for typical slice sizes. These aspects make this particular part of the JEM impractical.
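A minimal sketch of the selection mechanism described above (all names hypothetical; the real JEM CABAC state copying operates on probability state variables, not plain integers):

```python
# For each context model, the slice-header map selects either the default
# (slice-type/QP-derived) initial state or a state stored from a previously
# coded picture. One map entry per context model.
def init_contexts(default_states, stored_states, copy_map):
    return [stored if use_copy else default
            for default, stored, use_copy in
            zip(default_states, stored_states, copy_map)]
```

The signalling cost is one flag per mapped context per slice, which is why the overhead grows with the number of slices per picture.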
Bi-directional optical flow (2.3.8 in [3])
Motion refinement at the sample level is overkill for the majority of high-resolution videos (2K, 4K and higher). Performing motion vector refinement for 4×4 clusters of samples actually leads to a performance improvement for the high-resolution classes (A1, A2, B), but for low resolution (classes C and D in the JVET common test conditions) the performance drop is 0.3–0.4% (EE2 results) [4]. Clustering of samples in BIO reduces the number of calculations. We support this reasonable approach, and demonstrated an option of clustering without performance degradation [5].
Four-tap intra interpolation filter, Position dependent intra prediction combination, Adaptive reference sample smoothing (2.2.2, 2.2.5 and 2.2.6 in [3])
Suppose one uses a 3-tap smoothing filter on the intra prediction reference and then applies a bilinear filter for the fractional-position calculation. These two operations can be combined and represented as a single 4-tap interpolation filter for intra prediction [6]. Doing so would reduce the overall latency in intra prediction and make the intra design cleaner. If multiple 4-tap interpolation filters are part of the JEM design anyway, then it is reasonable to merge reference sample smoothing with the intra interpolation filter. By taking only the subset of reference smoothing filters from the ARSS and PDPC designs, one can keep the memory for interpolation filter coefficient storage at a reasonable size.
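The merging argument can be illustrated numerically: convolving a 3-tap smoothing kernel with a 2-tap bilinear kernel yields a single 4-tap filter that produces the same output in one step. The coefficients below are illustrative assumptions, not the JEM filters:

```python
import numpy as np

# A [1, 2, 1]/4 smoothing filter followed by bilinear interpolation at
# fractional phase f/32 collapses into one 4-tap filter: the combined kernel
# is simply the convolution of the two kernels.
def combined_4tap(f):
    smooth = np.array([1.0, 2.0, 1.0]) / 4.0   # 3-tap smoothing kernel
    bilin = np.array([32.0 - f, f]) / 32.0     # 2-tap bilinear kernel, phase f/32
    return np.convolve(smooth, bilin)          # equivalent 4-tap kernel

ref = np.array([100.0, 120.0, 90.0, 110.0, 105.0])  # toy reference samples
f = 8                                               # phase 8/32 = 1/4 sample

# Two-stage: smooth the reference, then interpolate between smoothed samples.
smoothed = np.convolve(ref, [0.25, 0.5, 0.25], mode='valid')
two_stage = (32 - f) / 32.0 * smoothed[0] + f / 32.0 * smoothed[1]

# One-stage: apply the combined 4-tap kernel to the raw reference directly.
one_stage = float(np.dot(combined_4tap(f), ref[0:4]))
```

Both paths produce the identical prediction value, which is the basis of the latency argument above.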
Small tools
Some tools in JEM6.0 have negligible performance impact under CfE test conditions (tools below the horizontal line in Fig. 1). The functionality associated with those tools was not included in this CfE response. By disabling weak tools we do not claim those tools are useless; we just express a preference for having a reasonable number (at most 20) of strong tools in a standard rather than including multiple weak tools with duplicated functionality.
CfE response performance vs CfE anchor (matched bit rates).
Resolution  Sequence         BD-rate Y  U        V
4K          CrossWalk1       -38.8%     -34.5%   -38.8%
            FoodMarket3      -35.9%     -42.3%   -44.8%
            Tango1           -37.6%     -49.4%   -43.9%
            CatRobot1        -41.6%     -53.6%   -45.5%
            DaylightRoad1    -42.2%     -51.5%   -33.8%
            BuildingHall1    -35.1%     -39.4%   -45.5%
            ParkRunning2     -35.3%     -20.6%   -24.1%
            CampfireParty    -43.4%     -33.2%   -58.4%
2K          BQTerrace        -31.9%     -42.5%   -54.3%
            RitualDance      -28.7%     -34.3%   -37.9%
            TimeLapse        -29.2%     -61.8%   -65.7%
            BasketballDrive  -33.7%     -43.6%   -39.1%
            Cactus           -37.3%     -44.5%   -40.4%
4K All                       -38.8%     -40.6%   -41.8%
2K All                       -32.2%     -45.3%   -47.5%
All                          -36.2%     -42.4%   -44.0%
In the discussion, a participant commented that we should think about what we call a "tool", and consider that small things can sometimes add up. Simplicity and straightforward design is a goal, and sometimes one "tool" can have many complicated little elements, etc.
Anchor generation
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3167" JVET-G0066 Viewpaths for the CfE VR sequences [M. Wien (RWTH), J. Boyce (Intel), M. Zhou (Broadcom)]
Discussed Friday 11:30 (GJS & JRO).
(Intentionally uploaded late to avoid disclosing the viewport selection to proponents in advance.)
This document presents proposed viewports for subjective evaluation of the 360° video sequences submitted to the Joint Call for Evidence on Video Compression with Capability beyond HEVC (CfE) in JVET. The paths have been determined based on viewing of the 360° video sequences on a head-mounted display. It was suggested that a unidirectional viewport is not well suited to cover the content of the sequences. Instead, paths with multiple corner points specifically selected for each of the 360° video sequences are proposed.
It was discussed whether the viewports should use bilinear or Lanczos interpolation. By default, the stand-alone conversion software uses Lanczos-3 (6 tap) for luma and Lanczos-2 (4 tap) for chroma, and after discussion it was agreed to just use that.
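For reference, Lanczos-a interpolation uses 2a windowed-sinc taps per output sample, which is where the 6-tap (a = 3) and 4-tap (a = 2) figures come from. A sketch of the tap computation (not the 360Lib implementation; the tap positioning convention is an assumption):

```python
import numpy as np

# Lanczos-a kernel: L(x) = sinc(x) * sinc(x / a) for |x| < a, zero outside.
# For a fractional sample at phase p in [0, 1), the 2a contributing integer
# samples sit at offsets (-a+1 .. a) relative to the left neighbour.
def lanczos_taps(a, phase):
    x = np.arange(-a + 1, a + 1) - phase   # 2a tap positions
    w = np.sinc(x) * np.sinc(x / a)        # windowed-sinc weights
    return w / w.sum()                     # normalize to unit DC gain

taps6 = lanczos_taps(3, 0.5)  # 6 taps at the half-sample position (luma)
taps4 = lanczos_taps(2, 0.5)  # 4 taps at the half-sample position (chroma)
```

At integer phases the kernel degenerates to a single unit tap, so full-sample positions pass through unchanged.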
It was agreed to use these viewports for the current meeting, and discuss whether to refine them for further work.
It was agreed that proponents should provide software to remap their decoded video back to 8K ERP, and all content will then be mapped the same way from 8K ERP for the dynamic viewport generation.
A participant commented that the viewport movement may be a bit more rapid than what is ideal.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3192" JVET-G0085 Information on CfE anchor generation for HDR content [A. K. Ramasubramonian, D. Rusanovskyy (Qualcomm), E. François (Technicolor), F. Hiron, J. Zhao, A. Segall (Sharp)]
no need to be presented, just for information
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3201" JVET-G0093 AHG4: SDR anchor generation for Joint Call for Evidence by Qualcomm [H.-C. Chuang, J. Chen, M. Karczewicz]
no need to be presented, just for information
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3228" JVET-G0117 AHG4: SDR anchor generation for Joint Call for Evidence by Samsung [K. Choi, E. Alshina (Samsung)] [late]
no need to be presented, just for information
Results
Results were presented Wednesday morning. Will be made available in output doc G1005. No objective results will be included in that output doc.
For subjective tests in CfP
SDR HD:
- Remove TimeLapse, add downsampled FoodMarket (a different part than in UHD)
SDR UHD
- Remove CrossWalk, Tango, Building Hall
HDR PQ HD
- test sequences OK
HDR HLG 4K (potentially added by next meeting):
- should be at least 3 sequences for establishing another class
- coded bitstreams (NHK and Sony sequences) should be sent to Vittorio prior to Macao
- It still needs to be clarified whether it is possible (given the labs' equipment) to run 4K 1000-nit content.
360 Video
- Test sequences OK, but further review in BoG if more might be useful.
- Possible solution that proponents submit the MD5 checksum of the 8K reconstruction before getting viewport
- To be clarified: 
Some alignment of rates seems necessary, work plan.
BoG on 360 and HDR are mandated to further work out constraints of those test scenarios.
Colour remapping: frame-dependent remapping should be allowed, but not as fixed or sequence-dependent pre-/post-processing
Exploration experiments (47)
General (1)
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3254" JVET-G0010 Exploration Experiments on Coding Tools Report [E. Alshina, L. Zhang]
Discussed Friday 09:00 (GJS & JRO).
Four experiments on coding tools were agreed to be carried out between the JVET-F and JVET-G meetings in order to get a better understanding of technologies considered for inclusion in the next version of JEM/360Lib, and analyse and verify their performance, complexity and interaction with existing JEM/360Lib tools. This report summarizes the status of each experiment.
EE1: Intra prediction (25)
General
Discussed Friday 09:00 (GJS & JRO).
From the summary report JVET-G0010, a brief description of the technology:
Four JEM technologies are modified in the listed contributions:
The first tested technology is unequal-weight planar prediction (UW-Planar) and unequal-weight 66-angular prediction (UW-66), which replace the JEM planar and mode-66 predictions. In some sense, UW prediction emulates a combination of intra prediction with PDPC on top.
The second technology affected by tests in this EE is Adaptive Reference Sample Smoothing (ARSS). Constrained ARSS and explicit signalling of the ARSS flag (instead of hiding it in the parity of transform coefficients) are proposed.
The third JEM technology affected by the proposed changes is Position Dependent Prediction Combination (PDPC). Among the different variants of constrained PDPC, the following were selected for EE testing:
Constrained PDPC for large CUs with at least 2 non-zero coefficients (PDPC-L): the PDPC flag is not signalled and PDPC is not applied if the block is 4×4, 8×4 or 4×8, or if the number of non-zero luma transform coefficients in the CU is 1 or 0.
If an intra CU uses planar mode, then PDPC is always applied without additional signalling (P-PDPC); the effect is like UW-Planar.
If an intra CU uses 66-angular prediction, then PDPC is never applied, without additional signalling (66-PDPC).
The fourth JEM technology affected by the proposed changes is the Non-Separable Secondary Transform (NSST). Currently a CU with ARSS can use NSST, but a block with PDPC cannot. In the variant to be tested, NSST is allowed in combination with PDPC.
Questions recommended to be answered during the EE tests, checking the performance/complexity impact of the individual proposed changes:
[Q] Check unequal weight Planar and 66-angular modes gain in absence of PDPC (test 1 vs test 8);
[A] Based on JVET-G0114 results, unequal-weight predictions capture 0.6% of the 0.8% PDPC gain in all intra and 0.3% of the 0.4% gain in random access configurations, without signalling and encoder decisions for the PDPC index.
[Q] Check the gain if unequal weight prediction replaces PDPC for Planar and 66-angular modes in absence of ARSS (test 2 vs test 9);
[A] Based on JVET-G0113 and JVET-G0077 results, Test 2 vs. Test 9 shows a gain; the BD-rate is -0.15%/-0.1%/-0.1% (Y/U/V) in "all intra", and -0.09%/0.0%/-0.2% (Y/U/V) in "random access".
[Q] Check the drop of PDPC performance if it is applied only in combination with Planar (no PDPC for other modes) in absence of ARSS (test 7 vs test 9);
[A] Based on JVET-G0113 and JVET-G0104 results, Test 7 vs. Test 9 shows a drop of 0.1%/0.1%/0.1% (Y/U/V) in all intra, and 0.1%/0.2%/0.0% (Y/U/V) in random access. The encoder runtime for Test 7 is ~20% faster than Test 9 (no data from the same cluster).
So, PDPC-P provides 0.6% of the 0.8% (AI) and 0.3% of the 0.4% (RA) current PDPC gain in the absence of ARSS.
Note from clarification during discussion: PDPC-P (synonymous with P-PDPC) stands for a PDPC variant used always for planar mode, over all block sizes.
[Q] Check the gain and complexity of PDPC for large blocks (PDPC-L) only in absence of ARSS (test 4 vs test 7);
[A] When ARSS is off, the gain of PDPC-L is 0.58% and 0.39% with ~30% and ~6% encoder running time increase for AI and RA, respectively (runtime from the same cluster).
Note from clarification during discussion: PDPC-L is applied for all block sizes with 64 or more luma samples, and is not used in the case of planar mode. The PDPC of the current JEM is used for all block sizes and also includes planar mode.
[Q] Check the gain and complexity of NSST and PDPC combination allowed for PDPC-L and P-PDPC in absence of ARSS (test 3 vs test 4);
[A] Additional gain of NSST and PDPC combination with P-PDPC and PDPC-L on, ARSS off, is 0.05%(AI) and 0.04%(RA) with 6% and 2% encoder running time increase.
[Q] Check performance and complexity impact if PDPC is not used for 66-angular mode (test 5 vs test 4);
[A] Performance and complexity are almost the same.
[Q] Check the speed-up of ARSS if flag is explicitly signaled but not hidden in the parity of transform coefficients (test 6)
[A] 0.1% loss with 6% encoder running time decrease.
For comparison, the ARSS (JEM6.0) gain is 0.33% and 0.15% with 41% and 4% encoder run-time increases (AI and RA). ARSS with explicit signalling would give 0.24% with 36% and 4% encoder run-time increases (AI and RA).
[Q] Check the performance and encoding time reduction for UWP and UW66 with PDPC for other modes, ARSS constrained as in F0024, and explicit ARSS flag signalling as in F0055 (test 10);
[A] Coding gain of 0.07% with 20% encoder running time saving under AI configuration. 
[Q] Proponents are requested to provide estimation for memory size needed for their test implementation compared to memory size JEM6.0 PDPC uses.
[A] The required memory size for UWP, PDPC-L, P-PDPC is summarized in the table below.
Memory usage of coding tools tested in EE1
Tool             Memory size
PDPC in JEM6.0   8400 bits (= 5 × 35 × 6 × 8 bits)
UWP              1260 bits (= 126 × 10 bits)
P-PDPC            240 bits (= 5 × 1 × 6 × 8 bits)
PDPC-L           8400 bits (with PDPC memory for planar mode included, same as the PDPC in JEM6.0)
Summary: The definitions of EE tests (10 defined in EE descriptions plus 4 new tests), documents, cross-check reports as well as overall coding performance for available test conditions and for each test required by EE1 are tabulated. In addition, the coding gain of luma component and encoder running time ratio compared to JEM6 are depicted in the figure below.
Graphical summary of EE1 test results of coding gains and encoder complexity under AI configuration (negative is bad, positive is good)
In the discussion, it was commented that the Test 4.1 combination seems promising, as it saves runtime, has no loss, and appears as a knee in the curve. A further complexity reduction is available in Test 2.1 and Test 7. The AI encoder is very slow (slower than RA), and would benefit from simplification. (This assumes that the runtime measurements are valid.)
Test 7 provides a significant reduction in encoder run time, and the loss in compression efficiency is acceptable as a tradeoff. This would also bring the encoder runtime of AI closer to RA.
Decision: Adopt EE1 Test 7 into JEM7, i.e., remove ARSS and replace the current PDPC by P-PDPC (PDPC applied unconditionally for planar mode).
Summary of EE1 tests
Test 1 (JVET-G0078, cross-check JVET-G0114): UWP+UW66, PDPC off, ARSS on.
  Instead of PDPC (switchable on/off at block level), UWP and UW66 are used; these two modes emulate the effect of PDPC for planar and mode 66. Compares the performance-complexity trade-off of PDPC (Test 8) against UWP+UW66 alone (Test 1).
  Y BD-rate (enc/dec time): AI 0.1% (83%/100%)
Test 2 (JVET-G0077, cross-check JVET-G0142): UWP+UW66, PDPC for other modes, ARSS off.
  Compare to Test 9 to see whether UWP and UW66 efficiently emulate the PDPC effect for planar and mode 66.
  Y BD-rate (enc/dec time): AI 0.2% (66%/101%)
Test 2.1* (JVET-G0080, cross-check JVET-G0091): UWP+UW66, PDPC for other modes, ARSS off, PDPC-L.
  Compared to Test 2, a performance gain with similar encoding run time is expected; the goal is to show that PDPC-L provides coding gain on top of UWP and UW66.
  Y BD-rate (enc/dec time): AI 0.1% (66%/98%)
Test 3 (JVET-G0104, cross-check JVET-G0115): P-PDPC, PDPC-L, ARSS off, NSST+PDPC.
  Compare to Test 4 to see the gain from an unrestricted combination of NSST and PDPC.
  Y BD-rate (enc/dec time): AI -0.2% (80%/101%); RA -0.1% (97%/99%)
Test 4 (JVET-G0104, cross-check JVET-G0086): P-PDPC, PDPC-L, ARSS off.
  Compare to Test 7 to see the gain of PDPC-L with P-PDPC and ARSS off.
  Y BD-rate (enc/dec time): AI -0.1% (75%/100%); RA -0.1% (95%/99%)
Test 4.1* (JVET-G0104): Test 4 with the number of intra RD checks at the encoder decreased by 1.
  A different encoder complexity point for Test 4.
  Y BD-rate (enc/dec time): AI 0.0% (69%/100%)
Test 4.2* (JVET-G0104): Test 4 with the number of intra RD checks at the encoder increased by 1.
  A different encoder complexity point for Test 4.
  Y BD-rate (enc/dec time): AI -0.2% (82%/100%)
Test 5 (JVET-G0104, cross-check JVET-G0141): P-PDPC, 66-PDPC, PDPC-L, ARSS off.
  Further encoder run-time reduction compared to Test 4.
  Y BD-rate (enc/dec time): AI -0.1% (76%/100%); RA -0.1% (95%/100%)
Test 6 (JVET-G0104, cross-check JVET-G0069): PDPC on, ARSS explicitly signalled as in F0055.
  Check the speed-up of ARSS if the flag is explicitly signalled rather than hidden in the parity of transform coefficients (no hiding, no multi-pass RDOQ) by comparing Test 6, Test 9 and JEM6.0.
  Y BD-rate (enc/dec time): AI 0.1% (94%/99%); RA 0.0% (99%/99%)
Test 7 (JVET-G0104, cross-check JVET-G0086): P-PDPC, ARSS off.
  How much of the PDPC gain is retained by P-PDPC alone? Should be faster than Test 9, but some drop is expected.
  Y BD-rate (enc/dec time): AI 0.5% (55%/100%); RA 0.3% (89%/100%)
Test 8 (JVET-G0113, cross-check JVET-G0069): PDPC off, ARSS on.
  Benchmark showing PDPC performance/complexity.
  Y BD-rate (enc/dec time): AI 0.7% (87%/100%); RA 0.4% (99%/104%)
Test 9 (JVET-G0113, cross-check JVET-G0069): PDPC on, ARSS off.
  Benchmark showing ARSS performance/complexity.
  Y BD-rate (enc/dec time): AI 0.3% (71%/100%); RA 0.1% (96%/104%)
Test 10 (JVET-G0079, cross-check JVET-G0087): UWP+UW66, PDPC on for other modes, ARSS constrained as in F0024 but with explicit signalling as in F0055.
  ARSS gain with (in some sense) minimal encoding run time associated with ARSS; to be compared with Test 2.
  Y BD-rate (enc/dec time): AI -0.1% (80%/100%)
Test 10.1* (JVET-G0080, cross-check JVET-G0091): UWP+UW66, PDPC on for other modes, PDPC-L, ARSS constrained as in F0024 but with explicit signalling as in F0055.
  Compare to Test 2 to show that PDPC-L provides coding gain together with ARSS.
  Y BD-rate (enc/dec time): AI -0.1% (80%/100%)
*: newly added tests to EE1.
Primary (14)
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3184" JVET-G0077 EE1: UWP&UW66 with PDPC for other intra mode and ARSS off (Test2) [H. M. Jang, J. Lim, S.-H. Kim (LGE), K. Panusopone, S. Hong, Y. Yu, L. Wang (Arris)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3185" JVET-G0078 EE1 Test 1: UWP+UW66, PDPC off, ARSS on [K. Panusopone, S. Hong, Y. Yu, L. Wang (Arris)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3186" JVET-G0079 EE1 Test 10: UWP+UW66, PDPC on for other modes, ARSS constrained as in F0024 but explicit signalling as in F0055 [K. Panusopone, S. Hong, Y. Yu, L. Wang (Arris), H. M. Jang, J. Lim, S.-H. Kim (LGE)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3187" JVET-G0080 Additional EE1 Tests (Test 2.1: UWP+UW66, PDPC for other modes, PDPC-L, ARSS off, and Test 10.1: UWP+UW66, PDPC on for other modes, PDPC-L, ARSS constrained as in F0024 but explicit signalling as in F0055) [K. Panusopone, S. Hong, Y. Yu, L. Wang (Arris)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3213" JVET-G0104 EE1: Alternative setting for PDPC mode and explicit ARSS flag (tests 3-7) [M. Karczewicz, V. Seregin, A. Said, N. Hu, X. Zhao (Qualcomm)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3224" JVET-G0113 EE1 (Tests 8 and 9) Performance of RASS and PDPC in presence of other tools [E. Alshina (Samsung)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3170" JVET-G0069 EE1: Crosscheck of tests 6, 8 and 9 [V. Drugeon (Panasonic)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3193" JVET-G0086 EE1: Cross-check of test4 and test7 [J. Lee, H. Lee, J. Kang (ETRI)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3194" JVET-G0087 EE1: Cross-check of test10 [H. Ko, S.-C Lim, J. Kang (ETRI)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3198" JVET-G0091 EE1: Crosscheck of Additional EE1 Tests (Test 2.1 and Test 10.1) (JVET-G0080) [T. Ikai, Y. Yasugi (Sharp)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3225" JVET-G0114 EE1 Cross-check for Test 1 [E. Alshina (Samsung)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3226" JVET-G0115 EE1 Cross-check for Test 3 [E. Alshina (Samsung)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3256" JVET-G0141 EE1: Cross-check of JVET-G0104, test 5 [F. Racape, E. François, F. Le Léannec (Technicolor)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3257" JVET-G0142 EE1: Cross-check of JVET-G0077, test 2 [F. Racape, E. François, F. Le Léannec (Technicolor)] [late]
Related (11)
Contributions in this category were discussed Friday 14:45 (chaired by JRO and GJS).
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3163" JVET-G0062 EE1-Related: Harmonization UW Prediction method with improved PDPC [H. M. Jang, J. Lim, S.-H. Kim (LGE)]
Discussed Friday 14:45 (GJS & JRO).
This contribution proposes a harmonization method between improved PDPC (PDPC-L) and the two recently introduced modes, unequal-weight planar mode (UWP) and vertical diagonal mode (UW66). From the experimental results, it was reported that 0.18% luma coding gain is achieved along with 20% encoding complexity reduction for Test A, and 0.15% luma coding gain with 26% encoding complexity reduction for Test B, both in the AI configuration.
It was remarked that loss in chroma was observed.
The contribution would result in an encoder complexity reduction of 20% relative to JEM6 in AI, while providing a bit rate reduction in the range of 0.1–0.2%. Relative to the outcome of EE1, this is not a desirable operating point.
With the action taken to simplify the design in response to the EE per Test 7, the complexity range in which this scheme is operating appears to be higher than our current focus, and no interest was expressed by non-proponents, so no action was taken on this.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3238" JVET-G0126 EE1-Related: Crosscheck of JVET-G0062 on UW prediction fix [T. Ikai, Y. Yasugi (Sharp)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3169" JVET-G0068 Non EE1: Unified-PDPC: Unification of intra filters [M. Philippe, K. Sharman (Sony Europe)]
Discussed Friday 15:15 (GJS & JRO).
This contribution presents a unification of the reference sample filtering process in the JEM for intra prediction. A unified filter is presented as a replacement of RSAF (Reference Sample Adaptive Filtering), also known as ARSS (Adaptive Reference Sample Smoothing), filtering and of PDPC (Position Dependent intra Prediction Combination) filtering. Results reportedly show an encoding time speed-up of 16% on average for the AI configuration, without any impact on the overall luma BD-rate.
During the presentation it was suggested that the method could also be combined with PDPC-P. This would imply filtering more often (in cases where NSST is used), without additional signalling and without additional mode checks by the encoder. According to the results reported for combinations with other configurations, the expected bit rate reduction could be in the range of 0.1%.
The proponent suggested that a combination of the proposal with the Test 7 scheme could be used that should provide a coding efficiency benefit (due to applying the filtering more often) without adding substantial complexity (without additional signalling). However, test results for that combination were not available.
Several experts supported evaluating that alternative possibility in an EE on top of the adopted EE1 test 7 (if the EE is continued).
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3246" JVET-G0134 Non-EE1: Cross-check of JVET-G0068 on unification of intra filters (test 3.1.2) [V. Seregin (Qualcomm)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3188" JVET-G0081 Comparisons between UWP, W66 and Planar, Angular mode 66 under the same coding conditions [K. Panusopone, S. Hong, Y. Yu, L. Wang (Arris)]
Discussed Friday 15:45 (GJS & JRO).
This contribution compares UWP and W66 against planar and angular mode 66 under four coding settings. Simulation results show that the range of average luma BD-rate gains from UWP and W66 over planar and angular mode 66, under the same coding conditions, is from 0.1% to 0.6% for AI. The coding gains are larger (0.2% to 0.8% for AI) for the class A video test sequences. Little impact on processing time is reported when using UWP and W66 instead of planar and angular mode 66, in all coding conditions.
Runtime is expected to be the same as Test 7.
A new version of the presentation deck, as shown during the presentation of the proposal, should be uploaded.
The proponent suggested that a combination of the proposal with the Test 7 scheme could be used that should provide a coding efficiency benefit. However, test results for that combination were not available.
It was suggested to replace planar prediction by UWP (with P-PDPC on top of it, as per EE1 Test 7), and further to replace mode 66 by W66. Some concern was expressed that this would make the already most complex mode (planar + PDPC) even more complex (UWP, e.g., requires 2 multiplications per sample and some additional processing). Therefore, it would also be interesting to see the gains of W66 separately. The current results do not allow an exact prediction of the expected gains.
A participant commented that a more unified design would be preferable to avoid the multiple steps of processing, and that planar mode is already the most complex mode, which this further increases.
Some interest was expressed in evaluating that alternative possibility in an EE.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3233" JVET-G0122 Crosscheck of JVET-G0081 Comparisons between UWP, W66 and Planar, Angular mode 66 under the same coding conditions [Alexey Filippov, Vasily Rufitskiy (Huawei)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3217" JVET-G0107 Non-EE1: PDPC without a mode flag [V. Seregin, M. Karczewicz, A. Said, X. Zhao (Qualcomm)]
Discussed Friday 16:15 (GJS & JRO).
This contribution proposes a modified design of PDPC without flag signalling; instead, PDPC use is defined by the NSST index. Test results reportedly show 0.2% luma BD-rate loss and 42% encoder speed-up on average in the all-intra configuration.
The contribution proposes the coupling of PDPC with NSST index 1, and re-introduces MDIS with NSST index 3. With NSST index 0 (NSST off) and 2, neither PDPC nor MDIS is used. ARSS is not used at all.
Unlike EE1 Test 7, this uses NSST with all intra modes (not only with planar). 
As an interesting aspect, this proposal seems to indicate some interaction of PDPC/NSST that has not been investigated so far.
It was also questioned how useful it is to re-invoke MDIS.
It was commented that the scheme requires more table memory than the Test 7 scheme.
The proponent said this could improve coding efficiency relative to Test 7, recovering most of the gain that was lost by Test 7.
Some interest was expressed in evaluating that alternative possibility in an EE.
It was suggested to have some offline discussion to potentially identify other aspects to be studied in a relevant EE.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3272" JVET-G0154 EE1 related: Cross check of JVET-G0107 [M. Philippe, K. Sharman (Sony Europe)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3218" JVET-G0108 Non-EE1: Fix for strong intra smoothing filtering [V. Seregin, X. Zhao, M. Karczewicz (Qualcomm)]
Discussed Friday 16:50 (GJS & JRO).
This contribution proposes changing the strong intra smoothing for the case when the ARSS tool is disabled and MDIS is enabled. In the current JEM implementation, strong intra smoothing is implemented using a bit shift operation; however, a block can have a rectangular shape, so the shift operation is incorrect when the sum of the width and height of the block is not a power of two. In this contribution, the strong intra smoothing process is split into two parts, one associated with the width and the other with the height of the block, so that the bit shift operation can be kept. Test results reportedly show 0.3% BD-rate loss with 70% encoder running time for the AI configuration; the loss and the encoder speed-up are due to ARSS being disabled in the test.
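The width/height split can be sketched as follows for the top reference row (a hypothetical illustration with invented names, not the proposal's exact equations):

```python
# Strong smoothing linearly interpolates the reference samples between the
# corner and the far sample. Interpolating the top row over 2*width samples
# (and, analogously, the left column over 2*height) keeps the divisor a power
# of two for power-of-two block sides, so the division stays a bit shift even
# for rectangular blocks where width + height is not a power of two.
def strong_smooth_top(corner, top_right, width):
    shift = (2 * width).bit_length() - 1   # log2(2 * width), width a power of 2
    return [((2 * width - x) * corner + x * top_right + width) >> shift
            for x in range(1, 2 * width)]
```

With the single-shift formulation over width + height, a 32×16 block would need a division by 48, which no shift can express exactly; the split avoids this.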
It was commented that it seems questionable whether the strong intra smoothing filter is really needed. When introduced, it was for perceptual reasons, although it actually had some PSNR loss.
So two options seem worth considering: 1) getting rid of the filter, 2) using this adjustment of the denominator in it.
Interest was expressed in evaluating these avenues in an EE.
The contributor said that in the HM context, the filter seems visually helpful for the particular test sequence that had been used to justify its inclusion (a test sequence with a smooth gradient coded with a middle-range QP, such as 22–27). In the JEM context, the contributor did not identify a need for the filter when testing with that sequence.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3250" JVET-G0136 Non-EE1: Crosscheck of G0108 on strong intra smoothing filtering [T. Ikai (Sharp)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3263" JVET-G0146 EE1: Additional tests comparing UWP/UW66 with P-PDPC in EE1 tests [M. Karczewicz, V. Seregin, A. Said, N. Hu, X. Zhao (Qualcomm)] [late]
Discussed Friday 17:10 (GJS & JRO).
This information contribution provides additional test results based on the EE1 software by enabling the UWP and UW66 macros. It provides a comparison with the P-PDPC tests, where other aspects of the software are the same.
Suggested conclusions are:
- P-PDPC is better than UWP alone.
- UW66 provides gains on top of both UWP and P-PDPC, and it can be considered separately.
- The overall difference compared to Test 7 is marginal, both in terms of compression and runtime.
EE2: Decoder-side motion vector derivation (6)
General
Discussed Friday 10:00 (GJS & JRO).
From Summary Report JVET-G0010:
In JEM-6.0, BIO is performed during the regular motion compensation process as well as during the Overlapped Block Motion Compensation (OBMC) process for bi-directional prediction. In one aspect, this contribution proposes to exclude BIO from the OBMC process. In addition, BIO operates sample-by-sample in JEM-6.0; in another aspect of this contribution, 4x4 clustering is performed. Thirdly, the motion vector refinement is used for motion prediction of subsequent blocks. A supplementary part of this contribution also suggests using a fixed-point implementation instead of division for BIO. Test results for both the division and the fixed-point implementation are provided.
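The 4x4 clustering aspect can be sketched as follows (a deliberately simplified illustration: the actual JEM derivation accumulates gradient correlation terms per cluster before solving for the refinement, not the final vectors as done here). Instead of deriving a refinement motion vector per sample, one shared refinement is derived and applied per 4x4 block:

```python
def block_refinement(vx, vy, block=4):
    """Sketch of 4x4 clustering for BIO: average the per-sample refinement
    fields (vx, vy) over each block x block cluster and apply one shared
    refinement to all samples of the cluster. Simplified; JEM accumulates
    gradient correlations per cluster rather than averaging final vectors."""
    h, w = len(vx), len(vx[0])
    out_x = [[0.0] * w for _ in range(h)]
    out_y = [[0.0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            n = block * block
            mx = sum(vx[y][x] for y in range(by, by + block)
                     for x in range(bx, bx + block)) / n
            my = sum(vy[y][x] for y in range(by, by + block)
                     for x in range(bx, bx + block)) / n
            for y in range(by, by + block):
                for x in range(bx, bx + block):
                    out_x[y][x], out_y[y][x] = mx, my
    return out_x, out_y
```

The runtime benefit reported in the EE comes from doing the refinement derivation 16x less often, at the cost of a coarser motion field.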
Questions recommended to be answered during EE tests.
[Q]: What is performance and complexity effect of BIO and OBMC redesign (aspect 1)?
[A]: Without division, it is 4%/11% reduction in enc./dec. runtime with 0.18% performance drop (RA), and 2%/4% reduction in enc./dec. runtime with 0.05% performance drop (LDB). With division, it is 6%/14% reduction in enc./dec. runtime with 0.17% performance drop (RA), and 3%/4% reduction in enc./dec. runtime with 0.03% performance drop (LDB). 
[Q]: What is performance and complexity effect of computing motion vector refinement per 4x4 block (aspect 2)?
[A]: In Test 2 (aspect1 + aspect 2) without division, it is 8%/22% reduction in enc./dec. runtime with 0.18% performance drop (RA), and 3%/6% reduction in enc./dec. runtime with 0.01% performance drop (LDB). With division, it is 9%/23% reduction in enc./dec. runtime with 0.19% performance drop (RA), 3%/6% reduction in enc./dec. runtime with no performance impact (LDB). The contribution also reports when testing 4x4 BIO kernel alone, 5%/13% reduction in enc./dec. runtime (RA) with 0.02% performance drop, 1%/4% reduction in enc./dec. runtime (LDB) with 0.02% performance drop.
[Q]: What is the performance and complexity effect of using the motion vector refinement for prediction of subsequent MVs (aspect 3)?
[A]: In Test 3 (Test 2 + aspect 3) without division, it is 6%/10% reduction in enc./dec. runtime with 0.06% performance drop (RA), and 3%/6% reduction in enc./dec. runtime with 0.01% performance drop (LDB). With division, it is 6%/10% reduction in enc./dec. runtime with 0.06% performance drop (RA), and 2%/6% reduction in enc./dec. runtime without performance impact (LDB).
Summary: The majority of the performance and complexity change occurs in the RA configuration. For the version with division, aspect 1 provides a 4% reduction in encoding time and up to 14% reduction in decoding time, with a 0.2% performance drop. When combined with aspect 1, aspect 2 provides a 9% reduction in encoding time and 23% in decoding time. In combination with the other two aspects, aspect 3 provides 0.1% coding gain with a 6% reduction in encoding time and a 10% reduction in decoding time. For the LDB configuration, the runtime reduction is between 3% of encoding time and 6% of decoding time, with the performance impact within 0.05%. The division-free version of the software has a minor impact on coding efficiency (within 0.01%), with runtime differences within 1–2% compared to the division version. The proposed methods have relatively better results for high-resolution sequences. For class A1, A2 and B sequences, Test 1, Test 2 and Test 3 provide an average BD rate change of 0.1%, 0.1% and 0.0%, respectively.
Summary of EE2 tests
(Y-BD-rate; ET/DT = encoder/decoder time ratio relative to the anchor.)
Test 1: BIO and OBMC redesign (aspect 1), JVET-G0082. RA: 0.2% (ET 0.96, DT 0.89); LD: 0.0% (ET 0.98, DT 0.96). Cross-checks: JVET-G0094 (ETRI), JVET-G0116 (Samsung).
Test 2: Test 1 + computing motion vector refinement per 4x4 block (aspect 2), JVET-G0082. RA: 0.2% (ET 0.92, DT 0.78); LD: 0.0% (ET 0.97, DT 0.94). Cross-check: JVET-G0116 (Samsung).
Test 3: Test 2 + motion vector refinement for prediction of subsequent MVs (aspect 3), JVET-G0082. RA: 0.1% (ET 0.94, DT 0.90); LD: 0.0% (ET 0.97, DT 0.94). Cross-check: JVET-G0116 (Samsung).
In the discussion, Test 2 seemed like a good tradeoff.
Decision: Adopt EE2 Test 2 into JEM7.
It was further discussed that the aspect of replacing the division in BIO is irrelevant at this stage of development, as it does not have any impact on encoder/decoder runtime.
Regarding general aspects of EEs and other experiments, it was suggested to ordinarily report percentage BD-rate impacts with only one digit past the decimal point.
Decision: Agreed (to be reflected in templates, etc.).
Primary (3)
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3189" JVET-G0082 EE2: A block-based design for Bi-directional optical flow (BIO) [H.-C. Chuang, J. Chen, X. Li, Y.-W. Chen, M. Karczewicz, W.-J. Chien (Qualcomm)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3202" JVET-G0094 EE2: Cross-check of EE2 test1 (JVET-G0082) [H. Lee, J. Kang (ETRI)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3227" JVET-G0116 EE2: Cross-check for block-based BIO design [E. Alshina (Samsung)] [late]
Related (3)
Contributions in this category were discussed Friday 14th 1715-1745 (chaired by JRO and GJS).
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3190" JVET-G0083 EE2-related: A simplified gradient filter for Bi-directional optical flow (BIO) [H.-C. Chuang, J. Chen, K. Zhang, M. Karczewicz (Qualcomm)]
Discussed Friday 17:20 (GJS & JRO).
The presentation deck was requested to be uploaded.
This contribution presents a technique using a simplified gradient filter to achieve better trade-off between complexity and coding efficiency for BIO design. Three additional tests were also performed in combination with the three elements of EE2 proposal. For Test 4.1, 0.1% BD-rate saving with 6%/9% reduction in encoding/decoding time is reported for RA configuration, and 0.1% BD-rate saving with 2%/5% reduction in encoding/decoding time is reported for LDB configuration. For Test 4.2, 0.1% BD-rate increase with 9%/22% reduction in encoding/decoding time for RA configuration and 0.1% BD-rate saving with 2%/5% reduction in encoding/decoding time for LDB configuration were reported. For Test 4.3, 0.1% BD-rate saving with 5%/12% reduction in encoding/decoding time is obtained for RA configuration, and 0.1% BD-rate saving with 1%/2% reduction in encoding/decoding for LDB configuration.
It was remarked that the proposed scheme has more stages of processing (although fewer computations), whereas the current scheme is more parallel. More study of that issue is needed.
It was commented that SIMD implementation could give the runtime benefit described here.
It was remarked that implementation optimization is not really our focus at the exploration stage, so EE study of this change was not planned.
It was commented that OBMC does not use BIO MV refinement, which seems inconsistent with other aspects of the design. 
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3214" JVET-G0105 EE2-related: Crosscheck of A simplified gradient filter for Bi-directional optical flow (JVET-G0083) [M. Ikeda (Sony)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3251" JVET-G0137 EE2-related: Crosscheck of JVET-G0083 on gradient filter modification [T. Ikai (Sharp)] [late]
EE3: Adaptive QP for 360° video (8)
General
Discussed Friday 10:30 (GJS & JRO).
From Summary Report JVET-G0010:
The three listed contributions propose similar methods for encoding 360° video with the ERP projection by using adaptive QP, at the CTU level, based on the location of the CTU in the picture. The QP offset is calculated based on the WS-PSNR weight. It is asserted that the proposed methods are encoder change only.
In F0038, the QP at the equator is equal to the QP from the cfg file, whereas in F0049 and F0072 the QP is normalized by the WS-PSNR weighting formula, i.e. the QP is decreased by 2 at the equator.
In F0038 and F0072, the QP is calculated based on the central vertical position of each CTU, whereas in F0049 the average value of the weight for all height positions within the CTU is computed.
So these approaches have minor differences in the average QP value calculation, with no significant performance effect.
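The common idea behind these methods can be sketched as follows (a simplification under stated assumptions, not the exact proposal code): the WS-PSNR weight of an ERP row is the cosine of its latitude, and since lambda scales as 2^((QP-12)/3), scaling the distortion weight by w corresponds to a QP change of roughly -3*log2(w). The F0038-style anchoring (offset 0 at the equator) and the clipping near the poles are assumptions of this sketch; the proposals differ in how the weight is averaged over the CTU.

```python
import math

def ws_psnr_weight(row, height):
    """WS-PSNR weight of ERP row `row`: cosine of the latitude at the
    row centre (0.5-sample offset), as in the 360Lib WS-PSNR definition."""
    return math.cos((row + 0.5 - height / 2.0) * math.pi / height)

def ctu_qp_offset(ctu_centre_row, height):
    """Sketch of a CTU-level QP offset derived from the WS-PSNR weight.
    Since lambda ~ 2^((QP-12)/3), a distortion weight w maps to a QP
    change of about -3*log2(w); w = 1 at the equator gives offset 0.
    The clip value near the poles is an assumption of this sketch."""
    w = ws_psnr_weight(ctu_centre_row, height)
    return min(round(-3 * math.log2(w)), 15)  # assumed clip near the poles
```

At the equator the offset is 0 (the F0038 anchoring); toward the poles the weight shrinks, so the offset grows and bits are shifted away from the oversampled pole rows.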
JVET-G0070: Lambda in the SAO process has been adjusted.
JVET-G0106: No lambda changes.
It was noted that the lambda adjustment has some effect.
Questions recommended to be answered during EE tests: 
[Q] Compare the performance of 3 adaptive QP methods both for HM (with block level delta QP signaling) and JEM (with and without signaling QP per block). 
[A] This was not tested due to the improper support of delta QP in JEM.
[Q] Check the effect of equivalent QP adaptation for other formats (e.g., CMP). For JEM, a method could also be studied without signalling, where the QP adaptation is derived at the decoder from knowledge of the projection format.
[A] This was addressed by Test 5 (JVET-G0106); a -2.2% avg BD-rate gain for CMP was demonstrated.
[Q] Investigate rotations where higher-detail content is brought to the pole, for example a (0, -90) rotation (on top of HM only; both the anchor coded with fixed QP and the modification with QP adaptation should be rotated). Rotation shall be performed during the 8K to 4K downsampling of the ERP.
[A] This was addressed by Tests 1.1 and 2.1. The ERP was pre-rotated by 90 degrees. The gain provided by adaptive QP increases to 8–9%, since this rotation brings areas with higher texture to the pole areas. The pole areas in ERP are relatively over-sampled compared to the equator area, so that when high-texture content is at the poles, HEVC has a tendency to spend too many bits to encode it while the quality does not increase in a linear manner.
[Q] Proponents are requested to prepare subjective quality demonstration.
[A] There are several figures in the contributions submitted; a viewing session during the meeting is required.
Summary of EE3 tests
(Y-BD-rate; all tests in HM-360Lib.)
Test 1: Adaptive QP with F0038&F0049 weighting for ERP (JVET-G0070): -5.0%; cross-check JVET-G0121 (KDDI)
Test 1.1: Adaptive QP with F0038&F0049 weighting for rotated ERP (JVET-G0070): -9.3%; cross-check JVET-G0121 (KDDI)
Test 2: Adaptive QP with F0072 weighting for ERP (JVET-G0106): -4.2%; cross-check JVET-G0125 (InterDigital)
Test 2.1: Adaptive QP with F0072 weighting for rotated ERP (JVET-G0106): -8.1%; cross-check JVET-G0140 (Technicolor)
Test 5: Adaptive QP (as in F0072) for CMP, with signalling (JVET-G0106): -2.2%; cross-check JVET-G0124 (Qualcomm)
Summary: QP adaptation based on WS-PSNR weights for ERP provides a 4–5% performance gain in the WS-PSNR metric. Visual quality assessment is needed in order to verify the performance improvement (as well as the metrics for 360° video).
Note from discussion: Rotated versions are compared against rotated originals, so the rate gains are not comparable directly.
It was further reported that results with adaptive QP for JEM were not generated, because mismatches between encoder and decoder were found. However, as it is asserted that adaptive QP basically works in JEM (it is also used for luma-adaptive QP in HDR), this appears to be a problem at the interface between JEM and 360Lib, which needs to be resolved if the JEM anchors are to use adaptive QP as well.
It was noted that for a more uniformly sampled projection format (e.g. CMP) there is less benefit for the adaptive QP usage.
It was remarked that some relevant input is found in JVET-G0099.
In the discussion, it was suggested to include (encoder non-normative) adaptive QP selection for 360° video with WS-PSNR optimization. Further discussion seemed desirable after some offline study and subjective viewing (e.g. to determine if this produces any undesirable artefacts).
A subjective test session was held. See further notes under BoG XXXX. No action was taken in terms of changing anchors.
These should be done for both non-rotated and rotated versions. (Rotated versions are reported to match better in terms of rate, whereas in the case of non-rotated versions, the rate with adaptive QP is typically higher by 15–20%.)
If no visual artifacts are observed, the 360 anchors should be updated with versions using adaptive QP.
Primary (6)
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3171" JVET-G0070 EE3-JVET-F0049/F0038 Adaptive QP for ERP videos [Hendry, M. Coban (Qualcomm), F. Racape, F. Galpin (Technicolor)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3232" JVET-G0121 EE3 Test1 and 1.1: Cross-check of JVET-G0070 [K. Kawamura, S. Naito (KDDI)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3215" JVET-G0106 EE3: Adaptive QP for 360° video [Y. Sun, L. Yu (Zhejiang Univ.)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3235" JVET-G0124 EE3: Cross-check for JVET-G0106 (Test 5  Adaptive QP for CMP in HM-360Lib) [Hendry (Qualcomm)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3237" JVET-G0125 EE3: Cross-check for JVET-G0106 Test2 [X. Xiu, Y. He, Y. Ye (InterDigital)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3255" JVET-G0140 EE3: Cross-check of JVET-G0106 (Test 2.1  Adaptive QP with F0072 weighting for rotated ERP in HM-360Lib) [F. Racape, F. Galpin (Technicolor)] [late]
Related (2)
Contributions in this category were discussed Friday 14th 1745 (chaired by JRO and GJS).
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3196" JVET-G0089 EE3 Related: Adaptive quantization for JEM-based 360-degree video coding [X. Xiu, Y. He, Y. Ye (InterDigital)]
Discussed Friday 17:45 (GJS & JRO).
This contribution presents an adaptive quantization method to enhance the performance of JEM-based 360-degree video coding. Compared to the HM-based adaptive quantization parameter (QP) methods in the current EE3, the main differences of the proposed method are: 1) different QP offsets are applied independently for the luma and chroma components of each coding tree unit (CTU); 2) the input slice-level QP is used for the CTUs with the highest spherical sampling density, and the QP value is gradually decreased for CTUs with lower spherical sampling density; 3) delta QP signalling is skipped, and the encoder/decoder derive and use the same QP adjustment. Simulation results are provided for the equirectangular projection (ERP) and cubemap projection (CMP) formats. It is reported that, compared to the anchors based on JEM-6.0-360Lib-3.0, the proposed method provides average {Y, Cb, Cr} gains of {5.0%, 6.8%, 6.8%} for the ERP and {2.6%, 11.2%, 12.4%} for the CMP in terms of end-to-end WS-PSNR.
See also section  REF _Ref487785595 \r \h 6.4.1 for a related discussion.
In this proposal, no signalling overhead is used. The decoder adjusts the QP automatically based on awareness of the mapping type.
It was commented that the overhead of explicit signalling should be very small (in HM, the signalling costs about 0.1% of the bit rate), so these results should similarly apply in that case (if the encoder is working properly). Experts also mentioned that more flexibility of QP control would be desirable, e.g. for rate control.
Action item: Software coordinators of JEM and 360lib are mandated to resolve the issues with explicit signalling of QP adaptation. If it is demonstrated that adaptive QP does not cause visual artifacts, this should be implemented in JEM anchors.
AQP_ERP vs. ERP (end-to-end BD-rate, Y/U/V):
Sequence | SPSNR-NN | SPSNR-I | CPP-PSNR | WS-PSNR
Trolley | -2.0%/-2.4%/-5.0% | -2.0%/-2.4%/-5.0% | -2.0%/-2.4%/-5.0% | -2.0%/-2.4%/-4.9%
GasLamp | -2.0%/-1.3%/3.6% | -2.1%/-1.5%/3.5% | -2.1%/-1.4%/3.6% | -2.1%/-1.3%/3.7%
Skateboarding_in_lot | -9.6%/-19.4%/-17.6% | -9.6%/-19.3%/-17.6% | -9.6%/-19.3%/-17.6% | -9.5%/-19.3%/-17.5%
Chairlift | -8.0%/-7.6%/-7.7% | -8.0%/-7.6%/-7.8% | -7.9%/-7.6%/-7.8% | -7.9%/-7.5%/-7.8%
KiteFlite | -3.5%/-3.7%/-6.3% | -3.5%/-3.8%/-6.3% | -3.5%/-4.0%/-6.4% | -3.5%/-3.9%/-6.4%
Harbor | -1.6%/-4.6%/-4.5% | -1.6%/-4.6%/-4.5% | -1.6%/-4.7%/-4.6% | -1.6%/-4.6%/-4.5%
PoleVault | -7.9%/-10.5%/-11.4% | -7.8%/-10.2%/-11.2% | -7.8%/-10.2%/-11.2% | -7.8%/-10.6%/-11.4%
AerialCity | -5.4%/-5.3%/-5.8% | -5.4%/-5.5%/-5.9% | -5.3%/-5.4%/-5.8% | -5.4%/-5.3%/-5.8%
DrivingInCity | -0.8%/1.4%/2.5% | -0.8%/1.3%/2.4% | -0.7%/1.3%/2.3% | -0.7%/1.5%/2.5%
DrivingInCountry | -9.3%/-14.2%/-15.8% | -9.2%/-14.3%/-15.8% | -9.2%/-14.4%/-15.9% | -9.3%/-14.3%/-15.8%
Overall | -5.0%/-6.8%/-6.8% | -5.0%/-6.8%/-6.8% | -5.0%/-6.8%/-6.8% | -5.0%/-6.8%/-6.8%
BD-rate performance of the proposed adaptive quantization method for the ERP, compared to the JEM-6.0-360Lib-3.0 anchor
BD-rate performance of the proposed adaptive quantization method for the CMP, compared to the JEM-6.0-360Lib-3.0 anchor
AQP_CMP vs. CMP (end-to-end BD-rate, Y/U/V):
Sequence | SPSNR-NN | SPSNR-I | CPP-PSNR | WS-PSNR
Trolley | -2.7%/-8.2%/-7.5% | -2.6%/-8.3%/-7.6% | -2.6%/-8.2%/-7.4% | -2.6%/-8.1%/-7.4%
GasLamp | -2.2%/-10.1%/-10.8% | -2.2%/-10.1%/-10.9% | -2.3%/-10.0%/-10.8% | -2.3%/-9.9%/-10.8%
Skateboarding_in_lot | -2.8%/-12.2%/-13.9% | -2.8%/-12.3%/-13.9% | -3.0%/-12.4%/-14.1% | -3.0%/-12.4%/-14.1%
Chairlift | -3.8%/-13.4%/-12.2% | -3.8%/-13.4%/-12.2% | -3.7%/-13.2%/-12.1% | -3.7%/-13.1%/-12.1%
KiteFlite | -1.9%/-10.3%/-8.4% | -1.9%/-10.3%/-8.5% | -1.9%/-10.4%/-8.5% | -1.9%/-10.3%/-8.4%
Harbor | -2.1%/-7.8%/-8.3% | -2.0%/-7.8%/-8.3% | -2.2%/-8.0%/-8.4% | -2.1%/-7.9%/-8.4%
PoleVault | -1.4%/-10.7%/-14.0% | -1.3%/-10.5%/-13.8% | -1.4%/-10.6%/-13.8% | -1.4%/-10.8%/-14.1%
AerialCity | -2.9%/-13.9%/-12.4% | -2.9%/-14.0%/-12.5% | -3.0%/-14.1%/-12.3% | -3.0%/-13.9%/-12.3%
DrivingInCity | -2.3%/-9.7%/-9.3% | -2.3%/-9.8%/-9.3% | -2.6%/-9.9%/-9.6% | -2.5%/-9.8%/-9.5%
DrivingInCountry | -3.2%/-15.8%/-17.7% | -3.4%/-15.9%/-17.7% | -3.2%/-15.6%/-17.5% | -3.0%/-15.5%/-17.5%
Overall | -2.5%/-11.2%/-11.5% | -2.5%/-11.2%/-11.5% | -2.6%/-11.2%/-11.5% | -2.6%/-11.2%/-11.4%
Additionally, similar to the tests in EE3, supplemental simulations were conducted by applying the proposed adaptive quantization method to rotated ERP pictures, where the content containing detailed texture/edges is rotated from the equator to the pole. To rotate the input ERP pictures, the SVideoRotation parameter in the encoder configuration file is set equal to [0 90 0]. The table below presents the BD-rate savings of the proposed method for the rotated ERP in terms of the end-to-end distortion measurements.
BD-rate performance of the proposed adaptive quantization method for the rotated ERP, compared to the rotated ERP coded by JEM-6.0-360Lib-3.0
AQP_ERP_ROT vs. ERP_ROT (end-to-end BD-rate, Y/U/V):
Sequence | SPSNR-NN | SPSNR-I | CPP-PSNR | WS-PSNR
Trolley | -6.8%/-7.7%/-3.8% | -6.8%/-7.8%/-3.9% | -6.7%/-7.7%/-3.7% | -6.7%/-7.6%/-3.7%
GasLamp | -6.6%/-6.4%/-8.2% | -6.6%/-6.6%/-8.4% | -6.4%/-6.5%/-8.3% | -6.4%/-6.4%/-8.2%
Skateboarding_in_lot | -9.4%/-8.0%/-7.1% | -9.4%/-8.1%/-7.1% | -8.9%/-7.4%/-6.4% | -9.0%/-7.4%/-6.4%
Chairlift | -10.4%/-14.4%/-14.5% | -10.4%/-14.4%/-14.6% | -10.3%/-14.5%/-14.7% | -10.4%/-14.5%/-14.7%
KiteFlite | -8.3%/-8.3%/-6.8% | -8.3%/-8.4%/-6.9% | -8.0%/-7.9%/-6.4% | -8.0%/-7.9%/-6.3%
Harbor | -11.5%/-12.8%/-13.8% | -11.6%/-12.8%/-13.8% | -11.3%/-12.7%/-13.5% | -11.4%/-12.6%/-13.5%
PoleVault | -11.0%/-15.2%/-17.0% | -11.3%/-15.3%/-17.0% | -11.1%/-15.0%/-16.7% | -10.9%/-15.1%/-16.7%
AerialCity | -8.8%/-11.5%/-12.5% | -8.9%/-11.8%/-12.7% | -8.5%/-11.4%/-12.6% | -8.5%/-11.3%/-12.3%
DrivingInCity | -8.9%/-10.5%/-9.9% | -9.0%/-10.7%/-10.2% | -8.3%/-9.9%/-9.4% | -8.4%/-9.9%/-9.5%
DrivingInCountry | -8.3%/-13.6%/-6.2% | -8.7%/-13.7%/-6.5% | -8.5%/-13.6%/-6.3% | -8.2%/-13.5%/-6.1%
Overall | -9.0%/-10.8%/-10.0% | -9.1%/-11.0%/-10.1% | -8.8%/-10.7%/-9.8% | -8.8%/-10.6%/-9.7%
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3267" JVET-G0150 Crosscheck of JVET-G0089 on adaptive quantization for JEM-based 360-degree video coding [Hendry, M. Coban (Qualcomm) [late]
EE4: 360° Projection Modifications and Padding (3)
General
Discussed Friday 12:00 (GJS & JRO).
See specific descriptions integrated below.
From Summary Report JVET-G0010:
Summary of EE4 tests
(Y-BD-rate.)
Test 1: EAP-based SSP (JVET-G0097): -10.2%; cross-check JVET-G0129
Test 2: EAP-based SSP with padding width = 8 (JVET-G0097): -11.1%; cross-check JVET-G0129
Test 3: EAP-based SSP with padding width = 16 (JVET-G0097): -11.0%; cross-check JVET-G0129
Test 4: EAP-based SSP with padding width = 32 (JVET-G0097): -10.8%; cross-check JVET-G0129
Test 5: ERP-based SSP with padding width = 8 (JVET-G0097): -10.6%; cross-check JVET-G0102
Test 6: ERP-based SSP with padding width = 16 (JVET-G0097): -10.7%; cross-check JVET-G0102
Test 7: ERP-based SSP with padding width = 32 (JVET-G0097): -10.4%; cross-check JVET-G0102
Test 8: SSP projection with padding width = 8 (JVET-G0128): -8.2%; cross-check JVET-G0130
Test 9: SSP projection with padding width = 16 (JVET-G0128): -5.5%; cross-check JVET-G0130
Test 10: SSP projection with padding width = 32 (JVET-G0128): 1.1%; cross-check JVET-G0130
Test 11: PERP with blending, padding on both sides, each padding width = 8 (JVET-G0098): 0.3%; cross-check JVET-G0131
Test 12: PERP with blending, padding on both sides, each padding width = 16 (JVET-G0098): 0.5%; cross-check JVET-G0131
Test 13: PERP with cropping, padding on both sides, each padding width = 8 (JVET-G0098): 0.4%; cross-check JVET-G0131
Test 14: PERP with cropping, padding on right side, padding width = 16 (JVET-G0098): 0.4%; cross-check JVET-G0131
Summary: Padding on the discontinuity boundary resolves visual quality artefacts. For some projections (ERP- and EAP-based SSP) it results in an objective gain of ~1%; for others, objective performance degradation is observed. For the majority of cases, a padding size of 8 is sufficient to resolve the visual artefacts.
Decision (SW): Replace SSP by EAP-based SSP (with padding) in the 360Lib software.
Decision: Include 8-luma-sample ERP padding in anchor (on each side). In software, the padding width can be a compile-time macro parameter. The padding regions are added to the picture size that has been used previously, so more samples are being coded (within the 1% tolerance).
Decision: Blending should be used in the anchor. (It is also desirable to support not blending.)
G1003 (360Lib algorithm description) and G1030 (CTC) should describe these aspects, as applicable.
ISP and OHP have been using padding already.
EAP doesn't have padding currently. Its usage would be analogous to padding for ERP, but adding padding capability to it is not a high priority.
For cube map, there are two variants that have historically been discussed, CMP and ACP, which are conceptually similar. Each has two 3x1 areas that have no seams within them and can hypothetically be padded separately. The software doesn't currently support padding for that, but adding padding capability is not a high priority.
Four-sample padding was tested for a somewhat different ACP scheme (G0023).
RSP has a sort of padding scheme.
Primary (7)
Contributions in this category were discussed Friday 14th 1200 (chaired by GJS and JRO).
From Summary Report JVET-G0010:
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3206" JVET-G0097 EE4: ERP/EAP-based segmented sphere projection with different padding sizes [Y.-H. Lee, H.-C. Lin, J.-L. Lin, S.-K. Chang, C.-C. Ju (MediaTek)]
From summary report G0010:
Brief description of the technology. 
This contribution proposes an EAP-based segmented sphere projection which uses equal-area projection (EAP) on the equatorial segment and applies padding on the north and south poles. By applying the equal-area projection on the equatorial segment with padding on the poles, better quality consistency can be reached and visual artefacts are efficiently reduced.
Questions recommended to be answered during EE tests: 
To compare SSP with this proposal, investigate the following aspects in this EE:
[Q] EAP vs ERP in the center section.
[A]: The difference in the performance between ERP and EAP based SSP is 0.3~0.4%. 
[Q] Padding widths for SSP-based projections. Including some informal subjective testing.
[A]: Padding improves performance by ~1%; size = 8 seems to be enough.
Previous proposal JVET-F0037
Brief description of the technology. 
This contribution describes a padding method for the SSP format in order to mitigate the seam artifact: padding pixels are added to the border area.
The following aspects should be further studied in this EE:
Questions recommended to be answered during EE tests:
[Q] Padding widths for SSP projection, but without changing the padding width per sequence.
[A]: Objectively, padding reduces the gain from 9.7% to 8.2%, but a visual quality improvement is observed. Padding size 8 seems to be sufficient.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3207" JVET-G0098 EE4: Padded ERP (PERP) projection format [J. Boyce, Z. Deng (Intel)]
Brief description of the technology. 
This contribution proposes a padded ERP format to support a feature of the omnidirectional projection indication SEI message which allows the value of the yaw range to represent more than 360 degrees for ERP sequences, by padding on the left and right edge regions of the picture. After decoding, the PERP format can be converted back to the ERP format by blending the duplicated samples by applying a distance-based weighted average between the left edge region of the picture and the padding region on the right edge of the picture.
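The reconstruction step can be sketched as follows (a minimal sketch; the exact weighting ramp used by the proposal is an assumption here, not confirmed by the source): each column in the left edge region has a duplicate in the right padding region, and the two decoded copies are combined with a distance-based weight before the padding is discarded.

```python
def perp_to_erp_row(row, width, pad):
    """Convert one decoded PERP row back to ERP by blending (sketch).
    `row` holds `width` ERP samples plus `pad` duplicated columns appended
    on the right (copies of columns 0..pad-1). Each duplicated pair is
    averaged with a weight that ramps with distance; the exact linear ramp
    used here is an assumption of this sketch."""
    out = row[:width]
    for i in range(pad):
        w = (i + 1) / (pad + 1)          # assumed linear distance weight
        left_copy = row[i]               # sample in the left edge region
        right_copy = row[width + i]      # its duplicate in the padding
        out[i] = round((1 - w) * left_copy + w * right_copy)
    return out
```

When the two copies decode identically the blend is a no-op; when they differ, the weighted average smooths the seam that would otherwise appear at the 360-degree wrap-around.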
Questions recommended to be answered during EE tests: 
Investigate the following aspects in this EE:
[Q] The padding amount for the proposed PERP. 
[A]: Objectively, padding leads to a loss of 0.3–0.4%, but a visual quality improvement is observed.
[Q] To convert back to the ERP, blending the duplicated samples vs. cropping
[A]: Blending provides slightly better performance (~0.1%).
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3211" JVET-G0102 EE4: Cross-check of EE4 tests 57 (JVET-G0097) [Z. Deng (Intel)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3241" JVET-G0129 EE4: Cross-check of EE4 tests 14 (JVET-G0097) [Y. Lu, J. Li, Z. Wen, X. Meng (Owl Reality)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3240" JVET-G0128 EE4: Padding method for Segmented Sphere Projection [Y. Lu, J. Li, Z. Wen, X. Meng (Owl Reality)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3242" JVET-G0130 EE4 Cross-check for Test 810 (JVET-G0128) [Y.-H. Lee, J.-L. Lin (MediaTek)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3243" JVET-G0131 EE4 Cross-check for Test 1114 (JVET-G0098) [Y.-H. Lee, J.-L. Lin (MediaTek)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3278" JVET-G0160 Calculation of objective metrics for padded ERP SW implementation for EE4 (JVET-G0098) [J. Boyce, Z. Deng, L. Xu] [late]
This document was reviewed in the BoG on 360° video.
Related (0)
No contributions in this category were noted.
Non-EE Technology proposals (13)
Intra coding (2)
Contributions in this category were discussed Sunday 15th 0945-1005 (chaired by JRO).
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3160" JVET-G0060 Improvements for Intra Prediction Mode Coding [Y. Han, J. An, J. Zheng (HiSilicon)]
This contribution reports intra prediction mode coding with a modified derivation of the selected modes and a modified MPM mode initialization order to improve coding efficiency. The presented selected modes are angular modes with a specific offset relative to the angular modes in the MPM list. The proposed MPM mode initialization order moves the DC mode to the position after the above-left mode. Compared to the HM16.6-JEM6.0 anchor, the proposed technique reportedly gives an average BD-rate improvement of 0.14% for the common test condition AI configuration, with almost no change in encoding and decoding time.
An offset of +/-2 is suggested here. Note that a previous contribution, JVET-D0114, proposed a similar approach with a +/-1 offset, and according to the proponents of that contribution, the +/-1 offset gave better results at that time.
Several experts supported including this in EE1.
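As an illustration only (not part of the contribution), the derivation of such offset-based selected modes could be sketched as below. The function name, the candidate count, and the wrap-around handling are assumptions, using JEM-style mode numbering (0 = planar, 1 = DC, 2..66 = angular):

```python
# Hypothetical sketch of "selected modes" derivation: angular modes at a
# fixed offset around the angular modes already in the MPM list
# (JVET-G0060 suggests +/-2; JVET-D0114 earlier used +/-1).
NUM_ANGULAR_MIN, NUM_ANGULAR_MAX = 2, 66   # JEM-style angular mode range


def derive_selected_modes(mpm_list, offset=2, count=16):
    """Collect angular modes at +/-offset around the MPM angular modes."""
    num_angular = NUM_ANGULAR_MAX - NUM_ANGULAR_MIN + 1
    selected = []
    for mode in mpm_list:
        if mode < NUM_ANGULAR_MIN:         # skip planar and DC
            continue
        for delta in (-offset, offset):
            cand = mode + delta
            if cand < NUM_ANGULAR_MIN:     # wrap to keep the mode valid
                cand += num_angular
            elif cand > NUM_ANGULAR_MAX:
                cand -= num_angular
            if cand not in selected and cand not in mpm_list:
                selected.append(cand)
            if len(selected) == count:
                return selected
    return selected
```

For an MPM list {planar, DC, 50}, this sketch yields the candidates {48, 52}, i.e. the two angular neighbours of mode 50 at offset 2.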
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3199" JVET-G0092 Crosscheck of Improvements for Intra Prediction Mode Coding (JVET-G0060) [Y. Yasugi, T. Ikai (Sharp)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3277" JVET-G0159 Block shape dependent intra mode coding [V. Seregin, W. -J. Chien, M. Karczewicz, N. Hu, X. Zhao (Qualcomm)] [late]
This proposal presents modified intra mode coding that takes the block shape into account, together with a secondary most probable mode list. Simulation results reportedly show that the proposed methods provide an average 0.17% luma BD-rate saving in the all-intra configuration.
Include first aspect (secondary MPM mode) in EE1.
Inter coding (2)
Contributions in this category were discussed Sunday 16th 1005-1030 (chaired by JRO).
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3166" JVET-G0065 Simplification and improvements on FRUC [J. Seo, J. Lee, S.-H. Kim, H. M. Jang, J. Lim (LGE)]
This contribution suggests two modifications of FRUC in the JEM. The first modification is a fast FRUC method that skips the sub-block process when the true bi-prediction condition is satisfied. From the test results, it is observed that the proposed fast FRUC method reduces encoding and decoding time by 2% and 3%, respectively, while achieving a luma BD-rate saving of 0.07% for the RA configuration. The second change is a restriction of the number of search rounds in the FRUC refinement. It is observed that no maximum number of refinement search rounds is defined in JEM6.0; therefore, some practical ways of restricting the number of search rounds are investigated. Experimental results reportedly show that the proposed method causes only marginal BD-rate changes in the RA, LDB and LDP configurations.
The second aspect does not lead to a noticeable run time reduction. However, this contribution also points out that with the adoption of JVET-F0032 at the last meeting, the bound on the search range was eliminated from the software (a bug), whereas it is still described in the text.
Action item (BF/SW coordinators): Implement the search range restriction.
It is generally noted that the true limits of FRUC in terms of worst-case memory bandwidth, computation etc. are not yet well studied, and in case technology like this is put into a standard, more serious limitations would be necessary. The suggested approach goes in this direction, but imposes only marginal additional restrictions. The currently demonstrated benefit is relatively low.
Further study encouraged.
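Purely as an illustration of the second aspect (the contribution itself investigates several restriction variants), a bounded refinement loop might look like the sketch below; the cross-search pattern, the default cap, and the `cost` callback are placeholders, not the JEM implementation:

```python
# Minimal sketch of capping the number of refinement search rounds in a
# FRUC-style local motion search. cost() stands in for the bilateral or
# template matching cost of a candidate motion vector.
def refine_mv(start_mv, cost, max_rounds=8):
    """Iterative +/-1 cross search around start_mv, bounded by max_rounds."""
    best_mv, best_cost = start_mv, cost(start_mv)
    for _ in range(max_rounds):          # hard bound on search rounds
        improved = False
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            cand = (best_mv[0] + dx, best_mv[1] + dy)
            c = cost(cand)
            if c < best_cost:
                best_mv, best_cost, improved = cand, c, True
        if not improved:                 # converged before hitting the cap
            break
    return best_mv
```

Without the `max_rounds` bound, the loop would only terminate on convergence, which is the unbounded behaviour the contribution flags as a worst-case concern.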
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3230" JVET-G0119 Cross-check of simplification and improvements on FRUC (JVET-G0065) [K. Choi, E. Alshina (Samsung)] [late]
Loop filters (4)
Contributions in this category were discussed Sunday 16th 1030-1130 (chaired by JRO).
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3183" JVET-G0076 Bilateral filter simplification [R. Vanam, Y. He, Y. Ye (InterDigital)]
A division-free bilateral filter was adopted into the JEM-6.0 reference software at the Hobart meeting. This proposal presents a scheme to simplify that bilateral filter: bilateral filtering of a reconstructed block is bypassed when its associated inverse-quantized transform coefficients are all zero except for the DC coefficient. It is reported that this simplification yields an average decoding speed-up of 1% for AI, with a negligible luma BD-rate increase of up to 0.03% across all configurations.
Benefit not obvious – does not change the worst case and requires another condition check.
No action.
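As an illustration (not the JEM code), the proposed bypass condition could be sketched as:

```python
# Sketch of the bypass condition proposed in JVET-G0076: skip bilateral
# filtering of a reconstructed block when all inverse-quantized transform
# coefficients except the DC coefficient are zero. The 2-D list layout of
# the coefficient block is an assumption for illustration.
def should_bypass_bilateral(coeffs):
    """coeffs: 2-D list of inverse-quantized coefficients, DC at [0][0]."""
    for r, row in enumerate(coeffs):
        for c, v in enumerate(row):
            if (r, c) != (0, 0) and v != 0:
                return False            # an AC coefficient is non-zero
    return True                         # DC-only (or all-zero) block
```

The extra per-block check is the "another condition check" noted above: it buys speed only on DC-only blocks and does not reduce the worst case.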
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3268" JVET-G0151 Cross-check for Bilateral Filter Simplification (JVET-G0076) [A. Gadde, L. Zhang (Qualcomm)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3203" JVET-G0095 Unified Adaptive Loop Filter for Luma and Chroma [J. An, J. Zheng (HiSilicon)]
This contribution proposes a unified adaptive loop filter (UALF) for luma and chroma. In the proposed unified ALF mode, the chroma sample classification and on/off decision directly re-use the results of the co-located luma sample, so that there is no complexity increase for the chroma classification. The chroma ALF coefficients also re-use those of luma, but with constant 5x5 diamond taps to keep the complexity of chroma filtering low. For the original separate ALF mode, a CTU-level on/off control for chroma ALF is proposed. The proposed method achieves around 1% chroma BD-rate gain for the AI configuration and on average 4.6% chroma BD-rate gain for the RA, LB and LP configurations, based on JEM6.0 and without noticeable runtime change. By shifting the chroma gain to luma, around 0.5% luma gain is achieved for RA and 0.4% for LD. Combined with 4x4-level luma block classification, around 0.4% luma gain is achieved for RA and 0.3% for LD, with reduced ALF coefficient storage compared to JEM6.0.
Question: Have the visual artifacts of the last meeting been resolved? The cross-checker reports that no artifacts are observed (however not checked for the version with chroma lambda modification). Generally, the subsampling method seems more consistent than in the previous proposal; however, several experts still raised the opinion that luma and chroma may have quite different characteristics and therefore need different filters.
Upon further request, it is remarked that the method switches between unified and separate ALF, and for some sequences separate ALF was better.
Derivation for unified mode is done jointly for luma and chroma. Decision between separate and unified mode does not need a separate pass.
The aspect of 4x4 luma classification is interesting, probably reducing complexity; it is however not clear why this is claimed to reduce the storage of coefficients.
It would further be interesting to see whether the usage of different lambda for chroma (which is providing luma gain) might already provide gain when used without the method.
A disadvantage of the unified method compared to the current ALF is that the chroma filtering uses a classification dependent on luma, so luma and chroma cannot be processed in parallel. All this needs better understanding.
Further study in EE, investigating
using 4x4 classification with current ALF (simplification)
using 4x4 luma classification with unified method (also investigate the impact of using chroma lambda modification without the proposed method)
investigate the benefit of switching unified/separate luma&chroma ALF 
This should include visual testing for possible chroma artifacts.
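As an illustration of the re-use idea (not the JEM implementation), for 4:2:0 content the chroma class map could be derived from an already-computed luma class map as sketched below; the function name and the 2-D list representation are assumptions:

```python
# Illustrative sketch of the re-use idea in JVET-G0095: for 4:2:0 video,
# the chroma sample at (x, y) re-uses the class decided for the co-located
# luma sample at (2x, 2y), so no separate classification is run for chroma.
# The dependency on luma is also why chroma filtering cannot start before
# luma classification is done (the parallelism concern noted above).
def chroma_class_map(luma_classes, chroma_w, chroma_h):
    """Derive the chroma class map from the luma class map (4:2:0)."""
    return [[luma_classes[2 * y][2 * x] for x in range(chroma_w)]
            for y in range(chroma_h)]
```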
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3229" JVET-G0118 Cross-check of Unified Adaptive Loop Filter for Luma and Chroma (JVET-G0095) [K. Choi, E. Alshina (Samsung)] [late]
Other (5)
Contributions in this category were discussed Sunday 16th 1130-1315 (chaired by JRO).
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3168" JVET-G0067 Chroma Adjustment for SDR Video [J. Ström, P. Wennersten, K. Andersson, R. Sjöberg (Ericsson)]
This contribution discusses an alternative way of subsampling 4:4:4 sequences to 4:2:0, called chroma adjustment. The method was originally created for HDR content, as reported in JCTVC-Z0022, but this document asserts that it may also be beneficial for certain SDR sequences with saturated colors, such as CampfireParty. Compared to the version of CampfireParty currently used in JVET, the contribution claims that the chroma-adjusted version can avoid amplification of noise in some areas, suppression of noise in other areas, and blurring of some edges. The contribution measures the increase in luminance quality from the chroma adjustment method by computing PSNR on a gamma-corrected version of the luminance. It is reported that this results in a BD-rate change of -21% for CampfireParty for random access relative to the current 4:2:0 sequence. The contribution however claims that the perceptual effect seems smaller than this figure suggests, perhaps due to the sequence being noisy. The contribution asserts that non-saturated sequences do not benefit as much from the processing; results for a non-saturated sequence (TrafficFlow) are also reported (-1%). Furthermore, the contribution notes that the chroma positions in the current 4:2:0 sequence are incorrect. The contribution proposes that the CampfireParty sequence be replaced by the one obtained using the proposed technique, and that 4:4:4 representations of all current test sequences be made available.
The chroma positions in CampfireParty are definitely wrong; chroma sample location type 0 should have been used in the ffmpeg conversion.
Campfire is the most extreme case.
Question: Does it have impact on the compression benefit of JEM vs HM? Not known.
It was mentioned that the picture examples shown also seem to remove some sharpness from luma.
Generally, Campfire is a good sequence to spot coding artifacts.
Actions:
Correction of chroma positions in CampfireParty using HDRtools. 
Building Hall, Daylight Road and Park running also need to be checked for correct chroma position (Type 0). Side activity (J. S., Teruhiko S. & original proponents will check and report back). It was later confirmed that the three sequences need to be modified. 
Decision(CTC): Replace the four sequences mentioned above by new version (different names). Original providers will regenerate the sequences.
Further study on other aspects.
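As a sketch of the evaluation idea (not the contribution's exact procedure), PSNR can be computed on gamma-corrected linear luminance as below; the gamma value of 2.2 and the normalized 0..1 signal range are assumptions for illustration:

```python
import math

# Hedged sketch of the measurement idea in JVET-G0067: compute PSNR on a
# gamma-corrected version of linear luminance rather than on coded luma,
# so that errors are weighted closer to perceived lightness.
def gamma_psnr(ref, test, peak=1.0, gamma=2.2):
    """PSNR between gamma-corrected linear-luminance samples (flat lists)."""
    diffs = [(r / peak) ** (1 / gamma) - (t / peak) ** (1 / gamma)
             for r, t in zip(ref, test)]
    mse = sum(d * d for d in diffs) / len(diffs)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(1.0 / mse)   # signal range is 0..1 after correction
```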
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3178" JVET-G0073 Adaptive quantization and denoising for future video coding CfP and CTC [R. Sjöberg, K. Andersson, P. Wennersten (Ericsson)]
This contribution states that video codec developments within ITU-T and MPEG have mainly been done using sequences containing unfiltered noise and test models using static QP values within pictures. The contribution points out that it is common in the industry to apply denoising filters and use adaptive quantization to optimize the subjective quality.
The contribution proposes that inclusion of these two tools should be considered for the test model and common test conditions for the future video coding project. It is also proposed that a test case for SDR video where denoising and adaptive quantization is allowed should be considered for the CfP for future video coding.
The contribution claims that if this is not addressed, there is a risk that the final video coding standard becomes less efficient than it could be for practical encoders that optimize for subjective quality.
It is reported in the contribution that, compared to JEM-6.0, MS-SSIM luma BD-rates of -2.8%, -11.8%, -12.2% and -13.0% for AI, RA, LDB and LDP are achieved when applying denoising before encoding and using adaptive quantization. The corresponding PSNR luma BD-rate numbers are reported to be 15.4%, 7.6%, 8.2% and 7.4%.
Denoising alone gives a small BD rate reduction for RA/LDB/LDP. However, this was done based on 8 bit sequences.
Subjective viewing with a poll was conducted. For some sequences there was a tendency to judge the denoised and adaptive-QP version better at QP37.
It is suggested to consider denoising and adaptive QP in a later test model in standard development.
It is further suggested to include a test class where preprocessing and adaptive quantization is allowed.
Question raised: How was the difference HM/JEM influenced by the denoising? Not known.
Would be interesting to see separate benefits of denoising and adaptive quantization.
It is also mentioned that in some real applications denoising would only be used when combined with noise reconstruction after decoding.
CfP should concentrate on normative tools, and allowing preprocessing and rate control makes comparison more difficult. Nevertheless, performance of coding tools on denoised sequences would be interesting to investigate, which would however imply that same method should be used for all proposals.
Establish AHG (R. Sjöberg et al) on denoising and adaptive quantization:
investigate impact of denoising on performance of HM and JEM
study impact of adaptive QP
perform an assessment of visual quality
These aspects could become relevant for development phase after CfP. CfP should concentrate on normative aspects.
Generally, the preference is for more variety of test sequences for the CfP, including noisy and less noisy ones, and variety in terms of content. In the end, the number of test cases will be limited, and duplicating content in noisy and denoised versions would be undesirable.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3222" JVET-G0112 Arithmetic coding with context-dependent double-window adaptation response [A. Said, M. Karczewicz, L. Zhang, V. Seregin (Qualcomm)]
The arithmetic coding implementation in JEM-6 employs a parallel double window with fixed window parameters for estimating binary symbol probabilities. This proposal extends the current estimation technique to allow different pairs of window parameters for each coding context. Since those parameters are set only during context initialization, this modification does not affect the encoding and decoding computational complexity. Simulation results show that, without any complexity changes, the new method produces luma BD-rate coding gains of 0.13% for All Intra, 0.32% for Random Access, 0.10% for Low Delay B, and 0.11% for Low Delay P.
The proposal uses the same context initialization for all slice types, which reduces some storage. However, it requires additional storage for the a/b parameters that control the adaptation speed (it is claimed that overall, no increase of storage occurs).
The parameters were trained using the CTC test sequences. Two different sets of parameters were used, for QP < 30 and for QP > 30.
Interesting gain, in particular for RA; some loss is observed in chroma for the LD configurations.
Further study in EE:
investigate performance for other sequences not used for training
investigate performance if only the uniform initialization over all slice types is used (without separate QP ranges)
investigate performance for other QP ranges than in CTC.
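For illustration, a two-window binary probability estimator with per-context window parameters of the kind discussed in JVET-G0112 might be sketched as below; the 15-bit probability precision, the class and field names, and the default (a, b) values are assumptions, not the JEM code:

```python
# Sketch of a two-window binary probability estimator: a fast and a slow
# exponentially-decaying window, averaged for the coding probability.
# JVET-G0112 proposes selecting the per-context window parameters (a, b)
# at context initialization only, so the per-bin update cost is unchanged.
PROB_BITS = 15
ONE = 1 << PROB_BITS                     # probability scale (assumption)


class TwoWindowEstimator:
    def __init__(self, a=4, b=8, p_init=ONE // 2):
        self.a, self.b = a, b            # per-context adaptation speeds
        self.p_fast = self.p_slow = p_init

    def update(self, bin_val):
        target = ONE if bin_val else 0
        self.p_fast += (target - self.p_fast) >> self.a   # fast window
        self.p_slow += (target - self.p_slow) >> self.b   # slow window

    @property
    def prob_one(self):
        return (self.p_fast + self.p_slow) >> 1           # averaged estimate
```

A smaller shift adapts faster but is noisier; averaging the two windows trades off both, and making (a, b) context-dependent lets each syntax element pick its own trade-off.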
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3260" JVET-G0143 Cross-check of JVET-G0112 - Arithmetic coding with context-dependent double-window adaptation response [K. Sharman, M. Philippe (Sony)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3265" JVET-G0148 AHG9: encoding/decoding capability of JEM6 for 4:4:4 colour format [J. Kim, X. Xu, S. Lei (Mediatek)]
This contribution studied the encoding and decoding capability of the JEM for the 4:4:4 colour format under the intra-only test condition. The following four tests were conducted on 5 low-resolution sequences and 11 high-resolution sequences.
1. All macros disabled
1a. RExt parameters on: JEM6.0 can encode and decode 4:4:4 colour format sequences without any code change.
1b. RExt parameters off: JEM6.0 can encode and decode 4:4:4 colour format sequences without any code change.
2. All macros enabled
2a. RExt parameters on: When RExt parameters are on, the JEM6.0 encoder crashes and does not produce any output.
2b. RExt parameters off: JEM6.0 with all macros on can encode and decode 4:4:4 colour format sequences with a 1-line code change, which is included in rev. 523.
"All macros" refers to the macros in the JEM software related to technologies beyond HEVC. When all macros are disabled, the JEM-related parameters are also disabled (removed) and the parameter setting is the same as in HM16.14.
Further investigations indicate that the crash in test 2a no longer occurs when cross-component prediction (CCP) is disabled in RExt.
All investigations so far were performed with AI.
Note that the current results were achieved using low-resolution test cases. Simulations with full-resolution sequences are currently running, with no crash observed so far. Gains compared to the HM are in the range of 15-20% (for AI).
Further study – continue the investigation in the AHG. It would be interesting to see the impact of inter coding tools and to further identify the CCP problem.
Extended colour volume coding (8)
Contributions in this category were discussed Sunday 16th 1500-1735 (chaired by JRO) unless noted otherwise.
Test conditions and evaluation (3)
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3212" JVET-G0103 AHG7: Candidate rate points of HLG material for anchor generation [S. Iwamura, S. Nemoto, A. Ichigaya (NHK)]
This contribution proposes candidate rate points of 4K Hybrid Log-Gamma (HLG) material for generating anchor bitstreams with HM16.15 and JEM6.0. Integer QP was used for JEM bitstream generation, whereas the QP-increment-frame option was used to generate the HM bitstreams. Since the effectiveness of the QP adaptation technique for HLG content is still under discussion, the luma delta QP and chroma QP offset described in JCTVC-Z1017 were not applied. The generated bitstreams are available on the JVET FTP site. A viewing session is also planned at this meeting to collect opinions on determining anchor rate points for the CfP under consideration.
7 sequences, 6 or 7 rate points with matching HM/JEM rates were generated for each.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3231" JVET-G0120 AHG7 Cross-check of anchor generation of HLG content in JVET-G0103 [K. Kawamura, S. Naito (KDDI)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3271" JVET-G0153 AHG7: Analysis of HDR metrics [E. François (Technicolor)] [late]
This contribution discusses the metrics specified in the JEM Common Test Conditions for HDR (JVET-E1020) for evaluating the coding performance of HDR and extended colour volume content. It reports observations made on the metrics currently considered in AHG7.
Linear regression with outlier rejection based on an M-estimator was used.
wPSNR-Y and PSNRL100 metrics are well correlated (0.81).
wPSNR-Y and tPSNR-Y metrics are less correlated (0.61).
wPSNR-U/DE100 and wPSNR-V/DE100 are well correlated (0.90/0.92), as is their average.
The proponent suggests using wPSNR-Y and wPSNR-U/V as primary metrics, and keeping the others for further information.
However, the weighting used is approximately similar to the quantization method used in the anchors, which was also applied to the material used in the test.
The difference between wPSNR-Y and PSNRL100 could be explained by the fact that luminance and luma are somewhat different (with linear weighting, e.g., saturated colours would be interpreted differently).
tPSNR is quite related to PQ. The question was raised whether it is correlated to the PSNR in the container; if so, it might be a good candidate to drop. It was reported in an update of JVET-G0153 that the correlation is indeed very high. Decision(CTC): Remove tPSNR from the HDR CTC.
Otherwise, provided that from the CfE a reasonable report of MOS can be obtained, the metrics should be compared to that. Further analysis to be done in the AHG. 
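The correlation figures above can be illustrated with a plain Pearson correlation over per-sequence metric values; note that the contribution additionally applies linear regression with M-estimator-based outlier rejection, which this sketch does not reproduce:

```python
import math

# Plain Pearson correlation coefficient between two metric series, as a
# simplified stand-in for the correlation analysis in JVET-G0153.
def pearson(xs, ys):
    """Correlation in [-1, 1] between equal-length value lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near 1 (such as the reported 0.81 between wPSNR-Y and PSNRL100) indicates the two metrics rank sequences almost identically, which is the rationale for dropping redundant metrics.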
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3164" JVET-G0063 New Test Sequence of 4K Hybrid Log-Gamma [T. Tsukuba, M. Ikeda, T.Suzuki (Sony)] [late]
This contribution presents new Hybrid Log-Gamma test sequences for future video coding standardization. It provides information of test sequences, resolution, frame count, frame rate, chroma format, bit depth as well as coding results provided by JEM6.0 and HM16.15.
5 sequences, 3840x2160, 10 bit 4:2:0; could be made available in RGB 4:4:4 as well. Captured with a Sony CineAlta F65RS.
Organize viewing session, investigate appropriateness for coding artifacts visibility, looking at HM results (all 4 rate points)
After the viewing of these sequences and those of JVET-G0103, BoG JVET-G0165 further discussed them and suggested actions. 
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3173" JVET-G0072 AHG7: Analysis of currently proposed HLG content [E. François, F. Le Leannec, F. Galpin (Technicolor)]
was discussed in BoG JVET-G0165
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3279" JVET-G0161 Report of expert viewing of HLG test sequences [M. Ikeda(Sony)]
was reviewed in BoG JVET-G0165
Tools (5)
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3154" JVET-G0054 Mapping SDR content into HDR containers [C. Fogg (MovieLabs)]
was discussed in BoG JVET-G0165
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3159" JVET-G0059 AHG7: On the need of luma delta QP for BT.2100 HLG content [ HYPERLINK "mailto:iwamura.s-gc@nhk.or.jp" S. Iwamura, S. Nemoto, A. Ichigaya (NHK), M. Naccari (BBC)]
This contribution presents an analysis of the distribution of code levels for content represented using the BT.2100 Hybrid Log-Gamma (HLG) transfer characteristics. The main purpose is to address one mandate of AHG7 (JEM coding of HDR/WCG material), which requests exploration to study and evaluate the application of QP adaptation in the context of an HLG container. Accordingly, the same experiment described in m37439 has been conducted, but taking into account the fundamental differences between the BT.2100 Perceptual Quantiser (PQ, which was the focus of m37439) and HLG. Over sequences belonging to Classes A–E, a linear relationship between the code levels associated with the BT.709 and BT.2100 HLG containers is reported. This result suggests that no significant redistribution of code levels between bright and dark image areas is observed when a given content is represented in these two containers. Therefore, the luma delta QP devised in m37439 (and also included in JCTVC-Z1017) does not seem to be necessary when coding HLG material. This claim is also confirmed by coding experiments where average BD-rate losses of 4% for luma and 10% for chroma are reported when the anchor is BT.709 compressed content. These coding results suggest that no particular trend or correction in the bitrate distribution should be expected when a BT.709-optimised codec compresses HLG material.
Whereas a linear fitting of 2100/709 is sufficient for BQSquare, some deviation from linear behaviour is found in the scatterplots of the entire test set.
It is also mentioned that it seems some values are missing from the scatterplot of BQSquare.
For the BD rate computation, plain PSNR was used – how would it behave with the other metrics commonly used for HDR?
It was suggested by other experts that more analysis might be necessary to come to a final conclusion, and an ultimate conclusion most likely also requires subjective testing.
It is also mentioned that the assumption used in this analysis, that consumer displays always have 100 nits peak luminance, may not be generic enough.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3191" JVET-G0084 Luma/Chroma QP Adaptation for Hybrid Log-Gamma Sequences Encoding [K. Kawamura, S. Naito (KDDI)]
This contribution presents a study of luma/chroma QP adaptation for encoding hybrid log-gamma sequences. For PQ content, both luma QP adaptation and chroma QP offset are meaningful to improve the subjective quality. A similar approach is applied to HLG content to study the influence on subjective quality. Based on the subjective evaluation, luma QP adaptation provides negligible improvement, while chroma QP offset provides significant improvement in the ultra-low bitrate range.
Subjective tests were not performed with exactly matching bit rates. Typically, the bit rate is higher.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3252" JVET-G0138 Cross-check of JVET-G0084 on luma/chroma QP adaptation for HLG material [S. Iwamura, S. Nemoto, A. Ichigaya (NHK)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3234" JVET-G0123 AHG7: Experiments on using local QP adaptation in the context of an HLG container [E. François, F. Hiron (Technicolor)] [late]
This document reports experiments related to HLG content coding. The approach is based on the luma-based QP adaptation used in the current HM and JEM anchors. The dQP table used for BT.2100 PQ content is converted based on the HLG conversion chain of display-referred linear-light content. Various derived dQP tables, for different content peak luminances, have been tested. For non-native HLG HDR content (content initially provided in EXR or BT.2100 PQ format), the reported BD-rate gains are 2.0% for tPSNR-Y, 1.2% for PSNRL100, 1.5% for wPSNR-Y, 3.2% for DE100, and 0.6% and 2.4% for wPSNR-U and V. For native HLG content, the reported BD-rate gains are 1.2% for tPSNR-Y, 1.5% for PSNRL100, 1.9% for wPSNR-Y, 2.4% for DE100, and 4.7% and 3.0% for wPSNR-U and V. Partial visual observations are also reported.
An objective gain is observed both for PQ sequences converted to HLG and for native HLG. The contributor reports that some difference is subjectively visible, but not as clearly as was the case for PQ-coded material. Rate-matched coding results are available – include them in the HLG anchor viewing, and further discuss in the BoG.
Coding of 360° video projection formats (20)
Conversion tools, 360lib (0)
No contributions in this category.
Packing and Projection formats (7)
Contributions in this category were discussed in BoG JVET-G0158 Sat. 15th (chaired by Jill Boyce).
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3151" JVET-G0051 AHG8: A study on quality impact of line re-sampling rate in EAP [M. Zhou (Broadcom)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3221" JVET-G0111 AHG8: Crosscheck of JVET-G0051 A study on quality impact of line re-sampling rate in EAP [Y. He, X. Xiu, P. Hanhart, F. Duamnu, Y. Ye (InterDigital)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3156" JVET-G0056 AHG8: A study on Equi-Angular Cubemap projection (EAC) [M. Zhou (Broadcom)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3220" JVET-G0110 AHG8: Crosscheck of JVET-G0056 A study on Equi-Angular Cubemap projection [F. Duanmu, X. Xiu, P. Hanhart, Y. Ye, Y. He (InterDigital)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3165" JVET-G0064 Stereoscopic 360 video compression with the next generation video codec [F. Henry, J. Jung, A. Ouach, B. Ray, P. Schwellenbach (Orange)]
Should be discussed with parent bodies.
It is not requested to add stereoscopic test cases to CfP.
It is reported that the bit rate saving using MV-HEVC for stereo 360° video is around 30% on average compared to simulcast, which indicates that a very simple extension of a monoscopic video codec already solves the problem.
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3209" JVET-G0100 AHG8: A study of 360Lib projections on global motion sequences [M. Coban, G. Van der Auwera, M. Karczewicz (Qualcomm)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3274" JVET-G0156 AHG8: Efficient Frame packing method for Icosahedral projection (ISP) [C. Pujara, A. Dsouza, S. N. Akula, A. Singh, R. K. K., R. Gadde, V. Zakharchenko, E. Alshina, K. P. Choi (Samsung)] [late]
Quality assessment and metrics (6)
Contributions in this category were discussed in BoG JVET-G0158 Sat. 15th (chaired by Jill Boyce).
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3152" JVET-G0052 AHG8: A study on quality impact of coded picture resolution in 360 video coding [M. Zhou (Broadcom)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3157" JVET-G0057 AHG8: Influence of coding size on objective gain in 360-degree video CTC [G. Van der Auwera, M. Coban, M. Karczewicz (Qualcomm)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3181" JVET-G0075 AHG8: On reliability of S-PSNR-NN and S-PSNR-I as quality metrics for 360-degree video [Y. Ye, Y. He (InterDigital)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3195" JVET-G0088 AHG8: On the derivation of weighted to spherically uniform PSNR (WS-PSNR) for adjusted cubemap projection (ACP) format [X. Xiu, Y. He, Y. Ye (InterDigital)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3266" JVET-G0149 Crosscheck of JVET-G0088 on the derivation of WS-PSNR for ACP format [Hendry, M. Coban (Qualcomm)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3269" JVET-G0152 AhG8: Subjective Quality Evaluation for Omnidirectional (360°) Videos [A. Singla, A. Raake (TU Ilmenau), W. Robitza, P. List, B. Feiten (Deutsche Telekom)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3280" JVET-G0162 Framework for assessing 360-video experience quality [Wenjie Zou, Fuzheng Yang, Yi Li (Xidian Univ.), Haoping Yu (Huawei)] [late]
was reviewed in BoG.
Coding tools (1)
Contributions in this category were discussed in BoG JVET-G0158 Sat. 15th (chaired by Jill Boyce).
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3158" JVET-G0058 AHG8: Reference picture extension of ACP format 360-degree video [M. Coban, G. Van der Auwera, M. Karczewicz (Qualcomm)]
Reference picture extension could also be done on-the-fly at the decoder. For ACP, this might be more complex than for CMP (nonlinear equation). Either way, it would require a normative decoder change. With an extension width/height of 64 samples, the reference pixel count increases by approx. 23%.
Padding (6)
Contributions in this category were discussed in BoG JVET-G0158 Sat. 15th (chaired by Jill Boyce).
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3172" JVET-G0071 AHG8: ACP with padding for 360-degree video [G. Van der Auwera, M. Coban, M. Karczewicz (Qualcomm)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3244" JVET-G0132 AHG8: Crosscheck of JVET-G0071 ACP with padding for 360-degree video [Y. He (InterDigital)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3179" JVET-G0074 AHG8: ECP with padding for 360-degree video [G. Van der Auwera, M. Coban, M. Karczewicz (Qualcomm)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3245" JVET-G0133 AHG8: Crosscheck of JVET-G0074 ECP with padding for 360-degree video [Y. He (InterDigital)] [late]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3208" JVET-G0099 Padded ERP (PERP) projection format for OMAF subjective test [J. Boyce, Z. Deng (Intel)]
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3275" JVET-G0157 AHG8: Padding investigation in compact ISP format [A. Dsouza, S. N. Akula, C. Pujara, A. Singh, R. K. K., R. Gadde, V. Zakharchenko, E. Alshina, K. P. Choi (Samsung)] [late]
HL syntax (0)
No contributions in this category.
Complexity analysis (1)
Contributions in this category were discussed Sunday 16th 1735-1750 (chaired by JRO).
 HYPERLINK "http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=3161" JVET-G0061 AHG5 External Memory Access Evaluation with the Consideration of Cache [X. Li, T. Hsieh, J. Chen, M. Karczewicz (Qualcomm)]
In this proposal, external memory access by the decoder is studied with the consideration of cache. It is reported that statistics from a widely used profiling tool (Valgrind) indicate that, for the same bitstream, the cache size has a very big impact on the decoder's external memory read accesses. It is further reported that different cache sizes lead to over a 500-times difference in external memory read accesses by the JEM-6.0 decoder. It is proposed to further study external memory access with the consideration of cache.
Useful information, more realistic than memory models used in context of HEVC development  could become important in the context of future standards development.
The tool only considers memory reading, not writing
Further study recommended. Would be interesting to see the difference JEM/HM as an example.
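The cache-aware measurement described above can be reproduced in outline with Valgrind's cachegrind tool, which simulates configurable L1 and last-level caches. The cache sizes, decoder binary, and file names below are illustrative placeholders, not values from the contribution:

```shell
# Simulate a 32 KB, 8-way L1 data cache and an 8 MB, 16-way last-level cache
# (64-byte lines); the DLmr / DLmw counters in the output approximate
# external-memory read/write traffic (the contribution considers reads only).
valgrind --tool=cachegrind --D1=32768,8,64 --LL=8388608,16,64 \
  ./TAppDecoder -b stream.bin -o rec.yuv

# Per-function breakdown of the simulated miss counts
cg_annotate cachegrind.out.<pid>
```

Rerunning with different --D1/--LL settings exposes the cache-size sensitivity reported in the contribution.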
Encoder optimization (2)
Contributions in this category were discussed Sunday 15th 1750-1800 (chaired by JRO).
JVET-G0109 A modification of fast algorithm in intra mode selection [P.-H. Lin, C.-L. Lin, C.-C Lin (ITRI)]
This contribution proposes to modify a process in an intra prediction fast algorithm, which keeps several MPM candidates in the RDO list even if their SATD is larger in P or B slices. The simulation results show that significant gain is observed for class A1 in the RA condition.
Results are inhomogeneous across classes, with losses for some sequences. Encoding time increases by 4%.
No action at this moment; further study.
JVET-G0127 Crosscheck of a modification of fast algorithm in intra mode selection (JVET-G0109) [Y. Yasugi, T. Ikai (Sharp)] [late]
Metrics and evaluation criteria (0)
No contributions in this category.
Withdrawn (2)
See under 1.4.2.
JVET-G0135 Withdrawn
JVET-G0139 Withdrawn
Joint Meetings, BoG Reports, and Summary of Actions Taken
Exploration Experiments (update)
The setup of Exploration Experiments was discussed, and an initial draft of the EE document was reviewed in the plenary (chaired by JRO). This included the list of all tools that are intended to be investigated in EEs during the subsequent meeting cycle:
EE1: Intra prediction and mode coding (continue)
JVET-G0081
JVET-G0107
JVET-G0108
JVET-G0060
JVET-G0159
EE2 (new): Entropy coding
JVET-G0112
EE3 (new): ALF:
JVET-G0095
Li Zhang is mandated to compile the EE document with remote assistance by Elena Alshina, to be circulated by Thursday and reviewed Friday.
It was agreed to give the editors the discretion to finalize the document during the two weeks after the meeting, and circulate/discuss it on the reflector appropriately.
Joint meetings
BoGs
JVET-G0158 Report of BoG on 360 Video [J. Boyce]
This BoG on 360 Video met on July 15, 2017.
The BoG recommends the following:
JVET-G0088: Modify the WS-PSNR calculation for ACP
Change the metrics in the CTC as follows:
codec level: old: WS-PSNR, PSNR; new: WS-PSNR, S-PSNR-NN
cross-format: old: S-PSNR-I, CPP-PSNR, and S-PSNR-NN; new: CPP-PSNR, S-PSNR-NN
E2E: old: S-PSNR-I, CPP-PSNR, S-PSNR-NN and WS-PSNR; new: S-PSNR-NN and WS-PSNR
viewport (2 dynamic): old: PSNR; new: PSNR
Replace EAP with the proposed AEP with beta = 1/1.4
Add the equi-angle cubemap (EAC) projection format to 360Lib
Add the equatorial cylindrical projection (ECP) format to 360Lib
Decision: These recommendations were approved in the JVET plenary Wed. afternoon.
The BoG suggests discussion of the following:
Discuss JVET-G0064 Stereoscopic 360 video compression with the next generation video codec at parent body level
Discuss criteria for adding or removing projection formats from 360Lib and the 360 video CTC anchor generation
From discussion in the JVET plenary Wednesday afternoon: The purpose of generating anchors for all projection formats is mainly to enable sanity checks when changes to 360Lib are made. For this, it is sufficient to generate HM anchors. The presence of a projection format in 360Lib is not meant to give it any status in the context of standardization work. The formats are mainly there to give experts the possibility to experiment with them (provided that interest is expressed), and to enable studying methods of comparing different formats. The set of formats present in 360Lib should be limited to the necessary minimum, representing typical cases and avoiding duplication.
Decision: It was agreed that the list of projection formats included in the CTC & 360Lib will not grow further, to avoid having so many that we can't properly study them. If we want to add one, we need a decision to remove one. Anchors for projection formats are to be made available only with HM, and with ERP for JEM.
It was further discussed whether dropping the codec-level WS-PSNR would be useful, but several experts expressed the opinion that at the current stage it is still needed for sanity checks.
Regarding EE3, it was concluded that no consistent quality improvement could be observed by adaptive QP, therefore this should not be used in CfP anchors.
Continue study of adaptive QP in AHG on 360 video, also for JEM, and also possibly investigating less aggressive methods. Note that in CfP we should not disallow adaptive QP if it is purely geometry dependent.
Several topics have been identified for revisit by the BoG. 
Viewing of coded versions of proposed new test sequences is planned.
The BoG met again on July 18, 2017.
The BoG recommends the following:
360 video software coordinators still responsible for providing anchors for all projection formats in 360Lib for HM, and for ERP for JEM. Proponents are encouraged to also provide cross-checked anchor data for other projection formats for JEM, and communicate to software coordinators to include in report. 
The BoG suggests discussion of the following:
Criteria for adopting projection maps in future meetings should be discussed in track. Options include:
1. Require that WS-PSNR implementation be available for any new projection format, and WS-PSNR metrics be compared to S-PSNR-NN
If a WS-PSNR implementation for the projection format is not available, the proposed projection format can be put into an EE, but not adopted into 360Lib and its description document
2. Stop requiring use of codec level WS-PSNR
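For context on the metric under discussion: WS-PSNR weights each sample's squared error by the spherical area it covers, and for an ERP picture this reduces to a per-row cosine weight. A minimal sketch of the idea follows; this is not the 360Lib implementation, and the function names and 8-bit peak value are illustrative assumptions:

```python
import math

def erp_weights(height):
    # Per-row spherical-area weights for ERP: ~1 at the equator, ->0 at the poles.
    return [math.cos((j + 0.5 - height / 2.0) * math.pi / height)
            for j in range(height)]

def ws_psnr(ref, rec, max_val=255.0):
    # ref, rec: 2-D lists of luma samples with identical dimensions.
    w_row = erp_weights(len(ref))
    num = den = 0.0
    for w, ref_row, rec_row in zip(w_row, ref, rec):
        for a, b in zip(ref_row, rec_row):
            num += w * (a - b) ** 2
            den += w
    return 10.0 * math.log10(max_val ** 2 / (num / den))
```

With a uniform error of one code level the weighted MSE is 1, so the result equals plain PSNR (20·log10(255)), as expected when the error is distributed identically across all rows.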
JVET-G0165 BoG Report: Extended Colour Volume and High Dynamic Range [A. Segall]
The BoG on Extended Colour Volume and High Dynamic Range coding met during the 7th JVET meeting in Turin, Italy. The mandates of the group were:
Discuss the viewing sessions conducted for HLG content
Review JVET-G0072 and JVET-G0054
Suggest Actions
The BoG met on July 17, 2017, from 6:00PM to 8:00PM, and July 18, 2017, from 5:15PM to 8:00PM.
It was proposed to include HLG7 and HLG1 in the CTC at this meeting, with the goal of selecting HLG content at the next meeting once more information was provided about the sequences proposed by SONY.
The rate points for HLG7 will be reported in JVET-G0103, and those rate points will be included in the BoG report when available.  
Recommendation: For HLG7, select 4 rate points covering the range of Rate 0 to Rate 5.
Recommendation: For HLG1, select 4 rate points covering the range of Rate 0 to Rate 4.
Recommendation: Include HLG7 and HLG1 in the CTC at this meeting with the intention of selecting HLG content for the CfP at the next meeting once more information is provided for the sequences proposed by SONY. DayStreet and PeopleInShoppingCenter were considered interesting.
Note: Confirm if a visually transparent operating point is needed.  
HM should still have some artifacts at highest rate point.
HLG anchors should not use QP adaptation.
Decision(CTC): Include HLG1 and HLG7 in test set.
JVET-G0163 BoG report on test material [T. Suzuki]
The mandates of this BoG are as follows:
Viewing of drone test sequences proposed by JVET-G0096 and JVET-G0145
Viewing of medical test sequences proposed by JVET-G0155
Discuss further action on drone test sequences
Comments from viewers:
Medical sequence (JVET-G0155)
It is good to evaluate such medical sequences, and they should be included in the JVET test set
Subjective viewing may be difficult; JVET experts are not medical doctors, and the point of evaluation is not clear
Good for objective comparison
Drone test sequences (JVET-G0096)
Was the camera output captured in a compressed format?
No, captured as RAW data
Picture quality is good (clean, with little noise)
Too many water sequences in the JVET test set
Flickering can be observed even at high bit rates
QP may be too high for the lowest bit rate
Beach Mountain is not transparent at high bit rate; coding artifacts can be observed
Comparing HM and JEM at the same bit rate, the improvement of JEM is obvious
Neither sequence includes structures, e.g. buildings, cars, etc.
A drone sequence with structures, e.g. buildings, would be interesting
Viewers preferred MountainBay over Beach Mountain
Downsampled HD would also be good to investigate
The BoG recommends:
To study coding characteristics of the medical test sequences proposed by JVET-G0155
To include the MountainBay sequence, proposed by JVET-G0096, in the CfP test set for objective comparison
Both 4K (original size) and downsampled HDTV
To further study subjective quality and appropriate rate points for MountainBay by the next meeting, and then discuss again whether it should be included in the subjective test of the CfP
Presented in JVET plenary Friday 21 morning. 
Medical sequences should be made available as 4:2:0
MountainBay (UHD) to be included in objective test set for CfP. Further investigate whether also HD downsampled version should be included.
List of actions taken affecting JEM7 and 360lib4
The following is a summary, in the form of a brief list, of the actions taken at the meeting that affect the text of the JEM7 or 360Lib4.0 description. Both technical and editorial issues are included. This list is provided only as a summary; details of specific actions are noted elsewhere in this report, and the list provided here may not be complete or correct. The listing of a document number only indicates that the document is related, not that it was adopted in whole or in part.
The list was presented and confirmed to be complete Friday morning in the JVET plenary.
Encoder only or CTC/software changes
JVET-G0065 Simplification and improvements on FRUC [J. Seo, J. Lee, S.-H. Kim, H. M. Jang, J. Lim (LGE)]
Implement the search range restriction in FRUC (bug fix, done by software coordinators)
JVET-G0090 Unified adaptive search range setting in JEM and HM [T. Ikai, Y. Yasugi (Sharp)]
JVET-G0101 QP switching
JVET-G0153 AHG7: Analysis of HDR metrics [E. François (Technicolor)] [late]
Skip t-PSNR from HDR CTC & Excel sheets
JVET-G0165 BoG Report: Extended Colour Volume and High Dynamic Range [A. Segall]
Include HLG1 and HLG7 in test set.
General: It was agreed to ordinarily report only one digit past the decimal point of percentage BD impacts.
Syntax/semantics/decoding process changes
JVET-G0104 EE1: Alternative setting for PDPC mode and explicit ARSS flag (tests 3-7) [M. Karczewicz, V. Seregin, A. Said, N. Hu, X. Zhao (Qualcomm)]
Adopt EE1 Test 7 into JEM7
JVET-G0082 EE2: A block-based design for Bi-directional optical flow (BIO) [H.-C. Chuang, J. Chen, X. Li, Y.-W. Chen, M. Karczewicz, W.-J. Chien (Qualcomm)]
Adopt EE2 Test 2 into JEM7
Changes in 360lib
JVET-G0097 EE4: ERP/EAP-based segmented sphere projection with different padding sizes [Y.-H. Lee, H.-C. Lin, J.-L. Lin, S.-K. Chang, C.-C. Ju (MediaTek)]
Replace SSP by EAP-based SSP (with padding) in the 360Lib software.
JVET-G0098 EE4: Padded ERP (PERP) projection format [J. Boyce, Z. Deng (Intel)]
Include 8-luma-sample ERP padding in anchor (on each side). In software, the padding width can be a compile-time macro parameter. The padding regions are added to the picture size that has been used previously, so more samples are being coded (within the 1% tolerance).
Blending should be used in the anchor. 
Option of not using blending should be supported by software.
JVET-G0051 AHG8: A study on quality impact of line re-sampling rate in EAP [M. Zhou (Broadcom)]
Replace EAP with the proposed AEP with beta = 1/1.4
JVET-G0056 AHG8: A study on Equi-Angular Cubemap projection (EAC) [M. Zhou (Broadcom)]
Add EAC format to 360Lib
JVET-G0074 AHG8: ECP with padding for 360-degree video [G. Van der Auwera, M. Coban, M. Karczewicz (Qualcomm)]
Add ECP w/ padding to the 360Lib software.
JVET-G0088 Change of WS-PSNR for ACP
General: It was agreed that the list of projection formats included in the CTC & 360Lib will not grow further, to avoid having so many that we can't properly study them. If we want to add one, we need a decision to remove one. Anchors for projection formats are to be made available only with HM, and with ERP for JEM.
(check that all above acronyms are in list)
Project planning
JEM description drafting and software
The following agreement has been established: the editorial team has the discretion to not integrate recorded adoptions for which the available text is grossly inadequate (and cannot be fixed with a reasonable degree of effort), if such a situation hypothetically arises. In such an event, the text would record the intent expressed by the committee without including a full integration of the available inadequate text.
Plans for improved efficiency and contribution consideration
The group considered it important to have the full design of proposals documented to enable proper study.
Adoptions need to be based on properly drafted working draft text (on normative elements) and HM encoder algorithm descriptions, relative to the existing drafts. Proposal contributions should also provide a software implementation (or at least such software should be made available for study and testing by other participants at the meeting, and software must be made available to cross-checkers in EEs).
Suggestions for future meetings included the following generally-supported principles:
No review of normative contributions without draft specification text
JEM text is strongly encouraged for non-normative contributions
Early upload deadline to enable substantial study prior to the meeting
Using a clock timer to ensure efficient proposal presentations (5 min) and discussions
The document upload deadline for the next meeting was planned to be XXday XX April 2017.
As general guidance, it was suggested to avoid usage of company names in document titles, software modules etc., and not to describe a technology by using a company name.
General issues for Experiments 
Note: This section was drafted during the second JVET meeting, and is kept here for information about the EE procedure.
Group coordinated experiments have been planned. These may generally fall into one category:
Exploration experiments (EEs) are the coordinated experiments on coding tools which are deemed to be interesting but require more investigation and could potentially become part of the main branch of JEM by the next meeting.
A description of each experiment is to be approved at the meeting at which the experiment plan is established. This should include the issues that were raised by other experts when the tool was presented, e.g., interference with other tools, contribution of different elements that are part of a package, etc. (E. Alshina will edit the document based on input from the proponents, review is performed in the plenary)
Software for tools investigated in EE is provided in a separate branch of the software repository
During the experiment, further improvements can be made
By the next meeting, it is expected that at least one independent party will report a detailed analysis of the tool, confirm that the implementation is correct, and give reasons to include the tool in the JEM
As part of the experiment description, it should be captured whether performance relative to JEM as well as HM (with all other tools of JEM disabled) should be reported by the next meeting.
It is possible to define sub-experiments within particular EEs, for example designated as EEX.a, EEX.b, etc., where X is the basic EE number.
As a general rule, it was agreed that each EE should be run under the same testing conditions using one software codebase, which should be based on the JEM software codebase. An experiment is not to be established as an EE unless the participants in (any part of) the EE are given access to the software used to perform the experiments.
The general agreed common conditions for single-layer coding efficiency experiments are described in the output document JVET-B1010.
Experiment descriptions should be written in a way such that they are understood as JVET output documents (written from an objective third-party perspective, not a company proponent perspective, e.g. not referring to methods as "improved", "optimized", etc.). The experiment descriptions should generally not express opinions or suggest conclusions; rather, they should just describe what technology will be tested, how it will be tested, who will participate, etc. Responsibilities for contributions to EE work should identify individuals in addition to company names.
EE descriptions should not contain excessively verbose descriptions of a technology (at least not unless the technology is not adequately documented elsewhere). Instead, the EE descriptions should refer to the relevant proposal contributions for any necessary further detail. However, the complete detail of what technology will be tested must be available, either in the EE description itself or in referenced documents that are also available in the JVET document archive.
Any technology must have at least one cross-check partner to establish an EE; a single proponent is not enough. It is highly desirable to have more than just one proponent and one cross-checker.
Some agreements relating to EE activities were established as follows:
Only qualified JVET members can participate in an EE.
Participation in an EE is possible without a commitment of submitting an input document to the next meeting.
All software, results, documents produced in the EE should be announced and made available to all EE participants in a timely manner.
A separate branch under the experimental section will be created for each new tool included in the EE. The proponent of that tool is the gatekeeper for that separate software branch. (This differs from the main branch of the JEM, which is maintained by the software coordinators.)
New branches may be created which combine two or more tools included in the EE document or the JEM. Requests for new branches should be made to the software coordinators.
Cross-checkers do not need to be formally named in the EE document. To promote a tool to the JEM at the next meeting, we would like to see comprehensive cross-checking done, with analysis that the description matches the software, and a recommendation of the value of the tool given its tradeoffs.
Timeline:
T1 = JEM5.0 SW release + 4 weeks: Integration of all tools into separate EE branch of JEM is completed and announced to JVET reflector.
Initial study by cross-checkers can begin.
Proponents may continue to modify the software in this branch until T2
3rd parties encouraged to study and make contributions to the next meeting with proposed changes
T2 = 3 weeks before the start of the JVET-F meeting: Any changes to the exploration branch software must be frozen, so that the cross-checkers know exactly what they are cross-checking. An SVN tag should be created at this time and announced on the JVET reflector.
This procedure was again confirmed during the closing plenary of the third JVET meeting. It was further confirmed that the Common Test Conditions of JVET-B1010 are still valid, however the CTC encoder setting will be reflected in the config file that is attached to the JEM4.0 package.
Software development and anchor generation
Software coordinators will work out the detailed schedule with the proponents of adopted changes.
Any adopted proposals where software is not delivered by the scheduled date will be rejected.
The planned timeline for software releases was established as follows:
JEM7.0 including all adoptions from section 12.4 will be released by 2017-08-04.
The results about coding performance of JEM7.0 will be reported by 2017-08-11.
Further versions may be released for additional bug fixing, as appropriate 
Timeline of 360lib4.0: 2 weeks after the meeting (2017-08-04). 
Further versions may be released as appropriate for bug fixing.
Timelines and volunteers for CfP anchors:
Action also seems necessary for rate tuning of the following sequences:
Cat Robot rates 3 and 4 lower
Daylight road all rates lower
BQ terrace: Highest rate lower
Ritual Dance rates 3 and 4 lower
Market rates 1 and 2 lower
Show Girl rates 2-4 lower
Starting rates 3 and 4 lower
The change from 4096 to 3840 width for some sequences also requires proportional lowering of rates.
New rate points are determined regarding the HM anchor quality. JRO will propose new rate points by 08-04, and if agreed by 08-11, anchor generation can start.
HM 16.16 anchors by 09-01
JEM 7.0 anchors by 09-29
For SDR: HD/RA, HD/LD, UHD: Samsung/Qualcomm
For 360: InterDigital/Samsung
For HDR: Technicolor/Qualcomm
New HM anchors will be generated using HM 16.16. JEM anchors will be based on JEM 7.0.
Responsibilities for updating sequences: original contributors, except Campfire (Alexis Tourapis).
New sequences needed by 08-01.
Investigation and generating anchors for objective testing:
CTC set (UHD, HD) plus MountainBay
Identify more sequences via AHG4
Discuss by next meeting how to formulate PSNR matching 
Excel sheets to be attached to CfP (Sept. 29), to be exercised with JEM vs. HM: 
SDR: J. Chen
HDR: E. François
360: Y. He
Note: MD5 checksums shall not be included in anchor bitstreams.
Output documents and AHGs
The following documents were agreed to be produced or endorsed as outputs of the meeting. Names recorded below indicate the editors responsible for the document production.
JVET-G1000 Meeting Report of the 7th JVET Meeting [G. J. Sullivan, J.-R. Ohm] [2017-10-15] (near next meeting)
Intermediate versions of the meeting notes (d0 – d8) were made available on a daily basis during the meeting.
JVET-G1001 Algorithm description of Joint Exploration Test Model 7 (JEM7) [J. Chen, E. Alshina, G. J. Sullivan, J.-R. Ohm, J. Boyce] [2017-08-18] (MPEG N17055)
See list of new adoptions under 12.4. During the closing plenary, no complaints were made about the accuracy of that list.
JVET-G1002 Draft Joint Call for Proposals on Video Compression with Capability beyond HEVC [A. Segall, M. Wien, V. Baroncini, J. Boyce, T. Suzuki] [2017-08-11] (MPEG N17053)
Draft was discussed Wed 1400 and in joint meeting with parent bodies Wed 1600. Was again presented during closing plenary on Friday.
The companies responsible for providing the HM and JEM anchors will also provide the corresponding Excel templates for the cases of SDR, HDR and 360.
(see responsibilities under 16.4)
JVET-G1003 Algorithm descriptions of projection format conversion and video quality metrics in 360Lib Version 4 [Y. Ye, E. Alshina, J. Boyce] [2017-08-18] (MPEG N17056)
See list of new adoptions under 12.4.3. During the closing plenary, no complaints were made about the accuracy of that list.
JVET-G1004 Results of the Call for Evidence on Video Compression with Capability beyond HEVC [M. Wien, V. Baroncini, P. Hanhart, J. Boyce, A. Segall] [2017-08-25] (MPEG N17054)
Was presented. Some updates: Make proposal results anonymous; identify confidence intervals by same colours; distinguish between half and full MOS.
JVET-G1010 JVET common test conditions and software reference configurations [K. Suehring, X. Li] [2017-08-01]
Reflects updates of test sequences.
JVET-G1011 Description of Exploration Experiments on coding tools [E. Alshina, L. Zhang] [2017-08-11] (MPEG N17057)
The initial version was presented in the closing plenary on Friday 21 July. Additional tests related to JVET-G0146 were proposed in the initial version. These were removed, as they had not been agreed before, and the reported benefit seems to be rather low (variation compared to EE1 test 7 clearly less than 0.1%).
See list of EEs under 12.1.
JVET-G1020 JVET common test conditions and evaluation procedures for HDR/WCG video [A. Segall, E. François, D. Rusanovskyy] [2017-07-28]
JVET-G1030 JVET common test conditions and evaluation procedures for 360° video [E. Alshina, J. Boyce, A. Abbas, Y. Ye] [2017-07-28] 
Participants were reminded that in cases where a JVET document is also made available as an MPEG output document, a separate version under the MPEG document header should be generated. This version should be sent to GJS and JRO for upload.
Title and Email Reflector | Chairs | Mtg
Tool evaluation (AHG1)
(jvet@lists.rwth-aachen.de)
Coordinate the exploration experiments.
Investigate interaction of tools in JEM and exploration experiment branches.
Discuss and evaluate methodologies and criteria to assess the benefit of tools, and how to ease the assessment of single tools in terms of encoder runtime.
Study and summarize new technology proposals.
Discuss methodologies for objective comparison in the forthcoming Call for Proposals.
Chairs: E. Alshina, M. Karczewicz (co-chairs) | Mtg: N
JEM algorithm description editing (AHG2)
(jvet@lists.rwth-aachen.de)
Produce and finalize JVET-G1001 Algorithm Description of Joint Exploration Test Model 7.
Gather and address comments for refinement of the document.
Coordinate with the JEM software development AHG to address issues relating to mismatches between software and text.
Chairs: J. Chen (chair), E. Alshina, J. Boyce (vice chairs) | Mtg: N
JEM software development (AHG3)
(jvet@lists.rwth-aachen.de)
Coordinate development of the JEM7.0 software packages and their distribution.
Produce documentation of software usage for distribution with the software.
Prepare and deliver JEM7.0 software version and the reference configuration encodings according to JVET-G1010 common conditions.
Coordinate with AHG on JEM model editing and errata reporting to identify any mismatches between software and text, and make further updates and cleanup to the software as appropriate.
Investigate the implementation of SCC coding tools in JEM.
Coordinate with AHG6 for integration of 360 video software.
Chairs: X. Li, K. Suehring (co-chairs) | Mtg: N
Test material and visual assessment (AHG4)
(jvet@lists.rwth-aachen.de)
Maintain the video sequence test material database for development of future video coding standards.
Identify and recommend appropriate test materials and corresponding test conditions for use in the development of future video coding standards.
Identify missing types of video material, solicit contributions, collect, and make available a variety of video sequence test material.
Discuss and prepare HM anchors at additional rate points for the Call for Proposals.
Evaluate new test sequences, and prepare for the visual assessment in the next meeting.
Discuss subjective comparison methodologies, and make logistic arrangements for the forthcoming Call for Proposals.
Prepare viewing equipment arrangements for the upcoming meeting.
Chairs: V. Baroncini, T. Suzuki (co-chairs), J. Chen, J. Boyce, A. Norkin (vice chairs) | Mtg: N
Memory bandwidth consumption of coding tools (AHG5)
(jvet@lists.rwth-aachen.de)
Study the methodology of measuring decoder memory bandwidth consumption, including cache models.
Develop software tools for measuring both average and worst case of memory bandwidth.
Make analysis for examples of JEM coding tools.
Study the impact of memory bandwidth on specific application cases.
Chairs: X. Li (chair), E. Alshina, R. Hashimoto, T. Ikai, H. Yang (vice chairs) | Mtg: N
360° video conversion software development (AHG6)
(jvet@lists.rwth-aachen.de)
Prepare and deliver 360Lib-4.0 software version and common test condition configuration files according to JVET-G1030.
Generate CTC HM anchors for all projection formats, CTC JEM anchors for the ERP projection format, and a reporting template for the common test conditions. 
Produce documentation of software usage for distribution with the software.
Chairs: Y. He, V. Zakharchenko (co-chairs) | Mtg: N
JEM coding of HDR/WCG material (AHG7)
(jvet@lists.rwth-aachen.de)
Coordinate generation of HM and JEM anchors for the draft CfP.
Study and evaluate available HDR/WCG test content.
Study objective metrics for quality assessment of HDR/WCG material.
Evaluate transfer function conversion methods, including methods that may be standardized by BT.[HDR-OPS]
Study and refine test conditions and anchors for the JEM coding of HDR/WCG content.
Study additional aspects of coding HDR/WCG content.
Chairs: A. Segall (chair), E. François, D. Rusanovskyy (vice chairs) | Mtg: N
360° video coding tools and test conditions (AHG8)
(jvet@lists.rwth-aachen.de)
Study the effect on compression and subjective quality of different projection formats, resolutions, and packing layouts.
Discuss refinements of common test conditions, test sequences, and evaluation criteria. 
Study consistency of and potential improvements to the objective quality metrics in CTC.
Coordinate effort to prepare for CfP testing, including anchor generation, in collaboration with AHG5.
Solicit additional test sequences, and evaluate suitability of test sequences on head-mounted displays and normal 2D displays.
Produce and finalize JVET-G1003 algorithm descriptions of projection format conversion process and objective quality metrics in 360Lib. 
Produce and finalize JVET-G1030 JVET common test conditions and evaluation procedures for 360 video. 
Study coding tools dedicated to 360 video, and their impact on compression.
Study the effect of viewport resolution, field of view, and viewport speed/direction on visual comfort.
Study the impact of coding resolution vs. original ERP resolution on coding efficiency.
Chairs: J. Boyce (chair), A. Abbas, E. Alshina, G. v. d. Auwera, Y. Ye (vice chairs) | Mtg: Y (phone)
4:4:4 support in JEM (AHG9)
(jvet@lists.rwth-aachen.de)
Evaluate JEM6.0 software in terms of 4:4:4 chroma sampling support: Identify the tools in JEM6.0 that are not able to support 4:4:4 chroma sampling appropriately. Also identify whether this is due to implementation limitations or tool characteristics.
Further investigate the problems associated with RExt tools, and perform an investigation with JEM inter coding tools.
Chairs: A. Tourapis (chair), X. Li, X. Xu (vice chairs) | Mtg: N
Denoising and adaptive quantisation (AHG10)
(jvet@lists.rwth-aachen.de)
Investigate the impact of using denoising filters on input sequences before encoding with HM and JEM.
Study the impact of using adaptive quantization in context of HM and JEM SDR coding.
Perform visual quality assessment for cases using denoising filters, renoising, and adaptive quantisation.
Study objective error metrics for measuring small subjective compression efficiency improvements when adaptive quantisation is used.
Solicit input contributions demonstrating subjective benefits over the JEM 7 anchor.
R. Sjöberg (chair), E. Alshina, S. Ikonin, A. Norkin, T. Wiegand (vice chairs)
N
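Adaptive quantisation of the kind studied by this AHG typically lowers the QP in flat regions, where quantisation errors are most visible, and raises it in textured regions that mask errors. As a rough illustration only (this is not the HM/JEM implementation; the activity measure, scaling factor, and clipping range below are arbitrary assumptions), a per-block QP offset can be derived from local spatial activity:

```python
import math

def block_activity(block):
    """Mean absolute deviation from the block mean: a crude texture measure.

    The +1.0 floor is an assumption to keep the log well-defined for flat blocks.
    """
    n = len(block) * len(block[0])
    mean = sum(sum(row) for row in block) / n
    return sum(abs(p - mean) for row in block for p in row) / n + 1.0

def qp_offset(block, avg_activity, scale=6.0, max_offset=6):
    """Positive offset (coarser quantisation) for busy blocks, negative for flat ones.

    scale and max_offset are illustrative values, not values used by HM or JEM.
    """
    offset = round(scale * math.log2(block_activity(block) / avg_activity))
    return max(-max_offset, min(max_offset, offset))

# A flat 8x8 block vs. a synthetic textured one.
flat = [[128] * 8 for _ in range(8)]
noisy = [[128 + (37 * (r * 8 + c)) % 64 for c in range(8)] for r in range(8)]
avg = (block_activity(flat) + block_activity(noisy)) / 2
print(qp_offset(flat, avg), qp_offset(noisy, avg))  # flat gets a negative offset
```

The flat block receives a negative offset (finer quantisation) and the textured block a positive one, which is the qualitative behaviour the mandates above ask to evaluate subjectively.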
Future meeting plans, expressions of thanks, and closing of the meeting
Future meeting plans were established according to the following guidelines:
Meeting under ITU-T SG 16 auspices when it meets (starting the meeting on the Tuesday or Wednesday of the first week and closing it on the Tuesday or Wednesday of the second week of the SG 16 meeting, for a total of 6–7.5 meeting days), and
Otherwise meeting under ISO/IEC JTC 1/SC 29/WG 11 auspices when it meets (starting the meeting on the Thursday or Friday prior to such meetings and closing it on the last day of the WG 11 meeting, for a total of 8.5 meeting days).
In cases where high workload is expected for a meeting, an earlier starting date may be defined.
Some specific future meeting plans (to be confirmed) were established as follows:
Wed. 18 – Wed. 25 Oct. 2017, 8th meeting under ITU-T auspices in Macao, CN.
Fri. 19 Jan. – Fri. 26 Jan. 2018, 9th meeting under WG 11 auspices in Gwangju, KR.
Wed. 11 Apr. – Fri. 20 Apr. 2018, 10th meeting under WG 11 auspices in San Diego, US.
Tue. 10 – Wed. 18 July 2018, 11th meeting under ITU-T auspices in Ljubljana, SI.
The agreed document deadline for the 8th JVET meeting is Tuesday 10 Oct. 2017. Plans for scheduling of agenda items within that meeting remain TBA.
UNINFO was thanked for the excellent hosting of the 7th meeting of the JVET. NHK and GBTech were thanked for providing viewing equipment. Vittorio Baroncini, Philippe Hanhart, Atsuro Ichigaya, Shunsuke Iwamura, Shimpei Nemoto, and Mathias Wien were thanked for conducting the visual tests in the context of the Call for Evidence. The participants in the expert viewing were also thanked. CRAN/CNRS, GoPro, InterDigital, LetinVR, and Sony were thanked for offering new test sequences.
The 7th JVET meeting was closed at approximately 1313 hours on Friday 21 July 2017.
Annex A to JVET report: List of documents
JVET number | MPEG number | Created | First upload | Last upload | Title | Authors
JVET-G0001 | m41226 | 2017-07-11 13:44:25 | 2017-07-11 13:45:00 | 2017-07-13 09:35:13 | JVET AHG report: Tool evaluation (AHG1) | M. Karczewicz, E. Alshina
JVET-G0002 | m41003 | 2017-07-08 00:31:25 | 2017-07-12 00:40:46 | 2017-07-12 00:40:46 | JVET AHG report: JEM algorithm description editing (AHG2) | J. Chen, E. Alshina, J. Boyce
JVET-G0003 | m40910 | 2017-07-05 20:03:27 | 2017-07-12 20:59:19 | 2017-07-12 20:59:19 | JVET AHG report: JEM software development (AHG3) | X. Li, K. Sühring
JVET-G0004 | m41223 | 2017-07-11 11:52:23 | 2017-07-12 02:34:21 | 2017-07-13 15:24:13 | JVET AHG report: Test material (AHG4) | T. Suzuki, V. Baroncini, J. Chen, J. Boyce, A. Norkin
JVET-G0005 | m40911 | 2017-07-05 20:06:19 | 2017-07-12 21:03:03 | 2017-07-12 21:03:03 | JVET AHG report: Memory bandwidth consumption of coding tools (AHG5) | X. Li, E. Alshina, T. Ikai, H. Yang
JVET-G0006 | m40946 | 2017-07-06 04:03:05 | 2017-07-13 06:07:58 | 2017-07-13 06:07:58 | JVET AHG report: 360 video conversion software development (AHG6) | Y. He, V. Zakharchenko
JVET-G0007 | m41263 | 2017-07-13 02:39:10 | 2017-07-13 09:04:25 | 2017-07-13 09:04:25 | JVET AHG report: JEM coding of HDR/WCG material (AHG7) | A. Segall, E. François, D. Rusanovskyy
JVET-G0008 | m41168 | 2017-07-10 20:29:32 | 2017-07-12 19:00:42 | 2017-07-12 19:00:42 | JVET AHG report: 360 video coding tools and test conditions | J. Boyce, A. Abbas, E. Alshina, G. v. d. Auwera, Y. Ye
JVET-G0009 | m41207 | 2017-07-11 03:17:04 | 2017-07-13 09:37:31 | 2017-07-13 09:37:31 | JVET AHG report: 4:4:4 support in JEM (AHG9) | A. M. Tourapis, X. Li
JVET-G0010 | m41216 | 2017-07-11 08:31:21 | 2017-07-11 13:54:13 | 2017-07-14 09:05:22 | Exploration Experiments on Coding Tools Report | E. Alshina, L. Zhang
JVET-G0021 | m40914 | 2017-07-05 22:39:17 | 2017-07-05 22:40:22 | 2017-07-19 11:52:28 | FastVDO Response to JVET CfE for HDR | P. Topiwala, M. Krishnan, W. Dai (FastVDO)
JVET-G0022 | m40963 | 2017-07-06 13:21:18 | 2017-07-06 13:21:49 | 2017-07-13 08:56:40 | CfE response to the HDR category from Technicolor | E. François, F. Le Leannec (Technicolor)
JVET-G0023 | m40912 | 2017-07-05 20:56:41 | 2017-07-05 20:57:57 | 2017-07-13 12:25:34 | Qualcomm's response to Joint CfE in 360-degree video category | M. Coban, G. Van der Auwera, M. Karczewicz (Qualcomm)
JVET-G0024 | m40919 | 2017-07-05 23:57:00 | 2017-07-06 00:00:06 | 2017-07-13 10:06:28 | InterDigital's Response to the 360º Video Category in Joint Call for Evidence on Video Compression with Capability beyond HEVC | P. Hanhart, X. Xiu, F. Duanmu, Y. He, Y. Ye (InterDigital)
JVET-G0025 | m40975 | 2017-07-07 02:11:27 | 2017-07-07 07:38:27 | 2017-07-13 12:27:51 | Samsung's response to Joint CfE on Video Compression with Capability beyond HEVC (360 category) | E. Alshina, K. Choi, V. Zakharchenko, S. N. Akula, A. Dsouza, C. Pujara, K. K. Ramkumaar, A. Singh (Samsung)
JVET-G0026 | m40921 | 2017-07-06 00:20:33 | 2017-07-06 00:23:06 | 2017-07-13 18:07:49 | Polyphase subsampling applied to 360-degree video sequences in the context of the Joint Call for Evidence on Video Compression | A. Gabriel, E. Thomas (TNO)
JVET-G0028 | m40941 | 2017-07-06 03:27:53 | 2017-07-06 03:29:36 | 2017-07-13 09:21:45 | InterDigital's Response to the SDR Category in Joint Call for Evidence on Video Compression with Capability beyond HEVC | X. Xiu, Y. He, Y. Ye (InterDigital)
JVET-G0029 | m40886 | 2017-07-05 10:21:06 | 2017-07-06 02:16:11 | 2017-07-13 12:32:02 | Samsung's response to Joint CfE on Video Compression with Capability beyond HEVC (SDR category) | E. Alshina, K. Choi (Samsung)
JVET-G0051 | m40811 | 2017-06-06 19:50:43 | 2017-07-03 17:56:19 | 2017-07-17 09:06:15 | AHG8: A study on quality impact of line re-sampling rate in EAP | M. Zhou (Broadcom)
JVET-G0052 | m40812 | 2017-06-06 19:53:32 | 2017-07-03 17:56:55 | 2017-07-03 17:56:55 | AHG8: A study on quality impact of coded picture resolution in 360 video coding | M. Zhou (Broadcom)
JVET-G0053 | m40848 | 2017-06-13 04:54:34 | 2017-06-13 04:59:15 | 2017-06-13 04:59:15 | Test Sequences for Virtual Reality Video Coding from LetinVR | R. Guo, W. Sun (LetinVR)
JVET-G0054 | m40855 | 2017-06-29 21:17:49 | 2017-07-06 11:33:51 | 2017-07-06 11:33:51 | Mapping SDR content into HDR signal containers | C. Fogg (MovieLabs)
JVET-G0055 | m40871 | 2017-07-03 21:40:47 | 2017-07-11 02:20:02 | 2017-07-21 22:41:31 | Test Sequences for Virtual Reality Video Coding from InterDigital | E. Asbun, Y. He, P. Hanhart, Y. He, Y. Ye (InterDigital)
JVET-G0056 | m40872 | 2017-07-04 00:08:35 | 2017-07-04 00:15:41 | 2017-07-17 09:06:54 | AHG8: A study on Equi-Angular Cubemap projection (EAC) | M. Zhou (Broadcom)
JVET-G0057 | m40873 | 2017-07-04 02:19:12 | 2017-07-06 02:09:29 | 2017-07-19 14:21:58 | AHG8: Influence of coding size on objective gain in 360-degree video CTC | G. Van der Auwera, M. Coban, M. Karczewicz (Qualcomm)
JVET-G0058 | m40876 | 2017-07-04 08:25:20 | 2017-07-06 08:38:40 | 2017-07-15 09:51:03 | AHG8: Reference picture extension of ACP format 360-degree video | M. Coban, G. Van der Auwera, M. Karczewicz (Qualcomm)
JVET-G0059 | m40883 | 2017-07-05 03:34:58 | 2017-07-05 04:04:46 | 2017-07-14 02:50:56 | AHG7: On the need of luma delta QP for BT.2100 HLG content | S. Iwamura, S. Nemoto, A. Ichigaya (NHK), M. Naccari (BBC)
JVET-G0060 | m40884 | 2017-07-05 04:17:11 | 2017-07-05 04:27:24 | 2017-07-11 09:43:23 | Improvements for Intra Prediction Mode Coding | Y. Han, J. An, J. Zheng (HiSilicon)
JVET-G0061 | m40885 | 2017-07-05 05:57:47 | 2017-07-05 18:58:29 | 2017-07-16 09:37:00 | AHG5 External Memory Access Evaluation with the Consideration of Cache | X. Li, T. Hsieh, J. Chen, M. Karczewicz (Qualcomm)
JVET-G0062 | m40887 | 2017-07-05 10:32:07 | 2017-07-06 00:52:20 | 2017-07-14 09:11:42 | EE1-Related: Harmonization UW Prediction method with improved PDPC | H. M. Jang, J. Lim, S.-H. Kim (LGE)
JVET-G0063 | m40888 | 2017-07-05 10:46:24 | 2017-07-07 14:12:06 | 2017-07-11 10:53:25 | New Test Sequence of 4K Hybrid Log-Gamma | T. Tsukuba, M. Ikeda, T. Suzuki (Sony)
JVET-G0064 | m40889 | 2017-07-05 10:50:24 | 2017-07-05 16:17:11 | 2017-07-15 11:34:14 | Stereoscopic 360 video compression with the next generation video codec | F. Henry, J. Jung, A. Ouach, B. Ray, P. Schwellenbach (Orange)
JVET-G0065 | m40891 | 2017-07-05 13:23:35 | 2017-07-06 01:07:13 | 2017-07-14 17:48:13 | Simplification and improvements on FRUC | J. Seo, J. Lee, S.-H. Kim, H. M. Jang, J. Lim (LGE)
JVET-G0066 | m40892 | 2017-07-05 13:35:37 | 2017-07-13 13:21:03 | 2017-07-13 18:11:56 | Viewpaths for the CfE VR sequences | M. Wien (RWTH), J. Boyce (Intel), M. Zhou (Broadcom)
JVET-G0067 | m40896 | 2017-07-05 15:24:13 | 2017-07-06 00:03:57 | 2017-07-16 11:02:43 | Chroma Adjustment for SDR Video | J. Ström, P. Wennersten, K. Andersson, R. Sjöberg (Ericsson)
JVET-G0068 | m40900 | 2017-07-05 17:15:23 | 2017-07-05 17:17:25 | 2017-07-14 09:19:17 | Non-EE1: Unified-PDPC: unification of intra filters | M. Philippe, K. Sharman (Sony Europe)
JVET-G0069 | m40901 | 2017-07-05 17:16:58 | 2017-07-05 17:23:15 | 2017-07-11 10:03:48 | EE1: Crosscheck of tests 6, 8 and 9 | V. Drugeon (Panasonic)
JVET-G0070 | m40903 | 2017-07-05 17:27:33 | 2017-07-05 19:02:58 | 2017-07-05 19:02:58 | EE3-JVET-F0049/F0038 Adaptive QP for ERP videos | Hendry, M. Coban (Qualcomm), F. Racape, F. Galpin (Technicolor)
JVET-G0071 | m40904 | 2017-07-05 18:32:55 | 2017-07-05 22:17:33 | 2017-07-15 10:59:45 | AHG8: ACP with padding for 360-degree video | G. Van der Auwera, M. Coban, M. Karczewicz (Qualcomm)
JVET-G0072 | m40905 | 2017-07-05 18:56:05 | 2017-07-05 23:55:12 | 2017-07-05 23:55:12 | AHG7: Analysis of currently proposed HLG content | E. François, F. Le Leannec, F. Galpin (Technicolor)
JVET-G0073 | m40915 | 2017-07-05 22:40:24 | 2017-07-05 22:43:39 | 2017-07-19 15:43:33 | Adaptive quantization and denoising for future video coding CfP and CTC | R. Sjöberg, K. Andersson, P. Wennersten (Ericsson)
JVET-G0074 | m40918 | 2017-07-05 23:48:43 | 2017-07-06 01:47:43 | 2017-07-15 10:24:24 | AHG8: ECP with padding for 360-degree video | G. Van der Auwera, M. Coban, M. Karczewicz (Qualcomm)
JVET-G0075 | m40920 | 2017-07-05 23:59:26 | 2017-07-06 01:06:02 | 2017-07-14 18:14:21 | AHG8: On reliability of S-PSNR-NN and S-PSNR-I as quality metrics for 360-degree video | Y. Ye, Y. He (InterDigital)
JVET-G0076 | m40924 | 2017-07-06 00:38:09 | 2017-07-06 02:46:22 | 2017-07-14 18:05:19 | Bilateral filter simplification | R. Vanam, Y. He, Y. Ye (InterDigital)
JVET-G0077 | m40925 | 2017-07-06 00:58:44 | 2017-07-06 01:01:08 | 2017-07-14 09:12:07 | EE1: UWP&UW66 with PDPC for other intra mode and ARSS off (Test2) | H. M. Jang, J. Lim, S.-H. Kim (LGE), K. Panusopone, S. Hong, Y. Yu, L. Wang (Arris)
JVET-G0078 | m40926 | 2017-07-06 01:15:49 | 2017-07-06 01:40:56 | 2017-07-06 01:40:56 | EE1 Test 1: UWP+UW66, PDPC off, ARSS on | K. Panusopone, S. Hong, Y. Yu, L. Wang (Arris)
JVET-G0079 | m40927 | 2017-07-06 01:21:02 | 2017-07-06 01:41:23 | 2017-07-13 10:54:29 | EE1 Test 10: UWP+UW66, PDPC on for other modes, ARSS constrained as in F0024 but explicit signaling as in F0055 | K. Panusopone, S. Hong, Y. Yu, L. Wang (Arris), H. M. Jang, J. Lim, S.-H. Kim (LGE)
JVET-G0080 | m40928 | 2017-07-06 01:23:03 | 2017-07-06 01:47:52 | 2017-07-14 09:08:16 | Additional EE1 Tests (Test 2.1: UWP+UW66, PDPC for other modes, PDPC-L, ARSS off, and Test 10.1: UWP+UW66, PDPC on for other modes, PDPC-L, ARSS constrained as in F0024 but explicit signaling as in F0055) | K. Panusopone, S. Hong, Y. Yu, L. Wang (Arris)
JVET-G0081 | m40929 | 2017-07-06 01:25:21 | 2017-07-06 02:42:38 | 2017-07-14 17:12:08 | Comparisons between UWP, W66 and Planar, Angular mode 66 under the same coding conditions | K. Panusopone, S. Hong, Y. Yu, L. Wang (Arris)
JVET-G0082 | m40930 | 2017-07-06 01:50:24 | 2017-07-06 03:01:49 | 2017-07-06 03:01:49 | EE2: A block-based design for Bi-directional optical flow (BIO) | H.-C. Chuang, J. Chen, X. Li, Y.-W. Chen, M. Karczewicz, W.-J. Chien (Qualcomm)
JVET-G0083 | m40931 | 2017-07-06 01:52:25 | 2017-07-06 03:44:19 | 2017-07-14 18:08:53 | EE2-related: A simplified gradient filter for Bi-directional optical flow (BIO) | H.-C. Chuang, J. Chen, K. Zhang, M. Karczewicz (Qualcomm)
JVET-G0084 | m40932 | 2017-07-06 02:29:08 | 2017-07-06 23:16:33 | 2017-07-06 23:16:33 | Luma/Chroma QP Adaptation for Hybrid Log-Gamma Sequences Encoding | K. Kawamura, S. Naito (KDDI)
JVET-G0085 | m40933 | 2017-07-06 02:30:07 | 2017-07-06 08:37:35 | 2017-07-06 08:37:35 | AhG7: Information on CfE anchor generation for HDR content | A. K. Ramasubramonian, D. Rusanovskyy (Qualcomm), E. François (Technicolor), F. Hiron, J. Zhao, A. Segall (Sharp)
JVET-G0086 | m40934 | 2017-07-06 02:40:03 | 2017-07-11 08:22:17 | 2017-07-11 08:22:17 | EE1: Cross-check of test4 and test7 | J. Lee, H. Lee, J. Kang (ETRI)
JVET-G0087 | m40935 | 2017-07-06 02:54:21 | 2017-07-11 06:15:42 | 2017-07-11 06:15:42 | EE1: Cross-check of test10 | H. Ko, S.-C. Lim, J. Kang (ETRI)
JVET-G0088 | m40936 | 2017-07-06 03:05:29 | 2017-07-06 03:14:37 | 2017-07-14 11:35:54 | AHG8: On the derivation of weighted to spherically uniform PSNR (WS-PSNR) for adjusted cubemap projection (ACP) format | X. Xiu, Y. He, Y. Ye (InterDigital)
JVET-G0089 | m40937 | 2017-07-06 03:05:56 | 2017-07-06 03:22:09 | 2017-07-14 11:46:49 | EE3 Related: Adaptive quantization for JEM-based 360-degree video coding | X. Xiu, Y. He, Y. Ye (InterDigital)
JVET-G0090 | m40938 | 2017-07-06 03:17:14 | 2017-07-06 03:35:20 | 2017-07-06 03:35:20 | Unified adaptive search range setting in JEM and HM | T. Ikai, Y. Yasugi (Sharp)
JVET-G0091 | m40939 | 2017-07-06 03:17:35 | 2017-07-06 04:12:07 | 2017-07-06 04:12:07 | EE1: Crosscheck of Additional EE1 Tests (Test 2.1 and Test 10.1) (JVET-G0080) | T. Ikai, Y. Yasugi (Sharp)
JVET-G0092 | m40940 | 2017-07-06 03:17:54 | 2017-07-06 04:26:20 | 2017-07-06 04:26:20 | Crosscheck of Improvements for Intra Prediction Mode Coding (JVET-G0060) | Y. Yasugi, T. Ikai (Sharp)
JVET-G0093 | m40943 | 2017-07-06 03:45:25 | 2017-07-06 14:52:18 | 2017-07-06 14:52:18 | AHG4: SDR anchor generation for Joint Call for Evidence by Qualcomm | H.-C. Chuang, J. Chen, M. Karczewicz (Qualcomm)
JVET-G0094 | m40944 | 2017-07-06 04:00:39 | 2017-07-08 15:10:13 | 2017-07-08 15:10:13 | EE2: Cross-check of EE2 test1 (JVET-G0082) | H. Lee, J. Kang (ETRI)
JVET-G0095 | m40945 | 2017-07-06 04:00:59 | 2017-07-06 04:31:00 | 2017-07-11 09:45:12 | Unified Adaptive Loop Filter for Luma and Chroma | J. An, J. Zheng (HiSilicon)
JVET-G0096 | m40947 | 2017-07-06 04:12:47 | 2017-07-11 16:22:51 | 2017-07-20 23:52:00 | AhG4: Evaluation on drone test sequences | X. Zheng, W. Li (DJI)
JVET-G0097 | m40948 | 2017-07-06 04:45:06 | 2017-07-06 05:48:24 | 2017-07-10 13:10:15 | EE4: ERP/EAP-based segmented sphere projection with different padding sizes | Y.-H. Lee, H.-C. Lin, J.-L. Lin, S.-K. Chang, C.-C. Ju (MediaTek)
JVET-G0098 | m40949 | 2017-07-06 05:24:34 | 2017-07-06 05:32:40 | 2017-07-06 05:32:40 | EE4: Padded ERP (PERP) projection format | J. Boyce, Z. Deng (Intel)
JVET-G0099 | m40950 | 2017-07-06 05:44:28 | 2017-07-06 05:48:44 | 2017-07-06 21:21:52 | Padded ERP (PERP) projection format for OMAF subjective test | J. Boyce, Z. Deng (Intel)
JVET-G0100 | m40951 | 2017-07-06 05:47:44 | 2017-07-06 08:37:43 | 2017-07-15 09:52:59 | AHG8: A study of 360Lib projections on global motion sequences | M. Coban, G. Van der Auwera, M. Karczewicz (Qualcomm)
JVET-G0101 | m40952 | 2017-07-06 05:52:54 | 2017-07-06 05:59:36 | 2017-07-13 11:01:35 | On internal QP increase for bitrate matching | P. Hanhart, Y. He, Y. Ye (InterDigital), X. Ma, H. Chen, H. Yang, M. Sychev (Huawei)
JVET-G0102 | m40953 | 2017-07-06 07:06:47 | 2017-07-10 06:59:38 | 2017-07-11 04:14:08 | EE4: Cross-check of EE4 tests 5-7 (JVET-G0097) | Z. Deng (Intel)
JVET-G0103 | m40954 | 2017-07-06 07:57:49 | 2017-07-06 08:03:28 | 2017-07-18 18:52:44 | AHG7: Candidate rate points of HLG material for anchor generation | S. Iwamura, S. Nemoto, A. Ichigaya (NHK)
JVET-G0104 | m40955 | 2017-07-06 08:03:54 | 2017-07-06 08:49:11 | 2017-07-06 08:49:11 | EE1: Alternative setting for PDPC mode and explicit ARSS flag (tests 3-7) | M. Karczewicz, V. Seregin, A. Said, N. Hu, X. Zhao (Qualcomm)
JVET-G0105 | m40957 | 2017-07-06 09:23:24 | 2017-07-11 12:02:38 | 2017-07-14 05:21:38 | EE2-related: Crosscheck of A simplified gradient filter for Bi-directional optical flow (JVET-G0083) | M. Ikeda (Sony)
JVET-G0106 | m40959 | 2017-07-06 09:53:55 | 2017-07-06 13:58:00 | 2017-07-10 10:56:02 | EE3: Adaptive QP for 360° video | Yule Sun, Lu Yu (Zhejiang University)
JVET-G0107 | m40965 | 2017-07-06 16:11:45 | 2017-07-06 16:31:52 | 2017-07-08 03:37:00 | Non-EE1: PDPC without a mode flag | V. Seregin, M. Karczewicz, A. Said, X. Zhao (Qualcomm)
JVET-G0108 | m40966 | 2017-07-06 16:11:55 | 2017-07-06 16:35:14 | 2017-07-06 16:35:14 | Non-EE1: Fix for strong intra smoothing filtering | V. Seregin, X. Zhao, M. Karczewicz (Qualcomm)
JVET-G0109 | m40967 | 2017-07-06 17:40:44 | 2017-07-06 17:44:12 | 2017-07-16 10:49:58 | A modification of fast algorithm in intra mode selection | P.-H. Lin, C.-L. Lin, C.-C. Lin (ITRI)
JVET-G0110 | m40968 | 2017-07-06 20:20:11 | 2017-07-06 20:25:42 | 2017-07-06 20:25:42 | AHG8: Crosscheck of JVET-G0056 A study on Equi-Angular Cubemap projection | F. Duanmu, X. Xiu, P. Hanhart, Y. Ye, Y. He (InterDigital)
JVET-G0111 | m40969 | 2017-07-06 20:31:18 | 2017-07-06 21:13:47 | 2017-07-06 21:13:47 | AHG8: Crosscheck of JVET-G0051 A study on quality impact of line re-sampling rate in EAP | Y. He, X. Xiu, P. Hanhart, F. Duanmu, Y. Ye (InterDigital)
JVET-G0112 | m40970 | 2017-07-06 20:39:49 | 2017-07-06 20:45:30 | 2017-07-14 02:44:25 | Arithmetic coding with context-dependent double-window adaptation response | A. Said, M. Karczewicz, L. Zhang, V. Seregin, X. Zhao (Qualcomm)
JVET-G0113 | m40976 | 2017-07-07 02:17:07 | 2017-07-10 02:00:29 | 2017-07-10 04:23:55 | EE1 (Tests 8 and 9) Performance of RASS and PDPC in presence of other tools | E. Alshina (Samsung)
JVET-G0114 | m40977 | 2017-07-07 02:18:17 | 2017-07-10 03:29:00 | 2017-07-10 03:29:00 | EE1 Cross-check for Test 1 | E. Alshina (Samsung)
JVET-G0115 | m40978 | 2017-07-07 02:18:45 | 2017-07-10 04:24:28 | 2017-07-10 04:24:28 | EE1 Cross-check for Test 3 | E. Alshina (Samsung)
JVET-G0116 | m40979 | 2017-07-07 02:19:26 | 2017-07-10 07:18:45 | 2017-07-10 07:18:45 | EE2 Cross-check for block-based BIO design | E. Alshina (Samsung)
JVET-G0117 | m40980 | 2017-07-07 02:21:34 | 2017-07-10 03:28:36 | 2017-07-10 03:28:36 | AHG4: SDR anchor generation for Joint Call for Evidence by Samsung | K. Choi, E. Alshina (Samsung)
JVET-G0118 | m40981 | 2017-07-07 02:22:38 | 2017-07-10 12:16:12 | 2017-07-10 12:16:12 | Cross-check of Unified Adaptive Loop Filter for Luma and Chroma (JVET-G0095) | K. Choi, E. Alshina (Samsung)
JVET-G0119 | m40982 | 2017-07-07 02:23:51 | 2017-07-13 08:22:33 | 2017-07-13 08:22:33 | Cross-check of Simplification and improvements on FRUC (JVET-G0065) | K. Choi, E. Alshina (Samsung)
JVET-G0120 | m40989 | 2017-07-07 04:39:47 | 2017-07-07 04:41:54 | 2017-07-10 09:31:43 | AHG7 Cross-check of anchor generation of HLG content in JVET-G0103 | K. Kawamura, S. Naito (KDDI)
JVET-G0121 | m40990 | 2017-07-07 04:43:43 | 2017-07-07 04:45:11 | 2017-07-16 08:33:18 | EE3 Test1 and 1.1: Cross-check of JVET-G0070 | K. Kawamura, S. Naito (KDDI)
JVET-G0122 | m40992 | 2017-07-07 14:06:47 | 2017-07-14 15:47:14 | 2017-07-14 15:47:14 | Crosscheck of JVET-G0081 "Comparisons between UWP, W66 and Planar, Angular mode 66 under the same coding conditions" | Alexey Filippov, Vasily Rufitskiy (Huawei)
JVET-G0123 | m40997 | 2017-07-07 17:46:24 | 2017-07-07 17:51:01 | 2017-07-15 23:26:24 | AHG7: Experiments on using local QP adaptation in the context of an HLG container | E. François, F. Hiron (Technicolor)
JVET-G0124 | m41001 | 2017-07-07 23:07:19 | 2017-07-11 04:24:21 | 2017-07-11 04:24:21 | EE3: Cross-check for JVET-G0106 (Test 5 -- Adaptive QP for CMP in HM-360Lib) | Hendry, M. Coban (Qualcomm)
JVET-G0125 | m41009 | 2017-07-09 02:14:49 | 2017-07-13 12:03:59 | 2017-07-13 12:03:59 | EE3: Cross-check for JVET-G0106 Test2 | X. Xiu, Y. He, Y. Ye (InterDigital)
JVET-G0126 | m41032 | 2017-07-10 03:56:15 | 2017-07-13 00:07:00 | 2017-07-13 00:07:00 | EE1-Related: Crosscheck of JVET-G0062 on UW prediction fix | T. Ikai, Y. Yasugi (Sharp)
JVET-G0127 | m41033 | 2017-07-10 03:56:48 | 2017-07-11 12:36:51 | 2017-07-14 13:09:15 | Crosscheck of a modification of fast algorithm in intra mode selection (JVET-G0109) | Y. Yasugi, T. Ikai (Sharp)
JVET-G0128 | m41035 | 2017-07-10 04:54:34 | 2017-07-10 04:57:48 | 2017-07-15 16:18:36 | EE4: Padding method for Segmented Sphere Projection | Y. Lu, J. Li, Z. Wen, X. Meng (Owlii)
JVET-G0129 | m41036 | 2017-07-10 05:00:42 | 2017-07-15 17:10:56 | 2017-07-15 17:10:56 | EE4: Cross-check of EE4 tests 1-4 (JVET-G0097) | Y. Lu, X. Meng (Owlii)
JVET-G0130 | m41104 | 2017-07-10 15:49:57 | 2017-07-15 18:03:18 | 2017-07-15 18:03:18 | EE4 Cross-check for Test 8-10 (JVET-G0128) | Y.-H. Lee, J.-L. Lin (MediaTek)
JVET-G0131 | m41105 | 2017-07-10 15:50:42 | 2017-07-14 15:12:11 | 2017-07-14 15:12:11 | EE4 Cross-check for Test 11-14 (JVET-G0098) | Y.-H. Lee, J.-L. Lin (MediaTek)
JVET-G0132 | m41153 | 2017-07-10 19:44:43 | 2017-07-13 00:20:59 | 2017-07-13 00:20:59 | AHG8: Crosscheck of JVET-G0071 ACP with padding for 360-degree video | Y. He (InterDigital)
JVET-G0133 | m41155 | 2017-07-10 19:51:29 | 2017-07-13 00:21:22 | 2017-07-13 00:21:22 | AHG8: Crosscheck of JVET-G0074 ECP with padding for 360-degree video | Y. He (InterDigital)
JVET-G0134 | m41157 | 2017-07-10 20:03:37 | 2017-07-11 20:12:05 | 2017-07-11 20:12:05 | Non-EE1: Cross-check of JVET-G0068 on unification of intra filters (test 3.1.2) | V. Seregin (Qualcomm)
JVET-G0135 | m41167 | 2017-07-10 20:24:22 | | | Withdrawn |
JVET-G0136 | m41210 | 2017-07-11 04:36:13 | 2017-07-15 08:49:31 | 2017-07-15 08:49:31 | Non-EE1: Crosscheck of G0108 on strong intra smoothing filtering | T. Ikai (Sharp)
JVET-G0137 | m41211 | 2017-07-11 04:36:29 | 2017-07-14 06:52:56 | 2017-07-14 06:52:56 | EE2-related: Crosscheck of JVET-G0083 on gradient filter modification | T. Ikai (Sharp)
JVET-G0138 | m41212 | 2017-07-11 07:08:46 | 2017-07-15 13:23:15 | 2017-07-16 15:39:37 | Cross-check of JVET-G0084 on luma/chroma QP adaptation for HLG material | S. Iwamura, S. Nemoto, A. Ichigaya (NHK)
JVET-G0139 | m41215 | 2017-07-11 08:29:15 | | | Withdrawn |
JVET-G0140 | m41220 | 2017-07-11 10:51:19 | 2017-07-11 11:39:45 | 2017-07-11 11:39:45 | EE3: Cross-check of JVET-G0106 (Test 2.1 -- Adaptive QP with F0072 weighting for rotated ERP in HM-360Lib) | F. Racape, F. Galpin (Technicolor)
JVET-G0141 | m41221 | 2017-07-11 10:54:20 | 2017-07-11 17:42:12 | 2017-07-11 17:58:15 | EE1: Cross-check of JVET-G0104, test 5 | F. Racape, E. François, F. Le Leannec (Technicolor)
JVET-G0142 | m41222 | 2017-07-11 10:56:48 | 2017-07-13 12:02:59 | 2017-07-13 12:02:59 | EE1: Cross-check of JVET-G0077, test 2 | F. Racape, E. François, F. Le Leannec (Technicolor)
JVET-G0143 | m41227 | 2017-07-11 15:33:50 | 2017-07-11 15:56:37 | 2017-07-14 17:21:05 | Cross-check of JVET-G0112 - Arithmetic coding with context-dependent double-window adaptation response | K. Sharman, M. Philippe (Sony)
JVET-G0144 | m41228 | 2017-07-11 15:50:21 | 2017-07-14 18:19:45 | 2017-07-14 18:19:45 | Cross-check of Unified adaptive search range setting in JEM and HM (JVET-G0090) | Y.-H. Ju, P.-H. Lin, C.-C. Lin, C.-L. Lin (ITRI)
JVET-G0145 | m41229 | 2017-07-11 15:59:45 | 2017-07-15 16:24:36 | 2017-07-15 16:24:36 | AHG4: Evaluation report of drone test sequences | Y.-H. Ju, C.-C. Lin, P.-H. Lin, C.-L. Lin (ITRI)
JVET-G0146 | m41232 | 2017-07-11 19:19:11 | 2017-07-11 20:14:06 | 2017-07-11 20:14:06 | EE1: Additional tests comparing UWP/UW66 with P-PDPC in EE1 tests | M. Karczewicz, V. Seregin, A. Said, N. Hu, X. Zhao (Qualcomm)
JVET-G0147 | m41234 | 2017-07-11 20:46:01 | 2017-07-13 04:51:19 | 2017-07-14 17:33:17 | New Test Sequences for Spherical Video Coding from GoPro | A. Abbas, D. Newman (GoPro)
JVET-G0148 | m41236 | 2017-07-12 01:47:36 | 2017-07-12 01:51:04 | 2017-07-17 20:54:26 | AHG9: encoding/decoding capability of JEM6 for 4:4:4 colour format |
laöyt±T¸zºzÈzÖzæzöztttt¤$Ifgd±kdK$$IfTlÖ\ÿ4ù¾ ÅÅÅ
t ö6öÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿ4Ö4Ö
laöyt±Tözøz{{${4{tttt¤$Ifgd±kd¯$$IfTlÖ\ÿ4ù¾ ÅÅÅ
t ö6öÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿ4Ö4Ö
laöyt±T4{6{F{V{f{v{tttt¤$Ifgd±kd$$IfTlÖ\ÿ4ù¾ ÅÅÅ
t ö6öÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿ4Ö4Ö
laöyt±Tv{x{z{Ì{x|}K~q~CÇîôzuuupuuukcJ
&Fcgd1«gd1«	gd
*`Ogd1«gd±kdw$$IfTlÖ\ÿ4ù¾ ÅÅÅ
t ö6öÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿ4Ö4Ö
laöyt±T}}q}r}|}}}
}}Ó}Ô}á}ã}í}ï}ö}ø}~	~
~~~~~!~*~,~-~/~;~=~K~p~q~è~ðåð̯|lllllllllld\dhìnUnHtHhC\LnHtHhjPJaJmH	nHsH	tHhC\LPJaJmH	nHsH	tHhìnUPJaJmH	nHsH	tH$hKbh
*`PJaJmH	nHsH	tH9jhKbh
*`>*B*PJUaJmH	nHphÿsH	tH0hKbh
*`>*B*PJaJmH	nHphÿsH	tHhKbh
*`mH	sH	jhKbh
*`UmH	sH	!è~é~ë~û~ÿ~BCST ¡¤ÆÇíîÿóô $^gvêëì "fvÎÐÒâäôõíõíõíåíõíõíõíåíåíÝíÝíÝíåíåíõíõíõåíåÑíÑåÁ³¥³¥h±hC\LCJnHtH#h%A(hC\L5CJmH	nHsH	tHh±hC\LCJnHo(tHh%A(hC\L5CJnHtHh%A(hC\L5CJnHo(tHhC\LhC\LnHo(tHh²WnHtHhìnUnHtHhC\LnHtHhC\LhC\LnHtH/ô !ì"¨ÎÐÒ÷òííèÝÝUÝkdÛ$$Ifc4ÖÖ0ÿø	Y`d
a
t Ö0ÿÿÿÿÿÿö6öÖÿÿÖÿÿÖÿÿÖÿÿ4Ö4Ö
laöpÖÿÿÿÿytµ:Ϥ$Ifgd±gd1«Ogd1«gd±J
&Fcgd1«	ÒÖÜâäôñññ9.¤$Ifgd±¸kd}$$Ifc4ÖÖ\ÿø	ÌY d
½
t Ö0ÿÿÿÿÿÿö6öÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿ4Ö4Ö
laöpÖ(ÿÿÿÿÿÿÿÿyt%A($¤$Ifa$gd%A(ôüþ
 J`btv ¢®°´¶Êàâôö
$
&
(
*
.
0
@
d
f
j
l
n
p
ïÝïÝïÝïÏÂïÝïÏÂïÝïÝïÏÂïÝïÏÂïÝïÝïÝïÝïÝïÏ´¢¢Ï{hKbhC\LnHtHhC\LnHtH&h±hC\L5CJmH	nHo(sH	tH#h±hC\L5CJmH	nHsH	tHh±hC\L5CJnHtHh±hC\LCJnHtHh±hC\LCJnHo(tH#h±hC\LCJmH	nHo(sH	tH h±hC\LCJmH	nHsH	tH-ô JXôôô=ôô¶kdU$$IfcÖÖ\ÿø	ÌYd
½
t Ö0ÿÿÿÿÿÿö6öÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿ4Ö4Ö
laöpÖ(ÿÿÿÿÿÿÿÿytµ:Ϥ$Ifgd±Xftv¦ôô=ôôô¶kd%$$IfcÖÖ\ÿø	ÌYd
½
t Ö0ÿÿÿÿÿÿö6öÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿ4Ö4Ö
laöpÖ(ÿÿÿÿÿÿÿÿytµ:Ϥ$Ifgd±¦´¶ÊØæôô=ôôôô¶kdõ$$IfcÖÖ\ÿø	ÌYd
½
t Ö0ÿÿÿÿÿÿö6öÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿ4Ö4Ö
laöpÖ(ÿÿÿÿÿÿÿÿytµ:Ϥ$Ifgd±ôö
 
.
H====¤$Ifgd±¶kdÅ$$IfcÖÖ\ÿø	ÌYd
½
t Ö0ÿÿÿÿÿÿö6öÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿ4Ö4Ö
laöpÖ(ÿÿÿÿÿÿÿÿytµ:Ï.
0
@
N
\
j
H====¤$Ifgd±¶kd$$IfcÖÖ\ÿø	ÌYd
½
t Ö0ÿÿÿÿÿÿö6öÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿ4Ö4Ö
laöpÖ(ÿÿÿÿÿÿÿÿytµ:Ïj
l
n
p
¼â´HCC>99Ogd1«	gd
*`gd±¶kde$$IfcÖÖ\ÿø	ÌYd
½
t Ö0ÿÿÿÿÿÿö6öÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿÖÿÿÿÿ4Ö4Ö
laöpÖ(ÿÿÿÿÿÿÿÿytµ:Ïp
r
 ¢¬®¼áâ³´ÿ^®¸ÆÊÚr|úTj®º.8¸Æú "êðåð̯||qq|qqqqqqqqqqqq|h¾Xh¾XnHtHhìnUnHtHh¾XnHtHhjPJaJmH	nHsH	tH$hKbh
*`PJaJmH	nHsH	tH9jhKbh
*`>*B*PJUaJmH	nHphÿsH	tH0hKbh
*`>*B*PJaJmH	nHphÿsH	tHhKbh
*`mH	sH	jhKbh
*`UmH	sH	,´®"ìܳA-ª(1357@Iúúúúúúúõððúëúúââââââ	$IfgdÑ}>gd1«Ggd1«	gd
*`Ogd1«êìÛܲ³pq{|}÷ùÿ@AçèøðøèàøàøèøÑÆÑ}m}m}m}eøeøZRZøh8OnHtHh8Oh8OnHtHhÑ}>nHtHhjPJaJmH	nHsH	tH$hKbh
*`PJaJmH	nHsH	tH9jhKbh
*`>*B*PJUaJmH	nHphÿsH	tH0hKbh
*`>*B*PJaJmH	nHphÿsH	tHhKbh
*`mH	sH	jhKbh
*`UmH	sH	hjnHtHhsfnHtHh¾XnHtHhìnUnHtH,-=©ª'(ü%&6®¾§°HIÇÈ  ª ¬ @¡B¡N¡P¡¡¡¦¡¨¡¢¢î¢ð¢ü¢þ¢>£@£L£õíáÒÆ¶¦áÒáÒtÒÆ¶¦hhhhhhhhhhh²WmH	nHsH	tHhÑ}>hÑ}>6mH	nHsH	tHhìnUmH	sH	hÑ}>hÑ}>mH	sH	hµ:ÏhÑ}>mH	nHsH	tHh1«hìnU5mH	nHsH	tHh1«hÑ}>5mH	nHsH	tHhìnUmH	nHsH	tHhÑ}>hÑ}>mH	nHsH	tHhÑ}>mH	nHsH	tHhìnUnHtHhÑ}>hÑ}>nHtH(IJêkd5$$IfTlÖÖÿUUbhA$Á
ÙÙ
t Ö0ÿÿÿÿÿÿö6öÖÿÿÿÿÿÿÖÿÿÿÿÿÿÖÿÿÿÿÿÿÖÿÿÿÿÿÿ4Ö4Ö
laöpÖêòú&.6 ¦§¨¶¾ÆÎÔÙÚâìôü	$,4:öööñöööööööìöööööööçööööööFfA¸Ff$µFf²	$IfgdÑ}>:?@AKS[chmno¢£¤«³»ÃÉÏÐöñöööööööìöööööööçöööööööâFfµÄFfÁFf{¾Ff^»	$IfgdÑ}>ÐØàèðöûöööööö	$IfgdÑ}>ûüêkd¼Æ$$IfTlÖÖÿUUbhA$Á
ÙÙ
t Ö0ÿÿÿÿÿÿö6öÖÿÿÿÿÿÿÖÿÿÿÿÿÿÖÿÿÿÿÿÿÖÿÿÿÿÿÿ4Ö4Ö
laöpÖOgd1«(gd1«	éêækd¸Ç$$IflÖÖÿUUbhA$Á
ÙÙ
t Ö0ÿÿÿÿÿÿö6öÖÿÿÿÿÿÿÖÿÿÿÿÿÿÖÿÿÿÿÿÿÖÿÿÿÿÿÿ4Ö4Ö
laöpÖà ì ø ¡¡¡¡¡4¡@¡N¡\¡d¡l¡n¡p¡¡¡¦¡´¡¼¡Ä¡Æ¡È¡â¡î¡ú¡öööööñöööööööìöööööööçööööFfÙFfÖFfïÒ	$IfgdÑ}>ú¡¢¢¢¢¢6¢B¢N¢Z¢b¢j¢l¢|¢¢¢ª¢¶¢¾¢Æ¢È¢Ê¢â¢î¢ü¢
££öööñöööööööìöööööööçööööööFfDâFf3ßFf"Ü	$IfgdÑ}>££££2£>£L£Z£b£j£l£n£££¦£´£¼£Ä£Æ£È£Ö£â£ð£þ£¤¤¤öñöööööööìöööööööçöööööööâFfîFfwëFffèFfUå	$IfgdÑ}>L£N£¦£¨£â£ä£ð£ò£,¤.¤:¤*B*PJUaJmH	nHphÿsH	tH0hKbh
*`>*B*PJaJmH	nHphÿsH	tHhKbh
*`mH	sH	jhKbh
*`UmH	sH	hìnUnHtHh²WnHtHhÑ}>nHtHhµ:ÏhÑ}>mH	nHsH	tHh²WmH	nHsH	tH¤ ¤,¤:¤H¤P¤X¤öööööö	$IfgdÑ}>X¤Z¤\¤gd±ækdð$$IflÖÖÿUUbhA$Á
ÙÙ
t Ö0ÿÿÿÿÿÿö6öÖÿÿÿÿÿÿÖÿÿÿÿÿÿÖÿÿÿÿÿÿÖÿÿÿÿÿÿ4Ö4Ö
laöpÖ*B*PJUaJmH	nHphÿsH	tH0hKbhd3C>*B*PJaJmH	nHphÿsH	tHhKbhd3CmH	sH	jhKbhd3CUmH	sH	h%A(hlgümH	sH	!¿sÀæÀ«Á
ÂÃHÃÄHÄÅGÅOÅßÅûÅÆLÆTÆ}ÆÆÆËÆÇadzǽÇÈrÈúúúúõúõúðúçúúúúúúúúúúúúúúú@^@gdlgü	gd®q	gd9ÃOgd1«
ÂÂdÂeÂoÂp¨ª«Â¿ÂÁÂÚÂÜÂõÂ÷ÂÿÂÃÃ
ÃÃGÃHÃIâãÃîÃÄÄÄÄÄGÄHÄIÄ¢Äðåð̯vðåð̯vg\hKbh®qmH	sH	jhKbh®qUmH	sH	h1«hìnUnHtHh1«h8)nHtHhjPJaJmH	nHsH	tH$hKbh9ÃPJaJmH	nHsH	tH9jhKbh9Ã>*B*PJUaJmH	nHphÿsH	tH0hKbh9Ã>*B*PJaJmH	nHphÿsH	tHhKbh9ÃmH	sH	jhKbh9ÃUmH	sH	$¢Ä£ÄÄ®ÄôÄöÄýÄÿÄÅFÅGÅNÅOÅ]ÅwÅÝÅÞÅßÅûÅÆKÆLÆTÆVÆ|Æ}ÆÆ¥ÆÉÆÇ²Ç³ÇÈð׺§§§|qiaiYaYaYaYaYaYiYiYiYhàhùnHtHhlgünHtHhçJ²nHtHh%A(hlgümH	sH	hlgümH	sH	hìnUnHtHh1«h8)nHtHhjPJaJmH	nHsH	tH$hKbh®qPJaJmH	nHsH	tH9jhKbh®q>*B*PJUaJmH	nHphÿsH	tH0hKbh®q>*B*PJaJmH	nHphÿsH	tHjhKbh®qUmH	sH	 rÈȺÈÉɠɬÉYÊ}ÊñËÌÌ8Ì~̺̳ÍÎNÏÏ.ТÐÑÒúúúúõìçúúÞÙúúúÑÑÑÉÁÉÉÑM
&Fegd1«L
&Fegd1«K
&Fegd1«gd«#@^@gdhÈ	gdK#ô@^@gdD`©gdtoËOgd1«ÈÉÉÉÉɠɬÉÉÊÊÊÊÊÊDÊFÊPÊRÊXÊYÊ|Ê}ÊÊøíâ×ÏâÄvffSKC*B*PJaJmH	nHphÿsH	tH$hKbhK#ôPJaJmH	nHsH	tH-jhKbhK#ôPJUaJmH	nHsH	tHhKbh9ämH	sH	h²WmH	sH	hKbhé.ØmH	sH	hKbhèUmH	sH	h1«hZSLnHtHhZSLnHtHʰʴʷÊÃÊÇÊóÊõÊ3Ë5ËUËZËÎËÏËðËñËöËÌÌ
ÌÌÌÌÌ7Ì8ÌZÌ}Ì~̹̺̲ͳÍÎÎMÏNÏÏÏ-Ð.СТÐ
ÑÑÑÒÒÒÒÒìÒíÒ½Ó¾ÓÔÔ6Õ8ÕDÕFÕPÕRÕÕÕüõüõüõüõüõüõüõñæÛÐÈÐæüÁ¹±üñüñüñüñüñüñüñüñüñüõñ©üñüñüñüñü¥ü¥ü¥ü¥he	h2øh)óhìnUnHtHh«#nHtHhKbh«#h²WmH	sH	hKbh9ämH	sH	hKbhTmH	sH	hKbhèUmH	sH	hìnUh«#h«#h«#@ÒÒÒíÒ¾ÓÔòÕÖ×õרýئÙÚÛ¥Û
Ü@ܾÜóÜÊÝÞÝÞo߯ßñßúõíèíèíèèèíèèíèíèíèèíèíèãgd1«Wgd1«G
&Ffgd1«Ogd1«gd[GÞÕ¨Õ¬Õ®ÕðÕòÕÖÖ××נפץ×â׿×ô×õ×'Ø(Ø>Ø?ØØØüØýإ٦ÙÚÚÛÛ¤Û¥ÛÜ
Ü?Ü@ܾܽÜòÜóÜÉÝÊÝ
ÞÞÜÞÝÞnßoßÅ߯ßðßñßàà à!à%à&à)à*à2à4à7à8àHàIàRàTàhàüøôøðøðøðøüøüøüøðéåéåéðøðøðéðøðøðøðøðøðøðøðøðøðøðøðÞðÔ˾±¾±¾±¾Ô¾Ë¾±¾Ô¾hj.ShìCJmH	sH	hj.Sh«#CJmH	sH	hj.Sh«#CJhj.Sh«#5CJh«#h«#h)óh)óh)óhìnUhe	h«#hìFñß÷ßààà3àööiöökd$$IfTlÖ2Ö0ÿ°;
t Ö0ÿÿÿÿÿÿö6öÖÿÿÖÿÿÖÿÿÖÿÿ4Ö4Ö
laöpÖÿÿÿÿytv«T	$Ifgd«#3à4à8àSàrii	$Ifgd«#kd#$$IfTlÖ2Ö0ÿ°;
t Ö0ÿÿÿÿÿÿö6öÖÿÿÖÿÿÖÿÿÖÿÿ4Ö4Ö
laöpÖÿÿÿÿytv«TSàTà[àzàrii	$Ifgd«#kdÅ$$IfTlÖ_Ö0ÿ°;
t Ö0ÿÿÿÿÿÿö6öÖÿÿÖÿÿÖÿÿÖÿÿ4Ö4Ö
laöpÖÿÿÿÿytv«Thàiàlàmàpàqàyà{àà¸àÔàÕàÖàAâBâ_âfâhâkâlârâsâ¥â¦âÉâÊâËâÍâÎâüâãããã¥ã¦ãªã¬ã9ä:äøäùäå	å
ååååååwåxå³å´åÈåóæóæóæÜæÎæÜÊÆÂ¾Æ¾·Æ·Æ·Æ¾Â¬¾Æ¾Æ¾¨Æ¨¾Æ¾¨ÂÆÂ ÆÆÂh«#h«#h^mhUàhdh66
*hj.Sh66
hj«	hìhìUh)óh)óhìhìnUh)óh«#hj.Sh«#CJmH	o(sH	hj.Sh«#5CJhj.Sh«#CJmH	sH	hj.ShìCJmH	sH	6zà{àààÔàriii	$Ifgd«#kdg$$IfTlÖzÖ0ÿ°;
t Ö0ÿÿÿÿÿÿö6öÖÿÿÖÿÿÖÿÿÖÿÿ4Ö4Ö
laöpÖÿÿÿÿytv«TÔàÕàÖàBâÊâÌâÍâÎâ:äùä´åÉårmhhmmmhhhcgd1«Ogd1«gd[GÞkd		$$IfTlÖ2Ö0ÿ°;
t Ö0ÿÿÿÿÿÿö6öÖÿÿÖÿÿÖÿÿÖÿÿ4Ö4Ö
laöpÖÿÿÿÿytv«TÈåÉåÐåáåõåüåæ æ$æ2æ3æ4æRæÜæÞæçççç,ç.ç3ç4ç5ç6ç7çççççç¥ç¦çªç«ç²ç³çè
èèèèEèPèVèZè]è£è¤è¥èéééé+é,é-ééüòæòæÜòæòÕòÌÂÌÂÌòÌÂÌòÌÂ̳©³ÂÌÂÌÂ̳©³©ÌÂæÂÌÂ̳©³Ì³©jhߪhìCJUhߪhì0JCJhߪhì>*CJjhߪhì>*CJUhߪhìCJo(hߪhìCJ
hì5CJhM5hì5CJhߪhì5CJo(hߪhì5CJhìnU9ÉåËåÐåüå	æ'æ4æôôôôôô
$$Ifgdj.S4æ5æ7æAæKæSæ:////
$$Ifgdj.SÅkd®»$$IfÖHÖÿvÜ*CJo(jhߪhìCJUhߪhì0JCJjhߪhì>*CJUhߪhì>*CJ4XíZí^íní~íí:1111	$Ifgdv«Åkd¾$$IfÖlÖÿvÜ*CJ6ìõîõöõÿõjöuööö÷öööööööö	$Ifgdv«÷÷÷÷÷I÷:1111	$Ifgdv«ÅkdÔÂ$$IfÖwÖÿvÜ*CJUhߪhì>*CJ7zù{ù~ùù¡ùáù:1111	$Ifgdv«ÅkdBÄ$$IfÖwÖÿvÜgdm}	gd;@¹Ogd1«õ ÿ !%!'!1!3!@!B!I!K!\!!!Ç!È! "!"+"K"t"u"*B*PJUaJmH	nHphÿsH	tH0hKbh
*`>*B*PJaJmH	nHphÿsH	tH'ã%ä%=&>&H&I&U&V&W&X&s&u&&&&&&&ô&õ&ÿ&èÕè¼èÕ¬Õ¬ÕÕÕrgrN0hKbh
*`>*B*PJaJmH	nHphÿsH	tHhKbh
*`mH	sH	jhKbh
*`UmH	sH	hm}hW($nHtHhð >PJaJmH	nHsH	tHhroPJaJmH	nHsH	tHhdPJaJmH	nHsH	tH0hø?¬hð >>*B*PJaJmH	nHphÿsH	tH$hø?¬hð >PJaJmH	nHsH	tH-jhø?¬hð >PJUaJmH	nHsH	tHÿ&'5'7'A'C'J'L'c'''¢'ª'Ü(þ(*¦*§*g+h+ß+à+,,,,w-x-y-Ò-Ó-Ý-Þ-".âϿϿϿϷ¯¤·¤·¤·¯·¯·¯··¯·¯nUn0hKbhK#ô>*B*PJaJmH	nHphÿsH	tH$hKbhK#ôPJaJmH	nHsH	tH-jhKbhK#ôPJUaJmH	nHsH	tH*hߪhá-nHtHhá-há-nHtHhìnUnHtHhá-nHtHhroPJaJmH	nHsH	tH$hKbh
*`PJaJmH	nHsH	tH9jhKbh
*`>*B*PJUaJmH	nHphÿsH	tH!".$.8.9.:.;.... .å.ç.ô.ö.//	////*/M/N/S/_/"0#0=0>0@0_0`0b0¹0º0ðÝÍ«««ðððððwodwdowo\wo\wohnHtHhº! hº! nHtHhìnUnHtHhº! nHtH0hKbhU°>*B*PJaJmH	nHphÿsH	tH$hKbhU°PJaJmH	nHsH	tH-jhKbhU°PJUaJmH	nHsH	tHhm}hW($nHtHhK#ôPJaJmH	nHsH	tH$hKbhK#ôPJaJmH	nHsH	tHhroPJaJmH	nHsH	tH"*/N/#0>0`0º01E1M1q1144j4¾466Í8N9Å:¾>Ó>Õ>Ú>ç>úúúõúúìçúúúúßÚßÚßÚúÕÌÌÌ	$Ifgdv«gd1«Wgd1«G
&Fggd1«gd^m@^@gdhÈOgd£}Ogd1«º011@1B1C1D1E1L1M1p1q1113444i4j4½4¾46666Ì8Í8M9N9Ä:Å:>½>¾>Ò>Ó>ç>ð>þ>????øíâ×Ì×íÈÁ¹±©¢©©©©©©©©©¢©sihR¡h¥0U5CJh¥0Uh¥0U5CJo(h¥0Uh¥0U5CJ
hR¡5CJhj.Sh¥0U5CJhR¡h¥0Uh¥0Uh¥0UhìnUhkhìnUnHtHh^mnHtHhKbh^mh^mhKbh"aëmH	sH	hKbh9ämH	sH	hKbhÊømH	sH	hKbhèUmH	sH	hJqnHtH)ç>ñ>????9?¡?öööDööö²kdmÆ$$IfÖ«Örÿ,
 d 8&T D
ÔÖ0ÿÿÿÿÿÿö¤&ööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.S	$Ifgdv«???-?6?8?9?:????? ?¢?¤?¯?°?¹?º?À?Â?Í?Î?×?Ø?Þ?ß?8@9@:@D@E@L@M@N@§@¨@©@³@´@¶@½@¾@û@AA
AAAAõëâÜâÒÅâµÅªÅâÒâââÒâââÅâŪÅÒâÅâ~ŪÅÒâÒâÜâÒââjXÉhߪh¥0UCJUj7Èhߪh¥0UCJU	j´ðhߪh¥0UCJhߪh¥0U0JCJjÇhߪh¥0UCJUjhߪh¥0UCJUhߪh¥0UCJo(
hR¡CJhߪh¥0UCJhj.Sh¥0U5CJh¥0Uh¥0U5CJ1¡?¢?À?Þ?M@¿@À@Â@öööööBö³kdyÊ$$If4ÖÖrÿ,
 d 8&T` D
ÔÖ0ÿÿÿÿÿÿö¤&ööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.S	$Ifgdv«Â@AA&ADA¶A·A¸AööööööB³kdKÌ$$If4ÖmÖrÿ,
 d 8&T  D
ÔÖ0ÿÿÿÿÿÿö¤&ööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.S	$Ifgdv«A A&A(A3A4A=A>ADAEAAA AªA«AA´AµAúABB	BBBBB%B'B2B3B*B*PJUaJmH	nHphÿsH	tH0hKbh
*`>*B*PJaJmH	nHphÿsH	tHhKbh
*`mH	sH	jhKbh
*`UmH	sH	hm}hW($nHtHh
*`PJaJmH	nHsH	tH$hKbh
*`PJaJmH	nHsH	tHhroPJaJmH	nHsH	tH?GHGJGOGPGTGaGlGnGoGpGÉGÊGÔGÕG6H8H?HAHIHKHbH
HH¹HºHJJyJJ^KlK:L;LÍLüôüíüíüíéÚÏÚ¶vvvnfnf[n[n[n[fnhº! hº! nHtHhìnUnHtHhº! nHtHhroPJaJmH	nHsH	tH$hKbh
*`PJaJmH	nHsH	tH9jhKbh
*`>*B*PJUaJmH	nHphÿsH	tH0hKbh
*`>*B*PJaJmH	nHphÿsH	tHhKbh
*`mH	sH	jhKbh
*`UmH	sH	hìnUhKbhkhpVmhkH*hk"ÍLñLòLJMKMÙMÚMPNQNRN«N¬N¶N·NO!O5O6O7O8OOOøðøðøðøðáÖá½ }mbK8K$hKbhK#ôPJaJmH	nHsH	tH-jhKbhK#ôPJUaJmH	nHsH	tHhm}hW($nHtHh
*`PJaJmH	nHsH	tHhroPJaJmH	nHsH	tH$hKbh
*`PJaJmH	nHsH	tH9jhKbh
*`>*B*PJUaJmH	nHphÿsH	tH0hKbh
*`>*B*PJaJmH	nHphÿsH	tHhKbh
*`mH	sH	jhKbh
*`UmH	sH	hìnUnHtHh[JÿnHtHOOOæOèOüOýOþOPPP P!P"P)P*PMPNPmPnP¨Q©Q_R`R"Sçн½|qiqe^VNJF?F?F?hHvhHvhìnUh>'hìnUnHtHhR¡nHtHhKbhR¡hR¡hð >mH	sH	hKbh9ämH	sH	hKbhÊømH	sH	hKbhèUmH	sH	hm}hW($nHtHhK#ôPJaJmH	nHsH	tHhroPJaJmH	nHsH	tH$hKbhK#ôPJaJmH	nHsH	tH-jhKbhK#ôPJUaJmH	nHsH	tH0hKbhK#ô>*B*PJaJmH	nHphÿsH	tH*PNPnP©Q`R#SS¡SS®S°S²SúúúúúúììeììkdÎÎ$$IfcÖÖ0ÿ?ê$««
t Ö0ÿÿÿÿÿÿö6öÖÿÿÖÿÿÖÿÿÖÿÿ4Ö4Ö
laöpÖÿÿÿÿytv«$IfgdHvoÆíTW'Ogd1«"S#S%S+SaSlSnSsSSSSS®S¯S°S±S²S³SÝSÞSçSîSðS(T)T*T`TaTûTüT>U?UVVVVWWZY[Y¤Y¥YZZ0Z2Z>Z@ZNZVZZZZZªZ¬Z([üôðôéôðôéôüôÜôÏéÂôéôéô¾ü¶²ü²ü²ü²ü²ü²ü²ü²ü²üéôéüª ªªéhHvhHv5o(hHvhHv5hj.ShHv5o(hj.ShHv5hHvhHvhóo(hój7hHvhHvUo(jhHvhHvUo(jZÏhHvhHvUo(hHvhHvhkhHvhHvo(hìnU8²S´SµSÝSïSñjññkdæ$$IfcÖÖ0ÿ?ê$««
t Ö0ÿÿÿÿÿÿö6öÖÿÿÖÿÿÖÿÿÖÿÿ4Ö4Ö
laöpÖÿÿÿÿytv«$IfgdHvoÆíTW'ïSðS)T*TaTüT?UVVW[Yxsnia\a\a\Wgd1«G
&Fhgd1«Ogd1«gdHv(gd1«kdæ$$IfcÖÖ0ÿ?ê$««
t Ö0ÿÿÿÿÿÿö6öÖÿÿÖÿÿÖÿÿÖÿÿ4Ö4Ö
laöpÖÿÿÿÿytv«
[Y¥YZ@ZDZNZVZZ¬Z÷òíäääää	$IfgdHvgd1«Wgd1«G
&Fhgd1«¬Z®Z²Z*[ò[þ[j\OFFFFF	$IfgdHv°kd)ç$$IfÖÖrÿÂb¬Q$. ¥¥¥Ö0ÿÿÿÿÿÿöööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.S([*[,[Ø[Ú[î[ð[ò[ô[þ[\V\W\a\b\j\\¡\³\´\
]]]]^^^^º^¼^Ð^Ò^â^T_V_``````(`*`Ö`Ø`ì`î`ò`aZapaaaBbDbXbZb\b^bhbjbcc,c.c2cLcÈcÊcÔcÖcdddddd¨dªdVeXele÷ìåìÜìåØ÷ìåìÜì÷åÎåìåìÜìåØ÷ìåìÜì÷åìåìÜìåØ÷ìåìÜìØ÷åÎåìåìÜìåØ÷ìåìÜìØ÷åØåìåìÜì÷Ø÷ìåìÜhHvhHv56hHvhHvhHv0JhHvhHvjhHvhHvUhHvhHvo(Qj\k\o\³\]^â^OFFFFF	$IfgdHv°kd²ç$$IfÖÖrÿÂb¬Q$. ¥¥¥Ö0ÿÿÿÿÿÿöööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.Sâ^ä^è^T_`(`aOFFFFF	$IfgdHv°kd;è$$IfÖÖrÿÂb¬Q$. ¥¥¥Ö0ÿÿÿÿÿÿöööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.Saaaa\bhbLcOFFFFF	$IfgdHv°kdÄè$$IfÖÖrÿÂb¬Q$. ¥¥¥Ö0ÿÿÿÿÿÿöööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.SLcNcRcÔcd¨deOFFFFF	$IfgdHv°kdMé$$IfÖÖrÿÂb¬Q$. ¥¥¥Ö0ÿÿÿÿÿÿöööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.Slenereeee¡f¢fg g°h±h-i.inioiÚiôiõiöij%j7jIjjjjjjËjÏjÐjÒjkækçkNlOlVlXlYlZl[l\lµl¶lôðèáðÚÖÒÖÒÖÎÖÒÖðÊðÊÒÊÆÊÂÊÖ¾Ò¾¶¾Ò¾®ÖÒÖ§  §hKbh
*`mH	sH	jhKbh
*`UmH	sH	hKbh"aëhKbh9ähKbhhÈ*húOªhk*húOªhæ"¸hæ"¸hÒjÀhóheh©.¬hkhìnUhehehHvhHvhHvhHvo(hHvjhHvhHvU-eee¢f g±h.ioiOJEEEEEOgd1«gdÂfÒ°kdÖé$$IfÖÖrÿÂb¬Q$. ¥¥¥Ö0ÿÿÿÿÿÿöööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.SoijkçkOl[l4m5mïmðmnngoho"p#p q!q-qq\rrvëvnwÌxÂyúúúúõðëðëðëðëæëáëõúðúúúúúú	gdK#ô	gdñgdm}	gd;@¹gdD`©Ogd1«¶lÀlÁlülþlmmmm3m4m5m6mmmmmËmÍmØmÚmîmïmðmñmJnKnUnVnyn{nnnnnnnnnónônþnÿn(oçÊ·§·§·§·}r}çÊ·§·§·}r}çÊ·§·§·§·}r}çÊ·hKbh
*`mH	sH	jhKbh
*`UmH	sH	hm}hW($nHtHh
*`PJaJmH	nHsH	tHhroPJaJmH	nHsH	tH$hKbh
*`PJaJmH	nHsH	tH9jhKbh
*`>*B*PJUaJmH	nHphÿsH	tH0hKbh
*`>*B*PJaJmH	nHphÿsH	tH+(o)ofogohoioÂoÃoÍoÎo÷oùoÿoppp!p"p#p$p}pðÝÍ«««ooo_ÂH5$hKbhK#ôPJaJmH	nHsH	tH-jhKbhK#ôPJUaJmH	nHsH	tHhñPJaJmH	nHsH	tHhroPJaJmH	nHsH	tH0hKbhñ>*B*PJaJmH	nHphÿsH	tH$hKbhñPJaJmH	nHsH	tH-jhKbhñPJUaJmH	nHsH	tHhm}hW($nHtHh
*`PJaJmH	nHsH	tH$hKbh
*`PJaJmH	nHsH	tHhð >PJaJmH	nHsH	tH}p~ppp³p´pöpøpqqq q!q(q*q+q,q-q[qdqfqkqxqq
qqqàqáqèÏ輬¼¬¼¬¼{tphptptdUJUhKbh
*`mH	sH	jhKbh
*`UmH	sH	hìnUhpVmh>'H*h>'hKbh>'hhÈhY[hKbh9ähKbhhÈhm}hW($nHtHhK#ôPJaJmH	nHsH	tHhroPJaJmH	nHsH	tH$hKbhK#ôPJaJmH	nHsH	tH0hKbhK#ô>*B*PJaJmH	nHphÿsH	tH-jhKbhK#ôPJUaJmH	nHsH	tHáqëqìq8r:r@rBrGrIr\rrrrrvv®v¯vÈvÉvÊvÏvÐv×vÞvévêvëvmwnwÈwçÊ·§·§·§·xixxaYaQh®bÔnHtHhi2 nHtHhy@nHtHj_êh>tøUnHtHjh>tøUnHtHh>tønHtHhìåhìånHtHhìnUnHtHhìånHtHhroPJaJmH	nHsH	tH$hKbh
*`PJaJmH	nHsH	tH9jhKbh
*`>*B*PJUaJmH	nHphÿsH	tH0hKbh
*`>*B*PJaJmH	nHphÿsH	tHÈwx]xËxÌx×xÁyÂyF{G{{{ì{í{'nHtH3ÂyÒyèyýyz(z)zííííí7¶kdÜê$$Ifs4JÖrÿý(S~#©,`gpg+	ÿÿÿÿg+	ÿÿÿÿg+	ÿÿÿÿg+	ÿÿÿÿ
t 
6 ÙÛ´ö-öÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÖÿÿÿÿÿ´4Ö4Ö
sBÖaöpÖ2ÿÿÿÿÿÿÿÿÿÿytj.S$ÙÛ& #$/´Ifgdy@)z*z,z.z0z2z4z6z8z:zz@zBzCzKzQzWz]zczizozuz{zzzzzíííííííííííííèíííííííííííííFf¾î$ÙÛ& #$/´Ifgdy@zzz¢z¨zz³z¹z¾zÄzÊzÏzÕzÛzàzázözüz{
{{{{${+{2{8{úèèèèèèèèèèèèèãèèèèèèèèèèèFf§ü$ÙÛ& #$/´Ifgdy@Ffõ8{?{F{G{Q{W{]{c{i{o{u{{{{{{{{{¤{ª{°{¶{¼{Â{È{Î{Ô{ííèíííííííííííííãíííííííííFf÷
FfÏ$ÙÛ& #$/´Ifgdy@Ô{Ú{à{æ{ì{í{ô{ú{||||||$|*|0|6|$$Ifs4JÖrªüEp Æ)`gpg+	ÿÿÿÿg+	ÿÿÿÿg+	ÿÿÿÿg+	ÿÿÿÿ
t ö-öÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
sBÖaöýpÖ2ÿÿÿÿÿÿÿÿÿÿytj.SSUWY[]_`hntz¤ª°±¹¿ÆÍÓÚööööööñöööööööööööööìööööööFfHFfWB	$Ifgdy@Úáçîõû	$+28?FLSZ`gnoyöööööööñöööööööööööööìöööööFf
VFfO	$Ifgdy@¡§®µ»ÂÉÊÔÚáçíôú
 !(.4:ööööööööñöööööööööööööìööööFf?dFfb]	$Ifgdy@:@FLRX^djpq{£©°·½ÄËÌ×ÝäöööööööööñöööööööööööööìöööFfùqFfk	$Ifgdy@äëñøÿ '(6*B*PJaJmH	nHphÿsH	tH$hl3AhY[PJaJmH	nHsH	tH-jhl3AhY[PJUaJmH	nHsH	tHhy@nHtH$hj.Shi2 CJaJmH	nHsH	tHhìnUnHtHhi2 nHtHhi2 hi2 nHtH?@ACEGIKMRIIIIIII	$Ifgdi2 ¬kdä$$Ifs4JÖrAû±5
¹= Á)`gpg	ÿÿÿÿg	ÿÿÿÿg	ÿÿÿÿg	ÿÿÿÿ
t ö.öÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
sBÖaö´ûpÖ2ÿÿÿÿÿÿÿÿÿÿytj.SMOQSUWYZbhntz¤ª«³¹¿ÅËÑööööööñöööööööööööööìööööööFf2FfH	$Ifgdi2 Ñ×Ýãéïõûü#)/5;AGMSYZdkryöööööööñöööööööööööööìöööööFf|§Ff× 	$Ifgdi2 £ª±¸¹ÃÉÏÕÛáçíóùÿ!(ööööööööñöööööööööööööìööööFfÆ´Ff!®	$Ifgdi2 (/6=DKRY`ghry£ª±¸¿ÆÇÒØßöööööööööñöööööööööööööìöööFfÊÁFfA»	$Ifgdi2 ßæìóú"#17>DJQX^djpv|}ööööööööööñöööööööööööööìööFfêÎFfEÈ	$Ifgdi2 ¡§®´ºÁÇÍÔÚÛãéð÷ý$+12öööööööööööñöööööööööööööìFfàÛFfeÕ	$Ifgdi2 234KSw£ÃØÚßã
úúõðçâÝÝØÓÊÊÊÊÊ	$Ifgd0)=gd1«Ogd1«Ggd1«gd0)=@^@gdÊøgdm}	gdY[gdìåHIKRSvw¢£ÂÃÐÑרßãúþuvàáëìíîïðñlmwxרâãäåõêæß×ÏæËÇËÀ¸ÀËÀ¸°¦°À°À¸À¸ÀÀ¸À¸À¸À¸ÀÀÀ¸À¸Àh0)=h0)=0Jjh0)=h0)=Uh0)=h0)=5o(h0)=h0)=5h0)=h0)=o(h0)=h0)=h>'hìnUhìnUnHtHh0)=nHtHhKbh0)=h0)=hKbhÊømH	sH	hKbh(~,mH	sH	9
íOFFFFF	$Ifgd0)=°kdWß$$IfÖÖrÿÂ'Ìq%.À¥¥¥Ö0ÿÿÿÿÿÿöÝ%ööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.SíîðyäOFFFFF	$Ifgd0)=°kdàß$$IfÖÖrÿÂ'Ìq%.À¥¥¥Ö0ÿÿÿÿÿÿöÝ%ööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.Säåç
qxÜOFFFFF	$Ifgd0)=°kdià$$IfÖÖrÿÂ'Ìq%.À¥¥¥Ö0ÿÿÿÿÿÿöÝ%ööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.Såæçè
deopwxyÏÐÚÛÜÝÞßà\]ghopqÇÈÒÓÔÕÖרüýST^_`defgh¾¿ÉÊËÌÍÎÏôõKLVW^_`¶·ÁÂÄÅÆÇìíCDNOVWX®÷ð÷ðåðåÜåð÷åðåÜå÷ð÷ð÷ðåðåÜåð÷åðåÜå÷ð÷ð÷ðåðåÜå÷ð÷ð÷åðåÜå÷ð÷ð÷ðåðåÜåð÷åðåÜåð÷ð÷ðåðåÜåð÷åðh0)=h0)=0Jjh0)=h0)=Uh0)=h0)=h0)=h0)=o(WÜÝßipÔOFFFFF	$Ifgd0)=°kdòà$$IfÖÖrÿÂ'Ìq%.À¥¥¥Ö0ÿÿÿÿÿÿöÝ%ööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.SÔÕ×ü`gËOFFFFF	$Ifgd0)=°kd{á$$IfÖÖrÿÂ'Ìq%.À¥¥¥Ö0ÿÿÿÿÿÿöÝ%ööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.SËÌÎôX_ÃOFFFFF	$Ifgd0)=°kd
â$$IfÖÖrÿÂ'Ìq%.À¥¥¥Ö0ÿÿÿÿÿÿöÝ%ööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.SÃÄÆìPW»OFFFFF	$Ifgd0)=°kdâ$$IfÖÖrÿÂ'Ìq%.À¥¥¥Ö0ÿÿÿÿÿÿöÝ%ööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.S®¯¹ºäå;?@FG¨©ÕÖ,-789>? ¡èé?@JKLQR¨©³´ùüýST^_`ef¼½ÇÈfgqrsxyÏÐÚÛuvôëôäôäôëôäÜôäôëôäôäôëôäÜôäôëôäôäôëôäÜôäôëôäôäôëôäÜôäôëôäÜôäôëôäÜôäôëôäôäôëôäÜôäôëôäÜôäôh0)=h0)=o(h0)=h0)=h0)=h0)=0Jjh0)=h0)=UW»¼¾äHN²OFFFFF	$Ifgd0)=°kd"ã$$IfÖÖrÿÂ'Ìq%.À¥¥¥Ö0ÿÿÿÿÿÿöÝ%ööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.S²³µÜ@FªOFFFFF	$Ifgd0)=°kd±ã$$IfÖÖrÿÂ'Ìq%.À¥¥¥Ö0ÿÿÿÿÿÿöÝ%ööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.Sª«®Õ9>¢OFFFFF	$Ifgd0)=°kd:ä$$IfÖÖrÿÂ'Ìq%.À¥¥¥Ö0ÿÿÿÿÿÿöÝ%ööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.S¢£¦èLQµOFFFFF	$Ifgd0)=°kdÃä$$IfÖÖrÿÂ'Ìq%.À¥¥¥Ö0ÿÿÿÿÿÿöÝ%ööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.Sµ¶¹ü`eÉOFFFFF	$Ifgd0)=°kdLå$$IfÖÖrÿÂ'Ìq%.À¥¥¥Ö0ÿÿÿÿÿÿöÝ%ööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.SÉÊÍsxÜOFFFFF	$Ifgd0)=°kdÕå$$IfÖÖrÿÂ'Ìq%.À¥¥¥Ö0ÿÿÿÿÿÿöÝ%ööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.SÜÝàëOFFFFF	$Ifgd0)=°kd^æ$$IfÖÖrÿÂ'Ìq%.À¥¥¥Ö0ÿÿÿÿÿÿöÝ%ööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.SvÞßéêìí8 (tu}íîABno]^©ªËÌÕÖר    # . 0 1 P Q R « ÷ìåÝìåì÷ìåÙåÙåÕÍÙÕÍÙÈÙÕÍÙÕÙÕÙÕÙÕÙÕÙÕÙÕÁºÁ¶¯«£«¯«¯Õ«ÕhKbh®qmH	sH	jhKbh®qUmH	sH	hpVmh>'H*h>'hKbh>'hÊøhKbh"aëhKbhÊø	h0)=6*hj.Sh0)=hìnUh0)=h0)=h0)=o(h0)=h0)=jh0)=h0)=Uh0)=h0)=0J6ëìí uîBOJEEE@@gd1«Ogd1«gdÊø°kdçæ$$IfÖÖrÿÂ'Ìq%.À¥¥¥Ö0ÿÿÿÿÿÿöÝ%ööÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿÖÿÿÿÿÿ4Ö4Ö
laöytj.SBo^ªÌØ1 Q H¡c¡¡ô¢+£|£¢£õ£O¤¤°¤Ö¤¥Ç¥ý¥Y¦û¦§§úõõõõðõõëõõõõõúæúæáõõõõõõë	gdj.STgd1«	gd;@¹gdÊøOgd1«gd1«« ¬ ¶ · H¡b¡c¡¡¡ó¢ô¢*£+£{£|£¡£¢£ô£õ£N¤O¤¤¤Õ¤Ö¤¥¥Æ¥Ç¥ü¥ý¥X¦Y¦¦¦ú¦û¦ü¦U§V§`§a§b§§ð׺§ðð׺q§h«#PJaJmH	nHsH	tHhKbh®qmH	sH		jàðh0)=nHtHhìnUnHtHh0)=nHtH$hKbh®qPJaJmH	nHsH	tH9jhKbh®q>*B*PJUaJmH	nHphÿsH	tH0hKbh®q>*B*PJaJmH	nHphÿsH	tHjhKbh®qUmH	sH	+§§§§§§Ì§Í§ã©ä©ªªHªIªwªxªÞªßª+«,«g«h«i«j«Ã«Ä«Î«Ï«ï«ð«¬¬¬¬ðÝðÝÕÍÕÍÕÍÕÍÕÍÕÍÕÍÕͳ¨³rÝbÝðÝRh®qPJaJmH	nHsH	tHhY[PJaJmH	nHsH	tH9jhKbh®q>*B*PJUaJmH	nHphÿsH	tH0hKbh®q>*B*PJaJmH	nHphÿsH	tHhKbh®qmH	sH	jhKbh®qUmH	sH	hKbh0)=nHtHhìnUnHtHh0)=nHtH$hKbh®qPJaJmH	nHsH	tHhroPJaJmH	nHsH	tH §§Í§ä©ªIªxªßª,«h«i«¬¬ä¬å¬µ¶q®r®.¯/¯°°,°8°úúúúúõúðëæáÜá×áÜáÜáÒáÍÈgdÊøOgdm}	gd1«	gd¢?'	gd+=gdm}	gd;@¹gdÊøQgd1«Ugd1«Ogd1«¬¬¬s¬t¬~¬¬¬ ¬±¬³¬¸¬º¬¿¬Á¬Ç¬É¬ã¬ä¬å¬æ¬?@õæÛæÂ¥rrrrbõSHShKbh¢?'mH	sH	jhKbh¢?'UmH	sH	h+=PJaJmH	nHsH	tHhroPJaJmH	nHsH	tHhY[PJaJmH	nHsH	tH$hKbh+=PJaJmH	nHsH	tH9jhKbh+=>*B*PJUaJmH	nHphÿsH	tH0hKbh+=>*B*PJaJmH	nHphÿsH	tHhKbh+=mH	sH	jhKbh+=UmH	sH	hm}hW($nHtH@JK´µ¶·®®®®7®çÊ·§·§·§·§·§iL9$hKbh+=PJaJmH	nHsH	tH9jhKbh+=>*B*PJUaJmH	nHphÿsH	tH0hKbh+=>*B*PJaJmH	nHphÿsH	tHhKbh+=mH	sH	jhKbh+=UmH	sH	hm}hW($nHtHh¢?'PJaJmH	nHsH	tH$hKbh¢?'PJaJmH	nHsH	tH9jhKbh¢?'>*B*PJUaJmH	nHphÿsH	tH0hKbh¢?'>*B*PJaJmH	nHphÿsH	tH7®8®p®q®r®s®Ì®Í®×®Ø®ô®õ®-¯.¯/¯0¯¯¯¯ðÝͳ¨³rÝðÝÍÂ[H[/0h°ih\>*B*PJaJmH	nHphÿsH	tH$h°ih\PJaJmH	nHsH	tH-jh°ih\PJUaJmH	nHsH	tH9jhKbh+=>*B*PJUaJmH	nHphÿsH	tH0hKbh+=>*B*PJaJmH	nHphÿsH	tHhKbh+=mH	sH	jhKbh+=UmH	sH	hm}hW($nHtHh+=PJaJmH	nHsH	tH$hKbh+=PJaJmH	nHsH	tHhY[PJaJmH	nHsH	tH¯¯°°°° °#°+°,°5°6°8°e°f°m°n°°
°°°èÕ¿¯¤}yuj_TLTAhKbhY[mH	sH	hY[mH	sH	hKbhtoËmH	sH	hKbhVU÷mH	sH	hKbh'lmH	sH	hìnUh>'hñãhKbhÊøhìnUnHtHh8)nHtHh8)hæ"¸nHtHhm}hW($nHtHh\PJaJmH	nHsH	tH+HhUZGh lüPJaJmH	nHsH	tH$h°ih\PJaJmH	nHsH	tH-jh°ih\PJUaJmH	nHsH	tH8°f°°°î°©±î³Ê´û´ÈµÉµ¤¶·Î·ß·5¸ú¸÷»ÿ¼I½Ø¾ò¾ó¾Å¿Ö¿úõìúçúúúçâÝúØìúçúúúúúÓçìgdY[Ogdm}	gdo#¡gdm}	gd;@¹@^@gdY[gdtoËOgd1«°°°°°Æ°Ï°Ñ°Ò°Ö°×°Û°Ü°è°ë°í°î°ï°H±I±S±T±
±õêõߨÔÌÅÔØÁºØ¶¯«x[H$hKbhY[PJaJmH	nHsH	tH9jhKbhY[>*B*PJUaJmH	nHphÿsH	tH0hKbhY[>*B*PJaJmH	nHphÿsH	tHhKbhY[mH	sH	jhKbhY[UmH	sH	hìnUhKbhJqhJqhKbhù=¸hù=¸hKbhÒ]h1«hÒ]H*hÒ]hKbhY[hKbh¬QmH	sH	hKbhü_mH	sH	hKbh(~,mH	sH	
±±±±±±©±I²±²³´³¹³É³Û³Þ³ì³í³î³É´Ê´ú´û´ü´UµVµ`µaµµ¯µ¸µºµÇµðÝðÝðÝÙÓÙÓÆ»Æ»ÆÓµ±±z]ÝðÝðÝ9jhKbhY[>*B*PJUaJmH	nHphÿsH	tH0hKbhY[>*B*PJaJmH	nHphÿsH	tHhKbhY[mH	sH	jhKbhY[UmH	sH	hìnUhù=¸
hìnUaJhÒ]aJnHo(tHhæ6MhÒ]aJnHtH
hÒ]aJhÒ]$hKbhY[PJaJmH	nHsH	tHhroPJaJmH	nHsH	tHǵȵɵʵ#¶$¶.¶/¶¶£¶¤¶··¤·©·É·Ì·Í·Î·Ú·Þ·ß·
¸¸¸ðåλ΢λ»yqmbWbPLDh1«hù=¸H*hù=¸hKbhY[hKbh(~,mH	sH	hKbhY[mH	sH	hìnU*hm}hè	khm}hè	khè	kh1«hW($h1«hè	khwKMPJaJmH	nHsH	tH0h-AshwKM>*B*PJaJmH	nHphÿsH	tH$h-AshwKMPJaJmH	nHsH	tH-jh-AshwKMPJUaJmH	nHsH	tHhm}hW($nHtHhY[PJaJmH	nHsH	tH¸¸¸¸¸"¸#¸/¸2¸4¸5¸6¸¸¸¸¸Æ¸È¸Î¸Ð¸â¸ã¸å¸æ¸í¸ï¸ú¸,¹¬¹®¹¯¹²¹Á¹Ü¹ê¹ô¹ùõñêñãêõùßÐÅЬ|l|l|l|l|l|c[c[c[c[chù=¸nHtHhù=¸nHo(tHhroPJaJmH	nHsH	tH$hKbhY[PJaJmH	nHsH	tH9jhKbhY[>*B*PJUaJmH	nHphÿsH	tH0hKbhY[>*B*PJaJmH	nHphÿsH	tHhKbhY[mH	sH	jhKbhY[UmH	sH	hìnUhKbh hKbhY[h hù=¸hKbhù=¸#ô¹/º4º5ºVºcºqº»ºu»§»¬»ö»÷»þ¼ÿ¼½H½I½×¾Ø¾ñ¾ò¾ó¾ô¾M¿N¿X¿Y¿ ¿¢¿©¿øïøïøïøïäïøÜøÜÐøÜÈÜÈÜÁ²§²q^N^hroPJaJmH	nHsH	tH$hKbhY[PJaJmH	nHsH	tH9jhKbhY[>*B*PJUaJmH	nHphÿsH	tH0hKbhY[>*B*PJaJmH	nHphÿsH	tHhKbhY[mH	sH	jhKbhY[UmH	sH	hKbhY[h nHtH*ho#¡hù=¸nHtHhìnUnHtHh79
hù=¸nHtHhù=¸nHo(tHhù=¸nHtH©¿«¿Å¿Ñ¿Ó¿Ô¿Õ¿Ö¿À
ÀÀÀÀÀÀ&À)À+À,À-ÀÀÀÀÀµÀ·À¿ÀÁÀÆÀÈÀÛÀðÂñÂLÃMÃðÝÒǿǷ°¬¤°¬°¬°¬° Òx[ÝðÝðÝðÝW W h 9jhKbhY[>*B*PJUaJmH	nHphÿsH	tH0hKbhY[>*B*PJaJmH	nHphÿsH	tHjhKbhY[UmH	sH	hìnUh1«hTÌH*hTÌhKbhTÌhY[mH	sH	hY[mH	sH	hKbh(~,mH	sH	hKbhY[mH	sH	$hKbhY[PJaJmH	nHsH	tHhroPJaJmH	nHsH	tH"Ö¿,ÀÛÀñÂMÃXÃ#Ä$ÄÚÄìÈ Ê0Ë»ËbÌÍ
Î-ÎhÎúÎAÏÏÏ
Ï`ÐjÐúõúúúðëõúúúúúúúúææÞæÞÙõÐ@^@gdY[gdY[$a$gd Jgd1«gdm}	gdY[	gd;@¹Ogd1«MÃWÃXÃYòóýþÃþÃÄÄ
Ä"Ä#Ä$Ä%Ä~ÄÄÄÄüøáÎáµáΥΥÎ{p{W:9jhKbhY[>*B*PJUaJmH	nHphÿsH	tH0hKbhY[>*B*PJaJmH	nHphÿsH	tHhKbhY[mH	sH	jhKbhY[UmH	sH	hm}hW($nHtHhY[PJaJmH	nHsH	tHhroPJaJmH	nHsH	tH0hl3AhY[>*B*PJaJmH	nHphÿsH	tH$hl3AhY[PJaJmH	nHsH	tH-jhl3AhY[PJUaJmH	nHsH	tHhìnUh ľÄÀÄÅÄÇÄÚÄõÄöÄÅÅÅÅ0Å6Å@ÅPÅRÅÆÆÆÆÆ°ÆàÆãÆìÆ
ÇAÇMÇNÇÇÇdzǴÇëÈìÈÊ Ê/Ë0˺˻ËaÌbÌÍÍåÍ	Î
ÎÎÎ,Î-ÎgÎhÎùÎúÎ?Ï@ÏAÏÏíÝíÝíÔÌÔÌÔÌÔÌÔÌÔÌÔÌÔÌÔÌÔÌÔÌÔÌÔÌÔÌÔÌļļļļļļ´Ä´¨´Ä´Ä´Ä´Ä´h1«hìnU*B*PJUaJmH	nHphÿsH	tH0hKbhY[>*B*PJaJmH	nHphÿsH	tHhKbhY[mH	sH	jhKbhY[UmH	sH	hKbhY[h}![hsEÆaJhìnUnHtHjÐÀÐÑa×Î×ñ×CØ¢ØãØìØ-Ù4Ú5ÚÃÚãÚäÚÈÛ*ÝpÞAßÄáÆáTâãúõðëëëëëëææáÑúëõúúúúÌúúgd1«J
&Fhþ^h`þgd1«1gd1«Jgd1«gdY[gdö]	gd;@¹Ogd1«¿ÐÀÐÁÐÑÑ%Ñ&ÑIÑKÑSÑUÑbÑdÑpÑrÑÑ[ÓrÓxÓÓÓ¾ÓÑÓÙÓÔÔÔÔ}ÕÕ¢Õ¤ÕÖÖÖ`×a×Í×Î×Ù*Ù+Ù,Ù-Ùüíâíɬzzzzzzzzzzvrvnvnüh!³hY[hö]
hö]CJh)=hö]CJhroPJaJmH	nHsH	tH$hKbhY[PJaJmH	nHsH	tH9jhKbhY[>*B*PJUaJmH	nHphÿsH	tH0hKbhY[>*B*PJaJmH	nHphÿsH	tHhKbhY[mH	sH	jhKbhY[UmH	sH	hìnU+-ÙîÙ1Ú2ÚÃÚâÚãÚäÚåÚ>Û?ÛIÛJÛÛÛ¡Û£Û¯Û±ÛÈÛ)Ý*ÝoÞpÞ@ßAßÂáÄáÆáSâTâvâããhãiãÒãÓãüøüøüøñâ×⾡~~~wpwpwpwpkwpwdp`ø`øh©- h1«h©- 	*hìnUh1«hìnUh1«hè	khroPJaJmH	nHsH	tH$hKbhY[PJaJmH	nHsH	tH9jhKbhY[>*B*PJUaJmH	nHphÿsH	tH0hKbhY[>*B*PJaJmH	nHphÿsH	tHhKbhY[mH	sH	jhKbhY[UmH	sH	hKbhö]hìnUh!³%ãiãÓã*ääå2æ{æ¸æÔæüænçxèyèeééëàìXíµíÊíîîºî»î±ï²ïúúúúõõúðððõõëæõõõõõðððëáÜgdm}	gdU°	gd;@¹gdY[Jgd1«Ogd1«Ogdm}Óã)ä*äääåå1æ2æzæ{æ·æ¸æÓæÔæûæüæmçnçwèxèyèzèÓèÔèÞèßè-é/é6é8éEéGéOéQéeéèëéëßìüøüøñêñêæøæøæøæøßêßêØÉ¾É¥ueueueueuaøah!³hroPJaJmH	nHsH	tH$hKbhY[PJaJmH	nHsH	tH9jhKbhY[>*B*PJUaJmH	nHphÿsH	tH0hKbhY[>*B*PJaJmH	nHphÿsH	tHhKbhY[mH	sH	jhKbhY[UmH	sH	hKbh©- h1«hìhìh1«hìnUh1«h©- hìnUh©- &ßìàìíWíXíií´íµí½íÈíÉíÊíîîcîîî¹îºî»î¼îïï ï!ïïïïï°ï±ï²ï³ïð
ðüøôüøôüøìøüôüôèüôüáÊ·ÊÊ···~sdYdhKbhqgBmH	sH	jhKbhqgBUmH	sH	hm}hW($nHtHhU°PJaJmH	nHsH	tHhroPJaJmH	nHsH	tH0hKbhU°>*B*PJaJmH	nHphÿsH	tH$hKbhU°PJaJmH	nHsH	tH-jhKbhU°PJUaJmH	nHsH	tHhKbhY[hJq*ho#¡h!³h6lïh!³hìnU"
ððð^ð`ðfðhðmðoðð[ñ\ñpñqñØñÙñAòBòTòUò½ò¾òUóVó"ô#ô¥ô¦ô×ôØô½õ¾õNöOölönöoöpöqöö¥ö¦ö¨öçÊ·§·§·§·££££££££££££zvovhKbhNdðhNdðhKbhÙhM9}mH	sH	hKbh[GÞmH	sH	hKbh¾;mH	sH	hìnUh6lïhroPJaJmH	nHsH	tH$hKbhqgBPJaJmH	nHsH	tH9jhKbhqgB>*B*PJUaJmH	nHphÿsH	tH0hKbhqgB>*B*PJaJmH	nHphÿsH	tH*²ïð\ñqñÙñBòUò¾òVó#ô¦ôØô¾õOöqöÞö÷Õ÷Rú¦ú§úwûxû üúõíååíååõõõõõàõ×ÒõõõÒÍÈ	gdM9}gdm}	gd;@¹@^@gdL_gdL_I
&Figd1«H
&Figd1«Ogd1«	gdqgB¨öªö«ö¯ö°ö´öµöÁöÄöÅöÜöÝöÞöþöÿö÷÷[÷\÷f÷g÷¬÷®÷¸÷º÷Ã÷Å÷Õ÷QúRú÷ðìåáÚåìåÖåÒǿǰ¥°o\L\L\L\ìÒhroPJaJmH	nHsH	tH$hKbhòBJPJaJmH	nHsH	tH9jhKbhòBJ>*B*PJUaJmH	nHphÿsH	tH0hKbhòBJ>*B*PJaJmH	nHphÿsH	tHhKbhòBJmH	sH	jhKbhòBJUmH	sH	hM9}mH	sH	hKbhnÔmH	sH	hìnUhJqhKbh993h993hKbhÙhNdðhKbhNdðh1«hNdðH*Rú¥ú§ú¨úûûû
ûSûUû`ûbûvûwûxûyûÒûÓûüøéÞéŨ
ujS@S$hl3AhM9}PJaJmH	nHsH	tH-jhl3AhM9}PJUaJmH	nHsH	tHhm}hW($nHtHhòBJPJaJmH	nHsH	tHhroPJaJmH	nHsH	tH$hKbhòBJPJaJmH	nHsH	tH9jhKbhòBJ>*B*PJUaJmH	nHphÿsH	tH0hKbhòBJ>*B*PJaJmH	nHphÿsH	tHhKbhòBJmH	sH	jhKbhòBJUmH	sH	hìnUhNdðÓûÝûÞûÿûüü ü~üü3ý4ýrýsýxýyý{ý~ýý«ý¬ýâýãý=þ>þ°þ±þGÿHÿ_ÿdÿeÿgÿjÿsÿÑ&4Nçн½xtxtxttttttxtxtxttl*h1«hìnUh)]ëh(\h)]ëhìnUnHtHhNdðnHtHhìnU
hNdðaJhNdðhM9}PJaJmH	nHsH	tHhroPJaJmH	nHsH	tH$hl3AhM9}PJaJmH	nHsH	tH-jhl3AhM9}PJUaJmH	nHsH	tH0hl3AhM9}>*B*PJaJmH	nHphÿsH	tH' ü4ýsý¬ýãý>þ±þHÿNñò±¼.¯+ùÀßé®»úõõõõúúúúúðëúúúúëúæáØÓúÓ	gd;@¹@^@gdL_Ogdm}	gd1«	gd¡Égdo#¡Ggd1«Ogd1«N¶ÈÉÐîïðñòóLMWX
±²»¼øðèðèðøàØÉ¾É¥ueueu]T]TL]hìnUnHtHh¡ÉnHo(tHh¡ÉnHtHh¡ÉPJaJmH	nHsH	tH$hKbh¡ÉPJaJmH	nHsH	tH9jhKbh¡É>*B*PJUaJmH	nHphÿsH	tH0hKbh¡É>*B*PJaJmH	nHphÿsH	tHhKbh¡ÉmH	sH	jhKbh¡ÉUmH	sH	hNdðnHtHhìnUnHtHhæ"¸nHtHh¡ÉnHtHh)]ënHtH*,-.®¯íðñû *+,
ÆÏÒÔáãùüóêâÚâÚâÏâÇâÇâÇâÚ¸¸wdTdTdTdPhJqh¡ÉPJaJmH	nHsH	tH$hKbh¡ÉPJaJmH	nHsH	tH9jhKbh¡É>*B*PJUaJmH	nHphÿsH	tH0hKbh¡É>*B*PJaJmH	nHphÿsH	tHhKbh¡ÉmH	sH	jhKbh¡ÉUmH	sH	hJqnHtHh£}h¡ÉnHtHhìnUnHtHh¡ÉnHtHh¡ÉnHo(tHhN_h¡ÉnHo(tHü
st~ÀÓÞßæçéêCDNOüøôÝÊݱÝʪø£~s~Z=9jhKbhòBJ>*B*PJUaJmH	nHphÿsH	tH0hKbhòBJ>*B*PJaJmH	nHphÿsH	tHhKbhòBJmH	sH	jhKbhòBJUmH	sH	hKbhü_mH	sH	hKbhnÔmH	sH	h£}hìnUh£}hæ"¸0h°ih\>*B*PJaJmH	nHphÿsH	tH$h°ih\PJaJmH	nHsH	tH-jh°ih\PJUaJmH	nHsH	tHhìnUhJqh¡ÉOz|¢®¯	QR}~ª¬»		¶	·	u
z
+íÝíÙÕÙÑ·Âí·ÂíÝíjíÝíÝíÝíÕd[dÕWÕhçBh80h¡ÉaJ
h¡ÉaJ-jhKbhòBJPJUaJmH	nHsH	tH9jhKbhòBJ>*B*PJUaJmH	nHphÿsH	tH0hKbhòBJ>*B*PJaJmH	nHphÿsH	tHhKbhòBJmH	sH	jhKbhòBJUmH	sH	hìnUh¡ÉhJqhroPJaJmH	nHsH	tH$hKbhòBJPJaJmH	nHsH	tH"+,-ÇÈ$%EFâãä=>HI«àê	Îñòùõñõñõñõñõñõâ×⾡~~umumem]eh÷wnHtHhìnUnHtHh£inHtHh£inHo(tHhroPJaJmH	nHsH	tH$hKbhòBJPJaJmH	nHsH	tH9jhKbhòBJ>*B*PJUaJmH	nHphÿsH	tH0hKbhòBJ>*B*PJaJmH	nHphÿsH	tHhKbhòBJmH	sH	jhKbhòBJUmH	sH	h£ihìnUhn
h¡É»-È%Fã«òÓÔºÜKh®ÍÎÍÎúúúúúúõúúðëõúúæÝúÝúõëõëõë@^@gdL_gdL_gdm}	gdK#ô	gd;@¹Ogd1«òóLMWX£¥¯±º¼ÄÅÒÓÔÕ./9èÕè¼èÕ¬Õ¬Õ¬ÕÕrgrN0hKbhòBJ>*B*PJaJmH	nHphÿsH	tHhKbhòBJmH	sH	jhKbhòBJUmH	sH	hm}hW($nHtHhK#ôPJaJmH	nHsH	tHhW($PJaJmH	nHsH	tHhroPJaJmH	nHsH	tH0hKbhK#ô>*B*PJaJmH	nHphÿsH	tH$hKbhK#ôPJaJmH	nHsH	tH-jhKbhK#ôPJUaJmH	nHsH	tH9:º£µ¶ÚÛî
AEHIJSTU\hoxyzÛÜ$+,EGIJKâϿϿϻ´»´»´»´»´»´»´»´»´»´»´»´»´»°»°¥|hÙ9ImH	sH	hKbh9ämH	sH	ho#¡h£imH	sH	hKbhûx²mH	sH	hKbh¾;mH	sH	hìnUhÚsçh÷wh÷whroPJaJmH	nHsH	tH$hKbhòBJPJaJmH	nHsH	tH9jhKbhòBJ>*B*PJUaJmH	nHphÿsH	tH,K[cefghl©«¬®Üãîõ÷klvwõêßÔßõÐÉźõß²ßõÉ®ª®¢É®ÉÅoR9jhKbhd3C>*B*PJUaJmH	nHphÿsH	tH0hKbhd3C>*B*PJaJmH	nHphÿsH	tHhKbhd3CmH	sH	jhKbhd3CUmH	sH	h1«hH*hwKMhhÙ9ImH	sH	hKbh¥	vmH	sH	hìnUhKbhL_~hJqhKbh;@¹mH	sH	hKbh9ämH	sH	hKbhúTcmH	sH	hKbhÒ}
mH	sH	wº¼ÌÍÎÏ()34¡£¯¸ºÌÍÎÏíÝíͳ¨³r_Ý_Ý_Ý_Ý_Ý_OÂ@jhKbhd3CUmH	sH	hòBJPJaJmH	nHsH	tH$hKbhòBJPJaJmH	nHsH	tH9jhKbhòBJ>*B*PJUaJmH	nHphÿsH	tH0hKbhòBJ>*B*PJaJmH	nHphÿsH	tHhKbhòBJmH	sH	jhKbhòBJUmH	sH	hm}hU[«nHtHhd3CPJaJmH	nHsH	tHhroPJaJmH	nHsH	tH$hKbhd3CPJaJmH	nHsH	tHÏ()34npÜÝçõæÍ°}rcXc?0hKbhòBJ>*B*PJaJmH	nHphÿsH	tHhKbhòBJmH	sH	jhKbhòBJUmH	sH	hm}hU[«nHtHhd3CPJaJmH	nHsH	tHhroPJaJmH	nHsH	tH$hKbhd3CPJaJmH	nHsH	tH9jhKbhd3C>*B*PJUaJmH	nHphÿsH	tH0hKbhd3C>*B*PJaJmH	nHphÿsH	tHjhKbhd3CUmH	sH	hKbhd3CmH	sH	
çè79BDJLVX]_qrstÍÎØÙ$&.079âϿϿϿϿϿϯ¤qTA¿A¿A¿$hKbhd3CPJaJmH	nHsH	tH9jhKbhd3C>*B*PJUaJmH	nHphÿsH	tH0hKbhd3C>*B*PJaJmH	nHphÿsH	tHhKbhd3CmH	sH	jhKbhd3CUmH	sH	hm}hU[«nHtHhòBJPJaJmH	nHsH	tHhroPJaJmH	nHsH	tH$hKbhòBJPJaJmH	nHsH	tH9jhKbhòBJ>*B*PJUaJmH	nHphÿsH	tHrscÆ s!t!""Á"$#ð#ñ#Ù$Ú$¶%·%»&¼&''(úõúïêåàõÛõÒåàõàõàõàõÍõÍ	gdM9}@^@gdL_	gdÙ9I	gd;@¹Ogd1«Ggd1«Ggdm}m$gdm}	gdm}9ACIKcÅÆøù   ï ð ú û =!?!G!I!Z!\!r!s!t!u!íÝíÝíÙÕÙÕÙÑÙÕ·ÂnÝnÝnÝn^SKjhÙ9IUhm}hW($nHtHh9ÃPJaJmH	nHsH	tH$hKbh9ÃPJaJmH	nHsH	tH9jhKbh9Ã>*B*PJUaJmH	nHphÿsH	tH0hKbh9Ã>*B*PJaJmH	nHphÿsH	tHhKbh9ÃmH	sH	jhKbh9ÃUmH	sH	h£ihìnUhª|	hroPJaJmH	nHsH	tH$hKbhd3CPJaJmH	nHsH	tHu!Î!Ï!Ù!Ú!Ü!Ý!""$"0":";"=">"G"H"S"U"V"W"["\"e"f"v"w"""
"""""""°"¼"üôÛÁ®®®®®®®®®®®®®®®}rghKbhÒ}
mH	sH	hKbhd3CmH	sH	hm}hW($nHtH+HhUZGh lüPJaJmH	nHsH	tHhÙ9IPJaJmH	nHsH	tH$hk6hÙ9IPJaJmH	nHsH	tH3jhÙ9I>*B*PJUaJmH	nHphÿsH	tH0hk6hÙ9I>*B*PJaJmH	nHphÿsH	tHjhÙ9IUhÙ9I%¼"¾"¿"À"Á"ï"ö"##
##!###$#%#~####Ý#ß#ï#ð#õíõâÛ×Ó×ËÛ×ÛǸ¸wdTdDhd3CPJaJmH	nHsH	tHhroPJaJmH	nHsH	tH$hKbhd3CPJaJmH	nHsH	tH9jhKbhd3C>*B*PJUaJmH	nHphÿsH	tH0hKbhd3C>*B*PJaJmH	nHphÿsH	tHhKbhd3CmH	sH	jhKbhd3CUmH	sH	hìnUh1«hH*hwKMhhKbhhKbhÒ}
mH	sH	hM9}mH	sH	hKbh9ämH	sH	ð#ñ#ò#K$L$V$W$£$¥$¶$¸$À$Â$Ø$Ù$Ú$Û$4%5%?%@%%%¡%£%µ%¶%·%¸%&&õæÛæÂ¥rõæÛæÂ¥rõcXchKbh9ÃmH	sH	jhKbh9ÃUmH	sH	hd3CPJaJmH	nHsH	tHhroPJaJmH	nHsH	tH$hKbhd3CPJaJmH	nHsH	tH9jhKbhd3C>*B*PJUaJmH	nHphÿsH	tH0hKbhd3C>*B*PJaJmH	nHphÿsH	tHhKbhd3CmH	sH	jhKbhd3CUmH	sH	hm}hW($nHtH&&&&&&¡&¦&¨&º&»&¼&½&''!'"'r't'''''çÊ·§·§·§·ubuIub§b9uhM9}PJaJmH	nHsH	tH0hl3AhM9}>*B*PJaJmH	nHphÿsH	tH$hl3AhM9}PJaJmH	nHsH	tH-jhl3AhM9}PJUaJmH	nHsH	tHhm}hW($nHtHh9ÃPJaJmH	nHsH	tHhroPJaJmH	nHsH	tH$hKbh9ÃPJaJmH	nHsH	tH9jhKbh9Ã>*B*PJUaJmH	nHphÿsH	tH0hKbh9Ã>*B*PJaJmH	nHphÿsH	tH'è'é'ó'ô'=(?(H(J(_(a(k(m(t(v(((((ó(ô(þ(ÿ(íÖ½ÖííííííjM9jhúOªhm}>*B*PJUaJmH	nHphÿsH	tH0húOªhm}>*B*PJaJmH	nHphÿsH	tHhúOªhm}jhúOªhm}UhW($nHtHhM9}PJaJmH	nHsH	tHhroPJaJmH	nHsH	tH0hl3AhM9}>*B*PJaJmH	nHphÿsH	tH-jhl3AhM9}PJUaJmH	nHsH	tH$hl3AhM9}PJaJmH	nHsH	tH(())¨)*ë*,(,,V-W-#.$.ï.ð.¼/½/}0~011¢1Å1úõúìçâÝìçØúÓúØúÓúØúÎúìç	gdÙ9I	gd+=	gd;@¹Sgd1«Wgd1«Ogd1«@^@gdL_	gdúOªgdm}ÿ(U)d)q)r)x)z){)))))£)¥)¦)§)¨)Ö)Ý)è)ï)ñ)þ)*
***e*íÝíÊÝÊݺ§~v~okgk_oko[NFh1«hM9}5jh1«hM9}5UhìnUh1«hH*hwKMhhKbhhM9}mH	sH	hKbh9ämH	sH	hKbhÒ}
mH	sH	hm}hÎ@nHtHhæ"¸nHtH$htgýhm}PJaJmH	nHsH	tHhm}PJaJmH	nHsH	tH$húOªhJqPJaJmH	nHsH	tHhJqPJaJmH	nHsH	tH$húOªhm}PJaJmH	nHsH	tHe*f*p*q*µ*·*¿*Á*Ò*Ô*ê*ë*,,%,&,(,V,],h,o,q,~,,,,,å,æ,ð,óáʾ²¾²¾²¾¦¢|teeL0hKbhd3C>*B*PJaJmH	nHphÿsH	tHjhKbhd3CUmH	sH	h1«hH*hwKMhhKbhhÙ9ImH	sH	hKbhd3CmH	sH	hìnUhj.Sh1«hìnU5nHtHh1«hro5nHtHh1«hM9}5nHtH,jh1«hM9}5>*B*UnHphÿtH#h1«hM9}5>*B*nHphÿtHjh1«hM9}5Uð,ñ, -"-3-5-=-?-U-V-W-X-±-²-¼-½-.	..".#.$.âϿϿϿϯ¤zaz¿zN>¤h+=PJaJmH	nHsH	tH$hKbhK#ôPJaJmH	nHsH	tH0hKbh+=>*B*PJaJmH	nHphÿsH	tH$hKbh+=PJaJmH	nHsH	tH-jhKbh+=PJUaJmH	nHsH	tHhm}hW($nHtHhd3CPJaJmH	nHsH	tHhroPJaJmH	nHsH	tH$hKbhd3CPJaJmH	nHsH	tH9jhKbhd3C>*B*PJUaJmH	nHphÿsH	tH$.%.~....¹.».Ì.Î.Ö.Ø.î.ï.ð.ñ.J/K/ðåð̯|qZGZ$hKbh+=PJaJmH	nHsH	tH-jhKbh+=PJUaJmH	nHsH	tHhm}hW($nHtHhd3CPJaJmH	nHsH	tHhroPJaJmH	nHsH	tH$hKbhd3CPJaJmH	nHsH	tH9jhKbhd3C>*B*PJUaJmH	nHphÿsH	tH0hKbhd3C>*B*PJaJmH	nHphÿsH	tHhKbhd3CmH	sH	jhKbhd3CUmH	sH	K/U/V/ /¢/´/»/¼/½/¾/00"0#0çн½pepL/9jhKbh9Ã>*B*PJUaJmH	nHphÿsH	tH0hKbh9Ã>*B*PJaJmH	nHphÿsH	tHhKbh9ÃmH	sH	jhKbh9ÃUmH	sH	hm}hW($nHtHh+=PJaJmH	nHsH	tH$hKbhK#ôPJaJmH	nHsH	tHhroPJaJmH	nHsH	tH$hKbh+=PJaJmH	nHsH	tH-jhKbh+=PJUaJmH	nHsH	tH0hKbh+=>*B*PJaJmH	nHphÿsH	tH
#0c0e0m0o0|0}0~00Ø0Ù0ã0ä011%1&1(1)12131=1>1G1H1I1K1L1M1Q1R1[1\1l1m1x1y1{1íÝíÝíͺ¶ºp`p`p`p`p`p`p`p`p`p`p`p`phÙ9IPJaJmH	nHsH	tH$hk6hÙ9IPJaJmH	nHsH	tH3jhÙ9I>*B*PJUaJmH	nHphÿsH	tH0hk6hÙ9I>*B*PJaJmH	nHphÿsH	tHhÙ9IjhÙ9IUhm}hW($nHtHh9ÃPJaJmH	nHsH	tHhroPJaJmH	nHsH	tH$hKbh9ÃPJaJmH	nHsH	tH%{1|11111111 1¡1¢1¦1Ä1Å1Ø1Ü1Ý1222222 2!2-20222324222ðÝðÝðÒǼ±¼Ç¦¢
yr
r
r¢ccjhKbhY[UmH	sH	hKbh993h1«h993H*h993hKbhY[hKbh;@¹mH	sH	hKbhY[mH	sH	hìnUhKbhTÂhJqhKbhé.ØmH	sH	hKbh9ämH	sH	hKbhÒ}
mH	sH	hm}hW($nHtH$hk6hÙ9IPJaJmH	nHsH	tHhÙ9IPJaJmH	nHsH	tH Å1Ý1323 5Ã5÷5S6l6Â67 8û8%9::):L:Z::::³:´:î:úõðõõëõúõæõõõæáúõúõæáÜáú	gdK#ôgdm}	gd+=gd1«	gd;@¹Ogd1«gdÏaï222ã2å2ê2ì2ô2ö2þ2ÿ235 5Â5Ã5ö5÷5R6S6g6i6j6k6l66£6¥6¦6ª6«66°6¼6¿6Á6Â6çÊ·§·§·§·§·£
z
skdsdsdhKbh993h1«h993H*hKbhTÂhKbhü_mH	sH	hKbh9ämH	sH	hKbh,zZmH	sH	h993hìnUh÷whroPJaJmH	nHsH	tH$hKbhY[PJaJmH	nHsH	tH9jhKbhY[>*B*PJUaJmH	nHphÿsH	tH0hKbhY[>*B*PJaJmH	nHphÿsH	tH$Â6Ã677'7(77¥7?8\88 8ú8û8$9%9&99999â9ä9í9ï9:::ðåð̯}y}yðåð̯iiYNhm}hW($nHtHh+=PJaJmH	nHsH	tHhroPJaJmH	nHsH	tHhìnUh993hìnUnHtHh993nHtHhMh993nHtH$hKbh+=PJaJmH	nHsH	tH9jhKbh+=>*B*PJUaJmH	nHphÿsH	tH0hKbh+=>*B*PJaJmH	nHphÿsH	tHhKbh+=mH	sH	jhKbh+=UmH	sH	::$:&:':(:):-:K:L:U:V:W:X:Y:Z:d:e:~:::
:::::õêßÔßêÐÉź¯ºuguuÅK7hKbh+=haPJaJcHdhUZGmH	nHsH	tHjpçhKbh(CUjhKbh(CUhKbh(C'hKbhü_hacHdhUZGmH	sH	'hKbhÙhacHdhUZGmH	sH	hKbhÙmH	sH	hKbhÑ(-mH	sH	hìnUhKbhTÂhJqhKbh;@¹mH	sH	hKbhµ®mH	sH	hKbh'lmH	sH	hKbh,zZmH	sH	:::²:³:´:º:»:Ã:Ï:í:î:;;;
;;;æÒ¶Ò||qfR