IBC 2016 Conference
- Location: Amsterdam, Netherlands
- Conference date: 8-12 Sept. 2016
- ISBN: 978-1-78561-343-2
- Conference number: IBC2016
- The following topics are dealt with: TV broadcasting; technology advances; television content and production; broadcasting platforms; audiences and advertising; big screen experiences; business transformation; consumer experiences; IP TV; video.
- Session: A Brighter Future: High Dynamic Range TV and wide colour gamut
- Session: Advanced Developments in Dynamic Video Streaming
- Session: Advanced Ideas in Audio Production
- Session: Enhancing the Multi-screen Experience through Synchronisation and Personalisation
- Session: Exploring New Ideas in VR and 360° Immersive Media
- Session: Lessons from Experimental IP Studios
- Session: Making Much More of Metadata
- Session: New Applications of High-Efficiency Video Coding
- Session: Novel Ideas and Cutting Edge Technologies
- Session: Novel Technologies for Assisting Sensory-Impaired Viewers
- Session: Recent Advances in Terrestrial and Mobile Video Broadcasting
- Session: Solutions for Implementing Personalised Advertising
- Session: UHDTV Launches Across the World
-
- Author(s): J.A. Pytlarz ; K.D. Thurston ; D. Brooks ; P. Boon ; R. Atkins
- p. 14 (10 pp.)
The ability to deliver high dynamic range (HDR) and wide colour gamut (WCG) imagery is crucial to next generation broadcast. It is a key feature of both DVB UHD-1 Phase 2 and the latest ITU-R recommendation: BT.2100. While this is an important step towards the creation of broadcast HDR-WCG systems, if HDR-WCG production is to be deployed commercially, it is necessary to use a mix of both conventional standard dynamic range (SDR) and HDR cameras in a single HDR-WCG production. It is also necessary to derive a high quality conventional ITU-R BT.709 (SDR with gamma nonlinearity) programme for regular contribution and transmission. Additionally, it is necessary to cross-map SDR programmes, interstitials and adverts into an HDR-WCG service for transmission. This paper describes the techniques that have been developed to perform these transforms to meet broadcast production standards in real-time. These techniques are built on the experience gained in the creation of the first fifty HDR theatrical releases, as well as trials with HDR broadcast productions. Finally, the operational practices to ensure consistency in HDR-WCG production, high quality programme interchange, and a pleasing viewer experience are examined.
- Author(s): E. François and L. van de Kerkhof
- p. 15 (10 pp.)
The migration from High Definition (HD) TV to Ultra High Definition (UHD) is already underway. In addition to an increase of picture spatial resolution, UHD potentially provides more colour by introducing a wider colour gamut (WCG), and better contrast by moving from Standard Dynamic Range (SDR) to High Dynamic Range (HDR). The transition from SDR to HDR will require distribution solutions supporting some level of SDR backward compatibility. This paper presents the HDR content distribution scheme jointly developed by Technicolor and Philips. The solution is based on a single layer codec design and provides SDR compatibility, thanks to a preprocessing step applied prior to the encoding. The resulting SDR video can be compressed and distributed, then decoded using standard-compliant decoders (e.g. HEVC Main 10 compliant) and directly rendered on SDR displays. Dynamic metadata of limited size are used to reconstruct the HDR signal from the decoded SDR video, using a post-processing that is the functional inverse of the pre-processing. Both HDR quality and artistic intent are preserved. Pre- and post-processing are applied independently per picture, do not involve any inter-pixel dependency, and are codec agnostic.
- Author(s): L. Lenzen
- p. 16 (10 pp.)
High dynamic range (HDR) allows us to capture an enormous range of luminance values within a still image or a sequence of video frames. But many consumers will not have the necessary displays to experience this in the near future. To allow these 'legacy' users to benefit, an adaptation using global tone mapping would be a possible solution. But the results tend to suffer from low subjective contrast and can produce large-area flicker. To overcome these drawbacks, three enhancement steps are proposed. They are based on certain broadcast requirements as well as on viewer preferences, which were surveyed at the beginning of this study. The basic idea is to analyse each luminance value for its relevance in the image and discard unimportant ones. This 'virtual aperture' will be processed across the whole image and on image sections. Finally the tone mapping result will be composed with the chrominance values by using a modified IPT colour space.
- Author(s): M. Pindoria and S. Thompson
- p. 17 (8 pp.)
Until now, most of the research in the field of high dynamic range (HDR) video has centred on the use of non-real-time graded images which have been adjusted to look correct on a known reference screen in a reference environment. For live television, without the luxury of grading, it is important that images captured directly by the camera look correct. So the HDR system's end-to-end opto-optic transfer function (OOTF), which maps the light captured at the camera sensor to the light output from the display, is of paramount importance. Furthermore, it is critical that the artistic intent of the video is preserved when rendered for the viewer with a different screen in a different viewing environment. The authors present results of two subjective tests. The first test determines the most suitable OOTF for a reference environment and display; the second test determines how this transfer function could be adjusted so the high dynamic range video signal can be displayed on a range of different brightness displays whilst maintaining artistic intent.
- Author(s): M.E. Nilsson and B. Allan
- p. 18 (10 pp.)
This paper describes a set of subjective tests that the authors have carried out to assess end-user perception of video encoded with high dynamic range technology when viewed in a typical home environment. Viewers scored individual single clips of content, presented in High Definition (HD) and Ultra High Definition (UHD), in Standard Dynamic Range (SDR), and in High Dynamic Range (HDR) using both the Perceptual Quantiser (PQ) and Hybrid Log Gamma (HLG) transfer characteristics, and presented in SDR as the backwards-compatible rendering of the HLG representation. The quality of HD SDR was improved by approximately equal amounts by either increasing the dynamic range or increasing the resolution to UHD. A further, smaller increase in quality was observed in the viewers' Mean Opinion Scores when both the dynamic range and the resolution were increased, but this was not quite statistically significant.
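For readers comparing the two HDR transfer characteristics tested above, the BT.2100 signal encodings can be sketched directly from the published constants (a minimal Python sketch; function names are illustrative, not from any paper in this session):

```python
import math

# PQ (SMPTE ST 2084 / BT.2100) constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(luminance_cd_m2):
    """Absolute display luminance (0..10000 cd/m^2) -> PQ signal (0..1)."""
    y = max(luminance_cd_m2, 0.0) / 10000.0
    return ((C1 + C2 * y ** M1) / (1 + C3 * y ** M1)) ** M2

# HLG (ARIB STD-B67 / BT.2100) OETF constants
A = 0.17883277
B = 1 - 4 * A
C = 0.5 - A * math.log(4 * A)

def hlg_encode(scene_light):
    """Normalized relative scene light (0..1) -> HLG signal (0..1)."""
    e = max(scene_light, 0.0)
    return math.sqrt(3 * e) if e <= 1 / 12 else A * math.log(12 * e - B) + C
```

Note the different design points: PQ encodes absolute display luminance, while HLG encodes relative scene light, which is why HLG offers the backwards-compatible SDR rendering the test above evaluates.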
Real time cross-mapping of high dynamic range images
A single-layer HDR video coding framework with SDR compatibility
HDR for legacy displays using sectional tone mapping
Image adaptation requirements for high dynamic range video under reference and non-reference viewing conditions
High dynamic range subjective testing
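The global tone mapping that the sectional tone-mapping paper in this session improves upon can be illustrated with the classic extended Reinhard operator (a minimal sketch of that baseline, not the paper's 'virtual aperture' method; names and parameters are illustrative):

```python
def reinhard_tonemap(luminance, white_point=4.0):
    """Global (extended) Reinhard operator: compresses an HDR luminance
    value into [0, 1], mapping `white_point` exactly to 1.0.

    Applied identically to every pixel, which is what makes it 'global'
    and also why it can lose local contrast and flicker frame-to-frame.
    """
    l = max(luminance, 0.0)
    return (l * (1.0 + l / (white_point ** 2))) / (1.0 + l)
```

Because the curve ignores image content, a bright frame following a dark one shifts every pixel at once, which is the large-area flicker the paper's per-region analysis is designed to avoid.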
-
- Author(s): E. Thomas ; M.O. van Deventer ; T. Stockhammer ; A.C. Begen ; M.-L. Champel ; O. Oyman
- p. 22 (8 pp.)
MPEG DASH provides formats that are suitable to stream segmented media content over HTTP. DASH clients follow a client-pull paradigm by adapting their requests based on the available bandwidth and other local resources. This has proven to be easier to deploy over CDN infrastructure than server-push technologies. However, this decentralised nature introduces new challenges such as offering a consistent and higher quality of service for premium users. MPEG is addressing this issue in the to-be-published new MPEG DASH part 5, Server and Network-assisted DASH (SAND). The key features of SAND are asynchronous network-to-client and network-to-network communication of quality-related assisting information. In addition, DASH-IF is further defining interoperable guidelines to optimise SAND deployments in a variety of environments: home network, over-the-top, etc. MPEG is expected to publish SAND by end of 2016 while DASH-IF aims for the course of 2017.
- Author(s): K. Streeter
- p. 23 (6 pp.)
While HTTP adaptive streaming (HAS) technology has been very successful, it also generally introduces a significant amount of live delay as experienced by the end viewer. Multiple elements in the video preparation and delivery chain contribute to live delay, and many of these elements are unique to HAS systems versus traditional streaming systems such as RTSP and RTMP. This paper describes how improvements in the structure of the media, the delivery workflow and the media player can be combined to produce a system that compares well with broadcast. The paper concludes with a preview of advances in delivery technology (such as HTTP2) that will improve the experience even more in the near future.
- Author(s): T. Stockhammer ; I. Sodagar ; W. Zia ; S. Deshpande ; S. Oh ; M.-L. Champel
- p. 24 (9 pp.)
ATSC 3.0 is the next-generation broadcast television suite of around 20 standards including transmission, audio, video, captioning, metadata, watermarking, companion devices, security, and personalization. Among others, ATSC 3.0 uses the MPEG DASH delivery format for broadcast and broadband delivery of media and data. In order to fulfil the use cases and requirements of ATSC 3.0 including broadcast-only, broadband and hybrid (broadcast/broadband) delivery, as well as a multitude of new features, DASH-IF developed a DASH interoperability profile specifically designed for the ATSC 3.0 standard. This profile supports broadcast and broadband delivery, codec signalling (audio, video, subtitle), interactivity & events, metadata, targeted & personalized ad insertion, and advanced security & content protection schemes. By the choice of DASH formats and HTML-5 based applications, ATSC 3.0 services are expected to be consumable not only on vertically integrated devices such as TV sets, but also on other types of devices such as PCs, tablets, game consoles and mobile phones. This paper will introduce the rationales, benefits and opportunities of such an approach and provide an overview of the ATSC 3.0 DASH profile.
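The client-pull adaptation that underlies the DASH papers in this session can be sketched as a simple rate-based heuristic (an illustrative sketch, not any player's actual algorithm; real players also weigh buffer occupancy, and SAND adds network-side assistance):

```python
def choose_representation(bitrates_bps, measured_throughput_bps, safety=0.8):
    """Pick the highest-bitrate representation sustainable at a safety
    fraction of the recently measured throughput.

    `safety` leaves headroom for throughput estimation error; if even the
    lowest tier exceeds the budget, fall back to it rather than stall.
    """
    usable = measured_throughput_bps * safety
    candidates = [b for b in sorted(bitrates_bps) if b <= usable]
    return candidates[-1] if candidates else min(bitrates_bps)
```

For example, with a ladder of 0.5/1/3/6 Mbit/s and 4 Mbit/s of measured throughput, the 80% budget of 3.2 Mbit/s selects the 3 Mbit/s tier.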
Applications and deployments of server and network assisted DASH (SAND)
Improving live performance in HTTP adaptive streaming systems
DASH in ATSC 3.0: bridging the gap between OTT and broadcast
-
- Author(s): S.A.K. Weber and X. Bai
- p. 47 (10 pp.)
The production of media content across several languages and platforms is both time consuming and complex. Microphones, sound booths and arrays of editing software are typically required to generate translated audio tracks. This paper presents a one-stop solution to simplifying this workflow. With a particular focus on the translation of audio tracks contained in video files, this paper describes an innovative workflow that leverages commercialised Text-To-Speech voice synthesis and a prototypical system running in production. This workflow bypasses the need for microphones, video or audio editing software and allows a single editor to generate multiple mixed-gender voice-overs. A lightweight markup language is presented which helps editors to fine-tune synthetic voices. The balance between automation and editorial and linguistic quality will also be examined. The largely positive feedback received from journalists and audiences indicates that the prototype and its underlying language technology have the potential to become part of the multilingual video production process.
- Author(s): S. Nair
- p. 48 (5 pp.)
The idea of immersive mixing is not new. Yet the concept of adapting it to achieve emotional storytelling and audience control can still be pushed forward. Looking at the developments happening in the world of cinema and home audio, the idea of bringing the spectator to engage with the story is now a reality. The idea is to propose various techniques of mixing and audio processing, along with their emotional impact, so that the gap between the creator's artistic intention and the viewer is bridged: rather than having an audience that sees a movie, it is time to have the audience experience the story. It will span from the art of psychological placement of sounds to designed and interpreted spaces that replicate a human emotion when replayed.
- Author(s): L. Whitcomb
- p. 49 (8 pp.)
Independent audio routing, or SDI audio breakaway, is a standard aspect of today's TV production workflow and is functionality that will need to be implemented in IP as the industry transitions away from SDI. Fortunately, the existing AES67 standard for audio over IP meets this objective and eliminates the need for the industry to reinvent the wheel. Not only is there already AES67 equipment deployed in the audio industry, but using this standard also enables significant new workflow opportunities. This paper provides an overview of AES67 and explores how it can be used with SMPTE 2022-6 and VSF TR03 uncompressed video to form a complete solution. AES67 uses the IEEE-1588 PTP timing standard, and combining AES67, IEEE-1588, SMPTE ST 2059 and SMPTE ST 2022-6 using the new VSF Technical Recommendation 04 (TR04) provides a solution for maintaining A/V alignment throughout the production workflow.
Video translation: weaving synthetic voices into the multilingual production workflow
Reverse engineering emotions in an immersive audio mix format
Audio for television: how AES67 and uncompressed 2022/2110/TR03 video fit together
-
- Author(s): N. Andersen
- p. 10 (9 pp.)
In a world where linear television is increasingly losing its relevance, there is a significant need for new ways to engage viewers around television content. This paper presents an extensive new concept for Social Television based on time-indexed comments, enabling viewers to receive comments from their chosen network, displayed when relevant to the content. The application is developed using experiences from research on Social Television, which is also presented in the paper. The result of the study is a crowdsourced annotation technology providing the end users with closer contact with other viewers, and the content providers with more information about the viewing habits of their users. This information can be used to further promote and develop the content to create an even better experience for the users and to increase revenue for the service providers.
- Author(s): P. Lindgren and T. Olsson
- p. 11 (8 pp.)
The shift to multiscreen TV “broke” the social and interactive elements of television viewing. It was thought that through companion and messaging apps the social aspect of TV viewing could be revived. Unfortunately, the delay in delivering Over The Top (OTT) content to different devices and the lack of synchronization between the primary screen and secondary screens quickly became an issue. The second screen has become a frustration rather than an enhanced social experience. For example, because second-screen users may learn who scored a touchdown in a football game, and how, before it happens on their own screen, viewers of live OTT events are forced to log out of social media and messaging platforms. They risk hearing about what is unfolding on someone else's screen before it happens on theirs. This is only part of the problem. There is also little opportunity for real-time social messaging, viewer engagement and shared experiences when audiences are watching the same content on different screens and devices with a time delay ranging from tens of seconds to several minutes. To bring the social element back to live TV viewing, OTT's issue of synchronizing content delivery across all devices and harmonizing this with live linear broadcasts needs to be addressed. This paper outlines the technical challenges in distributing true live OTT over today's Content Delivery Network (CDN) platforms, and why their limitations in streaming live content break real-time social interactivity. It further describes a software-based OTT distribution solution that is optimized for low-delay, synchronized delivery of live content, and outlines the technical differences with today's HTTP-based streaming solutions. The last section of the paper provides examples of real-time social and audience engagement, and what this means for the entire media ecosystem.
- Author(s): M. Barroco
- p. 12 (9 pp.)
Meeting audience expectations is becoming easier for broadcasters with hybrid broadcasting. The advent of transport technologies such as Hybrid Radio and HbbTV (Hybrid broadcast broadband TV) facilitates a wide range of opportunities for custom-made content aggregation, discovery and, ultimately, consumption. In the connected world, editorial teams need to better know their audiences so as to better provide them with targeted content (format, duration, angle). To achieve this, broadcasters need to uniquely identify people across their various devices. As data privacy concerns are paramount, broadcasters need to let users control whether they want to be identified based on a profile, anonymously, or not at all. Also, broadcasters should embark on this strategic pathway in such a way as to avoid vendor lock-in, using open solutions and adopting standard interfaces for data exchange. In designing a system, Authentication, Authorisation, Identity Management, Device Synchronisation, Data Collection, Data Anonymization, Analytics and Recommendation Engines need to be considered as keys to providing customised non-linear TV, Radio and Online channels. The European Broadcasting Union (EBU) and its Public Service Media Members are working to put together a set of standards and technologies to enable broadcasters to offer a simple and smooth personalized user experience on connected devices and ultimately across all their channels. This paper will present the required architectural elements and examples of use cases that help broadcasters embrace the personalised media future that awaits us all.
- Author(s): N. Borch ; F. Daoust ; I. Arntzen
- p. 13 (9 pp.)
Timing is a fundamental property of media experiences. In particular, multi-device scenarios require shared timing to provide engaging, coherent services to users. People increasingly have multiple devices available, and do not limit their usage to one device at a time. The industry caters to these developments in a variety of ways, often with expensive, custom solutions for limited areas, or by targeting short-term issues in ways that lock users in to particular devices and services. This paper discusses some existing solutions, and indicates how they could benefit from shared timing, as provided by the proposed HTML Timing Object and online Shared Motions.
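The shared-timing idea behind the HTML Timing Object can be sketched as a small state vector that every device evaluates locally (an illustrative sketch assuming synchronised clocks, not the proposal's actual API):

```python
import time

class SharedMotion:
    """A deterministic (position, velocity, timestamp) vector.

    Every device holding the same vector computes the same media position
    from its local clock, so playback stays aligned without continuous
    messaging; only vector updates need to be distributed.
    """
    def __init__(self, position=0.0, velocity=0.0, timestamp=None):
        self.p0 = position          # media position at time t0 (seconds)
        self.v = velocity           # playback rate (1.0 = normal speed)
        self.t0 = time.time() if timestamp is None else timestamp

    def query(self, now=None):
        """Current media position, extrapolated from the vector."""
        t = time.time() if now is None else now
        return self.p0 + self.v * (t - self.t0)

    def update(self, position=None, velocity=None, now=None):
        """Apply a new vector (e.g. received pause/seek), rebasing t0."""
        t = time.time() if now is None else now
        self.p0 = self.query(t) if position is None else position
        self.t0 = t
        if velocity is not None:
            self.v = velocity
```

A pause, for instance, is just an update with velocity 0; every device that receives the same vector lands on the same paused position.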
A social experience for online TV
How true, synchronized live OTT can change the second screen and social TV game
Becoming a data-driven broadcaster and delivering a unified and personalised broadcast user experience
Timing - small step for developers, giant leap for the media industry
-
- Author(s): T. Fautier
- p. 28 (13 pp.)
New VR (Virtual Reality) HMDs (Head Mounted Displays) being introduced in 2016 are creating increased demand for VR video content. A growing amount of content - including documentaries, movies and live events - have already been covered in VR video. For the market to truly take off, some standardization is required with regards to the rules of content writing, the content acquisition and stitching methods, and the approach for mapping content for encoding and delivery. In addition, the industry needs to define a unified mechanism to address all of the different ecosystems, ranging from the various VR devices to mobile devices, STBs and connected TVs, to avoid the fragmentation that resulted with 3D and over-the-top (OTT) video delivery. This paper will present reference architectures that can be deployed with existing technology to pave the way for future evolutions of VR.
- Author(s): A. Sheikh ; A. Brown ; Z. Watson ; M. Evans
- p. 29 (9 pp.)
360° video and Virtual Reality are powerful techniques for giving viewers a sense of `Being There' [1], and are becoming increasingly popular. However, giving the viewer the freedom to look around also results in a reduced ability for filmmakers to direct the viewer's attention, a serious impediment to successfully telling a story within a 360° environment. We have created a number of 360° clips, filmed in such a way as to demonstrate and test several unobtrusive techniques for directing a viewer's attention within a 360° panorama. We have evaluated these techniques in a user study in which participants viewed these clips using a head-mounted display. Qualitative and quantitative data from these tests have been analysed to evaluate the effectiveness of the different attention-directing techniques. Qualitative data was also captured to explore the effect of the camera being addressed directly, and the viewers' responses to action occurring at a range of distances.
- Author(s): O. Schreer ; W. Waizenegger ; W. Fernando ; H.K. Arachchi ; A. Oehme ; A. Smolic ; B. Yargicoglu ; A. Akman ; U. Curjel
- p. 30 (10 pp.)
Up until now, TV has been a one-to-many proposition apart from a few exceptions. TV stations produced and packaged their shows and consumers had to tune in at a specific time to watch their favourite show. However, new technologies are changing the way we watch and produce television programs. For example, viewers often use second screen applications and are engaged in lively discussions via social media channels while watching TV. Nevertheless, immediate live interaction with broadcast media is still not possible. In this paper, the latest results of the European funded project ACTION-TV, which is developing novel forms of user interaction based on advanced Computer Vision and Mixed-Reality technologies, are presented. The aim of this research project is to let viewers actively participate in pre-produced live-action television shows. This expands the horizon of the interactive television concept towards engaging television shows. The paper explains the concept, challenges and solutions, resulting in a first prototype real-time demonstrator for novel interactive TV services.
VR video ecosystem for live distribution
Directing attention in 360-degree video
Mixed reality technologies for immersive interactive broadcast
-
- Author(s): F. Poulin ; W. Vermost ; M. De Wolf ; W. De Cuyper ; K. De Bondt
- p. 19 (10 pp.)
The LiveIP project is an exploration of the possibilities and opportunities achievable with today's IP-enabled broadcast technology in a live production environment. The paper gives an overview of the facts and findings from this hands-on project, in order to share it with the broadcasting community, thereby helping to advance knowledge of the current state of the technology and to identify areas where further work is needed. The project has shown that it is possible to build and to operate a live & IP production studio based upon open standards in a multivendor environment.
- Author(s): P. Hobson
- p. 20 (9 pp.)
An all-IP studio production environment opens up the opportunity to create more powerful production tools for the live workflow. Seamlessly integrating software processes into the live environment provides production directors with access to a wider range of applications. These may include inherently non real-time processes such as time reversal, creative features such as content-related graphics, or complex effects generation such as content speed-up/slow-down. In this paper, we use the example of a synthetic slow motion application to show how an inherently non real-time process can be integrated into a live IP production environment. This work was carried out within the context of the AMWA Networked Media Incubator (NMI) project (www.amwa.tv/nmi/). The InSync synthetic slow motion software was demonstrated at a workshop organised by NMI leaders BBC, in which we proved interoperability with a reference system and with other vendors' studio equipment.
- Author(s): T. Kernen and N. Kerö
- p. 21 (10 pp.)
As the industry's transition from SDI-based production to an all-IP studio environment progresses, some of the finer points of a smooth migration, such as time and sync, are capturing more attention from early adopters. Phase and frequency alignment of baseband signals is a critical element in media production. In the IP world, the required functionality is delivered via the IEEE 1588 Precision Time Protocol (PTP) specification. While PTP has enabled many industries to transfer their synchronisation requirements to the IP-centric environment, special care needs to be taken for each specific industry and its specific constraints. This paper draws on the extensive research the authors have carried out on the use of PTP for the media production industry. It summarises their work on areas such as how PTP-aware and non-aware networks behave under load, IP quality of service for PTP messages, and grandmaster redundancy models. It concludes with the impact of design considerations, network architecture constraints and device requirements for a successful all-IP synchronised media production facility.
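The offset computation at the heart of PTP follows from one two-way message exchange, assuming a symmetric network path (an illustrative sketch of the arithmetic, not a full IEEE 1588 implementation):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Compute clock offset and one-way delay from a PTP exchange.

    Timestamps (in seconds):
      t1: master sends Sync            t2: slave receives it
      t3: slave sends Delay_Req        t4: master receives it

    Under the symmetric-path assumption, the slave-to-master offset and
    the one-way delay follow directly; path asymmetry appears as an
    offset error, which is why media networks need PTP-aware switches.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay
```

For example, with a true offset of 5 s and a 2 s one-way delay, the exchange (t1=0, t2=7, t3=10, t4=7) recovers both exactly.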
The VRT sandbox live IP experience
Integrating non real-time software processes into real-time IP-based production
Strategies for Deployment of Accurate Time Information using PTP within the All-IP studio
-
- Author(s): L. Zellan
- p. 6 (8 pp.)
Metadata will be most useful when it has become trivial to collect and, therefore, becomes ubiquitous. Logically, this should happen right from the start of the acquisition process - at the lens. The tools have been there for more than a decade; however, even now there are vast quantities of valuable information that could be obtained during acquisition but are not. This metadata can save significant time and money during production and post production, but only a small percentage of productions take advantage of this. This paper describes the development of /i Technology, a semi-open metadata protocol developed by Cooke Optics that is made available to the industry in an effort to create a standard protocol for gathering and sharing lens data. It also looks at the barriers to adoption and implementation, and efforts to achieve greater standardisation.
- Author(s): S. Kancherla ; R. Warey ; S. Ramki
- p. 7 (8 pp.)
Today's consumers not only have a variety of content choices, they also enjoy the convenience of consuming this content when they want, the way they want. While this has made video viewing increasingly personalized, the ads that play in and around the video are still far from being personalized and relevant. Broadcast networks need to guard against viewer fatigue and loss, while brands need to be increasingly conscious of the ROI on every ad dollar spent. This paper highlights innovation in contextual advertising using an additional metadata layer of in-video context. This approach offers broadcasters the opportunity to sell their video inventory optimally, and brands the opportunity to place their ad in the right program at the right time and place. The result: not only do the opportunities to show an ad increase, but the yield per ad spot also improves considerably. A win-win for both broadcasters and brands.
- Author(s): S. Scheller
- p. 8 (10 pp.)
Lately there have been noticeable traces of a change in consumption patterns in today's society that catalyses various trends in the overall approach to commercialisation and distribution of media. This technical paper identifies whether metadata-enriched content can be the key to effective target audience engagement and content monetization. The study of this topic will reveal whether the implementation of a 2nd screen mobile application could be a novel solution for the media industry to meet the demands of the hedonic, digitalized trends that are currently affecting media consumption and consumer behaviour. Through theoretical and empirical research this paper briefly studies the principles of metadata, the current changes in media, the experiential value of products and the usage of 2nd screening. The graphs included in this technical paper were composed in cooperation with Living Labs, the research division of iMinds, the world's 4th best business accelerator, within a three-month active testing phase of a 2nd screen application called Spott designed by Appiness.
- Author(s): R. Franklin
- p. 9 (10 pp.)
Video content providers, from traditional broadcasters to Internet streaming platforms, face expanding consumption models and the challenge of responding to direct and immediate feedback from viewers. This evolution makes it necessary for content providers to deliver enriched, relevant and valuable content to the individuals consuming it. Metadata is the key to enabling this consumer-driven evolution and is used to enhance the viewing experience by providing support for such features as multiple languages and on-screen graphics. As metadata advances, it enables content providers additional revenue opportunities through defined splice points signalling when local or targeted advertising may be inserted. It also provides video segment identifiers to block the distribution of content across unauthorized distribution channels and during blackout periods. This paper presents cutting-edge technology that helps content providers insert and validate metadata whilst ensuring adherence to the standards. It discusses how content providers can best make the most of metadata to deliver relevant and enriched content to their consumers, and describes the technology available.
Lens metadata: a key link in the production chain and how to capture it
Using metadata to maximize yield and expand inventory in TV - contextual advertising
Metadata enriching technology as the key to effective target audience engagement and content monetization
Using metadata to deliver relevant and valuable content
- Author(s): D. Nandakumar ; S. Kotecha ; K. Sampath ; P. Ramachandran ; T. Vaughan
p. 36 (9 pp.)
Adaptive bitrate streaming is a critical feature in internet video that significantly improves the viewer experience by customizing video stream quality to the viewer device's capability and connectivity. Encoding the source content at multiple quality tiers or bitrates is extremely demanding for post-production houses, studios, and content delivery networks. This paper describes an intelligent multi-bitrate encoder, based on the High Efficiency Video Coding (HEVC)/H.265 standard, that encodes a single title to multiple bitrates with significant performance gains and no compression-efficiency loss compared to standalone single-bitrate encoder instances. We first describe the threading infrastructure of x265, and demonstrate its ability to dynamically adapt to varying degrees of parallelism in hardware. We then describe the key architectural design of a multi-bitrate encoder, including thread-synchronization challenges across encoder instances. We also discuss the analysis data shared across the different quality tiers, which is carefully chosen to eliminate loss of compression efficiency compared to a single-bitrate encoder instance. Finally, we show the high performance gains achieved by the multi-encoder, and demonstrate the feasibility of simultaneous encoding to multiple bitrates with negligible loss of compression efficiency. - Author(s): J. Samuelsson and J. Ström
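The shared-analysis idea in the abstract above can be sketched in a few lines. The functions and the toy rate-to-QP mapping below are illustrative assumptions, not x265's actual API: the expensive per-block analysis runs once, and every bitrate tier reuses its decisions while only the rate control differs.

```python
# Illustrative sketch (not x265's actual API): the costly per-block analysis
# (motion search, partitioning) runs once, and its decisions are reused by
# every bitrate tier.
def analyse(frame_blocks):
    # Stand-in for motion estimation / CU partitioning - the expensive part.
    return [{"block": b, "mode": "inter" if b % 2 else "intra"} for b in frame_blocks]

def encode_tier(analysis, bitrate_kbps):
    # Each tier reuses the shared analysis; only rate control differs.
    qp = max(10, 51 - bitrate_kbps // 200)  # toy rate-to-QP mapping
    return {"bitrate": bitrate_kbps, "qp": qp, "decisions": analysis}

blocks = range(8)
shared = analyse(blocks)                     # performed once
tiers = [encode_tier(shared, b) for b in (1000, 3000, 6000)]
print([t["qp"] for t in tiers])              # [46, 36, 21]
```

The point of the sketch is that `analyse` is called once regardless of how many tiers are produced, mirroring the paper's claim of performance gains without compression-efficiency loss.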
p. 37 (8 pp.)
High Dynamic Range (HDR) video constitutes a new type of video with brighter brights and darker darks compared to conventional video. Recent developments in display technology have made it possible to deliver a more immersive viewing experience by reproducing HDR video. This new video type has caused experts to investigate whether existing compression tools can operate efficiently or whether new tools need to be introduced. In MPEG and VCEG the current state is somewhere in between: existing tools work well with HDR, but adjusting their settings to optimize specifically for HDR video makes it possible to reduce the bit-rate and improve visual quality. This paper will present background information on compression of HDR video and the work on HDR that has been, and is still being, performed in MPEG and VCEG. - Author(s): S.G. Blasi ; M. Naccari ; R. Weerakkody ; J. Funnell ; M. Mrak
p. 38 (8 pp.)
The Turing codec is an open-source software codec compliant with the HEVC standard and specifically designed for speed, flexibility, parallelisation and high coding efficiency. The Turing codec was designed from a completely novel backbone to comply with the Main and Main10 profiles of HEVC, and has many features desirable in practical codecs, such as very low memory consumption, advanced parallelisation schemes and fast encoding algorithms. This paper presents a technical description of the Turing codec as well as a comparison of its performance with other similar encoders. The codec is capable of cutting encoding complexity by an average of 87% with respect to the HEVC reference implementation, for an average coding penalty of 11% higher bit-rate at the same peak signal-to-noise ratio (PSNR).
Efficient multi-bitrate HEVC encoding for adaptive streaming
Conversion and HEVC compression of high dynamic range (HDR) video
The open-source Turing codec: towards fast, flexible and parallel HEVC encoding
- Author(s): O. Grau ; V. Helzle ; E. Joris ; T. Knop ; B. Michoud ; P. Slusallek ; P. Bekaert ; J. Starck
p. 31 (9 pp.)
This paper describes the concepts and results implemented by the European FP7 Dreamspace project. Dreamspace is developing a new platform and tools for collaborative virtual production of visual effects in film and TV and new immersive experiences. The aim of the project is to enable creative professionals to combine live performances, video and computer-generated imagery in real-time. In particular, the project has developed tools allowing on-set manipulation of 3D assets, live integration of video feeds from tracked cameras and live compositing of either CGI content or background plates from panoramic video captured by omnidirectional video rigs. The CGI content is lit by automatically captured studio lighting using a new real-time global illumination rendering system. Furthermore, Dreamspace is investigating the use of omnidirectional video and 3D assets in new immersive user experiences. - Author(s): R. van Brandenburg ; O. Niamut ; A. Veenhuizen ; G.-J. Hoekman
p. 32 (8 pp.)
Crowdsourced Live Mobile Streaming applications, such as Meerkat and Periscope, have seen an explosive growth in their popularity in the past few years. Whereas these applications provide great opportunities for crowdsourcers to directly share their experiences around live events, the unedited nature of these user-generated video streams makes them less suited for enriching news broadcasts or event reports. For such cases, the ability to select and edit streams as they come in, or to communicate with reporters in the field, are primary requirements for any editorial office or newsroom. This paper reports on the design of, and experimentation with, a crowdsourced live mobile streaming system and application for: requesting, receiving, filtering, directing, editing, and broadcasting live video streams from both consumers and professionals. This enables new forms of crowdsourced news gathering. The paper incorporates results from a number of technology validation tests and demonstrations, performed in collaboration with Dutch media partners. - Author(s): L. El Hafi ; M. Ding ; J. Takamatsu ; T. Ogasawara
p. 33 (10 pp.)
This paper introduces a method to estimate gaze direction using images of the eye captured by a single high-sensitivity camera. The purpose is to develop wearable devices that enable intuitive eye-based interactions and applications. Indeed, camera-based solutions, as opposed to commercially available infrared-based ones, allow wearable devices not only to obtain natural user responses from eye movements, but also scene images reflected on the cornea, without the need for additional sensors. The proposed method relies on a model approach to evaluate the gaze direction and does not require a frontal camera to capture scene information, making it more socially acceptable if embedded in a glasses-shaped device. Moreover, recent developments in high-sensitivity camera sensors allow us to consider the proposed method even in low-light conditions. Finally, experimental results using a prototype wearable device demonstrate the potential of the proposed method solely based on cornea images captured from a single camera. - Author(s): M. Evans ; T. Ferne ; Z. Watson ; F. Melchior ; M. Brooks ; P. Stenton ; I. Forrester
p. 34 (8 pp.)
The move towards end-to-end IP between media producers and audiences will make new broadcasting systems vastly more agnostic to data formats and to diverse sets of consumption and production devices. In this world, object-based media becomes increasingly important; delivering efficiencies in the production chain, enabling the creation of new experiences that will continue to engage the audience and giving us the ability to adapt our media to new platforms, services and devices. This paper describes a series of practical case studies of our work in object-based user experiences since 2014. These projects encompass speech audio, on-line news and enhanced drama. In each case, we are working with production teams to develop systems, tools and algorithms for an object-based world: these technologies and techniques enable its creation (often using traditional linear media assets) and post-production; transforming user experience for audiences and production. - Author(s): M. Bugajski
p. 35 (11 pp.)
The Internet of Things (IoT) is emerging as an ecosystem of connected sensors and wearables that communicate with cloud-based intelligence to generate value-added actions at consumer premises, mainly via mobile devices. However, feedback from initial deployments indicates that there are significant barriers for the average consumer in installing the equipment, connecting it to the controllers, and setting up and managing new services. Products and services for IoT in the home (Home IoT) target consumers' premises devices and the connected objects surrounding us. This paper outlines the unique opportunities and challenges that Home IoT and Health IoT will present to existing service providers. The author reviews the general readiness for IoT of the existing networking technologies in our connected homes and analyses the longevity of the current connections to the cloud where the service intelligence will reside.
Dreamspace: a platform and tools for collaborative virtual production
Towards new forms of news gathering through crowdsourced live mobile streaming systems
Gaze tracking using corneal images captured by a single high-sensitivity camera
Creating object-based experiences in the real world
Future of voice control for consumer interactions with internet of things systems: in the context of integration with other services offered by traditional service providers
- Author(s): S. Umeda ; T. Uchida ; M. Azuma ; T. Miyazaki ; N. Kato ; N. Hiruma
p. 25 (8 pp.)
NHK has developed a system for computer generation of Japanese Sign Language (JSL) graphics for meteorological information. As JSL is a different language from Japanese, persons whose first language is JSL have been demanding more TV programmes with sign language in addition to the closed caption services. The JSL CG system automatically generates animations from the telegrams distributed by the Japan Meteorological Agency so that the user can immediately see the latest meteorological information on the Internet. In addition, we are now working on development to adapt the system to the NHK Hybridcast, which is the integrated broadcast and broadband system in Japan. A major issue for the CG animations to be accepted by persons who use JSL for their daily communication is that the automatically generated hand movements of the animated characters connecting the sign language words may seem unnatural. Therefore we have developed a new method for connecting and interpolating between sign language word motions. - Author(s): M. Armstrong
p. 26 (9 pp.)
This paper describes an experimental system that can create good-quality subtitle files for video clips derived from broadcast content. The system is designed to run automatically without the need for human verification. The approach utilises existing metadata sources, an off-air broadcast archive and an archive of original subtitle files, along with audio fingerprinting and speech-to-text technology, to identify the source programme. It then locates the position of the video clip, verifies the match between the video clip and the subtitles, and creates a new subtitle file. This paper also reports on the results of the work using a large corpus of over 7,000 video clips and further, smaller sets of clips from different television genres, and explores where improvements might be made. It also looks at the limitations of the current approach, discussing alternative methods for providing subtitles for video clips. - Author(s): M.N. Simpson ; J. Barrett ; P.J. Bell ; S. Renals
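Once fingerprinting has located the clip's window within the source programme, the remaining step is to rebase the original subtitle cues onto the clip's timeline. The sketch below assumes cues as simple `(start, end, text)` tuples; the paper's actual subtitle formats and matching pipeline are richer.

```python
def retime_subtitles(cues, clip_start, clip_end):
    """Keep cues that overlap [clip_start, clip_end] and rebase them to t=0.

    `cues` is a list of (start, end, text) tuples in programme time; in the
    paper this window would come from audio fingerprinting."""
    out = []
    for start, end, text in cues:
        if end <= clip_start or start >= clip_end:
            continue  # cue falls entirely outside the clip
        out.append((max(start, clip_start) - clip_start,
                    min(end, clip_end) - clip_start, text))
    return out

programme_cues = [(10.0, 12.0, "Hello"), (58.0, 61.0, "Goodbye"), (90.0, 93.0, "Later")]
print(retime_subtitles(programme_cues, 55.0, 85.0))  # [(3.0, 6.0, 'Goodbye')]
```

Cues straddling a clip boundary are clipped rather than dropped, which is one plausible policy; a production system might instead drop partial cues.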
p. 27 (9 pp.)
Latency remains one of the most significant factors (1) in the audience's perception of quality in live-originated TV captions for the Deaf and Hard of Hearing. Once all prepared script material has been shared between the programme production team and the captioners, pre-recorded video content remains a significant challenge - particularly `packages' for transmission as part of a news broadcast. These video clips are usually published just prior to or even during their intended programme - providing little opportunity for thorough preparation. This paper presents an automated solution based on cutting-edge developments in Automatic Speech Recognition research, the benefits of context-tuned models, and the practical application of Machine Learning across large corpora of data - namely many hours of accurately captioned broadcast news programmes. The challenges in facilitating the collaboration between academic partners, broadcasters and technology suppliers are explored, as are the technical approaches used to create the recognition and punctuation models, the necessary testing and refinement required to transform raw automated transcription into broadcast captions and methodologies for introducing the technology into a live production environment.
Automatic production system of sign language CG animation for meteorological information
Automatic recovery and verification of subtitles for large collections of video clips
Just-in-time prepared captioning for live transmissions
- Author(s): A. De Vita ; R. Garello ; V. Mignone ; A. Morello ; G. Taricco
p. 39 (9 pp.)
The article investigates the coverage achievable by three different network configurations for delivering high-quality multicast video services to mobiles: conventional broadcast High Power High Tower (HPHT), mobile cellular Low Power Low Tower (LPLT), and mixed structures. Different spectrum efficiencies, transmitter distances and powers, and receiver characteristics are considered, representing a range of possible network scenarios. The study models generic radio interfaces with parameterised characteristics. Nevertheless, the choice of parameters reflects configurations in use in 4G or currently under discussion for 5G, preferred over the similar- or better-performing DVB-T2 Lite or NGH (Next Generation Handheld) broadcast technologies to facilitate user-terminal implementation (smartphones and tablets). The results clearly indicate that the best solution in terms of Capex/Opex for running the network is the cooperative approach, where most of the rural/suburban coverage is provided by the HPHT network and the LPLT cellular networks are used to complete the coverage, especially in densely populated urban areas. This avoids the installation and operation of thousands of LPLT transmitters, with a very significant reduction in the corresponding network costs. - Author(s): J. Montalban ; P. Angueira ; M. Velez ; Y. Wu ; L. Zhang ; W. Li ; K. Salehian ; S. Laflèche ; S.-I. Park ; J.-Y. Lee ; H.-M. Kim ; Dazhi He ; Yunfeng Guan ; Wenjun Zhang
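The HPHT-versus-LPLT comparison can be illustrated with a toy link-budget calculation. The sketch below uses free-space path loss and invented transmitter figures purely for illustration; the paper's study would rely on terrain-aware propagation models and real network parameters.

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB (a simplification; a real coverage study
    would use terrain-aware propagation models)."""
    return 32.45 + 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz)

def rx_power_dbm(tx_power_dbm, tx_gain_dbi, distance_km, freq_mhz=700):
    # Received power from a simple link budget: EIRP minus path loss.
    return tx_power_dbm + tx_gain_dbi - fspl_db(distance_km, freq_mhz)

# Illustrative numbers only: one HPHT site serving a wide rural area vs one
# LPLT cellular site serving a small urban cell, both in the UHF band.
hpht = rx_power_dbm(tx_power_dbm=70, tx_gain_dbi=12, distance_km=40)
lplt = rx_power_dbm(tx_power_dbm=43, tx_gain_dbi=15, distance_km=2)
print(round(hpht, 1), round(lplt, 1))  # -39.4 -37.4
```

Even with these toy numbers, one tall high-power site delivers a field strength at 40 km comparable to a cellular site at 2 km, which is the intuition behind using HPHT for rural coverage and LPLT to fill in dense urban areas.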
p. 40 (8 pp.)
Single Frequency Networks (SFNs) are considered the optimal network configuration to maximize spectrum efficiency and minimize co-channel interference problems in advanced broadcast network planning. As a matter of fact, they have been widely used in European countries since the dawn of the first Digital Terrestrial Television (DTT) standard, namely DVB-T. Their main advantage is that, provided all transmitters are time- and frequency-synchronized, the same content can be delivered over the whole network while occupying a single RF channel. Nevertheless, local/regional content delivery is still one of the major drawbacks of SFNs. In this paper, Layered Division Multiplexing (LDM) is proposed as the technique that will allow the seamless delivery of local content or targeted advertisements over SFNs. LDM, a spectrum-efficient non-orthogonal multiplexing technology adopted as a baseline technology in the ATSC 3.0 physical layer standard, consists of the superposition of two or more data streams of different power. In this scenario, the LDM upper layer can be used to deliver TDM-ed mobile-HD and 4k-UHD services, whereas the LDM lower layer, with a negative SNR threshold (dB), can reliably provide seamless local coverage/service for each SFN transmitter without coverage gaps. - Author(s): E. Stare ; J.J. Giménez ; P. Klenner
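The power-superposition principle behind LDM can be sketched numerically. The code below assumes unit-power antipodal streams and a 6 dB injection level, both illustrative choices, and checks that the lower layer sits the expected distance in power below the upper layer.

```python
import math, random

def ldm_combine(upper, lower, injection_db):
    """Superpose two unit-power streams; the lower layer is attenuated by the
    injection level (dB) before addition, as in LDM's power-based
    non-orthogonal multiplexing."""
    g = 10 ** (-injection_db / 20)  # amplitude scaling of the lower layer
    return [u + g * l for u, l in zip(upper, lower)]

random.seed(0)
upper = [random.choice((-1.0, 1.0)) for _ in range(1000)]  # robust mobile layer
lower = [random.choice((-1.0, 1.0)) for _ in range(1000)]  # high-rate local layer
combined = ldm_combine(upper, lower, injection_db=6.0)

# The lower layer contributes 6 dB less power than the upper layer:
ratio = sum(u * u for u in upper) / sum((c - u) ** 2 for c, u in zip(combined, upper))
print(round(10 * math.log10(ratio), 1))  # 6.0
```

In a receiver, the robust upper layer is decoded first and cancelled, after which the lower layer (here carrying the local content) is decoded from the residual.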
p. 41 (10 pp.)
A new system concept for DTT, called “WiB”, is presented, where potentially all frequencies within the Ultra High Frequency (UHF) band are used on all transmitter (TX) sites (i.e. reuse-1). Interference, especially from neighbouring transmitters operating on the same frequency and transmitting different information, is handled by a combination of a robust transmission mode, directional discrimination of the receiving antenna, and interference-cancellation methods. With this approach, DTT may be transmitted as a single wideband signal, covering potentially the entire UHF band, from a single wideband transmitter per TX site. Thanks to higher spectrum utilisation, the approach allows for a dramatic reduction in fundamental power/cost and a capacity increase of about 37-60% for the same coverage as current DTT. High-speed mobile reception as well as fine-granularity local services would also be supported, without loss of capacity. The paper also outlines further possible developments of WiB, e.g. doubling the capacity via cross-polar Multiple Input Multiple Output (MIMO), backward-compatible with existing receiving antennas, and adding a second, WiB-mobile, Layered Division Multiplexing (LDM) layer within the same spectrum, either as mobile broadcast or as mobile broadband. - Author(s): T. Stockhammer ; G. Teniou ; F. Gabin
p. 42 (8 pp.)
Video consumption on mobile devices is increasingly popular. Among others, a significant amount of this traffic is TV-centric but is generally delivered over-the-top. 3GPP is addressing the need to migrate TV services to 3GPP-based distribution systems by enhancing LTE Broadcast, developing new codec and service extensions, and providing solutions to fulfil TV-centric requirements. This document presents the latest developments in 3GPP standards from the points of view of an operator, an end-device manufacturer and a network manufacturer.
Mobile and broadcast networks cooperation for high quality mobile video: a win-win approach
Local content delivery in SFNs using layered division multiplexing (LDM)
WiB: a new system concept for digital terrestrial television (DTT)
3GPP-based TV service layer
- Author(s): S. Pham ; K. Hughes ; T. Lohmar
p. 43 (13 pp.)
Today, with the W3C HTML5 premium media extensions MSE (Media Source Extensions) and EME (Encrypted Media Extensions), adaptive streaming formats such as MPEG DASH enable delivery of media content to many devices. Even televisions and set-top boxes are adding Internet connections, and support HTML5 for GUI rendering and media processing. From a commercial point of view, dynamic ad insertion plays a crucial role. For broadcasters, cable and IPTV operators, content owners and advertisers, new opportunities open up as advertisements can be personalized and delivered to any device. Complex signalling and backend systems for ad decisioning have been built over the years, and integrating with them is imperative for industry adoption of any new technology. We evaluate existing dynamic ad insertion techniques and present solutions for interoperable ad insertion using MPEG DASH for HTML5-based platforms, and its integration with the existing advertisement ecosystem. We have extended the open-source DASH-IF reference player “dash.js” with mechanisms for ad insertion. These mechanisms are based on the DASH-IF Interoperability Points guidelines and some of the recent SCTE work. We present different ad insertion workflows and highlight the flexibility that can be achieved with state-of-the-art technologies and standards. - Author(s): L. Bringuier
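A multi-Period splice of the kind used in DASH ad insertion can be sketched as follows. The element names follow the MPD schema, but the `build_mpd` helper, its parameters and the period ids are hypothetical simplifications of what a dash.js-based workflow would actually consume; a real MPD also carries AdaptationSets, Representations and segment addressing.

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch: splice an ad Period between two content Periods in a
# DASH MPD. Real workflows resolve the ad Period's content from an ad
# decision server; here the ad Period is just a placeholder element.
def build_mpd(content_periods, ad_after_index, ad_duration="PT30S"):
    mpd = ET.Element("MPD", {"type": "static",
                             "xmlns": "urn:mpeg:dash:schema:mpd:2011"})
    for i, pid in enumerate(content_periods):
        ET.SubElement(mpd, "Period", {"id": pid})
        if i == ad_after_index:
            ET.SubElement(mpd, "Period", {"id": f"ad-{i}",
                                          "duration": ad_duration})
    return mpd

mpd = build_mpd(["main-1", "main-2"], ad_after_index=0)
print([p.get("id") for p in mpd.findall("Period")])  # ['main-1', 'ad-0', 'main-2']
```

Because each Period is self-contained, an HTML5 player can switch sources at the Period boundary, which is what makes multi-Period MPDs a natural vehicle for ad insertion.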
p. 44 (9 pp.)
Advertising is an integral part of premium broadcasters' and video content providers' monetization strategies. To optimize revenue opportunities, this advertising needs to be personalized accurately for the user and delivered so that it does not impair the quality of a viewer's experience. The industry is moving towards server-side advertising insertion (SSAI). With SSAI, a single uninterrupted stream containing both program and commercial content is delivered at a consistent quality, and commercials are personalized for each individual stream at the moment of delivery. This paper examines the architectures required to achieve SSAI at scale so that thousands or even millions of concurrent, individually-tailored advertising manifests can be delivered in a timely fashion, even for live streamed events. To achieve this scale, cloud and cloud-assisted software solutions are required. This paper assesses the effectiveness of this approach and the long-term practicalities of delivering targeted and secure dynamic advertising with high quality of experience. - Author(s): M. Smith
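Manifest-level stitching, the core of SSAI, can be sketched for an HLS media playlist. The `stitch` helper and segment names below are invented for illustration; production systems also rewrite timestamps, handle encryption, and fire ad-tracking beacons.

```python
# Minimal sketch of server-side ad insertion into an HLS media playlist:
# ad segments are spliced in per request, separated by discontinuities, so
# the viewer receives a single uninterrupted stream.
def stitch(content_segments, ad_segments, break_index):
    lines = ["#EXTM3U", "#EXT-X-VERSION:3", "#EXT-X-TARGETDURATION:6"]
    for i, seg in enumerate(content_segments):
        if i == break_index:
            lines.append("#EXT-X-DISCONTINUITY")
            for ad in ad_segments:          # personalised per viewer
                lines += ["#EXTINF:6.0,", ad]
            lines.append("#EXT-X-DISCONTINUITY")
        lines += ["#EXTINF:6.0,", seg]
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)

playlist = stitch(["c1.ts", "c2.ts"], ["ad1.ts"], break_index=1)
print("#EXT-X-DISCONTINUITY" in playlist)  # True
```

Because the splice happens in the manifest served to each viewer, client-side ad blockers see only ordinary media segments, which is one of the scale and robustness arguments made for SSAI.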
p. 45 (5 pp.)
HTTP adaptive streaming (HAS) has become the de facto mode of delivering content for over-the-top (OTT) video services, both live and on-demand. As these services grow and mature, so does the need for viable, robust capabilities for monetization and personalization. These capabilities (and more) are driving many programmers and broadcasters to look to dynamic ad insertion (DAI) to help drive additional revenue streams on the plethora of devices now capable of streaming. This paper and presentation will explore some of the key considerations related to OTT DAI. First, what are the fundamentals of ad stitching, and what are the differences between legacy client-side ad insertion (CSAI) and today's server-side ad insertion (SSAI)? How can those who deploy OTT services defeat the very real concern posed by ad blockers and reach the audiences that seek their content? This paper will also look at IAB standards (e.g., VAST/VPAID) and how OTT adverts can now deliver national/local/regional payloads within ad pods, essentially mimicking the broadcast world and making true monetization a reality for content creators who have long looked at OTT and broadcast as separate worlds. Finally, we will discuss the implications of personalization and true user targeting, which are now a reality for OTT services using a variety of data sources (GPS, postal code, IP tables). These capabilities represent a great advance for OTT and the granular experiences it can provide: service providers can deliver adverts that are more relevant to the viewer, OTT becomes more impactful as an overall experience, and ad rates (and revenue) will likely rise over time. This will also include interactive overlays, where advertisers can now provide viewers on connected devices with tangible elements (coupons) they can redeem. These new capabilities help to cement relevant experiences for the viewer and audience and increase revenue opportunities for OTT providers. - Author(s): T. Levy
p. 46 (8 pp.)
Video advertising is still one of the leading approaches with which content and service providers can monetize video content. Combine that with the common-sense approach of using open standards, and surprisingly you find yourself facing a problem. The two main open standards for providing dynamic video advertising are SCTE-130 and IAB VAST. However, SCTE-130 was developed with service providers and traditional North American broadcast cable in mind, and IAB VAST was developed for web-based advertising. Neither of these advertising standards is ideal for the entire range of devices. This paper presents a novel VAST server-side solution in which a video streaming server acts as a VAST client fronting the client device, and merges the main video and the advertising returned from a VAST server into a single video stream. This solution combines the advantages of both options above and thus allows ad and content providers to address the full set of devices with a single platform to operate.
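The server-acting-as-VAST-client idea can be sketched as follows. The sample response and `media_url` helper below are simplified assumptions; a real VAST document carries tracking events, wrappers and multiple creatives, and the server would go on to transcode and splice the chosen media file into the main stream.

```python
import xml.etree.ElementTree as ET

# Simplified VAST 3.0-style response (real responses carry many more nodes).
VAST = """<VAST version="3.0"><Ad><InLine><Creatives><Creative><Linear>
<MediaFiles><MediaFile type="video/mp4">http://ads.example/spot.mp4</MediaFile>
</MediaFiles></Linear></Creative></Creatives></InLine></Ad></VAST>"""

def media_url(vast_xml, mime="video/mp4"):
    """Act as the VAST client on the server side: pick a media file that the
    streaming server can splice into the main video."""
    root = ET.fromstring(vast_xml)
    for mf in root.iter("MediaFile"):
        if mf.get("type") == mime:
            return mf.text.strip()
    return None

print(media_url(VAST))  # http://ads.example/spot.mp4
```

Keeping the VAST exchange on the server means the client device never needs to understand VAST at all, which is the interoperability advantage the abstract claims.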
Implementing dynamic ad insertion in HTML5 using MPEG DASH
Increasing ad personalization with server-side ad insertion
Best practices for OTT dynamic ad insertion
Advantages and challenges of a VAST server-side video advertising solution
- Author(s): A. Beale ; S. Jones ; I. Wallace
p. 1 (10 pp.)
The increasing availability of Ultra HD displays in the consumer market, combined with the acquisition of exclusive European football television rights, created the ideal opportunity for BT to launch Europe's first live sports Ultra HD channel: BT Sport Ultra HD. This ground-breaking new channel went live on 2nd August 2015 and has since broadcast a wide range of sports, including Premier League and Champions League football, European and domestic rugby, PSA squash, NBA basketball and the world's largest UHD OB to date, the British Moto GP from Silverstone. The technology for live UHD production was in its infancy when BT started the project. The first productions used separate UHD and HD production units, but as the season developed, increasing numbers of cameras have been shared. This paper will describe how new technical approaches and production values have enabled several broadcasts to be delivered using a single Outside Broadcast (OB) truck delivering all the UHD, HD and SD requirements. This paper also describes the many innovations introduced in the end-to-end ecosystem for the capture and delivery of Ultra HD, including contribution networking, playout technology, HEVC encoding, broadband distribution and the set-top box. - Author(s): S. D'Agostini ; R. Alocci ; A. Alquati ; C. Benzi ; B. Mari ; S. Rebechi
p. 2 (12 pp.)
The ceremony of the Opening of the Holy Door, celebrated by Pope Francis at the Vatican on 8 December 2015, has been the largest event produced exclusively in UltraHD 4k and distributed worldwide live via satellite in UltraHD 4k and HD. This paper describes the challenges faced by CTV (Centro Televisivo Vaticano) and its partners in the implementation of the UltraHD 4k production and distribution of the event to the entire world. Several decisions and choices had to be made in the design of the technical infrastructure, while seeking to maximize the reach of the signal, ensure the most reliable transmission, and bring into play a technology that was not entirely mature at the time of the event. - Author(s): H. Kamata ; H. Kikuchi ; P.J. Sykes
p. 3 (8 pp.)
Interest in High Dynamic Range (HDR) for live broadcasting continues to increase. Well publicised trials completed within the past year have proven that Ultra High Definition (UHD) images with HDR can be captured, delivered and displayed to viewers on HDR-capable TV screens (1)(2). A number of broadcast organisations are now moving to the next phase of development, drawing up plans to implement permanent on-air UHD services including HDR. In most cases, the new services will be delivered alongside existing High Definition programming and in many cases a simultaneous Standard Dynamic Range (SDR) feed at 3840 × 2160 resolution will be required. This paper will examine the technical and operational challenges presented. An example of a production infrastructure designed from first principles to overcome these challenges is provided. - Author(s): S. Hara ; A. Hanada ; I. Masuhara ; T. Yamashita ; K. Mitani
p. 4 (10 pp.)
NHK started the world's first test satellite broadcasting of 8K/4K ultra-high-definition television (UHDTV), called Super Hi-Vision (SHV), on August 1, 2016. Coverage of the Rio Olympic Games and many fascinating 8K and 4K programmes were broadcast to 8K prototype receivers installed at all of NHK's local stations in Japan. This paper describes the SHV test broadcast system, especially the programme play-out and transmission system and the 8K/4K receivers. In addition, it describes the progress made so far in developing a full-featured 8K system with a 120-Hz frame frequency. - Author(s): T. Fautier
p. 5 (14 pp.)
Broadcasters and service providers are preparing for the launch of Ultra HD (UHD) using the upcoming DVB UHD-1 Phase 2 specification. This shift will be the biggest change that broadcasters have faced since the launch of HD. With this move comes a new delivery specification, UHD-1 Phase 2, which will include Wide Colour Gamut (WCG), High Dynamic Range (HDR), High Frame Rate (HFR) and Next-Generation Audio (NGA), bringing the quality of the UHD experience to an entirely new level. This paper will describe how content can be created to accommodate the new specification and will provide reference architectures that are planned for deployment. The presentation will also highlight the work done by the Ultra HD Forum for the first commercial deployments of UHD-1 Phase 2 in 2016.
BT Sport Ultra HD - Europe's First Ultra High Definition Television Sports Channel
Opening of the holy door by Pope Francis: first worldwide live distribution via satellite of 4K UHD pictures and HDR HLG test
Real-world live 4K ultra HD broadcasting with high dynamic range
Celebrating the launch of 8K/4K UHDTV satellite broadcasting and progress on full-featured 8K UHDTV in Japan
UHD for broadcast and the DVB UHD-1 Phase 2 standard