News & Press
IEEE Broadcast Technology Society Seeking Nominees for the Administrative Committee Elections
The AdCom is the governing body of the Society, which administers all of the Society’s affairs. There are 15 at-large members, elected by ballot of the full BTS membership. Five of the seats are open to election each year. The AdCom meets at least two, but no more than four, times a year. AdCom members are expected to attend at least one out of four consecutive meetings to remain in good standing.
Serving on the AdCom is a great opportunity to become more involved in the Society and the industry. If you feel you or a BTS member you know would help us to progress and to serve our members better, please submit a nomination. We encourage our young members and those working in new media technologies to become more involved in the Society.
To submit a nomination for yourself or on behalf of someone else, please use the form at the following link: https://app.smartsheet.
4K for Broadcast: Is it Worth the Expense? Applying the “law of diminishing returns” test
CAGLIARI, ITALY–I expect most people have a general understanding of the law of diminishing returns: the point at which the added benefit is less than the added investment. In engineering we see it all the time, where the next increment of technical improvement is imperceptible to the consumer while the cost of implementing it is significant. In my career I have had to coach engineers to recognize and accept that while we may be able to get another tenth of a dB of noise reduction out of a transmitter, it won't have any appreciable impact on the service to the viewer or listener and therefore may not be worth doing. Engineers, myself included, hate that concept, because it is those small, hard-to-achieve improvements that really exercise our brains and skills. Coming to grips with this can be quite a challenge and requires careful consideration of both the short term and the long term.
At the 2017 International Symposium on Broadband Multimedia Systems and Broadcasting in Cagliari, Italy last week, Dr. Peter Siebert, executive director of the DVB Project in Geneva, Switzerland, presented an interesting keynote address in which he asked a couple of very compelling questions. The first was whether 4K resolution is the broadcast equivalent of the Emperor's New Clothes.

For those not familiar, this Hans Christian Andersen tale is about two tailors who promise to weave a suit for the Emperor that will be invisible to anyone who is unfit for their position, stupid or incompetent. The Emperor is vain enough to believe this claim, and once he is outfitted in these new clothes, he parades before his subjects, all of whom are unwilling to say that they don't see the new suit for fear of being judged unfit, stupid or incompetent. It is only when a child with no fear of being judged proclaims that the Emperor isn't wearing anything that others begin to pick up the cry. Interestingly, in the story, the Emperor suspects the cry is true but continues the parade, probably out of pride or vanity.

THE DISTANCE TEST

So what does this have to do with 4K resolution? We all know that our eyes' ability to perceive resolution on a display is a function of viewing distance relative to the size of the screen and the number of horizontal lines. The rule of thumb for HD is three times picture height, which varies slightly depending on whether the screen is 720 or 1080 lines. For UHD-1, the viewing distance is one and a half times the screen height.

In his presentation, Dr. Siebert cited research conducted at IRT in which UHD content was downconverted to three variations of high definition, upconverted back to UHD, and then shown on a 56-inch UHD display. The researchers ran two versions of the test, comparing the native content on the native-resolution display with the three HD versions upconverted to UHD.
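As a quick back-of-the-envelope sketch of what those rules of thumb mean for the test setup (assuming a standard 16:9 panel; the multipliers are the rules of thumb above, not exact figures from the study), the two rules translate into concrete distances for a 56-inch display:

```python
import math

def screen_height_in(diagonal_in, aspect=(16, 9)):
    """Picture height of a display, derived from its diagonal."""
    w, h = aspect
    return diagonal_in * h / math.sqrt(w * w + h * h)

def viewing_distance_m(diagonal_in, height_multiple):
    """Rule-of-thumb viewing distance: a multiple of picture height, in meters."""
    return height_multiple * screen_height_in(diagonal_in) * 0.0254  # inches -> m

hd_dist = viewing_distance_m(56, 3.0)    # HD rule: 3x picture height, ~2.1 m
uhd_dist = viewing_distance_m(56, 1.5)   # UHD-1 rule: 1.5x picture height, ~1.0 m
```

Notably, the 2.7-meter test position is farther away than even the HD rule-of-thumb distance for a 56-inch screen, which is consistent with the upconverted HD content looking nearly indistinguishable from native UHD at that distance.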
One version was run at the proper viewing distance for a 56-inch display, the other at 2.7 meters (almost 9 feet) from the display. At the proper viewing distance, the 720p and 1080i content was perceived to be more than half a point worse on the ITU five-point quality comparison scale than the native UHD. Even the 1080p content was rated just under half a point worse. However, at the 2.7-meter distance, all three versions of the upconverted HD content showed less than half a point of difference compared with the native UHD content. Considering all of the data presented, Dr. Siebert said, "When comparing UHD-1 resolution with 1080p50 there is a performance improvement of about 0.5 point. This is a statistically relevant, but nevertheless minor improvement when going beyond HD resolutions."

What does this mean for broadcast? Well, we must consider the law of diminishing returns. We know that in order to broadcast UHD we will require more resources, such as channel capacity, with resolution being the largest consumer of extra overhead. We also know that, with very few exceptions, consumers sit farther from their televisions than the proper viewing distance. So the operative question becomes whether the perceived quality improvement, which the IRT data indicates is statistically relevant but small, is worth the resource overhead needed to broadcast the higher resolution.

HDR IN A 1080P WORLD

Before answering this question, it may be important for you and your decision-making team to remember that UHD-1 is a bouquet of capabilities. Obviously more pixels are one of the components, but so are high dynamic range (HDR), wide color gamut (WCG) and high frame rate (HFR). With the exception of resolution, all of these capabilities can be deployed in a 1080p system, which the UHD Forum includes in its UHD Phase A content parameters.
While I suspect HDR and WCG could also be applied to the 720p and 1080i HD formats, I am not aware of anyone showing work in this area. Given this, it is quite understandable why some broadcast organizations are considering a move to 1080p resolution while incorporating HDR and WCG in their plans and letting the upconversion take place in the display. It is important for those of us making the decision to consider all the factors and make a sound choice rather than the vain choice made by the Emperor.
The other interesting question that Dr. Siebert raised had to do with future advancements and the limits of the human visual system. Remember that UHD-1 has the four components of 4K resolution, HDR, WCG and HFR. During his presentation Dr. Siebert showed experimental results indicating that in each of the latter three elements the technology is approaching the limits of our visual systems. That is not to say they are at the limits, but again we have to consider the law of diminishing returns, and for me, this is where it gets interesting.

If I think strictly in terms of conventional broadcasting, then future improvements would seem to be on the wrong end of the scale. The incremental improvements in color space, dynamic range and frame rate delivered to a standard UHD display may indeed be so small as to be imperceptible, and therefore may not be worth the investment in resources.

But what about the future? What will the displays of the future be? Holographic? Will we come up with technologies that allow us to enhance the human visual system, or bypass it entirely and directly stimulate the visual cortex? If so, will we want the content we are creating today to have value to the consumers of that content in the future? And if so, how do we ensure that there is sufficient information available?
Now the law of diminishing returns becomes a little more complex, because part of the equation has to do with the projected long-term value of the content. The rule of thumb I have always applied to content creation is that if the content has long-term value, it should be created at the highest "quality" possible given the available resources. I include metadata in the quality metric because in the future it may be even more important to the value of the content than the essence.

Dr. Siebert noted in his presentation that time did not allow him to address the considerable capabilities and impact of next-generation audio. My own research and experience is that audio is equally important to the consumer's total quality of experience. Remember, in the future, the content we are creating today may be consumed on what is the equivalent of a holodeck, and that metadata will be used to map the 2D essence into 3D space.

Bill Hayes is director of engineering for Iowa Public Television. He can be reached via TV Technology.
World’s TV Engineering Leaders Gather in Italy
Next-Gen TV Promises Immersive, Personalized Audio ATSC 3.0 sets the stage for future-proof sound technology
Now, however, new doors are being opened for content developers to leverage the ATSC 3.0 next-generation audio standards and offer a more immersive sound environment, as well as provide end users with a more personalized sound experience. An exploration of ATSC 3.0 audio standards capabilities demonstrates their impact on how we deploy and engage with new sound systems that are evolving along with rapid advancements in the digital world.
3D IMMERSION & AUDIO OBJECTS
Building upon surround sound that’s laid out in a plane (5.1)—such as with initial surround sound systems like Dolby AC3—ATSC 3.0 audio standards take sound to a full 7.1+4 implementation, meaning seven channels of sound in a plane, one channel for a subwoofer (or the low frequencies), and four channels overhead.
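As a concrete picture of that arrangement, here is a minimal sketch of the 7.1+4 layout as named speaker groups (the channel labels are illustrative shorthand of my own, not drawn from any particular specification):

```python
# 7.1+4: seven ear-level channels, one low-frequency channel, four overhead
LAYOUT_7_1_4 = {
    "plane":  ["L", "R", "C", "Ls", "Rs", "Lrs", "Rrs"],  # seven channels in a plane
    "lfe":    ["LFE"],                                    # subwoofer / low frequencies
    "height": ["Ltf", "Rtf", "Ltr", "Rtr"],               # four overhead channels
}

total_feeds = sum(len(group) for group in LAYOUT_7_1_4.values())  # 12 speaker feeds
```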
At first look, this may just seem like throwing more sound into the mix, but how that sound is delivered is what makes it so unique. Because audio experts have a detailed understanding of how the ear works and how humans perceive sound, the new standards can be used to convey directionality far more effectively. What's more, this can be done not just on fully equipped home theater speaker systems that use every channel, but on something as simple as a sound bar attached to a digital TV. It's even possible to replicate this immersive 3D sound environment using ordinary headphones.
Imagine the sound of raindrops hitting leaves over your head in a scene filmed in a tropical forest, or a helicopter approaching from the side and crossing overhead before moving away from you. The possibilities for sound technicians truly are expansive, and the new ATSC 3.0 standards are designed to scale and accommodate newer, more sophisticated audio scenarios as they emerge, making for a truly immersive 3D, and much more attractive, user experience today and in the future.
While broadcasters will likely take some time to implement the full capability of the new standards (and end users won't be rushing out to purchase advanced sound systems), other aspects of the new audio technologies will probably be used right away, and will ultimately be very impactful.
For example, the coding technology for next-generation audio systems has moved beyond simple channel-based systems. In today's 5.1 implementations, there are five channels of surround sound and one channel for the subwoofer, or low frequencies, with fixed assignments: front left, front right, center, the two rears, and the subwoofer. All sounds fall into these channels. Next-generation audio standards additionally incorporate object audio, whereby audio objects carry sound along with position information and can be moved and placed anywhere in the sound field.
As an example, imagine someone filming a skateboarder running a circuit in a skate park, while the sound tech follows the skater with a joystick that controls the movement of a sound object in three dimensions. The sound will follow the skateboarder around the course, recording a more realistic representation of the sound as it changes with the skater's movements. Because objects can be positioned and moved freely, this opens up many unique and intriguing audio scenarios.
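To make the idea concrete, here is a minimal sketch of what such a sound object might look like on the production side: a named object whose position metadata is a series of joystick keyframes, interpolated at render time. The class and field names are my own illustration, not any codec's actual bitstream format:

```python
from dataclasses import dataclass, field

@dataclass
class AudioObject:
    """An audio object: a sound stem plus time-stamped 3D position metadata."""
    name: str
    positions: list = field(default_factory=list)  # (time_s, x, y, z) keyframes

    def add_keyframe(self, t, x, y, z):
        """Record the joystick position at time t."""
        self.positions.append((t, x, y, z))

    def position_at(self, t):
        """Linearly interpolate position between keyframes (panning at render time)."""
        pts = sorted(self.positions)
        if t <= pts[0][0]:
            return pts[0][1:]
        for (t0, *p0), (t1, *p1) in zip(pts, pts[1:]):
            if t0 <= t <= t1:
                a = (t - t0) / (t1 - t0)
                return tuple(c0 + a * (c1 - c0) for c0, c1 in zip(p0, p1))
        return pts[-1][1:]

# The skateboarder: the operator's joystick writes keyframes as the skater moves
skater = AudioObject("skater")
skater.add_keyframe(0.0, -1.0, 0.0, 0.0)  # enters from the left
skater.add_keyframe(2.0,  0.0, 1.0, 0.0)  # crosses in front of the listener
skater.add_keyframe(4.0,  1.0, 0.0, 0.0)  # exits to the right
```

The renderer can then place the object at any instant, for any speaker layout, rather than being locked to fixed channel assignments.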
Another change enabled by next-generation audio standards is personalization: because the broadcast carries audio objects instead of just channels, viewers can control and choose the objects they want to hear. This enhances the user experience by vastly increasing a viewer's control over audio content. Because you are dealing with objects, it is possible to offer controls that let viewers turn one object up, or another down, based on personal preference. For example, you might broadcast a football game where one object is the home team announcer and another is the visiting team announcer. With next-generation audio systems, it's fairly simple to give viewers control over the audio so they can choose which announcer to listen to. Another scenario might be a visually impaired viewer who "turns up" an object providing audio that describes what is on the television screen (known as descriptive video). Personalized audio is likely to be simpler to implement than full immersive audio (especially in terms of fitting into existing station workflows) and will be very attractive to many viewers.
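A minimal sketch of how receiver-side personalization might work, assuming each object arrives with a default gain and the viewer's preferences are applied on top. The object names and preference structure are hypothetical, purely for illustration:

```python
def personalized_mix(objects, preferences):
    """Apply viewer preferences (enable/disable, gain) to broadcast audio objects."""
    mix = {}
    for name, default_gain in objects.items():
        pref = preferences.get(name, {"enabled": True, "gain": 1.0})
        if pref["enabled"]:
            mix[name] = default_gain * pref["gain"]
    return mix

# Hypothetical football broadcast: each announcer and the description track
# is a separate object the viewer can keep, drop, or rebalance.
broadcast = {
    "crowd": 1.0,
    "home_announcer": 1.0,
    "away_announcer": 1.0,
    "descriptive_video": 0.5,   # quiet by default
}
viewer_prefs = {
    "away_announcer":    {"enabled": False, "gain": 1.0},  # drop the visiting call
    "descriptive_video": {"enabled": True,  "gain": 2.0},  # boost the description
}
mix = personalized_mix(broadcast, viewer_prefs)
```

Because the selection and gains are applied in the receiver, the same broadcast serves every viewer's preferences without extra simulcast channels.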
BUILT FOR THE FUTURE
Technology is always evolving, as are the capabilities of devices. Because it’s understood that evolution will happen, ATSC 3.0 has been developed to gracefully move from what we have today to what will be coming in the future. The need for this is something learned from past experience and incorporated into the entire standard, not just the audio portion. Throughout the entire system, each layer signals to the layer above what technologies will be used. ATSC 3.0 has set the stage for carrying both the old technology and new technology as it comes online—a win-win scenario for all involved in broadcast television and the viewing public at large.
HIGHLIGHTS FROM NAB
In my opinion, one of the primary themes of this year's NAB was ATSC 3.0. ATSC 3.0 is clearly a reality—with demos, products, conference sessions and significant mention in the keynote speeches from FCC Chairman Ajit Pai, NAB's Senator Gordon Smith and Sam Matheny. A well-attended Next Generation TV Hub in the LVCC Grand Hallway demonstrated many new features of the ATSC 3.0 system: better pictures, immersive sound, mobility, gateway devices, targeted ad insertion, audience measurement, emergency alerting, content delivery to automobiles and a broadcast from Black Mountain. The ATSC Pavilion in the Futures Park area of North Hall gave a deeper dive into many ATSC 3.0 technologies and features—including the systems currently being deployed in South Korea for the launch of UHDTV services for the 2018 Olympics.
Dr. Richard Chernock is the Distinguished Lecturer Chair for the IEEE Broadcast Technology Society (IEEE BTS). He is currently Chief Science Officer at Triveni Digital, where he develops strategic directions for monitoring, content distribution and metadata management for emerging digital television systems and infrastructures. Dr. Chernock is active in many ATSC, SMPTE and SCTE standards committees, particularly in the areas of future DTV, monitoring, metadata and data broadcast. He is chairman of the ATSC Technology Group on ATSC 3.0 (TG3) and chairs the AHG on service delivery and synchronization for ATSC 3.0. He was previously chairman of the ATSC Technology and Standards Group (TG1), and before that a Research Staff Member at IBM Research, investigating digital broadcast technologies.