BTS President Bill Hayes was busy this morning at IBC 2017 in Amsterdam, moderating a panel session in the Content Everywhere Hub focused on "How to Design a Successful OTT Content Service." (Panel Video)

Register NOW to attend the 2017 IEEE Broadcast Symposium on October 10-12 in Arlington, VA and hear from industry experts on the following HOT topics!
•ATSC 3.0 Technical Deep-dive
•Practical Implementation Aspects of ATSC 3.0
•Repacking the U.S. TV Band
•Datacasting and Hybrid Broadcasting
•AM and FM Broadcasting
•Using Drones for Broadcast Engineering and Production
To see the full conference agenda, visit …/technical-program.html

2017 IEEE Broadcast Symposium Keynote Speaker James Snyder, Library of Congress -
National Audio Visual Conservation Center (NAVCC)
Motion Picture, Broadcasting & Recorded Sound Division (MBRS)

MEDIA ADVISORY: James Snyder, Senior Systems Administrator for the Library of Congress’ National Audio-Visual Conservation Center (NAVCC) and an Emmy award-winning digital media engineering, data and media archiving, preservation, production and project management specialist, will be a Keynote Speaker at the 2017 IEEE Broadcast Symposium.


WHAT: James Snyder, Senior Systems Administrator for the Library of Congress’ National Audio-Visual Conservation Center (NAVCC) and an Emmy award-winning digital media engineering, data and media archiving, preservation, production and project management specialist, will be a Keynote Speaker at the 2017 IEEE Broadcast Symposium. James administers the overall technical infrastructure for audio, video and film preservation and digitization technologies, including long-term planning and implementation, long-term data preservation planning and implementation, technology services to the United States Congress and organizations on Capitol Hill, as well as standards participation and technology liaison with media content producers worldwide.

With 37 years of industry experience, James has handled projects for commercial, non-commercial and government organizations including MCI, Verizon, Intelsat, PBS, ABC, the Advanced Television Test Center, Fox News, Reuters and Discovery. He has also consulted on projects for Sarnoff Corporation, Turner Engineering, CBS, NBC, ABC, Fox, the News Corporation, FedNet and multiple agencies of the U.S. Federal government. He has worked on key projects leading to the ATSC digital television standard, the HD Radio digital radio standard, the AXF Archive eXchange Format (SMPTE 2034) standard, and Unity Motion, the first commercial consumer HDTV satellite service. James teaches analog and digital audio, video, television transmission and engineering basics to industry professionals through many public and private organizations.

Register at

WHERE: 2017 IEEE Broadcast Symposium at the Key Bridge Marriott in Arlington, Virginia

WHEN: Tuesday, October 10th through Thursday, October 12th

ABOUT BTS: The IEEE Broadcast Technology Society (BTS) is a technical society and council dedicated to advancing electrical and electronic engineering by maintaining scientific and technical standards, as well as educating its members through various meetings, presentations, events, conferences, and training programs.

Follow IEEE BTS:


Special Issue on Quality of Experience for Advanced Broadcast Services


With the rise of second-screen adoption and the increase of real-time news consumption via social channels, the broadcast landscape has undergone a major transformation in recent years: viewers have begun to demand highly customized experiences that meet their individual needs. Besides traditional terrestrial/satellite/cable broadcast, global service providers have begun to offer fixed/mobile advanced media delivery on the customer premises, enabling consumers to enjoy the emergence of new services offered by IPTV, 3DTV, SU/U-HDTV, advancements in cloud services, and over-the-top (OTT) content providers. Moreover, new technologies such as multisensorial media, augmented reality, holographic screens and the proliferation of connected devices through the Internet of Things (IoT) could create an immersive environment that will enrich a rapidly growing array of customer experiences and become the next frontier of advanced broadcast services. To this end, there is a need to evaluate the level of enhancement of these experiences and to compare their functionalities and requirements, so that operators can properly design their networks and regulators can assess the services offered to the audience. This special issue seeks original high-quality papers on QoE for advanced broadcast services. Topics of interest include, but are not limited to:

● QoE for Emerging Technologies: multisensorial media; Hi-Fi/spatial/3D audio quality; stereo/multiview video quality; high resolution/dynamic-range/frame-rate imaging; light-field imaging; holographic imaging; quality in immersive environments (virtual/augmented/mixed realities); IoT and Emotional TV.

● QoE for Mobile Broadcasting: quality evaluation for mobile devices; adaptive media streaming; impact of viewing conditions, context, device properties and user behaviour.

● QoE for Web and Social Media Broadcasting.

● Big data QoE analytics: media streaming platforms; crowdsourcing studies.

● QoE-based network and service management: KPI and KQI definition for QoE optimization in emerging environments; control, monitoring and management strategies; inter-operator QoE-oriented

● QoE-driven processing, compression and transmission technology.

● QoE Fundamentals: understanding experience and quality formation; quality vs. user satisfaction vs. acceptance; long-term quality measurement; physiological QoE assessment.

● Reproducible QoE Research: benchmarking and certification; automation tools for subjective quality assessment; multimedia quality databases; testing conditions and methods; standardization efforts and recommendations for quality targets for TV, desktop, and mobile viewing use cases.

Prospective authors should visit information-for-authors.html for information on paper submission. Manuscripts should be submitted at  When submitting your paper, please choose Special Issue Manuscripts and the title Quality of Experience for Advanced Broadcast Services. All papers will be peer reviewed according to the standard IEEE process.

Important Dates:

Manuscript submission due: December 1st, 2017
First review completed: March 1st, 2018
Revised manuscript due: April 15th, 2018
Second review completed: June 1st, 2018
Final manuscript due: July 15th, 2018
Publication date: September 2018

Guest Editors:

Maurizio Murroni, University of Cagliari, Italy
Reza Rassool, RealNetworks, USA
Rafael Sotelo, University of Montevideo, Uruguay
Li Song, Shanghai Jiao Tong University, China


AdCom Call for Nominations

The IEEE Broadcast Technology Society is seeking nominees for the Administrative Committee (AdCom) election this fall. Any member of the BTS in good standing is eligible for election to the AdCom. Elected officers will begin their three-year term on January 1, 2018.

The AdCom is the governing body of the Society, which administers all of the Society’s affairs. There are 15 at-large members, elected by ballot of the full BTS membership.  Five of the seats are open to election each year. The AdCom meets at least two, but no more than four, times a year. AdCom members are expected to attend at least one out of four consecutive meetings to remain in good standing.

Serving on the AdCom is a great opportunity to become more involved in the Society and the industry. If you feel you or a BTS member you know would help us to progress and to serve our members better, please submit a nomination. We encourage our young members and those working in new media technologies to become more involved in the Society.

To submit a nomination for yourself or on behalf of someone else, please use the form in the link.
If you have any questions or need more information, please contact me by e-mail at  All nominations should include a brief bio (space on form) that includes current and past responsibilities; memberships and offices held; education, certifications, and other credentials; and publications, patents, and other achievements we should consider. Nominees must confirm their willingness to stand for election.  We must receive your nominations by September 15, 2017.

CAGLIARI, ITALY–I expect most people have a general understanding of the law of diminishing returns: the point at which the level of benefit gained is less than the amount invested. In engineering we see it all the time, where the next level of incremental technical improvement is imperceptible to the consumer while the cost of implementing it is significant. In my career I have had to coach engineers to recognize and accept that while we may be able to get another tenth of a dB of noise reduction in a transmitter, it won’t have any appreciable impact on the service to the viewer or listener and therefore may not be worth doing. Engineers, myself included, hate that concept, because it is those small, hard-to-achieve improvements that really exercise our brains and skills. Coming to grips with this can be quite a challenge and requires careful consideration in both the short term and the long term.


At the 2017 International Symposium on Broadband Multimedia Systems and Broadcasting in Cagliari, Italy last week, Dr. Peter Siebert, executive director of the DVB Project in Geneva, Switzerland, presented an interesting keynote address in which he asked a couple of very compelling questions. The first was whether 4K resolution is the broadcast equivalent of the Emperor’s New Clothes.

For those not familiar, this Hans Christian Andersen tale is about two tailors who promise to weave a suit for the Emperor that will be invisible to anyone who is unfit for their position, stupid or incompetent. The Emperor is vain enough to believe this claim, and once he is outfitted in his new clothes, he parades before his subjects, all of whom are unwilling to say that they don’t see the new suit for fear of being judged unfit, stupid or incompetent. It is only when a child with no fear of being judged proclaims that the Emperor isn’t wearing anything that others begin to pick up the cry. Interestingly, in the story, the Emperor suspects that the cry is true but continues the parade, probably out of pride or vanity.

THE DISTANCE TEST

So what does this have to do with 4K resolution? We all know that our eyes’ ability to perceive resolution on a display is a function of the proper viewing distance for the size of the screen and the number of horizontal lines. The rule of thumb for HD is three times picture height, which varies slightly depending on whether the screen is 720 or 1080 lines. For UHD-1, the viewing distance is one and a half times the screen height.

In his presentation, Dr. Siebert described research conducted at IRT in which UHD content was downconverted to three variations of high definition, upconverted back to UHD, and then shown on a 56-inch UHD display. The researchers ran two versions of the test, comparing the native content on the native-resolution display with the three HD versions upconverted to UHD. One version was run at the proper viewing distance for a 56-inch display and the other at 2.7 meters (almost 9 feet) from the display.

At the proper viewing distance, the 720p and 1080i content was perceived to be more than half a point worse than the native UHD on the ITU five-point quality comparison scale. Even the 1080p content was judged just under half a point worse. However, at the 2.7-meter distance, all three versions of the upconverted HD content showed less than half a point of difference compared with the native UHD content. Considering all of the data presented, Dr. Siebert said, “When comparing UHD-1 resolution with 1080p50 there is a performance improvement of about 0.5 point. This is a statistically relevant, but nevertheless minor improvement when going beyond HD resolutions.”

What does this mean for broadcast? Well, we must consider the law of diminishing returns. We know that broadcasting UHD requires more resources, such as channel capacity, with resolution being the largest consumer of the extra overhead. We also know that, with very few exceptions, consumers sit farther from their televisions than the proper viewing distance. So the operative question becomes whether the perceived quality improvement, which the IRT data indicates is statistically relevant but small, is worth the resource overhead needed to broadcast the higher resolution.

HDR IN A 1080P WORLD

Before answering this question it may be important for you and your decision-making team to remember that UHD-1 is a bouquet of capabilities. Obviously more pixels is one of the components, but so are high dynamic range (HDR), wide color gamut (WCG) and high frame rate (HFR). With the exception of resolution, all of these capabilities can be deployed in a 1080p system, which the Ultra HD Forum includes in its UHD Phase A content parameters. While I suspect HDR and WCG could be applied to 720p and 1080i HD formats, I am not aware of anyone showing work in this area. Given this, it is quite understandable why some broadcast organizations are considering moving to 1080p resolution while incorporating HDR and WCG in their plans and allowing the upconversion to take place in the display. So it is important for those of us making the decision to consider all the factors and make a sound choice rather than the vain choice made by the Emperor.
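For reference, the viewing-distance rules of thumb cited above (three times picture height for HD, one and a half times for UHD-1) are easy to work out for a given screen. The sketch below assumes a 16:9 display and is merely illustrative; the function name is my own, not from any standard:

```python
import math

def optimal_viewing_distance(diagonal_in, height_multiple, aspect=16 / 9):
    """Rule-of-thumb viewing distance as a multiple of picture height.

    diagonal_in     -- screen diagonal in inches
    height_multiple -- ~3.0 for HD, ~1.5 for UHD-1 (the rules cited above)
    Returns the distance in meters.
    """
    # Picture height from the diagonal and the aspect ratio (width/height)
    height_in = diagonal_in / math.sqrt(1 + aspect ** 2)
    return height_multiple * height_in * 0.0254  # inches -> meters

# The 56-inch display used in the IRT tests
hd_dist = optimal_viewing_distance(56, 3.0)   # roughly 2.1 m for HD
uhd_dist = optimal_viewing_distance(56, 1.5)  # roughly 1.0 m for UHD-1
```

Note that the 2.7-meter test distance in the IRT study is well beyond either figure, which is exactly the point: so is the typical living-room couch.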


The other interesting question Dr. Siebert raised had to do with future advancements and the limits of the human visual system. Remember that UHD-1 has the four components of 4K resolution, HDR, WCG and HFR. During his presentation Dr. Siebert presented experimental results indicating that in each of the latter three elements the technology is approaching the limits of our visual systems. That is not to say they are at the limits, but again we have to consider the law of diminishing returns, and for me this is where it gets interesting.

If I think strictly in terms of conventional broadcasting, then future improvements would seem to be on the wrong end of the scale. The incremental improvements in color space, dynamic range and frame rate delivered to a standard UHD display may indeed be so small as to be imperceptible, and therefore may not be worth the investment in resources.

But what about the future? What will the displays of the future be? Holographic? Will we come up with technologies that allow us to enhance the human visual system, or bypass it entirely and directly stimulate the visual cortex? If so, will we want the content that we are creating today to have value to the consumers of that content in the future? And if so, how do we ensure that there is sufficient information available?


Peter Siebert

Now the law of diminishing returns becomes a little more complex, because part of the equation has to do with the projected long-term value of the content. The rule of thumb I have always applied to content creation is that if the content has long-term value, it should be created at the highest “quality” possible given the available resources. I include metadata in the quality metric because in the future it may be even more important to the value of the content than the essence.

Dr. Siebert noted in his presentation that time did not allow him to address the considerable capabilities and impact of next-generation audio. My own research and experience is that audio is equally important to the consumer’s total quality of experience. Remember, in the future the content we are creating today may be consumed on what is the equivalent of a holodeck, and that metadata will be used to map the 2D essence into 3D space.

Bill Hayes is director of engineering for Iowa Public Television. He can be reached via TV Technology.

CAGLIARI, ITALY—More than 100 of the best minds in the field of television engineering have gathered here for the 2017 Institute of Electrical and Electronics Engineers (IEEE) Broadband Multimedia Systems and Broadcasting (BMSB) conference. The June 7-9 event is designed to stimulate the interchange of ideas in the areas of video content capture, processing and distribution technologies among all areas and levels, including industry professionals, broadcasters, content creators and distributors, academics and engineering students.

Maurizio Murroni

The conference got underway with a welcoming address from Maurizio Murroni, assistant professor of communications in the University of Cagliari’s department of electrical and electronic engineering, who said his organization was pleased both to host the BMSB event and to see the high attendance at the conference, given the difficulties sometimes associated with long-distance travel in today’s world.


“We are happy to host people and are proud of our traditions,” said Murroni. “We have been a bridge of communication for several centuries and this [the island of Sardinia] is a good place for a conference on communications.”


Richard Chernock

Triveni Digital CTO Richard Chernock led off the technical proceedings with an update on the ATSC 3.0 digital television transmission standard, remarking that it was nearly complete.

“At the stage we’re at now, most of the standard is finished,” said Chernock, who is also chairman of the ATSC Technology Group. “We’re still tweaking a few elements, but the bottom line is that ATSC 3.0 is mostly completed.”

Chernock acknowledged the rollout of ATSC 3.0 in Korea and described some of the features embodied in the standard and how these will help broadcasters better serve their audiences. His presentation is one of several on the BMSB program agenda involving ATSC 3.0.

Chernock’s remarks were followed by a session from Alberto Messina, senior research engineer and research and development coordinator at Italy’s RAI Centre for Research and Technological Innovation, on ways to make content more accessible to consumers.

“Today the situation is very fragmented, as the signals come not only from satellite, but also [terrestrial] broadcasters and the Internet,” said Messina. “The problem is too much content; consumers are drowning in it. It’s difficult for them to find something they want to view. Our work is to make sure that every consumer’s needs are satisfied.”

Alberto Messina

Messina’s remarks provided a lead-in to three days of nearly continuous presentations accomplished in three separate tracks in order to handle the volume of the information exchange at the conference.


Topics on the agenda at this year’s BMSB event include video coding, light field imaging, signal processing and modulation technologies, the convergence of broadcast and broadband, use of increasingly higher frequencies in satellite communications, emergency alerting technologies, transmission system field trials, energy-saving and spectrum-conserving technologies, and more.  

The 2017 BMSB event marks the 12th anniversary of the conference. It has been held in the United States, Korea, Japan, Spain, Belgium and England.

Princeton Junction, NJ—When we look at where television audio standards stand today, it’s hard to reconcile the mono sound broadcasts of television’s infancy with the capabilities of the next generation of broadcast standards, part of ATSC 3.0. The progression of television audio from mono led first to stereo, then to a second audio program (SAP), then to ATSC 1.0 digital with the availability of Dolby AC3 surround sound, each step advancing audio technology on a steady progression toward more realistic and engaging sound.
Visitors to the Next Generation TV Hub at the 2017 NAB Show check out immersive audio for ATSC 3.0.

Now, however, new doors are being opened for content developers to leverage the ATSC 3.0 next-generation audio standards and offer a more immersive sound environment, as well as provide end users with a more personalized sound experience. An exploration of ATSC 3.0 audio standards capabilities demonstrates their impact on how we deploy and engage with new sound systems that are evolving along with rapid advancements in the digital world.


Building upon surround sound that’s laid out in a plane (5.1)—such as with initial surround sound systems like Dolby AC3—ATSC 3.0 audio standards take sound to a full 7.1+4 implementation, meaning seven channels of sound in a plane, one channel for a subwoofer (or the low frequencies), and four channels overhead.
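As a rough illustration, the 7.1+4 layout described above can be written out as a simple table of channel groups. The channel abbreviations here follow common loudspeaker-naming practice and are illustrative, not taken from the ATSC 3.0 documents:

```python
# 7.1+4: seven ear-level channels, one LFE channel, four overhead channels.
# Channel abbreviations are common practice, not from the standard itself.
CHANNELS_7_1_4 = {
    "ear_level": ["L", "R", "C", "Lss", "Rss", "Lrs", "Rrs"],  # 7 in the listening plane
    "lfe": ["LFE"],                                            # 1 low-frequency (subwoofer) channel
    "overhead": ["Ltf", "Rtf", "Ltr", "Rtr"],                  # 4 height channels
}

def total_channels(layout):
    """Count every loudspeaker feed in the layout."""
    return sum(len(group) for group in layout.values())
```

Counting the groups gives the twelve feeds implied by the "7.1+4" shorthand.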

At first look, this may just seem like throwing more sound into the mix, but how that sound is delivered is what makes it so unique. Because audio experts have a detailed understanding of how the ear works and how humans perceive sound, the new standards can be used to convey directionality far more effectively. And, what’s more, this can be done not just on fully equipped home theater speaker systems leveraging all channels, but on something as simple as a sound bar attached to a digital TV. It’s even possible to replicate this immersive 3D sound environment using ordinary headphones.

Imagine the sound of raindrops hitting leaves over your head in a scene filmed in a tropical forest, or a helicopter approaching from the side and crossing overhead before moving away from you. The possibilities for sound technicians truly are expansive, and the new ATSC 3.0 standards are designed to scale and accommodate newer, more sophisticated audio scenarios as they emerge, making for a truly immersive 3D, and much more attractive, user experience today and for the future.

While there is likely to be some delay before broadcasters implement the full capability of the new standards (and before end users run out to purchase advanced sound systems), other aspects of the new audio technologies are likely to be used right away, and these will ultimately be very impactful.

For example, the coding technology for next-generation audio systems has moved away from being simply channel-based systems. In today's 5.1 implementations, there are five channels of surround sound and one channel for subwoofer, or low frequency, with fixed assignments: front left, front right, center, the two rears, and then the subwoofer. All sounds fall into these channels. Next-generation audio standards additionally incorporate object audio, whereby audio objects can move and be maneuvered into different positions to register sound information.

As an example, imagine someone filming a skateboarder while they run a circuit in a skate park, where the sound tech is following the skater using a joystick to control the movement of a sound object in three dimensions. In this scenario, the sound will follow the skateboarder around the course and record a more realistic representation of sound as it changes with the skater’s movements. This allows for more diversity as objects can be positioned and moved to accommodate a lot of unique and intriguing audio scenarios.
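A minimal sketch of how an object-audio element might pair a sound with a time-varying position is shown below. The `AudioObject` structure and its field names are hypothetical, not the actual ATSC 3.0/MPEG-H metadata format:

```python
from dataclasses import dataclass, field

@dataclass
class AudioObject:
    """Illustrative object-audio element: a mono stem plus positional metadata.

    Field names are hypothetical, not taken from any broadcast specification.
    """
    name: str
    gain_db: float = 0.0
    # (time_s, x, y, z) keyframes; the renderer interpolates between them
    trajectory: list = field(default_factory=list)

    def position_at(self, t):
        """Linearly interpolate the object's (x, y, z) position at time t."""
        pts = self.trajectory
        if t <= pts[0][0]:
            return tuple(pts[0][1:])
        for (t0, *p0), (t1, *p1) in zip(pts, pts[1:]):
            if t0 <= t <= t1:
                a = (t - t0) / (t1 - t0)
                return tuple(c0 + a * (c1 - c0) for c0, c1 in zip(p0, p1))
        return tuple(pts[-1][1:])

# The skater circling the park: keyframes a sound tech's joystick might record
skater = AudioObject("skateboard",
                     trajectory=[(0, -5, 0, 0), (2, 0, 5, 0), (4, 5, 0, 0)])
```

A renderer would then pan the stem to the interpolated position at each instant, so the sound tracks the skater around the course.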

Another change enabled by next-generation audio standards, the inclusion of audio objects instead of just channels, is that it allows viewers to control and choose the objects they want to hear (personalization). This enhances the user experience by vastly increasing a viewer’s control over audio content. Because you are dealing with objects, it becomes possible to offer controls that allow viewers to turn one object up, or another down, based on their own personal preference. For example, you might broadcast a football game where one object is the home team announcer and another is the visiting team announcer. With next-generation audio systems, it’s fairly simple to give viewers control over the audio so that they can customize an audio experience tailored to their individual likes (in this example, choosing which announcer to listen to). Another scenario might be a visually challenged viewer “turning up” an object that provides audio detail describing what is on the television screen (known as descriptive video). Personalized audio is likely to be simpler to implement than full immersive audio (especially regarding fitting into existing station workflows) and will be very attractive to many viewers.
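The announcer-selection scenario can be sketched as a per-object gain stage applied at render time. The function and stem names below are illustrative, not from any standard:

```python
def render_mix(objects, preferences):
    """Hypothetical personalization step: apply viewer-chosen gains per object.

    objects:     {name: list of audio samples}
    preferences: {name: linear gain}; objects set to 0.0 are muted,
                 objects not listed pass through at unity gain.
    """
    n = max(len(samples) for samples in objects.values())
    mix = [0.0] * n
    for name, samples in objects.items():
        g = preferences.get(name, 1.0)  # default: unchanged
        for i, s in enumerate(samples):
            mix[i] += g * s
    return mix

# Football broadcast: boost the home announcer, mute the visiting one
stems = {"crowd": [0.1, 0.1], "home_pbp": [0.5, 0.4], "away_pbp": [0.6, 0.2]}
mix = render_mix(stems, {"home_pbp": 1.2, "away_pbp": 0.0})
```

The same mechanism covers the descriptive-video case: the description track is just one more object whose gain the viewer raises.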



Technology is always evolving, as are the capabilities of devices. Because it’s understood that evolution will happen, ATSC 3.0 has been developed to gracefully move from what we have today to what will be coming in the future. The need for this is something learned from past experience and incorporated into the entire standard, not just the audio portion. Throughout the entire system, each layer signals to the layer above what technologies will be used. ATSC 3.0 has set the stage for carrying both the old technology and new technology as it comes online—a win-win scenario for all involved in broadcast television and the viewing public at large.


In my opinion, one of the primary themes of this year’s NAB Show was ATSC 3.0. ATSC 3.0 is clearly a reality, with demos, products, conference sessions and significant mention in the keynote speeches from FCC Chairman Ajit Pai, NAB’s Senator Gordon Smith and Sam Matheny. There was a well-attended Next Generation TV Hub in the LVCC Grand Hallway that demonstrated the reality of many new features of the ATSC 3.0 system: better pictures, immersive sound, mobility, gateway devices, targeted ad insertion, audience measurement, emergency alerting, content delivery to automobiles and a broadcast from Black Mountain. The ATSC Pavilion in the Futures Park area of the North Hall gave a deeper dive into many technologies and features of ATSC 3.0, including the systems currently being deployed in South Korea for the launch of UHDTV services for the 2018 Olympics.

Dr. Richard Chernock is the Distinguished Lecturer Chair for the IEEE Broadcast Technology Society (IEEE BTS). He is currently the Chief Science Officer at Triveni Digital. In that position, he is developing strategic directions for monitoring, content distribution and metadata management for emerging digital television systems and infrastructures. Dr. Chernock is active in many of the ATSC, SMPTE and SCTE standards committees, particularly in the areas of future DTV, monitoring, metadata, and data broadcast. He is chairman of the ATSC Technology Group on ATSC 3.0 (TG3) and chairs the AHG on service delivery and synchronization for ATSC 3.0. He was previously chairman of the ATSC Technology and Standards Group (TG1). Previously, he was a Research Staff Member at IBM Research, investigating digital broadcast technologies.

PISCATAWAY, NJ - Sally French, founder of The Drone Girl, will be a keynote speaker at the upcoming IEEE Broadcast Symposium on October 10-12, 2017 in Arlington, Virginia. Sally has been named one of the “4 Top Women Shaping the Drone Industry” by Fortune Magazine and has been published in The Wall Street Journal, MarketWatch, NPR, CNN, Forbes, The Economist and the Orange County Register. She is a renowned public speaker on drones and drone technology, having appeared at South by Southwest, Harvard Business School and the John A. Paulson School of Engineering and Applied Sciences. The Broadcast Technology Society is excited to have Sally attend the Broadcast Symposium and discuss the impact drones will make on the broadcasting industry.