

David was a key engineering contributor at InterVideo from 2000–2007.
In this role, he worked on broadcast technology (ATSC) and on DVD editing
and authoring.
DRM work that David did on the InterVideo DVR system was the inspiration for the PS3 Blu-Ray implementation at Netflix.

Shortly after David joined Netflix, work started on the Silverlight client, the first widely deployed and used Adaptive Bitrate (ABR) streaming client.
The streams needed for ABR streaming required special processing, and David wrote the back-end tools to prepare these streams for client delivery.
This was the birth of the Netflix Encoding Technology Team.
Breaking the Rules for 25 Years

We were challenged to develop an adaptive streaming client for the PS3. At the
time, Sony was not working with Netflix and so the only way to build such a client
was as a Blu-Ray Java app. The problem was that Blu-Ray plays muxed streams, and so
there was no obvious means of handling ABR. David proposed a model for Blu-Ray-compliant
ABR with client-side muxing, and built the back-end tools. Plot spoiler: we shipped the working app in 9 months.
At one point, Sony messaged us saying, "The system was not intended to be used like this. Please stop!"

In January 2010, Apple announced the iPad. Reed emailed Steve Jobs and told him that the
iPad was a cool product and Netflix would be a great app. Then we all waited ...
In early February, Steve replied that Netflix "would be a great app for the iPad and we want you to launch with us!"
The only problem was that launch was 57 days out, and the iPad used HLS 1.0, a streaming model foreign to the Netflix system: specifically, 1) M2TS vs. fragmented MP4, 2) muxed A/V vs. separate streams, and 3) per-segment files vs. addressable segments in one file.

While working on the complexity analyzer in mid-2010, Dr. Ioannis Katsavounidis
was perplexed by an apparent bug in his tool that was recommending ~1Mbps H.264 bitrate for
a fairly complex 1080p video. After much analysis, Ioannis realized that the
test video was not actual 1080p video, but 480p video that had been upsampled.
It was then we realized that our fixed bitrate ladders were likely overallocating bits for many of the videos we delivered. Ioannis and David proposed a complexity-based model for encoding where bitrate would be tuned to the underlying video. This was part of US Patent 20120147958, filed in mid-2010.

"We are video experts, and we are telling you that this is good video quality!"
These were the exact words spoken to us by a codec vendor when we complained about the quality of their video technology. David had long been telling anyone who would listen that the industry needed a new video quality metric. One day he was having lunch with Raul Diaz, a former colleague, and explaining the metric problem. Raul suggested that he talk to USC Prof. Jay Kuo (a former colleague of both David and Raul). After some conversations with Prof. Kuo, David wrote a research grant with provisions that the work would be royalty-free and open-source.
Dr. Joe YuChieh Lin (a PhD student at the time) led the project, and USC built a functional VMAF prototype in MATLAB. At Netflix, David's team productized VMAF and brought Prof. Kuo and Dr. Lin's great work to the industry.

In 2015, David worked closely with Rich Gerber and Dolby to
build the Netflix HDR ingest and processing workflow. This included
helping Dolby to build an SDK that could support Netflix's
distributed processing model.
The Dolby Vision work along with the IMF workflow built by Andy Schuler and Rohit Puri (leaders on David's team) enabled support for ingest and processing of multi-trim-pass IMF HDR masters.

In 2017, Dr. Ioannis Katsavounidis proposed the Dynamic Optimizer model
for encoding a video. This model made sample encodes at various resolutions
and CRF values, and then plotted the convex hull of the RD curves to predict
the optimal ABR ladder. While computationally expensive, Dynamic
Optimizer (convex hull) compression resulted in significant bitrate reductions, allowing
good streaming of complex videos at bitrates as low as ~100Kbps. Ioannis' work
on convex hull encoding (US Patent 20190028529) has had a significant impact on quality
and egress of streaming video.
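The core idea can be sketched briefly. The example below is illustrative only: the sample encodes and quality numbers are made up, and the convex hull is simplified to a Pareto frontier over (bitrate, quality) points; it is not the production implementation.

```python
# Illustrative sketch of convex-hull ladder selection (not the Netflix
# implementation). Sample points and quality values are made up, and the
# convex hull is simplified to a Pareto frontier over (bitrate, quality).

def pareto_front(samples):
    """Keep only encodes that are not dominated: each kept encode has higher
    quality than every cheaper one. Sample: {"res", "crf", "kbps", "quality"}."""
    front, best_quality = [], float("-inf")
    for s in sorted(samples, key=lambda s: s["kbps"]):
        if s["quality"] > best_quality:
            front.append(s)
            best_quality = s["quality"]
    return front

def pick_ladder(samples, targets_kbps):
    """For each rung's bitrate budget, pick the Pareto-optimal encode
    (resolution + CRF) with the highest quality that fits the budget."""
    front = pareto_front(samples)
    ladder = []
    for budget in sorted(targets_kbps):
        fitting = [s for s in front if s["kbps"] <= budget]
        if fitting:
            ladder.append(max(fitting, key=lambda s: s["quality"]))
    return ladder

# Sample encodes of one title at several resolutions and CRF values.
samples = [
    {"res": "640x360",   "crf": 33, "kbps": 180,  "quality": 72.0},
    {"res": "960x540",   "crf": 30, "kbps": 420,  "quality": 84.5},
    {"res": "1280x720",  "crf": 28, "kbps": 900,  "quality": 91.0},
    {"res": "1280x720",  "crf": 24, "kbps": 1600, "quality": 93.5},
    {"res": "1920x1080", "crf": 27, "kbps": 2300, "quality": 95.2},
]
print(pick_ladder(samples, targets_kbps=[250, 1000, 2500]))
```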
At Netflix, David's role in convex hull encoding was to immediately see the value and ensure that the project had the full support of Netflix leadership, and was given top priority. When David joined Meta, he led the team that scaled convex hull encoding to 3.5B users around the world.

David has been working with AOM since 2015. As an engineering leader working through Netflix and Meta,
David sponsored OSS AV1 projects such as the SVT-AV1 encoder and the dav1d decoder. David has also worked tirelessly as
an AV1 evangelist, discussing the benefits of royalty-free codecs and AV1 both publicly and with network operators,
and handset and SoC vendors.
As an engineering leader at Meta, David led the buildout of the AV1 ecosystem. As an IC, David focused on expanding AV1 reach through device testing. David also worked on a level-based decoding model for SW AV1 decoding. Finally, David has worked hard on device benchmarking for SW decoding, most recently with the development of the VCAT OSS benchmarking tool.

In the early years of Netflix, video encoding was very slow, often taking
more than 3 days to encode a 1080p stream for a given title. Further, Netflix
used a static ABR ladder for all titles.
In 2010, David proposed 'MAPLE' (MAssively ParalleL video Encoding), using a complexity analyzer model for making encoding decisions. David invited Dr. Ioannis Katsavounidis to join him on the project for the summer. Working with David, Ioannis developed the complexity analyzer that altered the trajectory of streaming video.
US Patent 20120147958 (Ronca, Katsavounidis, et al) covers the fundamentals of map-reduce encoding, complexity analysis, and complexity-based encoding, and is the basis for the Netflix complexity-based encoding model.
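As a rough illustration of the shape of that workflow (with hypothetical function names and a made-up complexity-to-bitrate mapping, not the patented method), the chunked, complexity-driven approach looks something like this:

```python
# Illustrative sketch only: chunked ("map-reduce") encoding driven by a
# per-chunk complexity score. The complexity measure, bitrate mapping, and
# encode step are placeholders, not the Netflix implementation.
from concurrent.futures import ProcessPoolExecutor

def analyze_complexity(chunk):
    # Placeholder: in practice this would come from a fast trial encode or
    # motion/texture statistics; here we just read a precomputed field.
    return chunk["complexity"]                 # assumed normalized to [0, 1]

def bitrate_for(complexity, base_kbps=2000):
    # Hypothetical mapping: spend more bits on more complex content.
    return int(base_kbps * (0.5 + complexity))

def encode_chunk(job):
    chunk, kbps = job
    # Placeholder for the real encoder invocation (e.g. an H.264 encode).
    return f"{chunk['name']} encoded at {kbps} kbps"

def encode_title(chunks):
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(analyze_complexity, chunks))   # map: analyze
        jobs = [(c, bitrate_for(s)) for c, s in zip(chunks, scores)]
        return list(pool.map(encode_chunk, jobs))              # map: encode; stitch in order

if __name__ == "__main__":
    chunks = [{"name": "chunk0", "complexity": 0.2},
              {"name": "chunk1", "complexity": 0.9}]
    print(encode_title(chunks))
```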

In late 2014, Netflix decided to launch the service in Japan in September 2015.
Japanese (JA) subtitles require complex typography, and none of the Netflix devices
had the necessary technology to properly support JA. To support the
tight launch window, David proposed an image subtitle delivery model, where the
JA subtitle assets were rendered to images for display on the client. This
work still required backend tools to render JA subtitles properly. David
designed the workflow model, and led the development of these tools. When Netflix-Japan
launched in 2015, the subtitles were recognized as the best JA implementation to date.
While working on the JA subtitle project, David saw that the global subtitle standards were incomplete and did not support the major JA features. David led the multi-company effort to complete the subtitle specifications, resulting in the publication of IMSC 1.1 and TTML 2.0. Netflix was awarded a technical Emmy for the global subtitle standardization effort.

While working at Meta on the problem of high-scale video encoding,
David proposed the CAP Encoding Postulate, which states:
"In any well-optimized video encoding system, you cannot simultaneously minimize Compute, Distortion, and Egress. Improving any one of these metrics necessarily degrades one or both of the others."
"In any well-optimized video encoding system, you cannot simultaneously minimize Compute, Distortion, and Egress. Improving any one of these metrics necessarily degrades one or both of the others."

Through Meta, David partnered with Ittiam to build the VCAT
(Video Codec Acid Test) tool for benchmarking HW and SW video
decoding.
In retirement, David is continuing to contribute code and directional leadership to VCAT, and has a vision to build a full suite of video benchmarking tools.

In 2019, Dr. Ioannis Katsavounidis shared his work on the development of the cost/benefit model for
encoder comparison. This work laid the foundation for apples-to-apples comparison of generational
video codecs, and finally provided an objective model to measure the value of an encoder
implementation. Ioannis' model provided the feedback loop that enabled
SVT-AV1 to achieve impressive cost-benefit wins.
As an engineering leader at Meta, David sponsored multiple codec evaluations based on Ioannis' work, and brought this model into conversations with network operators, and handset and SoC vendors. This work was also fundamental to Meta bringing high-quality video to 3.5B global users.

As the engineering leader of the Video Processing team, David led
the deployment of two generations of custom video encoding ASICs, including
the Emmy-winning MSVP processor, which enabled Meta to deliver
the quality of convex-hull encoding to 3.5B people around the world.