WEBVTT
Kind: captions
Language: en
00:00:02.640 --> 00:00:05.250
All right, let's get started.
00:00:05.250 --> 00:00:10.250
First off, I want to thank everyone for coming.
00:00:10.480 --> 00:00:14.200
Welcome to re:Invent 2021, in person.
00:00:14.200 --> 00:00:15.760
That's a lotta fun, right?
00:00:15.760 --> 00:00:16.930
My name is Jamie Duemo.
00:00:16.930 --> 00:00:21.120
I am a Business Development Leader with AWS for M&E.
00:00:21.120 --> 00:00:23.770
And we are thrilled that you're able to join us today
00:00:23.770 --> 00:00:27.420
for this session on transforming broadcast production,
00:00:27.420 --> 00:00:29.313
playout, and fan experiences.
00:00:31.460 --> 00:00:34.550
Broadcasters continue to adopt the newest technologies,
00:00:34.550 --> 00:00:36.700
as a means to differentiate themselves
00:00:36.700 --> 00:00:39.500
in an ever increasing competitive market.
00:00:39.500 --> 00:00:40.560
And in this session,
00:00:40.560 --> 00:00:42.740
we are thrilled to feature a wide variety
00:00:42.740 --> 00:00:45.650
of successful use cases from valued customers
00:00:45.650 --> 00:00:49.680
like Discovery, ViacomCBS, and Discovery Sports,
00:00:49.680 --> 00:00:52.280
sharing how they are redefining
00:00:52.280 --> 00:00:54.930
as well as further optimizing their workloads on AWS.
00:00:57.610 --> 00:00:59.677
With the acceleration of direct to consumer
00:00:59.677 --> 00:01:03.480
and the convergence of broadcast with OTT workflows,
00:01:03.480 --> 00:01:06.290
it means that there are more services and content
00:01:06.290 --> 00:01:07.850
competing for viewers today
00:01:07.850 --> 00:01:09.730
than ever before in the industry.
00:01:09.730 --> 00:01:13.230
Consumers are able to churn through platforms faster,
00:01:13.230 --> 00:01:16.980
chasing the experiences that most resonate with them.
00:01:16.980 --> 00:01:20.040
And for these reasons, we are hearing from our customers
00:01:20.040 --> 00:01:21.770
that they need more flexibility
00:01:21.770 --> 00:01:24.330
to adapt to these changing behaviors.
00:01:24.330 --> 00:01:27.210
They need the flexibility to quickly and easily
00:01:27.210 --> 00:01:31.090
readjust their technology stack, per genre, per event,
00:01:31.090 --> 00:01:35.060
in order to optimally meet their business requirements.
00:01:35.060 --> 00:01:37.260
Building with that single recipe
00:01:37.260 --> 00:01:39.850
of provisioning for the worst case scenario,
00:01:39.850 --> 00:01:42.650
it just cannot keep up with today's pace of innovation,
00:01:42.650 --> 00:01:45.440
nor the pace of...
00:01:45.440 --> 00:01:46.690
What's the best way to say it?
00:01:46.690 --> 00:01:49.950
Nor the pace of consumer fragmentation.
00:01:49.950 --> 00:01:52.250
That is changing very quickly.
00:01:52.250 --> 00:01:53.880
So for some business requirements,
00:01:53.880 --> 00:01:56.920
we find that broadcasters may need lower latency
00:01:56.920 --> 00:01:58.530
and higher signal quality,
00:01:58.530 --> 00:02:00.050
while for other business requirements,
00:02:00.050 --> 00:02:02.160
the same broadcasters may need to focus more
00:02:02.160 --> 00:02:05.653
on lowering costs while increasing reliability.
00:02:07.110 --> 00:02:08.150
To assist our customers,
00:02:08.150 --> 00:02:11.150
we are continually innovating with flexibility in mind,
00:02:11.150 --> 00:02:13.950
and our AWS media services and products
00:02:13.950 --> 00:02:15.870
demonstrate our efforts.
00:02:15.870 --> 00:02:19.630
Customers choose AWS media services to prepare, to process,
00:02:19.630 --> 00:02:24.320
to deliver their live and nonlinear content from the cloud.
00:02:24.320 --> 00:02:26.780
And these managed services allow customers
00:02:26.780 --> 00:02:29.920
to build and rebuild their workflows quickly,
00:02:29.920 --> 00:02:32.140
thereby eliminating capacity planning
00:02:32.140 --> 00:02:34.380
so that you can easily scale with growth,
00:02:34.380 --> 00:02:36.110
and you can also provide,
00:02:36.110 --> 00:02:38.590
be able to benefit from cost optimization
00:02:38.590 --> 00:02:40.103
with pay-as-you-go pricing.
00:02:44.450 --> 00:02:46.130
Along with the AWS innovations,
00:02:46.130 --> 00:02:48.700
we have an incredible partner community,
00:02:48.700 --> 00:02:52.450
focused on helping customers with their solution adoption.
00:02:52.450 --> 00:02:54.330
With more than 500 partners now,
00:02:54.330 --> 00:02:57.450
our customers can benefit from choice and flexibility
00:02:57.450 --> 00:03:01.330
to not just keep up with these disruptive business trends,
00:03:01.330 --> 00:03:05.770
but to enable our customers to internally redefine
00:03:05.770 --> 00:03:08.950
what disruption is, to be just business as usual.
00:03:08.950 --> 00:03:10.430
And the results are showing.
00:03:10.430 --> 00:03:14.050
In one year, we expanded from roughly 1,600 playout channels
00:03:14.050 --> 00:03:17.920
originating on AWS to about 2,900 now,
00:03:17.920 --> 00:03:21.910
driven by customers like A&E, Cinedigm, Discovery,
00:03:21.910 --> 00:03:26.363
NBC Olympics, PAC 12, ViacomCBS, and Warner Media.
00:03:28.710 --> 00:03:30.990
And playout, although that's a great accomplishment,
00:03:30.990 --> 00:03:32.910
is not the only growth that we're seeing.
00:03:32.910 --> 00:03:35.140
There is a lot of exciting innovation
00:03:35.140 --> 00:03:36.970
within live remote production,
00:03:36.970 --> 00:03:40.157
for news, for sports, and for special events.
00:03:42.130 --> 00:03:45.220
And Sky is a great example of this innovation,
00:03:45.220 --> 00:03:47.970
as well as its redefinition of market disruption to BAU
00:03:49.205 --> 00:03:50.390
and they were able to accomplish this
00:03:50.390 --> 00:03:53.050
with their 12-day live production
00:03:53.050 --> 00:03:56.220
of the COP26 Climate Conference.
00:03:56.220 --> 00:04:00.040
To host the event, Sky News required a production gallery,
00:04:00.040 --> 00:04:01.550
or production control room,
00:04:01.550 --> 00:04:03.670
with the same complement of functionality
00:04:03.670 --> 00:04:06.550
that powers their high quality brands every day.
00:04:06.550 --> 00:04:08.990
However, there was no availability
00:04:08.990 --> 00:04:12.117
at their on-prem facility, and they had limited time.
00:04:12.117 --> 00:04:15.240
Typically their on-prem builds
00:04:15.240 --> 00:04:16.940
took about six months to construct
00:04:16.940 --> 00:04:19.390
and about one month's worth of testing.
00:04:19.390 --> 00:04:22.250
Sky had six weeks from conversation to launch
00:04:22.250 --> 00:04:23.710
to build a solution.
00:04:23.710 --> 00:04:25.600
And because of this time crunch,
00:04:25.600 --> 00:04:27.540
there was no time to retrain staff
00:04:27.540 --> 00:04:29.193
on a different type of solution.
00:04:30.150 --> 00:04:33.237
So, Sky built their production gallery on AWS.
00:04:34.080 --> 00:04:35.790
Built with continuous integration
00:04:35.790 --> 00:04:38.930
and continuous delivery best practices
00:04:38.930 --> 00:04:40.960
in order to automate their deployments,
00:04:40.960 --> 00:04:44.150
Sky delivered a flawless 12-day event
00:04:44.150 --> 00:04:46.839
with extremely low latency to direct-to-home,
00:04:46.839 --> 00:04:50.040
OTT, and their Sky Glass platforms.
00:04:50.040 --> 00:04:53.350
Sky spun up the environment prior to the event every day
00:04:53.350 --> 00:04:54.800
and spun it back down.
00:04:54.800 --> 00:04:58.870
And as a result, this enabled a flexible production gallery
00:04:58.870 --> 00:05:00.670
for a fraction of the cost
00:05:00.670 --> 00:05:02.640
compared to their on-prem facilities,
00:05:02.640 --> 00:05:04.630
as well as drastically reduced
00:05:04.630 --> 00:05:06.930
the associated carbon footprint
00:05:06.930 --> 00:05:08.563
typical for this type of event.
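For readers curious what that daily spin-up/spin-down might look like in practice, here is a minimal sketch using boto3, assuming the gallery's EC2 instances carry a hypothetical `Role: production-gallery` tag and run in an assumed region; Sky's actual automation isn't shown in the session, so this is only one plausible shape for it:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region

def gallery_instance_ids():
    """Find the production-gallery instances by tag (tag name is hypothetical)."""
    pages = ec2.get_paginator("describe_instances").paginate(
        Filters=[{"Name": "tag:Role", "Values": ["production-gallery"]}]
    )
    return [
        inst["InstanceId"]
        for page in pages
        for res in page["Reservations"]
        for inst in res["Instances"]
    ]

def spin_up():
    ids = gallery_instance_ids()
    if ids:
        ec2.start_instances(InstanceIds=ids)

def spin_down():
    ids = gallery_instance_ids()
    if ids:
        ec2.stop_instances(InstanceIds=ids)

# A scheduler (cron, EventBridge, or the CI/CD pipeline itself) would call
# spin_up() before each production day and spin_down() after the broadcast,
# so compute is only billed while the gallery is on air.
```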
00:05:10.350 --> 00:05:14.890
And one of the technology partners for COP26 was Vizrt.
00:05:14.890 --> 00:05:17.530
Sky leveraged Vizrt for their cloud-first
00:05:17.530 --> 00:05:18.800
live production solution,
00:05:18.800 --> 00:05:21.360
including Viz Vectar for switching
00:05:21.360 --> 00:05:23.408
of up to 40 camera feeds,
00:05:23.408 --> 00:05:27.313
Viz Engine for graphics and Viz Mosart for automation.
00:05:28.280 --> 00:05:31.280
When Sky launched on day one, it was all hands on deck.
00:05:31.280 --> 00:05:32.190
And everyone here,
00:05:32.190 --> 00:05:33.750
especially within the broadcast industry,
00:05:33.750 --> 00:05:35.980
knows what that exercise is.
00:05:35.980 --> 00:05:37.700
This was the best story they had told me.
00:05:37.700 --> 00:05:39.780
About three hours into the event,
00:05:39.780 --> 00:05:42.010
the nervousness started dissipating,
00:05:42.010 --> 00:05:43.730
and people started fizzling out.
00:05:43.730 --> 00:05:47.513
And by day two, it was just business as usual.
00:05:49.840 --> 00:05:52.410
To demonstrate another example of how AWS
00:05:52.410 --> 00:05:55.880
provides the flexibility to lead to customer success,
00:05:55.880 --> 00:05:57.900
I'm happy to present this video from Discovery,
00:05:57.900 --> 00:06:00.420
who could not join us in person today,
00:06:00.420 --> 00:06:03.347
sharing their next chapter of cloud-based playout
00:06:03.347 --> 00:06:04.880
and the substantial benefits
00:06:04.880 --> 00:06:06.623
they continue to realize at scale.
00:06:11.940 --> 00:06:12.773
Hi everyone.
00:06:12.773 --> 00:06:13.606
I'm Dave Duvall.
00:06:13.606 --> 00:06:16.440
I'm the Chief Information Officer at Discovery.
00:06:16.440 --> 00:06:18.050
A little bit later you'll hear from Rasha Ahmad,
00:06:18.050 --> 00:06:20.900
who's our Senior Director of Cloud FinOps and Strategy
00:06:20.900 --> 00:06:22.520
at Discovery as well.
00:06:22.520 --> 00:06:23.690
We're really excited today to be here
00:06:23.690 --> 00:06:26.330
to talk to you about our linear cloud playout journey
00:06:26.330 --> 00:06:29.113
and the TCO study that we recently completed.
00:06:29.970 --> 00:06:32.560
But before we do all that:
00:06:32.560 --> 00:06:34.290
Discovery is a big brand and a big company.
00:06:34.290 --> 00:06:36.140
A lot of you probably know Discovery Channel,
00:06:36.140 --> 00:06:39.020
but we wanted to begin by showing a quick clip
00:06:39.020 --> 00:06:39.870
all about our company
00:06:39.870 --> 00:06:41.570
and all the brands that we know and love
00:06:41.570 --> 00:06:43.070
and we bring to consumers every day,
00:06:43.070 --> 00:06:44.670
and you can learn a little bit more about us
00:06:44.670 --> 00:06:46.180
before we get into the study.
00:06:46.180 --> 00:06:48.030
So let's go to the clip.
00:06:48.030 --> 00:06:51.070
Discovery is everywhere you look,
00:06:51.070 --> 00:06:54.625
from small towns to wide open spaces
00:06:54.625 --> 00:06:57.530
or even our own backyards.
00:06:57.530 --> 00:07:01.000
This is where we find the shared experiences,
00:07:01.000 --> 00:07:04.993
the quiet moments that connect us wherever we might go.
00:07:06.220 --> 00:07:07.780
And whether the journey is long.
00:07:07.780 --> 00:07:09.670
We have a very close bond.
00:07:09.670 --> 00:07:10.520
A little scary.
00:07:10.520 --> 00:07:12.210
Something just like came over me.
00:07:12.210 --> 00:07:13.110
Or close to home.
00:07:13.110 --> 00:07:14.700
Are y'all ready to see your fixer upper?
00:07:14.700 --> 00:07:16.740
The Discovery family of networks
00:07:16.740 --> 00:07:20.180
are here to bring the world to you.
00:07:20.180 --> 00:07:21.810
We're tryin' to make people's show really good.
00:07:21.810 --> 00:07:23.550
So we're givin' 'em our best.
00:07:23.550 --> 00:07:25.570
So be intrigued,
00:07:25.570 --> 00:07:27.740
find the secret ingredient.
00:07:27.740 --> 00:07:29.200
Heaven on earth.
00:07:29.200 --> 00:07:31.630
Get hooked on passion.
00:07:31.630 --> 00:07:33.271
Let out a roar.
(lion roaring)
00:07:33.271 --> 00:07:34.580
(intense music)
(dog barking)
00:07:34.580 --> 00:07:35.743
Let in some sun.
00:07:37.913 --> 00:07:39.823
And get ready for what comes next.
00:07:40.986 --> 00:07:43.310
The Discovery family of networks,
00:07:43.310 --> 00:07:45.629
we bring the world to you.
00:07:45.629 --> 00:07:48.296
(gentle music)
00:07:49.320 --> 00:07:51.360
So how did we go about this journey?
00:07:51.360 --> 00:07:53.820
Well, it's important to understand at the outset,
00:07:53.820 --> 00:07:55.890
the things we did to get started.
00:07:55.890 --> 00:07:58.730
We knew that we wanted to begin with a POC,
00:07:58.730 --> 00:08:00.210
but how do you get started with a POC
00:08:00.210 --> 00:08:01.800
where you're fundamentally rethinking
00:08:01.800 --> 00:08:04.910
how you approach linear production and where you do it?
00:08:04.910 --> 00:08:06.560
So we spent a lot of time,
00:08:07.822 --> 00:08:09.480
and a lot of effort, with our team
00:08:09.480 --> 00:08:11.350
to talk about what it would take to get there,
00:08:11.350 --> 00:08:13.180
the belief in the outcome.
00:08:13.180 --> 00:08:14.700
We also spent a lot of time investing
00:08:14.700 --> 00:08:17.200
in our cloud content supply chain.
00:08:17.200 --> 00:08:19.990
You cannot feed the act of playout,
00:08:19.990 --> 00:08:21.950
the literal transmission of our content at the end,
00:08:21.950 --> 00:08:23.660
if you're not prepared upfront.
00:08:23.660 --> 00:08:25.180
We invested in the data layers
00:08:25.180 --> 00:08:26.570
and the cloud technology stacks
00:08:26.570 --> 00:08:29.030
to make sure that we could process our content
00:08:29.030 --> 00:08:31.130
and bring it to this platform effectively
00:08:31.130 --> 00:08:33.160
and with a high degree of data quality
00:08:33.160 --> 00:08:36.763
to enable all this automation and the outcome we were after.
00:08:38.610 --> 00:08:41.480
So we went down that journey, and over the next two years,
00:08:41.480 --> 00:08:43.920
we built all the prerequisite platforms we needed,
00:08:43.920 --> 00:08:47.290
culminating in our launch in the U.S. in 2017.
00:08:47.290 --> 00:08:49.680
It may seem a little crazy to say we started
00:08:49.680 --> 00:08:51.960
with our most important business in the U.S.,
00:08:51.960 --> 00:08:52.870
but we were confident.
00:08:52.870 --> 00:08:55.670
We had great partners at our side, along with AWS,
00:08:55.670 --> 00:08:58.980
as well as our partner Evertz on the playout technology,
00:08:58.980 --> 00:09:01.220
and we successfully launched all 48 channels
00:09:01.220 --> 00:09:03.300
in our U.S. portfolio that year.
00:09:03.300 --> 00:09:04.800
From there, we went global.
00:09:04.800 --> 00:09:09.280
We went to the UK, across EMEA, APAC,
00:09:09.280 --> 00:09:11.440
culminating in Latin America this year
00:09:11.440 --> 00:09:13.010
to hit our end result,
00:09:13.010 --> 00:09:16.330
240 feeds across all four regions
00:09:16.330 --> 00:09:18.380
that we operate in globally.
00:09:18.380 --> 00:09:21.320
In fact, every minute of every bit of content you saw
00:09:21.320 --> 00:09:23.110
in that sizzle reel a moment ago
00:09:23.110 --> 00:09:25.020
is carried on this cloud platform.
00:09:25.020 --> 00:09:26.250
We're very proud of that fact,
00:09:26.250 --> 00:09:28.750
and that our entire pay television business
00:09:28.750 --> 00:09:30.720
is operating from the public cloud
00:09:30.720 --> 00:09:32.690
and being operated incredibly efficiently
00:09:32.690 --> 00:09:34.513
and effectively by an amazing team.
00:09:35.350 --> 00:09:37.110
So let's talk about that team benefit.
00:09:37.110 --> 00:09:38.870
When we began this process,
00:09:38.870 --> 00:09:42.058
we had disparate operations all over the globe,
00:09:42.058 --> 00:09:45.630
multiple outsource partners, different ratios
00:09:45.630 --> 00:09:49.230
in our teams, as far as the number of people
00:09:49.230 --> 00:09:52.690
looking after a given feed and the care and upkeep,
00:09:52.690 --> 00:09:55.420
disparate technology on the playout.
00:09:55.420 --> 00:09:58.500
Today we have this view, very different picture.
00:09:58.500 --> 00:10:01.950
The concept that we went for was to really separate
00:10:01.950 --> 00:10:03.660
the technology stack and the location
00:10:03.660 --> 00:10:05.730
of the playout technology itself
00:10:05.730 --> 00:10:07.930
from the control surfaces and the staff.
00:10:07.930 --> 00:10:09.860
We've been able to abstract all that away,
00:10:09.860 --> 00:10:11.640
consolidating to a single location
00:10:11.640 --> 00:10:13.430
on the East Coast of the U.S.,
00:10:13.430 --> 00:10:15.310
which now allows us to operate
00:10:15.310 --> 00:10:17.860
out of a dual region redundancy model.
00:10:17.860 --> 00:10:21.720
We operate our feeds out of U.S. East and EU West.
00:10:21.720 --> 00:10:23.390
That transatlantic redundancy model
00:10:23.390 --> 00:10:25.130
may seem a bit strange at first,
00:10:25.130 --> 00:10:27.700
but when you consider the global reach of our platforms,
00:10:27.700 --> 00:10:31.020
we wanted continental boundaries of failure
00:10:31.020 --> 00:10:33.450
that were a little more effective for our business.
00:10:33.450 --> 00:10:35.730
We have backup sites and locations in Europe
00:10:35.730 --> 00:10:39.393
that make that an incredibly effective proposition for us.
00:10:40.430 --> 00:10:43.121
Let's talk about how we realized the value along the dimensions
00:10:43.121 --> 00:10:46.490
that we endeavored upon.
00:10:46.490 --> 00:10:48.600
On the agility front,
00:10:48.600 --> 00:10:51.700
we brought to playout technology
00:10:51.700 --> 00:10:55.160
and to our business operation a much more agile mindset.
00:10:55.160 --> 00:10:56.380
We're able to move quickly.
00:10:56.380 --> 00:10:59.390
We can launch channels faster than we ever could before.
00:10:59.390 --> 00:11:01.030
We're upgrading our playout air chains
00:11:01.030 --> 00:11:02.360
multiple times per year.
00:11:02.360 --> 00:11:05.360
That's relatively unheard of in the media industry.
00:11:05.360 --> 00:11:10.270
Many legacy environments are built and operated for years
00:11:10.270 --> 00:11:12.170
without touching or upgrading the software.
00:11:12.170 --> 00:11:14.830
We now operate and upgrade our software
00:11:14.830 --> 00:11:16.460
multiple times per year,
00:11:16.460 --> 00:11:19.580
doing red/black deployments to cut over and cut down
00:11:19.580 --> 00:11:22.930
on errors and ensure we can test before launch.
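Red/black (blue/green) cutovers like the ones Dave describes usually mean standing the new air chain up beside the old one and shifting traffic only after tests pass. Discovery's actual mechanism isn't specified in the talk; here is a minimal sketch of the idea using Route 53 weighted records, with hypothetical zone and record names:

```python
import boto3

route53 = boto3.client("route53")

def shift_weight(zone_id: str, record: str, red_value: str, black_value: str,
                 black_weight: int):
    """Move traffic between the 'red' (current) and 'black' (new) stacks.
    black_weight=0 keeps all traffic on red; 100 cuts fully over to black."""
    changes = [
        {"Action": "UPSERT",
         "ResourceRecordSet": {
             "Name": record, "Type": "CNAME", "SetIdentifier": ident,
             "Weight": weight, "TTL": 60,
             "ResourceRecords": [{"Value": value}]}}
        for ident, weight, value in [
            ("red", 100 - black_weight, red_value),
            ("black", black_weight, black_value),
        ]
    ]
    route53.change_resource_record_sets(
        HostedZoneId=zone_id, ChangeBatch={"Changes": changes}
    )

# Test the black stack at weight 0, cut over with black_weight=100,
# and keep the red stack warm for instant rollback.
```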
00:11:22.930 --> 00:11:25.390
That's resulted in an increase in resiliency and uptime.
00:11:25.390 --> 00:11:27.910
Our operational resilience metrics have vastly improved
00:11:27.910 --> 00:11:29.710
over the course of this project.
00:11:29.710 --> 00:11:32.380
We're now reaching five nines of uptime today,
00:11:32.380 --> 00:11:34.380
where we only had three nines before,
00:11:34.380 --> 00:11:35.940
and we've got that wonderful multi-region
00:11:35.940 --> 00:11:37.600
model of redundancy.
00:11:37.600 --> 00:11:40.970
We have no disruptions to our customers on a general basis,
00:11:40.970 --> 00:11:45.350
beyond the odd single-second issue here and there.
00:11:45.350 --> 00:11:47.090
And it's a great outcome for all involved.
00:11:47.090 --> 00:11:50.070
And then finally on that staff productivity lens,
00:11:50.070 --> 00:11:52.410
we're now seeing a six-fold increase
00:11:52.410 --> 00:11:56.230
over the course of the project in the ratio of staff to feeds,
00:11:56.230 --> 00:11:59.530
now operating 60 feeds for a single operator.
00:11:59.530 --> 00:12:01.170
That came with great partnership
00:12:01.170 --> 00:12:04.520
with our friends at Evertz around monitoring by exception,
00:12:04.520 --> 00:12:08.620
automation, and an investment in QC and data quality checks.
00:12:08.620 --> 00:12:10.140
At this point, I'm gonna hand it off to Rasha,
00:12:10.140 --> 00:12:12.377
who's gonna take us through TCO value
00:12:12.377 --> 00:12:14.943
and really how we did it from here.
00:12:16.060 --> 00:12:17.563
Thank you, Dave.
00:12:17.563 --> 00:12:22.563
Serving channels into 120 countries on over 1,000 EC2 instances
00:12:22.640 --> 00:12:25.350
and serving 15 petabytes of content,
00:12:25.350 --> 00:12:27.800
moving from on-prem to a cloud solution
00:12:27.800 --> 00:12:32.260
generated 61% in total cost of ownership savings.
00:12:32.260 --> 00:12:35.720
We moved from a complex model of physical, administrative,
00:12:35.720 --> 00:12:38.950
and software costs to simplified cloud services
00:12:38.950 --> 00:12:41.870
focused on storage, compute, and networking.
00:12:41.870 --> 00:12:44.440
We were also able to reduce our network
00:12:44.440 --> 00:12:47.850
and server rack space by 92%.
00:12:47.850 --> 00:12:49.130
That allowed us the ability,
00:12:49.130 --> 00:12:51.770
along with the consolidation of the pods,
00:12:51.770 --> 00:12:55.330
to repurpose that space for our live sport tech hub,
00:12:55.330 --> 00:12:57.670
something that would otherwise have required a new lease
00:12:57.670 --> 00:13:00.450
and a new building expense.
00:13:00.450 --> 00:13:02.720
As for how we were able to successfully do it,
00:13:02.720 --> 00:13:05.800
we focused on three core strategies: partnership,
00:13:05.800 --> 00:13:08.860
continuous improvement, and governance and analytics.
00:13:08.860 --> 00:13:10.230
In terms of partnership,
00:13:10.230 --> 00:13:12.900
our relationship with AWS and Evertz
00:13:12.900 --> 00:13:15.440
was absolutely essential for our success.
00:13:15.440 --> 00:13:17.590
We work together on progressing the roadmap
00:13:17.590 --> 00:13:20.100
and to get it where it is today.
00:13:20.100 --> 00:13:24.211
We also continue to collaborate on different innovative POCs
00:13:24.211 --> 00:13:26.550
that focus on the linear playout,
00:13:26.550 --> 00:13:29.150
but also focus on the entire supply chain.
00:13:29.150 --> 00:13:31.740
Some of the work that we're doing right now
00:13:31.740 --> 00:13:34.210
is post-production edit in the cloud,
00:13:34.210 --> 00:13:37.520
live stream monitoring, shoppable videos,
00:13:37.520 --> 00:13:41.280
IP (indistinct) POC, analytics focused POCs
00:13:41.280 --> 00:13:44.800
that discuss the personalization and recommendations
00:13:44.800 --> 00:13:47.110
for a more targeted experience.
00:13:47.110 --> 00:13:49.890
We also look to repurpose a lot of the work we're doing
00:13:49.890 --> 00:13:52.590
for the linear playout into other products.
00:13:52.590 --> 00:13:54.790
For example, the content re-contribution,
00:13:54.790 --> 00:13:59.570
which focused on content enrichment and metadata enrichment
00:13:59.570 --> 00:14:02.440
is also being used in the Discovery+
00:14:02.440 --> 00:14:05.160
direct to consumer products as well.
00:14:05.160 --> 00:14:08.810
In terms of continuous improvement, at our scale,
00:14:08.810 --> 00:14:11.120
when you have an operation of this size,
00:14:11.120 --> 00:14:13.460
and we're constantly year-over-year growing,
00:14:13.460 --> 00:14:15.900
we wanna make sure our investment dollars are stretched
00:14:15.900 --> 00:14:17.620
and we're getting the most out of it.
00:14:17.620 --> 00:14:20.470
We've been utilizing the EC2 Instance Savings Plans
00:14:20.470 --> 00:14:23.320
and the Compute Savings Plans for our workflows,
00:14:23.320 --> 00:14:25.290
especially the predictable ones,
00:14:25.290 --> 00:14:27.910
so that we could scale our growth economically.
00:14:27.910 --> 00:14:29.340
We are a media company,
00:14:29.340 --> 00:14:31.830
and we wanna make sure our content is rich.
00:14:31.830 --> 00:14:34.740
Storage is a big service that we rely on.
00:14:34.740 --> 00:14:37.870
So we have been making use of the lifecycle policies,
00:14:37.870 --> 00:14:42.040
the different S3 bucket policies, and intelligent tiering,
00:14:42.040 --> 00:14:44.850
as well as the archive tiers.
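A lifecycle configuration along those lines fits in a few lines of boto3; this is a sketch with an assumed bucket name and made-up day thresholds, not Discovery's actual policy:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and thresholds; the transition points are illustrative.
s3.put_bucket_lifecycle_configuration(
    Bucket="media-content-archive",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-then-archive",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            "Transitions": [
                # Let S3 optimize access-pattern costs automatically...
                {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"},
                # ...then push cold masters to an archive tier.
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }]
    },
)
```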
00:14:44.850 --> 00:14:48.580
We also perform quarterly deep dives with our executive team
00:14:48.580 --> 00:14:50.690
to make sure we understand our spend,
00:14:50.690 --> 00:14:52.940
and we're defining actions to improve
00:14:52.940 --> 00:14:55.920
and focus on the right investments.
00:14:55.920 --> 00:14:58.600
In terms of governance and analytics,
00:14:58.600 --> 00:15:01.080
when the footprint is as large as ours,
00:15:01.080 --> 00:15:03.490
managing the spend becomes very difficult.
00:15:03.490 --> 00:15:05.095
For that we've instituted
00:15:05.095 --> 00:15:08.140
a lot of governance policies and standards
00:15:08.140 --> 00:15:10.970
that range from tagging to cloud intake,
00:15:10.970 --> 00:15:14.610
to configuration, to resource setup.
00:15:14.610 --> 00:15:17.140
It helps us set up the accounts correctly
00:15:17.140 --> 00:15:21.030
and ensure that the resources are used effectively.
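One common way to police tagging standards like these is to sweep for resources missing required tags, sketched here with the Resource Groups Tagging API; the required tag keys are hypothetical:

```python
import boto3

tagging = boto3.client("resourcegroupstaggingapi")
REQUIRED_TAGS = {"CostCenter", "Owner", "Project"}  # hypothetical standard

def untagged_resources():
    """Yield ARNs of resources missing any of the required tag keys."""
    paginator = tagging.get_paginator("get_resources")
    for page in paginator.paginate():
        for res in page["ResourceTagMappingList"]:
            keys = {t["Key"] for t in res.get("Tags", [])}
            if not REQUIRED_TAGS <= keys:
                yield res["ResourceARN"], REQUIRED_TAGS - keys

for arn, missing in untagged_resources():
    print(f"{arn} is missing tags: {sorted(missing)}")
```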
00:15:21.030 --> 00:15:23.100
We also focus on providing showback.
00:15:23.100 --> 00:15:24.970
Our cloud spend is centralized.
00:15:24.970 --> 00:15:28.120
And having the ability to understand what we're spending on,
00:15:28.120 --> 00:15:31.440
and whether those items being spent on are on the roadmap
00:15:31.440 --> 00:15:34.460
and they're strategically aligned with what we intend to do
00:15:34.460 --> 00:15:35.950
is very critical.
00:15:35.950 --> 00:15:38.577
We also focus on cost governance in general.
00:15:38.577 --> 00:15:43.100
We utilize anomaly detection and Trusted Advisor.
00:15:43.100 --> 00:15:47.240
We have FinHack events, cost optimization workshops.
00:15:47.240 --> 00:15:49.570
We wanna make sure that we are looking
00:15:49.570 --> 00:15:52.150
at different solutions, re-architecting them.
00:15:52.150 --> 00:15:55.070
We are also encouraging and championing our teams
00:15:55.070 --> 00:15:58.270
to come and look at new-generation technologies
00:15:58.270 --> 00:16:01.080
that are higher performance at lower costs.
00:16:01.080 --> 00:16:04.690
We also try to get people to adopt the mindset
00:16:04.690 --> 00:16:06.760
of owning the work that they're doing
00:16:06.760 --> 00:16:08.770
and understanding their cost mechanics
00:16:08.770 --> 00:16:11.800
so that there is more governance and ownership with them.
00:16:11.800 --> 00:16:13.700
This is how we were able to succeed.
00:16:13.700 --> 00:16:14.943
Thank you for listening.
00:16:20.550 --> 00:16:22.860
And with this continual effort to optimize
00:16:22.860 --> 00:16:25.640
and govern their overall business costs,
00:16:25.640 --> 00:16:28.860
Discovery's able to flexibly redirect such benefits
00:16:28.860 --> 00:16:31.050
into other avenues of re-invention,
00:16:31.050 --> 00:16:33.970
such as track cycling and transforming
00:16:33.970 --> 00:16:36.133
how it's presented to the consumers.
00:16:37.300 --> 00:16:40.690
Discovery Sports Events has been working with the UCI,
00:16:40.690 --> 00:16:42.900
the worldwide governing body for cycling,
00:16:42.900 --> 00:16:45.820
to redefine and innovate track cycling.
00:16:45.820 --> 00:16:48.490
So this new series of races takes place
00:16:48.490 --> 00:16:51.480
under the UCI Track Champions League
00:16:51.480 --> 00:16:53.700
with the goal of developing a strong
00:16:53.700 --> 00:16:55.560
year-round narrative for the sport,
00:16:55.560 --> 00:16:58.250
which traditionally was featured only in the Olympics,
00:16:58.250 --> 00:17:00.860
telling the whole story and setting up the foundation
00:17:00.860 --> 00:17:03.320
for sustained fan engagement,
00:17:03.320 --> 00:17:06.190
doing this through serving live data, analysis,
00:17:06.190 --> 00:17:08.070
and predictive stats to the fans,
00:17:08.070 --> 00:17:09.400
whether they are in the velodrome
00:17:09.400 --> 00:17:11.110
where races are taking place,
00:17:11.110 --> 00:17:13.180
through live television and streaming,
00:17:13.180 --> 00:17:16.030
or through their newly launched Track Cycling League app.
00:17:17.710 --> 00:17:19.270
So how did Discovery Sports get
00:17:19.270 --> 00:17:21.540
from initial ideation to launch?
00:17:21.540 --> 00:17:26.070
So first, they partnered with AWS and Discovery Sports Events,
00:17:26.070 --> 00:17:28.760
which is a portfolio of sports brands
00:17:28.760 --> 00:17:32.710
that reaches about 130 million viewers.
00:17:32.710 --> 00:17:36.460
Then Discovery Sports participated in an AWS program
00:17:36.460 --> 00:17:38.640
called Data-Driven Everything.
00:17:38.640 --> 00:17:40.680
And it is a series of workshops
00:17:40.680 --> 00:17:44.120
designed to help customers think about data differently,
00:17:44.120 --> 00:17:47.373
to define and sometimes redefine what great looks like.
00:17:48.920 --> 00:17:50.930
When data is harnessed,
00:17:50.930 --> 00:17:54.340
it is a powerful asset that can drive sustained innovation
00:17:54.340 --> 00:17:56.490
as well as create actionable insights
00:17:56.490 --> 00:17:59.950
to continually supercharge the fan engagement.
00:17:59.950 --> 00:18:02.490
After leaving these workshops with this focus
00:18:02.490 --> 00:18:04.330
and a prescriptive framework in hand,
00:18:04.330 --> 00:18:07.830
Discovery Sports reached out to AWS Professional Services
00:18:07.830 --> 00:18:09.530
in order to help deliver the plan.
00:18:10.650 --> 00:18:13.510
And part of this framework was the use cases
00:18:13.510 --> 00:18:16.120
for the fan experiences,
00:18:16.120 --> 00:18:18.040
what did the audience want to see?
00:18:18.040 --> 00:18:19.990
What stats would allow cycling fans
00:18:19.990 --> 00:18:22.010
to have a more connected experience
00:18:22.010 --> 00:18:23.950
with their favorite riders?
00:18:23.950 --> 00:18:27.560
To start, Discovery Sports envisioned some key biometrics,
00:18:27.560 --> 00:18:32.380
such as heart rate, cadence, and overall wattage.
00:18:32.380 --> 00:18:34.920
Then the next thing is how do we present this?
00:18:34.920 --> 00:18:38.780
So Discovery Sports envisioned real-time dynamic graphics
00:18:38.780 --> 00:18:42.310
for traditional viewing and streaming as MVP,
00:18:42.310 --> 00:18:44.940
as well as the creation of a fan app,
00:18:44.940 --> 00:18:48.050
which the app itself would be foundational
00:18:48.050 --> 00:18:51.310
to possible post-MVP interactive experiences
00:18:51.310 --> 00:18:53.513
for the fans at the velodrome or at home.
00:18:54.370 --> 00:18:56.880
Let's take a quick look at a video from Discovery Sports,
00:18:56.880 --> 00:18:59.230
providing a little behind the scenes insights
00:18:59.230 --> 00:19:00.393
into this initiative.
00:19:01.280 --> 00:19:03.940
All the bikes are equipped with sensors
00:19:03.940 --> 00:19:08.940
to suck data: power, cadence, speed, heart rate.
00:19:09.078 --> 00:19:12.390
AWS is sucking that data from the riders
00:19:12.390 --> 00:19:14.170
and from the timekeepers
00:19:14.170 --> 00:19:17.700
and sends that data into a data lake in the U.S.
00:19:17.700 --> 00:19:19.480
and sends us back the data
00:19:19.480 --> 00:19:23.140
so that we can use that data in a smart way for LED screen,
00:19:23.140 --> 00:19:25.010
for mapping, for an app,
00:19:25.010 --> 00:19:27.600
which we're gonna launch from the second event
00:19:27.600 --> 00:19:28.980
for the TV graphics.
00:19:28.980 --> 00:19:31.060
All the data is going through algorithms
00:19:31.060 --> 00:19:35.110
so that we can do predictability halfway through the season
00:19:35.110 --> 00:19:39.040
and get a much better insight
00:19:39.040 --> 00:19:42.230
for the fans to understand how extreme is that sport.
00:19:42.230 --> 00:19:44.850 line:20%
This is the Formula One of cycling
00:19:44.850 --> 00:19:46.323 line:20%
that we are looking after.
00:19:48.350 --> 00:19:52.630
So we have about 12 of those cameras,
00:19:52.630 --> 00:19:54.800
plus 10 live on boards,
00:19:54.800 --> 00:19:58.770
plus a cable cam, which is flying over the velodrome
00:19:58.770 --> 00:20:00.780
operated by those gentlemen.
00:20:00.780 --> 00:20:02.760
And we operate the augmented reality.
00:20:02.760 --> 00:20:04.920
If you can see in the screen, we augment it,
00:20:04.920 --> 00:20:06.630
we operate the augmented reality,
00:20:06.630 --> 00:20:09.650
just like the Australian Open, for instance,
00:20:09.650 --> 00:20:10.990
from the cable cam.
00:20:10.990 --> 00:20:13.740
Really cool, never done before in cycling.
00:20:13.740 --> 00:20:15.290
(upbeat music)
00:20:15.290 --> 00:20:19.280
6:23, we are just 30 minutes before kickoff.
00:20:19.280 --> 00:20:22.523
We are here in the broadcast center.
00:20:23.520 --> 00:20:25.990
Discovery Sports Events is the host broadcaster
00:20:25.990 --> 00:20:27.500
of the entire competition.
00:20:27.500 --> 00:20:30.050
We organize it, we produce it,
00:20:30.050 --> 00:20:32.280
we broadcast it, we distribute it.
00:20:32.280 --> 00:20:34.600
It's quite a strong value chain.
00:20:34.600 --> 00:20:37.850
(gentle intense music)
00:20:41.750 --> 00:20:44.270
So as first steps to mobilize the data
00:20:44.270 --> 00:20:45.710
to drive real-time stats,
00:20:45.710 --> 00:20:48.010
Discovery Sports had to build a data lake,
00:20:48.010 --> 00:20:51.800
a solution enabling fast ingest, storage, cataloging,
00:20:51.800 --> 00:20:56.140
and quick access to real-time data for automated usability.
00:20:56.140 --> 00:20:58.920
The lake was foundational to harness the data.
00:20:58.920 --> 00:21:01.100
Built on AWS native services,
00:21:01.100 --> 00:21:03.320
Discovery Sports implemented a pipeline
00:21:03.320 --> 00:21:05.220
in which JSON documents
00:21:05.220 --> 00:21:07.980
were ingested at ten-second intervals.
00:21:07.980 --> 00:21:09.950
They were deduped and sanitized,
00:21:09.950 --> 00:21:12.880
and then indexed to one common structured schema
00:21:12.880 --> 00:21:15.590
so they could automate everything off of that.
00:21:15.590 --> 00:21:16.710
After this process,
00:21:16.710 --> 00:21:19.960
the index data itself is stored in DynamoDB,
00:21:19.960 --> 00:21:23.910
which is Amazon's fully managed NoSQL database.
00:21:23.910 --> 00:21:28.230
And then finally, leveraging the benefits of GraphQL,
00:21:28.230 --> 00:21:30.770
specific metrics were made readily available
00:21:30.770 --> 00:21:34.170
to mobile devices for dynamic rendering of live graphics,
00:21:34.170 --> 00:21:36.900
with more efficiency and simplicity
00:21:36.900 --> 00:21:39.693
than using conventional REST API approaches.
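The core of that pipeline, deduping each ten-second document, normalizing it to one schema, and writing it to DynamoDB, fits in a short sketch; the table name and field names are assumptions, since the actual schema isn't shown:

```python
import hashlib
import json
import boto3

table = boto3.resource("dynamodb").Table("rider-metrics")  # hypothetical table
seen = set()  # dedupe cache; a real pipeline would use a persistent store

def ingest(raw_json: str):
    """Dedupe a ten-second telemetry document and index it to one schema."""
    digest = hashlib.sha256(raw_json.encode()).hexdigest()
    if digest in seen:          # drop exact duplicates
        return
    seen.add(digest)
    doc = json.loads(raw_json)
    table.put_item(Item={
        # Common structured schema (field names are illustrative):
        "rider_id": str(doc["rider"]),
        "ts": int(doc["timestamp"]),
        "heart_rate": int(doc.get("hr", 0)),
        "cadence": int(doc.get("cadence", 0)),
        "power_watts": int(doc.get("power", 0)),
    })

# A GraphQL layer in front of this table (e.g. AWS AppSync, assumed here)
# then lets the fan app request exactly the metrics it renders, instead of
# whole REST payloads -- the efficiency gain mentioned above.
```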
00:21:41.860 --> 00:21:44.590
So what's next in Discovery Sport's journey
00:21:44.590 --> 00:21:45.693
with track cycling?
00:21:46.800 --> 00:21:51.800
So first, the MVP stats at launch were more mathematical.
00:21:51.850 --> 00:21:54.170
So now Discovery Sports is pushing forward
00:21:54.170 --> 00:21:56.620
to a computational modeling approach
00:21:56.620 --> 00:21:59.000
to generate insightful predictions,
00:21:59.000 --> 00:22:01.130
like the ability of your favorite cyclist
00:22:01.130 --> 00:22:02.920
to overtake another rider.
00:22:02.920 --> 00:22:06.240
Or perhaps the creation of a cyclist draft score
00:22:06.240 --> 00:22:09.433
to demonstrate efficiency of energy distribution.
00:22:10.770 --> 00:22:12.900
Discovery also has its sights on developing
00:22:12.900 --> 00:22:15.690
more interactive elements
00:22:15.690 --> 00:22:16.990
with their newly launched app,
00:22:16.990 --> 00:22:19.180
like real-time fan voting,
00:22:19.180 --> 00:22:21.815
further building on the need for fast data ingestion
00:22:21.815 --> 00:22:24.850
for millions of apps, mass processing,
00:22:24.850 --> 00:22:28.110
and fast publication back to those apps.
00:22:28.110 --> 00:22:29.757
And when we asked how fast, they said,
00:22:29.757 --> 00:22:31.537
"Well, it has to be faster than the rider
00:22:31.537 --> 00:22:34.247
"that has top speeds of 75 miles per hour."
00:22:35.460 --> 00:22:38.010
Also starting this year is a focus on remote,
00:22:38.010 --> 00:22:40.350
or excuse me, on remote live production,
00:22:40.350 --> 00:22:44.570
migrating onsite velodrome productions into AWS
00:22:44.570 --> 00:22:46.710
to gain the additional flexibility with cost,
00:22:46.710 --> 00:22:48.853
resiliency, and with quality.
00:22:51.160 --> 00:22:53.230
And another one of our great customers
00:22:53.230 --> 00:22:55.800
taking advantage of this need for flexibility,
00:22:55.800 --> 00:22:58.710
leveraging AWS for live production requirements,
00:22:58.710 --> 00:23:00.620
is CBS Sports Digital.
00:23:00.620 --> 00:23:02.460
Please join me in welcoming Steph Lone,
00:23:02.460 --> 00:23:05.290
SVP of CBS Sports Digital at ViacomCBS
00:23:05.290 --> 00:23:08.015
to share their ongoing production evolution.
00:23:08.015 --> 00:23:11.182
(audience applauding)
00:23:17.910 --> 00:23:18.743
All right.
00:23:20.220 --> 00:23:21.053
Hang on.
00:23:27.470 --> 00:23:28.630
Ah, there we go.
00:23:28.630 --> 00:23:29.660
All right, hey everybody.
00:23:29.660 --> 00:23:31.360
My name is Steph Lone.
00:23:31.360 --> 00:23:33.820
I head up engineering for CBS Sports Digital,
00:23:33.820 --> 00:23:36.350
which is a ViacomCBS brand.
00:23:36.350 --> 00:23:39.270
For those of you who don't know who ViacomCBS is,
00:23:39.270 --> 00:23:41.270
we are one of the largest global
00:23:41.270 --> 00:23:43.210
media and entertainment companies.
00:23:43.210 --> 00:23:47.073
We have popular brands like CBS, CBS News and Sports,
00:23:48.270 --> 00:23:53.270
Showtime, BET, Nick, MTV, Telefe, Network 10 in Australia.
00:23:55.590 --> 00:23:56.820
And this is just to name a few.
00:23:56.820 --> 00:24:00.020
We also have some world-class streaming services
00:24:00.020 --> 00:24:03.423
in our Paramount+ service as well as Pluto TV.
00:24:06.990 --> 00:24:09.400
So as part of the CBS brand,
00:24:09.400 --> 00:24:12.360
we at CBS Sports Digital like to think of ourselves
00:24:12.360 --> 00:24:14.410
as serving our sports customers
00:24:14.410 --> 00:24:16.690
from preps all the way to pros,
00:24:16.690 --> 00:24:20.570
starting with our high school sports brand in MaxPreps.
00:24:20.570 --> 00:24:24.030
We have a college subscription service in 247Sports.
00:24:24.030 --> 00:24:26.610
I think all of you probably know our flagship,
00:24:26.610 --> 00:24:31.610
cbssports.com site, mobile apps, and our OTT services.
00:24:31.700 --> 00:24:35.440
We have a 24/7 digital channel in CBS Sports HQ.
00:24:35.440 --> 00:24:38.420
And for those pros out there who really love sports,
00:24:38.420 --> 00:24:41.700
we have free and premium fantasy offerings,
00:24:41.700 --> 00:24:43.360
as well as some gambling news
00:24:43.360 --> 00:24:45.160
and information subscription services
00:24:45.160 --> 00:24:46.643
in our SportsLine brand.
00:24:47.800 --> 00:24:49.550
Across our whole portfolio,
00:24:49.550 --> 00:24:51.640
we fully migrated all of our brands
00:24:51.640 --> 00:24:54.380
and thousands of services into the cloud.
00:24:54.380 --> 00:24:56.650
We've done this via a few models,
00:24:56.650 --> 00:25:01.293
lift and shift, hybrid, and fully native migrations.
00:25:02.580 --> 00:25:05.040
But today I'm gonna focus on the work that we've done
00:25:05.040 --> 00:25:07.590
around our video workloads and our video workflows.
00:25:10.300 --> 00:25:12.010
So what does an average day look like
00:25:12.010 --> 00:25:14.173
for a CBS Sports Digital employee?
00:25:15.110 --> 00:25:19.320
We curate over 30,000 live events per year.
00:25:19.320 --> 00:25:23.570
So this means hundreds, up to 500 events per day,
00:25:23.570 --> 00:25:25.363
hundreds of them concurrently.
00:25:27.827 --> 00:25:29.340
Our workloads vary,
00:25:29.340 --> 00:25:31.650
so we could do anything from major events,
00:25:31.650 --> 00:25:34.710
like an NFL or an SEC football game,
00:25:34.710 --> 00:25:36.250
a long tail event,
00:25:36.250 --> 00:25:39.360
that would be something like a college lacrosse game,
00:25:39.360 --> 00:25:44.360
pop-up events like a press conference, or audio-only events.
00:25:44.810 --> 00:25:47.167
So we knew that we needed the flexibility
00:25:47.167 --> 00:25:50.700
to maximize and minimize the infrastructure of the cloud
00:25:50.700 --> 00:25:53.570
because we would not be able to be as efficient or agile
00:25:53.570 --> 00:25:56.093
with all that physical on premise hardware.
00:25:59.220 --> 00:26:01.920
So let's take a little look here into our journey
00:26:01.920 --> 00:26:03.730
and our evolution.
00:26:03.730 --> 00:26:07.130
In the past, especially with larger scale events,
00:26:07.130 --> 00:26:09.270
the encoder has typically been kind of
00:26:09.270 --> 00:26:10.950
that demarcation point
00:26:10.950 --> 00:26:13.383
where we start that journey into the cloud.
00:26:14.450 --> 00:26:17.580
We would have a production done either at the stadium
00:26:17.580 --> 00:26:20.580
or in a broadcast center, in a co-ward room.
00:26:20.580 --> 00:26:24.680
It would then be fed into an on-premise encoder.
00:26:24.680 --> 00:26:26.390
We'd use Direct Connect,
00:26:26.390 --> 00:26:28.300
push into a media store,
00:26:28.300 --> 00:26:30.500
and then deliver the rest of the experience
00:26:30.500 --> 00:26:32.963
all the way through to the consumer in the cloud.
00:26:34.290 --> 00:26:37.150
These efforts were initially very high touch,
00:26:37.150 --> 00:26:39.410
required a lot of planning and a lot of testing
00:26:39.410 --> 00:26:40.780
for execution day.
00:26:40.780 --> 00:26:42.720
And then depending on the event,
00:26:42.720 --> 00:26:46.990
we would spin up that infrastructure multiple hours
00:26:46.990 --> 00:26:48.370
or, for something like a Super Bowl,
00:26:48.370 --> 00:26:51.763
multiple weeks prior to the actual event happening.
00:26:53.770 --> 00:26:56.920
So several years ago, for the majority
00:26:56.920 --> 00:27:01.740
of our long-tail events, our audio and our pop-up events,
00:27:01.740 --> 00:27:04.310
we moved the demarcation point upstream.
00:27:04.310 --> 00:27:06.910
So now the production was still being handled
00:27:06.910 --> 00:27:09.010
either at a physical location,
00:27:09.010 --> 00:27:14.010
whether that was at a school, or in an on-premise facility,
00:27:16.160 --> 00:27:19.210
but we removed the physical hardware for the encoding
00:27:19.210 --> 00:27:21.400
and started using MediaLive services.
00:27:21.400 --> 00:27:22.920
Again, giving us that ability
00:27:22.920 --> 00:27:25.283
to scale to hundreds of concurrent events.
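As a sketch of that model, one cloud encoder per event, started and stopped on demand, here is what the control plane can look like with boto3, assuming an already-created AWS Elemental MediaLive channel; channel IDs and the region are hypothetical:

```python
import boto3

medialive = boto3.client("medialive", region_name="us-east-1")  # assumed region

def start_event_encoder(channel_id: str):
    """Bring a pre-built MediaLive channel on air for one event."""
    medialive.start_channel(ChannelId=channel_id)
    medialive.get_waiter("channel_running").wait(ChannelId=channel_id)

def stop_event_encoder(channel_id: str):
    """Take the channel down after the event so it stops accruing run time."""
    medialive.stop_channel(ChannelId=channel_id)

# Hundreds of concurrent events become hundreds of channel IDs managed by
# an orchestrator, rather than hundreds of physical encoders in a rack.
```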
00:27:28.880 --> 00:27:31.130
So if any of you are big soccer fans out there
00:27:31.130 --> 00:27:32.730
and have been watching what we've been doing
00:27:32.730 --> 00:27:34.470
over the course of the last year,
00:27:34.470 --> 00:27:35.940
you may have noticed that we've signed
00:27:35.940 --> 00:27:38.910
about a dozen separate soccer deals.
00:27:38.910 --> 00:27:42.780
CBS Sports and Paramount+ are now offering soccer fans
00:27:42.780 --> 00:27:47.340
a full portfolio year round of live and on-demand soccer.
00:27:47.340 --> 00:27:49.030
And in the past eight months alone,
00:27:49.030 --> 00:27:52.423
we've introduced nine new soccer competitions.
00:27:52.423 --> 00:27:56.270
This is more than 2,000 live soccer matches
00:27:56.270 --> 00:28:00.090
from over 145 national team associations
00:28:00.090 --> 00:28:02.003
across five different continents.
00:28:04.370 --> 00:28:05.650
But all these new deals
00:28:05.650 --> 00:28:08.300
brought just a few challenges with them.
00:28:08.300 --> 00:28:11.900
We had very little time between signing these deals
00:28:11.900 --> 00:28:14.330
and being fully operational.
00:28:14.330 --> 00:28:17.300
So I mentioned these prior events that we've been doing
00:28:17.300 --> 00:28:20.040
were produced on premise,
00:28:20.040 --> 00:28:23.120
whether they were in a broadcast center in a co-ward room,
00:28:23.120 --> 00:28:26.720
they would have a very traditional master control setup.
00:28:26.720 --> 00:28:29.200
Now with so many concurrent events coming in
00:28:29.200 --> 00:28:31.500
requiring that level of production
00:28:32.540 --> 00:28:34.960
on top of our regular sports schedule,
00:28:34.960 --> 00:28:36.480
we knew that we needed to move
00:28:36.480 --> 00:28:39.530
our master control operations into the cloud.
00:28:39.530 --> 00:28:42.340
But in order to do this, we had to solve for a few issues
00:28:42.340 --> 00:28:45.360
involving latency in this new hybrid environment.
00:28:45.360 --> 00:28:48.090
We needed to make it easy for our operators
00:28:48.090 --> 00:28:50.880
to have a different operator coverage model
00:28:50.880 --> 00:28:53.430
based on different tiers of games that we had.
00:28:53.430 --> 00:28:55.810
And we had to have a minimum physical setup
00:28:55.810 --> 00:28:59.283
ready for when competition started in August.
00:29:03.070 --> 00:29:04.600
So to solve our challenges,
00:29:04.600 --> 00:29:09.020
we worked very closely with Grass Valley and with AWS.
00:29:09.020 --> 00:29:11.860
We utilized the Grass Valley AMPP system,
00:29:11.860 --> 00:29:16.070
and we brought to the table our own AWS storage and compute.
00:29:16.070 --> 00:29:17.890
We worked with Grass Valley on everything
00:29:17.890 --> 00:29:21.930
from signal timing to video switching,
00:29:21.930 --> 00:29:23.640
frame-accurate SCTE insertion,
00:29:23.640 --> 00:29:26.240
all the way through to monitoring and observability.
00:29:29.980 --> 00:29:34.980
So I wanna take a deep dive here into our architecture.
00:29:37.090 --> 00:29:38.390
Sorry, this looks a little
00:29:39.260 --> 00:29:40.233
weird.
00:29:43.330 --> 00:29:44.310
There we go.
00:29:44.310 --> 00:29:46.277
I don't know, that got a little wonky (laughing).
00:29:47.590 --> 00:29:49.990
So this is an example of one of our soccer workflows
00:29:49.990 --> 00:29:52.940
that we needed to execute this past August.
00:29:52.940 --> 00:29:55.610
So let's start here, and we'll try to do this
00:29:55.610 --> 00:29:57.420
so both sides can see.
00:29:57.420 --> 00:29:58.900
Nope.
00:29:58.900 --> 00:29:59.803
How do I back up?
00:30:01.140 --> 00:30:02.090
Ah, there we go.
00:30:02.090 --> 00:30:04.570
All right, let's start here with contribution.
00:30:04.570 --> 00:30:06.800
So as you can see down here,
00:30:06.800 --> 00:30:10.810
we got primary and backup feeds via satellite and fiber
00:30:10.810 --> 00:30:15.110
into our facility in Stamford, Connecticut.
00:30:15.110 --> 00:30:17.630
We also had multiple studio shows.
00:30:17.630 --> 00:30:20.810
These studio shows could be in Stamford
00:30:20.810 --> 00:30:22.710
or in our New York broadcast center.
00:30:22.710 --> 00:30:27.110
We also had post-game shows in our CBS Sports HQ product
00:30:27.110 --> 00:30:30.010
that was at either our Fort Lauderdale facility
00:30:30.010 --> 00:30:32.000
or also in Stamford.
00:30:32.000 --> 00:30:35.893
So bringing in all these feeds into Stamford,
00:30:37.058 --> 00:30:39.083
we utilize the AMPP Edge box,
00:30:40.410 --> 00:30:45.310
pushed via AWS Direct Connect up into the AMPP ecosystem
00:30:45.310 --> 00:30:48.180
and the AMPP environment on top of our AWS storage
00:30:49.060 --> 00:30:50.360
and compute.
00:30:50.360 --> 00:30:53.060
I'm gonna go around the circle here a little bit.
00:30:53.060 --> 00:30:54.220
So if you can see here,
00:30:54.220 --> 00:30:57.110
we worked very closely with our corporate MAM team.
00:30:57.110 --> 00:31:00.550
They worked with us so that we could automate our promos,
00:31:00.550 --> 00:31:02.740
our slates, clips, bugs,
00:31:02.740 --> 00:31:06.050
and get them all in an Amazon S3 bucket.
00:31:06.050 --> 00:31:09.040
And then as a tertiary setup,
00:31:09.040 --> 00:31:12.050
we had a cloud-based backup system
00:31:12.050 --> 00:31:16.761
for the world feed to be able to come in via MediaConnect.
00:31:16.761 --> 00:31:17.720
And then we did this because
00:31:17.720 --> 00:31:20.900
this was a new experience for CBS Sports Digital
00:31:20.900 --> 00:31:23.350
in doing master control in the cloud.
00:31:23.350 --> 00:31:27.690
And we wanted to use our internal orchestration system,
00:31:27.690 --> 00:31:29.981
so that if there were any issues,
00:31:29.981 --> 00:31:31.490
we could use this as an emergency option
00:31:31.490 --> 00:31:34.020
to bypass cloud master control
00:31:34.020 --> 00:31:37.993
to continue to deliver a good experience for our consumers.
00:31:40.450 --> 00:31:41.800
So I'll try it on this side, too,
00:31:41.800 --> 00:31:43.700
give everybody a little...
00:31:43.700 --> 00:31:47.440
All right, so now we're inside the AMPP ecosystem.
00:31:47.440 --> 00:31:52.440
We were running this for each operator on one g4dn.12xlarge.
00:31:55.770 --> 00:31:58.410
Inside here they're doing their switching,
00:31:58.410 --> 00:32:02.120
their SCTE insertion, adding their clips and their bugs,
00:32:02.120 --> 00:32:06.340
and then we're distributing out via MediaConnect
00:32:06.340 --> 00:32:09.930
to our Paramount+ apps and experiences
00:32:09.930 --> 00:32:13.423
as well as our cbssports.com apps and experiences.
00:32:14.720 --> 00:32:15.553
At the same time,
00:32:15.553 --> 00:32:17.930
we're distributing back down on premise
00:32:17.930 --> 00:32:21.250
and back into the studios so that the producers there
00:32:21.250 --> 00:32:23.023
could see the on-air feeds.
00:32:25.030 --> 00:32:27.920
One of the future architectures here
00:32:27.920 --> 00:32:30.440
will include our simulcast distribution
00:32:30.440 --> 00:32:32.410
out to CBS Sports Network.
00:32:32.410 --> 00:32:34.720
We're through our final phases, I believe,
00:32:34.720 --> 00:32:37.570
of being able to pass the ancillary data that we need
00:32:37.570 --> 00:32:40.273
to get our closed captioning working properly here.
00:32:42.560 --> 00:32:45.320
All right, so honing in on the latency.
00:32:45.320 --> 00:32:47.850
So as I mentioned before,
00:32:47.850 --> 00:32:51.920
these prior events were produced either on-prem fully
00:32:51.920 --> 00:32:53.370
or in the cloud fully.
00:32:53.370 --> 00:32:56.230
So all that latency was controlled upstream.
00:32:56.230 --> 00:32:59.010
So we had to build a way
00:32:59.010 --> 00:33:02.200
to allow for the operators and the studio control rooms
00:33:02.200 --> 00:33:03.820
to be able to talk to each other
00:33:03.820 --> 00:33:05.360
so that they could be in sync
00:33:05.360 --> 00:33:07.080
since the operators were not going to be
00:33:07.080 --> 00:33:09.943
in the same physical location as the TVs.
00:33:14.860 --> 00:33:15.910
Using a combination
00:33:15.910 --> 00:33:19.350
of the Grass Valley Edge devices and AMPP
00:33:19.350 --> 00:33:22.830
allowed us to align all the feeds that were coming in,
00:33:22.830 --> 00:33:26.050
as well as align the feeds going across each node
00:33:26.050 --> 00:33:27.883
that all of the operators would see.
00:33:28.971 --> 00:33:32.190
We utilized the RIST protocol and we were able to get
00:33:32.190 --> 00:33:36.353
our on-prem to on-prem roundtrip in less than a second.
00:33:37.270 --> 00:33:39.133
So just to visualize this here,
00:33:40.400 --> 00:33:42.020
basically everything...
00:33:42.020 --> 00:33:44.690
Oops, I did it again.
00:33:44.690 --> 00:33:45.523
There we go.
00:33:46.650 --> 00:33:47.483
Now.
00:33:48.630 --> 00:33:51.450
All right everything prior to the AMPP Edge box
00:33:51.450 --> 00:33:53.063
is an SDI input.
00:33:54.150 --> 00:33:56.550
We translated that into RIST.
00:33:56.550 --> 00:33:59.780
Everything inside of the AWS ecosystem is RIST,
00:33:59.780 --> 00:34:01.870
comes back out, back onto prem,
00:34:01.870 --> 00:34:04.850
and then back out into SDI output.
00:34:04.850 --> 00:34:05.900
Again, we've timed this.
00:34:05.900 --> 00:34:07.883
It's around 750 milliseconds.
00:34:10.700 --> 00:34:13.970
All right, I'm gonna move into operations here.
00:34:13.970 --> 00:34:16.590
So the limited time we needed to make operations
00:34:16.590 --> 00:34:19.390
as easy and familiar to our operators as possible,
00:34:19.390 --> 00:34:22.363
because we didn't have time to train them on something new.
00:34:25.500 --> 00:34:27.240
We had multiple different workloads
00:34:27.240 --> 00:34:29.270
that we were trying to accommodate for.
00:34:29.270 --> 00:34:32.780
We had heavier production workloads where we staffed
00:34:32.780 --> 00:34:35.910
in your typical one-to-one ratio model,
00:34:35.910 --> 00:34:38.570
one operator to one match.
00:34:38.570 --> 00:34:41.240
For these matches, the operator had to be in sync
00:34:41.240 --> 00:34:43.400
with the countdowns coming from the studios
00:34:43.400 --> 00:34:45.569
to be able to switch between multiple feeds,
00:34:45.569 --> 00:34:49.263
insert SCTEs, insert graphics, bugs, et cetera.
00:34:50.260 --> 00:34:52.690
Whereas with our lighter productions,
00:34:52.690 --> 00:34:55.980
our requirements allowed us to utilize one operator
00:34:55.980 --> 00:34:58.060
for four matches.
00:34:58.060 --> 00:35:00.840
For these matches, they needed to be able to do switching
00:35:00.840 --> 00:35:05.270
between pregame, halftime, the event, and a post-game show,
00:35:05.270 --> 00:35:06.470
and insert the ad breaks.
00:35:06.470 --> 00:35:09.320
But we were allowing them to do this via dirty switching.
00:35:10.830 --> 00:35:12.420
We needed to make
00:35:12.420 --> 00:35:14.440
the operator panel as easy to use
00:35:14.440 --> 00:35:16.580
and familiar and intuitive as possible.
00:35:16.580 --> 00:35:20.540
And then lastly, we wanted to be able to use AWS
00:35:20.540 --> 00:35:23.140
to be flexible and cost efficient for us.
00:35:23.140 --> 00:35:25.470
So we wanted the operators to be able to spin up
00:35:25.470 --> 00:35:27.400
and spin down their environments
00:35:27.400 --> 00:35:28.950
with just the push of a button.
00:35:32.610 --> 00:35:34.870
So partnering with Grass Valley,
00:35:34.870 --> 00:35:39.210
they used Elgato's open APIs to incorporate our interface
00:35:39.210 --> 00:35:40.253
into Stream Deck.
00:35:41.430 --> 00:35:43.920
This allowed us to give the operators
00:35:43.920 --> 00:35:45.520
that familiar switching access
00:35:45.520 --> 00:35:48.310
into the Grass Valley AMPP system and environment
00:35:48.310 --> 00:35:50.600
at an affordable price point to us,
00:35:50.600 --> 00:35:53.910
whether or not those units were in a cloud master control
00:35:53.910 --> 00:35:55.400
in a physical facility,
00:35:55.400 --> 00:35:58.160
or if we just sent them to their homes.
00:35:58.160 --> 00:36:00.110
All we needed was just internet access.
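Grass Valley's actual integration isn't public in this talk; for a flavor of scripting an Elgato Stream Deck, here is a minimal sketch using the community python-elgato-streamdeck library, with the switching call stubbed out as a hypothetical HTTP request:

```python
import requests
from StreamDeck.DeviceManager import DeviceManager

AMPP_ENDPOINT = "https://example.internal/ampp/switch"  # hypothetical URL

def on_key(deck, key, pressed):
    """Map each physical button press to a source take on a hypothetical API."""
    if pressed:
        requests.post(AMPP_ENDPOINT, json={"source": key}, timeout=2)

deck = DeviceManager().enumerate()[0]  # first attached Stream Deck
deck.open()
deck.set_key_callback(on_key)
```

Because the panel only needs to reach an HTTP endpoint, the same unit works in a master control room or on an operator's home internet connection, which matches the flexibility described above.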
00:36:01.990 --> 00:36:04.410
So for the operators handling multiple matches,
00:36:04.410 --> 00:36:07.960
we had to also give them a way to drop custom SCTE payloads
00:36:07.960 --> 00:36:09.640
for our ad pod durations.
00:36:09.640 --> 00:36:12.990
And they were able to do this on either a game-by-game basis
00:36:12.990 --> 00:36:15.943
or across all four games at once.
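The AMPP API used for this isn't shown; as an illustration of the same idea with AWS's own tooling, this sketch drops an immediate SCTE-35 splice insert into one, or all four, hypothetical MediaLive channels:

```python
import time
import boto3

medialive = boto3.client("medialive")
MATCH_CHANNELS = ["111111", "222222", "333333", "444444"]  # hypothetical IDs

def insert_ad_break(channel_id: str, duration_seconds: int):
    """Schedule an immediate SCTE-35 splice insert on one channel."""
    medialive.batch_update_schedule(
        ChannelId=channel_id,
        Creates={"ScheduleActions": [{
            "ActionName": f"ad-break-{int(time.time())}",
            "ScheduleActionStartSettings": {
                "ImmediateModeScheduleActionStartSettings": {}
            },
            "ScheduleActionSettings": {
                "Scte35SpliceInsertSettings": {
                    "SpliceEventId": int(time.time()) % 2**31,
                    # MediaLive expresses duration in 90 kHz clock ticks.
                    "Duration": duration_seconds * 90000,
                }
            },
        }]},
    )

def insert_ad_break_all(duration_seconds: int):
    """The 'all four games at once' button."""
    for channel_id in MATCH_CHANNELS:
        insert_ad_break(channel_id, duration_seconds)
```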
00:36:17.240 --> 00:36:18.963
So let's take a look here at this.
00:36:19.890 --> 00:36:21.040
All right.
00:36:21.040 --> 00:36:22.450
Up here at the top what you'll see
00:36:22.450 --> 00:36:26.790
are your four inbound matches that an operator would see
00:36:26.790 --> 00:36:29.840
on their panel that they were responsible for.
00:36:29.840 --> 00:36:31.933
This would be an example studio feed.
00:36:32.970 --> 00:36:37.970
This here is our HQ post programming, and here's the slate.
00:36:38.220 --> 00:36:39.100
As you can see right now,
00:36:39.100 --> 00:36:41.675
all four slates were cut and moved over, so
00:36:41.675 --> 00:36:46.083
all four outputs, all four matches, were out on slate,
00:36:47.020 --> 00:36:48.930
but up here in the top right corner what we have
00:36:48.930 --> 00:36:50.890
is custom ad insertion,
00:36:50.890 --> 00:36:54.580
that you could either do all four matches at the same time
00:36:54.580 --> 00:36:56.713
or individually match by match.
00:36:57.900 --> 00:37:00.536
Over here, sorry, this,
00:37:00.536 --> 00:37:02.087
I'm not very good with this pointer (laughing).
00:37:03.330 --> 00:37:05.250
Over here on the left-hand side of the,
00:37:05.250 --> 00:37:07.070
what looks like a control panel
00:37:07.070 --> 00:37:09.730
is where each one of these four matches
00:37:09.730 --> 00:37:11.970
could be switched individually,
00:37:11.970 --> 00:37:14.970
or you could switch all four matches altogether,
00:37:14.970 --> 00:37:16.260
and you could move them, for example,
00:37:16.260 --> 00:37:18.720
to slate by just one push of a button.
00:37:18.720 --> 00:37:21.549
So again, trying to make this easy and familiar,
00:37:21.549 --> 00:37:24.640
less training, less time to ramp up
00:37:25.660 --> 00:37:28.113
all of our operators and get us operational.
00:37:30.630 --> 00:37:32.480
And lastly, I'll talk just quickly here
00:37:32.480 --> 00:37:34.330
about supply chain issues.
00:37:34.330 --> 00:37:37.110
So during our facilities planning, we had some setbacks,
00:37:37.110 --> 00:37:38.210
'cause everybody knows about
00:37:38.210 --> 00:37:40.472
the well-known supply shortages going on,
00:37:40.472 --> 00:37:43.140
but we were able to actually launch an entire
00:37:43.140 --> 00:37:46.790
cloud-based workflow before we even got our furniture.
00:37:46.790 --> 00:37:48.780
So we launched in August.
00:37:48.780 --> 00:37:50.400
And I don't know if you guys can see this here,
00:37:50.400 --> 00:37:53.530
but these are us on cardboard tables
00:37:53.530 --> 00:37:56.760
in the middle of our cloud facility.
00:37:56.760 --> 00:37:58.050
We've done multiple iterations
00:37:58.050 --> 00:37:59.380
since August of our workflows,
00:37:59.380 --> 00:38:00.960
and I'm happy to say we now actually have
00:38:00.960 --> 00:38:03.020
our permanent furniture in,
00:38:03.020 --> 00:38:04.943
so we are fully up and operational.
00:38:06.310 --> 00:38:07.683
Okay, so the results.
00:38:08.970 --> 00:38:11.040
They've been outstanding.
00:38:11.040 --> 00:38:16.040
We have been able to deliver multiple production workloads
00:38:16.220 --> 00:38:18.690
and multiple operating models
00:38:18.690 --> 00:38:22.040
in a very short period of time to the business.
00:38:22.040 --> 00:38:25.290
Our operators have the efficiency and the flexibility
00:38:25.290 --> 00:38:28.610
to spin up and down their work environments.
00:38:28.610 --> 00:38:31.570
This is leading to a much larger cost savings
00:38:31.570 --> 00:38:33.120
for the business as a whole.
00:38:33.120 --> 00:38:36.060
And we were able to complete this project on time
00:38:36.060 --> 00:38:40.030
at a fraction of the normal budget for a project like this.
00:38:40.030 --> 00:38:42.683
So think 75% savings and more.
00:38:45.510 --> 00:38:46.893
So where are we going next?
00:38:49.160 --> 00:38:50.940
So for existing workflows,
00:38:50.940 --> 00:38:52.520
we have now created a template
00:38:52.520 --> 00:38:54.380
of repeatability for the future.
00:38:54.380 --> 00:38:56.920
Now, when a new league
00:38:56.920 --> 00:38:58.370
or a new deal comes on board,
00:38:58.370 --> 00:39:01.870
we're able to spin them up in minutes or hours.
00:39:01.870 --> 00:39:03.890
And if we need to change an operator model,
00:39:03.890 --> 00:39:07.513
we can spin it up in days, not weeks or months.
00:39:09.760 --> 00:39:12.180
We're also trying to take this success
00:39:12.180 --> 00:39:15.130
and push this forward with our linear channels.
00:39:15.130 --> 00:39:17.840
So we're expanding that cloud-first strategy
00:39:17.840 --> 00:39:19.957
to the linear channels as well.
00:39:19.957 --> 00:39:21.330
And we're moving upstream,
00:39:21.330 --> 00:39:24.550
so we're working on playout, and we'll be working on MAM.
00:39:24.550 --> 00:39:27.783
And then lastly, live studio production control rooms.
00:39:30.470 --> 00:39:32.030
I wanna say one quick, big thank you
00:39:32.030 --> 00:39:33.710
to somebody who couldn't be here today,
00:39:33.710 --> 00:39:34.980
and that's Dan Laterra.
00:39:34.980 --> 00:39:36.990
He was the mastermind behind a lot of this,
00:39:36.990 --> 00:39:38.530
and unfortunately, with scheduling conflicts,
00:39:38.530 --> 00:39:40.600
he couldn't come and present as well with me,
00:39:40.600 --> 00:39:42.700
but I wanted to give him a lot of kudos,
00:39:42.700 --> 00:39:44.520
and he did a lotta nights and weekends
00:39:44.520 --> 00:39:46.560
and working very closely with the Grass Valley team
00:39:46.560 --> 00:39:49.620
to get this whole soccer workflow up and running.
00:39:49.620 --> 00:39:51.463
So, Jamie, I'll turn it back to you.
00:39:53.266 --> 00:39:58.266
(audience applauding)
Thanks.
00:39:59.830 --> 00:40:00.830
Thank you, Steph.
00:40:00.830 --> 00:40:02.900
And finally, to wrap up this session,
00:40:02.900 --> 00:40:04.770
please welcome Sydney Lovely.
00:40:04.770 --> 00:40:06.970
Sydney's a CTO for Grass Valley,
00:40:06.970 --> 00:40:09.640
and he's here to describe how Grass Valley AMPP
00:40:09.640 --> 00:40:13.040
is providing AWS customers, like ViacomCBS,
00:40:13.040 --> 00:40:15.532
with the flexibility necessary to transform
00:40:15.532 --> 00:40:16.888
their media workflows.
00:40:16.888 --> 00:40:19.130
Sydney.
All right, thanks Jamie.
00:40:19.130 --> 00:40:19.963
Well, thanks everyone,
00:40:19.963 --> 00:40:22.350
and a special thanks to the ViacomCBS team.
00:40:22.350 --> 00:40:24.130
They've been great partners in this whole journey.
00:40:24.130 --> 00:40:27.450
So, we started the journey about four years ago.
00:40:27.450 --> 00:40:30.280
We started working in partnership with the AWS team,
00:40:30.280 --> 00:40:32.930
looking at how we can unlock value of the cloud
00:40:32.930 --> 00:40:33.913
for our customers.
00:40:36.080 --> 00:40:37.820
A little bit about Grass Valley to begin with.
00:40:37.820 --> 00:40:39.730
So Grass Valley,
00:40:39.730 --> 00:40:41.900
basically, we view ourselves as the trusted partner
00:40:41.900 --> 00:40:43.100
for the media industry.
00:40:43.100 --> 00:40:45.680
We've been with the industry for over 60 years,
00:40:45.680 --> 00:40:46.580
and we've led the industry
00:40:46.580 --> 00:40:49.060
through many different technology journeys.
00:40:49.060 --> 00:40:51.330
Prior to going to a cloud-based stuff,
00:40:51.330 --> 00:40:53.250
a lot of that was around AIMS and IP.
00:40:53.250 --> 00:40:55.050
And so we just see this as another evolution
00:40:55.050 --> 00:40:55.913
in the industry.
00:40:58.620 --> 00:41:01.520
If you look at a traditional approach to media production,
00:41:01.520 --> 00:41:03.100
it was really a kind of a combination
00:41:03.100 --> 00:41:04.670
of purpose-built hardware
00:41:04.670 --> 00:41:07.610
and enterprise class software solutions.
00:41:07.610 --> 00:41:09.640
Complete solutions really required integrating
00:41:09.640 --> 00:41:11.710
from many different vendors,
00:41:11.710 --> 00:41:15.560
complex professional services installations and so on.
00:41:15.560 --> 00:41:16.870
So as we looked at this whole thing,
00:41:16.870 --> 00:41:18.810
we felt we were sorta uniquely positioned to say,
00:41:18.810 --> 00:41:20.490
how can we transform more and more
00:41:20.490 --> 00:41:22.780
of the value chain for our customers
00:41:22.780 --> 00:41:24.900
into the cloud and take advantage of those benefits?
00:41:24.900 --> 00:41:26.900
And not just, say, playout, for example.
00:41:29.370 --> 00:41:31.200
So this is a slide I put together
00:41:31.200 --> 00:41:34.310
about three or four years ago for an internal presentation,
00:41:34.310 --> 00:41:36.020
which kinda helped explain to some of our execs
00:41:36.020 --> 00:41:38.860
around what the economics of cloud computing
00:41:38.860 --> 00:41:40.700
and cloud adoption are for our customers.
00:41:40.700 --> 00:41:42.260
So some of these things are a little bit universal
00:41:42.260 --> 00:41:43.680
for cloud in general,
00:41:43.680 --> 00:41:45.410
and some of them are a little bit specific
00:41:45.410 --> 00:41:46.820
to the media industry.
00:41:46.820 --> 00:41:48.070
So if you look on the upper right here,
00:41:48.070 --> 00:41:51.290
I'm talking about cloud adoption drivers in general.
00:41:51.290 --> 00:41:53.830
So the first one is the need for elasticity.
00:41:53.830 --> 00:41:56.350
So elasticity, being able to spin things up and down.
00:41:56.350 --> 00:41:58.400
So if you have highly bursty workloads,
00:41:58.400 --> 00:42:01.470
elasticity is a great fit for it, always on workloads,
00:42:01.470 --> 00:42:02.490
not as much.
00:42:02.490 --> 00:42:04.610
Other things that are important as well to customers
00:42:04.610 --> 00:42:07.100
is managing geographic diversity.
00:42:07.100 --> 00:42:08.500
One could argue in the last two years,
00:42:08.500 --> 00:42:10.340
that should be number one on the list now,
00:42:10.340 --> 00:42:12.160
but it's top of mind for our customers,
00:42:12.160 --> 00:42:14.210
and then other things like managing obsolescence,
00:42:14.210 --> 00:42:16.630
labor, facilities, and so on.
00:42:16.630 --> 00:42:18.540
And when we look at the sort of three verticals
00:42:18.540 --> 00:42:20.440
that we serve in the ecosystem,
00:42:20.440 --> 00:42:24.000
from playout through, say, post-production and live,
00:42:24.000 --> 00:42:25.930
we kind of built this sort of,
00:42:25.930 --> 00:42:27.360
this little comparison chart here
00:42:27.360 --> 00:42:28.690
that helps people understand it.
00:42:28.690 --> 00:42:30.330
So as an industry,
00:42:30.330 --> 00:42:33.240
we started the journey moving to cloud with playout,
00:42:33.240 --> 00:42:35.990
which was great and it unlocked some value,
00:42:35.990 --> 00:42:38.630
but as we looked at it, we said, well,
00:42:38.630 --> 00:42:40.680
it really is not unlocking that much value.
00:42:40.680 --> 00:42:42.330
It's a 24/7 workload.
00:42:42.330 --> 00:42:44.270
There are certain efficiencies you can unlock.
00:42:44.270 --> 00:42:46.010
But if you contrast that with live production,
00:42:46.010 --> 00:42:48.210
where you build these large complex systems
00:42:48.210 --> 00:42:50.900
that get utilized a very small percentage of the time,
00:42:50.900 --> 00:42:53.460
we could unlock far more value for our customers
00:42:53.460 --> 00:42:55.730
if we could solve the live problem.
00:42:55.730 --> 00:42:57.740
Now, of course, the issue for us is that
00:42:57.740 --> 00:43:00.710
live is much more demanding than, say, pre-produced playout.
00:43:00.710 --> 00:43:03.420
So we had to solve some very difficult technical challenges
00:43:03.420 --> 00:43:04.740
in order to unlock live.
00:43:04.740 --> 00:43:07.680
But our thesis was, if we built a platform for live,
00:43:07.680 --> 00:43:09.080
we could do everything else.
00:43:10.690 --> 00:43:13.290
So this was when we started out re-inventing,
00:43:13.290 --> 00:43:16.280
really, the entire broadcast ecosystem with GV AMPP.
00:43:16.280 --> 00:43:17.510
So it was a common platform,
00:43:17.510 --> 00:43:20.740
really allowing customers to do the entire media workflow,
00:43:20.740 --> 00:43:24.820
from live production through ingest, edit, archive,
00:43:24.820 --> 00:43:27.410
play to air, traffic, billing, the whole nine yards.
00:43:27.410 --> 00:43:28.730
We started out and launched
00:43:28.730 --> 00:43:31.010
in the live space very deliberately,
00:43:31.010 --> 00:43:33.197
partly because we thought there was more value,
00:43:33.197 --> 00:43:35.570
and of course also part of it's because at Grass Valley
00:43:35.570 --> 00:43:37.570
we like to solve hard problems for our customers.
00:43:37.570 --> 00:43:39.420
And live is the hardest one to solve.
00:43:41.420 --> 00:43:43.830
So one of the benefits we were trying to address
00:43:43.830 --> 00:43:45.960
for our customers was really
00:43:45.960 --> 00:43:47.810
unlocking and modernizing workflows.
00:43:47.810 --> 00:43:49.470
So I won't go through all of this,
00:43:49.470 --> 00:43:50.960
but it gives you a sense of
00:43:50.960 --> 00:43:52.380
pain points customers struggle with.
00:43:52.380 --> 00:43:55.950
So, first is having to have all of these disparate platforms
00:43:55.950 --> 00:43:57.100
that they have to glue together.
00:43:57.100 --> 00:43:59.430
It's very complex, very, very brittle,
00:43:59.430 --> 00:44:01.010
very time-consuming.
00:44:01.010 --> 00:44:02.270
Meeting the demands of live production
00:44:02.270 --> 00:44:04.190
that I already talked about.
00:44:04.190 --> 00:44:06.750
Delivering truly agile resource management.
00:44:06.750 --> 00:44:08.250
So this isn't a lift and shift.
00:44:08.250 --> 00:44:09.840
We started the process years ago.
00:44:09.840 --> 00:44:11.600
We realized that's just not gonna cut it.
00:44:11.600 --> 00:44:13.870
We could leverage a lot of our core competencies,
00:44:13.870 --> 00:44:15.619
but a lot of the platform
00:44:15.619 --> 00:44:17.090
had to be built from scratch for the cloud,
00:44:17.090 --> 00:44:19.000
and then universal deployment of workloads.
00:44:19.000 --> 00:44:21.290
Customers need to deploy workloads on-prem
00:44:21.290 --> 00:44:23.880
or in public cloud or in multiple cloud environments.
00:44:23.880 --> 00:44:25.580
They really need that level of agility
00:44:25.580 --> 00:44:28.360
all from one common experience.
00:44:28.360 --> 00:44:30.250
And then kinda on the right-hand side here
00:44:30.250 --> 00:44:31.760
is our point of view of pain points
00:44:31.760 --> 00:44:33.260
we're trying to eliminate for customers.
00:44:33.260 --> 00:44:35.420
So, completely eliminating the notion
00:44:35.420 --> 00:44:36.670
of software commissioning.
00:44:36.670 --> 00:44:39.290
This is something that you can spin up in a matter of minutes.
00:44:39.290 --> 00:44:40.790
And a lot of that had to do with cloud.
00:44:40.790 --> 00:44:42.620
A lot of it also had to do with moving everything
00:44:42.620 --> 00:44:45.103
to HTML5 and web app based development.
00:44:46.220 --> 00:44:47.700
From there, you have to look at
00:44:47.700 --> 00:44:49.080
the pain point customers have
00:44:49.080 --> 00:44:52.180
around big bang software upgrades and rollbacks,
00:44:52.180 --> 00:44:53.500
huge pain point in the industry,
00:44:53.500 --> 00:44:54.750
which we can address entirely
00:44:54.750 --> 00:44:57.240
in a multi-tenanted environment in the cloud.
00:44:57.240 --> 00:44:59.030
Eliminating the customization bottleneck,
00:44:59.030 --> 00:45:01.210
in that many of these systems are proprietary
00:45:01.210 --> 00:45:03.445
and require customization from the core vendor.
00:45:03.445 --> 00:45:06.730
We've built the whole system with public immutable APIs.
00:45:06.730 --> 00:45:08.746
So our customers can build on top of it
00:45:08.746 --> 00:45:10.570
and partners can build on top of it alike.
00:45:10.570 --> 00:45:13.790
And then finally, modernizing the entire support paradigm.
00:45:13.790 --> 00:45:15.620
Gone are the days where customers have time
00:45:15.620 --> 00:45:18.202
to ring up our Grass Valley support team,
00:45:18.202 --> 00:45:21.100
have them email in or FTP in the logs.
00:45:21.100 --> 00:45:23.600
Oh, we missed something in the logs, send us something else.
00:45:23.600 --> 00:45:26.340
It's entirely an actively monitored system
00:45:26.340 --> 00:45:28.410
with a DevOps team standing behind it
00:45:28.410 --> 00:45:29.710
and support through Slack.
00:45:31.930 --> 00:45:34.830
And again, CBS has been a great partner in the journey,
00:45:34.830 --> 00:45:36.930
helping us really refine this,
00:45:36.930 --> 00:45:38.090
bringing their expertise
00:45:38.090 --> 00:45:40.000
and their innovation to the platform.
00:45:40.000 --> 00:45:42.280
Today GV AMPP's live with over 25 customers
00:45:42.280 --> 00:45:43.700
around the world.
00:45:43.700 --> 00:45:46.440
We've used it for various events ranging from
00:45:46.440 --> 00:45:48.370
streaming corporate affairs all the way up
00:45:48.370 --> 00:45:49.970
through the Olympics.
00:45:49.970 --> 00:45:53.720
And here you're seeing one of our other customers,
00:45:53.720 --> 00:45:55.890
which is Jeff Butler from EA Sports,
00:45:55.890 --> 00:45:57.730
doing the FIFA 21 production.
00:45:57.730 --> 00:46:00.960
So, this is an example of those attached control surfaces,
00:46:00.960 --> 00:46:02.340
a little higher end than Stream Deck.
00:46:02.340 --> 00:46:04.180
It's a full on production switcher surface
00:46:04.180 --> 00:46:06.450
attached directly to a complete virtualized environment
00:46:06.450 --> 00:46:07.283
in the cloud.
00:46:08.150 --> 00:46:11.240
So with that said, I'll just say thanks to the AWS team.
00:46:11.240 --> 00:46:12.320
It's been great to partner with you guys,
00:46:12.320 --> 00:46:14.470
and special thanks to the CBS team as well.
00:46:15.382 --> 00:46:20.382
Thanks.
(audience applauding)
00:46:22.580 --> 00:46:25.380
So I personally wanna thank everyone for joining us today.
00:46:25.380 --> 00:46:28.310
I hope you were able to gain some valuable insights
00:46:28.310 --> 00:46:30.640
from this session and from our valued customers
00:46:30.640 --> 00:46:32.950
who were very generous to us
00:46:32.950 --> 00:46:35.201
in sharing their stories here at re:Invent.
00:46:35.201 --> 00:46:37.230
One request that we have is that
00:46:37.230 --> 00:46:38.960
if you could please take our survey,
00:46:38.960 --> 00:46:40.860
we'd like to capture your feedback.
00:46:40.860 --> 00:46:42.850
Our goal is always to continually improve.
00:46:42.850 --> 00:46:44.580
So I know I greatly appreciate it.
00:46:44.580 --> 00:46:47.140
And if anyone has any questions,
00:46:47.140 --> 00:46:50.550
Sydney and Steph and I will be over at the M&E lounge
00:46:50.550 --> 00:46:51.540
for the next hour.
00:46:51.540 --> 00:46:55.960
So please feel free to stop by and have a chat with us.
00:46:55.960 --> 00:46:57.660
Other than that, thank you very much
00:46:57.660 --> 00:46:59.560
and enjoy the rest of your week reinventing.
00:46:59.560 --> 00:47:00.675
Take care.
00:47:00.675 --> 00:47:03.203
(audience applauding)
#!/bin/bash
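# Bootstrap a new MacBook Pro: update Homebrew, reset shell profiles, then
# install dev tooling (git, Node via nvm, Python via pyenv, AWS CLI/CDK/SAM, k8s CLIs).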
RED='\033[0;31m'
YELLOW='\033[1;33m'
NC='\033[0m'
function _logger() {
  # -e so the ANSI color escapes actually render
  echo -e "$(date) ${YELLOW}[*] $* ${NC}"
}
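# Install a Homebrew formula if it's missing, otherwise upgrade it in place.
# HOMEBREW_NO_AUTO_UPDATE=1 skips brew's implicit self-update on every call.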
function brew_install_or_upgrade {
  if brew ls --versions "$1" >/dev/null; then
    _logger "upgrade $1"
    HOMEBREW_NO_AUTO_UPDATE=1 brew upgrade "$1"
  else
    _logger "install $1"
    HOMEBREW_NO_AUTO_UPDATE=1 brew install "$1"
  fi
}
# Cask (GUI app) variant. Note the flag is --cask; "-- cask" would be parsed
# as end-of-options plus a formula named "cask".
function brew_install_or_upgrade_cask {
  if brew ls --versions "$1" >/dev/null; then
    _logger "upgrade $1"
    HOMEBREW_NO_AUTO_UPDATE=1 brew upgrade --cask "$1"
  else
    _logger "install $1"
    HOMEBREW_NO_AUTO_UPDATE=1 brew install --cask "$1"
  fi
}
_logger "[+] Update brew"
brew update
brew upgrade
brew cleanup -s
# now run diagnostics
brew doctor
brew missing
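# WARNING: the next two lines truncate any existing ~/.bash_profile and ~/.zshrc
# before re-populating them below.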
true > ~/.bash_profile
true > ~/.zshrc
echo 'export PATH="/usr/local/sbin:$PATH"' >> ~/.zshrc
echo 'export PATH="/usr/local/sbin:$PATH"' >> ~/.bash_profile
source ~/.zshrc
source ~/.bash_profile
_logger "[+] Installing tools"
brew_install_or_upgrade git
git --version
brew_install_or_upgrade httpie
http --version
brew_install_or_upgrade gnu-sed
brew_install_or_upgrade go-task/tap/go-task
brew_install_or_upgrade gitlab-runner
brew_install_or_upgrade ffmpeg
brew_install_or_upgrade plantuml
brew_install_or_upgrade youtube-dl
brew_install_or_upgrade ocrmypdf
brew_install_or_upgrade tesseract-lang
brew_install_or_upgrade_cask sourcetree
brew_install_or_upgrade_cask rectangle
brew_install_or_upgrade_cask drawio
brew_install_or_upgrade_cask alfred
brew_install_or_upgrade_cask transmit
brew_install_or_upgrade_cask vlc
brew_install_or_upgrade_cask microsoft-teams
brew_install_or_upgrade_cask deezer
brew_install_or_upgrade_cask visual-studio-code
brew_install_or_upgrade_cask duet
_logger "[+] Installing code tools : nodejs"
#brew uninstall --ignore-dependencies node
#brew uninstall --force node
brew_install_or_upgrade nvm
mkdir -p ~/.nvm  # -p: don't fail if the directory already exists on re-runs
echo 'export NVM_DIR=~/.nvm' >> ~/.bash_profile
echo 'source $(brew --prefix nvm)/nvm.sh' >> ~/.bash_profile
source ~/.bash_profile
echo 'export NVM_DIR=~/.nvm' >> ~/.zprofile
echo 'source $(brew --prefix nvm)/nvm.sh' >> ~/.zprofile
source ~/.zprofile
nvm install 16.3.0
nvm use 16.3.0
node -v
npm -v
_logger "[+] Installing code tools : Python"
brew_install_or_upgrade python
brew unlink python && brew link python
brew_install_or_upgrade pyenv
brew_install_or_upgrade pipenv
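# Wire pyenv into both bash (~/.profile, ~/.bashrc) and zsh (~/.zprofile, ~/.zshrc).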
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.profile
echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.profile
echo 'eval "$(pyenv init --path)"' >> ~/.profile
echo 'if [ -n "$PS1" -a -n "$BASH_VERSION" ]; then source ~/.bashrc; fi' >> ~/.profile
echo 'eval "$(pyenv init -)"' >> ~/.bashrc
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.zshrc
echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.zshrc
echo 'eval "$(pyenv init --path)"' >> ~/.zprofile
echo 'eval "$(pyenv init -)"' >> ~/.zshrc
source ~/.bash_profile
source ~/.profile
source ~/.zshrc
source ~/.zprofile
pyenv install --skip-existing 3.10.0
pyenv global 3.10.0
pyenv versions
python --version
_logger "[+] Installing aws tools"
brew tap aws/tap
brew_install_or_upgrade awscli
aws --version
brew_install_or_upgrade aws-cdk
cdk --version
brew_install_or_upgrade aws-sam-cli
sam --version
#brew_install_or_upgrade aws-cfn-tools
_logger "[+] Installing k8s tools"
brew_install_or_upgrade kubernetes-cli
kubectl version --client  # --client: avoid an error when no cluster is configured yet
brew_install_or_upgrade aws-iam-authenticator
brew tap weaveworks/tap
brew_install_or_upgrade weaveworks/tap/eksctl
eksctl version
_logger "[+] cleanup"
brew cleanup