What’s in a Network, Anyway?

SPEAKER: This is a story about networking.

[MUSIC PLAYING]

Once upon a time, when you wanted to set up a private network or a subnet to host your workloads in the cloud, you had to construct a Virtual Private Cloud, or VPC, confined to just one region. This was fine as long as you were only managing two or three. But scaling up became really hard, because you needed VPN tunnels between regions to keep your traffic secure, and that meant longer wait times. At Google, we imagined a single global VPC, where VMs and regional subnets could communicate privately across the world, all without going through a VPN. And then we built it.

This global VPC approach
allowed for simplicity at scale, letting businesses add subnet after subnet without ending up with a network too unwieldy to manage. For network professionals who have lived in both worlds, the difference is, well, noticeable. Just ask anybody who has spent time stitching together VPN gateway after VPN gateway. Ask anybody who has had to pinpoint the source of a network problem in a system of localized VPCs when something breaks. Or anybody who has had to manage security policies across those different VPCs. Or anyone trying to maintain clear segregation of their engineers’ access to each network. When you use Google’s network, you reap all the benefits of scaling without any of these drawbacks.

So, what makes
this all possible? Originally, we built our own network because, quite simply, we couldn’t buy the scale and performance we needed at any price. So what did we have to do differently, when designing our own network, to solve for that scale and performance? A big part of the answer: tail latency.

When you’re doing any kind
of distributed computation, you don’t just care about how fast you can get the data most of the time. You care about what happens to the speed at the tail end of the distribution. To understand why, suppose you have a computation that starts at one server and doesn’t finish until it has collected data from 1,000 more servers. If even one of those servers returns your query significantly later than the rest, the speed of your computation is only as fast as the slowest server. But at Google, by guaranteeing that the worst case is very close to the best case, we design our network for what matters most: predictability.

It’s this network and
the philosophy driving it that allows us to treat your hundreds, sometimes thousands, of servers as one unit of computation: one collective brain talking to itself across locations around the globe.

And what does this mean for you? When designing services for your customers, you get to assume low latency. In large part, this is thanks to best-in-class global load-balancing services, which support more than one million queries per second.

Traffic enters Cloud
Global Load Balancing through more than 80 distinct global load-balancing locations, meaning your traffic primarily stays on Google’s fast private network backbone rather than on public internet service providers’ networks. And naturally, this design can accommodate huge, unexpected spikes in traffic. So when your app becomes a crowd favorite overnight, our network intelligently directs traffic to VM instances that can be configured to spin up automatically in a matter of seconds. In other words, we’re not just hoping your business will grow; we’re counting on it.

Of course, we also know that
hoping your business will grow, we’re counting on it. Of course, we also know that
one size doesn’t fit all, and that there will
always be a few niche cases where you might
still want to set up a separate regional VPC. So we built that
option right in. With Google’s network,
you can take advantage of our global VPC
when it makes sense, and go the traditional
route when you need to. If you’re ready to
see for yourself, head to cloud.google.com. But the story doesn’t end there. There’s so much more we
could say about networks. Actually, we created
a whole series, Cloud Networking End-To-End,
dedicated to answering all the questions you’ve ever
had about networking, and some you didn’t. Ever wondered how to
set up your own VPC on Google Cloud, hybrid
connectivity, firewall rules, or external load balancers? So be sure to check out Cloud
Networking End-To-End, as well as a bunch of
other great content on the Google Cloud
Platform YouTube channel. [MUSIC PLAYING]
