CenturyLink Outage: Is Human Error the Cause? No Known Time for Resolution

CenturyLink's Internet network is down.

Where are your five nines now? CenturyLink’s nationwide outage affects millions

SUMMARY:

If you’re one of CenturyLink’s 5.8 million broadband subscribers, you’re probably fuming because your service is out. Such nationwide outages are rare, but that doesn’t make it any less painful for customers.

Most cloud outages are caused by human error, and there is a good chance that CenturyLink's outage is too: a failure to detect the problem, a failure to identify its source, or possibly the wrong action taken to fix it. We'll see what the postmortem reveals about the events that caused the outage.

What's Google Fiber's 1 gigabit connectivity good for? A load test for planning the future

A friend asked me what is next for Google's data center group. They design and build their own servers, and they design, build, and run their own data centers. Google has tackled the network with a software defined network (SDN). You can gather a bit from job postings, but nothing really interesting popped up when I looked. What I did notice is a data center expansion that few know about, but I'll wait and see when others discover it rather than write a post on it. Sometimes it is better to share insights with friends than to put a blog post up.

So, what is Google's next big thing? I was reading GigaOm's Stacey Higginbotham's posts on Google Fiber. Stacey posted on the July 26th announcement.

Google Fiber to launch next week

Google just sent out invitations to a “special event” in Kansas City on July 26 which is undoubtedly the launch of its much-anticipated fiber-to-the-home network. The search giant sent an invite Tuesday that reads, “We would like to invite you to a special announcement about Google Fiber and the next chapter of the Internet.”


I'll be traveling on July 26th and will be slow in covering the news, so let's take a stab at what Google Fiber gives Google.

I think it is relatively simple. Google Fiber connects users with a 1 gigabit connection vs. the more typical 10 megabits to the home. Remember the days when the corporate LAN was 10 megabit, and it was a privilege to have 100 megabit? 1 gigabit is the common connection in corporate LANs now, and data centers are networked with 10 gigabit.
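To make the jump concrete, here is a quick back-of-the-envelope comparison. The 4 GB file size is just an illustration, not a figure from Google:

```python
# Time to move a 4 GB file (e.g., an HD movie) at each connection tier.
# The file size and tier labels are illustrative assumptions.

FILE_SIZE_BITS = 4 * 8 * 10**9  # 4 gigabytes expressed in bits

tiers_bps = {
    "10 Mbps (typical home broadband)": 10 * 10**6,
    "100 Mbps (the old privileged LAN)": 100 * 10**6,
    "1 Gbps (Google Fiber / modern LAN)": 10**9,
    "10 Gbps (data center network)": 10 * 10**9,
}

for name, bps in tiers_bps.items():
    seconds = FILE_SIZE_BITS / bps
    print(f"{name:<36} {seconds:8.1f} s  ({seconds / 60:5.1f} min)")
```

At 10 megabits that file is nearly an hour of waiting; at 1 gigabit it is about half a minute.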

Google Fiber will cover a population of about 600k in Kansas City, MO and Kansas City, KS, with thousands, tens of thousands, maybe eventually a hundred thousand 1 gigabit connections to two of its data centers in Iowa and Oklahoma.


Google is going to be able to run load tests on these data centers with 1 gigabit connections to thousands of users.
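For a sense of the scale involved, assume (these subscriber counts are this post's guesses, not announced figures) that everyone pulled at full line rate at once:

```python
# Worst-case aggregate demand on the data centers if every Google Fiber
# subscriber used their full 1 Gbps at the same moment. Real traffic is
# bursty, so actual load would be far lower, but this bounds the problem.

LINE_RATE_BPS = 10**9  # 1 gigabit per subscriber

for subscribers in (1_000, 10_000, 100_000):
    peak_tbps = subscribers * LINE_RATE_BPS / 10**12
    print(f"{subscribers:>7,} subscribers -> {peak_tbps:5.1f} Tbps theoretical peak")
```

Even a small fraction of that theoretical peak is a serious, sustained load on two data centers.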

Load testing is the process of putting demand on a system or device and measuring its response. Load testing is performed to determine a system’s behavior under both normal and anticipated peak load conditions. It helps to identify the maximum operating capacity of an application as well as any bottlenecks and determine which element is causing degradation. When the load placed on the system is raised beyond normal usage patterns, in order to test the system's response at unusually high or peak loads, it is known as stress testing. The load is usually so great that error conditions are the expected result, although no clear boundary exists when an activity ceases to be a load test and becomes a stress test.
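To make the definition concrete, here is a minimal load-test sketch. The endpoint, concurrency, and request count are placeholder assumptions; point it at a service you own before running it:

```python
# Minimal load test: N concurrent workers hammer one URL, and we report
# error counts and latency percentiles. Push CONCURRENCY high enough and
# this becomes a stress test, where errors are the expected result.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://localhost:8000/"  # hypothetical endpoint you control
CONCURRENCY = 50                   # simulated simultaneous users
REQUESTS = 500                     # total requests in the run

def one_request(_):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET, timeout=10) as resp:
            resp.read()
        ok = True
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(one_request, range(REQUESTS)))

latencies = sorted(t for ok, t in results if ok)
errors = sum(1 for ok, _ in results if not ok)
print(f"errors: {errors}/{REQUESTS}")
if len(latencies) > 1:
    print(f"median latency: {statistics.median(latencies) * 1000:.1f} ms")
    print(f"p95 latency:    {statistics.quantiles(latencies, n=20)[-1] * 1000:.1f} ms")
```

Raising CONCURRENCY until the error rate or p95 latency degrades is exactly the "maximum operating capacity" and bottleneck hunt the definition describes.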

With this data Google will be able to plan more accurately for when 1 gigabit is pervasive: what kind of changes are needed in the data centers, servers, networking, storage, software, and operations to run 1 gigabit connections.

Google will get great coverage on July 26th, and there will be all kinds of discussions on what gets delivered over the gigabit connection. But ultimately all these different scenarios are just bits over the wire that will put a load on the above data centers.

All that use is going to give Google data on how well its infrastructure holds up and what is required in the future.

Google Fiber is a load test of Google's data centers, servers, networking, storage, software, and operations.


The World's Undersea Cable Network

There is a race to provide worldwide services among Google, Facebook, Amazon, Microsoft, and other mature Web 2.0 companies. Also in this mix are Equinix, Verizon, AT&T, and Digital Realty Trust.

If you want to design your own map, you can go to this site.


GigaOm has a post on the undersea cable network. When you look at the graphic in that post, you can see the USA is the hub of the cable network and uses the most bandwidth.

A visual guide to undersea cables and their $5.5B price tag

The U.S., the Netherlands, France, the UK and Germany are all mega users of bandwidth, using more than 10 terabits of capacity to feed their web surfing needs.

But the rest of the world is continuing to demand more broadband, and the industry of undersea cables and long haul broadband providers has spent up to $5.5 billion to meet that demand with new cables coming online in 2012 and 2013, according to TeleGeography.


GigaOm's post also lists some of the top cable landing stations.

Besides the size of the pipe, the speed of the connection is also an issue.
(Disclosure: I work for GigaOm Pro as an analyst.)

Google's Urs Hoelzle OpenFlow Presentation

27 Jan 2014 update.  Complete original slides are here.

http://www.greenm3.com/gdcblog/2014/1/27/complete-slides-for-urs-hoelzles-openflow-talk-at-2012-open.html

James Hamilton has a post that describes what Urs covered. We need to get James a good camera to take pictures of the slides.  It is hard to write and take pictures at the same time though.

Urs Holzle did the keynote talk at the 2012 Open Networking Summit where he focused on Software Defined Networking in Wide Area Networking. Urs leads the Technical Infrastructure group at Google where he is Senior VP and Technical Fellow. Software defined networking (SDN) is the central management of networking routing decisions rather than depending upon distributed routing algorithms running semi-autonomously on each router. Essentially what is playing out in the networking world is a replay of what we have seen in the server world across many dimensions. The dimension that is central to the SDN discussion is that a datacenter full of 10k to 50k servers is not managed individually by an administrator, and the nodes making up the networking fabric shouldn't be either.
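To make the contrast concrete, here is a toy sketch of the central-control idea (not Google's code; real SDN controllers speak OpenFlow to the switches): one controller with a global view computes routes and pushes next-hop entries, instead of each router converging on its own.

```python
# Toy SDN controller: compute shortest paths over a global topology view
# centrally, then "install" a next-hop forwarding entry on each switch.
# In a real deployment the install step would be OpenFlow flow-mod messages.
import heapq

# Global view the controller holds: switch -> {neighbor: link cost}
TOPOLOGY = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}

def next_hops_toward(dst):
    """Dijkstra from dst; returns switch -> neighbor one step closer to dst."""
    dist = {dst: 0}
    hop = {}
    heap = [(0, dst)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in TOPOLOGY[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                hop[nbr] = node  # from nbr, forwarding to `node` moves toward dst
                heapq.heappush(heap, (nd, nbr))
    return hop

# Controller pushes the forwarding state; the switches just obey.
for switch, nh in sorted(next_hops_toward("D").items()):
    print(f"install on {switch}: traffic for D -> forward to {nh}")
```

The point is where the decision lives: the switches keep no routing protocol state at all, and a topology change means the controller recomputes and re-pushes rather than waiting for a distributed algorithm to reconverge.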

So, I spent some time crawling around to see what slides I could find and threw them together into this blog post. These slides are not in the exact order that Urs presented them, as I wasn't there and don't know for sure.

I now understand Urs's presentation much better and can watch the video while referring to the slides below and going back to James Hamilton's notes.

Why all this effort? Steven Levy's Wired article says it well.

‘You have all those multiple devices on a network but you’re not really interested in the devices — you’re interested in the fabric, and the functions the network performs for you,’ Hölzle says.

Hölzle says that the idea behind this advance is the most significant change in networking in the entire lifetime of Google.

In the course of his presentation Hölzle will also confirm for the first time that Google — already famous for making its own servers — has been designing and manufacturing much of its own networking equipment as well.

“It’s not hard to build networking hardware,” says Hölzle, in an advance briefing provided exclusively to Wired. “What’s hard is to build the software itself as well.”

In this case, Google has used its software expertise to overturn the current networking paradigm.

[Slides from Urs Hölzle's Open Networking Summit presentation; the complete originals are at the link above.]