James Hamilton discusses inter-datacenter replication and geo-redundancy, a topic that was easy for me to wrap my mind around because the issues have a lot in common with work I did on Microsoft's Branch Office Infrastructure Solution (BOIS) and its WAN challenges.
Inter-Datacenter Replication & Geo-Redundancy
Wide area network costs and bandwidth shortages are the single most common reason why many enterprise applications run in a single data center. Single data center failure modes are common. There are many external threats to single data center deployments, including utility power loss, tornado strikes, facility fires, network connectivity loss, earthquakes, break-ins, and many others I’ve not yet been “lucky” enough to have seen. And, inside a single facility, there are simply too many ways to shoot one’s own foot. All it takes is one well-intentioned networking engineer to black-hole the entire facility’s networking traffic. Even very high quality power distribution systems can have redundant paths taken out by fires in central switch gear or by cascading failure modes. And even with very highly redundant systems, if the redundant paths aren’t tested often, they won’t work. Even with incredible redundancy, just having the redundant components in the same room means that a catastrophic failure of one system could eliminate the second. It’s very hard to engineer redundancy with high independence and physical separation of all components in a single datacenter.
With incredible redundancy comes incredible cost. Even at incredible cost, failure modes remain that can eliminate the facility entirely. The only cost-effective solution is to run redundantly across multiple data centers. Redundancy without physical separation is not sufficient, and making a single facility bulletproof sends expenses asymptotically toward infinity while yielding only tiny increases in availability. The only way to get the next nine is to have redundancy between two data centers. This approach is both more available and considerably more cost effective.
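To make the "next nine" arithmetic concrete, here is a minimal sketch in Python. The availability figures are assumptions chosen for illustration, not measured numbers, and the calculation assumes the two facilities fail independently, which is exactly what geographic separation is meant to buy:

```python
# Illustrative availability arithmetic. The three-nines figure below is an
# assumption for the example, not a measured number.
single_dc = 0.999  # assume one well-run facility delivers three nines

# With two geo-separated facilities, the service is down only when both are
# down at the same time. Assuming independent failures:
two_dc = 1 - (1 - single_dc) ** 2

hours_per_year = 24 * 365
print(f"single facility:       {single_dc:.6f} "
      f"-> {(1 - single_dc) * hours_per_year:.1f} hours down/year")
print(f"two independent sites: {two_dc:.6f} "
      f"-> {(1 - two_dc) * hours_per_year * 3600:.0f} seconds down/year")
```

The independence assumption is the whole point: redundant components sharing a room share that room's failure modes, so the same math does not apply to in-facility redundancy.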
The solution James references is from Infineta.
Last week I ran across a company targeting latency-sensitive cross-datacenter replication traffic. Infineta Systems announced this morning a solution targeting this problem: Infineta Unveils Breakthrough Acceleration Technology for Enterprise Data Centers. The Infineta Velocity engine is a dedupe appliance that operates at 10Gbps line rate with latencies under 100 microseconds per network packet. Their solution aims to get the bulk of the advantages of the systems I described above at much lower overhead and latency. They achieve their speed-up four ways: 1) hardware implementation based upon FPGA, 2) fixed-sized, full-packet block size, 3) bounded index exploiting locality, and 4) heuristic signatures.
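Infineta has not published its implementation, so the following is only a hypothetical Python sketch of what points 2 and 3 describe in general: deduplicating on fixed-size, full-packet blocks against a bounded index that bets on temporal locality. The `PacketDedupe` class name, the SHA-1 fingerprint, and the LRU eviction policy are all my assumptions for illustration:

```python
import hashlib
from collections import OrderedDict

class PacketDedupe:
    """Toy fixed-block dedupe with a bounded LRU index (hypothetical sketch,
    not Infineta's actual design). Sender and receiver each run one of
    these; identical update rules keep their indexes in sync."""

    def __init__(self, max_entries=1 << 20):
        self.index = OrderedDict()   # fingerprint -> payload, in LRU order
        self.max = max_entries       # bounded index: bets on temporal locality

    def _remember(self, fp, packet):
        self.index[fp] = packet
        if len(self.index) > self.max:
            self.index.popitem(last=False)   # evict least recently used block

    def encode(self, packet: bytes):
        fp = hashlib.sha1(packet).digest()   # fixed block = the whole packet
        if fp in self.index:
            self.index.move_to_end(fp)       # refresh LRU position
            return ("ref", fp)               # 20-byte token replaces the payload
        self._remember(fp, packet)
        return ("raw", packet)

    def decode(self, kind, value):
        if kind == "ref":
            self.index.move_to_end(value)    # mirror the sender's LRU update
            return self.index[value]
        self._remember(hashlib.sha1(value).digest(), value)
        return value
```

In a hardware appliance, point 1 would move this work into an FPGA pipeline, and point 4 suggests that cheaper heuristic signatures stand in for a full cryptographic hash so fingerprinting keeps up with 10Gbps line rate.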
Infineta provides a technology overview:
Technology Overview
Infineta Systems delivers solutions based on several technologies that significantly reduce the amount of traffic running across today’s data center WAN interconnect. Our groundbreaking innovation centers on the patent-pending Velocity Dedupe Engine™, the industry’s first-ever hardware deduplication (“dedupe”) engine. Unlike alternatives, the Velocity Dedupe Engine enables our solutions to maintain the highest levels of data reduction at multi-gigabit speeds while guaranteeing port-to-port latencies in the tens of microseconds. As a result, Infineta’s solutions enable customers to accelerate all data center applications (such as replication and backup), including ones that are highly latency sensitive, while reducing the overall costs incurred by this growing, bandwidth-hungry traffic.
Technology Architecture
Distributed System Architecture
Unlike traditional acceleration solutions that are assembled around monolithic processing environments, Infineta’s solutions are designed from the ground up around a distributed processing framework. Each core feature set is implemented in a dedicated hardware complex, and all are fused together with a high-speed fabric, guaranteeing wire-speed acceleration for business-critical traffic.
Hardware-based Data Reduction
Data reduction is carried out purely in hardware in a pipelined manner, allowing the system to reduce enterprise WAN traffic by as much as 80-90 percent.
Fabric-based Switching
Infineta’s solutions are built on a massive-scale, fully non-blocking switch fabric that can make precise switching decisions in the face of sustained traffic bursts.
Multi-gig Transport Optimization
Infineta’s solutions employ multi-core network processors to carry out transport-level acceleration at multi-gigabit speeds. By making key resource decisions at wire-speed, the system is able to maintain up to 10Gbps traffic throughput while working around detrimental WAN characteristics, such as packet loss.
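The overview doesn't say how the transport optimization works, but the problem it targets is easy to quantify. The well-known Mathis et al. approximation bounds steady-state TCP throughput at roughly MSS / (RTT × √p); the sketch below (dropping the small constant factor) shows why a single stock TCP flow cannot fill a 10Gbps WAN link in the presence of even modest loss. The link parameters are assumed for illustration:

```python
from math import sqrt

def tcp_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Mathis et al. approximation of steady-state TCP throughput:
    rate ~= MSS / (RTT * sqrt(p)). An order-of-magnitude bound, not exact."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss_rate))

# Assumed example: cross-country link, 50 ms RTT, 0.1% loss, 1460-byte MSS.
rate = tcp_throughput_bps(1460, 0.050, 0.001)
print(f"{rate / 1e6:.1f} Mbit/s")  # ~7.4 Mbit/s, three orders short of 10Gbps
```

That gap is why WAN accelerators intervene at the transport level rather than leaving loss recovery to a single end-to-end TCP flow.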
And EMC has VPLEX for virtualized storage across multiple data centers.
News Summary:
- EMC advances Virtual Storage with industry’s first distributed storage federation capabilities, eliminating boundaries of physical storage and allowing information resources to be transparently pooled and shared over distance for new levels of efficiency, control and choice.
- Groundbreaking EMC VPLEX Local and VPLEX Metro products address the fundamental challenges of rapidly relocating applications and large amounts of information on demand within and across data centers, a key enabler of the private cloud.
- Future EMC VPLEX versions will add cross-continental and global capabilities, allowing multiple data centers to be pooled over extended distances, providing dramatic new distributed compute and service-provider models and private clouds.