Newer leaf-spine topologies in DCs are built L3-only, with L2VPN on top. The physical topology (the link state) is the underlay, and a tunneling mechanism (most often VXLAN) combined with a routing protocol (usually BGP) is used as the overlay. The ToR (or the virtualisation host) maps a VLAN to a VNI at the VTEP and then sends the tunneled packet to the destination VTEP (learned via BGP), where it is mapped back to the VLAN. This is done because L2 is fragile, hard to scale up and complex, while L3 scales globally (e.g. the Internet :-D). An L3 underlay allows easy load balancing over all possible paths, while L2 usually blocks redundant paths to avoid loops. If you want load balancing in L2 you need to configure link aggregation (trunking); with MLAG you can also aggregate links across clustered switches, but that doesn't scale and is fragile. The beauty of the spine-leaf architecture is that you can scale it easily: e.g. more bandwidth between racks => add more spines.
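If it helps to picture the VLAN <-> VNI mapping at the VTEP, here's a minimal Python sketch. The MACs, IPs and the static table are just placeholders for what BGP EVPN would actually advertise in a real deployment:

```python
# Minimal sketch of the VLAN <-> VNI mapping a VTEP does in a VXLAN overlay.
# All names and numbers are made up for illustration; real deployments learn
# the remote-VTEP table via BGP EVPN rather than using a static dict.

VLAN_TO_VNI = {100: 10100, 200: 10200}          # configured mapping on the ToR/VTEP
VNI_TO_VLAN = {v: k for k, v in VLAN_TO_VNI.items()}

# What a VTEP would learn from BGP EVPN: which remote VTEP owns which MAC per VNI.
EVPN_MAC_TABLE = {
    ("aa:bb:cc:00:00:02", 10100): "10.0.0.2",   # MAC in VNI 10100 sits behind VTEP 10.0.0.2
}

def encapsulate(frame_dst_mac: str, vlan: int) -> dict:
    """Map the VLAN to its VNI and pick the destination VTEP from the EVPN table."""
    vni = VLAN_TO_VNI[vlan]
    remote_vtep = EVPN_MAC_TABLE[(frame_dst_mac, vni)]
    # The inner L2 frame gets wrapped in VXLAN/UDP/IP and routed over the L3 underlay.
    return {"outer_dst_ip": remote_vtep, "vni": vni, "inner_dst_mac": frame_dst_mac}

def decapsulate(packet: dict) -> int:
    """On the remote VTEP, map the VNI back to the local VLAN."""
    return VNI_TO_VLAN[packet["vni"]]

pkt = encapsulate("aa:bb:cc:00:00:02", vlan=100)
print(pkt)               # {'outer_dst_ip': '10.0.0.2', 'vni': 10100, ...}
print(decapsulate(pkt))  # 100
```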
Yes, everything above the ToRs is routed in this architecture; this is where Google's proprietary routing protocol comes into play (not BGP in this case). It wasn't mentioned in the paper, but I'm sure Google uses VXLAN on top of this for L2 across the cluster.
@interviewpen indeed, the paper only mentions "We support Layer 3 routing all the way to the ToRs via a custom Interior Gateway Protocol (IGP), Firepath". It's pointed out that they wanted to keep older server stacks. Most clouds allow internal L2 networks, so some form of L2VPN must be supported in the stack. Most likely both L2VPN and direct L3 are supported and used. But the most interesting part of the new Jupiter network is the OCS, where they can dynamically switch the optical links on demand.
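To picture what the OCS buys them, here's a toy Python model of an optical circuit switch as a reprogrammable port-to-port crossbar. This is only meant to illustrate "switching optical links on demand"; it has nothing to do with Google's actual OCS control plane:

```python
# Toy model of an optical circuit switch (OCS): a reconfigurable crossbar that
# patches input fibers to output fibers. Purely illustrative, not Google's design.

class OpticalCircuitSwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.cross_connects = {}   # in_port -> out_port

    def program(self, mapping):
        """Install a new set of cross-connects (e.g. to shift capacity between blocks)."""
        if len(set(mapping.values())) != len(mapping):
            raise ValueError("each output port can terminate only one circuit")
        self.cross_connects = dict(mapping)

    def forward(self, in_port):
        """Light entering in_port leaves on whatever out_port is currently patched."""
        return self.cross_connects.get(in_port)

ocs = OpticalCircuitSwitch(num_ports=8)
ocs.program({0: 4, 1: 5})   # connect aggregation block A to block B
print(ocs.forward(0))       # 4
ocs.program({0: 6, 1: 7})   # later, re-patch the same fibers toward block C
print(ocs.forward(0))       # 6
```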
Nobody is using VXLAN at this scale. FAANG / Hyperscalers all have native L3 support in the apps they build.
thank you bro
of course :)
I think when you said "switches", you meant "routers".
Are you sure? Sounds like he got it right
No, we are on layer 2 here.
@thaRealShady1 I worked on this before, and Google uses distributed core routers like the 8000 for data center setups.
@thaRealShady1 They also use tunneling for faster throughput.
These are layer 3 switches running BGP / L2-VPNs