Apstra
  • 110 videos
  • 165,419 views

Videos

Accelerate VMware NSX deployments with Apstra Intent-Based Networking
391 views · 5 years ago

Apstra Demo VMware Integration
1.3K views · 6 years ago

How: Intent-Based Networking is Different than Software Defined Networking
3K views · 6 years ago

What: Are Intent-Based Analytics
213 views · 6 years ago

How: To Eliminate Data Center Nightmares
382 views · 6 years ago

Apstra Live Cisco Live Automating Lifecycle Data Center
380 views · 6 years ago

Apstra Live at Cisco Live - Automating Leaf-Spine Data Center Fabrics
1.5K views · 6 years ago

Live Cisco NOC Cisco Live 2018 Multi Tenant EVPN Automation
482 views · 6 years ago

Apstra's Network to Code Integration with ServiceNow
518 views · 6 years ago

Intent-based Networking Tutorial: Headroom - Part 2
219 views · 6 years ago

Intent-based Networking Tutorial: Headroom - Part 1
968 views · 6 years ago

TelecomTV - The Evolution to the Self-Operating Network
87 views · 7 years ago

Data Center Network Design and Operation Should Be This Easy - Full Video
1.6K views · 7 years ago

Data Center Network Design and Operation Should Be This Easy - Part 4: Operate
467 views · 7 years ago

Data Center Network Design and Operation Should Be This Easy - Part 3: Deploy
517 views · 7 years ago

Data Center Network Design and Operation Should Be This Easy - Part 2: Build
726 views · 7 years ago

Data Center Network Design and Operation Should Be This Easy - Part 1: Design
1.8K views · 7 years ago

Fragile to Agile - Apstra Webinar Preview
43 views · 7 years ago

Intent Based Networking Systems - Apstra Webinar Preview
452 views · 7 years ago

Apstra - Intent Based Network Automation for Your Data Center
66K views · 7 years ago

Extensible Telemetry - Data Center Network Operations - Apstra Webinar
440 views · 7 years ago

AOS's Extensible Telemetry - Apstra Integration with Metamako Metaconnect 16 Demo
231 views · 7 years ago

Intent Based Self Operating Network - Apstra Headroom Demonstration
725 views · 7 years ago

Data Center Network Operations: From Fragile To Agile - Apstra Webinar
215 views · 7 years ago

IT Automation - Ansible and Apstra: Bridge the DevOps Divide - Apstra Webinar
320 views · 7 years ago

Intent-Based Networking Systems - Apstra Webinar
1.8K views · 7 years ago

Self Operating Networks - Data Center Architecture - Apstra Webinar
291 views · 7 years ago

Disaggregate Networking Making Open Networking Easy to Build - Apstra Self-Operating Network
799 views · 8 years ago

Apstra Operating System (AOS): Vendor-Agnostic, Intent-Driven Network Automation
1.9K views · 8 years ago

Comments

  • @Diego-np9sr · 6 days ago

    thank you guys!! that was very precious content

  • @samyogdhital · 16 days ago

    ### Key Discussion Points:

    #### 1. Upcoming IETF Meeting in Vancouver (00:00:34 - 00:00:42)
    - The hosts mention an upcoming IETF meeting in Vancouver, highlighting that many relevant topics will be discussed, including in-network computing for AI training, congestion control, and other new working group topics. (00:00:34 - 00:01:01)
    - It is noted that one of the routing sessions will likely occur during the same time they would normally record their show. (00:01:14 - 00:01:22)
    - The hosts encourage listeners to check the agenda and participate online for free if they cannot attend in person. (00:01:36 - 00:01:49)

    #### 2. Focus on Fundamentals of Routing (00:03:54 - 00:04:00)
    - The discussion shifts to routing, a fundamental aspect of networking. (00:03:45 - 00:03:54)
    - They decide to revisit first principles, rather than diving into advanced topics right away. (00:04:00 - 00:04:13)

    #### 3. The Criticality of Packet Loss in Clusters (00:04:34 - 00:04:49)
    - Unlike enterprise networks where packet loss is often tolerable, dropping packets in a cluster environment has severe consequences. (00:04:34 - 00:04:49)
    - This leads to a discussion about load balancing. (00:04:58 - 00:05:05)

    #### 4. Load Balancing and Routing (00:05:05 - 00:05:22)
    - Load balancing is typically a forwarding function, but routing can be leveraged for more than just loop freeness. (00:05:05 - 00:05:10)
    - In a classical leaf-spine architecture, routing is mainly used to provide a set of equal-cost multi-paths (ECMP). (00:05:22 - 00:05:35)
    - Routing is usually unaware of load or other semantics. (00:05:50 - 00:06:00)

    #### 5. BGP Metadata and Congestion Signaling (00:06:00 - 00:07:54)
    - There's work to introduce metadata into BGP to convey information about the quality of reachability, not just reachability itself. (00:06:00 - 00:06:18)
    - A proposal exists for BGP to signal congestion beyond the next hop using a new path attribute called "next-next hop" in a Clos architecture. (00:06:37 - 00:07:54)

    #### 6. Congestion Control in AI Workloads (00:07:55 - 00:10:28)
    - Congestion control is crucial for AI workloads, helping to adjust traffic transmission rates. (00:07:55 - 00:08:03)
    - Classical Data Center Quantized Congestion Notification (DCQCN) involves a round trip plus processing time. (00:08:22 - 00:08:32)
    - BGP's feedback loop for congestion is much slower compared to DCQCN. (00:09:39 - 00:10:28)

    #### 7. Importance of Testing with Realistic Workloads (00:10:28 - 00:11:30)
    - Workload in AI is not uniform. (00:10:28 - 00:10:50)
    - The network's behavior is entirely different depending on message sizes and other factors. (00:10:50 - 00:11:02)
    - Theoretical assumptions about network design are often incorrect, so testing with real equipment is essential. (00:11:02 - 00:11:30)

    #### 8. BGP Scalability vs. Speed (00:11:30 - 00:12:08)
    - BGP is immensely scalable but also notoriously slow, which poses a challenge in environments where microsecond latency matters. (00:11:30 - 00:11:53)
    - Separation of concerns is key, with reachability handled differently from the quality of reachability. (00:12:08 - 00:12:18)

    #### 9. IGP Extensions for Signaling (00:12:18 - 00:13:40)
    - There are new extensions to IGP to signal various attributes like available bandwidth and latency, but they require careful management due to frequent fluctuations. (00:12:18 - 00:12:53)
    - Updating IGP too often can overload the network, and updating it too slowly might render the information irrelevant. (00:13:17 - 00:13:40)

    #### 10. Traffic Engineering (00:16:58 - 00:18:03)
    - Modern traffic engineering involves using IGP within the network, BGP for the controller, and segment routing. (00:17:01 - 00:17:17)
    - RSVP is considered too fiddly and complex for most modern implementations. (00:16:58 - 00:17:19)
    - Reoptimization of network paths is done in tens of seconds, which is a long time in most networks. (00:17:20 - 00:18:03)

    #### 11. Cooperation Between Network and Host Technologies (00:18:03 - 00:18:30)
    - The cooperation between network-based and host-based technologies is complex. (00:18:03 - 00:18:20)
    - The best solution varies based on transmit rates, message sizes, and other factors. (00:18:20 - 00:18:30)

    #### 12. Data Center Solutions and AI (00:18:30 - 00:20:00)
    - Good data center design is generally applicable, but its importance is heightened in AI environments. (00:18:30 - 00:19:00)
    - Poor network performance in AI/ML can lead to significant costs and inefficiencies. (00:19:00 - 00:20:00)

    #### 13. Current Solutions and Development Areas (00:20:00 - 00:22:08)
    - Congestion control signaling needs improvement because current methods are designed for storage, not AI. (00:20:00 - 00:20:42)
    - There is a lot of development in the area of network-assisted telemetry. (00:21:04 - 00:21:14)
    - Combining basic marking with round-trip time (RTT) measurements can help with detecting incast congestion. (00:22:08 - 00:22:30)
    - It's crucial to run networks at high utilization in AI environments. (00:22:30 - 00:24:34)

    #### 14. Key Principles for AI Networks (00:25:10 - 00:27:14)
    - AI networks are highly optimized for RDMA over IP, specifically RoCEv2, which is routed traffic. (00:25:10 - 00:25:39)
    - They require low latency, minimal buffering, and fast convergence. (00:25:41 - 00:26:54)
    - Avoiding deep buffers is crucial, as data spends a significant amount of time traveling through different memory tiers. (00:25:55 - 00:26:30)
    - Follow best practices, avoid loop hunting, and let protocols handle convergence. (00:27:00 - 00:27:14)

    #### 15. Hyperscale and GPU as a Service (00:27:14 - 00:28:30)
    - Large AI clusters are mainly built by hyperscalers or those offering GPU as a service. (00:27:14 - 00:28:30)
    - These companies prioritize high-performance infrastructure and leverage the experience of those coming from large tech companies. (00:28:30 - 00:29:00)

    #### 16. Importance of Following Best Practices (00:28:30 - 00:30:00)
    - It's vital to follow best practices and learn from the mistakes of others. (00:28:30 - 00:29:00)
    - It is important to understand what is marketing and what is truth. (00:30:00 - 00:30:09)
    - Deep buffer switches are commonly sold, but they don't work for all applications, and you must choose the correct vendor. (00:30:09 - 00:31:52)

    #### 17. Buffer Sizing and Vendor Information (00:31:52 - 00:33:00)
    - There's a lot of misinformation about buffer sizing. (00:31:52 - 00:32:08)
    - Vendors often promote their specific solutions and the importance of deep buffers for all use cases. (00:30:00 - 00:33:00)
    - A discussion with outside, unbiased parties would be useful to determine the right approach. (00:33:00 - 00:33:30)

    #### 18. Focus on Transport and Evolution (00:33:59 - 00:34:08)
    - The evolution of transport is more interesting than hardware. (00:33:59 - 00:34:08)

    #### 19. Self-Contained Building Blocks and Scalability (00:40:37 - 00:41:53)
    - Networks should be built using self-contained, repeatable building blocks that allow for clear abstraction of details. (00:40:37 - 00:41:53)
    - Reducing the amount of state is essential, which is achieved through summarization and avoiding randomness in IP allocation. (00:41:19 - 00:41:53)

    #### 20. Overlay and Underlay Separation (00:41:53 - 00:43:01)
    - The overlay, which is dynamic and involves tenants and virtual functions, should be completely decoupled from the immutable underlay. (00:41:53 - 00:42:01)
    - Separate route distribution schemas for overlay and underlay can improve reliability and stability. (00:42:01 - 00:43:01)

    #### 21. Importance of Routability (00:43:01 - 00:43:30)
    - Workloads must be routed with IP, and Layer 2 solutions should be avoided. (00:43:01 - 00:43:30)
    - Summarize wherever possible to reduce the amount of state. (00:43:15 - 00:43:30)

    #### 22. Network Design Principles for Scalable Data Centers (00:43:30 - 00:48:30)
    - Abstract details as you move up in the network. (00:43:30 - 00:44:17)
    - Use pods (self-contained deployment units) for managing upgrades and summarization. (00:44:17 - 00:46:32)
    - Aggregation at the different tiers of the network, using best practices such as summarization, is very important for scaling the network. (00:46:32 - 00:48:30)
    - Self-contained, repeatable building blocks are key to scalability, avoiding snowflake configurations. (00:48:30 - 00:49:15)

    #### 23. Key Takeaways (00:49:15 - 00:50:00)
    - Networks are critical and require careful design to avoid severe performance issues and ensure high availability. (00:49:15 - 00:50:00)
    - Focus on building good networking solutions that are repeatable. (00:50:00 - 00:50:09)

    #### 24. Show Summary (00:50:09 - 00:51:24)
    - The discussion emphasized fundamental principles, which the hosts have discussed in previous episodes. (00:50:09 - 00:51:24)

    #### 25. Importance of Troubleshooting and Operational Experience (00:51:24 - 00:51:36)
    - Actual troubleshooting helps crystallize first principles and emphasizes the need for clear, organized network design. (00:51:24 - 00:51:36)

    #### 26. Mental Picture of the Network (00:51:36 - 00:52:07)
    - It is important to have a mental picture of the network to aid in troubleshooting. (00:51:36 - 00:52:07)

    #### 27. Next Steps and IETF (00:52:07 - 00:54:49)
    - The hosts will be attending the IETF meeting in Vancouver, where they will present "BGP over QUIC". (00:53:43 - 00:54:09)
    - Listeners are encouraged to participate virtually at IETF meetings to stay updated on networking developments. (00:54:09 - 00:54:49)
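The summary above leans on a point worth making concrete: in a classical leaf-spine fabric, routing mainly supplies a set of equal-cost next hops, and the per-flow hash that picks among them knows nothing about load or congestion. Below is a minimal, illustrative Python sketch of that behavior (not from the episode; the switch names, addresses, and hash fields are hypothetical, and real switches do this in hardware):

```python
# Illustrative sketch of classic 5-tuple ECMP hashing on a leaf switch.
# The key observation: the choice depends only on packet headers, so two
# heavy flows can land on the same spine no matter how congested it is.
import hashlib

# Hypothetical equal-cost uplinks from a leaf toward the spines.
NEXT_HOPS = ["spine1", "spine2", "spine3", "spine4"]

def ecmp_next_hop(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                  proto: str = "udp") -> str:
    """Pick an uplink by hashing the flow 5-tuple, as classic ECMP does."""
    key = f"{src_ip},{dst_ip},{src_port},{dst_port},{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return NEXT_HOPS[digest % len(NEXT_HOPS)]

# Two flows toward RoCEv2's UDP port 4791; the hash never sees queue depth.
print(ecmp_next_hop("10.0.1.10", "10.0.2.20", 49152, 4791))
print(ecmp_next_hop("10.0.1.11", "10.0.2.21", 49153, 4791))
```

This static placement is exactly why the discussion turns to richer signals (DCQCN, telemetry, proposed BGP/IGP extensions) for AI fabrics that cannot tolerate drops.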

  • @samyogdhital · 16 days ago

    Introduction (00:00:00 - 00:02:12)
    • The hosts, Jeff and Stu, introduce the show "Between Two Nerds," which discusses various aspects of the networking industry. They also introduce their guest, Peter, who has experience in web-scale infrastructure, networking, automation, and software operations at companies like Microsoft, Facebook, and Nvidia. (00:00:00 - 00:02:12)

    Topic: AI Data Center Design (00:02:12 - 00:03:04)
    • The main topic of the episode is AI data center design and its unique requirements. They plan to discuss the drivers for AI data center design, emphasizing the importance of understanding AI/ML workflows. (00:02:12 - 00:03:04)

    AI/ML Workflows and Network Design (00:03:04 - 00:04:00)
    • The discussion highlights that while network design principles remain the same, the consequences of incorrect design are much greater in AI data centers due to the massive data sets involved in machine learning training and inference. (00:03:04 - 00:03:50)
    • The speed of change in AI is rapid, with cluster sizes growing from 4K GPUs to hundreds of thousands, making network design more complex and critical. (00:03:50 - 00:04:00)

    GPU Dominance in AI (00:04:00 - 00:07:01)
    • Nvidia's dominance in the GPU market is acknowledged, and the discussion shifts to why GPUs are fundamental for AI processing clusters. (00:04:00 - 00:06:01)
    • It's explained that GPUs, initially designed for graphics, have an architecture that is massively parallel, making them suitable for machine learning tasks that also require massive parallelism. (00:06:01 - 00:07:01)
    • The development of APIs by Nvidia allowed researchers to run matrix multiplications on GPUs, leading to their dominance in AI training. (00:07:01 - 00:09:35)

    GPU Training and Networking (00:09:35 - 00:11:09)
    • The conversation details how the need to parallelize training across multiple GPUs led to the necessity for high-performance networking. (00:09:35 - 00:10:00)
    • Initially, CPUs were also used for training, but GPUs became dominant due to their superior performance with massive parallelism. (00:10:00 - 00:11:09)

    Google TPUs vs. Nvidia GPUs (00:13:42 - 00:15:19)
    • Google's use of Tensor Processing Units (TPUs) is discussed, noting that they are optimized for matrix multiplications but are less flexible than GPUs. (00:13:42 - 00:14:10)
    • It is noted that while TPUs have their place, GPUs are considered more efficient and programmable. The flexibility of GPUs allows them to adapt to rapidly changing workloads. (00:14:10 - 00:15:19)

    Open Source Contributions (00:17:40 - 00:18:07)
    • The open-source nature of Nvidia's software infrastructure, especially around CUDA, is highlighted as a key factor in its growth. The open-source model allowed for worldwide contributions, which helped build its system around GPUs. (00:17:40 - 00:18:07)

    Model Size and GPU Scaling (00:22:22 - 00:25:36)
    • The discussion transitions to how model size drives the network size and number of GPUs needed for training. As models grow larger, more GPUs are required. This is not just due to computational needs but also to fit the model in memory. (00:22:22 - 00:23:05)
    • Training times are also a key driver. Larger and more scalable hardware allows faster training, which is crucial for time to market and better services. (00:23:05 - 00:24:15)
    • It's noted that the time to train grows as a power law, whereas adding GPUs provides linear improvements. There are practical limitations to the number of GPUs that can be placed in a data center due to space and power. (00:24:15 - 00:25:36)

    Training and Inference (00:30:24 - 00:32:32)
    • The presenters describe the AI workflow as a linear process, where data is used to train models, which are then used for inference. They also highlight that this is an iterative process, as feedback from inference can further improve the model. (00:30:24 - 00:31:05)
    • They discuss how training is a compute-intensive process that requires a high-performance network, while inference needs low latency and a fast response time for end users. (00:31:05 - 00:32:32)

    Importance of Low Latency for Inference (00:32:32 - 00:34:44)
    • Low latency becomes more critical for machine-to-machine interactions. Human attention spans need quick responses, so if models don't provide output quickly enough, consumers may move on. (00:32:32 - 00:33:10)
    • It is also noted that latency is becoming an important constraint for machine-to-machine inference as more and more applications call other applications to do inferencing. Optimizing for latency becomes more important. (00:33:10 - 00:34:44)

    AI Network Characteristics (00:37:36 - 00:40:00)
    • AI networks are optimized for performance, not for cost savings. It is more important to ensure maximum performance from your network than to save money when building it. (00:37:36 - 00:38:30)
    • Job completion time is the primary metric. The network must not hinder job completion. (00:38:30 - 00:39:30)
    • AI networks are not under-subscribed, and the goal is to maintain network utilization above 90%. (00:39:30 - 00:40:00)

    Power Consumption and Network Layers (00:40:00 - 00:42:22)
    • Power is a major constraint. While trying to build larger data centers for larger clusters, it is crucial to consider the amount of power available. (00:40:00 - 00:41:15)
    • AI clusters have multiple types of networks, including IP networks, scale-up networks (like Nvidia's NVLink), and platform-specific libraries. (00:41:15 - 00:42:22)

    Historical Perspective and GPU Direct (00:43:53 - 00:47:58)
    • The evolution from Hadoop to today's AI clusters is discussed. In the early days, TCP/IP was often used, but this became inefficient with the massive data flows of GPU-based AI. (00:43:53 - 00:45:45)
    • With RDMA over Converged Ethernet (RoCE) and GPU Direct, the efficiency of data transfer has significantly improved. GPU Direct provides zero-copy data transfers directly from GPU memory to the network, bypassing the CPU. (00:45:45 - 00:47:58)

    NVLink Technology (00:50:15 - 00:52:30)
    • NVLink is highlighted as a key technology for connecting GPUs within a server, offering much higher bandwidth than scale-out networking options. Current and next-generation plans show increasing numbers of GPUs that can be connected via NVLink, which has also moved from being internal to the server to being part of the rack. (00:50:15 - 00:52:30)

    RDMA and GPU Direct Explained (00:52:30 - 00:53:30)
    • A brief explanation of Remote Direct Memory Access (RDMA), highlighting that it enables direct access to memory over a network, bypassing the CPU and the kernel. (00:52:30 - 00:53:30)
    • GPU Direct allows the NIC to read memory directly from the GPU and write it on the wire. (00:53:30 - 00:54:55)

    Parallelization Techniques and NCCL (00:55:57 - 00:57:30)
    • The most common parallelization techniques include data, model, and tensor parallelism. Communication libraries such as the NVIDIA Collective Communications Library (NCCL) play a crucial role in enabling communication between GPUs for training. (00:55:57 - 00:57:30)

    Data Locality and Server Design (00:57:30 - 01:00:00)
    • If a single GPU has enough memory to hold the entire dataset, there is no need for external networking. Data locality in single servers is discussed, as well as server designs that help optimize training. (00:57:30 - 01:00:00)

    NVIDIA Communication Library and Data Parallelism (01:00:00 - 01:02:28)
    • Discussion about how the NVIDIA Collective Communications Library (NCCL) was designed to be optimized for GPUs. The importance of data parallelism, which involves slicing the data into sub-slices, and model parallelism is discussed. They note that the data can be stretched across the cluster, achieving large computing capabilities. (01:00:00 - 01:02:28)
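To make the data-parallelism point above concrete: each GPU computes gradients on its own slice of the data, and an all-reduce collective (the kind of operation NCCL provides over NVLink/RDMA) leaves every GPU holding the same summed result. Below is a toy Python sketch of that collective, not NCCL itself; the worker count and gradient values are made up, and plain lists stand in for GPU tensors:

```python
# Toy all-reduce: reduce (sum) gradients from every worker, then give each
# worker a copy of the result. Real libraries (e.g. NCCL) do this with
# ring/tree algorithms over NVLink or RDMA instead of a central sum.
from typing import List

def all_reduce_sum(per_gpu_grads: List[List[float]]) -> List[List[float]]:
    """Naive all-reduce: element-wise sum across workers, broadcast back."""
    n_elems = len(per_gpu_grads[0])
    reduced = [sum(g[i] for g in per_gpu_grads) for i in range(n_elems)]
    return [list(reduced) for _ in per_gpu_grads]  # every "GPU" gets a copy

# Four hypothetical workers, each with gradients from its own data slice.
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
print(all_reduce_sum(grads))  # every worker ends up with [16.0, 20.0]
```

The volume and synchrony of exactly this exchange is what makes job completion time, loss-free transport, and high link utilization the dominant design goals in the episode.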

  • @rasoulmesghali · 1 month ago

    Amazing session! Thank you both, legends!

  • @LibertypopUK · 2 months ago

    the best of the best, keep bringing Petr on, his info is gold

  • @lensin-re5if · 3 months ago

    Hi, Yingzhen mentioned that she will upload the files to GitHub. Could you share the link, please? Also, she mentioned reaching out in case we have questions or want more details. How can we reach her?

  • @sylwekara · 4 months ago

    Great show. JeffT has a really bad microphone, and it makes it very difficult to understand what he is trying to say.

  • @RommelsAsparagus · 4 months ago

    Great intro. A couple of questions, has nVidia thought about full immersion cooling for their GPU clusters instead of just cold water on the backplane? Also, do you think inference data centers will become a larger market than training data centers? Would you distribute inference in something like a CDN to optimize latency? Thanks.

  • @hgaliza · 4 months ago

    Great talk, networking geniuses 🙏🏻

  • @AlgoNudger · 5 months ago

    Geoff Huston, pls. 😊

  • @AlgoNudger · 5 months ago

    Pls invite Geoff Huston. 😊

    • @Apstra · 5 months ago

      Thanks for the suggestion. We will check into it.

  • @LibertypopUK · 5 months ago

    Hope there's a part 4, Jeff and Jeff. Petr is a well of knowledge.

  • @imnothingyouarebetter · 6 months ago

    I wish there were a virtual whiteboard or something to map out the ideas in the discussion, because the discussion has a lot of information, which is too much for a viewer's brain like mine.

  • @LibertypopUK · 6 months ago

    thanks for hosting this series Jeff, legendary :)

  • @LibertypopUK · 6 months ago

    All three legends !!! Such a pleasure to listen to

  • @ChrisWhyte24 · 7 months ago

    Really enjoyed this one! Greatly appreciate the insight.

  • @sinade1 · 7 months ago

    Nice to see Petr Lapukhov again.

  • @hassanshah7929 · 9 months ago

    Much needed video podcast.

  • @manojj1544 · 10 months ago

    Respect to both Jeffs... I respect/love/adore you almost the same as Ken Thompson. Great guys who ignited countless minds and kept our stomachs fed and happy as well.

  • @Douglas_Gillette · 10 months ago

    Great conversation!

  • @dagobertoantonioalzamorach5594

    Great episode as always!

  • @arulgobinathemmanuel5417 · 1 year ago

    In retrospect everyone can judge ❤

  • @anujkant1 · 1 year ago

    Great session. Full of knowledge and practical experience. Just one request, could you please add all the whitepapers/docs you bring up in the notes?

  • @joespt966 · 1 year ago

    What a nice session! Thanks. I am a big fan of these two nerds. Quick question on valley-free routing: in the case of an IGP underlay with no specific policy, traffic can go via another leaf to the exit point. That is probably fine when there is no high-demand traffic. Could you give me another point of view on this?

  • @jbparrish17 · 1 year ago

    Great talk. This has been one of my favorite shows you guys have hosted. Would love to see a future show on LISP with Dino and on BIER

  • @daryllawrence9398 · 1 year ago

    Promo-SM

  • @eddiechan4840 · 1 year ago

    This echoes a lot of what I see in my organization. We try to push orchestration to different teams, but they worry about supporting it and not having the automation knowledge to maintain it.

  • @durgaprasadm9972 · 1 year ago

    Where can I get the slides presented here?

  • @ludakris29 · 1 year ago

    Thanks for the great content! You both are Legends!

  • @sebschrader · 1 year ago

    Very informative as always

  • @MXSPORTRU · 1 year ago

    Sorry to say this, but this session is very difficult to understand. The speaker talks about details, options, and the differences between them without any introduction or visualization. Before the question of what a dragonfly topology is was even vaguely explained, about 15 minutes of discussion preceded it that was impossible to follow. It would be better to start with the topologies, which only appeared around 30-35 minutes in.

  • @shankarvaranasy8404 · 1 year ago

    Have you looked at the vPC Fabric Peering functionality on NX-OS? It doesn't need a peer link and leverages the EVPN fabric for the peer-link functionality.

    • @jefftantsura9677 · 1 year ago

      It is proprietary to NX-OS, which defeats the purpose of using an open standard (EVPN).

  • @mazenkhaddam5586 · 1 year ago

    There is an emerging nerd (Mel)! Timely and relevant emerging technology... time well spent. Thank you all.

  • @halfsterker · 1 year ago

    Looking forward to the continuation of this conversation! The intersection of ML and Networking is a fascinating topic.

  • @mazenkhaddam5586 · 1 year ago

    Love that shirt JeffT …

  • @mmm763 · 2 years ago

    Why do we need a MAC-only route type 2 if we already have all the information in the MAC-IP route type 2?

  • @SalmanSadiq-iy4fv · 2 years ago

    I would request uploading videos at a better resolution than 720p. The slides especially are very hard to read.

  • @yummyummy8662003 · 2 years ago

    Such an amazing series; I just found it and loved all the videos. Jeff Doyle has a unique ability to put things in simple words and give historical perspective wherever possible to clear things up. Jeff Tantsura is clear, concise, and to the point!!

  • @errickbaroquin7464 · 2 years ago

    This episode was the least informative of the series. Part 1 and 3 were pretty good in technical detail, not to mention the more structured format supported with slides. I do not doubt the guest's knowledge and expertise at all. I think he was either not able to articulate those well and/or too nervous about giving away any confidential information, which resulted in him providing almost no useful information at all. I understand the nature of proprietary information, but it should be possible to explain certain things without giving away actual implementation details. Nobody was asking for block diagrams for algorithms or code samples. As Jeff T mentioned in the beginning of Part 1, these interview series were inspired by the late discussions on the nanog mailing list and aimed to provide more insight, as well as shed some light on some of the items under question there. Those transactions were much more detailed and technical at a deeper level. You kept saying "we are all nerds here, please give us more details" and I barely saw that happening. I am sorry to say I learned almost nothing new from this and still have those questions unanswered. Anyhow, I appreciate all your time and effort to put this together, and always great to see you 0x2 Jeffs.

  • @andyyu8755 · 2 years ago

    very nice

  • @timstevenson121 · 2 years ago

    Clearly a real expert in the field, thanks for the very informative session.

  • @joespt966 · 2 years ago

    Proud of you as a Junivator and ex-Junivator. Thanks a lot.

  • @hetvankarsa · 2 years ago

    Thank you Sharada, Jeff & Jeff.

  • @MS-um5ni · 2 years ago

    Great podcast, keep it up, guys 👍

  • @biswajit007-4Ever · 2 years ago

    Hi Jeff^2, could you please create a session on flooding-list creation with the assistance of the data plane?

  • @20dorko · 2 years ago

    Thanks a lot Peter and Jeff for this great video. The concept of using BGP communities to prevent BGP path hunting is really interesting, but I am not 100% sure how this works. Let's say the FSW1 switch has 2 paths for an IP address behind the RSW1 - 1 primary (with the "rack_prefix" community) and 1 backup (with "completed_backup_path" community). How is the path hunting prevented if a prefix behind the RSW1 became unreachable? In my understanding, RSW1 would send BGP withdraw to FSW1 and FSW2. Both routers will start to converge, let's say FSW1 converges first, he can see the backup path via RSW2 and start to use this path and in the meantime it will send explicit withdraw to RSW2. RSW2 will now send the traffic via FSW2 (if FSW2 is not fully converged and didn't send withdraw to RSW2). Once all RSWx / FSWx are not fully converged, traffic is dropped. Probably I am missing something, could someone please explain? Thanks a lot !

  • @lucianobarros2303 · 2 years ago

    Great talk, enjoyed listening to you guys!

  • @mohammedelhassanhabiballa5700 · 3 years ago

    Really a very good discussion. I wish there were more details on specific topics like MPLS implementations for scaling the DC vs. ECMP routing. What should we expect in the near future? Are there any large DCs currently on the path to embracing MPLS in the DC?

  • @mohammedelhassanhabiballa5700 · 3 years ago

    Thank you very much. I really enjoyed this webinar; it opened my eyes beyond the small picture of scaling a DC in narrow environments to seeing how the big whales are trying to scale really giant DCs.

  • @monstertraining1528 · 3 years ago

    Without a doubt, that was one of the greatest sessions ever!