The Power of Zero-Trust Architecture: Building a Secure Internal Network with Nebula

  • Published: Aug 5, 2024
  • Imagine if we could establish a level of trust in our network where we can verify with certainty that a computer really is who it says it is! By bringing mutual authentication and trust into networking, we can make better security decisions about when connections should be allowed. This lets our services talk to each other securely over the global internet and reduces the dependence on a trusted perimeter. This mutual trust is the foundation of a zero-trust security model. In this video, I'm going to walk through the basics of zero-trust security and the choices I've made to implement it in my own network. One of those choices is Nebula, an open-source zero-trust overlay network designed for highly scalable distributed networks. Come along on this adventure!
    This video is sponsored by ProtoArc and their XK01 Foldable Keyboard
    ProtoArc XK01 Bluetooth Keyboard: www.protoarc.com/collections/...
    ProtoArc XKM01 Bluetooth+USB Keyboard+Mouse Combo: www.protoarc.com/collections/...
    Find a written version of this tutorial along with all of the commands on my blog:
    www.apalrd.net/posts/2023/net...
    Feel free to chat with me more on my Discord server:
    / discord
    If you'd like to support me, feel free to here: ko-fi.com/apalrd
    The absolutely beautiful thumbnail image comes from NASA and the James Webb Space Telescope. Go Science!
    Timestamps:
    00:00 - Introduction
    00:58 - What Is Nebula?
    05:08 - What Is Zero Trust Anyway?
    10:33 - My Zero Trust Plan
    13:23 - Nebula Addressing
    16:37 - Certificate Authority
    21:09 - Lighthouse Setup
    23:43 - Host Setup
    28:21 - Relaying
    32:05 - Firewall
    34:31 - Nebula DNS
    37:55 - Conclusion
    #zerotrust #networking #homelab
    #ProtoArc #ProtoArcXK01 #ProtoArcXK01FoldableKeyboard
  • Science

Comments • 70

  • @Mikesco3
    @Mikesco3 1 year ago +10

    I'm so glad you prioritize the value of the content over runtime or the usual algorithm-pleasing BS...
    This was a phenomenal video. I can't wait to play with it...

  • @dougphillips5686
    @dougphillips5686 3 months ago +2

    That keyboard needs a track pointer. Track pointers are so easy to use, your hands never need to leave the keyboard. I have one on my laptop and I love it. When I get back to my desktop with a keyboard without a trackpointer, I find my index finger wiggling looking for the trackpointer.

  • @DanMackAlpha
    @DanMackAlpha 1 year ago +8

    Nice video. I've been using this outside of work myself for ad-hoc groups where we want to share each others systems for development / research and it is pretty handy. Great presentation.

  • @user-kv9dw4tp3y
    @user-kv9dw4tp3y 1 year ago +1

    Wow! Awesome foldable keyboard in advertising! Thanks for the recommendation!

  • @melvinvoyd6354
    @melvinvoyd6354 8 months ago +1

    I really appreciated this video. Please continue with this type of content. Thank you.

  • @diogomild
    @diogomild 1 year ago +8

    Just awesome! Been following for a while now, the subjects just keep improving!

  • @karpfenboy
    @karpfenboy 8 months ago +1

    thanks, you explain it all so well and in extensive detail
    this channel is a goldmine

  • @chrisumali9841
    @chrisumali9841 1 year ago +1

    Thanks for the demo and info, have a great day

  • @jaketus
    @jaketus 1 year ago +22

    14:25 Absolutely never ever do that. There's literally never a reason to do that.
    RFC1918 is there for a reason. Now if something on the internet happened to be in that range, you wouldn't be able to access it. Even if currently it's a random Taiwanese ISP probably handling end customers, that could easily change; a perfect example is 1.1.1.1, which people used in the past for internal traffic.
    Also, very ironic of you to say "IPv4 is a mess" and then make the mess even worse.

  • @NetBandit70
    @NetBandit70 1 year ago +11

    So... instead of a small chance of an RFC1918 address conflict... let's break a /16 worth of internet-routable addresses. Yolo.

    • @andrewferguson6901
      @andrewferguson6901 1 year ago +2

      Yeah but it's 42.69 so it's fine :)

    • @andre_warmeling
      @andre_warmeling 1 year ago +3

      Disrespect RFC1918, steal a whole /16 for yourself. Be a man.

  • @kurousagi1339
    @kurousagi1339 8 months ago +2

    Hey apalrd! Noticed a small issue on your blog post for the video. The video link on there is incorrect and leads to the “This video isn’t available anymore” page on YouTube. Don’t want future blog-exclusive readers going to the wrong place!
    Anyway, I’ve just recently discovered your channel and it’s a gold mine. You keep doing you! I appreciate you spreading the knowledge.

  • @andrewjohnston359
    @andrewjohnston359 1 year ago +2

    You should have a look at Pritunl Zero, just to do a bit of a review on an alternate zero-trust solution. Also, while this establishes trust between hosts across private/public boundaries (which is great), it doesn't actually solve the problem in the example you gave of a trusted host getting a virus. Regardless of whether the network is behind a firewall, or each host is behind a firewall in a zero-trust situation, once they are all trusted they are all equally vulnerable. It's actually the ACLs you put in place which begin to reduce the vulnerability of other clients - but that's exactly what should be used in a private trusted network - ACLs, either network-based (on hardware or the host OS) or user-based using AD and RADIUS to define permissions to access files/folders/sites/ports/wifi networks and so forth. So in short, the second A in the AAA security model is still required regardless of the way the first A is handled - just wouldn't want people to be lulled into a false sense of security =)

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +3

      Very true - The big advantage of this type of zero-trust system is that the ACLs can be written much more narrowly since they are applied on every single device and have (cert signed) knowledge of the node's identity, not just the IP address range. So this removes trust in the network IP address (which requires more physical security) as a source of identity.
      But you do still have to write good ACLs.
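A narrowly-scoped, identity-based rule of the kind described above might look like the following sketch of the `firewall:` section of a node's Nebula config; the group names and ports here are hypothetical, not from the video:

```yaml
# Sketch of a node-local Nebula firewall (hypothetical groups/ports).
# Rules match on certificate-signed identity (groups), not IP ranges.
firewall:
  outbound:
    # allow all outbound traffic from this node
    - port: any
      proto: any
      host: any
  inbound:
    # only nodes whose certs carry the "monitoring" group may scrape metrics
    - port: 9100
      proto: tcp
      group: monitoring
    # SSH only from certs signed into the "admin" group
    - port: 22
      proto: tcp
      group: admin
```

Because the group membership is baked into each node's signed certificate, a spoofed source IP alone is not enough to match these rules.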

  • @martymccafferty7510
    @martymccafferty7510 1 year ago +1

    Nebula is nice. Your demo was great. I have been using Nebula for a few years and it has been great.

  • @hainesk967
    @hainesk967 1 year ago +1

    First of all, great video! I really like the way you explain things. How is the throughput performance on Nebula?

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +2

      I don't have multiple systems with >1G currently setup to test, but I was able to iperf at 923Mbits without and 880Mbits with Nebula, which is expected due to the lower MTU. So basically it's not limited by anything but the 1G wire on my test setup. That of course doesn't mean it will do 10G or higher, but it's certainly performant enough for all of my Proxmox systems.

  • @DarkNightSonata
    @DarkNightSonata 3 months ago

    Awesome video. Is it possible to set up Nebula to work as an exit node, where all traffic from an iPhone is routed and exits through a server on DigitalOcean, for example?

  • @KeithWeston
    @KeithWeston 1 year ago +1

    Okay - yes! Great video. As always, extremely thorough and helpful. Now, what is that blue, silver & black rocket on the right side of your desk near your case?

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +2

      it's a 4-color pen lol, each of the fins pushes down separately to activate one color

    • @w0ode198
      @w0ode198 6 months ago

      It's a flesh light.

  • @GrishTech
    @GrishTech 1 year ago +1

    When I looked at Nebula a long time ago, relaying was not a thing yet. I have since settled on headscale. I actually like headscale a lot more, but Nebula looks really good now; I might try it out. The cool thing about headscale is that you sign in on the node one time, and the server takes care of pushing routes, DNS, ACLs, etc. Nebula seems to be something that has to be managed at scale through something like Ansible.

    • @nick-leffler
      @nick-leffler 1 year ago +1

      I'm using headscale as well and love it.

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +3

      There are some architectural differences, so the headscale server does a whole lot more work in the setup than Nebula's lighthouse.

    • @GrishTech
      @GrishTech 1 year ago

      @@apalrdsadventures This is true. Sort of a different use case. Nebula is a bit more zero-trust than headscale.

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +5

      Yeah, the lower load on the lighthouse and delegated relaying also scales up better at really big setups. Tailscale has a lower barrier to entry for most people. The Nebula lighthouse also has no private data at all, so a compromised lighthouse just means the network stops working, security isn't broken.

  • @gintarasp2
    @gintarasp2 1 year ago

    How do you add SNMP devices to Nebula?

  • @michaelpayne5272
    @michaelpayne5272 1 year ago +1

    awesome video. Is there a mobile app for nebula? I see a Chinese iOS app, ‘Nebula Mesh’ along with the Official ‘Mobile Nebula’. Wondering how the handshake would affect battery life/power consumption when the device goes to sleep. Usually this isn’t a problem with the WireGuard app, although I have noticed increased usage with the Tailscale app.

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +2

      Mobile Nebula by Defined Networking is the one. No idea on battery life, but it shouldn't keep a connection open unless there are packets periodically going across the tunnel.

  • @maxmustermann9858
    @maxmustermann9858 8 months ago

    What a great tool. Let’s assume I have multiple hosts in the cloud which are all in one Nebula network. How does the traffic get routed when I connect from one host to another in the same cloud, via the internet? Is it possible for these kinds of connections to use a private cloud network so traffic is not routed via the internet?

    • @apalrdsadventures
      @apalrdsadventures  8 months ago

      A given Nebula host learns all of its IP addresses from all interfaces. It then connects to the lighthouse(s) and gives them all of the IP addresses it has learned. The lighthouse then knows all of those IPs plus the IP the lighthouse received (if the node goes through NAT on the way out) and shares this with a node wishing to connect. The sending node will try all of the receiving node's addresses in order of similarity to its own addresses, so if they are on a private cloud network they will use that.
      But if you use IPv6 it doesn't matter since IP will always route most efficiently without any NAT or overlapping address ranges on different networks.
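The "order of similarity" idea above can be illustrated with a toy sketch. This is not Nebula's actual selection algorithm, just the intuition: prefer candidate peer addresses that share the longest leading bit prefix with one of our own addresses, so a private cloud address wins over a public one.

```python
import ipaddress

def rank_candidates(local_addrs, candidates):
    """Sort candidate IPv4 addresses so those most 'similar' to one of
    our local addresses (longest shared leading-bit prefix) come first."""
    def best_prefix(cand):
        c = int(ipaddress.ip_address(cand))
        best = 0
        for local in local_addrs:
            l = int(ipaddress.ip_address(local))
            # XOR: the first set bit marks where the addresses diverge,
            # so 32 minus its bit length = number of matching leading bits
            diff = c ^ l
            best = max(best, 32 - diff.bit_length())
        return best
    return sorted(candidates, key=best_prefix, reverse=True)

# A node with a private cloud address prefers the peer's private address:
print(rank_candidates(["10.0.1.5"], ["203.0.113.7", "10.0.1.9"]))
# → ['10.0.1.9', '203.0.113.7']
```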

  • @CrazyMineCuber
    @CrazyMineCuber 1 year ago

    Will you do an update video on how to automate the renewal of the certificates? Also, why is the certificate valid for so long (1 year, compared to the 1 day smallstep CA uses by default)? After digging into setting up smallstep CA, I am still confused how zero trust could work during the renewal of certificates. From how I understand smallstep CA's http-01 ACME validation, it basically must trust the DNS servers and the routers/network between the CA, the DNS server, and the client that wants to renew its certificate. This does not really seem like zero trust. Are there better ways of doing it in smallstep/Nebula? Would the ultimate solution be something like device attestation using a YubiKey, where the CA has a database of your fleet of devices' YubiKey public attestation keys? This is the solution that sounds the most like zero trust to me.

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +3

      So in general, the danger is that a certificate and its private key are compromised. The two ways to avoid this are to use hardware-bound keys (Yubikey, TPM2, ...) or continuously re-key everything so a compromised key doesn't have as long of a useful life. This applies to all keys, TLS, Nebula, and SSH.
      Since the ACME protocol is entirely automated, we can use a really short interval (24 hours) and there won't be any client interruption. With Nebula, for servers we could automate re-keying at that interval, but since clients also use the crypto system and might not be powered on for 24 hours that's a bit too short. So we need a compromise. The Nebula defaults are 1 year, which is decent if you don't have any automation to re-key. It's just a default though.
      Reading through the weeds a bit, it seems like Slack has a CA which they use to issue minimal-access provisional certs to machines as they go through the automated first boot setup which has enough access for the re-keying automation to come in and replace the key with the real one when it fully sets up the server with a role. They haven't said how they do attestation to re-key.
      In general you are right and the CA does need to trust something to issue certs. 'Zero-trust' refers more to the individual connections, in which a node trusts nothing other than the root certificate it's told to trust, and then builds trust in other nodes via keys which have been signed by the root. How you attest to the CA at re-key time is still not perfect. You could trust the existing nebula cert to renew the same one, for example.
      And finally, Smallstep has a provisioner which uses Nebula certs as a means of trust to issue x.509 and SSH certs.
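The lifetime discussed above is chosen at signing time. As a sketch, a shorter-lived server cert for an automated re-keying setup might be issued like this (the name, IP, and groups are placeholders, not values from the video):

```shell
# Sketch: issue a 30-day cert for a server that re-keys automatically
# (assumes the CA's ca.key/ca.crt are in the working directory)
nebula-cert sign -name "web01" -ip "192.168.100.10/24" \
  -groups "servers,web" -duration 720h
```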

  • @str0g
    @str0g 1 year ago

    Nice video, but I have question...
    When you get a public query for spaceship3 and the lighthouse sends the ip of spaceship3, shouldn't the lighthouse check if the client has a valid nebula certificate?

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +1

      DNS queried over the public network doesn't require any authentication; DNS doesn't really have any facilities for that. The lighthouse also doesn't have a database of all clients, it only has a table of clients that have recently established sessions with it, and that's what it uses for its DNS query. So if spaceship3 doesn't have a valid certificate it won't end up in that table.
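The lighthouse-side DNS behavior described above is enabled in the lighthouse's own config; a minimal sketch (the listen address and port are placeholders):

```yaml
# Lighthouse config sketch: answer DNS queries from its table of
# hosts with recently established (certificate-validated) sessions
lighthouse:
  am_lighthouse: true
  serve_dns: true
  dns:
    host: 0.0.0.0   # placeholder listen address
    port: 53
```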

  • @autohmae
    @autohmae 1 year ago

    This piece of software feels almost right, but no IPv6 for the internal network and no YubiKey auth for the node..? Which is something the commercial companies do seem to offer.

  • @mrhidetf2
    @mrhidetf2 1 year ago

    That is the god damn cutest cat! The video is alright too /s
    Jk, thanks for the great content, never heard of Zero Trust Networks before.

  • @martymccafferty7510
    @martymccafferty7510 1 year ago

    You can set how long you want the CA cert to live. Default is 1 year.
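That lifetime is chosen when the CA is created; a command sketch (the name and duration are illustrative, not from the video):

```shell
# Sketch: create a CA valid for two years instead of the 1-year default
nebula-cert ca -name "My Homelab CA" -duration 17520h
```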

  • @beepboop-vj1ft
    @beepboop-vj1ft 11 months ago

    Any risk of conflict with an active directory? How would DNS be setup in that case?

    • @apalrdsadventures
      @apalrdsadventures  11 months ago

      I don't use Windows so I'm not an expert on this.

  • @Bashlearn
    @Bashlearn 1 month ago

    In my scenario the only solution to the NAT problem is SoftEtherVPN's SecureNAT feature; how can I achieve the same using Nebula?

    • @apalrdsadventures
      @apalrdsadventures  1 month ago

      Nebula can use relays, which are nodes not behind NAT. You can configure a node to advertise a relay for itself.
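As a sketch, the two sides of that setup live in each node's config; the relay's Nebula IP below is a placeholder:

```yaml
# Relay node's config (publicly reachable): willing to relay for others
relay:
  am_relay: true
---
# NATed node's config: advertise that it can be reached via that relay
relay:
  use_relays: true
  relays:
    - 192.168.100.1   # placeholder: the relay node's Nebula IP
```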

  • @esra_erimez
    @esra_erimez 1 year ago +1

    I'm envious of your brain

  • @neutral139
    @neutral139 1 year ago +3

    19:25 cat

  • @nick-leffler
    @nick-leffler 1 year ago

    Just as a heads up tailscale works the same way. Each system has tailscale installed on it.

    • @TheDark0rb
      @TheDark0rb 1 year ago

      Sort of, however you can use it to "break" the model and use subnet routers, which removes the "trust" part of it.

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +2

      Tailscale also centralizes all of the trust at the tailscale service or headscale server, since it's responsible for all of the ACLs and pushing config to clients.

    • @lethedata
      @lethedata 11 months ago

      @@apalrdsadventures This is a good thing (and sometimes required) depending on your zero trust implementation. Besides easier management due to the centralization, the user *and* device identity are validated before connecting unlike Nebula which only validates the device. The user identity part is an extremely common thing that comes up during zero trust discussions which is probably why Nebula isn't advertised as such.

    • @apalrdsadventures
      @apalrdsadventures  11 months ago +2

      The Tailscale client device can't check a cert for validity, it can only trust that the auth server was correct. This is very centralized trust.
      Nebula centralizes management at the certificate authority, but once a cert is signed it remains valid. This means any node can now validate a certificate on its own, without relying on the central server. The PKI part isn't done internal to Nebula, so it's not a complete solution for all use cases. It's advertised for server to server zero-trust that can scale up to tens of thousands of nodes, which is something that's hard to do when you make a centralized node key to every decision.

    • @lethedata
      @lethedata 11 months ago

      @apalrdsadventures Yup, it all depends on the implementation requirements, this one being access vs datacenter/server networks.
      I said not advertised because, looking at their website and docs, I only see "overlay networking" referenced, not zero trust (unless it's hidden on a page I'm not seeing). Though it very much is a server-based zero-trust model.

  • @ArturoNavarro
    @ArturoNavarro 10 months ago

    Do you know OpenZiti?

  • @rsjrx
    @rsjrx 8 months ago

    The neckbeard is strong on this one.

  • @michaczerwonka8720
    @michaczerwonka8720 1 year ago

    What's about DDoS attach?

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +1

      On which node?
      In general, unless you have open security vulnerabilities or an extremely fast internet connection, you will saturate the internet connection before there are any problems with the host (lighthouse or otherwise), although this would have happened anyway with any other network architecture using the same connection.

  • @Lovesickdangerboy
    @Lovesickdangerboy 11 months ago

    How does this compare to OpenZiti?

    • @apalrdsadventures
      @apalrdsadventures  11 months ago +2

      OpenZiti is functionally a very different implementation of the same ideals. It's more focused on a software SDK to integrate in applications and web browsers vs Nebula's tunneling protocol approach for non-ZT-aware applications. Ziti has that too, but it's not the focus.
      On the wire, Ziti requires nodes to communicate via routers instead of facilitating direct connections (so potentially congesting these nodes and funneling bandwidth), and uses TLS (over TCP) for these sessions, which are kept alive all the time to avoid the latency of the 3-round-trip TCP + TLS handshakes. This architecture is also susceptible to head-of-line blocking since it multiplexes traffic into a single TCP session between nodes. Nebula essentially encapsulates/encrypts packets individually and routes them at each node instead of relying on dedicated routers, and uses UDP as the transport. It also won't suffer from head-of-line blocking with multiple connections traversing the same route.
      Crypto-wise, Nebula is using Noise Protocol with ed25519 certificates ('raw' public key cryptography, not x.509) for authentication and either AES or ChaCha20 as the block cipher. Ziti uses mTLS, so it can use x.509 TLS certs (RSA or ed25519) and integrate with other PKI systems, and can also negotiate AES or ChaCha20 per connection. Ziti also has the advantage that a web browser can directly connect to a router (using mTLS) which can then route to the service, so you don't always need to install client software, although you do need to install client certificates, which requires you to add a CA to the client as well. Nebula's approach was chosen to keep the amount of data exchanged during a key exchange under 500 bytes, to guarantee it fits in a single UDP packet without fragmentation.
      Overall, Nebula is a tool to add a secure point-to-point overlay network with per-node zero-trust firewalling to a network of computers running unmodified network-enabled software. OpenZiti is a framework to build your software on, including zero trust from the beginning. Ziti's TCP overhead / head-of-line blocking and its requirement for router nodes to process every packet will certainly be a problem at large scale, unless you are replacing your existing TCP sockets with Ziti sockets at the application level.

    • @philipgriffiths5779
      @philipgriffiths5779 9 months ago

      @@apalrdsadventures Overall a good comparison. While I am not an expert, I do work on the OpenZiti project, so here are some comments:
      - "On the wire, Ziti requires nodes to communicate via routers". Today correct; soon we will enable P2P. We are getting HA controllers out of the door first. Having a smart routing fabric does give you many advantages though, incl. outbound-only connections on both sides & no need to UDP hole punch, the ability to find lower-latency paths by hopping on/off underlays, you can obfuscate metadata so external observers do not know source/destination/other hops, as well as nodes not becoming blocking, as smart routing will gracefully move paths away from any nodes it perceives as becoming blocking.
      - "Nebula essentially encapsulates/encrypts packets individually and routes them at each node"... that's what Ziti does too, thus I don't think you are going to get the head-of-line blocking issue.
      - Ziti uses mTLS and E2EE "by default". You can turn off E2EE. Also, you can change the library if you want (e.g., I have seen FIPS 140-2 implementations). As you say, you can bring external JWT or CA providers; soon (with the HA control plane) Ziti will support any external OIDC/SAML-compliant identity.
      - While we do TLS over TCP, we also packetize, so it's not easy to rebut in a bite-size bit.
      The SDK and app-embedded part of OpenZiti is definitely a powerful selling point and the future. Today I would say those who use it are more likely to want the highest level of security. If you want that, you will love Ziti.

    • @philipgriffiths5779
      @philipgriffiths5779 9 months ago +1

      Found a bit more detail which could be interesting: *OpenZiti does not encapsulate the entire TCP/UDP packet. Ziti just extracts the TCP/UDP payload and transports it over TLS. So for the edge, from a TCP payload perspective, this produces 83 bytes of total TLS overhead including encryption (TLS header (5 bytes) + 78 bytes of encryption overhead). So if you send a TCP packet with 100 bytes of payload it will equate to 183 bytes of encrypted TCP payload over Ziti, including TLS headers/encryption. If the amount of data payload + TLS overhead exceeds 1448 bytes, in the case of standard ethernet 1514 bytes, then additional bytes will be fragmented. Further, link (router to router) adds additional overhead which seems to vary between 31 and 43 bytes, though I cannot account for why there is a 12-byte variance.*

  • @Parkhill57
    @Parkhill57 1 year ago +3

    Chinese IP range... I like it.

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +6

      It's the answer to life, the universe, and everything
      But it's also allocated (to a Taiwanese ISP) but not advertised over BGP so it won't route on the public internet either (at least for now).

  • @joedoe3688
    @joedoe3688 7 months ago

    You cannot just use an IPv4 range at random; 192.168.0.0/16 and 10.0.0.0/8 are there for a reason. Calling IPv4 a mess is not a good reason to make it messier.

  • @michaczerwonka8720
    @michaczerwonka8720 1 year ago

    *attack

  • @shephusted2714
    @shephusted2714 1 year ago

    Anon p2p AI would be more interesting while using PKI/QKD. You should add another monitor.