God I love this company so much. I've been dreaming of a viable decentralized internet for over 10 years; I hope you guys can make it a reality.
Looking forward to this getting to a more finished state at some point in the future.
Currently in the final stages of debugging, hopefully a public release in the next few weeks.
Can I host the core on my homelab rack voluntarily? Do you provide Docker images?
You will be able to run the core on virtually any modern OS, although as we develop on Linux and Mac, those will be the best supported in the early days. We haven't yet launched an alpha version of Freenet (that's still a few weeks away - we're still debugging), but when we do it will likely just be a matter of typing "cargo install freenet" and then "freenet network". We'll probably also provide a Docker image.
If you have a stable IP address and can punch a hole in your firewall for inbound UDP then you could also be a gateway peer - which will help new users join the network. Setting this up will also be pretty easy and we'll document how to do it before launch.
@@IanClarkeSanity I have static public IPs at my disposal (a rare commodity nowadays) and host my own firewall, so it won't be a problem. A fine addition to my Tor node and another project worth supporting. I'm waiting for news!
Routing for you.😆
You win the (decentralized) internets!
Amazing work. I'm eager to try to use this in some form asap! so interesting!
Louis, I know you keep suggesting that we need a way for our grandmas to be able to run servers at home. I wonder if this can be bundled with a home server client; we basically need an iOS-style interface to run a server at home where we just download bundles. If home servers are the future, this would be a fantastic bundle to have.
I'm wondering too if the server could just be a replacement for a home router and run services such as Plex and custom firewalls.
And honestly, for the price of modern routers, I think a home server/router bundled unit might be the best way to offer an entry-level path to home hosting, since most people already know how to connect to their router.
I think I might have found my calling for an open source project
IMO hosting itself is not that difficult with stable and reliable releases of the software; networking is an issue though. We would need some kind of fairly-run public proxies for everyone behind NAT without technical knowledge.
@@limesta I second the router/server setup for 2 reasons:
1) The sheer impact of simply being the _default_ option with hardware is a huge driving factor in the general public's adoption of software. Most people aren't going to try doing anything except installing programs... maybe changing system settings. Anything beyond that is like trying to draw blood from a stone.
2) In my personal experience with OPNsense, I kept finding myself wanting to add more servers to the router. Anything I intended to have running all the time just made sense to put on the low-power machine that's intended to be run 24/7.
If you can get a core system together with a solid UI that makes it easy to integrate and manage new servers, particularly from 3rd parties, that'd be amazing.
I'd happily spread the word about it anywhere the subject comes up.
@@trajectoryunown I'm a UX and web designer with a marketing background by trade, but I have a fair bit of familiarity with networking devices and hardware, and I'm wondering what the ideal configuration for a *default* option would be. I see some solutions out there, but nothing is as simple as plug in, open a URL, and load your home apps. I'd like to run my local smart device servers from my phone if I want. The closest you can get in terms of hardware (and some locked-firmware compatibility) still needs to be configured and is an additional device in your stack. For a market advantage (see Tesla's initial market strategy), you would need to bring a device to market that interfaces with the most common home devices (AC, garage, your phone) as a core feature set, replacing the home router to make it easier, marketed as "your internet and home in one device". If a terrible Rabbit product can hit the market doing nothing, I could see a strategized marketing campaign bringing a device like this to market, and it could be a central hub for hosting open source applications; maybe an OAuth service for local applications could enable home hosting automatically from their respective apps / websites.
As an extra-mile service, it could act as a decentralized network / cloud for this and other open source apps; it could be opt-in, or even rev-share for consumers who have extra bandwidth and computing power on their local router.
The start of something like this would likely be a Linux build designed with this in mind, with a simple web interface, though I'm not too familiar with how you can run isolated services easily. Then, once it's proven to work and some FUTO-sponsored and other open source projects become compatible with it, some funding would be needed to bring a hardware set to market to make it as streamlined and attractive as possible for the end consumer.
Is there a community forum for this project?
Fantastic project and talk!
Freenet in concept sounds a lot like IPFS? You can host apps and sites on IPFS, and the routing and peering sound similar in concept as well. How does Freenet differ from IPFS? It was mentioned at the end that IPFS is more like Dropbox and about storage, but IPFS is a cache, not really for storage: you can pin content and trust other peers to pin their content if you want, but by default IPFS only acts as a cache and communication mechanism. Much like Freenet.
I address the main differences between Freenet and other decentralized systems here: ruclips.net/video/enTAromEeHo/видео.htmlsi=tZ2MdyNsLNjqvCSr&t=2014
Thanks for the reply.
I saw that QA part. You mentioned Freenet works like an LRU cache with pinning support and IPFS is more like Dropbox, but that's not the case: IPFS is an LRU cache and supports pinning, much like Freenet, so I was wondering how it differs in that regard.
The compute sounds interesting. Does that happen locally on your peer and get propagated out, or does it happen on another peer? It sounded like the former?
@@nightshade427 Yes - I was talking more about the original IPFS/Filecoin concept; they've extended it quite a bit over the years, which makes it difficult to summarize.
With Freenet the compute happens in the network: you modify the state on your local peer, which is subscribed to a contract; it then propagates up the subscription tree and out to all other branches and leaves, like a virus. The goal is that changes to data propagate to all interested peers within a second or two.
I think a good way to compare is the "crankshaft versus car" analogy I make in the QA. To elaborate:
Freenet is like an all-in-one operating system for decentralized apps: install it once, and you have access to a wide range of applications directly within your browser without needing additional installations or configurations.
IPFS is more like a toolkit for building decentralized apps: developers use it to integrate decentralized storage and functionality into their applications, which often require additional components and setup for end-users.
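To make the propagation model described above concrete, here's a toy simulation of an update spreading through a subscription tree. All names and the tree representation are invented for illustration, not Freenet's actual code: the idea is simply that a peer which applies an update relays it to every tree neighbor except the one it arrived from, so the change reaches all subscribers.

```rust
// Toy model: each peer holds a contract state and a list of tree neighbors.
struct Peer {
    state: u64,
    neighbors: Vec<usize>, // indices into the peer list (subscription-tree edges)
}

// Apply an update at peer `at`, then relay it to every neighbor except the
// one it came from (`from`), flooding the whole subscription tree.
fn propagate(peers: &mut Vec<Peer>, at: usize, from: Option<usize>, new_state: u64) {
    peers[at].state = new_state;
    let neighbors = peers[at].neighbors.clone();
    for n in neighbors {
        if Some(n) != from {
            propagate(peers, n, Some(at), new_state);
        }
    }
}
```

Starting the flood at a leaf still reaches every other peer, because each hop forwards in all directions except backwards; that's the "up the tree and out to all other branches and leaves" behavior.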
So if a peer's coordinate depends on its IP address, and there are only 2^24 (~16 million) /24 prefixes... does that cause any trouble? If an attacker chooses a victim (knows their IP and thus their coordinate), can the attacker run many machines that hash close to the victim's coordinate in order to... I don't know... deprive the victim of information? Or supply the victim with fake information?
There is a risk: if an attacker controlled a large block of IP addresses, they could create peers that are all quite close to a particular contract in location-space. This is why we only look at the class-C (/24) part of the IP address when generating the hash, so an attacker would need control of many class-C blocks, which is a lot more difficult.
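As a sketch of the mitigation described above: derive the ring location from only the /24 ("class-C") prefix of an IPv4 address, so every host in a single /24 collapses to one location. The function name and the choice of hasher here are illustrative assumptions, not Freenet's implementation:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical sketch: map an IPv4 address to a location on the unit
// interval using only its first three octets (the /24 prefix), so an
// attacker can't mint new locations by varying the host octet.
fn location_from_ip(octets: [u8; 4]) -> f64 {
    let prefix = [octets[0], octets[1], octets[2]]; // drop the host octet
    let mut hasher = DefaultHasher::new();
    prefix.hash(&mut hasher);
    (hasher.finish() as f64) / (u64::MAX as f64) // normalize onto [0, 1]
}
```

With this scheme, two hosts in the same /24 (say `192.168.1.5` and `192.168.1.200`) land on exactly the same location, so controlling thousands of addresses in one block buys the attacker only one position on the ring.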
It looks like contracts that aren't web apps are written in Rust, which presumably means they're compiled to WASM and then run on whatever peer requests that app.
Which kind of makes me wonder about security.
This is correct: contracts run in a WASM security sandbox (similar to WASM running in web browsers).
@@IanClarkeSanity Got it, makes sense, thanks!
Internet Service Provider
will there be a Freenet Service Provider?
Great Q&A!
I remember many years ago they said Freenet was not so anonymous anymore?
Will watch fully on the coming weekend, I'm existed.
Do you think?
I'm thinked therefore I'm existed
Thanks folks
Is it better than i2p?
i2p is an anonymizing proxy - but services on i2p are still centralized - they're just hidden. In contrast, services on Freenet are entirely decentralized.
@@IanClarkeSanity thanks for answering
Great explanation! Curious what we can do to support Freenet. :)
Wonderful, wonderful, wonderful!!
Isotonic regression sounds similar to OSPF, or am I just confused? Still cool, just confused because I was thinking about the numerous technologies that are similar but have different names.
They're similar in that their goal is to route packets efficiently; however, I don't think OSPF learns from past routing performance how to route more efficiently the way Freenet does. Rather, that data is provided manually by network administrators.
OSPF also seems to keep a complete map of the network whereas with Freenet peers just use local information to make routing decisions.
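For readers unfamiliar with the term: isotonic regression fits the best non-decreasing curve to noisy measurements, which suits "expected response time should not decrease as ring distance grows". A minimal pool-adjacent-violators (PAV) sketch, purely illustrative and not Freenet's code:

```rust
// Pool-adjacent-violators: given y-values already ordered by their x
// (e.g. response times sorted by ring distance), merge any adjacent
// blocks whose means violate monotonicity, then expand blocks back
// into one fitted value per point.
fn isotonic_fit(y: &[f64]) -> Vec<f64> {
    // Each block stores (sum of values, count); its mean is sum / count.
    let mut blocks: Vec<(f64, f64)> = Vec::new();
    for &v in y {
        blocks.push((v, 1.0));
        // Merge backwards while a block's mean exceeds its successor's.
        while blocks.len() >= 2 {
            let n = blocks.len();
            let (s1, c1) = blocks[n - 2];
            let (s2, c2) = blocks[n - 1];
            if s1 / c1 > s2 / c2 {
                blocks.truncate(n - 2);
                blocks.push((s1 + s2, c1 + c2));
            } else {
                break;
            }
        }
    }
    let mut fit = Vec::with_capacity(y.len());
    for (s, c) in blocks {
        for _ in 0..(c as usize) {
            fit.push(s / c);
        }
    }
    fit
}
```

For example, the noisy sequence `[1.0, 3.0, 2.0, 4.0]` comes out as `[1.0, 2.5, 2.5, 4.0]`: the out-of-order pair is pooled into its average, and the result is guaranteed non-decreasing.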
@@IanClarkeSanity thanks😄.
Are there any compatibility issues with the current internet? Does it perform similarly to a closed network?
Wasn't able to finish the video past around the halfway point because of an unforeseen appointment. I'll come back to it once I'm done, but sorry if the questions seem redundant.
@@oladrolahola Freenet runs over UDP/IP, and integrates with web browsers or any other user software via an HTTP/WebSocket API - so it's very compatible with current internet protocols.
It seems similar to LBRY or IPFS. The big issue is how you incentivize people to use the software and share data.
It's decentralized like IPFS, but unlike IPFS it isn't designed for long-term storage of data; think of Freenet more as a communication medium than a storage medium.
In terms of incentives, Freenet peers employ a "tit-for-tat" strategy: peers keep track of how much work they're doing for other peers versus how much work those peers are doing for them. If this cost-benefit analysis reveals that a neighboring peer is "leeching", peers won't want to talk to it.
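A toy sketch of what such a tit-for-tat ledger could look like. The type, method names, and thresholds are all invented for illustration, not Freenet's API:

```rust
use std::collections::HashMap;

// Hypothetical per-neighbor work ledger: track units of work we did for
// each peer versus units it did for us, and flag a peer as "leeching"
// when the imbalance passes a threshold.
struct Ledger {
    // neighbor id -> (work we did for them, work they did for us)
    balance: HashMap<String, (f64, f64)>,
}

impl Ledger {
    fn new() -> Self {
        Ledger { balance: HashMap::new() }
    }
    fn record_work_for(&mut self, peer: &str, units: f64) {
        self.balance.entry(peer.to_string()).or_insert((0.0, 0.0)).0 += units;
    }
    fn record_work_from(&mut self, peer: &str, units: f64) {
        self.balance.entry(peer.to_string()).or_insert((0.0, 0.0)).1 += units;
    }
    // A peer is leeching if we've done at least `min_work` for it and more
    // than `ratio` times what it returned (+1.0 smooths the zero case).
    fn is_leeching(&self, peer: &str, ratio: f64, min_work: f64) -> bool {
        match self.balance.get(peer) {
            Some(&(done, received)) => done >= min_work && done > ratio * (received + 1.0),
            None => false,
        }
    }
}
```

The `min_work` floor matters: a brand-new neighbor that hasn't reciprocated yet shouldn't be punished before it has had a chance to contribute.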
Interesting, though I don't really know what use it would be to me as I don't develop anything and don't know anything that uses it... Needs to reach critical mass before it'll become relevant.
Am I getting this right?
The TL;DR is pretty much that this guy wants to torrent the whole Internet, but with better resource management?
Started getting interested in all this but it's above me tbh.
Middle out
i understand nothing of all the low level stuff 😭😭 looking forward to try the software, the OG freenet was cool even tho it was slow af lmao
24 years after Napster was shut down, and we're still trying to come up with a viable way to have free speech on the Internet.
It's a tough nut to crack.
sounds cool
cool to see him with a framework laptop
this sounds cool, but it also sounds like it's reinventing the torrent protocol
The original Freenet actually predates BitTorrent, but BT is a file distribution mechanism, not a general-purpose platform for building decentralized systems.
The Internet Computer is tackling the same problem: an alternative to centralized clouds.
just go ham
Of course, it is written in Rust.
Like NYM but without the crypto bs and much better architecture! 😍
Decentralized internet is for pr0n!
Likely also for the illegal variety :^I
This reminds me of what Qortal is doing. A collab with Crowetic from Qortal would be cool.
I haven't FMS/soned in years... 8888 up boys. i2p tunnel awayyyyyyy. I was expecting an older Ian.