This week feels empty enough that I could reflect on my past readings about computer science. A nice critique of self-hosting (in French) sparked some thoughts about why most (if not all) of the technical alternatives to the cloud giants (GAFAM and like-minded companies) that the open source community pushes are misguided. Let me rephrase that: the cloud needs alternatives, yet the existing ones are failing us because they can't scale.
Scaling might be the pain point of every nascent startup, but it is also that of the open source community. We often don't realise it because that scale is rarely ever reached, but it is a fact that you cannot expect an average user to administer a machine at all. Even their own laptops are seldom updated, and only thanks to repeated popups (*sigh*).
That's why alternatives like YunoHost, Sandstorm, Cloud Foundry and others are never gonna scale. They are good for advanced users who have the time and a broad knowledge of (non-exhaustive list):
- the command line
- X.509 certificates
- DNS and owning a domain
- IP leases
- HTTP proxy configuration
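To give a taste of the certificate item alone, here is a minimal sketch of what "just getting HTTPS" involves before any reverse proxy is even configured. The domain name `example.home` is a made-up placeholder; a real setup would replace this self-signed certificate with a CA-signed one (e.g. via Let's Encrypt's certbot):

```shell
# Generate a self-signed X.509 certificate and its private key,
# valid for 30 days, without being prompted interactively.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem -days 30 \
  -subj "/CN=example.home"

# Inspect what was produced: prints the certificate's subject line.
openssl x509 -noout -subject -in cert.pem
```

And that is the easy, scriptable part; renewing the certificate, pointing DNS at the box, and wiring the proxy are all left to the user.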
Most computer science professionals don't even know all of these, much less master them. That's why companies still make a business[1] out of handling all this administrative mess for you.
Except we can't trust them, for the very reason that they are here to make money, and your data is a very appealing source of money.
One sure fact is that the cloud model works, because the machines, the servers, the software you rely on are actively maintained by professionals.
Nicer models are those which put you in control of your data[2] while also not scaling the pain with the user population. The problem is, the only solution so far is creating silos: centralizing data the same way GAFAM already do. Administration-wise, silos are actually what we want, as we've seen. But another pain point is money. Such a structure provides services, and not all services cost the same: hosting a video is expensive, hosting a pad (raw text) is inexpensive. Both are very popular, but technically only one puts a lot of stress on the provider's infrastructure[3].
[2] Being "in control" here means either having full control over your data (hosting, access) or at least hosting it with a trusted third party. Trust being subjective, you might consider more or fewer third parties trustworthy depending on how strong you want that trust to be. Some for-profits advertise that they neither sell nor look at your data, but your standards might be higher, especially if you want a proof of trust (i.e. code).
[3] As shown by the WSJ's analysis of Alphabet's YouTube (Google).
Expensive services are going to drag down smaller structures such as nonprofit associations if they don't unite to raise money and share the expenses. That requires coordination, and the result definitely cannot look like YouTube. Nice alternatives relying on WebRTC exist to lessen the load on servers[4], and some more obscure (and alpha-stage) projects like ZeroNet offer fully decentralized YouTube drafts that would not rely on any central structure to bear the costs. Right now, only the former provides a viable technical alternative.
As for a more classic alternative, there is the possibility of forming a federation of service providers, which would help them split the heaviest costs. This has apparently already proven efficient with FDN, a French ISP. It also seems like a saner solution, since software cannot save the world, but people can. Ethical organizations like the ones listed at Franciliens or FDN can. They have the big advantage of relying on humans rather than software to adapt to the problem of financing, and can react to a lack of funds. Building on software, however promising, always takes time, as ZeroNet's option shows.
[4] With the drawback of relying on WebRTC, which is known to provide another way to leak your local IP address, even in the presence of a VPN.