
Comments


Will

(1)

Have you thought about hosting your notechmagazine website on the solar powered server alongside the translated websites, or is it too massive to run on the server at the moment?

Amos Blanton

(2)

There are many ways to imagine building more opportunities for efficiency into the design of the network protocol itself. For example, you could imagine routing requests to solar servers around the world that have agreed to mirror each other's websites. The criterion used to select which server(s) to request content from could be how much sun is shining on them at the moment. That way, servers with full sun and charged batteries would work hardest, making good use of power that would otherwise be wasted.
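A minimal sketch of that selection logic, assuming each mirror reports telemetry for solar input, current load, and battery charge (the server names and numbers below are invented for illustration):

```python
def pick_mirror(mirrors):
    """Return the mirror best placed to serve a request: the one with the
    fullest battery, breaking ties by spare solar power (generation minus load)."""
    def spare(m):
        return (m["battery_pct"], m["solar_w"] - m["load_w"])
    return max(mirrors, key=spare)

# Hypothetical telemetry from three mirrors:
mirrors = [
    {"name": "barcelona", "solar_w": 2.0, "load_w": 1.5, "battery_pct": 40},
    {"name": "sydney",    "solar_w": 9.5, "load_w": 1.0, "battery_pct": 95},
    {"name": "arizona",   "solar_w": 0.0, "load_w": 0.5, "battery_pct": 70},
]

print(pick_mirror(mirrors)["name"])  # sydney: full sun and a charged battery
```

A real protocol would also need the mirrors to gossip this telemetry to whatever does the routing (a DNS server, say), but the core decision is this simple.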

Dr. Edward Morbius

(3)

Kris:

Love your work and inspiration.

Looking over the solar-website retrospective (https://solar.lowtechmagazine.com/2020/01/how-sustainable-is-a-solar-powered-website.html), some thoughts. Hopefully not too much of this has already been considered.

1. Traffic and load aren't mentioned. If the system is serving continuous requests, suspending or switching to a low-power / hibernation mode isn't viable, but if it *isn't*, then utilising a wake-on-demand service (wake-on-lan, inetd) *might* offer some server efficiencies.

2. System performance tuning. Ruthlessly pruning all unnecessary processing to match available power. Disabling cronjobs either at night or when power reserves are below a specified threshold might help somewhat.

3. Scheduled or batched jobs. RSS feeds especially might be slotted such that they're served en masse at a specified interval. These tend to be automated (rather than human-initiated) requests, and if a combined burst of activity allows the system to return to quiet mode, batching might be worthwhile. Otherwise, RSS could also be disabled at night.

4. A router-based server-side cache. Move static pages/requests further off the webserver itself. An Nginx or Squid proxy?

5. Hardware-accelerated crypto / TLS termination. Either on the webserver or router: moving crypto to a dedicated hardware implementation *might* reduce processing load. Performance monitoring should indicate whether this is the case.

6. Detailed system power monitoring: This seems missing from the analysis. A breakdown of power usage by process / function would be interesting.

7. Geographically-distributed service. One obvious alternative to single-site provisioning would be a globally-distributed set of servers, possibly serving numerous websites, spread across latitudes and both hemispheres. The likelihood of all instances being simultaneously power-starved is low. Even as few as 2-4 systems should provide considerable availability improvements. DNS load balancing (with some indication of "I'm going to sleep soon") might be an approach. So long as this consolidates multiple websites, the overall embodied energy should remain feasible, and the reliability enhancement should be much easier to achieve than by supplying larger panels and/or battery storage.
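Point 2 above (gating scheduled jobs on power reserves) could be as simple as a wrapper that consults the charge controller before any non-essential work runs. A hypothetical sketch, with an invented threshold and telemetry values:

```python
def should_run_job(battery_pct, solar_w, min_battery_pct=50):
    """Decide whether a non-essential scheduled job may run.
    battery_pct and solar_w would come from the charge controller;
    the 50% threshold is a placeholder, not a measured figure."""
    if solar_w > 0:                        # the panel is producing: run freely
        return True
    return battery_pct >= min_battery_pct  # at night, only on a healthy battery

# A cron entry would call this check before doing any real work:
print(should_run_job(battery_pct=30, solar_w=4.0))  # True: sun is shining
print(should_run_job(battery_pct=30, solar_w=0.0))  # False: night, low battery
```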

Drew Pearce

(4)

I read with interest the analysis of the sustainability of the solar-powered website.

I wonder if you've considered demand shaping in different ways:

So something like targeted downtime. If you had data on when the website is being accessed, and there was a time with a very low access rate, then forcing downtime to save the battery could let you use a lower-spec system with minimal experienced downtime. It reminded me a little of your discussion around energy security and downtimes. Aiming for roughly 96% uptime would involve finding two blocks of 30 minutes in a 24-hour period (or four blocks of 15 minutes) where the number of visitors is small enough to warrant the downtime.
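A short sketch of how such slots could be chosen from access statistics, and what the resulting uptime works out to (the hourly hit counts below are invented):

```python
def quietest_hours(hourly_hits, n=2):
    """Return the n hours of the day with the fewest requests:
    candidate slots for planned downtime."""
    return sorted(range(24), key=lambda h: hourly_hits[h])[:n]

def uptime_pct(downtime_minutes_per_day):
    """Uptime percentage for a given daily downtime budget."""
    return 100 * (1 - downtime_minutes_per_day / (24 * 60))

# Made-up request counts per hour of day (UTC), index 0 = midnight:
hits = [12, 8, 5, 3, 4, 9, 20, 40, 60, 80, 90, 95,
        100, 98, 92, 85, 88, 90, 84, 70, 55, 40, 28, 18]

print(quietest_hours(hits))      # [3, 4]: here, 03:00 and 04:00 UTC
print(round(uptime_pct(60), 1))  # 95.8: one hour of downtime per day
```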

The other thing I was thinking about (but have very limited experience of) is that your website lends itself well to RSS-style content access. I'm not clear on the details of how RSS feeds work, but I presume the content is fetched when the user's RSS reader calls for it. There could be a system to communicate a time to attempt access based on the power state of your system. I think it would probably work best for content-heavy websites, as I suspect the ratio of the content data to the data used to probe the system would need to be high to make it worthwhile. A system which emailed new content out to a list of subscribers, but only did so when the sun was shining, would also be potentially efficient, but is much more limited than a website.
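Drew's idea of having the server tell readers when to come back could piggyback on mechanisms that already exist, such as an HTTP Retry-After response header or an RSS <ttl> element. A hypothetical sketch of how the suggested interval might be derived from the power state (all thresholds invented):

```python
def feed_poll_interval_s(battery_pct, solar_w):
    """Suggest how many seconds a feed reader should wait before polling
    again, based on the server's power state. Thresholds are placeholders."""
    if solar_w > 0 and battery_pct > 50:
        return 3600          # plenty of power: poll hourly
    if battery_pct > 20:
        return 6 * 3600      # conserving: poll every six hours
    return 24 * 3600         # nearly empty: come back tomorrow

# The value could be sent as e.g. a Retry-After header on the feed response:
print(feed_poll_interval_s(battery_pct=90, solar_w=8.0))  # 3600
print(feed_poll_interval_s(battery_pct=10, solar_w=0.0))  # 86400
```

Whether readers honour such hints is of course up to the client software, which is the weak point of any server-side scheme like this.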

Best,

Drew

Ikkaro

(5)

If you have sufficient and appropriate energy storage, you could expand the system with a Stirling engine.

kris de decker

(6)

Comments at hackernews: https://news.ycombinator.com/item?id=22184052

@ Will

Yes, No Tech Magazine will also move to the solar powered server.

@ Others

Thanks for the advice.

And a breakdown of power usage by process / function would indeed be interesting; we're going to look into that.

@ Drew

Targeted downtime is a good idea, but due to our international readership there is never really a moment in the day with a very low access rate.

Markus Padourek

(7)

All very interesting comments that have been added so far and most of them I think are worth considering.

One point I find interesting to consider more in depth is the location of the server, and how much energy is required for a request to get from the user to the server. Right now there is just one server in Barcelona serving requests from all over the world, but mostly from the US and Europe. Requests from the US therefore need more energy and carry a higher embodied energy, because the bits have to travel further and more infrastructure is needed to serve each request. If there were also a server in the US serving US users, the total energy use of those requests would immediately drop. Plus, it would then not matter so much if each server setup has some downtime at night, and one could possibly use a smaller battery / weaker solar panel.

Of course this energy usage is very difficult to calculate accurately, as it is hard to figure out exactly what infrastructure is serving the requests, and a second server would probably increase the embodied energy of the setup itself. The data would also have to be copied across the ocean at least once. All that said, I am curious whether it would pay off once a certain number of bits no longer has to cross the ocean.

Another point I find lacking in the lifecycle and internet energy usage calculations I have seen so far: they account for the embodied energy of the direct infrastructure (i.e. servers, network, and end-use devices), but not of the surrounding, indirect infrastructure needed to get everything designed, manufactured, and into the right place. I am talking about the energy needed to design and test a specific product, and the energy needed to build and maintain the factory where everything gets produced, including the trucks, tools, roads, etc. needed to build that factory. There is also an energy cost in delivering the items where they need to be, e.g. shipping the server from its place of manufacture to the buyer, or laying the cable from the US to Europe on the ocean floor. And of course there is further impact beyond energy usage: the space taken up by data centers, factories, roads, etc. generally cannot be used for CO2 sinks (e.g. trees) and contributes to reduced biodiversity in that location.

Of course the energy/CO2 cost of that wider infrastructure is spread between many, many more users, but I still find it is something we need to talk about more. Where this becomes most interesting, in my opinion, is if one could eliminate a whole production chain. If we reduce our plastic usage by 90%, we still need some of the underlying infrastructure; but if it is reduced by 100%, all the direct infrastructure involved in production and delivery is no longer needed, and the indirect infrastructure has one thing less depending on it and could potentially be reduced if replaced by more local products.

If we now take this to the internet: what if 100% of websites were hosted decentrally, in people's apartments? Then no datacenters would need to be built, and no extra infrastructure to serve data from those datacenters would need to be produced or transported. And since apartments generally already have the necessary infrastructure, the full embodied energy should be lower for such a setup. I would also argue that the overall distance data travels could be reduced, and it would be easier to shut off entire servers for local services in a planned manner.

Another interesting thing about datacenters: while they can generally utilise servers more efficiently, they are also driven to always keep enough spare capacity, so that if clients need extra capacity, or new clients want to use the service, they do not have to wait for new servers to be bought and installed.

A last thing I am curious about: if a server is turned off, in a planned way, for 4-5 hours each day, would that change its life expectancy? If so, it could make having local servers for local services even more interesting.

Mario Stoltz

(8)

Hello Kris,
great that you provide all of us with this data. Though I guess many people (e.g. IT managers of even smaller companies) will struggle with the question of how they might put this into practice in their own case, I think it is great to see, and to prove, that and how this can be done. This hands-on approach is one of the things I have profoundly appreciated about Low-tech Magazine over the years.

One comment: in the section "Energy use in the Network", you say "Energy use in the Network is directly related to the bit rate of the data traffic…". I think this statement requires a closer definition. Even if there were no active internet traffic, i.e. if all devices in the internet were simply switched on but idle, awaiting communication, they would still hum and blink and consume a surprising quantity of power.

Therefore, in reality, energy use in the network is the sum of a) a certain base load, which is approximately constant (or depends on how many devices are currently switched on), and b) a dynamic load which does scale, though not with nominal bit rate but with gross* data traffic per unit of time. This would be measured in bits per second: not the nominal "this is my max data rate" bits per second, but a value that measures how many bits are really transmitted. The internet is largely asynchronous and consumes less energy when there is less traffic.

*) gross, not net, because of course the communication overhead (packet headers, metadata, checksums etc) also counts in.
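Mario's two-part model amounts to E = P_base · t + e · B_gross. A sketch with entirely made-up coefficients, just to show the shape of the calculation:

```python
def network_energy_wh(base_power_w, hours, wh_per_gb, gross_gb):
    """Two-part network energy model: a constant base load (idle equipment)
    plus a dynamic part scaling with gross traffic. The coefficients passed
    in below are placeholders, not measured values."""
    return base_power_w * hours + wh_per_gb * gross_gb

# 5 W of always-on network gear over one day, plus 0.1 Wh per GB of
# gross traffic (headers, metadata, and checksums included) over 20 GB:
print(network_energy_wh(5, 24, 0.1, 20))  # 122.0 Wh
```

Note how, with small per-bit coefficients, the base load dominates: that is exactly Mario's point about idle equipment.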

Kind regards,
Mario.

Clinton Nolan

(9)

Very interesting analysis.

I think you should have two servers, one located in the US (preferably as far south as possible, and/or in a state with excellent weather, such as Arizona) and the other in Europe, since these two locations are where you say most of your readership is based. The rest of the world can incidentally be served by these two servers. After all, you only need continuous uptime because so many people worldwide are interested in your site, which means you should be able to find someone interested in hosting your site in the US.

Two servers in such different locations using 86.4Wh batteries and 10W solar panels would likely have more or less 100% uptime.
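As a rough check of that figure (the 2 W average draw below is an assumption for illustration, not a measured value):

```python
def autonomy_hours(battery_wh, load_w):
    """Hours a fully charged battery can carry the server without any sun."""
    return battery_wh / load_w

# An 86.4 Wh battery at a hypothetical 2 W average draw:
print(autonomy_hours(86.4, 2))  # 43.2 hours of darkness per full charge
```

With each site able to ride out well over a night of darkness, and their nights and weather largely uncorrelated, near-100% combined uptime seems plausible.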

The more I think about it, Arizona really would be an ideal location for a solar server. It's pretty far south, the weather is very clear, and it is far to the west, so it's even further from Europe than an eastern or midwestern US location would be.

@Amos
What you're proposing sounds like a solar CDN. https://en.wikipedia.org/wiki/Content_delivery_network

Speaking of which, one major factor that might not have been accounted for in this analysis is that the network includes CDNs (such as Cloudflare) that cache your website and serve up your content on your behalf, especially since it is static content (which is easy to cache; they can basically cache and serve up the whole website). In fact, it's very likely that most of the US traffic never reaches Barcelona, since traffic across the submarine data cables is relatively expensive, while a CDN can pick up one copy of the website and serve it to us in the US over and over again. The CDN is crucial to the efficiency of the network, but what is its energy situation like?

Fabian

(10)

Hi Kris,

Thank you for the detailed article! I was thinking about reducing the energy consumption of end-user devices. I often download longer articles as EPUBs (via the Firefox extension ePub Creator) and read them on an ebook reader. Have you thought about directly providing an EPUB version of your articles, perhaps even with a simple list/directory to browse on the ebook reader itself? I don't have exact numbers, but I would expect this to reduce energy consumption on the end-user side as well.

Fabian
