Can AI Clear the Net(Zero)?

By Daga Panas

Image credit: Daga Panas

Set the (not easy) challenge of wrestling with the BIG question: Can AI be net negative from a climate perspective? University of Edinburgh Data Scientist Dr Daga Panas shares her thoughts on some of the paradoxes of working with big data and powerful algorithms for planetary good. 

With Transformer models gobbling up preposterous amounts of energy to create fake humans;[1,2] with data centres syphoning off vast amounts of water to cool off heated online debates about... well, ironically enough, climate change;[3] and with AI raising both temperatures and eyebrows, the field of Data Science is becoming a bit of an environmental minefield to work in.

Frankly, when the question is raised – Can AI be Net Negative? (i.e. reduce the carbon footprint, rather than add to it) – I suck in my breath through my teeth. While I don’t think I am nearly qualified to fully answer this – in fact, no offence, but I don’t think anyone really is – what I can offer is a medley of positive examples, some ensuing paradoxes and a question of my own in return.

Our practice in the Data Science Unit at the University of Edinburgh happens to touch on a lot of societal and planetary issues, from the micro-scale of new materials chemistry to the macro-level use of Earth Observation to monitor and manage the environment. In many cases we also use both big data and powerful AI algorithms, and we wouldn’t be doing so without some hope of being able to balance the books in favour of the environment, rather than merely our curiosity. Let me explain... starting with the big (bad?) data.

Big (bad?) data 

We live in an unprecedented moment in history, having turned the planet into a version of the Orwellian. Putting aside the undeniably troubling political questions, I want to focus on the practical aspects of such mass monitoring. 

Take satellites, for example: a swarm of busy little robots which photograph the surface and atmosphere of the Earth daily, and pretty much from pole to pole. This offers the opportunity to observe processes at scales and resolutions that were previously unthinkable. There is the climate, of course, but also habitats, mass movements of animals, human activities and their impacts – the list goes on. In our direct practice, we use satellite data to monitor glaciers (and not just any glaciers but the mother of all glaciers, the Antarctic Ice Sheet), classify types of vegetation in the Amazon (used to detect deforestation),[4] estimate population density[5] and calculate travel times needed to reach the nearest health clinic (important considerations for policy-making, particularly in under-developed countries).[6] None of these tasks would be possible without the data, nor without machine learning tools. The number of images we’re talking about – all of which need to be looked at and processed quickly – is orders of magnitude greater than even the longest holiday pictures slideshow you have ever had to sit through. And it isn’t merely about the patience needed, or the tedium of the task; it’s also that humans are just not fast enough. That’s where AI is so incredibly useful: it can approximate some of our processing capabilities (which I would say is still a far cry from intelligence) and then perform them on our behalf at scale and at speed.

Image credit: NASA's Earth Observatory, CC BY 2.0 (https://creativecommons.org/licenses/by/2.0), via Wikimedia Commons

Problem and solution? 

And here comes the paradox: the speed and scale mentioned above require extensive data storage and recall, and significant processing capacity, both of which are energy intensive. So it seems like I’m saying that without AI we wouldn’t be able to monitor things like the burden of AI on the environment. Does this then mean that without AI we would have neither the solution nor the problem?

Of course, it is not as simple as that, and it’s probably patently obvious to most that – AI or not – deforestation would still be rampant, glaciers would still be melting and we’d still be in this mess. These systemic issues are long ingrained in our society; AI is neither their sole cause nor their salvation, but it may help us better understand, and start addressing, such deeply rooted problems. It is critical to note that this is all very anthropocentric – we, the humans, build and use AI, and it is our behavioural, social, economic and political change that can help shape a better planetary future.

Inventive creativity 

To explain this a bit, let’s zoom in now on the micro-level. In our practice we also face problems from the other end of the spectrum, where data is not necessarily abundant. The thing about the powerful AI algorithms discussed so far is that they work because they have seen billions of examples and have gigabytes of memory available. In other words, such an algorithm acts a bit like a search engine,[7] just not at the level of keywords, but of data patterns.

This works well for satellite data or cat pictures, of which there is no shortage. But for chemistry, for example, where we don’t have an endless supply of different materials to show as examples, the issue is rather different, and we have to make do with very little data. We do, however, have a pool of hard-won chemistry knowledge: rules and laws that we know always apply (say the conservation of energy, which underlies the sad impossibility of eating the cake while still having it). The problem is that unlike human intelligence, which can seamlessly combine concrete data and abstract rules into sometimes very ‘out of the box’ thinking, computer ‘intelligence’ has a hard time with inventive creativity. The difficulty is that we either have to show enough examples to capture the general rule, or we have to explicitly code each rule – but then we run into the problem of which rule to use when. To solve that problem we in turn need more examples, or more rules, and the recursion swiftly blows up.

Runaway data gluttony 

This combinatorial explosion – or more plainly, runaway data gluttony – is in fact already starting to cause problems. We’re approaching the point where we’ve fed all we can into Large Language Models (LLMs), as if a diet of everything that anyone has ever written could birth us the next Shakespeare or Dickinson.[8] Well, it hasn’t and it can’t, because all the AI can do is indiscriminately remix – and not in the creative way that musicians do it; it’s a cheap DJ that replays greatest hits to a drunken crowd. If you spend any time perusing social media, you may already have noticed an onslaught of fake art that is automatically generated, as often as not saccharinely polished, and quickly elicits a nauseating sense of déjà vu.[9] What happens when, all examples exhausted, the LLMs start cannibalising their own productions?[10]

If you think that AI is already mindlessly regurgitating shallow imitations of art and amplifying human-generated biases and falsehoods, consider how devoid of any value the internet echo chamber will become when it loops through the tape again, resampled and caricatured.

A bad workman blames their tools 

What I am trying to convey is that, try as we might, we humans often fail at approximating things or thinking in very large or very small numbers; we are, after all, tuned to the environment we have evolved in. What do we humans do when we need to augment our limitations? We invent tools. AI on the other hand can process things at planetary scales but is on its own very (very) dumb. It is simply another tool, a complex and pretty amazing one, but a tool nonetheless. 

We humans have a penchant for anthropomorphising, and with the advent of LLMs this has become all too easy to do, but we should not be fooled – nor tempted to blame our digital helper. Like any tool it can be designed and wielded for good and for ill – by its owner. So perhaps the question should not be whether AI can ever become Net Negative, but instead, can we humans ever be? 

References

1. Strubell, E., et al. (2020) ‘Energy and Policy Considerations for Modern Deep Learning Research’, Proceedings of the AAAI Conference on Artificial Intelligence, 34(9), pp. 13693–13701. 

2. Samsi, S., et al. (2023) ‘From Words to Watts: Benchmarking the Energy Costs of Large Language Model Inference’, 2023 IEEE High Performance Extreme Computing Conference (HPEC). IEEE, 2023.

3. Mytton, D. (2021) ‘Data Centre Water Consumption’, npj Clean Water, 4(1).

4. Hirschmugl, M., et al. (2017) ‘Methods for Mapping Forest Disturbance and Degradation from Optical Earth Observation Data: A Review’, Current Forestry Reports, 3, pp. 32–45.

5. Neal, I., et al. (2022) ‘Census-Independent Population Estimation using Representation Learning’, Scientific Reports, 12(1), p. 5185.

6. Watmough, G.R., et al. (2022) ‘Using Open-Source Data to Construct 20 Metre Resolution Maps of Children’s Travel Time to the Nearest Health Facility’, Scientific Data, 9(1), p. 217.

7. Marcus, G. (2023) ‘LLMs Don’t do Formal Reasoning – and Never Will’, Substack: Gary Marcus, 17 November. (Also see arXiv:2410.05229v1)

8. Davis, E. (2024) ‘ChatGPT’s Poetry is Incompetent and Banal: A Discussion of (Porter and Machery, 2024)’, Department of Computer Science, New York University, pp. 1–10.

9. Marcus, G. (2023) ‘The Imminent Enshittification of LLMs’, Substack: Gary Marcus, 9 May.

10. Guo, Y., et al. (2023) ‘The Curious Decline of Linguistic Diversity: Training Language Models on Synthetic Text’, arXiv preprint, pp. 1–16.

Links

  1. ‘Why AI is a Disaster for the Climate’

  2. ‘Generative AI’s Environmental Costs are Soaring — and Mostly Secret’

  3. ‘AI’s Craving for Data is Matched only by a Runaway Thirst for Water and Energy’

  4. More on The University of Edinburgh’s Data Science Unit

Author bio

Dr Daga Panas is a researcher at the Data Science Unit for Science, Health, People, and Environment at The University of Edinburgh. She studied Computational Neuroscience and Physics and describes herself as an all-round geek, when pressed to write in the third person. Daga has a varied background, including, in no particular order: deploying machine learning in a business environment, guiding punting tours on the river Cam, and research on the benefits of naps (often conducted personally).


Cite as: Daga Panas (2024). 'Can AI Clear the Net(Zero)?' The New Real Magazine, Edition Two. pp 58-62. www.newreal.cc/magazine-edition-two/can-ai-clear-the-netzero
