The New Real Observatory: Art and AI in Conversation with the Environment

By Matjaz Vidmar & Drew Hemment

An artist stares intently at a computer screen. She's not looking at environmental data in the usual way — charts, graphs or satellite imagery. Instead, she's using an AI system to explore how machines and humans might together make sense of our changing planet. As she adjusts parameters on the screen, the system generates new interpretations of local greenery, revealing something profound about how both humans and machines perceive nature. This is The New Real Observatory, where artists and scientists are working together to create new ways of seeing our changing world.

The vision: a new way to explore AI and reflect on its environmental costs

The story of The New Real Observatory begins with a fundamental challenge: how do we bridge the growing disconnect between our sophisticated digital lives, environmental data and our embodied lived experience? We live in an age of unprecedented environmental monitoring, where satellites track global temperatures and sensors measure local air quality. Yet this flood of data often feels removed from our daily reality – abstract numbers that fail to capture the intimate ways we experience the world they describe.

The challenge we face is not just about collecting more data or making better models; it is about finding new ways to make sense of that data, to make it meaningful and actionable in people's lives. We believed artists could help us discover these new ways of seeing.

This led to an ambitious vision: create a platform where artists could work directly with AI systems to explore environmental data in novel ways. But this wouldn't be just another tool for visualising AI models. Instead, it would be a space for genuine collaboration between human and machine intelligence, where artistic practice could drive innovation in AI development.

The approach, which we call 'Experiential AI', emerged from understanding that meaning comes through the active entanglement of human interpretation and machine computation. Rather than treating AI systems as either pattern recognition engines or simulations of human intelligence, we wanted to explore how new forms of understanding could emerge through their interaction.

Traditional approaches to AI development often prioritise statistical accuracy, but we saw an opportunity to do something different – to use artistic practice to drive technical innovation, particularly in how we create interpretable dimensions and paths through AI systems.

In addition to developing tools to bridge the gap in meaning and interpretation of data models, we wanted to situate these new tools within a wider system of digital infrastructure – one with a large carbon footprint. We wanted to bring awareness of the climate crisis directly into conversation with the exploration of AI models, thus co-creating understanding of environmental futures on a planetary scale.

This vision would evolve significantly over the next four years, shaped by collaboration between scientists, artists, engineers and humanists. 

The journey: the evolution of The New Real Observatory

The development of The New Real Observatory unfolded across two distinct phases between 2020 and 2024, each marking a significant evolution in how we approached the challenge of connecting human experience, AI and critical environmental reflection.

The journey began during the height of the COVID-19 pandemic, through a collaboration with the Edinburgh Science Festival. The initial concept was to create a digital experience that would help people bridge the disconnect between humans and their environment, powered by AI. The result was ‘AWEN – A Walk Encountering Nature’, a mobile application that guided users on self-directed walks, using their mobile devices not as barriers to nature but as tools for deeper engagement.

We wanted to prompt people to use their senses differently. The application asked users to notice and appreciate the layering of data and embodied experience. This was the guiding principle behind the conceptual development of The New Real Observatory. 

In particular, in the co-creation process involving artists, scientists and engineers, one common question and mission statement started to take shape: how can we enable artists to use generative AI to explore data models and the modelling process itself, while contextualising that exploration through the lens of climate change and environmental crisis?

Phase one: exploring how to see (2021–2022)

The realisation of this concept pre-dated the public release of cloud-based image generators such as DALL-E. This first phase, in partnership with The Alan Turing Institute, focused on implementing a generic yet controllable generative AI platform, based on the transferGAN algorithm, that allowed users to fine-tune existing image data models with their own curated input and explore the new conceptual dimensions they created.
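
To make the fine-tuning step concrete, here is a minimal sketch of transfer-style GAN fine-tuning: a generator and discriminator that would, in practice, load pretrained weights are trained for a few more steps on a small curated image set, with a low learning rate so the model adapts gently toward the artist's imagery. The network sizes, data and training loop are illustrative stand-ins, not the platform's actual transferGAN implementation.

```python
# Sketch: adapting a pretrained GAN to a small curated set (illustrative only)
import torch
import torch.nn as nn

LATENT, IMG = 64, 32 * 32 * 3  # toy sizes for illustration

G = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

# In a real run, both networks would first load pretrained weights, e.g.:
# G.load_state_dict(torch.load("pretrained_generator.pt"))  # hypothetical file

curated = torch.rand(48, IMG) * 2 - 1              # stand-in for ~50 curated images
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)  # low LR: gentle adaptation
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):                            # short fine-tuning run
    z = torch.randn(16, LATENT)
    fake = G(z)
    real = curated[torch.randint(0, len(curated), (16,))]

    # Discriminator: tell curated images from generated ones
    d_loss = bce(D(real), torch.ones(16, 1)) + bce(D(fake.detach()), torch.zeros(16, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: fool the discriminator, drifting toward the curated imagery
    g_loss = bce(D(fake), torch.ones(16, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```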

The innovation at the core of this was the composite SLIDER tool – Shaping Latent-spaces for Interactive Dimensional Exploration and Rendering. It allowed users to directly explore and interpret the AI's 'understanding' of the image data, while at the same time reflecting on their own understanding of the conceptual thinking behind their curation of input images.
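
A minimal sketch of the idea behind a SLIDER-style dimension, assuming latent codes are already available for two curated groups of images: the interpretable axis is simply the difference of the two group means, and the slider position moves a seed code along it. All numbers, names and the traversal scale are hypothetical.

```python
# Sketch: a curated latent-space dimension and a slider traversal along it
import numpy as np

rng = np.random.default_rng(0)
latents_a = rng.normal(size=(20, 64))   # codes for images curated as one pole
latents_b = rng.normal(size=(20, 64))   # codes for images curated as the other

# The interpretable axis is the difference of the two group means
axis = latents_b.mean(axis=0) - latents_a.mean(axis=0)
axis /= np.linalg.norm(axis)

def slide(z, t):
    """Move latent code z along the curated axis; t in [-1, 1] is the slider."""
    return z + t * 3.0 * axis           # 3.0: hypothetical traversal scale

z0 = rng.normal(size=64)
frames = [slide(z0, t) for t in np.linspace(-1, 1, 9)]  # each fed to the generator
```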

Furthermore, the exploration was prompted by the direct integration of generic yet localised environmental modelling data alongside the generative AI tools. This data can be queried in a way that demonstrates the effects of climate change: by juxtaposing past, present and future values, their difference becomes a proxy for how our own perception of the environment will shift under its impact.
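
As a toy illustration of this proxy, the snippet below scales a slider traversal by the gap between a past and a projected future climate value. The figures and the scaling function are invented for the example, not taken from the platform.

```python
# Sketch: a climate "delta" driving the scale of latent exploration
past_temp_c = 8.4        # e.g. local mean air temperature, 1990s (invented)
future_temp_c = 11.1     # e.g. projected for 2080 under a chosen scenario (invented)

delta = future_temp_c - past_temp_c

def climate_scaled_position(t, delta, sensitivity=0.25):
    """Map a slider position t in [-1, 1] to a latent traversal distance,
    amplified by how much the local climate is projected to change."""
    return t * (1.0 + sensitivity * abs(delta))

print(climate_scaled_position(0.5, delta))  # larger delta -> larger shift
```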

In the arts commission at the heart of this development, we invited artists and AI researchers to investigate a provocative question: how does human cognition differ from artificial cognition when it comes to understanding environmental change? In addressing this question, artistic practice pushed beyond traditional statistical modelling of environmental data, exploring instead how different forms of intelligence conceptualise changes in colours, textures and narratives that are in conversation with data on environmental futures.

We find ourselves negotiating with AI, not just using it as a tool, but exploring how it interprets the concepts embedded in the data and reimagines narratives. 

This experimentation led to three critical new works: Inés Cámara Leret’s ‘The Overlay’, Keziah MacNeill’s ‘Photographic Cues’ and Lex Fefegha’s ‘Thames Path 2040’. These works speak to the profound ways in which humans and AI can collaborate to analyse the environment and co-create visions of environmental futures, and to how embodied analogue experience and its digital representation merge.

Phase two: learning the words (2023–2024)

The second phase, supported by the Scottish AI Alliance, added language-processing capabilities and consolidated these experiments into a platform that supports 'synergetic experiences' – where a number of human and AI conceptual modalities can interact to produce new perspectives on climate-driven futures.

Deploying a Word2Vec training algorithm and working with small, carefully curated corpora of text as training data, rather than the massive internet-scraped datasets typical in AI development, we discovered that intimate, focused data can sometimes lead to more meaningful results than broad, general training. In particular, this approach allows users to have more precise control and detailed interpretation, enabling them to align the relationships found within the AI models with personal experience in ways that create new forms of understanding.
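
A minimal sketch of this small-corpus approach, using the gensim library's Word2Vec implementation as an assumed stand-in (we are not documenting the platform's exact pipeline). A tiny, intimate corpus with many training epochs takes the place of the curated texts.

```python
# Sketch: training Word2Vec on a small, curated corpus (illustrative texts)
from gensim.models import Word2Vec

corpus = [
    ["the", "sailor", "reads", "the", "sea"],
    ["the", "sea", "holds", "the", "storm"],
    ["the", "storm", "erases", "the", "letter"],
    ["the", "letter", "remembers", "the", "sailor"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # small vectors suit a small vocabulary
    window=3,
    min_count=1,      # keep every word: the corpus is intimate, not massive
    epochs=200,       # many passes compensate for little data
    seed=1,
)

print(model.wv.most_similar("sailor", topn=3))
```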

Balancing the desire for sophisticated AI capabilities with the need for interpretable, controllable systems, we again turned to an upgraded version of the SLIDER tool to develop a new approach to latent space navigation. Users described human conceptual dimensions with strings of words, which were then challenged by the AI model’s reordering and associative word generation; this again led both to a better understanding of the system and to reflection on users' own (pre)conceptions.
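
The sketch below illustrates how such a word-defined dimension might work: two anchor words chosen by the user define an axis between their vectors, and the model 'reorders' the vocabulary by projecting every word onto that axis, an ordering the user can then compare with their own intuition. The vectors here are random stand-ins; in practice they would come from a trained model like the one above.

```python
# Sketch: a word-defined conceptual axis and the model's reordering of it
import numpy as np

rng = np.random.default_rng(2)
vocab = ["sailor", "sea", "storm", "letter", "memory", "ocean"]
vectors = {w: rng.normal(size=50) for w in vocab}   # stand-in embeddings

axis = vectors["letter"] - vectors["storm"]         # user-chosen conceptual poles
axis /= np.linalg.norm(axis)

def position(word):
    v = vectors[word]
    return float(np.dot(v / np.linalg.norm(v), axis))

# The model's ordering of the concept space, to be set against the user's own
for w in sorted(vocab, key=position):
    print(f"{position(w):+.2f}  {w}")
```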

Visualising this word-cloud exploration and, as in the first phase, using localised environmental parameters to scale the exploratory process, artists produced a number of critical conceptual works that spoke to the theme of ‘Uncanny Machines’ – an art commission we framed around this new language-processing capability.

Image credit: The New Real Observatory Platform

Artists Kasia Molga, Alice Bucknell, Linnea Langfjord Kristensen and Kevin Walker, Sarah Ciston, Johann Diedrick and Amina Abbas-Nazari all explored different conceptual issues arising from engaging with small text-based datasets, from addressing historical injustices in newspapers to exploring AI’s role in modelling extreme weather events. A full commission – enabling the artist to explore their concept further and to realise an artwork and a workshop that would bring the concept fully to life – was awarded to Molga’s project ‘How to find the Soul of a Sailor’, which explores the interplay between textual remnants of her father’s identity and the unpredictable future of the ocean due to climate change.

The New Real Observatory isn't about making the machine see like a human, or humans seeing like a machine. It's about discovering entirely new ways of seeing.

By the end of 2024, The New Real Observatory had thus evolved into a platform for human-AI collaboration in exploring generative data models, contextualised by environmental understanding. But more importantly, it demonstrated a new way of developing AI systems – one that values interpretation and meaning-making as much as technical capability.

The platform: shaping the human-AI understanding of planetary futures

At first glance, The New Real Observatory platform might seem deceptively simple: a web interface where users can explore climate data and train AI models. But beneath this surface lies a radical rethinking of how humans and AI systems can work together to develop new understanding of environmental change.

We wanted to give users a level of control that they do not get in current AI tools: not just using pre-trained models, but actually shaping how the AI understands and interprets data through an environmental lens. Furthermore, we wanted to ensure that users retain ownership of both their input data and the models they generate with the AI, allowing for a more intimate and personal exploratory environment as well as protecting users' data rights.

This goal led to several innovative features that distinguish The New Real Observatory from conventional AI platforms.

Training the AI's gaze

What makes The New Real Observatory unique is how it positions users in relation to AI. Rather than treating AI as a black box that generates outputs from inputs, the platform creates a space for genuine co-creation. Artists can explore how the AI system understands their input data, probe its interpretations and use this understanding to shape their creative process.

Perhaps the platform's most innovative feature is the SLIDER. This tool allows artists to define their own conceptual dimensions within the AI's latent space – the internal representation where the AI system organises its understanding. Artists can upload small, carefully curated sets of images or text that represent the spectrum of concepts they want to explore.

Small data models

We noted that working with limited, thoughtfully selected data could actually lead to more meaningful results than massive training sets, as users can still recognise the input data, even though there is significant AI extrapolation. This approach turned conventional AI development on its head; instead of trying to eliminate bias through bigger datasets, the platform embraces the specific perspectives that users, in our case artists, bring to their exploration.

This is a promising bridging tool for building transparency and accountability into these data models, as humans can and do evolve conceptual reasoning together with the AI, rather than having to take its methodology on trust. The small size of these models also enables clearer ownership of both the input data and the generated AI model, allowing for a more ethical data practice and a more intimate co-creation environment.

The co-creation space

In addition to the exploratory negotiation with the platform, its technical constraints became creative opportunities. For instance, the system's limited image resolution (128x128 pixels) initially seemed like a drawback. But artists found ways to work with this limitation, using it to frame particular visual narratives or explore the boundaries between human and machine perception.

The constraints make us think more carefully about what we want the AI to 'understand'. It is not about processing more data, but about finding the right data to express what we want to explore.

Keziah MacNeill, one of the artists who used the platform, responded to these constraints by pairing AI-generated images with pinhole photography. Thus, the low resolution became part of the work's meaning as it spoke to questions about how machines and humans see differently.

Similarly, when artist Inés Cámara Leret used the platform to explore the boundary between natural and artificial greenery, she found that what initially appeared as technical limitations – like the AI's tendency toward certain colour patterns – could reveal deeper insights about how both humans and machines categorise nature.

The climate lens

Integral to the platform are three environmental parameters drawn from the Copernicus Climate Data Store: air temperature, precipitation and wind speed. Artists can explore these parameters for any location on Earth, investigating both current conditions and future projections. Rather than presenting this data as mere numbers, the platform allows artists to use it as a dimension for AI exploration.
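
For readers who want to try this themselves, the Climate Data Store exposes such parameters through its cdsapi Python client. The request below is a plausible example for near-surface air temperature around Edinburgh, not the platform's own integration; the exact dataset name and request keys may differ between CDS versions.

```python
# Sketch: retrieving one climate parameter from the Copernicus Climate Data Store
import cdsapi

client = cdsapi.Client()  # reads credentials from ~/.cdsapirc

client.retrieve(
    "reanalysis-era5-single-levels",
    {
        "product_type": "reanalysis",
        "variable": "2m_temperature",   # air temperature near the surface
        "year": "2023",
        "month": "07",
        "day": "01",
        "time": "12:00",
        # Bounding box around Edinburgh: north, west, south, east
        "area": [56.0, -3.5, 55.8, -3.0],
        "format": "netcdf",
    },
    "edinburgh_temperature.nc",
)
```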

The vision was to go beyond the use of climate data as evidence of the environmental crisis, instead using it to have a conversation about the future, with both the climate models and the AI. The system allows users to navigate between different climate scenarios – optimistic, realistic, and pessimistic – creating a space to integrate planetary futures directly into the process of experimenting with human-AI sense making.

The platform continues to evolve through use. Each user who works with it discovers new possibilities and pushes its capabilities in unexpected directions. Thus, The New Real Observatory is not just a tool for making art, but rather it is a laboratory for exploring how human and machine intelligence can work together to understand our environment and its change.

The analysis: towards new modes of meaning

Through the development and deployment of The New Real Observatory, we've discovered something profound about the relationship between art, AI and environmental understanding. These insights extend far beyond our initial goals, suggesting new approaches to both AI development and environmental engagement.

Traditional AI systems often operate as black boxes – complex but opaque systems that process inputs into outputs. Through The New Real Observatory, we found that artists could help create more interpretable AI systems, not by simplifying the technology, but by making its complexity meaningful.

When we let users define dimensions within the AI's latent space, we are not just making the system more user-friendly; we are discovering new ways to make AI systems that are inherently more interpretable and meaningful to human users.

Contrary to conventional wisdom in AI development, we demonstrated that technical constraints could become sources of innovation. Limited computational resources, small datasets and bounded latent spaces – typically seen as limitations – became creative opportunities. This insight suggests an alternative to the current focus on ever-larger AI models and datasets.

Through the artworks created with The New Real Observatory, we've glimpsed new possibilities for environmental understanding. Inés Cámara Leret's exploration of artificial nature revealed how AI might help us recognise our constructed relationships with the environment. Keziah MacNeill's speculative future showed how machine and human perception might merge in our understanding of environmental change. Lex Fefegha combined environmental and visual data to expand awareness of emerging challenges. Kasia Molga created a transcendent dialogue with her sailor father about the future of the ocean.

These works don't just represent or analyse environmental data – they create new ways of seeing and understanding our relationship with the environment. As one viewer remarked during an exhibition: 'It made me think about climate change not just as numbers and projections, but as something that changes how we see and understand our world.'

The impact: AI’s role in defining planetary futures

Overall, this programme of work points toward future directions for both AI development and environmental engagement. We've shown that artistic practice can drive technical innovation; that small, carefully curated datasets can be as valuable as massive ones; and that human-AI collaboration can create new forms of environmental understanding.

But perhaps most importantly, we've discovered that the future of AI doesn't have to be about replacing human intelligence or automating human tasks. Instead, it can be about creating new forms of understanding that emerge through collaboration between human and machine ensembles, and through exploring, interrogating and merging our ways of seeing and knowing.

As we face the environmental challenges ahead, these new forms of understanding – these new ways of seeing our changing planet – may be exactly what we need. The New Real Observatory isn't just a platform for creating art with AI; it's a prototype for how we might develop more meaningful relationships with both technology and our environment.

We started by trying to bridge the disconnect between environmental data and human experience, but we ended up discovering new ways of seeing that might help us navigate our planetary future.

Author bios

Matjaz Vidmar is the Deputy Director of the Institute for the Study of Science, Technology and Innovation (ISSTI) and of The New Real research programme and creative community. He is a Lecturer in Engineering Management, based in Mechanical Engineering and the Institute for Materials and Processes at the University of Edinburgh.

Drew Hemment is Professor of Data Arts and Society at the University of Edinburgh and Theme Lead for Arts, Humanities and Social Sciences at The Alan Turing Institute. He is Director of The New Real and Director of Festival Futures at Edinburgh Futures Institute.

Cite as: Matjaz Vidmar and Drew Hemment (2025). ‘The New Real Observatory: Art and AI in Conversation with the Environment.’ The New Real Magazine, Edition Two. pp 10-18. www.newreal.cc/magazine-edition-two/feature
