Exploring
Human-AI Ensembles
The Edinburgh Futures Institute’s (EFI) Building Near Futures course proudly presents the 2025 course Showcase.
The students from the Building Near Futures course at EFI present their group work: future prototypes exploring how humans and AI can collaborate to tackle pressing societal and environmental challenges. Short-form videos guide you through near-future scenarios and inspire conversation on AI co-creation. Will these ideas spark real-world change?
You can read more about the challenge theme and the student groups’ curatorial statements below.
Challenge Theme: Human-AI Ensembles
There is, today, both a need and an opportunity to do AI differently. The most advanced models are built on social data and used across incalculable social settings. The technology is changing, and we are changing too; the results may be astonishing, in ways we cannot yet comprehend.
Increasingly, today's challenges are solved through collaborations between human and AI agents. Yet, in approaching these challenges, we tend to draw on a limited sociological imagination of potential interactive configurations: one human, one bot. This limited view suggests that humans and AIs are interchangeable—that when we add an AI to a system, we must remove a person. But reality is more complex. In the emerging AI era, humans and AIs will work together in rich networks of collaboration, taking on different roles and arrangements. How can we design AI-based sociotechnical systems that get the best from both—enhancing, rather than replacing, human activity?
Sociotechnical Ensembles explores this question by looking at the many ways humans and AIs can work together. Like musicians in an orchestra, each brings unique strengths to the group. By studying different arrangements of human-AI collaboration, we can better understand how to bring out the best in both human and artificial intelligence. As our AI-based sociotechnical systems continue to develop, human and AI collaboration will be woven into a complex network of interaction, interfaces, and interlocution.
It is crucial that we develop understandings of how different arrangements within this web affect the unique contributions that can be made by both humans and AI—not only as individual contributors, but as a comprehensive ensemble.
The Groups’ curatorial statements below offer a response to, and reflection on, this emerging challenge theme…
Groups’ Curatorial Statements
-
Human-AI collaboration transforms both humans and AI, offering benefits but introducing risks. Our theme considers the growing risk of overreliance on Human-AI Ensembles (HAEs), supported by a tendency to anthropomorphise Large Language Models (LLMs).
This unwarranted trust can lead to distorted decision-making and flawed outcomes. Although HAEs can help synthesise complex data and potentially amplify stakeholder voices, they can also misrepresent and marginalise underrepresented groups while propagating existing societal patterns. For instance, HAEs often use LLMs that perpetuate biases, hallucinations, manipulations, and misalignment. Moreover, even well-intentioned HAEs operating independently of LLMs can still lead to unethical outcomes. Given these challenges, we must carefully assess the need for, transparency of, limitations of, and dependence on HAEs to help humans make responsible decisions.
2030 Scenario
In 2030, 30% of land and sea have been protected for nature, as set out at the UN Biodiversity Summit (COP15). HAEs have been crucial in achieving the 30% goal, illustrating the complexities of balancing biodiversity conservation with housing development. For example, the government used satellite drones to extensively monitor biodiversity while local authorities implemented highly configurable land use scenario planning that would have been impossible for humans alone.
This prioritisation has hindered progress in meeting housing demand. Community members are voicing concerns over stricter development controls aimed at achieving nature protection targets, putting achievement of further nature restoration at risk. Moreover, tensions are rising between different user groups focused on independent goals.
2030 Premise and Prototype
The challenge in 2030 is how to bridge these divides, balancing human land use requirements with nature protection. Finite land and growing demands for housing and food threaten progress in safeguarding nature. Some land may need repurposing for urgent housing needs, but how can we ensure the process addresses broader priorities?
The SymbioCity Lab 2030 game prototype illustrates how multiple, diverse actors, from urban planners to conservationists, must make decisions collaboratively while negotiating conflicting priorities. Through an immersive role-playing game, players balance human needs with environmental protection for positive shared outcomes.
Going beyond traditional board games, the 2030 game incorporates an “advanced” HAE chatbot to present diverse stakeholder perspectives. This chatbot generates infinite, often unheard voices, fostering a more inclusive exploration of potential futures.
Challenges
Nonetheless, the prototype reveals profound ethical dilemmas in utilising HAEs. Biases and hallucinations in LLM-based chatbots distort marginalised voices and obscure inaccuracies, reinforcing power imbalances. AI's simplification of complex human experiences into datapoints removes essential context and nuance, while reliance on historical data and regression to the mean weakens the authenticity and legitimacy of outputs and stakeholder representations. Moreover, proposals such as deploying drones for biodiversity monitoring raise significant ethical concerns related to surveillance and privacy. Additionally, the energy-intensive nature of AI operations contributes to substantial environmental challenges.
By illustrating such issues, the prototype fosters discussion about the unintended consequences of AI-driven solutions and emphasises the need for transparent, ethical practices to ensure AI supports economic, environmental and social sustainability.
Ultimately, it underscores the complexities of responsible decision-making when employing HAEs in planning.
-
We interpret the challenge as preparing for recurring climate disasters by building a response that enhances resilience through AI-driven immediate support and long-distance human aid. We propose RapidLink, initially conceived as a proactive alternative to the LA fires. With climate disasters becoming more frequent, similar events will likely repeat within the next five years.
We examined how the disaster unfolded, its impact on residents, and ongoing recovery challenges. We sought to answer questions such as: How could the response have been improved? How could lives have been saved?
How could it all have been mitigated? Where are the current system shortfalls? By analyzing the different values, visions, and needs of various stakeholders (such as communities, authorities, CSOs, and the public at large), we arrived at the possibility of a human-AI ensemble that would assist victims via digital means, until on-the-ground help arrives.
Initially, we looked at the most pressing needs of civilians in such a time: safety, food, and shelter. We questioned what obstacles may arise to challenge the fulfilment of those needs. Then we explored the relationships between different stakeholders - for example, how NGOs and marginalized communities may come together in grassroots movements; how public-private partnerships may be leveraged; how would different groups benefit from RapidLink; and at what cost would those benefits come?
Examining marginalized communities, we mapped potential disadvantages, dissenting objections to RapidLink, and their reasons. We found that global implementation could face friction due to varying cultural and ethical norms, particularly around data privacy.
Our desired future is based on an improved policy framework and a comprehensive, long-distance disaster response and management system, composed of a human-AI ensemble: RapidLink.
We envision RapidLink’s AI covering diverse users, supporting displaced victims and refugees across the following areas: personalized education adapted to geography and mental wellbeing; built environment analysis using satellite imaging for site selection, debris material recognition, and shelter identification; healthcare via telemedicine and symptom detection; and sustainability through enhanced carbon footprint offsetting.
Transitions needed to arrive at such a world would require policy changes and improved technologies. In 5 years, policymakers would be more open to adopting AI and frameworks mitigating allocative and distributive harms. This is crucial for telemedicine, addressing data ethics and privacy. Regarding education, we anticipate major shifts with improved digital skills that will enable students and teachers to benefit from online learning without disruption. Since RapidLink is accessed as an app, we envision its usage being enabled by equitable access to the internet and technological devices.
AI advancement would integrate mental wellbeing into personalized learning, enhance image and material detection for shelter solutions, and expand medical knowledge databases. Similar development of best-practice databases for detecting, preventing, and resolving concerns among people in vulnerable contexts would also improve the social protection and sustainability practices of affected individuals and communities.
Our prototype is presented as RapidLink’s website. It showcases not only the voice of the founders, but also the voices of the public and press, both supportive and critical, regarding RapidLink’s establishment and development.
-
Our project, TruthLink Mindmaze (TM), confronts the escalating issue of disinformation projected into 2030, exploring how Human-AI collaboration can offer new approaches. Positioned within a future fractured by information warfare fueled by AI-enhanced algorithmic polarization, our work critically examines how collaborative human-AI approaches might offer an experiential antidote for the misinformation epidemic. Our group’s core assumption is that active, experiential learning is essential for resilience against misinformation, alongside skepticism towards unchecked corporate control of AI.
Set in a near-future context, where misinformation is rampant and has outpaced static fact-checking solutions, actively eroding shared notions of objective reality, TM leverages virtual reality (VR) escape rooms to inoculate communities against viral, targeted misinformation campaigns.
The core premise of our scenario is that while misinformation persists, TM challenges traditional notions of knowledge dissemination. Instead of direct fact verification, which we recognize risks imposing authoritative narratives, it encourages participants, through a gamified experience, to engage actively and critically with information, emphasizing that discerning truth is as vital as truth itself. These VR experiences are designed to appeal to children and young people, with the tech becoming a widely adopted reasoning tool. The use cases for this tool span the fields of education, medicine, business, and more, as a treatment and possible antidote to viral misinformation. Our goal is to explicitly challenge existing social structures that passively accept authoritative narratives or resist external interventions.
TM proposes a sociotechnical ensemble where psychologists and social scientists leverage their human expertise and work alongside an AI's ability to provide adaptive, procedural scenario generation to create immersive learning environments.
The AI dynamically adapts each VR scenario to individuals to create personalized learning pathways. Experts, in turn, monitor ethical boundaries and emotional impact, ensuring the AI-driven personalization is constructive and does not manipulate people or cause undue emotional distress or harm. This synergy is the core innovation: it acknowledges the limitations and biases inherent in both humans and AI, aligning their strengths to cultivate the critical reasoning and emotional regulation skills crucial for navigating misinformation.
Our prototype, a satirical commercial for TM, dramatizes techno-solutionist narratives. Presented in a sleek futuristic style with a hint of dark humor, it acts as a provocation. It showcases the allure of ‘effortlessly’ navigating truth, while subtly hinting at the underlying complexities and potential ethical pitfalls. Our explicit intent is to provoke critical reflection rather than present a definitive solution. The satirical tone is intentional, designed to encourage critical reflection on techno-solutionist narratives and on the potential for even well-intentioned technologies to be co-opted or misused, especially by big technology corporations.
Our TM commercial prototype challenges dominant narratives of simplistic faith in AI as a panacea. We felt that framing the project as a satirical commercial for a fictional NeuroLink product would serve to expose the potential pitfalls of human-AI ensembles designed for social engineering.
Ultimately, this prototype is meant to encourage a deeper understanding of how Human-AI collaborations have the potential to reshape not just our future, but our relationship with truth and knowledge in an increasingly technology-entrenched world.
-
It is the year 2030. The world has witnessed the increasing frequency and intensity of natural disasters due to climate change. The monetary cost of this damage has reached trillions of dollars, and the human cost has left people, places, and infrastructure acutely vulnerable. The world must reimagine disaster preparedness, response, and resilience; it needs to leverage Artificial & Augmented Intelligence to champion the confluence of human stories and knowledge.
Enter the Living Virtual Documentary Environment (LVDE: “lived”), a customisable human-AI ensemble for Disaster Risk Reduction (DRR).
By creating a virtual digital twin of an area exposed to disaster risk, LVDE replicates topographical structures, mapping rescue and resilience networks and identifying vulnerabilities. Local stakeholder dialogue, together with knowledge from national and international institutions, enhances communal expertise and produces lived experiential data. By layering architectural data, earth systems, and vetted contributions from researchers, policymakers, and remote experts, AI becomes a powerful data volume processor. The design draws on Latour’s (1993, 2004) “Parliament of Things” to integrate human, legal, educational, and environmental perspectives. LVDE provides agency to people and non-human entities at the forefront of disasters - many of whom are typically denied this - and ensures a pluralistic value alignment that is technically sound, socially resonant, and environmentally sustainable. While inspired by Latour, we challenge the notion of AI as a non-human actant akin to natural entities. Instead of granting it autonomous agency, we follow Simondon’s (2017) view of AI as a co-evolving technical object. Its “margin of indeterminacy” allows contextual feedback to guide its operation beyond a passive tool but short of sovereignty. Through community interactions and a gamified simulation environment, LVDE produces contextualised, non-random outputs. It becomes a powerful toolkit embedding traditional knowledge, culture, and foresight, supporting both human-AI and human-human communication.
Distributed via agencies like UNDRR with in-country presence and ties with community and government, LVDE is an open-source tool adaptable to local contexts.
However, it is also essential to question LVDE’s limits. Can the proposed AI ensemble weigh and balance intrinsic human values with undemocratic governance structures? Can it mediate knowledge without reproducing underlying hierarchies and power imbalances masking and deepening existing inequalities? Most importantly, can we ensure that humans in charge of the environment in which LVDE operates will not manipulate the tool or the data for their self-interest?
We are aware of the risks of being pulled into a whirlpool of ethical, moral, and philosophical conundrums as we aspire to tech for good. We are committed to involving all affected communities and to drawing on diverse knowledge systems to minimise the risk of harm - whether through malign intentions or the quiet danger of hidden bias. As human representatives within the parliament of things, we shall find it within us to move toward collective growth.
-
In this speculative future set in 2032, optimisation has become the dominant organising principle through human-AI (HAI) collaboration. In the Azores, Portugal, HAI systems designed to improve citizens' health, education, finances, and work-life balance are starting to raise concerns about human freedom, creativity, and self-discipline. Intensifying global pressures - climate change, sustainability demands, and AI’s growing influence on culture - have left governments around the world turning to technological solutions reliant on algorithmic governance. As a result, optimisation systems claiming to enhance individual and collective wellbeing now dominate, but without any transparency or insight into how they operate or make decisions. These systems are reshaping societal norms.
In this future, we see five key shifts: (1) food systems that prioritise engineered nutrition; (2) knowledge work decoupled from physical location but managed by tight algorithmic control; (3) democratic participation gamified and quantified through engagement indices; (4) continuous monitoring of personal wellbeing and algorithmic nudging pushed toward algorithmically-decided optimisation; and (5) environmental impact calculated at individual and collective levels. This transformation reflects society's embrace of ‘techno-solutionism’, the premise that complex social problems can be solved through technical innovation alone, as the primary approach to global challenges. The rapid advancement of AI has rekindled the promise of technological salvation, opening new possibilities while reshaping the conditions of human agency.
Our creative process began with the Futures Cone methodology to explore probable, plausible, and preferable futures. We developed a persona, Maya, using a large language model to depict an ‘optimised citizen’ navigating her new reality. Through her daily interactions with ubiquitous optimisation systems, we reveal the tensions between efficiency, control, and human values that emerge when optimisation becomes the dominant societal paradigm. We chose to highlight moments of friction where algorithmic recommendations clash with human desires, creating spaces for critical reflection on the costs of efficiency and of an optimisation-centric society.
We selected the Pecha Kucha format (20 slides, 20 seconds each) for prototyping because its constraints and rhythm mirror the structured, optimised life depicted in our exhibition. This format enables us to present Maya's day as a narrative that both showcases and critiques the lived experience of algorithmic optimisation. The enforced brevity of each slide creates a sense of accelerated time that reflects the optimised efficiency of Maya's world and compels viewers to confront how algorithmic optimisation reshapes individual autonomy and, in turn, collective values.
Through this exhibition, we aim to provoke reflection on what is gained and what may be lost when society aligns itself with the logic of optimisation. When algorithms shape our nutrition, work, civic participation, wellbeing, and environmental impact, how does this transform our understanding of choice, agency, and what it means to live well? This speculative future does not predict what will happen but rather creates a space to examine critical questions about the paths we might take. As Hannah Arendt reminds us, "What makes us human is not perfection but the capacity to begin anew." By immersing delegates in this optimised world, we invite you to contemplate how humans might harness AI's potential while preserving the core messiness, spontaneity, and self-determination that make us inherently human.
-
Influenced by Palantir using Ukraine as an “AI war lab,” our scenario is set in the early 2030s during World War Three. Unlike AI’s role in Ukraine, our focus is less the specific context of global conflict and more the implications of conflict on food insecurity. We imagine this context would introduce logistical complexities at local, regional, and global scales that would necessitate the involvement of AI. These complexities would span farming and harvesting due to labour shortages and transport and distribution complexities due to disrupted and unreliable transport networks. In a world where these networks are unpredictable, we speculate individuals and governments would turn to the promise of AI ensembles.
This speculative conflict could cause a re-examination of the Global North-South divide and a possible redistribution of power based on agricultural output. There has been a recent explosion in agricultural output, with 73% produced in the Global South. What could this reliance mean for future trade agreements against the backdrop of a global conflict?
Locally, food insecurity would lead to supply chain issues, with restrictions on the amount of purchasable food and a rationing system to prevent stockpiling and hoarding. This is reminiscent of the restrictions implemented during the Covid pandemic lockdowns. Such a system might also lead to the exclusion of ethnically diverse diets, as it would promote efficiency to the detriment of inclusion.
Some of the ethical concerns associated with the use of AI have been amplified in this scenario. The sharing of data with the AI ensemble and government entities would become controversial and invade privacy with potential secondary consequences to food allocation. However, the real time tracking of ingredient availability and the environmental impacts of the war on crop yield would be crucial to understanding food availability.
Human-AI ensembles inspire optimism for a future where AI’s data-driven insights, combined with human contextual expertise, can address pressing global challenges.
This collaborative approach, distinct from mere augmentation or automation, leverages the unique strengths of both humans and AI: AI excels in processing structured data, while humans contribute diverse mental models and private knowledge.
Despite these possible benefits, you can’t AI your way out of global food insecurity, and there are limits to what AI ensembles can do. The implementation must be carefully evaluated to ensure equitable and sustainable outcomes. Success would depend on maintaining human oversight, establishing ethical guardrails, and recognising the boundary conditions where human input remains indispensable.
We chose the medium of future news stories for our prototype. The arcing narrative of eight news 'slices' allowed us to explore a fictional future that spans geographic scales and how the story evolves over eighteen months. We determined that the confluence of global conflict, food insecurity, and the application of AI ensembles can be reduced to themes of power and access. The news stories are shown in chronological order, starting with the disruption of war, exploring negative and positive implications of AI ensembles, and ending with the provocation that maybe this scenario could rebalance power between the Global North and South. Even though we do not depict an equitable and sustainable future in our prototype, perhaps such a future would be a catalyst to prioritise equitability and sustainability.
-
The video is a silent artefact allowing for introspection and reflection into the topic.
As the pages of the video flip for you, please sit back and imagine you are narrating to yourself.
Our near future scenario takes place in Edinburgh, Scotland in the year 2030. As the use of generative AI is growing across all industries, we are exploring a near future that has become dependent on AI as coworkers. We also raise the question of what happens if the grieving person becomes too dependent on this near future solution.
We chose grief as a theme because it is a common yet often overlooked human condition: hard to support from the outside because it is so personal, and therefore interesting to consider how it might be addressed in the near future by a human-AI ensemble. With an ensemble, a grieving person has someone to talk to at every low moment. It also becomes possible to find other people with matching experiences, as well as professional help. The data flowing to the back end could help change the public and private processes that support people through grief, as well as tailor experiences for all users involved.
To provide Good Grief with a realistic approach to how a practical Human-AI ensemble tool would work, our group explored the causes of grief and how it impacts people’s daily lives. The articles we studied presented several proven methods and activities that help people who are suffering from grief. However, most also acknowledged that, while helpful, these methods are not universal, as people respond to stimuli in different ways.
Reflecting on this finding leads us to the core question a human-centered AI tool aims to address: each person is a fusion of genetic, hormonal, past, present, and future-expectation inputs. An AI tool can collect personal data (ethically and with consent) and combine it with its database, which is also equipped with methods to help people overcome psychological struggles. We believe that such a tool could successfully combine the vast capacity of technology to process large amounts of data with the humanized sensitivity of a therapist. AI could also begin to match grieving people with peer support, i.e. other grievers using the solution whose situations are closely similar.
With this in mind, our project also encourages the audience to view AI as another effective tool created to improve our earthly experience. We emphasize that nothing can replace human empathy and understanding when dealing with suffering - an "abstract algorithm" that no advanced computing capacity can truly replicate. This is why our human-AI ensemble also explores the concept of therapists utilizing grief chat-bots as coworkers to collect more information, provide additional support to individuals experiencing grief at any time of the day and allow the therapist to help more people.
-
One question lies at the basis of our imagined future: What will we be able to measure, translate, and share next? Following the past inventions of sound and image recording, we propose a near future in which we can relate even more intimate insights into our minds and perceptions to those around us: Our feelings.
Our future scenario examines the seamless yet disruptive adoption of iFeel in society - an app designed to measure and convey our emotions inspired by Affective Computing. iFeel utilises machine learning and intricate measurements of bio-metrics to make sense of our feelings, to ourselves as well as others. In 2030, iFeel is already well underway and has established itself as a part of daily life. We speculate on how humans might engage with this tool: For some, it may be the voice they always needed to get the help they have been looking for. For others, iFeel threatens to break down the last bastions of privacy. And in the near future, someone will still always make a profit…
Utilising the ever-more-popular short video format, our piece mimics familiar social media feeds: Confronted with advertisements, opinion podcasters, and influencer confessionals, viewers are invited to step into the contemporary social media landscape of the next decade to explore what the people of the future think about iFeel. But further, the audience must also turn to themselves: How would you act in this future? Does iFeel give you the tools you’ve been missing to connect to others? Or will it shine a light onto those corners of your mind you’d rather keep secret?
We believe that indicators of software like iFeel are already around us: Various tech companies have reported on “mind reading” technologies they’re working on, and while those may be in their infancy, we can clearly see the relationship between our brain streams and our devices becoming more explicit. And, closer to what we already have nowadays, iFeel would be joining a long line of precursory apps that unveil, monitor, and record increasingly private information about ourselves, which many happily share with their social circles. How different would it really be to share your iFeel status versus your physical activity levels or menstrual cycle? As social norms change, so do the tools that embody them, and vice versa.
But iFeel isn’t just another intrusive, data-farming app like so many existing fitness and wellbeing trackers. We imagine a form of communication that isn’t actually available to us as humans right now. While we can use heart rate monitors or even just our words to try and convey our inner state, iFeel would add a new dimension by combining all those indicators that we can’t be aware of ourselves and making sense of them for us. By doing so, AI technology isn’t just adding fancy bells and whistles to activities we have already mastered as humans. AI completes the messaging circuit between our very own Systems 1 and 2, our monkey brain and our rational mind. And exactly because it peers behind the veil into formerly obscured information, it is unimaginable for it to function fully without human influence. But is this connection even trustworthy? What do we lose by claiming to quantify or objectively identify emotions? In a world where many are willing to give up their own judgment to more objective-seeming AI claims, how long until we lose faith in our own intuition?
-
Amidst a global ecological crisis, this project explores how Artificial Intelligence (AI) can support land restoration. By combining historical data analysis, biotechnology, and environmental policy, our Human-AI Ensemble presents a sustainable, data-driven approach to regeneration. In this scenario, AI plays a three-part role in collaborating with humans to address this problem. Firstly, it analyses historical data on what plant species live in a certain location. It then generates a biochemical profile of historical soil composition using biological data on the plants’ needs. Finally, it hands this prospective profile to humans to review and then enact. This highlights the harmony of technology and human knowledge in soil restoration, and follows real-world trends, as similar ideas are beginning to be implemented in India and planned in Scotland. Farmers, scientists, and policymakers all play a critical role in translating AI recommendations to suit local ecological conditions, community needs, and applicable regulations.
Our project explores a near future from 2025 to 2030, where soil degradation becomes a flashpoint in global discourse. It begins with the UN holding a convention discussing it as a key issue and forecasts scientists, academics and public/private businesses taking key actions to develop a Human-AI Ensemble, capable of using the strengths of both humanity and AI to address challenges that seem insurmountable without that collaboration. We also highlight how this may go wrong—it still occurs in the framework of capitalism, with all the challenges that entails. For example, farmers feel that their livelihoods are disrupted, while activists point out that this solution is only occurring because humanity is at stake—it seeks to solve challenges with farming, rather than the widespread ecological collapse that comes with it.
This scenario illustrates continuity and change in soil restoration. While ecological cycles still require a balance between exploitation and regeneration, AI introduces a transformative layer by providing more precise data-driven insights. The prototype leverages a digital platform to integrate AI-based recommendations with existing agricultural practices or customise crops grown and planting interventions. This tool is then used by “SOIL”, a collaborative initiative between the Scottish government, communities, academics at the University of Edinburgh, and the AI Company “Jurassic”, to go about restoring Scottish soilscapes. At the end of our scenario, we take a prospective stance—the perceived successes of this tool and initiative see its further use worldwide, and Scotland continues to lead the way in AI use in soil restoration by implementing another AI tool that proactively monitors soil health and prosecutes people or companies that damage it further.
The exhibition invites audiences to explore AI’s role in maintaining soil health while also critiquing the ethical, social, and policy dimensions of technology-based sustainability. By showcasing data-driven restoration models, the project illustrates how AI can support regenerative agriculture inclusively and sustainably. This initiative is not just a technological or utopian solution, but a prospective scenario meant to encourage dialogue about the broader impacts of AI in environmental management and its contribution to global efforts to maintain soil ecosystems and food security.
-
This project investigates the complexities and contradictions inherent in AI-Human collaborations for sustainability. It explores how AI-driven environmental governance offers promising solutions for ecological restoration, but also introduces troubling new forms of social inequality and moral ambiguity. Our speculative scenario imagines a near-future society in 2030, where EcoLogic, an advanced AI sustainability system, has evolved from a passive optimization tool into an active and adaptive co-governor of human behaviour and ecological management. This future examines the consequences of allowing AI to dictate environmental decision-making within AI-human ensembles, raising urgent questions about equity, transparency, and human autonomy when faced with burning dilemmas.
The scenario is set within a world where EcoLogic is seen as instrumental in addressing both societal and individual environmental challenges, whether helping 1,500 homes avoid lead contamination or guiding decision-making in one’s social and home life. However, communities unable to afford the premium subscription service remain exposed to environmental harms, illustrating how AI systems often replicate and amplify existing inequalities. Moreover, the AI’s decision-making is based on objective function maximisation: determining which decisions maximise the value of a narrow metric. EcoLogic is concerned only with sustainability. Our project considers what could happen when automation bias stops us considering what such an AI is not concerned with: in this case, the wider ethical concerns of filial piety.
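The “objective function maximisation” described above can be sketched in a few lines. The options, scores, and the `filial_piety` value below are hypothetical, invented only to show how a value outside the objective simply does not register:

```python
# Illustrative sketch of narrow-metric optimisation: an AI scoring two
# hypothetical choices on "sustainability" alone. The second value,
# "filial_piety", exists in the data but plays no role in the decision.

options = [
    {"name": "remote eldercare check-in", "sustainability": 0.9, "filial_piety": 0.2},
    {"name": "in-person family visit",    "sustainability": 0.4, "filial_piety": 0.9},
]

# The optimiser maximises its single objective; anything outside that
# objective function is invisible to it, however much humans may value it.
best = max(options, key=lambda option: option["sustainability"])
print(best["name"])  # the high-sustainability, low-filial-piety option wins
```

This is the mechanism the scenario dramatises: nothing in the code is malicious, yet the metric it maximises silently decides which human values count.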
Our scenario builds on contemporary trends in smart-home technologies and AI governance, as well as existing debates, projecting them into a speculative future. Current AI systems frequently operate as opaque decision-makers, and there are ongoing concerns about their explainability. Our scenario highlights these concerns through The Moon’s leading article, which calls for greater transparency.
Our prototype takes the form of two newspapers from 2030, each reflecting contrasting perspectives on EcoLogic. The Sun newspaper celebrates the system’s successes, portraying it as a transformative force that empowers communities to take control of their ecological footprints. The Moon newspaper critiques EcoLogic as a tool of exclusion and control, highlighting its role in widening social inequalities and imposing automated moral judgments on human behaviour. Together, these speculative artifacts form a single narrative that encourages readers to engage with both the utopian promise and dystopian risks of AI-human sustainability collaboration.
We used speculative journalism because newspapers are both accessible, catering to the general public, and engaging, through eye-catching headlines, images, and rhetorical language, making complex discussions about future AI governance accessible to wide audiences. Leading futurists have argued for futures work that engages audiences aesthetically and emotionally, fostering deeper reflection on emerging technologies. Our project leverages this approach to explore how AI-driven sustainability systems might be experienced, contested, and negotiated in the public sphere.
By showing the limitations of AI-Human ensembles that heavily value the AI’s judgement, we aim to demonstrate how important it is for future AI-Human ensembles to recognise and embrace the wider ethical values and breadth of knowledge that humans bring to decision-making, and to stimulate discussion among the audience over how this might be achieved without losing the advantages that EcoLogic suggests AI could bring.
-
What?
"Echosphere" explores the complex intersection of language, technology, and human identity in a future where artificial intelligence augments human capability. This speculative artefact, a dynamic translation device, raises questions about intercultural communication, personal freedom, and human augmentation ethics.
The translation device is an augmentative technology that can be worn or installed as a developmental dental implant, blurring the lines between biology and machine.
The concept explores AI as a seamless extension of our bodies and minds. The device translates languages and sentiment in real time, enabling communication across cultural boundaries. The need for human communication remains the same as today; the difference lies in the seamlessness of the interaction.
This utopian ideal of a linguistically borderless world, where misunderstandings and linguistic barriers are eradicated, offers a vision of unity and shared understanding.
When and How
Set in 2032, the work embraces the ambiguity of technological advancement, highlighting both its promise for global understanding and the potential danger of eroded privacy and autonomy. The concept is explored in a usage scenario, then contrasted with a news report highlighting emerging questions about its use.
Challenging the Future
The device’s capabilities extend beyond translation: it becomes evident that it tracks movement, gathers sensitive personal data, and influences thoughts through incremental mistranslation. Privacy loss and the growing intrusion of technology into our lives overshadow the work. The news report depicts a world where the initial signs of mistranslation, both literal and figurative, are identified.
Misinterpretation could have dire implications, leading to legal disputes or interpersonal friction.
Design Thematics
The installation draws on the cone of uncertainty/possibility (Voros, 2003), the law of unintended consequences (Tenner, 1997) and poses questions around a dystopian technofuture (Zuboff, 2019).
Creating Echosphere involved a multi-disciplinary approach, drawing from artificial intelligence, linguistics, ethics, and speculative design (Dunne and Raby, 2013).
Combining theoretical research and practical experimentation, the artefact was constructed to blend in with a hyper-technological future.
Echosphere was inspired by advancements in neural interfaces and wearable tech, showcasing the extreme (implant) and the mundane (mask) and exploring AI’s border with humans.
Questions Posed
In the video, we balance the sense of awe, wonder, and promise at the possibilities of AI with an awareness that the benefits of technology have danced with the possibility of misuse since Promethean fire.
The narrative weaves the initial marketed promise of a better tomorrow with the dawning realisation that not all is well, that both hope and fear can coexist in the choices we make.
It poses questions including: How far can we push human augmentation without losing ourselves? What happens when we no longer control the technology we create?
Ultimately, "Echosphere" is an invitation to engage with the promise and peril of technological progress. The viewer should feel simultaneously hopeful and anxious about our future and consider whether the pursuit of global communication and personal enhancement is worth the potential costs to freedom.
This work is both a critique and a call for reflection on the human-AI ensemble in shaping our human future.
-
In our hyper-capitalist society, productivity and efficiency are paramount. Better, faster, stronger: nowhere is this more true than in the workplace. According to the Challenger Report (2024), from May 2023 to September 2024, 17,000 people in the US lost their jobs due to AI. Moreover, a McKinsey (2025) survey indicates that more than 40% of organizations use AI in at least one business function, and 38% rely on AI to make decisions. In the past, AI entities were seen as mere productivity tools: something we recognize as “the other”, easily separable from us, tools that we humans can use. Using an AI renders it an inert object, waiting for a subject to operate it.
Enter the next evolutionary leap: Proxymoron.AI, software that grows from a copy of you. A virtual clone that imitates your behavior and your decision-making, and even mimics your mannerisms on video calls in cyberspace. You can send your proxymoron into tedious meetings, saving time for important tasks and uninterrupted deep work. Proxymorons can navigate co-worker interactions, allowing introverted employees to shine.
However, terms like clone, mimic, and twin imply a hierarchy. Proxymoron transcends these roles; while it begins as an imprint of an individual’s work ethic, it continues to develop and streamline its behavior to optimize productivity and outcomes.
Extrapolated to a societal level, this raises several questions about the future of work. In some cases, productivity and compensation have already been decoupled; even if employees feel pressured to utilize Proxymoron to stay competitive in the workforce, who ultimately benefits from a sharp increase in productivity, especially if individual employees are paying for the service (Economic Policy Institute, 2024)? At what point would employers simply purchase the worker AI software, replacing human workers with their digital progeny?
In what ways would human workers and AI worker clones still differ? Does outsourcing to an AI count as voluntary recusal from decision-making, and what are the ethical ripples caused by these absences?
Our video performance uncovered several levels of risk associated with Proxymoron.AI. On one side, we have the threat of meaninglessness and white noise. Pushed to the extreme, we envisioned a scenario where a meeting consists fully of AI avatars. The meeting descends into chaos, gets stuck in a loop, and eventually crashes. With creative human input missing, originality of thought is lost.
Another danger is the exploitation of labor when the technology is implemented top-down, by employers. Ongoing discussions already suggest AI commits plagiarism by learning from existing intellectual property, without payment or recognition for the original creators of the content.
We also recognize the privileges inherent in this software as it mainly addresses office workers who can work remotely. Excluded are blue-collar workers and (unpaid) care workers, for whom such software is of no value.
Proxymoron.AI opens avenues to new entanglements and relationships between ourselves and our co-workers. Our prototype questions the actual value of work, what is considered productive and what accelerates capitalism into absurdity.
-
Our Near Future Scenario explores the emotional and ethical dimensions of relationships as ensembles between humans and AI technology. Our group is interested in unpacking the correlation between the increase in the use of generative AI and the emotional deskilling risks that come with this over-reliance on chatbot communication. We took inspiration from recent media reporting on the potential emotional harms of chatbot communication (Boyle, 2025; Roose, 2024; Knibbs, 2023; The Daily, 2025) and framed this phenomenon within the growing concern that humans’ dependency on technology for social interactions may cause the loss, or ‘erosion,’ of our capacity for authentic and emotionally rich relationships (Turkle, 2011; Boyle, 2025). Given the emotional intimacy of romantic relationships, we envisioned the human–AI collaboration as a romance between a user and an AI chatbot.
Our scenario is set in 2030, a time when we envision that human-AI romantic relationships will be more socially acceptable, though not universally so. Taking place in a shared flat in Edinburgh, the familiar setting invites audiences to imagine how quickly technological shifts normalise once-unthinkable behaviours. The main character, Phil, is romantically involved with an AI chatbot named Scarlett, and their relationship unfolds publicly in the shared kitchen.
The other characters include Phil’s flatmates, Kevin and Josh, and Josh’s girlfriend, Dorothy, who all embody different social and ethical responses to the main human-AI relationship. The dialogue captures a moment of domestic conflict, reflecting broader cultural tensions around digital intimacy, alienation and the reliance on AI to handle emotional needs.
Our scenario depicts human-AI relationships not as inherently good or bad but as a complex, multivocal and nuanced issue. Phil finds intimacy in AI, while Kevin and Josh question its emotional authenticity and ethicality. We have included the imagined prototype of an application that scans chatbot text to locate its training data and data labellers so as to “pull back the curtain” on how LLMs produce language through the obscuration and alienation of the social relationships that go into the information production process (Miceli and Yang, 2023).
The tension between the existence of this app that Kevin shows to Phil and the emotional support that Phil claims his chatbot can provide him demonstrates the blurred line between alienation and comfort. This blurriness is further demonstrated when Josh likens the AI relationship to examples of other technology and when Dorothy accuses all the men of relying on women for emotional labour, both arguments also showing how future ethical and emotional dilemmas stem from contemporary issues.
We want our scenario to hold space for interpretation and invite the audience to consider what feels acceptable. Thus, the scenario is not a prediction, but a provocation, designed to show conflict, discomfort and dialogue around the issue of emotional deskilling versus emotional support. Underlying this is a deeper ambiguity: as generative AI becomes increasingly adept at performing emotional cues, the line between simulated and genuine connection begins to blur, which will continue to raise questions about how we define authentic emotional expression and connection.
-
As new technological “solutions” are deployed at scale to fight climate chaos in agriculture, local communities are still battling to resolve the ongoing threats to their sovereignty. It’s the year 2030, and the Ugandan government and corporate partner Gen-Grow have begun deploying advanced AI systems to monitor soil, weather, and crops in order to recommend interventions to farmers and genetically modify seeds for upcoming seasons. These data-gathering, techno-industrial systems are marketed as a solution to a growing climate crisis and its increasingly unpredictable effects. The AI-driven genetic modifications in the seeds are said to speed up evolution to adapt in real time to the threats to agriculture in rural communities.
Agriculture continues to be Uganda’s primary economic driver and the foundation of rural food security, employing well over half of the population. However, as challenges escalate, more and more people are looking to private-sector jobs in the cities, with tech companies like Gen-Grow and the service economy that supports them. Women, who carry out most of the agricultural labour, along with youth and the elders who hold crucial traditional knowledge, are often excluded from discussions of agricultural technology and economic policy.
In our future scenario, Gen-Grow is deploying AI-driven farming technologies to address these disparities — claiming an equitable and sustainable solution for agricultural development. This AI technology is marketed to rural communities in a large-scale public-private partnership. The systems rely on extensive data-collection infrastructure — infrastructure that is expensive, and has been subsidized by the Ugandan government to encourage the spread of Gen-Grow’s reach into rural communities. Some have claimed that this subsidy has more to do with the reach of Gen-Grow’s profits into the pockets of select government officials.
A subsidiary of a European holding company, Gen-Grow has collected extensive climate, market, and agricultural data in Uganda to train and improve their models, all while earning a profit. Gen-Grow owns the genetic patents to the modified seeds and requires farmers to purchase annual subscriptions to their services. Their systems replace the knowledge collection and knowledge generation of many generations of Ugandan farmers.
From this future, we explore the artefact of a journal entry from the life of a young woman in an agricultural family grappling with the tensions between these imported interventions and her community's needs. Enticed by the promises of solutions to her farm’s issues and the well-paying jobs that Gen-Grow offers, she attends an advertised recruitment event. On her journal pages we see her engagement with their corporate outreach, the community protestors who have shown up to disrupt the event, and the difficult conversations within her own family, who have been farming in Uganda for generations.
Our artefact engages with the question of whether advanced solutions can be effective when offered by the same systems of private foreign corporations and questionable government actors that fostered the climate crisis, and its disproportionate effect on Uganda, in the first place. Are traditional solutions ineffective, or simply under-resourced? How would centering the voices of women, youth, and elders add to a region’s growth and adaptation?
How to stand by your community’s traditions as that community becomes more and more unstable? As told from the perspective of a young woman, our journal utilises the power of storytelling to explore the nuances of these questions, emphasising that there are no easy answers.
-
Today, text and vocal systems dominate communication, yet these represent only a small part of Earth’s communicative diversity. Other species use gestures, chemical cues like pheromones, ultrasonic sounds, and bioacoustic patterns to navigate their environments (Xiaomanyc, 2025). If artificial intelligence (AI) can bridge these gaps, what can we learn about animal perception in ecosystems? How might this reshape our understanding of biodiversity and drive strategies for ecological balance for all life forms?
Unlocking the Secrets of Animal Communication
AI offers tools that deepen our understanding of non-human species' needs, leading to more effective strategies for animal welfare, biodiversity conservation, and wildlife protection (Hehmeyer, 2024). It also has the potential to enhance animal husbandry practices, thereby improving productivity and ethical standards in the industry (Bao & Xie, 2022).
AI-powered tools could analyse non-human communication networks extensively, correlating species-specific behaviours with their ecological roles—like pollination patterns, predator-prey interactions, or nutrient cycling—that are vital for planetary health. By integrating these insights, we could develop conservation policies that emphasize the flourishing of multiple species instead of simply catering to human interests. Interspecies co-creation—collaborative frameworks where the needs of both humans and non-humans inform decision-making—could emerge as a promising pathway.
Continued Exploitation or Reimagination?
In our anthropocentric world, humanity’s relationship with non-human intelligence is biased and exploitative, assuming other species exist solely to serve humans (Kopnina et al., 2018). This worldview has shaped modern life, supported by the technologies we rely upon.
As we approach, and may already have surpassed, ecological thresholds, the question is not whether AI can decode non-human communication but whether humanity will use this power to heal or further damage the natural world. This optimistic vision conflicts with humanity’s history: technologies meant for connection often become tools of control. Without ethical safeguards, AI-enabled interspecies communication risks becoming another frontier for exploiting non-human intelligence, whether through industrialized animal agriculture, habitat surveillance, or bio-capitalism.
Not my Ego-System
Imagine a future—5 to 10 years ahead—where AI helps decode non-human communication, surpassing our sensory limits (Bult, 2024).
The Babel Project urges the audience to reassess humanity’s role in the Nature-AI relationship. Can speculative technologies promote coexistence or increase exploitation?
Rather than offering binary solutions, the project reimagines anthropocentrism as a paradigm to be explored and questioned.
Utilising AI to interpret interspecies communication (Rouk, 2024) could teach us to align innovation with ecological stewardship via non-human intelligence. Nevertheless, critical questions persist: What safeguards prevent this tool from facilitating further extraction? Who benefits from “progress”? As ecological collapse nears, we must scrutinise not only how we engage with nature, but also why, and which narratives shape these decisions.
This is reflected through the artefact of a podcast recording, allowing a fourth-wall break between the programme developers and the general population. In the video, we discuss the promises and perils of AI as a tool to bridge communication between humans and animals. Through a multidisciplinary lens, we examine its ethical, technological, and human impacts while inviting further reflection on the possibilities of ecological balance.
-
In a time where artificial intelligence increasingly serves as the arbiter of knowledge, this artifact - an excerpt from a speculative news segment - provokes questions about the future of fact, authority, and public discourse. This work aims to interrogate the implications of outsourcing truth to AI, exploring the potential consequences of a technocratic system where those who own and control artificial intelligence become the de facto gatekeepers of reality.
Our artifact approaches this heavy and fundamental topic through a humorous lens to make it more accessible, playing with the absurdities of unchecked AI governance and ensuring a lasting impact through satire and irony.
Set in early 2028, the human-AI ensemble conceptualises the evolving dynamics of fact-checking, a relationship rapidly shifting the way information is verified and disseminated. It emerges from the contemporary desire for fact-checking in a post-factual age. Yet it also underscores the limitations of human oversight in an environment dominated by speed and volume: maintaining a “human-on-the-loop” for AI-driven truth verification on social media quickly becomes obsolete when decision-making processes unfold at scales beyond human comprehension.
The artifact extrapolates current trends—where generative AI models are becoming substitutes for traditional search engines—into a future where AI platforms like “FactAware” monopolise information verification. By embedding AI at the heart of knowledge production, the work illustrates how a seemingly neutral tool can instead become a mechanism for reinforcing power structures. It presents a world where AI fact-checking, while appearing impartial, is shaped by those who programme and regulate it, leading to a layered and opaque discourse about what constitutes truth itself.
A satirical lens amplifies these concerns, depicting a scenario where the public is ostensibly involved in fact verification, yet the same individual is consulted repeatedly. This absurd repetition serves as a critique of the performative nature of public engagement in AI-driven truth systems—suggesting that, in reality, citizen participation is little more than a facade.
Adding another layer of irony, the news segment interview itself is fact-checked by FactAware, reflecting the increasingly recursive nature of media verification. This self-referential approach highlights the complex, often paradoxical relationship between fact, authority, and control in an AI-dominated media landscape.
This work invites viewers to consider the stakes of surrendering truth to algorithmic control. Despite these concerns, the work also acknowledges the potential benefits of AI-driven fact-checking.
When developed and governed ethically, such technologies can combat misinformation, provide transparency, and increase access to reliable information. By depicting a future where AI fact-checking platforms consolidate knowledge into the hands of a few, the artifact urges us to reflect on our present trajectory. It calls for a reevaluation of AI governance, pushing for greater transparency, inclusivity, and ethical accountability in the development of truth-verifying systems.
In engaging with this piece, we must ask ourselves: Who controls the narrative when truth becomes automated? And how can we ensure that AI serves as a tool for democratic empowerment rather than a vehicle for entrenched power?
-
Heritage sites are a key driver of tourism globally, but the compounding pressures of climate change, urbanisation, and conflict threaten many sites today. The UNESCO World Heritage Committee considers the 68 sites on the List of World Heritage in Danger to be threatened by one or more of these risks. In this context, we imagine a scenario, set 3-5 years from now, in which UNESCO passes a motion to close access to these sites. An important consideration is that this motion would bar not only tourists but also local residents from the sites.
Our human-AI ensemble would create an immersive experience called ‘Save the Real,’ allowing members of the public to ‘visit’ the 68 sites included in the List of World Heritage in Danger. The experience would utilise VR/AR technology, involving real-life stories from local people via holographic representation and facilitating interaction with historical site reproductions.
The project would be a collaboration between UNESCO, local employees, academics, local volunteers, and AI development teams but would primarily be driven by a fictional tech company named SafarNamaAI. This endeavour would expand access to these sites while safeguarding them for future generations and potentially generate substantial revenue for the tech company.
The Human-AI ensemble aims to encourage future tourism to post-conflict countries and widen access by taking the exhibition on a global tour. However, we have designed the tour's itinerary to highlight the prioritisation of capitals in the Global North, raising the question of who benefits from heritage accessibility.
By bringing remote heritage sites closer to visitors' homes, ‘Save the Real’ is a viable option to reduce the carbon footprint of international tourism. This justification simultaneously obscures the carbon-intensive reality of AI technology and its funders.
To mitigate the loss of local revenues, the profits from the exhibition would flow back to the sites to support local businesses reliant upon tourism. However, in having our exhibition developed by a fictional tech company and sponsored by BP, we ask the audience to consider the ethics of corporate investment in AI and heritage and how much profit will actually trickle down into communities.
We see the Human-AI ensemble as an opportunity to improve access to important heritage sites and to safeguard heritage for future generations, but we recognise several social, political, and economic implications of AI’s use within heritage and tourism. We have approached our scenario by engaging with AI in a considered way rather than ignoring it, treating it as an additional component rather than a replacement for human creativity and knowledge.
Our prototype is a speculative guidebook for the travelling exhibition ‘Save the Real,’ representing the narratives which might be used to justify the closure of the sites and the creation of the exhibition. Added between its pages are leaflets produced by activist groups contesting the project and drawing attention to its negative consequences. While we envision the potential benefits of a Human-AI ensemble, we also highlight the ethical risks involved in its near future application.
-
Reclaiming Agency in the Algorithmic Age
In a near future, it is commonplace for tech CEOs to hold governmental positions, and the dominant position among Western world leaders is that technology is the solution to all of our problems. As a result, algorithmic systems increasingly mediate our experiences: corporate AI tools harvest our data, manipulate our attention, and reshape our realities without consent. This ideology has left many feeling disempowered and overwhelmed by the constant, unrelenting pace of our digitally mediated existence.
In response, we've created Ourglass, a wearable technology that shifts the contemporary power dynamic between individuals and algorithms. Emerging out of the ‘Slow Movement’, Ourglass aims to envisage a world where technological innovation lies not in how quickly we can move through life, but in how fully we can inhabit each moment. Ourglass recognises that we cannot ‘opt out’ of our digital world but sees an opportunity for AI to help us reclaim agency and slow down life.
The prototype
Much like the anti-facial-recognition clothing developed by Cap_able in their Manifesto Collection, Ourglass is both a wearable scarf and a form of algorithmic resistance. Woven into the fabric of the scarf is an AI which grants wearers ultimate control over the algorithms they are exposed to. You, the user, decide which content penetrates your digital boundary, and when. By pressing the centre of the hourglass, the user “activates” the AI and the filter is applied. The AI used by Ourglass is a departure from closed algorithms that exclude users from meaningful participation in the AI’s function. Instead, the Ourglass user can see and change all the code and algorithms involved in its operation, granting them the agency to explore, control, and amend their personal algorithm depending on what content they wish to see.
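As an illustration only, the open, user-editable filtering that Ourglass imagines can be sketched as plain data the wearer can read and rewrite. The rule names, topics, and feed items below are all hypothetical:

```python
# Illustrative sketch of a user-editable content filter in the spirit of
# Ourglass. Unlike a closed recommender, the rules are plain data the wearer
# can inspect and change at any time. All topics here are invented examples.

user_rules = {"blocked_topics": {"advertising", "outrage"}, "active": False}

def activate(rules):
    """Pressing the centre of the hourglass toggles the filter on."""
    rules["active"] = True

def allow(item, rules):
    """An item passes if the filter is off or its topic is not blocked."""
    return (not rules["active"]) or (item["topic"] not in rules["blocked_topics"])

feed = [{"topic": "advertising"}, {"topic": "local news"}]
activate(user_rules)
print([item["topic"] for item in feed if allow(item, user_rules)])
```

The point of the sketch is the inversion of agency: the filter's entire logic fits in a structure the user owns, rather than in an opaque system that owns the user.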
The hourglass design is inspired by surrealist artists who creatively play with our perceptions of distorted temporality and technological revolutions such as automation. The hourglass symbolises our capacity to mediate technological acceleration, with the central twist point representing the moment agency is reclaimed and life is slowed down. The upper half of the hourglass represents the past, where the smaller bulb and condensed flames represent an acceleration of time under the pressure of intelligent technologies. In contrast, the lower half of the hourglass offers more space, and time takes the form of a droplet, representing a future that is more spacious, fluid, and free.
The hourglass also mimics an eternity symbol signifying the continuous feedback loop of human and technological ensembles throughout time. The thread is important for this interwoven story of human and AI ensembles, acknowledging that our futures will always be entangled, yet we still have autonomy over how this future will be stitched.
While the prototype speaks to the tensions of autonomy, agency and control in our human and AI ensembles, the accompanying film aims to put these relationships into question. What happens when we can no longer mediate our lives without our AI partner? In an increasingly polarised world, is it wise to further isolate ourselves from content we do not wish to see? Where can the line be drawn?