Responsible use of AI also means leaving responsibility in the hands of humans, says Julia Stoyanovich.

© Falaah Arif Khan


Far more than just a tool: How AI can be used responsibly

Ten years ago, artificial intelligence was a niche technology. Today, it is found in classrooms, clinics, and government offices. What challenges does this pose?

By Julia Stoyanovich


If you asked in 2016 what “felt new” about AI, many would have pointed to self-driving cars or smart assistants. Fast-forward to 2025, and the better answer is less flashy and more consequential: artificial intelligence has quietly become a general-purpose technology, reliable enough to be woven into everyday tools, broad enough to shape work, education, healthcare, and government. That generality and success, not the mere existence of clever algorithms, is what gives AI a new social relevance today.

We should say upfront: AI is not new. We have long had autonomous systems that we understand and can verify. Think of smart vacuums that reliably leave a room clean; chess engines that beat grandmasters; machine translation that moved from punchline to practical aid. These successful systems share three traits: they meet a clear need for improvement, we know how to make them, and we can check if they actually work. These are some of the hallmarks of responsible AI.

But to put AI systems into safe use, we must also insist on human readiness: the people who use or are affected by AI (clinicians, teachers, caseworkers, patients) must be able to understand, interpret, and appropriately act on its outputs; know when AI is in the loop; and have a path to question or override it.

Since 2016, three traits have come to characterize AI. First, scale: models train on vast corpora and transfer across tasks, powering today’s generative systems that compose text, code, images, and audio on demand. Second, reach: AI has moved from a handful of specialized domains into the fabric of daily life and work, drafting emails, summarizing medical notes, routing city services, and tutoring students. Third, dependence: institutions have begun to trust AI – sometimes too quickly – so we now rely on these tools to allocate attention, opportunity, and resources.

Generative AI deserves attention, but the deeper shift is success at breadth. The technology increasingly works in ways ordinary people can use, which means its decisions matter at scale. That raises two responsibilities that were underdeveloped in 2016 and are urgent now: building public literacy so people can use, question, and contest AI, and establishing guardrails with public participation so deployment is transparent, accountable, and aligned with democratic values.

People, not machines, are responsible

Responsible AI is not a property of code; it is a practice by people and institutions. Two ideas matter here. The first is human readiness. In high-stakes domains (radiology, oncology, triage), integrating AI safely requires trained professionals who understand limits, uncertainty, and accountability. Even highly accurate tools fail if clinicians don’t know when to defer, double-check, or ignore an output. Patients, too, deserve intelligible explanations. Without such literacy, we invite misplaced trust and new inequities.

We need guardrails that are both legal and lived.

Julia Stoyanovich, Computer Scientist

The second is rigorous evaluation. AI systems are engineering artifacts and should be audited like instruments: test stability, measure error, probe bias. Audits of flashy hiring tools that claimed to infer personality from CVs collapsed under trivial input changes; if a ruler stretches when the paper is glossy, it’s not a ruler.
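To make the ruler metaphor concrete, here is a minimal illustrative sketch, not taken from the essay, of the kind of stability check such an audit might run: give a scoring model the same CV with only cosmetic changes and see whether its output holds steady. The scoring function, the perturbations, and all names are hypothetical placeholders standing in for whatever black-box system is under audit.

```python
# Hypothetical sketch: a stability audit for a black-box CV-scoring model.
# The idea from the essay: cosmetic changes to the input (formatting, casing,
# whitespace) should not move the score. All names here are placeholders.

def cosmetic_variants(cv_text: str) -> list[str]:
    """Copies of the CV that differ only in presentation, not content."""
    return [
        cv_text,                          # original
        cv_text.replace("\n", "\r\n"),    # Windows-style line endings
        cv_text.upper(),                  # same words, different casing
        "  " + cv_text + "\n",            # stray whitespace
    ]

def stability_audit(score_fn, cv_text: str, tolerance: float = 0.05) -> bool:
    """Return True if the model's scores stay within `tolerance` across variants."""
    scores = [score_fn(variant) for variant in cosmetic_variants(cv_text)]
    spread = max(scores) - min(scores)
    print(f"scores={['%.3f' % s for s in scores]} spread={spread:.3f}")
    return spread <= tolerance

if __name__ == "__main__":
    # A deliberately brittle toy "model" that keys on casing: it fails the audit,
    # much as the hiring tools in the essay collapsed under trivial input changes.
    brittle = lambda text: sum(ch.isupper() for ch in text) / max(len(text), 1)
    sample_cv = "Jane Doe\nData analyst, five years of experience in reporting.\n"
    print("stable under cosmetic changes:", stability_audit(brittle, sample_cv))
```

A real audit would of course use domain-appropriate perturbations and statistical tests, but the principle is the same: if presentation alone moves the score, the instrument is measuring the wrong thing.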

In 2016 this audit culture barely existed; in 2025 it must be routine across companies, agencies, universities, and newsrooms. And remember: prediction is performative. When models assign grades, label neighborhoods “high risk,” or shortlist candidates, they shape the world they measure. Design and oversight are therefore civic, not merely technical, questions.

Socially negotiated guardrails

Europe has led by translating AI principles into law and procurement: risk-based rules and required assessments for higher-risk uses. That’s a milestone, but rules written once, far from the point of use, aren’t enough. We need guardrails that are both legal and lived.

That means impact assessments that matter: plain-language, public documents stating who benefits, who bears risk, and a fallback plan, published before deployment, updated after incidents, and archived. It means independent audits with teeth: third-party tests for robustness, bias, and misuse, including red-teaming for generative AI, with disclosures detailed enough for experts to reproduce key claims. And it requires participatory governance: those affected should have a seat before deployment and a hotline after; public comment must change designs and decisions.

AI is social infrastructure.

Julia Stoyanovich, Computer Scientist

Together these practices create distributed accountability and surface the “knobs” of responsibility – thresholds, exceptions, escalation paths – for open debate, rather than burying them in code.

Why AI is a new social question for Europe

In 2016, “AI ethics” was a niche debate. Today AI sits in classrooms, clinics, court filings, customer service, and government back-offices. A technology with this footprint is social infrastructure. Europe’s strength is rights-based governance and the conviction that markets serve democratic values; the task now is to operationalize that conviction for general-purpose systems whose behavior shifts with data and prompts.

That is exactly why public literacy and participatory guardrails belong together. Literacy lets people use AI confidently and skeptically; guardrails make deployments contestable, revisable, and fair, so moral judgment isn’t offloaded to software. Neither suffices alone: literacy can’t fix opaque procurement, and perfect rules fail if citizens can’t tell when systems overreach.

The difference between us and AI is humor

Let’s allow a little optimism. We don’t predict the future; we make it together, in public. We are not passengers on a runaway train but the engineers and the brake operators. If general-purpose AI is to become public-purpose AI, it will be because teachers, clinicians, students, city workers, entrepreneurs and citizens co-author the rules and keep revising them in the open.

Working with AI can also be fun: a look at the animated short “Happy Birthd-AI” from Julia Stoyanovich’s teaching.

© Julia Stoyanovich

And we can have fun while we work. The difference between us and AI is that we have a sense of humor: we can look at this moment, reflect, and laugh at ourselves. Humor lowers the temperature, punctures hype, and makes room for learning. I fold it into public literacy and education: a good joke can carry a serious lesson about failure modes, uncertainty, and why “just ask the model” is not a governance plan. Play – comics, shorts, classroom skits – helps people try systems, see where they stumble, and keep their bearings.

If you’d like to see a playful take on AI, have a look at the animated short Happy Birthd-AI (https://r-ai.co/birthd-ai), which I created as part of my teaching. In the end, the real question is not “Can AI do it?” but “How should we do it – together with AI?”

This essay adapts ideas from Julia Stoyanovich’s upcoming textbook on responsible AI.

