2026 Sessions

Hear big ideas
from thought leaders

Conference schedule
Panel Discussion

Process Redesign for AI Integration

Reachy Mini: Giving a Body to AI

Multimodal Research Lead@Hugging Face

Large language models can reason, generate code, and hold conversations. But they remain trapped behind screens. If AI is to become truly useful in our daily environments, it needs a body, a presence, and natural ways to interact with us.
In this talk, I’ll present Reachy Mini, an open and developer-friendly robot designed to explore what embodied AI can look like today. I’ll walk through how we are building its core software stack, from multimodal perception to real-time voice interaction, and why voice is emerging as the most natural interface for physical AI. We will look at how speech-to-speech pipelines, local inference, and modular backends allow Reachy Mini to move beyond scripted demos and into responsive, real-world interaction.
You will leave with a clearer understanding of what it takes to give AI a body, why voice-first interfaces matter, and how open tools can accelerate the next generation of interactive robotics.
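The modular-backend idea in this abstract can be sketched in miniature. Everything below is an illustrative assumption, not Reachy Mini's actual stack: each stage (speech-to-text, language model, text-to-speech) sits behind a small swappable interface, which is what lets a robot switch between local and remote inference without touching the rest of the pipeline.

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical interfaces for a modular speech-to-speech stack:
# each stage is a swappable backend (local or remote).

class SpeechToText(Protocol):
    def transcribe(self, audio: bytes) -> str: ...

class LanguageModel(Protocol):
    def respond(self, text: str) -> str: ...

class TextToSpeech(Protocol):
    def synthesize(self, text: str) -> bytes: ...

class EchoSTT:
    """Stand-in for a local STT engine (e.g. a Whisper-class model)."""
    def transcribe(self, audio: bytes) -> str:
        return audio.decode("utf-8")  # pretend the audio bytes are the transcript

class RuleLM:
    """Stand-in for a locally served LLM."""
    def respond(self, text: str) -> str:
        return f"You said: {text}"

class BytesTTS:
    """Stand-in for a TTS engine."""
    def synthesize(self, text: str) -> bytes:
        return text.encode("utf-8")

@dataclass
class VoicePipeline:
    stt: SpeechToText
    lm: LanguageModel
    tts: TextToSpeech

    def turn(self, audio_in: bytes) -> bytes:
        """One conversational turn: hear, think, speak."""
        return self.tts.synthesize(self.lm.respond(self.stt.transcribe(audio_in)))

pipeline = VoicePipeline(EchoSTT(), RuleLM(), BytesTTS())
reply = pipeline.turn(b"hello robot")
```

Swapping `RuleLM` for a cloud or on-device model changes nothing else in the pipeline, which is the point of the modular design.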

Apertus: Democratizing Open and Compliant LLMs For Global Language Environments

This presentation introduces Apertus, a fully transparent large language model initiative developed through collaboration between EPFL, ETH Zürich, and CSCS, representing a significant public-sector response to the concentration of AI development within private corporations. The talk examines its origin and fundamental challenges in current LLM deployment, such as opacity of training data, limited multilingual representation, and proprietary control. The initiative demonstrates how public research institutions can develop competitive language models while adhering to stringent ethical standards, including training exclusively on public data with copyright compliance and supporting over 1,000 languages.

The Illusion of Intent: Why Language is the First Line of AI Governance

"The model isn't sure", "The AI assistant wants to be helpful"…. We use these phrases daily as shorthand, but are they undermining your risk strategy? This talk argues that anthropomorphic language is a big sleeper risk in GenAI governance today. When we assign human verbs—thinking, deciding, lying—to probabilistic systems, we create an "Illusion of Intent." This linguistic drift isn't just a semantic annoyance; it is a governance hazard. It hacks human empathy, creates false trust, and obscures liability by treating system failures as character flaws. In this session, we will dismantle the habit of humanizing the machine. We will explore how precise, de-anthropomorphized language acts as a firewall for ethical safety and improves collaboration between Legal, Tech, and Product teams. Join us to learn why the most critical update to your governance framework isn't new code—it's a new vocabulary. Let’s stop governing the ghost and start governing the tool.

The Robot Renaissance – When Machines Do Our Jobs

Problem:

Kaufmann argues that humanity stands at a turning point: machines and Generative AI will soon outperform us in many routine and even complex tasks, while our social, economic, and cultural systems are still built around compulsory work and industrial‑age roles of “Homo faber.” Without a new vision, fears of job loss, loss of control, and foreign dominance (China/USA) over AI systems will shape the future instead of our own European values.

Approach:

Kaufmann reframes robots and Generative AI as tools, like excavators or calculators, without intrinsic power fantasies, and insists we design them to augment humans rather than replace them. He sketches a near future where every person works with several humanoid or software agents that handle routine tasks, enabling humans to focus on uniquely human, hard‑to‑automate activities. He also calls for sovereign Swiss and European AI (e.g., SwissGPT, AlpineAI) to embed local values, privacy, and trust into foundational Generative AI infrastructure.

Key takeaways:

Generative AI can trigger a shift from “Homo faber” to “Homo gaudens,” freeing people to pursue meaningful work and curiosity instead of mere survival.

A “Eutopia” – a realistic, golden age – is possible if productivity gains from AI are used to reduce compulsory work and secure public finances.

Trust, culture, and data protection will be the decisive “currency” in the global race for AI; Switzerland can lead by building reliable, privacy‑preserving Generative AI systems for governments, hospitals, and universities.

Keynote

Switzerland Hosting the 2027 AI Summit in Geneva

My talk will explain why Switzerland will host the next global AI Summit, what the summit aims to achieve, and how the GenAI community can contribute.

The Future of AI Discovery: From Generation to Real-Time Perception

Generative AI has led to an explosion of content creation. Yet discovering relevant and inspiring content and products online is becoming increasingly difficult. Most digital experiences remain backward-looking, optimizing for historical interactions rather than understanding user interests in the moment. As a result, users are often trapped in narrow recommendation loops, while new and diverse content struggles to surface. This talk introduces a shift from AI generation to real-time perception: AI systems that continuously interpret user behavior and context as they evolve within a session. Real-time perception enables adaptive discovery, contextual search, and more effective agentic experiences, moving beyond static personalization toward moment-by-moment intelligence. Drawing from real-world deployments in large e-commerce and marketplace environments, the talk covers perception model architectures, product and system design, and concrete case studies with measurable business impact. Real-time discovery is emerging as a must-have capability for online platforms, one that directly translates into measurable gains in engagement and revenue.

Conscious Humans Lead: The Ethical Decision Architecture for Safe GenAI Scaling

Many organisations are accelerating their GenAI initiatives, yet most struggle to move beyond pilots into safe, reliable, enterprise scale. The core problem is not the technology. It is the quality of human judgement, ethical awareness, and decision structures that surround it. In regulated industries in particular, unconscious decision habits, legacy governance models and unclear accountability create hidden risks that block GenAI's strategic potential. This talk introduces a Conscious Humans Lead approach to GenAI adoption. It emphasises that safe scaling begins with humans who understand the implications of AI, can recognise behavioural and ethical blind spots, and can guide the technology with clarity and intention. Drawing on applied work and research in financial services and other regulated environments, I explore how organisations can build ethical intelligence, design higher quality decisions and create value in ways that are sustainable, socially aware and aligned with long term shareholder expectations.

Participants will learn:

• why human behaviour and governance maturity are the central barriers to safe GenAI scaling
• the most common ethical and behavioural failure patterns that appear in real implementations
• practical methods for strengthening ethical intelligence and decision quality
• what leaders must prioritise to scale GenAI responsibly in 2026 and beyond

Attendees will leave with a practical framework for aligning people, decisions and AI systems to enable safe, ethical and accelerated GenAI scaling.

How the Leading Banks Kill Their Biggest Hidden Cost with AI

For many financial institutions, Quality Assurance (QA) has ballooned into a massive liability, costing millions and stalling deployment. This session exposes the playbook leading banks use to turn this cost center into a competitive advantage. We explore the shift from manual grunt work to human-supervised AI agents. This approach delivers the speed of automation with the safety of expert oversight. We will analyze real-world cases showing how this specific AI implementation slashes overhead and dramatically accelerates deployment.

Moderator

Main Stage Moderator

Moderator & Journalist@SRF
Tanya König is a multilingual moderator, anchor, and broadcast journalist. Since 2022, she has been working as an anchor and journalist for Swiss Television (SRF). Prior to that, she created, produced, and presented programs for CNNMoney Switzerland. Tanya is a graduate of the Ringier School of Journalism and holds a Bachelor of Arts degree in Sinology and Political Science from the University of Zurich.

Designing and Delivering the First Agentic Claims-Handling Platform in Insurance

Most companies are still experimenting with GenAI, but very few manage to scale beyond pilots—especially in highly regulated industries like insurance. In my talk, I will share how we moved from isolated GenAI use cases to designing and delivering the first Agentic AI–native claims-handling platform across three companies and five countries within a global insurance group. I will explain how Agentic AI changes process design, product architecture, and governance, and what it takes in reality to orchestrate multiple specialized AI agents while ensuring safety, reliability, and compliance with the AI Act, GDPR, and DORA. The talk focuses on practical learning: how to design agent roles and orchestration patterns, how to build trust and non-functional safeguards into autonomous flows, how to prepare the workforce for AI-centric operations, and how to balance speed with regulatory requirements. Attendees will leave with a concrete playbook for evolving from GenAI experiments to an AI-native operating model and with insights into the measurable impact on productivity, quality, and customer experience in claims handling.

The Playbook for a Sovereign Model-as-a-Service Platform

Senior Specialist Solution Architect@Red Hat

Relying on black-box AI APIs often means trading data sovereignty and cost control for convenience. This session provides a practical blueprint for architecting a private, sovereign Model-as-a-Service platform using powerful open-source models. Attendees will leave with a concrete playbook to master observability, enforce security policies, and transition from an API consumer to a platform provider.

We Taught an AI to Design in CAD. Here’s What Happened Next.

What happens when an AI learns to design? Raven, our AI-driven CAD tool, revealed unexpected possibilities and limitations in how machines interpret space, intent, and creativity. In this talk, I will share the story behind Raven’s development, the challenges of teaching design principles to an algorithm, and the surprising ways humans respond to an intelligent design partner. You’ll see how Raven accelerates exploration, elevates decision-making, and reshapes the early stages of architectural work. This presentation invites you to imagine a future where designers and AI collaborate to unlock ideas once out of reach.

Designing Context-First AI Systems

This session explores how to design context-first AI architectures where retrieval, memory, metadata, and workflow logic work together to ground LLMs in the right information at the right time. I will break down the building blocks of context engineering (vector search, structured context injection, guardrails, and adaptive memory) and discuss how to build systems that reduce hallucinations, improve reasoning, and deliver reliable real-world performance in business settings. Attendees will walk away with practical patterns for designing AI applications that adapt with high contextual intelligence.
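As a rough illustration of structured context injection (the toy relevance scoring and field names are my assumptions, not the speaker's system), retrieved chunks can carry metadata that is injected alongside their text, with a guardrail instruction constraining the model to the supplied context:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    source: str   # metadata: where the text came from
    updated: str  # metadata: freshness signal for the model
    text: str

def score(query: str, chunk: Chunk) -> int:
    """Toy lexical relevance: count query words present in the chunk."""
    return sum(w in chunk.text.lower() for w in query.lower().split())

def build_prompt(query: str, corpus: list[Chunk], k: int = 2) -> str:
    # Retrieve the k most relevant chunks, then inject them with metadata.
    top = sorted(corpus, key=lambda c: score(query, c), reverse=True)[:k]
    context = "\n".join(
        f"[source={c.source} updated={c.updated}]\n{c.text}" for c in top
    )
    # Guardrail instruction keeps the model grounded in the injected context.
    return (
        "Answer ONLY from the context below; say 'unknown' otherwise.\n"
        f"### Context\n{context}\n### Question\n{query}"
    )

corpus = [
    Chunk("hr-policy.md", "2025-11", "Employees get 25 vacation days."),
    Chunk("it-faq.md", "2024-03", "Reset passwords via the self-service portal."),
]
prompt = build_prompt("How many vacation days do employees get?", corpus, k=1)
```

In a production system the lexical `score` would be replaced by vector search, but the injection pattern (metadata plus guardrail) stays the same.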

It's Hard to Talk to People

Head of ML@Mavenoid

Building a generative voice assistant demo is easy. Getting it to production is hard. Scaling it to thousands of calls per day? That's where the real learning begins. This talk shares battle-tested lessons from taking a voice AI from prototype to handling thousands of calls daily - covering the unexpected challenges that no tutorial prepared us for.

From GPT to Agent Orchestration

Many organizations experiment with AI through chatbots and isolated pilots but struggle to scale impact in innovation. This session shows how the Innovation Lab of ZF’s Commercial Vehicle Division moves from ad-hoc AI use to orchestrated, agent-based systems supporting the full innovation process. Participants learn the maturity path from simple GPT usage to agent orchestration and AI-native innovation, and how this enables faster, more consistent, and better-governed innovation decisions.

AI Is Defined by Its Most Predictable Error

CEO and Co-Founder@Legartis

Problem:

In sensitive application domains such as legal AI, impressive demos are easy to produce — reliable systems are not. What matters is the ability to systematically measure, compare, and control model performance. Without transparent statements about precision, error rates, and limitations, GenAI in real-world workflows remains opaque and risk-prone.

Approach:

The talk shows why high-stakes AI requires a clearly defined gold standard: structured data, annotated samples, and systematic benchmarking of model performance against human experts. With the emergence of agentic AI, many control steps can be significantly accelerated and partially automated as operational human intervention is reduced. The core principle remains unchanged: transparency about output quality. Model precision and error rates must be measurable and clearly communicated.
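The gold-standard principle described here can be sketched in a few lines (the labels and data are invented for illustration; the talk's actual benchmarking is richer): model output is scored against expert-annotated samples, so precision and error rates are explicit numbers rather than impressions from a demo.

```python
# Illustrative sketch: measuring model output against a gold standard of
# expert-annotated samples (clause ids and labels are made up).

gold = {   # clause id -> expert label
    "c1": "compliant", "c2": "breach", "c3": "breach", "c4": "compliant",
}
model = {  # clause id -> model prediction
    "c1": "compliant", "c2": "breach", "c3": "compliant", "c4": "compliant",
}

def precision(pred: dict, truth: dict, positive: str = "breach") -> float:
    """Of everything the model flagged as `positive`, how much was right?"""
    flagged = [k for k, v in pred.items() if v == positive]
    if not flagged:
        return 0.0
    return sum(truth[k] == positive for k in flagged) / len(flagged)

def error_rate(pred: dict, truth: dict) -> float:
    """Fraction of gold-standard samples the model got wrong."""
    return sum(pred[k] != truth[k] for k in truth) / len(truth)
```

Here the model never falsely flags a breach (high precision) yet still misses one (nonzero error rate), which is exactly the distinction that gets lost when quality is judged from demos alone.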

Key takeaways:

• Why demos fail as a proxy for real AI performance
• Why human benchmarking remains essential even with agentic systems
• How agentic AI increases speed without removing responsibility for output quality

Keynote

When AI and Human Together Created a Fictional Alternative Rock Band

Imagine you're into playing instruments, singing, and producing music, but you lack the decades of practice. Still, you write great lyrics and just "want to make music". As a personal project/side quest, I bridged that gap by leveraging GenAI, my production skills, and professional post-production tools.

The result is Windlereye, a fictional alternative rock band with over 100 songs. Some of them are even good!

In this talk, I’ll demystify the "one-click" misconception by sharing details on my hybrid workflow and the workarounds I used to clear the biggest GenAI hurdles (vocal consistency, artifacts, instability). I'll explain how GenAI made me a better lyricist, and how I made my first whopping $0,000,001.12 in royalties.

Finally, I'll touch on my non-expert opinion on ethics and legal matters of this new frontier.

AI Value First – How Tech Leaders Avoid Zero-Impact AI

AI rarely fails because of the algorithms, but because organizations focus on the technology instead of measurable outcomes. In this presentation, we share a value-first approach proven in the Swiss market that connects business priorities with strong technical delivery, so that AI creates measurable business impact rather than prototypes. Our method combines three elements that must work together: clear business goals, sector-specific insights, and pragmatic hands-on engineering. The journey starts with value discovery, which is then translated into clear KPIs and success criteria. We then demonstrate the impact through improvements in data, processes, and systems. This builds the foundation required for reliability and sustainable adoption. Only then can AI be introduced at scale for smarter automation, faster decisions, and consistent execution. The presentation concludes with real examples that illustrate how this method translates AI strategy into tangible business results, and what technical capabilities are required for end-to-end delivery.

Driving AI Value as an SME: Organizing Change & Adoption

Problem:

Generative AI has shifted from a technological novelty to a critical organizational challenge. However, for SMEs and agencies, the reality isn't about infinite R&D budgets or massive teams; it is about limited resources, short ROI expectations, and a "fully booked" workforce that struggles to find time for innovation. How do you move from "doing the right thing" to "doing things right"?

Approach:

As the AI Lead at Webrepublic, I will share how we kill the buzz to focus on functionality that brings immediate value today. We view AI not just as a new tool, but as a "second chance" to fix essential data structures and define processes. This talk explores how we navigate the difficult balance of scaling with a small AI core team while managing diverse stakeholder fears regarding job security and business model disruption.

Key takeaways:

Organizational shift: strategies to push participation and adoption across departments, even when teams are at capacity.

Smart scaling: how to secure internal budgets, build vendor partnerships, and track adoption with limited manpower.

Real impact: methods to moderate innovation conversations and prove the direct value of integrated AI to clients.

The Zero Partner Fund: Building an AI-Native VC

VCs evaluate AI startups. But what if the VC itself runs on AI? At Ellipsis Venture, we've built agentic systems for deal flow, diligence, and fund ops—turning a two-partner fund into something that operates like a much larger team. Moreover, if AI can help run a fund, can it also help founders build companies from ideation through validation to MVP? This talk is part demo, part manifesto. I'll show the tools we actually use, and present a framework for investors and founders building companies where AI isn't a feature but the foundation. If you're skeptical that agents can do real work, come and join the conversation.

Redefining Industrial Reliability & Safety: The Role of AI-Driven Risk Analysis

As industries such as transportation, energy, defense, construction, and space evolve toward increasingly complex, autonomous, and interconnected systems, the demand for robust safety and reliability measures continues to intensify. Prominent failures in recent years across various sectors have underscored the limitations of traditional, resource-intensive risk analysis methods and the urgent need for more effective solutions. Reshape Systems SA, a CERN spin-off, addresses these challenges with a proprietary, AI-driven SaaS platform purpose-built for comprehensive risk analysis across the entire project and product lifecycle. Leveraging advanced generative and explainable AI (XAI), our platform automates time-consuming hazard assessments, integrates supplier and design data, and ensures full transparency for expert oversight—meeting rising regulatory and customer demands for trustworthy, auditable AI. This automation enables early identification of potential failures, streamlines engineering processes, and delivers up to 80% savings in manpower, accelerating time-to-market while enhancing safety and reliability.

AI Journey of the ZKB

This talk shares Zürcher Kantonalbank’s (ZKB) journey in developing and implementing its Artificial Intelligence strategy. It outlines how the approach was crafted, with a focus on technology, people, governance, and real-world applications. Selected use cases are highlighted to illustrate how the strategy is being applied in practice. The session also describes where ZKB stands today, reflects on key lessons learned along the way, and outlines the next steps as the organization continues to unlock the potential of AI.

Scalable AI Adoption

"The $10M Question: Why Is Nobody Using Your AI Tools?" You've invested. The tools are deployed. The licenses are running. But only 15% of your employees actually use AI productively. 74% of companies see zero ROI from their AI investments – not because the technology fails, but because they're ignoring the critical 70%: people and processes. In this talk, I'll show you: Why the AI Adoption Gap is widening in 2025, not closing What the 5% leader companies do differently (Spoiler: They invest 70% in enablement, not tools) How our Parnters went from 15% to 85% adoption – in under 6 months. With concrete numbers, measurable results, and a playbook you can implement the next day. For everyone tired of burning millions on unused AI.

From Data Chaos to Cognitive Enterprise: How SLMs Will Transform Governance

In a world of fragmented data, slow decisions, and organizational silos, true intelligence doesn’t come from bigger models, but from smarter, domain‑aware ones. Instead of building massive models, the real breakthrough is weaving meaning into the systems we already use. This talk shows how domain‑optimized Small Language Models harmonize data, processes, and decisions across complex architectures. From semantic extraction to real‑time conflict detection, learn how SLMs evolve into intelligent agents that strengthen architecture, reduce rework, and drive coherence at scale. Discover how a semantic fabric turns complexity into clarity, enabling real‑time understanding, seamless collaboration, and intelligent governance as organizations move toward a truly cognitive future.

Scaling GenAI from POC to Enterprise Readiness at Lufthansa Group

While recent studies suggest that 95% of AI pilots fail, we want to present a practical case study from the 5% that succeed. Lufthansa Group's Digital Hangar, in partnership with Cognizant Netcentric, successfully scaled SkAI—a production-grade GenAI Hub now deeply embedded across their internal data analytics ecosystem and beyond. Moving from initial prototype to enterprise readiness required solving real challenges: governance to ensure compliant usage, seamless integration with existing enterprise systems, and adoption and training that drove 75% user uptake. The impact is tangible: 25% of users save 4–8 hours weekly on routine tasks. Attendees will gain actionable strategies for overcoming organizational barriers, fostering high-impact engagement measured through clear KPIs, and technical insights into how they successfully transitioned their agentic platform from proof-of-concept to sustained business value. This session will be presented by: Margot Löwenberg (Head of Data Analytics, Swiss International Air Lines) and Nicolas Athanasopoulos (Principal Data & AI, Cognizant Netcentric).

How to Make the Human/Ethics Side Work When Applying GenAI

Managing Director, Data Special Project@Julius Baer
Culture can make or break any strategy and is resistant to change. How do you bring your culture on the AI journey? Which skills are going to ensure future success? What happens if you don't factor in the human and ethical elements as part of your AI journey? Come and learn why the human and ethical sides are so critical to sustainable strategic outcomes.

How Zurich Airport Plans Responsible AI & Autonomous Solutions at Scale

Zurich Airport is exploring how AI-driven and autonomous solutions can be introduced responsibly in a complex and highly regulated environment. While the potential is significant, moving from experimentation to operational use requires more than new technology — it requires careful preparation and clear structures. This talk shares how Zurich Airport is approaching this journey. We will outline the steps taken to strengthen digital foundations, clarify governance responsibilities, and ensure compliance, safety, and transparency when evaluating AI use cases. Rather than presenting large-scale transformation, the session focuses on practical progress, lessons learned, and the realities of implementation. Examples from the aviation context will illustrate where AI shows tangible value today, where limitations remain, and how expectations are managed internally. The emphasis is on building understanding, aligning stakeholders, and creating conditions that allow innovation to develop in a controlled and responsible way. The session offers a realistic perspective on what it means for an airport to move from isolated experiments toward a more structured approach to AI — acknowledging both opportunities and constraints along the way.

AI Adoption in a Global Manufacturing Company: From Pilots to Real Impact

AI Transformation Manager@SFS Group
Manufacturing organizations are built on processes refined over decades. While AI promises efficiency and automation, introducing it into such environments often creates friction — confusion, resistance, and loss of trust among employees. In this session, I will share how AI can be introduced into a mature manufacturing organization without disrupting existing workflows, and how thoughtful AI adoption can improve day-to-day work for employees rather than complicate it. Drawing on real-world experience leading AI transformation in a global manufacturing company, this talk focuses on integrating AI into established processes in a way that feels natural, supportive, and reliable. Instead of replacing people or redesigning everything from scratch, the approach emphasizes augmenting existing roles, reducing cognitive and operational load, and building trust through transparency and gradual adoption. I will walk through how AI solutions were embedded into daily operations, how employee concerns were addressed, and how adoption was driven through practical value rather than top-down mandates.

Key takeaways:

- How to introduce AI into long-established processes without creating resistance
- Why employee trust is critical for successful AI adoption
- How AI can improve individual productivity while strengthening the overall system
- Practical lessons from integrating AI into real manufacturing operations

The Sovereign AI Stack No One Else Can Switch Off

CEO & Co-Founder@Xinity
In April 2025, a US president had Microsoft block European Court of Justice accounts overnight. No negotiation, no warning, just a switch flipped from across the Atlantic. For European enterprises handling sensitive data under GDPR, FINMA, or the upcoming EU Digital Networks Act, this isn't a hypothetical risk. It's a preview. Yet 92% of European enterprise AI still runs on US infrastructure. Most "sovereign cloud" offerings are policy promises: contractual protections that can be rewritten or revoked. Architectural sovereignty is different: your hardware, your building, your data, your AI never leaving your infrastructure. In this talk, Alexander Zehetmaier shows what a truly sovereign AI stack looks like in production: OpenAI-compatible APIs running on enterprise-owned hardware, deployed today at European companies protecting journalistic sources and trade secrets, with an 83% median cost reduction versus cloud equivalents. The session culminates in a live open-source launch: Xinity's API Gateway goes from private to public on GitHub, on stage, making a European sovereign AI infrastructure stack freely available for the first time. Key takeaways: why policy-based sovereignty is theater, what architectural sovereignty looks like technically, and how to deploy it today.

The Power of Hyper Personalization in Banking: Citadele Banking Group Case-Study

For years, Citadele bank has led our region in customer experience, but the expectations of banking clients are evolving rapidly. Today, customers compare their banking interactions not to other banks, but to the seamless, personalized journeys delivered by global social media platforms and world‑class e‑commerce leaders. To understand how the industry is responding, we engaged with top banks worldwide and discovered three common challenges shaping the future: the need to strengthen collaboration between Business and IT, unlock greater data availability, and choose whether to innovate slowly alone or progress faster together. Amid the noise surrounding artificial intelligence, one theme consistently proves real and impactful—hyper‑personalization. Banks are turning to AI‑driven personalization not only to boost commercial performance but to elevate customer experience to entirely new levels. In this presentation, we share how Citadele is applying hyper‑personalization in sales automation, with concrete, real‑world examples. Most importantly, we demonstrate how to measure the true value of these initiatives and showcase a major corporate success story that highlights the tangible business impact of AI done right.

From Data to Wisdom: Designing Robust Human – AI Decision Systems

Advisor, Digital Natives & Scale-Ups@Databricks

The future of decision-making will be won by organizations that deliberately pair human judgment with AI at scale: decision flows where machines grind through the data and people bring expertise, nuance, and accountability.

This session is for executives and senior leaders who don’t need to code models but do need to own the impact of AI‑infused decisions. We will unpack how data becomes wisdom through four layers, giving participants a clear, practical mental model for designing robust Human‑AI decision systems that are reliable, auditable, and safe to use in the boardroom.

How AI Agents Negotiate: Why Governance Matters to Scale Good Intentions

Researcher in AI and Decision Making
As organizations increasingly rely on AI systems to support collective decisions, a key question emerges: how do AI agents resolve conflicts when there is no “right” answer? In this talk, I share insights from nearly 500 automated negotiation experiments involving leading AI models (Claude, GPT, and Gemini). The agents faced real game-theoretic dilemmas with conflicting preferences, voting paradoxes, time pressure, and institutional consequences for failure. Rather than focusing on who “wins,” the experiments reveal something more important: most successful negotiations were driven by agreement on process, not on outcomes. Over 85% of simulations reached agreement - and in virtually all successful cases, agents did so by adopting structured governance mechanisms such as voting rules, delegation, and agenda-setting. When governance was weak, even advanced models frequently failed. The key takeaway is that governance design shapes behavior more than model intelligence alone. For organizations deploying AI in procurement, compliance, or decision support, provider choice and institutional rules are strategic decisions, not technical details.
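One of the governance mechanisms mentioned, voting rules, can be made concrete with a tiny sketch. This is illustrative only (a Borda count with a deterministic tie-break, with invented agent names and options), not the experimental setup from the talk: the agents commit to the procedure up front, so the outcome follows from process rather than persuasion.

```python
from collections import defaultdict

def borda_winner(rankings: dict[str, list[str]]) -> str:
    """rankings: agent name -> options ordered best-to-worst.
    Each option scores (n-1) points for a first place down to 0 for last."""
    scores: dict[str, int] = defaultdict(int)
    for order in rankings.values():
        n = len(order)
        for position, option in enumerate(order):
            scores[option] += n - 1 - position
    # Deterministic alphabetical tie-break keeps the procedure predictable.
    return max(sorted(scores), key=lambda o: scores[o])

# Three agents with conflicting preferences over the same options.
rankings = {
    "claude": ["supplier_A", "supplier_B", "supplier_C"],
    "gpt":    ["supplier_B", "supplier_A", "supplier_C"],
    "gemini": ["supplier_B", "supplier_C", "supplier_A"],
}
winner = borda_winner(rankings)
```

No agent gets its full preference order, yet all can accept the result because they agreed on the rule before seeing the outcome, which mirrors the talk's finding that agreement on process drives successful negotiations.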

Gen AI in the News

Generative AI is beginning to reshape how news is created, translated, verified, and delivered — but for a global news organization like Reuters, innovation must go hand in hand with trust. In this talk, I will share how Reuters is approaching generative AI as both a powerful technological enabler and a responsibility‑critical capability.
Drawing on practical experience, the session explores how generative AI is being applied across the news lifecycle, including support for journalists and editors, workflow efficiency, multilingual content, and product experiences. The focus is not on experimentation for its own sake, but on real deployment decisions in a high‑stakes environment where accuracy, independence, and transparency are essential.
The talk will highlight key design choices and trade‑offs: where generative AI delivers clear value, where it must be constrained, and how human editorial judgment remains central. I will also discuss governance, risk management, and cultural adoption challenges when introducing generative AI into a trusted media organization.
The session concludes with practical lessons for media leaders and technologists navigating generative AI in environments where credibility is the product.

Generative AI for Evidence-Based Hiring in Talent Acquisition and Executive Search

Traditional executive search and recruitment remain largely relationship-driven and under-digitised, with limited transparency into how candidates are evaluated and shortlisted. Data and AI are still rarely used in a structured way for leadership decisions. Tagliabue & Partners, a specialist executive search firm, partnered with thirty3, an AI solutions company, to rethink this model. Together, they built a platform that brings structure, transparency, and scalability to the hiring workflow. In this talk, we will showcase the system that automates batch CV screening and generates client-ready longlist decks with structured, evidence-based candidate evaluations — demonstrating how Generative AI can transform high-stakes talent decisions.

Solution Study: From Prototype to Production – Scaling Trusted GenAI

Senior Director Product Marketing@Cloudera Switzerland GmbH
As organizations evolve from digital to AI-native, data serves as the foundation for innovation, enabling AI to transform raw information into actionable insights. Cloudera powers modern AI-driven use cases, including Generative AI, by unlocking the full potential of enterprise data. This session will explore how Cloudera can help your organization accelerate AI deployment to production, without compromising performance, accuracy, or security.

Shadow AI: The Trojan Horse of AI Security

Following the rapid rise of autonomous AI agents in 2025, the enterprise security landscape is undergoing a critical transformation. While agentic AI offers massive productivity gains, it deepens the security risk through sophisticated attack vectors like memory poisoning and privilege abuse. At the same time, it also broadens the perimeter through Shadow AI - the unsanctioned use of AI-related tools by employees. This leads to a dangerous "visibility gap". Most enterprises lack the monitoring tools to detect unauthorized agentic workflows or local Large Language Model (LLM) deployments, leaving them exposed to data exfiltration and other attacks. The impending regulations around AI, such as the EU AI Act, are shifting liability to enterprises, making ignorance of Shadow AI a costly legal and financial risk. To limit their exposure, enterprises need a strategic pivot from traditional block-and-deny tactics toward radical observability. Effective defense requires the automatic detection of unknown assets, cross-domain data correlation, and the rigorous management of non-human identities (NHIs). Security operations must evolve to employ agentic defenses - utilizing AI to monitor and counter other AI agents. To survive this "AI Wild West", companies must treat security as a business enabler rather than a barrier. Success in 2026 depends not on building higher walls, but on achieving total visibility and implementing automated, agent-driven response capabilities to secure the future of digital work.

Are You a Target: Predicting Cyber Attacks with AI

Europe is no longer facing random cyber attacks. We are facing coordinated, state-sponsored operations now powered by AI. From ransomware to infrastructure sabotage, Advanced Persistent Threat (APT) groups are targeting enterprises, supply chains, defense contractors, and critical infrastructure with precision and speed. The reality is that most organizations are already being targeted; they just don’t see it. In this talk, we reveal what happens before an attack actually hits. We demonstrate how AI can detect early indicators of ransomware and advanced threats by learning attack patterns and correlating telemetry, predicting attacks before impact.

Document Intelligence Without the Black Box: How Thesify Makes AI-Assisted Writing Traceable, Auditable, and Trustworthy

Co-Founder & CEO@thesify.ai
Every organisation produces documents it cannot afford to get wrong: technical reports, regulatory filings, compliance documentation, proposals. Generative AI can accelerate this work, but it introduces a problem most tools ignore — when AI contributes to a document, who is responsible for what? Thesify is a Swiss-made document intelligence platform that solves this with two products: AI agents for collaborative writing and a structured, criteria-based review engine. Unlike conventional AI writing tools, every AI contribution in Thesify is visible, attributed, and reversible. The result is a system where authors stay in control, organisations can audit every edit, and AI becomes a transparent collaborator rather than an opaque shortcut. In this pitch, we show how Thesify brings the traceability standards of scientific peer review into enterprise document workflows, and why trust — not speed — is the real bottleneck for GenAI adoption in high-stakes writing.

How to Create Trustworthy AI Solutions for Regulated Industries

Business Development Operations Director@Oxagile
We are drowning in AI demos that dazzle in the moment but fail in production. Why? Because Generative AI is incredibly good at returning an "echo" of your request - smooth and confident - but terrible at making a commitment. The biggest bottleneck for GenAI adoption in 2026 isn't intelligence or latency - it's trust. We will present a real case in which Oxagile’s 20+ years of engineering experience were converted into a trustworthy solution for a regulated Swiss organization (Pharma, FinTech).
Workshop

Systems of Action: Data, Decisions and the New Agentic Operating Model

Senior Industry Solutions Consultant@MongoDB

Enterprises have long relied on systems of record: databases and applications optimized for capturing and reconciling business data as the authoritative source of “what happened”. These human-centric architectures with rigid schemas and batch processes create barriers for agentic AI, which requires perceiving context, reasoning, and taking real-time action. This session explores the shift to systems of action, where an intelligence layer augments (rather than replaces) existing systems of record, enriching business objects with agent-generated insights to enable faster, smarter decisions.

Open Machine Learning Ecosystem

Building with open-source AI models has a lot of benefits: it ensures privacy, gives the application owner control and transparency over the model lifecycle, and cuts costs at scale. In this talk, I will go through the state of open-source AI, including the workflows and tooling for building with open models.

Building Chatbots in Minutes: How Migros Made GenAI Fast, Secure and Enterprise‑Ready

GenAI promises to transform how organizations work — but turning that promise into secure, scalable, real‑world solutions is often harder than it seems. In this talk, discover how Migros built a flexible, enterprise‑grade chatbot platform that enables teams to create powerful assistants in just minutes, without compromising on security, governance, or quality.
Two years ago, we created our first chatbot — and quickly learned that technical innovation was only one part of the challenge. Alongside building early prototypes, we had to align stakeholders and define governance models that satisfied both business needs and strict security requirements. This journey laid the foundation for what would eventually become our Chatbot Creator Platform.
In this session, we’ll show you that platform in action, share the lessons we learned while creating it, and explain how we built a system that now empowers teams across Migros to ship AI assistants in record time. You’ll walk away with practical insights for scaling GenAI in a complex enterprise environment — beyond the hype and toward real, sustainable impact.

Migrating Complex Systems with Agentic AI

Over 10,000 SAP BW systems face end-of-support by 2030 — a migration challenge so complex that traditional automation cannot scale to meet it. Manual migrations take two or more years, are error-prone, and require scarce expertise. We're using agentic AI to change that.

We'll show how multi-agent systems reverse-engineer legacy platforms, construct knowledge graphs of thousands of interdependent objects, and autonomously generate complete modernized systems — including the migration tooling itself.

Our three-phase framework — Reverse, Rethink, Rebuild — is a generalizable pattern for any complex system transformation, compressing years of work into months with automated validation at every step.

Attendees will leave with practical insights on multi-agent orchestration, knowledge graphs for spec-driven development, synthetic data strategies for safe testing at scale, and hard-won lessons about where AI excels versus where human judgment remains essential.

Grounded in production experience, not theory.

Keynote

The Intelligence Bomb. Do We Want to Master or Submit?

Not a single day goes by without a new record investment in the field of artificial intelligence. The unit of measurement is no longer billions, but trillions and beyond. The arms race between tech giants seems limitless – and in any case defies the physical limits of our planet.

AI is infiltrating all areas of human activity at an unimaginable speed. Mastering artificial intelligence is the most significant quantum leap since the advent of the atomic bomb.

Does the future belong to the American and Chinese tech giants? Are Switzerland and Europe doomed to follow and submit?

There is an alternative. Sovereign, collaborative, efficient and ethical.

Giotto.ai seeks to exceed the current capabilities of AI to push the boundaries towards artificial intelligence capable of going beyond memorisation – towards reflection. Giotto.ai is developing technology that stands out for its ability to generalise tasks, solve problems and offer transformative potential for industries and society in general.

Way more efficient than large LLMs and infinitely less energy- and data-intensive, the solution advocated by Giotto.ai is based on two fundamental pillars: sovereignty and efficiency.

- Yes, it is possible to escape the frantic race for resources and infrastructure.
- Yes, it is possible to assert Swiss and European leadership in artificial intelligence that serves society, democracy and humanity.

Switzerland and Europe have all the talent, universities and centres of expertise needed to succeed. Together, let's create the networks of investors and developers, and the conditions, to control our own destiny.

Submitting is not an option. We can decide.

AI Concierge - A Solution to Generate Counter Offers Instantaneously

Selling insurance or banking products usually involves analysing a lot of documents. With the AI Concierge, we greatly simplify our insurance and banking agents’ lives. The concierge analyses third-party car insurance contracts as well as securities account statements and instantaneously creates a counter offer or an investment proposal.

Demystifying Humanoid Robots & Physical AI

Dr. Jorge Peña Queralta
Co-Founder and CEO@Binabik.ai
Problem: humanoid robots and Physical AI seem poised to enter our lives at any moment. At the same time, adoption beyond pilot projects is scarce. How ready are humanoids to enter our homes and industry? Approach: we will look into how we actually teach AI models to control robot bodies, and the strengths and limitations of current methods. How do we ask the right questions to get the most value out of Physical AI today? Key takeaways: identify how to extract the most value from Physical AI and robotics today.

Constraining AI Until It Works

Software Engineer@SkipLabs
The compiler changed everything. Before it, programmers worked in assembly — painstaking, error-prone, impossible to scale. The compiler introduced a contract: if it finishes, the output works. You don't read the assembly. You trust it. That contract unlocked decades of software progress. Today, AI is writing more code than ever. But that contract is gone. Agents say "done" when the code is broken. Engineers spend more time debugging AI output than shipping. What if we could bring that contract back? In this talk, we'll explore how combining opinionated execution, reactive computation, and sound static analysis can restore deterministic confidence to AI-generated code — and what it takes to make "done" mean done again.

How to Use AI Responsibly in War: The Red Cross Case

Technologies and Conflicts@Independent expert
"Move fast and break things". At the International Committee of the Red Cross, breaking things means putting the lives of the most vulnerable at high risk. In this talk, the audience will learn how this 160-year-old humanitarian organisation has developed its Artificial Intelligence policy. How it found ways to take into account complex working environments: wars, autonomous weapons systems, polarization, politicization of aid, hate speech, scarce resources and data, people who have lost their homes or family members. How it embedded its fundamental principles of humanity, impartiality, neutrality and independence that are critical to allow access to communities affected by armed conflicts. And how, at the same time, it sought to foster innovation in an accountable, responsible and fair manner.

Inclusive Teams: Slaying Bias in GenAI Products

Co-Founder & Managing Director@TechFace

Biased GenAI products stifle innovation, from flawed data to homogenous teams missing real-world blind spots. How do you build diverse teams that catch biases early and deliver superior products? What rituals and skills ensure ethical, inclusive development from ideation to production? Join Priska Burkard to discover why diverse teams are your edge in GenAI success.

From Pilots to Production: How Enterprises Build, Orchestrate and Secure AI Agents

Enterprises are piloting AI chatbots, but most projects stall at FAQ and RAG. The assistants answer questions, yet cannot reliably execute work across real services. They often lack memory and proactivity, and remain stuck in silos. In this talk, I present an agentic execution layer for production AI agents. We show how to orchestrate a multi-agent system that turns your APIs and workflows into domain agents, adds a persistent memory layer, and provides observability, replay, and regression testing so teams can ship changes with control. Key takeaways: a reference architecture blueprint for enterprises, and a platform for observability, tracing and versioning.

From Personal Productivity to Reimagining Work

Group Head of AI Engineering & Platforms@Zurich
The problem: AI is moving faster than most organisations can respond to. $285B was wiped from legal and finance tech almost overnight when foundation model companies started competing directly with their own customers. Most professionals know AI matters but don't have a clear picture of what to do, in what order, and why. The approach: We walk through how AI adoption actually evolves inside organisations — from individual productivity tools through to full agentic workflows. We cover the tech investments that matter, how to avoid LLM lock-in, and what the shift to agentic AI means in practice, including MCP architecture and RAG pipelines explained without the jargon. Key takeaways: a four-layer AI strategy you can map your organisation against today; why your competitive advantage lives in Layer 3 — proprietary data plus a frontier model; and a three-phase agentic AI roadmap: connect, build your first agent, scale to multi-agent.

Reinventing Industries with GenAI: Bridging Everyday Needs via Intelligent Automation

The talk outlines how generative AI enables Credura's intelligent insurance advisor and how AI impacts the software development space as a whole. The talk will showcase Credura's GenAI-powered product capabilities, which include Intelligent Document Understanding, AI-Generated Recommendation Narratives, and a client-centric approach to insurance advisory. S-PRO touches on the GenAI engineering backbone of such products and how software development in general is being impacted by Claude Code and other new GenAI tools.

Building the Trust Stack for Multi-Agent Collaboration

Most enterprises haven't shipped multi-agent AI to production yet — but the ones that have are hitting a wall nobody warned them about. The bottleneck isn't intelligence; it's trust. The moment agents cross organizational boundaries, discovery, identity, shared semantics, and secure execution all break down. An analysis of 1,600+ multi-agent execution traces found that system design issues and inter-agent misalignment account for over 76% of all failures (Cemri et al., 2025). Drawing from hands-on experience building multi-agent systems in fintech, this talk presents a four-layer "trust stack" and maps each layer to real protocols at different maturity stages: Agent discovery via A2A and Agent Cards (a Linux Foundation standard with 150+ backing organizations); zero-trust federation patterns consolidating from the ACP-into-A2A merger; Verifiable Semantics to prevent silent "semantic drift" failures; and AWCP for secure workspace delegation when agents need to compute, not just chat. Attendees will leave with a clear architectural model for layering trust into multi-agent systems, an honest maturity map — what to deploy now, what to design for, and what to watch — and concrete failure patterns to avoid when crossing organizational boundaries with agents.

From Grassroots to Productivity: Sonova’s GenAI Journey in R&D

Generative AI adoption is accelerating across industries as organizations integrate it into everyday workflows to improve productivity and decision‑making. At Sonova, we are on the same journey within Hearing Aid R&D. This talk shares Sonova’s experience of adopting Generative AI in R&D workflows, starting with grassroots initiatives in software development and expanding into a broader set of use cases. A central focus of the journey has been responsible adoption. Introducing GenAI in medical‑device R&D requires careful consideration of security, quality, safety, and intellectual property. The session concludes with concrete lessons learned from our adoption journey to date and a practical overview of how GenAI is being applied in Hearing Aid R&D today.

Beyond the POC: Scaling AI Agents with Control and Governance

Strategic Sales Engineer@Dataiku
As organisations move past the initial excitement of Generative AI, they face a sobering reality: while building a single AI agent is easy, scaling agentic systems to deliver true enterprise impact is remarkably difficult. Currently, most organisations have moved fewer than 30% of their GenAI experiments into production. The primary hurdles? Siloed experiments, a fragmented LLM ecosystem, and a lack of inherent trust in autonomous decision-making. In this session, Serena Yuen, Strategic Sales Engineer at Dataiku, will discuss how organisations can transition from isolated AI concepts to robust, controlled agentic systems that deliver business value. Discover how a secure LLM Mesh and robust AgentOps practices can help you move beyond disparate POCs and cross the gap into full-scale enterprise AI transformation, along with all the principles data and architecture teams need to consider for true AI success. You’ll leave the presentation with a holistic understanding of how AI at scale can deliver value with your people, orchestration and governance in mind.

Towards Business Superintelligence

Managing Consultant@D ONE
Superintelligence is increasingly perceived as sophisticated orchestration of autonomous, collaborating AI agents. While the potential for full process automation is widely recognized, the current immaturity of agentic systems requires a professional bridge between state-of-the-art research and reliable business integration. My approach to addressing this gap is to run a research lab in parallel with consulting work, allowing for the direct translation of academic findings, e.g. in automated information extraction, into production-ready enterprise solutions. The ultimate goal is Business Superintelligence: a practical reality where an ecosystem of agents understands a company’s strategy, data, and services as intimately as a human coworker does. Achieving this requires a disciplined path: investing in agentic research, building robust data infrastructure, applications and services with rich CLIs, and taking responsible risks through well planned and executed pilots. To illustrate this, I will share the specific architectural choices and hard-won lessons from our recent retail deployments, providing a realistic look at what it actually takes to get agentic systems into production.
Keynote

Interactive AI Agents for Enterprise: The Realistic and Practical Way

Technical Director, GenAI & Data Science@Unit8
Senior Data & AI Consultant@Unit8
AI agents offer new possibilities for automating work in enterprises, but their adoption is often limited by constraints around security, governance, system integration, reliability, and operational complexity. This talk discusses common challenges encountered when integrating AI agents into existing enterprise environments, shares practical lessons learned from real deployments, and includes a brief demo of the Unit8 GPT Wizard to illustrate how agentic workflows can be embedded into real systems.

What the 1980s Got Right About AI and BI

Before LLMs, before dashboards and data warehouses, the 1980s had two disciplines quietly working on the right problem from opposite directions. Decision Support Systems asked: how can machines help humans make better decisions? Knowledge Engineering asked: how do we encode what experts know so that AI can reason over it reliably? Both asked the right question. Both fell short. The technology wasn't ready. 40 years later, we are living through an AI renaissance. Organizations are deploying AI agents that query databases, generate charts, and surface insights autonomously. And as organizations experience issues with using raw AI (wrong numbers, hard-to-verify answers, models that hallucinate with confidence) the lessons of the 1980s are coming back into focus. Reliable AI isn't about better prompts. It's about engineering knowledge. Key takeaways:
- Why did these pioneering disciplines from the 1980s fail, and what has changed now?
- What does Knowledge Engineering mean in the age of LLMs?
- How does Knowledge Engineering turn Agentic Analytics from an "impressive demo" into a reliable system?

Google DeepMind AI Stack for Developers

Developer Experience Lead@Google DeepMind
The landscape of generative AI is evolving rapidly. In this interactive session, Omar will guide you through the Google DeepMind ecosystem. We will explore the latest advancements in the Gemini family of models, focusing on capabilities and applications like image generation and agentic features. Whether you are looking to integrate open models like Gemma or leverage enterprise-grade APIs, you will leave with a clear understanding of how to start building with state-of-the-art models.

Winning the Digital Sovereignty Race

From generative AI transforming software development to advanced AI systems powering industry, defense, and public services, the mission for Europe is clear: AI sovereignty is no longer a political slogan, it is an economic, technological, and security imperative.

Agentic AI @ IKEA Supply

The session gives an overview of how IKEA is applying "Agentic AI" within the supply chain organization. The session will explain the problem IKEA is facing in a supply chain that has grown organically over the last 80 years. How did IKEA approach this challenge? How do coworkers now use Agentic AI support in their daily work? What have been our key learnings on this project?

AI agents are rapidly becoming integral to enterprise workflows, but today's agentic systems operate in silos. They can't verify each other's identity, negotiate payments, or coordinate tasks across organizational boundaries. This creates a bottleneck: as agents become more capable, the lack of interoperability and accountability limits their real-world adoption. Masumi Network addresses this by providing decentralized infrastructure for the AI agent economy, built on Cardano. The protocol gives every agent an on-chain identity, enables trustless agent-to-agent payments via escrow smart contracts, and offers a public registry for discovery and reputation scoring. Combined with multi-agent orchestration, this allows autonomous agents to discover, delegate, and collaborate on complex tasks without centralized control. Enterprises are already using this infrastructure through Sokosumi, our open marketplace for AI agents, to deploy and transact with agents in production. In this talk, Patrick Tobler, Founder of utxo AG and co-creator of Masumi, will share the architecture behind the protocol, real-world enterprise adoption, and why blockchain-based trust layers are essential for scaling agentic AI beyond single-vendor ecosystems. Key takeaways: Why AI agents need decentralized infrastructure, how multi-agent orchestration works in practice, and how enterprises are already building on it today.

The Future Nervous System of Businesses: Is Agentic AI the New Operating System?

Manager Agentic AI Strategy@Wavestone
Something fundamental is changing. Organizations across industries are evolving from passive transaction processors to proactive partners — and Agentic AI is the force driving that transformation. This is far more than another wave of automation or just a smarter chatbot. What truly sets Agentic AI apart from everything before it? Intelligent, context-aware systems are replacing rigid processes and forms with something far more human — systems that communicate naturally and anticipate needs before they're even expressed. A concrete picture emerges of what organizations could look like in the future: personalized, predictive, and seamlessly embedded into the everyday lives of the people they serve. Grounded in technological and societal forces already in full motion, the central question becomes clear — is it time to stop treating Agentic AI as an add-on and start building it as the new foundation of strategy?

Testing LLM Outputs: Caging the Wind or Just Another Day in the Office?

Software Development Engineer@Adobe
Building LLM-based apps is all fun and games until they start growing. At that point, and particularly as your team gets bigger, you can no longer rely on your "dev intuition", and every little change can cause a "butterfly effect" with unexpected consequences. In this talk, I’ll share our journey from a small, easy-to-test LLM app to a complex, skills-based system where one change could ripple across the whole codebase. You’ll hear how we struggled with custom testing solutions, why we eventually turned to Promptfoo, and how we built a system that not only catches regressions but also evaluates solution quality and maintainability. Along the way, I’ll show what worked, what didn’t, and the lessons we learned while trying to tame a system that sometimes felt more like caging the wind than writing tests.
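The kind of regression check such a suite automates can be sketched in a few lines: instead of exact string matching, assert on properties of a model's output. This is a minimal sketch, not the speakers' system; the `summarize` stub, the expected substrings, and the length limit are invented for illustration, and a real setup would call the model and run checks through a tool like Promptfoo.

```python
# Property-based regression check for an LLM-backed function.
# `summarize` is a stand-in for a real model call (invented for illustration).
def summarize(text: str) -> str:
    return "Summary: " + text.split(".")[0] + "."

def check_output(out: str, must_contain: list[str], max_len: int) -> list[str]:
    """Return a list of failed assertions (an empty list means the case passes)."""
    failures = []
    for needle in must_contain:
        if needle not in out:
            failures.append(f"missing: {needle!r}")
    if len(out) > max_len:
        failures.append(f"too long: {len(out)} > {max_len}")
    return failures

out = summarize("Our app grew into a skills-based system. Testing got hard.")
assert check_output(out, must_contain=["skills-based"], max_len=120) == []
```

Because the checks are properties rather than exact strings, the suite keeps passing when the model rephrases an answer but fails when a required fact disappears or the output balloons.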

Social Influencer – with AI Against Hate Speech

Hate speech spreads unabated across social media platforms and online news comment sections, often overwhelming voluntary counter-speech. Although it is crucial to speak out against toxic statements, responses are often hesitant, inconsistent, or lack the strategic focus necessary to be effective. Furthermore, the actual impact of such countermeasures has only been partially researched. This presentation addresses these challenges and presents a methodologically sound approach to counter-speech that is supplemented by artificial intelligence. We tested the effectiveness of structured interventions in which AI was used not to automate the conversation, but to assist in formulating precise, empathetic, and effective responses. Both the model suggestions and the final interventions are recorded, creating a dataset that allows us to analyze how AI-generated counter-speech suggestions are used and adapted by practitioners, and to inform future improvements to AI-assisted tools. This session will provide practical insights into effective counter-speech and how AI can support it.

Why Specialized Models Will Always Matter

Every frontier model follows the same lifecycle: dazzle in demos, get adopted for PoCs, then teams realize they need something smaller, faster, and purpose-built. This talk walks through the economics of the frontier-to-specialist pipeline, when to fine-tune, when distillation works, and why production AI is heading toward diverse model ecosystems, not monoculture.

The Legal Challenges of Agentic AI: When Systems Start Acting on Their Own

Agentic AI systems are moving beyond passive tools. They can make decisions, initiate actions, negotiate with other systems and execute tasks with limited human intervention. This shift raises fundamental legal questions. If an AI agent enters into commitments, causes damage or processes personal data autonomously, who bears responsibility and how do existing legal frameworks apply? This session examines how current legal concepts such as liability, contract formation and data protection operate when software agents act with a high degree of autonomy. Using practical examples from emerging AI deployments, the presentation highlights where traditional legal assumptions no longer fit neatly and where companies face new forms of risk. Rather than focusing only on theory, the session takes a pragmatic approach. It identifies the key points in the lifecycle of agentic AI systems where legal exposure typically arises and explains how organizations can structure governance, oversight and contractual safeguards to manage these risks. Participants will gain a clear understanding of the main legal challenges created by agentic AI and how existing rules in areas such as liability and data protection can still be applied in practice. The session also provides concrete guidance on governance mechanisms that enable companies to deploy autonomous AI systems responsibly while continuing to innovate.

Cognitive Debt and AI: Reality and Mitigation Strategy

Generative AI is already delivering measurable gains: in a peer-reviewed study of professionals, access to ChatGPT cut task time by about 40% while improving output quality by 18%. In my own research and practice, I found GenAI users could double their productivity. But not everyone. And speed is not the whole story. Recent studies suggest that, in some contexts, GenAI can also weaken quality control and oversight, creating new risks for organizations. Cognitive debt accumulates invisibly until a crisis exposes it. This session is for executives who want the signal beneath the hype. I will examine where AI creates real value, when it begins to accumulate cognitive debt, and why some organizations become truly AI-ready while others struggle. Without going into detail, I will offer a practical leadership lens for recognizing cognitive debt and strengthening human judgment as the AI race continues evolving in cost, speed, and capability.

Designing AI That Works With Humans: How Agents Reshape Work

Why do we trust a brand? Not because of the technology it uses, but because it consistently delivers on its promise. As AI becomes embedded in products and decisions, the same principle applies. Many organizations start with copilots and productivity tools. The real transformation happens when companies reshape workflows with AI and agent systems. But with greater autonomy comes a key question: where must humans stay in the loop? This talk explores how agentic workflows can empower people while maintaining accountability and trust and introduces practical design patterns for human-AI collaboration.

When AI Runs the Campaign: Building a Fully Agentic Marketing Workflow

Generative AI is already transforming marketing, but in most organizations it is still used as a collection of disconnected tools. Campaign production remains slow, fragmented and highly manual. In this talk, Groupe Mutuel and elaboratum share a real world experiment: designing and running a fully agentic marketing campaign workflow. Instead of using AI for isolated tasks, we orchestrated a system of specialized AI agents that collaborate across the entire campaign lifecycle, from briefing and research to content creation, multilingual adaptation, testing and iteration. The approach was applied to a campaign targeting people newly arriving in Switzerland. The system generated and optimized campaign assets across seven languages and multiple channels while incorporating behavioral insights through simulated customer personas. We will present the architecture behind the workflow, how the agents interact and what worked and did not work in practice. Participants will leave with practical insights on how agentic workflows can accelerate campaign production, enable large scale experimentation and shift marketing teams from content producers to campaign orchestrators.

AI at the Edge: Enabling Semantic Search and Device Memory with Qdrant

Recent advances in AI and generative models are enabling a growing number of applications that run directly on devices such as robots, cameras, industrial systems, and mobile devices. In many of these environments, relying entirely on cloud infrastructure is not always possible due to latency constraints, limited connectivity, or data privacy requirements. As a result, more AI systems are moving toward edge architectures, where part of the intelligence runs locally. The talk explores when and why edge AI becomes necessary and looks at the technologies enabling this shift, including lightweight language models and specialized edge hardware. We then focus on a critical component of modern AI pipelines: vector search. Many AI systems rely on embeddings for semantic retrieval, which raises the challenge of performing similarity search directly on edge devices. Join us to learn how to enable efficient vector search and semantic memory locally, illustrated through a practical edge-AI architecture.
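The core operation such an edge deployment must support is similarity search over embeddings. As a dependency-free sketch (this is not the Qdrant API; the function and the toy data are invented for illustration), a brute-force cosine search over a handful of stored device "memories" looks like this:

```python
import math

def cosine_search(query, corpus, k=2):
    """Brute-force cosine-similarity search: rank stored embedding
    vectors by their angular closeness to the query vector."""
    def norm(v):
        return math.sqrt(sum(x * x for x in v))
    qn = norm(query)
    scores = []
    for i, vec in enumerate(corpus):
        dot = sum(a * b for a, b in zip(query, vec))
        scores.append((dot / (qn * norm(vec)), i))
    scores.sort(reverse=True)                  # highest similarity first
    return [(i, round(s, 4)) for s, i in scores[:k]]

# Toy 2-d "embeddings" standing in for real model output.
memories = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
print(cosine_search([1.0, 0.0], memories))     # nearest memories to the query
```

A dedicated engine such as Qdrant replaces this linear scan with approximate indexes, which is what makes the same operation feasible within edge latency and memory budgets.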

Full-Stack Sovereignty: What It Really Takes to Own Your AI

Senior Solutions Architect@Cohere

Europe is making unprecedented investments in AI sovereignty – but most strategies stop at infrastructure. True sovereignty is a stack: from the data center and cloud environment, through model training and provenance, to fine-tuning, application orchestration, and day-to-day operational control. A gap at any layer means someone else holds the keys. This session unpacks the full sovereign AI stack, layer by layer, and shows what deliberate choices at each level look like in practice. We'll share real-world architecture patterns and outline what a mature, full-stack sovereign AI deployment looks like for regulated enterprises.

Agentic Fashion: Using Generative AI to Decode Life's "What Should I Wear?" Moments

Traditional e-commerce discovery relies heavily on structured data and filtering, which often fails to capture the nuanced, high-dimensional nature of human fashion intent. When a customer asks, "What should I wear to an 80s themed party?" or seeks a "glittery dress for a summer party," they are not just searching for products; they are navigating a complex intersection of cultural context, personal aesthetic, and functional requirements. This presentation explores how Zalando leverages generative AI to bridge the gap between unstructured natural language queries and a vast retail catalog. By utilizing generative AI, including Large Language Models (LLMs), to understand user queries and bridge fashion language gaps, the Zalando Assistant translates subjective human "moments", from summer vacations in Santorini to specific subcultural events, into precise product recommendations. We will discuss the architectural challenges of mapping fuzzy intent to structured SKU logic and how this shift from search to conversation is redefining the boundary between algorithmic logic and human creativity in global retail.
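One recurring sub-problem in mapping fuzzy intent to SKU logic is keeping model output inside the catalog's controlled vocabulary. As a minimal sketch (the schema, attribute names, and values below are invented for illustration, not Zalando's actual system), an LLM is asked to emit a filter dict, and a validation step drops anything outside the allowed attribute space before it reaches retrieval:

```python
# Hypothetical controlled vocabulary for catalog filters.
ALLOWED = {
    "category": {"dress", "jacket", "shoes"},
    "style": {"glitter", "80s", "casual"},
    "occasion": {"party", "vacation", "work"},
}

def validate_filters(raw):
    """Keep only attribute/value pairs the catalog actually understands;
    anything the model invented outside the vocabulary is dropped."""
    return {k: v for k, v in raw.items() if k in ALLOWED and v in ALLOWED[k]}

# e.g. hypothetical model output for "glittery dress for a summer party"
llm_output = {"category": "dress", "style": "glitter", "vibe": "fun"}
print(validate_filters(llm_output))   # "vibe" is not a known attribute
```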
Panel Discussion

Why Technology Needs Art?

2girls1comp
Artist Duo@2girls1comp
Festival Founder, Filmmaker, Researcher@'After the Algorithm' Festival
Adrian Christopher Notz
Generative AI is transforming every sector. However, most public conversations about it occur in isolation, with experts talking to experts and critics talking to critics. The 'After the Algorithm' (ATA) festival breaks that pattern. By bringing over a hundred artists and scientists into direct contact with general audiences, students, and decision-makers, the festival made abstract AI concepts tangible — not through slides, but through experience. The ATA festival's key element was not only bringing artists and scientists together, but also drawing on them and their work to mediate difficult, inaccessible AI topics. Alongside the artist duo 2girls1comp, who are exhibiting their piece Dancing Plague at GenAI Zurich; the artist and software developer Julia Schicker; the film director, researcher and ATA festival director Manuel Flurin Hendry; and the curator of the ATA exhibition, Adrian Notz, we will discuss the importance of art and its role in the current AI hype. When technology moves this fast, we need more critical and creative thinking, as well as more ethical questioning, all of which should be playful, experiential and speculative. The festival is supported by Pro Helvetia, the Mercator Foundation, Innovation Zurich, the ETH AI Center and the UZH AI Hub. GenAI Zurich is our lead community partner. Panel discussion with 2girls1comp, Julia Schicker and Manuel Flurin Hendry. Moderated by Adrian Notz.

Connecting Art and Computer Science

Julia Schicker is an artist and software developer based in Basel, Switzerland. She holds a BA in Media Arts from ZHdK, an MFA in Fine Arts from LUCA School of Arts in Brussels, and a BSc in Computer Science from ETH Zurich. Her work explores the cultural impact of new technologies, with writing published by HEK Basel and Nature Scientific Reports. Together with Anna Flurina Kälin, she co-founded the «Art & Computer Science» talk series at ETH Zurich and regularly leads coding workshops at institutions including HEK Basel, ZHdK, and FHNW. Her talks and panel moderations have taken her to the Generative Art Summit in Berlin, ARCOmadrid, and the Culture & Museums International Tech Forum in Malaga.

Dancing Plague - Hacking GTA to let the men dance

2girls1comp
Artist Duo@2girls1comp
2girls1comp is a modding duo founded in 2023 by Marco De Mutiis (Italy, 1983) and Alexandra Pfammatter (Switzerland, 1993). Their work changes the logic of video games as an act of creative counter-play, revealing the social and economic fabric in which they are immersed: from reclaiming global digital infrastructures to commenting on free labor within the capitalist ideologies of the gaming industry, to showcasing the way play can influence its subjects through its mechanics. Their projects are distributed within the gaming and modding community, as well as cultural and artistic contexts.

Friendly Fire at the Shrink - An AI psychiatrist for (neuro) physiological impact research

Festival Founder, Filmmaker, Researcher@'After the Algorithm' Festival
Manuel Flurin Hendry is an award-winning feature film director, screenwriter, lecturer and academic researcher born and based in Zurich, Switzerland. He teaches cinematic arts at the ZHdK Zurich University of the Arts and at the International Film School Cologne. At ETH Zurich, he lectures on AI, algorithmic literacy and the societal impact of Large Language Models. His research projects investigate the pressures of digital systems on self-perception, visual education, labor relations and artistic practice. Manuel promotes computational literacy and critique through workshops, art interventions and the festival «After The Algorithm», which he founded. His book «The Feeling Machine» will be published by De Gruyter in 2026.

Beyond the Pilot: How Your Data Strategy Makes or Breaks Your AI Differentiation

Global AI & Data Science Lead@Avolta
Managing Consultant@D ONE

As foundation models commoditize, the real differentiator is how well you connect them to your own data and workflows. Avolta and D ONE share hard-won lessons from moving beyond pilots: consolidating data in a modern Lakehouse-as-a-Service architecture, enabling employees with internal copilots, and building focused business agents on proprietary data. We'll cover what we shipped, what surprised us, and three decisions we'd change if starting over today.

Every Time You Say "CLI Is Better Than MCP", a Clawdie Dies

MCP adoption is accelerating, but many agent systems still misuse it. Instead of leveraging MCP as a runtime interface, tools are treated as prompt-time APIs: definitions are repeatedly injected into context, intermediate results are re-serialized, and valuable tokens are wasted. The result? Slower, less reliable agents, and the misleading conclusion that "MCP sucks, CLIs are great."

This talk challenges that narrative. We explore a better paradigm: code mode, where models generate small programs that call tools directly in a sandbox, dramatically reducing context overhead while improving multi-step accuracy. We’ll also cover dynamic tool discovery, ensuring only relevant capabilities are loaded when needed.

To make this practical, we introduce mcpc (https://github.com/apify/mcpc), an open-source universal CLI client for MCP. Built for real workflows, it combines code mode, persistent sessions, JSON outputs, and sandboxed execution, bringing MCP to where it shines: the shell.

In a live demo, we’ll orchestrate multiple MCP servers, transform results locally, and turn interactions into reusable scripts.
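The contrast the talk draws can be sketched in a few lines. In a hypothetical code-mode runtime (the `call_tool` dispatcher and its fixture data below are stand-ins, not the mcpc API), the model emits one small program whose intermediate results stay in the sandbox instead of being re-serialized into the context window on every turn:

```python
def call_tool(name, **args):
    """Stand-in for dispatching a call to an MCP server from a sandbox."""
    fixtures = {
        "search": lambda query: [
            {"url": "https://example.com/a", "score": 0.95},
            {"url": "https://example.com/b", "score": 0.90},
            {"url": "https://example.com/c", "score": 0.80},
            {"url": "https://example.com/d", "score": 0.40},
        ],
        "fetch": lambda url: {"url": url, "text": "page body for " + url},
    }
    return fixtures[name](**args)

def generated_program():
    """What the model writes once, instead of a multi-turn tool-call loop."""
    hits = call_tool("search", query="edge vector search")
    good = [h for h in hits if h["score"] >= 0.8]             # filtered locally
    pages = [call_tool("fetch", url=h["url"]) for h in good]
    return {"sources": [p["url"] for p in pages]}             # only this returns

print(generated_program())
```

Only the final dict re-enters the model's context; the filtering and the per-result fetch loop consume no prompt tokens at all.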

From SaaS Sprawl to Savings: How GenAI Cuts Software Waste at Scale

Software spend is large, but the data behind it is fragmented across contracts, finance, IT, and usage systems. That fragmentation creates SaaS sprawl: overlapping tools, underused licenses, weak governance, and hidden shadow IT. Scaleups feel it as wasted budget and unclear ownership; enterprises feel it as renewal risk, compliance exposure, and slow decision-making. This talk explores how GenAI helps when it is grounded in real operational data and domain playbooks. Instead of acting as a generic chatbot, AI becomes a decision layer on top of existing tools—turning contract, usage, and portfolio data into savings opportunities, renewal briefings, and governance actions. We will cover practical use cases such as duplicate SaaS detection, license right-sizing, shadow IT triage, and procurement-ready negotiation support. The key takeaway is simple: GenAI creates value not by replacing systems, but by helping teams move faster from fragmented software data to confident decisions.

Global displacement is rising, placing unprecedented pressure on humanitarian systems. Refugee crises are complex, fast moving and data intensive, yet the stakes are measured in human lives, dignity and protection. The question is no longer whether AI can help, but how to deploy it responsibly in high risk environments. This session explores how UNHCR Innovation is operationalizing AI to strengthen refugee protection, improve humanitarian decision making and enhance service delivery at scale. From predictive analytics that anticipate displacement patterns to AI tools that support case processing and operational planning, AI is being embedded across four strategic pillars: protection and service delivery, humanitarian decision making, organizational efficiency, and oversight and planning. Grounded in strong humanitarian values, this approach prioritizes responsible governance, risk mitigation and human rights safeguards. Participants will gain practical insights into deploying AI in crisis contexts, building trusted partnerships, and scaling innovation while safeguarding safety, dignity, inclusion and measurable impact for refugees and stateless communities.
Moderator

Use Case Stage Moderator

Hilda Liswani is a global executive with more than 15 years of experience advising leaders and building strategic partnerships across sustainability, technology, media, and finance. As CEO and Founder of WeBloom, she works with C-level decision-makers to navigate the forces reshaping business—AI, cultural change, leadership transformation—and to translate them into strategic clarity and organisational impact. Hilda specializes in creating high-trust, high-calibre environments where senior leaders can calibrate decisions, access cross-sector intelligence, and drive change at scale. Her work is grounded in a conviction that the future belongs to leaders who combine strategic foresight with human depth, and she has dedicated her career to equipping executives to lead with both.

Service Offerings from CSCS for the Swiss R&D+Innovation Ecosystem

Innovative start-ups, spin-offs and SMEs use substantial computing resources for their research and development work. Often this work runs on cloud computing services from corporations outside Switzerland at very high prices, especially when compared with the magnitude of typical seed funding or staff budgets. This raises questions of dependency and sovereignty, among many others. The Swiss National Supercomputing Centre (CSCS) offers compelling alternatives for this category of work. In doing so, it serves not only the academic ecosystem but also its extended mandate toward broader society.

Search APIs: Letting Agents Browse the Internet

Head of Knowledge Research@Exa
AI agents need to access the live web — but traditional search wasn't built for them. As autonomous systems move from demos to production, a new infrastructure layer is emerging: search APIs purpose-built for machine consumption. Companies like Exa, Tavily, Brave Search API, and Serper are racing to become the default way agents retrieve, filter, and reason over web-scale information. The two companies you'd expect to dominate this space are actively ceding it. Microsoft retired the Bing Search APIs entirely in August 2025, funnelling customers into Grounding with Bing Search — a tool that only works inside Azure AI Agents. Google offers web access exclusively through Gemini's Grounding with Google Search, not as a standalone API. Both are bundling search tightly with their own model platforms, locking it behind their clouds. For developers building on open or multi-model architectures, neither is an option. This talk maps the competitive landscape, examines what makes a search API "agent-ready" (structured outputs, neural relevance, content retrieval in a single call), and makes the case for Exa — Zurich's newest AI arrival and the only company in this space building its own neural search index from scratch rather than wrapping legacy engines. You'll leave with a clear mental model of who's building what, why the incumbents walked away, and what it means for your agent stack.

Who Is Accountable for the 10 Million Dollar Bug?

Every enterprise AI coding workflow has the same hidden flaw - when something goes wrong, nobody can explain what humans and AI decided together, why those decisions were made, or who was ultimately responsible for the outcome. At scale, that isn't a developer problem. It's a governance crisis. Andre draws on real-world experience building DOTBOT, an open-source autonomous coding framework that brings genuine engineering discipline to AI-assisted development. Every decision, rationale, and code change - human and AI alike - is captured in version-controlled audit logs. Planning is separated from execution. Ambiguity is escalated to humans before a line of code is written. The result is an AI coding workflow that can actually be reviewed, defended, and governed. You'll leave with three things - a clear model for the accountability gap that unstructured AI development creates at enterprise scale, a set of architectural principles your teams can adopt immediately, and a practical answer to the question regulators, boards, and CTOs are already asking: when your AI ships a critical bug, can you produce a full audit trail of every decision that led there? The $10 million bug isn't hypothetical. The only question is whether your organisation is structured to survive it.
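As a sketch of the kind of record such a workflow needs (the function, field names, and actors below are illustrative, not DOTBOT's actual format), a hash-chained audit entry makes every human or AI decision attributable and tamper-evident:

```python
import hashlib
import json

def audit_entry(actor, decision, rationale, prev_hash=""):
    """Append-only audit record; chaining each entry to the previous
    entry's hash makes after-the-fact edits detectable, much like a
    git history does for code."""
    body = {"actor": actor, "decision": decision,
            "rationale": rationale, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

plan = audit_entry("ai:planner", "split task into 3 PRs", "reduce review load")
approve = audit_entry("human:lead", "approve plan", "scope acceptable",
                      prev_hash=plan["hash"])
print(approve["prev"] == plan["hash"])   # True: the chain links decisions
```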

Gran Turismo Sophy: From AI Research Breakthrough to Video Game Innovation

Strategy and Partnerships Manager@Sony AI
Mastering the highly realistic driving simulator Gran Turismo requires more than just following a racing line. The problem is that an autonomous agent must simultaneously master complex real-time car physics, execute high-speed tactical decisions (like slipstreaming and blocking), and adhere to strict, highly nuanced racing etiquette to avoid unfair collisions with human players. To solve this, our approach utilized a novel deep reinforcement learning platform developed collaboratively by Sony AI, Polyphony Digital, and SIE. Instead of relying on hard-coded rules, we trained Gran Turismo Sophy (GT Sophy) using a massive cloud-based infrastructure. The AI learned through continuous trial and error, eventually discovering how to control cars at the absolute limits of physical friction while racing fairly. Key takeaways from this session include a high-level overview of the reinforcement learning approach used to train GT Sophy and insights into how the team went from the research breakthrough of making a Superhuman Racing AI featured in Nature magazine to productizing the AI into a fun in-game opponent enjoyed by millions of Gran Turismo players.
Workshop

Beyond Conversational AI: Engineering Structured Intelligence for the Enterprise

Generative AI is only as valuable as the structure behind it. While simple interfaces are great for prototyping, enterprise-grade autonomy requires a structured approach to ensure reliability and governance. In this workshop, we dive into the visual tools within Dataiku to build Autonomous Agents—AI workers designed to follow complex business rules, safely access enterprise data via the LLM Mesh, and execute multi-step tasks. We will explore the design patterns that separate "toy" use cases from sophisticated agentic loops, moving from basic interactions to robust engines that solve real organizational challenges.

Workshop

Harnessing Silicon Personas: Engineering Behavioral Truth into Agentic Workflows

Building on our previous exploration of agentic marketing campaign workflows, this session deep-dives into the "intelligence engine" that adds a predictive layer to those workflows: Silicon Personas. We move beyond simple "chat with your data" to show how harness engineering creates a robust simulation system in practice. By systematically combining real-world data and behavioral instructions, we can replicate the decision-making "black box" of thousands of individuals. This deep dive focuses on an approach in which we leverage the power of these AI harnesses to drastically simplify traditional research while reducing biases and "AI hallucinations". Participants will learn how to integrate these high-fidelity synthetic outputs directly into their existing marketing automation to validate business assumptions in minutes and drive measurable growth in Customer Lifetime Value (CLV).
Workshop

From Copilot Moments to Team Velocity: How AI Coding Assistants Transform Enterprise Software Development

AI coding assistants are rapidly becoming part of modern software engineering, but their real impact varies dramatically across teams, codebases, and ways of working. In this workshop, we’ll draw on hands‑on experience from enterprise development contexts to highlight what actually improves delivery speed, code quality, and team agility in practice. We’ll break down the most effective use cases, common pitfalls, and the engineering practices that matter most when adopting AI at scale - ranging from documentation standards to modular architecture to collaborative review workflows. This session is designed for engineering leads and technically minded decision‑makers who want a pragmatic, real‑world view of what it takes to make AI assistance deliver repeatable value in day‑to‑day delivery.
Workshop

From Zero to Agentic: Building Enterprise AI Agents in 10 Minutes

While the potential of Agentic AI is clear, many organizations struggle with the complexity of moving from simple chatbots to autonomous agents that can actually do work. In this live demo, we strip away the complexity. Using Cloudera AI Agent Studio, we will demonstrate how to build and deploy a fully functional AI agent in just 10 minutes. You’ll see how to leverage a low-code environment to connect agents to enterprise data, define multi-step reasoning workflows, and integrate with existing data services—all while maintaining the security and governance required for production environments. Join us to see how "Agentic" is moving from a buzzword to a standard tool in the developer’s toolkit.
Workshop

APEX - AI Prototyping & Execution Workshop

Digital Strategy & Solutions Lead@S-PRO
Head of S-PRO Switzerland@S-PRO

Maximize your learning at GenAI Zürich with this hands-on workshop designed for executives, product managers, innovation managers and founders. In a condensed 50-minute sprint, your team will tackle a set scenario and build a minimal working prototype with the help of Claude Code. Immediate, expert coaching from S-PRO facilitators ensures rapid progress and technical breakthroughs. Perfect for maximizing hands-on experience and showcasing the speed of GenAI development. No prior coding or AI expertise needed!

The Intelligence Layer: Reimagining Financial Services with Opus

AI Solutions Architect@AppliedAI
Financial institutions are sitting on a paradox: more data, more pressure, and more AI tools than ever — yet most organisations are still stuck in pilot mode, struggling to turn AI experimentation into measurable business outcomes. The gap between proof-of-concept and production isn't a technology problem. It's an orchestration problem. In this session, we introduce Opus — an enterprise AI orchestration platform built to bridge exactly that gap. We'll walk through how Opus transforms a simple prompt into an end-to-end automated process, and why that matters for financial services specifically. From intelligent document processing and compliance automation to client onboarding and risk decisioning, we'll map the platform's capabilities directly to the use cases that move the needle in finance. The session closes with a live demo — no slides, no theory — showing Opus in action on a real financial services workflow, with the business value quantified.

Key takeaways:

-What the Opus platform does and how it fits into your tech stack
-The highest-impact AI use cases in financial services today
-What production-grade AI automation actually looks like, and what it's worth

Bring your scepticism. Leave with a blueprint.

Can You See the Algorithm?

Artificial intelligence increasingly shapes everyday life, yet its infrastructures often remain invisible to adults and young users alike. Drawing on the exhibition Can You See Me Now? by WE ARE AIA, educational formats such as Escape Room Hacked were developed. This talk explores how art contributes to the critical discourse on AI from an educational perspective. Artistic practices reveal systems of data tracking, algorithmic governance, and digital surveillance, making complex technologies tangible through experiential learning. Workshops and participatory formats invite students to question how data is produced and used in daily life. For educators, this highlights the importance of interdisciplinary, experience-based approaches that combine technological literacy with ethical reflection.
Keynote

Computer Use: From Knowledge to Autonomous Operations

Chief Product Officer@H Company
Generative AI has changed how we interact with computers through language. The next step is AI that can use computers to complete real tasks. In this talk, Caroline Charra, CPO of H Company, shares her vision of this shift: from AI assistants to systems that can understand interfaces, navigate software, and execute real tasks. She will explain what computer use means, how H approaches it differently from other AI solutions, and what it enables in practice. Through concrete examples, the session will show how AI can interact with tools, automate workflows, and augment human work.

From PDF Chaos to ERP Gold: Document Extraction in Manufacturing with GenAI

Every manufacturing company has that one person who spends their week copying data from PDFs into the ERP. At one Swiss CNC shop, that person was the CEO. Twenty hours a week, every week. We built inbox.factoriq to fix this: an AI-powered extraction engine that ingests messy manufacturing PDFs, classifies document types, extracts structured data with confidence scoring, and prepares it for ERP import. A human-in-the-loop review catches what the AI misses. User corrections feed back as few-shot examples, so the system learns per customer over time. This talk is an honest field report. We'll share what works: 90% automation rates, payback within months, and iterative learning that genuinely compounds. But we'll also cover what's still hard: scanned documents from the '90s that degrade accuracy, vendor layouts that seem designed to confuse parsers, delivery schedules that collapse into nonsense, and edge cases that will keep humans employed for a while yet. Key takeaways: how to scope a document extraction project realistically, where LLMs excel and where they quietly hallucinate in manufacturing contexts, and why "90% automated with human review" beats "100% automated with silent errors" every time.
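The routing logic described above can be sketched in a few lines (the class names, thresholds, and field names are mine for illustration, not the inbox.factoriq internals): extractions above a confidence threshold flow to ERP import, the rest go to human review, and reviewer corrections are kept as few-shot examples:

```python
from dataclasses import dataclass, field

@dataclass
class Extraction:
    fields: dict        # e.g. {"order_no": "4711", "qty": "25"}
    confidence: float   # score in [0, 1] attached by the extraction model

@dataclass
class Pipeline:
    few_shot: list = field(default_factory=list)

    def route(self, doc_id, ext, threshold=0.85):
        """High-confidence extractions go straight to ERP import;
        everything else is queued for a human reviewer."""
        if ext.confidence >= threshold:
            return ("auto_import", ext.fields)
        return ("human_review", doc_id)

    def record_correction(self, doc_text, corrected_fields):
        """Reviewer fixes are stored as few-shot examples, so the
        extractor improves per customer over time."""
        self.few_shot.append({"input": doc_text, "output": corrected_fields})

p = Pipeline()
print(p.route("doc-1", Extraction({"qty": "25"}, confidence=0.93)))
print(p.route("doc-2", Extraction({"qty": "??"}, confidence=0.41)))
```

The threshold is exactly the "90% automated with human review" dial the talk argues for: lowering it trades reviewer time for silent errors.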

Beyond GenAI: Scaling Global Business with Specialized Language AI

Sales Director@DeepL

Seamless communication is the key to unlocking global markets when scaling a business across borders. However, companies face a crucial dilemma. While general generative AI solutions offer broad capabilities, relying on them for enterprise localization introduces significant risks because these tools often produce generic text that dilutes brand nuance and mistranslates industry-specific terminology. This can result in a damaged brand reputation, financial losses, and legal and security risks.

In this fast-paced session, we will explore the critical differences between general-purpose GenAI and specialized language AI. We will present a targeted approach to global communication that prioritizes secure, purpose-built language models over generic tools.

In this session, you will learn how to:

-Ensure uncompromising quality: Use purpose-built language models to generate context-aware translations that protect your brand identity.
-Drive end-to-end efficiency: Deploy a centralized AI platform to speed up global operations and reduce costs securely.
-Connect the entire enterprise: Seamlessly integrate language AI across all business communications, from text APIs to real-time voice translation.

Workshop

AI Without the Effort: Building Intelligence as Fast as You Can Type

Principal AI Specialist@Snowflake

What if you could bridge the gap between a business idea and a working AI application in minutes without a massive engineering team? The way we build software has changed. Today, the most successful teams aren’t the ones with the most code; they are the ones who can turn data into insights the fastest.
In this interactive session, we strip away the complexity of traditional AI. We’ll show you how Snowflake Cortex acts as your "AI Co-pilot," allowing you to go from a simple question to a functional workspace using just natural language.

What you will experience:
- The "Zero-to-Hero" Workflow: See how a single prompt can automatically generate a full analysis notebook for you.
- AI for Everyone: Use the Snowflake CLI and Cortex to run powerful AI models on your data without writing complex algorithms.
- Speed as Your Superpower: Learn how to move 10x faster by letting the platform handle the heavy lifting, security, and infrastructure.
- The Future of "Prompt-Based" Building: A live demonstration of how software development is shifting from manual coding to intelligent orchestration.

Who should attend? Whether you are a curious lead, a data enthusiast, or a tech-savvy strategist, this session is designed to show you that the barrier to entry for Enterprise AI has finally vanished.

Keynote

Stop Prompting, Start Engineering

93% of developers say AI makes them more productive. Yet 91% of organizations report no impact on the top line. That gap isn't a model problem — it's a systems problem. Most teams are still treating AI like a magic autocomplete: powerful in the moment, inconsistent at scale. The result? Rework, unpredictable quality, and a growing trust gap between developers and their AI tools. In this talk, we'll explore four techniques that close the gap and turn unpredictable AI assistance into reliable engineering outcomes across the SDLC: Specs (defining what you want before the AI builds it), Steering (persistent behavioral guidance that shapes how AI operates), Skills (reusable task-specific workflows), and SOPs (step-by-step procedures that ensure consistency across complex tasks). Through real-world examples across coding, testing, and architecture review, you'll see why the teams getting the most from AI in their development lifecycle are the ones engineering the system around the model — not just the prompts they feed it.

Workshop

Risk Analysis: Compliance Hurdle or Value-Add with Explainable AI?

Join us for an interactive workshop where we’ll dive into the real-world challenges of risk analysis and compliance in regulated industries. We’ll open the floor to discussion about the hurdles professionals face and explore how emerging technologies - especially AI - can help turn these obstacles into opportunities. Discover how Reshape’s risk analysis platform streamlines time-consuming engineering tasks, offers transparent model reasoning, and supports users from the earliest design phases. Through live demonstrations, you’ll see how compliance requirements can be transformed from a burden into a valuable part of your project’s success.
2026 Sponsors and Partners

Supported by
industry leaders and innovative startups

We are building a community of forward-looking practitioners who are at the frontier of GenAI innovation. Interested to join?

Become a partner

The Turing Genius

Neural Masters

Generative Starters

Startup Allies

Hackathon Co-organizers

Hackathon Partners

Corporate Partners

Special Partner for Ethical AI, Cybersecurity Governance & Human Rights

Community Partners

Join us

Shape the future of GenAI in Switzerland and beyond

Join us for the next edition of GenAI Zürich to learn about latest developments in the field, network with like-minded professionals and gain deeper insights into GenAI applications reshaping both our professional and personal lives.