AI Explained by the Experts
By X-iO · Jul 28 · 5 min read · Updated: Sep 3
Insights from Finance, Web3, DAOs, Training, Prototyping & Future Risks
During my time mentoring at the AI-Talents program, I engaged with a remarkable group of experts — researchers, engineers, and visionaries — all working at the frontier of AI. What stood out was not just what they were building, but what they were questioning. In this second part, I compiled some of the most compelling insights from their talks — spanning topics like ethical risks, AI in finance and Web3, DAO governance, no-code prototyping, and the future of training AI models. [Part I]

AI in Finance
Speaker: Prof. Co-Pierre Georg, Frankfurt School of Finance & Management
Prof. Georg presented a sharp and sobering view of how AI is reshaping finance — faster than regulation or academia can keep up. His lecture broke down three main perspectives:
Ethical & Labor Market Concerns
“AI is changing finance faster than academia can analyze it… We want innovation. But if it comes at the expense of the weakest, we’ll pay a price.”
AI has accelerated faster than previous tech booms (e.g., blockchain), driven by historic funding surges and collapsing costs.
Running large language models (LLMs) now costs less than $1 per million tokens.
Productivity and ROI are up — but white-collar jobs are increasingly at risk.
Emerging markets may face disproportionate disruption.
Wealth is concentrating further in big tech firms.
Regulation is trailing: the EU AI Act (2024) is a start; U.S. initiatives are on the rise.
Systemic Risks & Fragilities
“The danger isn’t just bad predictions — it’s model collapse from AI training on AI.”
Overreliance introduces fragility: hallucinations in GenAI can pose systemic risk.
AI-generated data used to train future AIs → risk of feedback loops & collapse.
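The feedback-loop risk can be illustrated with a toy simulation (my own sketch, not a model from the talk): repeatedly fit a Gaussian to samples drawn from the previous generation's fit, the way an AI trained on AI-generated data inherits its predecessor's distribution. Across generations, the fitted spread drifts toward zero and diversity collapses.

```python
import random
import statistics

def synthetic_generations(n_samples=10, n_generations=500, seed=42):
    """Fit a Gaussian to each generation, then sample the next generation
    from the fit -- a toy stand-in for AI trained on AI output."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # the "real data" distribution
    stds = []
    for _ in range(n_generations):
        samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.mean(samples)       # refit on purely synthetic data
        sigma = statistics.pstdev(samples)  # fitted spread shrinks over time
        stds.append(sigma)
    return stds

stds = synthetic_generations()
print(f"spread after 1 generation:    {stds[0]:.3f}")
print(f"spread after 500 generations: {stds[-1]:.2e}")
```

Real model collapse is more subtle than a shrinking Gaussian, but the mechanism is the same: each generation can only reproduce what the last one emitted.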
AI agents increase the surface for social engineering attacks.
Web3 + AI: Friend or Foe?
“Web3 might be AI’s lifeline, not its rival.”
Web3 technologies — like blockchain — can provide transparency, accountability, and data provenance for AI systems:
Enable copyright enforcement & traceability of AI training data.
Prevent future collapse from synthetic-data feedback loops.
Build infrastructure for decentralized AI governance.
DAOs x AI
Speaker: Jan-Gero Alexander H., Legal Scholar & Blockchain Researcher
A DAO, or Decentralized Autonomous Organization, is a community-led structure governed by smart contracts and blockchain-based voting. Jan explored the philosophical and technical intersections between DAOs and AI, going as far back as ancient history:
AI concepts date back to logic machines by Lull (13th c.) and symbolic logic by Leibniz (17th c.).
The formal field of AI began in 1956 at the Dartmouth Workshop (McCarthy, Minsky, Shannon).
On the DAO side, he revisited the infamous 2016 launch of "The DAO" on Ethereum and the hack that followed, caused by a smart contract vulnerability, highlighting the challenges of trust, code-as-law, and decentralization. One of the boldest ideas he presented was the "Maria System" (Master AI for Revolutionary Intelligent Autonomy): a vision of AI-powered DAOs that operate transparently, efficiently, and continuously, without central control.
The promise: real-time governance, neutrality, scalability, resilience.
The risk: ethical and existential threats if autonomy outpaces human oversight.
Current reality: Most AI is still “weak AI,” and most DAOs act more like DOs (Decentralized Organizations).
Training AI Models
Speaker: Alexander Del Toro Barba, Google Cloud Machine Learning Specialist
Modern enterprises have moved from single-model AI to orchestrating multiple models:
Supervised Fine-Tuning: Small, specific training tasks (e.g., Spotify's personalized playlists).
Preference Tuning: Teach models by example (chatbots rated for helpfulness).
Reinforcement Learning: High accuracy but resource-intensive.
Distillation: Compress big models into mobile-friendly assistants (e.g., wearable apps).
Today’s enterprise AI isn’t about the perfect model — it’s about the right mix for the task. Multi-model orchestration boosts flexibility and performance while minimizing lock-in. Future-looking trends include:
Modular agentic systems coordinate multiple AI models.
Agent-based setups already used by banks like ING (translation, entity recognition, sentiment).
Cost reduction focus (e.g., Chinese labs like DeepSeek).
Hardware innovation: quantum, photonic, neuromorphic computing.
“We’re approaching the limits of autoregressive models. The future may lie in hybrid systems — combining transformers, expert systems, and new hardware like photonic or neuromorphic chips.”
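A minimal sketch of what such an agentic setup looks like in code, assuming hypothetical stub agents in place of real models: a coordinator routes each task (sentiment, translation) to a specialised sub-agent, much like the bank setups described above.

```python
# Agentic orchestration sketch: a main agent dispatches tasks to sub-agents.
# The agents below are stubs standing in for real models.

def sentiment_agent(text: str) -> str:
    # Stand-in for a sentiment model: naive keyword heuristic.
    negative = {"bad", "poor", "terrible", "risk"}
    return "negative" if any(w in text.lower() for w in negative) else "positive"

def translation_agent(text: str) -> str:
    # Stand-in for a translation model: tiny lookup table.
    glossary = {"hallo": "hello", "welt": "world"}
    return " ".join(glossary.get(w, w) for w in text.lower().split())

def coordinator(task: str, text: str) -> str:
    """Main agent: route the task to the right sub-agent."""
    agents = {"sentiment": sentiment_agent, "translate": translation_agent}
    if task not in agents:
        raise ValueError(f"no agent registered for task: {task}")
    return agents[task](text)

print(coordinator("sentiment", "Earnings look terrible this quarter"))
print(coordinator("translate", "hallo welt"))
```

Swapping a sub-agent for a better model changes one entry in the registry, not the whole pipeline, which is where the flexibility and reduced lock-in come from.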
AI Jargon, Simplified
Yes, I first nodded... but Googled later. Here’s what I learned.
Agentic Systems: Teams of intelligent assistants. A main agent assigns tasks to sub-agents (e.g., one for sentiment, one for translation).
Diffusion Models: Like sculpting from noise — they start chaotic and refine to produce clear outputs (great for image generation).
World Models: AIs that simulate environments (e.g., chessboard logic or self-driving in traffic).
Hybrid + Expert Systems: Combine modern AI with old-school logic rules (“if-then”) for safety and transparency.
Photonic Computing: Uses light (not electricity) for faster, cooler processing.
Quantum Computing: Leverages quantum physics to solve complex problems.
Neuromorphic Chips: Brain-inspired chips that mimic neurons, great for low-energy AI tasks.
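To make the "hybrid + expert systems" idea concrete, here is a small sketch (my own illustration, with a stubbed model and invented thresholds): a statistical model's output passes through explicit if-then rules before any action is taken, so the final decision stays auditable.

```python
# Hybrid system sketch: ML score + old-school if-then expert rules.
# The "model" is a stub; the rules are the transparent safety layer.

def model_score(transaction_amount: float) -> float:
    # Stand-in for an ML fraud model: returns a risk score in [0, 1].
    return min(transaction_amount / 10_000.0, 1.0)

def expert_rules(amount: float, score: float) -> str:
    """If-then rules with explicit, auditable thresholds."""
    if amount > 50_000:  # hard rule that overrides the model entirely
        return "block: amount exceeds hard limit"
    if score > 0.8:
        return "review: high model risk score"
    return "approve"

amount = 9_500.0
print(expert_rules(amount, model_score(amount)))
```

The point of the hybrid design: even if the learned score is wrong, the hard rules bound the damage, and every decision can be explained line by line.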
AI in Banking
Speaker: Sascha Dölker, Head of Digitization, DWP Bank
Starting with the “Why?” → Despite strong innovation potential, why aren’t more AI use cases funded in banking? Because strategy, not tech, is the bottleneck.
“AI is not only a tool. It should be a strategic skill.”
Most banks treat AI as a plug-in (a tool to fix isolated inefficiencies) — not as a driver of long-term value. Sascha’s framework:
Expectation: What do we believe AI can do?
Education: What do we really understand?
Imagination: What could we achieve with it?
Internal “AI ambassadors” now help DWP teams build small, smart tools (e.g., knowledge agents that cut hours from workflow). AI is not about replacing employees but "augmenting their roles" in an aging workforce.
“Even a 10-minute productivity gain per employee per day has massive impact.”
Infrastructure readiness matters: bad data governance or legacy APIs will undermine even the best models.
“Don’t start with the model. Start with the problem. Then reverse-engineer the tech.”
Example: AI + satellite imagery of a parking lot predicted retail traffic better than Wall Street.
“You don’t need to fear AI. You need to know how to talk to it — I do it quite a lot in my car when I prepare workshops.”
AI for Builders: From Big Tech to Solo Founders
Speaker: Bastian Burger, TUM Venture Labs
One of the most energizing messages: You don’t need to be a developer to build with AI.
“If you can describe it, you can build it.”
His 10-step blueprint to go from idea to MVP — in hours, not weeks:
1. Define your hypothesis.
2. Talk to users.
3. Analyze feedback with AI.
4. Identify who really cares (and why).
5. Build a quick MVP (frontend).
6. Add logic & backend.
7. Test reactions.
8. Capture 7 meaningful user interactions.
9. Run a low-budget ad (optional).
10. Iterate, improve, repeat.
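Step 3, "Analyze feedback with AI", can be sketched in a few lines. This is a hypothetical illustration: `classify_with_llm` is a stub, and in practice you would replace its keyword heuristic with a prompt to one of the APIs mentioned below (OpenAI, Gemini, Claude, etc.).

```python
# Sketch: tally themes in user feedback via an (stubbed) LLM classifier.

from collections import Counter

def classify_with_llm(feedback: str) -> str:
    # Stub: a real version would prompt an LLM, e.g.
    # "Label this feedback as one of: pricing, usability, performance."
    lowered = feedback.lower()
    if "slow" in lowered or "lag" in lowered:
        return "performance"
    if "price" in lowered or "expensive" in lowered:
        return "pricing"
    return "usability"

def analyze_feedback(items: list[str]) -> Counter:
    """Classify each feedback item and count the themes."""
    return Counter(classify_with_llm(item) for item in items)

feedback = [
    "The app feels slow on older phones",
    "Too expensive for what it offers",
    "I couldn't find the export button",
]
print(analyze_feedback(feedback))
```

The theme counts then feed directly into step 4, identifying who really cares and why.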
Tools mentioned: OpenAI, Gemini, Groq, Claude, Framer, Fireflies, Notion, Softr, Vibe, Make, n8n, Cursor…
"The tools evolve weekly. Don’t wait to master them — explore, test, adapt. In this AI-powered world, speed is your unfair advantage."
Final Thought
AI is no longer confined to labs or large corporations; it is a modular, accessible, and at times risky force reshaping every domain. From startups to banks, from data governance to decentralized autonomy, the future of AI demands more than just intelligence. It demands intention. So let's build with humanity in mind.
by X-iO, mentor at Web3 Talents | AI Talents | C1 2025
Curious?
Stay tuned for more. The AI ride is a rollercoaster!



