Why Open Source, Not Closed Models, Will Decide AI’s Future
Reflections and takeaways from two days at AI_Dev with practitioners across industries.
Walk into an AI conference today and you might expect endless hype. What I found at AI_Dev was different. The people driving the discussions were not polished keynote speakers or professional influencers but practitioners working in defense, retail, finance, construction, and government. Their message was clear: open source is no longer optional; it is the practical choice for AI wherever it can be applied.
I attended a few of those sessions and tried to capture my learnings here.
Key Sessions for Me
Securing AI Pipelines
Abhinav Sharma, KodeKloud
I learned how Kubeflow can be used to build pipelines on Kubernetes and how to secure pods, apps, and storage from unwanted traffic. Abhinav also introduced MinIO as an open-source storage layer.

Building Intelligent Applications Beyond Prompts
Peter Friese, Google
Peter explained how Genkit and the Model Context Protocol (MCP) help developers build AI applications more easily. The key idea is to let LLMs interact with existing APIs, systems, and specialized tools, enabling orchestration of complex tasks without reinventing everything.

Scaling AI Inference with Serverless
Anmol Krishan Sachdeva & Paras Mamgain, Google
This session highlighted how Knative and Kubeless can scale inference workloads on Kubernetes by distributing jobs across ephemeral pods that scale with demand, which the presenters argued can cut inference turnaround from hours to milliseconds.

Open-Source Alternative: Ubicloud Showcase
Ubicloud team
Ubicloud presented itself as an open-source alternative to AWS. They also demoed a fully local RAG setup that runs on a laptop, letting you “talk to your docs” privately with open-source LLMs. It is a compelling option for sensitive or air-gapped environments.

Day 2 Keynotes
Stephen Chin (Neo4j) curated the program with thoughtful flow and energy.
Keynote on BAML: Vaibhav Gupta introduced a new programming language for building AI agents.
Manos from Oumi made the case that open models are catching up to closed models, creating a path for both enterprises and startups to innovate without being locked in.
Improving AI Inference Using KServe and vLLM
Red Hat team
This talk introduced OpenShift AI and explained how KServe functions as a model serving platform. With vLLM, inference can be optimized further through techniques like continuous batching, paged attention, speculative decoding, and quantization using the LLM Compressor library.

Blueprints & any_llm/any_suite
Stefan French, Mozilla.ai
Mozilla.ai showcased their open-source projects Blueprints and any-suite. The aim is to help developers who are new to AI adopt it in a simple, approachable way.

From Local Experiments to Production
Oleg Šelajev, Docker
Oleg shared a repeatable process for turning local AI workflow experiments into production-ready deployments using containerized, static configurations.
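As a sketch of what such a containerized, static configuration might look like (the file layout, port, and server command here are my assumptions, not Oleg's exact setup):

```dockerfile
# Pin an exact base image so local and production builds are identical.
FROM python:3.12-slim

WORKDIR /app

# Bake dependencies into the image; nothing is installed at runtime.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the app plus a static config that lives in version control.
COPY app/ ./app/
COPY config.yaml .

EXPOSE 8000
CMD ["python", "-m", "app.server", "--config", "config.yaml"]
```

Pinning the base image and baking both dependencies and configuration into the image is what makes the local experiment and the production deployment bit-for-bit reproducible.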
There were many more sessions, but I had to make the difficult choice of picking the few most relevant to my interests and level of understanding. Judging by the attendance and hallway chatter, I am sure I missed something interesting, but I am still glad I had the chance to attend this event.
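The local RAG pattern Ubicloud demoed boils down to retrieve-then-prompt: find the document chunks most relevant to a question, paste them into the prompt, and let a locally served model answer. Here is a minimal sketch of that loop, with a naive keyword-overlap retriever standing in for a real embedding index (all names and the scoring function are illustrative, not from the demo):

```python
# Toy retrieve-then-prompt loop behind a local RAG setup.
# The scoring is a naive keyword overlap, not a real vector search.
import re

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def score(query: str, chunk: str) -> int:
    # Relevance = how many chunk tokens also appear in the query.
    q = set(tokenize(query))
    return sum(1 for tok in tokenize(chunk) if tok in q)

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Keep the k highest-scoring chunks.
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    # Retrieved context goes above the question; a locally served LLM
    # (e.g. via llama.cpp or Ollama) would then answer from it.
    context = "\n".join(retrieve(query, chunks))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "MinIO provides S3-compatible object storage.",
    "Knative scales services down to zero between requests.",
    "KServe serves models on Kubernetes.",
]
print(build_prompt("how does Knative scale?", docs))
```

A production setup would swap the keyword scorer for an embedding model and a vector store, but the shape of the loop stays the same, and nothing ever leaves the laptop.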
Conversations That Counted
The most valuable part of AI_Dev came from speaking with practitioners. I met people from defense, retail, finance, construction, government, and deep tech. These conversations were practical and unfiltered, and the common themes were clear:
Lack of standards is holding people back.
Rapid innovation means today’s choice could be outdated tomorrow.
While much of the discussion was tactical, it gave me insights no curated online course could match. In just two days, I learned more about the state of AI adoption than months of reading articles or LinkedIn posts could have given me.
Why Open Source Matters
History tells us that technology only reaches maturity when it becomes open. Open source creates standards, reduces vendor lock-in, and keeps you close to where innovation actually happens. For enterprises trying to adopt AI responsibly, those benefits are not theoretical; they are survival tactics.
Why AI_Dev Was Worth It
This event gave me a genuine boost into the AI space. The value came from the absence of sales pitches. The content was about learning, not selling, which is what made it authentic and worth my time.
Takeaway for Architects and Leaders
It is easy to scroll LinkedIn and be swayed by the loudest voices. The real insights come from listening one layer deeper, where practitioners are working with the technology every day. What I saw at AI_Dev convinces me that open source is the natural path forward for AI adoption in enterprises.
Reflection
I was once a skeptic. Now I am fully invested in AI’s potential. These two days showed me just how widespread adoption has already become, and how many possibilities are still ahead. AI is not a future discussion. It is happening everywhere, right now.
Finally, I want to thank the Linux Foundation and the AI_Dev organizers for this fantastic event. I hope to attend their future sessions too.
#AIAdoption #OpenSourceAI #SoftwareArchitecture #AI_Dev #MLOps


