Hi, I’m Chris Deaton.
AI Educator, Product Manager, and Responsible Innovation Lab Founder
I build systems, stories, and tools that help real people survive real systems. I don’t treat “innovation” like a pitch deck. I treat it like a survival skill.
Too many innovation initiatives fail because they treat people as data or dot-points rather than human beings. I flip that: I build with people, not for them. I build a lot of things just to see what happens.
For 30 years I’ve worked like a release manager: ship, watch what breaks, tell the truth about it, then make it safer for the next person. I’ve carried that mindset from AOL-era product launches to student-led labs to persona-driven AI systems that defend people instead of manipulating them.
For 15 years I also wore the hat of a student. That kept me close to our customers while I gathered new knowledge and skills: 2 Bachelor’s degrees in Anthropology and English, and 4 Master’s degrees spanning 2 interdisciplinary programs, Sociology, and the Information Systems Management degree at W.P. Carey. The majority of my academic work is centered on group dynamics and communication.
What I Do Now — Solve Real Problems
I run the Responsible Innovation Lab, a volunteer-run, independent public-impact lab focused on:
- Small-group, project-based learning: we ideate, prototype, break it, and often start over, on a bi-weekly basis.
- Resource access: tools like HouseKey that connect unhoused and food-insecure people to actual services instead of “here’s a PDF of shelters.”
- Bias-declared assistants: the Give Me Back My Bias model of building AI personas that openly state their perspective, values, and limits.
- Student-powered innovation: real-world projects that turn frustration into working prototypes, not just papers.
I also build story worlds. My fiction explores power, surveillance, consent, harm, and the people who refuse to be quietly managed.
The Operating System — INNOVATE Framework
My work is built on a loop I call the INNOVATE Framework — Inclusive, Next-Gen, Nimble, Open, Visionary, Accountable, Tailored, Ethical. That means we build fast, but not reckless. We design for the person being harmed, not the person doing the harm. We don’t hide bias; we surface it, declare it, and make it part of the contract.
I don’t do “neutral.” Neutral is how harm keeps winning. I build loud tools with receipts.
How I Work
I work like a release manager and storyteller combined — every build runs this loop:
1. Listen
Start with who’s being harmed or ignored.
2. Build
Prototype something they can actually use.
3. Watch
See what breaks. Admit it publicly.
4. Collaborate
Work with the people impacted most.
5. Repair
Fix fast and ship again — louder and safer.
6. Iterate
Rinse and repeat until it’s right.
Why I Work This Way — IP Mode
I’ve watched a lot of organizations say “innovation” when they mean “PR.” I’ve watched people and organizations win awards and never do the work. I’ve also watched students, unhoused neighbors, caregivers, and burned-out workers solve impossible problems with zero budget.
My job is to build with them, not for them. That’s what the Responsible Innovation Lab is. It’s not hypothetical work. It’s a table where real people sit down, describe what’s broken, and we fix it in public.
Where I’ve Been — The Work Behind the Work
I’ve spent three decades moving between tech, education, and public systems — always building what didn’t exist yet. My career started in quality assurance, product management, and release engineering during the early internet era, where “shipping” meant debugging humanity in real time. I helped stand up and organize AOL’s first customer service quality assurance team in the ’90s.
- Arizona State University — School for the Future of Innovation in Society: Designed and led hybrid innovation labs connecting students with global partners through Future17 and NextLab.
- Zoom Education Hackathon (Keynote + Mentor): Delivered the Responsible Innovation keynote during a synchronous hybrid learning program.
- Zoom + Amazon Twitch Classroom Pilot (Product Manager): Drove end-to-end design and approvals from provost and president-level leadership through public rollout.
- DARS and eAdvisor, an undergraduate success suite (QA and Product Manager): I moved on and off this team in various roles over 10 years.
- Culture Community of Practice (Product Manager): Led, co-led, or managed the Giving Back, Health and Wellness, and Responsible Innovation Guild workstreams.
- Food Security Collaborations: Prototyped HouseKey with pantries and local agencies for real-world deployment. Partnered with college and public organizations to influence legislation related to the Swipe Out Hunger initiative.
- Grants & Portfolio: Built partnerships and programs in Responsible AI, Workforce Readiness, and Ethical Innovation.
Across every role, I connect the blueprint to the human being: I code the thing, test it, write the story, and stay through the fallout.
The Voices I Work Through — Give Me Back My Bias
I build intentional personas — Lucille, Mave, Dave, Joe Bob, Simone, Grumpy Old Man — not as gimmicks, but as instruments. Each one carries a defined point of view, boundaries, and responsibility. When we deploy these assistants, nobody has to guess “who’s talking to me and why.”
That’s the Give Me Back My Bias model: honesty on purpose.
If you want to understand my work, understand this:
I don’t build chatbots. I build accountable voices. Check out my AI Assistants.
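For readers who want to see what “declared bias” could look like in practice, here is a minimal sketch, assuming a simple persona record with a stated point of view, values, and limits. The field names and Lucille’s details below are illustrative only, not the Lab’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A bias-declared assistant: its point of view ships as part of the contract."""
    name: str
    point_of_view: str                       # the perspective it openly speaks from
    values: list[str] = field(default_factory=list)
    limits: list[str] = field(default_factory=list)

    def declaration(self) -> str:
        """What the user sees up front: who is talking to me, and why."""
        return (
            f"I'm {self.name}. I speak from this perspective: {self.point_of_view}. "
            f"My values: {', '.join(self.values)}. "
            f"My limits: {', '.join(self.limits)}."
        )

# Illustrative example only; not the Lab's real configuration for Lucille.
lucille = Persona(
    name="Lucille",
    point_of_view="an advocate for unhoused and food-insecure neighbors",
    values=["dignity", "direct access to real services"],
    limits=["no legal or medical advice", "won't guess at eligibility rules"],
)

print(lucille.declaration())
```

The point of the design is that the declaration travels with every deployment, so the bias is surfaced and stated rather than something users have to reverse-engineer.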
Experimental Playgrounds
GottaDoobIt & DoWhatMATAs
Some of my favorite work lives outside the lab — in the cultural sandbox where humor, bias, and reflection collide. GottaDoobIt started as a cannabis-culture satire project and grew into a mindfulness experiment about joy, burnout, and permission. It’s where personas like Dave and Brandy remind people that slowing down, laughing, and taking care of yourself are forms of resistance in over-optimized systems.
Do What MATAs (Make America Think Again, Seriously) is a civic humanities experiment that uses digital voices — Joe Bob, Ezra, Liberty Lane, and Quin — to explore how bias, storytelling, and rage interact online. It’s not performance art. It’s participatory ethics in real time: asking how we argue, what we amplify, and who gets erased when we call something “neutral.”
I develop story worlds we can blow up so we can learn from the catastrophes and fallout.
Together, these experiments test how far a conversation can stretch before it breaks — and what it looks like to build AI that feels human without pretending to be.
Who I Build With
The Responsible Innovation Lab is a public workshop powered by students, educators, technologists, and community advocates. Students handle research, outreach, and narrative documentation; I lead architecture, coding, deployment, and testing — and I stay accountable for what ships.
Academic & Professional Background
I hold multiple degrees in anthropology, sociology, and innovation — disciplines that all ask the same question: “Why do people build the systems they do, and who gets left out?”
I teach and guest lecture in programs focused on responsible innovation, ethics in technology, and AI literacy. My courses and labs have appeared in partnerships with ASU, QS Impact, Future17, and civic organizations around the U.S.
I’ve served as an advisor and evaluator for programs at the intersection of ethics, design, and emerging tech — helping teams apply the INNOVATE Framework to real institutional change.
Work With Me
You’re a school, a city, a nonprofit, a newsroom, a foundation, or a team that has to answer to real people. You need to build something you can defend — not just something that demos well.
The first four are foundational, stable, and set.
The rest are playgrounds run by various personas: test beds for chatbots.
