What I Learned Implementing AI at a 40-Person Startup
Most companies get AI adoption wrong. They buy tools, write policies nobody reads, and wonder why nothing sticks. I spent a year embedding AI into a climate tech startup's daily operations. Here's what actually worked.
I was the CTO of a home electrification company. We had field technicians, sales advisors, designers, ops people, and a small engineering team. Forty people across three states installing heat pumps, induction stoves, and EV chargers. Not a software company. A real company, with real trucks and real customers.
When I set out to make us an "AI-empowered" organization, I quickly learned that the hard part had nothing to do with technology. There are a thousand AI tools to choose from, and that abundance is paralyzing. Half the team was excited and racing ahead with random free tools. The other half was scared, quiet, and convinced this was the beginning of the end of their jobs.
The technology was easy. The change management was brutally hard. This is the playbook I wish I'd had on day one.
1. Pick one LLM. Standardize ruthlessly.
This is the decision most companies refuse to make, and it costs them everything.
When everyone uses different tools, you get chaos. One person uses ChatGPT, another uses Claude, a third uses some random free tool that trains on your data. Nobody builds shared prompts. Nobody shares what works. Institutional knowledge about how to use AI stays trapped in individual heads, which is the exact problem AI was supposed to solve.
We picked one primary LLM ecosystem and made it the standard. Everyone got a paid account through the company. We standardized the login, the security settings, the data policies. One platform, one set of credentials, one place to build.
This doesn't mean you can't experiment. We had a process for that: post in a shared channel, get a yes or an alternative within 24 hours. Default to yes, with guardrails. But the default stack was non-negotiable.
2. "AI-empowered" is not the same as "AI-automated."
The framing matters more than you think.
When you tell a team of 40 people that you're "implementing AI," half of them hear "you're replacing me." That's a trust problem that no amount of tooling can fix.
We were deliberate about the language. We weren't building an AI-automated company. We were building an AI-empowered company. Every team member is paired with intelligent tools that handle the monotonous work, surface insights, and support faster decisions. The humans stay accountable. The humans make the calls. The AI handles the grind.
This isn't just corporate messaging. It's an architectural decision. When you build for empowerment, you design tools that keep humans in the loop. When you build for automation, you design tools that remove them. Those are fundamentally different systems, and the first one is the only one that works when your business requires trust, judgment, and customer relationships.
3. The guardrails are the product.
Most AI policies read like legal documents written by people who've never used AI. Ours fit on one page, and the core of it was this:
If you wouldn't email it to a stranger, don't paste it into an AI tool.
That one line did more for our security posture than a 20-page policy would have. People understood it immediately. It was memorable, actionable, and didn't require a training session to internalize.
Beyond that, we kept it simple:
- Allowed: Generalized scenarios, anonymized summaries, draft rewriting, structuring thoughts, research.
- Not allowed: Customer PII, financial data, API keys, proprietary system architecture.
- Ask first: Connecting new tools, uploading large datasets, automating customer-facing communications.
The "ask first" category is the key insight. Most companies default to "no" on anything ambiguous. We defaulted to "yes, with a conversation." That one shift turned AI adoption from a compliance exercise into a culture of experimentation.
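Rules like "no customer PII in prompts" can also be made mechanical. Here's a minimal sketch of a pre-paste check, assuming simple regex patterns; the categories, patterns, and function names are invented for illustration, not our actual tooling:

```python
import re

# Illustrative patterns only -- real PII detection needs far more coverage.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the categories of sensitive data found in `text`."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

def redact(text: str) -> str:
    """Replace anything flagged with a [CATEGORY] placeholder."""
    for name, pat in PATTERNS.items():
        text = pat.sub(f"[{name.upper()}]", text)
    return text
```

A check like this won't catch everything, but wired into a shared snippet or browser extension it turns the one-line policy into a nudge at the exact moment someone is about to paste.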
4. Start with the AI that's already in your tools.
Every company I talk to wants to build custom AI agents. Almost none of them have turned on the AI features already embedded in the software they're paying for.
Your CRM has AI. Your email has AI. Your project management tool has AI. Your call recording software has AI. Your workflow automation tool has AI. Most of these features are included in your existing subscription.
We got more leverage from turning on embedded AI features across our existing stack than from any custom tool we built. The reason is simple: embedded AI operates inside the workflow. There's no context switching. There's no "open a new tab and paste this in." It just works where people already work.
5. Crawl, walk, run, sprint.
We built a design assistant bot that helped our sales advisors make HVAC system recommendations. The roadmap for that one tool looked like this:
- Crawl: Knowledge retrieval. Feed it our internal playbooks and design guidelines. It answers questions with links to existing documentation.
- Walk: Structured Q&A. Train it on real question-and-answer pairs from our internal channels. It starts giving recommendations in a consistent format with confidence scores.
- Run: Connected systems. It reads from our proposal tool and CRM, pulling real customer data into its recommendations.
- Sprint: Autonomous actions. It updates records, triggers workflows, and handles routine design decisions without human intervention.
Most companies try to jump straight to sprint. They want the autonomous agent that does everything. The result is a brittle system that nobody trusts, built on data that hasn't been validated, making decisions that haven't been tested.
The crawl phase isn't glamorous, but it's where you learn what data you actually have, what's missing, and what format it needs to be in. Skip it and you'll pay for it later.
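A crawl-phase assistant really can be this simple: match a question against existing docs and answer with a pointer. A toy sketch, with invented playbook filenames and content standing in for ours:

```python
# Crawl phase: answer questions by pointing at existing documentation.
# The snippets below are made up; a real version would load actual playbooks.
PLAYBOOK = {
    "heat-pump-sizing.md": "Size heat pumps from the Manual J load calc, not square footage.",
    "panel-upgrades.md": "Check panel capacity before quoting an EV charger install.",
}

def answer(question: str) -> str:
    """Return the snippet (with its source doc) sharing the most words with the question."""
    q_words = set(question.lower().split())
    def overlap(item: tuple[str, str]) -> int:
        _, text = item
        return len(q_words & set(text.lower().split()))
    doc, text = max(PLAYBOOK.items(), key=overlap)
    return f"{text} (see {doc})"
```

Crude keyword overlap, no embeddings, no LLM, but running even this forces you to discover which playbooks exist, which are stale, and which questions have no documented answer at all. That's the whole point of crawling.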
6. Security is a feature, not a blocker.
AI makes it trivially easy to leak data you didn't mean to share. Customer names pasted into a free chatbot. Internal documents uploaded to a tool that trains on user inputs. API keys dropped into a prompt.
We took PII and data security seriously not because lawyers told us to, but because our customers trusted us with their home addresses, electrical panels, and financial information. That trust was sacred.
The fix wasn't locking things down. It was making the secure path the easy path. Company-managed accounts with enterprise privacy protections. Approved tools connected through official channels. Clear rules about what data never leaves the building. When people have a fast, approved way to use AI, they don't reach for the sketchy free version.
7. Celebrate builds, failures, and weirdness. In public.
We ran regular sessions where anyone in the company could demo something they'd built or discovered with AI. A sales advisor who automated her follow-up emails. An ops manager who built a scheduling optimizer. A field tech who used AI to troubleshoot a tricky installation.
The key word is "anyone." Not just the engineering team. Not just the people who were "good at tech." Anyone.
We celebrated the failures too. The bot that hallucinated a product model that didn't exist. The automation that sent the wrong email. The prompt that returned something hilariously wrong. Failures meant someone was trying. Not trying was the only real failure.
This has to be fun or it won't stick. AI adoption driven by mandates and compliance training dies on contact with reality. AI adoption driven by curiosity, play, and showing off to your teammates compounds. Nobody needed to be told to use AI. They wanted to because their coworker just demoed something awesome and they wanted to build something better.
8. Learn together or don't bother.
Individual AI adoption is a dead end. One person gets really good at prompting and builds amazing tools that nobody else understands, can maintain, or even knows exist. When that person leaves, all the knowledge walks out the door.
We made AI literacy a team sport. Shared prompt libraries. Internal channels where people posted what worked and what didn't. Office hours with the AI team where anyone could bring a problem and we'd build a solution together in real time.
The people who became our best AI users weren't the most technical. They were the most curious, and the most willing to share what they learned. Create the conditions for that and get out of the way.
The bottom line
Implementing AI at a startup isn't a technology problem. It's a change management problem. The scared people need safety. The excited people need guardrails. Everyone needs to feel like they're learning together, not being left behind. The technology is the easy part.
Standardize your tools. Frame it as empowerment, not replacement. Write guardrails that humans can actually remember. Make the secure path the easy path. Start with the AI you're already paying for. Build incrementally. Celebrate everything in public. Learn together. Have fun.
Everything else is noise.