
Clemens Adolphs

Does a Speed-Up Even Help You?

A few folks joined me and my cofounder, Ehsan, to hear how relatively simple machine learning methods can give significant speedups in otherwise laborious computations.

If you want to catch up, you can access the recording here: https://www.crowdcast.io/c/r3pdzydr76v0

On a higher level, what I like about this sort of work is that it highlights the importance of thinking about your whole system. When implementing solutions to speed up costly processes, the question should always be: “And what does that lead to?” That question is just as crucial when assessing which workflows to speed up with AI agents. If speeding up one part of your system just leads to a pileup of untouched work in another part, you don’t gain efficiency; you destroy it, because all that surplus now clogs up the proverbial pipes.

This is where you’ll want to look at the overall flow of work: Where do things get stuck? Which parts of the system are choking, and which are starving? Even without AI, this is a critical analysis. Are your developers churning out massive amounts of code that then get stuck in a lengthy review process? Pushing the devs to produce even more code, faster, won’t do you any good then. Don’t optimize for the speed of an individual stage in the pipeline. Instead, optimize for overall throughput: from the start of a task to its completion, where does it spend the most time?
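
To make the bottleneck point concrete, here is a minimal back-of-the-envelope sketch in Python, with made-up numbers rather than data from any real team: throughput is capped by the slowest stage, so speeding up development alone only makes the review pile grow faster.

    # Hypothetical two-stage pipeline: development feeds code review.
    def throughput_per_week(dev_rate, review_rate):
        """Completed items per week are capped by the slowest stage."""
        return min(dev_rate, review_rate)

    def backlog_growth_per_week(dev_rate, review_rate):
        """Unreviewed items piling up each week when devs outpace reviewers."""
        return max(0, dev_rate - review_rate)

    # Baseline: devs finish 10 items/week, reviewers clear 6.
    print(throughput_per_week(10, 6), backlog_growth_per_week(10, 6))    # 6 done, 4 pile up

    # "Speed up the devs" to 20 items/week: throughput unchanged, the pile grows faster.
    print(throughput_per_week(20, 6), backlog_growth_per_week(20, 6))    # 6 done, 14 pile up

    # Unblocking the review stage is what actually moves the needle.
    print(throughput_per_week(20, 15), backlog_growth_per_week(20, 15))  # 15 done, 5 pile up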

And don’t neglect the “interaction” of separate work streams, either: If part of producing value in your company depends on specialists using their specialist skill, you can either try to get them to apply that skill faster, or you can free them up to do more of that special skill by empowering them to do less of another thing. In a concrete example, if you run an award-winning restaurant, the way to serve more diners, faster, isn’t to exhort your star chef to work faster. It’s to get someone else to clean their dishes and chop their ingredients for them.

That’s where I’m confident AI will unlock more value, at least in the short term: by allowing specialists to spend more time on high-value tasks instead of low-value administrative overhead.

Clemens Adolphs

Big Data ≠ LLMs

A while ago, I was talking with a friend about potential use cases specifically for generative AI. The friend brought up a number of areas of their business where AI might help. Their intuition was spot on, but in most of these cases, you would not use generative AI or language models. Instead, it was mostly number crunching: big data, statistics, and “classical” machine learning.

To set the record straight: Just because large language models are trained on massive data sets does not mean that they themselves are good at dealing with massive datasets. They are not, and they’re not intended to be. You would not load gigabytes of numerical data (financial records, for example) into ChatGPT and ask it to clean the data or check for anomalies.

I understand the allure: For language problems, LLMs appear to obsolete a lot of the finicky, use-case-dependent model building you had to do in the past. No need to build complex custom systems to classify reviews, apply content moderation to social media posts, or even grade essays. Just throw it all into ChatGPT with the right prompt. (If only it were that easy. But at least it’s plausible.)

With big data, though, there’s no way around custom building. There’s no general model that deals with it all, because there’s nothing that would make sense to train such a model on. And so if you need to find signatures of fraud in a list of credit card transactions, or patterns of buyer behaviour in sales data, you cannot use the same generic model with just a few tweaks to the prompt.
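
To give a flavour of what that “classical” toolbox looks like in practice, here is a minimal sketch. The file name, column names, and the choice of an isolation forest are all illustrative assumptions, not a recommendation for any particular fraud pipeline.

    import pandas as pd
    from sklearn.ensemble import IsolationForest

    # Hypothetical transaction data with purely illustrative column names.
    df = pd.read_csv("transactions.csv")
    features = df[["amount", "hour_of_day", "days_since_last_txn"]]

    # Flag roughly the 1% most unusual transactions as candidates for review.
    model = IsolationForest(contamination=0.01, random_state=42)
    df["anomaly"] = model.fit_predict(features)  # -1 marks outliers

    print(df[df["anomaly"] == -1].head())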

You might, of course, have a tool that performs some basic statistical analysis automatically, and expose that tool to a language model or agent via the Model Context Protocol (MCP). So you would throw your dataset into the system, then ask a chatbot for a plot of this or that statistic, and it would oblige. I could even envision an automated system that asks you a few questions and then trains an appropriate model on the data so you can start making proper predictions.
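
As a sketch of what such a tool might look like: the function below is plain Python with pandas, and the file and column names are made up. How you register it with an MCP server or an agent framework depends on the SDK you use, so that wiring is deliberately left out.

    import pandas as pd

    def summarize_numeric_column(csv_path: str, column: str) -> dict:
        """Return basic statistics for one numeric column of a CSV file."""
        series = pd.read_csv(csv_path)[column]
        return {
            "count": int(series.count()),
            "mean": float(series.mean()),
            "std": float(series.std()),
            "min": float(series.min()),
            "max": float(series.max()),
        }

    # A chatbot wired to this tool could answer "what's the average order value?"
    # by calling summarize_numeric_column("orders.csv", "order_total") instead of
    # trying to stuff the whole file into its context window.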

In these scenarios, the LLM would be providing a more ergonomic interface but, under the hood, you’d be dealing with the tools of statistics, machine learning, and data analysis, not just vibes and prompts.

Clemens Adolphs

Elephant Carpaccio

How do you eat an elephant? Bite by bite.

Yes. We know that joke. But what shape should the bites have? I recently came across the term “Elephant Carpaccio” and, of course, had to go down an internet rabbit hole to learn more about it. Not to worry, we’re not serving up an endangered species for consumption. Instead, we’re looking at slicing our work down into manageable tasks in the correct way.

(Feeling a bit technical after all this recent philosophizing :D )

The term refers to an exercise run by Scrum trainers (a great guide is available here) to teach how a task (or User Story) can be sliced vertically, and really, really thin. I find this is both a crucial and a very unnatural skill. We’re somehow wired to slice horizontally: Build the backend first, then the frontend, then wire them together. We achieve much better results (fewer defects, less pain in integration) if we slice vertically. That means: Build something with backend and frontend at the same time, and make it a very small feature. The exercise takes that idea to the extreme: building out a feature (such as a shopping cart’s calculation of the order total, including tax, discounts, etc.) in such small increments that the feature gets built in five slices that take less than ten minutes each.
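
To give a flavour of how thin those slices can be, here is a toy progression for the order-total calculation. This is my own hypothetical illustration, not the official exercise material, and in the real exercise each slice would also touch input and output, not just the calculation.

    # Slice 1: the total is just the sum of the line items.
    def order_total(items):
        return sum(item["price"] * item["qty"] for item in items)

    # Slice 2: add a flat, hard-coded tax rate. Good enough to ship and demo.
    def order_total_with_tax(items, tax_rate=0.05):
        return order_total(items) * (1 + tax_rate)

    # Slice 3: apply a single percentage discount before tax.
    def order_total_with_discount(items, tax_rate=0.05, discount=0.0):
        return order_total(items) * (1 - discount) * (1 + tax_rate)

Each step is small enough to finish, test, and demo in minutes, and each one leaves you with a working feature rather than a half-built layer.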

We can debate the merits of such an extreme and artificial constraint. But given that we’re so hardwired to bite off more than we can chew (whether it’s off a pachyderm, a project, or a user story), forcing ourselves to go far into the opposite direction is an excellent workout for our brains.

This concept should have broader applicability beyond just software development, too. For any artifact you have to create, whether that’s a presentation, a business plan, or a design brief, you could ask: Is my default mode of delivery a horizontal slice, where I slowly build up the layers? Can I shift to vertical slicing so I deliver more value sooner, in smaller but still functional increments?

Clemens Adolphs

A Tale of Two Philosophies: Duolingo vs Google Translate

Earlier this year, language-learning app Duolingo faced significant backlash over a botched “AI first” initiative. The company used generative AI to create its lessons, and users felt this hurt the quality and lacked the human connection they hoped to see in a language app.

In contrast, to much less fanfare, Google Translate is testing a new feature, “Learn with AI”. It also relies on generative AI, but instead of using AI once to cheaply generate pre-made lessons, it uses it to create lessons dynamically to match the user’s skill level and needs. I tried out the Spanish feature and had conversations with the AI, booking a room in a hotel, asking where and when breakfast would be served, and even ordering a Margarita. It’s currently free, and if it’s available in your language, I encourage you to see for yourself how it works.

While it remains to be seen whether Google’s approach will revolutionize language learning, it already highlights an interesting philosophical difference:

  • We can try and use AI to do more cheaply what we’re already doing

  • We can try and use AI to radically do better what we’ve been doing so far

I doubt that AI can completely replace other, more formal ways of language instruction (or the best way, which is total immersion). Still, a large language model’s ability to tailor its responses in the moment holds great promise: If the learning tool is well built, it can constantly keep the learner in the zone of optimal difficulty: not too easy, not too hard. It can provide tailored feedback and, at scale and at low cost, offer tailored grading. No more “fill in the gaps with one of these pre-selected words”.

Just in time for Canada’s Thanksgiving weekend, I’m thankful for the potential that AI, when it’s done well and in service of higher goals, can offer. (And I’ll be back Tuesday.)

Clemens Adolphs

Thoughts on Workflow Builders

OpenAI recently announced their workflow builder, where you can drag and drop on a visual interface and build your own agents and agentic workflows. This led to some excitement from the folks who like visual workflow builders, but a bit of a “meh” from those who don’t.

So-called no-code and low-code tools have been around for ages, and their promise is always the same: Build sophisticated applications without writing a single line of code. I’m not convinced.

I don’t doubt that they work great for fast prototyping or straightforward workflows that plumb together content from different apps. For example, “If an email comes in to our support email address, add a bug ticket to JIRA and send a message to our Slack channel.”

For serious development, though, I see multiple issues:

  • Complexity. You can’t run away from inherent complexity. If the workflow you’re modelling is complex, your visual workflow will soon resemble a bowl of spaghetti.

  • Testability. How do you even create and maintain a test suite that protects you from messing things up when you add functionality?

  • Vendor lock-in. Okay, so your project has outgrown n8n, Bubble, or Draftbit. Now what? You’re essentially starting over from scratch with a “real” tech stack.

There are situations where these trade-offs favour no-code visual workflow builders, especially when market risk significantly outweighs product risk. Sacrificing quality for speed might be the correct strategy when uncertainty is high.

On the other hand, if speed is of the essence, can you afford to waste time with something that won’t scale and will start slowing you down sooner rather than later?

Here’s an idea. A compromise of sorts. If you insist on crappy-but-fast, don’t bother with no-code tools and don’t bother with anything that locks you to a particular platform. Just vibe-code your idea on a proper “independent” tech stack. Maybe Python on the backend and React on the frontend (not my favourite, but LLMs are really good at it), and you’re off to a much better start. You might even find that coding isn’t nearly as scary as the no-code advocates would have you believe.
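
To illustrate that last point: the email-to-JIRA-to-Slack workflow from earlier is only a handful of lines of plain Python. In this rough sketch, the URLs, token, project key, and field names are placeholders, and your own JIRA and Slack configuration will differ.

    import requests

    SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder
    JIRA_URL = "https://yourcompany.atlassian.net"                      # placeholder

    def handle_support_email(subject: str, body: str) -> None:
        # File a bug ticket via JIRA's REST API (check your own instance's fields).
        requests.post(
            f"{JIRA_URL}/rest/api/2/issue",
            json={"fields": {
                "project": {"key": "SUP"},
                "summary": subject,
                "description": body,
                "issuetype": {"name": "Bug"},
            }},
            auth=("bot@yourcompany.com", "api-token-placeholder"),
        )
        # Notify the team channel through a Slack incoming webhook.
        requests.post(SLACK_WEBHOOK_URL, json={"text": f"New support email: {subject}"})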

Clemens Adolphs

Lessons From the Electric Motor

Early factories, powered by steam, had a single large central steam engine that supplied power to the various workstations via complicated gears, pulleys, and crankshafts. The invention and introduction of the electric motor did not change that at first. It took a while for engineers to notice that, with electricity, it is much more convenient and efficient to provide each workstation with its own small electric motor.

Revolutionary new technology does not reach its maximum potential if we apply it only superficially. I have written before that slapping AI onto a dysfunctional or messy process will not save you. Yet even slapping AI onto a currently optimal process will not yield the best results. After all, the steam-powered factory with its pulleys and shafts was making optimal use of the steam engine. Instead—and that can be scary—a complete rethink is required: the famous "step back" to ask yourself what the business process is trying to achieve in the first place, and then putting the pieces back together, now with additional tools in your toolbox.

By all means, start simple by adding a sprinkling of AI assistance to the existing process. But never stop asking whether you can go further.

Clemens Adolphs

E-Bikes of the Mind

There's a beautiful Steve Jobs quote that I came across just yesterday, and it fits so well with the theme of yesterday's email that I'll share it here in full:

“I think one of the things that really separates us from the high primates is that we’re tool builders. I read a study that measured the efficiency of locomotion for various species on the planet. The condor used the least energy to move a kilometer. And, humans came in with a rather unimpressive showing, about a third of the way down the list. It was not too proud a showing for the crown of creation. So, that didn’t look so good. But, then somebody at Scientific American had the insight to test the efficiency of locomotion for a man on a bicycle. And, a man on a bicycle, a human on a bicycle, blew the condor away, completely off the top of the charts. And that’s what a computer is to me. What a computer is to me is it’s the most remarkable tool that we’ve ever come up with, and it’s the equivalent of a bicycle for our minds.”

What's special about the bicycle, and makes the metaphor so beautiful, is that, while it's a tool, it's still powered entirely by the human body, what we would call self-propelled. In this view (expressed in the 1980s), the computer takes our thinking, the output of our minds, and makes it go further.

AI takes it up a notch, and so we can compare its use to the various types of electric bikes:

  • Those that come with a small motor providing just a bit of assistance to your pedal strokes, letting you go faster with more ease, but still requiring you to pedal. If you stop, they stop as well.

  • Those that require no effort from you at all.

  • (And if we continue along the e-theme but drop the bike, there are the mobility scooters from the movie Wall-E.)

There's nothing inherently wrong with any of these, depending on your needs. But you're not getting any physical exercise with the full-assist version. Whether that's a problem depends on whether you get any exercise at all in your life.

And so it is with AI and tools that do our thinking for us. Nothing wrong with that. We are a tool-using species, after all. We just have to ensure that critical parts of ourselves don't atrophy, causing all sorts of problems.

Clemens Adolphs

Multiplying By Zero

Reflecting on how AI can enhance our workflow in a way that makes us smarter, not dumber, I recall a time when I was in high school, tutoring a middle school student. At that point, they were allowed to use a calculator in class, for homework, and in exams, because the subject matter had moved on from simple arithmetic to more abstract concepts, like solving linear and quadratic equations.

At some point during an exercise, we were able to make some simplifications and were left with something like 12563 * 0 . I watched in amazement (horror?) as the student dutifully typed in: 1 2 5 6 3 * 0 = into their calculator and wrote down the answer, 0. Just to confirm I wasn't imagining this, I asked them, "Hey, quick question, so, what's 452 times zero?" And again, they looked at me and went right to their calculator.

I want AI to do great things for me and for humanity, but for it to reach that level, we must be constantly vigilant: Where are we using AI for the equivalent of "what's 1243 * 53362," and where are we using it to multiply by zero on our behalf? When AI frees us of drudgery, it's fantastic. When it robs us of our intuition about how things work (like the fact that anything times zero is zero, no calculator required), we become less effective because we aren't thinking at a high enough level anymore.

Clemens Adolphs

Quantum Won’t “Save” AI

I've seen an uptick in commentary and headlines along the lines of, "Oh well, current large language model progress is plateauing, so we won't have Artificial General Intelligence next month; but with quantum computing, we'll soon have it, because... quantum" (waves hands).

I've worked in the quantum computing sector and am still in contact with my former colleagues (👋 shoutout to the 1QBit team!), so I can say with reasonable confidence: Quantum computing won't do anything meaningful for the sort of AI a business would care about, and certainly not for large language models / generative AI, for the foreseeable future.

Yes, important and exciting work is happening. Progress is steady. Multiple players are advancing the state of the art, and I'm certain that great things will come of that.

No, none of this matters for AI systems that work at the scale of a GPT-5.

Quantum computing is not a drop-in replacement for classical computing, where you just replace a conventional CPU with a quantum processing unit (QPU) and off you go. Instead, it's specialized hardware designed to solve incredibly narrowly defined problems, such as factoring a large number or determining the ground-state energy of a spin glass. The latter is what the D-Wave quantum annealing hardware is designed to do. If you do some clever math, you may be able to cast other problems you actually care about in those terms, particularly in scheduling and optimization. None of these use cases matters for training a gigantic machine learning model. (There is a quantum algorithm for solving linear equations, but its requirements in terms of the number of qubits are beyond ridiculous for current quantum hardware.)

In a way, the computational paradigms behind AI and quantum are opposed to each other: on the AI side, we're dealing with staggeringly large models with billions of parameters; on the quantum side, we're (currently) dealing with, at best, dozens of usable qubits.

It's almost as if, now that the irrationally exuberant hype is wearing off, certain tech influencers (and CEOs of quantum hardware companies?) latch onto the next topic for their hype. Blockchain. VR. AI. Quantum. All of these have kernels of usefulness that are at risk of being crowded out by undifferentiated hype.

Instead of dreaming about living in the Star Trek universe with sentient androids, holodecks, and faster-than-light travel, let's focus on solving actual problems with existing and proven solutions.

Clemens Adolphs

1000x Faster Monte Carlo Simulations

I've written before about using the right, simple, tool for solving a problem, rather than going after the shiny new thing.

One such example: On a previous project, we achieved great success using relatively simple machine-learning models to achieve massive speedups in the complex simulations that a large insurance company or financial institution would run to manage the risk of their portfolio.

Massive here means that, instead of taking 80 hours for a complete run, it now takes a couple of minutes. This is, of course, a huge unlock. You can either save the time and use it elsewhere, or spend the same amount of time doing a much more thorough analysis. These sorts of risk calculations are often required by regulators, with hefty penalties if reporting doesn't happen on time.
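
The project specifics are proprietary, but the shape of the technique is simple enough to sketch. In this toy version, with a made-up stand-in function and invented numbers, you run the expensive simulation on a limited set of scenarios, train a simple regressor on those results, and let the cheap surrogate approximate the rest.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def expensive_simulation(scenario: np.ndarray) -> float:
        """Stand-in for the hours-long risk calculation on a single scenario."""
        return float(np.sin(scenario).sum() + 0.1 * scenario.sum() ** 2)

    rng = np.random.default_rng(0)
    train_scenarios = rng.normal(size=(500, 10))   # runs we can afford to simulate fully
    train_results = np.array([expensive_simulation(s) for s in train_scenarios])

    surrogate = GradientBoostingRegressor().fit(train_scenarios, train_results)

    # The surrogate now evaluates huge batches of scenarios in seconds, not hours.
    new_scenarios = rng.normal(size=(100_000, 10))
    approx_results = surrogate.predict(new_scenarios)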

Despite this success, the technique, as far as we can tell, is not widely adopted. That's why we've decided to run a short, free webinar on the topic. It will take place this October at 10 a.m. Pacific Time, which corresponds to 1 p.m. Eastern Time and 7 p.m. Central European Time.

Who is this for?

  • People interested in applying machine learning to financial and other statistical simulations

  • Insurance analysts, quants, and actuaries tired of long runtimes

  • Risk modelers who want to integrate machine learning into existing workflows

  • Analytics and data science teams pushing against time, compute, or compliance pressure

Check out the event page and register for free here and tell your friends in finance and insurance.

Clemens Adolphs

Big Consulting Agile

I've now heard this story from multiple independent sources, working at completely different companies:

  1. Leadership brings in a big consulting company to "help with efficiency"

  2. The consultancy introduces by-the-book Scrum, a popular agile framework: Two-week iterations, story point estimates, and all the roles and ceremonies associated with it

  3. The consulting company collects a fat check and leaves

  4. Employees are unhappy with the overhead and heavy-handed processes, and efficiency does not, in fact, increase

The problem: Neither of these companies was a traditional software company. They were a research-first hardware company and a large "legacy" industrial company, respectively. Work there just does not fit neatly into two-week increments of small, estimable user stories. In the case of the former company, the fellow I talked to complained:

"Now I can't just go read a research paper. No, I have to write a user story first about what I'm researching. Then I have to give an estimate for how long it'll take me to read that paper, and every morning during standup, I have to say that I'm still working my way through the paper."

Doesn't that just sound like the opposite of agility?

In the case of the industrial company, the lament can be summarized as, "Everything we do is on a large scale with complex interlocking processes; nothing there can get done in two-week increments."

Now, with AI, many companies are in danger of repeating the mistake of using the wrong methodology to explore it, by going too wide too soon, and adopting a top-down mandate driven directly from the C-suite, supported by a one-size-fits-all playbook courtesy of the Big Expensive Consulting Co.

Companies would do well to remember Gall's Law, which states that anything complex that works must have gradually evolved from something simple that worked. This goes for adopting agile methodologies as much as it goes for integrating AI into the company. Small pilot, learn what's required for your company specifically to make it work, and don't expect much value from an off-the-shelf, by-the-book transformation, whether it's agile or AI.

Clemens Adolphs

Lessons from Harvey AI

An anonymous person posting on the social media platform Reddit claims to be a former employee of the Legal Tech startup Harvey AI. They allege that the tool has low internal adoption, is favoured more by leadership and procurement than by those doing the actual work, wasn't built in close collaboration with actual lawyers, plus a number of other criticisms around the product’s quality.

While Harvey's CEO responded and countered these claims, there has been a lot of schadenfreude from others in the legal tech industry, as well as plenty of piling on from AI skeptics. While I'm in no position to judge who's right and who's wrong, we can still extract some lessons, based on the complaints levelled by the anonymous Redditor and the other practitioners.

Biting off more than you can chew

It seemed to me, back in 2023, that Harvey was starting with an overly broad mission: essentially feeding a large volume of legal documents to an AI and having it become proficient at writing legal documents, to the point where you could replace, if not your senior lawyers, at least a bunch of your paralegals. Yet, even if a large language model is fine-tuned with incredibly industry-specific material, it only delivers value when plugged into a concrete workflow aimed at solving a particular problem. Lawyers (presumably) don't just want a ChatGPT that's aware of how lawyers write. They want tools that tackle specific tasks, such as drafting and reviewing contracts.

From the observed criticism, I get the impression that Harvey is sort of "bleh" at a lot of lawyer-like tasks, but not amazing at any one of them. If that's true, then it's no surprise that adoption is lacking.

There was a sort of irrational exuberance in the air right around GPT version 3.5, where it seemed the winning formula would be to take an off-the-shelf language model, fine-tune it with proprietary industry-specific data, and instantly get an expert that could handle any task in that industry. By now, we know that this isn't quite the case, as shown by the recent MIT study about enterprise AI pilots.

What we must realize is that AI doesn't let us skip proper product development. AI might enable previously unthought-of capabilities inside a product. However, the rest of the product still requires solid engineering, user experience design, and all the other pesky things that are hard work, requiring human insights.

Clemens Adolphs

Why 95% of AI Initiatives Fail And Why Yours Doesn’t Have To

You've probably come across the striking headline that "95% of enterprise generative-AI pilots fail", with failure defined as "no measurable P&L (profit and loss) impact".

Read the full article here if you're curious about the research methodology and exact findings. Here, instead, let us focus on takeaways.

What goes wrong

There are a lot of reasons mentioned in the report. A few standout ones:

  • Poor integration into actual business workflows

  • Unclear success metrics

  • Top-line hype instead of concrete use-cases

Incidentally, we've written about all of these before (check out our archive).

It's a nice validation of our thinking.

How to get it right

To distill the whole article—with the pitfalls and the things that those who succeed with AI are doing right—into a single sentence, I'd say:

  • Start one pilot focused on a single, measurable back-office process and define the P&L metric before building.

No sweeping, company-wide digital transformation, no press-release-driven bravado, no chasing after shiny objects. Just one area where your well-paid knowledge workers (engineers, lawyers, copywriters, you name it) waste time on a back-office process that's not part of their value creation chain. Declare what success looks like and then go build and iterate.

Finally, the researchers found that your success rate increases dramatically if you bring in a specialized partner who can help you bridge the tech-business gap, rather than going it alone. If that sounds intriguing, hit reply and let's start a conversation.

Clemens Adolphs

We Need to Talk About Workslop

We've got a strong contender for word of the year: Workslop. Defined in this article by a research team at BetterUp Labs and the Stanford Social Media Lab, it refers to "AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task."

The article is well worth a full read. The pernicious thing about workslop is that, at first glance, it appears to be of high quality. A well-structured report, polished slides, etc. However, because it's not carefully reviewed and crafted, it actually creates more work for the other people in the organization, who now have to review and redo the sloppy work.

My own takeaways:

  • This is what you get when you vaguely demand that your employees use AI to boost productivity, without clear goals or measures on what that would entail.

  • It's also what you get when "visible activity" is valued above concrete outcomes—what Cal Newport calls pseudo-productivity. You reward people only for the number of emails they send? You'll get a whole lot of rapidly generated and ultimately useless emails.

  • Finally, the ease with which AI generates polished outputs can paper over processes that are inherently inefficient and shouldn't exist in the first place. Yes, thanks to AI you can rapidly generate all sorts of reports—if only at low quality—but is anyone even going to read them? (They'll probably read the AI-generated summary 😛)

What to do? As is often the case, the answer was there all along, made more acute by the generative AI revolution: empower people to own outcomes instead of outputs and hold them accountable for them. For example, don't ask for a report or presentation that compares different solution providers. Ask for a decision on which solution provider to pick. The person working on that task might ask AI to create an initial overview, but because they'll have to defend their choice, they won't just send it off to some poor coworker to sift through.

In short, don't just yell at your workers to stop generating AI workslop. Let them own the outcomes of what they create, and the problem will take care of itself.

Clemens Adolphs

AI Advice Must Come From the Trenches, Not the Sidelines

There's a common saying that I dislike: "Those who can't do, teach." Still, there is a kernel of truth in the underlying worry: if teaching isn't informed by practice, it tends to drift into the theoretical. Especially in fast-moving fields such as AI, with numerous pitfalls and nuances, only those who also practice are qualified to teach and advise.

How else can you help a company pick the correct path to pursue with their AI initiative if you haven't walked that path before? In slower-moving fields, you can rely on a wealth of accumulated knowledge, slowly absorb it, and then disseminate it: You don't have to lay brick or cut wood to be a successful architect; you can learn the relevant material properties from a textbook. After all, wood, stone, and concrete, among others, don't undergo dramatic progress every few months.

But with your AI projects and initiatives, you'll want to get advice directly from those who are building and doing, not just theorizing. This, of course, goes against the common industrial-scale consulting model: A hyper-effective sales team creates FOMO (fear of missing out) and/or promises untold riches if only you buy their services, then hands you off to their engineers, who are now tasked with doing the impossible (or delivering something over-built). Or a seasoned strategy consultant draws up all sorts of areas where your company would be guaranteed to benefit from AI, but their suggestions are completely unburdened by actual feasibility because that person has never gotten their hands dirty.

So when you're shopping around for advice on AI, seek out the nerds and hackers and doers. You might not get as polished a PowerPoint deck, but you might just save yourself a lot of headache and wasted effort.

Clemens Adolphs

When are you ready for AI?

It might be sooner than you think.

I've heard some department heads say that they want to explore AI, but only once they're ready. It's wise not to rush headlong into an AI initiative—that's precisely how you end up joining the 95% of AI projects that fail. However, there's also such a thing as waiting too long for a perfect world that never comes. Here, then, are a few checks for whether you're ready to start properly exploring AI:

  • You think the way your company does things could be improved, but at least there is a well-defined way your company does things. If everything is a hot mess, throwing AI at it will give you a white-hot mess.

  • Data isn't perfect, but it's available. Modern AI can handle messy data. What it can't handle is an absence of data. And even there, you might need less data than you thought. LLMs have already been trained on the entire available written word, so getting them to understand your specific data might not require gigabytes of it.

  • You know what you want to achieve. Nothing leads to project failure quicker than a lack of clarity on this point. There are dozens of ways a given problem can be tackled with (or without) AI, and each way has its own set of tradeoffs. Only with absolute clarity on what "solved" looks like can the tradeoffs be matched to the requirements.

How did you score?

  • 3 - Absolutely ready. Don't wait for perfect. Just start implementing your initiative. In-house, or with a trusted partner.

  • 2 - Almost ready. You just need a little nudge to remove the last hurdle before you can start implementing AI. Perhaps a quick call with us can help you get unstuck.

  • 0-1 - Not quite ready. There's still a lot of groundwork to lay before you can start fully adopting AI. These steps might be projects in and of themselves, such as building a solid data pipeline, mapping your business processes, and defining success criteria for an AI initiative. You might also benefit from ongoing coaching on this topic. Most importantly, you'd want to pick an area where we can work iteratively, in small steps, with low-hanging fruit and easy wins available.


At AICE Labs, we'd love to help you do AI right, no matter what stage of your journey you're in. Whether it's a custom project right away or providing you with the advice to get you there, we've got the experience to get you unstuck.

Clemens Adolphs

Problems Vs Solutions

Common advice for founders and other problem solvers is to fall in love with a problem, not a solution. This is meant to keep people from building "solutions" that nobody wants. These days, that means avoiding the temptation to shove AI into just about everything.

On the other hand, I find this advice incomplete. It's solid when technological progress has been slow and steady. But when novel technology arises, the game changes. You will want to take a really close look at new tools and ask: Where could I apply this?

Think about other monumental technological achievements, like electricity. It was so different from everything that came before that it made perfect sense to ask: What are all the marvellous things we could do with this?

Same with AI. It really is a monumental shift in what computers are capable of doing. Why not brainstorm a long list of ways to apply this in our lives and business? The initial advice, to avoid obsessing over the cool tech of the solution rather than the actual problem, is still sound. Once you have your brainstormed list of AI use cases, ask critically whether each one tackles a real problem: Does it cost you time, money, energy, or peace of mind? Have you tried solving it before but ran into challenges that AI could overcome? If not, move on, because a use case that's not for a painful problem isn't a real use case. But if yes, dig deeper and define what "solved" would look like, and then go try solving it (or hit reply and chat with us about solving it).

Clemens Adolphs

That’s Not an Agent

There are two places where I've seen people misuse the term "agent". One of them is benign, the other not so much.

First, the benign version. When I talk with potential clients, they're genuinely curious about AI but aren't necessarily familiar with all the fine distinctions. So they have an idea for where AI might help them, and they call that solution an "agent". That's not the place to barge in with a "Well, actually... ". What matters more is their intent and the problem they're facing, as well as what "solved" would look like for that problem. Once we design and present a solution, we'll explain that the final product may or may not end up being an agent. What matters is that the problem gets solved.

Now for the not-so-nice version: Folks who sell something software-related, knowing full well that it's not actually an agent, but they call it that to tap into hype and fear. I've seen simple automations ("Post a message to the team chat when a user files a bug") described as "our customer support agent". Ouch. If it's not a large language model (or multiple, at that) embedded in a system with a feedback loop, autonomously invoking tools to achieve an outcome, it's not an agent.
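
To make the distinction concrete, here is a bare-bones sketch of the agent shape described above. The call_llm function, tool names, and message format are placeholders I'm assuming for illustration, not any particular framework's API.

    # An "agent": an LLM in a feedback loop, choosing tools and reacting to results.
    TOOLS = {
        "search_tickets": lambda query: f"3 open tickets match '{query}'",
        "post_to_slack": lambda text: f"posted: {text}",
    }

    def call_llm(messages):
        """Placeholder for your model call; returns {'tool': ..., 'args': ...} or {'answer': ...}."""
        raise NotImplementedError

    def run_agent(goal: str, max_steps: int = 10):
        messages = [{"role": "user", "content": goal}]
        for _ in range(max_steps):
            decision = call_llm(messages)
            if "answer" in decision:                  # the model decides it's done
                return decision["answer"]
            result = TOOLS[decision["tool"]](decision["args"])     # the model picked a tool
            messages.append({"role": "tool", "content": result})   # feed the result back
        return "stopped: step limit reached"

A fixed "when a bug is filed, post to the team chat" script has none of this: no model, no loop, no autonomous tool choice. Useful automation, but not an agent.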

Why does it matter there, and not in a client conversation? Because if we're selling a service and positioning ourselves as experts, we have to be precise in our communications. We have to stand for what we advertise. You get what we say you get, and it won't be dressed up in colourful, hyped-up language.

Needless to say, if you're looking for someone to blow smoke and sound fancy, you can go somewhere else. But if you're after someone who'll solve challenging problems with what’s appropriate instead of what’s hip with the tech influencers, we're right here.

Clemens Adolphs

Don’t Distrust The Simple Approach

Phew, it's been a while. Summer, start of school, travels. Anyway.

I've recently come across multiple situations where simple is best, but gets overlooked in favour of something complex:

  • I've had discussions about diet and exercise. Simple (eat less, move more) is best, but people don't trust it, so they have a 12-step exercise routine and a complicated schedule of exactly which food group is verboten at what time of day.

  • I've had finance folks reach out about investing. Simple (index funds matched to your risk tolerance) is best. Still, people don't trust it, so they want a complicated, actively managed portfolio that gets adjusted every time the US president sends a tweet.

  • I've chatted about strategy and consulting with a friend. For exploratory work and new initiatives, the best approach is to just start and iterate on the feedback. But, of course, that just seems too simple, so instead we ask a big consulting company to make a deck with 60 slides, complete with SWOT analysis, 2x2 matrices, stakeholder personas, ROI projections, a RACI chart, a change management framework, risk register, industry benchmarking, and an executive summary that uses 'synergy' unironically.

We're all smart people here, so we have domain experience that's genuinely complex. That can bias us to distrust simple solutions. What we should adopt is a mindset that distrusts complexity and isn't ashamed to select and defend the simple approach.

Clemens Adolphs

Do You Have Experience With…?

It's a running gag among programmers that job descriptions, often created without input from technical team members, will ask for five years of experience in a technology that hasn't been around for even three years yet. And recently, nowhere has the fallacy in that been more apparent than with generative AI. In a sense, we're all newbies here. By the time you've become proficient in working with one particular model, the next one gets released. If we take this narrow, "HR needs to check the boxes"-style view of skill, then everybody is a bloody beginner at this.

This applies not just to individual job seekers, but to consultants and their companies as well. How many years of GenAI-productization experience does Deloitte have? Accenture? AICE Labs, for that matter? In every case, the answer is, "as long as those things have been around, which isn't really that long".

Explicit experience, measured in years, with exactly one piece of technology or its subdomains is a poor measure of the likelihood that the hire will get you what you need. What changes is the new and shiny tool they get to wield. What stays the same is their methodical approach (or lack thereof...) to the underlying challenges. At the end of the day, it's engineering: Solving complex challenges with the best tools available under multiple competing constraints. Once you've got a knack for that, the actual tools become much more fluid, and checking how much time a practitioner has racked up in tool A versus tool B becomes much less relevant.

For instance, take someone with twenty years of programming experience but no prior JavaScript knowledge, someone who has deeply internalized the principles of good coding, and give them a one-hour overview of the language. Then pit them against a programming novice who spent three months in an intensive JavaScript bootcamp. I'd bet money the veteran will write better JavaScript.

With AI, we certainly have lots of new kids on the block who poured hours into prompting and vibe coding tutorials. They'll all be outperformed by those with solid engineering principles.


A quick personal note: it's the end-of-summer, almost-back-to-school chaos, so while I try, just for myself, to keep posting regularly, it's a bit more challenging than usual. :)
