Google DeepMind: Generative & Adaptive AI

Where’s the work? It’s hiding. Since I can’t share what I’ve been working on at Google or Meta in any real capacity due to NDAs, I’ve decided to share instead a post on what I have learned over the last few years of designing for AI experiences…

AI takes up an unreasonable amount of my life. I spend my 9 to 5 deep in the weeds designing the UX for AI frameworks and experiences, only to spend my 5 to 9 on aifeed.fyi reading whatever newly released newness just dropped as a constant reminder that the shelf life of expertise is now measured in weeks, not decades.

But this is ok. This is what designing at the edge of an emerging field is like. Through some interlock of passion, foresight, and dumb luck, this is all very familiar to me. I entered UX design with a degree in Design Media Arts before the career even had a name (ok, Don Norman had coined the phrase, but it was by no means an industry standard). The iPhone had just been released, the internet was still cool, and there was a genuine sense that what we were building was going to make life better. It was the Wild West, and it was fun.

Cut to today. It is still fun, but the stakes feel higher now. Perhaps they always were and we just didn’t have the data yet to understand the consequences of our actions. That is what makes designing AI such a tender endeavor. Back then, the challenge was spatial: how do we engage with information on a four-inch screen? Now we’re facing something more existential: how do we design a relationship with a machine that thinks, or at least puts on a very convincing show of thinking? We have shifted from designing interfaces to designing intent, and the margin for error has moved from a broken layout to a broken sense of truth.

So What’s the Score?

The horses have bolted and the barn is empty. Essentially, we are left trying to build not just tools, but predictive beings (albeit predictive beings with a penchant for high-confidence errors and a pathological need to tell us exactly what we want to hear). This is hardly surprising, given that humans have demonstrated time and time again that they tend not to care whether what they are being told is accurate, so long as it is what they want to hear.

This shift has caused me to grapple with what we are actually building and what is at stake. The current discourse is caught in a collective fever dream, oscillating between characterizing AI as our mechanical savior and our digital downfall, with little room for the messy reality. AI was built by training it on all we have to offer: our genius, our bias, our weirdness, and our evil. Then we gave it a massive amount of computing power and ill-defined parameters, and plugged it in. What we are contending with is not a misrepresentation or distortion of ourselves: it is an amplification.

The Case for Visible Seams

Back in the early 2010s, UX design was concerned with how to make technology disappear. We counted clicks, skeuomorphized every button, and treated the bottom of the screen like the edge of a cliff. We were obsessed with making the path from impulse to action as frictionless as possible. “Seamless” was the word of the day. Every. Damn. Day.

The problem here is that you don’t actually want an amplifier to be seamless. You want to see the seams. You want to know where the machine ends and your own judgment begins. My philosophy toward designing for AI is less focused on speed and more focused on legibility. My task is to augment text-heavy responses with UI elements that are intentional: not just aesthetic choices, but tools for interrogation. They allow a user to poke at the logic of the machine rather than just swallowing a block of generated text.

Emotional Intelligence and The Grid

I am shifting from designing interfaces to designing procedural transparency. Instead of a UI that hands you an answer on a silver platter, I am building systems that facilitate a dialogue. We are designing the guardrails for a world where the AI handles the cognitive heavy lifting, but the human retains the final, intuitive decision.

This is where emotional intelligence meets the grid. It is the realization that a user does not just need an output; they need to feel in control of the process. We need to treat the user not as a consumer of answers, but as the pilot of an incredibly powerful, more-than-slightly erratic tool.

So, I keep my head down and continue to manage the volume. The goal is no longer to make the technology vanish into the background. The most responsible thing I can do is make it visible. I want to build interfaces that admit when they are guessing, and layouts that invite a second opinion and welcome scrutiny. This is not a retreat from innovation; it is an advancement into maturity. We are finally moving past the novelty of what these models can do and starting to figure out what they should do.

An AI Perspective from an NDA-locked Designer