Designing for AI Restraint

Design·February 18, 2026

A concerning trend I’m seeing in design is the rapid, pernicious spread of the “AI-first” mindset. AI is an amazing execution tool, but it should amplify thoughtful design and strategy instead of being a default. I’m legitimately concerned that with AI design, we’re indexing on innovation theater.

I recently took an intentional break from full time work after choosing to leave my last role, and that distance helped me realize something I hadn’t fully articulated before: I’d been feeling increasingly disconcerted about where “design” seems to be heading.

Now that I’m exploring what’s next, I’m noticing the same red flags in job postings. Many companies proudly describe themselves as AI-first or AI-native, expecting designers to lean heavily into emergent tools to stay ahead of the curve.

As a tenured designer who has built their career fixing massive gaps in product experiences, I’m shaking my head and making this face 😬. A lot of my career has been spent doing what I call “clean up crew” work—the companies and teams that hired me did so because they moved too quickly and shipped bad experiences that tarnished their hard-earned brand equity. Then they needed folks to come in and make the work better for compliance or churn reasons. I’ve seen it countless times before, and it’ll definitely continue to happen without experiential guardrails and constraints.

To be clear, I’m not anti-AI—far from it, actually. AI powers many of the conveniences we rely on every day—fraud detection, mapping, recommendations, search. It has quietly been driving most of the products and features I’ve worked on over the last 6-8 years, and I’ve been immersed in this space, creating outputs and using tools that use AI. ChatGPT recently helped me diagnose a time-sensitive issue with the confusing hydronic boiler at my house based on a single photo. I leverage Cursor to create realistic concepts to communicate with my product and eng teams, cutting the time by more than half compared to designing and wiring everything in Figma.

If you like using Google Maps, Netflix, Spotify, Instagram, or any other digital tool—you too love AI, because, let’s be honest, that’s what powers the recommendation engines that create dynamic content. We’re all using it, whether we realize it or not. It’s not about being pro-AI or anti-AI at this point; it’s about being ethical and conscious consumers of this energy-hungry technology.

So while the industry constantly and insufferably talks about “taste” around what will differentiate AI tooling in design—I’ve got spicy takes about this as well—what I think is overlooked and more interesting is the discussion of the cost of using AI indiscriminately and our ethical responsibility of use.

The Cost of “AI Everywhere”

As I write this, my home state of Colorado is experiencing one of its lowest snowpacks in decades. Reservoirs are already low, and projections for this summer include heat, drought, and water restrictions. And an increased risk of wildfires! Yay. And at the same time, new massive data centers are being built to support growing AI demand.

And as we know—we know this, right?—prompting large language models requires significantly more processing power than many other applications. Poorly considered AI workflows compound that cost at every step.

The Prompting Efficiency Problem

Unfocused AI usage increasingly feels like the digital equivalent of junk mail: high volume, low signal, easy to discard, and environmentally wasteful. Over the last few months at work, I’ve seen endless vibe-coded concepts. They’re neat, but they over-index on visual polish and speculative interactions. What they lack is any real user need, any evidence of behavioral adoption or desirability, and any assessment of how they establish trust and comfort in daily scenarios.

As designers who claim to think in systems, we should hold ourselves to a higher standard.

This doesn’t just include prompt tools; it also includes participating in shipping AI features that regularly and consistently invoke AI when nobody has asked for it. I recently worked on an embedded, always-on AI tool that didn’t offer much thoughtful optionality and wasn’t opt-in by default. Each call—dynamically repopulating relevant prompts and suggestions—costs money and energy. At this point in my career, it’s difficult to defend wasteful design when the standard has long been thoughtful decision making.

Most people aren’t skilled prompters—including me, apparently. I recently tried Figma Make and had to re-prompt it 5 times to work effectively with our connected design system. The output was kind of interesting, but it didn’t yield anything code-ready. Cursor was significantly more effective, and I was honestly impressed with it. Either way, the work wasn’t tied to an actual project—it was a future-looking exploration with no urgency, nothing I couldn’t have done on my own time. The company didn’t save any time, money, or labor, and there was no meaningful output.

When designers embed AI into experiences or explorations without clear purpose, we multiply those costs at massive scale. We’re responsible for the data center that gets built to “summarize this document” unnecessarily, sucking up rural groundwater and draining low-income residents’ wells. When we prompt a concept or an idea, it might feel neat to have “built” something, but it also has a downstream impact—you just don’t feel it immediately.

Designing With Restraint

Personally, I think about using AI in my design workflows the same way I think about other finite resources. I try not to waste water, I reduce unnecessary consumption, I reuse when possible, and I make choices that prioritize intentional use over convenience. At the end of the day I still use water, gas, and electricity, and I still throw away garbage. It’s going to happen, but I’m thoughtful and mindful about it.

That’s how I want to approach AI in design: not as an infinite default, but as a powerful tool that deserves thoughtful application.

Before using AI, we should ask: