SideDrawer Blog

AI at Work: What It’s Good For (and What It Isn’t)

Written by Ryan Guichon | Sep 18, 2025 8:00:00 PM

The rise of tools like ChatGPT has sparked a familiar tension in workplaces: unbounded productivity potential on one hand, and deep mistrust on the other. A recent conversation I had with colleagues captured the essence of this debate perfectly:

  • “ChatGPT can be wrong with authority, even making up sources.”

  • “But I can’t imagine being as productive without it ... it’s like search engines all over again.”

  • “The risk is the unknown unknowns ... we won’t know it failed us until it’s too late.”

This exchange raises the question: what exactly are today’s AI technologies good at, and where do they fall short?

What AI Is Good At

1. Acceleration of Workflows

LLMs shine when used as accelerants. Drafting, summarizing, reformatting, brainstorming, and exploring new angles can be done at speeds that rival or exceed a team of human researchers. Think of it as intellectual scaffolding. AI can build the outline fast, and you can decide which parts need reinforcing.

2. Pattern Recognition Across Text

AI is effective at spotting themes, clustering ideas, or identifying stylistic or linguistic patterns across large sets of unstructured text. This makes it useful for things like customer feedback analysis, regulatory text comparisons, or contract reviews.

3. Generating Options, Not Answers

When the goal is idea generation (e.g., marketing slogans, strategy options, or user flows), AI is an excellent partner. It won’t always deliver the final product, but it expands the creative aperture in ways humans alone often can’t at scale.

4. Productivity Parity in Competitive Environments

Like the internet and search engines, AI now forms a baseline productivity multiplier. It accelerates research, shortens time to first drafts, automates low-value, highly repetitive tasks, and dramatically speeds access to information. Teams that don’t adopt it risk falling behind peers who are already compounding efficiency gains.

What AI Is Bad At

1. Fact-Checking and Source Validation

LLMs aren’t designed to verify information. They generate likely answers, not necessarily true ones. Without rigorous fact-checking, AI can output polished falsehoods, sometimes citing “proven statements” that are anything but. Blind trust in the outputs is reckless.

2. Handling Unknown Unknowns

Humans can sense when something “feels off.” AI can’t. The real danger lies in subtle errors that slip through unnoticed until they cascade into bigger failures, especially in domains like compliance, finance, or medicine, where precision is non-negotiable.

3. Contextual Judgment

AI lacks lived experience and professional accountability. It cannot weigh ethical trade-offs, anticipate second-order consequences, or apply professional responsibility in the way engineers, doctors, and lawyers are trained to.

4. Overreliance and Dependency

While AI is a productivity multiplier, becoming dependent on it without safeguards is risky. Acceptable performance may not require AI, but peak performance often will. The danger lies not in use but in misuse: applying it to tasks it is fundamentally unsuited for.

Using AI Responsibly

The conversation also surfaced an important middle ground: adopting best practices and minimum acceptable standards for AI use. Some emerging norms include:

  • Always Demand Sources: Configure outputs to include citations, and verify them.

  • Audit Outputs: Apply human oversight, especially in regulated or high-stakes domains.

  • Define Boundaries: Know which tasks are AI-appropriate (summarization, drafting) and which are not (compliance decisions, critical fact-finding).

  • Embrace Professional Responsibility: Like engineers of the past two centuries, today’s AI practitioners must pair ingenuity with accountability.

Closing Thought

AI is neither savior nor saboteur. It’s a tool, one with immense leverage but equally immense pitfalls if misapplied. Overreliance isn’t the enemy; misuse is. By understanding where the technology shines and where it fails, we can build confidence in integrating AI into our daily workflows without abdicating responsibility.

These reflections are provided for general informational purposes only and do not constitute professional, legal, or compliance advice. SideDrawer Inc. and the author disclaim liability for any actions taken based on this content.