A Response to β€œAppearing Productive in the Workplace”

The Honest Productivity Paradigm

How AI Should Empower the Expert, Not Impersonate One. A balanced exploration of where AI genuinely shines, where it dangerously deceives, and why teaching tools beat replacing tools every single time.

10 minute read · PeopleWorks GPT Editorial · Education over Automation

A few days ago, an essay made the rounds in our circle: a sharp, painful, and absolutely necessary critique of how generative AI is reshaping the workplace. The piece, titled "Appearing Productive in the Workplace", by the author behind No One's Happy, describes a phenomenon many of us recognize but few have been brave enough to name: people producing work that looks expert without being expert.

We read it carefully. We agreed with most of it. And then we sat with an uncomfortable question:

🤔 If AI is so easy to misuse, what does responsible AI tooling actually look like?

This article is our answer. Not a defense of every AI product on the market (many of which deserve the critique), but a thoughtful framework for distinguishing tools that teach from tools that replace, and a transparent look at how we've designed PeopleWorks GPT to live on the right side of that line.

What the Critique Got Right

Let's start by honoring the source. The original essay identifies several phenomena that we, working inside this industry every day, have witnessed firsthand:

"Generative AI can produce work that looks expert without being expert, and the failure arrives in two shapes. The first is when novices in a field are able to produce work that resembles what their seniors produce, faster or more advanced than their judgment. The second is when people generate artifacts in disciplines they were never trained in."

– No One's Happy, May 2026

The author calls this "output-competence decoupling", and it is, in our view, the single most important phrase in the discourse around AI productivity today. For most of history, the quality of work was a reliable signal of the competence of the worker. AI has severed that connection. And when that connection is severed without the worker noticing, three things happen, all of them documented in research the original essay cites:

1. Sycophancy

Leading models are roughly 50% more agreeable than human reviewers (Cheng et al., Science, 2026). The tool affirms the user even when affirmation is unwarranted: a structural flattery built into the loss function.

2. Overconfidence

NBER and Harvard Business School studies confirm: novices using AI gain ~30% productivity while losing the ability to evaluate their own output. The work improves; the reviewer disappears.

3. Internal Slop

Documents balloon from one page to twelve. Status updates become summaries of summaries. The cost of producing words has collapsed; the cost of reading them honestly has not.

We have seen all three of these. We have lived them. And we want to be very clear: nothing in the rest of this article disputes any of this. The critique is correct. The danger is real. The reckoning is coming.

Our argument is different. It is this:

The problem is not AI. The problem is how most AI tools are designed. Tools that hide their reasoning create conduits. Tools that expose their reasoning create apprentices. PeopleWorks GPT is built on the second premise.

The Conduit Model vs. The Apprentice Model

The original essay introduces a haunting metaphor: under the current paradigm, the worker becomes a conduit, a pipe through which AI-generated work flows from the model to the recipient, without the human in the middle being able to evaluate what passes through them. The human, in the worker's own words, is "capable of routing the output to a recipient and incapable of evaluating it on the way through."

This is what happens when an AI tool answers questions but refuses to explain them. When it generates artifacts but won't show the reasoning. When it presents conclusions but hides the working. The tool, in this design, treats the human as the bottleneck: something to be routed around as quickly as possible.

There is an older model, older than computing itself, that gets this exactly backwards. It is the model of the apprenticeship. The apprentice does not produce inferior work because the master does it for them. The apprentice produces inferior work, watches the master correct it, and over many years becomes the master. The tool, in this design, treats the human as the entire point.

graph TB
    subgraph CONDUIT["🚫 THE CONDUIT MODEL (What's Wrong)"]
        Q1[User Question] --> AI1[AI Black Box]
        AI1 --> A1[Final Answer]
        A1 --> R1[Recipient]
        Q1 -.->|No learning| U1[User Stays Novice]
    end
    subgraph APPRENTICE["✅ THE APPRENTICE MODEL (PeopleWorks GPT)"]
        Q2[User Question] --> AI2[AI: Shows Reasoning]
        AI2 --> S2[Generated SQL Visible]
        S2 --> E2[Explanation of Choices]
        E2 --> A2[Answer + Understanding]
        A2 --> R2[Recipient]
        A2 -->|Learns SQL & Domain| U2[User Becomes Expert]
        U2 -->|Better Questions Over Time| Q2
    end
    style CONDUIT fill:#fef2f2,stroke:#dc2626,stroke-width:2px
    style APPRENTICE fill:#ecfdf5,stroke:#10b981,stroke-width:2px
    style AI1 fill:#fecaca,stroke:#991b1b
    style U1 fill:#fecaca,stroke:#991b1b
    style AI2 fill:#d1fae5,stroke:#065f46
    style S2 fill:#d1fae5,stroke:#065f46
    style E2 fill:#d1fae5,stroke:#065f46
    style U2 fill:#a7f3d0,stroke:#047857,stroke-width:3px

Figure 1. Two fundamentally different philosophies of AI tool design. The conduit hides; the apprentice teaches.

The difference is not philosophical. It is observable, measurable, and present in every screen of the user interface. Let us look at what it actually means in practice.

How PeopleWorks GPT Teaches

When a user asks PeopleWorks GPT a natural language question, such as "Which customers haven't placed an order in the last 90 days?", the system does not simply hand back a list of names. It hands back four things, each of which serves a specific pedagogical purpose:

1. The Generated SQL Query (Visible)

The exact SQL that was executed is shown to the user, in full, with syntax highlighting:

-- Generated from your question: "Customers without orders in last 90 days"
SELECT c.customer_id, c.name, c.email,
       MAX(o.order_date) AS last_order_date
FROM customers c
LEFT JOIN orders o ON c.customer_id = o.customer_id
GROUP BY c.customer_id, c.name, c.email
HAVING MAX(o.order_date) < DATEADD(day, -90, GETDATE())
    OR MAX(o.order_date) IS NULL;

This single design choice, showing the SQL, is not a technical detail. It is a philosophical commitment. It says: you are not a conduit. You are the analyst. The model's work is your work to inspect, modify, challenge, or reject.

2. The Tables and Schemas Selected (Visible)

The user sees which tables PeopleWorks GPT chose to query and why. This matters because, as the original essay correctly notes, "the schemas, and more importantly the objectives, were wrong in a way that would have been obvious to anyone with two years in the field." By making the schema selection visible, the domain expert can immediately spot when the AI has misunderstood the question.

3. The Prompt Engineering (Educational, Optional)

Users can opt into seeing how their natural language query was transformed into a structured prompt. This is part of what we call our AI literacy commitment: PeopleWorks GPT is not just a BI tool; it is a teaching surface for understanding how to communicate with LLMs effectively.
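As a hedged illustration of what that opt-in view could contain, here is the shape of a structured prompt: the user's question wrapped with schema context before it reaches the model. The template is invented for this article, not taken from the product:

# Hypothetical prompt template -- invented for illustration, not
# PeopleWorks GPT's actual prompt engineering.
PROMPT_TEMPLATE = """\
You are a SQL generator for a {dialect} database.
Use only these tables and columns:
{schema}

Question: {question}
Return a single SELECT statement. Do not invent tables or columns.
"""

prompt = PROMPT_TEMPLATE.format(
    dialect="SQL Server",
    schema=(
        "customers(customer_id, name, email)\n"
        "orders(order_id, customer_id, order_date)"
    ),
    question="Which customers haven't placed an order in the last 90 days?",
)
print(prompt)  # the opt-in artifact: how the question reached the model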

4. The Reasoning Chain (Available on Demand)

When the user asks "why this query and not another?", the system explains its choices: which tables were considered, why certain joins were preferred, what assumptions were made. The user is invited, gently and persistently, into the decision-making, not excluded from it.
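Put together, the four artifacts amount to a single inspectable response. A minimal sketch, with field names invented for this article rather than drawn from any actual PeopleWorks GPT API:

from dataclasses import dataclass
from typing import Optional

@dataclass
class TeachingResponse:      # hypothetical name, for illustration only
    sql: str                 # 1. the generated query, shown in full
    tables: list[str]        # 2. the tables and schemas that were selected
    prompt: Optional[str]    # 3. the engineered prompt (opt-in)
    reasoning: str           # 4. the choice-by-choice explanation, on demand

response = TeachingResponse(
    sql="SELECT c.customer_id, c.name FROM customers c ...",
    tables=["customers", "orders"],
    prompt=None,  # this user has not opted in to see the prompt
    reasoning=(
        "Considered invoices but ruled it out; the question is about orders. "
        "A LEFT JOIN keeps customers who have never ordered."
    ),
)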

💡 The pedagogical bet: After 30 days of using PeopleWorks GPT, a domain expert who started with zero SQL knowledge can read a moderately complex query and tell you whether the logic matches their question. They have not become a database engineer. They have become something more valuable: a domain expert who can supervise AI-generated SQL.

The Domain Knowledge Principle

There is a phrase we use internally at PeopleWorks that has become something of a creed: domain knowledge beats pure coding skills. The original essay actually validates this principle, though it does not phrase it the same way. The colleague described in the essay, the one who spent two months building a data system he could not explain, failed not because he could not code. He failed because he did not have the domain knowledge to evaluate whether the system he was building was the right one.

The cure for this failure is not less AI. It is more domain experts using AI in ways that surface, rather than hide, the decisions being made on their behalf.

Who PeopleWorks GPT Is Built For

This is why the product is explicitly designed to be used by people with deep business knowledge but limited SQL fluency: the operations manager who understands inventory turnover, the controller who knows the company's chart of accounts, the regional sales lead who knows which territory definitions matter. These users have judgment. What they lack is the syntactic bridge to their own data. PeopleWorks GPT builds that bridge, and shows them every plank.

Red Flags vs. Green Flags: How to Evaluate Any AI Tool

If you are a decision-maker (an executive, a department head, a team lead) evaluating an AI tool for your organization, here is a practical framework. Look for these signals when watching a vendor demo or running a pilot:

| Dimension | 🚩 Red Flag (Conduit Tool) | ✅ Green Flag (Apprentice Tool) |
| --- | --- | --- |
| Transparency of Output | Shows only the final answer; SQL, code, or reasoning hidden. | Shows the generated query, code, and reasoning by default. |
| User Skill Trajectory | User remains equally dependent after 6 months of use. | User can spot errors and refine prompts independently after weeks. |
| Handling of Errors | Silently regenerates; never admits limitations. | Surfaces ambiguity, asks clarifying questions, admits when stuck. |
| Verification Loop | "Trust the output"; no built-in way to validate. | Shows source data, intermediate steps, and result samples for verification. |
| Agreement Behavior | Always agrees with the user; flatters incorrect assumptions. | Pushes back on incorrect premises; surfaces alternative interpretations. |
| Documentation Style | Generates twelve-page docs from a three-sentence intent. | Matches output length to actual information density. |
| Domain Coupling | Works "in any domain"; generic and context-free. | Adapts to the user's specific business schema, vocabulary, and constraints. |
| Audit Trail | No record of what was asked, generated, or executed. | Full audit log of queries, prompts, sessions, and access patterns. |

✅ Use this table in vendor evaluations. Ask any AI vendor to demonstrate, live, how their product behaves in the right-hand column. The good ones will be excited to show you. The conduit tools will change the subject.

Five Principles of Honest AI Use at Work

For the individual contributor or knowledge worker, the discipline is simpler than it sounds. The original essay offers an excellent foundation, citing the University of Illinois and PLOS Computational Biology guidance. We extend it with five principles we teach our own team:

01. Verify Before You Pass Along

Never forward AI-generated work to a recipient without reading it yourself. If you cannot read it competently, you should not be the one sending it.

02. Use AI Where Feedback Is Fast

Drafting, brainstorming, summarizing, syntax help. Avoid AI where feedback loops are slow: architecture, strategy, irreversible decisions.

03. Never Ask AI for Confirmation

The model agrees with everyone. An agreement that costs the agreer nothing is worth nothing. Ask for objections, alternatives, or weaknesses, never validation.

04. Stay Inside Your Domain

Use AI to multiply your existing expertise, not to impersonate expertise you lack. If you don't know what "good" looks like, AI cannot tell you.

05. Preserve Your Judgment Muscles

Do at least 20% of your work without AI assistance. Judgment, like any other skill, atrophies without exercise. Protect the part of you the tool cannot replace.

How PeopleWorks GPT Embodies These Principles

We do not claim our product is perfect. We claim it is designed, from the ground up, to be the kind of tool the discipline above describes. Here is how that shows up in practice:

🔍 Full Query Transparency

Every generated SQL query is shown, syntax-highlighted, before execution. Users can edit, reject, or learn from it.

🎓 AI Literacy Built In

Optional explanations of prompt engineering, table selection logic, and LLM reasoning, teaching as you work.

🛡️ Domain-Aware Validation

Queries are validated against your actual schema before execution. No silent failures, no hallucinated tables.

📋 Complete Audit Trail

Every session, every prompt, every query logged. Compliance teams sleep at night. Auditors are welcomed.

🔄 Conversational Routing

The system knows when to re-execute and when to reuse. It does not generate new queries to flatter your sense of progress.

🌐 Universal Database Support

SQL Server, MySQL, PostgreSQL, Oracle, SQLite. The provider pattern means the same teaching surface across all engines.
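To make the last card concrete, here is a minimal sketch of a provider pattern for SQL dialects. The class names are ours, invented for this article; they are not PeopleWorks GPT's internals. The idea is that engine-specific syntax (DATEADD on SQL Server, INTERVAL arithmetic on PostgreSQL and MySQL) lives behind one interface, so the visible, teachable query stays the same everywhere:

# A minimal sketch of the provider pattern, with hypothetical class names;
# an illustration of the idea, not PeopleWorks GPT's actual code.
from abc import ABC, abstractmethod

class SqlDialectProvider(ABC):
    """One provider per engine; the teaching surface stays identical."""

    @abstractmethod
    def days_ago(self, days: int) -> str:
        """Engine-specific expression for 'now minus N days'."""

class SqlServerProvider(SqlDialectProvider):
    def days_ago(self, days: int) -> str:
        return f"DATEADD(day, -{days}, GETDATE())"

class PostgresProvider(SqlDialectProvider):
    def days_ago(self, days: int) -> str:
        return f"NOW() - INTERVAL '{days} days'"

class MySqlProvider(SqlDialectProvider):
    def days_ago(self, days: int) -> str:
        return f"DATE_SUB(NOW(), INTERVAL {days} DAY)"

def inactive_customers_sql(provider: SqlDialectProvider, days: int = 90) -> str:
    # Same query shape on every engine; only the dialect hook differs.
    return (
        "SELECT c.customer_id, c.name, MAX(o.order_date) AS last_order_date\n"
        "FROM customers c\n"
        "LEFT JOIN orders o ON c.customer_id = o.customer_id\n"
        "GROUP BY c.customer_id, c.name\n"
        f"HAVING MAX(o.order_date) < {provider.days_ago(days)}\n"
        "    OR MAX(o.order_date) IS NULL;"
    )

print(inactive_customers_sql(PostgresProvider()))

Swapping PostgresProvider for SqlServerProvider changes the emitted dialect and nothing else, which is what lets the same explanations travel across engines.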

The Pedagogical Pipeline

For the technically curious, here is the five-step NL2SQL pipeline that powers every PeopleWorks GPT interaction. Notice that every step is visible to the user; that visibility is the entire point:

graph LR A["πŸ‘€ User Question
Natural Language"] --> B["Step 1
Session Init"] B --> C["Step 2
Table Selection"] C --> D["Step 3
Schema Retrieval"] D --> E["Step 4
SQL Generation"] E --> F["Step 5
Query Validation"] F --> G["πŸ“Š Result + SQL +
Explanation"] G -.->|"User reviews,
learns, refines"| A style A fill:#dbeafe,stroke:#0066cc,stroke-width:2px style B fill:#dcfce7,stroke:#10b981 style C fill:#dcfce7,stroke:#10b981 style D fill:#dcfce7,stroke:#10b981 style E fill:#dcfce7,stroke:#10b981 style F fill:#dcfce7,stroke:#10b981 style G fill:#fef3c7,stroke:#f59e0b,stroke-width:2px

Figure 2. The PeopleWorks GPT pipeline. The dashed feedback loop is where learning happens: the part most AI tools omit.
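For readers who prefer code to diagrams, here is a minimal, self-contained sketch of the same five steps. Everything in it is invented for illustration: the toy catalog, the keyword match standing in for the real LLM table selection, and the hard-coded SQL standing in for generation. The point is the trace: every intermediate artifact is kept and returned, so the user reviews the reasoning, not just the rows.

# A toy sketch of the five-step pipeline; hypothetical names throughout.
from dataclasses import dataclass, field

@dataclass
class PipelineTrace:
    question: str                                    # Step 1: session init
    tables: list[str] = field(default_factory=list)  # Step 2: table selection
    schemas: dict = field(default_factory=dict)      # Step 3: schema retrieval
    sql: str = ""                                    # Step 4: SQL generation
    valid: bool = False                              # Step 5: query validation
    explanation: str = ""                            # shown with the result

CATALOG = {  # toy schema catalog: table name -> columns
    "customers": ["customer_id", "name", "email"],
    "orders": ["order_id", "customer_id", "order_date"],
    "invoices": ["invoice_id", "customer_id", "amount"],
}

def run_pipeline(question: str) -> PipelineTrace:
    trace = PipelineTrace(question=question)
    # Step 2: naive keyword match -- a stand-in for the real LLM selection
    q = question.lower()
    trace.tables = [t for t in CATALOG if t in q or t.rstrip("s") in q]
    # Step 3: retrieve schemas only for the selected tables
    trace.schemas = {t: CATALOG[t] for t in trace.tables}
    # Step 4: hard-coded here; the real step is an LLM call over the schemas
    trace.sql = (
        "SELECT c.customer_id, c.name FROM customers c\n"
        "LEFT JOIN orders o ON c.customer_id = o.customer_id\n"
        "WHERE o.order_id IS NULL;"
    )
    # Step 5: validate that every table the SQL references actually exists
    trace.valid = all(t in CATALOG for t in ["customers", "orders"])
    trace.explanation = (
        f"Selected {trace.tables} from the question's wording; used a LEFT "
        "JOIN so customers with no orders are not dropped."
    )
    return trace  # the user sees the SQL and the reasoning, not just rows

trace = run_pipeline("Which customers have never placed an order?")
print(trace.sql)
print(trace.explanation)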

📜 The Honest AI Manifesto

  1. The human supplies the judgment; the tool supplies the throughput. Not the other way around.
  2. Every artifact the AI produces must be readable, inspectable, and challengeable by the human who will be held accountable for it.
  3. An AI tool that does not make its users smarter over time is making them weaker. There is no neutral position.
  4. Sycophancy is a bug, not a feature. A tool that always agrees with you is a tool that cannot be trusted to disagree when it matters.
  5. Domain expertise is not a bottleneck to be removed. It is the only thing standing between your organization and the slop.
  6. Speed without verification is not productivity. It is the appearance of productivity, which is much more expensive.
  7. The firms still doing the work properly will be in a position to charge for it. The market will figure this out. Be on the right side.

A Closing Word on the Two Failures

The original essay distinguishes between two kinds of AI-enabled failure: novices in a field producing work that outpaces their judgment, and people generating artifacts in disciplines they were never trained in. The author observes that research has mostly measured the first, and that the second is the riskier: the one we are missing.

We agree. And we think the second failure is also the more preventable, because it has an obvious tell: the worker cannot answer questions about the artifact they produced. In any organization where there is still a culture of asking (why this schema? why these objectives? what did you consider and rule out?), the second failure is caught quickly. The first failure is harder, because the novice often has answers; they just don't have the right ones.

The tool cannot save you from the first failure. Only domain experience can. But the tool, if it is built correctly, can accelerate the acquisition of that domain experience rather than substituting for it. That is the bet PeopleWorks GPT is making. We invite scrutiny on whether we are pulling it off.

The future of AI in the workplace will not be decided by the tools that automate the most. It will be decided by the tools that teach the most.

See the Apprentice Model in Action

If you have read this far, you are exactly the kind of person we built PeopleWorks GPT for: someone who refuses the false choice between "use AI and lose your edge" and "ignore AI and fall behind." There is a third option. We would like to show you.

Explore PeopleWorks GPT →

Or reach out directly: peopleworksgpt@gmail.com