MINA · AI marketplace (standalone note)

High-risk AI needs explicit human control

This page pulls the principle out of the case study: what “trust” means when the model touches money, messages, and public listings—not a shorter disclaimer, but a different interaction shape.

Why “high-risk” is the right frame

In a parent marketplace, the scary failures are not typos: they are a wrong price going live, a buyer messaged without consent, or a payout path no one remembers opting into. Users do not experience that as a model error; they experience it as a loss of agency. That is automation anxiety in concrete form, not an abstract preference for "more transparency."

So the design question is not “how clever can the agent be?” It is: where must the human remain the accountable party, with legible boundaries and a way to back out?

What shipped in the interaction model

  • Preview before publish: drafts stay editable; nothing goes live until the seller has reviewed what the system proposed.
  • Explicit confirmations: high-stakes steps state what will happen next, in plain language, rather than hiding the effect behind a single "Continue" button.
  • Reversibility: where policy allows, prefer flows that can be undone or corrected without a support ticket; the product should not bet user trust on perfect first-shot AI. (A sketch of this flow follows below.)
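To make the shape of this concrete, here is a minimal TypeScript sketch of the three patterns above. Every name in it (ListingState, preview, publish, retract, the 24-hour window) is hypothetical and not MINA's actual implementation; the point is the structure: the model can only ever produce a draft, and every transition past the draft is an explicit human action.

```ts
// Hypothetical sketch: AI output lands as a Draft; humans own every transition.

type ListingDraft = {
  title: string;
  price: number;        // proposed by the model, always editable by the seller
  description: string;
};

type ListingState =
  | { kind: "draft"; draft: ListingDraft }               // where AI output lands
  | { kind: "previewed"; draft: ListingDraft }           // seller has seen it
  | { kind: "published"; draft: ListingDraft; at: Date } // live, but reversible
  | { kind: "retracted"; draft: ListingDraft };          // undone by the seller

// Preview is the only path out of "draft": nothing can publish unseen.
function preview(s: ListingState): ListingState {
  if (s.kind !== "draft") return s;
  return { kind: "previewed", draft: s.draft };
}

// Publishing requires an explicit confirmation that carries a plain-language
// summary of the effect, not a bare boolean from a generic "Continue" button.
type Confirmation = { summary: string; confirmedBySeller: true };

function publish(s: ListingState, c: Confirmation): ListingState {
  if (s.kind !== "previewed") {
    throw new Error("Cannot publish a listing the seller has not previewed");
  }
  console.log(`Going live: ${c.summary}`);
  return { kind: "published", draft: s.draft, at: new Date() };
}

// Reversibility: within a policy window, the seller retracts without a ticket.
const RETRACT_WINDOW_MS = 24 * 60 * 60 * 1000; // assumed 24h; policy-dependent

function retract(s: ListingState, now = new Date()): ListingState {
  if (s.kind !== "published") return s;
  if (now.getTime() - s.at.getTime() > RETRACT_WINDOW_MS) {
    throw new Error("Retract window closed; escalate rather than fail silently");
  }
  return { kind: "retracted", draft: s.draft };
}
```

The design choice the union encodes is that illegal transitions are unrepresentable: there is no function signature that takes raw model output to "published," so "the AI published it without me seeing it" cannot happen by construction.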

The through-line: legible limits beat silent cleverness. The goal is the same kind of trust consumer AI needs whenever money or reputation is on the line: bounded automation, not magic.

Project context

I developed these patterns on MINA, an AI-assisted marketplace for parents (San Francisco mom communities, Canada App Store). For screens, metrics, and Photo-to-Publish detail, see the full write-up.

Open MINA case study · AI trust section →