Turn Vague Mandates into Clear Team Goals That Stick
Transforming ambiguous directives into actionable team objectives remains one of the most persistent challenges in modern project management. This article draws on expert insights to present twenty-five practical techniques that replace vague instructions with specific, measurable targets. Each method offers a concrete framework teams can implement immediately to eliminate confusion and drive consistent execution.
Codify a Five-Minute Intake Spec
I run VIP Cleaners and Laundry in San Diego and I've spent 25+ years turning "make it better" into something my team can execute--because in garment care, vague equals ruined fabric or angry customers. My move is to force a 5-minute "intake spec": What exact item(s) are we improving, what defect are we removing (stain/odor/yellowing/fit/feel), what "done" looks like on inspection, what risks are acceptable (none on dye/trim), and what the scope is (clean only vs clean + preservation + documentation).
I write it down as a simple job card: fabric + problem + acceptable methods + stop conditions. Then I choose the process before touching the garment (spot test, gentlest viable chemistry, escalation path), and I add controls like barcode tracking and photo documentation so "better" is verifiable, not a vibe.
One time this prevented major rework was a vintage silk wedding gown that came in yellowed with hem stains. "Make it better" could've meant aggressive whitening and wrecking the silk, so I scoped it as: improve brightness without weakening fibers, remove hem staining as much as safely possible, no texture change, and stop immediately if dye shifts--documented with before/after photos for approval.
Because that scope was clear, we didn't have to redo anything or argue about expectations later. The bride saw the photos, understood the safe limits, and the gown came out looking like new without us chasing an impossible "perfect" that would've damaged it.
Declare What You Will Not Do
The standard advice is to ask "what does 'better' mean to you" and write down the answer. That's how you get sent back twice. The person giving a vague mandate doesn't know what they want. Asking them to define it just shifts the homework.
What I do instead is invert the question. Don't ask what better looks like. Ask what we want to not do. "Make the website better" becomes "are we redesigning the homepage or are we fixing the part that isn't converting?" That single question forces a choice. The vagueness collapses.
The example that saved a quarter of work: I got a "make the marketing site better" ask from a leadership offsite. Instead of scoping a redesign, I pulled analytics for ten minutes and asked, "is the goal a prettier homepage or a higher conversion on the demo request page?" The demo page bounce was 78%. The homepage was fine. We agreed in writing: scope is demo page only, success is bounce under 60%, anything else is explicitly out of scope (which I had to fight for, twice).
We hit that target in three weeks. A redesign would have eaten two months and rebuilt pages nobody had a problem with. Honestly, the line I now write at the top of every brief is one sentence: "what are we deliberately not doing here." That has saved more rework than any goal framework I've tried.

Map Friction Across the Funnel
I'm a Webflow designer/dev and founder of Webyansh, so I get the "make it better" mandate constantly. I turn it into a single-page brief built around a funnel: what exact user journey is broken (landing -> key page -> CTA), what's the one primary conversion action, and what must not change (brand rules, CMS structure, SEO-critical URLs).
My trick is a 30-minute "UX friction dump": I ask the client to screen-record themselves trying to complete 3 tasks (e.g., find a resource, understand the product, contact sales), then we label every snag as either content, design, or dev. That gives us scope: "fix navigation + messaging + page speed" isn't vague anymore--it becomes "reduce steps to CTA, tighten hierarchy, rebuild components with a style guide, and ship with an SEO checklist (canonicals/structured data where needed)."
This saved rework on the Hopstack redesign. Their ask was basically "modernize," but the real goal was converting high organic traffic from a huge resource library without dropping SEO, so we locked scope around (1) minimal design for performance, (2) clean Webflow build, and (3) safe CMS transfer of all resources--plus an explicit "no heavy animations" rule to protect load speed.
Because we wrote "definition of done" around the resource experience (findability + filtering) up front, we added advanced filtering with custom code during the planned build instead of getting a late "can you make resources easier to browse?" change request after launch. That one scoped decision prevented the classic round of post-launch restructuring that breaks URLs, redirects, and rankings.

Ask What Improves in Two Weeks
Vague mandates tend to create rework because everyone builds toward a slightly different version of "better," so the first move is translating that into something observable. The approach that held up was asking a simple follow-up and not moving forward until it was answered clearly: what would we point to in two weeks and say this is improved? That question forces the outcome to become visible, whether it is a measurable metric, a completed deliverable, or a specific change in how something performs. From there, the scope gets defined by what is required to reach that exact point and nothing beyond it.

In a business like Equipoise Coffee, where consistency is tied to repeatable outcomes, that clarity prevents teams from adding unnecessary steps that do not change the final experience. One situation where this avoided rework was a process update that initially sounded like a general efficiency improvement. Once the outcome was defined as reducing turnaround time by a specific margin, the work narrowed quickly and stayed focused. The result was completed faster, and nothing had to be undone, because everyone was building toward the same finish line from the start.

Anchor Goals to Baseline and Decision
I've been running Latitude Park since 2009 and "make it better" is basically the agency's native language -- clients say it constantly about ads, websites, landing pages, you name it.
My first move is always to ask: "Better compared to what, and what decision does that change?" That one question forces whoever's asking to name a baseline and a business action. Vague requests can't survive that question.
Real example: a franchise client told us to "fix their Meta campaigns." Instead of touching everything, we asked what "fixed" looked like -- and they admitted the real problem was franchisees couldn't tell which locations were profitable. That reframed the entire scope to standardizing UTM tagging and building one clean dashboard, not rebuilding creative or restructuring campaigns. We shipped that fast, and they never asked us to redo the reporting layer later because the goal was locked in before we touched anything.
The trap I see constantly is agencies (including early me) jumping straight to execution on vague briefs. You burn hours producing work that gets killed because the stakeholder finally got clear by *seeing what they didn't want.* Nail the outcome statement first -- one deliverable, one audience, one measurable result -- and protect everyone's time.

Nail One Person's End Condition
I've been building Netsurit for 30 years, and "make it better" is probably the most dangerous phrase in business. It sounds collaborative but it's actually a blank check for wasted effort.
The first thing I do is flip the vague mandate into a people question: who feels the pain, and what does their day look like when the problem is solved? When we worked with Novo Nordisk on their pharmacy restocking process, "make it better" could have meant anything. We anchored it to one specific pain point -- pharmacies waiting over 48 hours for order updates -- and defined done as: the pharmacy gets a response in minutes, automatically. That constraint killed a dozen unnecessary conversations about dashboards, reporting formats, and integrations we didn't need yet.
The scope discipline that saved us rework there was separating "what hurts now" from "what would be nice." We built the Power Automate workflow first, got the 3-minute response time working, then added the Power BI visibility layer. If we'd tried to solve everything at once, we'd have rebuilt the whole thing twice.
My rule: before any work starts, get agreement on one sentence -- "We'll know this is done when [specific person] can [specific action] without [specific friction]." That sentence is your scope. Everything outside it is phase two.
Lock KPI Conversion and Guardrails Fast
I've heard "make it better" for 18 years running Foxxr (contractor-only SEO/lead gen since 2008). I translate it into a one-page "definition of better" by forcing three decisions: the hard business outcome (qualified leads/booked appointments/revenue), the single primary conversion (call or form), and the scope boundary (what we will *not* touch this cycle).
Then I lock it down with a measurement plan that kills vanity debates: what we'll track (traffic is fine, but conversion rate + call/form completions is the point), where it's tracked (GA4 goals + call tracking), and the time window to judge. If the mandate is vague, I ask one uncomfortable question: "If we only improve one thing in 30 days, what would you bet your own money on?"
One time this prevented rework was a mold remediation SEO project where "make the website better" could've turned into a redesign rabbit hole. We defined "better" as: more calls from specific service keywords + stronger local visibility, and scoped work to GBP optimization, local listings/reviews, technical SEO fixes (mobile usability/core web vitals), and one research-based blog/month--explicitly *not* a full redesign.
Because the goal and scope were written upfront, we didn't waste cycles arguing about aesthetics (hero image swaps, new fonts, etc.) every week. The client could see the exact tasks/hours in the portal and the outcomes in the metrics, so feedback stayed anchored to "does this produce qualified leads?" instead of "do we like it?"
Surface a Non-Negotiable Constraint
Coming from both event production and investment, I get handed "make it better" constantly -- from sponsors, from family office clients, from capital raisers. The first thing I do is flip the vague mandate into a single, non-negotiable constraint: who specifically needs to walk away with what outcome?
When I was building out the Jets & Capital event format, the early feedback was just "the networking needs to be better." Instead of overhauling everything, I pushed back with one question: better for whom -- the fund managers or the allocators? That forced us to define a concrete scope: ensure at least 85% of attendees are actual allocators, not just people looking to raise. That one constraint shaped the entire vetting process and avoided us reworking the invite structure multiple times later.
The lesson is that "make it better" is really a question disguised as a direction. Your job is to surface the constraint hiding underneath it -- budget, audience, timeline, or outcome -- before you touch anything.

Write Indicators and Red Lines Note
I'm a fractional CMO + GTM strategist, and "make it better" is basically my day job--especially with SEO/content ecosystems where everyone has a different definition of "better." I force clarity by turning the mandate into a one-page "Outcome + Evidence + Constraints" note: what outcome we're driving, what proof will show it happened, and what's explicitly out of scope.
My go-to is a 10-minute triage: "Better for who?" (buyer, sales, support, Google/AI Search), "Better at what moment?" (discover, compare, convert), and "Better measured how?" (one primary metric + one guardrail). Then I translate that into a small pilot on a single high-impact journey/page, with agreed reporting (e.g., conversion rate + engagement + assisted leads) so we don't argue later about what "worked."
One time this prevented rework: a fintech client wanted "better content" because leads felt soft, and the first instinct was to rewrite everything. Instead, I scoped it to one outcome--improve AI/Google discoverability for a specific state expansion--and one evidence set--Search Console visibility + lead inquiry quality from that topic cluster--while making "brand rewrite" explicitly out of scope.
That clarity let us run a tight, repeatable publishing cadence aligned to the expansion plan (not random blog ideas), and it avoided the classic redo where you ship 20 posts, then realize sales wanted bottom-funnel comparisons and support wanted self-serve FAQs. We agreed on the finish line up front, so we didn't pay for the same work twice.

Tie Effort to a Commercial Return
I'm a CEO of a fractional growth partner agency and I'm a salesperson at heart, so "make it better" always gets translated into a commercial outcome. I start by asking three things: what's the business constraint (pipeline, close rate, capacity, retention), what decision should this improvement unlock, and what metric will tell us it worked (even if it's a proxy like qualified conversations or sales cycle friction).
Then I force a one-page "definition of better": target audience + single promise/message + offer/CTA + channel + tracking. I also write an explicit "not doing" list (no redesign, no new tools, no extra channels) so the scope doesn't silently expand while everyone thinks they're being helpful.
One time a client came in with "make our marketing better" and the team's instinct was to refresh creative and launch more campaigns. In discovery we found the real pain was sales saying leads were "bad," so we scoped it to messaging + intake: tighten the promise, align form questions to qualification, and set a clear handoff definition so sales knew what to do with each lead type.
That prevented the classic rework where you run ads for a month, then get told "these leads suck," and have to rebuild everything. Because we agreed upfront on what a qualified lead is and where it gets tracked, we adjusted messaging and routing once instead of redoing the whole funnel.
Force Whom, When, and Proof Upfront
After 25 years in senior global leadership at HP and now running M&A integrations, "make it better" is probably the most dangerous phrase I hear. It feels like direction but it's actually just pressure dressed up as a mandate.
My first move is always to ask: *better for whom, by when, and how will we know it worked?* Those three questions expose whether someone has a real goal or just a feeling of dissatisfaction. I push for a measurable outcome tied to a 90-day window, not a vague aspiration. If someone can't answer those three questions, the "mandate" isn't ready to execute yet.
A real example: I was brought into a post-acquisition situation where leadership wanted to "improve how the team operates." Before touching anything, I insisted we define what a successful handoff looked like in concrete terms -- specifically, which decisions the acquired team could make independently, and within what timeframe. That single scoping conversation exposed that three senior people had completely different definitions of "success." Without it, we would have built three different solutions simultaneously and called it integration.
The 1-3-1 framework I use with teams applies directly here too. One clear problem statement, three real options, one recommendation. It forces the person giving you the vague mandate to commit to a direction before your team burns cycles on the wrong one.

Refuse Start Until Scope Is Written
The first thing I do with a "make it better" mandate is refuse to start work until we have answered three questions in writing. Better for whom, measured how, and what would not change. Until those are filled in, every minute of building is a coin flip on whether the result will count.
The example I keep using internally at GpuPerHour is when our team got a request to "make the customer dashboard better." That phrase could mean a redesign, a new chart, a faster page load, more granular cost breakdowns, or all of the above. Each version is months of work that lives in different parts of the codebase. So before any engineer touched it, we wrote a one-pager that said: better for ML and AI customers running multi-day training jobs, measured by how quickly they can answer "am I on budget and is my run healthy" without leaving the page, and explicitly not changing the underlying data model or the admin views. Three lines.
The "what would not change" line is the one that prevented rework. It pulled three quietly assumed redesigns out of scope before they could become work. Two of those would have rippled into our billing pipeline and forced backend changes that would have taken longer than the dashboard work itself. By naming them as out of scope at the start, we avoided a week of design discussion and a much longer round of engineering rework.
The dashboard rebuild itself shipped in about three weeks instead of the eight to ten we had braced for. The customer-facing metric we cared about, the time to answer the budget and health question, dropped from a multi-click hunt to a glance. Nothing else broke.
The lesson is that vague mandates do not need more vision. They need a sharper "no." Writing down what is explicitly not in scope is usually more useful than writing down what is.
Faiz Syed, Founder of GpuPerHour

Eliminate Specific Risk with Verifiable Records
I run ITECH Recycling in Chicago, so I'm constantly translating "make it better" into something auditable: secure, compliant, sustainable, and easy for the client's team to execute. I turn vague mandates into a one-page outcome statement with (1) what data risk is being eliminated (NIST 800-88 wipe vs physical shredding), (2) what documentation they need (chain of custody + certificates), and (3) what equipment/locations are in scope.
My go-to move is a 15-minute "risk + inventory" intake: what kinds of data are on the devices (HIPAA/GLBA/FISMA concerns), what's the failure mode they're trying to avoid, and what "done" means in proof (serial-level reporting, wipe logs, destruction certificates). Then I lock it into a simple decision: reuse/resale candidates get ADISA-certified WipeOS sanitization; anything high-risk or end-of-life gets hard drive shredding.
One time a client essentially said "make our hardware retirement process safer." Before we touched anything, we scoped it to: all laptops/desktops/servers in two storage rooms, wipe where possible, shred any failed or non-wipeable media, and deliver documentation their compliance person could file without rewriting it. That prevented rework because IT originally assumed "delete/reimage is fine," but compliance needed NIST 800-88 proof--so we didn't end up re-collecting devices later to redo sanitization.
If you want a practical template: write the mandate as "Reduce [specific risk] to [verifiable standard] for [defined asset set] by [date], producing [named artifacts]." In my world, the artifacts are the difference between "we think it's better" and "we can prove it's better."

Draft Auditor-Ready Results and Perimeters
I've spent 20-25 years in regulated systems (CSV/CSA/CQV, GxP quality, and security) and now own the product roadmap at Valkit.ai, so "make it better" is basically my day job. I translate it into one measurable outcome + one boundary: what artifact changes, for which users, and what regulator/auditor would accept as "better."
My move is a 30-minute "definition interview" where I force three outputs: (1) the user decision we're trying to make easier (ex: approve a release, close a deviation, pass an audit), (2) the evidence that proves it worked (ex: traceability completeness, e-signature-ready record, audit trail present), and (3) the "not doing" list (what we explicitly won't touch). Then I write it as an acceptance test in plain English: "A QA reviewer can open one record and see requirements -> risk -> tests -> evidence -> approvals with no side spreadsheets."
A real example: early on at Valkit.ai, we got a fuzzy mandate to "make traceability better." Instead of building more UI, I scoped it to one outcome: "automated, audit-ready traceability from URS to test evidence with version control and e-signatures, including Jira/Azure DevOps items as first-class links." That immediately narrowed the build to master data + relationship rules + exportable views, not a giant "traceability module."
That saved rework when customers started asking about Annex 11/Part 11 expectations--because we'd already defined "better" as auditor-consumable evidence, not "more fields." We didn't have to unwind a bunch of features later; we just extended the same evidence chain to additional workflows instead of redesigning the core.

Set a Time-Bound Done Target
I deal with "make it better" constantly in HR -- usually it sounds like "fix our culture" or "improve our onboarding." My first move is always to ask: better for whom, and what does done look like? That single conversation converts a feeling into a scoped project.
The tool I use is simple: identify the problem, define one measurable outcome, and agree on what's explicitly out of scope. When a client told me to "fix their onboarding," I translated that into: "New hires should reach full productivity within 90 days, measured through their first performance check-in." Everything outside that window was a phase two conversation.
That scoping call prevented real rework. The client originally wanted to overhaul their entire training library simultaneously. By anchoring to that 90-day outcome, we focused only on week-one orientation and the 30-day manager touchpoint -- and skipped rebuilding content nobody was using yet. We would have wasted weeks otherwise.
The S.M.A.R.T. framework is what I hand clients when they push back on narrowing scope -- specifically the "time-bound" piece. A goal without a deadline stays vague forever, and vague goals get reworked endlessly because no one agreed on what "finished" meant in the first place.

Demand an Objective Success Measure
As a CEO, if you give your team a vague order like "make it better" and they come back with "we're going to fix this feature," your expectations probably aren't aligned.
Most teams will start tweaking a project or process, but at MKB Media Solutions we've found that real improvement begins when you move from how people feel about things (subjective) to what the data says about the problem (objective).
Therefore, we now always ask our clients and potential clients, "What measure will tell us that we were successful?"
In my last client engagement, the client said their outreach wasn't working well. We defined "better" for them as a twenty percent increase in backlinks.
That definition spared both sides months of rework.
When handed a vague mandate, every professional should stop before starting and develop a definition of "done."
Pinning the request to a measurable outcome protects both your time and your credibility as a professional.

Bake Compliance Into a Simple SOP
I run environmental ops for MLI Environmental and Maine Labpack, so "make it better" usually means "reduce risk and pass an inspection without drama." I translate it into: what waste stream, what site areas, what regulation driver (EPA/DOT/OSHA/state DEP), and what "done" looks like in paperwork and in the field.
My first move is a 30-minute walk-through + inventory, then I turn it into a one-page SOP-style scope: purpose, scope boundaries, roles/responsibilities, the exact procedure steps (labeling, containers closed/in good condition, segregation, storage inspections), and the required records (manifests, training, inspection logs). If someone can't point to who does labeling, where labels live, and when inspections happen, the "better" goal isn't real yet.
One time this prevented rework: a client asked us to "tighten up haz waste compliance" before a potential unscheduled audit. We found inconsistent container labels and a few open/loosely sealed containers; instead of "fixing it as we go," we assigned a single owner for labeling, standardized the label info, set an inspection cadence, and baked it into their waste-management SOP with a revision section. That stopped the common cycle of relabeling the same drums twice and redoing shipment paperwork because details didn't match what DOT/EPA expect.
Reddit-level tip: if the mandate is vague, force a "definition of done" that includes both safety behavior and documentation. In this world, if it isn't written down and assigned, it didn't happen--and that's where rework (and fines) come from.

Adopt a Single North Star
I'm Runbo Li, Co-founder & CEO at Magic Hour.
Vague mandates are not a communication problem. They're a prioritization problem. When someone says "make it better," what they're really telling you is they don't know what "better" means yet. Your job is to force that clarity before you write a single line of code or move a single pixel.
I use what I call the "one number" rule. Every project needs exactly one metric that defines success. Not three. Not a dashboard. One number. If you can't name it, you don't understand the problem yet. When someone says "make the onboarding better," I ask: better means more people finish it, or more people come back the next day? Those are two completely different projects with two completely different solutions.
This saved us months of potential rework early on at Magic Hour. We had feedback from users saying our video creation flow was "confusing." The instinct was to redesign the whole thing. Instead, I looked at the data and asked one question: where exactly are people dropping off? Turns out 60% of drop-offs happened at a single step, the prompt input screen. Users didn't know what to type. So instead of a full redesign, we built template-based starting points that pre-filled the prompt. The fix took days, not months. Completion rates jumped significantly.
If we'd chased the vague goal of "less confusing," we would have redesigned screens that were working fine, burned weeks, and probably introduced new confusion in the process. The one number rule, in this case completion rate at that specific step, kept the scope tight and the outcome measurable.
The trick is refusing to start until the vague thing becomes specific. People feel pressure to move fast, so they skip this step. That's backwards. Specificity is speed. The five minutes you spend forcing clarity will save you five weeks of building the wrong thing.
Translate Symptoms Into Tests and Limits
I run All Pro Service Group in the Salt Lake Valley and we do plumbing/HVAC/electrical--so "make it better" is basically our daily intake call. I turn it into a clear goal by forcing it into an observable symptom, a "done" test, and a boundary: what the homeowner will be able to see/feel/hear when it's fixed, what we will inspect/measure, and what we are explicitly not touching that day.
Example: "Make the A/C work better" becomes "keep the home comfortable without the system struggling"--so we scope it as a diagnostic visit with specific checks (airflow, thermostat behavior, obvious duct issues, and system condition), plus a written report with options. The outcome isn't "better," it's "here's what's failing, what's risky, and what's the next step," explained in plain language so they can approve work without surprises.
That approach prevented rework on a call where the customer wanted us to "stop the breaker from tripping" so they could add more devices. Instead of swapping parts blindly, we clarified the goal as safety + code compliance + capacity, and the scope as a wiring evaluation (not just a breaker change). We found the real constraint and recommended the right upgrade path, which avoided the classic second visit where you end up undoing a quick fix because the underlying load/wiring issue was never defined.

Specify Metric, Goal, and Deadline
Chris here -- I run Visionary Marketing, a specialist SEO and Google Ads agency.
"Make it better" is how projects fail quietly. Everyone feels like they're working towards something, nobody's actually on the same page, and six weeks in you've delivered the wrong thing beautifully.
The conversation I've learned to have is relentlessly specific. When a client says "improve our Google Ads performance," that's not a goal--that's a direction. I ask: are we chasing conversion volume, cost per conversion, ROAS, or something else entirely? What does "better" mean in pounds and pence? By when? If the answer is "I don't know yet," we literally stop and build the definition together before writing a single line of strategy.
One client came to us with "we need to be more visible online." That could mean SEO, paid search, social, PR, or all of it. We ran a two-week discovery that isolated their real problem: they were losing to competitors in the awareness phase because they had zero owned content addressing common questions in their industry. That revealed the actual mandate: create a content framework and distribution plan that captures search intent around those questions. Suddenly we had scope, success metrics, and a timeline.
The rework avoidance came on our second project with them. They said "improve conversion rates on our landing pages." We stopped. Asked the hard questions. Found out the issue wasn't creative or design--it was message-market fit. People were arriving on the wrong pages. We fixed the targeting, not the landing page. That clarity meant zero rework. If we'd assumed "improve conversion rates" meant redesign the pages, we'd have built something beautiful that solved the wrong problem.
The template I use now is almost mechanical: what's the current state, what's the desired state, how will we measure the gap, and by when. Write it down. Get agreement. Then strategy flows from that definition.
The sharp takeaway? Vague mandates kill projects quietly. The effort to define them clearly happens once and saves weeks of rework.

Plan Backward from Inspector Evidence
I'm the Chairman of Academy Aviation Group, and I run teams across Malta, the U.S., and the UAE where "make it better" is dangerous language--because regulators (EASA/FAA/UK CAA/GCAA, etc.) don't audit intentions, they audit outcomes and evidence. My move is to translate the mandate into a testable end-state: what decision changes, what artifact proves it, who signs off, and what explicitly stays out of scope.
I use a quick "audit-backwards" filter: if an inspector walked in tomorrow, what would they ask to see? Then I force the goal into deliverables (training format, competence measurement method, instructor qualifications, recordkeeping) and a review cadence, because those are the levers that survive real scrutiny.
One time this saved rework was when we were rolling out FAA-format maintenance training aligned to how inspectors evaluate programs (Order 8900.1A procedures, acceptance expectations, surveillance mindset). "Make our FAA training stronger" became: produce an audit-ready training package with documented competence checks and standardized records, and get internal stakeholders aligned on what "acceptable evidence" looks like before anyone built content.
That upfront definition prevented the classic redo where course material is "good," but the documentation and measurement don't match inspector logic--so you end up rebuilding assessment, retraining instructors, and reformatting records after the fact. In aviation training, the fastest way to burn time is to build first and define "proof" later.

State the Customer's So-That Benefit
Coming from fashion design and then building Bark & Style from scratch, "make it better" was basically my daily reality. The way I handle it: I immediately ask whose experience are we improving, and what does "better" look like when it's done? That question forces the conversation from feeling to function.
When I was developing the subscription tier structure for Bark & Style, "make it better" could have meant anything. Instead of guessing, I mapped it to a specific customer journey question: does a first-time subscriber know exactly what they're getting before they commit? That reframed the goal into something concrete -- clearer tier naming, item counts listed per tier, and a defined entry point (the Trial Box) so new customers weren't overwhelmed.
That clarity prevented a huge amount of rework. Without it, I would have redesigned the product photography, rewritten the homepage copy, or overhauled pricing -- all things that felt like "making it better" but weren't actually solving the real friction point.
The rule I follow: translate the mandate into a single sentence starting with "so that the customer can..." If you can't finish that sentence, you don't have a goal yet -- you just have a feeling.

Pin Down Symptom End State and Bounds
I run a third-generation HVAC + plumbing company (family-owned since 1956), so "make it better" is basically our daily job--comfort complaints, high bills, weird smells, recurring clogs, you name it. The way I translate it is: define the symptom, define the measurable outcome, then lock the scope to what we will touch today.
My script is three questions: "What's the pain you'd pay to stop?" "What does 'fixed' look like in your house--temp difference, humidity feel, noise, smells, water usage, or reliability?" "Where are we allowed to intervene--thermostat/settings, filtration/IAQ, ductwork, attic/insulation, equipment, or plumbing/drains?" Then I write it down as a one-line target (ex: "Reduce upstairs stuffiness and long AC run times without changing equipment size") plus an excluded list (ex: "not replacing the whole system unless diagnostics prove it").
One time a customer asked us to "make the upstairs more comfortable." Instead of jumping to a bigger unit (classic rework), we scoped it to diagnostics first: compare upstairs/downstairs temps late afternoon, inspect filter/airflow, check for sweating ductwork and attic insulation thin spots, and verify the condensate drain wasn't backing up and spiking humidity. That turned it into a clear outcome--more even room temps and fewer overworked cycles--without buying the wrong solution first.
Because we offer HVAC, indoor air quality, and plumbing under one roof, scoping like this also prevents the "fix the AC, ignore the moisture source" loop. If the mandate is vague, I treat it like troubleshooting: agree on the finish line, agree on what we're not doing, and only then pick the tools.

Flag Harmful URLs Before Suppression
In ORM, "make it better" usually means "make the bad stuff disappear" -- which tells me nothing actionable. My first move is always to ask: what specific content, on what platform, visible to which audience, and what does "resolved" actually look like? Without that, you're just throwing effort at a feeling.
A vague mandate once came in from a legal firm asking us to "clean up" an executive's search results. Before touching anything, I ran our 27-point removal audit to identify exactly which URLs were causing damage and why. That scoping step revealed that only three pieces of content were doing 90% of the reputational harm -- so we prioritized those first instead of launching a broad suppression campaign that would have cost the client far more time and money.
The rework it prevented was significant. Had we gone straight into suppression mode across the board, we'd have built out a months-long content strategy around results that could've just been removed in days. Defining the target first kept us in removal mode, which is always faster and more permanent than suppression.
My framework is simple: convert the vague request into a before/after statement. "Right now, searching [name] returns X negative result on page one. Success means that result is gone or displaced within Y days." If you can't write that sentence, you're not ready to start working -- you're still in the problem-definition phase, and that's exactly where you should stay until it's clear.

Replace Adjectives with Observable Actions
Running a water heater company taught me that "make it better" usually means five different things to five different people. So the first thing I do is ask one question: *what does "better" look like when we're done?* That forces the conversation from feeling to outcome.
When we decided to "improve the customer experience," that mandate could've gone a hundred directions. I narrowed it to one specific gap: customers didn't know when we were arriving. So we defined the goal as *eliminating arrival uncertainty* -- the outcome was a text notification 30 minutes out. That's it. Tight scope, measurable result. John D. literally called it out in his review: "a text when they were 30 min. away." That's how you know the goal was right.
Without that specificity, we could've rebuilt our whole scheduling system, redesigned the website, or hired more staff -- all before solving the actual frustration. Defining the outcome first killed the rework before it started.
The framework I use: replace the adjective with a verb. Don't say "better communication" -- say "customer confirms our arrival window before we leave the shop." Now your team knows exactly what done looks like, and nobody's guessing.