I work on product growth at a fintech. We move fast, we ship constantly, and our whole thing is basically "get users to value before they bounce." Great! Love that for us.
What I do not love is that our review process takes three weeks.
Anything I build — onboarding update, feature announcement, whatever — goes through Product, Marketing, Legal, and at least one senior stakeholder. And okay, they all have real reasons to be there. Legal especially, because we're in financial services and compliance isn't optional. Product has caught actual targeting errors. I'm not trying to stage a coup.
But three weeks to ship a tooltip? We're tracking activation rate and time to first value and I can literally see the drop-off that happens when contextual guidance doesn't land fast enough after a feature goes live. It's not invisible. It has a number.
And reviewers have started rubber-stamping anyway because the volume is too high. So we're not even getting the thorough review we built this whole process for. We're just getting the wait.
I don't want to blow it up. I just want it to work. Fast and safe seems like it should be possible, but right now we have neither, which is somehow the worst outcome.
How do you fix a process that's both too slow and not actually working?
— Leave Me Alone, Legal
Moment of silence.
Three weeks is basically a month. And if this review cycle is rolling (it sounds like it is), new work stacks up behind old work, so you're not really reviewing things sequentially, you're waiting. Indefinitely. Which is a very different thing.
A review cycle this heavy doesn’t just pop out of the ground. Something shipped at some point that caused a problem, whether that was the wrong message, a segment built wrong, a compliance issue, you name it. This process is a response to that something. As I love to say: this is not about what it’s about.
So before you touch the process, you have to get everyone in the room agreeing out loud that a three-week review cycle for a tooltip (!!!) is not reasonable. Don’t offer a fix yet.
When you get shared acknowledgment that what you're doing isn't working, it makes it more real for other people. It’s also the first meaningful step towards actually fixing this. If people don’t agree the problem exists, you’re basically pushing a rock uphill through the mud in the rain.
You can’t build a solution on top of a problem people haven’t agreed to see.
You're tracking two things, right? Activation rate and time to first value. That means you almost certainly have examples of drop-off that correlates with guidance not being live when it needed to be. So go and pull those, across use cases. I’m talking about everything from big launches to the small stuff. Put it in a Loom. Send it to one person from each team and ask to meet.
Wherever you can tie that data to a business metric (revenue, churn, an OKR), do it. It moves the conversation from "our process is frustrating" to "our process is costing us something." Those land very differently in a room with Legal and senior stakeholders.
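If you want a starting point for pulling that evidence together, here's a rough sketch, assuming your analytics tool can export user-level data. Every file name, column name, and the `activated` flag below is a placeholder for whatever your actual export looks like, not a real schema:

```python
# Rough sketch, not a drop-in script: assumes two hypothetical CSV exports,
# one row per new user and one row per feature launch, with placeholder columns.
import pandas as pd

users = pd.read_csv("activation_export.csv", parse_dates=["first_use_date"])
launches = pd.read_csv("feature_launches.csv",
                       parse_dates=["launch_date", "guidance_live_date"])

# Join each user to the feature they touched.
df = users.merge(launches, on="feature")

# How long each feature sat live before its contextual guidance shipped.
df["guidance_lag_days"] = (df["guidance_live_date"] - df["launch_date"]).dt.days

# Did this user hit the feature before or after the guidance existed?
df["saw_guidance"] = df["first_use_date"] >= df["guidance_live_date"]

# Activation rate with vs. without guidance, per feature.
summary = (df.groupby(["feature", "saw_guidance"])["activated"]
             .mean()
             .unstack())
print(summary)
```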
The core problem is that a tooltip and a tier-one product launch are going through the exact same process, and they shouldn't be in the same conversation. Most teams that make progress here do it by agreeing on a simple tier system: a major product release gets full review, a significant announcement gets a lighter version, a routine update gets a quick check before it ships.
The specifics matter less than everyone agreeing on the categories. Once that exists, a lot of the weight falls off naturally because the process starts to match the stakes.
Open-ended asks produce open-ended opinions. Instead of putting something in front of people and waiting to see what comes back, go to each team with specific questions: ask Legal "is this disclosure language compliant as written?" and Product "is this targeting the right segment?" rather than "any thoughts?"
This narrows scope, respects people's time, and signals that you're driving the process. A lot of the rubber-stamping you're seeing is probably reviewers who don't know what's actually being asked of them. Give them a smaller, more specific job and they'll do it faster.