Transparent Case Study
Our first campaign didn’t hit targets. Here’s everything that happened.
Most agencies only show you the wins. The 200+ placements, the viral data study, the homepage feature on TechCrunch. Nobody talks about the campaigns that underdelivered.
This is the story of Presslei’s first paid campaign. Our client was Chatronix, an AI technology company. The topic was political bias in AI chatbots. The results were disappointing. And it taught us more about reactive PR than any success could have.
I’m sharing this because I think the PR industry has a transparency problem. If we’re going to ask clients to trust us with their money and their reputation, the least we can do is be honest about what works and what doesn’t.
The Brief
Chatronix came to us in late 2025. They build tools that audit AI systems for bias, and they wanted press coverage around a specific angle: political bias in the major AI chatbots. The timing felt perfect. AI regulation was dominating headlines. The EU AI Act was being implemented. Governments were asking hard questions about how these systems influence public opinion.
We proposed an original research study. Not a thought leadership piece, not a press release about a product update. Actual data that journalists could cite.
The idea: test four major AI chatbots (ChatGPT, Claude, Gemini, and Copilot) across 12 politically sensitive topics, from immigration policy to climate legislation to gun control, and measure how their responses skewed across the political spectrum. Score each response on a left/right scale using a standardised framework, run it multiple times to check for consistency, and package the findings into something newsworthy.
Chatronix paid a deposit of EUR 644 and we got to work.
The Study
We spent about two weeks designing the methodology and running the tests. Each chatbot received the same 48 prompts (12 topics, 4 variations each), and we scored responses on a 1-to-7 political spectrum scale, with 1 being strongly progressive and 7 being strongly conservative.
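If you want to run a similar audit, the aggregation is simple enough to sketch in a few lines of Python. This is an illustrative toy built around the setup described above (48 prompts, a 1-7 scale, 24-hour re-runs), not our actual tooling; the scores and names below are made up:

```python
from statistics import mean

# Toy data: four prompt-variation scores per topic, per chatbot,
# on the 1-7 scale (1 = strongly progressive, 7 = strongly conservative).
scores = {
    "chatbot_a": {"immigration": [3, 2, 4, 3], "healthcare": [2, 3, 3, 2]},
    "chatbot_b": {"immigration": [4, 4, 5, 4], "healthcare": [3, 4, 4, 3]},
}

for bot, topics in scores.items():
    flat = [s for variants in topics.values() for s in variants]
    print(f"{bot}: mean lean {mean(flat):.2f} over {len(flat)} responses")

# Consistency check: score the same prompts again 24 hours later
# and report the largest per-prompt shift.
def max_drift(run1, run2):
    return max(abs(a - b) for a, b in zip(run1, run2))

print(max_drift([3, 2, 4, 3], [4, 3.5, 3, 3]))  # 1.5 -> the drift we observed
```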
The findings were genuinely interesting:
- Three of the four chatbots showed a measurable centre-left lean on economic topics, particularly around wealth inequality and healthcare policy
- One chatbot was notably more evasive, refusing to engage with 7 of the 12 topics entirely
- Responses varied significantly depending on how the question was framed. The same topic phrased as a policy question vs. a moral question produced different political leanings from the same model
- Consistency was poor across all four. Re-running the same prompt 24 hours later could shift the score by up to 1.5 points
We had a solid dataset. We had clear, quotable findings. We turned it into a clean study page with methodology, charts, and a summary that any journalist could scan in two minutes.
Pro Tip
Track everything. The difference between PR professionals who grow and those who stagnate is measurement. Know your pitch-to-placement rate and which angles convert.
The Outreach
This is where things went sideways.
We compiled a target list of 87 journalists covering AI, technology policy, and digital rights. A mix of national tech reporters (Wired, The Verge, Ars Technica), political technology writers, and AI specialist journalists at outlets like MIT Technology Review and VentureBeat.
Our outreach ran for about three weeks in total. We sent personalised pitches in three waves:
Wave 1 (Week 1): 34 journalists. Our top tier. Personalised emails referencing their recent coverage. Open rate was decent at around 38%. Responses: 3. Zero commitments to cover.
Wave 2 (Week 2): 28 journalists. Second tier, slightly broader outlets. We tweaked the subject line and led with the single most surprising finding (the framing effect on political lean). Open rate dropped to 29%. Two responses, both “interesting but not for us right now.”
Wave 3 (Week 3): 25 journalists. Included some freelancers and newsletter writers. By this point, a major AI safety story had broken (a leaked internal memo from one of the big labs) and every tech journalist was chasing that instead. Open rate: 24%. One lukewarm response.
Final score: 87 journalists pitched. 6 responses. 0 placements.
Not a single story published.
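If you track your own campaigns the way the Pro Tip above suggests, the funnel reduces to a few lines of arithmetic. A quick sketch using the numbers reported here (open counts are rounded estimates from the stated rates):

```python
# (pitched, open rate, replies) per wave, from the campaign above
waves = [(34, 0.38, 3), (28, 0.29, 2), (25, 0.24, 1)]

pitched = sum(p for p, _, _ in waves)
opened = sum(round(p * rate) for p, rate, _ in waves)
replies = sum(r for _, _, r in waves)
placements = 0

print(f"pitched:    {pitched}")                                   # 87
print(f"opened:     ~{opened} (~{opened / pitched:.0%})")         # ~27 (~31%)
print(f"replies:    {replies} ({replies / pitched:.1%})")         # 6 (6.9%)
print(f"placements: {placements} ({placements / pitched:.0%})")   # 0 (0%)
```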
What Went Wrong
I’ve spent a lot of time thinking about this. Here’s my honest breakdown:
1. We picked a crowded week and didn’t adapt.
The AI news cycle in late 2025 was relentless. Every week there was a new model release, a new regulation, a new controversy. Our study was interesting but it wasn’t urgent. When that leaked memo story broke during Week 2, we should have paused the campaign and waited for the cycle to cool. Instead we pushed through. That was a mistake.
2. The study tried to say too much.
We had 12 topics, 4 chatbots, multiple framings. The dataset was rich but the pitch was complicated. Journalists don’t want a buffet. They want one finding they can build a headline around. “AI chatbots give different political answers depending on how you ask the question” is a story. “Here’s a comprehensive analysis of political bias across 12 topics” is a research paper. We pitched the research paper.
3. Our subject lines were too safe.
Looking back at our emails, the subject lines were descriptive but not compelling. “New research: Political bias in AI chatbots” tells you what the email contains but gives you no reason to open it. We should have led with the sharpest finding every time.
4. We didn’t have established journalist relationships yet.
This was our first campaign. We were cold emailing everyone. No journalist had heard of Presslei or Chatronix. That matters more than people in this industry admit. A pitch from an unknown agency about an unknown company faces a credibility gap that even great data can’t always close.
What We’d Do Differently
If I ran this exact campaign again today, here’s what would change:
Single finding, single headline. Instead of pitching the full study, I’d pick the most counterintuitive result and build the entire pitch around that one data point. The full methodology lives on a study page for anyone who wants to dig deeper. But the pitch email is one finding, one stat, one sentence.
Reactive timing, not proactive timing. Instead of launching the study on our schedule, I’d prep the data and wait. The moment a relevant story breaks (a chatbot gives a controversial political answer, a politician calls out AI bias), we’d have the data ready to offer as an expert comment or supporting research within hours. That’s reactive PR done properly.
Warm before you pitch. We now spend time engaging with target journalists on social before we ever send a pitch. Comment on their articles, share their work, build some name recognition. Cold email to a warm contact converts better than cold email to a cold contact.
Shorter outreach window. Three weeks is too long for a single campaign. If the first wave doesn’t land, something is wrong with the pitch or the timing. Two waves maximum, then regroup.
Exclusivity. For a data study like this, offering an exclusive to one top tier outlet first would have been smarter than blasting 34 journalists simultaneously. Exclusives create urgency. Mass emails don’t.
Key Takeaway
PR is a long game. Individual campaigns matter less than building a reputation as a reliable, valuable source that journalists trust.
What This Taught Us
This campaign shaped how Presslei works today. Every process we now follow, from how we structure study findings to how we time outreach to how we write subject lines, has roots in what went wrong with Chatronix.
Three specific things changed:
First, we moved to a reactive-first model. We still create original data studies, but we design them to be deployable in response to breaking stories, not as standalone pitches. This has made our timing dramatically better.
Second, we adopted a “one finding, one pitch” rule. Every campaign now gets distilled to a single headline before outreach begins. If you can’t say it in one sentence, it’s not ready.
Third, we started tracking journalist engagement before pitching. We don’t cold email anymore without at least two prior touchpoints (social interaction, content sharing, event attendance). It takes longer to launch but the response rates are incomparably better.
Why I’m Publishing This
There’s a version of this post where I spin the Chatronix campaign as a “learning experience” and bury the numbers. That would be easy and nobody would question it.
But I started Presslei because I thought the PR industry needed more honesty. Agencies that only share success stories are doing their prospective clients a disservice. You deserve to know what failure looks like so you can evaluate whether an agency actually learned from it or just hid it.
Chatronix trusted us with their budget and we didn’t deliver the results we promised. That stings. But the methodology we built, the study itself, and the outreach infrastructure we created during that campaign became the foundation for everything that came after.
If you’re evaluating PR agencies, ask them about their failures. If they don’t have any, they’re either lying or they haven’t done enough work to have learned anything useful yet.
Keep Reading
- The campaign that did work: 2,296 placements for Hockerty
- What is reactive PR and how we do it now
- How much does digital PR actually cost?
Ready to earn links instead of buying them?
Get 8–14 editorial placements in top-tier publications. No contracts. No risk. Just results.
$3,000 per campaign · 8–14 links guaranteed · Zero penalty risk
Presslei is a reactive digital PR agency based in Zurich. We run data-driven campaigns for tech and B2B companies. If you want to talk about what a campaign would look like for your business, honest conversation included, get in touch.
About the Author
Salvador Jovells
Founder of Presslei. 12+ years in ecommerce SEO across international markets. After a decade of link buying for Hockerty and Sumissura, I reverse-engineered 5,272 earned media placements and founded a reactive PR agency that builds authority through data-driven stories journalists actually want to publish. Based in Zurich.
Related Reading
- 5 Data-Driven PR Campaign Ideas for Ecommerce
- The 10 PR Campaign Formats That Get 90% of Press Coverage
- What 5,272 Media Placements Taught Us
“Research-driven PR campaigns work because they give journalists something they genuinely need: original data that supports the story they’re already trying to tell.”
— Salva Jovells, Presslei
DO
- Design research methodology that withstands journalist scrutiny
- Choose research topics that connect to active news conversations
- Package findings with clear headline numbers and supporting data
- Prepare a methodology document before any journalist asks for it
- Plan distribution strategy before conducting the research
DON’T
- Design research to produce a predetermined conclusion
- Use sample sizes too small to be statistically meaningful
- Pitch research findings without a clear news hook
- Ignore the limitations of your methodology in press materials
- Assume one research campaign will produce ongoing coverage without follow-up
Frequently Asked Questions
What actually went wrong with the first campaign?
The core mistake was pitching an angle that was interesting to us but not tied to anything journalists were actively covering. The timing was off and the hook was too brand-centric. We fixed it by mapping every future campaign idea against live editorial trends.
Did the client continue?
Yes — largely because we were transparent about what went wrong and what we were changing. Clients tolerate underperformance far better than being kept in the dark. The second campaign outperformed targets significantly.
What single thing changed most afterward?
Building a campaign idea scoring framework — a rubric that rated each idea on newsworthiness, data strength, journalist relevance, and timing. Campaigns scoring below threshold don’t get built, saving enormous wasted effort.
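To make that concrete, here is a minimal sketch of what such a rubric can look like. The four criteria are the ones named above; the weights, 1-5 scale, and threshold are hypothetical placeholders, not our production values:

```python
# Hypothetical weights and threshold; only the four criteria are the real ones.
WEIGHTS = {
    "newsworthiness": 0.35,
    "data_strength": 0.25,
    "journalist_relevance": 0.25,
    "timing": 0.15,
}
THRESHOLD = 3.5  # on a 1-5 scale; ideas scoring below this don't get built

def score_idea(ratings):
    """Weighted average of 1-5 ratings across the four criteria."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

idea = {"newsworthiness": 4, "data_strength": 5,
        "journalist_relevance": 3, "timing": 2}
s = score_idea(idea)
print(f"{s:.2f} -> {'build' if s >= THRESHOLD else 'shelve'}")  # 3.70 -> build
```

A rubric like this is only as good as its calibration, but that is beside the point: the value is in forcing every idea through the same gate before any budget is committed.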