GSC Insights: The Performance Workflow We Stopped Pretending Didn’t Exist

Sitechecker | SEO Analytics | B2B SaaS | GSC Insights | Shipped 2025

QUICK CONTEXT

Project overview

Sitechecker is a B2B SaaS platform that helps teams monitor SEO performance and AI visibility across client websites. It combines technical site auditing, search analytics, and reporting into one workspace so you can spot what is growing, what is breaking, and why, then prioritize fixes and measure impact over time.

We already had Google Search Console connected across our ecosystem, not as one dedicated tool, but as a data layer powering multiple reports. Data was flowing in. Data was stored. Everything worked technically. And we noticed a clear product signal: users who connected GSC consistently showed stronger activation and retention.

But we still didn’t have a real place for GSC Performance work inside the product. No space where investigation actually happens. So users did what they always do: they connected GSC for the added context, then went back to the native console for the real analysis.

It was one of those product moments where the problem isn’t bugs: it’s absence.

We had the retention signal, but not the workflow to sustain it. So we built GSC Insights from scratch, starting with the backbone report: Performance Overview.

This case is the story of how we turned raw GSC data into a calm, reliable workflow that scales from “quick check” to “serious investigation.”

Product

Web

TIMELINE

Q2 2025 (Shipped in 4 iterations: Basic, Compare, Segments, Saved Filters)

MY ROLE

As Product Designer, I owned the entire product logic: from debunking the “copycat” strategy to designing the Shift Method for honest comparisons and the No Split Reality interaction model.

SKILLS

Product Strategy, Complex Logic Mapping, Data Visualization, System Design, Stakeholder Alignment

IMPACT

Turned a passive data integration into a retention driver. Built a from-scratch Performance workflow that replaces click-heavy GSC analysis with instant context, reliable comparisons, and scalable segmentation for 50,000+ URLs.

THE MOMENT IT CLICKED

The “copycat” trap

Before I touched any UI, I did the unglamorous part: research and competitive scan.

I expected to find inspiration. Instead, I found a pattern I couldn’t unsee.

Most competitors weren’t solving GSC pain. They were repackaging it.

They copied GSC’s interface, imported the same friction, and just added a new layer on top. That’s when the direction became obvious.

We had a choice: keep calling it an integration, or actually build the product users expect the moment they connect GSC.

We weren’t building “another GSC dashboard.” We were building the workflow GSC never prioritized. Not a reskin. A fix.

THE GOAL

Not a UI refresh, but a fix

The goal was measurable and practical:

Speed: Reduce time to answer the most common SEO questions (e.g., “Why did traffic drop?”).

Consistency: Stay usable under real data constraints (API limits, incomplete data).

Scale: Work equally well for 50 URLs and 50,000 URLs.

Trust: Feel reliable even when the data is not instantly ready.

This meant building a board from zero, not polishing an existing one. And we would ship it iteratively, without breaking the mental model SEO specialists already trust.

If “GSC integration is a retention anchor,” then the workflow has to earn that role.

MY ROLE

Leading the build from scratch

I led this as a from-scratch workflow build, from problem framing and scope to the interaction model and delivery plan.

Problem Framing: Defined what “better than GSC” means in success criteria, not aesthetics.

Roadmap Strategy: Turned research into a scope: Basic → Compare → Segments → Saved Filters.

Interaction Model: Designed the sync logic so filters, search, charts, and tables never drift into conflicting realities.

Key Product Bets: Query-to-Page visibility, multi-country filtering, weekday-aligned comparison.

Delivery Partnership: Aligned with engineering on constraints, reuse (Rank Tracker patterns), and edge cases under real data conditions.

THE BUILD: A STORY IN BETS

Turning friction into a decision engine

Bet 1: Make Context Visible Where Work Happens

We started with the foundation: Chart on top, Table below. Familiar territory.

We also treated the chart as more than a visualization. It is where investigation starts.
So we added two context layers directly on the timeline: Google updates and custom notes. Users can see algorithm shifts in the same view as their performance changes, and leave their own markers to explain what happened and why.

In native GSC, the answers are still hidden behind clicks. The most common question is: “Which Page ranks for this Keyword?” And answering it costs context.

The Fix: I made a table-level product decision to break the wall between Keywords and Pages.
⇥ In the Keywords tab, each keyword shows its Top Ranking Page.
⇥ In the Pages tab, each Page shows its Top Ranking Query.

✨ Why it matters?

SEO is rarely about what happened; it is about why it happened. By overlaying events on the chart, we turn a flat trend line into a story of cause and effect. And by surfacing relationships in the table, we eliminate "drill-down fatigue": users no longer need to break their flow just to connect a keyword to its URL.

Bet 2: No Split Reality, Ever

Analytics tools die the moment users stop trusting what they see. I enforced one principle: If the dataset changes, everything reflects it. Filters update the table AND the chart. Search updates the table AND the chart. Nothing lives in its own universe.

Metric Cards as Controls

We added cards for Clicks, Impressions, CTR, Position. But they aren’t decoration.

⇥ Click: Toggles a metric line on the chart.

⇥ Constraints: Users cannot disable all metrics (at least one stays on). If >2 metrics are active, we hide the axis system to reduce clutter.

⇥ Trust Formatting: Thousands become “K”, millions “M”, but hovering always reveals the exact raw number.
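The trust-formatting rule is simple enough to sketch. This is a minimal illustration of the idea (function and field names are mine, not the production code): the card shows a compact label, but the exact raw number is always kept alongside it for the hover state.

```python
def abbreviate(value: int) -> str:
    """Compact display value for a metric card; the exact number
    stays available for the hover tooltip."""
    if value >= 1_000_000:
        return f"{value / 1_000_000:.1f}M".replace(".0M", "M")
    if value >= 1_000:
        return f"{value / 1_000:.1f}K".replace(".0K", "K")
    return str(value)

def metric_card(value: int) -> dict:
    # The card keeps both representations: compact for display, raw for hover.
    return {"display": abbreviate(value), "exact": value}
```

So 42,800 clicks renders as “42.8K” on the card, while the tooltip still reports 42,800: abbreviation for scanning, precision on demand.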

Readable Timeframes

Daily points become noise at longer ranges. We built auto-switching:
⇥ < 30 days: Daily
⇥ 30–180 days: Weekly
⇥ 180+ days: Monthly
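The thresholds above map directly to a small decision rule. A sketch of that logic (the exact boundary handling at 180 days is my assumption):

```python
def granularity(range_days: int) -> str:
    """Pick the chart's point granularity from the selected date range,
    mirroring the auto-switching thresholds: daily, weekly, monthly."""
    if range_days < 30:
        return "daily"
    if range_days <= 180:
        return "weekly"
    return "monthly"
```

The point of automating this is that the user never picks an aggregation level: the chart stays readable at any range without an extra control.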

✨ Why it matters?

In analytics, ambiguity creates anxiety. If the chart and table don't match perfectly, the user stops analyzing and starts debugging the tool. By enforcing total synchronization and automating visual cleanup (like auto-timeframes), we shifted the cognitive load. Users no longer waste energy deciphering the UI: they spend it deciphering their growth.

Bet 3: Filtering Without “Math Brain”

Filtering in GSC is where workflows go to die: single-country limits, awkward searching, regex gymnastics. We made a system that is powerful but readable.

Multi-Country (Finally)

Real SEO workflows aren’t “one country at a time.” They are “Tier 1 Countries” or “Europe + USA”. We enabled multi-select for countries.

The Rule Builder

We supported AND, OR, REGEX, but avoided making it feel like a dev tool.

Builder flow: Pick entity → Pick condition → Enter value → Stack rules.
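Under the hood, each stacked rule is just an entity, a condition, and a value. A minimal sketch of how such rules could evaluate against a row (the field names, condition set, and tuple shape here are illustrative assumptions, not the production schema):

```python
import re

def matches(row: dict, rule: tuple) -> bool:
    """Evaluate one builder rule (field, condition, value) against a row."""
    field, condition, value = rule
    cell = row.get(field, "")
    if condition == "contains":
        return value in cell
    if condition == "equals":
        return cell == value
    if condition == "regex":
        return re.search(value, cell) is not None
    raise ValueError(f"unknown condition: {condition}")

def apply_rules(rows: list, rules: list, combine: str = "AND") -> list:
    """Stack rules with AND (all must match) or OR (any must match)."""
    op = all if combine == "AND" else any
    return [r for r in rows if op(matches(r, rule) for rule in rules)]
```

The UI hides this structure behind dropdowns, but the logic stays transparent: every stacked rule is one predicate, and the AND/OR toggle decides how they combine.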

Context-Aware Search

Search is tied to the active table. If you are in Pages, search looks for URLs. If in Queries, it looks for Queries.

✨ Why it matters?

Advanced analysis often forces users to "export to Excel" simply because the native UI is too rigid. By humanizing the syntax (Rule Builder) and removing arbitrary limits (Multi-Country), we kept the investigation inside the product. It turns technical constraints into accessible logic, allowing anyone to find complex patterns without needing a developer's skillset.

Bet 4: Comparison That Does Not Lie

Our SEO Manager flagged that standard date comparison was misleading. Comparing “Last 7 Days” to “Previous 7 Days” naively compares Monday to Sunday. That is accidental fiction. Traffic patterns don’t work that way.

The Decision: Comparison must align weekdays.
The Shift Method: When users pick a preset, the system shifts the previous period back by full weeks so weekdays align: Monday to Monday.
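The Shift Method boils down to one rule: shift the comparison window back by whole weeks, never by the raw period length. A sketch of that calculation (function name and signature are mine):

```python
from datetime import date, timedelta
from math import ceil

def previous_period(start: date, end: date) -> tuple:
    """Shift Method: move the comparison window back by full weeks
    so every weekday lines up (Monday compares to Monday)."""
    length_days = (end - start).days + 1
    shift = ceil(length_days / 7) * 7  # round up to whole weeks
    delta = timedelta(days=shift)
    return start - delta, end - delta
```

For a 7-day preset this shifts exactly one week back. For ranges that are not a multiple of 7, rounding up to full weeks leaves a small gap between the two windows instead of an overlap, which is the price of keeping weekdays aligned.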

Scanning Deltas

When Compare is enabled:

⇥ Metric cards show Delta (absolute # and %).

⇥ Chart adds a second dotted line.

⇥ Table adds delta columns.

✨ Why it matters?

Data that misleads is worse than no data. By forcing "apples-to-apples" comparison, we eliminated false alarms caused by weekend dips matching against workday peaks. It ensures that a red delta signals a real performance drop, not just a calendar artifact, giving users the confidence to act on true signals rather than statistical noise.

Bet 5: Scale the Workflow

At 100 URLs, filters are fine. At 10,000 URLs, filters turn into manual labor.
We needed Segments, but we ran into a legacy constraint: Segments already existed in our Site Audit (crawler) tool. Asking users to build the same grouping twice, once for the crawler, once for GSC, would be pure self-harm.

One segment model across the platform

We made Segments a cross-app entity. We unified the logic so Site Audit and GSC speak the same language. Create a segment once, and it works everywhere.

That means a user can define “/blog/” pages in Site Audit and instantly analyze the same cluster in GSC Performance without rebuilding rules.

The Magic: Suggested Segments, powered by crawl data

Since we already crawl the site, we know its structure. So I used crawler signals to generate Suggested Segments. The system detects patterns like /blog/ or /products/ and offers them as ready-to-use clusters.

The shift is simple: instead of building rules, users approve them. Configuration stops being a chore and becomes a one-click decision.
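The detection itself can be as simple as counting first-level path prefixes across crawled URLs and surfacing the frequent ones. A minimal sketch of that idea (the threshold and function name are illustrative assumptions):

```python
from collections import Counter
from urllib.parse import urlparse

def suggest_segments(urls: list, min_pages: int = 5) -> list:
    """Derive suggested segments from crawl data: count first-level
    path prefixes and offer frequent ones as ready-made clusters."""
    prefixes = Counter()
    for url in urls:
        path = urlparse(url).path
        parts = [p for p in path.split("/") if p]
        if parts:
            prefixes[f"/{parts[0]}/"] += 1
    # Keep prefixes that cover enough pages to be worth segmenting.
    return [p for p, n in prefixes.most_common() if n >= min_pages]
```

A site with dozens of /blog/ URLs and a handful of one-off pages would get “/blog/” offered as a ready cluster, while the noise is filtered out.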

No detours

Segments are managed right where the work happens, via a drawer inside the board. I refused the common pattern of kicking users out to Settings, so users never leave the report to fix their data structure.

You manage them right where you are:

Drag & Drop Priority: Users can reorder segments simply by dragging them, defining their own hierarchy of importance.

Quick Edits: Need to tweak a rule? Click edit, fix it in the drawer, and the data recalculates instantly.

Zero Navigation: You never leave the report. You build the structure while looking at the data.

✨ Why it matters?

Configuration is where engagement dies. By recycling crawler signals, we removed "blank page" paralysis: users shouldn't have to teach the tool how their site works; the tool should already know. This turns a complex setup into a simple approval flow, making the workflow truly scalable. Users stay anchored in the problem they are solving, not the setup. Their mental model stays intact even as the dataset grows, because structure is reusable, consistent, and always just one step away from insight.

Bet 6: Save Time in the Boring Parts

Testing revealed a quiet pattern: users rebuild the same filters every day. “Mobile + USA + Non-Brand”: same setup, same clicks, wasted minutes. So we shipped Saved Filters.

Designed to be lightweight:
⇥ Requires a name to save.
⇥ Stored per project.
⇥ Non-editable (rename/delete only) to keep it fast.
⇥ Behavior: Enabling a saved filter overwrites manual filters to prevent ambiguous states.

TRUST DESIGN

When Reality Hits

Pulling GSC data takes time. We didn’t hide it behind a spinner. We treated it as a trust moment.

Data Crawling State

If data is syncing:
⇥ Real progress indicator.
⇥ Clear expectations.
⇥ Option to close the tab and get an email notification.

No Results State

When filters hit zero:
⇥ Context over emptiness: We explicitly explain why (filters/dates), so users know it’s not a bug.
⇥ Guidance: Instead of a dead end, we suggest adjusting criteria to recover the data flow.

Not Connected State

If GSC is missing:
⇥ Clear CTA to connect.
⇥ The board never feels broken.

In analytics, Trust is UX. These states do more for retention than a polished chart ever will.

OUTCOMES

Measuring impact

Core feature adoption

73.6% of active users utilized the Performance Overview within 30 days, cementing it as the platform’s primary analytics hub.

Active exploration

On average, users apply 3+ filters per session. This proves they use the tool for deep investigation, not just passive monitoring.

Workflow lock-in

Over 15% of active users have created at least one Segment, transforming a manual daily setup into a permanent one-click habit.

Qualitative

"The tool also allows us to interpret Google Search Console data in a more practical way, providing valuable information for optimisation." - Kini Calderón, SEO Account Manager at Viva Conversion

"I would say that my main takeaway from using it is the simple and quick interaction with other systems like Google Search Console and Google Analytics." - Andreza Ferrari, Founder and SEO specialist at Seotec

And for me, the real outcome wasn't just shipping a cleaner dashboard. It was turning the GSC integration into a strategic retention anchor, ensuring we built a workflow robust enough to make the native console obsolete for our users' daily routines.

WHAT I LEARNED

Key takeaways

  1. Workflow > Polish: In complex B2B products, the competitive advantage isn't new gradients or shadows. It is workflow clarity. My main insight: if a design is beautiful but forces extra clicks, it is a failed design.

  2. Logic is Design: Date comparison (Shift Method) isn't a backend task; it is a product design decision. I learned that in analytics tools, UX is primarily about the mathematical integrity of the data, and only secondarily about the interface.

  3. Scale Breaks Patterns: What works for 50 pages collapses at 50,000. This project taught me to always design for "extreme" data volumes. If the filter or table cannot handle scale, it does not work at all.

  4. Respect Muscle Memory: SEOs have lived in GSC for years. Breaking their habits for the sake of "creativity" is a path to failure. We kept the familiar structure (chart + table) but changed the interaction logic. Familiarity speeds up adoption.

  5. Quality of Life = Retention: Small details like Saved Filters are often underestimated. But they are the specific features that save users minutes every day, turning a product from a "tool" into an "indispensable assistant."

Footer Illustration

You've made it to the end of quite the scroll. Great job!

In some other universe, we're already friends. So why not in this one? Let's connect!

Serious Me
I'm currently open to full-time opportunities! Let's create something amazing together! ✨