Automated skyscraper technique workflow, step by step
Ivaylo
March 19, 2026
We tried to turn the automated skyscraper technique workflow into a push-button machine once. It looked great in a spreadsheet. Then we shipped the “better” article, sent 300 emails, and got exactly two responses: one was a bounce notification, the other was an editor asking if we’d even read their post.
That failure was expensive, but useful. It taught us what actually automates well (data gathering, scoring, queuing, reminders) and what stays stubbornly human (judgment, differentiation, and the small social cues that make an editor trust you). If you treat those human parts like they are just fields in a CSV, you will blame “outreach” when the real problem was your target.
This piece is the workflow we wish we had the first time we ran the Skyscraper Technique properly: the three-step method credited to Brian Dean (Backlinko, 2015), with the modern reality baked in. Backlinks still matter. Editors are tired. And “longer” is not “better.”
What “automation” really means in a skyscraper workflow
Automation in this context means we let software do the repetitive, high-volume work that humans are bad at: collecting link data, checking recency, pulling traffic estimates, extracting snippets, and keeping a pipeline moving when we get busy.
What trips people up is assuming automation can replace editorial judgment. It cannot. Topic selection, intent match, and the reason someone should switch their link are not numbers you can fully outsource to a tool. When people do, they end up building a glossy replacement for a page nobody should be targeting in the first place.
So we automate the boring parts and keep the “why would an editor care” decisions human.
Target selection worth automating (and the rubric we actually use)
Most skyscraper campaigns die before content exists. They die in the target selection tab, where someone sorts by “most referring domains” and calls it strategy.
Here’s the uncomfortable truth: if the original page doesn’t have a link profile that is both substantial and replaceable, you’re building a trophy, not a link magnet.
We start with a threshold that is high enough to matter but not so high that we chase unicorns. In practice, a page with roughly 50 to 100 quality referring domains is often the sweet spot: enough proven link attraction to justify a rebuild, not so saturated that every editor has already been pitched twenty times this year. This is a heuristic, not a law, but it keeps us from wasting weeks.
The workflow we run in tools (Semrush is fine, not magical)
We usually start in a backlink tool because it gives us two things quickly: which pages earned links, and who linked.
In Semrush, the path that saves the most time is simple. Enter a competitor domain in Backlink Analytics. Open the Indexed Pages report. Sort by referring domains. Then stop yourself from clicking the top result like a lab rat.
Instead, we screen pages with three filters before we even score them:
- Relevance: does this topic belong on our site, with our audience, without feeling like we’re cosplaying?
- Intent: does the keyword set behind the page match what we can genuinely satisfy?
- Improvability: can we create a replacement that is materially better, not just formatted differently?
If any of those fail, we kill the target. Quickly.
The weighted scoring model (copy it, then tweak it)
We use a rubric because it forces the team to argue with numbers instead of vibes. It also makes automation helpful, because many of these fields are pullable from exports.
Our base scoring model uses six inputs. Total possible score: 100.
Referring domains fit (0 to 20): We give the most points when the page sits in the 50 to 100 quality RD band. Below that, it may not be proven. Way above that, it can be a bloodbath unless we have a very strong angle.
Topical relevance to our site (0 to 20): If the topic is adjacent but not core, it loses points. If we would be embarrassed to put it on our homepage, it goes to zero.
Intent match (0 to 15): We look at the SERP and ask: is Google rewarding quick answers, templates, a comparison, a study, a tool? If our “better article” fights the intent, links will be harder and rankings will be unstable.
Link quality signals (0 to 20): We scan the linking domains for editorial context. Are these real publications, resource pages, university references, niche blogs with actual readers? Or are they directories, spun posts, and “write for us” farms?
Improvement surface area (0 to 15): Old screenshots, missing steps, thin methodology, no examples, broken outbound links, a vague definition, no visuals, outdated stats. This is where you find your opening.
Business fit (0 to 10): Can we tie this to what we sell or care about without forcing it? If the answer is “not really,” the campaign becomes a vanity project.
We score 5 to 10 candidate URLs, not 50. That’s the asymmetric effort part. Spending an extra hour here saves a week later.
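If you want the rubric in code, here is a minimal sketch in Python. The weights are the ones above; the per-input sub-scores in the example are invented for illustration (only the total matches Candidate B in the table below).

```python
# Minimal sketch of the weighted scoring rubric above (0-100 total).
# Sub-scores are human judgments entered per candidate; the caps are the weights.
WEIGHTS = {
    "rd_fit": 20,         # referring domains fit (50-100 quality RD band scores highest)
    "relevance": 20,      # topical relevance to our site
    "intent": 15,         # intent match with the SERP
    "link_quality": 20,   # editorial quality of the linking domains
    "improvement": 15,    # improvement surface area
    "business_fit": 10,   # tie to what we sell or care about
}

def score_candidate(points: dict) -> int:
    """Clamp each judgment to its rubric cap, then sum to a 0-100 score."""
    return sum(min(points.get(key, 0), cap) for key, cap in WEIGHTS.items())

# Illustrative sub-scores; only the total (86) matches Candidate B below.
candidate_b = {"rd_fit": 18, "relevance": 19, "intent": 13,
               "link_quality": 18, "improvement": 10, "business_fit": 8}
print(score_candidate(candidate_b))  # 86
```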
A sample decision table (not a spreadsheet template, the logic behind it)
Here’s what a real internal note might look like when we’re choosing between three “obvious” targets.
Candidate A: 120 RDs, strong relevance, intent matches, link quality mixed, improvement surface small, business fit medium. Score: 71. Risk: crowded, lots of stale linkers.
Candidate B: 68 RDs, strong relevance, intent matches, link quality high, improvement surface large (outdated workflow, missing visuals), business fit high. Score: 86. This is usually the pick.
Candidate C: 40 RDs, medium relevance, intent unclear, link quality decent, improvement surface medium, business fit high. Score: 58. Might be a later test if we need a faster win.
Notice what happened: the “most backlinks” page lost.
Kill criteria (we end targets fast)
We keep a short “nope” list because teams love to rationalize bad picks. If any of these hit, we stop:
- The page attracts links for a reason we cannot replicate (a one-time viral story, a proprietary tool, a dataset we cannot rebuild).
- The linking domains are mostly low-trust (thin blogs, obvious PBN patterns, junk directories).
- The topic sits outside our authority so hard that outreach would feel misleading.
- The SERP intent rewards something we cannot or will not provide (like a free tool, a calculator, a template library).
- We cannot articulate a one-sentence “switch reason” that would make sense to an editor.
That last one is the quiet killer. If our best pitch is “ours is newer,” we are not ready.
Automated prospecting: from competitor URL to a prioritized outreach list
Once the target is correct, automation finally earns its keep. The messy middle is not collecting backlinks. It’s deciding who to contact first so we do not waste good prospects on a weak first draft of our email.
The annoying part: treating all linkers as equal is how you get ignored. Editors change jobs. Pages go stale. Some sites have zero traffic and never will. You can send perfect outreach to a page that hasn’t been touched since 2019 and it will still fail.
The data we pull (and where it comes from)
We export backlinks to the original “inferior” URL from a tool like Semrush’s Backlink Analytics. For each linking page or domain, we want:
Link first seen date: used for recency. Most tools expose this.
Last modified or last updated: sometimes a tool provides it, sometimes we have to crawl the linking URL and infer from headers, sitemap dates, or visible timestamps. This is messy. We do it anyway.
Estimated organic traffic for the linking domain (or page if available): Semrush, Moz, Ahrefs, pick your poison. We only need a tier, not a perfect number.
Context snippet: the sentence or paragraph around the link. Some tools provide it. Otherwise we scrape it.
Contact target: author name, editor email, or an editorial desk address. Automation can find candidates. A human still verifies.
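For the “last modified” field specifically, here is a hedged sketch of the header-and-timestamp fallback described above. It assumes the linking page exposes a Last-Modified header or a visible “Updated” date; many pages expose neither, which is why we treat the result as a hint, not ground truth.

```python
# Sketch of the "last modified" fallback: HTTP header first, then a visible
# timestamp in the HTML. Both signals are unreliable; treat the result as a hint.
import re
from typing import Optional

import requests

def guess_last_modified(url: str) -> Optional[str]:
    resp = requests.get(url, timeout=10, headers={"User-Agent": "outreach-research"})
    # 1) Server-provided header, when the server exposes one
    if "Last-Modified" in resp.headers:
        return resp.headers["Last-Modified"]
    # 2) Very rough scan for a visible "Updated YYYY-MM-DD" style date
    match = re.search(r"[Uu]pdated\D{0,10}(\d{4}-\d{2}-\d{2})", resp.text)
    return match.group(1) if match else None
```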
The prioritization algorithm (simple on purpose)
We use a point system so we can batch outreach without hand-sorting hundreds of rows.
Priority Score = Recency points + Update points + Traffic tier points + Relevance points
Recency points (0 to 40): If they linked in the last 6 months, they get most of the points. If the link is 6 to 18 months old, fewer. Older than that, close to zero unless it’s a high-trust site.
Update points (0 to 25): If the linking page was updated within the last year, it scores high. If it’s untouched for years, it drops.
Traffic tier points (0 to 25): We use bands because “estimated traffic” is noisy. The top band could be 10k+ monthly organic, the middle band 1k to 10k, the bottom band under 1k. Adjust the bands to your niche. (These traffic bands are separate from the outreach tiers below.)
Relevance points (0 to 10): This is human-labeled or semi-automated with topic modeling. We mostly do it by eye for Tier 1 prospects and let automation guess for the rest.
Then we batch it.
Tier 1: Priority Score 70+. These are fresh linkers, active pages, real traffic. We hand-check these and craft outreach carefully.
Tier 2: 50 to 69. We still personalize, but we move faster.
Tier 3: under 50. We often park these unless we need volume after we’ve proven the pitch works.
This batching is boring. It’s also the difference between a 1% and a 6% conversion rate on link updates.
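As a sketch, the whole scoring-and-batching step fits in a few lines of Python. The point values and tier cutoffs are the ones above; how each raw input maps to points is our own simplification and is where you should tweak for your niche.

```python
# Sketch of the priority score and tier batching above. Cutoffs match the
# text; the per-input point mapping is deliberately simple.
from datetime import date
from typing import Optional

def priority_score(link_first_seen: date, page_updated: Optional[date],
                   monthly_traffic: int, relevance: float, today: date) -> int:
    months_old = (today - link_first_seen).days / 30
    recency = 40 if months_old <= 6 else 20 if months_old <= 18 else 0
    update = 25 if page_updated and (today - page_updated).days <= 365 else 0
    traffic = 25 if monthly_traffic >= 10_000 else 15 if monthly_traffic >= 1_000 else 5
    return recency + update + traffic + round(relevance * 10)  # relevance: 0.0-1.0

def outreach_tier(score: int) -> int:
    if score >= 70:
        return 1  # hand-check, craft outreach carefully
    if score >= 50:
        return 2  # personalize, move faster
    return 3      # park unless we need volume later
```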
Where automation helps, and where it lies to you
Automation is great at pulling dates and traffic estimates. It is bad at knowing whether a link is editorially meaningful.
We learned this the hard way on a campaign where the scoring model loved a cluster of links from “resources” pages. The pages had traffic. They were updated recently. Perfect, right? Wrong. The links were in giant lists where nothing ever gets clicked, and the editors guarded them like museum exhibits. Our conversion rate was near zero.
So we add one manual check on Tier 1 prospects: is the link sitting in a sentence where it helps the reader, or is it buried in a list of 200 links? Context matters more than DA metrics.
Building the 10x asset with information gain, not word count
At this point, people usually ask how long the replacement article should be. Our answer: long enough to win, short enough to be read.
Where this falls apart is when teams confuse “better” with “bigger.” They produce a 6,000-word sprawl that adds no new utility. Editors can smell that. Readers bounce. Google gets better every year at detecting fluff.
We build skyscraper assets around information gain. That can mean new data, clearer methods, better examples, or visuals that remove ambiguity.
The differentiation blueprint we reuse
We start by writing a “switch reason” sentence. Not a tagline, a practical editorial reason. Example: “This version includes a scoring rubric and a prospect prioritization formula you can copy, plus updated tooling steps.”
Then we add at least two of these four differentiators:
Net-new evidence: a small study, a dataset, screenshots of real exports, before-and-after results, even a mini audit of 20 linkers. It does not need to be academic. It needs to be concrete.
Custom visuals: original diagrams, annotated screenshots, checklists that are actually readable. Custom images matter because they signal effort, and editors like linking to things their readers can understand quickly.
Expert input: we sometimes email five practitioners with one sharp question and include the answers. Not “what is SEO,” but something like “what makes you replace a link?” You get quotable lines.
Intent satisfaction: we map the SERP and make sure we answer the job-to-be-done. If the SERP wants a workflow, we give a workflow with decisions, not a history lesson.
One tangent: we once lost a link swap because our “better” guide had gorgeous screenshots that were slightly blurry on mobile. The editor opened it on their phone, decided it looked sloppy, and ghosted. Petty. Real.
Creating swap-ready link targets (the part that makes outreach feel respectful)
If you send an editor a link and ask them to “consider adding it,” you are giving them homework. Homework does not get done.
We build swap-ready targets by mapping each prospect’s context to an exact replacement suggestion. Automation can extract the surrounding text and propose an insertion point. A human needs to sanity-check it so we do not look like we’re spraying tokens.
How we map link context without losing our minds
We pull the linking URL, scrape the paragraph around the existing competitor link, and store:
1) The anchor text they used.
2) The sentence that contains the link.
3) The surrounding section heading.
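A minimal sketch of that extraction step, assuming the linking page is plain server-rendered HTML (JavaScript-heavy pages need a headless browser instead). The function name and selector logic are ours; real pages need per-site tweaks.

```python
# Sketch of context mapping: find the competitor link on the page, then pull
# its anchor text, the containing sentence/paragraph, and the nearest heading.
from typing import Optional

import requests
from bs4 import BeautifulSoup

def map_link_context(linking_url: str, competitor_url: str) -> Optional[dict]:
    html = requests.get(linking_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    link = soup.find("a", href=lambda h: h and competitor_url in h)
    if link is None:
        return None  # link removed, rewritten, or rendered client-side
    container = link.find_parent(["p", "li"])
    heading = link.find_previous(["h2", "h3"])
    return {
        "anchor_text": link.get_text(strip=True),
        "sentence": container.get_text(" ", strip=True) if container else "",
        "heading": heading.get_text(strip=True) if heading else "",
    }
```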
Then we decide the swap type.
If it’s a dead or outdated resource link, it’s a clean replacement.
If it’s a “further reading” style link, we propose an additional link, not a swap, unless the original is clearly weaker.
If it’s cited as a source for a specific claim, we only pitch if our asset supports that same claim with equal or better evidence.
This is where credibility lives. If you pitch a swap that breaks the author’s argument, they will ignore you even if your content is excellent.
Outreach system design for the AI era
Editors are saturated with “I noticed your article” emails. The problem is not that outreach is dead. The problem is that most outreach feels like an automated chore sent by someone who did not read the page.
The catch: over-automated personalization can hurt you twice. First, it produces weird, obviously templated sentences. Second, it creates a recognizable footprint across inboxes, which makes your domain look spammy.
Our sequencing (relationship-first, not email-first)
We run a short sequence and stop when the signal is bad.
First message: we point to the exact URL, quote the sentence where the old link sits, and propose a specific swap with the new URL. We keep it tight. We do not attach files.
Second message (3 to 5 business days later): we add one extra piece of value, like a single stat, a screenshot, or a note that the old resource is outdated or broken. Still short.
Third message (a week later): we ask if there is a better person to contact. If no response, we stop.
We do not do seven touches. That is how you burn a domain.
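Pinned down as configuration, the cadence looks like this. Delays are business days, the field names are our own, and the decision to stop on a bad signal stays human.

```python
# The three-touch sequence above as a tiny config. "delay" is business days
# after the previous touch; wording stays human-written per prospect.
SEQUENCE = [
    {"touch": 1, "delay": 0, "goal": "exact URL, quoted sentence, specific swap"},
    {"touch": 2, "delay": 4, "goal": "one extra piece of value (stat, screenshot, broken-link note)"},
    {"touch": 3, "delay": 5, "goal": "ask for a better contact, then stop"},
]
MAX_TOUCHES = 3  # we do not do seven; that is how you burn a domain
```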
Deliverability basics we treat like plumbing
You can have the best pitch in the world and still land in spam.
We use a separate sending domain when volume is high, warm it slowly, and keep daily sends conservative until replies come in. We avoid heavy tracking pixels on cold campaigns. Some editors’ mail systems hate them.
We also keep templates loose. If your outreach reads like it came from a “link building tool,” it will be filtered by humans even if it passes the spam filter.
If you want tooling, Semrush’s Link Building Tool is decent for pipeline tracking, mostly because it keeps prospects, status, and notes in one place. It does not make you persuasive. FindThatLead and similar tools can help find contacts, but we still verify: nothing tanks response rates like emailing the wrong person.
Measurement and iteration (what we watch when we’re not lying to ourselves)
We track links earned, but we treat it as a lagging indicator.
We watch:
Reply rate by tier: if Tier 1 is not replying, the pitch or the target is wrong.
Link placement quality: in-content editorial links beat footers and resource dumps.
Referral traffic: if the new links never send visits, your prospecting may be chasing dead sites.
Ranking movement for the intended query set: if links land but rankings do not move, intent match or on-page quality is suspect.
Conversions tied to the asset: even a small signal matters. A skyscraper that earns links but never leads to anything is a content trophy.
When do we refresh vs pick a new target? If outreach is getting replies but not swaps, we usually improve the asset or the swap-ready mapping. If outreach is getting ignored across Tier 1 prospects, we abandon the target and go back to the rubric. Most teams do the opposite. They send more emails.
That’s the whole point of an automated skyscraper technique workflow: not sending faster spam, but making better decisions earlier, then using automation to move the repeatable parts with discipline.
FAQ
What parts of a skyscraper campaign can you actually automate?
You can automate data collection and operations: backlink exports, scoring, prioritization, reminders, and pipeline tracking. You cannot reliably automate topic judgment, intent match, and the editorial reason someone should replace a link.
How do you choose the right URL to rebuild in a skyscraper campaign?
Pick a page with a proven but replaceable link profile, often around 50 to 100 quality referring domains. Filter by relevance, intent, and improvability first, then score a small set of candidates with a weighted rubric.
How long should a skyscraper article be?
As long as it takes to satisfy intent and add information gain, not a fixed word count. Editors and search engines reward usefulness: clearer steps, updated evidence, and better visuals.
Why do skyscraper outreach emails get ignored even when the content is good?
Most emails fail because the target list is weak or the ask creates work for the editor. Prioritize active, relevant linkers and send a swap-ready suggestion that matches the existing context and claim.