Clay tables are column-based — each column is a separately configured enrichment or AI step. Deepline is outcome-centric: describe what you’re trying to achieve and Claude selects the right tools to get there. This guide shows how to take an existing Clay table and produce equivalent scripts that run locally, on your own schedule, with full control over every step.
The video walks through a real Clay table with email waterfalls, Claygent research, and ICP scoring — the full flow from table → phase 1 doc → scripts → output CSV.

What you need

A Clay table

The table you want to migrate.

Claude Code

Install from claude.ai/code and log in.

Deepline CLI

Runs the enrichment scripts Claude Code generates. Install below.
You can use this skill just to document a Clay table without running anything. If that’s your goal, skip to the Documentation-only path section at the bottom.

Install Deepline

Deepline is what runs the enrichment. It wraps providers like LeadMagic, Exa, Instantly, and Claude into a single command.
npm install -g deepline
deepline auth login
Verify it’s working:
deepline tools list
You don’t need Deepline if you just want a Phase 1 doc (table summary + dependency graph + pass plan). You only need it when you want to actually run the output scripts.

The workflow

Four steps. The first two are in Claude Code. The last two are in your terminal.

Step 1 — Capture your Clay table’s network logs

  1. Open app.clay.com in Chrome and log in
  2. Open DevTools: Cmd+Option+I on Mac, or F12 on Windows
  3. Click the Network tab
  4. Reload the page (or click around in your Clay table to trigger requests)
  5. In the filter box, type api.clay.com to show only Clay API calls
  6. Click any request in the list
  7. Right-click → Copy → Copy as cURL
On Mac it’s “Copy as cURL (bash)”. On Windows it’s “Copy as cURL (cmd)” — either works; the skill strips what it needs.
The copied command will look like:
curl 'https://api.clay.com/v3/tables/t_abc123/...' \
  -H 'accept: application/json' \
  -b 'claysession=eyJhb...; ajs_user_id=abc; ...' \
  ...
The part you need is everything between the -b '...' quotes — that’s your session cookie.
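Pulling that cookie out by hand is fiddly; here is a minimal sketch of doing it with `sed`, assuming you saved the copied command to a file named `curl.txt` (the file name and the `CLAY_COOKIE` variable name are illustrative, not something the skill requires):

```shell
#!/usr/bin/env bash
# Minimal sketch: extract the -b '...' cookie value from a copied cURL command.
# curl.txt and CLAY_COOKIE are assumptions for illustration.
set -euo pipefail

# Stand-in for the command you copied from DevTools:
cat > curl.txt <<'EOF'
curl 'https://api.clay.com/v3/tables/t_abc123/rows' \
  -H 'accept: application/json' \
  -b 'claysession=eyJhb...; ajs_user_id=abc'
EOF

# Everything between the -b '...' quotes is the session cookie.
cookie=$(sed -n "s/.*-b '\([^']*\)'.*/\1/p" curl.txt)
echo "CLAY_COOKIE=$cookie" > .env.deepline
cat .env.deepline
```

The same one-liner works on both GNU and BSD `sed`, so it behaves identically on Mac and Windows (Git Bash/WSL).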

Step 2 — Run /clay-to-deepline in Claude Code

Open Claude Code and type:
/clay-to-deepline [paste URLs]
Then give it your Clay table. You can paste:
  • A Clay table URL (e.g. app.clay.com/workbooks/xxx/tables/yyy)
  • A JSON export from the table
  • Network logs or HAR file captured from Chrome DevTools
Network logs contain session cookies. The cookies are not shared with Deepline, but don’t share them externally.
Claude Code reads the table structure and the actual AI prompt templates embedded in each column. The richer your input, the more accurate the output. Review the Phase 1 doc — Claude Code produces three things before writing any scripts:
  • Table summary — every column, what Clay action it runs, what model, what the output looks like
  • Dependency graph — which columns feed into which (execution order at a glance)
  • Pass plan — the ordered list of Deepline tool calls that replicate each Clay action
Check that:
  • Prompt templates are recovered verbatim (approximated ones are flagged with ⚠️)
  • The pass plan matches what your table actually does
  • ICP scoring criteria are correct (the skill can’t invent these)
Once you’re happy, say “looks good” and Claude Code writes the scripts.

Step 3 — Run the scripts

Claude Code generates two scripts in a scripts/ folder:
./scripts/fetch_people.sh           # → seed_people.csv
./scripts/enrich_people.sh          # → output_people.csv
Start with a pilot run (3 rows) before doing the full table:
./scripts/fetch_people.sh seed_people.csv pilot
./scripts/enrich_people.sh
For paid tools (Exa, LeadMagic, email waterfalls), the skill adds a pilot gate automatically. You’ll see a preview before any money is spent.
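The effect of the pilot flag is simply to work on a small slice of the seed file. A sketch of that slice (the real generated script may implement it differently, and the column names below are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of a pilot slice: keep the CSV header plus the first 3 data rows.
# Column names are illustrative, not the real seed schema.
set -euo pipefail
printf 'name,company\nAda,Acme\nBo,Beta\nCy,Core\nDi,Delta\n' > seed_people.csv
head -n 4 seed_people.csv > seed_pilot.csv   # 1 header line + 3 data rows
wc -l < seed_pilot.csv
```

Once the pilot output looks right, rerun without the pilot argument to process the full table.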

Step 4 — Get your output

Run the normalizer to produce clean CSVs:
python3 scripts/normalize.py \
  --contacts output_people.csv \
  --out-contacts output/contacts.csv
You get:
  • output/contacts.csv — every person with enriched fields (email, job function, AI-generated research, ICP score)
  • output/accounts.csv — every company with verified name, domain, social links, HQ address, CEO
Or push directly to a campaign:
deepline tools execute instantly_add_to_campaign \
  --payload '{"campaign_id":"<id>","contacts":[...]}'
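To assemble that contacts array from the normalized CSV, a rough shell sketch follows. The column layout and JSON field names are assumptions; real data containing commas or quotes needs proper escaping via jq or Python:

```shell
#!/usr/bin/env bash
# Sketch: turn contacts.csv data rows into a minimal JSON contacts array.
# No escaping is done — good enough for a pilot, not for arbitrary data.
set -euo pipefail
mkdir -p output
printf 'email,first_name\nada@acme.com,Ada\n' > output/contacts.csv  # sample row
tail -n +2 output/contacts.csv | while IFS=, read -r email name; do
  printf '{"email":"%s","first_name":"%s"}\n' "$email" "$name"
done | paste -sd, - | sed 's/^/[/; s/$/]/'
```

The printed array can be spliced into the `--payload` JSON shown above.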

What you get

The migration produces a self-contained project folder:
my-table/
├── .env.deepline          # your Clay cookie (gitignored)
├── prompts/
│   ├── job_function.txt   # recovered verbatim from Clay, or marked ⚠️ APPROXIMATED
│   └── qualify_person.txt
├── scripts/
│   ├── fetch_people.sh
│   └── enrich_people.sh
└── output/
    ├── contacts.csv
    └── accounts.csv
The scripts are plain bash. You can read them, edit them, schedule them with cron, or run them in CI.
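For example, a weekly refresh via cron might look like the entry below; the path, schedule, and log file are assumptions, so adjust them to your setup:

```
# crontab -e — run the pipeline every Monday at 06:00 and log the output.
0 6 * * 1 cd "$HOME/my-table" && ./scripts/fetch_people.sh seed_people.csv && ./scripts/enrich_people.sh >> enrich.log 2>&1
```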

How Clay actions map to Deepline

The skill handles these automatically — this is just so you know what’s happening:
Clay action → Deepline equivalent:
  • Claygent web research → 3 parallel Exa searches + Claude Sonnet synthesis
  • chat-gpt-schema-mapper → call_ai haiku (same output, much cheaper)
  • octave-qualify-person → call_ai sonnet + json_mode ICP scorer
  • add-lead-to-campaign (Instantly) → instantly_add_to_campaign
  • add-lead-to-campaign (Smartlead) → smartlead_api_request

Accuracy

From a 26-row production test against Clay ground truth (March 2026):
Metric → match rate:
  • Email → ≥95% match vs Clay
  • Job function classification → ≥95% exact match
  • Claygent research (strategy, initiatives) → ~85% specific, actionable content
The ~15% research failure rate comes mostly from name collisions (e.g. onit.com → NASDAQ:ONIT instead of the legal-ops software company).
The skill’s “confidence” label on research rows doesn’t mean what you’d expect. “Low” confidence often has great content — it just means the LLM found blog posts instead of 10-K filings. Judge by whether the content is specific to the company, not by the confidence score.

Cost comparison

From the same 26-row test:
Clay vs Deepline:
  • Cost per row → Clay $0.13 vs Deepline ~$0.03
  • Method → Clay: Claygent research, 10 search steps; Deepline: 3 parallel Exa searches + Claude Sonnet synthesis
  • Time per row → Clay ~134 seconds vs Deepline ~15 seconds
Content quality is equivalent. For large public companies, Deepline is sometimes more current (Exa indexes recent blog/PR content faster than Google indexes 10-Ks).

Documentation-only path (ClayMate Lite)

If you just want to document a Clay table — understand what it does, map its dependencies, build a reference for the team — you don’t need to run anything.
  1. Install ClayMate Lite from GitHub
  2. Open your Clay table in Chrome
  3. Click the extension icon → “Export JSON”
  4. Drag the downloaded *.json file into Claude Code
  5. Run /clay-to-deepline and ask for Phase 1 docs only
You’ll get a full table summary, dependency graph, and pass plan — without running a single script or spending anything.
ClayMate Lite exports include the table schema + sample rows. This is the richest input short of a full HAR file. For most tables it’s enough to recover the actual AI prompt templates from cell values.

Feedback or issues?