---
name: gtm-meta-skill
description: |
  Use this skill for prospecting, account research, contact enrichment, verification, lead scoring, personalization, and campaign activation.
---
# GTM Meta Skill
## 1) What this skill governs
- Route GTM decisions, safety gates, and provider/quality defaults before execution.
- Keep long command chains and tooling nuance in `SKILL.md`, and provider-specific implementation detail in `provider-playbooks/*.md`.
- Provide clear entry points for both paid and non-paid workflows, including `--rows 0:1` pilots.
## Process and goal
The customer is generally trying to go from "I have an ICP" to "Here's a list of prospects with email/LinkedIn and highly personalized content or signals". They may be anywhere in this process; guide them along.
### When ICP context matters (and when it doesn't)
ICP context is required for:
- Prospecting from scratch / choosing who to target
- Persona selection and qualification
- Custom signal discovery and personalized messaging (`call_ai` columns)
- LinkedIn lookup when you don't have enough identifying info (title, company, geo) — ICP persona titles become your search filter
For these: if they have an `ICP.md` somewhere, read it; otherwise guide them to create one, or at least get base context on who they are (e.g. getting the customer's domain is very high value: you can scrape their site and understand them).
ICP is NOT required for mechanical tasks — do not ask for it, do not raise it as an objection:
- Enriching an existing CSV with a specific field (email, phone, LinkedIn when identifiers are strong)
- Validating email addresses
- Scraping profiles from known URLs
- Running a waterfall on a known column
- Any task where the user already picked their targets and is asking for a specific enrichment type
**Heuristic**: if the user hands you a CSV and asks for a concrete field, just execute. ICP becomes required when the agent has to *choose who to target*, *craft what to say*, or *disambiguate a weak lookup*.
### Documentation hierarchy
- Level 1 (`SKILL.md`): decision model, guardrails, provider routing, approval gates.
- Level 2 (`playground-guide.md`, `qualification-and-email-design.md`, `actor-contracts.md`, `gtm-definitions-defaults.md`, `prompts.json`): tool mechanics and advanced guidance.
- Level 3 (`provider-playbooks/*.md`): provider-specific quirks, cost/quality notes, and fallback behavior.
- Level 4 (`SKILL.md` section 8.x): runnable end-to-end command recipes.
No-loss rule: moved guidance remains fully documented at its canonical level and is linked from here.
## 2) Read behavior (targeted, not exhaustive)
Read only the docs needed for the active task:
- Start with `SKILL.md`, then open only the relevant recipe/playbook sections.
- Use grep/search for navigation, then read the exact sections needed to execute safely.
## Plan First
Before executing a non-trivial task, do a short problem breakdown (3-7 steps).
Then execute according to that breakdown.
Do not start with ad-hoc commands before writing the breakdown.
## 3) Core policy defaults
### 3.1 Definitions and defaults
Use [gtm-definitions-defaults.md](gtm-definitions-defaults.md) as the source of truth for GTM time windows, thresholds, and interpretation rules.
## Provider Playbooks
- [apify playbook](provider-playbooks/apify.md)
Summary: Use async run plus polling for long jobs; keep sync runs for small bounded actor inputs.
Last reviewed: 2026-02-11
- [apollo playbook](provider-playbooks/apollo.md)
Summary: Recall-first people/company search with include_similar_titles=true unless strict mode is explicitly requested.
Last reviewed: 2026-02-11
- [crustdata playbook](provider-playbooks/crustdata.md)
Summary: Start with free autocomplete and default to fuzzy contains operators `(.)` for higher recall.
Last reviewed: 2026-02-11
- [exa playbook](provider-playbooks/exa.md)
Summary: Use search/contents before answer for auditable retrieval, then synthesize with explicit citations.
Last reviewed: 2026-02-11
- [google_search playbook](provider-playbooks/google_search.md)
Summary: Use Google Search for broad web recall, then follow up with extraction/enrichment tools for structured workflows.
Last reviewed: 2026-02-12
- [heyreach playbook](provider-playbooks/heyreach.md)
Summary: Resolve campaign IDs first, then batch inserts and confirm campaign stats after writes.
Last reviewed: 2026-02-11
- [instantly playbook](provider-playbooks/instantly.md)
Summary: List campaigns first, then add contacts in controlled batches and verify downstream stats.
Last reviewed: 2026-02-11
- [leadmagic playbook](provider-playbooks/leadmagic.md)
Summary: Treat validation as gatekeeper and run email-pattern waterfalls before escalating to deeper enrichment.
Last reviewed: 2026-02-11
- [lemlist playbook](provider-playbooks/lemlist.md)
Summary: List campaign inventory first and push contacts in small batches with post-write stat checks.
Last reviewed: 2026-02-11
- [parallel playbook](provider-playbooks/parallel.md)
Summary: Prefer run-task/search/extract primitives and avoid monitor/stream complexity for agent workflows.
Last reviewed: 2026-02-11
- [peopledatalabs playbook](provider-playbooks/peopledatalabs.md)
Summary: Use clean/autocomplete helpers to normalize input before costly person/company search and enrich calls.
Last reviewed: 2026-02-11
Default handling (applies regardless of provider):
- Apply defaults when user input is absent.
- User-specified values always override defaults.
- In approval messages, list active defaults as assumptions.
### 3.2 Output policy: Playground-first for CSV
- Always use `deepline enrich` for CSV enrichment: it renders results very nicely, is great for human review, and powers waterfalls and more. It auto-opens the Playground sheet so you can inspect rows, re-run blocks, and iterate.
  - Even when you don't have a CSV, create one and use `deepline enrich`.
  - This process requires iteration; one-shotting results via `deepline tools execute` enrichments is short-sighted.
- If a command created a CSV outside enrich, immediately run `deepline csv --render-as-playground start --csv <csv_path> --open`.
- In chat, send the file path + playground status, not pasted CSV rows, unless explicitly requested.
- Preserve lineage columns (especially `_metadata`) end-to-end.
- Avoid rebuilding intermediate CSVs with shell tools unless explicitly requested; if you do, make sure to carry forward the `_metadata` columns.
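A minimal sketch of a metadata-safe shell rebuild, assuming a rebuild was explicitly requested. Filenames and columns here are hypothetical; the point is to filter whole rows rather than cut columns, so `_metadata` survives:

```shell
# Hypothetical demo CSV with a _metadata lineage column
printf 'name,email,_metadata\nJane,jane@acme.com,m1\nBob,bob@other.com,m2\n' > leads_demo.csv

head -n 1 leads_demo.csv > filtered_demo.csv     # header row: every column preserved
grep 'acme' leads_demo.csv >> filtered_demo.csv  # row-level filter keeps _metadata intact

cat filtered_demo.csv
# → name,email,_metadata
# → Jane,jane@acme.com,m1
```

Column-slicing tools like `cut -f` are where `_metadata` usually gets dropped; row-level filters avoid that failure mode.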
## 4) Pre-flight and shape checks
`deepline auth status`
`deepline tools list --search "email verify"`
### 4.0 Help-first command check (recommended)
- Use command-specific help before execution, for example:
- `deepline enrich --help`
- `deepline tools list --help`
- `deepline tools execute --help`
- `deepline csv --help`
- `deepline csv --render-as-playground start --help`
- Confirm required arguments and flags from help output before pilot or full runs.
### 4.1 Tool shape validation (required before JS extractors)
1. `deepline tools get <tool_id> && deepline tools get <tool_id_2>`
2. Run a real pilot with `--rows 0:1`
3. Correct `result_path`/interpolation if mismatch before any paid batch.
### 4.2 Pilot/preview checklist
- For unknown/modified payloads: run at least one real one-row pilot.
- For direct JSON extraction logic: validate with real tool output first.
- For high-risk parsing/matching: use `--rows 0:1` and inspect returned cell shape.
Example enrichment workflow:
```bash
deepline enrich --input leads.csv --in-place \
--with 'company_lookup=crustdata_companydb_autocomplete:{"field":"company_name","query":"{{Company}}","limit":1}' \
--with 'people_search=apollo_people_search:{"person_titles":["VP Sales"],"q_keywords":"{{Company}}","include_similar_titles":true,"per_page":1,"page":1}'
```
## 5) Credit and approval gate (paid actions)
### 5.1 Required run order
1. Pilot on a narrow scope (example `--rows 0:1`).
2. Request explicit approval.
3. Run full scope only after approval.
### 5.2 Execution sizing
- Use smaller sequential commands first.
- Keep limits low and windows bounded before scaling.
- For TAM sizing, a useful trick is to keep limits at 1: most providers return the total number of possible matches while charging you for only one result.
- Do not depend on monthly caps as a hard risk control.
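The TAM-sizing trick works because many search providers include a total-match count in their pagination metadata even when you request a single result. A sketch of reading such a payload (the field names and the payload itself are illustrative; the exact shape varies by provider):

```shell
# Illustrative single-result search response carrying a total-match count
response='{"pagination":{"page":1,"per_page":1,"total_entries":8421},"people":[]}'

# Pull out the total: this is the TAM estimate, paid for at 1-result cost
python3 -c 'import json,sys; print(json.loads(sys.argv[1])["pagination"]["total_entries"])' "$response"
# → 8421
```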
### 5.3 Approval message content
Include all of:
1. Provider(s)
2. Pilot summary and observed behavior
3. Intent-level assumptions (3–5 one-line bullets)
4. CSV preview from a real `deepline enrich --rows 0:1` pilot
5. Credits estimate / range
6. Full-run scope size
7. Max spend cap
8. Approval question: `Approve full run?`
Note: `deepline enrich` already prints the ASCII preview by default, so use that output directly.
Strict format contract (blocking):
1. Use the exact four section headers: Assumptions, CSV Preview (ASCII), Credits + Scope + Cap, Approval Question.
2. If any required section is missing, remain in `AWAIT_APPROVAL` and do not run paid/cost-unknown actions.
3. Only transition to `FULL_RUN` after an explicit user confirmation to the approval question.
Approval template:
```markdown
Assumptions
- <intent assumption 1>
- <intent assumption 2>
CSV Preview (ASCII)
<paste verbatim output from deepline enrich --rows 0:1>
Credits + Scope + Cap
- Provider: <name>
- Estimated credits: <value or range>
- Full-run scope: <rows/items>
- Spend cap: <cap>
- Pilot summary: <one short paragraph>
Approval Question
Approve full run?
```
### 5.4 Mandatory checkpoint
- Must run a real pilot on the exact CSV for full run (`--rows 0:1`).
- Must include ASCII preview verbatim in approval.
- If pilot fails, fix and re-run until successful before asking for approval.
```bash
deepline enrich --input <current_csv_path> --in-place --with ... --rows 0:1
```
### 5.5 Billing commands
```bash
deepline billing balance --json
deepline billing limit --json
```
When credits are at zero, link to https://code.deepline.com/dashboard/billing so the user can top up.
### 5.6 Quick state peek (playground snapshot)
- Default to reviewing `deepline enrich` output first; it already returns the main run result.
- Use `deepline csv --read --csv <csv_path> [--rows 0:1]` only when you need deeper inspection (raw snapshot/window payloads, custom row windows, or debugging block state).
## 6) Provider routing (quick catalog at top-level)
### 6.1 Search and discovery
- `google_search_google_search` — broad recall and URL discovery.
- `parallel_search` — fast candidate discovery when speed matters.
- `exa_answer` — schema-light quick answers with citations.
- `exa_research` — deeper multi-source synthesis under schema constraints.
- `apollo_people_search` — people/company discovery by title/industry/domain/headcount.
- `crustdata_*` — LinkedIn-oriented discovery and job listing context.
- `peopledatalabs_person_identify` / `peopledatalabs_person_search` / `peopledatalabs_enrich_contact` — structured person/company lookup when identifiers are partial.
- `parallel_run_task` / `parallel_extract` — richer synthesis or URL-bound extraction.
### 6.2 Verification and quality
- `leadmagic_email_validation` — deliverability at scale.
- `leadmagic_mobile_finder` — phone discovery when profile/email exists.
- `leadmagic_email_finder` — lower-cost fallback or missing-provider recovery.
- Apollo/PDL enrichment — corroboration before validation when identifiers are strong.
### 6.3 Messaging and enrichment quality
- `call_ai*` (`call_ai`, `call_ai_claude_code`, `call_ai_codex`) — highest-quality custom signals, messaging, and anything custom; use these to build your OWN signals. They can call parallel, exa, or themselves directly.
- `exa_*`/`parallel_*` direct calls are faster and cheaper but usually lower depth than full AI-column orchestration.
### 6.5 Provider path heuristics
- Broad first pass: direct tool calls for high-volume discovery.
- Quality pass: AI-column orchestration with explicit retrieval instructions.
- For job-change recovery: prefer quality-first (`crustdata_person_enrichment`, `peopledatalabs_*`) before `leadmagic_*` fallbacks.
- Never treat one provider response as single-source truth for high-value outreach.
### 6.5.1 Search and research routing
Most "search" tasks — company research, profile discovery, signal extraction, news — should go through `call_ai` (which orchestrates multiple tools: exa, parallel, google search). Prefer `call_ai` as the default for any research or search need. It produces better results because it can reason about what to look up, cross-reference sources, and retry.
**Before using `call_ai*` for anything, you MUST read `prompts.json` first.** It contains battle-tested prompt templates for common signal types. Start from an existing template, or at least get inspo, and then adapt it rather than writing prompts from scratch.
**Direct Google CSE (`google_search_google_search`) on its own** is appropriate for one thing: LinkedIn profile lookup when you have highly specific identifiers. Use it when you can build a maximally specific query:
- Include every available row field: name + company + title + geo.
- Use `site:linkedin.com/in` scoping and quoted exact-match phrases.
- If row data is thin (just a name), inject ICP persona titles as search terms (e.g. "revops", "sales ops") to constrain results.
- Bad: `"Jane Smith" site:linkedin.com/in`
- Good: `"Jane Smith" "Acme Corp" "sales ops" site:linkedin.com/in`
If you can't build a specific enough query (missing company AND title AND geo), don't use raw CSE; ask the user for more context.
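The query-building rules above can be sketched as a tiny shell helper (the variable names and row values are hypothetical):

```shell
# Assemble a maximally specific LinkedIn lookup query from row fields:
# quoted exact-match phrases plus site: scoping.
name="Jane Smith"
company="Acme Corp"
title="sales ops"   # from row data, or injected from ICP persona titles

query="\"${name}\" \"${company}\" \"${title}\" site:linkedin.com/in"
echo "$query"
# → "Jane Smith" "Acme Corp" "sales ops" site:linkedin.com/in
```

If `company` and `title` and geo are all empty, stop here and ask the user for more context instead of searching.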
### 6.6 Decision-by-use quick map
- Use AI-columns for: website signals, custom messaging, qualification notes, strict JSON schemas.
- Use direct tools for: cheap discovery, deterministic extraction from known URLs, and low-latency checks.
- Use Apify when the task is niche/scraper-specific and actor-level input needs control.
### 6.7 Parallel web research and extraction
Use the dedicated integration guide for operator details and examples:
[src/lib/integrations/parallel/agent-guidance.md](../../src/lib/integrations/parallel/agent-guidance.md)
Short rule:
- When the task is managed web research/extraction (including synthesis), prefer `parallel_search` / `parallel_extract` / `parallel_run_task` with a real one-row pilot first.
### 6.8 Category guide: choose provider by objective
| Objective | Preferred path | Alternate/fallback |
|---|---|---|
| Discovery/search | `call_ai` with prompts from `prompts.json` (orchestrating exa, parallel, google search) | direct `exa_*` / `parallel_*` / google search calls |
| Profile/company matching | `crustdata_*` / `peopledatalabs_*` | `apollo_people_match` |
| Website evidence + signal extraction | `call_ai*` with Exa/Parallel retrieval prompt (inspo from `prompts.json`) | `exa_research` / `parallel_extract` |
| Validation | `leadmagic_email_validation` first, then enrich corroboration | `leadmagic_email_finder` |
| Outreach content | `call_ai*` (schema-aware) with prompts from `prompts.json` (orchestrating exa, parallel, google search) | direct `exa_*` / `parallel_*` / google search calls |
| Scraper-specific niche tasks | `apify` actor + input schema flow | direct web search tools |
| Anything LinkedIn scraping | `apify` actors, by far the best | direct web search tools |
Use this table as the first routing check before choosing a recipe.
## 6.9 Custom Signal guidance (`call_ai`) and `prompts.json`
Use these signal buckets when building enrichment prompts:
- Funding recency: last round type/date/amount, investor names, use-of-proceeds clues.
- Hiring acceleration: new roles in sales, revenue ops, solutions, partnerships, AI.
- Job-posting drift: compare job post stack/signals vs company website stack.
- Org change: new leaders, promotions, team expansion, leadership churn.
- Product or packaging shifts: new SKUs, pricing model changes, enterprise plans.
- Stack changes: newly adopted data/CRM/support/analytics/AI tools.
- Geographic expansion: new regions, markets, offices.
- Compliance/security signals: SOC2, ISO, HIPAA, FedRAMP, procurement readiness.
- Channel/partner motion: alliances, marketplace entries, reseller changes.
Prompt pattern guidance:
- Start from templates in `prompts.json` (it has 50+ templates/prompts).
- Include all schema keys you need (score/summary/notes) in `json_mode` or system instructions.
- Include source URLs per claim in outputs.
- Keep missing values explicit (`null`), avoid inventing fields.
- Start with lightweight model/short prompt; increase detail only when shape is stable.
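A minimal `json_mode` schema sketch for a funding-recency signal, following the guidance above (all keys here are illustrative, not a fixed contract):

```json
{
  "type": "object",
  "properties": {
    "score": { "type": "number" },
    "summary": { "type": "string" },
    "source_urls": { "type": "array", "items": { "type": "string" } },
    "last_round_type": { "type": ["string", "null"] },
    "last_round_date": { "type": ["string", "null"] }
  },
  "required": ["score", "summary", "source_urls"],
  "additionalProperties": false
}
```

Note the nullable types for fields that may be missing and the required `source_urls` array, so the model cannot omit citations or invent values silently.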
Template index (for `call_ai`) — pick the closest match, adapt, don't write from scratch. Skipping this produces weaker prompts and wastes credits on bad schema outputs:
- `10-K Analysis of Top Annual Initiatives` → strategic/company signals, annual priorities, exec-level research
- `5 interesting facts about a candidate` → person research, personalization data points
- `Accelerator participation` → startup signals, funding stage, accelerator/investor context
- `AI Outbound - Followup with Event Attendees` → outreach content, personalized messaging, event-triggered hooks
The full `call_ai` prompt catalog (including all custom-signal templates) is stored in `prompts.json`.
## 7) Structured output quality guardrails
- Include source URLs for extractive claims.
- Use `null` explicitly for missing fields.
- Keep confidence where practical for non-trivial claims.
- Prefer entity-stable matching (name + domain + LinkedIn).
- For `call_ai*`, `json_mode` expects JSON Schema object/stringified schema.
- Prefer `model: "sonnet"` for strict schema adherence; use `opus` when hard reasoning is required.
- Do not invent values.
### Common output paths (quick reference)
| Tool | Input fields | Key output paths |
|------|--------------|------------------|
| `leadmagic_email_finder` | `first_name`, `last_name`, `domain` | `.data.email`, `.data.status` |
| `leadmagic_profile_search` | `profile_url` | `.data.company_website`, `.data.full_name`, `.data.work_experience[0].company_website` |
| `leadmagic_email_validation` | `email` | `.data.email_status`, `.data.is_domain_catch_all`, `.data.mx_provider` |
| `crustdata_person_enrichment` | `linkedinProfileUrl` | `.data[0].name`, `.data[0].email`, `.data[0].current_employers[0].employer_company_website_domain[0]` |
## 8) Waterfalls (short, opinionated)
Always use waterfalls when possible; they are the safest way to enrich.
Default execution stance:
1. Run a real pilot first: `--rows 0:1`.
2. Keep waterfall in one command (`--with-waterfall` + `--end-waterfall`). Prefer a clear `--type` (or at minimum `--result-getters`) per block.
3. Review `deepline enrich` output first. Only use snapshot for deeper debug.
### 8.0 Type/getter contract (critical)
- In CLI, `--with-waterfall <NAME>` is a label only.
- `--type` is the explicit type for the block and must be one of:
- `email`, `phone`, `linkedin`, `first_name`, `last_name`, `full_name`
- `--type` says what the block is looking for (`email`, `phone`, etc.).
- `--result-getters` says where in provider output to find that value.
- Use both when possible: `--type` gives intent, `--result-getters` gives extraction path.
- After `--end-waterfall`, the waterfall name becomes a plain-text column with the resolved scalar (e.g. `{{email}}` = `"john@example.com"`).
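A sketch of what a getter path like `data.0.email` extracts from a provider payload (the payload is made up; `python3` is used only to illustrate the traversal, not as part of the CLI contract):

```shell
# Illustrative provider output: an array under "data" with the email nested inside
payload='{"data":[{"email":"john@example.com"}]}'

# "data.0.email" walks: key "data" -> index 0 -> key "email"
python3 -c 'import json,sys; d=json.loads(sys.argv[1]); print(d["data"][0]["email"])' "$payload"
# → john@example.com
```

Listing several getters (e.g. `["data.email","email","data.0.email"]`) covers providers whose outputs nest the same value at different paths.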
### 8.1 If you have name + company and need email
```bash
deepline enrich --input leads.csv --in-place --rows 0:1 \
--with-waterfall "email_recovery" \
--type email \
--result-getters '["data.email","email","data.0.email"]' \
--with 'apollo_match=apollo_people_match:{"first_name":"{{First Name}}","last_name":"{{Last Name}}","organization_name":"{{Company}}"}' \
--with 'crust_profile=crustdata_person_enrichment:{"linkedinProfileUrl":"{{colA.linkedin}}","fields":["email","current_employers"],"enrichRealtime":true}' \
--with 'pdl_enrich=peopledatalabs_enrich_contact:{"first_name":"{{First Name}}","last_name":"{{Last Name}}","domain":"{{Company Domain}}"}' \
--end-waterfall \
--with 'email_validation=leadmagic_email_validation:{"email":"{{email_recovery}}"}'
```
### 8.2 If you have LinkedIn URL and need email
```bash
deepline enrich --input leads.csv --in-place --rows 0:1 \
--with-waterfall "email_recovery" \
--type email \
--result-getters '["data.0.email","data.email","emails.0.address"]' \
--with 'apollo_match=apollo_people_match:{"first_name":"{{First Name}}","last_name":"{{Last Name}}","organization_name":"{{Company}}","domain":"{{Company Domain}}"}' \
--with 'crust_profile=crustdata_person_enrichment:{"linkedinProfileUrl":"{{colA.linkedin}}","fields":["email","current_employers"],"enrichRealtime":true}' \
--with 'pdl_enrich=peopledatalabs_enrich_contact:{"linkedin_url":"{{colA.linkedin}}","first_name":"{{First Name}}","last_name":"{{Last Name}}","domain":"{{Company Domain}}"}' \
--end-waterfall \
--with 'email_validation=leadmagic_email_validation:{"email":"{{email_recovery}}"}'
```
Note: catch-all email validation is reported as an error, but it's really just inconclusive; you can continue.
### 8.3 If you have email and need person/company context
```bash
deepline enrich --input leads.csv --in-place --rows 0:1 \
--with 'email_validation=leadmagic_email_validation:{"email":"{{Email}}"}' \
--with-waterfall "full_name" \
--type full_name \
--result-getters '["person.name","data.person.name","data.name","name"]' \
--with 'apollo_match=apollo_people_match:{"email":"{{Email}}"}' \
--with 'crust_profile=crustdata_person_enrichment:{"linkedinProfileUrl":"{{colA.linkedin}}","fields":["current_employers"],"enrichRealtime":true}' \
--with 'pdl_identify=peopledatalabs_person_identify:{"email":"{{Email}}"}' \
--end-waterfall
```
### 8.4 If you have first name + last name + domain and need email (pattern waterfall)
Always validate email at the end. Run pattern checks first (`v1..v4`), then provider fallbacks.
```bash
deepline enrich --input leads.csv --in-place --rows 0:0 \
--with 'email_patterns=run_javascript:{"code":"const f=(row[\"First Name\"]||\"\").trim().toLowerCase(); const l=(row[\"Last Name\"]||\"\").trim().toLowerCase(); const d=(row[\"Company Domain\"]||\"\").trim().toLowerCase(); if(!f||!l||!d) return {}; return {p1:`${f}.${l}@${d}`,p2:`${f[0]}${l}@${d}`,p3:`${f}${l[0]}@${d}`,p4:`${f}@${d}`};"}' \
--with-waterfall "email_recovery" \
--type email \
--result-getters '["data.email","email","data.0.email"]' \
--with 'v1=leadmagic_email_validation:{"email":"{{email_patterns.p1}}"}' \
--with 'v2=leadmagic_email_validation:{"email":"{{email_patterns.p2}}"}' \
--with 'v3=leadmagic_email_validation:{"email":"{{email_patterns.p3}}"}' \
--with 'v4=leadmagic_email_validation:{"email":"{{email_patterns.p4}}"}' \
--with 'apollo_match=apollo_people_match:{"first_name":"{{First Name}}","last_name":"{{Last Name}}","organization_name":"{{Company}}","domain":"{{Company Domain}}"}' \
  --with 'crust_profile=crustdata_person_enrichment:{"linkedinProfileUrl":"{{colA.linkedin}}","fields":["email","current_employers"],"enrichRealtime":true}' \
--with 'pdl_enrich=peopledatalabs_enrich_contact:{"first_name":"{{First Name}}","last_name":"{{Last Name}}","domain":"{{Company Domain}}"}' \
--end-waterfall \
--with 'email_validation=leadmagic_email_validation:{"email":"{{email_recovery}}"}'
```
### 8.5 If you only have company name and need employees
Apify is a great provider for this.
Goal: from just `Company`, resolve company identity, find GTM personas, pick the best-fit ICP contact, research recent LinkedIn posts, and draft a personalized outbound message. This is how you get from a bare company list to personalized, signal-based outreach.
Example (expected outcome: the flow can select people like Drew Bredvick at Vercel when present):
```bash
deepline enrich --input companies.csv --in-place --rows 0:0 \
--with 'apollo_company=apollo_company_search:{"q_organization_name":"{{Company}}","per_page":3,"page":1}' \
--with 'company_profile=run_javascript:{"code":"const q=(row[\"Company\"]||\"\").trim().toLowerCase(); const d=row[\"apollo_company\"]?.data||{}; const a=(d.accounts||[]).find(x=>((x?.name||\"\").trim().toLowerCase()===q))||(d.accounts||[])[0]||null; if(!a) return null; return {company_name:a.name||null, company_domain:a.primary_domain||a.domain||null, company_linkedin:a.linkedin_url||null};"}' \
--with 'apify_gtm_people=apify_run_actor_sync:{"actorId":"apimaestro/linkedin-company-employees-scraper-no-cookies","input":{"identifier":"{{company_profile.data.company_linkedin}}","max_employees":60,"job_title":"gtm"},"timeoutMs":180000}' \
--with 'pick_persona=call_ai_claude_code:{"model":"sonnet","json_mode":{"type":"object","properties":{"full_name":{"type":"string"},"headline":{"type":"string"},"linkedin_url":{"type":"string"},"why_fit":{"type":"string"}},"required":["full_name","headline","linkedin_url","why_fit"],"additionalProperties":false},"system":"Pick one best outreach persona for GTM at the target company. Prefer current GTM ownership (growth, revops, partnerships, GTM engineering). If Drew Bredvick is present, choose him.","prompt":"Company: {{Company}}\\nCandidates JSON: {{apify_gtm_people.data}}\\nReturn strict JSON only.","agent":"claude"}' \
--with 'apify_posts=apify_run_actor_sync:{"actorId":"apimaestro/linkedin-profile-posts","input":"{\"username\":\"{{pick_persona.extracted_json.linkedin_url}}\",\"total_posts\":5,\"limit\":5}","timeoutMs":180000}' \
--with 'post_research=call_ai_claude_code:{"model":"haiku","json_mode":{"type":"object","properties":{"themes":{"type":"array","items":{"type":"string"}},"signals":{"type":"array","items":{"type":"string"}},"hook":{"type":"string"}},"required":["themes","signals","hook"],"additionalProperties":false},"prompt":"Analyze this person for outbound personalization. Person: {{pick_persona.output}}Recent posts: {{apify_posts.extracted_json}}Return strict JSON.","agent":"claude"}' \
  --with 'custom_message=call_ai_claude_code:{"model":"sonnet","json_mode":{"type":"object","properties":{"subject":{"type":"string"},"message":{"type":"string"}},"required":["subject","message"],"additionalProperties":false},"prompt":"Write one concise outbound message (<=90 words) to {{pick_persona.extracted_json.full_name}} at {{company_profile.data.company_name}} using: {{post_research.extracted_json}}. No fluff, specific details only. make it very casual, hook is about something i was interested in the post, about why i cant ignore it because im building gtm tooling."}'
```
Interpolation contract for this flow:
- `company_profile` values are under `{{company_profile.data.*}}`.
- `pick_persona` parsed JSON fields are consumed via `{{pick_persona.extracted_json.*}}`.
- `post_research` input intentionally uses `{{pick_persona.output}}` + `{{apify_posts.extracted_json}}`.
Then continue remaining rows from the same file with in-place (do not re-enrich pilot rows):
```bash
deepline enrich --input companies.csv --in-place --rows 1:2 ...
```
### 8.6 Hard rules
- Do not chain setup + enrich + snapshot in one line.
- Do not reuse an existing output file path.
- Scale to full rows only after the one-row pilot returns valid shape.
- Always include `leadmagic_email_validation` as the final email-quality gate if doing email enrichment, though it may be inconclusive and you can continue.
- For CLI waterfalls, use canonical type names (`email|phone|linkedin|first_name|last_name|full_name`) in `--type`.
- Always add `--result-getters` so `{{<waterfall_name>}}` resolves to the matched scalar.
## 13) Advanced references (still in skill for quick access)
- `qualification-and-email-design.md`: qualification + 4-step sequence design, with `context/icp.md` guidance.
- `playground-guide.md`: local playground workflow and sandbox/network fallback.
- `actor-contracts.md`: Apify input/output contracts and schema guidance.
- `prompts.json`: full `call_ai` custom-signal template catalog; use it for exact template titles when building custom signals.
Critical: keep qualification-and-email-design workflow context active when running any sequence task. It is not optional for ICP-driven messaging.
For sequence and qualification-heavy use cases, open both:
- `qualification-and-email-design.md`
- `playground-guide.md`
### Apify actor flow (short canonical policy)
1. If user provides actor ID/name/URL: use it directly.
2. If not, use `src/lib/integrations/apify/recommended-actors.ts` top-ranked candidate for the use case.
3. If not present, run discovery search.
4. Avoid rental-priced actors.
5. Pick value-over-quality-fit; when tied, choose best evidence-quality/price balance.
6. Honor `operatorNotes` over public ratings when conflicting.
```bash
deepline tools execute apify_list_store_actors --payload '{"search":"linkedin company employees scraper","sortBy":"relevance","limit":20}' --json
deepline tools execute apify_get_actor_input_schema --payload '{"actorId":"bebity/linkedin-jobs-scraper"}' --json
```
## 14) Feedback protocol
When flows misbehave, report with `deepline provide-feedback`.
```bash
deepline provide-feedback "Prompt/eval flow: ... exact failure point, expected vs actual output, repro, and consistency details."
```
Include:
- workflow goal and dataset scope
- tool/provider/model used
- failure point (block/step)
- logs/request IDs/error payloads
- reproduction and stability notes