Playbook

Use Apify when you need controlled web automation/scraping workflows.
  • Use apify_list_store_actors first when you do not know the actor id yet.
  • Build actorId as username/name from store results.
  • Use apify_get_actor_input_schema to inspect required/optional fields before running.
  • Prefer apify_run_actor for long or uncertain runs; poll run status before fetching outputs.
  • Use apify_run_actor_sync only for bounded, small workloads where blocking is acceptable.
  • Use actor-contracts.md for actor-specific required/optional input fields and sample payloads.
  • Validate payload shape with a tiny run before scaling row counts.
# Inspect the apify_run_actor tool definition before using it
deepline tools get apify_run_actor --json
# Search the Apify store for candidate actors
deepline tools execute apify_list_store_actors --payload '{"search":"linkedin jobs scraper","sortBy":"relevance","limit":10}' --json
# Convert first result into actorId: username/name
deepline tools execute apify_list_store_actors --payload '{"search":"linkedin jobs scraper","sortBy":"relevance","limit":10}' --json | jq -r '.result.data.actors[0] | "\(.username)/\(.name)"'
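The jq transform above can be exercised offline against a canned response before spending an API call. The response shape here is an assumption based on the filter used above; verify it against your actual apify_list_store_actors output:

```shell
# Sketch: build actorId from a sample store response.
# The JSON shape below is assumed, not captured from a real run.
response='{"result":{"data":{"actors":[{"username":"bebity","name":"linkedin-jobs-scraper"}]}}}'
actor_id=$(printf '%s' "$response" | jq -r '.result.data.actors[0] | "\(.username)/\(.name)"')
echo "$actor_id"   # bebity/linkedin-jobs-scraper
```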
# Inspect the actor's input schema before execution
deepline tools execute apify_get_actor_input_schema --payload '{"actorId":"bebity/linkedin-jobs-scraper"}' --json
# Start an asynchronous run, keeping the row count small for the first pass
deepline tools execute apify_run_actor --payload '{"actorId":"bebity/linkedin-jobs-scraper","input":{"title":"Web Developer","location":"United States","rows":10}}' --json
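The poll-before-fetch step from the playbook can be sketched as a plain shell loop. `get_run_status` below is a hypothetical stub standing in for whatever status call your deepline setup exposes; the real tool name and response field may differ:

```shell
# Sketch of the poll-then-fetch pattern with a stubbed status call.
attempt=0
get_run_status() {
  # stub: pretend the run succeeds on the third poll
  if [ "$attempt" -ge 3 ]; then echo "SUCCEEDED"; else echo "RUNNING"; fi
}
status=RUNNING
while [ "$status" = "RUNNING" ]; do
  attempt=$((attempt + 1))
  status=$(get_run_status)
  # in a real workflow, sleep between polls (e.g. sleep 10)
done
echo "$status"   # SUCCEEDED
```

Only fetch dataset items after the status reaches a terminal success state.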
# Fetch dataset items once the run has succeeded
deepline tools execute apify_get_dataset_items --payload '{"datasetId":"{{dataset_id}}","limit":10,"offset":0}' --json
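One way to validate payload shape before scaling row counts is to inspect the keys of the first returned item and confirm the fields you expect are present. The item below is a made-up sample, not real actor output:

```shell
# Sketch: sanity-check the first dataset item before scaling rows.
# Adapt the expected keys to your actor's actual output schema.
items='[{"title":"Web Developer","location":"United States"}]'
first_keys=$(printf '%s' "$items" | jq -r '.[0] | keys | join(",")')
echo "$first_keys"   # location,title (jq sorts keys alphabetically)
```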