Events

What happens when GTM engineers stop using UIs

Four people showed what they actually built with Claude Code for sales. One doubled cold call connects without knowing ML. Another runs full outbound from the terminal.

Deepline

50 people in a Meatpacking District loft on a Tuesday night. Four talks. No panels. No "thought leadership." Just people showing what they built.

Not one person asked "does AI work for sales?" Every question was about how to scale it.


Full Event Recording


Outbound from the Terminal / Ben Holley, AirOps

Ben wired 10+ APIs into Claude Code and runs entire outbound campaigns by talking to his computer. Domain purchasing, mailbox warmup, personalization, sending, review. All from the terminal.

What stuck:

  • Claude Code vs Clay: Clay is deterministic and observable. Claude Code is faster when you need custom personalization logic at scale. They're complementary, not competing.
  • Local SQLite next to HubSpot: "I'm not going to ask RevOps to build me a new field every time I have some weird flight of fancy."
  • Subagent review loops: Generate personalization variables, then spawn Sonnet subagents to QA before sending. Don't use Haiku for this.
  • Context layer: Download API docs into Claude Code once and build persistent knowledge, so it stops re-reading documentation every session.
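Ben's local-SQLite pattern is easy to replicate. Here is a minimal sketch; the `leads` table, its columns, and the dates are hypothetical stand-ins for whatever "weird flight of fancy" fields you don't want in HubSpot:

```python
import sqlite3

# A throwaway side-database next to the CRM: campaign-specific fields live
# here instead of polluting HubSpot. Schema and column names are hypothetical.
conn = sqlite3.connect(":memory:")  # use a file path for persistence
conn.execute("""
    CREATE TABLE IF NOT EXISTS leads (
        hubspot_id TEXT PRIMARY KEY,   -- join key back to the CRM
        email TEXT,
        last_contacted TEXT,           -- ISO date of last touch
        competitor_cited TEXT          -- personalization variable for one campaign
    )
""")
conn.execute(
    "INSERT INTO leads VALUES (?, ?, ?, ?)",
    ("123", "ana@example.com", "2025-03-01", "Acme Corp"),
)

# Example exclusion check: leads not touched in the last 14 days
# (relative to a fixed reference date, for reproducibility).
rows = conn.execute(
    "SELECT email FROM leads WHERE last_contacted < date('2025-04-01', '-14 days')"
).fetchall()
print(rows)
```

Because the side database joins back to HubSpot on a single ID, nothing in the CRM needs to change when a campaign invents a new variable.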

Ben on LinkedIn · benyamin.ai · AirOps


I Don't Know ML / YG Hong, Owner.com

YG leads Applied AI at Owner.com. He rebuilt their lead scoring from scratch using Claude Code. He doesn't know machine learning. The model doubled their decision-maker connect rate on cold calls.

What stuck:

  • The old scoring was broken and nobody knew. One lead scored -6,998,200 points and still closed.
  • "Just do it" beats "write me a plan." Two weeks of 4am arguments with Claude. The gradient boosted tree worked.
  • Feature engineering matters more than model choice. They excluded PST timezone bias because the model just learned they called West Coast leads at better times. That's not a real signal.
  • Daily lead replacement over retry cadences. If a lead doesn't pick up, tomorrow's scored list is better than yesterday's.
  • Result: 2x DMC rate. YG still doesn't know ML.
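The shape of YG's rebuild can be sketched with scikit-learn's gradient boosting classifier on synthetic data. The feature names, and the deliberate exclusion of a timezone flag from training, are illustrative; this is not Owner.com's actual schema or model:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 500

# Synthetic lead features. A timezone_pst flag exists in the raw data but is
# deliberately left OUT of training: it only encoded the fact that West Coast
# leads happened to be called at better times, not a real signal.
google_reviews = rng.integers(0, 300, n)
uses_toast_pos = rng.integers(0, 2, n)
timezone_pst = rng.integers(0, 2, n)  # excluded from X below

# Synthetic connect labels, loosely driven by the real features.
p = 0.1 + 0.3 * uses_toast_pos + 0.001 * google_reviews
y = rng.random(n) < p

X = np.column_stack([google_reviews, uses_toast_pos])  # no timezone_pst
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Score the list and take the top decile for the tiger team.
scores = model.predict_proba(X)[:, 1]
top_decile = np.argsort(scores)[-n // 10:]
print(len(top_decile))  # → 50
```

The point of the feature-exclusion step is exactly the PST anecdote: a boosted tree will happily learn any correlate of past behavior, so leakage-prone columns have to be dropped before training, not after.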

YG on LinkedIn · Owner.com · Owner.com careers


Cookie Cutter GTM Is Dead / Garrett Wolfe, OneGTM

Garrett runs OneGTM, a GTM engineering agency. He surveyed 250 GTM engineers globally and brought a live demo of a custom sales tool he built for Antimetal.

What stuck:

  • 45% of companies don't even know what their GTM engineers do.
  • Growth alpha gets copied instantly. Every playbook has a shelf life now. The future is bespoke tooling per team.
  • Live demo: Custom web app on Supabase + Attio. Intent signals, instant prospecting, phone waterfall, straight to the dialer. No Clay tables.
  • Claude Code + Deepline collapsed days of table building into 20-30 minutes.
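The "phone waterfall" in Garrett's demo is a common enrichment pattern: try data providers in order of cost and quality and stop at the first hit. A minimal sketch with stand-in provider functions (the real tool sits on Supabase + Attio and calls live data vendors):

```python
# Stand-in providers: each takes a lead dict and returns a phone number or None.
# In the real tool these would be API calls to data vendors.
def provider_a(lead):
    return lead.get("cached_phone")  # cheapest source first: the CRM itself

def provider_b(lead):
    # Hypothetical vendor lookup keyed on company domain.
    return "+1-555-0100" if lead.get("domain") == "example.com" else None

def waterfall(lead, providers):
    """Try each provider in order; return the first non-empty result."""
    for provider in providers:
        phone = provider(lead)
        if phone:
            return phone
    return None

lead = {"name": "Dana", "domain": "example.com"}
print(waterfall(lead, [provider_a, provider_b]))  # → +1-555-0100
```

Ordering providers cheapest-first means most lookups never reach the expensive vendors, which is what makes the pattern worth wiring up at all.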

Garrett on LinkedIn · OneGTM


Stop Building Workflows. Define Outcomes. / Jai Toor, Deepline

Jai closed the event with a live demo. Competitive social listening, starting from a single prompt, ending with a Lemlist campaign. Claude Code + Deepline.

What stuck:

  • The Bitter Lesson applies to GTM. Generalized systems with compute outperform hand-tuned rules. Give it closed-won data and say "figure it out" instead of writing scoring logic.
  • UX is the only thing that matters. Agent is just another form of UX. Don't compete with where reps already live. Some teams live in Slack. Some live in the CRM. Meet them there.
  • Backtesting is a superpower. Pull closed-wons from six months ago. Get a bunch of different signals. See which ones were actually present on wins.
  • Signal discovery almost always finds two easy things you're missing. And when you go deeper, one or two core assumptions usually turn out to be wrong.
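Jai's backtest reduces to a simple measurement: for each candidate signal, how often was it present on historical closed-wons versus losses? A sketch with illustrative signal names and made-up deal data:

```python
# Each historical deal: outcome plus which candidate signals fired on it.
deals = [
    {"won": True,  "signals": {"hired_sdr", "soc2_page"}},
    {"won": True,  "signals": {"hired_sdr"}},
    {"won": True,  "signals": {"soc2_page"}},
    {"won": False, "signals": {"soc2_page"}},
    {"won": False, "signals": set()},
]

def backtest(deals, signal):
    """Fraction of wins vs. losses where the signal was present."""
    wins = [d for d in deals if d["won"]]
    losses = [d for d in deals if not d["won"]]
    win_rate = sum(signal in d["signals"] for d in wins) / len(wins)
    loss_rate = sum(signal in d["signals"] for d in losses) / len(losses)
    return win_rate, loss_rate

print(backtest(deals, "hired_sdr"))  # → (0.6666666666666666, 0.0)
```

A signal that fires on two thirds of wins and no losses (like `hired_sdr` here) is worth keeping; one that fires equally on both is noise, no matter how plausible it sounds.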

Jai on LinkedIn · Deepline · jai@deepline.com


What People Actually Asked

Nobody asked whether AI works for GTM. They asked:

  • How do you scale past a pilot? Deploying to 500 reps is a different sport than building a demo.
  • Build vs buy for intent signals? Annoying answer: it depends on your data maturity.
  • How do you earn trust with a sales team? Step one isn't the AI. It's cleaning dead accounts off rep books so they stop wasting dials.
  • Is context engineering a fad? For companies with historical data, the generalized model plus good examples will outperform hand-tuned rules every time.

Transcripts

Ben Holley, AirOps (full transcript)

How are you guys doing? Glad to be here at the Meatpacking District, where the meat is packed. I don't actually know if that's what happens here. They said I only had 10 minutes. I prepared for two hours. I'm just kidding.

So, first of all, thank you guys so much for having me. Thanks for coming. Thank you, Jai and the Deepline team. It was really nice to meet you guys.

What I'm going to talk about today is basically my day-to-day work that I do in Claude Code. One of my coworkers jokingly said that I live in the terminal. He's like, I'm a professional Claude Code prompter. And he's an engineer. And I was like, well, I'm in sales and I feel like I'm a professional Claude Code prompter.

So first thing, a lot of people ask, what's the difference between Claude Code and Clay? I think the biggest thing is that Clay is deterministic. It's very observable. If you have an actual team that needs to see workflows and see outputs and stuff, that's probably preferable. The one thing that I don't like doing anymore is using a UI. I feel like when I have to click on stuff, I'm having an aneurysm. And I think Claude Code is turning me into an idiot or something.

I use WhisperFlow. There's a bunch of tools like that where you just talk into the computer. And basically, I'm like, hey, Claude, I want to run this campaign to these people with this messaging and these personalization variables. And it just starts building it. And it sounds magical, but I think there's a lot of things that you have to set up first.

These are all APIs that I use that are plugged into Claude Code. You could buy domains via Dynadot, via the API. It's the first time I ever spent money using Claude Code was buying URLs in Dynadot. It was a very weird experience. We host our domains on Cloudflare. We warm up mailboxes in Email Bison, which is the sending tool that we use.

So one of the things there, personalization. It's a lot slower to personalize things inside of the Clay UI than it is to just be like, hey, these are the personalization variables I want. And then assign a bunch of subagents to go fill in this variable. So there's a campaign that I was working on where we polled a company's competitors and we wanted to cite them in the email. And in Claude, I'm like, hey, Claude, now that we've made all of these personalization variables, can you just have a bunch of Sonnet subagents go review this and make sure that this actually makes sense?

Using Haiku subagents is definitely cheaping out. I wouldn't recommend Haiku subagents. One of the benefits of working at AirOps is that we pay Anthropic a lot of money, so I have unlimited tokens.

In order for this stuff to work, you do have to have a pretty detailed context layer. I have a bunch of documentation about what AirOps is, what it does. I have an API onboarding flow. Anytime I decide to onboard a new tool, I do a command that basically is, hey, Claude Code, go download these API docs from the site. And anytime I'm working in a tool, whether it's Apollo or Email Bison or whatever, it knows that these are the API calls that it needs to make.

So yeah, basically the end result of that is that I can pull a list. When you build a list, you have to do an exclusion list. You have to make sure that none of these leads are in an opportunity stage. You have to make sure none of these people have been contacted in the last 14 days. People who aren't in sales or marketing are just like, oh, can you just go build me a list? And I'm going down all of the things that have to be done to actually get a list out the door. And it's quite a bit. Claude Code makes it a lot easier for me, and I think it allows me to deliver higher quality work.

I actually do have an SQLite database on my computer that holds a little bit better records than our HubSpot instance. Because a lot of this stuff you don't really want to pollute HubSpot with. And I'm not going to ask the RevOps team to build me a new field every time I have some weird flight of fancy.

YG Hong, Owner.com (full transcript)

Hi, everyone. Owner is a restaurant tech software startup that builds for mom and pop restaurants, helping them compete with the goliaths of the industry, your Sweetgreens, your Domino's, et cetera, by giving the mom and pop restaurants access to the same types of tools, tech, and resources that the big dogs have. Our ICP is independent restaurants, independent restaurateurs, which makes for a little bit of a unique GTM motion. We are technically B2B, but we have a lot of B2C flavor, especially since it's really difficult to connect with the decision maker of a restaurant, usually the owner.

Over the last couple of months, the applied AI team has had the chance to work on a few cool different AI projects. The main ones for GTM did focus on efficiency and volume. One was a gradient boosted tree, a machine learning model, that lets us identify leads that have a much higher chance of DMC, or decision maker connects. The top decile of those leads have up to like a 2.2, 2.3x increase in DMC rate. The second was pre-call research automation, basically taking a lot of the manual research and clicking around that the reps did before picking up the phone and dialing, eliminated that, resulting in about an 80% increase in dial volume.

My name is YG. I lead applied AI at Owner, and my secret is I'm not that smart. And I'm really not good at math. But my superpower is I'm very comfortable feeling stupid, and I'm really good at asking questions until I no longer feel stupid. And so thanks to our dear friend Claude being a tutor of infinite patience, I've been learning some new stuff.

This all started with iRouting, Owner's Intelligent Routing System, which pretty much just did two things. First, scored and identified top leads. And second, it automatically assigned and routed those leads to our outbound sales team. The issue was iRouting was not running.

I pulled up the scoring script. I spun up Claude Code. I said, Claude, I'm trying to understand the scoring in this script. I'm seeing in the code there's like 10,000 points if they use Toast POS, 5,000 points if they have like 100 plus Google reviews. Where do the scores come from? What are they based off of? How do we calibrate them? And so Claude went into the script and looked around and Claude came back to me and said, I don't know.

I went back to Jonathan. I said, hey, Jonathan, boss, where did we get the scores from? And Jonathan said to me, I don't know. And so I realized we have to rebuild this from scratch.

And so this is when I kicked off a serious dialogue with Claude. I connected Claude Code to Slack, Notion, Google Docs, so it could read all the documentation and the communication of iRouting. I connected it also to the Snowflake MCP, so it could actually access our data.

Claude started suggesting machine learning as a solution. I don't know machine learning. I don't know what it is. I don't know how it works. I don't know the math. Claude was very adamant. It said, based on everything you shared with me, based on the data we have, this is the optimal solution. We can use a gradient boosted tree.

And thus began two weeks of hell. I had to double check, gut check, triple check, disagree with everything Claude was doing, all the while not knowing anything Claude was doing. But after two weeks of frustration and arguments and some cigarette breaks and some 4am bedtimes, at the end of it we had a nice little baby gradient boosted tree model.

We put together a cool little experimental tiger team. Three reps would get only the best scored iRouting leads. And the rest of the reps would just get the ones that they normally work. I was not optimistic. I had not written any of the code. I didn't know the math.

And we hit our numbers. Double DMCs. The score, the machine learning model worked. And now, today, iRouting is back in business. But I still don't know machine learning. I don't know how it works. I don't know the math.

A big part of the two-week process was we had to figure out what variables we would actually want to train the model. So for example, technically PST leads close or DMC at a higher rate. But that's just because we're East Coast and we happen to call them at better times. We didn't want to include that as training data because then it would just bias us to only calling PST leads, which we didn't want.

I just asked a bunch of dumb questions and now I'm a little less dumb. That's it for me.

Garrett Wolfe, OneGTM (full transcript)

Nice to meet all of you. My name's Garrett. I was a very early employee at a company called Unify, which I'm sure many of you are aware of. I was number nine there. And today we're going to talk about how I believe that sales enablement, the future of it, is going to be bespoke.

I put together a list of a bunch of go-to-market engineers from around the world. We collected answers from 250 of them, which allowed us to crunch really good data on go-to-market engineering: what the function means, what tools people use, what they're being paid, what agencies charge, the good, the bad, the ugly.

45% of companies don't actually know what go-to-market engineers do. That's pretty bad. And one of the top frustrations is that most of the tools that we use are poorly integrated, hard to use, poor support, and yet somehow we report to everyone. So we report to everyone, no one knows what we do, and now we're frustrated by the walled gardens that exist. The top bottlenecks are bandwidth and capacity.

Growth has become this painful cycle of discovering go-to-market alpha and then everyone else copying it and it's like no longer worthwhile. Cookie cutter go-to-market no longer works. We're trying to fit round pegs in square holes.

What can we bring and build around your company and what makes your team excited to go to work? The fun for me comes when I get to sit in the room with a team of BDRs that feel like they're running up against walls constantly and we can build them something that lands them a meeting booked with a huge Fortune 500 company. If you empower them to be even better and get paid more themselves, they'll be your biggest fan.

One of my clients is Antimetal. They're like an AI SRE to help teams get to the bottom of outages. Their reps are spending way too much time literally trying to build lists of people and then pulling their phone numbers and then putting it into the dialer.

I built a web app that sits on top of Attio. We're pulling their entire CRM and any company and person that has some type of intent signal into this product. You can sort and filter all of these things. And we're using a data waterfall to pull phone numbers. You can download this list and upload it directly to the team's dialer.

I've been really shocked to learn that so few platforms out there allow you to combine intent signals and prospecting instantaneously. I should be able to have my list of intent signals for each company and then immediately prospect. That's crazy to me.

I've begun to see where products like Deepline and Claude Code can plug in and really abstract away a lot of the cyclical and repetitive steps. A workflow that takes days comes down to simply 20 or 30 minutes.

All to say, the future for go to market is coming. Apollo could have done some crazy things, but they didn't modernize fast enough. And I think in general, I'm super excited about building things with teams in a way that really feels bespoke and special.

Jai Toor, Deepline (full transcript)

How's everyone doing? We're almost there. Last one here. My name is Jai Toor. I'm one of the co-founders of Deepline, and I'm going to be talking about how workflow design and automation is actually now a constraint in the way that we've been doing it for decades.

My background, I've actually just been doing different versions of the same thing for like 15 years. 15 years ago, I was at Uber building marketing tech and marketing operations platforms internally, which was basically just integrating different data sets over and over again. And it's actually pretty insane how every job I've had since then is basically the same thing, and nothing has changed until Claude Code, which is pretty exciting.

What we at Deepline have been working on, we started working with a lot of B2B companies and especially started looking at data warehouses and making the data warehouse the central intelligence of a company. And what we found is most teams are not even close to that. And everyone is really just workflow centric. You start with a problem and then you go step by step and say, all right, if I'm going to solve this problem, I'm going to go get data for it. Now I'm going to score it. Now I'm going to do X. Now I'm going to do Y. And that structured approach sucks. Anybody who's done a lot of Clay tables or marketing ops knows, it's brutal. All your time is just pushing data between different tools and you don't actually remember what you were doing in the beginning.

The mindset shift that I would highly recommend: you should start with the outcome and let the system and the agents and the tools find the route. The more micromanagement you do of the process, the worse your outcomes will be. You will not be able to compete with Claude Code and its infinite knowledge and its ability to iterate and test different options. What you should focus on is what do you actually want? What's an example of that outcome?

There's this concept called the bitter lesson of AI, which is generalized tools with lots of compute will outperform really specific hard-coded human-designed rules. If you try to tell it exactly what to do, it will not outperform it finding the best solution.

UX is actually all that matters. Agent is now just a different form of UX. For Antimetal, it was very focused on just that calling workflow. For Vanta, it's a bigger company, the CRM is where you need it. We have some customers that just only work in Slack. You're not going to get them to change their behavior. Just fit in where they already are.

Backtesting is a superpower. The ability to just say, this is a good outcome, go test it, and make sure that what we're doing is going to generate that result based on historical data. I do it for every signal that we generate for a customer. You can say, go look at our closed-wons from six months ago, and then let's go get a bunch of different signals, and then let's see which ones were actually present on wins.

Signal discovery almost always finds two easy things you're missing. When we go deeper, often one or two of their big core assumptions are not true. The results that we show you should be so specific to every single company, even to their direct competitors. Vanta and Drata have slightly different ICPs. There should be some difference between what your results look like and what someone else's look like.

The main value that we're trying to solve is you should not have to think about how the sausage is made. You should just think about the outcome that you're trying to achieve, the evidence that you have of a good outcome, and then our system will handle the rest.


Next Event

We're doing this again May 20th in NYC. Possibly London and SF too. DM Jai on LinkedIn if you want to speak, demo, or just show up.

Promo code AprilTools for $25 in Deepline credits.

Run your entire GTM motion from Claude Code

Install Deepline and get 45+ GTM data providers in your terminal. Enrich, validate, and sequence, all through natural language.