
YG Hong on shipping lead scoring without knowing ML

YG Hong's April Tools Day talk is not really about machine learning. It is about fixing a broken routing system by choosing the right outcome and staying inside the feedback loop.


The best line in YG Hong's April Tools Day talk is not the one about machine learning. It is the one about the score that made no sense.

One lead had an iRouting score of negative 6,998,200. It still closed.

That tells you almost everything you need to know.

The system was broken in the only way that matters in a revenue org: people stopped trusting it.

The real problem was operational, not technical

Owner's old routing system was supposed to identify the best leads and hand them to the outbound team. In practice, sales managers had fallen back to giant manual Salesforce lists and a pile of filters no one really believed in.

YG describes the assignment the way operators usually get assignments:

"I received one of those very vague and ambiguous tasks that everyone in startup land knows and loves or hates. Mine was bring back iRouting and make it better."

Then he asked the obvious question: where did the scores come from?

Claude did not know.

His boss, who had written the script, also did not know.

That is the actual starting point for a lot of applied AI inside companies. Not a clean dataset. Not a neatly scoped model problem. A half-forgotten system that still affects frontline behavior even though nobody can explain it.

The reason it worked is simple: YG started with the outcome

YG did not start with "let's do machine learning."

He started with the thing the team actually needed: more decision-maker connects.

That choice matters because decision-maker connects sit much closer to the real job than a downstream metric like closed-won. Once he narrowed the problem to a binary outcome the team understood, the rest of the work became legible.

"Huh, I wonder if we can predict DMCs. You know, decision maker connects. That's a binary outcome. It's either we connect with the decision maker or we don't."

That is the lesson.

The model was not the idea. The business outcome was the idea.
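Framing the problem as a binary outcome is what makes it tractable without deep ML expertise. A minimal sketch of that framing, with invented weights and feature names (nothing here is YG's actual model):

```python
import math

# Hypothetical feature weights -- illustrative only, not the real iRouting model.
WEIGHTS = {"is_decision_maker_title": 2.0, "recent_website_visit": 1.1, "list_age_days": -0.02}
BIAS = -1.5

def dmc_probability(lead: dict) -> float:
    """Score a lead as P(decision-maker connect) -- a single binary outcome."""
    z = BIAS + sum(w * lead.get(k, 0) for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))  # logistic squash to (0, 1)

leads = [
    {"id": "a", "is_decision_maker_title": 1, "recent_website_visit": 1, "list_age_days": 3},
    {"id": "b", "is_decision_maker_title": 0, "recent_website_visit": 0, "list_age_days": 90},
]
ranked = sorted(leads, key=dmc_probability, reverse=True)
```

The point is the shape, not the math: one probability per lead for one outcome the team understands, so reps can sort by it and the org can check it against real connect rates.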

This is also why the talk sits so naturally next to the broader Deepline point of view. Good GTM systems start with the outcome, the evidence, and the checks. They do not start with workflow theater.

The underrated move was loading real context into the system

One of the strongest parts of YG's talk is how unglamorous the setup really was:

"I connected Claude Code to Slack, Notion, Google Docs, so it could read all the documentation and the communication of iRouting."

"I connected it also to the Snowflake MCP, so it could actually access our data."

That is much more useful than any generic "AI built a model" headline.

The talk is really about context engineering plus judgment:

  • give the agent the actual operating history
  • give it the real data
  • keep asking dumb questions until the system becomes legible

That is what made the work possible.

The most memorable part is the operator posture

The line everyone will remember is this one:

"My secret is I'm not that smart. And I'm really not good at math. But my superpower is I'm very comfortable feeling stupid, and I'm really good at asking questions until I no longer feel stupid."

That is not false humility. It is the right temperament for this kind of work.

YG is not pretending the system arrived cleanly. He describes two weeks of arguments with Claude, gut checks, cigarette breaks, and 4 a.m. nights where he was evaluating work he did not know how to do by hand.

That is why the result is credible.

The human role was not to pretend to be the model. The human role was to stay inside the reflection loop long enough to decide what counted as believable.

The model mattered less than the system change it unlocked

The punchline is obvious:

"We hit our fucking numbers. Double DMCs."

But the better line comes later:

"We switched from like a multi-step, like one week cadence to actually like we rip and replace their leads every single day."

That is the more important outcome.

The model was useful because it changed the cadence of the sales system. It rebuilt trust. It made the next action clearer. It let the team work from fresher signal instead of retrying stale leads out of habit.
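What daily rip-and-replace means operationally can be sketched in a few lines; the function, field names, and round-robin assignment below are all invented for illustration, not Owner's actual routing logic:

```python
def refresh_queues(leads, reps, score, queue_size=25):
    """Nightly rip-and-replace: re-rank every open lead from scratch and
    rebuild each rep's queue, instead of letting a week-old list go stale."""
    fresh = sorted((l for l in leads if l["status"] == "open"),
                   key=score, reverse=True)
    queues = {rep: [] for rep in reps}
    for i, lead in enumerate(fresh[: queue_size * len(reps)]):
        queues[reps[i % len(reps)]].append(lead["id"])  # round-robin the top-scored leads
    return queues

# Example: two reps, ten open leads, score read from a stored field.
leads = [{"id": n, "status": "open", "score": n} for n in range(10)]
queues = refresh_queues(leads, ["ann", "bob"], score=lambda l: l["score"], queue_size=2)
```

The cadence is the whole trick: because the queue is rebuilt from the freshest scores every day, a stale lead never survives out of habit.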

That is what good applied AI should do. Not produce a clever artifact. Change the quality of the operating loop.

If you want to go deeper

For the event context and the other talks, start with the April Tools Day recap.

If you want to build GTM systems that start with the outcome instead of the workflow, start with Deepline.


Build GTM systems around outcomes, not tool choreography

Deepline helps teams connect providers, evidence, and checks inside Claude Code so the system can search for the route without losing the goal.