#07 - My Most Haunting Failure

A few years ago, I was the CTO of Fraugster, a fraud prevention SaaS vendor.

We would routinely run performance POCs with prospects, and in the vast majority of cases we did really well.

But there's one instance where we failed miserably, and it haunts me to this day. Not because I failed. God knows that happened enough times in my career for me to get used to it.

It's actually because to this day, I don't know for sure why we failed in that POC.

But still, I learned a valuable lesson from it.

Episode 1: The Crime

We were speaking to a BNPL (Buy Now, Pay Later) prospect, an industry we had plenty of experience with, even if that particular market was slightly new to us.

We went through the discovery and demo meetings and agreed on running a POC. They'd send us a training dataset with fraud labels, and a "blind" dataset which we'd need to score with our AI and strategy recommendations.

Pretty routine.

What was a bit unusual was that we knew the prospect was testing two other vendors as well, on the exact same data.

Why unusual? Honestly, you'd think this would be standard practice for fintechs, but the truth is we rarely saw it in practice.

Regardless, we went in very confident, as we had run dozens of such exercises in the past.

And indeed, after a couple of weeks we had the results, and we were pretty confident they were in line with our usual performance.

How did we know that, even though the scoring dataset was blind? We looked at the score distribution, manually reviewed some sample transactions, and validated that the feature store worked correctly.
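To make that first check concrete, here's a minimal sketch of what a score-distribution sanity check can look like. The function, the thresholds, and the specific checks are all illustrative assumptions of mine, not Fraugster's actual pipeline; without labels you can't measure accuracy, but you can still catch obvious failures like scores collapsing to a constant.

```python
import statistics

def sanity_check_scores(scores, expected_mean_range=(0.02, 0.15), max_flat_fraction=0.5):
    """Rough sanity checks on blind-set risk scores (illustrative thresholds).

    scores: list of model risk scores in [0, 1] for the blind dataset.
    """
    mean = statistics.mean(scores)
    stdev = statistics.pstdev(scores)
    # A degenerate model or broken feature pipeline often emits
    # (nearly) the same score for every transaction.
    most_common_fraction = max(scores.count(s) for s in set(scores)) / len(scores)

    return {
        "mean_in_expected_range": expected_mean_range[0] <= mean <= expected_mean_range[1],
        "scores_not_constant": stdev > 0,
        "not_mostly_one_value": most_common_fraction <= max_flat_fraction,
    }
```

A healthy run passes all three checks; a run where every transaction gets the same score fails the last two, which is usually a pipeline bug rather than a modeling problem.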

We sent out the full results CSV and awaited the good news.

A week later, our contact on the prospect side sent us back an analysis of our results, side-by-side with the competitors'.

There were two things that shocked me:

  1. We didn't come first.

  2. The vendor that did come first blew it out of the water.

Seriously, I don't recall the exact numbers, but they managed to catch an imaginary 96% of fraud, while blocking only 2% of the traffic.

In BNPL terms, this kind of performance can be described in one word - unbelievable.

And indeed, I didn't believe it.
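For readers outside fraud prevention, those two numbers map to two standard POC metrics: catch rate (the share of fraud you stop) and block rate (the share of all traffic you decline). Here's a minimal sketch of how they're computed; the function name and the data are my own illustration, not the POC's actual evaluation code.

```python
def poc_metrics(labels, blocked):
    """Compute catch rate and block rate for a fraud POC.

    labels:  1 if the transaction was truly fraud, else 0
    blocked: 1 if the vendor's strategy would decline it, else 0
    """
    total = len(labels)
    fraud = sum(labels)
    caught = sum(1 for y, b in zip(labels, blocked) if y == 1 and b == 1)
    catch_rate = caught / fraud if fraud else 0.0  # share of fraud stopped
    block_rate = sum(blocked) / total              # share of all traffic declined
    return catch_rate, block_rate
```

The tension between the two is the whole game: blocking more traffic trivially catches more fraud, so a 96% catch rate at only a 2% block rate is what made the claim so hard to believe.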

It's one thing to lose in a POC, but such results put all the years, effort and money invested in our product to shame.

There must be an explanation that will redeem my prestige!

Episode 2: The Suspects

I had to figure out how we lost so badly.

So I sat down with my leadership team and started to draw hypotheses on a whiteboard:

  1. They cheated - I don't know how, but surely these results are manufactured, right? Maybe they had some "help" from the inside, or maybe they somehow got the labels for the training set by mistake? Psychologically, it was the easiest explanation to accept.

  2. They are better - analytically, I had to consider the possibility that I was not the best in the world. Hard truth. Did they have a better algorithm?

  3. They have better data - we knew that this one particular vendor was already working with the leading BNPL in that market. Is it possible they just had much better "general" data they could use to train their engine?

In an ideal world, I would work towards proving each of these statements false, until I was left with only one. Then I'd know the objective truth.

But reality is different.

Episode 3: The Investigation

So what did we do?

The first thing was to go back to the drawing board and check if we blundered our machine learning algorithms.

Mind you, at that time we were already running three different algorithms, backed by years of experience, so it wasn't easy to convince me otherwise.

Could it be that a younger vendor, which listed only one data scientist on LinkedIn, had something so much better?

Still, my data science team ran some research and concluded we needed to try a new algorithm from the XGBoost/CatBoost family.

[Side note: At that time we ran a proprietary random decision tree algorithm, a regression algorithm and a deep-learning algorithm.]

Long story short, and fast-forwarding 12 months, I can say that the results were very inconclusive. The fourth algorithm didn't move the needle by much, and definitely not consistently.

So, is it the data angle?

Funnily enough, we had the competitor's biggest client as a prospect in our pipeline, so we could just ask!

What blew my mind was that their client was really unhappy with their current performance.

And even setting that unhappiness aside, the metrics they shared with us were light-years behind the performance that same vendor showed in our POC. It was actually so bad that they didn't even use the vendor's scores for real-time decisions.

Still, you have to presume that having access to such a relevant dataset would improve their results, right?

Hmmm, could it be that they just cheated?

I cannot answer that, obviously.

At the time, I was 80% certain that’s the explanation.

Yes, failing plays tricks on your mind.

But with time, I also realized that I chose this explanation because it got me off the hook.

If they cheated, I don’t need to look at my own shortcomings. I don’t need to admit failure. I can just scream at the unfairness of the world.

In hindsight, I’m a bit embarrassed that as a leader I actually suggested this option to my team.

Epilogue

So… what happened? How can I explain this defeat to stellar competition?

To this day, I cannot really say.

And that’s the reason why it haunts me still.

I didn’t solve it.

So… what did I learn from it? Why share this story?

Well, it taught me that even strong teams can sometimes lose to new ones.

It taught me to never feel like I had the perfect solution for everything.

It taught me it is ok to lose sometimes. It’s not the end of the world.

And isn’t fighting fraud a similar adversarial ordeal?

The opponent doesn’t have to be smarter or better equipped to win.

If they try enough times, they’ll get to score.

It is part of the game.

Failure and loss come with the territory of fighting fraud.

So there’s no need to look for excuses.

Look for ways to get better.

That’s all for this week.

See you next Saturday.

P.S. Feel like you're stuck with the same fraud challenges for months and need expert advice quickly? Book a consultation call with me to get clear, actionable recommendations that fit your budget. Guaranteed.

Book a Call Now >>


Enjoyed this and want to read more? Sign up to my newsletter to get fresh, practical insights weekly!
