Fake Dating App Pictures: what are brands doing to stop the next ‘Tinder Swindlers’?
March 31, 2023 | Image Moderation

Online dating is here to stay. In 2022, there were more than 366 million online dating service users, and by 2027 this number is poised to rise to 440 million. What’s more, some 40% of people in established relationships say they met their current partner online. With this many people going online for love, it’s no surprise that catfishing scams are on the rise, and unsuspecting, potentially vulnerable, customers are being lured in by fake dating app pictures and promises of companionship.
What is catfishing?
Popularized by the hit Netflix documentary, The Tinder Swindler, catfishing is a term for scams wherein people create fake dating profiles and lure victims into an online relationship. The scammer never intends to meet their mark in real life. Instead, they use false images and made-up stories to pass themselves off as a legitimate suitor. The ultimate goal is to establish enough trust that the scammer can access the victim’s money.
How common is catfishing?
Data shows that dating apps account for nearly 40% of all cases of catfishing, with the most common victims being women over 40. On free dating sites, studies show that 10% or more of all new accounts are fake. And a study by Scientific American found that more than half of online daters believe they were presented with false information.
The fact of the matter is, no brand in the online dating industry is immune from these threats. If you launch a platform, whatever its size, focus or audience, bad actors are going to find it. And because dating apps and websites tend to be used by people who might be feeling vulnerable, they are prime targets for scammers.
“The scammers will target the sites where people are looking to build real relationships rather than the hook-up sites,” explains Josh Buxbaum, co-founder of WebPurify. “This is because that’s where the vulnerable people are creating profiles. It might be a 70-year-old divorced man or a lonely woman sitting at home alone. People can be very vulnerable; they don’t have their guard up, and it’s these people who get scammed. They convince themselves that this story is real because the alternative is too painful.”
Aside from the moral consequences, there’s a business impact for letting this go unchecked. If people have been victimized on your site, those victims are going to tell other people. The trust that’s integral to your community will be lost, and that’s often followed closely by loss of users. So how do you keep customers safe on platforms where audiences can scale at speed? Platforms need effective and scalable content moderation of their text and images to ensure the safety of their users, as well as the reputation of their brand.
“We understand it’s a problem of scale,” says Josh. “You’ve built this platform, the service is growing so quickly, and you’re just not able to moderate it yourself anymore. Most companies can’t maintain that level of moderation 24/7. Your users are active around the clock, but when your team comes in at 8 am, they spend their day just catching up on what happened overnight. At WebPurify, we have a deep bench and the resources you need to keep your users safe 24/7.”
How WebPurify supports dating app platforms
Fake dating app pictures are just the tip of the iceberg in a catfishing scheme, but for WebPurify’s 24/7 human moderation teams, these images are a tool for quickly weeding out offenders. Scammers posing as men often use military photos, and it’s not uncommon for those presenting themselves as women to use images of adult film stars.
“We’re checking every time someone registers. We’re there at Step 1,” says Josh. “At that point we’re looking at the profile image and the text itself.” WebPurify’s AI tool is the first line of defense, estimating the likelihood that a profile image depicts a celebrity. Next, human moderators use reverse image search to quickly find out whether the photo is already hosted on multiple websites – and thus likely stolen.
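A minimal sketch of that two-stage check, with hypothetical celebrity_likelihood() and reverse_image_match_count() helpers standing in for the AI classifier and the reverse image search (this is not WebPurify’s actual API):

```python
# Illustrative two-stage vetting of a new profile photo at registration.
# Both helper functions are stand-ins, not real WebPurify endpoints.

def celebrity_likelihood(image_url: str) -> float:
    """Stand-in: the AI's 0.0-1.0 score that the photo depicts a celebrity."""
    return 0.05  # placeholder value for illustration

def reverse_image_match_count(image_url: str) -> int:
    """Stand-in: how many other websites already host this exact photo."""
    return 0  # placeholder value for illustration

def vet_profile_photo(image_url: str) -> str:
    """Return 'reject', 'human_review', or 'approve' for a new profile photo."""
    if celebrity_likelihood(image_url) > 0.90:
        return "reject"          # almost certainly a stolen celebrity photo
    if reverse_image_match_count(image_url) > 0:
        return "human_review"    # photo hosted elsewhere: likely stolen
    return "approve"

print(vet_profile_photo("https://example.com/new-user.jpg"))  # approve
```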
WebPurify also recommends that clients design (or re-design) their sign-up process to include a gold standard of new account vetting, wherein users are asked to submit two photos of themselves making a simple gesture – a peace sign or thumbs up, for example. This is especially effective because it allows human moderators to confirm the person making the account is who they claim to be.
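In code, that gold-standard vetting step might collect something like the following. The field names and gesture list are assumptions for illustration, not a real WebPurify schema:

```python
# Illustrative payload for gesture-based sign-up vetting.
from dataclasses import dataclass

ACCEPTED_GESTURES = {"peace_sign", "thumbs_up"}

@dataclass
class SignupVettingRequest:
    user_id: str
    profile_photo_url: str
    gesture: str                  # the gesture the user was asked to make
    gesture_photo_urls: tuple     # two photos of the user making that gesture

    def is_complete(self) -> bool:
        """True if a human moderator has everything needed to verify the user."""
        return self.gesture in ACCEPTED_GESTURES and len(self.gesture_photo_urls) == 2

req = SignupVettingRequest(
    user_id="u123",
    profile_photo_url="https://example.com/profile.jpg",
    gesture="peace_sign",
    gesture_photo_urls=("https://example.com/g1.jpg", "https://example.com/g2.jpg"),
)
print(req.is_complete())  # True -> route to the human moderation queue
```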
The AI tool’s job is also to determine whether the image is otherwise high risk. For instance, if a photo is scored as 98% likely to contain nudity, it gets eliminated automatically. If it scores 50–60%, it gets referred to a team of human moderators for a closer look.
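Those thresholds translate directly into a simple routing rule. In the sketch below, the 98% and 50–60% cutoffs come from the description above; routing the unspecified 60–98% band to human review as well is an assumption:

```python
# Illustrative threshold routing for the AI's nudity score.

def route_by_nudity_score(score: float) -> str:
    """Map the AI's 0.0-1.0 nudity score to a moderation action."""
    if score >= 0.98:
        return "reject"        # photo deemed 98%+ likely to contain nudity
    if score >= 0.50:
        # The article cites a 50-60% band for escalation; sending the whole
        # 50-98% range to human moderators is an assumption of this sketch.
        return "human_review"
    return "approve"

for s in (0.99, 0.55, 0.10):
    print(s, "->", route_by_nudity_score(s))
```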
It sounds like a complex process, but all of this happens within a few seconds (in the case of AI) to a few minutes (in the case of human review). Still, in an era where quick turnaround times are expected, new users want to see their profile go live immediately – not after moderation checks ensure it’s a valid account.
Fortunately, as part of the consultative approach WebPurify brings to the table, they’re able to help clients navigate just this sort of challenge and suggest tried-and-true fixes.
Case in point: while WebPurify’s AI completes its work in an average of 250 milliseconds per image, additional review by humans can sometimes (though not often) take up to 10 minutes. In the interim, and in order to preserve an optimal user experience, WebPurify recommends that its customers show a new user profile as published only to that profile’s creator while, behind the scenes, the review is finalized. Then – provided everything checks out – the profile is made visible to the wider community. It’s a UX sleight of hand that keeps everyone happy – but also safe.
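A minimal sketch of that visibility rule, with the statuses and field names assumed for illustration:

```python
# Illustrative "publish to the creator only" pattern: a new profile looks
# live to its owner while review completes in the background.

PENDING, APPROVED, REJECTED = "pending", "approved", "rejected"

profiles = {"u123": {"owner": "u123", "review_status": PENDING}}

def is_visible(profile_id: str, viewer_id: str) -> bool:
    """A pending profile is visible only to its creator; approved ones to all."""
    profile = profiles[profile_id]
    if profile["review_status"] == APPROVED:
        return True
    return profile["review_status"] == PENDING and viewer_id == profile["owner"]

print(is_visible("u123", "u123"))  # True  - the creator sees it immediately
print(is_visible("u123", "u999"))  # False - hidden until review passes

profiles["u123"]["review_status"] = APPROVED  # everything checks out
print(is_visible("u123", "u999"))  # True  - now visible to the community
```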
In those instances when scammers make it through with images that pass all of the tests, WebPurify has yet another failsafe. “With access to your CMS, we can check for multiple emails and IP addresses,” says Satya Das, Operations Manager at WebPurify. “These are big red flags.”
Satya, who has a long track record in moderating dating sites and managing safety protocols, now heads up WebPurify’s efforts to detect catfishing scams before they go too far. “We can detect where they log in from via their IP address and see who they’re chatting with,” Satya says. “For example, if you say you live in NY but all your comms are coming from an IP in Ghana or Nigeria – two common geographies for scammers – then we know this is suspect.”
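Those two signals – duplicate identifiers across accounts and a login location that contradicts the claimed one – are straightforward to check. A sketch, with geolocate_ip() standing in for a real IP-geolocation lookup:

```python
# Illustrative red-flag checks on registered accounts.
from collections import Counter

def geolocate_ip(ip: str) -> str:
    """Stand-in for a real IP-geolocation service; returns a country code."""
    return {"203.0.113.7": "GH", "198.51.100.2": "US"}.get(ip, "unknown")

accounts = [
    {"user": "a", "claimed_country": "US", "login_ip": "203.0.113.7"},
    {"user": "b", "claimed_country": "US", "login_ip": "203.0.113.7"},
    {"user": "c", "claimed_country": "US", "login_ip": "198.51.100.2"},
]

# Red flag 1: several accounts logging in from the same IP address.
ip_counts = Counter(acct["login_ip"] for acct in accounts)
shared_ips = {ip for ip, n in ip_counts.items() if n > 1}

for acct in accounts:
    flags = []
    if acct["login_ip"] in shared_ips:
        flags.append("IP shared across multiple accounts")
    # Red flag 2: says they live in the US but logs in from, say, Ghana.
    country = geolocate_ip(acct["login_ip"])
    if country not in (acct["claimed_country"], "unknown"):
        flags.append(f"claimed {acct['claimed_country']} but logs in from {country}")
    print(acct["user"], flags or ["no flags"])
```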
How WebPurify is stopping dating app scammers
What makes WebPurify unique is that it not only offers round-the-clock protection with both human moderators and AI, but it can provide scalable, customized processes suited to your brand and budget.
The process starts with what Josh calls their standard NSFW rules. “We came up with a list of what most people don’t want on their platform: nudity or partial nudity, hate or hate crimes, violence, offensive gestures or language, drugs, drug paraphernalia or drug use,” Josh explains. “We start with these, and then we can drill down further according to your risk tolerance and any concerns unique to your target audience or business model. And it’s an evolving, back-and-forth process.”
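One way to picture that evolving, client-specific rule set: start from the standard NSFW baseline Josh lists, then override individual categories to match a client’s risk tolerance. The category names below come from the quote above; the config structure itself is an assumption for illustration:

```python
# Illustrative customizable moderation rule set.

STANDARD_NSFW_RULES = {
    "nudity": "reject",
    "partial_nudity": "reject",
    "hate_or_hate_crimes": "reject",
    "violence": "reject",
    "offensive_gestures_or_language": "reject",
    "drugs_paraphernalia_or_use": "reject",
}

def client_ruleset(overrides: dict) -> dict:
    """Start from the standard baseline, then apply a client's own tolerances."""
    rules = dict(STANDARD_NSFW_RULES)
    rules.update(overrides)
    return rules

# e.g. a client that escalates partial nudity to human review instead
print(client_ruleset({"partial_nudity": "human_review"}))
```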
Integral to WebPurify’s approach in weeding out fake dating app pictures and catfish scammers is that every process is trainable and repeatable. An intensive quality control program runs daily: senior team members routinely spot-check 10% of the content accepted and rejected by moderators to confirm those decisions were accurate and fair.
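That daily spot-check amounts to sampling a tenth of the day’s decisions and measuring how often a senior reviewer agrees with them. A sketch with made-up data:

```python
# Illustrative daily quality-control spot-check: re-examine a random 10%
# of moderation decisions and compute the agreement rate.
import random

def spot_check(decisions: list, sample_rate: float = 0.10) -> float:
    """Sample decisions and return the share the senior reviewer agreed with."""
    sample_size = max(1, int(len(decisions) * sample_rate))
    sample = random.sample(decisions, sample_size)
    agreed = sum(d["senior_verdict"] == d["moderator_verdict"] for d in sample)
    return agreed / sample_size

# Made-up day: 95 decisions the senior reviewer upholds, 5 they overturn.
day = [{"moderator_verdict": "approve", "senior_verdict": "approve"}] * 95
day += [{"moderator_verdict": "approve", "senior_verdict": "reject"}] * 5
print(f"agreement rate: {spot_check(day):.0%}")
```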
“Really, we’re like consultants,” Josh says. “From a consultancy perspective, we share all the risks with our clients and then help them understand what those mean in real-world terms. We’ll ask how many images they receive a day, and then we can build a custom solution at the right scale and within budget.”
“If you get 5000 images a day, we can build a team to manage that, but we’re also looking down the line as it grows and thinking about how we can build the human team or the AI to grow with you.”