I have an idea. I can’t tell if it’s good or bad. Let me know what you guys think.
I think when someone posts "clone credit cards, HMU on my telegram" spam (as if we're all just sitting here waiting, thinking "gee, I wish someone would post me a criminal, scammy get-rich-quick scheme, I can't wait to have a felony on my record"), there should be a bot the mods can activate that will start sending messages to the person's telegram or whatever, pretending to be interested in cloned credit cards.
It wouldn't be that hard to make one that sends a little "probe" message to make sure it's a for-real scammer, and then, if they respond positively, absolutely floods them with thousands of interested responses. Make it more or less impossible for them to sort the genuine responses from the counter-spam, waste their time, and make it not worth their while to come and fuck up our community. And if they lose their temper, it can save some of the messages and post them to some sort of wall of victory.
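For what it's worth, the probe-then-flood logic really is simple. Here's a minimal sketch of the two pieces; everything in it (the probe phrases, the keyword check, the decoy templates) is made up for illustration, and a real version would sit behind an actual Telegram bot API client rather than these standalone functions:

```python
import random

# Hypothetical probe messages the bot would open with.
PROBES = ["hey, you still selling?", "you got cards available?"]

def looks_like_scammer(reply: str) -> bool:
    """Crude probe check: treat a reply as 'positive' if it talks
    about payment or availability. Keywords are illustrative only."""
    keywords = ("yes", "available", "price", "pay", "btc", "telegram")
    return any(k in reply.lower() for k in keywords)

def flood_responses(n: int) -> list[str]:
    """Generate n slightly-varied decoy 'interested buyer' messages,
    so genuine replies are impossible to pick out of the noise."""
    templates = [
        "hi, how much for {k}?",
        "interested, do you take {k}?",
        "can you do a deal on {k}?",
    ]
    fillers = ["5 cards", "bulk", "crypto", "gift cards"]
    return [
        random.choice(templates).format(k=random.choice(fillers))
        for _ in range(n)
    ]
```

The hard part isn't this logic, it's the delivery: Telegram rate-limits bots aggressively, so "thousands of responses" would actually need many accounts drip-feeding messages, which is where the idea gets legally and practically murky.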
What do people think?
If you've thought of it, they've thought of it. Plainly, there are already scam bots floating around, and most of the time, engaging with them makes it quite clear that they are not actual people, as long as you're paying attention. Their side is oftentimes completely automated: get paid, send info. The "lifelike" messages they send are canned and only vary slightly from message to message.
I swear, we'll implement bots to "combat" this stuff and it won't do anything, because it will largely just be bots talking to bots forever. There's already a nontrivial amount of internet bandwidth consumed by spam email that just gets thrown away as it arrives; now more and more resources are going to be poured into having bots talk at each other for centuries without getting anywhere.
If it wastes the scammer's time, wouldn't that be the point?
But if the scammer is using a bot too, then it's a wash, since their bot can have thousands of conversations at a time.
Spam bots should be taken down rather than engaged with. If there's a real scammer on the other end, then yes, absolutely, waste that person's time as much as you can, and as much as you like. People have made entire careers out of trolling them and I endorse it. Scammers are the worst kind of people, convincing honest people of a lie just to take their hard-earned money. This is sometimes true of normal sales too, caveat emptor and all that, but when the entire premise of the interaction is based on deception, then to me it crosses over into scam territory (looking at you, entire duct cleaning industry).
Wasting time making a bot to talk to spam bots is not very helpful. If you can identify that they are not properly filtering their inputs, I would invite you to use an SQL injection and talk to them about little Bobby Tables. But using a bot of your own to talk to spam bots will have such a negligible impact on the harm scammers do that it's basically not worth doing. Unless you can scale your bot up to the point of overwhelming the scammer's bot into dysfunction, it's not going to provide any real help to those currently being scammed by it. And I would approach scaling up to the point of malfunction with caution: you have no way of knowing where that limit is, and in the case of cloud systems, the bot's capabilities may scale far beyond what any attack against it could reasonably produce.
If they're using cloud resources and you can verify that, then there's a good chance you can hit them financially by pushing their bot to its limits, since cloud compute is not cheap. If you can generate enough traffic that the bot scales up significantly, then yeah, you may succeed in forcing the scammer who's paying for it to shut it down. The trick is doing so without incurring significant costs yourself. It's still likely, however, that the scammer will simply abandon it (and not pay the bill), then restart the whole thing with a new telegram/whatever chat account that you won't be able to track down in a reasonable timeframe.
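To put rough numbers on the "hit them financially" idea: here's a back-of-envelope sketch. Every figure in it (per-instance capacity, hourly instance price) is an assumption I picked for illustration, not real cloud pricing:

```python
def monthly_cost(req_per_sec: float,
                 reqs_per_instance: float = 50.0,       # assumed capacity per instance
                 cost_per_instance_hour: float = 0.05   # assumed cloud hourly price
                 ) -> float:
    """Estimate the scammer's monthly bill if their bot autoscales
    to absorb req_per_sec of sustained decoy traffic."""
    instances = max(1.0, req_per_sec / reqs_per_instance)
    hours_per_month = 24 * 30
    return instances * cost_per_instance_hour * hours_per_month

# At a sustained 5,000 decoy requests/sec you'd force ~100 instances:
# 100 * $0.05/hr * 720 hr = $3,600/month on the scammer's bill.
```

The same arithmetic cuts both ways, which is the catch mentioned above: generating 5,000 requests/sec continuously costs the attacker real money and bandwidth too, and the scammer can just walk away from the bill.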
So it's somewhat insane to try: it's easy for them to change the bot to dodge your usage attack, and difficult for you to keep track of them and which account they're using now.
We need to make it globally illegal to run these kinds of remote scam operations, and strongly prosecute anyone doing it. Their ill-gotten gains need to be confiscated and returned to their victims (as much as possible), and they should be imprisoned for a very long time.
As far as I'm concerned, this is the way. This is the only way. Legal repercussions with strong penalties, and strong enforcement behind them, are the only way to ensure we crush this trend permanently. Most countries, even those where we see a lot of scamming coming from, have laws against scams; but the enforcement is very spotty, and IMO the ramifications of being caught are far too light.
Right now, most civilians don't really have any good recourse beyond ignoring it. Scambaiters are pretty common and they're doing good work, even working with law enforcement to get these scammers behind bars, but even that falls far short of what's required to stop these things from continuing to happen. We need strong legislation agreed upon across international boundaries, with full task forces to find and prosecute these assholes; we don't have that, and so it continues.
Appreciate the in-depth explanation. Thank you.
Bots talking to bots is what alien explorers will find here. Except they won’t see the bots as the individuals. They’ll see the internet as one mind.