We know spam when we see it, but its content is separate from its form. Sure, porn and pill links in a signature are spam, but how does one differentiate them without human intervention? It requires abstract thinking, something we can't automate. How does one quantify behaviors? It's not all that difficult to differentiate between a bot and a person if you target the limitations of one over the other. Sure, that doesn't stop manual spamming, but that's never going to be as efficient as scripts are. Perhaps you can elaborate on what you're thinking of? If you have some thoughts on identifying spam efficiently enough to keep it out, whether it comes from a person or a bot, it would be good to know.
Techie-Micheal wrote:... Turing test on thinking? Perhaps the user is handicapped in a way that prevents them from answering. ...
Probably means they won't be able to follow a thread and make useful contributions either.
Albert Einstein couldn't tie his shoes, but that didn't stop him.
Yes, humans are generally able to distinguish between a bot and a human. But if you attempt to quantify those behaviors, it is much harder. By quantifying, I mean listing the behaviors and writing a program to understand them. That takes advanced artificial intelligence and heuristic scanning, both of which are very difficult to do. Why? Because you are trying to tell software what a human would and wouldn't do. Then, after all of that, you are still left with humans spamming, so you cannot stop 100% of spam.
As an example of the AI and heuristics needed, look at the spam email we receive. Ever notice the seemingly random paragraphs at the end? Sometimes called SpamAssassin busters or Bayes busters by the spammers, those texts look normal to a computer, but to the human eye it is obvious that the paragraphs do not fit with the body of the email. Can you give me an effective list of rules telling a computer what to look for? Probably not. Nobody that I know of has been able to effectively tell a program to spot those texts and judge whether they fit with the rest of the email. That is why it is difficult to quantify human behavior and bot behavior.
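To make the Bayes-buster trick concrete, here's a minimal sketch of why it works. The word probabilities below are invented for illustration (not trained values), and this toy scorer is far simpler than SpamAssassin's real Bayes subsystem, but it shows the mechanism: padding a spam payload with innocent-looking words drags the overall spam score down without changing the payload at all.

```python
from math import log

# Toy per-word probabilities: (P(word | spam), P(word | ham)).
# These values are made up purely to illustrate the effect.
probs = {
    "viagra":  (0.90, 0.01),
    "cheap":   (0.60, 0.10),
    "pills":   (0.80, 0.02),
    "the":     (0.50, 0.50),
    "weather": (0.05, 0.30),
    "garden":  (0.04, 0.25),
    "sunday":  (0.05, 0.28),
}

def spam_log_odds(words):
    """Naive-Bayes-style score: sum of log(P(w|spam)/P(w|ham))
    over the words we have statistics for. Higher = spammier."""
    total = 0.0
    for w in words:
        if w in probs:
            p_spam, p_ham = probs[w]
            total += log(p_spam / p_ham)
    return total

payload = "cheap viagra pills".split()
padded = payload + "the weather in the garden on sunday".split()

# The innocent padding lowers the score even though the spam
# payload is identical -- exactly what the "buster" text exploits.
assert spam_log_odds(padded) < spam_log_odds(payload)
```

The filter only sees word statistics; it has no way to notice that the weather paragraph "doesn't fit," which is the judgment a human makes instantly.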
Instead, for bulletin board spam, you want to go after the source. Who is the source? The user. The user can be a bot, a human, or both: a human registers, but the bot spams. And once the human is in, what's to stop them from manually spamming? Here the problem of trying to quantify what a human or a bot would and wouldn't do comes up again. What I'm proposing is a service that identifies the user, not their actions. Or rather, the community identifies the user, and the community bans the user.
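The idea above could be sketched roughly like this. Everything here is hypothetical (the class and method names are mine, not an existing service): member boards report a spammer's identifier (IP or email) to a shared list, and other boards consult that list at registration time. Requiring multiple independent reports before a ban is one possible safeguard against a single malicious or mistaken board.

```python
# Hypothetical sketch of a community-driven blocklist service.
# Names and the threshold rule are assumptions for illustration.

class CommunityBlocklist:
    def __init__(self):
        # Maps an identifier (IP address or email) to the number
        # of member boards that have reported it.
        self.reports = {}

    def report(self, identifier):
        """A member board reports a spammer's IP or email."""
        self.reports[identifier] = self.reports.get(identifier, 0) + 1

    def is_banned(self, identifier, threshold=2):
        """Ban only after several independent reports, so one
        board acting alone can't blacklist a user community-wide."""
        return self.reports.get(identifier, 0) >= threshold


blocklist = CommunityBlocklist()
blocklist.report("203.0.113.7")      # first board reports the IP
print(blocklist.is_banned("203.0.113.7"))  # still allowed: one report
blocklist.report("203.0.113.7")      # a second board reports it
print(blocklist.is_banned("203.0.113.7"))  # now banned community-wide
```

The point is that this shifts the hard problem away from quantifying behavior: no AI is needed, because the humans in each community do the identifying and the service just aggregates their judgments.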