What makes you think Google's image classifier would think that's a mountain?
Especially if this is all used for learning, enough people saying "that is clearly not a mountain" would reinforce that it's, in fact, probably not a mountain. Even if I got classified as a robot, I'm not sure I would think "oh, a system designed to classify images would think this not-a-mountain is a mountain", so I definitely wouldn't double down and keep marking it as a mountain. I'd, well, not, and assume the system is at least as good at classifying the images it chooses to use as I am.
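The "enough people correcting it" idea is essentially majority-vote label aggregation. A minimal sketch of that mechanism (the function name, vote counts, and thresholds here are made up for illustration; this is not Google's actual pipeline):

```python
from collections import Counter

def aggregate_label(current_label, user_answers, min_votes=10, threshold=0.7):
    """Keep the current label unless enough users agree on a different one."""
    if len(user_answers) < min_votes:
        return current_label
    answer, count = Counter(user_answers).most_common(1)[0]
    # Flip the label only when a clear supermajority disagrees.
    return answer if count / len(user_answers) >= threshold else current_label

# 8 of 10 users say the image shows trees, so the label flips.
votes = ["trees"] * 8 + ["mountain"] * 2
print(aggregate_label("mountain", votes))  # -> trees
```

Under a scheme like this, honest answers really would correct the label over time, which is the crux of the disagreement below: whether answering honestly is rewarded in the moment.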
> "What makes you think Google's image classifier would think that's a mountain?"
Because every single time it asks me to classify mountains, it rejects my answers if I don't click on trees on the horizon (and often trees on the horizon are the only "mountains" presented), and every single time it accepts the answer that such trees are mountains. I've gotten the mountains challenge dozens of times, and the results are very consistent: if there is a group of trees on the horizon, it is asserted to be a mountain.
> "enough people saying "that is clearly not a mountain" would reinforce that it's, in fact, probably not a mountain."
Totally irrelevant because if I am trying to get through a google captcha, it's because that captcha is standing in the way of me doing something. My interest is in passing the captcha, not correcting Google's shitty image classifier. So I have absolutely no incentive to make my life harder by insisting on correct answers, and every incentive to tell Google what they want to hear.
> "So I have absolutely no incentive to make my life harder by insisting on correct answers, and every incentive to tell Google what they want to hear."
I guess this is where the misunderstanding is. You don't think Google wants to hear the correct answer?
Trying to guess at what the daily/monthly flavor of "correct" is seems like it'd do more harm than good, resulting in some kind of nondeterministic guessing game of "well, trees on the horizon are probably assumed to be a mountain" that never settles on actually-correct answers (and, I'd wager, is often more inconvenient to the user than just answering correctly would be, because now there's a layer of indirection on what they think a system thinks of an image, rather than just what they think of that image).
If everyone just answered "no, that's trees" instead of a hand-wavy "I think you think it's a mountain", I feel like this captcha would be significantly easier for us humans (because we could actually give real answers), as well as less inconvenient for people who just want to pass on through and get on with whatever they were doing before a site wanted to verify they weren't a bot (because they can just, well, identify images instead of playing a game of "what does the machine think?").
> "You don't think Google wants to hear the correct answer?"
They may want it but they don't reward it. I don't care what sort of answer they want, I only care what sort of answer they accept. I'm not going to donate my time to these bastards by doing anything more than what's necessary to pass their captcha.