There’s no need to overshare on social media now that OpenAI’s new chatbots can pinpoint your location from the tiniest details in images

The ultimate GeoGuessr cheat code or just a privacy nightmare?

Word to the wise: be careful about the images you post on social media. OpenAI’s latest AI models, released last week, have sparked a new viral craze for bot-powered geoguessing—in other words, using AI to deduce where a photo was taken. Not to put too fine a point on it, but that could be a doxxing and privacy nightmare.

OpenAI’s new o3 and o4-mini models are both capable of image “reasoning”. In broad terms, that means comprehensive image analysis skills. The models can crop and manipulate images, zoom in, read text, the works. Add to that agentic web search abilities, and you theoretically have a killer image-location tool, foreboding pun somewhat intended.
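In practice, pointing one of these models at a photo is just a matter of attaching an image to a chat request. A minimal sketch of what that payload looks like, using OpenAI's documented base64 vision-input format (the model name and the prompt wording here are placeholders based on this article, not a tested recipe):

```python
import base64


def build_geoguess_request(image_bytes: bytes, model: str = "o3") -> dict:
    """Build a chat-completions payload asking a model to locate a photo.

    The payload shape follows OpenAI's standard vision-input format;
    the model name "o3" is an assumption taken from the article.
    """
    # Images are sent inline as a base64 data URL.
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Where was this photo taken? Explain the clues."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                ],
            }
        ],
    }


# With the `openai` package installed and an API key configured, this would
# be sent via client.chat.completions.create(**build_geoguess_request(...)).
```

The point is how low the barrier is: no special tooling, just a photo and a one-line prompt.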

According to OpenAI itself, “for the first time, these models can integrate images directly into their chain of thought. They don’t just see an image—they think with it. This unlocks a new class of problem-solving that blends visual and textual reasoning.”

That’s exactly what early users of the o3 model in particular have found (via TechCrunch). Numerous posts are popping up across social media showing users challenging the new ChatGPT models to play GeoGuessr with uploaded images.

A close-cropped snap of a few books on a shelf? The library in question at the University of Melbourne, correctly identified. Yikes. Another X post shows the model spotting cars with steering wheels on the left that were nonetheless driving on the left-hand side of the road, narrowing the options to the few countries where driving on the left is required but left-hand-drive cars are common, including the eventual correct guess of Suriname in South America.

The models are also capable of laying out their full reasoning, including the clues they spotted and how they were interpreted. That said, research published earlier this year suggests that the explanations these models give for how they arrive at answers don’t always reflect the AI’s actual cognitive processes, if that’s what they can be called.

When researchers at Anthropic “traced” the internal steps used by its own Claude model to complete math tasks, they found stark differences from the method the model claimed it had used when queried.

Whatever the case, the privacy concerns are clear enough. Simply point ChatGPT at someone’s social media feed and ask it to triangulate a location. Heck, it’s not hard to imagine that a prolific social media user’s posts might be enough to allow an AI model to accurately predict future movements and locations.

All told, it’s yet another reason to be circumspect about exactly how much you spam on social media, especially when it comes to fully public posts. On that note, TechCrunch put that very concern to OpenAI.

“OpenAI o3 and o4-mini bring visual reasoning to ChatGPT, making it more helpful in areas like accessibility, research, or identifying locations in emergency response. We’ve worked to train our models to refuse requests for private or sensitive information, added safeguards intended to prohibit the model from identifying private individuals in images, and actively monitor for and take action against abuse of our usage policies on privacy,” was the response, which at least shows the AI outfit is aware of the problem, even if it’s yet to be demonstrated that these new models would refuse to provide geolocations for any given image or collection of images.
