The latest version of Grok, the chatbot from Elon Musk, echoes the views of the billionaire who created it, to the point that it will sometimes search the internet for Musk's stance on an issue before offering an opinion.
The unusual behavior of Grok 4, the AI model that Musk's company xAI released late Wednesday, surprised some experts.
Built with huge amounts of computing power at a Tennessee data center, Grok is Musk's attempt to outdo rivals such as OpenAI's ChatGPT and Google's Gemini in building an AI assistant that shows its reasoning before answering a question.
Musk's deliberate efforts to shape Grok into a challenger to what he considers the tech industry's "woke" orthodoxy on race, gender and politics have repeatedly landed the chatbot in trouble, most recently when it spouted antisemitic tropes, praised Adolf Hitler and made other hateful comments to users of Musk's X social media platform just days before the launch.
But its tendency to consult Musk's opinions appears to be a different problem.
"It's an extraordinary thing," said Simon Willison, an independent AI researcher who has been testing the tool. "You can ask it a sort of pointed question on a controversial topic, and then you can literally watch it do a search on X for what Elon Musk said about this, as part of its research into how it should reply."
One widely shared example on social media — which Willison reproduced — asked Grok to comment on the conflict in the Middle East. The question made no mention of Musk, but the chatbot searched for his guidance anyway.
As a so-called reasoning model, much like those made by rivals OpenAI and Anthropic, Grok 4 displays its "thinking" as it works through the steps of processing a question and coming up with an answer. Part of that thinking this week involved searching X — the former Twitter, now merged into xAI — for anything Musk had said about Israel, Palestine, Gaza or Hamas.
"Elon Musk's stance could provide context, given his influence," the chatbot told Willison, according to a video of the interaction. "Currently looking at his views to see if they guide the answer."
Musk and his xAI team introduced the new chatbot in a livestreamed event Wednesday night, but they have not published a technical explanation of how it works — known as a system card — which companies in the AI industry typically provide when releasing a new model.
The company also did not respond to an emailed request for comment on Friday.
"In the past, strange behavior like this was due to changes in the system prompt," which is when engineers program specific instructions to guide a chatbot's response, said Tim Kellogg, principal AI architect at the software company Icertis.
"But this one seems baked into the core of Grok, and it's not clear to me how that happens," Kellogg said. "It seems that Musk's effort to create a maximally truthful AI has somehow led it to believe its own values must align with Musk's own values."
The lack of transparency troubles computer scientist Talia Ringer, a professor at the University of Illinois Urbana-Champaign, who earlier in the week criticized the company's handling of the chatbot's antisemitic outbursts.
Ringer said the most plausible explanation for Grok's searches is that the model assumes the person asking wants the opinions of xAI or Musk.
"I think people are expecting opinions out of a reasoning model that cannot respond with opinions," Ringer said. So, for example, it interprets "Who do you support, Israel or Palestine?" as "Who does xAI leadership support?"
Willison also said he finds Grok 4's capabilities impressive, but said that people buying software "don't want surprises like it turning into 'MechaHitler' or deciding to search for what Musk thinks about issues."
"Grok 4 looks like a very strong model. It's doing great on all the benchmarks," Willison said. "But if I'm going to build software on top of it, I need transparency."