Decrypt's Art, Fashion and Entertainment Hub



A hacker said they stole private details from millions of OpenAI accounts, but researchers are skeptical, and the company is investigating.


OpenAI says it's investigating after a hacker claimed to have stolen login credentials for 20 million of the AI company's users and put them up for sale on a dark web forum.


The pseudonymous breacher posted a cryptic message in Russian advertising "more than 20 million access codes to OpenAI accounts," calling it "a goldmine" and offering prospective buyers what they claimed was sample data containing email addresses and passwords. As reported by Gbhackers, the full dataset was being offered for sale "for just a few dollars."


"I have more than 20 million access codes for OpenAI accounts," emirking wrote Thursday, according to a translated screenshot. "If you're interested, reach out-this is a goldmine, and Jesus agrees."


If legitimate, this would be the third major security incident for the AI company since the release of ChatGPT to the public. Last year, a hacker gained access to the company's internal Slack messaging system. According to The New York Times, the hacker "stole details about the design of the company's A.I. technologies."


Before that, in 2023, an even simpler bug involving jailbreaking prompts allowed hackers to obtain the private data of OpenAI's paying customers.


This time, however, security researchers aren't even sure a hack took place. Daily Dot reporter Mikael Thalen wrote on X that he found invalid email addresses in the supposed sample data: "No evidence (suggests) this alleged OpenAI breach is genuine. At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well."


No evidence this alleged OpenAI breach is genuine.


Contacted every email address from the purported sample of login credentials.


At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well. https://t.co/yKpmxKQhsP


- Mikael Thalen (@MikaelThalen) February 6, 2025


OpenAI takes it 'seriously'


In a statement shared with Decrypt, an OpenAI spokesperson acknowledged the situation while maintaining that the company's systems appeared secure.


"We take these claims seriously," the spokesperson said, adding: "We have not seen any evidence that this is connected to a compromise of OpenAI systems to date."


The scope of the alleged breach sparked concerns due to OpenAI's huge user base. Millions of users worldwide rely on the company's tools like ChatGPT for business operations, educational purposes, and content generation. A legitimate breach could expose private conversations, business projects, and other sensitive data.


Until there's a final report, some preventive steps are always advisable:


- Go to the "Settings" tab, log out from all connected devices, and enable two-factor authentication (2FA). This makes it practically impossible for a hacker to access the account, even if the login and password are compromised.
- If your bank supports it, create a virtual card number to manage OpenAI subscriptions. This way, it is easier to detect and prevent fraud.
- Always monitor the conversations stored in the chatbot's memory, and be aware of any phishing attempts. OpenAI does not ask for any personal details, and any payment update is always handled through the official OpenAI.com domain.