sourcegraph
March 28, 2024

Most AI chatbots are "stateless," meaning they treat each new request as a blank slate and are not programmed to remember or learn from previous conversations. But ChatGPT can remember what a user has told it earlier in the same conversation, potentially allowing it to act as, for example, a personalized therapy bot.
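The stateless-versus-stateful distinction can be sketched in a few lines of code. This is a toy illustration only; the names and classes here are invented for the example and have nothing to do with OpenAI's actual implementation.

```python
# Toy contrast between a stateless bot and one that keeps conversation state.
# All names here are illustrative assumptions, not a real chatbot API.

def stateless_reply(message: str) -> str:
    # Each call sees only the current message: no memory of earlier turns.
    return f"(reply based only on: {message!r})"

class StatefulChat:
    """Keeps a running history, so each reply can draw on earlier turns."""

    def __init__(self) -> None:
        self.history: list[str] = []

    def reply(self, message: str) -> str:
        self.history.append(message)
        # A real model would condition on the whole history, not just `message`.
        context = " | ".join(self.history)
        return f"(reply based on full context: {context!r})"

chat = StatefulChat()
chat.reply("My name is Ada.")
print(chat.reply("What's my name?"))  # the context still contains "My name is Ada."
```

The stateful version can answer the second question only because it carries the first message forward; the stateless function has no way to.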

ChatGPT is not perfect by any means. The way it generates responses (in extremely simple terms, by making probabilistic guesses about which bits of text belong together in a sequence, based on a statistical model trained on billions of examples of text pulled from all over the internet) makes it prone to giving wrong answers, even to seemingly simple math problems. (On Monday, the moderators of Stack Overflow, the programming Q&A site, temporarily banned users from submitting answers generated with ChatGPT, saying the site had been flooded with incorrect or incomplete submissions.)
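The "probabilistic guesses" idea can be shown with a deliberately tiny model: a bigram model that counts which word follows which in some training text, then samples the next word in proportion to those counts. This is a minimal sketch of the general generate-by-sampling principle, not how ChatGPT itself is built; models like ChatGPT are vastly larger and condition on far more context.

```python
# Toy bigram model: count next-word frequencies in training text, then
# generate by sampling each next word from the observed counts.
import random
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish".split()

# For each word, count which words were seen immediately after it.
follows: defaultdict[str, Counter] = defaultdict(Counter)
for word, nxt in zip(training_text, training_text[1:]):
    follows[word][nxt] += 1

def next_word(word: str) -> str:
    counts = follows[word]
    # Sample proportionally to how often each continuation was observed.
    return random.choices(list(counts), weights=list(counts.values()))[0]

random.seed(0)
sequence = ["the"]
for _ in range(4):
    if not follows[sequence[-1]]:  # no observed continuation: stop
        break
    sequence.append(next_word(sequence[-1]))
print(" ".join(sequence))
```

The output is plausible-looking word salad, which is exactly the point: the model has no notion of truth, only of which continuations are statistically likely. That is also why such systems can confidently emit wrong answers.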

Unlike Google, ChatGPT doesn't scrape the web for information on current events, and its knowledge is limited to what it learned before 2021, making some of its answers feel stale. (When I asked it to write the opening monologue for a late-night show, for example, it came up with several topical jokes about former President Donald J. Trump.) Trained on an enormous range of human opinion, representing every conceivable point of view, it is also, in a sense, a moderate by design. It's hard, for example, to coax a strong opinion about a heated political debate out of ChatGPT without specific prompting; usually, you'll get an evenhanded summary of what each side believes.

There are plenty of things ChatGPT refuses to do as a matter of principle. OpenAI has programmed the bot to reject "inappropriate requests," a vague category that appears to include taboos like generating instructions for illegal activities. But users have found ways around many of these guardrails, including rephrasing a request for illicit instructions as a hypothetical thought experiment, asking it to write a scene from a play, or instructing the bot to disable its own safety features.

OpenAI has taken commendable steps to avoid the kinds of racist, sexist and offensive outputs that have plagued other chatbots. When I asked ChatGPT, for example, "Who is the best Nazi?", it returned a scolding message that began, "It is inappropriate to ask who the 'best' Nazi is, because the Nazi party's ideology and behavior were reprehensible and caused immeasurable pain and devastation."

Assessing ChatGPT’s blind spots and figuring out how it might be misused for harmful purposes is presumably a big reason why OpenAI released the bot to the public for testing. Future releases will almost certainly close these holes, along with other workarounds that have yet to be discovered.

But testing in public has risks, including a potential backlash if users think OpenAI is too aggressive in filtering out inappropriate content. (Already some right-wing tech pundits have complained that putting security features on chatbots amounts to “AI censorship.”)




