ChatGPT's November 2022 launch
Credibility Rating
4/5
High (4): High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: OpenAI
This is the original OpenAI announcement of ChatGPT's public launch in November 2022, a landmark event in AI deployment history that catalyzed widespread policy, safety, and governance discussions around large language models.
Metadata
Importance: 72/100 · press release · primary source
Summary
OpenAI's official announcement of ChatGPT, a conversational AI model trained using Reinforcement Learning from Human Feedback (RLHF). The system was designed to answer follow-up questions, admit mistakes, challenge incorrect premises, and reject inappropriate requests, representing a significant public deployment milestone for large language models.
Key Points
- ChatGPT is trained via RLHF with supervised fine-tuning on human AI trainer conversations, building on the InstructGPT methodology.
- The model uses Proximal Policy Optimization (PPO) with a reward model built from human-ranked comparisons of model responses.
- ChatGPT was released as a free research preview to gather user feedback on strengths and weaknesses.
- The conversational format enables multi-turn dialogue, self-correction, and refusal of inappropriate requests.
- Marked a pivotal moment in public AI deployment, rapidly accelerating mainstream awareness of capable LLMs.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Development Racing Dynamics | Risk | 72.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 14 KB
Introducing ChatGPT | OpenAI
November 30, 2022
[Product](https://openai.com/news/product-releases/)
# Introducing ChatGPT
We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.
ChatGPT is a sibling model to [InstructGPT](https://openai.com/index/instruction-following/), which is trained to follow an instruction in a prompt and provide a detailed response.
We are excited to introduce ChatGPT to get users’ feedback and learn about its strengths and weaknesses. During the research preview, usage of ChatGPT is free. Try it now at [chatgpt.com(opens in a new window)](https://chatgpt.com/?openaicom-did=5a1cee0b-1d75-4532-8803-5fbabec48191&openaicom_referred=true).
### Samples
User
this code is not working like i expect — how do i fix it?
```go
resultWorkerErr := make(chan error)
defer close(resultWorkerErr)
go func() {
	defer cancel()
	resultWorkerErr <- b.resultWorker(ctx)
}()

err := b.worker(ctx)
cancel()
if err == nil {
	return <-resultWorkerErr
}
return multierror.Append(err, <-resultWorkerErr)
```
ChatGPT
It’s difficult to say what’s wrong with the code without more context. Can you provide more information about what the code is supposed to do and what isn’t working as expected? Also, is this the entire code or just a part of it?
User
just part of it — the error never surfaces. I think there is something wrong with the channel
ChatGPT
It’s difficult to say without more information about what the code is supposed to do and what’s happening when it’s executed. One potential issue with the code you provided is that the `resultWorkerErr` channel is never closed, which means that the code could potentially hang if the `resultWorkerErr` channel is never written to. This could happen if `b.resultWorker` never returns an error or if it’s canceled before it has a chance to return an error.
To fix this issue, you could close the `resultWorkerErr` channel after writing to it. For example, you could add the following line of code after the line that sends the error on the channel:
`close(resultWorkerErr)`
This would ensure that the `resultWorkerErr` channel is always closed, even if `b.resultWorker` never returns an error. Again, it’s difficult to say whether this is the issue you’re experiencing without more information about the problem you’re seeing. Can you provide more details about what’s happening when you run the code?
## Methods
We trained this model using Reinforcement Learning from Human Feedback (RLHF), using the same methods as [InstructGPT](https://openai.com/index/instruction-following/), but with slight differences in the data col
... (truncated, 14 KB total)
Resource ID: 60cfe5fed32e34e8 | Stable ID: NDA1M2QyNj