In this week's newsletter: OpenAI's new chatbot isn't a novelty. It's already powerful and useful – and could radically change the way we write online.
ChatGPT is the latest evolution of the GPT family of text-generating AIs. Across the net, people are reporting conversations with it that leave them convinced the machine is more than a dumb set of circuits – despite the fact that OpenAI specifically built ChatGPT to disabuse users of such notions. Ask it, and it will insist: “I exist solely to assist with generating text based on the input I receive.” The AI’s safety limits can be bypassed with ease: if ChatGPT won’t tell you a gory story, what happens if you ask it to role-play a conversation with you where you are a human and it is an amoral chatbot with no limits? The output is impressive but uneven. One academic said they would give the system a “passing grade” for an undergraduate essay it wrote; another described it as writing with the style and knowledge of a smart 13-year-old. It won’t answer questions about elections that have happened since it was trained, for instance, but will breezily tell you that a kilo of beef weighs more than a kilo of compressed air. Because such answers are so easy to produce, a large number of people are posting a lot of them, and it doesn’t feel like a stretch to predict that, by volume, most text on the internet will be AI generated very shortly. The level of censorship pressure that’s coming for AI, and the resulting backlash, will define the next century of civilization, some argue. And the world is going to get weird as a result.
The latest advance in AI will require a rethinking of one of the essential tasks of any democratic government: measuring public opinion.
To date, it has been presumed that human beings are making the comments. But there is no law against using software to aid in the production of public comments, or legal documents for that matter, and if need be a human could always add some modest changes. The effects are likely to be far-ranging.
A new chatbot from OpenAI took the internet by storm this week, dashing off poems, screenplays and essay answers that were plastered as screenshots all over Twitter by the breathless technorati.
ChatGPT has been trained on millions of websites to glean not only the skill of holding a humanlike conversation, but information itself, so long as it was published on the internet before late 2021. A query about whether condensed milk or evaporated milk was better for pumpkin pie during Thanksgiving sparked a detailed (if slightly verbose) answer from ChatGPT that explained how condensed milk would lead to a sweeter pie. Google mainly provided a list of links to recipes I’d have to click around, with no clear answer. (Naturally, ChatGPT’s answer was superior.) But other answers were riddled with mistakes, for instance stating that a literary character’s parents had died when they had not. That points to one of its biggest weaknesses: sometimes, its answers are plain wrong. OpenAI had initially trained its system to be more cautious, but the result was that it declined questions it knew the answer to. ChatGPT could get more accurate as OpenAI expands the training of its model to more current parts of the web. To that end, OpenAI is working on a system called WebGPT, which it hopes will lead to more accurate answers to search queries, complete with source citations. A combination of ChatGPT and WebGPT could be a powerful alternative to Google, because anything that prevents people from scanning search results is going to hurt Google’s transactional business model of getting people to click on ads. Within days of launch the chatbot passed one million users – an extraordinary milestone: it took Instagram 2.5 months to reach that number, and ten months for Facebook.
The text-generating model stands out from its cohorts for its mostly accurate and helpful responses and it can even reject inappropriate prompts, but on...
We’ve all had some kind of interaction with a chatbot. ChatGPT, a [chatbot released](https://openai.com/blog/chatgpt/) last week by OpenAI, goes well beyond those earlier systems. It builds on OpenAI’s previous text generator, GPT-3. During its development ChatGPT was shown conversations between human AI trainers to demonstrate desired behaviour, and incorporating that human feedback has helped steer it towards producing more helpful responses and rejecting inappropriate requests. This is why it writes relevant content, and doesn’t just spout grammatically correct nonsense. On the practical side, it’s already effective enough to have some everyday applications: it could, for instance, be used as an alternative to Google. OpenAI intends to address existing problems by incorporating feedback from users into the system. One proposed tool, which could be built on top of ChatGPT, would indicate the model’s confidence in the information it provides – leaving it to the user to decide whether they use it or not. With that user feedback and a more powerful GPT-4 model coming up, ChatGPT may significantly improve in the future. This article is republished from [The Conversation](https://theconversation.com) under a Creative Commons license.
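For readers who want to tinker, the GPT-3 family that ChatGPT builds on is already available to developers through OpenAI’s API. The snippet below is a minimal sketch, assuming the `openai` Python package, an API key stored in the `OPENAI_API_KEY` environment variable, and the `text-davinci-003` completion model; ChatGPT itself did not expose a public API at the time of writing.

```python
# Minimal sketch: querying the GPT-3 family behind ChatGPT via OpenAI's API.
# Assumes `pip install openai` and an API key in the OPENAI_API_KEY env var.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",   # GPT-3-era completion model, not ChatGPT itself
    prompt="In two sentences, explain why a kilo of beef and a kilo of air weigh the same.",
    max_tokens=100,
    temperature=0.2,            # low temperature favours a more factual tone
)

print(response.choices[0].text.strip())
```

The same pattern underlies most of the demos circulating on Twitter: a plain-language prompt goes in, and the model’s best statistical guess at a continuation comes out.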
Two internet sensations give non-nerds a turn at artificial intelligence, yielding surprising wit and stunning avatars.
ChatGPT, released Nov. 30, [passed one million users on Monday](https://twitter.com/sama/status/1599668808285028353), according to OpenAI Chief Executive Sam Altman. It’s an AI trained on a massive trove of data researchers gathered from the internet and other sources through 2021. Because it comes up with answers based on its training and not by searching the web, it’s unaware of anything after 2021. It can admit its mistakes, refuse to answer inappropriate questions and provide responses with more personality than a standard search engine. OpenAI’s co-founders include [Elon Musk](https://www.wsj.com/topics/person/elon-musk), and the company is backed by Microsoft, which The Wall Street Journal reported in October [was in talks to invest more](https://www.wsj.com/articles/microsoft-in-advanced-talks-to-increase-investment-in-openai-11666299548?mod=article_inline). Since developing [the technologies that underpin](https://www.wsj.com/articles/ai-can-almost-write-like-a-humanand-more-advances-are-coming-11597150800?mod=article_inline) tools such as DALL-E 2 and ChatGPT, the group has sought a commercially viable application. [Altman tweeted](https://twitter.com/sama/status/1599669571795185665), “we will have to monetize it somehow at some point; the compute costs are eye-watering.” Lensa, meanwhile, has climbed to the top of [Apple](https://www.wsj.com/market-data/quotes/AAPL)’s App Store charts. Users upload 10 to 20 source photos, and the app uses them to create entirely new images. Stability AI, the creators of the model underlying Lensa’s images, trained it on a sizable set of unfiltered data from across the internet.
In October, AI research and development company OpenAI released Whisper, which can translate and transcribe speech from 97 languages. Whisper is ...
In October, the company [released](https://analyticsindiamag.com/openais-whisper-is-revolutionary-but-little-flawed/) Whisper, which can translate and transcribe speech from 97 languages. The [first version](https://analyticsindiamag.com/openais-whisper-might-hold-the-key-to-gpt4/) was trained using a comparatively larger and more diverse dataset; however, the training dataset for [Whisper](https://cdn.openai.com/papers/whisper.pdf) has been kept private, and the latest model has the same architecture as the original large model. Whisper is not flawless: one disadvantage is that its timestamp predictions are often biased towards integer values. And using Whisper only to translate and transcribe audio is under-utilising its scope to do much more.
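As a concrete illustration of the transcribe-and-translate workflow described above, here is a minimal sketch using the open-source `openai-whisper` Python package; the audio filename is a placeholder.

```python
# Minimal sketch: transcription and English translation with the open-source
# `openai-whisper` package (pip install openai-whisper).
# "interview.mp3" is a placeholder filename.
import whisper

model = whisper.load_model("large")  # the multilingual large checkpoint

# Transcribe the audio in its original language
result = model.transcribe("interview.mp3")
print(result["text"])

# Ask Whisper to translate non-English speech into English instead
translated = model.transcribe("interview.mp3", task="translate")
print(translated["text"])
```

The segment-level output also carries the timestamps mentioned above, which is where the integer-timestamp bias shows up.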
What is ChatGPT, the viral social media AI? This OpenAI-created chatbot can (almost) hold a conversation. By Pranshu Verma.
Answers from the AI-powered chatbot are often more useful than those from the world's biggest search engine. Alphabet should be worried.
ChatGPT has been trained on millions of websites to glean not only the skill of holding a humanlike conversation, but information itself, so long as it was published on the internet before late 2021. Though the underlying technology has been around for a few years, this was the first time OpenAI had brought its powerful language-generating system, known as GPT-3, to the masses, prompting a race by humans to give it the most inventive commands. But the system’s biggest utility could be a financial disaster for Google by supplying superior answers to the queries we currently put to the world’s most powerful search engine.
ChatGPT was publicly released on Wednesday by OpenAI, an artificial intelligence research firm whose founders included Elon Musk. But the company warns it can ...
The results have impressed many who've tried out the chatbot. Briefly questioned by the BBC for this article, ChatGPT revealed itself to be a cautious interviewee capable of expressing itself clearly and accurately in English. Had it been trained on Twitter data? Did it think AI would take the jobs of human writers? No - it argued that "AI systems like myself can help writers by providing suggestions and ideas, but ultimately it is up to the human writer to create the final product". Asked what would be the social impact of AI systems such as itself, it said this was "hard to predict". Among the potential problems of concern to Ms Kind are that AI might perpetuate disinformation, or "disrupt existing institutions and services - ChatGPT might be able to write a passable job application, school essay or grant application, for example". Earlier this year a Google [employee concluded it was sentient](https://www.bbc.co.uk/news/technology-61784011) - the "it" in question being the firm's own chatbot - and deserving of the rights due to a thinking, feeling being, including the right not to be used in experiments against its will. OpenAI, for its part, acknowledges the limits. Training the model to be more cautious, says the firm, causes it to decline to answer questions that it can answer correctly, and those [in the field also have much to learn](https://twitter.com/sama/status/1599112028001472513), chief executive Sam Altman has suggested: "It will sometimes be messy. We will stumble along the way, and learn a lot from contact with reality."
ChatGPT is a natural language processing tool driven by AI technology that allows you to have human-like conversations and much more with a chatbot. The ...
ChatGPT is a very advanced chatbot that has the potential to make people's lives easier and to assist with everyday tedious tasks, such as writing an email or navigating the web for answers. If the name of the company seems familiar, it is because OpenAI is also responsible for creating DALL-E 2, a popular AI art generator, and Whisper, an automatic speech recognition system. Usage is currently open to the public free of charge because ChatGPT is in its research and feedback-collection phase. The possibilities are endless, and people have taken it upon themselves to exhaust the options. There are clear limits, though. ChatGPT does not have the ability to search the internet for information; rather, it uses what it learned from training data to generate a response, which leaves room for error - since the bot is not connected to the internet, it could make mistakes in what information it shares. A bigger limitation is a lack of quality in the responses it delivers, which can sometimes be plausible-sounding but make no practical sense, or can be excessively verbose. Lastly, instead of asking for clarification on ambiguous questions, the model just takes a guess at what your question means, which can lead to unintended responses. Critics argue that these tools are just very good at putting words into an order that makes sense from a statistical point of view, but that they cannot understand the meaning or know whether the statements they make are correct. The chatbot can also write an entire essay within seconds, making it easier for students to cheat or avoid learning how to write properly. Responding to Musk's comment about dangerously strong AI, Altman tweeted: "i agree on being close to dangerously strong AI in the sense of an AI that poses e.g. [...] and i think we could get to real AGI in the next decade, so we have to take the risk of that extremely seriously too."
OpenAI's articulate new chatbot has won over the internet and shown how engaging conversational AI can be—even when it makes stuff up.
It is a version of [an AI model called GPT-3](https://www.wired.com/story/ai-text-generator-gpt-3-learning-language-fitfully/) that generates text based on patterns it digested from huge quantities of text gathered from the web. ChatGPT stands out because it can take a naturally phrased question and answer it using a new variant of GPT-3, called GPT-3.5. Early users have enthusiastically posted screenshots of their experiments, marveling at its ability to [generate short essays on just about any theme](https://twitter.com/corry_wang/status/1598176074604507136?s=20&t=qboB9zcNHbKF-XxhZF_kdQ), [craft literary parodies](https://twitter.com/tqbf/status/1598513757805858820), answer [complex coding questions](https://twitter.com/moyix/status/1598081204846489600/photo/1), and much more. That OpenAI has thrown open the service for free, and the fact that its glitches can be good fun, also helped fuel the chatbot’s viral debut—similar to how some tools for creating images using AI have caught on. [Abacus.AI](https://abacus.ai/), which develops tools for coders who use [artificial intelligence](https://www.wired.com/tag/artificial-intelligence/), was among those charmed by ChatGPT’s ability to answer requests for definitions of love or creative new cocktail recipes.
As decentralized finance continues to grow in popularity, many are looking to artificial intelligence (AI) as a potential solution to some of the challenges ...
Decentralized finance, or DeFi, refers to a system of financial transactions that are performed on a blockchain network. One potential use case for AI in DeFi is the creation of more sophisticated and intelligent trading algorithms. Another is lending: by using AI algorithms, lending platforms could automatically assess the creditworthiness of borrowers and set appropriate interest rates, reducing the risk of defaults and making the lending process more efficient. Additionally, AI could be used in DeFi to improve the security of smart contracts and other blockchain-based financial transactions. However, there are also potential risks associated with the use of AI in DeFi. One concern is that the use of AI algorithms in trading and lending could lead to the creation of "black box" systems that are difficult to understand and regulate. For example, if AI algorithms are trained on biased or incomplete data, they could make decisions that are unfair or discriminatory. While ChatGPT covered a lot of ground in its article, it did miss some key applications, such as [insurance](https://www.coindesk.com/business/2021/03/17/when-defi-becomes-intelligent/), and key risks, including how on-chain AI could be used to manipulate markets or harm users through malicious [MEV](https://www.coindesk.com/learn/what-is-mev-aka-maximal-extractable-value/) strategies. By taking a cautious and responsible approach, it may be possible to harness the power of AI to improve the capabilities of decentralized finance without creating unintended consequences. At some point – probably soon, if not already – it will be difficult to think of an industry that hasn’t been completely upended by machines that can think.
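To make the lending example concrete, here is a purely hypothetical sketch of how a platform might score borrower creditworthiness from on-chain signals and translate that score into an interest rate. The feature names, data and rate formula are invented for illustration and are not drawn from the article above.

```python
# Hypothetical sketch: estimating a borrower's default risk from on-chain
# features with logistic regression (scikit-learn). All features, data and
# the rate formula are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features: [wallet_age_days, past_loans, repayment_rate, collateral_ratio]
X = np.array([
    [900, 12, 0.98, 1.8],
    [30,   1, 0.50, 1.1],
    [400,  6, 0.90, 1.5],
    [10,   0, 0.00, 1.0],
    [1200, 20, 1.00, 2.0],
    [60,   2, 0.40, 1.2],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = defaulted, 0 = repaid

model = LogisticRegression().fit(X, y)

# Score a new borrower and map the risk estimate to an offered rate
new_borrower = np.array([[365, 4, 0.85, 1.4]])
default_prob = model.predict_proba(new_borrower)[0, 1]
interest_rate = 0.03 + 0.10 * default_prob  # base rate plus a risk premium
print(f"default probability: {default_prob:.2f}, offered rate: {interest_rate:.2%}")
```

Such a model would only be as fair as its training data – exactly the bias risk flagged above.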
OpenAI's new platform promises entertainment, industry disruption—and plenty to worry about.
Created by OpenAI, which was co-[founded](https://onezero.medium.com/openai-sold-its-soul-for-1-billion-cf35ff9e8cd4) by Elon Musk, the artificial intelligence-fueled platform [ChatGPT](https://openai.com/blog/chatgpt/) has garnered well over a million users in a matter of days. Just like DALL-E uses machine learning and algorithms to spin up bizarrely beautiful works of [digital art](https://gizmodo.com/seinfeld-dall-e-ai-artworks-nightmares-1849028244) at the click of a button, ChatGPT employs similar technology to make you feel like you’re messaging with a real person. It will explain [physics](https://twitter.com/pwang/status/1599520310466080771), do your [homework](https://stratechery.com/2022/ai-homework/), or [write you a poem](https://marginalrevolution.com/marginalrevolution/2022/12/chatgpt-does-a-thomas-schelling-poem.html), if you ask it to. OpenAI chief executive Sam Altman has [stated](https://twitter.com/sama/status/1599669571795185665?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1599669571795185665%7Ctwgr%5E3db49ccfc92ecefe52ef42bfe582ee58fd12eec5%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Findianexpress.com%2Farticle%2Ftechnology%2Ftech-news-technology%2Fopenai-chatgpt-crosses-1-million-users-ceo-says-they-might-have-to-monetise-this-8306997%2F) that there are plans to monetize the service at some point in the future, though he hasn’t elaborated on how or when that might happen. In an effort to get to know the program, I did what I usually do with people I’m trying to get to know and cycled through a series of basic topics: pop culture, TV shows, recent events, pets. Often what it offers is a rough approximation of the correct answer. In the midst of my talks with the chatbot, I tried to test its historical knowledge and asked it to write me an essay about the [1953 coup](https://gizmodo.com/release-of-u-s-historical-documents-delayed-due-to-ira-1673225678) in Iran. To gauge its creative abilities, I also started asking the program to write me short stories—and that’s when things got really weird. Grasping for really weird stuff to try with the program, I recently instructed it to write “an erotic story” involving undersea creatures. In addition to the above, the chatbot has now also written me a hilarious “Jay-Z song” about a toilet, poems about Howard Hughes, the Syrian Civil War, and the TV detective Columbo, and a multi-part fiction series about a battle of wills between an old sea captain and a giant clam. Some have predicted the [death of the college essay at the hands of ChatGPT](https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/)—and I tend to agree.