Nearly one in two sacks of flesh who have tried generative AI wrongly believe its answers are always factually accurate, and some use it at work even though they know their employer disapproves of it.
OpenAI’s ChatGPT launched as a web interface last winter and hit 100 million monthly users earlier this year, making it the fastest growing app in history. That is, until Meta released Twitter rival Threads last week, which reportedly reached 100 million users within days.
Microsoft has already integrated ChatGPT into its Bing search engine and other products; Google made its equivalent chatbot, Bard, available to more users last week. The technology is being hyped to extremes, and many players outside tech circles are making huge logical leaps about its power, including its supposed ability to annihilate humanity.
Consulting giant Deloitte surveyed 4,150 UK adults aged 16-75 for the 2023 local edition of its Digital Consumer Trends report, building a picture of how generative AI is viewed and the extent to which it is used for work.
Some 52% had heard of the technology and 26% said they had used it, with 43% of the latter “wrongly” assuming “it always produces factually accurate answers”.
“Just months after the launch of the most popular generative AI tools, one in four people in the UK have tried the technology,” said Paul Lee, partner and head of technology, media and telecommunications research at Deloitte.
“For comparison, it took five years for voice-assisted speakers to reach the same levels of adoption. It is incredibly rare for an emerging technology to reach these levels of adoption and frequency of use as quickly.”
Of those who have used generative AI, 30% have tried it once or twice, 28% use it weekly, 9% use it once a day, and 8% use it for work – clearly, they don’t work at Apple, Samsung, Amazon, Accenture, or other companies that have banned it.
Google itself told staff not to reveal confidential information to Bard, something GCHQ had warned about months earlier. In June, a threat intelligence group found ChatGPT credentials in some 100,000 info-stealer malware logs being traded on the dark web. By default, ChatGPT stores users’ query history and the AI’s responses, making accounts potentially rich pickings for criminals.
“Many companies integrate ChatGPT into their workflow. Employees enter classified correspondence or use the bot to optimize proprietary code,” said Dmitry Shestakov, threat intelligence manager at Group-IB, at the time.
OpenAI said it was investigating the claims, but insisted the exposed credentials were the result of “commodity malware on people’s devices and not an OpenAI breach”.
The technology is, added Deloitte’s Lee, “still relatively nascent, with user interfaces, regulatory environment, legal status, and accuracy all still works in progress. Over the next few months, we are likely to see more investment and development that will address many of these challenges, potentially driving further adoption of generative AI tools.”
Deloitte found that staff were aware of the potential to use generative AI at work, but only 23% said they had been given the green light to do so. As such, employers and their in-house IT teams must establish safeguards and guidelines to manage its use.
“People should understand the risks and inaccuracies associated with solely AI-generated content and, where possible, be informed when content, such as text, images, or audio, is generated by AI,” said Costi Perricos, partner and global head of AI and data at Deloitte. ®