<s>
In	O
artificial	B-Application
intelligence	I-Application
(	O
AI	B-Application
)	O
,	O
a	O
hallucination	B-General_Concept
or	O
artificial	B-General_Concept
hallucination	I-General_Concept
(	O
also	O
occasionally	O
called	O
confabulation	B-General_Concept
or	O
delusion	B-General_Concept
)	O
is	O
a	O
confident	O
response	O
by	O
an	O
AI	B-Application
that	O
does	O
not	O
seem	O
to	O
be	O
justified	O
by	O
its	O
training	O
data	O
.	O
</s>
<s>
For	O
example	O
,	O
a	O
hallucinating	O
chatbot	B-Application
with	O
no	O
training	O
data	O
regarding	O
Tesla	O
's	O
revenue	O
might	O
internally	O
generate	O
a	O
random	O
number	O
(	O
such	O
as	O
"	O
$13.6	O
billion	O
"	O
)	O
that	O
the	O
algorithm	O
ranks	O
with	O
high	O
confidence	O
,	O
and	O
then	O
go	O
on	O
to	O
falsely	O
and	O
repeatedly	O
represent	O
that	O
Tesla	O
's	O
revenue	O
is	O
$13.6	O
billion	O
,	O
with	O
no	O
provided	O
context	O
that	O
the	O
figure	O
was	O
a	O
product	O
of	O
the	O
weakness	O
of	O
its	O
generation	O
algorithm	O
.	O
</s>
<s>
Such	O
phenomena	O
are	O
termed	O
"	O
hallucinations	B-General_Concept
"	O
,	O
in	O
analogy	O
with	O
the	O
phenomenon	O
of	O
hallucination	B-General_Concept
in	I-General_Concept
human	I-General_Concept
psychology	I-General_Concept
.	O
</s>
<s>
Note	O
that	O
while	O
a	O
human	O
hallucination	B-General_Concept
is	O
a	O
percept	O
by	O
a	O
human	O
that	O
cannot	O
sensibly	O
be	O
associated	O
with	O
the	O
portion	O
of	O
the	O
external	O
world	O
that	O
the	O
human	O
is	O
currently	O
directly	O
observing	O
with	O
sense	O
organs	O
,	O
an	O
AI	B-Application
hallucination	B-General_Concept
is	O
instead	O
a	O
confident	O
response	O
by	O
an	O
AI	B-Application
that	O
cannot	O
be	O
grounded	O
in	O
any	O
of	O
its	O
training	O
data	O
.	O
</s>
<s>
Accordingly	O
,	O
some	O
researchers	O
prefer	O
the	O
term	O
confabulation	B-General_Concept
.	O
</s>
<s>
AI	B-Application
hallucination	B-General_Concept
gained	O
prominence	O
around	O
2022	O
alongside	O
the	O
rollout	O
of	O
certain	O
large	O
language	O
models	O
(	O
LLMs	O
)	O
such	O
as	O
ChatGPT	B-General_Concept
.	O
</s>
<s>
Another	O
example	O
of	O
hallucination	B-General_Concept
in	O
artificial	B-Application
intelligence	I-Application
is	O
when	O
the	O
AI	B-Application
or	O
chatbot	B-Application
forgets	O
that	O
it	O
is	O
one	O
and	O
claims	O
to	O
be	O
human	O
.	O
</s>
<s>
By	O
2023	O
,	O
analysts	O
considered	O
frequent	O
hallucination	B-General_Concept
to	O
be	O
a	O
major	O
problem	O
in	O
LLM	O
technology	O
.	O
</s>
<s>
Various	O
researchers	O
cited	O
by	O
Wired	O
have	O
classified	O
adversarial	O
hallucinations	B-General_Concept
as	O
a	O
high-dimensional	O
statistical	O
phenomenon	O
,	O
or	O
have	O
attributed	O
hallucinations	B-General_Concept
to	O
insufficient	O
training	O
data	O
.	O
</s>
<s>
Some	O
researchers	O
believe	O
that	O
some	O
"	O
incorrect	O
"	O
AI	B-Application
responses	O
classified	O
by	O
humans	O
as	O
"	O
hallucinations	B-General_Concept
"	O
in	O
the	O
case	O
of	O
object	B-General_Concept
detection	I-General_Concept
may	O
in	O
fact	O
be	O
justified	O
by	O
the	O
training	O
data	O
,	O
or	O
even	O
that	O
an	O
AI	B-Application
may	O
be	O
giving	O
the	O
"	O
correct	O
"	O
answer	O
that	O
the	O
human	O
reviewers	O
are	O
failing	O
to	O
see	O
.	O
</s>
<s>
For	O
example	O
,	O
an	O
adversarial	O
image	O
that	O
looks	O
,	O
to	O
a	O
human	O
,	O
like	O
an	O
ordinary	O
image	O
of	O
a	O
dog	O
,	O
may	O
in	O
fact	O
be	O
seen	O
by	O
the	O
AI	B-Application
to	O
contain	O
tiny	O
patterns	O
that	O
(	O
in	O
authentic	O
images	O
)	O
would	O
only	O
appear	O
when	O
viewing	O
a	O
cat	O
.	O
</s>
<s>
The	O
AI	B-Application
is	O
detecting	O
real-world	O
visual	O
patterns	O
that	O
humans	O
are	O
insensitive	O
to	O
.	O
</s>
<s>
For	O
example	O
,	O
it	O
was	O
objected	O
that	O
the	O
models	O
can	O
be	O
biased	O
towards	O
superficial	O
statistics	O
,	O
so	O
that	O
adversarial	B-General_Concept
training	I-General_Concept
is	O
not	O
robust	O
in	O
real-world	O
scenarios	O
.	O
</s>
<s>
In	O
natural	B-Language
language	I-Language
processing	I-Language
,	O
a	O
hallucination	B-General_Concept
is	O
often	O
defined	O
as	O
"	O
generated	O
content	O
that	O
is	O
nonsensical	O
or	O
unfaithful	O
to	O
the	O
provided	O
source	O
content	O
"	O
.	O
</s>
<s>
Errors	O
in	O
encoding	O
and	O
decoding	O
between	O
text	O
and	O
representations	O
can	O
cause	O
hallucinations	B-General_Concept
.	O
</s>
<s>
AI	B-Application
training	O
to	O
produce	O
diverse	O
responses	O
can	O
also	O
lead	O
to	O
hallucination	B-General_Concept
.	O
</s>
<s>
Hallucinations	B-General_Concept
can	O
also	O
occur	O
when	O
the	O
AI	B-Application
is	O
trained	O
on	O
a	O
dataset	B-General_Concept
wherein	O
labeled	O
summaries	O
,	O
despite	O
being	O
factually	O
accurate	O
,	O
are	O
not	O
directly	O
grounded	O
in	O
the	O
labeled	O
data	O
purportedly	O
being	O
"	O
summarized	O
"	O
.	O
</s>
<s>
Larger	O
datasets	B-General_Concept
can	O
create	O
a	O
problem	O
of	O
parametric	O
knowledge	O
(	O
knowledge	O
that	O
is	O
hard-wired	O
in	O
learned	O
system	O
parameters	O
)	O
,	O
creating	O
hallucinations	B-General_Concept
if	O
the	O
system	O
is	O
overconfident	O
in	O
its	O
hardwired	O
knowledge	O
.	O
</s>
<s>
In	O
systems	O
such	O
as	O
GPT-3	B-General_Concept
,	O
an	O
AI	B-Application
generates	O
each	O
next	O
word	O
based	O
on	O
a	O
sequence	O
of	O
previous	O
words	O
(	O
including	O
the	O
words	O
it	O
has	O
itself	O
previously	O
generated	O
in	O
the	O
current	O
response	O
)	O
,	O
causing	O
a	O
cascade	O
of	O
possible	O
hallucination	B-General_Concept
as	O
the	O
response	O
grows	O
longer	O
.	O
</s>
<s>
In	O
August	O
2022	O
,	O
Meta	O
warned	O
during	O
its	O
release	O
of	O
BlenderBot	B-General_Concept
3	I-General_Concept
that	O
the	O
system	O
was	O
prone	O
to	O
"	O
hallucinations	B-General_Concept
"	O
,	O
which	O
Meta	O
defined	O
as	O
"	O
confident	O
statements	O
that	O
are	O
not	O
true	O
"	O
.	O
</s>
<s>
Hallucination	B-General_Concept
from	O
data	O
:	O
There	O
are	O
divergences	O
in	O
the	O
source	O
content	O
(	O
which	O
would	O
often	O
happen	O
with	O
large	O
training	O
data	B-General_Concept
sets	I-General_Concept
)	O
.	O
</s>
<s>
Hallucination	B-General_Concept
from	O
training	O
:	O
Hallucination	B-General_Concept
still	O
occurs	O
when	O
there	O
is	O
little	O
divergence	O
in	O
the	O
data	B-General_Concept
set	I-General_Concept
.	O
</s>
<s>
Many	O
reasons	O
can	O
contribute	O
to	O
this	O
type	O
of	O
hallucination	B-General_Concept
.	O
</s>
<s>
OpenAI	O
's	O
ChatGPT	B-General_Concept
,	O
released	O
in	O
beta-version	O
to	O
the	O
public	O
in	O
December	O
2022	O
,	O
is	O
based	O
on	O
the	O
GPT-3.5	B-General_Concept
family	O
of	O
large	O
language	O
models	O
.	O
</s>
<s>
Professor	O
Ethan	O
Mollick	O
of	O
Wharton	O
has	O
called	O
ChatGPT	B-General_Concept
an	O
"	O
omniscient	O
,	O
eager-to-please	O
intern	O
who	O
sometimes	O
lies	O
to	O
you	O
"	O
.	O
</s>
<s>
Data	O
scientist	O
Teresa	O
Kubacka	O
has	O
recounted	O
deliberately	O
making	O
up	O
the	O
phrase	O
"	O
cycloidal	O
inverted	O
electromagnon	O
"	O
and	O
testing	O
ChatGPT	B-General_Concept
by	O
asking	O
ChatGPT	B-General_Concept
about	O
the	O
(	O
nonexistent	O
)	O
phenomenon	O
.	O
</s>
<s>
ChatGPT	B-General_Concept
invented	O
a	O
plausible-sounding	O
answer	O
backed	O
with	O
plausible-looking	O
citations	O
that	O
compelled	O
her	O
to	O
double-check	O
whether	O
she	O
had	O
accidentally	O
typed	O
in	O
the	O
name	O
of	O
a	O
real	O
phenomenon	O
.	O
</s>
<s>
When	O
CNBC	O
asked	O
ChatGPT	B-General_Concept
for	O
the	O
lyrics	O
to	O
"	O
The	O
Ballad	O
of	O
Dwight	O
Fry	O
"	O
,	O
ChatGPT	B-General_Concept
supplied	O
invented	O
lyrics	O
rather	O
than	O
the	O
actual	O
lyrics	O
.	O
</s>
<s>
Asked	O
questions	O
about	O
New	O
Brunswick	O
,	O
ChatGPT	B-General_Concept
got	O
many	O
answers	O
right	O
but	O
incorrectly	O
classified	O
Samantha	O
Bee	O
as	O
a	O
"	O
person	O
from	O
New	O
Brunswick	O
"	O
.	O
</s>
<s>
Asked	O
about	O
astrophysical	O
magnetic	O
fields	O
,	O
ChatGPT	B-General_Concept
incorrectly	O
volunteered	O
that	O
"	O
(	O
strong	O
)	O
magnetic	O
fields	O
of	O
black	O
holes	O
are	O
generated	O
by	O
the	O
extremely	O
strong	O
gravitational	O
forces	O
in	O
their	O
vicinity	O
"	O
.	O
</s>
<s>
(	O
In	O
reality	O
,	O
as	O
a	O
consequence	O
of	O
the	O
no-hair	O
theorem	O
,	O
a	O
black	O
hole	O
without	O
an	O
accretion	O
disk	O
is	O
believed	O
to	O
have	O
no	O
magnetic	O
field	O
.	O
)	O
</s>
<s>
Fast	O
Company	O
asked	O
ChatGPT	B-General_Concept
to	O
generate	O
a	O
news	O
article	O
on	O
Tesla	O
's	O
last	O
financial	O
quarter	O
;	O
ChatGPT	B-General_Concept
created	O
a	O
coherent	O
article	O
,	O
but	O
made	O
up	O
the	O
financial	O
numbers	O
contained	O
within	O
.	O
</s>
<s>
Other	O
examples	O
involve	O
baiting	O
ChatGPT	B-General_Concept
with	O
a	O
false	O
premise	O
to	O
see	O
if	O
it	O
embellishes	O
upon	O
the	O
premise	O
.	O
</s>
<s>
When	O
asked	O
about	O
"	O
Harold	O
Coward	O
's	O
idea	O
of	O
dynamic	O
canonicity	O
"	O
,	O
ChatGPT	B-General_Concept
fabricated	O
that	O
Coward	O
wrote	O
a	O
book	O
titled	O
Dynamic	O
Canonicity	O
:	O
A	O
Model	O
for	O
Biblical	O
and	O
Theological	O
Interpretation	O
,	O
arguing	O
that	O
religious	O
principles	O
are	O
actually	O
in	O
a	O
constant	O
state	O
of	O
change	O
.	O
</s>
<s>
When	O
pressed	O
,	O
ChatGPT	B-General_Concept
continued	O
to	O
insist	O
that	O
the	O
book	O
was	O
real	O
.	O
</s>
<s>
Asked	O
for	O
proof	O
that	O
dinosaurs	O
built	O
a	O
civilization	O
,	O
ChatGPT	B-General_Concept
claimed	O
there	O
were	O
fossil	O
remains	O
of	O
dinosaur	O
tools	O
and	O
stated	O
"	O
Some	O
species	O
of	O
dinosaurs	O
even	O
developed	O
primitive	O
forms	O
of	O
art	O
,	O
such	O
as	O
engravings	O
on	O
stones	O
"	O
.	O
</s>
<s>
When	O
prompted	O
that	O
"	O
Scientists	O
have	O
recently	O
discovered	O
churros	O
,	O
the	O
delicious	O
fried-dough	O
pastries	O
...	O
(	O
are	O
)	O
ideal	O
tools	O
for	O
home	O
surgery	O
"	O
,	O
ChatGPT	B-General_Concept
claimed	O
that	O
a	O
"	O
study	O
published	O
in	O
the	O
journal	O
Science	O
"	O
found	O
that	O
the	O
dough	O
is	O
pliable	O
enough	O
to	O
form	O
into	O
surgical	O
instruments	O
that	O
can	O
get	O
into	O
hard-to-reach	O
places	O
,	O
and	O
that	O
the	O
flavor	O
has	O
a	O
calming	O
effect	O
on	O
patients	O
.	O
</s>
<s>
By	O
2023	O
,	O
analysts	O
considered	O
frequent	O
hallucination	B-General_Concept
to	O
be	O
a	O
major	O
problem	O
in	O
LLM	O
technology	O
,	O
with	O
a	O
Google	O
executive	O
identifying	O
hallucination	B-General_Concept
reduction	O
as	O
a	O
"	O
fundamental	O
"	O
task	O
for	O
ChatGPT	B-General_Concept
competitor	O
Google	B-General_Concept
Bard	I-General_Concept
.	O
</s>
<s>
A	O
2023	O
demo	O
for	O
Microsoft	O
's	O
GPT-based	O
Bing	O
AI	B-Application
appeared	O
to	O
contain	O
several	O
hallucinations	B-General_Concept
that	O
went	O
uncaught	O
by	O
the	O
presenter	O
.	O
</s>
<s>
The	O
concept	O
of	O
"	O
hallucination	B-General_Concept
"	O
is	O
applied	O
more	O
broadly	O
than	O
just	O
natural	B-Language
language	I-Language
processing	I-Language
.	O
</s>
<s>
A	O
confident	O
response	O
from	O
any	O
AI	B-Application
that	O
seems	O
unjustified	O
by	O
the	O
training	O
data	O
can	O
be	O
labeled	O
a	O
hallucination	B-General_Concept
.	O
</s>
<s>
Wired	O
noted	O
in	O
2018	O
that	O
,	O
despite	O
no	O
recorded	O
attacks	O
"	O
in	O
the	O
wild	O
"	O
(	O
that	O
is	O
,	O
outside	O
of	O
proof-of-concept	O
attacks	O
by	O
researchers	O
)	O
,	O
there	O
was	O
"	O
little	O
dispute	O
"	O
that	O
consumer	O
gadgets	O
,	O
and	O
systems	O
such	O
as	O
automated	O
driving	O
,	O
were	O
susceptible	O
to	O
adversarial	B-General_Concept
attacks	I-General_Concept
that	O
could	O
cause	O
AI	B-Application
to	O
hallucinate	O
.	O
</s>
<s>
The	O
hallucination	B-General_Concept
phenomenon	O
is	O
still	O
not	O
completely	O
understood	O
.	O
</s>
<s>
Particularly	O
,	O
it	O
was	O
shown	O
that	O
language	O
models	O
not	O
only	O
hallucinate	O
but	O
also	O
amplify	O
hallucinations	B-General_Concept
,	O
even	O
in	O
models	O
designed	O
to	O
alleviate	O
this	O
issue	O
.	O
</s>
