<s>
Prompt	B-General_Concept
engineering	I-General_Concept
is	O
a	O
concept	O
in	O
artificial	B-Application
intelligence	I-Application
(	O
AI	B-Application
)	O
,	O
particularly	O
natural	B-Language
language	I-Language
processing	I-Language
(	O
NLP	B-Language
)	O
.	O
</s>
<s>
In	O
prompt	B-General_Concept
engineering	I-General_Concept
,	O
the	O
description	O
of	O
the	O
task	O
that	O
the	O
AI	B-Application
is	O
supposed	O
to	O
accomplish	O
is	O
embedded	O
in	O
the	O
input	O
,	O
e.g.	O
</s>
<s>
Prompt	B-General_Concept
engineering	I-General_Concept
typically	O
works	O
by	O
converting	O
one	O
or	O
more	O
tasks	O
to	O
a	O
prompt-based	O
dataset	O
and	O
training	O
a	O
language	B-Language
model	I-Language
with	O
what	O
has	O
been	O
called	O
"	O
prompt-based	O
learning	O
"	O
or	O
just	O
"	O
prompt	O
learning	O
"	O
.	O
</s>
<s>
The	O
GPT-2	B-General_Concept
and	O
GPT-3	B-General_Concept
language	B-Language
models	I-Language
were	O
important	O
steps	O
in	O
prompt	B-General_Concept
engineering	I-General_Concept
.	O
</s>
<s>
In	O
2021	O
,	O
multitask	O
prompt	B-General_Concept
engineering	I-General_Concept
using	O
multiple	O
NLP	B-Language
datasets	O
showed	O
good	O
performance	O
on	O
new	O
tasks	O
.	O
</s>
<s>
In	O
a	O
method	O
called	O
chain-of-thought	O
(	O
CoT	O
)	O
prompting	O
,	O
few-shot	O
examples	O
of	O
a	O
task	O
were	O
given	O
to	O
the	O
language	B-Language
model	I-Language
which	O
improved	O
its	O
ability	O
to	O
reason	O
.	O
</s>
<s>
Prompt	B-General_Concept
engineering	I-General_Concept
may	O
work	O
from	O
a	O
large	O
language	B-Language
model	I-Language
(	O
LLM	O
)	O
,	O
that	O
is	O
"	O
frozen	O
"	O
(	O
in	O
the	O
sense	O
that	O
it	O
is	O
pretrained	O
)	O
,	O
where	O
only	O
the	O
representation	O
of	O
the	O
prompt	O
is	O
learned	O
(	O
in	O
other	O
words	O
,	O
optimized	O
)	O
,	O
using	O
methods	O
such	O
as	O
"	O
prefix-tuning	O
"	O
or	O
"	O
prompt	O
tuning	O
"	O
.	O
</s>
<s>
The	O
technique	O
was	O
first	O
proposed	O
by	O
Google	B-Application
researchers	O
in	O
2022	O
.	O
</s>
<s>
LLMs	O
that	O
are	O
trained	O
on	O
large	O
amounts	O
of	O
text	O
using	O
deep	B-Algorithm
learning	I-Algorithm
methods	O
can	O
generate	O
output	O
that	O
resembles	O
human-generated	O
text	O
.	O
</s>
<s>
While	O
LLMs	O
show	O
impressive	O
performance	O
on	O
various	O
natural	B-Language
language	I-Language
tasks	O
,	O
they	O
still	O
face	O
difficulties	O
with	O
some	O
reasoning	O
tasks	O
that	O
require	O
logical	O
thinking	O
and	O
multiple	O
steps	O
to	O
solve	O
,	O
such	O
as	O
arithmetic	O
or	O
commonsense	O
reasoning	O
questions	O
.	O
</s>
<s>
When	O
applied	O
to	O
PaLM	O
,	O
a	O
540B	O
parameter	O
language	B-Language
model	I-Language
,	O
CoT	O
prompting	O
significantly	O
aided	O
the	O
model	O
,	O
allowing	O
it	O
to	O
perform	O
comparably	O
with	O
task-specific	O
fine-tuned	O
models	O
on	O
several	O
tasks	O
,	O
even	O
setting	O
a	O
new	O
state	O
of	O
the	O
art	O
at	O
the	O
time	O
on	O
the	O
GSM8K	O
mathematical	O
reasoning	O
benchmark	O
.	O
</s>
<s>
CoT	O
prompting	O
is	O
an	O
emergent	O
property	O
of	O
model	O
scale	O
,	O
meaning	O
it	O
works	O
better	O
with	O
larger	O
and	O
more	O
powerful	O
language	B-Language
models	I-Language
.	O
</s>
<s>
There	O
are	O
two	O
main	O
methods	O
to	O
elicit	O
chain-of-thought	O
reasoning	O
:	O
few-shot	B-Algorithm
prompting	I-Algorithm
and	O
zero-shot	B-Algorithm
prompting	I-Algorithm
.	O
</s>
<s>
It	O
is	O
also	O
possible	O
to	O
elicit	O
similar	O
reasoning	O
and	O
performance	O
gain	O
with	O
zero-shot	B-Algorithm
prompting	I-Algorithm
,	O
which	O
can	O
be	O
as	O
simple	O
as	O
appending	O
to	O
the	O
prompt	O
the	O
words	O
"	O
Let	O
's	O
think	O
step-by-step	O
"	O
.	O
</s>
<s>
While	O
CoT	O
reasoning	O
can	O
improve	O
performance	O
on	O
natural	B-Language
language	I-Language
processing	I-Language
tasks	O
,	O
certain	O
drawbacks	O
exist	O
.	O
</s>
<s>
Zero-shot	B-Algorithm
CoT	I-Algorithm
prompting	I-Algorithm
increased	O
the	O
likelihood	O
of	O
toxic	O
output	O
on	O
tasks	O
for	O
which	O
models	O
can	O
make	O
inferences	O
about	O
marginalized	O
groups	O
or	O
harmful	O
topics	O
.	O
</s>
<s>
In	O
2022	O
,	O
machine	O
learning	O
(	O
ML	O
)	O
models	O
like	O
DALL-E	B-General_Concept
2	I-General_Concept
,	O
Stable	B-General_Concept
Diffusion	I-General_Concept
,	O
and	O
Midjourney	B-General_Concept
were	O
released	O
to	O
the	O
public	O
.	O
</s>
<s>
These	O
models	O
take	O
text	O
prompts	O
as	O
input	O
and	O
use	O
them	O
to	O
generate	O
images	O
,	O
which	O
introduced	O
a	O
new	O
category	O
of	O
prompt	B-General_Concept
engineering	I-General_Concept
related	O
to	O
text-to-image	B-General_Concept
prompting	O
.	O
</s>
<s>
Prompt	O
injection	O
can	O
be	O
viewed	O
as	O
a	O
code	O
injection	O
attack	O
using	O
adversarial	O
prompt	B-General_Concept
engineering	I-General_Concept
.	O
</s>
<s>
In	O
2022	O
,	O
the	O
NCC	O
Group	O
characterized	O
prompt	O
injection	O
as	O
a	O
new	O
class	O
of	O
vulnerability	O
of	O
AI/ML	O
systems	O
.	O
</s>
<s>
In	O
early	O
2023	O
,	O
prompt	O
injection	O
was	O
seen	O
"	O
in	O
the	O
wild	O
"	O
in	O
minor	O
exploits	O
against	O
ChatGPT	B-Application
,	O
Bing	B-Application
,	O
and	O
similar	O
chatbots	O
,	O
for	O
example	O
to	O
reveal	O
the	O
hidden	O
initial	O
prompts	O
of	O
the	O
systems	O
,	O
or	O
to	O
trick	O
the	O
chatbot	O
into	O
participating	O
in	O
conversations	O
that	O
violate	O
the	O
chatbot	O
's	O
content	B-Protocol
policy	I-Protocol
.	O
</s>
