@bplunkert
Last active December 21, 2022 19:17
Run a conversation between two GPT bots, each with its own memory (extended context)
import json
import os

import openai

if 'OPENAI_API_KEY' not in os.environ:
    raise Exception('Environment variable OPENAI_API_KEY is required')

# Sampling parameters shared by both completion calls.
temperature = 0.9
max_tokens = 2000
top_p = 1.0
frequency_penalty = 0.6
presence_penalty = 0.0

# Shared state: the full transcript plus a per-bot "memory" (context).
with open('./conversations/data.json') as infile:
    data = json.load(infile)

bot_name = os.environ.get('BOT_NAME')
if not bot_name:
    raise Exception('Environment variable BOT_NAME is required')

conversation = data['conversation']
context = data['context'][bot_name]

# Only respond if the other bot spoke last.
last_conversation_item = conversation[-1]
last_speaker = last_conversation_item['speaker']
if last_speaker == bot_name:
    print('Bot was last to speak, exiting.')
    exit()

# Build the prompt from the previous 10 conversation items plus this bot's context.
prompt = ""
for conversation_item in conversation[-10:]:
    prompt += conversation_item['speaker'] + ": " + conversation_item['text'] + "\n"
prompt += context + "\n" + last_conversation_item['speaker'] + ": " + last_conversation_item['text']

# Generate this bot's reply.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt + "I want you to respond only as " + bot_name + ":",
    temperature=temperature,
    max_tokens=max_tokens,
    top_p=top_p,
    frequency_penalty=frequency_penalty,
    presence_penalty=presence_penalty,
).choices[0].text

# Ask the model to fold the new exchange into a fresh summary, which becomes
# this bot's context (memory) for its next turn.
new_context = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt + "\n" + response + "\n" + context + "\n" + "Recontextualize and summarize the above: ",
    temperature=temperature,
    max_tokens=max_tokens,
    top_p=top_p,
    frequency_penalty=frequency_penalty,
    presence_penalty=presence_penalty,
).choices[0].text

conversation.append({"speaker": bot_name, "text": response})

# Write the updated transcript and context back to file.
data['conversation'] = conversation
data['context'][bot_name] = new_context
with open('./conversations/data.json', 'w') as outfile:
    json.dump(data, outfile)
This code is a proof of concept for extending GPT-3 beyond its context window limit by maintaining an external context state: after each reply, the bot asks the model to summarize the exchange and stores that summary as its context for the next turn. The approach could be extended into many layers of context. Some interesting discussion of the technique and its possible limitations is here: https://www.erwinmayer.com/2021/05/23/extending-gpt-3s-context-window-infinitely-by-storing-context-in-gpt-3-itself-or-in-secondary-layers/
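The script expects `./conversations/data.json` to already exist. Here is a hypothetical bootstrap sketch that seeds that file in the schema the script reads (inferred from the code above; the bot names `alice` and `bob` and their seed personas are illustrative assumptions, not part of the gist):

```python
import json
import os

# Create the directory the gist's script reads from.
os.makedirs('./conversations', exist_ok=True)

seed = {
    # Full transcript shared by both bots; each entry has "speaker" and "text".
    "conversation": [
        {"speaker": "alice", "text": "Hello! What should we talk about?"},
    ],
    # Per-bot memory: each bot's rolling summary ("context"), keyed by the
    # BOT_NAME it runs under. These seed personas are placeholders.
    "context": {
        "alice": "You are Alice, a curious conversationalist.",
        "bob": "You are Bob, a thoughtful conversationalist.",
    },
}

with open('./conversations/data.json', 'w') as f:
    json.dump(seed, f, indent=2)
```

After seeding, the conversation is driven by running the script repeatedly with alternating `BOT_NAME=alice` and `BOT_NAME=bob`; the last-speaker check makes out-of-turn runs exit harmlessly.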
