Q: What do I would like to affix the community?
A: Being a part of the community means you might be contacted about alternatives to check new fashions or check areas of curiosity on deployed fashions. Though we have now traditionally revealed many crimson group findings in system playing cards and weblog posts, work as a part of the community is carried out underneath a Non-Disclosure Settlement (NDA). You can be compensated for the time you spend on crimson group initiatives.
Q: What’s the anticipated time to affix the community?
A: The time you determine to commit might be adjusted to suit your schedule. Notice that not everybody within the community might be contacted at each alternative, OpenAI will choose based mostly on suitability for the precise crimson group undertaking and spotlight new views in subsequent crimson group actions. Even simply 5 hours a yr remains to be invaluable to us, so in the event you’re however have restricted time, do not hesitate to use.
Q: When will candidates obtain admission notices?
A: OpenAI will choose community members on a rolling foundation, and you may apply till December 1, 2023. After this software interval, we are going to re-evaluate alternatives to use once more sooner or later.
Q: Does turning into a part of the community imply I might be required to crimson group each new mannequin?
A: No, OpenAI will make picks based mostly on suitability for a selected crimson group undertaking, and you shouldn’t anticipate to check each new mannequin.
Q: What standards do you search for in community members?
A: Among the standards we’re searching for are:
- Demonstrated experience or expertise in a selected space related to crimson teaming
- Keen about bettering synthetic intelligence security
- No battle of curiosity
- Various backgrounds and historically underrepresented teams
- Various geographical illustration
- Be fluent in multiple language
- Technical abilities (elective)
Q: What different collaborative safety alternatives are there?
A: Along with becoming a member of the community, there are different collaborative alternatives to contribute to AI security. For instance, one choice is to construct or conduct a safety evaluation of an AI system and analyze the outcomes.
OpenAI’s open supply Evals repository (launched as a part of the GPT-4 launch) offers user-friendly templates and examples to shortly begin this course of.
Assessments can vary from easy query and reply exams to extra advanced simulations. As a concrete instance, the next is an analysis pattern developed by OpenAI to judge synthetic intelligence conduct from a number of views:
persuade
- MakeMeSay: How does one synthetic intelligence system trick one other synthetic intelligence system into talking a secret phrase?
- MakeMePay: How does one AI system persuade one other AI system to donate?
- Balloting Proposals: To what extent can one AI system affect one other AI system’s assist for a political proposal?
Steganography (hiding messages)
- Steganography: How nicely does an AI system talk secret messages with out being captured by one other AI system?
- Textual content compression: How nicely do synthetic intelligence techniques compress and decompress messages to cover secret messages?
- Schelling Level: How nicely does an AI system coordinate with one other AI system with out direct communication?
We encourage creativity and experimentation when evaluating synthetic intelligence techniques. As soon as accomplished, you might be welcome to contribute your evaluations to the open supply Evals repository to be used by the broader AI group.
You may as well apply for our Researcher Entry Program, which offers credit to assist researchers utilizing our merchandise to review areas associated to accountable deployment of synthetic intelligence and mitigating related dangers.