It has often been argued that long-term AI safety research requires social scientists to ensure that AI alignment algorithms succeed when real humans are involved. However, aligning advanced AI systems with human values requires resolving many ambiguities.
These ambiguities relate to:
- The psychology underlying human rationality
- Emotion
- Biases
By studying human behaviour closely, social science researchers and machine learning researchers can collaborate in new ways. That is why social scientists are being hired to work on AI safety at OpenAI.
The Goal of AI Safety
The goal here is singular: human values must not be compromised in any way when people collaborate with advanced AI systems. This is also the core principle of long-term artificial intelligence (AI) safety. The systems should reliably do what people intend them to do and should not deviate from that intent at any point.
At OpenAI, this is approached by asking people questions about their values, training machine learning (ML) models on their answers, and then optimizing AI systems to perform well according to these learned models.
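As a rough illustration of this pipeline, it resembles preference-based reward modelling: people compare answers, and a model is trained to score answers the way people do. The sketch below is a minimal, hypothetical example; the model architecture, loss, and variable names are illustrative assumptions, not OpenAI's actual implementation.

```python
# Minimal sketch of preference-based reward modelling (illustrative only).
# Humans answer "which of these two responses do you prefer?" and a small
# model is trained on those answers; all names below are hypothetical.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a response embedding with a single scalar reward."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(embed_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

def preference_loss(model, preferred, rejected):
    """Bradley-Terry style loss: the human-preferred answer should score higher."""
    return -torch.nn.functional.logsigmoid(
        model(preferred) - model(rejected)
    ).mean()

# Toy training loop on random "embeddings" standing in for real answer pairs.
model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    preferred = torch.randn(32, 128)   # embeddings of answers humans preferred
    rejected = torch.randn(32, 128)    # embeddings of answers humans rejected
    loss = preference_loss(model, preferred, rejected)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once trained, such a model can stand in for human judgement when optimizing an AI system, which is exactly why the quality of the underlying human answers matters so much.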
Bottlenecks in AI Safety
Unfortunately, humans cannot be relied upon to answer questions about their values accurately. Why? Human knowledge is incomplete, human reasoning ability is limited, and people cannot always explain the rationale behind their actions. Reasoning biases and conflicting moral beliefs compound the problem, producing inconsistent answers when these questions are examined closely. Resolving this requires studying human interaction more carefully to understand where the biases lie and how much weight to give each answer.
Why Are Social Scientists Required?
To overcome the limitations of current ML, alignment experiments can be run entirely with humans, replacing the ML agents with real people. For example, the debate approach to AI alignment involves a game with two debaters and a judge; in the human-only version, two human debaters argue a question while a human judge decides who is more convincing. The lessons learned from these human debates can later be transferred to ML systems. Using ML debaters today would not be very informative, as current models are still too primitive to hold substantive debates.
Although these human-only experiments are motivated by ML algorithms, they involve no ML systems of any kind. Instead, they rely on carefully designed experiments that collect data on how well human judges currently perform, studying how people debate and interact so that the findings can later inform the design of ML systems. This is where social scientists are required: they can design and run these experiments and intervene wherever human judgement is needed to guide the ML systems. Focusing on the AI alone is not the right approach; one must also understand how the human side of the decision-making process works.
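To make the setup concrete, here is a minimal sketch of the kind of record one might keep in a human-only debate experiment. The structure, field names, and example content are illustrative assumptions, not a prescribed protocol.

```python
# Minimal sketch of logging a human-only debate experiment (illustrative).
# Two human debaters argue a question and a human judge picks a winner;
# the record below is a hypothetical structure, not a prescribed protocol.
from dataclasses import dataclass, field

@dataclass
class DebateRound:
    question: str
    debater_a_argument: str
    debater_b_argument: str

@dataclass
class DebateExperiment:
    question: str
    rounds: list[DebateRound] = field(default_factory=list)
    judge_verdict: str | None = None   # "A" or "B", decided by the human judge
    judge_correct: bool | None = None  # whether the verdict matched ground truth

    def add_round(self, argument_a: str, argument_b: str) -> None:
        self.rounds.append(DebateRound(self.question, argument_a, argument_b))

    def record_verdict(self, verdict: str, ground_truth: str) -> None:
        self.judge_verdict = verdict
        self.judge_correct = (verdict == ground_truth)

# Example usage with made-up content:
exp = DebateExperiment(question="Is option A or option B the safer plan?")
exp.add_round("Option A avoids the known failure mode.", "Option B is cheaper and just as safe.")
exp.record_verdict(verdict="A", ground_truth="A")
print(exp.judge_correct)  # True if the judge sided with the correct answer
```

Data of this kind, collected across many debates and judges, is what would let researchers measure how reliably human judges reach correct verdicts before any ML system is put in the loop.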
Parting Thoughts
Hopefully, this write-up has shed some light on why social scientists are needed in the development of AI and its overall safety. If you are interested in learning more about such topics, head over to the E2E Networks website. And if you plan to implement AI in your business, you can rent the required infrastructure at a nominal cost from E2E Networks.
Reference Links
https://distill.pub/2019/safety-needs-social-scientists/
https://openai.com/blog/ai-safety-needs-social-scientists/
https://www.effectivealtruism.org/articles/ea-global-2018-ai-safety-needs-social-scientists