The Facebook executive spearheading the company’s virtual reality efforts hopes to create virtual worlds with “almost Disney levels of safety” but has also acknowledged that moderation “at any meaningful scale is almost impossible.”
Facebook’s parent company Meta is building virtual reality worlds where people will socialize, work, game, and even shop using 3D avatars of themselves.
In an internal memo obtained by the Financial Times, Andrew Bosworth, who is spearheading Meta’s $10 billion “metaverse” project, warned that virtual reality can be a “toxic environment,” especially for minorities and women.
The memo notes that moderating users’ content and behavior could pose a major challenge, especially given the company’s poor record of curbing “harmful” content.
“The psychological impact on humans is much greater,” said Kavya Pearlman, chief executive of the XR Safety Initiative, a non-profit focused on developing safety standards for virtual, augmented, and mixed reality. She explained that users retain what happens to them in the metaverse as though it had happened in reality.
Bosworth outlined a plan the company could use to tackle the issue, but experts have noted that policing behavior in a virtual reality setting demands enormous resources and may not be possible at all.