OAN’s James Meyers
8:15 AM – Monday, November 27, 2023
The United States and multiple other countries have come to an agreement on guidelines for the use of Artificial Intelligence (AI).
In a 20-page document unveiled on Sunday, 18 countries agreed on guidelines for how AI should be designed and deployed to keep the technology safe from misuse and to protect customers and the public.
The agreement is non-binding and offers only general recommendations, which include monitoring AI systems for abuse, vetting software suppliers, and protecting data from tampering.
The director of the U.S. Cybersecurity and Infrastructure Security Agency, Jen Easterly, said it was important that countries place guidelines on AI systems for safety purposes.
“This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” Easterly told Reuters, saying the guidelines represent “an agreement that the most important thing that needs to be done at the design phase is security.”
The agreement aims to keep AI technology from being hijacked by hackers and to ensure that proper security testing is completed before a new product is released.
Critics of Artificial Intelligence believe the technology could be used to disrupt elections, cut jobs, or turbocharge fraud.
The agreement is broken down into four key areas: secure design, secure development, secure deployment, and secure operation and maintenance, each with suggested practices to help improve security.