AI: IRL Podcast Episode 47: 3 Takeaways from the US Government’s 10 AI Principles
The White House released 10 new AI principles it believes will establish the US’s continued leadership in AI. The move was a subtle change in course; until now, the government had been somewhat hands-off when it came to AI, letting the technologists determine the guidelines.
But perhaps because of incoming pressure from China, the White House saw an opportunity to lead the charge in establishing some uniformity and structure for the world of AI.
The 10 principles were not laid out as official regulations but rather as guidelines and signposts for potential regulations. Currently, regulations vary from country to country (if they exist at all), and even within a single country, rules vary from industry to industry.
By taking a stance, I believe the US may give itself the opportunity to continue to lead in AI.
I had 3 takeaways from the announcement (which I went into in detail on the AI: IRL podcast).
- The US government involvement gave us a platform to discuss AI
- At least 2 areas of AI need further exploration (eliminating bias and demystifying AI)
- The US government can be a catalyst for the private sector
1. The US government involvement provides a platform to discuss AI
While the list of 10 AI principles didn’t go into much detail, many of the topics discussed were a solid foundation, providing the “header 1” info to launch broader discussions about AI. The list included:
- Public trust
- Public participation
- Benefit and cost
- Fairness and transparency
- Non-discrimination and avoiding biases
Overall, the principles listed were necessary and the information was a helpful starting place. However, there were no official regulations around those issues, which is a necessary component for any industry as it begins to take shape.
Financial services, for instance, have solid regulations that allow the industry to operate. I think it’s time to open a discussion about how regulation looks in relation to AI — while we clearly don’t want to slow down innovation because of too much red tape, we do want to deploy enough commonality and compliance so the world of AI is demystified and cohesive.
2. At least 2 areas of the government’s AI guidelines need further exploration
While the government provided a great place to start with discussions about AI, each of these topics needs to be double-clicked on.
Two areas in particular that I felt the US government didn’t go deep enough on:
Eliminating bias: How are we identifying and eliminating biases within AI and the underlying datasets?
Both the datasets and resulting outcomes within AI have unfortunately often been based on discriminatory information, further impeding the rights of the most vulnerable in our society, such as minorities. Often, the datasets we’ve used to build our models and our algorithms don’t include everyone in a given cohort, meaning we aren’t capturing complete or accurate information.
There are 2 reasons it’s important to be inclusive within AI and eliminate discrimination and bias: For one, there are moral and ethical obligations. But there’s also a business consideration — if your data set is only based on say 80% of the population, you may be missing information on the very 20% your business should be reaching, leaving the market to your competitors.
Demystifying AI: Are we truly eliminating the black box of AI?
While the general tone of the release did point to the US government’s attempt to expand access to AI, not restrict it, I think there is work to do.
We must demystify AI for the broader community.
Currently, around the globe and especially within organizations, a very limited number of individuals understand the technical aspects of AI well enough to know how changing one element will impact outcomes. With such a limited perspective, we are missing huge business and social opportunities. Studies have shown that with a diverse pool of input, better outcomes are achieved, so we must continue to ensure that everyone in business and society can access and understand AI.
3. The US government can be a catalyst for AI in the private sector
There are great (if only a few) examples of the US government leading the charge in regulation and experimentation, and then sharing those findings with the broader private community. An easy example to point to is the internet, which was developed for the government’s use, but its implications became much broader, impacting nearly every aspect of life around the globe.
So while the US government has its own investments in AI, whether in financial services, healthcare, or the military, it can test technology, policy, and regulation, and then share those insights with the private sector to strengthen the US’s position as a leader in AI.