Olympia LePoint’s IMEX America keynote session, “Making New Artificial Intelligence Technology Ethical and Accurate,” is sponsored by Visit Detroit.
Olympia LePoint would sit in a dark room for 12 hours watching data stream across a screen in the NASA Mission Control Room. Helping to oversee safe Space Shuttle launches, she’d watch readings from pressure sensors, which would indicate valve leaks. She’d check temperature gauges to see if liquid oxygen was leaking. She’d monitor vibration sensors to see whether blades had come off a pump during flight.
A few times, though, the computer said one thing, the launch was stopped, and the team discovered only afterward that a sensor was bad: the algorithm had been producing a false result. So she and the team developed triple computer redundancies, independent algorithms that cross-checked the original ones.
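That cross-checking architecture is easy to picture in code. The sketch below is purely illustrative, with hypothetical function names, values and thresholds rather than anything from actual NASA flight software: three independent readings are compared, and a value is trusted only when at least two of them agree.

```python
# Minimal sketch of triple redundancy with majority voting.
# All names and numbers here are hypothetical, not taken
# from any real flight-software system.

def majority_vote(readings, tolerance=0.5):
    """Return (value, trusted): a trusted value if at least two of
    three independent readings agree within `tolerance`, otherwise
    flag the result as unverified."""
    a, b, c = readings
    if abs(a - b) <= tolerance:
        return (a + b) / 2, True
    if abs(a - c) <= tolerance:
        return (a + c) / 2, True
    if abs(b - c) <= tolerance:
        return (b + c) / 2, True
    return None, False  # no two readings agree: treat as a fault

# Example: one faulty pressure reading is outvoted by the other two.
value, trusted = majority_vote([101.2, 101.4, 87.9])
print(value, trusted)  # -> roughly 101.3, True
```

The same logic underlies her argument about AI: a single algorithm’s answer is a claim, not a fact, until an independent check agrees with it.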
LePoint learned that computers don’t always tell the truth. The same can be said of artificial intelligence (AI): you often have to go inside the AI to verify that it’s working correctly.
“I have learned to check all results. Computers can lie,” LePoint says. “Based on my NASA work, I have extremely unique knowledge of how AI, deepfake codes and synthetic media work. As a result, I am the first person in the world to create the 10 AI Code of Ethics, which was developed through teaching AI ethics and applications classes at UCLA.”
Making the right decisions
The mathematical predictive modeling that LePoint pioneered to help ensure safe NASA rocket launches ties into the architecture behind AI, and she is one of only three people in the world with this particular knowledge and insight.
LePoint is also an MPI keynote speaker at IMEX America in Las Vegas, Oct. 8-10. She’s been called the “New Einstein,” and People magazine named her a “Modern Day Hidden Figure.” She’s an award-winning rocket scientist and author who helped launch 28 Space Shuttle missions.
Her IMEX America session, “Making New Artificial Intelligence Technology Ethical and Accurate,” will help attendees learn how to stay safe amid emerging AI developments, think ahead about where the technology is heading and make the right AI decisions so their organizations can thrive.
“Right now, there is a great deal of fear that exists around AI,” LePoint says. “While there are people excited about using AI, who want to build great innovative projects with it, there are people who are scared that AI will take their jobs. To add, there are some bad actors who may want to use AI in unethical ways to hurt companies, brands and groups of people.”
All of those feelings and concerns are real and important, she says. The truth, though, is that AI is a tool: in the right hands it can bring innovation and more jobs, but in the wrong hands it can devastate the human workforce.
“If no one is verifying the AI result, humanity can be negatively impacted,” LePoint says. “I am a firm believer that you must know the truth behind AI so you can use it in ways that bring innovation.”
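As a rough illustration of what verifying an AI result can look like in practice, here is a minimal sketch. Every name in it is hypothetical; the point is simply that an AI output is accepted only when an independently computed estimate agrees with it, and is otherwise escalated to a person.

```python
# Illustrative human-in-the-loop gate; all names and values
# are hypothetical stand-ins, not a real system or API.

def review_gate(ai_prediction: float, independent_estimate: float,
                max_disagreement: float = 0.1) -> str:
    """Accept the AI result only when an independently computed
    estimate agrees with it; otherwise escalate to a human."""
    if abs(ai_prediction - independent_estimate) <= max_disagreement:
        return "accept"
    return "escalate_to_human"

print(review_gate(0.92, 0.95))  # -> accept
print(review_gate(0.92, 0.40))  # -> escalate_to_human
```

In a real organization the independent estimate might come from a second model, a rule-based check or a sampled human audit; the structure, a gate between the AI and the decision, stays the same.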
The good and the bad
When talking about the positive and negative aspects of AI, LePoint starts with the positive: imagine speaking with someone and, instead of seeing them on a video screen, watching a 3D version of the person appear in front of you as a hologram.
“This future AI will be seen in a powerful real-life application,” she says. “You will be speaking to a person represented by a real-life interactive hologram. This technology will be here in the next five years. And it will come from quantum AI. This is where quantum computing and AI merge to render trillions of data points on computer qubits within a fraction of a millisecond. The way you can create hologram video calls is by organizing large amounts of data through the tool of AI. In order to have 3D hologram communication, we will need AI across the world.”
On the flip side, LePoint says, there’s a horrible scenario to imagine: AI starting and sustaining wars, rather than seasoned military officials and strategists applying their human wisdom.
“Without my 10 AI Code of Ethics, we can face such issues,” she says. “As a result, I believe that humans must oversee all AI decisions and its output, no matter how minor the results are. There needs to be verification methods and people hired to ensure all AI results are safe for all humans. AI lacks wisdom. Only our God-given human brains contain wisdom, and no computer can ever match the wisdom that exists with the human brain.”