WOC STEM DTX Conference: Women at the Helm of AI Transformation and Beyond
Published October 5, 2024, by Staff Writer
The advancement of artificial intelligence (AI) across all industries and sectors is an exciting development. AI is seen as a powerful tool for democratizing knowledge, breaking the chains of controlled information, and dismantling systems of inequity.
This was a major topic at the 29th annual Women of Color (WOC) STEM Conference digital twin experience (DTX), hosted by Career Communications Group's (CCG) Women of Color magazine.
In his book Metaquake USA, published by STEM City USA, CCG CEO Tyrone Taborn explored the implications of increased access to knowledge and innovation.
Several discussions during the three-day WOC STEM DTX Conference featured leading voices in AI, including Maj. Gen. Michelle Link, commanding general of the 75th U.S. Army Reserve Innovation Command and product support manager for the Program Executive Office Ground Combat Systems under the Assistant Secretary of the Army for Acquisition, Logistics & Technology.
In her civilian career, Maj. Gen. Link is responsible for managing the support functions required to field and maintain the readiness and operational capability of the Army’s ground combat weapon systems, subsystems, and components. She also leads the development, production, and fielding of weapon systems with a focus on AI capabilities.
Other panelists, such as Christine Burkette, a trailblazer in digital equity and information technology innovation, discussed leveraging AI technologies to develop forward-thinking solutions and capabilities.
AI is no longer just a trend, and it's important to consider who is building it. Now more than ever, it's crucial to create ethical and responsible AI that benefits everyone.
During the "Inclusive Intelligence: The Imperative of Diversity in AI" seminar, Keysha Cutts, a senior program manager at the U.S. Army Corps of Engineers, Manvee Sharma, an executive at Infosys, and Stephanie Vaughn, a computer science instructor in Detroit, shared different perspectives on AI.
They discussed how AI systems often inherit statistical biases from the data they are built on and why it is essential for the people and companies developing AI to care about this issue.
They also talked about a code of ethics for developers and the difference between AI and machine learning (ML): broadly speaking, a traditional AI system follows rules and models programmed by its developers, while an ML system learns from data and makes predictions based on observed behavior.
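To make that distinction concrete, here is a minimal, hypothetical sketch (not drawn from the seminar): one function applies a rule a developer wrote by hand, while the other learns a comparable rule from labeled historical examples. The credit-score scenario, threshold values, and function names are illustrative assumptions only.

```python
# Illustrative sketch: a hand-written rule versus a rule learned from data.

# Rule-based system: the developer encodes the decision logic directly.
def rule_based_approve(credit_score: int) -> bool:
    return credit_score >= 650  # threshold chosen by a human


# Machine learning: the threshold is learned from labeled examples instead.
def learn_threshold(examples: list[tuple[int, bool]]) -> int:
    # Pick the cutoff that classifies the most historical examples correctly.
    candidates = sorted({score for score, _ in examples})
    return max(
        candidates,
        key=lambda t: sum((score >= t) == approved for score, approved in examples),
    )


history = [(700, True), (640, False), (680, True), (600, False), (720, True)]
learned_cutoff = learn_threshold(history)

print(rule_based_approve(660))   # decision follows the hand-written rule
print(660 >= learned_cutoff)     # decision follows what the data implied
```

The learned cutoff depends entirely on the historical data it was fit to, which is why the panelists' point about statistical bias matters: if past decisions were skewed, a model trained on them will reproduce that skew.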
The seminar also raised the question of whether developers have an ethical responsibility when building tech.
One concerning issue is that facial recognition AI has higher error rates when identifying and differentiating people with darker skin.
Facial recognition systems often use passport and driver's license photos from federal databases, posing a risk of misidentification for Black and Brown individuals, as highlighted by Data for Black Lives.
Scientists and researchers have expressed concerns about the high misidentification rates of facial recognition technology developed by major tech companies.
Legislation and safeguards to protect against these issues are still lacking. This raises the question of why a code of ethics is needed and how we should approach building one.
In July 2024, the Biden-Harris administration announced new AI actions. Nine months prior, Biden issued a landmark executive order on managing the risks of artificial intelligence (AI).
This Executive Order is based on the voluntary commitments he and Vice President Harris received from 15 leading U.S. AI companies the previous year.
The administration announced that Apple had signed onto the voluntary commitments, further solidifying these commitments as cornerstones of responsible AI innovation.
In the 270 days that followed, the executive order directed agencies to take sweeping action to address AI's safety and security risks, including releasing vital safety guidance and building capacity to test and evaluate AI systems.
To protect safety and security, the AI Safety Institute (AISI) has released new technical guidelines to help leading AI developers evaluate and manage the risk of misuse of dual-use foundation models.
AISI's guidelines detail how leading AI developers can help prevent increasingly capable AI systems from being misused to harm individuals, public safety, and national security, as well as how developers can increase transparency about their products.