Artificial intelligence (AI) promises to be one of the most transformative technologies of our time, with the potential to drive tremendous efficiencies, insights, and innovations across industries. However, without thoughtful governance, AI also poses risks related to issues like algorithmic bias, data privacy, cybersecurity, and more.
That’s why in October 2023, the White House released a sweeping executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This order lays out a coordinated approach and set of principles for responsibly governing the development and use of AI technologies across areas like safety, innovation, privacy, equity, and more.
For business leaders, this executive order signals both a new regulatory environment that your AI strategy will need to align with and new opportunities to leverage AI for good. In this comprehensive overview, we’ll break down:
Key provisions of the order relevant for business
How to invest in responsible AI development aligned to new requirements
The critical role of cybersecurity and privacy protection
Ensuring fairness and transparency in AI systems
Special considerations for critical infrastructure and generative AI models
Tips for regulatory reporting and compliance
At its core, this executive order aims to balance two complementary goals:
Maximize AI’s benefits: Promoting innovation, commercialization and beneficial AI applications
Manage AI’s risks: Ensuring safety, preventing harms, and mitigating issues such as bias, inequity, and disinformation
To achieve these goals, there is heavy emphasis on public-private partnership throughout the order, signaling that businesses have an important role to play.
Your strategic response should focus on aligning your AI programs and practices with the new regulatory environment while also seizing fresh opportunities to innovate with AI across your business.
Key aspects of the order that will compel a strategic response include:
Section 4.2 mandates new reporting rules around the development of large AI models that the government deems ‘dual-use’, meaning they have potential for both commercial and military applications.
Specifically, any entity developing dual-use models (defined by a high computing-power threshold) must file regular reports with the Secretary of Commerce detailing:
Activities related to training, developing or producing the model
Physical and cybersecurity controls
Ownership statements and access controls for model weights
Results of red team testing to identify model vulnerabilities and risks
What this signals is far greater scrutiny and transparency requirements around powerful AI models that may have national security implications.
If your business is investing significantly in this space and developing models that meet the technical thresholds described, your strategy will need to adapt to meet these disclosure rules and implement the required protocols.
Conversely, businesses focused on the commercial application of AI may find exciting new partnership opportunities created from the order.
Section 5 on promoting innovation calls for major new investments in AI education, workforce training programs, grants, and incentives, especially those targeted at advancing privacy-enhancing tools.
Specific actions outlined include:
Launching a National AI Research Resource (NAIRR) pilot program
Funding four new National AI Research Institutes
500 new AI scholarships for experts by 2025
For many companies, these initiatives open promising channels to collaborate with agencies like the NSF, DOE, and others to develop AI innovations and take them to market responsibly.
Equity and inclusion are also front and center, meaning steps to increase access and representation may be weighed in applications for grants and partnerships.
Taken together, between new reporting requirements and new doors for innovation, the order demands that business strategy evolve to meet the new environment. Let’s now discuss some functional areas in more depth.
Given AI’s reliance on data to function, the order takes clear aim at not only ensuring data privacy but also securing AI supply chains through enhanced cyber protections.
These specific measures warrant immediate business attention:
The order calls for agencies to carry out Privacy Impact Assessments on their use of personal data and commercially available information, especially as it relates to AI systems. Stronger constraints may be introduced around agencies sharing or acquiring such data from private partners.
As you evaluate AI applications to invest in, scrutinize what data is actually required to fulfill the intended purpose. Seek guidance from privacy professionals on recording and sharing protocols given regulatory uncertainty. Build systems limiting data use to only what is essential for the AI model’s efficacy.
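As an illustrative sketch of that last point, the snippet below enforces a column allowlist before data reaches a training pipeline; the field names and dataset are hypothetical, and a production system would tie the allowlist to a documented purpose assessment:

```python
import pandas as pd

# Hypothetical allowlist of fields deemed essential to the model's purpose.
ESSENTIAL_FIELDS = ["age_band", "region", "account_tenure_months", "monthly_usage"]

def minimize(df: pd.DataFrame) -> pd.DataFrame:
    """Drop every column not explicitly approved for this AI use case."""
    dropped = [c for c in df.columns if c not in ESSENTIAL_FIELDS]
    if dropped:
        print(f"Data minimization: excluding {dropped}")
    return df[[c for c in ESSENTIAL_FIELDS if c in df.columns]]

raw = pd.DataFrame({
    "age_band": ["25-34"],
    "region": ["US-East"],
    "account_tenure_months": [18],
    "monthly_usage": [42.0],
    "email": ["user@example.com"],  # sensitive and not needed by the model
})
train_ready = minimize(raw)  # prints: Data minimization: excluding ['email']
```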
There are also clear signals that Privacy Enhancing Technologies (PETs) such as homomorphic encryption, zero-knowledge proofs, and federated learning will increasingly become industry standard.
Section 9 specifically tasks NSF with funding PETs research and creating incentives to accelerate technology transfer into real systems. It also calls for NIST to develop PETs guidelines for federal agencies within a year.
Again, partnership opportunities clearly exist here for commercial entities focused on PETs. Even if PETs are not your core business, evaluate how you might incorporate techniques like differential privacy or synthetic data generation across AI efforts to limit exposure of sensitive personal information.
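To make differential privacy concrete, here is a minimal sketch of the classic Laplace mechanism applied to a counting query; the epsilon value and data are illustrative, and real deployments should use a vetted library with privacy-budget accounting rather than this toy:

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy statistic satisfying epsilon-differential privacy
    via the Laplace mechanism (noise scale = sensitivity / epsilon)."""
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

ages = [34, 29, 41, 52, 38]
# A counting query has sensitivity 1: adding or removing one person
# changes the count by at most 1.
noisy_count = laplace_mechanism(len(ages), sensitivity=1.0, epsilon=0.5)
print(f"Differentially private count: {noisy_count:.1f}")
```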
Fragmented access to data alone cannot guarantee protection in today’s threat landscape; sophisticated attackers can combine anonymized datasets with unprotected leaks to re-identify individuals, to devastating effect.
Hence the order also focuses on enhanced cyber protections when handling dual-use AI models. Agencies must implement physical and network security with access controls and external penetration testing.
For companies training powerful models, this is a reminder to treat your AI intellectual property as a critical asset requiring maximum security. Monitor access, implement the principle of least privilege, maintain meticulous access logs, and mandate rigorous penetration testing to uncover gaps.
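A minimal sketch of that advice follows: a gatekeeper function that enforces a role allowlist on model-weight files and writes a structured audit log for every access attempt. The role names are hypothetical, and a real deployment would integrate with your IAM system rather than an in-code set:

```python
import hashlib
import json
import logging
from datetime import datetime, timezone
from pathlib import Path

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("model_weight_access")

# Hypothetical role allowlist: least privilege for model-weight files.
AUTHORIZED_ROLES = {"ml-lead", "security-auditor"}

def load_weights(path: Path, user: str, role: str) -> bytes:
    """Gate access to model weights and record every attempt for audit."""
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "file": str(path),
    }
    if role not in AUTHORIZED_ROLES:
        record["outcome"] = "denied"
        audit_log.warning(json.dumps(record))
        raise PermissionError(f"{user} ({role}) may not read model weights")
    data = path.read_bytes()
    record["outcome"] = "granted"
    record["sha256"] = hashlib.sha256(data).hexdigest()  # integrity fingerprint
    audit_log.info(json.dumps(record))
    return data
```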
Additionally, Section 4.2 introduces requirements compelling disclosure when foreign entities use US cloud infrastructure to train large AI models. This data can support law enforcement investigations into foreign threats.
If your development teams employ overseas talent or outsourced vendors, be aware of this visibility mandate coming into effect across all major cloud platforms.
A core theme across Biden’s executive order is the emphasis on not just avoiding harm from AI systems but intentionally maximizing their societal benefit.
Unlike previous policies focused narrowly on increasing AI leadership or capabilities, this order consistently makes equity, accessibility, accountability, and transparency design objectives for any government-backed initiative.
Below are two crucial areas for businesses to focus on from an ethical AI perspective:
The guidance in Section 10 makes it compulsory for agencies to carry out external audits checking for algorithmic bias or discrimination in AI systems that impact people’s rights. This includes documentation, continuous monitoring in production via equity guardrails, and human oversight processes providing recourse against solely automated decisions.
For business applications as well, instituting checks on training data, rigorous pre-deployment testing, ongoing performance tracking across demographic segments, and keeping a “human in the loop” are all best practices you should be implementing to ensure fairness and accountability in AI systems; a simple segment-level check is sketched below. Consult policy experts on the current regulatory landscape (not just this order!) to confirm your obligations.
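As a concrete example of segment-level tracking, the sketch below computes a demographic parity gap, the difference in positive-prediction rates between groups. The data, groups, and tolerance threshold are all illustrative; which fairness metric is appropriate depends on the use case:

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Toy predictions for two hypothetical customer segments, A and B.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

gap = demographic_parity_gap(y_pred, group)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # tolerance is illustrative, not drawn from the order
    print("Gap exceeds internal tolerance; route to human review.")
```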
Given the rise of generative AI like DALL-E 2 for synthetic image creation and GPT-3 for natural language generation, the order also focuses on standards for content authentication and attribution.
Agencies must institute mechanisms for labeling and signaling content created or modified by AI systems, so that provenance is preserved and downstream misuse can be contested. Robust data lineage itself acts as a transparency safeguard.
For commercially released generative models, build capabilities to tag or watermark outputs as AI-generated, establish notices to users on the scope of permissible usage, and implement access controls or audit logs allowing traceability when needed.
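One minimal way to tag outputs is to attach a structured provenance record at generation time, as sketched below; the model identifier is hypothetical, and for media content, emerging standards such as C2PA content credentials offer a more interoperable approach:

```python
import hashlib
import json
from datetime import datetime, timezone

def tag_ai_output(content: str, model_id: str) -> dict:
    """Attach a provenance record so downstream consumers can
    verify origin and detect tampering."""
    return {
        "content": content,
        "provenance": {
            "generator": model_id,
            "ai_generated": True,
            "created_utc": datetime.now(timezone.utc).isoformat(),
            # The hash binds the record to this exact output.
            "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        },
    }

record = tag_ai_output("A synthetic product description...", "acme-llm-v2")
print(json.dumps(record, indent=2))
```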
Taken together, these provisions compel businesses to make ethical AI frameworks centered on fairness and transparency integral to strategy rather than an afterthought.
Modern critical infrastructure sectors like finance, energy and transportation are increasingly incorporating AI capabilities in operations for efficiency gains.
However, from a cybersecurity lens, increasing autonomy and integration also expand the potential attack surface for malicious actors. Recognizing this, Biden’s order calls for tailored guidelines and oversight programs to provide AI assurance in national critical infrastructure.
Two specific areas warranting business focus are:
Through Section 4.3, within 180 days the Secretary of Homeland Security must formalize sector-specific guidelines for safe AI usage tailored to critical infrastructure systems. These are meant to integrate with existing protocols like the NIST AI Risk Management Framework.
Subsequently, heads of other federal agencies have to mandate these same guidelines or portions within their jurisdictions and programs. Independent regulators are also urged to enforce the guidelines on industries they oversee.
What this translates to is mandatory AI safety practices becoming integrated into certification processes for critical infrastructure vendors. From smart power-grid components to autonomous vehicles, if your technologies serve critical infrastructure end uses, strict assessment against the new criteria will take effect through 2024.
Explicit attention is also directed at Generative AI’s potential to exponentially enhance disinformation, cyberattacks and infrastructure disruption risks.
Through Section 4.6, recommendations must be formulated within 9 months on managing threats from models whose core weights are publicly accessible. Stricter constraints on accessing service platforms offering large language models may be introduced.
For vendors building on open-source models, or using repositories like Hugging Face, be prepared for much more scrutiny of reliability standards and access protocols before critical infrastructure integration. Regulatory discussions are likely on this front in 2024.
While presidential executive orders represent policy vision rather than statute, agencies have latitude in translating the guidance into new certification policies and compliance burdens on contractors. This order seeds several such possibilities for AI over the coming year that businesses should monitor closely.
With NIST documents on assessing AI trustworthiness and safety already planned, security frameworks like CMMC will inevitably need to integrate AI-specific criteria as well. The order calls for the FAR Council to take this into account, signaling future Code of Federal Regulations changes placing AI requirements on contractors.
Start building understanding across your firm of draft NIST guidance on AI trustworthiness, as well as the best practices suggested in the NIST AI Risk Management Framework. Identify gaps against eventual compliance controls once changes to procurement protocols take effect; a simple gap-analysis sketch follows.
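A gap analysis can start as something as simple as the sketch below, keyed to the four NIST AI RMF core functions (Govern, Map, Measure, Manage); the individual controls listed are illustrative placeholders, not the framework’s own language:

```python
# Minimal gap-analysis sketch keyed to the four NIST AI RMF core functions.
# The controls listed are illustrative examples, not the framework's own text.
CHECKLIST = {
    "Govern": ["AI policy approved by leadership", "Roles and accountability defined"],
    "Map": ["AI use cases inventoried", "Impacted stakeholders identified"],
    "Measure": ["Bias and robustness metrics tracked", "Red-team results documented"],
    "Manage": ["Incident response covers AI failures", "Decommissioning criteria set"],
}

implemented = {"AI policy approved by leadership", "AI use cases inventoried"}

for function, controls in CHECKLIST.items():
    gaps = [c for c in controls if c not in implemented]
    print(f"{function:8s} {'OK' if not gaps else 'GAPS: ' + '; '.join(gaps)}")
```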
By far the most impactful development is a government-wide mandate for every federal agency to appoint a Chief AI Officer within months of the order’s issuance. This central leadership is meant to govern and facilitate access to AI tools across all departments.
With over $600 billion in annual contract spending, agencies adopting AI solutions tailored to their missions will compel private sector partners to demonstrate adherence to the evolving standards as a prime eligibility criterion.
The formation of agency-level AI governance boards also provides a primary contact point for airing compliance concerns related to the new provisions. Maintain engagement channels with these offices as they are set up.
New guidance will also advise procurement bodies on evaluating vendor claims around:
Effectiveness and accuracy of AI tools
Risk mitigation capabilities
Fairness, bias and safety standard compliance
Independent audits of proprietary models, or benchmarks on representative data, may be mandated to substantiate claims during acquisition.
For commercial AI teams, this introduces the need to have client-accessible validation reports ready that clear quantitative thresholds, especially for sensitive use cases like healthcare and finance.
Start compiling empirical evidence on salient parameters now to streamline adoption cycles later; one way to package that evidence is sketched below.
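The sketch below compiles a machine-readable validation report comparing measured metrics against target thresholds; the model name, metrics, and thresholds are all hypothetical:

```python
import json
from datetime import date

def build_validation_report(model_id: str, metrics: dict, thresholds: dict) -> dict:
    """Compile a client-shareable summary comparing each measured metric
    against its target threshold. Names and thresholds are illustrative."""
    results = {
        name: {
            "value": value,
            "threshold": thresholds[name],
            "pass": value >= thresholds[name],
        }
        for name, value in metrics.items()
    }
    return {"model": model_id, "date": str(date.today()), "results": results}

report = build_validation_report(
    "claims-triage-v3",
    metrics={"accuracy": 0.94, "worst_segment_recall": 0.88},
    thresholds={"accuracy": 0.90, "worst_segment_recall": 0.85},
)
print(json.dumps(report, indent=2))
```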
In short, businesses must recognize that AI oversight responsibilities will permeate the sprawling federal bureaucracy. Interpreting requirements and compliance obligations early is essential.
Common across nearly all the measures described in Biden’s executive order is the need to massively scale specialized AI talent within government, and by extension, within the private sector as well.
Agencies are exhorted to expand direct hiring programs, connect with technical trade groups for recruitment help, and increase professional training for existing employees on the paradigm shifts ushered in by AI adoption at scale.
In truth, the ambitious policy visions contained here mean little without capable hands to put them into practice judiciously across contexts.
For executives, this talent crunch puts all the more emphasis on guarding internal AI experts jealously and investing continuously in their development.
Here are some tips:
Cross-train software developers on understanding unintended consequences from narrowly focused models.
Incentivize data scientists to rotate into domain-specific oversight roles analyzing model risk and bias issues.
Grow a compliance bench able to reliably audit AI systems on safety parameters.
Ensure representation from communities likely to be disadvantaged by existing societal asymmetries, rather than relying on teams disconnected from ground realities.
Cultivating a holistic AI talent strategy – spanning technical build to oversight – is the surest lever for sustainable commercial success even as regulatory undertones shift.
Biden’s sweeping executive order sets an ambitious agenda for responsible AI spanning security, privacy, ethics, and innovation.
For business leaders and technology strategists, it signals both new governance obligations and openings for public-private collaboration.
Key action items include:
Expect greater transparency requirements on dual use AI models
Seize opportunities for research partnerships and incentives
Enhance data protection and cyber vigilance
Prepare for tighter integration mandates between agency guidelines and critical infrastructure AI controls
Closely track Chief AI Officer appointments and the reporting structures set up by agencies
Start compiling validation materials to ease compliance burdens
This order undoubtedly kickstarts a new era of transformation in how AI governance takes shape. Stay tuned as the public comment periods give it further shape through 2024!
We hope this overview has shed light on the key provisions in Biden’s executive order that technology leaders need to monitor. Responsible AI governance that balances innovation with ethics is a shared responsibility.