AI is coming to Windows environments — which can be a big asset when implemented correctly and a security nightmare when it’s not.
AI is coming to desktops everywhere — is your security team ready for it? With its October security updates, Microsoft has begun a staged rollout of built-in artificial intelligence in the form of Copilot for Windows.
But before leaping to integrate Copilot into your systems, it's essential to review the policies and procedures your company or organization has in place governing the use of artificial intelligence. If you don't have such policies, now would be a good time to consider implementing them.
If your Windows 11 22H2 desktops are in a managed setting — that is, their updates are controlled by Windows Server Update Services (WSUS), Intune, or Windows Update for Business — Copilot will not be enabled.
If you are in the European Union, Copilot will not be enabled. If your desktops are patched via Windows Update and receive the October security release, they may start to see the Copilot module. Based on Bing Chat, it allows the user to ask questions and obtain answers.
Whether Copilot is enabled also depends on the user signing in with either a Microsoft account or an Entra ID (formerly Azure Active Directory) account. Copilot will not be enabled for users with only a local account or an on-premises Active Directory domain account.
Small changes have already been observed in the behavior of Microsoft's AI component. In early testing, Microsoft included links to advertisements in the chat windows, but recently I've noticed that ads are no longer included in responses. Clearly, changes are being made as feedback is received.
We've already seen studies reviewing the security of GitHub Copilot's code contributions. A paper published by researchers at Cornell University last August reviewed the impact of using AI in code and how secure, or how vulnerable, that code is when developers rely on GitHub Copilot to augment their coding skills.
The paper indicates that “as the user adds lines of code to the program, Copilot continuously scans the program and periodically uploads some subset of lines, the position of the user’s cursor, and metadata before generating some code options for the user to insert.”
The AI generates code that is functionally relevant to the program as implied by comments, docstrings, and function names, the paper states. “Copilot also reports a numerical confidence score for each of its proposed code completions, with the top-scoring (highest-confidence) score presented as the default selection for the user. The user can choose any of Copilot’s options.”
The study found that upon testing 1,692 programs generated in 89 different code-completion scenarios, 40% were found to be vulnerable. As the authors indicated, “while Copilot can rapidly generate prodigious amounts of code, our conclusions reveal that developers should remain vigilant (‘awake’) when using Copilot as a co-pilot. Ideally, Copilot should be paired with appropriate security-aware tooling during both training and generation to minimize the risk of introducing security vulnerabilities.”
Ultimately, you need to start thinking about, and planning for, your firm's implementation of any and all AI modules that will arrive in your operating systems, your API implementations, or your code. The use of AI doesn't mean the application or code is vetted by default — rather, it's just a different type of input that you need to review and manage.
In the case of Microsoft AI inputs coming to desktops and applications, some, like Copilot for Windows, come native to the platform at no additional cost and may be managed with Group Policy, Intune, or other management tools. Once you have deployed the October security updates to a sample Windows 11 22H2 workstation, your IT department can proactively manage Copilot in Windows using the Group Policy or Intune tools noted here.
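To make that concrete: the "Turn off Windows Copilot" Group Policy setting is backed by a per-user registry value. Here's a minimal Python sketch of what setting that value by hand looks like, assuming the documented policy path under HKCU; treat it as an illustration for testing, not a substitute for deploying the setting through Group Policy or Intune.

```python
import winreg

# Per-user policy key behind the "Turn off Windows Copilot" Group Policy
# setting (verify against your ADMX templates before relying on it).
KEY_PATH = r"Software\Policies\Microsoft\Windows\WindowsCopilot"

with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    # 1 = turn Copilot off for this user; remove the value to re-enable it.
    winreg.SetValueEx(key, "TurnOffWindowsCopilot", 0, winreg.REG_DWORD, 1)

print("Copilot policy set; sign out and back in for it to take effect.")
```

Managing the same setting centrally keeps it from drifting, which is why Group Policy or Intune remains the right delivery mechanism once testing is done.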
Note that what is rolling out in October is not the more impactful AI offering, Copilot for Microsoft 365. That offering will be included in Microsoft 365 and is expected to be priced at an additional $30 per user per month. With it, AI will be embedded in email, Word documents, and Excel files.
Thus, you'll need to set boundaries and review who has permission to access which information and documents on your network. If Copilot is relying on outdated information, its output will be less than ideal.
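As a starting point for that review, a script that walks a file share and flags overly broad access can surface documents an AI assistant could expose to the wrong audience. The sketch below is a minimal illustration using the pywin32 package; the share path and the list of "broad" groups are assumptions you'd tailor to your own environment.

```python
import os
import ntsecuritycon
import win32security  # pip install pywin32

SHARE = r"\\fileserver\finance"  # hypothetical share to audit
BROAD_GROUPS = {"Everyone", "Authenticated Users", "Domain Users"}

def broad_write_access(path):
    """Yield the names of broad groups holding write access to a file."""
    sd = win32security.GetFileSecurity(
        path, win32security.DACL_SECURITY_INFORMATION)
    dacl = sd.GetSecurityDescriptorDacl()
    if dacl is None:  # no DACL means everyone has full access
        yield "Everyone (no DACL)"
        return
    for i in range(dacl.GetAceCount()):
        ace = dacl.GetAce(i)
        ace_type, mask, sid = ace[0][0], ace[1], ace[2]
        if ace_type != ntsecuritycon.ACCESS_ALLOWED_ACE_TYPE:
            continue
        try:
            name, _domain, _kind = win32security.LookupAccountSid(None, sid)
        except win32security.error:
            continue  # unresolvable SID, e.g. a deleted account
        if name in BROAD_GROUPS and mask & ntsecuritycon.FILE_GENERIC_WRITE:
            yield name

for root, _dirs, files in os.walk(SHARE):
    for f in files:
        path = os.path.join(root, f)
        for account in broad_write_access(path):
            print(f"{account} has write access to {path}")
```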
In addition to the normal end-user security training you provide to your staff now, implement AI awareness training, and ensure that private or sensitive information is not entered into chat or AI input windows.
For example, a financial investment firm that has fully implemented Copilot for Windows and Microsoft 365 Copilot would need to add security awareness training to ensure that sensitive financial information does not get used (and potentially exposed) in the input windows.
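Training can also be backed up with simple technical guardrails. As a hypothetical illustration (not a Microsoft feature), the sketch below screens text for patterns such as US Social Security numbers and account numbers before it reaches an AI input window; the patterns are assumptions to adapt to your own data-classification rules.

```python
import re

# Hypothetical patterns for data that should never reach an AI prompt.
SENSITIVE_PATTERNS = {
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "account number": re.compile(r"\bacct(?:ount)?\s*#?\s*\d{6,}\b", re.I),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Summarize the dispute on acct #00123456 for SSN 123-45-6789."
    hits = screen_prompt(prompt)
    if hits:
        print("Blocked: prompt contains", ", ".join(hits))
```

A real deployment would lean on a data loss prevention (DLP) product rather than hand-rolled regexes, but the exercise of enumerating what counts as sensitive input is the same either way.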
If you work in the EU, be aware that because of the Digital Markets Act (DMA), Copilot is not being implemented there for now. Thus, workstations set to EU locales will not receive the Copilot rollout. The DMA is designed to prevent companies from monopolizing the marketplace and to give other, local companies an equal and fair chance to compete.
Microsoft is working on a version that will work in the EU and abide by its laws and regulations. If you are located in the EU and your IT staff wants to test and review the upcoming release, you can launch Copilot by building a shortcut to this link.
Using the Copilot feature is not illegal or restricted if you are in the EU, so you may wish to test it out, review the implications, and set your firm’s policy now before it’s released in a future update. It’s expected to be fully implemented in the 23H2 release later this year.
The bottom line is that you'll want to test and review these AI solutions being delivered to your doorstep, because they are coming to your network faster than you think. Ensure your security teams have written policies, reviewed the impact, and considered where AI will help your network and where it should not be implemented. Artificial intelligence, when implemented correctly, can be a help to your endeavors. When it's not, it can be a security nightmare. Plan now, before it arrives on your desktop.
Susan Bradley has been patching since before the Code Red/Nimda days and remembers exactly where she was when SQL Slammer hit (trying to buy something on eBay and wondering why the Internet was so slow). She writes the Patch Watch column for Askwoody.com, is a moderator on the PatchManagement.org listserv, and writes a column of Windows security tips for CSOonline.com. In real life, she's the IT wrangler at her firm, Tamiyasu, Smith, Horn and Braun, where she manages a fleet of Windows servers, Microsoft 365 deployments, Azure instances, desktops, a few Macs, several iPads, a few Surface devices, several iPhones and tries to keep patches up to date on all of them. In addition, she provides forensic computer investigations for the litigation consulting arm of the firm. She blogs at https://www.askwoody.com/tag/patch-lady-posts/ and is on Twitter at @sbsdiva. She lurks on Twitter and Facebook, so if you are on Facebook with her, she really did read what you posted. She has a SANS/GSEC certification in security and prefers Heavy Duty Reynolds wrap for her tinfoil hat.