Microsoft takes aim at local devs with Windows AI Studio


Ignite As far as Microsoft is concerned, AI is to be inescapable for its customers – even those who want to keep their software development and deployment confined to the PCs in front of them and their users. Welcome to the Windows AI Studio.

The Windows AI Studio is aimed at developers who want to join the generative-AI app development world but would prefer to work locally rather than invest in cloud resources.

Due to appear in the next few weeks as a Visual Studio Code extension, it is intended to bring together AI development tools and models from the Azure AI Studio so developers can fine-tune and deploy small language models (SLMs) for local use in Windows apps.

The goal, it seems to us, is to allow not only the local development of machine-learning code but also its testing and execution on local Windows machines.


Naturally, Microsoft is also eager that developers consider the Azure option, and it emphasized the possibility of hybrid scenarios where models can be run in its public cloud, locally, or as a combination of the two.

According to Microsoft: “Prompt Flow makes it easier than ever to implement this hybrid pattern by switching between local SLMs and cloud LLMs.”
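The hybrid pattern Microsoft describes amounts to routing each request to either a local SLM or a cloud LLM. The sketch below is purely illustrative and does not use the Prompt Flow API; the model callables, the `route` function, and the length-based policy are all hypothetical stand-ins for whatever runtime and cloud endpoint a real app would wrap.

```python
# Hypothetical sketch of the hybrid local/cloud pattern (not Prompt Flow's
# actual API). The two model functions are stubs: in a real app, local_slm
# would wrap a locally hosted model such as Phi, and cloud_llm would call
# a cloud-hosted LLM endpoint.

from typing import Callable


def local_slm(prompt: str) -> str:
    # Stand-in for a locally running small language model.
    return f"[local] {prompt}"


def cloud_llm(prompt: str) -> str:
    # Stand-in for a cloud-hosted large language model.
    return f"[cloud] {prompt}"


def route(prompt: str,
          prefer_local: bool = True,
          local: Callable[[str], str] = local_slm,
          cloud: Callable[[str], str] = cloud_llm) -> str:
    """Send short prompts to the local SLM when preferred; otherwise
    fall back to the cloud model. The 500-character cutoff is an
    arbitrary example policy, not anything Microsoft has specified."""
    if prefer_local and len(prompt) < 500:
        return local(prompt)
    return cloud(prompt)
```

A caller could flip `prefer_local` per request, or swap in a smarter policy (latency budget, data-sensitivity rules) without touching the model wrappers.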

Ultimately, Windows AI Studio is more of a tool for creating a workspace where developers can code generative AI applications. A guided workspace set-up includes a model configuration UI and walkthroughs to fine-tune SLMs such as Phi and bigger models like Llama 2 and Mistral.

Microsoft described the upcoming Visual Studio Code extension as “a familiar and seamless interface to help you get started with AI development. The guided interface allows you to focus on what you do best, coding, while we do all the heavy lifting by setting your developer environment with all the tools needed.”

That heavy lifting in Visual Studio Code consists of selecting the model – currently, only Llama 2 variants and Microsoft’s Phi are available, although this is preview stuff – and configuring it before the developer dives in for fine-tuning with their own data set.

There are also plans to highlight state-of-the-art models optimized specifically for Windows GPUs and NPUs, but Microsoft did not give a specific timescale for this feature. ®
