Google’s Agentspace will put AI agents in the hands of workers
Google has unveiled an AI agent builder tool designed to automate repetitive tasks and help workers find information held across their organization more quickly.
AI agents have become a major focus for software vendors in recent months, including Atlassian, Microsoft, Salesforce, and numerous others. The “agent” concept is used in different ways, but generally refers to software systems that are able to take actions on behalf of a user, with varying degrees of autonomy. IDC analysts predict that at least 40% of Global 2000 businesses will use AI agents and agentic workflows to automate knowledge work, doubling productivity in the process — at least in cases where the technology is successfully implemented.
On Friday, Google unveiled Agentspace, its own application where workers can access and build agents. The standalone app has three main purposes, according to Google.
One is to serve as the “launch point” for custom AI agents. These agents combine generative AI (genAI) large language models with multi-step workflows to automate repetitive tasks. Google said the application has an “intuitive interface” and is intended to serve as a space where workers can access pre-built agents created in Google’s Vertex AI Agent Builder. A low-code tool is also in the works to enable a wider range of employees to set up their own agents.
Agentspace also provides an enterprise search function that Google said will help workers find information held in applications across their organization, including both structured and unstructured data such as documents and emails. Agentspace search is “multimodal,” Google said, meaning it should be possible to search across video and image files as well as text documents.
Agentspace search can access data from a range of sources using connectors to third-party tools such as Confluence, Google Drive, Jira, Microsoft SharePoint, ServiceNow, and others.
Users can interact with a conversational assistant that responds to search queries. Agentspace agents will also perform actions based on the information held in customers’ documents, Google said.
Finally, NotebookLM is also embedded in the Agentspace app. Unveiled as an “experimental” tool by Google Labs last year before a wider release in September, NotebookLM is billed as a “virtual research assistant” that provides responses grounded in documents and data supplied by a user. This includes the ability to create podcast-style voice summaries of selected documents, for example.
Agentspace is available now in early access with a 90-day free trial; it will require a monthly per-user subscription fee after that period. Pricing details have yet to be announced, a Google spokesperson said.
Google this week announced a range of AI “agent” tools, including two research prototypes: Project Astra, which can perceive the physical world and provide assistance to users, and Project Mariner, which understands and can take action on the contents of a computer screen. These are powered by Gemini 2.0, Google’s latest AI model, which launched on Wednesday and is described by the company as its “model for the agentic era.”