---
title: Google AI Tools List
date: '2026-01-25T10:35:32+05:30'
categories:
  - tools
  - llms
description: Google's AI ecosystem is broad and uneven, so a practical tool map matters more than a comprehensive catalog.
keywords: [Google AI, tools list, product landscape, Gemini, productivity, evaluation]
---

Google has released a huge number of AI tools. Not all are useful, but some are quite powerful. Here's a list of the tools [ChatGPT](https://chatgpt.com/share/6975a939-0398-8003-beea-2bc4c32f8ba8) could find.

🟢 = I find it good. 🟡 = Not too impressive. 🔴 = Avoid.

- Assistants, research, and knowledge work
  - 🟢 [Gemini](https://gemini.google.com/) is Google's main AI assistant app. Use it as a _meeting-prep copilot_: paste the agenda and the last email thread, then ask for "3 likely objections + crisp rebuttals + 5 questions that sound like I did my homework."
  - 🟢 [Gemini Deep Research](https://gemini.google/overview/deep-research/) is Gemini's agentic research mode that browses many sources (optionally your Gmail/Drive/Chat) and produces multi-page reports. Use it to build a _client brief with citations_ (market, competitors, risks), then reuse it for outreach or a deck outline.
  - 🟢 [Gemini Canvas](https://gemini.google/overview/canvas/) turns ideas (and Deep Research reports) into shareable artifacts like web pages, quizzes, and simple apps. Use it to convert a research report into an _interactive explainer page_ your team can share internally.
  - 🟢 [Gemini Agent](https://gemini.google/overview/agent/) is an experimental "do multi-step tasks for me" feature that can use connected apps (Gmail/Calendar/Drive/Keep/Tasks, plus Maps/YouTube). Use it to _plan a week of customer check-ins_: "find stalled deals, draft follow-ups, propose times, and create calendar holds; show me before sending."
  - 🟢 [NotebookLM](https://notebooklm.google.com/) is a source-grounded research notebook: it answers from your uploaded sources and can generate Audio Overviews.
    Use it to turn a messy folder of PDFs into a _decision memo_ plus an "AI podcast" you can listen to while walking.
  - 🟡 [Pinpoint](https://pinpoint.google.com/) (Journalist Studio) helps explore huge collections of docs/audio/images with entity extraction and search. Use it for _internal investigations / audit trails_: upload contracts and emails, then trace every mention of a vendor and its linked people and locations.
  - 🟢 [Google AI Mode](https://www.google.com/search?udm=50) opens Google Search in its conversational AI Mode (where available). Use it for _rapid competitive scans_: run the same query set weekly and track what changed in the AI-generated summaries vs. the links.
  - [Project Mariner](https://blog.google/technology/google-labs/project-mariner/) is a Google Labs "agentic" prototype aimed at taking actions on your behalf in a supervised way. Use it to _prototype a real workflow_ (e.g., "collect pricing from 20 vendor pages into a table") before you invest in automating it properly.
- Workspace and "AI inside Google apps"
  - 🟢 [Google Workspace with Gemini](https://workspace.google.com/) brings Gemini into Gmail/Docs/Sheets/Drive, etc. Use it to _turn a weekly leadership email_ into: (1) action items per owner, (2) a draft reply, and (3) a one-slide summary for your staff meeting.
  - [Google Vids](https://workspace.google.com/products/vids/) is Workspace's AI-assisted video creation tool. Use it to convert a project update doc into a _2-3 minute narrated update video_ for stakeholders who don't read long emails.
  - [Gemini for Education](https://edu.google.com/intl/ALL_in/ai/gemini-for-education/) packages Gemini for teaching and learning contexts. Use it to generate _differentiated practice_: the same concept at three difficulty levels, plus a rubric and common misconceptions.
- Build: developer + agent platforms
  - 🟢 [Google AI Studio](https://aistudio.google.com/) is the fast path to prototyping with Gemini models and tools.
    Use it to build a _"contract red-flagger"_: upload a contract, extract clauses into structured JSON, and generate a risk report you can paste into your workflow.
  - [Firebase Studio](https://firebase.studio/) is a browser-based "full-stack AI workspace" with agents, unifying Project IDX into Firebase. Use it to ship a _real internal tool_ (auth + UI + backend) without local setup, then deploy with Firebase/Cloud Run.
  - 🟢 [Jules](https://jules.google/) is an autonomous coding agent that connects to your GitHub repo and works through larger tasks on its own. E.g. give it "upgrade dependencies, fix the failing tests, and open a PR with a clear changelog," then review it like a teammate's PR instead of doing the grind yourself.
  - [Jules Tools (CLI)](https://jules.google/docs/cli/reference/) is a command-line interface for running and monitoring Jules from your terminal or CI. E.g. pipe a TODO list into "one task per session," auto-run nightly maintenance (lint/format/test fixes), and have it open PRs you can batch-review in the morning.
  - [Jules API](https://developers.google.com/jules/api) lets you programmatically trigger Jules from other systems. E.g. when a build fails, your pipeline can call the API with logs and a stack trace, have Jules propose a fix plus tests, and post a PR link back into Slack/Linear for human approval.
  - [Project IDX > Firebase Studio](https://idx.dev/) is the transition site if you used IDX. Use it to keep your existing workspaces but move to the newer Studio flows (agents + Gemini assistance).
  - [Genkit](https://genkit.dev/) is an open-source framework for building AI-powered apps (workflows, tool use, structured output) across providers. Use it to productionize an _agentic workflow_ (RAG + tools + eval) with a local debugging UI before deployment.
  - [Stax](https://stax.withgoogle.com/) is Google's evaluation platform for LLM apps (prompts, models, and end-to-end behaviors), built to replace "vibe testing" with repeatable scoring. E.g.
    codify your product's rubric (tone, factuality, refusal correctness, latency), run it against every prompt/model change, and block releases when key metrics regress.
  - [SynthID](https://deepmind.google/models/synthid/) is DeepMind's watermarking approach for identifying AI-generated or AI-altered content. E.g. in an org that publishes lots of content, watermark what your tools generate and use detection as part of provenance checks before external release.
  - [SynthID Text](https://ai.google.dev/responsible/docs/safeguards/synthid) is the developer-facing tooling/docs for watermarking and detecting LLM-generated text. E.g. watermark outbound "AI-assisted" customer emails and automatically route them for review if they're about regulated topics.
  - [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) is Google's "safeguards" hub: watermarking, safety classifiers, and guidance to reduce abuse and failure modes. E.g. wrap your app with layered defenses (input filtering + output moderation + policy tests) so one jailbreak prompt doesn't become a security incident.
  - [Vertex AI Agent Builder](https://cloud.google.com/products/agent-builder) is Google Cloud's platform to build, deploy, and govern enterprise agents grounded in enterprise data. Use it to build a _customer-support agent_ that can read policy docs, query BigQuery, and write safe responses with guardrails.
  - [Gemini Code Assist](https://codeassist.google/) is Gemini in your IDE (and beyond) with chat, completions, and agentic help. Use it for _large refactors_: ask it to migrate a module, generate tests, and propose PR-ready diffs with explanations.
  - [PAIR Tools](https://pair.withgoogle.com/tools/) is Google's hub of practical tools for understanding and debugging ML behavior (especially interpretability and fairness). E.g.
    before launch, run "slice analysis + counterfactual edits + feature sensitivity" to find where the model breaks on real user subgroups.
  - [LIT (Learning Interpretability Tool)](https://pair-code.github.io/lit/) is an interactive UI for probing models on text/image/tabular data. E.g. debug prompt brittleness by comparing outputs across controlled perturbations (tense, style, sensitive attributes) and visualizing salience/attribution to see what the model is actually using.
  - [What-If Tool](https://pair-code.github.io/what-if-tool/) is a minimal-coding tool to probe model predictions and fairness. E.g. manually edit a single example into multiple "what-if" counterfactuals, see which feature flips the decision, then turn that into a targeted data collection plan.
  - [Facets](https://pair-code.github.io/facets/) helps you explore and visualize datasets to catch skew, outliers, and leakage early. E.g. audit a training set for missingness and subgroup imbalance, then fix the data before you waste time "tuning your way out" of a data problem.
  - 🟡 [Gemini CLI](https://github.com/google-gemini/gemini-cli) brings Gemini into the terminal with file ops, shell commands, and search grounding. Use it as a _repo-native "ops copilot"_: "scan logs, find the regression, propose the patch, run tests, and summarize."
  - 🟡 [Antigravity](https://deepmind.google/products/antigravity/) (DeepMind) is positioned as an agentic development environment. Use it when you want _multiple agents running tasks in parallel_ (debugging, refactoring, writing tests) while you supervise.
  - [Gemini for Google Cloud](https://docs.cloud.google.com/gemini/docs/overview) is Gemini embedded across many Google Cloud products. Use it for _cloud incident triage_: summarize logs, hypothesize the root cause, and generate the Terraform/IaC fix.
- Create: media, design, marketing, and "labs" tools
  - [Google Labs](https://labs.google/) is the hub for many experiments (Mixboard, Opal, CC, Learn Your Way, Doppl, etc.).
    Use it as your "what's new" page; many tools show up here before they become mainstream.
  - 🟡 [Opal](https://opal.google/) builds, edits, and shares AI mini-apps from natural language (with a workflow editor). Use it to create a _repeatable analyst tool_ (e.g., "take a company name > pull recent news > summarize risks > draft outreach").
  - 🟡 [Mixboard](https://mixboard.google.com/projects) is an AI concepting canvas/board for exploring and refining ideas. Use it to run a _structured ideation sprint_: generate 20 variants, cluster them, then turn the top 3 into crisp one-pagers.
  - [Pomelli](https://labs.google.com/pomelli/about/) is a Labs marketing/brand tool that can infer brand identity and generate on-brand campaign assets. Use it to produce a _month of consistent social posts_ from your website plus a few product photos.
  - 🟡 [Stitch](https://stitch.withgoogle.com/) turns prompts and sketches into UI designs and code. Use it to go from a rough wireframe to _React/Tailwind starter code_ you can hand to an engineer the same day.
  - 🟡 [Flow](https://labs.google/fx/tools/flow/) is a Labs tool aimed at AI video/story production workflows (built around Google's gen-media stack). Use it to create a _pitch sizzle reel_ quickly: consistent characters, scenes, and a simple timeline.
  - [Whisk](https://labs.google/fx/tools/whisk/) is a Labs image tool focused on controllable remixing (subject/scene/style workflows). Use it for _fast, art-directable moodboards_ when text prompting is too loose.
  - [ImageFX](https://labs.google/fx/tools/image-fx/) is Google Labs' image-generation playground. Use it to iterate on _brand-safe visual directions_ quickly (e.g., generate 30 "hero image" variants, pick 3, then refine).
  - [VideoFX](https://labs.google/fx/tools/video-fx/) is the Labs surface for generative video (Veo-powered). Use it to prototype _short looping video backgrounds_ for product pages or events.
  - [MusicFX](https://labs.google/fx/tools/music-fx/) is the Labs music generation tool.
    Use it to generate _royalty-free stems_ (intro/outro/ambient) for podcasts or product videos.
  - [Doppl](https://labs.google/doppl) is a Labs virtual try-on experiment/app. Use it to sanity-check _creative wardrobe ideas_ before you buy, or to mock up "virtual merch" looks for a campaign.
  - 🟢 [Gemini Storybook](https://gemini.google/overview/storybook/) creates illustrated stories. Use it to generate _custom reading material_ for a specific learner's interests (and adjust the reading level and style).
  - [TextFX](https://textfx.withgoogle.com/) is a Labs-style writing creativity tool (wordplay, transformations, constraints). Use it to generate _10 distinct "hooks"_ for the same idea before you write the real piece.
  - [GenType](https://labs.google/gentype) is a Labs experiment for AI-generated alphabets/type. Use it to create _a distinctive event identity_ (custom letterforms) without hiring a type designer for a one-off.
- Science, security, and "serious AI"
  - [AlphaFold Server](https://alphafoldserver.com/) provides AlphaFold structure prediction as a web service. Use it to test _protein/ligand interaction hypotheses_ before spending lab time or compute on deeper simulations.
  - [Google Threat Intelligence](https://cloud.google.com/security/products/threat-intelligence) uses Gemini to help analyze threats and triage signals. Use it to turn a noisy alert stream into a _prioritized, explainable threat narrative_ your SOC can act on.
- Models
  - 🟡 [Gemma](https://deepmind.google/models/gemma/) is DeepMind's family of lightweight open models built from the same tech lineage as Gemini. E.g. run a small, controlled model inside your VPC for narrow tasks (classification, extraction, safety filtering) when sending data to hosted LLMs is undesirable.
  - 🟡 [Model Garden](https://cloud.google.com/model-garden) is Vertex AI's catalog to discover, test, customize, and deploy models from Google and partners. E.g.
    shortlist 3 candidate models, run the same eval set against each, then deploy the winner behind one standardized platform with enterprise controls.
  - [Vertex AI Studio](https://cloud.google.com/generative-ai-studio) is the Google Cloud console surface for prototyping and testing genAI (prompts, model customization) in a governed environment. E.g. keep "prompt versions + test sets + pass/fail criteria" together so experiments become auditable artifacts, not scattered chats.
  - [Model Explorer](https://ai.google.dev/edge/model-explorer) helps you visually inspect model graphs so you can debug conversion, quantization, and performance issues. E.g. compare two quantization strategies and pinpoint exactly which ops caused a latency spike or accuracy drop before you deploy.
  - [Google AI Edge](https://ai.google.dev/edge) is the umbrella for building on-device AI (mobile/web) with ready-to-use APIs across vision, audio, text, and genAI. E.g. ship an offline, privacy-preserving feature (document classification or on-device summarization) so latency and data exposure don't depend on the network.
  - [Google AI Edge Portal](https://ai.google.dev/edge/ai-edge-portal) benchmarks LiteRT models across many real devices so you don't guess performance from one phone. E.g. test the same model on a spread of target devices and pick the smallest model/config that consistently hits your FPS/latency target.
  - [TensorFlow Playground](https://playground.tensorflow.org/) is an interactive sandbox for understanding neural networks. E.g. use it to teach or debug intuitions: show how regularization, feature interactions, or class imbalance changes decision boundaries in minutes.
  - [Teachable Machine](https://teachablemachine.withgoogle.com/) lets anyone train simple image/sound/pose models in the browser and export them. E.g.
    prototype an accessibility feature (a custom gesture or sound trigger) fast, then export the model to a small web demo your stakeholders can try.
- Directories ("where to discover the rest")
  - [Google DeepMind Products & Models](https://deepmind.google/) (Gemini, Veo, Astra, Genie, etc.): the best "canonical list" of what exists.
  - [Google Labs Experiments directory](https://labs.google/experiments?category=develop): browse by category (develop/create/learn) to catch smaller experiments you didn't know to search for.
  - [Experiments with Google](https://experiments.withgoogle.com/) is a gallery of interactive demos (many AI) that's great for prompt/data literacy and workshop "aha" moments. E.g. curate 5 experiments as a hands-on "AI intuition lab" for your team so they learn failure modes by playing, not by reading docs.
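
As a closing illustration, the "contract red-flagger" idea mentioned under Google AI Studio can be sketched against the Gemini API's public REST endpoint. This is a minimal sketch under assumptions: the model name, the `RISK_SCHEMA` shape, and the `build_request`/`red_flag` helpers are my own illustrative names, not anything Google ships; check the current Gemini API structured-output docs before relying on exact field names.

```python
import json
import urllib.request

# Hypothetical schema (my own): ask the model to return one risk entry
# per contract clause, using the Gemini API's structured-output style.
RISK_SCHEMA = {
    "type": "ARRAY",
    "items": {
        "type": "OBJECT",
        "properties": {
            "clause": {"type": "STRING"},
            "risk": {"type": "STRING", "enum": ["low", "medium", "high"]},
            "why": {"type": "STRING"},
        },
        "required": ["clause", "risk", "why"],
    },
}

def build_request(contract_text: str) -> dict:
    """Build a generateContent payload requesting JSON risk ratings."""
    return {
        "contents": [{"parts": [{"text":
            "Extract each clause from this contract and rate its risk.\n\n"
            + contract_text}]}],
        "generationConfig": {
            "responseMimeType": "application/json",
            "responseSchema": RISK_SCHEMA,
        },
    }

def red_flag(contract_text: str, api_key: str,
             model: str = "gemini-2.0-flash") -> list:
    """POST to the Gemini REST endpoint and parse the JSON answer."""
    url = ("https://generativelanguage.googleapis.com/v1beta/models/"
           f"{model}:generateContent?key={api_key}")
    req = urllib.request.Request(
        url,
        data=json.dumps(build_request(contract_text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The model's JSON answer arrives as text inside the first candidate.
    return json.loads(body["candidates"][0]["content"]["parts"][0]["text"])
```

From here, the returned list of `{clause, risk, why}` dicts can be dropped into a spreadsheet or the risk-report template your workflow already uses.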