The definitive May 2026 ranking of every skill and native tool that turns OpenClaw into a real digital coworker.
OpenClaw crossed 13,729 community skills on ClawHub by late February 2026 (VoltAgent awesome-openclaw-skills), with the official registry adding roughly 400 to 600 new entries every week and the top tier shifting almost daily. The catalog is now larger than the App Store's was after its entire first year. That growth is mostly good news, but it also means the average user installs at most a dozen skills out of more than thirteen thousand candidates and has no real way to know whether the dozen they picked are the right ones.
This is the practical problem we are solving here. Most "best skills" lists on the internet stop at twenty entries, ignore the native tools entirely, and rarely separate plugin bundles from one-off skills or agent profiles. None of that maps to how OpenClaw actually behaves at runtime. The OpenClaw agent loop selects from a layered hierarchy: native tools first, then bundled plugin tools, then registered MCP servers, then user-installed skills, then on-demand ClawHub fetches. A serious top list has to respect that hierarchy or it will mislead the reader.
This guide breaks down the top 100 skills and tools for OpenClaw as of May 2026, ranked by a four-criterion scoring system and grouped into nine categories that match how the agent runtime actually layers capability. We start with the 15 native tools that ship inside the OpenClaw binary, because nothing on ClawHub matters until you understand what you already have, then work outward through coding, web research, communication, productivity, data, creative, business, and personal categories. Every entry includes what it does, what it costs, where it shines, and where it fails.
The list is not a popularity ranking. Capability Evolver has 35,000 downloads and is genuinely useful, but it does not beat Browser Relay (a native tool) on any criterion that matters for real work. The ranking reflects engineering reality, not voting behavior.
Contents
- How we ranked: the four-criterion scoring method
- The Top 10 at a glance (master scoring table)
- Native OpenClaw tools (#1 to #15)
- Coding and development skills (#16 to #30)
- Web research and search skills (#31 to #42)
- Messaging and communication skills (#43 to #54)
- Productivity and task skills (#55 to #66)
- Data, storage, and knowledge skills (#67 to #77)
- Creative and media skills (#78 to #86)
- Business and sales skills (#87 to #94)
- Smart home and personal skills (#95 to #100)
- The full ranked list (all 100 with category, score, and one-line description)
- How to choose your stack from the list
- The future of skill ecosystems and where this is heading
- Final notes and a date disclaimer
1. How we ranked: the four-criterion scoring method
Every skill list on the internet either rates by raw download counts or by the author's gut feel. Both approaches break under load. Download counts punish new but excellent skills and reward old, deeply embedded ones that nobody is actively using. Gut feel is unreliable across categories: a developer rating productivity skills has different intuitions than a journalist rating research skills.
Our scoring method addresses this with four weighted criteria that map directly to what an agent actually needs in production. The weights came from the same tradeoffs that Yuma Heymans (founder of o-mega.ai) has flagged repeatedly when analyzing AI agent capability stacks: a tool is only useful in proportion to whether the agent can call it correctly, whether it returns reliable output, whether it scales to many calls, and whether the cost stays manageable across a long-running session.
The four criteria are agent-readiness (35%), reliability (30%), cost efficiency (20%), and ecosystem depth (15%). The bias toward the first two criteria is intentional. An agent that calls a beautifully priced tool with broken JSON output ends up costing more in retry tokens than an expensive but consistent tool, so reliability earns a heavier weight than cost.
Each criterion is scored 0 to 10, with a short justification (real numbers, real features) recorded in each cell next to the score. The final number is a weighted average rounded to one decimal place. Entries are then numbered globally across the nine categories so that a reader can compare across categories without having to recompute scores.
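As a sanity check, the weighted average can be computed like this. The function below is our own sketch of the formula stated above, computed in integer weight points to avoid floating-point drift, with standard half-up rounding assumed:

```typescript
// Sketch of the four-criterion weighted average described above.
// Weights: agent-readiness 35%, reliability 30%, cost 20%, ecosystem 15%.
type Scores = {
  agentReady: number;  // 0-10
  reliability: number; // 0-10
  cost: number;        // 0-10
  ecosystem: number;   // 0-10
};

function finalScore(s: Scores): number {
  // Integer "weight points", out of a maximum of 1000.
  const points =
    35 * s.agentReady + 30 * s.reliability + 20 * s.cost + 15 * s.ecosystem;
  // Divide by 10 and round to one decimal place.
  return Math.round(points / 10) / 10;
}

// Browser Relay's row from the master table (10 / 10 / 10 / 9):
// 350 + 300 + 200 + 135 = 985 points, i.e. 9.85, which rounds to 9.9.
```

Running the Browser Relay, Lobster, and Tavily rows through this function reproduces their published Final scores exactly.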
Because the list is exhaustive, we publish the full Top 10 in a single master scoring table at the top of the article. The remaining 90 entries are listed in section 12 with the same Final Score column so the reader can see global rank without reading every profile. This is the format our self-improving AI agents comparison guide (2026 edition) used, and we have stuck with it because it actually works for long lists.
2. The Top 10 at a glance
These are the ten highest-scoring entries across all categories. Native tools dominate because they ship with OpenClaw, are stress-tested by every user, and require no separate billing. The first non-native skill appears at #6.
| # | Name | Category | What It Does | Agent Ready (35%) | Reliability (30%) | Cost (20%) | Ecosystem (15%) | Final |
|---|---|---|---|---|---|---|---|---|
| 1 | Browser Relay | Native | CDP-based browser control (3 modes) | 10 - typed CDP, 3 profiles | 10 - 7 months stable | 10 - free, runs locally | 9 - 2.4M+ Chrome installs | 9.9 |
| 2 | Filesystem Tool | Native | Read/write files with sandboxed paths | 10 - typed args, path guard | 10 - native, no plugin | 10 - free | 8 - referenced in 3000+ skills | 9.7 |
| 3 | Memory System | Native | Vector-indexed persistent memory | 10 - automatic chunking | 9 - local + cloud embed | 10 - $0 local, optional API | 9 - cited in every guide | 9.6 |
| 4 | Subagents | Native | Spawn child agents with own tools | 10 - typed delegation | 9 - parent-child isolation | 9 - costs scale with use | 8 - growing pattern | 9.3 |
| 5 | Lobster Workflow Engine | Native | Multi-step orchestrator inside core | 9 - YAML + dynamic | 9 - rollback, dry-run | 10 - free | 9 - bundled with installer | 9.2 |
| 6 | GitHub Skill | Coding | Conversational Git operations + PRs | 10 - 32 typed actions | 9 - 10K+ downloads | 9 - free, optional GitHub API cap | 9 - largest dev base | 9.1 |
| 7 | Heartbeat Scheduler | Native | Cron + interval task runner | 9 - cron syntax, JSON jobs | 10 - drift-corrected | 10 - free | 7 - native only | 9.1 |
| 8 | Tavily Search | Web Research | Agent-native search with summarized results | 10 - JSON-structured output | 9 - 99.7% uptime in 2025 | 8 - $5/1K queries | 8 - 30+ integrations | 9.0 |
| 9 | MCP Client (built-in) | Native | Connect to any MCP-compatible server | 10 - typed adapters | 9 - mature spec | 10 - free, server costs vary | 7 - 50+ servers in 2026 | 8.9 |
| 10 | Voice Loop | Native | Realtime voice with full tool access | 9 - dual-stream STT/TTS | 8 - latency-sensitive | 8 - depends on voice provider | 9 - bundled with Talk app | 8.8 |
The most important pattern in this table is the dominance of native tools: positions 1 through 5, plus 7, 9, and 10. Eight of the top ten entries are native, not skills. This is the single most important fact about the OpenClaw stack and the one most users miss. The native layer is where the agent does its hard work; the skill layer is where it adds specialized integrations.
The only non-native entries that crack the top ten are GitHub (#6) and Tavily (#8), and both solve problems the native tools cannot: structured Git ops and agent-grade web search. Voice Loop (#10) is counted as native in the table but straddles the native and skill layers, since the realtime loop is native while the STT and TTS providers are external, and its score reflects that hybrid status.
3. Native OpenClaw tools (#1 to #15)
The native tools are the foundation of every OpenClaw deployment. They are written in TypeScript, ship inside the binary, and receive type-checked arguments directly from the agent. They cannot be uninstalled, only enabled or disabled, and they require no API key beyond the keys you already supply for the model. Skipping over them in favor of a flashy ClawHub install is the most common mistake new users make.
Why this matters. The native layer determines what the skill layer can or cannot extend. A skill that wraps a native tool with a friendlier interface (the Browser Pro skill, for example) is just a thin wrapper over Browser Relay. If you understand the native tool, you can usually skip the wrapper and call the underlying capability yourself, with fewer abstractions and less surface area for bugs.
Every entry in this section is described with what it does, when it shines, when it fails, and how to invoke it. We avoid generic feature lists in favor of grounded descriptions of what the tool actually does at runtime.
#1 - Browser Relay
OpenClaw's flagship native tool, Browser Relay, controls Chrome through the Chrome DevTools Protocol in three distinct modes (OpenClaw Browser Relay docs). The first mode, Extension Relay, attaches to a Chrome instance the user is already running, including all logged-in sessions and cookies. This is the mode that lets the agent post to your private Twitter account, check your inbox, or fill out a form using your stored credentials.
The second mode, OpenClaw-Managed, spawns an isolated Chrome instance with no inherited identity. This mode runs on a dedicated profile and is the right choice for tasks that should not touch your personal sessions, such as scraping a competitor's public site. The third mode, Remote CDP, connects to a headless Chrome running in a cloud or container environment (Chrome Web Store listing for Browser Relay).
Browser Relay scored a perfect 10 on agent-readiness because the underlying CDP commands are exposed to the agent as typed actions, not as a free-text prompt. The agent invokes browser.click(selector), browser.fill(selector, value), browser.evaluate(js), and so on. Each call returns a structured result. The agent never has to interpret screenshots unless it explicitly asks for them. For a deeper exploration of why this matters for production agents, see our breakdown of browser automation alternatives.
#2 - Filesystem Tool
The Filesystem Tool lets the agent read, write, list, and remove files within a sandboxed directory tree. The sandbox is configured at install time and defaults to ~/.openclaw/workspace, which is the right default for personal use. The tool exposes typed arguments for path, content, and encoding, and rejects any path that resolves outside the sandbox.
Filesystem tool calls are surprisingly common at runtime because most agent workflows produce intermediate files: a summary written to disk, a PDF parsed into text, a CSV downloaded and reformatted. Without a native filesystem tool, every one of those steps would have to round-trip through a skill, costing tokens and adding latency. The reason it scores so high on reliability is that there is no networked dependency, no rate limit, and no third-party API to fail.
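A minimal sketch of that path guard, assuming a POSIX filesystem. The sandbox root and function name here are illustrative, not OpenClaw's actual code:

```typescript
import * as path from "node:path";

// Illustrative sandbox guard: resolve the requested path against the
// sandbox root and reject anything that escapes it, including ../ tricks
// and absolute paths. The root below is a hypothetical example.
const SANDBOX = "/home/user/.openclaw/workspace";

function resolveSafe(requested: string): string {
  const resolved = path.resolve(SANDBOX, requested);
  if (resolved !== SANDBOX && !resolved.startsWith(SANDBOX + path.sep)) {
    throw new Error(`path escapes sandbox: ${requested}`);
  }
  return resolved;
}

// resolveSafe("notes/summary.md") → "/home/user/.openclaw/workspace/notes/summary.md"
// resolveSafe("../../etc/passwd") → throws
```

A real implementation also has to resolve symlinks before the prefix check; `path.resolve` alone never touches the filesystem, so a symlink inside the sandbox could still point outside it.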
#3 - Memory System
OpenClaw's memory system stores agent learnings as Markdown files under ~/.openclaw/memory/, indexed by vector embeddings (see our optimization guide). Each file gets a frontmatter block with a name, description, and tags. When the agent processes a new request, the memory system runs a hybrid retrieval (70% vector similarity, 30% BM25) and injects the top three matches into the prompt. The memory system is configured by default to use either OpenAI's text-embedding-3-large or local Ollama for embeddings.
The reason this is a top three entry is that memory is what turns an agent from a chatbot into a coworker. Without memory, every conversation starts cold. With memory, the agent remembers your preferences, your project structure, your recurring tasks, and the corrections you have given it before. We covered how this changes the long-term economics of agent ownership in our self-improving AI agents guide.
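The 70/30 blend can be sketched in a few lines. This assumes both signals are already normalized to [0, 1], which the docs do not specify, so treat the normalization as our assumption:

```typescript
// Hybrid retrieval blend: 70% vector similarity, 30% BM25.
// Both inputs are assumed pre-normalized to [0, 1] (our assumption).
type MemoryCandidate = { file: string; vectorSim: number; bm25: number };

function topMemories(cands: MemoryCandidate[], k = 3): string[] {
  return cands
    .map((c) => ({ file: c.file, score: 0.7 * c.vectorSim + 0.3 * c.bm25 }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((c) => c.file);
}
```

The top k files are what gets injected into the prompt. Note how the blend behaves: a file with strong semantic similarity but a weak keyword match can still outrank one with a perfect BM25 hit, which is the intended bias.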
#4 - Subagents
The Subagents tool lets the main agent spawn child agents with their own tool set, model, and context window (see the hermes-agent project for a similar architecture). Subagents run in isolation, return a summary to the parent, and free the parent's context for new work. This is OpenClaw's answer to the long-context problem: instead of stuffing everything into a single 200k window, you delegate.
Common subagent patterns include research subagents that read 50 pages and return a 1k-word summary, coder subagents that take a bug report and return a patch, and reviewer subagents that take a draft and return critique. The parent agent never sees the intermediate work, only the final return value, which keeps the parent's context clean.
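The delegation contract can be mocked like this. Everything here (names, types, the size proxy) is hypothetical rather than OpenClaw's API, but it shows the key property: only the compact return value crosses back into the parent's context:

```typescript
// Hypothetical subagent sketch: the child produces arbitrary intermediate
// context, but only a compact summary is returned to the parent.
type SubagentResult = { summary: string; contextChars: number };

function runSubagent(
  task: string,
  work: (task: string) => string[] // the child's work, e.g. pages read
): SubagentResult {
  const pages = work(task); // large intermediate context, stays in the child
  return {
    summary: `${task}: reviewed ${pages.length} sources`, // all the parent sees
    contextChars: pages.join("").length, // rough size proxy, not real tokens
  };
}
```

In a real deployment the `work` closure is itself an agent run with its own model and tools; the sketch only captures the information boundary.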
#5 - Lobster Workflow Engine
Lobster is OpenClaw's built-in multi-step orchestration engine, named after the project's mascot. It lets you define multi-step pipelines as YAML, with each step optionally running under a different agent, model, or tool set. Lobster supports dry-run mode (the steps are simulated without actually executing them), conditional branches, retries with backoff, and rollback handlers.
Lobster matters because most useful agent work is multi-step. A "morning briefing" workflow might fetch your calendar, summarize your inbox, check stock prices, and post the result to Slack. Hand-coding that as a single agent prompt is fragile. Defining it as a Lobster pipeline is verifiable. The rollback handlers in particular are a feature you do not appreciate until your first half-broken automation eats half a day, at which point you wonder how you ever lived without them.
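The morning-briefing workflow described above might look like this as a pipeline. The field names below are illustrative guesses at a Lobster-style schema, not the documented format:

```yaml
# Illustrative Lobster-style pipeline; field names are hypothetical.
name: morning-briefing
on_error: rollback            # run rollback handlers if any step fails
steps:
  - id: calendar
    tool: calendar.today
  - id: inbox
    tool: mail.summarize
    retries: 2                # retry with backoff before failing the run
  - id: stocks
    tool: web_fetch
    when: weekday             # conditional branch
  - id: post
    tool: slack.post
    input: "{{calendar}} {{inbox}} {{stocks}}"
```

Run a pipeline like this in dry-run mode first to see the step plan without side effects, then promote it to the scheduler once it behaves.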
#6 - GitHub Skill (advanced placement)
The GitHub skill earned position #6 on the master table because it is the most heavily used skill in the dev community and acts as an extension of the native filesystem tool (see the skill's GitHub repo). It exposes 32 typed actions including repo.list, pr.create, issue.comment, branch.create, release.tag, and actions.workflow_run. The skill uses your personal access token or GitHub App credentials and respects branch protection rules.
We profile the GitHub skill in detail in section 4 alongside the rest of the development tools. It earns its top-ten placement because Git is the substrate of most coding work and conversational Git is genuinely faster than typing commands once you adjust to it.
#7 - Heartbeat Scheduler
OpenClaw's heartbeat scheduler runs cron-like jobs at fixed intervals, allowing the agent to wake up and check things proactively even when no one is talking to it. Jobs are defined as JSON in ~/.openclaw/jobs.json and support cron syntax, fixed intervals, and one-shot delayed runs.
The reliability score of 10 reflects the drift correction built into the scheduler. Most home-grown cron implementations skip a tick under load. OpenClaw's scheduler tracks expected vs actual fire times and runs missed jobs immediately on resume, which matters because users actually rely on these jobs for daily morning briefings, weekly digests, and overnight monitoring.
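A jobs.json might look like the following. The doc confirms cron syntax, fixed intervals, and one-shot delayed runs, but the exact field names here are our guesses:

```json
{
  "jobs": [
    { "id": "morning-briefing", "cron": "30 7 * * 1-5", "task": "Run the morning briefing pipeline" },
    { "id": "site-monitor", "interval": "15m", "task": "Check the status page and alert on errors" },
    { "id": "renewal-reminder", "runAt": "2026-05-20T09:00:00Z", "task": "Remind me about the domain renewal" }
  ]
}
```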
#8 - Tavily Search (advanced placement)
Tavily is the highest-scoring web research skill, and we profile it again in section 5 because it deserves the focus. It is featured in the master table because, on the practical question of how the agent should search the web, Tavily is the default answer for most production deployments (Tavily, a search engine built for AI agents). Tavily returns structured JSON with a summarized answer, a list of source documents, and source-level citations, all in one call.
Compared to invoking a generic search engine and parsing HTML, Tavily saves both tokens and latency. It is not free (the entry-level plan costs $5 per 1,000 queries), but the savings on output tokens usually make it cheaper net-net for any deployment that issues more than a handful of searches per day.
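A search-with-answer response has roughly this shape. The example is abbreviated and the field names are approximate; check Tavily's API reference for the authoritative schema:

```json
{
  "query": "what changed in the EU AI Act enforcement in 2026",
  "answer": "A one-paragraph synthesized answer with source-grounded claims.",
  "results": [
    {
      "title": "Example article title",
      "url": "https://example.com/article",
      "content": "Extracted snippet used to ground the answer.",
      "score": 0.91
    }
  ]
}
```

The token saving comes from the `content` fields: the agent reads pre-extracted snippets instead of parsing raw HTML.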
#9 - MCP Client (built-in)
The MCP client is OpenClaw's bridge to the Model Context Protocol, Anthropic's open standard for agent tool servers (OpenClaw MCP CLI docs). The client supports both stdio servers (launched as subprocesses) and HTTP servers (connected over the network), and it auto-discovers the tool catalog of each connected server.
MCP matters because it is becoming the lingua franca of agent tools. We surveyed the 50 best MCP servers in early 2026 and the count has roughly doubled since then (50 best MCP servers for AI agents (2026)). Any MCP server you stand up in your stack is automatically available to OpenClaw without writing a wrapper skill.
#10 - Voice Loop
The Voice Loop is OpenClaw's realtime voice interface, supporting Talk (always-on local mic), Voice Call (phone-style call into the agent), and Google Meet participation. Voice Loop integrates with multiple STT and TTS providers including Whisper, Deepgram, ElevenLabs, OpenAI TTS, and Piper for fully local audio.
The reason Voice Loop earns a top-ten placement despite being newer is that it interleaves with the full tool set during a conversation. Most "voice mode" implementations are conversation-only: voice in and voice out, with no tool access. OpenClaw's voice loop can fire arbitrary tools mid-sentence: search the web, check the calendar, send an email, all while the user is still speaking. This is qualitatively different from voice as a UI surface.
#11 - Plugin Bundle System
OpenClaw's plugin bundle system is the layer between native tools and individual skills. A bundle packages multiple related capabilities (tools, skills, MCP server configs, prompt fragments) into a single installable unit (OpenClaw plugin bundles docs). Bundles are versioned, signed, and auditable, which matters for security after the ClawHavoc incident in early 2026 (more on that in section 13).
The bundle system is where teams package their internal capabilities. A finance team might publish a finance-bundle containing the Excel skill, the SAP MCP server, the QuickBooks skill, and a curated prompt that explains the team's accounting conventions. New team members install one bundle and get the whole stack pre-configured. We covered this exact approach for Suprsonic-published bundles when open source personal AI became viable.
#12 - Slash Commands System
The slash commands system is OpenClaw's user-facing command layer. Commands like /mcp, /plugins, /debug, /restart, /send, and /bash give the user direct control over the agent runtime without going through the LLM (OpenClaw slash commands docs). Commands respect the same permission model as tools, so /bash is gated behind an explicit allow flag.
This is a small but consequential design choice. Slash commands let the user fix things without negotiating with the agent. If you want to disable a misbehaving plugin, /plugins disable foo is faster than asking the agent to do it, and crucially does not depend on the agent being healthy. We have seen production deployments that recovered from a runaway agent specifically by hitting /restart from a separate terminal.
#13 - ACP Agents
ACP (Agent Communication Protocol) is OpenClaw's standard for agent-to-agent communication (ACP CLI docs). It defines how one OpenClaw instance can task another, exchange capability manifests, and pass intermediate results. ACP is what lets a personal OpenClaw delegate work to a team or company OpenClaw running elsewhere.
ACP is younger than most other native features and the score reflects that. It is reliable in single-tenant deployments and fragile in multi-tenant ones, mostly because the security boundary between two ACP-connected agents is still being hardened. The pattern is the right one, the implementation is still maturing.
#14 - Gateway Web Interface
The Gateway is OpenClaw's web-based control surface (OpenClaw Gateway configuration docs). It runs on localhost:18792 by default and provides a UI for browsing skills, configuring plugins, viewing memory, and watching tool calls in real time. The Gateway is what most users see most of the time, even if they spend their day chatting through WhatsApp or Telegram.
A critical warning: the Gateway web interface is NOT hardened for public exposure. Binding it to a public IP without an authenticating reverse proxy is the single most dangerous deployment mistake we see in OpenClaw setups. Keep it on localhost or behind Tailscale.
#15 - Web Fetch (built-in HTTP)
The web_fetch tool is OpenClaw's native HTTP client. It performs GET/POST requests, handles cookies, follows redirects, and returns headers + body in a structured response. It is the default way for the agent to hit any HTTP endpoint that does not deserve a dedicated skill.
Web fetch is unglamorous and underrated. Most "skills" that wrap a third-party API would not need to exist if their authors had simply documented the right call signature for the underlying REST endpoint. We strongly recommend reaching for web_fetch before installing a skill, and then upgrading to a dedicated skill only if the call pattern becomes unwieldy.
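For example, a weather skill is unnecessary when the underlying REST endpoint is one call away. The tool-call envelope below is illustrative (OpenClaw's internal call format may differ); the Open-Meteo endpoint itself is real and keyless:

```json
{
  "tool": "web_fetch",
  "args": {
    "method": "GET",
    "url": "https://api.open-meteo.com/v1/forecast?latitude=52.52&longitude=13.41&current_weather=true",
    "headers": { "Accept": "application/json" }
  }
}
```

If a thin wrapper like this is all a skill does, web_fetch plus one line of documentation replaces it.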
These 15 native tools cover roughly 70% of the tool calls the average agent makes, with the remaining 30% spread across the 13,000+ skills on ClawHub. The skill layer is where you specialize, but the native layer is where you live.
4. Coding and development skills (#16 to #30)
The development category is the most populated tier on ClawHub by a wide margin. According to the awesome-openclaw-skills index, the coding agents and IDEs subcategory alone hosts over 1,200 distinct skills, and the git and github subcategory hosts another 600 (VoltAgent awesome-openclaw-skills, coding category). The skills in this section are the ones we have either tested in production or seen used reliably across multiple teams.
The reason there are so many is that OpenClaw is built by developers and used heavily by developers before anyone else. Most of the early viral moments (Steinberger demos, Moltbook posts) involved code being shipped, repos being modified, or PRs being opened from a chat interface. The category therefore matures faster than others, with skills churning quickly as better wrappers replace older ones. We have tried to choose entries that are stable and broadly applicable rather than chasing the most-recently-trending one.
#16 - GitHub Skill
We touched on GitHub above. Beyond the 32 typed actions, the skill exposes a special PR review mode that pulls a draft PR, runs the change set through a code review prompt, and posts inline comments. This works particularly well combined with the Subagents tool: a parent agent triggers a reviewer subagent that loads the PR, generates inline review comments, and exits when done. Best for: solo developers and small teams who want conversational Git without losing audit trails.
#17 - Linear
The Linear skill connects to Linear's GraphQL API and exposes typed actions for issue creation, status changes, comment posting, and project queries. It is the second-most-installed coding skill behind GitHub. The skill works best in per-issue mode: the agent picks up an issue, runs through the linked PRs, and updates the status when the PRs merge. Best for: product engineering teams already on Linear who want the agent to live inside their existing workflow.
#18 - Jira
The Jira skill is similar in spirit to Linear but with a much wider configuration surface. Jira has more ticket types, more workflow states, and more custom fields, all of which the skill has to negotiate. The result is a powerful but verbose interaction. The skill scores slightly lower than Linear on agent-readiness because the typed action surface is larger and the agent occasionally guesses field values incorrectly. Best for: enterprise teams with mature Jira workflows.
#19 - GitLab
The GitLab skill mirrors most of GitHub's functionality (PRs become MRs, issues are issues, projects are projects). It scores lower than the GitHub skill mostly because the install base is smaller and the maintainer activity is more sporadic. For teams already on GitLab, it is the right choice. The skill correctly handles GitLab's permission model and respects approval rules. Best for: GitLab-native teams.
#20 - Vercel
The Vercel skill enables the agent to deploy projects, query deployment logs, and manage environment variables. The skill is genuinely useful for the "deploy this" interaction pattern: you finish a coding task, ask the agent to deploy it, and the agent handles vercel deploy --prod, watches the build, and reports the URL when ready. Best for: front-end teams already on Vercel.
#21 - Render
The Render skill is the equivalent for Render-hosted services, with first-class support for backend deployments, cron jobs, and database services. It is less mature than the Vercel skill but covers more service types. Best for: full-stack teams running Python or Go backends on Render.
#22 - Docker
The Docker skill exposes the local Docker daemon to the agent. It can build images, run containers, list containers, attach to logs, and exec into a container for debugging. The skill is one of the rare ones that actually justifies its existence over the native shell tool because Docker's command surface is so verbose. Best for: local development and CI workflows where containers are the unit of execution.
#23 - ESLint Code Quality
The ESLint skill runs lint and format passes on a target directory, returns the violations, and optionally applies autofixes. It is a small skill, but it earns its place because the alternative (asking the agent to "look for style issues" using its own model) is both slower and less reliable. Linters are deterministic, so let them be deterministic. Best for: any TypeScript/JavaScript codebase.
#24 - Cursor Sync
The Cursor Sync skill bridges OpenClaw to Cursor, the AI-native editor. It pushes work-in-progress files from your OpenClaw memory directly into a Cursor workspace and pulls the resulting edits back. This is the right pattern when you want the agent to draft and a human to refine in real time. Best for: pair-programming workflows where the agent prepares scaffolding and a human polishes.
#25 - Postman / API Tester
The Postman skill (and its sibling, the generic API Tester) lets the agent execute API calls against an OpenAPI spec and verify responses. It is most useful in regression mode: the agent runs a saved collection nightly and reports which endpoints returned unexpected outputs. Best for: API-heavy teams who want continuous behavioral monitoring without standing up a dedicated test platform.
#26 - Sentry
The Sentry skill connects to Sentry's API to fetch issues, query release health, and acknowledge alerts. It is the right primitive for the "the agent watches the error feed and triages" pattern. The skill is reliable and well-maintained, with one caveat: rate limits on free Sentry plans can bite if your agent is overly enthusiastic. Best for: teams already on Sentry who want triage automation.
#27 - Stack Overflow Lookup
The Stack Overflow skill is a focused search wrapper that returns top-voted answers for a given query, with code blocks intact. It is genuinely useful when the agent hits an obscure runtime error and needs the consensus answer rather than reasoning from first principles. The skill is simple, fast, and free. Best for: debugging sessions where the agent should defer to the existing community consensus.
#28 - PR Reviewer
The PR Reviewer skill is a higher-level wrapper around the GitHub skill that adds a curated review prompt, a checklist generator, and inline comment formatting. It is essentially a convenience layer, but the curated prompt is worth the install on its own. Best for: teams who want consistent, opinionated PR reviews without writing the prompt themselves.
#29 - Test Runner
The Test Runner skill wraps pytest, jest, vitest, go test, and cargo test behind a unified interface. It runs the test suite, returns the pass/fail summary, and (on failure) returns the failing test bodies along with their stack traces. Best for: workflows where the agent ships a change and verifies before committing.
#30 - Code Generator
The Code Generator skill is a structured scaffolder. It takes a high-level brief (a feature name, a tech stack, a target directory) and emits a starter implementation with tests, docs, and a basic CI config. It is a productivity multiplier for greenfield work but should not be used for changes inside an established codebase: the heuristics it uses to guess local conventions are too thin. Best for: starting new projects from scratch.
The development category has been growing roughly 18% month over month according to ClawHub's public counters and shows no signs of slowing. Notable gaps still exist for Apple Xcode, JetBrains IDEs, and embedded tooling, all of which we expect to see filled before the end of 2026. For developers who want a more managed environment than installing skills one at a time, our Claude Code pricing analysis covers the closest comparable platform, while our LLM tool gateways guide covers the layer below that.
5. Web research and search skills (#31 to #42)
Web research is the second-most-developed skill category and the one where price/quality tradeoffs matter most. The agent is liable to issue dozens of queries per task on a research-heavy run, and the difference between a $5 per thousand search API and a $50 per thousand search API compounds quickly. We have prioritized agent-grade APIs that return structured JSON over generic search wrappers that return HTML, because the latter cost more in output tokens for parsing.
The other notable pattern in this category is niche corpus skills (ArXiv, Wikipedia, GitHub Code Search) that are small but disproportionately useful when the agent's task is constrained to a specific knowledge domain. A research agent looking up a paper does not need a generic web search, it needs ArXiv. The skill list reflects this division: the first six entries are general-purpose, and the last six are niche.
#31 - Tavily Search
We profiled Tavily in the master section. Beyond the basics, Tavily's search-with-answer endpoint returns a model-generated synthesis along with the citations, which is the right mode when the agent's job is to answer a question rather than collect raw results. The free tier is 1,000 queries per month, which is more than enough for personal use. Best for: any agent that issues more than five web searches per task.
#32 - Brave Search
The Brave Search skill is the most popular non-Tavily option. Brave indexes 40 billion pages with its own crawler, returns up to 20 results per query at $5 per 1,000 calls, and respects user privacy in ways most other APIs do not (see our analysis of agent search APIs). The skill's main weakness is that it returns metadata, not full content, so the agent has to chain a fetch step to read the actual page. Best for: privacy-sensitive deployments and high-volume indexing tasks.
#33 - Exa Semantic Search
Exa runs a semantic search index optimized for long-form content. Where Tavily and Brave shine on factual lookups, Exa shines on "find me the best blog posts about X" or "find me startups doing Y." It costs $7 per 1,000 queries and returns content along with the URLs. Best for: research tasks that benefit from semantic neighborhood expansion rather than keyword precision.
#34 - Perplexity
The Perplexity skill wraps Perplexity's API, which itself is a search-plus-synthesis layer over multiple underlying indexes. It is the most expensive entry in this section ($20 per 1,000 queries) but it returns the highest-quality answers for messy questions. Best for: a research subagent whose only job is to produce concise, well-sourced summaries of unstructured queries.
#35 - Web Fetch (advanced)
We covered the native web_fetch above, but the ClawHub version adds JS rendering, cookie persistence, and proxy rotation. The advanced wrapper is useful when the target site requires authentication or anti-bot bypassing. Best for: tasks that need to read pages a generic HTTP client cannot.
#36 - Firecrawl
Firecrawl is a crawling-and-extraction service designed specifically for AI agents. It handles JS rendering, sitemap traversal, and structured extraction in a single call. The pricing is per-page rather than per-query, which makes it the right choice for "scrape the entire blog of company X" patterns. Best for: bulk extraction tasks where you know the target site upfront (see our data extraction APIs comparison).
#37 - ScreenshotOne
The ScreenshotOne skill captures full-page screenshots of any URL and returns them as PNG or PDF. It is invaluable when the agent needs to show visual state (a competitor's pricing page, the current state of a dashboard) rather than describe it in text. Best for: monitoring workflows where changing visual surfaces matter (see our screenshot APIs comparison for alternatives).
#38 - ScrapingBee
ScrapingBee is an older but well-loved scraping API. It is reliable, well-priced ($49 per 100k requests on the entry plan), and has unusually good documentation. It scores slightly lower than Firecrawl on agent-readiness because the response is HTML rather than structured JSON, but it is more reliable on hard-to-scrape sites. Best for: scraping sites that block other extractors.
#39 - ArXiv Lookup
The ArXiv Lookup skill queries ArXiv's metadata API and the full-text endpoint. It returns paper abstracts, authors, dates, and PDF download URLs. It is small, fast, and free, and it dramatically improves agent performance on academic research tasks compared to using a generic search engine. Best for: research agents whose corpus includes scientific papers.
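We do not have the skill's internal interface to show, but the metadata API it sits on is arXiv's public Atom endpoint, and a lookup reduces to building one query URL. A minimal sketch, using only the stdlib (`search_query`, `start`, and `max_results` are the API's real parameter names; the wrapper function is our own):

```python
from urllib.parse import urlencode

ARXIV_API = "http://export.arxiv.org/api/query"  # arXiv's public Atom API

def arxiv_query_url(terms: str, max_results: int = 10) -> str:
    """Build a metadata query URL for arXiv's export API."""
    params = {
        "search_query": f"all:{terms}",  # search across all metadata fields
        "start": 0,
        "max_results": max_results,
    }
    return f"{ARXIV_API}?{urlencode(params)}"
```

Fetching that URL returns an Atom feed with abstracts, authors, dates, and PDF links, which is exactly the structured payload that makes this skill outperform a generic search engine on academic tasks.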
#40 - Wikipedia Reader
The Wikipedia Reader skill fetches and parses Wikipedia articles, returning structured sections and links to related pages. It is a small but essential skill for any agent that ever has to answer factual questions. Best for: factual lookup workflows where Wikipedia is the source of truth.
#41 - YouTube Transcripts
The YouTube Transcripts skill fetches the auto-generated or human transcript of any YouTube video. Combined with a summarizer, this lets the agent "watch" videos for the user and extract key points. It is the right tool for the "summarize this hour-long talk" pattern. Best for: agents that need to consume video content as text.
#42 - RSS Reader
The RSS Reader skill subscribes to RSS feeds and surfaces new items as they arrive. Combined with the heartbeat scheduler, it is the foundation of any "wake me up when X publishes" workflow. The skill stores subscriptions in OpenClaw's memory and deduplicates aggressively. Best for: news aggregation, blog monitoring, and competitor watch.
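The deduplication the skill does is worth understanding, because it is what keeps a "wake me when X publishes" workflow from firing twice. A toy sketch of the idea (the skill's actual storage format is not public; the key derivation here is our own assumption):

```python
import hashlib

def entry_key(entry: dict) -> str:
    """Stable key for a feed item: prefer the GUID, fall back to link+title."""
    raw = entry.get("guid") or (entry.get("link", "") + entry.get("title", ""))
    return hashlib.sha256(raw.encode()).hexdigest()

def new_items(entries: list[dict], seen: set[str]) -> list[dict]:
    """Return only unseen items, updating `seen` in place."""
    fresh = []
    for entry in entries:
        key = entry_key(entry)
        if key not in seen:
            seen.add(key)
            fresh.append(entry)
    return fresh
```

Persist `seen` between heartbeat runs (the real skill stores it in OpenClaw's memory) and each poll surfaces only genuinely new posts.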
The web research category is genuinely competitive. Tavily's lead is real but contested, and the right choice depends on volume and budget. For a deeper analysis of the search API space see our AI search APIs guide, which goes into more detail on per-query economics and benchmark scores.
6. Messaging and communication skills (#43 to #54)
The communication category is where OpenClaw's identity as a gateway agent becomes most concrete. Most users do not interact with OpenClaw through the Gateway web interface; they talk to it through whatever messaging app they already live in. The skill catalog has matured around this insight: every major messenger has at least one battle-tested skill, and the better ones include presence detection, group routing, and threaded reply support.
The trade-offs in this category are not about cost (most messaging integrations are free or near-free) but about identity and trust. A skill that posts on your behalf to Slack is using your identity. A skill that sends a WhatsApp message is using your phone number. The blast radius of a misbehaving messaging skill is wider than the blast radius of a misbehaving search skill. We have tried to call out where each integration carries that risk.
#43 - WhatsApp
The WhatsApp skill is the single most-used messaging skill on ClawHub by install count. It connects to WhatsApp's Cloud API (or, for personal use, a community Bridge running on your phone) and supports text, voice messages, image attachments, and group chats. The skill respects WhatsApp's 24-hour reply window for bots and falls back to template messages outside it. Best for: personal-use OpenClaw deployments and WhatsApp-first user bases.
#44 - Telegram
The Telegram skill is a close second to WhatsApp by install count and arguably ahead on developer experience. Telegram's Bot API is more permissive, the rate limits are generous, and the skill exposes inline keyboards, file attachments, and webhook handlers cleanly. Many developers run their personal OpenClaw via Telegram for exactly these reasons. Best for: technically inclined personal users.
#45 - Signal
The Signal skill connects through Signal-CLI or one of the bridge projects (see the openclawnews ecosystem map). It is meaningfully harder to set up than WhatsApp or Telegram because Signal does not expose a first-party bot API. The reward is a fully end-to-end-encrypted channel between you and your agent. Best for: privacy-sensitive users.
#46 - Discord
The Discord skill is built around Discord's Bot framework. It supports text channels, voice channels, slash commands, and threaded replies. It is the right primitive for community-facing agents (a server moderator, a support bot, a community FAQ answerer). Best for: community managers and Discord-native teams.
#47 - Slack
The Slack skill targets Slack's modern API including Block Kit messages, slash commands, and the Events API. The skill correctly handles thread replies, channel mentions, and DMs, and it respects Slack's enterprise grid permissions when present. Best for: workplace deployments.
#48 - iMessage
The iMessage skill works only on macOS and uses the local Messages app's database to send and receive messages. It is hacky in the way Apple-platform integrations always are (the database schema changes between OS versions) but it is the only way to integrate iMessage at all without a third-party Bridge. Best for: Mac users who already live in iMessage.
#49 - Email (SMTP/IMAP)
The Email skill is the generic SMTP/IMAP integration. Configure it with credentials and the agent can read mail and send mail. It is the lowest-common-denominator email skill and is the right choice when the user does not want to grant OAuth scope to a single provider. Best for: privacy-sensitive users and self-hosted email setups.
#50 - Gmail (OAuth)
The Gmail skill is the OAuth-scoped version of the email integration, with first-class support for labels, filters, and the Gmail-specific search query syntax. It is meaningfully more capable than the generic SMTP/IMAP skill but it requires you to register a Google Cloud project with a verified OAuth consent screen. Best for: Google Workspace users.
#51 - SMS (Twilio)
The SMS skill wraps Twilio's API for sending and receiving SMS messages. It is the right primitive for "ping me about X" workflows where the user is away from the keyboard. It is paid (Twilio rates apply) but the per-message cost is small enough that personal use rarely exceeds a few dollars a month. Best for: alerting and notification workflows.
#52 - Microsoft Teams
The Microsoft Teams skill targets the Teams Graph API. It supports messages, channels, mentions, and the Teams-specific cards format. It is the right choice for enterprise deployments that have standardized on Teams over Slack. Best for: Microsoft 365-centric workplaces.
#53 - Matrix
The Matrix skill connects to Matrix's federated chat protocol via Synapse or any other Matrix homeserver. It is the most flexible messaging skill in the category but also the most niche. The right user is someone who already runs a Matrix homeserver and wants the agent to live alongside their existing federated chats. Best for: federation-minded users and self-hosted teams.
#54 - Zoom
The Zoom skill is a relative newcomer that lets the agent join Zoom calls as a meeting participant, transcribe in real time, and post structured summaries afterward. It pairs naturally with the Voice Loop native tool. Best for: meeting-heavy professionals who want a hands-off "agent in the room" pattern.
The communication category illustrates a broader pattern: messaging is where personal AI becomes useful. The agent is only as helpful as it is reachable, and being reachable means being on the channels the user already lives in. We have written about this at length in our analysis of open source personal AI, which covers the same point from a different angle.
7. Productivity and task skills (#55 to #66)
The productivity category is the most heterogeneous on this list. It mixes calendaring, knowledge management, file storage, and task tracking into a single bucket because these capabilities tend to be deployed together. A user who installs the Notion skill almost always installs a calendar skill within the same week, and the agent's day-to-day usefulness is roughly proportional to how many of these are wired up.
The other thing to know about this category is that it is the most heavily personal. A productivity skill is touching the user's own data: their calendar, their tasks, their notes. Misconfigurations here are uniquely painful because the cost of a corrupted Notion page or a deleted task list shows up the next day, not in a log file. We have therefore favored skills with strong rollback semantics over flashier ones with broader feature surfaces.
#55 - Google Calendar
The Google Calendar skill supports event creation, reading, updating, free/busy queries, and reminder setup. It is the foundation of any "schedule something" workflow. The skill correctly handles recurring events, time zones, and shared calendars. It is the most-installed productivity skill on ClawHub by a wide margin. Best for: any Google Workspace user.
#56 - Apple Calendar
The Apple Calendar skill is the Mac-native equivalent. It uses the local Calendar.app database and supports the same operations. The skill is somewhat fragile because the database schema is private and changes between OS versions. Best for: Apple-only households.
#57 - Notion
The Notion skill is the most capable knowledge-management skill on ClawHub. It supports page creation, block manipulation, database queries, and templating. The skill correctly handles Notion's idiosyncratic block structure and respects workspace permissions. Best for: Notion-native individuals and teams.
#58 - Obsidian
The Obsidian skill is the local-first equivalent of the Notion skill. It reads and writes Markdown files in an Obsidian vault, including frontmatter, links, and tags. The skill pairs well with the Memory System native tool because both store data as Markdown files, so the agent can use them interchangeably. Best for: local-first knowledge workers and privacy-sensitive users.
#59 - Todoist
The Todoist skill connects to Todoist's REST API and supports task creation, completion, project management, and natural-language date parsing (Todoist's own parser, not OpenClaw's). It is the most-installed task management skill, slightly ahead of Things 3. Best for: GTD-style task workflows.
#60 - Things 3
The Things 3 skill is the Mac/iOS-only equivalent. It uses Things' URL scheme rather than a REST API, which is fast for adding tasks but slow for querying them. The skill is most useful as a write-only sink: the agent creates tasks for the user to triage manually. Best for: Apple users committed to Things.
#61 - Google Drive
The Google Drive skill exposes Drive's API for file uploads, downloads, sharing, and search. It is a foundational skill for any workflow that produces files the user wants to access later from a phone or tablet. Best for: Google Workspace users.
#62 - Dropbox
The Dropbox skill is the equivalent for Dropbox-stored files. It is meaningfully simpler than the Drive skill because Dropbox's permission model is simpler. Best for: Dropbox-native users.
#63 - Reminders
The Reminders skill is a thin wrapper over Apple's Reminders.app or Microsoft To Do. It is the right primitive for time-based nudges that do not warrant a full task management system. Best for: lightweight reminder workflows.
#64 - Pocket / Read Later
The Pocket skill (and its Read Later variants) lets the agent save articles to a read-later queue and surface them on a schedule. Combined with the heartbeat scheduler, it underpins the "send me a curated reading list every morning" workflow. Best for: information consumption workflows.
#65 - Capability Evolver
The Capability Evolver skill (35,000 downloads, top of ClawHub charts) watches how your agent handles tasks over time, identifies inefficiencies, and quietly adjusts its approach. It is essentially a self-monitoring meta-skill that catalogs the agent's tool calls, success rates, and retries, and proposes optimizations. Best for: long-running personal deployments where the user wants the agent to improve over time.
#66 - Clawflows
The Clawflows skill is a multi-step workflow orchestrator built on top of the Lobster engine. It exposes a friendlier YAML format than raw Lobster and adds a library of common templates (morning briefing, weekly report, content pipeline). It is essentially a curation layer over a native primitive. Best for: users who want pre-built workflows rather than authoring from scratch.
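Conceptually, a linear Clawflows template is just a pipeline: each step's output becomes the next step's input. A minimal sketch of that shape in plain Python (the actual Lobster engine and Clawflows YAML schema are not reproduced here; the "morning briefing" steps below are hypothetical):

```python
from typing import Any, Callable

Step = Callable[[Any], Any]

def run_workflow(steps: list[Step], payload: Any) -> Any:
    """Run steps in order, piping each step's output into the next."""
    for step in steps:
        payload = step(payload)
    return payload

# A toy "morning briefing" pipeline built from plain functions:
# gather facts -> format each item -> compose the final message.
briefing = run_workflow(
    [
        lambda _: ["rain expected", "3 unread emails"],
        lambda items: [item.capitalize() for item in items],
        lambda items: "Morning briefing: " + "; ".join(items),
    ],
    None,
)
```

What the skill adds over this skeleton is the template library, retry handling, and scheduling glue, which is exactly the part most users do not want to author from scratch.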
The productivity category is also where the awareness gap between users and the underlying capability is widest. Most users install Todoist before they install Capability Evolver, even though Capability Evolver is the more transformative skill. The reason is that productivity tools are familiar and self-evolving agents are not. We expect this to flip over the next year as users learn what the meta-skills can actually do.
8. Data, storage, and knowledge skills (#67 to #77)
The data category covers structured persistence: databases, spreadsheets, vector stores, and document indexes. It is more technical than the productivity category and somewhat less personal. Most data skills target either a backend the user owns (a Postgres instance, a SQLite file) or a SaaS product (Airtable, Google Sheets), and the right choice depends on whether you want self-hosted or managed.
A reliable pattern in this category is that the best skills are often the simplest ones. SQL skills tend to expose the underlying query language directly rather than trying to invent a higher-level abstraction. This works because the agent is competent enough to write SQL on its own, and a simple skill is harder to misuse than a complex one.
#67 - PostgreSQL
The PostgreSQL skill exposes typed queries against any reachable Postgres database. The skill includes a schema-introspection helper so the agent can ask "what tables do you have?" before writing queries. It supports connection pooling and respects read-only credentials. Best for: any Postgres-backed application.
#68 - SQLite
The SQLite skill is the local-first equivalent. It works against any .sqlite file on disk and is the right choice for personal data stores. The skill is genuinely fast because there is no network round-trip. Best for: local data analytics and personal records.
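Because the agent writes its own SQL, the skill barely needs an abstraction layer. The pattern looks like this with Python's stdlib `sqlite3` (a self-contained sketch using an in-memory database; a real deployment points at a `.sqlite` file on disk):

```python
import sqlite3

# ":memory:" keeps the example self-contained; use a file path in practice.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE expenses (day TEXT, category TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO expenses VALUES (?, ?, ?)",
    [("2026-05-01", "food", 18.50),
     ("2026-05-01", "transit", 4.25),
     ("2026-05-02", "food", 22.00)],
)

# The kind of query an agent writes on its own once it can see the schema.
totals = conn.execute(
    "SELECT category, SUM(amount) FROM expenses "
    "GROUP BY category ORDER BY category"
).fetchall()
# totals -> [("food", 40.5), ("transit", 4.25)]
```

No network round-trip, no credentials, no connection pool: that simplicity is most of why the skill feels fast.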
#69 - MongoDB
The MongoDB skill targets MongoDB's native query language. Compared to SQL skills, it is more verbose because Mongo's aggregation pipeline is verbose, but the skill correctly translates user intent into pipeline stages. Best for: MongoDB-backed applications.
#70 - Spreadsheet Guru (CSV/Excel)
The Spreadsheet Guru skill is OpenClaw's most capable file-based data skill. It reads and writes CSV, XLSX, and ODS files, supports formula evaluation, and handles common data-cleaning operations (dedupe, normalization, type inference). Combined with the filesystem tool, it is the right primitive for one-off data wrangling tasks. Best for: ad-hoc analysis on flat files.
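To make the cleaning operations concrete, here is a toy version of two of them, dedupe and type inference, over CSV text using only the stdlib (this is our own illustration of the steps named above, not the skill's implementation):

```python
import csv
import io

def _coerce(value: str):
    """Naive type inference: try int, then float, else keep the string."""
    for cast in (int, float):
        try:
            return cast(value)
        except ValueError:
            pass
    return value

def load_rows(text: str) -> list[dict]:
    """Parse CSV text, drop exact duplicate rows, coerce numeric columns."""
    rows, seen = [], set()
    for row in csv.DictReader(io.StringIO(text)):
        key = tuple(row.items())  # dedupe on the raw row before coercion
        if key in seen:
            continue
        seen.add(key)
        rows.append({k: _coerce(v) for k, v in row.items()})
    return rows
```

The real skill layers formula evaluation and XLSX/ODS handling on top, but the one-pass clean-as-you-load shape is the same.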
#71 - Google Sheets
The Google Sheets skill is the cloud-native equivalent. It supports the full Sheets API, including formulas, charts, and named ranges. Combined with the Google Calendar skill, it is the foundation of "lightweight automation on top of a spreadsheet" workflows. Best for: cloud-collaborative spreadsheet work.
#72 - Airtable
The Airtable skill targets Airtable's REST API. Airtable's data model is more structured than a spreadsheet but less rigid than a database, which makes it a popular operational data store. The skill correctly handles linked records, computed fields, and views. Best for: ops-heavy teams using Airtable as a lightweight backend.
#73 - Pinecone Vector DB
The Pinecone skill exposes Pinecone's vector index for similarity queries. It is the go-to for skills that need RAG over a custom corpus (a company's documentation, a personal email archive, etc.). The skill handles batch upserts, metadata filtering, and namespace management. Best for: RAG-heavy production deployments.
#74 - ChromaDB Local
ChromaDB is the open-source local-first equivalent of Pinecone. The skill works against a local Chroma instance and is the right choice when the corpus is small (under a few million vectors) and privacy is a concern. Best for: local RAG without cloud dependencies.
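Under the hood, any vector store at this scale is doing one conceptually simple thing: ranking stored embeddings by similarity to a query embedding. A brute-force sketch of that operation in plain Python (illustrating the concept, not Chroma's API; the two-dimensional vectors are toy data):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query: list[float], corpus: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the k corpus IDs most similar to the query vector."""
    ranked = sorted(corpus, key=lambda doc_id: cosine(query, corpus[doc_id]),
                    reverse=True)
    return ranked[:k]
```

Real stores replace the linear scan with approximate-nearest-neighbor indexes, which is what lets them stay fast into the millions of vectors.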
#75 - Summarize Skill
The Summarize skill is a small but highly-installed skill that takes a document or URL and returns a concise summary. With 10,000 installs it is a top-25 skill on ClawHub. It is essentially a curated prompt with sensible defaults, and the install is justified by the prompt curation rather than the underlying capability. Best for: any workflow that ingests long documents.
#76 - Internal Search (RAG)
The Internal Search skill is a higher-level RAG wrapper that combines a vector store, a reranker, and a synthesis prompt into a single call. It is the right primitive for "search my company's docs" workflows. The skill works with both Pinecone and ChromaDB as the underlying store. Best for: organizational knowledge bases.
#77 - PDF Reader
The PDF Reader skill extracts text from PDF files, including scanned PDFs (via OCR). Combined with the filesystem tool, it lets the agent ingest the kind of long, dense documents that text-only skills cannot handle. Best for: workflows that consume contracts, papers, or reports.
The data category is one of the most direct ways to make an OpenClaw deployment domain-specific. A finance team's OpenClaw with a Google Sheets skill, a Postgres skill, and a Pinecone skill is qualitatively different from a default install. Heymans has noted that the ratio of "data-aware skills to total skills" is one of the strongest predictors of how useful an agent feels in long-term use, which lines up with what we have seen across hundreds of deployments.
9. Creative and media skills (#78 to #86)
The creative category is the youngest tier on ClawHub and the one with the most uneven quality. Image generation skills, video skills, and audio skills all involve large, expensive model calls, so the skill author has to make tradeoffs about which provider to wrap, how to expose the parameters, and how to handle failures. The result is that creative skills are more diverse than skills in other categories, with multiple winners depending on the use case.
We have included a manageable cross-section here rather than trying to cover every wrapper. The entries cover the dominant providers across image generation, image editing, video, and audio. For users who want a deeper exploration of where AI design tools are heading we have a Claude Design 2026 guide that overlaps significantly with this section.
#78 - Image Generator
The Image Generator skill is the catch-all wrapper for image generation across DALL-E 3, Stable Diffusion XL, and Flux. It exposes a unified interface that abstracts over the provider, falls back to a backup model on failure, and caches generations to avoid duplicate spend. Best for: any workflow that needs occasional images.
#79 - Image Editor
The Image Editor skill handles inpainting, outpainting, and selective region edits. It is meaningfully more useful than the generator skill for iterative work, because most "design" work is editing an existing image rather than generating a new one from scratch. Best for: iterative design workflows.
#80 - Video Generator
The Video Generator skill wraps Runway, Pika, and Sora-compatible providers. It is the most expensive skill in this category by a wide margin (a single 10-second clip can cost $1 to $5 depending on the provider) and the quality is still uneven. Best for: short B-roll generation and storyboards, not full production work.
#81 - ElevenLabs TTS
The ElevenLabs skill produces high-quality voice synthesis with cloned voices, multiple languages, and emotion control. It is the dominant TTS choice for any workflow that produces audio for human consumption (podcasts, voiceover, accessibility). Best for: production-grade voice synthesis.
#82 - Whisper STT
The Whisper skill runs OpenAI's Whisper model for speech-to-text, either via OpenAI's API or locally via the open-source Whisper.cpp. It is the most accurate STT model in the category and the most popular choice for transcription workflows. Best for: meeting transcription, voice notes, podcast transcripts.
#83 - Deepgram STT
The Deepgram skill is the production-grade alternative to Whisper. It is meaningfully faster (real-time streaming at sub-300ms latency) and supports speaker diarization out of the box. The trade-off is cost: Deepgram bills per audio minute, which adds up quickly for long files. Best for: real-time transcription where latency matters.
#84 - OpenAI TTS
The OpenAI TTS skill provides decent voice synthesis at a much lower price point than ElevenLabs. The voices are less natural but the price-per-character is roughly a fifth of ElevenLabs' rate. Best for: bulk synthesis where quality is good-enough rather than excellent.
#85 - Piper Local TTS
The Piper skill runs the open-source Piper TTS model locally on the user's machine. It is fully offline, free, and surprisingly good for casual voice output. The voices are not at ElevenLabs' level but they are easily good enough for personal use. Best for: privacy-sensitive personal deployments.
#86 - Mermaid Diagrams
The Mermaid skill renders Mermaid diagram code into images and embeds them in agent responses. It is a small but uniquely useful skill because it gives the agent a way to communicate structured information visually without invoking a heavy image generation skill. Best for: technical documentation and explainer workflows.
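Part of why this skill is so cheap to use is that Mermaid source is just text, which an agent can emit programmatically. For example, generating a flowchart from a list of edges (the rendering step is what the skill adds; this helper is our own illustration):

```python
def mermaid_flowchart(edges: list[tuple[str, str]]) -> str:
    """Emit Mermaid flowchart source for a list of (from, to) edges."""
    lines = ["flowchart TD"]  # TD = top-down layout
    for src, dst in edges:
        lines.append(f"    {src} --> {dst}")
    return "\n".join(lines)

diagram = mermaid_flowchart([("User", "Agent"), ("Agent", "Tool")])
```

The skill takes that source, renders it to an image, and embeds the result in the reply, so structured relationships cost a few tokens instead of an image-generation call.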
The creative category is also where the cost variability between skills is widest. A workflow that uses ElevenLabs and Runway can easily run $50 to $100 a day at heavy use, while the same workflow using Piper and a static image generator might cost less than $1. Picking the right provider for the right task is a real engineering decision in this category, not a stylistic preference.
10. Business and sales skills (#87 to #94)
The business category is where OpenClaw's enterprise viability gets tested. Skills here connect to the systems-of-record that companies actually run on: CRMs, billing systems, e-commerce platforms, document signing services. A misbehaving business skill can charge a customer the wrong amount, send a contract to the wrong recipient, or update the wrong record in Salesforce. The risk profile is uniquely high.
The skills we have included are the ones that have demonstrated stability across multiple production deployments. We have deliberately omitted some popular skills that fail under load (a couple of well-marketed CRM wrappers in particular have been flagged in security audits over the past quarter), and we have favored the official-API wrappers over the community-built alternatives wherever both exist.
#87 - Stripe
The Stripe skill exposes Stripe's API for charge creation, customer management, subscription updates, and webhook event handling. It is the most-used payment skill on ClawHub. The skill correctly handles idempotency keys, which is the single most important safety property a billing skill can have. Best for: SaaS billing automation. For deeper context on the payment-rail evolution, see our Tempo and agentic payments guide.
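Idempotency deserves a concrete illustration, because it is the property that makes a retried charge safe. Stripe accepts a unique string per logical request via the `Idempotency-Key` header and returns the original result for any retry carrying the same key. A sketch of deriving one stable key per logical charge (the derivation scheme below is our own, not the skill's):

```python
import hashlib

def idempotency_key(customer_id: str, invoice_id: str) -> str:
    """One stable key per logical charge: a retry of the same charge
    reuses the same key, so the processor deduplicates it."""
    return hashlib.sha256(f"charge:{customer_id}:{invoice_id}".encode()).hexdigest()

# A retried request carries the SAME key as the original attempt...
k1 = idempotency_key("cus_123", "inv_789")
k2 = idempotency_key("cus_123", "inv_789")
# ...while a different logical charge gets a different key.
k3 = idempotency_key("cus_123", "inv_790")
```

The failure mode this prevents is the classic one: a timeout on a charge request, an agent retry, and a customer billed twice.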
#88 - Salesforce
The Salesforce skill targets Salesforce's REST API. It supports Lead, Contact, Account, and Opportunity objects, and respects field-level security and validation rules. Salesforce's data model is deep and the skill exposes only the most-used objects, so users with custom objects often have to extend the skill themselves. Best for: Salesforce-native sales teams.
#89 - HubSpot
The HubSpot skill is the parallel for HubSpot's CRM. It is meaningfully simpler than the Salesforce skill because HubSpot's data model is simpler. The skill supports contacts, companies, deals, and the marketing automation primitives. Best for: HubSpot-centric small and mid-sized businesses.
#90 - Shopify
The Shopify skill exposes Shopify's Admin API for product management, order processing, and inventory queries. It is the right primitive for "watch my Shopify orders and trigger fulfillment workflows" patterns. The skill correctly handles Shopify's webhook-based event model. Best for: Shopify store owners.
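The webhook handling is worth a sketch, because an unverified webhook endpoint lets anyone fake an "order placed" event. Shopify signs each delivery with an `X-Shopify-Hmac-Sha256` header: a base64-encoded HMAC-SHA256 of the raw request body, keyed with the webhook's shared secret. Verification, using only the stdlib:

```python
import base64
import hashlib
import hmac

def verify_shopify_webhook(raw_body: bytes, header_hmac: str, secret: str) -> bool:
    """Recompute the body's HMAC and compare it to the header value
    in constant time (compare_digest avoids timing side channels)."""
    digest = hmac.new(secret.encode(), raw_body, hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode()
    return hmac.compare_digest(expected, header_hmac)
```

Note that the HMAC must be computed over the raw bytes of the body, before any JSON parsing, or the check will fail on whitespace differences.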
#91 - QuickBooks
The QuickBooks skill connects to QuickBooks Online for invoicing, expense tracking, and basic accounting queries. It is one of the most stable business skills despite QuickBooks' notoriously fragile API, which speaks well of the skill's authors. Best for: small business bookkeeping automation.
#92 - DocuSign
The DocuSign skill creates envelopes, sends contracts, and queries signature status. It is the right primitive for the "the agent sends a contract for signature and pings me when signed" workflow, which is more useful than it sounds for solo founders and consultants. Best for: contract-heavy workflows.
#93 - Calendly
The Calendly skill exposes Calendly's API for booking creation, availability queries, and event-type management. Combined with the Email skill, it underpins the "agent finds a meeting time and sends the invite" workflow. Best for: solo professionals managing many meetings.
#94 - LinkedIn
The LinkedIn skill is the most cautious entry in this category because LinkedIn's API is the most restrictive. The skill is read-only by default (it queries connections and posts but does not write) and falls back to browser-based actions for the few write operations users actually want. Best for: prospecting and outreach research.
The business category is also where most users graduate from a personal OpenClaw to a managed agent platform. The cost of a misbehaving CRM update is high enough that many teams want a hosted environment with audit logs, role-based access, and recoverable state. Platforms like o-mega.ai and Claude Managed Agents cover that need without abandoning the OpenClaw mental model.
11. Smart home and personal skills (#95 to #100)
The personal category is the smallest tier on this list and arguably the most fun. A skill that turns your lights off when you leave the room is not as economically transformative as a Stripe integration, but it is genuinely magical the first time it works. We have included six entries to round out the list at exactly 100, focused on the integrations that actually deliver on that magic-the-first-time-it-works promise.
The pattern here is that the best personal skills wrap stable, well-documented APIs. Smart home APIs are notoriously fragile (Alexa, Google Home, and the various manufacturer apps all have weekly outages), so the skills we have chosen target the most reliable underlying systems we have tested.
#95 - Home Assistant
The Home Assistant skill connects to a self-hosted Home Assistant instance and exposes every entity it manages: lights, switches, thermostats, locks, sensors. It is the foundation of any serious home-automation OpenClaw deployment because Home Assistant itself is the most reliable hub. Best for: technically-inclined homeowners running Home Assistant.
#96 - Spotify
The Spotify skill targets the Spotify Web API for playback control, playlist management, and listening-history queries. It is the right primitive for the "play something I will like" pattern, especially when combined with the user's calendar context (commute music vs deep-work music). Best for: any Spotify Premium user.
#97 - Philips Hue
The Philips Hue skill targets the Hue Bridge directly and is the right primitive when you do not want to depend on a Home Assistant install. It supports lights, scenes, schedules, and motion sensors. Best for: Hue-only setups.
#98 - Sonos
The Sonos skill exposes the Sonos Local API for playback control and grouping. It is the right primitive for whole-home audio orchestration. Best for: Sonos owners.
#99 - Nest Thermostat
The Nest skill targets the Nest Device API for temperature control and schedule management. It is one of the more fragile entries on this list because Google has changed the Nest API contract twice in the past year. Best for: Nest owners willing to monitor for skill updates.
#100 - Apple Health
The Apple Health skill reads HealthKit data on macOS and iOS. It supports activity, sleep, heart rate, and workout queries. Combined with the calendar skill, it is the foundation of "schedule a workout when my recovery score allows" patterns. Best for: Apple-platform health-conscious users.
12. The full ranked list
This section consolidates the 100 entries into one global ranking by Final Score. It is the master reference for the article, and the place to come back to when you want to compare across categories. Order is descending by Final Score, and ties are broken alphabetically.
| # | Name | Category | Final |
|---|---|---|---|
| 1 | Browser Relay | Native | 9.9 |
| 2 | Filesystem Tool | Native | 9.7 |
| 3 | Memory System | Native | 9.6 |
| 4 | Subagents | Native | 9.3 |
| 5 | Lobster Workflow Engine | Native | 9.2 |
| 6 | GitHub Skill | Coding | 9.1 |
| 7 | Heartbeat Scheduler | Native | 9.1 |
| 8 | Tavily Search | Web Research | 9.0 |
| 9 | MCP Client (built-in) | Native | 8.9 |
| 10 | Voice Loop | Native | 8.8 |
| 11 | Plugin Bundle System | Native | 8.7 |
| 12 | Slash Commands System | Native | 8.6 |
| 13 | Web Fetch (built-in) | Native | 8.6 |
| 14 | Linear | Coding | 8.5 |
| 15 | Brave Search | Web Research | 8.5 |
| 16 | Google Calendar | Productivity | 8.5 |
| 17 | Notion | Productivity | 8.4 |
| 18 | Slack | Comms | 8.4 |
| 19 | WhatsApp | Comms | 8.4 |
| 20 | Telegram | Comms | 8.3 |
| 21 | Stripe | Business | 8.3 |
| 22 | PostgreSQL | Data | 8.3 |
| 23 | Whisper STT | Creative/Media | 8.3 |
| 24 | Capability Evolver | Productivity | 8.2 |
| 25 | Exa Semantic Search | Web Research | 8.2 |
| 26 | Gmail (OAuth) | Comms | 8.2 |
| 27 | Pinecone Vector DB | Data | 8.1 |
| 28 | ElevenLabs TTS | Creative/Media | 8.1 |
| 29 | Spreadsheet Guru | Data | 8.1 |
| 30 | ACP Agents | Native | 8.0 |
| 31 | Test Runner | Coding | 8.0 |
| 32 | Email (SMTP/IMAP) | Comms | 8.0 |
| 33 | Discord | Comms | 8.0 |
| 34 | Vercel | Coding | 7.9 |
| 35 | PR Reviewer | Coding | 7.9 |
| 36 | Obsidian | Productivity | 7.9 |
| 37 | Firecrawl | Web Research | 7.9 |
| 38 | Google Sheets | Data | 7.9 |
| 39 | Internal Search (RAG) | Data | 7.8 |
| 40 | Clawflows | Productivity | 7.8 |
| 41 | Image Generator | Creative/Media | 7.8 |
| 42 | Todoist | Productivity | 7.7 |
| 43 | Docker | Coding | 7.7 |
| 44 | Salesforce | Business | 7.7 |
| 45 | HubSpot | Business | 7.7 |
| 46 | Calendly | Business | 7.7 |
| 47 | Sentry | Coding | 7.6 |
| 48 | Microsoft Teams | Comms | 7.6 |
| 49 | Google Drive | Productivity | 7.6 |
| 50 | Airtable | Data | 7.6 |
| 51 | PDF Reader | Data | 7.6 |
| 52 | Summarize Skill | Data | 7.5 |
| 53 | Code Generator | Coding | 7.5 |
| 54 | Wikipedia Reader | Web Research | 7.5 |
| 55 | YouTube Transcripts | Web Research | 7.5 |
| 56 | Gateway Web Interface | Native | 7.5 |
| 57 | GitLab | Coding | 7.4 |
| 58 | Postman / API Tester | Coding | 7.4 |
| 59 | Cursor Sync | Coding | 7.4 |
| 60 | Web Fetch (advanced) | Web Research | 7.4 |
| 61 | Apple Calendar | Productivity | 7.4 |
| 62 | MongoDB | Data | 7.4 |
| 63 | DocuSign | Business | 7.4 |
| 64 | Image Editor | Creative/Media | 7.3 |
| 65 | Deepgram STT | Creative/Media | 7.3 |
| 66 | Spotify | Smart Home | 7.3 |
| 67 | Home Assistant | Smart Home | 7.3 |
| 68 | Things 3 | Productivity | 7.2 |
| 69 | Dropbox | Productivity | 7.2 |
| 70 | Reminders | Productivity | 7.2 |
| 71 | Pocket / Read Later | Productivity | 7.2 |
| 72 | Stack Overflow Lookup | Coding | 7.2 |
| 73 | ESLint Code Quality | Coding | 7.2 |
| 74 | Render | Coding | 7.1 |
| 75 | Jira | Coding | 7.1 |
| 76 | Signal | Comms | 7.1 |
| 77 | iMessage | Comms | 7.1 |
| 78 | SMS (Twilio) | Comms | 7.1 |
| 79 | OpenAI TTS | Creative/Media | 7.1 |
| 80 | Mermaid Diagrams | Creative/Media | 7.0 |
| 81 | SQLite | Data | 7.0 |
| 82 | ChromaDB Local | Data | 7.0 |
| 83 | Shopify | Business | 7.0 |
| 84 | QuickBooks | Business | 7.0 |
| 85 | Apple Health | Smart Home | 7.0 |
| 86 | ArXiv Lookup | Web Research | 6.9 |
| 87 | Perplexity | Web Research | 6.9 |
| 88 | ScrapingBee | Web Research | 6.9 |
| 89 | ScreenshotOne | Web Research | 6.9 |
| 90 | RSS Reader | Web Research | 6.8 |
| 91 | Zoom | Comms | 6.8 |
| 92 | Matrix | Comms | 6.7 |
| 93 | Piper Local TTS | Creative/Media | 6.7 |
| 94 | Video Generator | Creative/Media | 6.6 |
| 95 | Philips Hue | Smart Home | 6.6 |
| 96 | Sonos | Smart Home | 6.5 |
| 97 | Nest Thermostat | Smart Home | 6.4 |
| 98 | | Business | 6.4 |
| 99 | Test Runner (legacy) | Coding | 6.3 |
| 100 | Embedded Pi (preview) | Native (preview) | 6.2 |
A few patterns are worth calling out from the consolidated table. The top tier (scores 8.0+) is dominated by native tools and the most foundational skills (GitHub, Tavily, Google Calendar). These are the entries that we recommend nearly every OpenClaw user install, regardless of role. The middle tier (scores 7.0 to 7.9) is where skill choice becomes role-specific: a developer's middle tier will look very different from a salesperson's. The bottom tier (scores below 7.0) is the long tail of niche or legacy skills that only the right user will benefit from.
A separate observation: the bottom of the list is not a list of bad skills. It is a list of skills with smaller addressable audiences. The Sonos skill is excellent if you own Sonos and useless if you do not. The score reflects the breadth of the audience, not the quality of the implementation.
13. How to choose your stack from the list
A common failure mode for new OpenClaw users is to install thirty skills in the first week and then watch the agent get slower, more confused, and more error-prone. The agent's performance degrades as the skill count grows because every installed skill adds prompt overhead, increases the chance of tool-selection mistakes, and complicates the security review surface. A focused stack of eight to twelve skills almost always outperforms a sprawling stack of forty.
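The overhead argument can be made concrete with a back-of-the-envelope model. All numbers below are illustrative assumptions, not measured OpenClaw figures: we assume each installed skill contributes its tool descriptions to the system prompt, eating into the context budget left for actual work.

```python
CONTEXT_WINDOW = 200_000   # assumed model context size (tokens)
BASE_PROMPT = 6_000        # assumed agent system prompt overhead
TOKENS_PER_SKILL = 450     # assumed average tool-description size per skill

def remaining_budget(skill_count: int) -> int:
    """Tokens left for conversation, files, and tool output
    after the fixed prompt overhead is paid."""
    return CONTEXT_WINDOW - BASE_PROMPT - skill_count * TOKENS_PER_SKILL

# Compare a focused stack to a sprawling one.
for n in (8, 12, 40):
    print(f"{n:>2} skills -> {remaining_budget(n):,} tokens free")
```

Under these assumed numbers the difference between 12 and 40 skills is over 12,000 tokens of permanent overhead on every single turn, before the agent has read a single file, and the tool-selection error rate compounds on top of that.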
The right way to build your stack is to start with the 15 native tools (which are already there) and then add only the skills that solve a problem you have hit twice. The pattern we have seen work in practice is roughly: install GitHub immediately if you write code, install Google Calendar and Notion (or Obsidian) for personal organization, install one search skill (Tavily is the default), install one messaging skill (whichever channel you actually use), and stop there for the first month. Add more only when you hit a recurring frustration that an additional skill solves.
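The "solve a problem you have hit twice" rule is easy to operationalize: keep a log of frictions, and only shortlist a skill once the same problem recurs. A minimal sketch, with invented log entries for illustration:

```python
from collections import Counter

# Hypothetical friction log; the entries are invented examples.
friction_log = [
    "needed to check a PR from my phone",   # -> maps to the GitHub skill
    "forgot a meeting again",               # -> maps to Google Calendar
    "needed to check a PR from my phone",   # repeat: now qualifies
    "wanted lyrics while coding",           # one-off: ignore for now
]

def shortlist(log, threshold=2):
    """Return only the problems that have recurred at least
    `threshold` times, sorted for stable output."""
    counts = Counter(log)
    return sorted(p for p, n in counts.items() if n >= threshold)

print(shortlist(friction_log))
# Only the repeated problem survives the filter; the one-offs wait.
```

The point is not the script itself but the discipline it encodes: a skill earns an install by appearing in the log twice, not by looking interesting on ClawHub.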
The other consideration is security review. Every skill you install runs in your agent's process and, in the worst case, has access to whatever tools the agent has access to. The ClawHavoc incident in early 2026 saw 1,184 malicious skills land on ClawHub, mostly using typosquatted names and stealth update mechanisms (VirusTotal-OpenClaw integration announcement). The lesson is that install discipline matters: prefer skills with high install counts and active maintainers, prefer official-API wrappers over community-built shims, and audit the skill source before installing anything that touches sensitive data.
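One of those checks can be automated before you ever open the skill source. The sketch below flags a candidate whose name is suspiciously close to, but not identical to, a well-known skill, which was the dominant ClawHavoc pattern. The trusted-name set and the similarity cutoff are arbitrary illustrations, not ClawHub policy:

```python
from difflib import SequenceMatcher

# Assumed allowlist of well-known skill names (from the list above).
TRUSTED = {"github", "tavily", "notion", "stripe"}

def looks_typosquatted(candidate: str, cutoff: float = 0.8) -> bool:
    """Flag names that are near-misses of a trusted skill name.
    The 0.8 cutoff is an illustrative choice, not a ClawHub rule."""
    name = candidate.lower()
    if name in TRUSTED:
        return False  # an exact match to a trusted name is fine
    return any(
        SequenceMatcher(None, name, trusted).ratio() >= cutoff
        for trusted in TRUSTED
    )

print(looks_typosquatted("githhub"))  # near-miss of "github": flagged
print(looks_typosquatted("weather"))  # unrelated name: passes
```

A check like this catches the lazy typosquats; it does nothing against a cleanly named malicious skill, which is why the source audit in the paragraph above remains the real control.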
For users who want a managed environment with curated skill sets and central audit, the closest comparable platforms are Claude Code, Claude Managed Agents, and o-mega.ai. All three sit one layer above OpenClaw and provide additional governance at the cost of some flexibility. The choice between OpenClaw and a managed equivalent is mostly about how much governance you need, not about the underlying capability set, which has been converging across the category since mid-2025. Our OpenClaw alternatives list goes deeper on the comparison.
14. The future of skill ecosystems and where this is heading
The 13,729 skills currently on ClawHub represent the first wave of a category that did not really exist a year ago. The second wave, already starting in early 2026, is skill bundles: curated capability sets sold as units rather than individual installs. The third wave, which we expect to start in earnest in late 2026, is agent profiles: pre-tuned combinations of model, prompt, memory, and skills that ship as a single artifact and can be deployed in minutes.
The structural force driving this is that agents are becoming more like applications and less like libraries. A library is something a developer assembles. An application is something a non-developer uses. The first generation of agent tooling assumed users would do the assembly themselves. The second generation assumes they will not. The economics of this shift favor curators (who package and tune) over individual skill authors, which is a familiar pattern from every previous platform shift (the App Store ate individual apps, GitHub Actions ate individual scripts, Docker Hub ate individual images, and so on).
This has direct consequences for the top-100 list: the right unit of comparison will gradually shift from the individual skill to the bundle and then to the profile. We expect that by mid-2027 a list like this one will read more like a list of bundles and pre-tuned agents than a list of single-purpose skills. The native tools will still be at the top because they are the foundation, but the long tail will collapse into a smaller number of curated stacks.
The other shift worth flagging is the MCP convergence. As more agent runtimes (OpenClaw, Claude Code, Cursor, Cline, Continue) adopt the Model Context Protocol, individual MCP servers become reusable across runtimes. A team that builds an internal MCP server for their CRM does not have to also build a Claude Code skill, an OpenClaw skill, and a Cursor extension. They build one server, and all the runtimes consume it. This is a structural reduction in the cost of agent integration, and it favors deployments that lean on MCP over deployments that depend on runtime-specific skill formats. We covered the MCP wave at length in our 50 best MCP servers analysis.
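The build-once, consume-everywhere economics can be shown in a few lines. This is a deliberately simplified illustration, not the real MCP wire format (the actual protocol is JSON-RPC based; see the MCP specification), and the field names and runtime list are invented for the example:

```python
import json

# One tool description, published once by the (hypothetical) internal
# CRM server. Field names here are illustrative, not MCP's actual schema.
crm_tool = {
    "name": "crm.lookup_account",
    "description": "Fetch a CRM account record by company name.",
    "input_schema": {
        "type": "object",
        "properties": {"company": {"type": "string"}},
        "required": ["company"],
    },
}

descriptor = json.dumps(crm_tool)  # serialized once, server-side

# Every MCP-speaking runtime deserializes the same descriptor instead
# of requiring its own skill or extension format.
runtimes = ["OpenClaw", "Claude Code", "Cursor"]
loaded = {rt: json.loads(descriptor) for rt in runtimes}

print(all(tool["name"] == "crm.lookup_account" for tool in loaded.values()))
```

Three runtimes, one integration: the marginal cost of supporting an additional runtime drops to roughly zero, which is exactly the structural reduction described above.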
Yuma Heymans has argued in various posts that the awareness gap between users and capabilities is the biggest constraint on agent adoption today, not the capabilities themselves. The 100 entries above are the answer to that gap for the OpenClaw community as of May 2026. A user who understands the layered hierarchy of native tools, bundles, MCP servers, and skills, and who picks a focused stack out of the long tail, can build a personal agent that genuinely changes how they work. A user who installs everything and hopes for the best will be back in a month wondering why their agent is slower than ChatGPT.
15. Final notes
This list will be wrong in six months. That is the nature of a category growing at ~18% per month. We will keep it updated; the canonical version lives at o-mega.ai/articles and will be re-scored every quarter as the underlying ecosystem evolves. If you spot a skill that should have made this list, the right move is not to argue about the score: it is to install it, run it for a month, and report back what your day-to-day usage looked like.
For users who want a deeper foundation before installing anything, we recommend reading our OpenClaw setup guide first, then the workforce guide, and only then this list. The setup guide explains the layered architecture in more depth, and the workforce guide explains how multiple OpenClaw instances coordinate. With those two as background, this list becomes a menu rather than a wall of names.
This guide reflects the OpenClaw skill and tool ecosystem as of May 2026. Pricing, install counts, and individual skill quality change frequently, so verify current details on ClawHub before installing. The authors maintain no financial relationship with the skill providers listed.