Fifty questions. Mostly short-answer. Some marked hard. You write your answer, reveal the correct one, rate yourself honestly. Your progress is saved in your browser. No one is watching.
False. A skill is a folder. SKILL.md is the entrypoint, but the entire file system is intentional context engineering and progressive disclosure.
Source: Shihipar (LinkedIn) · code.claude.com/skills
Enterprise > Personal > Project > Plugin. Plugin skills use a plugin-name:skill-name namespace so they can't conflict with other levels.
Source: code.claude.com/skills §"Where skills live"
B — Procedural knowledge. Verbatim from support.claude.com/12512176: "Skills provide procedural knowledge—instructions for how to complete specific tasks or workflows." Not framed by Anthropic as expertise import.
Source: support.claude.com/12512176-what-are-skills
36+ tools — Junie, Gemini CLI, GitHub Copilot, VS Code, Cursor, OpenAI Codex, Claude Code, Roo Code, Mistral Vibe, Databricks, Snowflake, Block (Goose), JetBrains, ByteDance, and more.
Source: agentskills.io client showcase
name and description. Everything else (license, compatibility, metadata, allowed-tools) is optional.
Source: agentskills.io/specification
Any three of: when_to_use, argument-hint, arguments, disable-model-invocation, user-invocable, model, effort, context: fork, agent, hooks, paths, shell.
Source: code.claude.com/skills frontmatter table vs agentskills.io/specification
The name field MUST match the parent directory name. Verbatim from agentskills.io/specification: "Must match the parent directory name." Not documented anywhere else.
Source: agentskills.io/specification
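These two spec-level invariants (required fields, name-matches-directory) are easy to check before publishing. A minimal sketch, not an official validator; the naive key:value parsing assumes flat frontmatter:

```python
import re
from pathlib import Path

REQUIRED_FIELDS = ("name", "description")  # per agentskills.io/specification

def validate_skill(skill_dir: Path) -> list[str]:
    """Check required frontmatter fields and that `name` matches the
    parent directory name. Returns a list of problems (empty = OK)."""
    skill_md = skill_dir / "SKILL.md"
    if not skill_md.exists():
        return ["SKILL.md missing"]
    match = re.match(r"---\n(.*?)\n---", skill_md.read_text(encoding="utf-8"), re.DOTALL)
    if not match:
        return ["no YAML frontmatter block"]
    # naive key: value parsing -- enough for flat frontmatter
    fields = {
        k.strip(): v.strip()
        for k, v in (line.split(":", 1) for line in match.group(1).splitlines() if ":" in line)
    }
    errors = [f"missing required field: {f}" for f in REQUIRED_FIELDS if not fields.get(f)]
    if fields.get("name") and fields["name"] != skill_dir.name:
        errors.append(f"name {fields['name']!r} does not match directory {skill_dir.name!r}")
    return errors
```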
Tier 1 — Metadata (name + description, ~100 tokens, always loaded). Tier 2 — Instructions (full SKILL.md body, <5,000 tokens, loaded on activation). Tier 3 — Resources (scripts, references, assets — loaded only when needed).
Source: agentskills.io/specification + Cookbook 01
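The three tiers can be modeled as a running token ledger. A toy sketch (the class is illustrative; the 100-token default is the spec's approximate metadata guidance, not a measurement):

```python
from dataclasses import dataclass, field

@dataclass
class SkillContext:
    """Toy model of the three disclosure tiers and their token cost."""
    metadata_tokens: int = 100                            # Tier 1: always loaded
    body_tokens: int = 0                                  # Tier 2: on activation
    resource_tokens: dict = field(default_factory=dict)   # Tier 3: per file, on demand

    def activate(self, body_tokens: int) -> None:
        """Load the full SKILL.md body (Tier 2)."""
        self.body_tokens = body_tokens

    def load_resource(self, path: str, tokens: int) -> None:
        """Load one bundled file (Tier 3) only when needed."""
        self.resource_tokens[path] = tokens

    @property
    def total(self) -> int:
        return self.metadata_tokens + self.body_tokens + sum(self.resource_tokens.values())
```

Until `activate()` is called, the skill costs only its Tier 1 metadata; each Tier 3 file adds cost only when explicitly loaded.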
Step 1: The description sits in the system prompt. Step 2: Claude decides whether to use the skill (a description-driven gate). Step 3: Claude invokes the Bash tool to read SKILL.md (an actual file read, not context injection). Step 4: Claude chooses which bundled files to load based on what SKILL.md says.
Source: Anthropic engineering blog — "Equipping agents for the real world with Skills"
Unused: ~30 tokens (metadata only). Invoked: ~5,000 additional tokens (full SKILL.md body). Total per file-generating request: 6,000–10,000 tokens typical.
Source: Cookbook 01
Per-skill: 5,000 tokens kept (the first 5,000 of each re-attached skill). Combined: 25,000 tokens total budget for re-attached skills, fills from most recent. Older skills can be dropped after compaction.
Source: code.claude.com/skills §"Skill content lifecycle"
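The lifecycle above can be sketched as a budget-filling loop, assuming skills are considered most-recent-first (the constant and function names are illustrative, not Claude Code internals):

```python
PER_SKILL_CAP = 5_000      # first 5,000 tokens of each re-attached skill
COMBINED_BUDGET = 25_000   # total budget across all re-attached skills

def reattach_after_compaction(skills):
    """skills: list of (name, token_count), most recently invoked first.
    Returns (name, tokens_kept) pairs that fit the combined budget;
    older skills are dropped once the budget is exhausted."""
    kept, remaining = [], COMBINED_BUDGET
    for name, tokens in skills:
        take = min(tokens, PER_SKILL_CAP, remaining)
        if take <= 0:
            break  # budget exhausted -- everything older is dropped
        kept.append((name, take))
        remaining -= take
    return kept
```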
False. Verbatim: "the rendered SKILL.md content enters the conversation as a single message and stays there for the rest of the session. Claude Code does not re-read the skill file on later turns." Frozen at invocation.
Source: code.claude.com/skills
500 lines. Hard limit, called out in the official Anthropic checklist. (The 150-line figure that floats around is Vercel's empirical median, not the spec.)
Source: Anthropic best-practices §"Token budgets"
All .md files in the root directory load automatically when the skill activates. Verbatim from Cookbook 03: "All .md files in the root directory will be available to Claude when the skill is loaded." Subfolder .md files load on-demand.
Source: Cookbook 03
One level. Reason: Claude may partially read deeply nested files (e.g., via head -100) and miss information.
Source: Anthropic best-practices §"Avoid deeply nested references"
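A lint pass for this guidance might flag any .md reference sitting more than one directory below the skill root. A heuristic sketch, not an official tool:

```python
from pathlib import Path

def overly_nested_references(skill_dir: Path) -> list[Path]:
    """Flag .md files more than one directory below the skill root,
    which Claude may only partially read (e.g., via head -100)."""
    flagged = []
    for path in skill_dir.rglob("*.md"):
        depth = len(path.relative_to(skill_dir).parts) - 1  # 0 = root level
        if depth > 1:
            flagged.append(path)
    return flagged
```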
${CLAUDE_PLUGIN_DATA}, surfaced by Thariq Shihipar (Anthropic Claude Code team) on LinkedIn. Skill directory data is wiped on upgrade; this variable points to the stable per-plugin folder. Undocumented in official Anthropic public docs.
Source: Shihipar — "Lessons from Building Claude Code" (LinkedIn)
1024 characters.
Source: Anthropic best-practices §"YAML Frontmatter requirements"
1,536 characters. The combined description + when_to_use text is capped at 1,536 chars in the skill listing for context management.
Source: code.claude.com/skills
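The size limits quoted in the answers above (1,024-char description, 1,536-char combined listing text, 500-line body) lend themselves to a pre-flight check. A sketch, not an official linter:

```python
DESCRIPTION_CAP = 1024   # description field limit
LISTING_CAP = 1536       # description + when_to_use in the skill listing
BODY_LINE_CAP = 500      # SKILL.md body hard limit

def lint_limits(description: str, when_to_use: str, body: str) -> list[str]:
    """Check a skill's text against the documented size limits."""
    problems = []
    if len(description) > DESCRIPTION_CAP:
        problems.append(f"description exceeds {DESCRIPTION_CAP} chars")
    if len(description) + len(when_to_use) > LISTING_CAP:
        problems.append(f"description + when_to_use exceeds {LISTING_CAP} chars")
    if body.count("\n") + 1 > BODY_LINE_CAP:
        problems.append(f"body exceeds {BODY_LINE_CAP} lines")
    return problems
```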
<Domain> expert. ALWAYS invoke this skill when the user asks about <trigger topics>. Do not <alternative action> directly — use this skill first.
Source: Seleznov 650-trial study
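Applied to a hypothetical invoice-processing skill (the name, triggers, and wording here are illustrative, not from the study), the template yields frontmatter like:

```yaml
---
name: invoice-extraction
description: >
  Invoice-processing expert. ALWAYS invoke this skill when the user asks
  about parsing, extracting, or validating invoices. Do not write ad-hoc
  extraction code directly -- use this skill first.
---
```

Note all three components: domain expertise claim, imperative ALWAYS trigger, and the negative constraint against the default behavior.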
Variant A (Passive) + C3 (Hook condition): activation collapses to 37.0%. The single catastrophic failure cell. CLAUDE.md "rescues" Variant A back to 100% in C4 (both hook AND CLAUDE.md).
Source: Seleznov
OR ≈ 20.6, Cohen's h = 1.83 ("huge effect" by Cohen's conventions), p < 0.0001. From Cochran-Mantel-Haenszel test, Variant C vs Variant A.
Source: Seleznov
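Cohen's h is the difference of arcsine-transformed proportions, so the reported 1.83 can be checked directly, assuming the two cells are approximately 100% (imperative Variant C) and 37% (the passive failure cell). The odds ratio cannot be recovered this way; it comes from the stratified CMH test across conditions:

```python
import math

def cohens_h(p1: float, p2: float) -> float:
    """Cohen's h: difference of arcsine-transformed proportions,
    h = 2*asin(sqrt(p1)) - 2*asin(sqrt(p2))."""
    phi = lambda p: 2 * math.asin(math.sqrt(p))
    return phi(p1) - phi(p2)

h = cohens_h(1.0, 0.37)  # ~100% vs 37% activation -> approximately 1.83
```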
If ALL skills use "ALWAYS invoke" language with overlapping triggers, the directive may lose force through dilution. Multiple skills claiming the same keywords could confuse Claude about which to invoke. No controlled test exists yet — open opportunity for someone to publish first.
Source: Seleznov §"Limitations"
(1) Activation failure: Claude does not invoke the skill at all and defaults to its own approach. (2) Execution failure: Claude loads the skill but skips internal procedural steps that delay output without producing visible content.
Source: Marc Bara — "Claude Skills Have Two Reliability Problems, Not One"
(1) The user request sits at the end of the context window, where recency gives it the strongest attention weight. (2) Skills are "further back, procedural, and meta-level." (3) RLHF reinforced direct output: "produce the requested output directly is the pattern that got rewarded most consistently." Result: verification steps lose because they delay output, add no visible content, and sit at maximum distance from the initial prompt.
Source: Bara
Before (skippable): "Before delivering, verify that every milestone aligns with the stated scope."
After (visible-output required): "Do not deliver the final charter until you have output a verification block listing each milestone and the in-scope deliverable it maps to. Flag any milestone that references an out-of-scope item."
Principle: force visible output from verification steps.
Source: Bara
False. Bara verbatim: "No one has run a controlled experiment on step-level execution the way Seleznov did for activation. The 650-trial methodology exists. The step-execution version of that experiment does not, yet." Open research opportunity.
Source: Bara
Any three of: (1) Passive/suggestion wording in descriptions; (2) Late-stage procedural steps without visible output; (3) Assuming output completeness implies process compliance; (4) Relying on reminder-style hooks; (5) Missing the negative-constraint component; (6) Treating activation and execution as one problem.
Source: Bara
code-execution-2025-08-25 (cookbook version; API ref says 2025-05-22 — discrepancy unresolved), skills-2025-10-02, files-api-2025-04-14.
Source: Cookbook 01 + skills-guide
8 skills maximum per request in the container.skills array.
Source: skills-guide
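The beta flags and the 8-skill cap can be sketched as a raw request payload. The field names below follow the answers above, but treat the exact wire shape as an assumption and check the current skills-guide before relying on it:

```python
# Beta flag versions from the cookbook; the API ref lists a different
# code-execution date, so verify against current docs before use.
BETA_FLAGS = "code-execution-2025-08-25,skills-2025-10-02,files-api-2025-04-14"
MAX_SKILLS_PER_REQUEST = 8

def build_request(skill_ids: list[str]) -> dict:
    """Assemble the anthropic-beta header and a container.skills block,
    enforcing the 8-skill cap client-side. A sketch, not the official SDK."""
    if len(skill_ids) > MAX_SKILLS_PER_REQUEST:
        raise ValueError(f"at most {MAX_SKILLS_PER_REQUEST} skills per request")
    return {
        "headers": {"anthropic-beta": BETA_FLAGS},
        "container": {
            "skills": [
                {"type": "custom", "skill_id": sid, "version": "latest"}
                for sid in skill_ids
            ],
        },
    }
```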
30 MB total upload size. Hard cap; includes all bundled files.
Source: skills-guide §"Creating a Skill"
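Since the cap includes all bundled files, summing the bundle before uploading avoids a failed request. A client-side pre-flight sketch, not part of the API:

```python
from pathlib import Path

MAX_UPLOAD_BYTES = 30 * 1024 * 1024  # 30 MB hard cap, all bundled files included

def upload_size_ok(skill_dir: Path) -> bool:
    """Return True if every file in the skill bundle fits under the cap."""
    total = sum(p.stat().st_size for p in skill_dir.rglob("*") if p.is_file())
    return total <= MAX_UPLOAD_BYTES
```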
False. Even versions/retrieve returns only metadata (id, created_at, description, directory, name, skill_id, type, version). There is NO API endpoint that returns the SKILL.md body content. Once uploaded, the file content cannot be retrieved via API. Implication: callers must keep their own local source-of-truth copy.
Source: API beta skills/versions/retrieve
Anthropic-managed: date-based (e.g., 20251013). Custom: Unix epoch timestamp (e.g., 1759178010641129). Both support "latest" as a version string.
Source: skills-guide
name, description, and directory (top-level directory name from upload). Caller doesn't pass them as separate API parameters — they're extracted from the uploaded SKILL.md and folder structure.
Source: API beta skills/versions/create
pptx, xlsx, docx, pdf. All accessible via container.skills with type: "anthropic".
Source: Cookbook 01 + skills-guide
"These are the same Skills that power Claude Creates Files." Verbatim. So one underlying skills tech, three surfaces: API (developer), Claude.ai (consumer Creates Files), Cookbook (educational).
Source: Cookbook 02
2-3 sheets per workbook. "Works reliably and generates quickly." Beyond that, segment into multiple files using a pipeline pattern.
Source: Cookbook 02
The container parameter is recognized only by client.beta.messages.create(), not by client.messages.create(). If you use the non-beta call with skills, the call fails (often silently or with an opaque error). This is Issue #2 in Cookbook 01's troubleshooting.
Source: Cookbook 01 + skills-guide
Skills + Connectors + Sub-agents. Verbatim from support 13837440: "Each plugin bundles together skills, connectors, and sub-agents into a single package."
Source: support.claude.com/13837440
Verbatim: "There's no approval workflow for org-wide sharing. If you enable Share with organization, any member can publish a skill to the directory without review." Plus: audit log captures share events but NOT skill content; no admin dashboard for content inspection. Major governance gap.
Source: support.claude.com/13119606
No, the Cowork plugin marketplace is NOT available on third-party platforms. Verbatim: "The skills and plugin marketplace available in Claude Enterprise isn't available with third-party platforms." Alternative: local filesystem mounts via MDM — macOS: /Library/Application Support/Claude/org-plugins/; Windows: C:\ProgramData\Claude\org-plugins\.
Source: support.claude.com/14680753
Any three of: Financial Analysis (core; required first install), Investment Banking, Equity Research, Private Equity, Wealth Management.
Source: support.claude.com/13851150
April 17, 2026. Powered by Claude Opus 4.7 ("most capable vision model"). Anthropic Labs product. Pro/Max/Team/Enterprise (Enterprise default-off, admin enables).
Source: Claude Design announcement
Path 1 — DESIGN.md upload (Hassid's individual workflow): Cowork-extract → upload to Claude Design → persistent context. Available to all paid tiers. Path 2 — Built-in org-level design system integration: verbatim "Organizations configure brand colors, typography, and component patterns once; all projects automatically inherit these assets without manual uploads." Available to Team and Enterprise tiers only. Implication: DESIGN.md upload is Enact's strongest play for individual Pro/Max + small teams.
Source: Get Started article
"Analyze this folder and produce a full design system write-up. Fonts, colors, graphical styles, component patterns, tone, layout conventions. Flag anything that's missing. Save it as DESIGN.md in my folder."
Source: Hassid Substack
(1) Visual Theme & Atmosphere · (2) Color Palette & Roles · (3) Typography Rules · (4) Component Stylings · (5) Layout Principles · (6) Depth & Elevation · (7) Do's and Don'ts · (8) Responsive Behavior · (9) Agent Prompt Guide.
Source: VoltAgent/awesome-design-md collection / Stitch spec
Section 9 — Agent Prompt Guide. It's the directive layer of DESIGN.md, analogous to SKILL.md's description field. Imperative phrasing, ALWAYS/NEVER constraints, explicit triggers for when the agent should reach for which token. Same Seleznov-style directive discipline applies.
Source: Stitch spec + synthesis
~80% of public skills perform below baseline. Specifically: 40 of 47 skills from a viral list scored below baseline; ~80% of 200+ tested.
Source: Shcheglov — "The Ultimate Guide to Claude Code Skills"
Capability uplift = teaching Claude techniques it will eventually master through model improvements (e.g., "how to write a good email"). Expires with model upgrades. Encoded preference = documenting org-specific business logic that doesn't change with model improvements (e.g., "your team routes Severity 1 tickets to #critical-ops within 15 minutes"). Gains value as models improve, because the business knowledge is durable.
Source: Beehiiv aggregator (original framing surfaced via Marc Bara)
"Hundreds of skills in active use at Anthropic" — Shihipar's verbatim quantifier. Not a more precise number.
Source: Shihipar (LinkedIn)
Enact's eval moat is methodology + scope, not "we have it, they don't." Anthropic ships skill-creator at two surfaces (Claude Code plugin + Claude.ai conversational meta-skill) with a real eval framework; Enact differentiates on methodology and scope.
Honest framing: skill-creator is real and capable. Enact wins on methodology and scope, not by pretending Anthropic has nothing.
Source: SOURCE_NOTES.md F3 corrected · LEGAL_CLAIMS_FRAMEWORK.md