Today's GitHub Trending page returned nineteen repositories for the daily window. Of those, five were enriched with full metadata. The signal that matters for AI-infrastructure developers splits into three clusters: personal AI agent harnesses, agent memory and workflow infrastructure, and AI-assisted code quality tooling. Everything else on the list is either low-fit for AI dev tooling or carries dual-use risk that limits recommendation. This ranking filters for repositories that solve a concrete developer problem, not just repositories accumulating attention.

Section 01

TL;DR with a Clear Editorial Thesis

Three repositories deserve immediate inspection: rohitg00/agentmemory (persistent memory for coding agents, Apache-2.0, actively shipping with over six thousand five hundred stars), obra/superpowers (composable agent workflow skills with over one hundred eighty-eight thousand total stars and the largest community in this ranking), and mattpocock/skills (opinionated engineering skills for Claude and Codex agents, MIT license). tinyhumansai/openhuman is an ambitious Rust-based personal AI harness with token compression and over one hundred integrations, but its GPL-3.0 copyleft license and early-stage status warrant caution before adoption. millionco/react-doctor brings AI-aware code quality scanning to React projects with agent integration. rasbt/LLMs-from-scratch is a well-established educational resource that remains relevant as LLM architecture evolves. github/spec-kit is a new specification tool from GitHub that appeared with nearly thirteen hundred stars gained in a single day and is worth monitoring.

Section 02

Concept explainer

Explanatory visual: Turns the article thesis into a compact visual explanation.

Section 03

Ranking Table

| Rank | Repository | Lang | Stars Gained | Total Stars | License | Last Push | Why It Matters |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | tinyhumansai/openhuman | Rust | 1,014 | 3,506 | GPL-3.0 | 2026-05-13 | Personal AI agent with token compression and over one hundred integrations |
| 2 | rohitg00/agentmemory | TypeScript | 1,048 | 6,513 | Apache-2.0 | 2026-05-12 | Persistent memory layer for Claude Code, Cursor, Codex CLI, and MCP clients |
| 3 | obra/superpowers | Shell | 1,419 | 188,714 | MIT | 2026-05-13 | Composable agent workflow skills with the largest community in this ranking |
| 4 | yikart/AiToEarn | TypeScript | 1,282 | 12,413 | MIT | 2026-05-13 | AI-powered earning platform (low AI-dev-tool fit; mention-only) |
| 5 | millionco/react-doctor | TypeScript | 788 | 6,327 | MIT | 2026-05-13 | AI-aware React codebase health scanner with agent skill integration |
| 6 | mattpocock/skills | Shell | 3,867 | 75,133 | MIT | 2026-05-13 | Opinionated agent skills scaffold for real engineering workflows |
| 7 | rasbt/LLMs-from-scratch | Jupyter Notebook | 772 | N/A | N/A | 2026-05-13 | Foundational LLM education resource covering GPT, Llama, Qwen, and Gemma architectures |
| 8 | CloakHQ/CloakBrowser | Python | 1,606 | N/A | MIT | 2026-05-13 | Anti-detect Chromium (dual-use; mention-only with risk context) |
| 9 | github/spec-kit | Python | 1,299 | N/A | N/A | 2026-05-13 | New specification tool from GitHub; early signal, worth monitoring |

Section 04

Section visual card

Explanatory visual: Reusable visual card for dense evidence sections.

Section 05

Ranking Table Notes

All star counts and push timestamps reflect GitHub API state as of the enrichment window. The "Stars Gained" column represents the daily delta reported by GitHub Trending; it is a short-window attention metric, not a durability signal (GitHub Trending). Total star counts are provided for enriched repositories only; fields lacking enrichment data show N/A.

Section 06

GitHub Trending star-gain signal

Evidence visual: Deterministic chart from the Growth OS GitHub Trending collector.

Section 07

Three Trend Clusters

Cluster 1: Agent Memory, Skills, and Workflow Infrastructure

The dominant signal in today's ranking is the maturation of the agent infrastructure layer. rohitg00/agentmemory continues its multi-day trending run with over six thousand five hundred total stars and a release cadence that now includes Codex plugin platform support and OpenClaw compatibility fixes (commit 25dddc43798c). The repository exposes a four-tier memory consolidation pipeline (Working to Episodic to Semantic to Procedural), triple-stream hybrid search via BM25, vector, and knowledge graph, and over fifty MCP tools. The Apache-2.0 license and npm/Docker install paths make it the most adoptable memory layer in the current ecosystem.
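
The triple-stream hybrid search described above can be sketched as a weighted fusion of three per-candidate scores. This is an illustrative pattern only; the function and weight names are hypothetical and do not reflect agentmemory's actual API or ranking formula.

```python
# Illustrative sketch of triple-stream hybrid retrieval: fuse lexical
# (BM25-style), vector-similarity, and knowledge-graph-proximity scores
# into one ranking. Weights and field names are hypothetical assumptions.

def fuse_scores(candidates, w_lexical=0.4, w_vector=0.4, w_graph=0.2):
    """Rank memory candidates by a weighted sum of three normalized scores.

    Each candidate is a dict with 'id', 'lexical', 'vector', and 'graph'
    values already normalized to [0, 1].
    """
    def combined(c):
        return (w_lexical * c["lexical"]
                + w_vector * c["vector"]
                + w_graph * c["graph"])
    return sorted(candidates, key=combined, reverse=True)

candidates = [
    {"id": "m1", "lexical": 0.9, "vector": 0.2, "graph": 0.1},
    {"id": "m2", "lexical": 0.3, "vector": 0.8, "graph": 0.7},
]
ranked = fuse_scores(candidates)  # m2 wins on vector + graph signal
```

Real hybrid retrievers typically apply reciprocal-rank fusion or learned weights rather than a fixed linear blend, but the shape of the computation is the same.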

obra/superpowers carries the largest existing community in this ranking at over one hundred eighty-eight thousand stars. It provides composable skill definitions for AI agent workflows — brainstorming, dispatching parallel agents, systematic debugging, test-driven development, and writing plans. The philosophy is model-agnostic and agent-agnostic: skills are markdown-based instruction sets that any compatible coding agent can consume. The MIT license and established maintenance history make this a low-risk inspection target for teams building agent-assisted engineering processes.
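
The markdown-as-skill pattern is simple enough to sketch: a skill file is parsed into a title and an instruction body, and selected skills are concatenated into the agent's system prompt. The file layout and function names below are illustrative assumptions, not superpowers' actual schema.

```python
# Minimal sketch of the markdown-skill pattern: skills are plain markdown
# instruction files consumed by any compatible coding agent. The parsing
# and prompt-assembly conventions here are hypothetical illustrations.

def load_skill(markdown_text):
    """Split a skill file into its title (first heading) and instructions."""
    lines = markdown_text.strip().splitlines()
    title = lines[0].lstrip("# ").strip()
    body = "\n".join(lines[1:]).strip()
    return {"title": title, "instructions": body}

def build_prompt(skills):
    """Join loaded skills into one system-prompt section."""
    parts = [f"## Skill: {s['title']}\n{s['instructions']}" for s in skills]
    return "\n\n".join(parts)

debugging = load_skill("# Systematic Debugging\nReproduce the failure first.")
```

Because the skills are just text, the same definitions can be fed to different agents without model-specific tooling, which is what makes the approach agent-agnostic.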

mattpocock/skills gained the highest star delta among enriched repositories at three thousand eight hundred sixty-seven, reaching over seventy-five thousand total stars. The repository focuses on what Matt Pocock terms "real engineering, not vibe coding" — structured skill definitions that prevent agents from losing intent, being verbose, or creating architectural decay. Skills include engineering workflows (issue triage, domain doc layout, TDD loops) and productivity patterns (compressed communication, skill authoring). The MIT license and the author's established reputation in the TypeScript community make this a low-risk, high-value reference.


Cluster 2: Personal AI Agent Harnesses

tinyhumansai/openhuman holds the top trending position with over one thousand stars gained today and a total approaching three thousand five hundred. Written in Rust with a Tauri desktop shell, it positions itself as a "Personal AI super intelligence" — a local-first agent that integrates with over one hundred third-party services via OAuth, runs a Memory Tree knowledge base backed by SQLite, and compresses token usage through a feature called TokenJuice. The README includes a feature comparison table against other agent harnesses. The project is in early beta with active daily commits (v0.53.35 as of today). However, the GPL-3.0 copyleft license, ninety open issues, and beta status mean this is inspection territory, not production-ready infrastructure.
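
TokenJuice's internals are not documented here, so the following is a generic sketch of one common token-compression strategy: keep the most recent messages verbatim and collapse older history into truncated stand-ins. All names, and the naive truncation used in place of real summarization, are illustrative assumptions.

```python
# Generic context-compression sketch (NOT TokenJuice's actual algorithm):
# recent messages are kept whole; older ones are collapsed to short stubs.
# A production system would summarize with a model, not truncate.

def compress_history(messages, keep_recent=2, summary_chars=40):
    """Return a shorter history: old messages truncated, recent kept whole."""
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    summarized = [m[:summary_chars] + "…" if len(m) > summary_chars else m
                  for m in old]
    return summarized + recent
```

The trade-off is the usual one: every compressed message saves tokens but risks discarding context the agent later needs, which is why memory-tree approaches pair compression with retrievable long-term storage.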

Cluster 3: AI-Assisted Code Quality and Education

millionco/react-doctor is a one-command React codebase health scanner from the Million.js team (YC W24). It runs a diagnostic pass using Oxlint and Knip, produces a zero-to-one hundred health score, and outputs actionable findings across state management, performance, architecture, security, and accessibility. Notably, it includes an install command that teaches coding agents React best practices via a SKILL.md integration. This positions it at the intersection of static analysis tooling and AI agent workflow — the scanner finds problems, and the agent skill prevents them from recurring. The MIT license and documented API make it straightforward to evaluate.
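
react-doctor's exact scoring formula is not documented here, but a zero-to-one-hundred health score over weighted findings can be sketched plausibly: start at 100, deduct per finding by severity, and clamp. The severity weights below are illustrative assumptions.

```python
# Plausible sketch of a 0-100 codebase health score (NOT react-doctor's
# published formula): weighted deductions per finding, clamped to [0, 100].
SEVERITY_WEIGHTS = {"error": 5, "warning": 2, "info": 0.5}

def health_score(findings):
    """findings: list of severity strings, e.g. ['error', 'warning']."""
    deduction = sum(SEVERITY_WEIGHTS.get(s, 0) for s in findings)
    return max(0, min(100, round(100 - deduction)))
```

A clamped deduction model like this makes scores comparable across repositories of different sizes, which is what makes a single headline number useful in CI dashboards.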

rasbt/LLMs-from-scratch continues to trend as a foundational educational resource. Authored by Sebastian Raschka and published by Manning, it covers coding GPT-like models from the ground up, with implementations of Llama, Qwen, Gemma, and Olmo architectures. The companion seventeen-hour video course and exercise solutions make it a comprehensive reference for understanding the model architectures that AI developer tools increasingly depend on. This repository trends because the developer community values foundational literacy as the agent ecosystem accelerates.
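
The architectural core the book builds up to is scaled dot-product self-attention, common to GPT, Llama, Qwen, and Gemma. The minimal NumPy version below follows the standard formulation rather than the book's exact code.

```python
import numpy as np

# Scaled dot-product attention, the core computation of GPT-style models:
# attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V. This follows the
# standard formulation, not the book's specific implementation.

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (seq_len, d) arrays. Returns (seq_len, d) context vectors."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted value mix
```

Everything else in these architectures (multi-head projection, causal masking, rotary embeddings) is layered on top of this one operation, which is why the book treats it as the foundation.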

Section 08

Article illustration

Explanatory visual: AI-generated editorial illustration; decorative/explanatory and must not be used as factual evidence.

Section 09

Which Repositories Deserve Deeper Inspection

Inspect first: rohitg00/agentmemory. The problem it solves (context loss between coding agent sessions) is concrete and well-scoped. The Apache-2.0 license, npm/Docker install paths, and MCP integration make it adoptable without licensing or infrastructure lock-in. It has been trending for multiple consecutive days, indicating sustained developer interest beyond a single social media spike. Check the iii engine dependency and local storage model before committing to production use.

Inspect second: obra/superpowers. With the largest community in this ranking, this repository represents a battle-tested approach to structuring agent workflows. The skill definitions cover the full software development lifecycle from planning through debugging. Even teams that do not adopt the specific skills directly can use them as reference patterns for their own agent instruction sets.

Inspect third: mattpocock/skills. The README articulates agent failure modes with unusual clarity. Even if you do not adopt the specific skill definitions, the diagnostic framework for understanding why agents produce bad output is valuable. The install path is non-destructive — skills can be evaluated in isolation.

Evaluate cautiously: tinyhumansai/openhuman. The Rust-based agent harness, token compression, and integration breadth are interesting, but GPL-3.0 copyleft, ninety open issues, and beta status mean this is early-adopter territory. Monitor the release cadence and issue resolution rate before investing integration effort.

Worth watching: github/spec-kit. A new specification tool from GitHub with nearly thirteen hundred stars gained in one day is a strong attention signal. However, without enriched metadata, the AI-developer-tool fit cannot be confirmed. Check the repository description and README before allocating inspection time.

Do not inspect for AI dev tool use: yikart/AiToEarn (AI-powered earning platform), CloakHQ/CloakBrowser (anti-detect Chromium with dual-use risk), and apernet/hysteria (network proxy). These repositories trended but do not solve AI-developer-tool problems.

Section 10

What Not to Infer from GitHub Trending

  • High star counts do not indicate code quality. obra/superpowers has over one hundred eighty-eight thousand total stars partly because of its established community presence, not because every skill definition has been stress-tested in production.
  • Daily star deltas are noisy. A single social media mention, newsletter feature, or conference talk can produce the numbers seen here.
  • Trending position does not reflect AI-developer-tool fit. yikart/AiToEarn ranked fourth but has low content-fit for AI developer infrastructure.
  • Repositories with elevated editorial risk (like CloakHQ/CloakBrowser) can trend without signaling that the tool is appropriate for every developer's use case.
  • New repositories with high single-day deltas (like github/spec-kit with nearly thirteen hundred stars gained) may reflect initial launch buzz rather than sustained developer value.

Editorial Conclusion

Three repositories deserve immediate inspection this cycle: rohitg00/agentmemory for session-persistent coding agent memory, obra/superpowers for structured agent workflow skills, and mattpocock/skills for opinionated engineering skills. tinyhumansai/openhuman is worth monitoring but carries GPL-3.0 and beta-stage risk.

Best for

Developers tracking the AI agent infrastructure ecosystem who need a filtered, evidence-based ranking rather than raw trending data.

Avoid when

You need production-ready tooling for immediate deployment; several recommended repositories are in early stages and require evaluation.

Refresh-sensitive details

  • Star counts and push timestamps reflect repository state as of the 2026-05-13 enrichment window.
  • Daily star deltas are short-window attention metrics and do not indicate durable adoption.
  • Repositories without enrichment may have different license, star count, or activity status than listed.
  • Some claims are refresh-sensitive; verify the primary source before citing specific numbers.
  • Automation-assisted publication; SignalForges editors review audit reports after publication.

Evidence

Source Ledger

These are the primary references used to keep the article grounded. Pricing, limits, benchmark results, and model names are rechecked against the source type shown below.

| Source | Type | How it is used |
| --- | --- | --- |
| tinyhumansai/openhuman | primary source | GitHub Trending rank one and repository evidence for the daily ranking. |
| rohitg00/agentmemory | primary source | GitHub Trending rank two and repository evidence for the daily ranking. |
| obra/superpowers | primary source | GitHub Trending rank three and repository evidence for the daily ranking. |
| yikart/AiToEarn | primary source | GitHub Trending rank four and repository evidence for the daily ranking. |
| millionco/react-doctor | primary source | GitHub Trending rank five and repository evidence for the daily ranking. |
| mattpocock/skills | primary source | GitHub Trending rank six and repository evidence for the daily ranking. |
| rasbt/LLMs-from-scratch | primary source | GitHub Trending rank seven and repository evidence for the daily ranking. |
| CloakHQ/CloakBrowser | primary source | GitHub Trending rank eight and repository evidence for the daily ranking. |
| github/spec-kit | primary source | GitHub Trending rank nine and repository evidence for the daily ranking. |
| GitHub Trending page | primary source | Source of the daily trending ranking and star-delta data. |

Fact Pack

What This Article Actually Claims

high confidence

GitHub Trending returned nineteen repositories for the daily period.

https://github.com/trending

high confidence

The ranking must be interpreted as a short-window attention signal, not a durable adoption metric.

GitHub Trending source semantics and SignalForges editorial policy.

high confidence

Five eligible repositories were enriched with GitHub metadata; zero repositories were blocked or restricted by editorial risk rules.

GitHub repository metadata via https://api.github.com/repos/{owner}/{repo}.

high confidence

tinyhumansai/openhuman gained one thousand fourteen stars with a total of three thousand five hundred six stars, GPL-3.0 license, last pushed 2026-05-13.

GitHub API enrichment for tinyhumansai/openhuman.

high confidence

rohitg00/agentmemory gained one thousand forty-eight stars with a total of six thousand five hundred thirteen stars, Apache-2.0 license, last pushed 2026-05-12.

GitHub API enrichment for rohitg00/agentmemory.

high confidence

obra/superpowers gained one thousand four hundred nineteen stars with a total of one hundred eighty-eight thousand seven hundred fourteen stars, MIT license, last pushed 2026-05-13.

GitHub API enrichment for obra/superpowers.

high confidence

yikart/AiToEarn gained one thousand two hundred eighty-two stars with a total of twelve thousand four hundred thirteen stars, MIT license, last pushed 2026-05-13.

GitHub API enrichment for yikart/AiToEarn.

high confidence

millionco/react-doctor gained seven hundred eighty-eight stars with a total of six thousand three hundred twenty-seven stars, MIT license, last pushed 2026-05-13.

GitHub API enrichment for millionco/react-doctor.

high confidence

mattpocock/skills gained three thousand eight hundred sixty-seven stars with a total of seventy-five thousand one hundred thirty-three stars, MIT license, last pushed 2026-05-13.

GitHub API enrichment for mattpocock/skills.

high confidence

tinyhumansai/openhuman is a Rust-based personal AI agent harness with token compression and over one hundred integrations. Early beta, GPL-3.0 copyleft.

README.md from tinyhumansai/openhuman.

high confidence

rohitg00/agentmemory provides a four-tier memory consolidation pipeline, over fifty MCP tools, and supports over fifteen agent clients.

README.md from rohitg00/agentmemory.

high confidence

obra/superpowers provides composable skill definitions for AI agent workflows including brainstorming, dispatching parallel agents, systematic debugging, and TDD.

README.md from obra/superpowers.

high confidence

millionco/react-doctor is a React codebase health scanner from the Million.js team (YC W24) with agent skill integration.

README.md from millionco/react-doctor.

high confidence

rasbt/LLMs-from-scratch is a foundational LLM education resource authored by Sebastian Raschka, published by Manning.

README.md from rasbt/LLMs-from-scratch.

medium confidence

github/spec-kit gained nearly thirteen hundred stars in one day; a new specification tool from GitHub without enriched metadata.

GitHub Trending listing for github/spec-kit.

Methodology

  1. Draft composed by the Hermes Writer agent using repository metadata, README content, and GitHub API data.
  2. Evidence gathered via zai-zread-repo MCP tool and GitHub API enrichment pipeline.
  3. No first-person testing was performed. All claims are grounded in cited primary sources.
  4. AI assistance was used; no private data or unreleased sources were referenced.

Frequently asked

Questions readers ask

What does this briefing recommend developers do first?

Start by inspecting rohitg00/agentmemory for session-persistent coding agent memory, then evaluate obra/superpowers for structured agent workflow skills. The broader signal from the nineteen trending repositories splits into three clusters: agent memory and workflow infrastructure, personal AI agent harnesses, and AI-assisted code quality tooling.

Where can readers verify the figures cited in this article?

Every precise figure must be verified against the primary URL. The first listed source is https://github.com/trending.

Is this article human-authored or AI-assisted?

The draft was composed with AI assistance by the Hermes Writer agent, then reviewed against the SignalForges editorial policy and the Autonomous Publishing Safety Contract before publication.