| Without Agenticom | With Agenticom |
|---|---|
| M&A due diligence: 4–8 weeks, $50K–200K | M&A due diligence: hours, ~$5 in API costs |
| Patent landscape: $500–1,000/hr lawyer | Patent landscape: hours, ~$3 in API costs |
| Security audit: 2–4 weeks, $20K–50K | Security audit: hours, ~$4 in API costs |
| Grant proposal: 40–100 hrs of work | Grant proposal: an afternoon, ~$2 in API costs |
| Marketing strategy: agency retainer $10K/mo | Marketing strategy: hours, ~$3 in API costs |
"Validate my startup idea." A researcher scouts the market, an analyst sizes the opportunity, a strategist designs the go-to-market, a writer produces the business plan. Done in an hour.
"Write this grant proposal." Analyzes the RFP, synthesizes supporting literature, drafts the narrative, builds the budget justification. Submission-ready in an afternoon.
"Audit our platform security." Produces a threat model, vulnerability map, remediation plan prioritized by risk, and a board-ready executive report. Same day.
"Analyze acquisition target X." Five agents cover financial analysis, legal review, market assessment, technical audit, and investment recommendation with valuation range. In hours.
"Build this feature." Plans the work, writes the code, verifies the logic, writes tests, reviews for bugs. Ready to ship.
No coding. No configuration. Just tell it what you need.
Step 1: Install Agenticom once on your machine:

```bash
git clone https://github.com/wjlgatech/agentic-company.git
cd agentic-company && bash setup.sh
```

Step 2: Set your API key:

```bash
export ANTHROPIC_API_KEY=sk-ant-...
```

Step 3: Open Claude.ai and describe your task:
"Use agenticom due-diligence to analyze TechStartup Inc β $10M ARR, 40% growth, B2B SaaS in HR tech. Give me financial analysis, legal review, market assessment, and a GO/NO-GO recommendation with valuation range."
"Use agenticom grant-proposal to write an NIH R01 for our lab's CRISPR sickle-cell research. Analyze the RFP requirements, synthesize the supporting literature, draft Specific Aims and Research Strategy, and build the budget justification."
"Use agenticom marketing-campaign for my luxury Miami real estate agency targeting international buyers. Buyer personas, competitor audit, 30-day content calendar, influencer list, 90-day launch plan with KPIs."
"Use agenticom security-assessment to audit our e-commerce platform β 100K daily transactions, 2M users' PII. Threat model, vulnerability scan, prioritized remediation plan, board-ready report."
Claude runs a team of specialist AI agents and returns the full deliverable.
OpenClaw is a personal AI assistant on your favourite messaging app. After a one-time install, message it just like you'd message a colleague:
"Use agenticom churn-analysis β our SaaS churn is 6.5% monthly. Identify top-5 churn segments, build retention playbooks with ROI projections, draft a 90-day action plan."
Prefer clicking to typing? Open the visual interface:
```bash
agenticom dashboard   # → http://localhost:8080
```

Pick a workflow, describe your task, and watch the agents work.
```bash
git clone https://github.com/wjlgatech/agentic-company.git
cd agentic-company && bash setup.sh
```

```bash
# Free (local, no API key)
ollama serve && ollama pull llama3.2

# Claude – best quality
export ANTHROPIC_API_KEY=sk-ant-...

# GPT
export OPENAI_API_KEY=sk-...
```

```bash
# Preview without making any LLM calls
agenticom workflow run feature-dev "Add login button" --dry-run

# Run for real
agenticom workflow run due-diligence "Analyze acquisition target Acme Corp"
agenticom workflow run security-assessment "Audit our payment API"
agenticom workflow run grant-proposal "NIH R01 for CRISPR sickle-cell research"
```

| Command | Description |
|---|---|
| `agenticom workflow list` | List all workflows |
| `agenticom workflow run <id> "<task>"` | Run a workflow |
| `agenticom workflow run <id> "<task>" --dry-run` | Preview without LLM calls |
| `agenticom workflow status <run-id>` | Check status |
| `agenticom workflow resume <run-id>` | Resume a failed run |
| `agenticom dashboard` | Open web UI |
| `agenticom stats` | Run statistics |
| Team (workflow) | What it delivers | Time saved |
|---|---|---|
| `due-diligence` | M&A investment recommendation with full analysis | 4–6 weeks |
| `compliance-audit` | Audit-ready compliance report with remediation roadmap | 2–4 weeks |
| `patent-landscape` | Freedom-to-operate assessment + IP strategy | 3–6 weeks |
| `security-assessment` | Executive security report + prioritized fixes | 2–4 weeks |
| `churn-analysis` | Retention playbooks with ROI projections | 1–2 weeks |
| `grant-proposal` | Submission-ready proposal draft | 40–60 hours |
| `incident-postmortem` | Blameless post-mortem + action items | 4–8 hours |
| `marketing-campaign` | Full go-to-market strategy | 1–2 weeks |
| Team (workflow) | What it delivers |
|---|---|
| `feature-dev` | Plan → code → tests → review, end-to-end |
| `feature-dev-with-diagnostics` | + automated root-cause analysis on failure |
| `autonomous-dev-loop` | Continuous improvement loop for long-running tasks |
```python
import asyncio

from orchestration import load_ready_workflow

team = load_ready_workflow('due-diligence.yaml')
result = asyncio.run(team.run("Analyze acquisition target Acme Corp, $15M ARR"))
print(result.final_output)
```

More Python examples
Manual setup (more control):

```python
import asyncio

from orchestration import load_workflow, auto_setup_executor

team = load_workflow('feature-dev.yaml')
executor = auto_setup_executor()
for agent in team.agents.values():
    agent.set_executor(lambda p, c: executor.execute(p, c))
result = asyncio.run(team.run("Add user authentication"))
```

Build a custom team in code:
```python
from orchestration.agents import TeamBuilder, AgentRole

team = (
    TeamBuilder("market-research")
    .add_agent(AgentRole.RESEARCHER, "You are a senior market analyst.")
    .add_agent(AgentRole.ANALYST, "You extract actionable insights from data.")
    .add_agent(AgentRole.DEVELOPER, "You synthesize findings into clear reports.")
    .build()
)
```

Every team comes with safety features you'd normally pay extra for:
- Guardrails – block sensitive content (PII, API keys) before it reaches the LLM
- Memory – agents remember context across runs and learn from past work
- Approval gates – require human sign-off on high-stakes actions
- Caching – skip redundant LLM calls, cut costs
- Observability – track every step, metric, and cost
- MCP integration – connect to live data: PubMed, Ahrefs, Similarweb, and more
Code examples for each feature
Guardrails:

```python
from orchestration.guardrails import ContentFilter, GuardrailPipeline

pipeline = GuardrailPipeline([ContentFilter(blocked_patterns=["password"])])
result = pipeline.check("My password is secret")  # result[0].passed is False
```

Memory:

```python
from orchestration.memory import LocalMemoryStore

memory = LocalMemoryStore()
memory.remember("Client prefers executive summaries under 2 pages", tags=["preference"])
results = memory.search("summary format")
```

Caching:

```python
from orchestration.cache import cached

@cached(ttl=300)
def research(topic: str) -> str:
    return llm.generate(f"Research {topic}")
```

Approval gates:

```python
from orchestration.approval import HybridApprovalGate

gate = HybridApprovalGate(risk_threshold=0.7)
decision = gate.request_approval("Deploy to production", risk_score=0.85)
# Low risk → auto-approved. High risk → waits for a human.
```

MCP tool integration:

```python
from orchestration.tools import MCPToolBridge

bridge = MCPToolBridge(graceful_mode=True)
result = await bridge.execute("web_search", query="AI regulation 2025")
```

Format: What it means for you → How it works → What was built
Every run is now a lesson. The first time a planner produces vague output, the system notices. By run five it proposes a sharper prompt. By run twenty the whole team is measurably better – without you touching a config file.

Tech: Self-improvement loop → SMARC quality scoring on every step output → per-agent performance tracking → capability gap detection → targeted prompt patch proposals → human-in-the-loop approval or auto-apply.
Implementation details
- `orchestration/self_improvement/` – new module vendoring four classes from `wjlgatech/self-optimization`: `ResultsVerificationFramework` (SMARC), `MultiAgentPerformanceOptimizer`, `RecursiveSelfImprovementProtocol`, `AntiIdlingSystem`
- Zero hot-path impact – recording happens via `asyncio.create_task()` after `team.run()` returns
- `PromptVersionStore` – SQLite-backed versioned personas with full rollback chain
- `PromptEvolver` – heuristic suffix rules (always works) + optional LLM full-persona rewrite
- `agenticom feedback` CLI – `list-patches`, `approve-patch`, `reject-patch`, `rollback`, `rate-run`, `status`
- `feature-dev.yaml` – opt-in via `metadata.self_improve: true`
- 51 new tests · 900 total passing
Refactor boldly. The test suite catches regressions before you do – covering guardrails, memory backends, the YAML parser, all 13 bundled workflows, the REST API, and the CLI end-to-end.

Tech: Comprehensive pytest suite with async support, integration-test isolation, and Playwright-conditional browser tests – enforced in CI across Python 3.10 / 3.11 / 3.12.
Implementation details
- 849 → 900 tests across three rounds of coverage expansion
- `asyncio_mode = "auto"` – async test functions work without decorators
- `@pytest.mark.integration` guards tests requiring live API keys; CI runs with `-m "not integration"`
- `tests/conftest.py` – `collect_ignore` for 6 script-style files that aren't pytest suites
- CodeQL + coverage jobs added to the CI matrix
No more "it worked on my machine." Lint, types, tests on three Python versions, coverage, and security scanning all run automatically. The main branch is fully protected – no direct pushes, no bypassing checks.

Tech: GitHub Actions matrix CI (lint-and-type-check, test ×3 Pythons, coverage, CodeQL) + branch protection (1 reviewer, 5 required checks, `enforce_admins: true`).
Implementation details
- `.github/workflows/ci.yml` – ruff + mypy + pytest in parallel across py3.10/3.11/3.12
- `black>=24.0` pinned; must run after `ruff --fix` to avoid conflicts
- `mypy` config: `disallow_untyped_defs=false`, `warn_return_any=false` (40 pre-existing modules carry `ignore_errors=true`)
- Pre-commit hook (`scripts/check_root_clutter.py`) enforces file-organisation rules at commit time
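For orientation, the test matrix job has roughly this shape (an illustrative sketch; the actual `ci.yml` job names, action versions, and install command may differ):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.10", "3.11", "3.12"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -e ".[dev]"        # extras name is hypothetical
      - run: ruff check . && mypy .
      - run: pytest -m "not integration"    # skip tests needing live API keys
```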
The verifier agent used to report "it looks wrong." Now it shows you a screenshot, the console error, and an AI-generated root-cause hypothesis – all captured automatically during the workflow run.
Tech: Playwright browser automation captures screenshots + console logs + network requests after each step; a meta-analysis LLM layer identifies root causes and proposes targeted fixes; a criteria builder interviews you to sharpen acceptance criteria.
Implementation details
- `orchestration/diagnostics/` – `PlaywrightCapture`, `MetaAnalyzer`, `CriteriaBuilder`, `IterationMonitor`, `DiagnosticsIntegrator`
- `agenticom test-diagnostics <url>` – run browser automation from the CLI
- `agenticom build-criteria "<task>"` – interactive Q&A → structured success-criteria JSON
- `@pytest.mark.skipif(not check_playwright_installation(), ...)` guards browser tests
Your agents can now query PubMed, Ahrefs, Similarweb, and any MCP server – not just reason about what the data might say, but actually retrieve it mid-workflow.
Tech: `MCPToolBridge` routes workflow tool references to registered MCP servers via `MCPToolRegistry`; `PromptEngineer` + `SmartRefiner` refine task descriptions into coherent multi-turn prompts before execution begins.
Implementation details
- `orchestration/tools/` – `mcp_bridge.py`, `registry.py`, `prompt_engineer.py`, `intent_refiner.py`, `smart_refiner.py`, `hybrid_refiner.py`
- `SmartRefiner` – multi-turn interview loop that synthesises a coherent final prompt from user answers
- `ConversationalRefiner` – single-pass intent clarification for simpler cases
- Graceful-mode flag: tools degrade silently when the MCP server is unavailable
Five specialist agents – planner, developer, verifier, tester, reviewer – collaborate on a shared task with cross-verification at every handoff. Each agent sees only what it needs; hallucinations from previous steps can't contaminate the next.
Tech: "Ralph Loop" pattern β fresh
AgentContextper step with template substitution ({{step_outputs.X}}); loopback on failure; guardrails, memory, and approval gates composable per workflow; 13 bundled YAML workflows ready to run.
Implementation details
- `orchestration/agents/` – `AgentTeam`, `TeamBuilder`, `AgentRole` enum, specialized agents (Planner/Developer/Verifier/Tester/Reviewer/Researcher/Writer/Analyst)
- `orchestration/workflows/` – `WorkflowParser` (YAML → `AgentTeam`), template engine
- `agenticom/state.py` – SQLite persistence (`~/.agenticom/state.db`) for all run state
- `orchestration/integrations/unified.py` – `UnifiedExecutor` routes to OpenClaw (Anthropic), Nanobot (OpenAI), or Ollama; `auto_setup_executor()` picks the best available backend
- 13 bundled workflows: `feature-dev`, `feature-dev-with-diagnostics`, `feature-dev-with-loopback`, `feature-dev-llm-recovery`, `autonomous-dev-loop`, `marketing-campaign`, `due-diligence`, `compliance-audit`, `patent-landscape`, `security-assessment`, `churn-analysis`, `grant-proposal`, `incident-postmortem`
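To make the Ralph Loop concrete, a minimal workflow might look like this. Field names here are illustrative, not the exact `WorkflowParser` schema; only the `{{step_outputs.X}}` substitution syntax is taken from the description above.

```yaml
name: mini-feature
agents:
  planner:
    role: planner
    persona: "You break a task into concrete, ordered steps."
  developer:
    role: developer
    persona: "You implement exactly what the plan specifies."
steps:
  - id: plan
    agent: planner
    prompt: "Produce a step-by-step plan for: {{task}}"
  - id: implement
    agent: developer
    # Fresh AgentContext: the developer sees only the plan, not the raw history
    prompt: "Implement this plan:\n{{step_outputs.plan}}"
```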
MIT – use it, fork it, build on it.

Your AI company is open for business.

Star on GitHub • Report an issue
