Submit. Get reviewed. Improve.
The shortest loop to learn AI

Takajo AI Lab is a small, hands-on lab for learning to use generative AI in real work. You study the material, solve assignments, submit your work, and receive detailed feedback. The program is designed around this loop.

Premium Plan (human review) is active with 5 seats.
AI Review Plan is coming soon (sign up for notifications).

What makes this different

This is not a video course or a content library. The program is built around three things working together:

  • Material -- Structured knowledge for practical AI use
  • Assignments -- You build something and submit it
  • Review -- You get specific, actionable feedback on your submission

Reading about AI is not enough. You need to build, submit, get reviewed, and revise. That loop is what this program is designed for.

Two review plans

There are two ways to get your work reviewed. Choose based on your goals.

Active

Premium Plan -- Human review

  • Hatanaka personally reviews every submission
  • Your prompts and code are actually executed and tested
  • Feedback covers not just pass/fail, but what to learn next
  • Limited to 5 seats

Why only 5?
To maintain the depth of review -- each submission is run and tested manually. This review process also serves as R&D for the AI review system. Insights from human review directly inform the design of CriticChain.


Coming soon

AI Review Plan -- Reviewed by AI

  • Powered by CriticChain, our open-source review engine
  • A multi-stage pipeline where one AI reviews your work, and another AI audits the review for leniency, weak reasoning, and hallucinations
  • Launching after tuning is complete

Material and assignments are the same across both plans. The difference is how your work gets reviewed.

What a review actually looks like

The following are real Premium Plan reviews, anonymized.
Note that even passing submissions receive this level of feedback.

Example 1: Running your prompt to find what a rubric can't

Assignment: Improving a bulk data generation prompt / Verdict: Pass

What worked
The prompt is well-structured with clear sections for role, context, and instructions, making it easy for the AI to parse accurately. The output format example provides the right level of granularity, minimizing output drift.

Suggestion: Avoiding "AI fatigue" in bulk generation
When I ran your prompt on a current model, I found a hallucination in the entry for the Taika Reform (645 CE) -- "the Soga clan" was truncated to "the So clan." This is a classic case of the AI losing fidelity under heavy output load.

For tasks this large, the best practice is to split the work into stages with human checkpoints (HITL -- Human-in-the-Loop) rather than generating everything in one pass.
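The staged approach above can be sketched in a few lines. This is a minimal illustration, not CriticChain or the program's actual tooling: `generate` and `review` are hypothetical stand-ins for a model API call and a human checkpoint, stubbed here so the sketch runs on its own.

```python
from typing import Callable, List

def generate_in_stages(
    topics: List[str],
    generate: Callable[[List[str]], List[str]],
    review: Callable[[List[str]], bool],
    chunk_size: int = 10,
) -> List[str]:
    """Split a bulk generation task into chunks with a human
    checkpoint (HITL) after each chunk, instead of generating
    everything in one pass."""
    approved: List[str] = []
    for i in range(0, len(topics), chunk_size):
        chunk = topics[i:i + chunk_size]
        drafts = generate(chunk)       # model call for this chunk only
        while not review(drafts):      # human checkpoint: approve or retry
            drafts = generate(chunk)
        approved.extend(drafts)
    return approved

# Stubbed example: a fake "model" and an auto-approving "reviewer".
entries = generate_in_stages(
    topics=[f"event-{n}" for n in range(25)],
    generate=lambda chunk: [f"summary of {t}" for t in chunk],
    review=lambda drafts: all("summary" in d for d in drafts),
    chunk_size=10,
)
print(len(entries))  # 25
```

Because each chunk is small, hallucinations like the truncated clan name are caught at the checkpoint closest to where they appear, rather than buried in one huge output.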

The reviewer ran the submission, found a specific hallucination, and turned it into a teachable principle. A rubric alone wouldn't catch this.

Example 2: Passing, but showing you the next wall

Assignment: Self-verification loop / Verdict: Pass

What worked
The implementation clearly separates generation, back-translation, and verification steps, with intermediate outputs visible at each stage. This "glass-box" approach makes the AI's reasoning observable.

Suggestion: Session splitting to avoid context contamination
Your implementation fully meets the requirements. However, because the AI remembers its own intent from the generation step, its self-review tends to be biased -- much like a writer proofreading their own draft.

For higher-stakes tasks (press releases, contracts), splitting generation and verification into separate sessions -- so the reviewer has no knowledge of the writer's intent -- produces significantly more rigorous results.
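Session splitting comes down to never sharing message history between the writer and the reviewer. A minimal sketch, assuming a chat-style API that takes a message list: `run_session` and `stub_model` are hypothetical names, and the stub stands in for a real model call.

```python
def run_session(system_prompt: str, user_message: str, model) -> str:
    """Start a fresh message history for every call, so the verifier
    session carries no memory of the writer session's intent."""
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]
    return model(messages)

def stub_model(messages):
    # Stand-in for a real chat API call; echoes the last user message.
    return f"reviewed: {messages[-1]['content']}"

draft = run_session("You are a writer.", "Draft a press release.", stub_model)

# Verification runs in a separate session: only the draft text is passed
# in, never the writer's instructions or conversation history.
verdict = run_session("You are a strict reviewer.", draft, stub_model)
```

The key design choice is that `verdict` is produced from the draft alone, mirroring a reviewer who has never seen the writer's brief.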

A perfect submission still gets a pass -- plus a preview of the next challenge you'll face in real work.

Example 3: Designing a growth path from beginner to advanced

Assignment: Role setting and verification / Verdict: Pass

What worked
Running the prompt both with and without a role, then comparing the results, shows the kind of comparative observation that's essential in prompt engineering.

Suggestions (3 stages):

1. Structure -- Separate role, goal, and output format with Markdown headings so the AI doesn't miss parts of the instruction.

2. Observability -- Ask for intermediate outputs so you can see what criteria the AI used to make decisions (glass-box approach).

3. Role experiments -- Try different roles (a beginner student, a math teacher) and notice how the content -- not just the tone -- changes.

One assignment, three levels of improvement: a fix for now, and a growth path for what comes next.

How AI Review works -- CriticChain

The AI Review plan is powered by CriticChain, our open-source review engine.

Submission → Lint → Structure analysis → Hallucination detection
→ Draft review → Leniency audit (Critique) → Rewrite (Refine)
→ Consistency check → Scoring → Final review
  • One AI writes the review; another AI audits it for leniency and weak reasoning
  • Hallucinations are not just detected -- their propagation paths are traced
  • Every stage is logged, so the basis for each judgment can be inspected
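The review-then-audit idea can be sketched as two functions, where the second rejects an unsupported verdict from the first. This is an illustrative toy, not CriticChain's actual implementation; both "models" are stubbed with simple rules.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Review:
    verdict: str
    reasons: List[str] = field(default_factory=list)

def draft_review(submission: str) -> Review:
    # Stage 1: a "reviewer" drafts the review (stubbed: checks for
    # Markdown-structured sections).
    reasons = ["clear structure"] if "## " in submission else []
    return Review("pass" if reasons else "revise", reasons)

def audit_review(review: Review) -> Review:
    # Stage 2: a second "auditor" checks the draft for leniency:
    # a pass verdict with no supporting evidence is sent back.
    if review.verdict == "pass" and not review.reasons:
        return Review("revise", ["audit: pass verdict lacked evidence"])
    return review

submission = "## Role\nYou are a tutor.\n## Task\nExplain recursion."
final = audit_review(draft_review(submission))
print(final.verdict)  # pass
```

Separating the two roles means a lenient or weakly reasoned first pass never reaches the learner unchecked, which is the core of the pipeline described above.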

CriticChain operates on the inspection criteria defined by prompt-as-code, a prompt syntax standard. Define the standard, then automate review against it -- that consistency is the foundation of quality.

Who this is for

  • Engineers who want to use generative AI effectively at work -- not just play with it
  • People who want to understand how AI systems work, not just use AI tools
  • Practitioners looking to go from prompt engineering to multi-agent development
  • Corporate training leads exploring a pilot program → Corporate inquiries

Curriculum -- 23 lessons

From the fundamentals of generative AI to building production systems, across five progressive phases.

Phase 1: Foundations (Lessons 1-3)

How generative AI works, LLM behavior and failure patterns, choosing the right tools

Phase 2: Dev Environment (Lessons 4-6)

Python setup, prompt engineering, building custom skills

Phase 3: LLM Implementation (Lessons 7-11)

LLM APIs in practice, LangChain application development, token and cost management

Phase 4: Applied (Lessons 12-18)

RAG, AI agents, fine-tuning, multimodal, UI development

Phase 5: Production (Lessons 19-23)

Evaluation and quality assurance, ethics and governance, career design, capstone project

The focus is not on chasing buzzwords, but on building AI into systems that actually work.

Interested?

Premium Plan
Currently active with 5 seats. Contact us about availability.

AI Review Plan
Coming soon. Email info@1stpiece.io to be notified when it launches.

Corporate / training
Custom programs available. See advisory & consulting.

FAQ

Q. What's the difference between the two plans?
Premium Plan: Hatanaka personally reviews your work, running and testing your submissions. AI Review Plan: CriticChain provides automated review with instant feedback. Material and assignments are the same.
Q. Are there open seats in the Premium Plan?
There are 5 seats total. Contact us for current availability.
Q. When does AI Review launch?
It's in preparation. Email info@1stpiece.io to get notified when it starts.
Q. Is this a self-paced video course?
No. The program is built around the loop of studying, solving assignments, receiving reviews, and revising your work.
Q. Do you work with companies?
Yes. Premium Plan seats can be used for corporate pilots. Custom training design is also available. Details here.
Q. Can I see CriticChain's source code?
Yes. CriticChain is open source (AGPL-3.0) and available on GitHub.