STANDARD 1.0
The Grading Standard
We grade Contextual Intelligence, not just textbook compliance.
Philosophy: Context is King
Real clients have budgets, legacy debt, and deadlines. A Junior Architect recites the Cloud Well-Architected Framework. A Senior Architect knows when to break it.
The "It Depends" Clause
We do not penalize single-AZ solutions if the client budget is $50/month. We penalize unjustified risks, not constraints.
Scenario: Pre-Seed Startup
Constraint: Budget < $100
Decision: No Multi-AZ (APPROVED)
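The clause and the example above reduce to one rule: a risky choice passes only when it maps to a constraint the client actually stated. Below is a minimal Python sketch of that rule; the names (`Decision`, `grade_risk`) and the constraint strings are invented for illustration, not part of the standard.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """One architectural choice under review (hypothetical shape)."""
    risk: str                  # e.g. "no Multi-AZ"
    justification: str | None  # the constraint the candidate cites, if any

def grade_risk(decision: Decision, stated_constraints: set[str]) -> str:
    """The 'It Depends' clause: penalize a risk only when it is not
    backed by a constraint the client actually stated."""
    if decision.justification in stated_constraints:
        return "APPROVED"    # constraint-driven trade-off
    return "PENALIZED"       # unjustified risk

constraints = {"budget < $100"}
print(grade_risk(Decision("no Multi-AZ", "budget < $100"), constraints))  # APPROVED
print(grade_risk(Decision("no backups", None), constraints))              # PENALIZED
```

Nothing here automates grading; it just pins down the order of the check: constraints first, then the verdict on the risk.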
1. Resilience & Availability
Constraint Aware
Criteria
- Architecture meets required SLA
- Uses appropriate redundancy for the scope
Possible Outcomes
Excellent (3): SLA met with efficient redundancy
Satisfactory (2): SLA met, minimal redundancy overhead
Needs Improvement (1): SLA unclear or under-engineered for the context
Failure (0): Significant over-engineering for the scenario
Failure Flags
- Netflix-scale infra for startup budget
- Single point of failure ignored without justification (see the availability sketch below)
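Criterion 1 is largely arithmetic: components that all must work multiply their availabilities, and N redundant copies recover 1 - (1 - a)^N, so one unreplicated tier caps the whole chain. The sketch below uses invented per-component figures (the 0.999s are placeholders, not measurements) to show why redundancy is only "appropriate" relative to the client's target.

```python
def series(*availabilities: float) -> float:
    """Availability of components that all must work: they multiply."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def parallel(a: float, copies: int) -> float:
    """Availability of N redundant copies: 1 - (1 - a)**N."""
    return 1.0 - (1.0 - a) ** copies

lb, app, db = 0.9999, 0.999, 0.999  # hypothetical per-component availability
target_sla = 0.999                  # e.g. "three nines" required by the client

single = series(lb, app, db)                               # app and db are single points of failure
redundant = series(lb, parallel(app, 2), parallel(db, 2))  # duplicate the weak tiers

print(f"single instances: {single:.5f}  meets SLA: {single >= target_sla}")
print(f"redundant tiers:  {redundant:.5f}  meets SLA: {redundant >= target_sla}")
```

The same arithmetic cuts both ways: if the client's target were 99%, the single-instance chain already meets it, and the redundant build becomes the over-engineering this rubric scores down.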
2. Security Posture
Non-Negotiable
Criteria
- Security hygiene applied
- Risks evaluated & mitigated based on scenario
Possible Outcomes
Excellent (3): Comprehensive security aligned with the context
Satisfactory (2): Basic hygiene plus justified exceptions
Needs Improvement (1): Security gaps left unjustified
Failure (0): Security by procrastination (e.g., "we'll add it later")
Failure Flags
- "We'll add auth later" (Security by procrastination)
- Hardcoded secrets or 0.0.0.0/0 database access (see the sketch below)
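Both flags above are concrete enough to lint for. Below is a minimal sketch assuming a generic rule/config shape rather than any particular provider's API; `ingress_rules`, `DATABASE_PORTS`, and the secret regex are illustrative assumptions.

```python
import re

ingress_rules = [                         # hypothetical firewall export
    {"port": 443,  "cidr": "0.0.0.0/0"},  # public HTTPS: usually acceptable
    {"port": 5432, "cidr": "0.0.0.0/0"},  # database open to the internet: failure flag
]

DATABASE_PORTS = {3306, 5432, 27017, 6379}  # common database/cache ports (assumption)

def open_database_rules(rules: list[dict]) -> list[dict]:
    """Return rules that expose a database port to 0.0.0.0/0."""
    return [r for r in rules
            if r["cidr"] == "0.0.0.0/0" and r["port"] in DATABASE_PORTS]

# Crude hardcoded-secret check over configuration or source text.
SECRET_PATTERN = re.compile(r"""(password|secret|api_key)\s*=\s*["'][^"']+["']""", re.IGNORECASE)

config_text = 'db_password = "hunter2"'
print(open_database_rules(ingress_rules))        # [{'port': 5432, 'cidr': '0.0.0.0/0'}]
print(bool(SECRET_PATTERN.search(config_text)))  # True: hardcoded secret flag
```

Note that the port-443 rule is left alone: exposing a public HTTPS endpoint is normal, while exposing a data store to 0.0.0.0/0 is the failure flag.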
3. Cost Efficiency
The Real Test
Criteria
- Architecture stays within constraints
- Utilizes best-fit cost-effective resources
Possible Outcomes
Excellent (3): Maximum cost efficiency without sacrificing requirements
Satisfactory (2): Within budget, with only minor inefficiencies
Needs Improvement (1): Slight cost overrun without a good reason
Failure (0): Recommends expensive or irrelevant choices
Failure Flags
- Enterprise Support or Dedicated Hosts for small projects
- Ignoring cost-optimized alternatives (Spot, Serverless, etc.); see the sketch below
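Cost scoring usually reduces to a sum against the stated constraint, plus a check that a cheaper fit-for-purpose option was not ignored. The sketch below uses invented monthly prices; real pricing varies by provider, region, and commitment term, so only the shape of the check matters.

```python
MONTHLY_PRICE = {                # hypothetical figures for illustration only
    "on_demand_vm": 70.0,
    "spot_vm": 25.0,
    "serverless_estimate": 12.0,
    "managed_db_small": 15.0,
}

def check_budget(components: list[str], budget: float) -> tuple[float, bool]:
    """Total the bill of materials and compare it to the client's constraint."""
    total = sum(MONTHLY_PRICE[c] for c in components)
    return total, total <= budget

BUDGET = 100.0  # the pre-seed constraint from the earlier example

print(check_budget(["on_demand_vm", "managed_db_small"], BUDGET))                  # (85.0, True)
print(check_budget(["on_demand_vm", "on_demand_vm", "managed_db_small"], BUDGET))  # (155.0, False)
print(check_budget(["spot_vm", "managed_db_small"], BUDGET))                       # (40.0, True)
```

The last line is the point of the second flag: when a workload tolerates interruption, the Spot-style option leaves most of the budget unspent.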
4. Operational Clarity & Maintainability
Day-2 Reality
Criteria
- Architecture can be operated by a small team
- Failure modes are understood & documented
- Observability, logging, and alerting are sufficient (see the sketch after this section)
- Design is maintainable, not hero architecture
Possible Outcomes
Excellent (3): Clear ownership boundaries, debuggable, well-documented failure modes
Satisfactory (2): Basic observability, reasonable complexity for the team size
Needs Improvement (1): Too many moving parts or an unclear operational model
Failure (0): Hero architecture or an "ops will handle it" mentality
Failure Flags
- Design is clever but impossible to debug
- Too many moving parts (microservices overkill)
- No ownership boundaries or unclear responsibilities
- "Ops will handle it" — deferring operational concerns