Internal Engineering Standard

WebbyLab AI SDLC: AI-Augmented Software Development Life Cycle

Our official framework for responsibly integrating generative AI tools across every phase of product development — from sprint planning to post-launch support.

01

Philosophy and Core Principles

AI-Augmented

WebbyLab strictly adheres to the "Human in the Loop" principle. We use generative AI tools exclusively as co-pilots to accelerate routine operations, facilitate refactoring, and assist in solution discovery.

Absolute Developer Responsibility

AI is not an "autopilot"; it does not make final architectural decisions. The Software Engineer who executes the commit bears full and exclusive responsibility for the quality, security, and functionality of the code. The justification that "the AI made a mistake" is never accepted.

02

Corporate AI Toolset and Infrastructure

Authorized Access

Access to AI tools is granted exclusively via Google Workspace Enterprise Single Sign-On (SSO). The use of personal accounts (e.g., personal Gmail for ChatGPT, Claude, Midjourney) for corporate tasks is categorically prohibited.

Approved Models & Tools

The company utilizes the Antigravity platform, powered by Google Gemini (Vertex AI) and Anthropic (Opus and Sonnet) models.

Corporate MCP & Skills Framework

We centralize our 15 years of engineering expertise into corporate Model Context Protocol (MCP) servers. We share standardized Workflows and Skills across the company, covering:

  • System Architecture scaffolding
  • Code Review & Refactoring
  • Domain-Driven Design (DDD)
  • Clean Architecture
  • Atomic Design & React component best practices

IDE Security Configuration (Antigravity)

AI assistants within the IDE are permitted only under an API Key (Bring Your Own Key — BYOK) model. SaaS subscriptions (like Antigravity Pro/Business) where code is processed on external servers are strictly forbidden. Developers must:

  • Set "Codebase Indexing" to Local
  • Completely disable "Data Collection / Training"
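
These requirements can be captured as a settings sketch; assuming a JSON-style configuration file (the key names below are illustrative, and the actual Antigravity settings may be named differently):

```json
{
  "codebaseIndexing": "local",
  "dataCollection": false,
  "modelTraining": false,
  "apiKeyMode": "byok"
}
```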

03

AI Integration Across the Agile Workflow

3.1 Sprint Planning and Backlog Refinement

Estimation Calibration

The team uses a Corporate RAG (Retrieval-Augmented Generation) system to analyze historical task estimations against actual completion times. As AI adoption steadily improves team efficiency, this analysis lets us adjust estimates dynamically and reduce the overall cost of feature development for our clients.
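
As a sketch of the calibration idea, assuming a hypothetical record shape and a median-based adjustment (neither is mandated by this standard):

```typescript
// Hypothetical sketch: derive a calibration factor from historical
// estimate-vs-actual data retrieved from the corporate RAG system.
interface TaskRecord {
  estimatedHours: number;
  actualHours: number;
}

// Median ratio of actual to estimated time across past tasks.
function calibrationFactor(history: TaskRecord[]): number {
  if (history.length === 0) return 1;
  const ratios = history
    .map((t) => t.actualHours / t.estimatedHours)
    .sort((a, b) => a - b);
  const mid = Math.floor(ratios.length / 2);
  return ratios.length % 2 === 1
    ? ratios[mid]
    : (ratios[mid - 1] + ratios[mid]) / 2;
}

// Adjust a raw estimate by the observed calibration factor.
function calibratedEstimate(rawHours: number, history: TaskRecord[]): number {
  return rawHours * calibrationFactor(history);
}
```

The median is used rather than the mean so that a few pathological outlier tasks do not skew future estimates.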

3.2 Architecture and Design

Cross-Project Knowledge Sharing

Our RAG system analyzes Design Documents across Epics and Tasks to share non-trivial architectural solutions between projects. This is executed strictly in accordance with the Client AI Consent Questionnaire; knowledge sharing is disabled if the client opts out of context analysis.

UI/UX Prototyping

During the design phase, teams utilize a combination of MCPs and specific AI Skills to generate UI kits directly into the application framework. The Antigravity Chrome extension is then used to visually verify component styles.

3.3 Development (Implementation) and Security

Project Context Management

Every repository must contain an active .Antigravityrules (or AI_INSTRUCTIONS.md) file in its root directory. This file defines the AI's role, the explicit technology stack, coding standards, and critical negative prompts (e.g., "do not use 'any' in TypeScript"). The Tech Lead is responsible for keeping this file up to date.
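
An illustrative AI_INSTRUCTIONS.md might look like the following; the stack and rules shown are hypothetical examples, not a mandated template:

```markdown
# AI Instructions (illustrative example)

## Role
You are a senior engineer on a Node.js / TypeScript project.

## Stack
- Node.js 20, TypeScript 5 (strict mode)
- Mocha + Chai for tests

## Coding standards
- Follow the repository ESLint and Prettier configuration.
- All exported functions must have explicit return types.

## Negative prompts
- Do not use `any` in TypeScript.
- Do not add new dependencies without an explicit instruction.
- Do not modify database migrations that have already been applied.
```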

Commodity vs. Custom Solutions

Routine development tasks (e.g., standard forms, CRUD APIs) are primarily generated using AI Workflows. For custom business logic where AI capabilities are limited, engineers must manually implement a Proof of Concept (PoC) and subsequently prompt the AI on how to utilize that specific solution within the app.

Security Risk Assessment

While conducting threat modeling using STRIDE and DREAD methodologies, teams can utilize a dedicated architectural AI Skill as a supplementary analysis tool. However, the final responsibility for risk assessment and mitigation rests entirely with the human engineers. Inputting sensitive client data (PII, credentials) into any AI tool is strictly prohibited.
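
The DREAD part of such an assessment reduces to simple arithmetic; a minimal sketch, using the common convention of averaging five 0–10 category scores (a general convention, not a WebbyLab-specific scale):

```typescript
// Illustrative DREAD scoring helper; the 0-10 scales and averaging
// formula follow the common DREAD convention.
interface DreadScore {
  damage: number;
  reproducibility: number;
  exploitability: number;
  affectedUsers: number;
  discoverability: number;
}

// DREAD risk = arithmetic mean of the five category scores.
function dreadRisk(s: DreadScore): number {
  return (
    (s.damage +
      s.reproducibility +
      s.exploitability +
      s.affectedUsers +
      s.discoverability) / 5
  );
}
```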

3.4 QA and Automated Testing

Test-Driven Development (TDD)

QA and engineering teams utilize a TDD approach. By feeding clear Acceptance Criteria from the issue tracker into AI/MCP tools, the team accelerates the generation of test cases and the writing of automated tests (e.g., Mocha, Chai, PHPUnit).

3.5 Code Review, CI/CD, and DevOps

Automated MCP Review

Following static analysis and automated testing within our CI/CD pipeline, code undergoes an automated review process via a custom, self-hosted GitLab MCP deployed on our internal infrastructure. All AI-generated code still requires mandatory human Code Review.

Debugging and Incident Response

We utilize a specific MCP for debugging and interacting with logs from Amazon CloudWatch or other vendors. Crucial: Engineers are required to sanitize all logs, stripping them of secrets and personal data before sending them to the AI model.
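
A minimal sanitizer sketch; the redaction patterns below are illustrative examples only, not an exhaustive corporate list, and each project must maintain its own:

```typescript
// Illustrative log sanitizer: redact secrets and PII before any log
// line is sent to an AI model. Patterns are examples, not exhaustive.
const REDACTIONS: Array<[RegExp, string]> = [
  [/\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b/g, "[EMAIL]"],   // email addresses
  [/\bAKIA[0-9A-Z]{16}\b/g, "[AWS_ACCESS_KEY]"],                        // AWS access key IDs
  [/(Authorization:\s*Bearer\s+)[A-Za-z0-9._~+\/-]+=*/gi, "$1[TOKEN]"], // bearer tokens
  [/("password"\s*:\s*")[^"]*(")/gi, "$1[REDACTED]$2"],                 // JSON password fields
];

function sanitizeLog(line: string): string {
  return REDACTIONS.reduce((acc, [pattern, repl]) => acc.replace(pattern, repl), line);
}
```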

3.6 Retrospectives and Client Billing

AI Efficiency Tracking

During the Sprint Retrospective, the team analyzes the effectiveness of AI integration, specifically tracking the number of tokens consumed per feature.

Cost Management and Billing

Token expenditures are tied to the specific project's Cost Center. For client billing, when the BYOK pattern is used or when AI tokens are required for the application's core business logic (Product AI), these costs are treated as recurring infrastructure expenses (similar to server hosting). They are billed directly to the client via Direct Billing or Reimbursement, as agreed in the Client Questionnaire.
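
A minimal sketch of the roll-up, assuming a per-1K-token rate and a simple usage-event shape (both are placeholders, not real vendor pricing or a real schema):

```typescript
// Hypothetical token-cost roll-up per Cost Center.
interface UsageEvent {
  costCenter: string;
  tokens: number;
}

const USD_PER_1K_TOKENS = 0.01; // illustrative placeholder rate

// Sum token spend per Cost Center, converted to USD.
function costByCenter(events: UsageEvent[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const e of events) {
    const usd = (e.tokens / 1000) * USD_PER_1K_TOKENS;
    totals.set(e.costCenter, (totals.get(e.costCenter) ?? 0) + usd);
  }
  return totals;
}
```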

© 2026 WebbyLab LLC. All rights reserved.