diff --git a/README.md b/README.md index 62ca7a9..bfc5e5b 100644 --- a/README.md +++ b/README.md @@ -4,6 +4,8 @@ This repository is being used to build a C#-based semantic test mining platform The current source of truth for the planned product is [docs/requirements.md](docs/requirements.md). The concept document that informed it is [docs/concept.md](docs/concept.md). +In addition to being production-capable, the application is expected to support a local, production-like developer environment so changes can be exercised safely before they are pushed. The Blazor Server UI is also expected to be built early enough that major workflows can be visually tested during development rather than only through backend or CLI flows. + ## Product Summary The application described in this repository is not intended to be a raw click recorder. It is intended to be a semantic test mining and stabilisation platform that: diff --git a/WORKFLOW.md b/WORKFLOW.md index cd91faf..2418b15 100644 --- a/WORKFLOW.md +++ b/WORKFLOW.md @@ -4,7 +4,7 @@ tracker: endpoint: https://api.github.com/graphql api_key: $GITHUB_TOKEN owner: releasedgroup - repo: nextmedia-manager-copilot + repo: 2EndSquaredTesting milestone: null include_pull_requests: true labels: [] diff --git a/docs/README.md b/docs/README.md index 3c7113a..56b8f8e 100644 --- a/docs/README.md +++ b/docs/README.md @@ -14,6 +14,7 @@ This folder now contains the planning and specification documents for the new te - [Security Plan](./security-plan.md) - [Testing Plan](./testing-plan.md) - [Implementation Roadmap](./implementation-roadmap.md) +- [Sprint Plan](./sprint-plan.md) - [Requirements Traceability Matrix](./requirements-traceability-matrix.md) ## Recommended Reading Order @@ -24,7 +25,8 @@ This folder now contains the planning and specification documents for the new te 4. `security-plan.md` 5. `testing-plan.md` 6. 
`implementation-roadmap.md` -7. `requirements-traceability-matrix.md` +7. `sprint-plan.md` +8. `requirements-traceability-matrix.md` ## Usage Guidance @@ -33,3 +35,4 @@ This folder now contains the planning and specification documents for the new te - Use the UI/UX, security, and testing plans as non-optional design constraints. - Use the traceability matrix to connect future code, tests, ADRs, and pull requests back to requirements. - Treat all planning documents as subordinate to `requirements.md`; if a planning document and the requirements differ, update the planning document. +- Treat the local production-like developer environment and early UI delivery as non-optional delivery constraints, not optional polish. diff --git a/docs/implementation-roadmap.md b/docs/implementation-roadmap.md index 273b370..e2e23ed 100644 --- a/docs/implementation-roadmap.md +++ b/docs/implementation-roadmap.md @@ -24,6 +24,7 @@ Deliverables: - internal API versioning convention - SignalR infrastructure - artefact storage abstraction +- local production-like developer environment bootstrap - fixture app test harness - baseline observability and audit primitives - ADRs for generator choice, authentication provider, replay execution boundary, and draft persistence shape @@ -40,20 +41,24 @@ Prove the end-to-end path from recording to generated code to replay on a simple Suggested slices: -1. Recording session creation plus allow-list validation -2. Playwright browser launch and recorder injection -3. Meaningful event capture for navigation, click, fill, select, and checkbox -4. Incremental recording persistence -5. Initial timeline UI with human-readable steps -6. Locator candidate creation and ranking -7. Draft scenario editing and immutable version creation -8. Deterministic C# generator for one profile -9. Basic replay execution and step-level result reporting +1. Local production-like developer startup workflow with Blazor UI and PostgreSQL +2. 
Application shell and navigation suitable for visual testing +3. Recording session creation plus allow-list validation +4. Playwright browser launch and recorder injection +5. Meaningful event capture for navigation, click, fill, select, and checkbox +6. Incremental recording persistence +7. Initial timeline UI with human-readable steps +8. Locator candidate creation and ranking +9. Draft scenario editing and immutable version creation +10. Deterministic C# generator for one profile +11. Basic replay execution and step-level result reporting Phase 1 exit should match Section 18.5 Phase 1 exit criteria. Phase 1 is not complete unless URL allow-list enforcement and encryption of stored auth/session material are demonstrably working, because they are part of the stated exit criteria rather than optional hardening. +Phase 1 is also not complete unless a developer can run the real UI locally against a production-like stack shape and visually test the core workflow before push. + ## 5. Phase 2: Robustness Objective: diff --git a/docs/requirements-traceability-matrix.md b/docs/requirements-traceability-matrix.md index e1f09ae..35bd931 100644 --- a/docs/requirements-traceability-matrix.md +++ b/docs/requirements-traceability-matrix.md @@ -14,24 +14,25 @@ Use this document to keep future code changes, ADRs, test cases, and pull reques - [Security Plan](./security-plan.md) - [Testing Plan](./testing-plan.md) - [Implementation Roadmap](./implementation-roadmap.md) +- [Sprint Plan](./sprint-plan.md) ## 3. 
Requirement Coverage Matrix | Requirement Area | Requirement Source | Primary Planning Docs | | --- | --- | --- | | Product intent and canonical scenario principle | Sections 1, 2, 7.1, 24 | Technical Specification, Implementation Roadmap | -| Scope and phase boundaries | Sections 3, 18, 21 | Technical Specification, Implementation Roadmap | +| Scope and phase boundaries | Sections 3, 18, 21 | Technical Specification, Implementation Roadmap, Sprint Plan | | Roles and primary use cases | Section 4 | UI/UX Plan, Technical Specification | -| Logical architecture and lifecycles | Sections 5, 17 | Technical Specification, Implementation Roadmap | +| Logical architecture and lifecycles | Sections 5, 17 | Technical Specification, Implementation Roadmap, Sprint Plan | | Technology stack and provider strategy | Section 6 | Technical Specification | | Domain entities and artefacts | Sections 7.2 through 7.5 | Technical Specification, Testing Plan | -| Recording requirements | `FR-REC-001` through `FR-REC-010` | Technical Specification, UI/UX Plan, Security Plan, Testing Plan, Implementation Roadmap | -| Inference requirements | `FR-INF-001` through `FR-INF-009` | Technical Specification, UI/UX Plan, Testing Plan | -| Scenario authoring requirements | `FR-AUTH-001` through `FR-AUTH-006` | Technical Specification, UI/UX Plan, Testing Plan | -| Generation requirements | `FR-GEN-001` through `FR-GEN-010` | Technical Specification, UI/UX Plan, Testing Plan, Implementation Roadmap | -| Replay requirements | `FR-REP-001` through `FR-REP-005` | Technical Specification, UI/UX Plan, Security Plan, Testing Plan | -| Healing requirements | `FR-HEAL-001` through `FR-HEAL-005` | Technical Specification, UI/UX Plan, Security Plan, Testing Plan, Implementation Roadmap | -| Administration requirements | `FR-ADM-001` through `FR-ADM-004` | Technical Specification, UI/UX Plan, Security Plan, Testing Plan | +| Recording requirements | `FR-REC-001` through `FR-REC-010` | Technical Specification, 
UI/UX Plan, Security Plan, Testing Plan, Implementation Roadmap, Sprint Plan | +| Inference requirements | `FR-INF-001` through `FR-INF-009` | Technical Specification, UI/UX Plan, Testing Plan, Sprint Plan | +| Scenario authoring requirements | `FR-AUTH-001` through `FR-AUTH-006` | Technical Specification, UI/UX Plan, Testing Plan, Sprint Plan | +| Generation requirements | `FR-GEN-001` through `FR-GEN-010` | Technical Specification, UI/UX Plan, Testing Plan, Implementation Roadmap, Sprint Plan | +| Replay requirements | `FR-REP-001` through `FR-REP-005` | Technical Specification, UI/UX Plan, Security Plan, Testing Plan, Sprint Plan | +| Healing requirements | `FR-HEAL-001` through `FR-HEAL-005` | Technical Specification, UI/UX Plan, Security Plan, Testing Plan, Implementation Roadmap, Sprint Plan | +| Administration requirements | `FR-ADM-001` through `FR-ADM-004` | Technical Specification, UI/UX Plan, Security Plan, Testing Plan, Sprint Plan | | UI requirements | Section 9 | UI/UX Plan, Technical Specification | | API and real-time requirements | Section 10 | Technical Specification, UI/UX Plan | | Configuration requirements | Section 11 | Technical Specification, Security Plan | @@ -40,7 +41,8 @@ Use this document to keep future code changes, ADRs, test cases, and pull reques | Verification and testing requirements | Section 14 | Testing Plan | | Non-functional requirements | Section 15 | Technical Specification, UI/UX Plan, Testing Plan | | Deployment and operational safety | Section 16 | Technical Specification, Security Plan, Implementation Roadmap | -| Acceptance criteria | Section 19 | Testing Plan, Technical Specification, Implementation Roadmap | +| Local production-like developer environment and visual workflow validation | Sections 9.1, 14.1, 16.2, 18.1, 19 | Technical Specification, UI/UX Plan, Testing Plan, Implementation Roadmap, Sprint Plan | +| Acceptance criteria | Section 19 | Testing Plan, Technical Specification, Implementation Roadmap, Sprint 
Plan | | Risks and constraints | Section 20 | Technical Specification, Implementation Roadmap | | Build, test, and delivery requirements | Section 22 | Testing Plan, Technical Specification, Security Plan | | Implementer guardrails | Section 23 | Security Plan, Technical Specification, Implementation Roadmap | @@ -61,6 +63,7 @@ Use this document to keep future code changes, ADRs, test cases, and pull reques | Generated output remains reproducible | `FR-GEN-001`, `FR-GEN-010` | Technical Specification generation design, Testing Plan snapshot strategy | | Allow-list blocks disallowed targets | Section 12.4 | Security Plan allow-list control, Testing Plan allow-list tests | | Observability supports performance verification | Sections 13.1, 15.2 | Technical Specification observability design, Testing Plan performance instrumentation | +| Developers can validate the real UI locally before push | Sections 9.1, 14.1, 16.2, 19 | UI/UX Plan local visual testing expectations, Testing Plan local validation environment, Sprint Plan sprint exit criteria | ## 5. Future Use diff --git a/docs/requirements.md b/docs/requirements.md index d561a94..fb7e407 100644 --- a/docs/requirements.md +++ b/docs/requirements.md @@ -13,6 +13,8 @@ The platform shall: - Generate readable C# Playwright test code and supporting assets from the scenario model. - Replay generated tests and attempt deterministic healing when selectors or data drift. - Preserve enough artefacts, metadata, and diagnostics to let users review, edit, approve, and regenerate scenarios over time. +- Support a local, production-like developer environment so the full application can be exercised before changes are pushed or deployed. +- Prioritise delivery of a usable Blazor Server UI early enough that major workflows can be visually tested throughout development, not only after backend completion. 
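As a non-normative illustration of the local, production-like environment described above, a minimal Compose sketch for the PostgreSQL dependency might look like the following. The file layout, service names, image tag, ports, and credentials are assumptions, not requirements; the ASP.NET Core host with the real Blazor Server UI would run against this database via the normal `dotnet run` path.

```yaml
# Hypothetical docker-compose.yml for the local developer profile (illustration only).
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev-only-password   # local-only credential, never production
      POSTGRES_DB: testmining                # database name is an assumption
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data      # keep local data across container restarts
volumes:
  pgdata:
```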
The platform shall be built with the following solution model: @@ -75,6 +77,7 @@ The v1 platform shall support: - Deterministic selector healing workflows - Persistent storage of sessions, scenarios, steps, artefacts, and replay history - Multi-user authenticated web administration experience through Blazor Server +- Local developer execution in a production-like stack shape suitable for pre-push and pre-deployment validation - Export of generated tests into repository-friendly file structures ### 3.2 Out of Scope for v1 @@ -1037,6 +1040,8 @@ The Blazor Server application shall provide at least these primary work areas: 4. Replay and diagnostics workspace 5. Administration area +The UI shall be implemented early enough in the delivery plan that developers can visually exercise the primary workflows during development. API-only or backend-only completion is not sufficient for the intended v1 delivery workflow. + ### 9.2 Recording Workspace The recording workspace shall display: @@ -1248,6 +1253,8 @@ The platform implementation shall include automated coverage for: - Healing evaluation - Persistence mappings and migrations +The implementation shall also support a local, production-like developer test environment so changes can be exercised end to end before they are pushed to GitHub or deployed. That local environment shall include the real Blazor UI, PostgreSQL, artefact storage, and representative fixture applications unless a specific component is intentionally stubbed for local-only ergonomics. + ### 14.2 Test Layers At minimum, the repository should include: @@ -1326,6 +1333,10 @@ The implementation shall support at least: - Shared non-production environment - Production-like server deployment +The local developer execution profile shall mimic the actual application shape closely enough that a developer can validate the main workflows before pushing changes. 
At minimum, the local profile should run the ASP.NET Core host with the real Blazor Server UI, PostgreSQL, local artefact storage, and representative fixture applications or seeded test data. + +Differences between the local profile and shared/production-like environments shall be minimised and documented explicitly. Developer convenience shortcuts shall not bypass core safety behaviour such as allow-list enforcement, masking, audit logging, and encryption of sensitive stored material unless a local-only exception is intentionally documented and risk-accepted. + ### 16.3 Packaging Direction Desktop packaging through .NET MAUI Hybrid or Electron is explicitly a future option. The initial architecture shall therefore keep the backend and UI boundaries clean enough that later packaging can host the same application surfaces without re-implementing core recording, generation, or replay services. @@ -1369,8 +1380,9 @@ The application should be introduced into this repository as a new vertical slic Phase 1 shall target: +- Local production-like developer environment bootstrap for safe pre-push validation - Recording of navigation, click, fill, select, checkbox, and simple assertion-relevant actions -- Timeline UI +- Timeline UI and core workflow surfaces sufficient for visual testing by developers - Locator ranking - Basic scenario persistence with immutable scenario version creation - C# Playwright generation @@ -1416,6 +1428,7 @@ Phase 1 exit: - Generated C# Playwright output compiles against the emitted helper library. - Replay executes the generated scenario against the same fixture and reports pass/fail per step. - URL allow-list enforcement (Section 12.4) and encryption of storage state (Section 12.5) are enforced. +- A developer can run a local, production-like environment with the Blazor UI and visually exercise the Phase 1 workflow before pushing changes. 
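To make the "generated C# Playwright output compiles" exit criterion concrete, a generated test might resemble the sketch below. The namespace, class, scenario names, fixture URL, and the `LocatorResolver` API surface are illustrative assumptions; the requirements mandate readable, deterministic output and a generated locator-resolution helper, not this exact shape.

```csharp
// Illustrative sketch only; not the mandated generated shape.
using Microsoft.Playwright;
using Xunit;

namespace TestMining.Generated.Scenarios;    // hypothetical namespace

public sealed class CheckoutScenarioTests    // hypothetical scenario name
{
    [Fact]
    public async Task Checkout_CompletesSuccessfully()
    {
        using var playwright = await Playwright.CreateAsync();
        await using var browser = await playwright.Chromium.LaunchAsync();
        var context = await browser.NewContextAsync();   // isolated context per replay
        var page = await context.NewPageAsync();

        await page.GotoAsync("https://fixture.local/shop");  // allow-listed fixture URL

        // LocatorResolver is the generated helper named in the plans;
        // this API surface is an assumption.
        var resolver = new LocatorResolver(page);
        await (await resolver.ResolveAsync("add-to-basket")).ClickAsync();
        await (await resolver.ResolveAsync("basket-count")).WaitForAsync();
    }
}
```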
Phase 2 exit: - Assertion inference produces at least one outcome-oriented assertion suggestion for every save/submit/navigate step in the Phase 1 fixture library. @@ -1447,6 +1460,7 @@ The implementation shall be considered to satisfy this specification only when a 10. Generated output remains reproducible bit-for-bit (modulo timestamps declared as non-deterministic) from scenario data plus generation profile plus template version. [FR-GEN-001, FR-GEN-010] 11. Recording and replay refuse to start against target URLs not present on the administrator-managed allow-list. [12.4] 12. Observability emits the identifiers listed in 13.1 and permits verification of the performance targets in 15.2. +13. A developer can run the application locally in a production-like configuration, including the real Blazor UI and PostgreSQL-backed persistence, to validate core workflows before push or deployment. [9.1, 14.1, 16.2] ## 20. Risks and Constraints @@ -1492,6 +1506,7 @@ The test mining platform shall be introduced into this repository as a new verti - The solution shall build with the repository's documented required .NET SDK version. Any SDK upgrade shall be an explicit, documented change. - `dotnet restore`, `dotnet build`, and `dotnet test` shall succeed from a clean checkout with no unresolved warnings treated as errors in core projects. - Projects shall enable nullable reference types and treat analyzer warnings as errors where practical. +- The repository should provide a documented local developer startup workflow for running a production-like application profile, including the Blazor UI and PostgreSQL dependency path. 
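The documented startup workflow called for above might look like the following sketch. The compose service name, project paths, and the use of EF Core CLI migrations are assumptions based on the `TestMining.Platform.*` naming used elsewhere in the plans.

```shell
# Hypothetical local startup sequence (paths and service names are assumptions).
docker compose up -d postgres                                            # start the local database
dotnet ef database update --project src/TestMining.Platform.Persistence  # apply migrations
dotnet run --project src/TestMining.Platform.Host                        # real Blazor Server UI
```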
### 22.3 Continuous Integration diff --git a/docs/security-plan.md b/docs/security-plan.md index 213901c..bc4f2d3 100644 --- a/docs/security-plan.md +++ b/docs/security-plan.md @@ -51,6 +51,8 @@ Recommended v1 approach: - shared environments: external identity provider via ASP.NET Core authentication - production-like environments: enforced external identity provider and secure cookie/session configuration +The local developer environment should still exercise the real application shell and the core security-sensitive code paths wherever practical. Local convenience mode should reduce friction, not create a separate unrepresentative application path. + ### 5.2 Authorization Minimum role model: diff --git a/docs/sprint-plan.md b/docs/sprint-plan.md new file mode 100644 index 0000000..06ca291 --- /dev/null +++ b/docs/sprint-plan.md @@ -0,0 +1,304 @@ +# Test Mining Platform Sprint Plan + +## 1. Purpose + +This sprint plan converts the implementation guidance in: + +- [requirements.md](./requirements.md) +- [technical-specification.md](./technical-specification.md) +- [ui-ux-plan.md](./ui-ux-plan.md) +- [security-plan.md](./security-plan.md) +- [testing-plan.md](./testing-plan.md) +- [implementation-roadmap.md](./implementation-roadmap.md) + +into an ordered execution plan for delivering the v1 application. + +This plan is subordinate to `requirements.md`. If any sprint task conflicts with the requirements, update the sprint plan rather than weakening the requirements. + +## 2. Planning Assumptions + +To keep the plan correct and implementation-ready, these assumptions are used: + +1. This sprint plan targets v1 completion, not the optional AI-assistance phase. +2. PostgreSQL is the required v1 persistence provider path. +3. Blazor Server is the required primary UI. +4. Structured `Scenario` plus immutable `ScenarioVersion` remain the source of truth throughout every sprint. +5. 
Security, auditability, observability, and automated verification are built into each sprint rather than deferred to the end. +6. Developers need a local production-like environment and usable UI surfaces early so workflow changes can be tested visually before push. + +## 3. Sprint Structure + +The plan uses eight implementation sprints: + +1. Sprint 0: Foundation and Architecture Decisions +2. Sprint 1: Host Shell, Persistence, and Admin Baseline +3. Sprint 2: Recording Pipeline MVP +4. Sprint 3: Scenario Authoring and Locator Intelligence +5. Sprint 4: Deterministic Generation and Export +6. Sprint 5: Replay MVP and Failure Diagnostics +7. Sprint 6: Semantic Robustness and Review Depth +8. Sprint 7: Product Hardening and v1 Completion + +Each sprint contains: + +- a sprint goal +- feature issues +- core requirement links +- explicit exit criteria + +## 4. Sprint 0: Foundation and Architecture Decisions + +### Goal + +Establish the project skeleton, architectural decisions, CI/testing baseline, and shared conventions needed to deliver the rest of the product without rework. + +### Issues + +1. Scaffold `TestMining.Platform.*` solution structure and shared project conventions. +2. Write ADRs for generator choice, authentication provider direction, replay execution boundary, and draft persistence shape. +3. Establish CI baseline for restore, build, test, secret scanning, and PostgreSQL-backed integration execution. +4. Create local production-like developer environment bootstrap plus fixture application/test harness foundation for later recording and replay coverage. + +### Requirement Links + +- Sections 6, 14, 17, 22 +- Sections 23.1 through 23.3 + +### Exit Criteria + +- New platform projects exist with clear boundaries. +- CI runs on the repository and enforces build/test/security baseline. +- Developers can start a local environment that mirrors the intended application topology closely enough for real workflow testing. 
+- Fixture harness exists for later sprints. +- Deferred design choices that block implementation have ADR direction recorded. + +## 5. Sprint 1: Host Shell, Persistence, and Admin Baseline + +### Goal + +Deliver the application host, authentication shell, PostgreSQL persistence baseline, admin configuration, and security-critical policy surfaces. + +### Issues + +1. Implement ASP.NET Core host and Blazor Server shell with role-aware navigation, usable for visual local testing from this sprint onward. +2. Implement PostgreSQL persistence baseline, EF Core migrations, and core domain entities. +3. Implement environment configuration, target URL allow-list management, and audit logging baseline. +4. Implement artefact storage abstraction, retention-policy model, and encrypted secret-storage plumbing. + +### Requirement Links + +- Sections 5.2, 6.1, 6.2, 7.2 through 7.5, 9.1, 10, 11, 12.1 through 12.7, 13, 16.1 +- `FR-ADM-001`, `FR-ADM-003`, `FR-ADM-004` + +### Exit Criteria + +- Users can sign in and reach a Blazor shell with role-aware access. +- PostgreSQL migrations succeed from a clean checkout. +- Core persistence entities exist for scenarios, versions, recordings, artefacts, replay runs, healing suggestions, and audit events. +- Allow-list and retention policies are managed server-side and audited. +- The local environment is usable for visually testing the host shell and admin configuration flows. + +## 6. Sprint 2: Recording Pipeline MVP + +### Goal + +Deliver the first end-to-end recording flow that can safely launch a browser, capture meaningful events, and persist recording progress incrementally. + +### Issues + +1. Implement recording session creation, validation, and browser launch with allow-list enforcement. +2. Implement recorder script delivery and init-script injection through Playwright. +3. Implement meaningful event capture, Playwright observation correlation, and reliable recorder transport. +4. 
Implement incremental recording persistence with pause, resume, stop, cancel, importance marking, step ignoring, and inline notes. + +### Requirement Links + +- `FR-REC-001` through `FR-REC-010` +- Sections 5.3.1, 9.2, 10.2, 10.4, 12.4, 12.6 + +### Exit Criteria + +- A user can start and control a recording from the UI. +- Browser launch is blocked for disallowed target URLs. +- Meaningful events are captured and persisted incrementally. +- Sensitive inputs are masked per active policy during capture and preview. +- The recording workflow is visually testable through the local UI. + +## 7. Sprint 3: Scenario Authoring and Locator Intelligence + +### Goal + +Transform recordings into structured drafts, expose a usable timeline editor, and persist ranked locator intelligence plus immutable scenario versions. + +### Issues + +1. Implement event normalisation and semantic action inference for the Phase 1 step set. +2. Implement element snapshots, locator candidate generation, ranking, and record-time validation. +3. Implement scenario draft creation, timeline workspace, and step editing workflows. +4. Implement scenario validation and immutable `ScenarioVersion` creation with change history. + +### Requirement Links + +- `FR-INF-001` through `FR-INF-005` +- `FR-AUTH-001`, `FR-AUTH-002`, `FR-AUTH-005`, `FR-AUTH-006` +- Sections 7.1 through 7.5, 9.3 + +### Exit Criteria + +- Recorded sessions compile into structured scenario drafts. +- Locator candidates are ranked and persisted for every targetable step. +- Users can review, suppress, reorder, annotate, and validate steps in the timeline UI. +- Material edits create new immutable scenario versions rather than mutating committed history. +- Developers can visually test timeline editing and versioning locally. + +## 8. Sprint 4: Deterministic Generation and Export + +### Goal + +Generate readable and reproducible C# Playwright output from approved scenario versions, with manifest generation and repository-friendly export. + +### Issues + +1. 
Implement deterministic generation pipeline and generation profile model for the first supported output style. +2. Implement readable C# Playwright test generation plus runtime helper compatibility metadata. +3. Implement `LocatorResolver` helper generation and low-confidence warning surfacing in preview. +4. Implement generated artefact persistence, manifest generation, preview, and repository-friendly export workflow. + +### Requirement Links + +- `FR-GEN-001` through `FR-GEN-010` +- Sections 5.3.3, 9.4, 10.3, 14.4, 19 + +### Exit Criteria + +- Approved scenario versions generate deterministic file sets. +- Generated code is previewable in the UI and exportable to a repository-friendly layout. +- Generation manifests include template version, helper compatibility, and checksum details. +- Generator output is verified by snapshot/approval tests. +- Developers can visually validate generation previews and warnings locally. + +## 9. Sprint 5: Replay MVP and Failure Diagnostics + +### Goal + +Replay scenario versions or generated artefacts in isolated contexts and provide actionable step-by-step diagnostics for failures. + +### Issues + +1. Implement replay execution orchestration from scenario version or generation artefact. +2. Implement step-level replay status streaming and replay workspace UI. +3. Implement replay diagnostics capture, failure categorisation, and artefact linking. +4. Implement targeted rerun support for full replay, current-step onward, and single-step diagnostics where valid. + +### Requirement Links + +- `FR-REP-001` through `FR-REP-005` +- Sections 5.3.4, 9.5, 10.2, 13.3 + +### Exit Criteria + +- Users can run replay from the UI against the approved scenario version or chosen generated artefact. +- Replay runs use isolated browser contexts. +- Failures produce categorised diagnostics with step-level evidence. +- Replay status survives long-running execution and remains observable. 
+- Developers can visually validate replay progress and diagnostics through the local UI. + +## 10. Sprint 6: Semantic Robustness and Review Depth + +### Goal + +Increase generated test resilience and author review quality with assertions, variables, fuzzy strategies, scoped locators, and richer diagnostic evidence. + +### Issues + +1. Implement variable classification model and UI editing for literals, parameters, generated values, fixtures, secrets, and ignored data. +2. Implement assertion suggestion engine, approval workflows, and outcome-oriented assertion persistence. +3. Implement fuzzy assertion strategies, scoped locators, and confidence explanation surfaces. +4. Implement richer replay diagnostics packages including screenshots and locator-resolution evidence. + +### Requirement Links + +- `FR-INF-006` through `FR-INF-009` +- `FR-AUTH-003`, `FR-AUTH-004` +- `FR-REP-002`, `FR-REP-003` +- Sections 9.3 through 9.5, 14.1 through 14.4 + +### Exit Criteria + +- Authors can classify all captured data with masking-safe behaviour. +- Suggested assertions are reviewable and approval-gated before generation. +- Replay diagnostics include richer artefacts and locator evidence. +- Regeneration preserves approved variable and assertion choices. +- Developers can visually validate the deeper authoring and diagnostics flows locally. + +## 11. Sprint 7: Product Hardening and v1 Completion + +### Goal + +Finish the remaining hardening work needed to satisfy the v1 requirements and acceptance criteria: authentication bootstrap, healing approval, history/diff visibility, retention cleanup, and export/review polish. + +### Issues + +1. Implement authentication bootstrap options for manual login, stored storage state, and controlled cookie import. +2. Implement deterministic healing proposal generation, approval workflow, and linked immutable version creation. +3. Implement scenario history, version comparison, and trace viewer/export review surfaces. +4. 
Implement retention cleanup scheduling, artefact lifecycle enforcement, and operational safety/cleanup observability. + +### Requirement Links + +- `FR-ADM-002` +- `FR-HEAL-001` through `FR-HEAL-005` +- Sections 7.4, 12.5, 12.7, 16.4, 18.5, 19 + +### Exit Criteria + +- Authentication bootstrap flows work end to end under controlled administration. +- Healing proposals are deterministic, evidence-backed, and approval-gated. +- Approved healing creates a new immutable scenario version linked to the originating replay run. +- Retention cleanup is scheduled, observable, and audited. +- The v1 acceptance criteria in Section 19 are demonstrably satisfied. +- The full core workflow is visually testable locally before push or deployment. + +## 12. Cross-Sprint Working Agreements + +These apply in every sprint: + +1. Every implementation issue must cite the requirement IDs it satisfies. +2. Every behaviour change must include automated coverage linked to those requirements. +3. Security-sensitive work must document masking, encryption, audit, and retention impact. +4. Generated code is always derived output, never the editing source of truth. +5. `Symphony.*` tooling assets remain outside the change scope unless explicitly requested. + +## 13. GitHub Execution Model + +To keep GitHub planning easy to manage: + +- each sprint should be represented by a GitHub milestone +- each deliverable issue should be assigned to exactly one sprint milestone +- issue titles should stay vertical-slice oriented instead of layer-only +- issue bodies should include scope, requirement links, dependencies, and exit expectations + +Recommended labels for later use: + +- `area/host` +- `area/recording` +- `area/analysis` +- `area/generation` +- `area/replay` +- `area/healing` +- `area/security` +- `area/testing` + +## 14. Definition of Done Per Sprint + +A sprint is only done when: + +1. Its milestone issues are closed. +2. The sprint exit criteria in this document are met. +3. 
Build, test, and security checks pass for the implemented scope. +4. New behaviour is documented where needed. +5. No sprint introduces drift from `requirements.md`. + +## 15. Post-v1 Backlog + +The optional AI-assistance phase remains post-v1 and should not be treated as required for initial application completion. If scheduled later, it should be tracked as a separate milestone series with feature flags and advisory-only controls. diff --git a/docs/technical-specification.md b/docs/technical-specification.md index e34fb7a..19d360c 100644 --- a/docs/technical-specification.md +++ b/docs/technical-specification.md @@ -32,6 +32,8 @@ This technical plan primarily supports: - generating readable C# Playwright output - replaying scenarios with diagnostics - producing deterministic healing suggestions under human approval +- running the application locally in a production-like shape so end-to-end changes can be exercised before push or deployment +- delivering a usable UI early enough that major workflows can be visually tested during implementation Relevant requirements: @@ -97,6 +99,7 @@ Recommended defaults for v1: - Scriban templates for first-pass generation readability and snapshot testing - Roslyn reserved for later structural generation/refactoring needs - Filesystem artefact storage behind an abstraction so object storage can be added later +- A documented local developer environment that mirrors the production-capable application topology closely enough for end-to-end validation Deferred by requirements and requiring ADRs before hardening: @@ -391,6 +394,17 @@ Validation rules: - environment-level invalid configuration fails startup - runtime-editable invalid configuration fails save with actionable validation +### 12.1 Local Production-Like Developer Profile + +The implementation should provide a documented local profile that mirrors the real application topology as closely as practical for development: + +- ASP.NET Core host and real Blazor Server UI +- 
PostgreSQL as the active local persistence provider +- local filesystem artefact storage through the same abstraction used elsewhere +- fixture applications or seeded test data for recording, generation, and replay validation + +The local profile is for pre-push confidence, not for inventing a second architecture. Differences from shared or production-like environments should stay minimal and explicit. + ## 13. Security-Critical Technical Controls This technical specification depends on the detailed controls in [docs/security-plan.md](./security-plan.md). At minimum: @@ -452,10 +466,10 @@ The following remain deferred and should be resolved in ADRs before implementati To align with Phase 1 in Section 18: -1. Host shell plus authenticated Blazor layout and PostgreSQL persistence bootstrap +1. Host shell plus authenticated Blazor layout and PostgreSQL persistence bootstrap, available in a local production-like developer profile 2. Recording session creation plus allow-list validation 3. Recorder transport and incremental persistence for navigation/click/fill/select -4. Timeline review UI with draft editing +4. Timeline review UI with draft editing so the workflow can be visually tested early 5. Locator ranking and scenario version creation 6. Deterministic generation preview and export 7. 
Basic replay with pass/fail diagnostics @@ -469,3 +483,4 @@ Implementation should not start until these are agreed: - generator approach for Phase 1 - fixture web applications for automated testing - authentication approach for non-local environments +- local production-like startup workflow for developers, including how the UI and PostgreSQL are run together diff --git a/docs/testing-plan.md b/docs/testing-plan.md index 90083fe..11f3847 100644 --- a/docs/testing-plan.md +++ b/docs/testing-plan.md @@ -24,6 +24,7 @@ Derived from requirements: - flaky tests must be fixed or quarantined quickly - every acceptance criterion must trace to automated coverage - the PostgreSQL-backed path is the required v1 provider path in CI and local integration testing +- developers need a local production-like environment for pre-push validation of the real application and UI ## 4. Repository Test Layout @@ -199,6 +200,24 @@ Required Phase 1 path: 5. generate C# output 6. replay and inspect result +### 6.4 Local Developer Validation Environment + +In addition to automated coverage, the repository should support a local production-like validation profile so a developer can test changes before pushing them. + +That local profile should include: + +- the real ASP.NET Core host +- the real Blazor Server UI +- PostgreSQL +- artefact storage through the normal abstraction +- representative fixture applications or seeded data + +Recommended usage: + +- run the full application locally +- exercise the main UI workflows visually +- use the local environment before pushing substantial workflow changes + ## 7. Fixture Application Plan Based on Section 14.3, fixtures should cover: @@ -286,6 +305,8 @@ Pull request CI should: 6. run secret scanning and policy checks 7. run PostgreSQL-backed integration tests using a containerized database, per Section 22.3 +The repository should also document the local startup path developers use to validate the same application shape before push. 
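As one illustration of that documented startup path — a sketch only, in which the compose file name, image tag, database name, credentials, and volume name are assumptions rather than committed repository decisions — the PostgreSQL dependency could be containerized while the real host and UI run directly:

```yaml
# docker-compose.local.yml — hypothetical local-profile sketch.
# Only PostgreSQL is containerized; the ASP.NET Core host and Blazor
# Server UI are started with `dotnet run`, so the shipped application
# shape is what gets exercised before push.
services:
  postgres:
    image: postgres:16              # assumed tag; align with the CI database image
    environment:
      POSTGRES_DB: testmining       # assumed database name
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: localdev   # local-only credential, never reused elsewhere
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

A developer would then start the database with `docker compose -f docker-compose.local.yml up -d`, point the host's connection string at `localhost:5432`, and run the application with `dotnet run` against the host project before pushing changes.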
+ Suggested command groups: - `dotnet restore` @@ -301,6 +322,7 @@ Suggested command groups: - generated output compiles - replay reports per-step pass/fail - allow-list and encryption rules tested +- local production-like environment supports visual execution of the Phase 1 UI flow ### Phase 2 diff --git a/docs/ui-ux-plan.md b/docs/ui-ux-plan.md index 71e0072..4d787cc 100644 --- a/docs/ui-ux-plan.md +++ b/docs/ui-ux-plan.md @@ -30,6 +30,7 @@ Practical interpretation: - keep every destructive or source-of-truth-changing action explicit - preserve continuity between recording, editing, generation, and replay - keep the scenario editor as the primary authoring surface; generated code preview is a downstream review surface, not the editing source of truth +- deliver the UI early enough that developers can visually test the real workflows during implementation, not only after backend completion ## 3. User Roles and UX Focus @@ -64,6 +65,8 @@ Recommended dashboard widgets: - healing proposals awaiting approval - retention or configuration warnings +This navigation shell should exist early in development so the product can be exercised visually in a local environment even while deeper capability slices are still being completed. + ## 5. Information Architecture ### 5.1 Recording Workspace @@ -285,7 +288,19 @@ Because recording, generation, replay, export, and cleanup are asynchronous, the - provide retry guidance when failures occur - avoid blocking the entire app shell for operation-specific failures -## 8. Accessibility Plan +## 8. 
Local Visual Testing Expectations + +Because the product is intended to be developed and validated through its actual UI, the local developer environment should support visual testing of: + +- sign-in and application shell navigation +- recording session creation and recording status +- timeline editing and scenario validation feedback +- generation preview and warnings +- replay execution, diagnostics, and healing review surfaces + +Local visual testing should use the same Blazor Server UI that will ship, not a separate mock frontend. + +## 9. Accessibility Plan Required to support Section 15.6: @@ -296,7 +311,7 @@ Required to support Section 15.6: - non-color indicators for pass/warn/fail/confidence levels - focus management when drawers, dialogs, or review panels open -## 9. Responsive Behaviour +## 10. Responsive Behaviour The UI is desktop-first for v1, but should degrade gracefully. @@ -308,7 +323,7 @@ Recommended breakpoints: Do not hide critical validation, approval, or security warnings on smaller layouts. -## 10. Design System Guidance +## 11. Design System Guidance Suggested component set: @@ -329,7 +344,7 @@ Suggested state taxonomy: - blocking - sensitive -## 11. Source-of-Truth UX Rules +## 12. Source-of-Truth UX Rules To stay aligned with Sections 7.1 and 8.4: @@ -338,7 +353,7 @@ To stay aligned with Sections 7.1 and 8.4: 3. Replay and healing screens must always show which scenario version they are derived from. 4. Any action that changes persisted scenario behaviour must route through scenario versioning, not ad hoc direct mutation. -## 12. UX Acceptance Checks +## 13. UX Acceptance Checks The following checks should be true before UI slices are considered complete: @@ -347,3 +362,4 @@ The following checks should be true before UI slices are considered complete: 3. Sensitive values never appear in clear text in preview surfaces. 4. Long-running operations recover gracefully from refresh or reconnect. 5. 
Role-restricted actions are hidden or disabled with clear rationale. +6. A developer can run the local environment and visually exercise the primary UI workflows before pushing changes. diff --git a/src/Symphony.Infrastructure.Persistence.Sqlite/Symphony.Infrastructure.Persistence.Sqlite.csproj b/src/Symphony.Infrastructure.Persistence.Sqlite/Symphony.Infrastructure.Persistence.Sqlite.csproj
index fd821b5..3f0e99f 100644
--- a/src/Symphony.Infrastructure.Persistence.Sqlite/Symphony.Infrastructure.Persistence.Sqlite.csproj
+++ b/src/Symphony.Infrastructure.Persistence.Sqlite/Symphony.Infrastructure.Persistence.Sqlite.csproj
@@ -4,16 +4,19 @@
-
-
-
-      all
-      runtime; build; native; contentfiles; analyzers; buildtransitive
-
-
-
-
-
+
+
+
+      all
+      runtime; build; native; contentfiles; analyzers; buildtransitive
+
+
+
+
+
+      all
+
+    net10.0