docs(09): create phase plan
Phase 09: CI Pipeline Hardening - 4 plans in 3 waves

- Wave 1: Infrastructure setup (09-01)
- Wave 2: Tests in parallel (09-02, 09-03)
- Wave 3: CI integration (09-04)

Ready for execution.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
.planning/phases/09-ci-pipeline/09-01-PLAN.md (new file, 182 lines)
---
phase: 09-ci-pipeline
plan: 01
type: execute
wave: 1
depends_on: []
files_modified:
  - package.json
  - vite.config.ts
  - vitest-setup-client.ts
  - src/lib/utils/filterEntries.test.ts
autonomous: true

must_haves:
  truths:
    - "npm run test:unit executes Vitest and reports pass/fail"
    - "Vitest browser mode runs component tests in real Chromium"
    - "Vitest node mode runs server/utility tests"
    - "SvelteKit modules ($app/*) are mocked in test environment"
  artifacts:
    - path: "vite.config.ts"
      provides: "Multi-project Vitest configuration"
      contains: "projects:"
    - path: "vitest-setup-client.ts"
      provides: "SvelteKit module mocks for browser tests"
      contains: "vi.mock('$app/"
    - path: "package.json"
      provides: "Test scripts"
      contains: "test:unit"
    - path: "src/lib/utils/filterEntries.test.ts"
      provides: "Sample unit test proving setup works"
      min_lines: 15
  key_links:
    - from: "vite.config.ts"
      to: "vitest-setup-client.ts"
      via: "setupFiles configuration"
      pattern: "setupFiles.*vitest-setup"
---

<objective>
Configure Vitest test infrastructure with a multi-project setup for SvelteKit.

Purpose: Establish the test-runner foundation that all subsequent test plans build upon. This enables unit tests (node mode) and component tests (browser mode) with proper SvelteKit module mocking.

Output: A working Vitest configuration with browser mode for Svelte 5 components and node mode for server code, plus a sample test proving the setup works.
</objective>

<execution_context>
@/home/tho/.claude/get-shit-done/workflows/execute-plan.md
@/home/tho/.claude/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/phases/09-ci-pipeline/09-RESEARCH.md

@vite.config.ts
@package.json
@playwright.config.ts
</context>

<tasks>

<task type="auto">
<name>Task 1: Install Vitest dependencies and configure multi-project setup</name>
<files>package.json, vite.config.ts</files>
<action>
Install Vitest and browser-mode dependencies:

```bash
npm install -D vitest @vitest/browser vitest-browser-svelte @vitest/browser-playwright @vitest/coverage-v8
npx playwright install chromium
```

Update vite.config.ts with a multi-project Vitest configuration:
- Import `playwright` from `@vitest/browser-playwright`
- Add a `test` config with `coverage` (provider: v8, include src/**/*, thresholds with autoUpdate: true initially)
- Configure two projects:
  1. `client`: browser mode with the Playwright provider, include `*.svelte.{test,spec}.ts`, setupFiles pointing to vitest-setup-client.ts
  2. `server`: node environment, include `*.{test,spec}.ts`, exclude `*.svelte.{test,spec}.ts`

Update package.json scripts:
- Add `"test": "vitest"`
- Add `"test:unit": "vitest run"`
- Add `"test:unit:watch": "vitest"`
- Add `"test:coverage": "vitest run --coverage"`

Keep existing scripts (test:e2e, test:e2e:docker) unchanged.
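
The two-project layout above can be sketched as a plain object (project names, globs, and the setup-file path are assumptions taken from this plan; the real vite.config.ts wraps this in `defineConfig` and the browser-mode options may differ by Vitest version):

```typescript
// Sketch of the `test` section for vite.config.ts. Values mirror this plan's
// description; treat globs and names as placeholders to check against the
// installed Vitest version.
const testConfig = {
  coverage: {
    provider: 'v8',
    include: ['src/**/*'],
    thresholds: { autoUpdate: true }, // baseline; tightened in Plan 02
  },
  projects: [
    {
      // Component tests run in a real Chromium via the Playwright provider.
      test: {
        name: 'client',
        include: ['src/**/*.svelte.{test,spec}.ts'],
        setupFiles: ['./vitest-setup-client.ts'],
        browser: { enabled: true, provider: 'playwright' },
      },
    },
    {
      // Server/utility tests run in plain Node.
      test: {
        name: 'server',
        environment: 'node',
        include: ['src/**/*.{test,spec}.ts'],
        exclude: ['src/**/*.svelte.{test,spec}.ts'],
      },
    },
  ],
};
```

The `include`/`exclude` pair is what routes `.svelte.test.ts` files to the browser project and everything else to node.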
</action>
<verify>
Run `npm run test:unit` - should execute (may show "no tests found" initially, but Vitest runs without config errors)
Run `npx vitest --version` - confirms Vitest is installed
</verify>
<done>Vitest installed with multi-project config. npm run test:unit executes without configuration errors.</done>
</task>

<task type="auto">
<name>Task 2: Create SvelteKit module mocks in setup file</name>
<files>vitest-setup-client.ts</files>
<action>
Create vitest-setup-client.ts in the project root with:

1. TypeScript reference directives:
   - `/// <reference types="@vitest/browser/matchers" />`
   - `/// <reference types="@vitest/browser/providers/playwright" />`

2. A mock for `$app/navigation`:
   - goto: vi.fn returning Promise.resolve()
   - invalidate: vi.fn returning Promise.resolve()
   - invalidateAll: vi.fn returning Promise.resolve()
   - beforeNavigate: vi.fn()
   - afterNavigate: vi.fn()

3. A mock for `$app/stores`:
   - page: writable store with url, params, route, status, error, data, form
   - navigating: writable(null)
   - updated: { check: vi.fn(), subscribe: writable(false).subscribe }

4. A mock for `$app/environment`:
   - browser: true
   - dev: true
   - building: false

Import writable from 'svelte/store' and vi from 'vitest'.

Note: Use simple mocks; do NOT use importOriginal with SvelteKit modules (it causes SSR issues per research).
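
Assembled from the steps above, the setup file might look like this sketch (the mock field shapes are assumptions; adjust them to whatever the components under test actually read):

```typescript
/// <reference types="@vitest/browser/matchers" />
/// <reference types="@vitest/browser/providers/playwright" />
import { writable } from 'svelte/store';
import { vi } from 'vitest';

// Keep mocks simple: no importOriginal with SvelteKit modules (SSR issues).
vi.mock('$app/navigation', () => ({
  goto: vi.fn(() => Promise.resolve()),
  invalidate: vi.fn(() => Promise.resolve()),
  invalidateAll: vi.fn(() => Promise.resolve()),
  beforeNavigate: vi.fn(),
  afterNavigate: vi.fn(),
}));

vi.mock('$app/stores', () => {
  const page = writable({
    url: new URL('http://localhost'),
    params: {},
    route: { id: null },
    status: 200,
    error: null,
    data: {},
    form: undefined,
  });
  return {
    page,
    navigating: writable(null),
    updated: { check: vi.fn(), subscribe: writable(false).subscribe },
  };
});

vi.mock('$app/environment', () => ({
  browser: true,
  dev: true,
  building: false,
}));
```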
</action>
<verify>
File exists at vitest-setup-client.ts with all required mocks.
TypeScript compilation succeeds: `npx tsc --noEmit vitest-setup-client.ts` (or no TS errors shown in the editor)
</verify>
<done>SvelteKit module mocks created. Browser-mode tests can import $app/* without errors.</done>
</task>

<task type="auto">
<name>Task 3: Write sample test to verify infrastructure</name>
<files>src/lib/utils/filterEntries.test.ts</files>
<action>
Create src/lib/utils/filterEntries.test.ts as a node-mode unit test:

1. Read filterEntries.ts to understand the function signature and behavior
2. Import { describe, it, expect } from 'vitest'
3. Import the filterEntries function from './filterEntries'

Write tests for filterEntries covering:
- Empty entries array returns an empty array
- Filtering by tag returns matching entries
- Filtering by search term matches title/content
- Combined filters (tag + search) work together
- Type filter (task vs thought) works, if applicable

This proves the server/node project runs correctly.

Note: This is a real test, not just a placeholder. Aim for thorough coverage of filterEntries.ts functionality.
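
The cases above can be sketched in isolation; `filterEntries` here is a hypothetical stand-in (the real signature and semantics must be read from src/lib/utils/filterEntries.ts), and in the real file each case becomes an `it()` block with `expect()`:

```typescript
// Hypothetical stand-in so the listed cases can run standalone.
interface Entry {
  title: string;
  content: string;
  tags: string[];
}

interface Filters {
  search?: string;
  tag?: string;
}

function filterEntries(entries: Entry[], filters: Filters): Entry[] {
  return entries.filter((e) => {
    const tagOk = !filters.tag || e.tags.includes(filters.tag);
    const term = filters.search?.toLowerCase();
    const searchOk =
      !term ||
      e.title.toLowerCase().includes(term) ||
      e.content.toLowerCase().includes(term);
    return tagOk && searchOk; // combined filters are AND-ed
  });
}

const entries: Entry[] = [
  { title: 'Buy milk', content: 'from the store', tags: ['errand'] },
  { title: 'Write tests', content: 'vitest setup', tags: ['dev'] },
];

// Mirrors the bullet list above.
const byTag = filterEntries(entries, { tag: 'dev' });
const bySearch = filterEntries(entries, { search: 'milk' });
const combined = filterEntries(entries, { tag: 'dev', search: 'vitest' });
const none = filterEntries([], {});
```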
</action>
<verify>
Run `npm run test:unit` - filterEntries tests execute and pass
Run `npm run test:coverage` - shows a coverage report including filterEntries.ts
</verify>
<done>Sample unit test passes. Vitest infrastructure is verified working for node-mode tests.</done>
</task>

</tasks>

<verification>
1. `npm run test:unit` executes without errors
2. `npm run test:coverage` produces a coverage report
3. filterEntries.test.ts tests pass
4. vite.config.ts contains the multi-project test configuration
5. vitest-setup-client.ts contains $app/* mocks
</verification>

<success_criteria>
- CI-01 requirement satisfied: Vitest installed and configured
- Multi-project setup distinguishes client (browser) and server (node) tests
- At least one unit test passes, proving the infrastructure works
- Coverage reporting functional (threshold enforcement comes in Plan 02)
</success_criteria>

<output>
After completion, create `.planning/phases/09-ci-pipeline/09-01-SUMMARY.md`
</output>
.planning/phases/09-ci-pipeline/09-02-PLAN.md (new file, 211 lines)
---
phase: 09-ci-pipeline
plan: 02
type: execute
wave: 2
depends_on: ["09-01"]
files_modified:
  - src/lib/utils/highlightText.test.ts
  - src/lib/utils/parseHashtags.test.ts
  - src/lib/components/SearchBar.svelte.test.ts
  - src/lib/components/TagInput.svelte.test.ts
  - src/lib/components/CompletedToggle.svelte.test.ts
  - vite.config.ts
autonomous: true

must_haves:
  truths:
    - "All utility functions have passing tests"
    - "Component tests run in a real browser via Vitest browser mode"
    - "Coverage threshold is enforced (starts with autoUpdate baseline)"
  artifacts:
    - path: "src/lib/utils/highlightText.test.ts"
      provides: "Tests for text highlighting utility"
      min_lines: 20
    - path: "src/lib/utils/parseHashtags.test.ts"
      provides: "Tests for hashtag parsing utility"
      min_lines: 20
    - path: "src/lib/components/SearchBar.svelte.test.ts"
      provides: "Browser-mode test for SearchBar component"
      min_lines: 25
    - path: "src/lib/components/TagInput.svelte.test.ts"
      provides: "Browser-mode test for TagInput component"
      min_lines: 25
    - path: "src/lib/components/CompletedToggle.svelte.test.ts"
      provides: "Browser-mode test for toggle component"
      min_lines: 20
  key_links:
    - from: "src/lib/components/SearchBar.svelte.test.ts"
      to: "vitest-browser-svelte"
      via: "render import"
      pattern: "import.*render.*from.*vitest-browser-svelte"
---

<objective>
Write unit tests for utility functions and initial component tests to establish testing patterns.

Purpose: Create comprehensive tests for pure utility functions (easy wins for coverage) and establish the component-testing pattern using Vitest browser mode. This proves both test project configurations work.

Output: All utility functions tested, 3 component tests demonstrating the browser-mode pattern, coverage baseline established.
</objective>

<execution_context>
@/home/tho/.claude/get-shit-done/workflows/execute-plan.md
@/home/tho/.claude/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/PROJECT.md
@.planning/phases/09-ci-pipeline/09-RESEARCH.md
@.planning/phases/09-ci-pipeline/09-01-SUMMARY.md

@src/lib/utils/highlightText.ts
@src/lib/utils/parseHashtags.ts
@src/lib/components/SearchBar.svelte
@src/lib/components/TagInput.svelte
@src/lib/components/CompletedToggle.svelte
@vitest-setup-client.ts
</context>

<tasks>

<task type="auto">
<name>Task 1: Write unit tests for remaining utility functions</name>
<files>src/lib/utils/highlightText.test.ts, src/lib/utils/parseHashtags.test.ts</files>
<action>
Read each utility file to understand its behavior, then write comprehensive tests:

**highlightText.test.ts:**
- Import the function and test utilities from vitest
- Test: Returns original text when no search term
- Test: Highlights a single match with a mark tag
- Test: Highlights multiple matches
- Test: Case-insensitive matching
- Test: Handles special regex characters in the search term
- Test: Returns empty string for empty input

**parseHashtags.test.ts:**
- Import the function and test utilities from vitest
- Test: Extracts a single hashtag from text
- Test: Extracts multiple hashtags
- Test: Returns empty array when no hashtags
- Test: Handles hashtags at start/middle/end of text
- Test: Ignores invalid hashtag patterns (e.g., # alone, #123)
- Test: Removes duplicates, if the function does that

Each test file should have a describe block with descriptive test names.
Use `it.each` for data-driven tests where appropriate.
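
The trickier cases above (regex escaping, de-duplication) can be sketched with hypothetical stand-ins; the real behavior must be read from highlightText.ts and parseHashtags.ts, and in the real files these become `it.each` cases:

```typescript
// Hypothetical stand-ins so the listed cases can run standalone.
function escapeRegExp(s: string): string {
  // Escape special regex characters so user input is matched literally.
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

function highlightText(text: string, term: string): string {
  if (!term) return text;
  // Case-insensitive, all matches, each wrapped in a <mark> tag.
  return text.replace(new RegExp(escapeRegExp(term), 'gi'), (m) => `<mark>${m}</mark>`);
}

function parseHashtags(text: string): string[] {
  // Require a leading letter so '#' alone and '#123' are ignored.
  const matches = text.match(/#[a-zA-Z][\w-]*/g) ?? [];
  return [...new Set(matches.map((m) => m.slice(1)))]; // strip '#', dedupe
}

const highlighted = highlightText('Tea (green) and TEA', 'tea');
const special = highlightText('a+b', '+'); // '+' must be escaped
const tags = parseHashtags('note #dev and #dev plus #a-b, not #123');
```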
</action>
<verify>
Run `npm run test:unit -- --reporter=verbose` - all utility tests pass
Run `npm run test:coverage` - shows improved coverage for src/lib/utils/
</verify>
<done>All 3 utility functions (filterEntries, highlightText, parseHashtags) have comprehensive test coverage.</done>
</task>

<task type="auto">
<name>Task 2: Write browser-mode component tests for 3 simpler components</name>
<files>src/lib/components/SearchBar.svelte.test.ts, src/lib/components/TagInput.svelte.test.ts, src/lib/components/CompletedToggle.svelte.test.ts</files>
<action>
Create .svelte.test.ts files (note: .svelte.test.ts, NOT .test.ts, so they match the browser-mode project) for three simpler components.

**Pattern for all component tests:**

```typescript
import { render } from 'vitest-browser-svelte';
import { page } from '@vitest/browser/context';
import { describe, expect, it } from 'vitest';
import ComponentName from './ComponentName.svelte';
```

**SearchBar.svelte.test.ts:**
- Read SearchBar.svelte to understand props and behavior
- Test: Renders an input element
- Test: Calls the onSearch callback when the user types (if applicable)
- Test: Shows a clear button when text is entered (if applicable)
- Test: Placeholder text is visible

**TagInput.svelte.test.ts:**
- Read TagInput.svelte to understand props and behavior
- Test: Renders the tag input element
- Test: Can add a tag (simulate the user typing and pressing Enter/adding)
- Test: Displays existing tags if passed as a prop

**CompletedToggle.svelte.test.ts:**
- Read CompletedToggle.svelte to understand props
- Test: Renders the toggle in an unchecked state by default
- Test: Toggle state changes on click
- Test: Calls the callback when toggled (if applicable)

Use `page.getByRole()`, `page.getByText()`, `page.getByPlaceholder()` for element selection.
Use `await button.click()` for interactions.
Use `flushSync()` from 'svelte' after external state changes if needed.
Use `await expect.element(locator).toBeInTheDocument()` for assertions.
</action>
<verify>
Run `npm run test:unit` - component tests run in browser mode (you'll see Chromium launch)
All 3 component tests pass
</verify>
<done>Browser-mode component testing pattern established with 3 working tests.</done>
</task>

<task type="auto">
<name>Task 3: Configure coverage thresholds with baseline</name>
<files>vite.config.ts</files>
<action>
Update the vite.config.ts coverage configuration:

1. Set initial thresholds using autoUpdate to establish a baseline:

```typescript
thresholds: {
  autoUpdate: true, // updates thresholds based on current coverage
}
```

2. Run `npm run test:coverage` once to establish baseline thresholds

3. Review the auto-updated thresholds in vite.config.ts

4. If coverage is already above 30%, manually set thresholds to a reasonable starting point slightly below current coverage (e.g., current minus 10 points) with a path toward 80%. Vitest takes these keys directly under `thresholds` (there is no Jest-style `global` wrapper):

```typescript
thresholds: {
  statements: [current - 10],
  branches: [current - 10],
  functions: [current - 10],
  lines: [current - 10],
}
```

5. Add a comment noting the target is 80% coverage (CI-01 decision)

Note: Full 80% coverage will be achieved incrementally. This plan establishes the enforcement mechanism.
</action>
<verify>
Run `npm run test:coverage` - shows coverage percentages
Coverage thresholds are set in vite.config.ts
Future test runs fail if coverage drops below the thresholds
</verify>
<done>Coverage thresholds configured. Enforcement mechanism in place for incremental coverage improvement.</done>
</task>

</tasks>

<verification>
1. `npm run test:unit` runs all tests (utility + component)
2. Component tests run in a Chromium browser (browser mode working)
3. `npm run test:coverage` shows coverage for utilities and tested components
4. Coverage thresholds are configured in vite.config.ts
5. All tests pass
</verification>

<success_criteria>
- All 3 utility functions have comprehensive tests
- 3 component tests demonstrate the browser-mode testing pattern
- Coverage thresholds configured (starting point toward the 80% goal)
- Both Vitest projects (client browser, server node) verified working
</success_criteria>

<output>
After completion, create `.planning/phases/09-ci-pipeline/09-02-SUMMARY.md`
</output>
.planning/phases/09-ci-pipeline/09-03-PLAN.md (new file, 219 lines)
---
phase: 09-ci-pipeline
plan: 03
type: execute
wave: 2
depends_on: ["09-01"]
files_modified:
  - playwright.config.ts
  - tests/e2e/fixtures/db.ts
  - tests/e2e/user-journeys.spec.ts
  - tests/e2e/index.ts
autonomous: true

must_haves:
  truths:
    - "E2E tests run against the application with seeded test data"
    - "User journeys cover create, edit, search, organize, and delete workflows"
    - "Tests run on both desktop and mobile viewports"
    - "Screenshots are captured on test failure"
  artifacts:
    - path: "playwright.config.ts"
      provides: "E2E configuration with multi-viewport and screenshot settings"
      contains: "screenshot: 'only-on-failure'"
    - path: "tests/e2e/fixtures/db.ts"
      provides: "Database seeding fixture using drizzle-seed"
      contains: "drizzle-seed"
    - path: "tests/e2e/user-journeys.spec.ts"
      provides: "Core user journey E2E tests"
      min_lines: 100
    - path: "tests/e2e/index.ts"
      provides: "Custom test function with fixtures"
      contains: "base.extend"
  key_links:
    - from: "tests/e2e/user-journeys.spec.ts"
      to: "tests/e2e/fixtures/db.ts"
      via: "test import with seededDb fixture"
      pattern: "import.*test.*from.*fixtures"
---

<objective>
Create a comprehensive E2E test suite with database fixtures and multi-viewport testing.

Purpose: Establish E2E tests that verify full user journeys work correctly. These tests catch integration issues that unit tests miss and provide confidence that the deployed application works as expected.

Output: An E2E test suite covering core user workflows, a database seeding fixture for consistent test data, and multi-viewport testing for desktop and mobile.
</objective>

<execution_context>
@/home/tho/.claude/get-shit-done/workflows/execute-plan.md
@/home/tho/.claude/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/PROJECT.md
@.planning/phases/09-ci-pipeline/09-RESEARCH.md
@.planning/phases/09-ci-pipeline/09-01-SUMMARY.md

@playwright.config.ts
@tests/docker-deployment.spec.ts
@src/lib/server/db/schema.ts
@src/routes/+page.svelte
</context>

<tasks>

<task type="auto">
<name>Task 1: Update Playwright configuration for E2E requirements</name>
<files>playwright.config.ts</files>
<action>
Update playwright.config.ts with the E2E requirements from user decisions:

1. Set `testDir: './tests/e2e'` (separate from the existing docker test)
2. Set `fullyParallel: false` (shared database)
3. Set `workers: 1` (avoid database race conditions)
4. Configure `reporter`:
   - `['html', { open: 'never' }]`
   - `['github']` for CI annotations
5. Configure `use`:
   - `baseURL: process.env.BASE_URL || 'http://localhost:5173'`
   - `trace: 'on-first-retry'`
   - `screenshot: 'only-on-failure'` (per user decision: screenshots, no video)
   - `video: 'off'`
6. Add two projects:
   - `chromium-desktop`: using `devices['Desktop Chrome']`
   - `chromium-mobile`: using `devices['Pixel 5']`
7. Configure `webServer`:
   - `command: 'npm run build && npm run preview'`
   - `port: 4173`
   - `reuseExistingServer: !process.env.CI`

Move existing docker-deployment.spec.ts to tests/e2e/, or keep it in tests/ with a separate config.
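
Put together, the steps above might look like this sketch (values are taken from this plan, not a verified config; note that the default `baseURL` points at the dev-server port 5173 while `webServer` serves the preview build on 4173, so one of the two likely needs adjusting):

```typescript
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './tests/e2e',
  fullyParallel: false, // shared database
  workers: 1, // avoid database race conditions
  reporter: [['html', { open: 'never' }], ['github']],
  use: {
    baseURL: process.env.BASE_URL || 'http://localhost:5173',
    trace: 'on-first-retry',
    screenshot: 'only-on-failure', // screenshots, no video
    video: 'off',
  },
  projects: [
    { name: 'chromium-desktop', use: { ...devices['Desktop Chrome'] } },
    { name: 'chromium-mobile', use: { ...devices['Pixel 5'] } },
  ],
  webServer: {
    command: 'npm run build && npm run preview',
    port: 4173,
    reuseExistingServer: !process.env.CI,
  },
});
```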
</action>
<verify>
Run `npx playwright test --list` - shows the test files found
Configuration is valid (no syntax errors)
</verify>
<done>Playwright configured for E2E with desktop/mobile viewports, screenshots on failure, and a single worker for database safety.</done>
</task>

<task type="auto">
<name>Task 2: Create database seeding fixture</name>
<files>tests/e2e/fixtures/db.ts, tests/e2e/index.ts</files>
<action>
First, install drizzle-seed:

```bash
npm install -D drizzle-seed
```

Create tests/e2e/fixtures/db.ts:
1. Import the test base from @playwright/test
2. Import db from src/lib/server/db
3. Import the schema from src/lib/server/db/schema
4. Import seed and reset from drizzle-seed

Create a fixture that:
- Seeds the database with known test data before each test
- Provides seeded entries (tasks, thoughts) with predictable IDs and content
- Cleans up after the test using reset()

Create tests/e2e/index.ts:
- Re-export the extended test with the seededDb fixture
- Re-export expect from @playwright/test

Test data should include:
- At least 5 entries with various states (tasks vs thoughts, completed vs pending)
- Entries with tags for testing filter/search
- Entries with images (if applicable to the schema)
- Entries with different dates for sorting tests

Note: Read the actual schema.ts to understand the exact model structure before writing seed logic.
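
A sketch of the fixture shape, under heavy assumptions: the import paths, table names, and seed options below are placeholders to check against the project and the drizzle-seed docs, and the db.ts/index.ts split is collapsed here for brevity:

```typescript
// tests/e2e/fixtures/db.ts (sketch)
import { test as base } from '@playwright/test';
import { reset, seed } from 'drizzle-seed';
import { db } from '../../../src/lib/server/db'; // path is an assumption
import * as schema from '../../../src/lib/server/db/schema';

type Fixtures = { seededDb: void };

export const test = base.extend<Fixtures>({
  seededDb: [
    async ({}, use) => {
      // Known state before the test, clean state after it.
      await reset(db, schema);
      await seed(db, schema, { count: 5 });
      await use();
      await reset(db, schema);
    },
    { auto: true }, // runs for every test that imports this `test`
  ],
});

export { expect } from '@playwright/test';
```

Spec files then import `{ test, expect }` from the fixtures entry point instead of @playwright/test.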
</action>
<verify>
TypeScript compiles without errors
The fixture can be imported in a test file
</verify>
<done>Database fixture created. Tests can import { test, expect } from './fixtures' to get a seeded database.</done>
</task>

<task type="auto">
<name>Task 3: Write E2E tests for core user journeys</name>
<files>tests/e2e/user-journeys.spec.ts</files>
<action>
Create tests/e2e/user-journeys.spec.ts using the custom test with fixtures:

```typescript
import { test, expect } from './index';
```

Write tests for each user journey (per CONTEXT.md decisions):

**Create workflow:**
- Navigate to the home page
- Use quick capture to create a new entry
- Verify the entry appears in the list
- Verify the entry persists after a page reload

**Edit workflow:**
- Find an existing entry (from seeded data)
- Click to open/edit
- Modify content
- Save changes
- Verify the changes persisted

**Search workflow:**
- Use the search bar to find an entry by text
- Verify matching entries are shown
- Verify non-matching entries are hidden
- Test search combined with the tags filter

**Organize workflow:**
- Add a tag to an entry
- Filter by tag
- Verify the filtered results
- Pin an entry (if applicable)
- Verify the pinned entry appears first

**Delete workflow:**
- Select an entry
- Delete it
- Verify the entry is removed from the list
- Verify the entry is not found after reload

Use `test.describe()` to group related tests.
Each test should use the `seededDb` fixture for a consistent starting state.
Use the page-object pattern if tests get complex (optional; keep it simple for now).
</action>
<verify>
Run `npm run test:e2e` with the app running locally (or let webServer start it)
All E2E tests pass
Screenshots are generated in test-results/ for any failures
</verify>
<done>E2E test suite covers all core user journeys. Tests run on both desktop and mobile viewports.</done>
</task>

</tasks>

<verification>
1. `npm run test:e2e` executes the E2E tests
2. Tests run on both the chromium-desktop and chromium-mobile projects
3. The database is seeded with test data before each test
4. All 5 user journeys (create, edit, search, organize, delete) have tests
5. Screenshots are captured on failure (can be checked by making a test fail temporarily)
6. Tests pass consistently (no flaky tests)
</verification>

<success_criteria>
- CI-04 requirement satisfied: E2E tests ready for the pipeline
- User journeys cover create/edit/search/organize/delete as specified in CONTEXT.md
- Multi-viewport testing (desktop + mobile) per the CONTEXT.md decision
- Database fixtures provide consistent, isolated test data
- Screenshot-on-failure configured (no video, per the CONTEXT.md decision)
</success_criteria>

<output>
After completion, create `.planning/phases/09-ci-pipeline/09-03-SUMMARY.md`
</output>
.planning/phases/09-ci-pipeline/09-04-PLAN.md (new file, 218 lines)
---
phase: 09-ci-pipeline
plan: 04
type: execute
wave: 3
depends_on: ["09-02", "09-03"]
files_modified:
  - .gitea/workflows/build.yaml
autonomous: false

user_setup:
  - service: slack
    why: "Pipeline failure notifications"
    env_vars:
      - name: SLACK_WEBHOOK_URL
        source: "Slack App settings -> Incoming Webhooks -> Create new webhook -> Copy URL"
    dashboard_config:
      - task: "Create Slack app with incoming webhook"
        location: "https://api.slack.com/apps -> Create New App -> From scratch -> Add Incoming Webhooks"

must_haves:
  truths:
    - "Pipeline runs type checking before Docker build"
    - "Pipeline runs unit tests with coverage before Docker build"
    - "Pipeline runs E2E tests before Docker build"
    - "Pipeline fails fast when tests or type checking fail"
    - "Slack notification sent on pipeline failure"
    - "Test artifacts (coverage, playwright report) are uploaded"
  artifacts:
    - path: ".gitea/workflows/build.yaml"
      provides: "CI pipeline with test jobs"
      contains: "npm run check"
    - path: ".gitea/workflows/build.yaml"
      provides: "Unit test step"
      contains: "npm run test:coverage"
    - path: ".gitea/workflows/build.yaml"
      provides: "E2E test step"
      contains: "npm run test:e2e"
  key_links:
    - from: ".gitea/workflows/build.yaml"
      to: "package.json scripts"
      via: "npm run commands"
      pattern: "npm run (check|test:coverage|test:e2e)"
    - from: "build job"
      to: "test job"
      via: "needs: test"
      pattern: "needs:\\s*test"
---

<objective>
Integrate tests into the Gitea Actions pipeline with fail-fast behavior and Slack notifications.

Purpose: Ensure tests run automatically on every push/PR and block deployment when tests fail. This is the final piece that makes the test infrastructure actually protect production.

Output: An updated CI workflow with a test job that runs before build, fail-fast on errors, and a Slack notification on failure.
</objective>

<execution_context>
@/home/tho/.claude/get-shit-done/workflows/execute-plan.md
@/home/tho/.claude/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/PROJECT.md
@.planning/phases/09-ci-pipeline/09-RESEARCH.md
@.planning/phases/09-ci-pipeline/09-02-SUMMARY.md
@.planning/phases/09-ci-pipeline/09-03-SUMMARY.md

@.gitea/workflows/build.yaml
@package.json
</context>

<tasks>

<task type="auto">
<name>Task 1: Add test job to CI pipeline</name>
<files>.gitea/workflows/build.yaml</files>
<action>
Update .gitea/workflows/build.yaml to add a test job that runs BEFORE build:

1. Add a new `test` job at the beginning of the jobs section:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run type check
        run: npm run check -- --output machine

      - name: Install Playwright browsers
        run: npx playwright install --with-deps chromium

      - name: Run unit tests with coverage
        run: npm run test:coverage

      - name: Run E2E tests
        run: npm run test:e2e
        env:
          CI: true

      - name: Upload test artifacts
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: test-results
          path: |
            coverage/
            playwright-report/
            test-results/
          retention-days: 7
```

2. Modify the existing `build` job to depend on test:

```yaml
  build:
    needs: test
    runs-on: ubuntu-latest
    # ... existing steps ...
```

This ensures build only runs if tests pass (fail-fast behavior).
</action>
<verify>
YAML syntax is valid: `python3 -c "import yaml; yaml.safe_load(open('.gitea/workflows/build.yaml'))"`
The build job has a `needs: test` dependency
</verify>
<done>Test job added to pipeline. Build job depends on test job (fail-fast).</done>
</task>

<task type="auto">
<name>Task 2: Add Slack notification on failure</name>
<files>.gitea/workflows/build.yaml</files>
<action>
Add a notify job that runs on failure:

```yaml
  notify:
    needs: [test, build]
    runs-on: ubuntu-latest
    if: failure()
    steps:
      - name: Notify Slack on failure
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
        run: |
          curl -X POST -H 'Content-type: application/json' \
            --data "{\"text\":\"Pipeline failed for ${{ gitea.repository }} on ${{ gitea.ref }}\"}" \
            "$SLACK_WEBHOOK_URL"
```

Note: This uses a direct curl to the Slack webhook rather than a GitHub Action, for maximum Gitea compatibility (per the RESEARCH.md recommendation).

The SLACK_WEBHOOK_URL secret must be configured in the Gitea repository settings by the user (documented in the user_setup frontmatter).
</action>
<verify>
YAML syntax is valid
The notify job has an `if: failure()` condition
The notify job depends on both test and build
</verify>
<done>Slack notification configured for pipeline failures.</done>
</task>

<task type="checkpoint:human-verify" gate="blocking">
<what-built>Complete CI pipeline with test job, fail-fast behavior, artifact upload, and Slack notification</what-built>
<how-to-verify>
1. Review the updated .gitea/workflows/build.yaml file structure
2. Verify the job dependency chain: test -> build -> (notify on failure)
3. Confirm the test job includes all required steps:
   - Type checking (svelte-check)
   - Unit tests with coverage (vitest)
   - E2E tests (playwright)
4. If ready to test in CI:
   - Push a commit to trigger the pipeline
   - Monitor Gitea Actions for the test job execution
   - Verify the build job waits for the test job to complete
5. (Optional) Set up the SLACK_WEBHOOK_URL secret in Gitea to test failure notifications
</how-to-verify>
<resume-signal>Type "approved" to confirm the CI pipeline is correctly configured, or describe any issues found</resume-signal>
</task>

</tasks>

<verification>
1. .gitea/workflows/build.yaml has a test job with:
   - Type checking step
   - Unit tests with coverage step
   - E2E test step
   - Artifact upload step
2. Build job has `needs: test` (fail-fast)
3. Notify job runs on failure with the Slack webhook
4. YAML syntax is valid
5. Pipeline can be triggered on push/PR
</verification>

<success_criteria>
- CI-02 satisfied: Unit tests run in the pipeline before build
- CI-03 satisfied: Type checking runs in the pipeline
- CI-04 satisfied: E2E tests run in the pipeline
- CI-05 satisfied: Pipeline fails fast on test/type errors (needs: test)
- Slack notification on failure (per the CONTEXT.md decision)
- Test artifacts uploaded for debugging failed runs
</success_criteria>

<output>
After completion, create `.planning/phases/09-ci-pipeline/09-04-SUMMARY.md`
</output>