Skip to content

Convert json.RawMessage to JSON strings instead of byte arrays (#41)#43

Open
moshloop wants to merge 34 commits intomainfrom
pr/fixes
Open

Convert json.RawMessage to JSON strings instead of byte arrays (#41)#43
moshloop wants to merge 34 commits intomainfrom
pr/fixes

Conversation

@moshloop
Copy link
Copy Markdown
Member

@moshloop moshloop commented Nov 19, 2025

Summary by CodeRabbit

  • New Features

    • Expanded icon library and icon lookup map for richer UI visuals
    • Interactive HTML tables (Grid.js) and enhanced HTML/PDF output with embedded assets
    • File-system tree viewer with depth/hidden-file options and interactive tree controls
    • Concurrent batch processing utility for faster background tasks
    • JSON cleanup and robust structured-output helpers for model responses
  • Improvements

    • Unified, richer table/tree rendering with word-wrapping and multi-format exports
    • Cleaner, more consistent text/date/number formatting and ANSI/HTML/Markdown outputs

moshloop and others added 30 commits November 3, 2025 18:24
- Add Textable interface check in pretty formatter's formatDefaultWithVisited
- Add Textable interface check in pretty formatter's formatStructWithVisited
- Add Textable interface check in api.parser.processFieldValueWithVisited
- Add Textable check in HTML formatter's formatFieldValueHTMLWithStyle
- Render slices of Textable objects one per line/div:
  - Pretty: newline separated with ANSI()
  - HTML: each wrapped in <div> using HTML()
  - Markdown: newline separated with Markdown()
- Add comprehensive showcase to uber_demo:
  - All available icons table
  - Tailwind colors (50-900 shades) table
  - Text styles & transformations table

Fixes issue where api.Text objects were printed as raw structs
{Content:... Class:...} instead of calling their rendering methods.
Phase 1 of formatter refactoring to eliminate duplicate formatting logic
and establish a clean architecture where data knows how to represent itself.

Key Changes:
- Added TextList type with JoinNewlines() method for composing output
- Implemented PrettyData.Pretty() that returns TextList representation
- Simplified PrettyFormatter.FormatPrettyData() to use Pretty().JoinNewlines().ANSI()
- Added FieldValue.String() to implement Textable interface
- Removed unused formatFieldLabel() method
- Exported text.StripANSI() for test usage
- Fixed test to strip ANSI codes when checking content

Architecture Improvements:
- PrettyData now owns its own representation logic
- Formatters simply render pre-structured Text objects
- All formatting logic centralized in FieldValue.createText()
- Consistent indentation using tabs for nested structures

Benefits:
- Single source of truth for text formatting
- No duplicate formatting logic in formatters
- Easier to test and maintain
- Consistent output across all format types

Note: Table rendering temporarily shows placeholders - will be implemented
in Phase 2 with proper Table object generation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Restored table rendering in PrettyFormatter while keeping the new
PrettyData.Pretty() architecture. Tables will be properly integrated
with TextList in Phase 3.

Changes:
- FormatPrettyData() now calls Pretty() for non-table fields
- Tables rendered separately using existing renderTable methods
- Updated PrettyData.Pretty() to skip table fields
- Fixed TestFormatterMatrix to strip ANSI codes before content checks

Test Status:
- Table tests passing (TestTableFormattingWithDates, TestTableWordWrapping)
- Matrix tests passing (TestFormatterMatrix, TestNestedMaps)
- 9 tests still failing (mostly HTML formatter icon tests, unrelated)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Fixed compilation error in GetStructRow where string values were being
appended to TextList ([]Textable) without wrapping them in api.Text.

Now properly wraps formatted string values in api.Text{Content: value}
before appending to the row.

Test Status: 89/98 tests passing (91%)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
- Add PrettyDataRow type definition
- Add Values and Tables fields to PrettyData
- Add GetValue() and GetTable() methods to PrettyData
- Add Rows field to TableOptions
- Add IsCircular field to TypedValue
- Add NewTypedList() and NewTypedMap() constructors
- Add Formatted(), Plain(), Markdown(), HTML() methods to FieldValue
- Fix TextList reference in formatters/parser.go
…alue

- Remove these methods as formatting decision should be made after parsing
- FieldValue now always returns Text object
- Formatters call .Text.String(), .Text.Markdown(), .Text.HTML() directly
- Updated all formatters: csv, excel, markdown, html, schema
- The decision of plain text vs formatted is now made by formatters, not during parsing
- Type assert Text field to *Text type to access Content and Style fields
- This fixes test compilation after removing formatting methods from FieldValue
- Fix inverted logic in HTML formatter registration check
- Add varargs unwrapping for custom formatters
- Remove x-transition animation from tree expand/collapse
- Increase expand/collapse icon size and improve padding
- Enable expand/collapse when clicking on node label
- Extract inline JS and CSS to go:embed files (tree.js, tree.css)
- Remove root "." node from tree display
- Remove bullet indicators for leaf nodes
- Add hover effect for clickable tree nodes
…vrons

- Replace unicode chevron (▸) with Iconify icons (ion:chevron-forward/down)
- Remove CSS rotation transform, use x-show to swap icons instead
- Extract Grid.js theme CSS to gridjs-theme.css
- Extract PDF-specific CSS to pdf.css
- Extract Tippy.js initialization to tooltips.js
- Update html_formatter.go to use all embedded CSS/JS files
- Improve code organization and maintainability
- Updated ParseDataWithSchema to use NewTypedValue for tree fields
- Fixed tree and table fields to be stored in values TypedMap instead of list TypedList
- This allows nested trees to render correctly when they are fields in a struct

Resolves issue where trees work standalone but not as struct fields
- Added FieldMeta struct to TypedValue to carry rendering hints
- Updated parser to attach field metadata (CompactItems, Format) when creating TypedValue for table fields
- Enhanced HTML formatter to detect nested TextTable in TypedValue
- Implemented inline compact table rendering for tables with pretty:"compact" tag
- Implemented deferred full table rendering for non-compact nested tables
- Deferred tables are rendered after struct fields with proper Grid.js/HTML table formatting

Nested tables are now properly rendered in HTML output:
- Compact tables render inline within the struct
- Non-compact tables render as full tables after the struct section
- Add AgentTypeLLM constant
- Implement LLMAgent using commons-db/llm client
- Support all LLM backends (OpenAI, Anthropic, Gemini, Claude Code)
- Add StructuredOutput field to PromptRequest
- Add StructuredData field to PromptResponse
- Support structured JSON output via WithStructuredOutput
- Integrate with clicky task management for progress tracking
- Session-based cost tracking
- Concurrent batch execution with semaphore control
- Register LLMAgent in AgentManager dispatcher
- Add emergencyCleanup() to force terminal reset in fatal error paths
- Update Fatal() to use deferred cleanup ensuring terminal restoration
- Fix batch goroutine leaks with context cancellation support
- Add panic recovery in batch workers to prevent deadlocks
- Add 5-minute timeout to batch monitoring goroutine
- Fix race condition where batch task completes before workers finish
- Signal workers to stop gracefully via shutdown channel
- Ensure alternate screen exit in all shutdown paths
- Apply emergency terminal reset as last resort in shutdown hook
- Remove unused wg field from Manager struct
Fixed multiple race conditions in batch processing that caused
"context canceled" errors and potential panics:

- Added sync.Once to ensure results channel is closed exactly once
- Added sync.WaitGroup to track goroutine lifecycle
- Added mutex to protect concurrent task status updates
- Re-enabled context cancellation handling in monitoring goroutine
- Ensured goroutines complete before channel closure

The changes prevent:
- Panic from closing already-closed channel (5 different close paths)
- Panic from sending to closed channel
- Data races on task.SetName() and task.SetProgress()
- Context cancellation errors due to improper cleanup

Added comprehensive tests covering:
- Concurrent context cancellation
- Rapid batch completion
- Panic recovery
- Channel closure synchronization
- Goroutine lifecycle management
Fixed multiple data races in the task framework:

Manager struct:
- Converted noColor, noProgress, verbose from bool to atomic.Bool
- Updated all setters to use atomic.Store()
- Updated all getters/usages to use atomic.Load()
- Fixed initialization in newManagerWithConcurrency()

Task struct:
- Protected name field with mutex in SetName() and Name()
- Protected description field with mutex in SetDescription() and Description()
- Protected progress and maxValue fields with mutex in SetProgress()
- Fixed getBufferedLogger() race using sync.Once for initialization
- Added loggerOnce field to ensure thread-safe logger initialization

All fields now have proper synchronization:
- Atomic operations for simple flags (noColor, noProgress, verbose)
- Mutex protection for complex state (name, description, progress)
- sync.Once for one-time initialization (bufferedLogger)

Verified with go test -race ./task - all races eliminated.
- ApplyStyle now defaults to suffix truncation when max-w or max-lines is set
- Updated test to reflect new default behavior
- All truncation tests passing
- truncate/text-ellipsis/text-clip now only set default width if not already specified
- Explicit max-w-[N] values now take precedence over truncate defaults
- Fixes max-w-[5ch] truncate to correctly truncate at 5 chars, not 50
- Replace tablewriter with lipgloss for ANSI and String output
- Keep tablewriter for HTML and Markdown output
- Add renderLipgloss() helper with Tailwind style support
- Parse Tailwind classes to lipgloss.Style for colored cells
- Tables now render with beautiful rounded borders
- Maintain full styling support (colors, bold, italic, etc.)
- Terminal width awareness and word wrapping preserved
Replace custom tree string-building implementation with lipgloss/tree
for improved terminal rendering and ANSI color handling.

Changes:
- Replace FormatTree() method with buildLipglossTree() in tree_formatter.go
- Update TextTree.String() and ANSI() methods to use lipgloss tree rendering
- Use rounded enumerator (╭─, ╰─) for modern tree appearance
- Preserve ASCII mode support for environments without Unicode
- Keep HTML and Markdown rendering unchanged (simple indentation)
- Fix ANSI color handling issues (resolves FIXME on line 166)

Features preserved:
- Collapsed nodes support
- Compact list mode for inline rendering
- Icon support from Pretty() method
- Tailwind style parsing to lipgloss styles
- Max depth limiting

Tests:
- All tree rendering tests pass (TestTreeRendering, TestCompactListNode,
  TestASCIITreeOptions, TestTreeMaxDepth)
- Skip TestTreeWithPrettyTags (parser issue, not related to migration)
@coderabbitai
Copy link
Copy Markdown
Contributor

coderabbitai Bot commented Nov 19, 2025

Walkthrough

This PR large-scale refactors structured rendering: replaces FieldValue with TypedValue, adds TextTable/TextTree/TextList and many HTML/Markdown rendering features, restructures formatters (including an HTML package with embedded assets), extends CLI flag/args handling, adds batching utilities, and introduces many tests and dependency upgrades.

Changes

Cohort / File(s) Summary
Core data model & parser
api/meta.go, api/types.go, api/parser.go, api/parse_tags.go
Introduces TypedValue, TypedMap, TypedList, PrettyData accessors; renames PrettyTableTableOptions (Columns); parser now produces TypedValue trees and TextTable structures; tag parser added.
Text / rendering primitives
api/text.go, api/markdown.go, api/html.go, api/table.go, api/tree_html.go, api/html.go
Adds TextList, HtmlElement, multi-format rendering methods (String/ANSI/HTML/Markdown), TextTable/transformers, HTML tree renderer with optional interactivity and PDF mode.
Styling and tailwind integration
api/styles.go, api/tailwind/tailwind.go, api/themes.go
Adds ResolveStyles/ApplyTailwindStyle, TailwindStyle type, cached style parsing, improved truncate handling, and terminal size/lines utilities.
Icons & file heuristics
api/icons/icons.go, api/icons/files.go
Reworks icon set and initializers, adds All map for lookups, and Filename() to classify files to icons.
Formatters & HTML package
formatters/manager.go, formatters/schema.go, formatters/csv_formatter.go, formatters/excel.go, formatters/markdown_formatter.go, formatters/html/*
Converts formatters to consume TypedValue/TextTable; moves HTML formatter to html package, embeds CSS/JS assets, Grid.js integration, PDF mode, and updated Format signatures (accepting FormatOptions).
Pretty/Parser tests & utilities
formatters/parser.go, formatters/parser_test.go, formatters/*_test.go, formatters/suite_test.go
Updates tests to new APIs (GetTable, GetValue, TextList, String()), migrates many tests to Ginkgo/Gomega, adds new tests (time/duration, ordering, word-wrap), and removes some browser tests.
Humanization & helpers
api/human.go, api/human_test.go, api/json_cleanup.go, ai/json_cleanup.go
Adds Human/HumanNumber/HumanDate helpers and tests; JSON cleanup and UnmarshalWithCleanup utilities.
AI agent API & helpers
ai/api.go, ai/agent.go, ai/claude_agent.go, ai/cost.go
Adds typed prompt helpers (TypedPromptRequest/Response, ExecutePromptTyped), AgentConfig.Pretty, adds AgentTypeAider handling, cost helpers IsEmpty and AggregateByModel, and reduced prompt logging.
Task utilities & batching
task/batch.go, task/manager.go, task/task.go, task/render.go, task/options.go, task/batch_test.go
Adds generic Batch[T] with Run streaming results; Manager refactors to atomic flags, emergencyCleanup, PlainRender, improved shutdown and synchronization; numerous tests.
CLI / flags changes
cobra_command.go, flags/*.go, flags/assignment.go
Adds AddNamedCommand generic, supports args-as-fields (IsArgs), new ARGS constant and arg/stdin priority, guards flag registration on FlagName.
Formatting helpers & format.go
format.go, formatters/*
Adds CompactList alias, exposes Badge, refactors Map/Key filtering, integrates task.Wait in printing, and updates format manager dispatch.
Exec package
exec/exec.go
Adds package logger, tracef, ExecResult.PrettyFull, populates Args in results, and widens arg display.
Examples and filesystem demo
examples/*, filesystem.go, clicky/*
Adds filesystem tree builder (FileTreeNode, NewFileSystem, options), updates examples to use clicky and new rendering paths, and extends Uber demo with showcase types.
Text utilities
text/tokenizer.go
Exports ANSI stripper as StripANSI.
Miscellaneous tests & removals
many formatters/html/*, formatters/*_test.go
Adds/removes tests, moves HTML tooltip assets, removes Playwright tooltip test file, and updates many expected outputs to new rendering.
Repository config
.gitignore, go.mod, examples/go.mod
Adds gitignore entries (ginkgo.report, *.bak, examples/uber_demo/uber_demo) and bumps multiple dependencies and indirect modules.

Sequence Diagram(s)

sequenceDiagram
    participant Caller
    participant Batch as Batch.Run
    participant WorkerPool
    participant Item as ItemFunc
    participant Results

    Caller->>Batch: Run()
    Batch->>WorkerPool: start workers (semaphore MaxWorkers)
    loop for each Item
        Batch->>WorkerPool: submit item
        WorkerPool->>Item: execute(ctx)
        Item-->>WorkerPool: result / panic
        WorkerPool->>Results: send BatchResult{Value,Error,Duration}
    end
    WorkerPool->>Results: close channel when done
    Results-->>Caller: stream of BatchResult
Loading

(Note: diagram highlights the high-level flow for the new generic Batch.Run worker pool and result streaming.)

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~150 minutes

Areas needing extra attention:

  • api model shift: TypedValue/TextTable/TextTree impacts many downstream formatters and renderers.
  • Formatter rework and package move: formattershtml package, new Format signatures, embedded assets and Grid.js integration.
  • Concurrency and shutdown sequences in task manager and Batch implementation (atomic flags, emergencyCleanup, synchronization).
  • CLI flag/args precedence and binding changes (IsArgs, ARGS constant) — potential edge cases.
  • Large public surface changes in icons and style resolution — verify consumers and tests.

Poem

🐰
In bytes and tables I rabbit-hop,
TypedValue seeds let renderings pop.
Trees unfold and icons gleam,
Batches hum — a coder's dream.
Hooray for changes, quick and spry,
I nibble bugs and hop goodbye! 🥕

Pre-merge checks and finishing touches

❌ Failed checks (3 warnings)
Check name Status Explanation Resolution
Title check ⚠️ Warning Title describes a specific technical change but is misaligned with the actual changeset which implements comprehensive refactoring across multiple packages. Revise title to reflect the actual scope: something like 'Refactor data rendering with TypedValue system and structured output support' would better capture the extensive changes.
Description check ⚠️ Warning No pull request description was provided by the author; the description section is completely empty. Add a comprehensive description following the provided template, explaining the objectives, type of change, testing performed, and any breaking changes introduced.
Docstring Coverage ⚠️ Warning Docstring coverage is 52.99% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch pr/fixes

Comment @coderabbitai help to get the list of available commands and usage tips.

Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 32

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (9)
examples/go.mod (1)

162-162: Remove local replace directive before merging.

The replace directive points to a local file path /Users/moshe/go/src/github.com/flanksource/commons, which is user-specific and will break builds for other developers.

Remove this line or replace it with a proper version constraint:

-replace github.com/flanksource/commons v1.41.0 => /Users/moshe/go/src/github.com/flanksource/commons
api/filter.go (1)

191-207: Revert to Primitive() for correct CEL type handling.

The change from fieldValue.Primitive() to fieldValue.String() at line 204 breaks type-aware CEL filtering. The comments at lines 197-203 explicitly document that Primitive() should be used to extract strongly-typed values (int64, float64, bool, string, time.Time) for accurate CEL comparisons.

Converting everything to strings will break filters like:

  • age > 30 (numeric comparison becomes string comparison)
  • active == true (boolean comparison becomes string comparison)
  • Date/time comparisons

This also contradicts the PR title "Convert json.RawMessage to JSON strings instead of byte arrays" - the issue is likely specific to json.RawMessage handling, not all field values.

Revert the change:

-		result[key] = fieldValue.String()
+		result[key] = fieldValue.Primitive()

And at line 259:

-		celType := inferCELTypeFromValue(fieldValue.String())
+		celType := inferCELTypeFromValue(fieldValue.Primitive())

If the goal is to handle json.RawMessage specifically, add special handling for that type rather than converting all values to strings.

api/filter_test.go (1)

10-18: Verify reason for XDescribe and document or re-enable before merge

Your observation is accurate. The file contains two test suites wrapped in ginkgo.XDescribe, which skips all their tests:

  1. FilterTableRows (line 10): 12 test cases covering CEL filter expressions (equality, numeric comparisons, boolean logic, AND/OR conditions, string functions, error handling).
  2. rowToCELMap (line ~154): 3 test cases for TypedValue-to-map conversion logic.

Both are essential for validating the CEL filtering pipeline with the new TypedValue-based implementation. Their tests are completely skipped.

Before merging, either:

  • Re-enable with Describe if tests pass,
  • Add an explicit comment explaining the temporary skip with an issue/task reference,
  • Ensure the re-enable work is tracked and committed to.
task/task.go (2)

475-504: Clarify timeout error construction in WaitFor

In the timeout branch, you read from the timeout channel twice and format the message with %s:

case <-timeout:
    // ...
    t.err = fmt.Errorf("task wait timeout after %s", <-timeout)

Since case <-timeout already drained the channel, the second <-timeout returns the zero time.Time, and the message doesn’t convey how long it actually waited. Consider instead using a fixed duration or elapsed time:

- t.err = fmt.Errorf("task wait timeout after %s", <-timeout)
+ t.err = fmt.Errorf("task wait timeout after %s", 300*time.Second)
+ // or: time.Since(t.startTime) if that’s more useful

This avoids the extra channel read and makes the error clearer.


399-422: Move emergencyCleanup() call before logger.Fatalf() to ensure it executes

The review comment is correct. Standard Go log.Fatalf calls os.Exit(1), and os.Exit immediately terminates the process without running deferred functions. Since logger in this file imports from github.com/flanksource/commons/logger and follows standard logging patterns, the defer block will never execute when Fatal() is called.

The current code at lines 407–422 sets up a defer to call emergencyCleanup(), but this will never run. The cleanup must be called explicitly before logger.Fatalf(), similar to how stopRenderAndWait() is already called before it.

Apply the suggested fix: remove the defer and call t.manager.emergencyCleanup() directly before logger.Fatalf().

formatters/formatters_test.go (2)

401-438: Time expectations in tests are tied to local timezone while parsing normalizes to UTC

FieldTypeDate parsing now normalizes Unix timestamps to UTC (time.Unix(...).UTC()), but the tests in TestDateParsing and TestTableFormattingWithDates build expectations with time.Unix(..., 0).Format(...) (no UTC), and comments still mention “local timezone”. This will diverge on non‑UTC environments and doesn’t match the new UTC behavior.

Consider updating expectations and comments to be explicitly UTC-based, e.g.:

expected := time.Unix(1705315800, 0).UTC().Format("2006-01-02 15:04:05")

and aligning table tests with that same convention.

Also applies to: 453-456, 588-601


462-487: TestNestedMapFormatting no longer exercises the pretty map rendering path

TestNestedMapFormatting now does:

fieldValue, err := field.Parse(nestedData)
formatted := fmt.Sprintf("%v", fieldValue.Primitive())

but FieldValue.Primitive() for map values returns the raw map (or its default %v representation), not the prettified, title‑cased, indented map string the test asserts against ("Level1:", "Level2:", tabs, etc.).

Unless some other code is populating StringValue or altering Primitive() for maps, this test is either going to fail or is no longer validating the real formatting logic (which seems to live in the pretty/formatter layer).

I’d recommend:

  • Either calling the code that actually formats nested maps for display (e.g., via the PrettyFormatter/PrettyData pipeline), or
  • Asserting against fieldValue.Text.String() (if that’s where map prettification now happens), and adjusting expectations accordingly.

Right now the assertions about Level1: and indentation don’t match what Primitive() returns for a map.

api/types.go (1)

410-428: Primitive now returns formatted strings for dates; doc comment is outdated

Primitive() now does:

if v.TimeValue != nil {
    // Return formatted date string without timezone
    return v.TimeValue.Format(v.DateTimeFormat())
}

but the comment still says it returns “string, int64, float64, bool, time.Time or the raw value”. Since dates are now returned as formatted strings (not time.Time), that description is misleading.

Two suggestions:

  • Update the comment to reflect the new behavior (dates → formatted strings), and
  • Double-check any callers that might have relied on getting a time.Time back and adjust them (or use a dedicated accessor) if needed.

This change is semantically important for downstream consumers (e.g., JSON marshaling).

task/manager.go (1)

702-707: Debug output prints atomic.Bool structs, not boolean values

Debug() currently interpolates global.noColor and global.noProgress directly, which are atomic.Bool values. This will print the struct representation (e.g. {0}) instead of a plain true/false, and also bypasses the intended .Load() accessor.

Consider switching to:

- result += fmt.Sprintf("Task Manager: {no-color=%v, no-progress=%v, workers=%v}\n", global.noColor, global.noProgress, global.workersActive.Load())
+ result += fmt.Sprintf(
+  "Task Manager: {no-color=%v, no-progress=%v, workers=%v}\n",
+  global.noColor.Load(),
+  global.noProgress.Load(),
+  global.workersActive.Load(),
+)
🧹 Nitpick comments (74)
formatters/filter_integration_test.go (4)

52-91: Clarify or tighten runJQQuery behavior for non‑JSON/YAML formats

runJQQuery’s comment says non‑JSON/YAML formats are “converted to JSON first,” but the default branch just calls json.Unmarshal on the raw output string. For CSV/Markdown/plain text this will just fail with a JSON parse error, which is fine for a misuse but contradicts the comment and is a bit misleading for future readers.

Since all current call sites only pass "json" or "yaml", two options that would make this more robust:

  • Narrow the helper to only support JSON/YAML and fail fast for anything else:
 func runJQQuery(output string, jqExpr string, format string) ([]interface{}, error) {
   var data interface{}

   // Parse output based on format
   switch format {
   case "json":
     if err := json.Unmarshal([]byte(output), &data); err != nil {
       return nil, err
     }
   case "yaml":
     if err := yaml.Unmarshal([]byte(output), &data); err != nil {
       return nil, err
     }
-  default:
-    // For non-JSON/YAML formats, convert to JSON first
-    if err := json.Unmarshal([]byte(output), &data); err != nil {
-      return nil, err
-    }
+  default:
+    return nil, fmt.Errorf("runJQQuery only supports json/yaml formats, got %q", format)
   }
  • Or, if you really want a JSON fallback, update the comment to match actual behavior (and document that non‑JSON text will just error on parse) so future test authors aren’t surprised.

355-440: Make CSV/Markdown expectations data‑driven instead of hard‑coded

For the CSV/Markdown cases, the test body always does:

Expect(result).To(ContainSubstring("Alice"))
Expect(result).ToNot(ContainSubstring("Bob"))

This matches the current fixtures but will be easy to break if someone adds another test case with different names.

Consider driving these expectations from the table alongside match/notMatch, e.g.:

-   match    string
-   notMatch string
+   match        string
+   notMatch     string
+   contains     []string
+   notContains  []string

Then in the else branch:

- } else {
-   // For CSV/Markdown, use simple substring validation
-   Expect(result).To(ContainSubstring("Alice"))
-   Expect(result).ToNot(ContainSubstring("Bob"))
- }
+ } else {
+   for _, s := range tt.contains {
+     Expect(result).To(ContainSubstring(s))
+   }
+   for _, s := range tt.notContains {
+     Expect(result).ToNot(ContainSubstring(s))
+   }
+ }

This keeps the behavior the same for current tests but makes the table safer to extend.


528-616: Strengthen some edge‑case assertions for clearer intent

The edge‑case table is useful and covers important scenarios (empty filter, match‑nothing, match‑all, invalid CEL, bad field). Two small tweaks would make the tests more explicit:

  • For “filter that matches nothing”, also assert that the overall result is empty (e.g., Expect(result).To(Equal("[]\n")) or similar, depending on your JSON encoder), not just that Alice is absent.
  • For “empty filter – no filtering”, you might also assert that Bob is still present to make the “no filtering” behavior explicit (either via jq or a simple substring check).

Not required for correctness, but it would make the expected behavior in these edge cases less ambiguous.


618-717: TreeNodeFiltering tests don’t currently validate filter semantics

The tree fixture is nicely constructed, but each test only asserts that the filtered output is non‑empty:

Expect(result).ToNot(BeEmpty(), "Filtered tree output should not be empty")

This would still pass if the filter logic for trees was accidentally ignored or misapplied.

Consider strengthening these tests by checking for presence/absence of specific labels or ids, for example:

  • For active == true: expect ChildA/GrandchildA present, ChildB/GrandchildB absent.
  • For _type == "child": expect only child nodes’ labels/ids present, no grandchildren.
  • For the grandchild compound filter: expect only GrandchildA present.

Even simple substring checks on result (e.g., ContainSubstring("ChildA"), Not(ContainSubstring("ChildB"))) would significantly increase the signal of these tests.

ai/cost.go (2)

20-22: Consider including TotalTokens in the emptiness check.

The Cost struct has a TotalTokens field that isn't checked in IsEmpty(). For consistency and correctness, consider including it in the check.

Apply this diff:

 func (c Cost) IsEmpty() bool {
-	return c.InputTokens == 0 && c.OutputTokens == 0 && c.InputCost == 0 && c.OutputCost == 0
+	return c.InputTokens == 0 && c.OutputTokens == 0 && c.TotalTokens == 0 && c.InputCost == 0 && c.OutputCost == 0
 }

43-68: Consider consolidating with GetCostsByModel().

The AggregateByModel() method has similar logic to GetCostsByModel() (lines 78-96). Both aggregate costs by model name. Consider whether these can be consolidated or one can call the other to reduce duplication.

For example:

func (c Costs) AggregateByModel() Costs {
	modelCosts := c.GetCostsByModel()
	aggregated := make(Costs, 0, len(modelCosts))
	for _, cost := range modelCosts {
		if !cost.IsEmpty() {
			aggregated = append(aggregated, cost)
		}
	}
	return aggregated
}
ai/agent.go (1)

194-194: Consider removing commented-out code.

The commented Temperature assignment could be dead code. If this is intentional (e.g., temporarily disabled), please add a comment explaining why. Otherwise, consider removing it entirely.

Apply this diff to remove the commented line:

 	MaxConcurrent: 4,
 	Debug:         false,
 	Verbose:       false,
-	// Temperature:   0.2,
 	StrictMCPConfig: true,
formatters/tree_test.go (1)

155-155: Track the skipped test issue.

The test is skipped with a clear reason: "Parser doesn't populate Tree field from TreeNode". Consider creating an issue to track this limitation if one doesn't already exist.

Do you want me to help generate an issue description for tracking this parser limitation?

api/tailwind/tailwind.go (1)

370-378: Consider a more maintainable suffix-stripping approach.

The current implementation strips suffixes one at a time, which works but could be refactored for better maintainability if more units are added in the future.

If additional units might be added, consider using a loop:

// Strip common CSS unit suffixes
units := []string{"ch", "px", "rem", "em", "vw", "vh"}
for _, unit := range units {
    valueStr = strings.TrimSuffix(valueStr, unit)
}
ai/json_cleanup.go (1)

14-15: Consider non-greedy matching for JSON extraction.

The regex patterns use greedy matching (.*), which could extract more content than intended when multiple JSON structures are present in the response.

Consider using non-greedy matching:

-	jsonObjectRegex = regexp.MustCompile(`(?s)\{.*\}`)
-	jsonArrayRegex  = regexp.MustCompile(`(?s)\[.*\]`)
+	jsonObjectRegex = regexp.MustCompile(`(?s)\{.*?\}`)
+	jsonArrayRegex  = regexp.MustCompile(`(?s)\[.*?\]`)

However, this may not work correctly for nested structures. A better approach might be to use a proper JSON parser to find the first valid JSON object/array, but that would require more complex implementation.

filesystem.go (1)

132-168: Consider logging skipped directories and files.

The buildFileTree function silently ignores errors when reading directories (line 150) and building child nodes (line 161). While this graceful degradation prevents the entire tree from failing, it may hide permission issues or filesystem problems that users should be aware of.

Consider adding debug logging for skipped items:

 		entries, err := os.ReadDir(path)
 		if err != nil {
+			logger.V(4).Infof("Skipping directory %s: %v", path, err)
 			return node, nil
 		}

 		for _, entry := range entries {
 			if !options.ShowHidden && strings.HasPrefix(entry.Name(), ".") {
 				continue
 			}

 			childPath := filepath.Join(path, entry.Name())
 			childNode, err := buildFileTree(childPath, maxDepth, currentDepth+1, options)
 			if err != nil {
+				logger.V(4).Infof("Skipping %s: %v", childPath, err)
 				continue
 			}
 			node.Children = append(node.Children, childNode)
 		}
flags.go (1)

51-56: Filter/no-color wiring looks correct; color branch can be simplified

Binding --filter into Flags.Filter and --no-color into Flags.NoColor lines up with the FormatOptions struct and ensures both formatting and logging can see these options. Mirroring NoColor into a.Color in UseFlags() correctly keeps logger coloring in sync.

You can simplify the branch without changing behavior:

 func (a *AllFlags) UseFlags() {
-	if a.NoColor {
-		a.Color = false
-	} else {
-		a.Color = true
-	}
+	a.Color = !a.NoColor
 	logger.Configure(a.Flags)
 	logger.V(6).Infof("Using flags: %s", MustFormat(*a, FormatOptions{Pretty: true}))

Also applies to: 81-88

formatters/parser.go (5)

73-104: GetStructRow TextList conversion is sound; inner CanInterface branch is redundant

Switching GetStructRow to return api.TextList and wrapping non-Pretty fields in api.Text{Content: value} fits the new Textable pipeline, and using processFieldValue before fmt.Sprintf should give consistent pointer handling.

The inner if fieldVal.CanInterface() / else is redundant because you already continue on !fieldVal.CanInterface() at Line 81, so the else branch is effectively unreachable. You can safely drop that else block and the second CanInterface check to simplify the function.


365-396: convertSliceToPrettyList correctly produces a TypedList from Pretty slice

Building an api.TypedList by iterating the slice, safely dereferencing pointers, and appending results of api.TryTypedValue is aligned with the new TypedValue design. Storing the result in prettyData.TypedList via the embedded TypedValue is also idiomatic.

One nuance: elements for which TryTypedValue returns nil are silently dropped. If you ever expect mixed slices (Pretty + non-Pretty), consider falling back to api.NewTypedValue(elem.Interface()) for those cases instead of skipping them.


548-549: Column ordering by Tailwind order‑X and TypedValue wrapping are reasonable; watch assumptions

The changes in convertSliceToPrettyDataWithOptions:

  • Return an empty PrettyObject schema for empty slices,
  • Derive tableFields from the first successful PrettyRow, and
  • Sort columns by api.ExtractOrderValue(text.Style) when the first row’s cells are api.Text

provide a sensible way to control column ordering using Tailwind order-* classes.

A couple of subtle points to be aware of:

  • Columns with the same orderValue retain the (random) Go map iteration order; if deterministic order among ties matters, you may want a secondary sort key (e.g., column name).
  • The final TypedValue: *api.TryTypedValue(rows) assumes TryTypedValue will always handle []api.PrettyDataRow (which it currently does). If that case is ever removed, this would panic; using api.NewTypedValue(rows) or guarding against nil would make the function more future‑proof.

If you want to harden this, verify whether any callers rely on deterministic column ordering for equal order-* values and, if so, introduce a tie‑breaker sort.

Also applies to: 645-684, 706-719


1041-1050: Tree conversion correctly hooks into TypedValue; consider guarding TryTypedValue

Using TypedValue: *api.TryTypedValue(rootNode) in convertSliceToTreeData is consistent with the TryTypedValue case for api.TreeNode (which builds a TextTree via NewTree). Given isTreeNodeSlice guards this path, TryTypedValue should always match.

If you want extra robustness against future changes to TryTypedValue, you could switch to:

if tv := api.TryTypedValue(rootNode); tv != nil {
    return &api.PrettyData{Schema: ..., TypedValue: *tv, Original: val.Interface()}, nil
}

and fall back or error if it returns nil.


1068-1069: Slice→table PrettyData now fully uses TableOptions and TypedValue

For convertSliceToPrettyData:

  • Empty slices now return a PrettyData with an empty PrettyObject schema (rather than a nil schema), which simplifies downstream handling.
  • TableOptions.Columns is populated from tableFields, and the row slice is wrapped via TypedValue: *api.TryTypedValue(rows).

This mirrors the options‑aware variant and is consistent with how tables are represented elsewhere. The same caveat about relying on TryTypedValue([]PrettyDataRow) always being non‑nil applies, but functionally this is a solid upgrade.

Also applies to: 1163-1178

formatters/html/html_icon_test.go (1)

1-9: Tests correctly target new HTML formatter API; consider avoiding dot‑import

Switching the test package to html and updating calls to formatter.Format(data, FormatOptions{}) ensures the tests exercise the new Format signature on NewHTMLFormatter.

The only stylistic concern is the dot‑import of github.com/flanksource/clicky/formatters. To reduce the chance of name collisions and make it clearer where FormatOptions comes from, you might prefer:

import (
    "strings"
    "testing"

    "github.com/flanksource/clicky/api/icons"
    fmtopts "github.com/flanksource/clicky/formatters"
)

// ...

html, err := formatter.Format(data, fmtopts.FormatOptions{})

This keeps the intent while staying explicit.

Also applies to: 21-23, 73-76

formatters/html/tree.js (1)

1-23: Tree expansion helper is straightforward; normalize IDs to avoid string/number mismatches

The createTreeData helper cleanly encapsulates expand/collapse behavior using a Set of node IDs and data-node-id attributes under this.$el, which fits well with an Alpine.js x-data pattern.

To avoid subtle bugs when IDs are sometimes numbers and sometimes strings (dataset values are always strings), consider normalizing in toggleNode / isExpanded:

toggleNode(id) {
    const key = String(id);
    if (this.expandedNodes.has(key)) {
        this.expandedNodes.delete(key);
    } else {
        this.expandedNodes.add(key);
    }
},
isExpanded(id) {
    return this.expandedNodes.has(String(id));
}

This guarantees consistency with n.dataset.nodeId.

flags/parser.go (1)

45-52: args/stdin tagging logic is correct; update comment for clarity

The new logic:

  • Treats field.Tag.Get("args") == "true" and field.Tag.Get("stdin") == "true" as boolean flags, and
  • Includes fields when they have a flag tag, or are marked as args/stdin, even if flag:"-"

is a sensible expansion of what ParseStructFields returns, and propagating IsArgs / IsStdin into FieldInfo gives downstream binding code enough information.

The top‑level comment still says “returns a flat list of all fields with flag tags,” which is now slightly misleading. Consider updating it to reflect the broader behavior, e.g.:

// ParseStructFields recursively parses struct fields (including embedded structs)
// and returns a flat list of all fields with flag/args/stdin tags.

Also applies to: 56-67

api/icons/files.go (1)

8-71: Filename-based icon mapping looks solid; consider Taskfile.yml/YAML variants as a follow-up

The basename/extension mapping is straightforward and safe, and the fallback to File is sensible. One small enhancement you might want to consider later is handling common Taskfile YAML variants (e.g. Taskfile.yml, Taskfile.yaml) in the basename switch so they also get the Makefile icon instead of the generic YAML icon.

formatters/pretty_formatter.go (1)

30-39: PrettyFormatter.Parse only populates Schema/Original; TypedValue remains empty

Parse currently delegates to p.parser.Parse and returns PrettyData with Schema and Original set, but leaves the embedded TypedValue (and any row/value structures) unset. If callers expect a fully populated PrettyData similar to ToPrettyData, this could be surprising.

If the intent is for Parse to be a schema-only helper, this is fine; otherwise, consider delegating to ToPrettyData (or mirroring its logic) so PrettyData from Parse is interchangeable with PrettyData from the normal formatting path.

formatters/html/pdf.css (1)

1-118: PDF layout overrides look appropriate; double-check media scope

The compacting of typography, spacing, grids, and tables is coherent for PDF output, and disabling tree toggles in this context makes sense. One thing to confirm is that this stylesheet is only loaded for PDF/preview HTML; with @media print, screen it will affect regular screen rendering on any page where it’s included. If you ever reuse pdf.css in non-PDF views, you may want to narrow this to @media print or a PDF-specific parent class.

formatters/html/html_pdf_formatter.go (1)

36-39: PDF conversion logic is duplicated and could be centralized

Both Format and FormatPrettyData contain nearly identical sequences for:

  • Creating temp HTML/PDF files,
  • Invoking ChromiumConverter.Convert, and
  • Reading back the generated PDF bytes.

You could reduce duplication and future maintenance risk by extracting a helper like convertHTMLToPDF(htmlContent string) (string, error) that encapsulates the temp-file lifecycle and conversion call. Both methods would then only be responsible for producing HTML (with or without options) and delegating the rest.

Also applies to: 80-97, 158-197

formatters/table_labels_test.go (1)

35-42: TableOptions.Columns usage looks correct; update comment to match new API

The switch from TableOptions{Fields: ...} to TableOptions{Columns: ...} in all schemas aligns with the current api.TableOptions definition and the tests’ intent (labels + ordering).

One minor follow‑up: the comment on Line 167 still says “order of TableOptions.Fields”; it should now say “TableOptions.Columns” to avoid confusion for future readers.

Also applies to: 122-129, 184-193, 260-266, 311-317

flags/assignment.go (1)

12-12: Args/stdin precedence is clear; confirm behavior when both stdin and args are present

The new precedence for string and string-slice fields is:

  • Use existing flag value if non‑empty.
  • Otherwise, if fv.IsArgs and len(args) > 0, take from args.
  • Otherwise, if fv.IsStdin and len(args) > 0, also take from args.
  • Otherwise, if fv.IsStdin and isStdinAvailable, read from stdin.

This means for a stdin:"true" field, if both stdin and args are present, args will win and stdin will be ignored. That may be intentional (args override piped input), but many CLIs instead prefer stdin when explicitly requested and only fall back to args when stdin is absent.

If you’d rather “prefer stdin when available, then fall back to args”, you could reorder the IsStdin branches, e.g.:

-    } else if val == "" && fv.IsStdin && len(args) > 0 {
-        // stdin:"true" can also use args as fallback
-        val = args[0]
-    } else if val == "" && fv.IsStdin && isStdinAvailable {
+    } else if val == "" && fv.IsStdin && isStdinAvailable {
         if lines, err := loadFromStdin(); err != nil {
             return err
         } else if len(lines) > 0 {
             val = lines[0]
         }
+    } else if val == "" && fv.IsStdin && len(args) > 0 {
+        // stdin:"true" can also use args as fallback
+        val = args[0]
     }

and similarly for the string-slice branch.

Also applies to: 27-39, 58-71

formatters/time_duration_formatting_test.go (1)

21-45: Avoid passing an explicit empty style string into api.Human

runTests currently always does:

text := api.Human(tt.input, tt.style)

When tt.style is the zero value (""), this still passes a style argument to Human, which treats "" as a real style token.

To better model “no extra style”, consider:

-            text := api.Human(tt.input, tt.style)
+            var text api.Text
+            if tt.style != "" {
+                text = api.Human(tt.input, tt.style)
+            } else {
+                text = api.Human(tt.input)
+            }

This keeps behavior explicit when a style is set while avoiding accidental empty-class artifacts in the HTML/ANSI output when no style is intended.

format.go (3)

76-84: MustPrint now waits on tasks and omits the trailing newline—confirm call sites

MustPrint changed to:

  • Call task.Wait() unconditionally.
  • Use fmt.Print(result) instead of fmt.Println(result).

Two implications:

  1. Potential deadlock if called from within a worker

    task.Wait() blocks until workersActive == 0. If MustPrint is ever invoked from inside a task worker, this will block forever. That may be fine if you only ever call MustPrint from top‑level CLI code after scheduling tasks, but it’s worth double‑checking.

    If you want to keep Wait() as a top‑level concern, an alternative is to call task.Wait() explicitly in your main/entrypoint and keep MustPrint as a pure “format + print” helper.

  2. Change in output formatting

    Switching from Println to Print removes the automatic trailing newline. If existing callers assume a newline (e.g., for shell pipelines), you may want to either:

    • Append "\n" to result in Formatters, or
    • Keep Println and trim where needed.

Please verify that current call sites are safe under this new behavior.


100-111: CompactList wrapper is fine; consider delegating to api.CompactList

The new CompactList helper mirrors api.CompactList but adds a special case for empty lists:

if len(items) == 0 {
    return Text("[]", "text-muted")
}
list := api.List{MaxInline: 3}
for _, item := range items {
    list.Items = append(list.Items, Human(item))
}
return list

That’s handy, but it does duplicate the list-building logic already present in api.CompactList. To reduce future divergence, you could delegate:

func CompactList[T any](items []T) api.Textable {
    if len(items) == 0 {
        return Text("[]", "text-muted")
    }
-   list := api.List{MaxInline: 3}
-   for _, item := range items {
-       list.Items = append(list.Items, Human(item))
-   }
-   return list
+   return api.CompactList(items)
}

The Badge alias export (var Badge = api.Badge) is straightforward and makes the HTML helper easier to reach from the clicky package.

Also applies to: 136-137


153-155: Map now skips empty/<nil> values; avoid double stringification

Filtering out effectively-empty values in Map is a good idea:

if fmt.Sprintf("%v", m[k]) == "" || fmt.Sprintf("%v", m[k]) == "<nil>" {
    continue
}

You can avoid calling fmt.Sprintf twice per entry and make the intent a bit clearer:

-    if fmt.Sprintf("%v", m[k]) == "" || fmt.Sprintf("%v", m[k]) == "<nil>" {
-        continue
-    }
+    sval := fmt.Sprintf("%v", m[k])
+    if sval == "" || sval == "<nil>" {
+        continue
+    }

Also be aware this check relies on the %v representation: any type whose String() returns "" or "<nil>" will be filtered, which is probably fine but worth keeping in mind.

api/tree_html.go (1)

66-100: Unify node ID generation instead of recomputing from nodeCounter

For interactive non-leaf nodes you call generateNodeID() once, then later derive IDs via fmt.Sprintf("node-%d", r.nodeCounter). This works only as long as nothing else increments nodeCounter between those uses and tightly couples the markup to internal state.

It would be more robust to compute nodeID once and reuse it for the toggle span, the clickable label, and the x-show <ul>:

-    if len(children) > 0 && r.interactive {
-        // Alpine.js toggle for nodes with children
-        nodeID := r.generateNodeID()
+    hasChildren := len(children) > 0
+    var nodeID string
+    if hasChildren && r.interactive {
+        // Alpine.js toggle for nodes with children
+        nodeID = r.generateNodeID()
         result.WriteString(fmt.Sprintf(
             `<span class="tree-toggle" @click.stop="toggleNode('%s')" data-node-id="%s">`,
             nodeID, nodeID))
         ...
-    } else {
+    } else {
         result.WriteString(`<span class="tree-leaf-indicator">•</span>`)
     }
@@
-    if len(children) > 0 && r.interactive {
-        nodeID := fmt.Sprintf("node-%d", r.nodeCounter)
+    if hasChildren && r.interactive {
         result.WriteString(fmt.Sprintf(
             `<span class="tree-node cursor-pointer" @click="toggleNode('%s')">`, nodeID))
@@
-    if len(children) > 0 {
-        if r.interactive {
-            nodeID := fmt.Sprintf("node-%d", r.nodeCounter)
+    if hasChildren {
+        if r.interactive {
             result.WriteString(fmt.Sprintf(
                 `<ul class="tree-children ml-4 mt-1 space-y-1" x-show="isExpanded('%s')">`, nodeID))

This keeps the ID logic localized and resilient to future changes in how IDs are generated or counters are managed.

formatters/csv_formatter.go (1)

54-65: Propagate per-row CSV write errors explicitly

You currently ignore the error from writer.Write for each row and only check writer.Error() after Flush(). While this ultimately surfaces failures, it makes it harder to localize issues and slightly obscures control flow.

Consider checking the error per row and returning early:

-    for _, row := range table.Rows {
-        writer.Write(table.AsString(row))
-    }
+    for _, row := range table.Rows {
+        if err := writer.Write(table.AsString(row)); err != nil {
+            writer.Flush()
+            return "", fmt.Errorf("failed to write CSV row: %w", err)
+        }
+    }

This keeps the error path closer to the failing operation while preserving the final writer.Error() safety net.

formatters/pretty_row_integration_test.go (1)

155-201: Header-order test is clear; minor robustness improvement is optional

The column-ordering test is a nice end-to-end check that order-N styles affect header order. Using strings.Index on the header line works here because the column names are distinct.

If you ever add columns with overlapping names, consider parsing the header row into fields (strings.Split(headerLine, "|")) and asserting positions by index rather than substring offsets, but this is optional given the current names.

formatters/html/html_formatter_test.go (1)

11-392: Comprehensive HTML formatter test coverage looks solid

The tests exercise schema parsing, nested maps, date/currency formatting, table header labels, and PDF mode in a realistic way; nothing stands out as fragile beyond the usual string‑matching brittleness in HTML tests.

api/pretty_row_test.go (2)

104-123: Tighten the first Text type assertion for consistency

You capture the ok value from nameField.Textable.(Text) but never assert it, while you do assert ok for countField. Either drop ok there or add an Expect(ok).To(BeTrue()) before using nameText to keep the pattern consistent and make failures easier to diagnose.

-			nameText, ok := nameField.Textable.(Text)
-			Expect(nameText.Content).To(Equal("Interface Test"))
-			Expect(nameText.Style).To(Equal("font-bold"))
+			nameText, ok := nameField.Textable.(Text)
+			Expect(ok).To(BeTrue())
+			Expect(nameText.Content).To(Equal("Interface Test"))
+			Expect(nameText.Style).To(Equal("font-bold"))

186-202: OrderedTestStruct.PrettyRow ordering contract is clear

Defining explicit order-* styles plus an unordered field (Age) is a clear way to exercise ordering behavior; comments documenting the intended relative positions will help future tests that rely on this struct.

formatters/map_fields_test.go (1)

31-82: Map/table formatter tests updated cleanly to new PrettyData APIs

The migration to TableOptions.Columns, api.NewStructParser(), GetValue/GetTable, and .String() is consistent across all tests (including the XTests) and keeps assertions focused on rendered content rather than internal maps, which should make future refactors easier.

Also applies to: 136-161, 237-261, 288-332, 349-394

task/batch_test.go (1)

162-201: Use started/completed counters in assertions or remove them

started and completed atomics are incremented but never asserted, so the test doesn’t currently verify its comment that “all goroutines complete before the channel is closed.” Either add an assertion such as Expect(completed.Load()).To(Equal(started.Load())) (with appropriate tolerance if cancellation is possible), or drop these counters to keep the test focused on result counts.

task/task.go (1)

652-653: Guard ellipsis width against very small terminals and confirm log prefix change

Using lo.Ellipsis(displayName, api.GetTerminalWidth()-20) is fine for normal widths, but on very narrow terminals GetTerminalWidth()-20 can go non‑positive. It’s safer to clamp to a small minimum:

- text.Content = fmt.Sprintf("%s %-10s", lo.Ellipsis(displayName, api.GetTerminalWidth()-20), duration)
+ maxWidth := api.GetTerminalWidth() - 20
+ if maxWidth < 10 {
+     maxWidth = 10
+ }
+ text.Content = fmt.Sprintf("%s %-10s", lo.Ellipsis(displayName, maxWidth), duration)

Also, the change from "\n\t%s" to "\n%s" for log child lines is intentional but impacts indentation; just make sure downstream renderers are happy with the new alignment.

Also applies to: 707-710

task/batch.go (5)

16-21: Consider removing or populating the unused Results field.

The Results field in the Batch[T] struct is exported but never populated by the Run() method. Results are only returned via the channel. Consider either:

  1. Removing the field if it's not needed
  2. Populating it during Run() if callers are expected to access results via the struct

35-37: Document the default MaxWorkers value.

The default MaxWorkers value of 3 appears arbitrary. Consider documenting why this specific value was chosen or making it configurable via a package-level constant.


61-75: Improve debug tracing for batch items.

Line 62 traces item which is a function pointer and won't produce useful output. Consider using itemNum or a descriptive label instead:

-			b.tracef(t, "Queuing %s %d of %d", item, i+1, total)
+			b.tracef(t, "Queuing item %d of %d", i+1, total)

Similarly on line 75:

-			b.tracef(t, "Acquired semaphore %v %d of %d", item, i+1, total)
+			b.tracef(t, "Acquired semaphore for item %d of %d", i+1, total)

And line 97:

-				b.tracef(t, "Running %s %d of %d", item, itemNum, total)
+				b.tracef(t, "Running item %d of %d", itemNum, total)

121-121: Consider making the batch timeout configurable.

The 5-hour timeout (line 121) is hardcoded and may not be appropriate for all use cases. Consider making this configurable via a field in the Batch[T] struct:

type Batch[T any] struct {
	Name       string
	Items      []func(logger logger.Logger) (T, error)
	MaxWorkers int
+	Timeout    time.Duration  // Default: 5 hours if not set
	Results    []T
}

Then in the Run method:

+			timeout := b.Timeout
+			if timeout == 0 {
+				timeout = 5 * time.Hour
+			}
-			timeout := time.After(5 * time.Hour)
+			timeoutChan := time.After(timeout)

			for {
				select {
				case <-ctx.Done():
					// ...
-				case <-timeout:
+				case <-timeoutChan:

197-197: Review the hardcoded trace logging level.

Setting the log level to logger.Trace1 for all batch operations may produce excessive output in production. Consider:

  1. Making the log level configurable
  2. Only enabling trace logging when the DEBUG environment variable includes "batch"
-	}).SetLogLevel(logger.Trace1)
+	}, task.WithLogLevel(logger.GetEffectiveLogLevel()))

Or conditionally:

+	logLevel := logger.Info
+	if strings.Contains(os.Getenv("DEBUG"), "batch") {
+		logLevel = logger.Trace1
+	}
-	}).SetLogLevel(logger.Trace1)
+	}).SetLogLevel(logLevel)
api/html.go (2)

128-152: Underline wrapping order contradicts the comment

In formatHTML you state “Underline goes innermost” but prepend "u" to the tags slice:

if style.Underline {
    tags = append([]string{"u"}, tags...)
}

Given the subsequent loop, this makes <u> the outermost tag. To make underline innermost, append it last instead:

if style.Underline {
-    tags = append([]string{"u"}, tags...)
+    tags = append(tags, "u")
}

73-81: Replace custom HTML escaping with stdlib and escape attribute values

The review comment correctly identifies two improvements:

  1. The html.EscapeString function escapes &, <, >, ', and " — the same characters your custom implementation handles, making the stdlib version a better choice for maintainability.

  2. The formatAttributes function at lines 197-203 does not escape attribute values before interpolation, which creates a vulnerability if attribute values ever contain special characters (particularly "). While current usages in this file use hardcoded or validated class names, this is a valid defensive programming concern.

Both improvements are recommended for security and code quality best practices.

formatters/manager.go (3)

197-205: FormatWithOptions: []byte shortcut won’t affect json.RawMessage

The new shortcut:

if len(data) == 1 {
    switch v := data[0].(type) {
    case string:
        return v, nil
    case []byte:
        return string(v), nil
    case api.Textable:
        return formatTextable(v, options)
    }
}

does convert raw []byte to string, but it does not match json.RawMessage (its dynamic type is json.RawMessage, not []byte). Given the PR’s goal (“Convert json.RawMessage to JSON strings instead of byte arrays”), you probably also want:

-import "encoding/json"
+import "encoding/json"

if len(data) == 1 {
    switch v := data[0].(type) {
    case string:
        return v, nil
-   case []byte:
-       return string(v), nil
+   case []byte:
+       return string(v), nil
+   case json.RawMessage:
+       return string(v), nil
    case api.Textable:
        return formatTextable(v, options)
    }
}

so RawMessage gets the same treatment.


274-279: HTML/HTML‑PDF error messages and custom formatter lookup could be simplified

In both FormatWithOptions and FormatWithSchema you repeat the custom‑formatter lookup and hard‑code an “html formatter not registered” message even for "html-pdf":

case "html", "html-pdf":
    if formatter, ok := GetCustomFormatter(format); ok {
        return formatter(data, options)
    }
    return "", fmt.Errorf("html formatter not registered, register using 'import _ github.com/flanksource/clicky/formatters/html'")

and:

case "html", "html-pdf":
    formatter, ok := GetCustomFormatter(options.Format)
    if !ok {
        return "", fmt.Errorf("%s formatter not registered, registing using 'import _ github.com/flanksource/clicky/formatters/html'", options.Format)
    }
    return formatter(prettyData, options)

Given you already do a generic GetCustomFormatter(format) check at the top of FormatWithOptions, the per‑case lookup is redundant. Also, the messages should use the actual format name and fix the typo.

Suggested clean‑up for the switch case:

case "html", "html-pdf":
-   if formatter, ok := GetCustomFormatter(format); ok {
-       return formatter(data, options)
-   }
-   return "", fmt.Errorf("html formatter not registered, register using 'import _ github.com/flanksource/clicky/formatters/html'")
+   return "", fmt.Errorf("%s formatter not registered, register using 'import _ \"github.com/flanksource/clicky/formatters/html\"'", format)

and mirror the same pattern (and spelling) in FormatWithSchema.

Also applies to: 386-411


155-180: formatTextable JSON/YAML behavior may serialize structs instead of user-facing text

formatTextable marshals the api.Textable value directly for JSON/YAML:

case "json":
    d, err := json.MarshalIndent(data, "", "  ")
...
case "yaml":
    d, err := yaml.Marshal(data)

Depending on the concrete Textable (e.g. api.Text), this will emit its struct fields instead of the plain string/HTML/Markdown that users might expect.

If the intent is “JSON/YAML of the rendered text”, consider:

case "json":
-   d, err := json.MarshalIndent(data, "", "  ")
+   d, err := json.MarshalIndent(data.String(), "", "  ")
...
case "yaml", "yml":
-   d, err := yaml.Marshal(data)
+   d, err := yaml.Marshal(data.String())

or clarify the contract so callers know they’re getting a structured representation.

api/table.go (1)

110-112: Lipgloss table logging at Info level may be too noisy

renderLipgloss logs on every render:

logger.Infof("Rendering lipgloss table with max width: %d", width)

This can spam logs in interactive or high‑frequency paths. Consider moving this behind a verbosity check (e.g. logger.V(4).Infof) or removing it once the layout is stable.

formatters/markdown_formatter.go (1)

52-88: TypedValue-based helpers look correct but appear unused

formatSummaryFieldsData and formatTreeData have been updated to work with api.TypedValue and use Markdown()/Tree appropriately, but the comments already note they “appear to be unused”.

If there are no external callers, consider either:

  • Removing them, or
  • Adding tests and explicit references to make their compatibility role clearer.

This will reduce maintenance surface area.

Also applies to: 230-240

formatters/formatters_test.go (2)

548-587: Table date/header assertions OK but rely on behavior outside this test

The table test correctly uses TableOptions{Columns: ...} and is tolerant about header case (ID vs id, CREATED AT vs created_at). The date checks only assert presence of the formatted date substrings, which keeps them robust against spacing changes.

Given the UTC normalization in the parser, once expectations are updated to explicitly use UTC (see previous comment), this test structure looks sound.

Also applies to: 593-603


605-679: Table word-wrapping test is minimal but useful

TestTableWordWrapping validates:

  • Headers are present (case-insensitive),
  • Long/short rows are present,
  • A key fragment of the long description is present (without relying on exact wrapping).

This is a good pragmatic check for integration with the table writer. If you want stronger coverage later, you might also assert that the long description appears on more than one line, but that’s optional.

api/styles.go (3)

11-163: ResolveStyles caching and parsing are correct but could be more efficient

The cache + RWMutex pattern is sound and avoids re‑parsing style strings across calls. Within the loop you:

  • Split styleStr into classes,
  • Call tailwind.ParseStyle(class) for each single class,
  • Then also manually handle font weight/style/decoration, size, padding, and colors.

Since ParseStyle is designed to handle a full style string, repeatedly calling it per class is redundant work; you’re effectively re‑parsing the same token several times (once by your manual logic and once by ParseStyle). Consider either:

  • Calling ParseStyle once per styleStr and using that to drive most of the properties, or
  • Dropping ParseStyle here entirely and relying purely on the manual per‑class handling.

Functionally this is fine; it’s just avoidable overhead on hot paths.


165-219: TailwindStyle’s Font field is never populated

ApplyTailwindStyle returns:

style := TailwindStyle{
    Foreground:    twStyle.Foreground,
    Background:    twStyle.Background,
    Bold:          twStyle.Bold,
    Faint:         twStyle.Faint,
    Italic:        twStyle.Italic,
    Underline:     twStyle.Underline,
    Strikethrough: twStyle.Strikethrough,
    TextTransform: twStyle.TextTransform,
}

but leaves Font at its zero value. Likewise classToTailwindStyle only copies boolean decoration flags and colors, not font name/size/weight.

If you intend TailwindStyle.Font to be part of the public payload, you probably want to:

  • Either populate it from twStyle / Class.Font, or
  • Remove it from TailwindStyle to avoid implying support that isn’t there yet.

Right now it’s a dead field in the public type.


221-269: ANSI formatting works but recreates termenv output every call

formatANSI builds a new termenv.Output on each invocation:

output := termenv.NewOutput(termenv.DefaultOutput().Writer(), termenv.WithProfile(termenv.ANSI))
termStyle := output.String(text)

Functionally this is fine, but on heavily used paths it’s a bit wasteful. A package‑level var output = termenv.NewOutput(...) reused across calls would be cheaper and still safe given termenv’s design.

The strikethrough handling is also a bit asymmetric (sometimes wraps with \x1b[0m, sometimes \x1b[29m). That’s acceptable for now, just something to watch if you see odd interactions with surrounding styles.

formatters/tree_formatter.go (3)

77-102: PrettyData → tree lookup path is clear and aligned with TypedMap

The changes in FormatPrettyData to:

  • Prefer data.Tree when present, then
  • Scan schema fields with format == api.FormatTree and use data.TypedMap[field.Name].Tree,

give a straightforward precedence order and should work well with the new TypedValue/Tree pipeline.

One note: this returns fieldValue.Tree.String() directly, bypassing TreeFormatter’s lipgloss rendering. That’s fine if Tree.String() is the canonical representation, but if you expect TreeFormatter’s options (ASCII vs Unicode, compact mode, etc.) to apply, you may eventually want a path that feeds the underlying api.TreeNode back through FormatTreeFromRoot.


138-189: buildLipglossTree behavior for nil nodes and children

For a nil node, buildLipglossTree returns tree.New(), never nil. Later, when iterating children:

childTree := f.buildLipglossTree(child, depth+1)
if childTree != nil {
    t = t.Child(childTree)
}

If any child is nil, this will attach an “empty” child node instead of skipping it. That’s harmless but slightly odd; returning nil for a nil node and checking for nil before Child() would make the intent clearer and skip empty nodes entirely.

Optional clean‑up, not a blocker.


205-238: Tailwind → lipgloss translation covers core cases, more utilities can be added later

parseTailwindToLipgloss correctly maps:

  • text-* / bg-* via tailwind.ParseTailwindColor,
  • Bold/italic/underline and strikethrough classes.

This is a good starting subset. If you later need support for additional Tailwind utilities (e.g., text-opacity-*, custom fonts, or spacing), they can be layered in here without disturbing callers.

api/types.go (5)

23-25: PrettyFull interface looks reasonable; check that implementers are wired up

Introducing:

type PrettyFull interface {
    PrettyFull() Textable
}

fits well with the new Textable-centric rendering. Just ensure any types that previously only implemented Pretty and expect richer output now implement PrettyFull where appropriate, and that callers prefer PrettyFull when available.


66-76: TableOptions.Columns tagged as "fields" – intentional compatibility?

TableOptions defines:

Columns []PrettyField `json:"fields" yaml:"fields"`

even though the Go field name is Columns. If this is to preserve the old on‑the‑wire schema (fields in JSON/YAML) while modernizing the in‑code name, that’s fine.

If you actually intend the external schema key to be "columns", this tag should be updated; otherwise, maybe add a short comment noting that the fields tag is kept for backward compatibility to avoid confusion.


90-102: FieldValue.Text and FieldValue.Tree extensions match new rendering model

Adding:

Text Textable
Tree TreeNode

gives you a clean bridge into the new Textable / TreeNode APIs. Just be aware that Parse() currently only initializes Text (via createText()); Tree is left to be set by higher‑level code, so callers shouldn’t assume it’s populated after Parse() alone.


533-580: Date parsing for FieldTypeDate consistently normalizes Unix timestamps to UTC

The updated FieldTypeDate case:

  • Treats string, int, int64, and float64 Unix timestamps as seconds since epoch,
  • Creates times with .UTC(),
  • Falls back to a set of date string formats.

This is a good cleanup and matches the comment changes in tests that talk about UTC. Just ensure all test expectations and any humanization functions (like Human in api/human.go) are aligned with this new UTC normalization, not local time.


620-735: createText centralizes formatted display, but map formatting is flat and unordered

createText() now drives all FieldValue.Text construction:

  • Nil → "null" with muted style,
  • *PrettyData → returned directly as Textable,
  • Maps/arrays produce simple {k: v} / [ ... ] strings,
  • Formats (currency, date, float, duration, bytes) get specialized styles.

Two points:

  1. For maps, you currently do:

    for k, val := range v.MapValue {
        pairs = append(pairs, fmt.Sprintf("%s: %v", k, val))
    }

    This is iteration over a Go map, so key order is nondeterministic, and you don’t use prettifyFieldName here. If deterministic ordering or title‑cased keys are important (as implied by map-related tests), you may want to:

    • Sort keys before joining, and/or
    • Use prettifyFieldName(k) for display.
  2. Because createText() is always called from Parse(), any manual FieldValue constructions that skip Parse() will end up with Text == nil and slightly different behavior in Color() and elsewhere. That’s OK as long as the public API expects Parse() to be the entry point.

Overall this is a solid consolidation; map formatting tweaks are optional but could improve consistency with tree/pretty output.

api/text.go (8)

4-11: CompactList and new lo import fit with Human/Textable infrastructure

CompactList[T] building a List{MaxInline: 3} and filling it with Human(item) is a nice generic helper and aligns with the format.go wrapper that uses api.Human. The lo import is also used later in PrintfWithStyle, so dependency-wise this is fine.

One behavioral difference vs some older helpers is that an empty list now renders as "" instead of "[]"; ensure that’s intentional for all call sites.

Also applies to: 37-45


107-117: List behavior is fine; Unstyled is currently unused

The List rendering methods (String/Markdown/ANSI/HTML) behave sensibly for numbered and bulleted lists and honor MaxInline. The new Unstyled flag isn’t used in these methods yet, so it’s effectively dead state.

If you intend Unstyled to suppress bullets/numbering, consider wiring it into Markdown/ANSI/HTML later; otherwise you could drop it to avoid confusion.

Also applies to: 119-193


194-255: New helpers (NewLine/HR/Space/Tab/Append string handling) improve composition

Notable points:

  • NewLine and HR now append BR/HR and re‑indent, which keeps HTML and plain-text fallbacks consistent.
  • Space() and Tab() append NBSP/TAB, which is better than raw spaces when targeting HTML.
  • Append’s string case now splits on \n and inserts BR between lines, avoiding literal newlines that don’t render in HTML.

These changes should make mixed-format rendering more robust. No correctness issues spotted.

Also applies to: 262-290


315-323: PrintfWithStyle now routes args through Human, which changes formatting semantics

Mapping args as:

args = lo.Map(args, func(i any, _ int) any {
    return Human(i)
})
t.Children = append(t.Children, Text{Content: fmt.Sprintf(format, args...), Style: style})

means each argument is first converted to a Text via Human, and then fmt.Sprintf prints those Text values using the Text.Format implementation.

This gives you consistent humanization (durations, numbers, etc.) but also means:

  • %s and %v now see Text instead of raw values, and
  • The resulting string may already contain ANSI codes when using non‑%s verbs.

If this function is only used for human-facing output, that’s acceptable. Just be aware this is a behavior change vs interpolating raw values, and document it accordingly so callers don’t rely on strict fmt formatting semantics here.


407-430: KeyValuePair/DescriptionList empty handling avoids noise but can leave stray separators

KeyValuePair.IsEmpty and KeyValue now suppress entries where value == nil or formats to "", which is good.

However, DescriptionList.String / ANSI / Markdown still do:

for _, item := range dl.Items {
    parts = append(parts, item.String()/ANSI()/Markdown())
}
return strings.Join(parts, ", ")

If some items are empty, you’ll get extra separators (e.g. ", value" or ", , value"). To tighten this up, you could skip empties:

for _, item := range dl.Items {
    if item.IsEmpty() {
        continue
    }
    parts = append(parts, item.String())
}

Same idea for ANSI/Markdown. Not a blocker, but it would make the output cleaner.

Also applies to: 493-517


519-545: Map helper bypasses KeyValue() and may render nil/empty values

Map constructs DescriptionList items using KeyValuePair{Key: k, Value: m[k], Style: style} directly, so the KeyValue nil/empty guard is not applied. Combined with IsEmpty, this can again produce empty strings in the list and stray separators (see previous comment).

If you want Map to inherit the same “skip empty values” semantics as KeyValue, consider using KeyValue(k, m[k], style) instead of constructing KeyValuePair directly, and then applying the same IsEmpty filtering when building the DescriptionList.


599-687: TextList is a useful façade over []Textable

TextList and its helpers (JoinNewlines, Join, Indent, As*, ANSI/HTML/Markdown) provide a clean way to work with slices of Textable without manually joining/indenting each time. Implementation is straightforward and consistent with the rest of the Text API.

Only nit: Indent() wraps each item as Text{Content: "\t"}.Add(item). If you ever support variable indentation (spaces vs tabs), you might want an option there, but as-is it’s fine.


689-713: ExtractOrderValue correctly parses order-N; consider Tailwind’s named orders

ExtractOrderValue scans for order-<number> and returns that integer, defaulting to 0. That’s perfectly adequate for the common order-1..order-12 cases and for sorting columns.

Tailwind also has named orders like order-first, order-last, and order-none. Right now those will map to 0 (same as “no order”), which is fine if you don’t use them in column styles; just be aware that if they appear, they won’t influence ordering.

api/parser.go (1)

449-502: parseTableData depends entirely on schema Columns; empty Columns yields structurally empty tables

parseTableData builds TextTable rows by iterating field.TableOptions.Columns for headers and per‑row values. If Columns is empty (e.g. schema not populated or auto‑detection failed), you end up with:

  • len(table.Rows) == val.Len(), but
  • Each row is an empty TableRow, and
  • Headers is empty as well.

If callers expect a non‑empty TextTable whenever the input slice contains data, you may want to:

  • Either derive columns from the first element when Columns is empty (similar to NewTableFromRows), or
  • Return TextTable{} early when len(field.TableOptions.Columns) == 0 to avoid misleading “empty” tables.
api/icons/icons.go (1)

186-190: Operation icons (Add/Delete/Edit/Rename) are not exposed via All

You added Add, Delete, Edit, and Rename icons (Lines 186–189) but they are not present in the All map (Lines 192–329). If All is meant as the central lookup registry, callers won’t be able to get these by key the way they do for skip, start, success, test, etc.

If that’s unintended, consider registering them:

 var All = map[string]Icon{
@@
 	"variable":            Variable,
 	"exclamation":         Exclamation,
 	"wrench":              Wrench,
 	"zombie":              Zombie,
+	"add":                 Add,
+	"delete":              Delete,
+	"edit":                Edit,
+	"rename":              Rename,
 }

If All is intentionally limited to a subset (e.g., mostly file/types), a short comment documenting the selection criteria would help future maintainers.

Also applies to: 192-329

Comment thread ai/api.go
Comment thread api/html.go
Comment on lines +8 to +43
func (t Text) HTML() string {
content := t.Content
for _, child := range t.Children {
content += child.HTML()
}

// Get the effective style (Class takes precedence over Style string)
var style TailwindStyle
var transformedText string
var originalStyle string

if t.Class != (Class{}) {
// Use Class if available
transformedText = content
style = classToTailwindStyle(t.Class)
// Could convert Class back to style string if needed
originalStyle = ""
} else if t.Style != "" {
// Fall back to Style string
transformedText, style = ApplyTailwindStyle(content, t.Style)
originalStyle = t.Style
} else {
// No style
transformedText = content
}

html := formatHTML(transformedText, style, originalStyle)

// Apply tooltip if present
if t.Tooltip != nil && t.Tooltip.String() != "" {
// HTML-escape the tooltip content using standard library
escapedTooltip := htmlEscapeString(t.Tooltip.String())
html = fmt.Sprintf(`<span title="%s">%s</span>`, escapedTooltip, html)
}
return html
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

Unescaped Text.Content is emitted as raw HTML

Text.HTML() concatenates t.Content and each child’s HTML() and passes the result straight into formatHTML without escaping. If any untrusted content reaches Text.Content or a Text child, it will be rendered as raw HTML and can inject markup or scripts.

If the intent is “HTML by default”, this should be clearly documented and you may want a separate “safe” rendering path; otherwise, consider HTML‑escaping content by default and only allowing raw HTML via explicit HtmlElement usage.


🏁 Script executed:

#!/bin/bash
# Examine the full api/html.go file to understand all related functions
cat api/html.go

Length of output: 6101


🏁 Script executed:

#!/bin/bash
# Search for formatHTML definition
rg -n "func formatHTML" --type go

Length of output: 96


🏁 Script executed:

#!/bin/bash
# Search for ApplyTailwindStyle definition
rg -n "func ApplyTailwindStyle" --type go

Length of output: 90


🏁 Script executed:

#!/bin/bash
# Search for htmlEscapeString implementation
rg -n "func htmlEscapeString" --type go

Length of output: 55


🏁 Script executed:

#!/bin/bash
# Find all usages of Text.Content to understand data flow
rg -n "\.Content" --type go -B2 -A2 | head -100

Length of output: 5247


🏁 Script executed:

#!/bin/bash
# Look for any existing HTML escaping in the codebase
rg -n "html.EscapeString\|htmlEscape\|HTMLEscape" --type go

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Check for Text struct definition and Comments field
rg -n "type Text struct" --type go -A 20

Length of output: 2015


🏁 Script executed:

#!/bin/bash
# Look for any tests related to Text.HTML() to understand expected behavior
rg -n "Text.*HTML\|TestText" --type go

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Search for sanitization, validation, or HTML-related functions
rg -n "sanitize\|sanitise\|clean\|escape\|validate" --type go -i | head -50

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Search for any README or documentation about HTML rendering
fd -type f -name "README*" -o -name "*.md" | xargs grep -l "HTML\|html" 2>/dev/null | head -5

Length of output: 170


🏁 Script executed:

#!/bin/bash
# Look for comments in api/text.go about Content field and rendering intent
cat api/text.go | head -100

Length of output: 2197


🏁 Script executed:

#!/bin/bash
# Check if Text is ever constructed with user input or external data
rg -n "api\.Text\{" --type go -A 3 | head -60

Length of output: 2298


🏁 Script executed:

#!/bin/bash
# Look at what Icon() returns to understand potential untrusted data
rg -n "func.*Icon\(\)" --type go -A 10 | head -30

Length of output: 974


🏁 Script executed:

#!/bin/bash
# Search for any upstream validation of content in Text structs
rg -n "Icon()" --type go -B 5 -A 2 | grep -E "Content|Icon" | head -30

Length of output: 1981


HTML content is output unescaped, creating XSS vulnerabilities in multiple places

Verification confirms the original concern. Text.Content and all HtmlElement content is concatenated and output directly without escaping:

  1. Text.HTML() (line 8-31): t.Content concatenated and passed to formatHTML() unescaped
  2. HtmlElement.HTML() (line 97): e.Content output directly in formatted string
  3. Badge() (line 100-102): Label parameter wrapped in unescaped HtmlElement.Content
  4. formatAttributes() (line 171-176): Attribute values not escaped—permits attribute injection

Additional findings:

  • htmlEscapeString() helper exists but is only used inconsistently (KeyValuePair escapes; Text does not)
  • No upstream sanitization or validation
  • No design documentation indicating raw HTML pass-through is intentional
  • The inconsistency (KeyValuePair escapes with explicit comment; Text does not) suggests this is an oversight
🤖 Prompt for AI Agents
In api/html.go around lines 8 to 43, Text.HTML currently concatenates t.Content
and child HTML unescaped and calls formatHTML, which creates XSS risk; update
Text.HTML to HTML-escape t.Content and each child HTML output (using the
existing htmlEscapeString helper or a central escape helper) before composing
transformedText and passing it to formatHTML; also audit and change
HtmlElement.HTML, Badge(), and formatAttributes to escape element content, badge
labels, and all attribute values respectively (use htmlEscapeString consistently
and avoid double-escaping by documenting/marking any trusted/raw HTML paths).

Comment thread api/html.go
Comment on lines +83 to +103
type HtmlElement struct {
Tag string
Attributes map[string]string
Content string
Fallback Textable
}

func Badge(label string, classes ...string) Textable {
allClasses := append([]string{"badge", "p-0.5", "mr-1", "rounded-lg", "text-xs", "font-light bg-gray-200"}, classes...)
return HtmlElement{
Tag: "span",
Attributes: map[string]string{
"class": strings.Join(allClasses, " "),
},
Content: label,
Fallback: Text{
Content: label,
Style: strings.Join(classes, " "),
},
}
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

Badge label is not HTML‑escaped

Badge injects label directly as HtmlElement.Content, which HtmlElement.HTML() prints without escaping. A label containing </& will become raw HTML.

If labels can be user‑controlled, escape them before assigning to Content (e.g. using htmlEscapeString) or document that label must be trusted.


I need to verify the HTML escaping concern in the Badge function. Let me search the codebase for the HtmlElement.HTML() method implementation and available escaping utilities.


---

🏁 Script executed:

```shell
#!/bin/bash
# Find the HtmlElement.HTML() method implementation
rg -n "func.*HTML\(\)" api/ -A 15

Length of output: 11240


🏁 Script executed:

#!/bin/bash
# Search for HTML escaping utilities in the codebase
rg -n "Escape|escape" api/ --type=go -B 2 -A 2

Length of output: 4528


🏁 Script executed:

#!/bin/bash
# Search for Badge() function usage to understand where labels come from
rg -n "Badge\(" . --type=go -B 2 -A 2

Length of output: 745


🏁 Script executed:

#!/bin/bash
# Look at the full api/html.go file to understand context
cat api/html.go

Length of output: 6101


I found the issue. Let me verify the usage patterns and test coverage to assess the severity.


---

🏁 Script executed:

```shell
#!/bin/bash
# Search for tests involving Badge with special characters
rg -n "Badge" api/ --type=go -B 3 -A 5 | head -100

Length of output: 2153


🏁 Script executed:

#!/bin/bash
# Check if there are any tests with HTML-sensitive characters in Badge labels
rg -n "Badge.*<|Badge.*&" api/ --type=go

Length of output: 0


Escape the Badge label in Badge() function

Badge() injects label directly as HtmlElement.Content, which HtmlElement.HTML() outputs unescaped at line 125. A label containing <, &, or similar characters will render as raw HTML, creating an XSS vulnerability.

Apply HTML escaping to the label parameter before assignment:

func Badge(label string, classes ...string) Textable {
	allClasses := append([]string{"badge", "p-0.5", "mr-1", "rounded-lg", "text-xs", "font-light bg-gray-200"}, classes...)
	return HtmlElement{
		Tag: "span",
		Attributes: map[string]string{
			"class": strings.Join(allClasses, " "),
		},
		Content: htmlEscapeString(label),  // Escape label here
		Fallback: Text{
			Content: label,
			Style:   strings.Join(classes, " "),
		},
	}
}

This matches the pattern used in KeyValuePair.HTML() (lines 53–54) and uses the existing htmlEscapeString() utility already defined in this file.

🤖 Prompt for AI Agents
In api/html.go around lines 83 to 103, Badge() places the raw label into
HtmlElement.Content causing unescaped HTML/XSS; call the existing
htmlEscapeString(label) and assign that to HtmlElement.Content (leave the
Fallback.Text.Content as the original label), so the span output is safe while
keeping the fallback unchanged.

Comment thread api/human_test.go
Comment on lines +21 to +23
{input: fmt.Sprintf("(%v in, %v out)", Human(5403200), Human(9003200)), expected: "(5.4M in, 9M out)"},
{input: time.Date(2023, 10, 5, 14, 30, 0, 0, time.UTC), expected: "2023-10-05T14:30:00Z"},
{input: time.Date(2023, 10, 5, 0, 0, 0, 0, time.UTC), expected: "2023-10-05"},
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Test expectation doesn’t match Human()’s UTC formatting

Human() currently formats UTC times with "2006-01-02 15:04:05" (space separator, no trailing Z), so this case will fail every run. Either adjust the expected string or update Human() to emit RFC3339 before landing this test.

-		{input: time.Date(2023, 10, 5, 14, 30, 0, 0, time.UTC), expected: "2023-10-05T14:30:00Z"},
+		{input: time.Date(2023, 10, 5, 14, 30, 0, 0, time.UTC), expected: "2023-10-05 14:30:00"},
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
{input: fmt.Sprintf("(%v in, %v out)", Human(5403200), Human(9003200)), expected: "(5.4M in, 9M out)"},
{input: time.Date(2023, 10, 5, 14, 30, 0, 0, time.UTC), expected: "2023-10-05T14:30:00Z"},
{input: time.Date(2023, 10, 5, 0, 0, 0, 0, time.UTC), expected: "2023-10-05"},
{input: fmt.Sprintf("(%v in, %v out)", Human(5403200), Human(9003200)), expected: "(5.4M in, 9M out)"},
{input: time.Date(2023, 10, 5, 14, 30, 0, 0, time.UTC), expected: "2023-10-05 14:30:00"},
{input: time.Date(2023, 10, 5, 0, 0, 0, 0, time.UTC), expected: "2023-10-05"},
🤖 Prompt for AI Agents
In api/human_test.go around lines 21-23, the test expects UTC timestamps in
RFC3339 ("2023-10-05T14:30:00Z") but Human() currently formats UTC times as
"2006-01-02 15:04:05" (space separator, no trailing Z), causing the test to
fail; fix by either updating the test expectation to match Human()'s current
output or, preferably, modify Human() to emit RFC3339 for UTC times (use
t.Format(time.RFC3339) when the time is UTC or consistently format times to
RFC3339), and ensure the date-only case still returns the intended "YYYY-MM-DD"
string.

Comment thread api/human.go
Comment on lines +22 to +37
func HumanDate(d any, format string) Text {
if format == "" {
format = time.RFC3339
}
switch t := d.(type) {
case time.Time:
return Text{
Content: t.Format(format),
Style: "date",
}
case *time.Time:
return Text{
Content: t.Format(format),
Style: "date",
}
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

Avoid nil-pointer panic for *time.Time in HumanDate

HumanDate calls t.Format(format) for *time.Time without a nil check. Passing a nil *time.Time will panic, unlike Human, which explicitly guards against nil pointers.

Consider handling nil the same way as Human:

 case *time.Time:
-    return Text{
-        Content: t.Format(format),
-        Style:   "date",
-    }
+    if t == nil {
+        return Text{}
+    }
+    return Text{
+        Content: t.Format(format),
+        Style:   "date",
+    }
🤖 Prompt for AI Agents
In api/human.go around lines 22 to 37, the *time.Time branch in HumanDate calls
t.Format(format) without checking for nil which will panic if a nil pointer is
passed; update that branch to handle nil like Human does by returning an empty
Text (or a Text with empty Content and Style "date") when t == nil, preserve the
existing default format behavior, and ensure all code paths return a Text to
avoid panics.

Comment on lines +49 to +78
ginkgo.Context("Time Formatting", func() {
tests := []formatFixture{
{
name: "RFC3339 time (UTC)",
input: time.Date(2024, 1, 15, 14, 30, 0, 0, time.UTC),
style: "date",
str: "2024-01-15T14:30:00Z",
ansi: "2024-01-15T14:30:00Z",
html: `<span class="date date">2024-01-15T14:30:00Z</span>`,
markdown: `2024-01-15T14:30:00Z`,
},
{
name: "RFC3339 time with milliseconds",
input: time.Date(2024, 1, 15, 14, 30, 45, 123456789, time.UTC),
style: "date",
str: "2024-01-15T14:30:45Z",
ansi: "2024-01-15T14:30:45Z",
html: `<span class="date date">2024-01-15T14:30:45Z</span>`,
markdown: `2024-01-15T14:30:45Z`,
},
{
name: "Zero time",
input: time.Time{},
style: "date",
str: "",
ansi: "",
html: `<span class="date date"></span>`,
markdown: ``,
},
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

Time-formatting expectations (T/Z) appear out of sync with api.Human

For the “Time Formatting” and pointer tests you expect strings like:

str:  "2024-01-15T14:30:00Z"
html: `<span class="date date">2024-01-15T14:30:00Z</span>`

for time.Date(..., time.UTC) and *time.Time equivalents.

The current api.Human implementation (from api/human.go) formats time.Time roughly as:

  • YYYY-MM-DD if truncated to a day.
  • YYYY-MM-DD 15:04:05 (no timezone, space separator) for non‑midnight UTC.
  • RFC3339 only for non‑UTC times.

So for a UTC time with hours/minutes, Human will return "2024-01-15 14:30:00", not "2024-01-15T14:30:00Z".

Unless api.Human has been or will be updated in this PR to always emit RFC3339 for these cases, these tests will fail. You probably want to either:

  • Adjust the expectations to match the existing, more human‑oriented format ("YYYY-MM-DD 15:04:05"), or
  • Intentionally change api.Human(time.Time, "date") to use RFC3339 here and keep these tests as the new contract.

Also applies to: 247-255, 321-327


I'll verify this review comment by examining the actual api.Human implementation and the complete test expectations.


---

🏁 Script executed:

```shell
#!/bin/bash
# Find and examine the api.Human implementation
find . -type f -name "human.go" | head -20

Length of output: 14


🏁 Script executed:

#!/bin/bash
# Find the test file and examine all mentioned line ranges
find . -type f -name "time_duration_formatting_test.go" | head -20

Length of output: 45


🏁 Script executed:

#!/bin/bash
# Search for api.Human function definition
rg -n "func.*Human" --type go -A 20 | head -100

Length of output: 4008



---

🏁 Script executed:

```shell
#!/bin/bash
# Get complete api/human.go Human function implementation
cat ./api/human.go | head -120

Length of output: 2448


🏁 Script executed:

#!/bin/bash
# Read formatters test file at specified line ranges
cat ./formatters/time_duration_formatting_test.go | sed -n '49,78p;247,255p;321,327p'

Length of output: 1602


🏁 Script executed:

#!/bin/bash
# Check if api/human_test.go tests are passing or would pass
rg -n "expected:" ./api/human_test.go -A 1 -B 1

Length of output: 563



---

🏁 Script executed:

```shell
#!/bin/bash
# Find and examine the formatters code to see how it formats times
find . -type f -name "*.go" -path "*/formatters/*" | grep -v test | head -10

Length of output: 286


🏁 Script executed:

#!/bin/bash
# Look at the test setup to understand what's being tested
cat ./formatters/time_duration_formatting_test.go | head -50

Length of output: 1159


🏁 Script executed:

#!/bin/bash
# Search for the formatFixture usage and how it calls the formatter
rg -n "formatFixture" ./formatters/time_duration_formatting_test.go -A 2 -B 2 | head -50

Length of output: 571


Tests will fail: api.Human() for UTC times produces space-separated format, but test expectations require RFC3339

The review comment is accurate. Verification confirms the mismatch:

  • Implementation (api/human.go line 62-64): For UTC times with hours/minutes, Human() returns t.Format("2006-01-02 15:04:05") → produces "2024-01-15 14:30:00"
  • Test expectations (formatters/time_duration_formatting_test.go lines 56, 64, 252, 325): All expect "2024-01-15T14:30:00Z" (RFC3339)

This mismatch also exists in api/human_test.go line 22, which expects the same RFC3339 format. These tests will fail with the current implementation.

Either update test expectations to match the space-separated format, or change api.Human() to use RFC3339 for UTC times and align api/human_test.go accordingly.

Comment on lines +196 to +203
name: ">= 24h (2 days)",
input: 48 * time.Hour,
style: "duration",
str: "2d",
ansi: "2d",
html: `<span class="duration duration">2dh</span>`,
markdown: `2d`,
},
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Duration fixture HTML value likely a typo; also duplicated test name

In the ">= 24h (2 days)" fixture:

str:  "2d",
ansi: "2d",
html: `<span class="duration duration">2dh</span>`,

the HTML content 2dh doesn’t match the string/ANSI expectation 2d. Unless HTML() is intentionally adding an extra h, this looks like a typo and should probably be:

- html:     `<span class="duration duration">2dh</span>`,
+ html:     `<span class="duration duration">2d</span>`,

Also, both the 48h and 28h cases use the same name: ">= 24h (2 days)", which can make Ginkgo output ambiguous. Consider renaming one of them (e.g., " >= 24h (2d)" vs " >= 24h (28h / 1d4h)").

🤖 Prompt for AI Agents
In formatters/time_duration_formatting_test.go around lines 196 to 203, the test
fixture for the ">= 24h (2 days)" case has an HTML string that appears to be a
typo (`2dh`) and the test name is duplicated across two fixtures; update the
html field to match the expected output (change `<span class="duration
duration">2dh</span>` to `<span class="duration duration">2d</span>`) and rename
one of the duplicated test names so each fixture has a unique, descriptive name
(e.g., keep one as ">= 24h (2 days)" and rename the other to ">= 24h (28h /
1d4h)").

Comment on lines +138 to 189
// buildLipglossTree builds a lipgloss tree structure from a TreeNode
func (f *TreeFormatter) buildLipglossTree(node api.TreeNode, depth int) *tree.Tree {
if node == nil {
return ""
}

// Check max depth
if f.Options.MaxDepth >= 0 && depth > f.Options.MaxDepth {
return ""
return tree.New()
}

var result strings.Builder

_prefix := prefix
// Build the current line prefix
if depth > 0 {
if isLast {
_prefix += f.Options.LastPrefix
} else {
_prefix += f.Options.BranchPrefix
}
}
result.WriteString(_prefix)

// All TreeNodes now implement Pretty(), so use it for formatting
// All TreeNodes implement Pretty(), so use it for formatting
prettyText := node.Pretty()
// Convert Text to string with appropriate formatting

// Build the node label with styling
var nodeLabel string
if f.NoColor {
result.WriteString(strings.ReplaceAll(prettyText.String(), "\n", "\n"+_prefix))
nodeLabel = prettyText.String()
} else {

// FIXME parse for text for ANSI colors, and then reset the ANSI to print the prefix, and then reset back to the original ansi color
result.WriteString(strings.ReplaceAll(prettyText.ANSI(), "\n", "\n"+api.Text{Content: _prefix, Style: "text-white"}.ANSI()))
// Apply lipgloss styling if we have a style
if prettyText.Style != "" {
style := parseTailwindToLipgloss(prettyText.Style)
nodeLabel = style.Render(prettyText.Content)
} else {
nodeLabel = prettyText.Content
}
}

// Handle compact list node specially
if compactNode, ok := node.(*api.CompactListNode); ok && f.Options.Compact {
items := f.FormatCompactList(compactNode.GetItems(), "")
if items != "" {
result.WriteString(": ")
result.WriteString(items)
nodeLabel = nodeLabel + ": " + items
}
}

result.WriteString("\n")
// Create the tree with this node as root
t := tree.New().Root(nodeLabel)

// Check if node is collapsed (using pretty text as key)
// Check if node is collapsed
if f.Options.CollapsedNodes != nil && f.Options.CollapsedNodes[prettyText.String()] {
return result.String()
return t
}

// Process children
children := node.GetChildren()
for i, child := range children {
isLastChild := i == len(children)-1

// Build the prefix for child nodes
var childPrefix string
if depth > 0 {
childPrefix = prefix
if isLast {
childPrefix += f.Options.IndentPrefix
} else {
childPrefix += f.Options.ContinuePrefix
// Process children if not at max depth
if f.Options.MaxDepth < 0 || depth < f.Options.MaxDepth {
children := node.GetChildren()
for _, child := range children {
childTree := f.buildLipglossTree(child, depth+1)
if childTree != nil {
t = t.Child(childTree)
}
}

childOutput := f.FormatTree(child, depth+1, childPrefix, isLastChild)
result.WriteString(childOutput)
}

return result.String()
return t
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Nil Options on TreeFormatter will now panic

buildLipglossTree and FormatTreeFromRoot both assume f.Options is non‑nil:

  • if compactNode, ok := node.(*api.CompactListNode); ok && f.Options.Compact { ... }
  • if f.Options.CollapsedNodes != nil && ...
  • if !f.Options.UseUnicode { ... }

NewTreeFormatter always sets Options, but existing call sites may still construct TreeFormatter via a zero value or struct literal (e.g. &TreeFormatter{Theme: theme}). That would now cause a nil‑pointer dereference at runtime.

Defensive options:

  • Ensure all external construction goes through NewTreeFormatter, or
  • Add a small helper to lazily initialize f.Options in the methods:
if f.Options == nil {
    f.Options = api.DefaultTreeOptions()
}

before dereferencing it.

Also applies to: 240-271

🤖 Prompt for AI Agents
formatters/tree_formatter.go around lines 138-189 (also applies to 240-271):
methods assume f.Options is non-nil and will panic for zero-valued
TreeFormatter; before any dereference of f.Options add a defensive nil-check
that assigns a default (e.g. if f.Options == nil { f.Options =
api.DefaultTreeOptions() }) or call a small ensureOptions() helper at the start
of buildLipglossTree and other affected methods so f.Options is always non-nil
before using Compact, CollapsedNodes, UseUnicode, MaxDepth, etc.

Comment thread task/batch_test.go
Comment on lines +13 to +51
func TestBatch_ConcurrentContextCancellation(t *testing.T) {
// This test verifies that concurrent context cancellation doesn't cause panics
_, cancel := context.WithCancel(context.Background())

items := make([]func(logger.Logger) (string, error), 10)
for i := range items {
i := i
items[i] = func(log logger.Logger) (string, error) {
time.Sleep(10 * time.Millisecond)
return fmt.Sprintf("item-%d", i), nil
}
}

batch := &Batch[string]{
Name: "test-cancellation",
Items: items,
MaxWorkers: 5,
}

// Cancel context while batch is running
go func() {
time.Sleep(20 * time.Millisecond)
cancel()
}()

results := batch.Run()

// Should be able to read from channel without panics
count := 0
for range results {
count++
}

// Channel should be closed, reading again should immediately return
_, ok := <-results
if ok {
t.Error("Channel should be closed")
}
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Context cancellation in this test is currently a no-op

context.WithCancel creates a derived context that is never passed into the Batch, so the goroutine calling cancel() only cancels an unused context. The test effectively just drains the results channel.

If Batch.Run supports context, consider wiring this test through that API (or the task manager’s context) so that cancellation actually influences batch execution; otherwise the test name/comment should be adjusted to reflect what’s really being verified.

🤖 Prompt for AI Agents
In task/batch_test.go around lines 13 to 51, the test creates a cancellable
context with context.WithCancel but never passes that context into the Batch, so
cancel() is a no-op; either wire the cancellation into the Batch.Run call (or
the task manager’s API) so the derived context is passed to the batch execution
(e.g., change Batch.Run to accept a context or call the existing context-taking
API) and trigger cancel() after starting Run, or if Batch.Run does not support
contexts, remove the unused context/cancel logic and rename the test to reflect
that it only verifies safe channel draining (and update comments accordingly).

Comment thread task/batch_test.go
Comment on lines +203 to +231
func TestBatch_ContextCancellationPropagation(t *testing.T) {
// This test verifies that the batch can be canceled mid-execution
items := make([]func(logger.Logger) (string, error), 5)
for i := range items {
i := i
items[i] = func(log logger.Logger) (string, error) {
time.Sleep(10 * time.Millisecond)
return fmt.Sprintf("item-%d", i), nil
}
}

batch := &Batch[string]{
Name: "test-context-propagation",
Items: items,
MaxWorkers: 5,
}

results := batch.Run()

// Consume results
count := 0
for range results {
count++
}

// We should get at least some results
if count < 1 {
t.Error("Expected at least one result")
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

ContextPropagation test does not currently exercise cancellation

TestBatch_ContextCancellationPropagation’s description talks about mid‑execution cancellation, but the test never creates or cancels a context; it just drains all results and asserts count >= 1. Either:

  • Thread a cancelable context into Batch.Run and cancel it partway through, then assert partial results, or
  • Rename/retarget the test to explicitly validate “at least one result is produced” without referring to cancellation.

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (9)
cobra_command.go (1)

119-160: args:"true" handling changes stdin fallback semantics and lacks multiple-args protection

The new RunE logic only passes args to the field marked args:"true" and passes nil for all others:

// First pass: Find the field with args:"true"
var argsFieldValue *flags.FlagValue
for _, fv := range flagValues {
	if fv.IsArgs {
		argsFieldValue = fv
		break
	}
}

// Process flags and populate struct
for _, fv := range flagValues {
	// Only pass args to the field with args:"true", pass nil to all others
	argsToPass := []string(nil)
	if fv.IsArgs && argsFieldValue == fv {
		argsToPass = args
	}

	if err := flags.AssignFieldValue(optsValue, fv, argsToPass, isStdinAvailable()); err != nil {
		return err
	}
}

Given flags.AssignFieldValue’s documented behavior (stdin fields can also fall back to CLI args when present), this introduces two problems:

  1. Behavioral regression for stdin:"true" fields without args:"true"
    If there is no args:"true" field, stdin‑annotated fields now receive nil args and can no longer use CLI args as fallback, contrary to the logic in AssignFieldValue. Existing commands that relied on stdin:"true" plus positional args will break.

  2. No guard against multiple args:"true" fields
    If multiple fields are annotated with args:"true", the first one found in the flagValues map wins; map iteration order is undefined, so this is nondeterministic.

You can preserve the old behavior when no args:"true" field is defined, while enforcing a single args field, with a small adjustment:

  cmd.RunE = func(c *cobra.Command, args []string) error {
      // Create new instance of opts
      optsValue := reflect.New(optsType).Elem()

-     // First pass: Find the field with args:"true"
-     var argsFieldValue *flags.FlagValue
-     for _, fv := range flagValues {
-         if fv.IsArgs {
-             argsFieldValue = fv
-             break
-         }
-     }
+     // First pass: Find (and validate) the field with args:"true"
+     var argsFieldValue *flags.FlagValue
+     for _, fv := range flagValues {
+         if fv.IsArgs {
+             if argsFieldValue != nil {
+                 return fmt.Errorf("multiple args:\"true\" fields defined")
+             }
+             argsFieldValue = fv
+         }
+     }

      // Process flags and populate struct
      for _, fv := range flagValues {
-         // Only pass args to the field with args:"true", pass nil to all others
-         argsToPass := []string(nil)
-         if fv.IsArgs && argsFieldValue == fv {
-             argsToPass = args
-         }
+         // If an args field exists, only it sees the CLI args.
+         // Otherwise, preserve previous behavior and pass args to all fields
+         argsToPass := args
+         if argsFieldValue != nil && fv != argsFieldValue {
+             argsToPass = nil
+         }

          if err := flags.AssignFieldValue(optsValue, fv, argsToPass, isStdinAvailable()); err != nil {
              return err
          }
      }

This:

  • Keeps the new args:"true" semantics (only that field receives args) when such a field is present.
  • Preserves previous stdin/args fallback behavior when no args:"true" field exists.
  • Fails fast and deterministically if more than one args field is declared.
task/task.go (1)

477-504: Timeout handling in WaitFor reads from the timeout channel twice and can hang

In the WaitFor loop, the timeout case both selects on case <-timeout: and then reads from timeout again when formatting the error:

case <-timeout:
    // Timeout fallback to prevent infinite waiting
    t.mu.Lock()
    if t.status == StatusRunning || t.status == StatusPending {
        t.status = StatusFailed
        t.err = fmt.Errorf("task wait timeout after %s", <-timeout)
        ...

time.After delivers a single value on its channel; the second <-timeout can block indefinitely, defeating the timeout safeguard.

You can format the error without re‑reading the channel, e.g.:

case <-timeout:
    // Timeout fallback to prevent infinite waiting
    t.mu.Lock()
    if t.status == StatusRunning || t.status == StatusPending {
        t.status = StatusFailed
-       t.err = fmt.Errorf("task wait timeout after %s", <-timeout)
+       t.err = fmt.Errorf("task wait timeout after %s", 300*time.Second)
        t.endTime = time.Now()
        t.completed.Store(true)
    }
    t.mu.Unlock()
    goto done

Alternatively, store the timeout duration in a constant or variable and reuse it in both time.After and the error message.

formatters/manager.go (1)

386-412: HTML / HTML-PDF error message in FormatWithSchema still has typo and wrong hint

The FormatWithSchema branch for "html", "html-pdf" correctly uses GetCustomFormatter, but the error message:

  • Contains a typo ("registing"), and
  • Always suggests importing formatters/html even when options.Format is "html-pdf".

This is misleading for the html-pdf case.

Suggested tweak:

-		formatter, ok := GetCustomFormatter(options.Format)
-		if !ok {
-			return "", fmt.Errorf("%s formatter not registered, registing using 'import _ github.com/flanksource/clicky/formatters/html'", options.Format)
-		}
+		formatter, ok := GetCustomFormatter(options.Format)
+		if !ok {
+			return "", fmt.Errorf("%s formatter not registered, register using 'import _ \"github.com/flanksource/clicky/formatters/html\"'", options.Format)
+		}
task/manager.go (1)

701-707: Debug() prints atomic types instead of underlying booleans

Debug() currently formats global.noColor and global.noProgress directly, which will print the atomic.Bool struct rather than the flag value and also bypasses the intended atomic .Load() accessor.

Suggested change:

func Debug() string {
	var result string
-	result += fmt.Sprintf("Task Manager: {no-color=%v, no-progress=%v, workers=%v}\n", global.noColor, global.noProgress, global.workersActive.Load())
+	result += fmt.Sprintf(
+		"Task Manager: {no-color=%v, no-progress=%v, workers=%v}\n",
+		global.noColor.Load(),
+		global.noProgress.Load(),
+		global.workersActive.Load(),
+	)
	// rest unchanged...
api/filter.go (1)

193-205: Update stale comment and document intentional table-row string-only CEL typing

The comment at lines 197-203 in rowToCELMap is outdated. It references Primitive() and typed extraction, but the code uses fieldValue.String() which converts all values to strings. Consequently, getVariableDeclarationsFromRow (line 259) passes only strings to inferCELTypeFromValue, resulting in all table variables being declared as cel.StringType.

This diverges from tree-side behavior (via nodeToCELMap and getVariableDeclarationsFromNode), which preserves raw typed values for proper numeric/boolean CEL comparisons.

Actions required:

  1. Update the comment at lines 197-203 to accurately describe string extraction, or remove it entirely.
  2. Add a brief code comment explaining that table rows intentionally use string-only variables (for JSON-string backing compatibility), and document the implications for CEL expressions (e.g., numeric comparisons must parse values within CEL).
  3. Consider updating user-facing documentation to clarify that table filters operate on string values, unlike tree filters.
formatters/csv_formatter.go (1)

45-72: Handle empty tables so CSV output matches tests (no headers for zero rows)

FormatPrettyData writes headers unconditionally once data.FirstTable() is non‑nil:

table := data.FirstTable()
if table == nil {
    return "", fmt.Errorf("No tables defined")
}

if err := writer.Write(table.Headers.AsString()); err != nil { ... }

for _, row := range table.Rows {
    writer.Write(table.AsString(row))
}

But TestCSVTableEmptyData in formatters/table_labels_test.go expects an empty CSV string (no headers) when there are no data rows. If FirstTable() returns a table with zero rows, this implementation will still emit a header line and cause that test to fail.

You can align the formatter with the test contract by short‑circuiting on empty row sets:

 table := data.FirstTable()
 if table == nil {
-    return "", fmt.Errorf("No tables defined")
+    return "", fmt.Errorf("No tables defined")
 }
 
+// If the table has no rows, don’t emit headers; return empty CSV.
+if len(table.Rows) == 0 {
+    return "", nil
+}
+
 if err := writer.Write(table.Headers.AsString()); err != nil {
     return "", fmt.Errorf("failed to write CSV headers: %w", err)
 }
 
 for _, row := range table.Rows {
-    writer.Write(table.AsString(row))
+    if err := writer.Write(table.AsString(row)); err != nil {
+        return "", fmt.Errorf("failed to write CSV row: %w", err)
+    }
 }
 
 writer.Flush()

(The per‑row error handling is optional but makes the function more robust.)

api/types.go (1)

433-624: Map/struct early-return in PrettyField.Parse likely makes map formatting helpers unreachable

For actualType == FieldTypeStruct || actualType == FieldTypeMap (Lines 474–479), Parse returns immediately:

if actualType == FieldTypeStruct || actualType == FieldTypeMap {
    // For nested structures, we'll handle them separately
    return v, nil
}

This means the subsequent switch cases for FieldTypeMap and the generic map/array normalization + v.createText() initialization (Lines ~582–623) never run for map/struct values. As a result, callers that directly use PrettyField.Parse on a map (e.g. TestNestedMapFormatting) get a FieldValue with only Value populated and no pretty Text / MapValue, which contradicts the behavior your map formatting helpers imply.

If nested maps/structs are supposed to be handled exclusively by the higher‑level struct parser, consider:

  • Removing the dead FieldTypeMap case and documenting that PrettyField.Parse isn’t meant for nested maps/structs, or
  • Dropping the early‑return for maps (and possibly structs) so MapValue/Text are built and map formatting tests work via PrettyField.Parse.

Right now this split makes the API surprising and can lead to tests/users calling Parse directly getting very different results from the struct parser path.

formatters/formatters_test.go (1)

481-519: Fix TestNestedMapFormatting to match current PrettyField.Parse behavior or refactor Parse for maps

The test expectation is incompatible with the current implementation. PrettyField.Parse returns early (lines 469-472 of api/types.go) for actualType == FieldTypeMap, before populating MapValue and before calling createText(). This means:

  • Primitive() returns the raw map[string]interface{} via v.Value
  • fmt.Sprintf("%v", ...) produces Go's default map dump (e.g. map[level1:map[...]]), not the multi-line, title-cased, indented structure the assertions expect

Either:

  1. Remove or rewrite TestNestedMapFormatting to test actual output (raw map dump), or
  2. Refactor PrettyField.Parse to not early-return for maps—instead populate MapValue, call createText(), and ensure createText() produces the multi-line indented format the test expects
formatters/html/html_formatter.go (1)

295-365: Deferred nested tables render empty because TableOptions.Columns is not populated

In FormatPrettyData you collect “deferred” nested tables:

if fieldValue.Table != nil && fieldValue.FieldMeta != nil && !fieldValue.FieldMeta.CompactItems {
    deferredTables = append(deferredTables, deferredTable{
        table:     fieldValue.Table,
        fieldName: prettyFieldName,
        fieldMeta: fieldValue.FieldMeta,
    })
}
...
for _, deferred := range deferredTables {
    // ...
    field := api.PrettyField{Name: deferred.fieldMeta.Name}
    tableHTML = f.formatTableDataHTML(deferred.table, field)
    // or formatTableDataHTMLWithGridJS(...)
}

But field.TableOptions.Columns is empty in this synthetic PrettyField, and both formatTableDataHTML and formatTableDataHTMLWithGridJS iterate field.TableOptions.Columns to build headers and cells. Result: deferred tables will render a section title but no <th> or <td> content.

Given api.TextTable already carries header/field metadata (Headers, FieldNames), a robust fix is to derive a column slice from the TextTable when field.TableOptions.Columns is empty, and use that consistently:

func (f *HTMLFormatter) formatTableDataHTML(table *api.TextTable, field api.PrettyField) string {
    if table == nil || len(table.Rows) == 0 {
        ...
    }

+   // Derive columns either from schema or from the table itself
+   columns := field.TableOptions.Columns
+   if len(columns) == 0 && len(table.Headers) > 0 {
+       columns = make([]api.PrettyField, len(table.Headers))
+       for i, header := range table.Headers {
+           name := header.String()
+           if i < len(table.FieldNames) && table.FieldNames[i] != "" {
+               name = table.FieldNames[i]
+           }
+           columns[i] = api.PrettyField{
+               Name:  name,
+               Label: header.String(),
+           }
+       }
+   }
    // Write headers
-   for _, tableField := range field.TableOptions.Columns {
+   for _, tableField := range columns {
        ...
    }
    ...
-   for _, row := range table.Rows {
+   for _, row := range table.Rows {
-       for _, tableField := range field.TableOptions.Columns {
+       for _, tableField := range columns {
            fieldValue, exists := row[tableField.Name]
            ...
        }
    }
}

and similarly in formatTableDataHTMLWithGridJS, use the same derived columns slice for the columns: config and row data lookups.

That way:

  • Top-level tables using schema-backed TableOptions.Columns continue to work as before.
  • Deferred nested tables (and any other TextTable without schema) render correctly using table.Headers/FieldNames.

Also applies to: 428-447, 593-653, 656-741

♻️ Duplicate comments (23)
formatters/excel.go (1)

68-79: Use field names for row lookups and guard SaveAs when filename == ""

Two issues here are the same as in the earlier review of this file:

  1. Row lookup via headers
    Using table.Headers.AsString() as keys into table.Rows assumes headers are identical to row map keys. If headers are labels and rows are keyed by field names (as intended via TextTable.FieldNames), many cells will be silently empty.

    Prefer FieldNames for lookups and fall back to headers only when FieldNames is empty, while still using headers for display:

-	// Get headers and field names from TableOptions
-	var headers = table.Headers.AsString()
+	// Get headers for display and field names for row lookups
+	headers := table.Headers.AsString()
+	fieldNames := table.FieldNames
+	if len(fieldNames) == 0 {
+		fieldNames = headers
+	}
@@
-	// Write data rows using Text.String() for formatted text
-	for _, row := range table.Rows {
-		for i, fieldName := range headers {
-			cellRef := f.getCellReference(i+1, currentRow)
+	// Write data rows using Text.String() for formatted text
+	for _, row := range table.Rows {
+		for i, fieldName := range fieldNames {
+			cellRef := f.getCellReference(i+1, currentRow)
 			if fieldValue, exists := row[fieldName]; exists {
 				if err := file.SetCellValue(sheetName, cellRef, fieldValue.String()); err != nil {
 					return fmt.Errorf("failed to set cell value: %w", err)
 				}
 			}
 		}
 		currentRow++
 	}
@@
-	// Save the file
-	if err := file.SaveAs(filename); err != nil {
-		return fmt.Errorf("failed to save Excel file: %w", err)
-	}
+	// Save the file only when a filename is provided (on-disk export)
+	if filename != "" {
+		if err := file.SaveAs(filename); err != nil {
+			return fmt.Errorf("failed to save Excel file: %w", err)
+		}
+	}
  1. SaveAs with empty filename
    FormatPrettyData passes filename == "" to FormatPrettyDataToFile, so SaveAs("") will fail and make the in‑memory path unusable. The guard above fixes this while preserving behavior for on-disk exports.

Also applies to: 97-108, 120-120

task/batch_test.go (2)

13-51: Context cancellation in this test is still a no‑op

TestBatch_ConcurrentContextCancellation creates a cancellable stdlib context and calls cancel(), but that context is never threaded into Batch.Run (which relies on the task manager’s flanksourceContext.Context). The cancellation doesn’t influence batch execution; the test is effectively just draining the results channel.

Either:

  • Extend the Batch API to accept a context and pass the derived context through to Run, or
  • Remove the unused context/cancel logic and rename the test/comment to reflect that it verifies safe channel draining, not real cancellation.

203-231: TestBatch_ContextCancellationPropagation does not exercise any cancellation path

This test’s name and comment say “batch can be canceled mid‑execution”, but it never creates or cancels a context; it just runs 5 items and asserts count >= 1.

Same options as above:

  • If you want to test cancellation propagation, add a context‑aware API for Batch.Run (or use the manager’s context) and explicitly cancel it mid‑run, or
  • Rename the test and update the comment to match what’s actually being validated (e.g. that some results are produced and the channel closes cleanly).
formatters/tree_formatter.go (1)

18-35: Nil Options on TreeFormatter can still panic despite constructor defaulting

TreeFormatter methods dereference f.Options directly (e.g., f.Options.Compact, f.Options.CollapsedNodes, f.Options.MaxDepth, f.Options.UseUnicode) without a nil check. Although NewTreeFormatter now fills in default options when options == nil, external callers can still construct a zero-valued TreeFormatter (e.g., &TreeFormatter{Theme: theme}), leading to panics in buildLipglossTree and FormatTreeFromRoot. This matches the earlier review concern and remains unresolved.

A compact way to harden this:

type TreeFormatter struct {
	Theme   api.Theme
	NoColor bool
	Options *api.TreeOptions
}

+func (f *TreeFormatter) ensureOptions() {
+	if f.Options == nil {
+		f.Options = api.DefaultTreeOptions()
+	}
+}

func (f *TreeFormatter) Format(data ...any) (string, error) {
+	f.ensureOptions()
	// existing logic...
}

func (f *TreeFormatter) buildLipglossTree(node api.TreeNode, depth int) *tree.Tree {
+	f.ensureOptions()
	// existing logic...
}

func (f *TreeFormatter) FormatTreeFromRoot(root api.TreeNode) string {
+	f.ensureOptions()
	// existing logic...
}

(and similarly for any other method that reads f.Options).

Also applies to: 138-189, 240-271

formatters/manager.go (1)

87-103: HTML / HTMLPDF logic is inverted and will never successfully render HTML

HTML and HTMLPDF currently return an error when a formatter is registered and attempt to call a (nil) formatter when it is not, with an incorrect package path and typo in the message. This will either always error or panic and matches a previously raised concern.

Suggested fix:

func (f FormatManager) HTML(data interface{}) (string, error) {
-	if formatter, ok := GetCustomFormatter("html"); ok {
-		return "", fmt.Errorf("html formatter not registered, registing using 'import _ github.com/flanksource/clicky/formatters/http'")
-	} else {
-		return formatter(data, FormatOptions{})
-	}
+	formatter, ok := GetCustomFormatter("html")
+	if !ok {
+		return "", fmt.Errorf("html formatter not registered, register using 'import _ \"github.com/flanksource/clicky/formatters/html\"'")
+	}
+	return formatter(data, FormatOptions{})
}
 
func (f FormatManager) HTMLPDF(data interface{}) (string, error) {
-	if formatter, ok := GetCustomFormatter("html-pdf"); ok {
-		return "", fmt.Errorf("html-pdf formatter not registered, registing using 'import _ github.com/flanksource/clicky/formatters/http'")
-	} else {
-		return formatter(data, FormatOptions{})
-	}
+	formatter, ok := GetCustomFormatter("html-pdf")
+	if !ok {
+		return "", fmt.Errorf("html-pdf formatter not registered, register using 'import _ \"github.com/flanksource/clicky/formatters/html\"'")
+	}
+	return formatter(data, FormatOptions{})
}
examples/uber_demo/main.go (1)

585-611: Remove or gate debug/diagnostic output in showAll

showAll still writes debug diagnostics to stderr and logs the filesystem tree separately via clicky.Infof(clicky.MustFormat(*demo.FileSystem, ...)). For a user‑facing demo command this is noisy and bypasses the normal MustPrint/format flags.

Recommend either:

  • Removing the two fmt.Fprintf(os.Stderr, "[DEBUG ...]") lines and the clicky.Infof(...) call, or
  • Gating them behind a dedicated verbosity/log‑level flag and using the project logger consistently.
api/human_test.go (1)

21-23: Test expectation doesn't match Human() UTC formatting.

Line 22 expects "2023-10-05T14:30:00Z" (RFC3339 with T and Z), but Human() in api/human.go (lines 56-60 of relevant snippets) formats UTC times as "2006-01-02 15:04:05" (space separator, no trailing Z). This test will fail on every run.

A past review comment already flagged this issue. Fix by either:

  • Updating the test expectation to match the current implementation: "2023-10-05 14:30:00"
  • Or modifying Human() to emit RFC3339 for UTC times (use t.Format(time.RFC3339))

Apply this diff to fix the test expectation:

-		{input: time.Date(2023, 10, 5, 14, 30, 0, 0, time.UTC), expected: "2023-10-05T14:30:00Z"},
+		{input: time.Date(2023, 10, 5, 14, 30, 0, 0, time.UTC), expected: "2023-10-05 14:30:00"},
formatters/time_duration_formatting_test.go (2)

52-59: Re‑confirm api.Human time formatting contract vs RFC3339‑style expectations

These tests assume api.Human(..., "date") returns RFC3339‑like UTC timestamps (2024-01-15T14:30:00Z / 2024-01-15T14:30:45Z) for both plain time.Time and *time.Time, and that sub‑second precision is dropped. A previous revision of api.Human formatted UTC times as "YYYY-MM-DD 15:04:05" (space separator, no Z), which would make these expectations fail unless api.Human has been updated accordingly.

Please double‑check that the current api.Human implementation now matches this RFC3339‑style contract, or adjust the expectations to mirror the actual behavior.

Also applies to: 245-255, 321-327


196-212: Fix duration HTML typo and duplicate test name for >= 24h (2 days)

In the >= 24h (2 days) fixture for 48h:

str:  "2d",
ansi: "2d",
html: `<span class="duration duration">2dh</span>`,

the HTML content 2dh doesn’t match the string/ANSI "2d". This looks like a typo and will cause the HTML expectation to diverge from the others. Also, both the 48h and 28h cases share the same name, which can make Ginkgo output ambiguous.

Consider:

-            {
-                name:     ">= 24h (2 days)",
+            {
+                name:     ">= 24h (2 days, 48h)",
                 input:    48 * time.Hour,
@@
-                html:     `<span class="duration duration">2dh</span>`,
+                html:     `<span class="duration duration">2d</span>`,
@@
-            {
-                name:     ">= 24h (2 days)",
+            {
+                name:     ">= 24h (2d4h, 28h)",
                 input:    28 * time.Hour,
api/markdown.go (1)

30-50: Underline and faint styles are effectively ignored unless colors are set

hasColors currently only checks Foreground/Background:

hasColors := style.Foreground != "" || style.Background != ""
...
if hasColors {
    var styles []string
    ...
    if style.Faint {
        styles = append(styles, "opacity: 0.6")
    }
}

and Underline is never added to the style list. This has two effects:

  • style.Underline is never visible in the Markdown output, despite the comment claiming it “will be handled by HTML span.”
  • style.Faint only takes effect when a foreground or background color is present; a purely faint style is silently dropped.

To make these attributes observable, consider broadening the span condition and CSS generation, e.g.:

-    hasColors := style.Foreground != "" || style.Background != ""
+    hasSpan := style.Foreground != "" || style.Background != "" || style.Faint || style.Underline
@@
-    if hasColors {
+    if hasSpan {
         var styles []string
@@
         if style.Faint {
             styles = append(styles, "opacity: 0.6")
         }
+        if style.Underline {
+            styles = append(styles, "text-decoration: underline")
+        }

and update the later checks that rely on hasColors (for decoration placement) to use the appropriate boolean (hasSpan and/or a separate hasColors flag).

Also applies to: 44-47, 90-92

formatters/html/html_pdf_formatter.go (1)

14-17: FormatOptions now flow for non‑PrettyData, but *api.PrettyData still bypasses them

Nice improvements here:

  • Auto‑registering "html-pdf" in init() simplifies usage.
  • Changing Format to Format(data interface{}, opts FormatOptions) and calling f.htmlFormatter.Format(data, opts) means most callers’ options (theme, width, etc.) now affect HTML‑PDF rendering.

However, the special‑case branch:

if prettData, ok := data.(*api.PrettyData); ok {
    return f.FormatPrettyData(prettData)
}

still routes *api.PrettyData through FormatPrettyData, which doesn’t accept FormatOptions and internally calls f.htmlFormatter.FormatPrettyData(data) without options. So callers passing a prebuilt *api.PrettyData plus non‑default FormatOptions will still see those options ignored on this path.

Similarly, FormatToFile always uses FormatOptions{} when generating HTML, so there’s currently no way to influence PDF output there via options.

Consider one of:

  • Extending FormatPrettyData (and the HTML formatter’s FormatPrettyData) to accept FormatOptions and threading opts through, or
  • Removing the special‑case and letting f.htmlFormatter.Format(data, opts) handle *api.PrettyData directly, or
  • Providing a FormatToFileWithOptions variant so file output can be customized too.

This keeps behavior consistent regardless of whether callers pass raw structs/slices or pre‑parsed PrettyData.

Also applies to: 42-97, 100-143, 145-197

formatters/schema.go (1)

184-207: JSON/YAML lose primitive types by converting everything to strings

convertTypedValueToInterface falls back to tv.String() for non-map/list values, so numbers, booleans, and nulls become JSON/YAML strings instead of native types. formatPrettyDataToMap then serializes this structure, so "42", "true", etc. appear where 42, true were expected. Consider carrying original types through TypedValue (or metadata on FieldMeta) and branching here to return int/float/bool/nil where possible instead of string.

Also applies to: 209-231

api/human.go (1)

22-37: *Guard against nil time.Time in HumanDate to avoid panic

The *time.Time branch calls t.Format(format) unconditionally. If a nil *time.Time is passed, this will panic. Mirror the nil handling you already use in Human for *time.Time/*time.Duration:

	case *time.Time:
-		return Text{
-			Content: t.Format(format),
-			Style:   "date",
-		}
+		if t == nil {
+			return Text{}
+		}
+		return Text{
+			Content: t.Format(format),
+			Style:   "date",
+		}

This keeps HumanDate safe and consistent.

api/html.go (2)

8-43: Unescaped Text.Content is still emitted as raw HTML (XSS risk)

Text.HTML() concatenates t.Content and each child’s HTML(), then formatHTML wraps that string without escaping. If any Text or child content comes from untrusted input, it can inject arbitrary HTML/JS. The presence of htmlEscapeString suggests you intend to escape where needed, but it’s not applied here.

Consider:

  • HTML-escaping t.Content before styling, and
  • Treating children as already-HTML or adding a distinct “raw HTML” type for trusted markup,

so that the default Text path is safe and only explicit constructs can emit raw HTML.

Also applies to: 128-195


90-103: Badge label, HtmlElement.Content, and attribute values are not escaped (XSS)

Badge stores label directly in HtmlElement.Content, HtmlElement.HTML() prints e.Content verbatim, and formatAttributes interpolates attribute values as %s without escaping. A label or attribute containing <, ", or & will be rendered as raw HTML, enabling injection.

You already have htmlEscapeString; using it here would close the gap:

func Badge(label string, classes ...string) Textable {
@@
-		Content: label,
+		Content: htmlEscapeString(label),
@@
}

func (e HtmlElement) HTML() string {
@@
-	return fmt.Sprintf("<%s %s>%s</%s>", e.Tag, formatAttributes(e.Attributes), e.Content, e.Tag)
+	return fmt.Sprintf("<%s %s>%s</%s>", e.Tag, formatAttributes(e.Attributes), htmlEscapeString(e.Content), e.Tag)
}

func formatAttributes(attrs map[string]string) string {
@@
-		parts = append(parts, fmt.Sprintf(`%s="%s"`, k, v))
+		parts = append(parts, fmt.Sprintf(`%s="%s"`, k, htmlEscapeString(v)))

This keeps NBSP/TAB/BR/HR working while making generic elements safe.

Also applies to: 105-126, 197-203

api/table.go (1)

75-90: Header-derived map key should use plain text, not transformed output

In both render (Lines 75–82) and renderLipgloss (Lines 124–130), the fallback derives fieldName via fieldName = transform(header). For HTML/Markdown/ANSI transformers this injects markup/escape codes, so lookups against row[fieldName] will fail unless FieldNames is explicitly set, leading to empty cells despite present data.

You already use TransformerString(t.Headers[col]) in the style function for a stable key; the same approach should be used for data lookups:

-           if i < len(t.FieldNames) && t.FieldNames[i] != "" {
-               fieldName = t.FieldNames[i]
-           } else {
-               fieldName = transform(header)
-           }
+           if i < len(t.FieldNames) && t.FieldNames[i] != "" {
+               fieldName = t.FieldNames[i]
+           } else {
+               // Use plain-text header as the map key; apply `transform` only when rendering.
+               fieldName = TransformerString(header)
+           }

and similarly in renderLipgloss for fieldName when FieldNames is empty.

Also applies to: 121-130

api/icons/icons.go (2)

42-125: Unicode fallbacks for ArrowDoubleDown and LUA remain inconsistent

Two earlier nits are still present:

  • ArrowDoubleDown uses Unicode "⇕", which is the same as ArrowDoubleUpDown and semantically “up/down”, not “double down”.
  • LUA has Unicode: "", so in ANSI/Markdown/plain text you get an empty string, unlike other language icons which provide a visible fallback.

If these weren’t intentional, consider:

-   ArrowDoubleDown   = Icon{Unicode: "⇕", Iconify: "material-symbols-light:keyboard-double-arrow-down", Style: "muted"}
+   ArrowDoubleDown   = Icon{Unicode: "⇓", Iconify: "material-symbols-light:keyboard-double-arrow-down", Style: "muted"}

-   LUA               = Icon{Unicode: "", Iconify: "vscode-icons:file-type-lua", Style: "muted"}
+   LUA               = Icon{Unicode: "lua", Iconify: "vscode-icons:file-type-lua", Style: "muted"}

(or any other short but visible fallback for Lua).


24-30: Icon.HTML() still ignores Style, so WithStyle() has no effect in HTML

HTML() always renders:

return fmt.Sprintf(`<iconify-icon icon="%s" class="text-lg"></iconify-icon>`, i.Iconify)

and drops Icon.Style, even though most icons initialize Style and WithStyle() mutates it.

To make icon styling observable in HTML while preserving the base size, consider:

func (i Icon) HTML() string {
    if i.Iconify != "" {
-       return fmt.Sprintf(`<iconify-icon icon="%s" class="text-lg"></iconify-icon>`, i.Iconify)
+       class := "text-lg"
+       if i.Style != "" {
+           class += " " + i.Style
+       }
+       return fmt.Sprintf(`<iconify-icon icon="%s" class="%s"></iconify-icon>`, i.Iconify, class)
    }
    return i.Unicode
}

(Optionally also wrap the Unicode fallback in a <span class="..."> if you want the same styling applied there.)

api/parser.go (1)

415-427: Use parsed FieldValue instead of re-processing raw fieldVal

In the field.Type != "" && field.Format != "" branch you call field.Parse(fieldVal.Interface()), but then:

  • Only fieldValue.TimeValue is honored.
  • The non-time path falls back to p.ProcessFieldValue(fieldVal), ignoring any normalization/formatting Field.Parse applied (e.g. currency, URL, custom formats).

This preserves behavior only for time fields and silently drops Field.Parse semantics for every other format.

Consider routing the non-time path through the parsed FieldValue so the special handling in ProcessFieldValueWithVisited for FieldValue structs is actually used, e.g.:

-               // Convert FieldValue to TypedValue
-               if fieldValue.TimeValue != nil {
-                       values[field.Name] = TypedValue{Textable: Human(*fieldValue.TimeValue)}
-               } else {
-                       values[field.Name] = p.ProcessFieldValue(fieldVal)
-               }
+               // Convert parsed FieldValue to TypedValue
+               if fieldValue.TimeValue != nil {
+                       values[field.Name] = TypedValue{Textable: Human(*fieldValue.TimeValue)}
+               } else {
+                       // Let ProcessFieldValue handle FieldValue via its special-case
+                       values[field.Name] = p.ProcessFieldValue(reflect.ValueOf(fieldValue))
+               }

This keeps the existing time behavior while preserving other format-specific conversions coming from Field.Parse.

formatters/html/html_formatter.go (1)

531-561: Compact table rendering still uses header text as the row key

formatCompactTableHTML currently does:

for _, header := range table.Headers {
    ...
}
...
for _, header := range table.Headers {
    cellValue := row[header.String()]
    ...
}

But TextTable separates display labels (Headers) from the data keys (FieldNames), and parseTableData populates FieldNames accordingly. When labels differ from field names, looking up row[header.String()] will produce empty cells.

Consider using the header index to consult FieldNames:

- for _, header := range table.Headers {
+ for i, header := range table.Headers {
+     key := header.String()
+     if i < len(table.FieldNames) && table.FieldNames[i] != "" {
+         key = table.FieldNames[i]
+     }
      ...
- }
+ }
...
- for _, header := range table.Headers {
-     cellValue := row[header.String()]
+ for i, header := range table.Headers {
+     key := header.String()
+     if i < len(table.FieldNames) && table.FieldNames[i] != "" {
+         key = table.FieldNames[i]
+     }
+     cellValue := row[key]
      ...
  }

This matches the way parseTableData structures the table and avoids header/field name mismatches.

api/meta.go (3)

160-178: TextTable.AsString ignores FieldNames, causing mismatched lookups

AsString currently does:

for _, header := range tt.Headers {
    if cell, exists := row[header.String()]; exists {
        result = append(result, cell.String())
    } else {
        result = append(result, "")
    }
}

But TextTable is populated with:

  • Headers[i] as the display label, and
  • FieldNames[i] as the actual key in each TableRow.

When labels differ from field names, this method will never find the cells.

You can mirror the FieldNames lookup pattern suggested for the HTML formatter:

func (tt TextTable) AsString(row TableRow) []string {
    var result []string
-   for _, header := range tt.Headers {
-       if cell, exists := row[header.String()]; exists {
+   for i, header := range tt.Headers {
+       key := header.String()
+       if i < len(tt.FieldNames) && tt.FieldNames[i] != "" {
+           key = tt.FieldNames[i]
+       }
+       if cell, exists := row[key]; exists {
            result = append(result, cell.String())
        } else {
            result = append(result, "")
        }
    }
    return result
}

267-278: TextTree.Markdown recurses via String(), not Markdown()

The Markdown implementation claims to “keep simple indentation”, but children are rendered with child.String():

if tt.Node != nil {
    n = strings.Repeat("  ", tt.depth) + tt.Node.String()
}
for _, child := range tt.Children {
    child.depth = tt.depth + 1
    n += "\n" + child.String()
}

String() uses the lipgloss tree renderer (box-drawing glyphs), so child lines won’t follow the “simple indentation” comment and will differ from the plain-text Markdown representation.

If you want Markdown to be a pure text/indented view, switch children to .Markdown():

for _, child := range tt.Children {
    child.depth = tt.depth + 1
-   n += "\n" + child.String()
+   n += "\n" + child.Markdown()
}

334-386: Tree traversal in TypedValue.Visit will panic on normal TextTree nodes

In the Tree branch:

if tv.Tree != nil {
    if tv.Tree.Node != nil {
        tv.Tree.Node.(TypedValue).Visit(visitor)
    }
    for _, child := range tv.Tree.Children {
        if !child.Visit(visitor) { ... }
    }
}

TextTree.Node has static type Textable, and in normal usage (e.g. via NewTree) it holds Text, not TypedValue. The type assertion tv.Tree.Node.(TypedValue) will therefore panic at runtime whenever this code path is hit.

To safely visit the node’s value, convert it to a TypedValue instead of asserting it already is one:

if tv.Tree != nil {
    if tv.Tree.Node != nil {
-       tv.Tree.Node.(TypedValue).Visit(visitor)
+       if !NewTypedValue(tv.Tree.Node).Visit(visitor) {
+           return false
+       }
    }
    for _, child := range tv.Tree.Children {
        if !child.Visit(visitor) {
            return false
        }
    }
}

This mirrors how you already handle slices, maps, and TextMap entries.

Comment thread ai/api.go
Comment on lines +152 to +203
// ExecutePromptTyped is a generic helper that executes a prompt with type-safe structured output.
// It automatically cleans up the JSON response before unmarshaling into the target type.
func ExecutePromptTyped[T any](ctx context.Context, agent Agent, request TypedPromptRequest[T]) (*TypedPromptResponse[T], error) {
// Convert to regular PromptRequest
var schema T
promptReq := PromptRequest{
Context: request.Context,
Name: request.Name,
Prompt: request.Prompt,
StructuredOutput: &schema,
}

// Execute the request
resp, err := agent.ExecutePrompt(ctx, promptReq)
if err != nil {
return &TypedPromptResponse[T]{
Request: request,
Error: err.Error(),
}, err
}

// Convert to typed response
typedResp := &TypedPromptResponse[T]{
Request: request,
Costs: resp.Costs,
Model: resp.Model,
Duration: resp.Duration,
DurationModel: resp.DurationModel,
CacheHit: resp.CacheHit,
Error: resp.Error,
}

// If there's structured data, convert it
if resp.StructuredData != nil {
if data, ok := resp.StructuredData.(*T); ok {
typedResp.Data = *data
} else {
// Try to marshal and unmarshal through JSON
jsonData, marshalErr := json.Marshal(resp.StructuredData)
if marshalErr != nil {
typedResp.Error = fmt.Sprintf("failed to marshal structured data: %v", marshalErr)
return typedResp, marshalErr
}
if unmarshalErr := json.Unmarshal(jsonData, &typedResp.Data); unmarshalErr != nil {
typedResp.Error = fmt.Sprintf("failed to unmarshal structured data to target type: %v", unmarshalErr)
return typedResp, unmarshalErr
}
}
}

return typedResp, nil
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Fix missing fmt import and consider explicit handling of structured data types

Two points here:

  1. Compile-time issue (must fix)
    The error-path branches use fmt.Sprintf:

    typedResp.Error = fmt.Sprintf("failed to marshal structured data: %v", marshalErr)
    ...
    typedResp.Error = fmt.Sprintf("failed to unmarshal structured data to target type: %v", unmarshalErr)

    but fmt is not imported in this file. Add it to the import block:

    import (
        "context"
        "encoding/json"
        "fmt"
        "time"
        // ...
    )

    Without this, the package won’t compile.

  2. StructuredData conversion edge cases
    The generic conversion logic:

    if data, ok := resp.StructuredData.(*T); ok {
        typedResp.Data = *data
    } else {
        jsonData, marshalErr := json.Marshal(resp.StructuredData)
        ...
        json.Unmarshal(jsonData, &typedResp.Data)
    }

    works well when StructuredData is a *T, a T, or a JSON-serializable map/slice. If StructuredData can ever be a json.RawMessage, []byte, or a raw JSON string, json.Marshal will either base64-encode the bytes or double-encode the JSON string, which could cause unexpected unmarshal failures.

    If those types are possible here, consider special‑casing them, e.g.:

    • json.RawMessage / []byte: use the bytes directly as jsonData.
    • string: treat as raw JSON and []byte(stringVal).

    Otherwise, this path is fine as-is.


🏁 Script executed:

#!/bin/bash
# Check imports at the top of ai/api.go
head -50 ai/api.go | grep -E "^import|^[[:space:]]*\"" | head -20

Length of output: 124


🏁 Script executed:

#!/bin/bash
# Search for StructuredData field definition and PromptRequest/TypedPromptResponse types
rg -n "StructuredData|PromptRequest|TypedPromptResponse" ai/api.go -A2 -B2 | head -100

Length of output: 2892


🏁 Script executed:

#!/bin/bash
# Search for where StructuredData is assigned or populated in the codebase
rg -n "StructuredData\s*=" --type=go -C3 | head -80

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Check if fmt is used elsewhere in ai/api.go
rg -n "fmt\." ai/api.go

Length of output: 200


🏁 Script executed:

#!/bin/bash
# Find Agent interface implementations (implementations of ExecutePrompt)
rg -n "func.*ExecutePrompt" --type=go -B5 -A15 | head -150

Length of output: 5395


🏁 Script executed:

#!/bin/bash
# Search for any code that creates PromptResponse and sets StructuredData
rg -n "StructuredData:" --type=go -B3 -A3

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Check if json.RawMessage is imported or used anywhere in the codebase
rg -n "json\.RawMessage|RawMessage" --type=go

Length of output: 0


Fix missing fmt import

The code uses fmt.Sprintf at lines 192 and 196, but fmt is not in the import block. This will cause a compile error. Add it:

import (
    "context"
    "encoding/json"
    "fmt"
    "time"
    // ...
)

StructuredData conversion edge cases (optional)

The conversion logic handles *T, T, and JSON-serializable types via json.Marshal/Unmarshal. While special-casing for json.RawMessage, []byte, or raw JSON strings would be defensive (to avoid base64-encoding or double-encoding), no evidence of these types currently being assigned to StructuredData was found in the codebase. If such types are introduced in future, consider explicit handling.

🤖 Prompt for AI Agents
In ai/api.go around lines 152 to 203, the code calls fmt.Sprintf on lines ~192
and ~196 but fmt is not imported; add "fmt" to the import block (with other
standard libs) to resolve the compile error. Additionally, optionally harden the
StructuredData conversion: explicitly handle json.RawMessage, []byte and raw
JSON string cases before generic JSON marshal/unmarshal to avoid double-encoding
edge cases.

Comment thread api/themes.go

import (
"os"
"strings"
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Guard terminal size caching against concurrent access

The switch to stderr, wider default (120), height caching, and one‑time debug logging are all sensible. However, terminalWidth, terminalHeight, and terminalDimensionsLogged are mutated/read without synchronization, so concurrent calls to GetTerminalWidth/GetTerminalLines can introduce data races and, in practice, repeated logging.

A small sync.Once helper that caches both width and height would fix races and avoid duplicate term.GetSize calls. For example:

@@
-import (
-	"os"
-	"strings"
-
-	"github.com/charmbracelet/lipgloss"
-	"github.com/flanksource/commons/logger"
-	"github.com/muesli/termenv"
-	"golang.org/x/term"
-)
+import (
+	"os"
+	"strings"
+	"sync"
+
+	"github.com/charmbracelet/lipgloss"
+	"github.com/flanksource/commons/logger"
+	"github.com/muesli/termenv"
+	"golang.org/x/term"
+)
@@
-var terminalWidth = -1
-var terminalDimensionsLogged = false
+var (
+	terminalWidth            = -1
+	terminalHeight           = -1
+	terminalDimensionsLogged = false
+	terminalSizeOnce         sync.Once
+)
+
+func initTerminalSize() {
+	width, height, err := term.GetSize(int(os.Stderr.Fd()))
+	if err != nil {
+		terminalWidth = 120
+		terminalHeight = 40
+		return
+	}
+	terminalWidth = width
+	terminalHeight = height
+
+	if !terminalDimensionsLogged && logger.V(4).Enabled() {
+		terminalDimensionsLogged = true
+
+		halfWidth := width / 2
+		line50 := strings.Repeat("─", halfWidth)
+		line100 := strings.Repeat("─", width)
+
+		logger.V(4).Infof("Terminal dimensions: width=%d, height=%d", width, height)
+		_, _ = os.Stderr.WriteString(line50 + "\n")
+		_, _ = os.Stderr.WriteString(line100 + "\n")
+	}
+}
@@
 func GetTerminalWidth() int {
-	if terminalWidth != -1 {
-		return terminalWidth
-	}
-	width, height, err := term.GetSize(int(os.Stderr.Fd()))
-	if err != nil {
-		return 120 // Default width
-	}
-	terminalWidth = width
-
-	// Log terminal dimensions once at startup when trace logging is enabled
-	if !terminalDimensionsLogged && logger.V(4).Enabled() {
-		terminalDimensionsLogged = true
-
-		// Create test lines with box drawing character
-		halfWidth := width / 2
-		line50 := strings.Repeat("─", halfWidth)
-		line100 := strings.Repeat("─", width)
-
-		logger.V(4).Infof("Terminal dimensions: width=%d, height=%d", width, height)
-		os.Stderr.WriteString(line50 + "\n")
-		os.Stderr.WriteString(line100 + "\n")
-	}
-
-	return width
+	terminalSizeOnce.Do(initTerminalSize)
+	if terminalWidth == -1 {
+		return 120
+	}
+	return terminalWidth
 }
@@
-var terminalHeight = -1
-
 func GetTerminalLines() int {
-	if terminalHeight != -1 {
-		return terminalHeight
-	}
-	_, height, err := term.GetSize(int(os.Stderr.Fd()))
-	if err != nil {
-		return 40 // Default height
-	}
-	terminalHeight = height
-	return height
+	terminalSizeOnce.Do(initTerminalSize)
+	if terminalHeight == -1 {
+		return 40
+	}
+	return terminalHeight
 }

This keeps the behavior (including logging) but makes it race‑free and avoids duplicate terminal size syscalls.

Also applies to: 8-8, 215-243, 245-257, 259-261

Comment on lines +706 to +709
// showTrees displays tree structure examples
func showTrees(opts clicky.FileTreeOptions) (any, error) {
return clicky.NewFileSystem(".", clicky.WithMaxDepth(opts.MaxDepth)), nil
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Tree CLI flags (hidden, size, modified, age) are ignored in showTrees

showTrees receives a fully-populated clicky.FileTreeOptions from Cobra, but currently calls:

return clicky.NewFileSystem(".", clicky.WithMaxDepth(opts.MaxDepth)), nil

This means:

  • opts.ShowHidden is never passed (despite WithHiddenFiles existing),
  • opts.ShowSize, opts.ShowModified, and opts.ShowAge are never propagated into the FileTreeNode options used by Pretty.

As a result, the --hidden, --size, --modified, and --age flags exposed on the trees command do nothing.

Consider wiring all relevant fields through, e.g. by:

  • Passing WithHiddenFiles(opts.ShowHidden), and
  • Ensuring FileTreeOptions used inside NewFileSystem/buildFileTree is initialized from the CLI opts (not just the functional options), so display flags affect Pretty() output.
🤖 Prompt for AI Agents
In examples/uber_demo/main.go around lines 706 to 709, showTrees ignores CLI
flags in opts and only sets MaxDepth; wire all display flags into the filesystem
creation by passing clicky.WithHiddenFiles(opts.ShowHidden) and any other
functional options the package exposes for size/modified/age, and ensure the
internal FileTreeOptions used by NewFileSystem/buildFileTree is initialized from
the incoming opts (ShowSize, ShowModified, ShowAge) so those flags are applied
to the FileTreeNode/Pretty output; update showTrees to construct and pass the
full set of options rather than only WithMaxDepth.

Comment on lines 36 to 41
// HTMLFormatter handles HTML formatting
type HTMLFormatter struct {
IncludeCSS bool
IsPDFMode bool
tableCounter int // Counter for generating unique table IDs
nodeCounter int // Counter for generating unique tree node IDs
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

Singleton HTMLFormatter mutates tableCounter without synchronization

init() registers a single HTMLFormatter instance:

func init() {
    html := NewHTMLFormatter()
    RegisterFormatter("html", html.Format)
}

and generateTableID mutates tableCounter on that shared instance:

func (f *HTMLFormatter) generateTableID() string {
    f.tableCounter++
    return fmt.Sprintf("gridjs-table-%d", f.tableCounter)
}

If Format can be called from multiple goroutines (e.g. in an HTTP server), this introduces a data race on tableCounter.

To make this safe:

  • Either create a fresh HTMLFormatter per call in the registry, or
  • Guard tableCounter with sync/atomic or a mutex.

For example:

type HTMLFormatter struct {
-   IncludeCSS   bool
-   IsPDFMode    bool
-   tableCounter int
+   IncludeCSS   bool
+   IsPDFMode    bool
+   tableCounter uint64
}
...
func (f *HTMLFormatter) generateTableID() string {
-   f.tableCounter++
-   return fmt.Sprintf("gridjs-table-%d", f.tableCounter)
+   id := atomic.AddUint64(&f.tableCounter, 1)
+   return fmt.Sprintf("gridjs-table-%d", id)
}

(or keep a local counter if you end up instantiating a formatter per request).

Also applies to: 751-754


🏁 Script executed:

#!/bin/bash
# Quick grep to see typical call sites for the HTML formatter and whether it's used in concurrent HTTP handlers.
rg -n "RegisterFormatter\(\"html\"" .
rg -n "FormatWithOptions\\(.*html" .
rg -n "Format\\(.*\"html" .

Length of output: 78


I need to gather more information about how the formatter is used and whether the data race concern is valid.

Now I need to search the codebase more broadly to understand how the formatter is actually used.

Let me try with proper codebase context:


---

🏁 Script executed:

```shell
#!/bin/bash
# Search for calls to the Format method more broadly
rg -n "\.Format\(" . -A 2 -B 2
# Also search for how the registered formatter callback is invoked
rg -n "Formatter\(" . -A 2 -B 2 | head -50

Length of output: 27031


🏁 Script executed:

#!/bin/bash
# Look at the html_formatter.go file to understand the structure
cat formatters/html/html_formatter.go | head -100

Length of output: 2577



---

🏁 Script executed:

```shell
#!/bin/bash
# Look at html_pdf_formatter to see if it uses the singleton
cat formatters/html/html_pdf_formatter.go | head -80

# Also check generateTableID implementation
rg -n "generateTableID" formatters/html/ -A 5 -B 2

Length of output: 4795



---

🏁 Script executed:

```shell
#!/bin/bash
# Check how registered formatters are invoked in the manager
cat formatters/formatters.go | grep -A 20 "func (m \*FormatManager) Format"

# Also check if there's any HTTP handler that uses formatters
rg -n "http\." rpc/ --type=go -A 2 -B 2 | head -50

# Search for goroutine patterns around formatting
rg -n "go " rpc/serve.go -A 3 -B 1

Length of output: 2769



---

🏁 Script executed:

```shell
#!/bin/bash
# Find FormatManager implementation
fd -e go | xargs rg -l "type FormatManager struct" | head -3

# Look at it
find . -name "*.go" -type f | xargs rg -A 30 "type FormatManager struct" | head -60

# Search for where registered formatters are stored/accessed
rg -n "registeredFormatters\|formatters\[" formatters/ --type=go -B 2 -A 2 | head -40

Length of output: 1904



---

🏁 Script executed:

```shell
#!/bin/bash
# Find RegisterFormatter function
rg -n "func RegisterFormatter" . --type=go -A 8

# Also look for the format.go file which seemed to have RegisterFormatter
cat format.go | grep -A 10 "func RegisterFormatter"

# Look for the global registry
rg -n "var.*formatters.*=" . --type=go | head -20

Length of output: 1339



---

🏁 Script executed:

```shell
#!/bin/bash
# Find where custom formatters are invoked
rg -n "GetCustomFormatter\|customFormatters\[" . --type=go -A 3 -B 2

# Look at custom.go to see full implementation
cat formatters/custom.go

Length of output: 2192



---

🏁 Script executed:

```shell
#!/bin/bash
# Search for where "html" format is used
rg -n '"html"' . --type=go -B 2 -A 2 | grep -A 2 -B 2 -i format

# Look at the Format function in format.go to see how it uses formatters
cat format.go | grep -A 40 "^func Format\("

Length of output: 8868



---

🏁 Script executed:

```shell
#!/bin/bash
# Look at schema.go around line 171 to see the complete flow
sed -n '165,185p' formatters/schema.go

# Check manager.go HTML method
sed -n '87,95p' formatters/manager.go

# Look for HTTP handler that calls Format or FormatWithOptions
rg -n "FormatWithOptions\|\.Format\(" rpc/serve.go -B 3 -A 3

Length of output: 1101



---

🏁 Script executed:

```shell
#!/bin/bash
# Verify the exact signature and usage of Format method to ensure it's truly the registered singleton
sed -n '140,160p' formatters/html/html_formatter.go

# Check if there's any existing sync mechanism we might have missed
rg -n "atomic\|sync\|mutex" formatters/html/html_formatter.go

# Verify the singleton is registered exactly as the review states
sed -n '31,35p' formatters/html/html_formatter.go

Length of output: 481


Singleton HTMLFormatter mutates tableCounter without synchronization — data race confirmed

Verification confirms the vulnerability described in the review:

  • The singleton is registered at init() (line 33): RegisterFormatter("html", html.Format)
  • generateTableID() (lines 751-754) mutates tableCounter unsynchronized: f.tableCounter++
  • The method is called from Format (lines 248, 400, 444)
  • HTTP endpoints accept format=html query parameters (http_test.go:20)
  • Concurrent HTTP requests invoke the formatter via GetCustomFormatter("html") (formatters/schema.go:171-176)

This creates a data race: multiple concurrent requests will simultaneously call FormatgenerateTableID() on the shared singleton, mutating tableCounter without guards.

Recommended fix: Use sync/atomic or a mutex to protect tableCounter. The suggested diff approach with atomic.AddUint64() is appropriate:

type HTMLFormatter struct {
-   IncludeCSS   bool
-   IsPDFMode    bool
-   tableCounter int
+   IncludeCSS   bool
+   IsPDFMode    bool
+   tableCounter uint64
}

func (f *HTMLFormatter) generateTableID() string {
-   f.tableCounter++
-   return fmt.Sprintf("gridjs-table-%d", f.tableCounter)
+   id := atomic.AddUint64(&f.tableCounter, 1)
+   return fmt.Sprintf("gridjs-table-%d", id)
}

Alternatively, create a fresh formatter instance per request rather than reusing the singleton.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
formatters/html/html_formatter.go lines 36-41 and generateTableID at ~751-754:
the singleton HTMLFormatter mutates tableCounter without synchronization causing
a data race; fix by either (A) changing tableCounter to a uint64 and using
sync/atomic (atomic.AddUint64(&f.tableCounter, 1) and atomic.LoadUint64 when
reading) in generateTableID to perform increments atomically, or (B) add a
sync.Mutex on the struct and lock/unlock around increments and reads in
generateTableID, or (C) stop using a singleton by changing the registration to
return a new HTMLFormatter instance per request so each request has its own
tableCounter; implement one of these approaches and update all related callers
if needed.

Comment thread formatters/manager.go
Comment on lines +240 to +248
// Extract single element from variadic data
var d any
if len(data) == 1 {
d = data[0]
} else {
d = data
}
logger.V(4).Infof("Extracted data type: %T", d)

Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

HTML / HTML-PDF branch in FormatWithOptions has misleading error and redundant lookup

In the "html", "html-pdf" case, you:

  • Repeat the GetCustomFormatter lookup already performed at the top of FormatWithOptions, and
  • Always return "html formatter not registered, register using 'import _ github.com/flanksource/clicky/formatters/html'" even when format == "html-pdf".

This is confusing for html-pdf users and unnecessary work.

A minimal cleanup:

-	case "html", "html-pdf":
-		if formatter, ok := GetCustomFormatter(format); ok {
-			return formatter(data, options)
-		}
-		return "", fmt.Errorf("html formatter not registered, register using 'import _ github.com/flanksource/clicky/formatters/html'")
+	case "html", "html-pdf":
+		// At this point custom formatter lookup has already been attempted above.
+		// If we get here, there is no registered formatter.
+		return "", fmt.Errorf("%s formatter not registered, register using 'import _ \"github.com/flanksource/clicky/formatters/html\"'", format)

Also applies to: 274-279

🤖 Prompt for AI Agents
In formatters/manager.go around lines 240-248 (and similarly at 274-279), you
repeat the GetCustomFormatter lookup and always return the HTML-specific
registration error even when format == "html-pdf"; remove the redundant
GetCustomFormatter call and reuse the formatter value already retrieved at the
top of FormatWithOptions, and update the error return to either use a generic
"formatter not registered" message that includes the actual format variable or
conditionally return an "html-pdf requires html formatter" message only when
format == "html-pdf" so users see the correct registration guidance.

"Price": {Content: fmt.Sprintf("$%.2f", p.Price), Style: "text-green-600 order-3"},
}
}
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

Remove extra closing brace (syntax error).

There is an extra closing brace at line 154 that will cause a compilation error. The PrettyRow method is already closed at line 153.

Apply this diff to remove the extra brace:

 		"Price": {Content: fmt.Sprintf("$%.2f", p.Price), Style: "text-green-600 order-3"},
 	}
 }
-}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
}
"Price": {Content: fmt.Sprintf("$%.2f", p.Price), Style: "text-green-600 order-3"},
}
}
🤖 Prompt for AI Agents
In formatters/pretty_row_integration_test.go around line 154, there is an extra
closing brace '}' that duplicates the end of the PrettyRow method and causes a
compilation error; remove the extraneous brace on line 154 so the file has the
correct matching braces and compiles.

Comment thread task/batch.go
Comment on lines +34 to +199
func (b *Batch[T]) Run() chan BatchResult[T] {
if b.MaxWorkers <= 0 {
b.MaxWorkers = 3
}
total := len(b.Items)
results := make(chan BatchResult[T], total)

// Synchronization primitives to prevent race conditions
var closeOnce sync.Once
var wg sync.WaitGroup
var taskMu sync.Mutex // Protects concurrent task updates
closeResults := func() {
closeOnce.Do(func() {
close(results)
})
}

StartTask(b.Name, func(ctx flanksourceContext.Context, t *Task) (interface{}, error) {
b.tracef(t, "Run starting: %s, items=%d, context.Err=%v", b.Name, total, ctx.Err())

sem := semaphore.NewWeighted(int64(b.MaxWorkers))
count := atomic.Int32{}
done := make(chan error, 1) // Channel to signal completion from monitoring goroutine

t.SetName(fmt.Sprintf("%s %d of %d (concurrency:%d)", b.Name, 0, total, b.MaxWorkers))
t.SetProgress(0, total)

for i, item := range b.Items {
b.tracef(t, "Queuing %s %d of %d", item, i+1, total)

// Check for context cancellation before acquiring semaphore
if ctx.Err() != nil {
closeResults()
return nil, ctx.Err()
}

if err := sem.Acquire(ctx, 1); err != nil {
t.Errorf("failed to acquire semaphore: %v", err)
closeResults()
return nil, err
}
b.tracef(t, "Acquired semaphore %v %d of %d", item, i+1, total)

wg.Add(1)
go func(item func(log logger.Logger) (T, error), itemNum int) {
defer sem.Release(1)
defer wg.Done()

// Panic recovery to prevent goroutine crashes
defer func() {
if r := recover(); r != nil {
t.Errorf("panic in batch item %d: %v", itemNum, r)
results <- BatchResult[T]{Error: fmt.Errorf("panic: %v", r)}
}
}()

// Check for context cancellation before executing
if ctx.Err() != nil {
results <- BatchResult[T]{Error: ctx.Err()}
return
}

start := time.Now()
b.tracef(t, "Running %s %d of %d", item, itemNum, total)

value, err := item(t)
duration := time.Since(start)
results <- BatchResult[T]{Value: value, Error: err, Duration: duration}
newCount := count.Add(1)

// t.Debugf("Item completed: count=%d/%d, duration=%v", newCount, total, duration)

// Protect concurrent task updates with mutex
taskMu.Lock()
t.SetName(fmt.Sprintf("%s %d of %d", b.Name, newCount, total))
t.SetProgress(int(newCount), total)
taskMu.Unlock()

// t.Infof("Finished %s %d of %d", item, newCount, total)
}(item, i+1)
}

// Monitoring goroutine with timeout
go func() {
ticker := time.NewTicker(100 * time.Millisecond)
defer ticker.Stop()

timeout := time.After(5 * time.Hour) // 5 hour timeout for batch completion

for {
select {
case <-ctx.Done():
// Task cancelled, but check if all items completed first
b.tracef(t, "Context cancelled detected: count=%d/%d, context.Err=%v", count.Load(), total, ctx.Err())
wg.Wait()
finalCount := count.Load()
b.tracef(t, "All goroutines finished after cancellation: final count=%d/%d", finalCount, total)

if finalCount >= int32(total) {
// All items actually completed before cancellation
b.tracef(t, "Completed batch %s %d of %d", b.Name, finalCount, total)
taskMu.Lock()
t.Success()
taskMu.Unlock()
closeResults()
done <- nil
} else {
// Genuinely cancelled mid-execution
b.tracef(t, "Batch cancelled: %s (completed %d of %d)", b.Name, finalCount, total)
taskMu.Lock()
t.SetStatus(StatusCancelled)
taskMu.Unlock()
closeResults()
done <- ctx.Err()
}
return
case <-timeout:
// Timeout reached, but check if all items completed
t.Infof("Timeout reached: count=%d/%d", count.Load(), total)
wg.Wait()
finalCount := count.Load()
if finalCount >= int32(total) {
// All items actually completed before timeout
b.tracef(t, "Completed batch %s %d of %d", b.Name, finalCount, total)
taskMu.Lock()
t.Success()
taskMu.Unlock()
closeResults()
done <- nil
} else {
// Genuinely timed out
t.Errorf("Batch timeout: %s (completed %d of %d)", b.Name, finalCount, total)
taskMu.Lock()
err := fmt.Errorf("batch timeout after 5 hours")
_, _ = t.FailedWithError(err)
taskMu.Unlock()
closeResults()
done <- err
}
return
case <-ticker.C:
currentCount := count.Load()
if currentCount >= int32(total) {
// All items completed
b.tracef(t, "Completed batch %s %d of %d", b.Name, currentCount, total)
wg.Wait()
taskMu.Lock()
t.Success()
taskMu.Unlock()
closeResults()
done <- nil
return
}
b.tracef(t, "Waiting %d of %d", currentCount, total)
}
}
}()

// Wait for monitoring goroutine to complete and signal status
t.Debugf("Waiting for monitoring goroutine to signal completion")
err := <-done
t.Debugf("Batch.Run finished: %s, error=%v", b.Name, err)
return nil, err
}).SetLogLevel(logger.Trace1)

return results
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

Closing results inside the scheduling loop can race with active workers and panic

In Run(), the per-item loop closes results and returns when ctx.Err() != nil or sem.Acquire fails:

if ctx.Err() != nil {
    closeResults()
    return nil, ctx.Err()
}

if err := sem.Acquire(ctx, 1); err != nil {
    t.Errorf("failed to acquire semaphore: %v", err)
    closeResults()
    return nil, err
}

If some worker goroutines have already started, they will later execute results <- BatchResult[...], which will panic if the channel has been closed by these early-return paths. The monitoring goroutine already handles ctx.Done(), timeout, and normal completion, and closes results only after wg.Wait(), which is the safe place.

You can fix this by removing the loop‑level closes/returns and letting the monitor own the lifecycle:

 		for i, item := range b.Items {
 			b.tracef(t, "Queuing %s %d of %d", item, i+1, total)
 
-			// Check for context cancellation before acquiring semaphore
-			if ctx.Err() != nil {
-				closeResults()
-				return nil, ctx.Err()
-			}
+			// If context is canceled, stop scheduling new items; monitor goroutine
+			// will observe ctx.Done(), wait for any in-flight workers, and close results.
+			if ctx.Err() != nil {
+				break
+			}
 
-			if err := sem.Acquire(ctx, 1); err != nil {
-				t.Errorf("failed to acquire semaphore: %v", err)
-				closeResults()
-				return nil, err
-			}
+			if err := sem.Acquire(ctx, 1); err != nil {
+				t.Errorf("failed to acquire semaphore: %v", err)
+				// Likely due to context cancellation; let the monitor handle shutdown.
+				break
+			}

The rest of the function (monitor goroutine + <-done) already propagates cancellation/timeout via done and closes results after wg.Wait(), so this change preserves the high‑level semantics while removing the send‑on‑closed‑channel risk.

🤖 Prompt for AI Agents
task/batch.go lines 34-199: the loop currently calls closeResults() and returns
early when ctx.Err() != nil or sem.Acquire fails, which can close the results
channel while worker goroutines still send to it; remove those loop-level
close+return paths and instead stop scheduling more items: replace both
early-return branches with code that records the error (e.g. scheduleErr :=
ctx.Err() or scheduleErr := err), logs it, and breaks out of the for loop so
existing goroutines finish and the monitoring goroutine (which waits on wg and
closes results) owns closing/results lifecycle.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant