v0.6.74: security hardening, workers recycling, next-mdx-remote and opentelemetry updates, data drains to snowflake, blob, datadog, bigquery#4561
… 200 executions (#4543)
* fix(execution): cap isolate memory at 128MB and recycle workers every 100 executions
* fix(execution): set IVM_MAX_EXECUTIONS_PER_WORKER env default to 100
* fix(execution): raise MAX_EXECUTIONS_PER_WORKER from 100 to 200
* fix(execution): update memory limit error messages from 256 MB to 128 MB
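As a rough sketch of the recycling policy these commits describe (the env var and limits come from the messages above; the worker wrapper and factory are hypothetical, not the repo's actual API):

```ts
type IsolateWorker = { execute(code: string): Promise<unknown>; dispose(): Promise<void> }
declare function spawnWorker(): IsolateWorker // hypothetical factory

// Default raised from 100 to 200 per the last commit above.
const MAX_EXECUTIONS_PER_WORKER = Number(process.env.IVM_MAX_EXECUTIONS_PER_WORKER ?? 200)

let worker = spawnWorker()
let executions = 0

export async function runInIsolate(code: string): Promise<unknown> {
  const result = await worker.execute(code)
  if (++executions >= MAX_EXECUTIONS_PER_WORKER) {
    await worker.dispose() // recycle so isolate memory (capped at 128MB) cannot creep
    worker = spawnWorker()
    executions = 0
  }
  return result
}
```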
…erhaul, component modernization (#4544)
* fix(react-doctor): remove unused export types, useEffect clearTimeout missing, a11y fixes
* fix(react-doctor): strip unused export types from contracts, copilot, stores, and components
Remove export keyword from type/interface declarations confirmed to have zero importers across lib/api/contracts/tools/aws/, lib/api/contracts/*.ts, lib/copilot/generated/, stores/workflows/workflow/types.ts, ee/access-control, ee/data-retention, lib/logs/types.ts, and app/workspace component files. TypeScript and API validation both pass clean. Reduces unused-types count from 394 → 181 and fully eliminates the ✗ critical dead-code categories (exports, types, files now show as ⚠ warnings not ✗ errors).
* docs improvements
* fix(react-doctor): delete 50 unused files (dead barrels, unreachable components, stale utilities)
Remove confirmed-unused barrel index.ts files across stores/, connectors/, executor/, lib/, and app/workspace/ that had zero importers. Also delete unreachable components (chat-history-skeleton, trace-spans, logs-list, template-profile, enterprise landing sections), stale utilities (buffered-stream, blob-to-data-url, queued-workflow-execution, compute-edit-sequence), and obsolete generated/contract files. TypeScript passes clean.
* remove dead code
* cleanup
* more
* fix(blog): restore DiffControlsDemo for v0-5 blog post
* fix: restore ContactButton for enterprise blog post, export WIKIPEDIA_PAGE_CONTENT_OUTPUT_PROPERTIES
* fix(react-doc): restore stripped exports and remove server-only dependency
* chore(deps): update lockfile after removing server-only
* added back some exports
* docs
* more
* fix type issues
* tc
* fix docs search route
Co-authored-by: Cursor <cursoragent@cursor.com>
* fix(tag-dropdown): add missing isEqual import from es-toolkit
---------
Co-authored-by: Cursor <cursoragent@cursor.com>
…eted (#4546)
* fix(settings): accurate View navigation after restore in recently deleted
Files now deep-link to /files/{id} (was /files), and folders expand the restored folder + its parent chain in the sidebar before navigating to /w so the user actually lands on the item they restored.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix: fall back to archived folders for parent-chain lookup
The restored folder may not be in the active folders cache yet when View is clicked (the invalidation+refetch fires from onSettled, after onSuccess surfaces the View button). Merge archived folder data — where the restored item still lives — into the lookup map so the expansion loop can always resolve the folder and walk its parent chain.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* chore(styling): canonicalize size-* shorthand for equal height/width
Document the size-* shorthand as the canonical pattern across CLAUDE.md, AGENTS.md, .claude rules, .cursor + .agents commands, and the emcn design-review skill. Default icon size is size-[14px]. Treat h-[Npx] w-[Npx] and h-N w-N pairs as refactor targets. Also migrate the remaining occurrences in recently-deleted.tsx.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
#4547) Adds the enterprise data-drains docs page with two screenshots, a search bar over the drains table, and a UI cleanup pass on the settings component (size-*, useMemo removal, text-sm fixes). The previously proposed env-var reference feature has been dropped — drain credentials remain raw values, encrypted at rest by the existing pipeline.
* fix: address SSRF and token-leakage security vulnerabilities
- Azure TTS SSRF: validate region against /^[a-z][a-z0-9-]{1,30}[a-z0-9]$/
in both the contract (tts.ts) and runtime guard in synthesizeWithAzure,
preventing user-supplied region from redirecting requests to arbitrary hosts
- HubSpot token in logs: remove fullResponse from logger.info call;
log only non-sensitive metadata (hub_id, hub_domain, user_id) instead
of the full introspection response which included the access token
- Wealthbox account takeover: replace hardcoded email with per-user identity
by fetching /v1/users/me; fall back to token-derived stable identifier
so distinct Wealthbox users no longer share the same email address
- Shopify SSRF: apply shopifyShopDomainSchema (.myshopify.com allowlist)
to shopDomain from cookie before using it to build the fetch URL
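A minimal sketch of the Azure region guard from the first item above, assuming the standard Azure Speech endpoint shape; the regex is from the commit, the function name and error handling are illustrative:

```ts
const AZURE_REGION_RE = /^[a-z][a-z0-9-]{1,30}[a-z0-9]$/

function azureTtsEndpoint(region: string): string {
  // Reject before interpolation so a crafted "region" cannot redirect the
  // request to an arbitrary host (SSRF).
  if (!AZURE_REGION_RE.test(region)) {
    throw new Error(`Invalid Azure region: ${region}`)
  }
  return `https://${region}.tts.speech.microsoft.com/cognitiveservices/v1`
}
```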
* fix(wealthbox): correct getUserInfo endpoint, auth header, and stable identity
- Bug 1: Change API endpoint from /v1/users/me to /v1/me (correct Wealthbox API path)
- Bug 2: Replace ACCESS_TOKEN header with Authorization: Bearer <token> (standard OAuth 2.0)
- Bug 3: Remove generateId() from returned id (was non-deterministic, caused duplicate accounts);
use refresh token (stable, long-lived) instead of access token (rotates every ~2 hours)
as the hash source for the fallback identity; return null if no token is available
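Taken together, the corrected call might look like this sketch; the endpoint and Bearer header are from the commit (and the userInfoUrl commit below confirms api.crmworkspace.com/v1/me), while the return shape is an assumption:

```ts
async function getWealthboxUserInfo(accessToken: string) {
  const res = await fetch('https://api.crmworkspace.com/v1/me', {
    headers: { Authorization: `Bearer ${accessToken}` }, // not the ACCESS_TOKEN header
  })
  if (!res.ok) return null
  return (await res.json()) as { id?: number; email?: string } // shape is illustrative
}
```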
* fix(security): hash wealthbox fallback token identity, guard undefined userId
- Replace base64 encoding with SHA-256 hash for fallback token-derived identity
so raw token bytes are never stored in the DB
- Return null early when Wealthbox API response lacks an id field to prevent
all such users colliding on the wealthbox-undefined account
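A sketch of the fallback identity as these two commits describe it; the `wealthbox-` prefix and function name are illustrative, the SHA-256-over-refresh-token choice is from the commit text:

```ts
import { createHash } from 'node:crypto'

function fallbackWealthboxId(refreshToken: string | null | undefined): string | null {
  if (!refreshToken) return null // fail closed instead of sharing one identity
  // Hash so raw token bytes never land in the DB; the refresh token is stable
  // while access tokens rotate roughly every 2 hours.
  const digest = createHash('sha256').update(refreshToken).digest('hex')
  return `wealthbox-${digest}`
}
```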
* fix(auth): replace stale wealthbox userInfoUrl placeholder with actual endpoint
The dummy URL comment was rendered obsolete when getUserInfo was updated
to fetch from api.crmworkspace.com/v1/me. Align userInfoUrl with the real
endpoint used in the getUserInfo implementation.
* fix(auth): append generateId() suffix to Wealthbox account IDs to match codebase pattern
All other providers use `${stableId}-${generateId()}` so the account.create.after
hook can strip the UUID suffix, find stale sibling rows, and migrate credential FKs.
Without the suffix the migration logic is skipped and reconnections would hit
duplicate key conflicts instead of gracefully updating credentials.
…xecution auth (#4549)
* fix(security): close IDOR gaps in OAuth credential and execution authorization
Routes that called resolveOAuthAccountId followed by a conditional workspace permission check (only run when workspaceId was set) silently skipped all ownership validation on the legacy account-ID fallback path. Any authenticated user could supply a raw account.id to access another tenant's OAuth credentials.
- Replace resolveOAuthAccountId + conditional perm check with authorizeCredentialUse in: auth/oauth/wealthbox/item, tools/gmail/label, tools/onedrive/files, tools/onedrive/folder, tools/outlook/folders, tools/wealthbox/item (routes 1, 3-7)
- Add authorizeCredentialUse ownership gate before resolveVertexCredential in providers/route.ts (route 2)
- Add verifyFileAccess check on the user-supplied file key before downloadFileFromStorage in tools/wordpress/upload (route 8)
- Add workflowId param to PauseResumeManager methods (enqueueOrStartResume, beginPausedCancellation, completePausedCancellation, blockQueuedResumesForCancellation, clearPausedCancellationIntent, getPausedCancellationStatus, processQueuedResumes) and filter all pausedExecutions lookups by workflowId so callers cannot act on another tenant's paused execution by supplying a foreign executionId (route 9)
- Update all call sites (cancel, resume, poll routes) to pass workflowId
* fix(security): close verifyFileAccess bypass, thread workflowId to processQueuedResumes, fix log level
- Fail closed in WordPress upload when userFile.key is present but authResult.userId is absent, preventing silent bypass of ownership check via JWT fallback path
- Thread workflowId into processQueuedResumes in the async resume error-recovery path and in pause-persistence.ts to close residual cross-tenant gap
- Change logger.error to logger.warn for credential access denial in OneDrive folder route to match all other routes in this PR
* fix(security): thread workflowId through all processQueuedResumes call sites
Closes residual cross-tenant IDOR gap where processQueuedResumes was called without a workflowId scope in persistPauseResult, startResumeExecution (success and error paths), and clearPausedCancellationIntent. workflowId was already in scope at each site — this wires it through to the existing optional parameter.
* fix(security): remove any types, drop extraneous comments, normalize caught errors
- catch (error: any) → catch (error) + toError(error).message in resume and cancel routes
- Remove what-not-why inline comments from wordpress upload and onedrive/files routes
- Remove redundant debug-only item breakdown log and the file-IDs log in onedrive/files
- Trim extraneous DAG-edge comments from updateSnapshotAfterResume in HITL manager
* fix: use logger.warn for credential access denial in outlook folders route
* fix(security): make workflowId required in all HITL pause/resume methods
All 7 method signatures (processQueuedResumes, enqueueOrStartResume, beginPausedCancellation, completePausedCancellation, blockQueuedResumesForCancellation, clearPausedCancellationIntent, getPausedCancellationStatus) previously accepted workflowId as optional. Every call site already supplies it — making it required closes the vulnerability at the type level so future callers cannot accidentally omit tenant scoping and silently fall back to an unscoped DB query.
* fix(security): thread workflowId through internal HITL cancellation calls and remove dead branches in credential-access
* fix(security): harden workflowId scoping and file key guard
Replace falsy workflowId checks in PauseResumeManager (all methods now unconditionally apply the workflowId WHERE clause, preventing empty-string bypass). Flip WordPress upload file guard from truthy key check to explicit non-empty validation so key:"" fails closed with a 404 instead of silently skipping access control.
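The scoping change amounts to making the tenant filter unconditional. A sketch, assuming a drizzle-style schema (the import paths and table name are illustrative):

```ts
import { and, eq } from 'drizzle-orm'
import { db } from '@/db' // illustrative path
import { pausedExecutions } from '@/db/schema' // illustrative table

export function findPausedExecution(executionId: string, workflowId: string) {
  return db
    .select()
    .from(pausedExecutions)
    .where(
      and(
        eq(pausedExecutions.executionId, executionId),
        // Applied unconditionally: the old `if (workflowId)` guard let an
        // empty string silently drop the tenant scope.
        eq(pausedExecutions.workflowId, workflowId)
      )
    )
}
```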
* fix(oauth): persist rotated Microsoft refresh tokens
Microsoft Entra rotates refresh tokens on every refresh and expects clients to replace the stored token with the new one. The Microsoft provider config was missing supportsRefreshTokenRotation, so the rotated refresh_token returned by Azure AD was silently discarded and the original token from initial OAuth connect was reused indefinitely — causing periodic 'Failed to refresh access token' errors for Excel, Teams, Outlook, OneDrive, SharePoint, Planner, AD, and Dataverse integrations.
* test(oauth): cover hyphenated Microsoft service IDs in rotation test
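The fix itself is a one-flag change; a sketch, where the flag name comes from the commit and the surrounding config shape is illustrative:

```ts
const microsoftProviderConfig = {
  // ...client credentials, scopes, endpoints...
  // Entra rotates refresh_token on every refresh; without this flag the
  // rotated token was discarded and the stored one eventually went stale.
  supportsRefreshTokenRotation: true,
}
```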
…l admin demotion (#4551)
* fix(security): authorize MCP subagent IDs, oauth workspace, credential admin demotion
- handleSubagentToolCall and handleDirectToolCall now authorize user-supplied workflowId/workspaceId via authorizeWorkflowByWorkspacePermission / ensureWorkspaceAccess before forwarding downstream; resolvedWorkspaceId is derived from the authorized workflow record instead of trusted from the body
- executeOAuthGetAuthLink verifies caller membership (write level) on the target workspaceId before generating the OAuth link or writing pendingCredentialDraft, closing the cross-workspace credential injection path
- POST /api/credentials/[id]/members wraps role updates in a transaction that counts active admins and rejects demotion of the last admin (mirrors the existing DELETE guard in the same file)
- GET /api/credentials/[id]/members returns uniform 404 for both missing and inaccessible credentials to remove the existence oracle
* fix(security): address PR review — active-status guard, FOR UPDATE locks, workspaceId propagation
- credentials/members POST: add `current.status === 'active'` check to the last-admin demotion guard so re-inviting a revoked admin as a non-admin role no longer incorrectly hits the "Cannot demote the last admin" path
- credentials/members POST+DELETE: add `.for('update')` to the active-admin count SELECT inside both transactions to serialize concurrent demotions and eliminate the admin-count TOCTOU race under Postgres READ COMMITTED
- credentials/members POST: also lock the member row itself with `.for('update')` so the role+status read and the subsequent UPDATE are atomic
- mcp/copilot handleDirectToolCall: thread the DB-verified workspaceId from the authorization result into prepareExecutionContext instead of relying on user-supplied args
- oauth handler: fix error message to mention both workspaceId and userId when either is missing from the execution context
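A sketch of the last-admin demotion guard with the review fixes folded in (transaction plus `.for('update')` row locks); drizzle-style, with table and column names as assumptions:

```ts
import { and, eq } from 'drizzle-orm'
import { db } from '@/db' // illustrative
import { credentialMember } from '@/db/schema' // illustrative

export async function updateMemberRole(
  credentialId: string,
  memberId: string,
  newRole: 'admin' | 'member'
) {
  await db.transaction(async (tx) => {
    // Lock the active-admin rows so concurrent demotions serialize (TOCTOU guard).
    const admins = await tx
      .select({ id: credentialMember.id })
      .from(credentialMember)
      .where(
        and(
          eq(credentialMember.credentialId, credentialId),
          eq(credentialMember.role, 'admin'),
          eq(credentialMember.status, 'active') // revoked admins don't count
        )
      )
      .for('update')

    const demotingLastAdmin =
      newRole !== 'admin' && admins.length === 1 && admins[0].id === memberId
    if (demotingLastAdmin) throw new Error('Cannot demote the last admin')

    await tx
      .update(credentialMember)
      .set({ role: newRole })
      .where(eq(credentialMember.id, memberId))
  })
}
```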
- bump next-mdx-remote 5.0.0 → 6.0.0 (GHSA-g4xw-jxrg-5f6m / CVE-2026-0969, arbitrary code execution in MDX serialize)
- bump @opentelemetry/sdk-node and exporter-trace-otlp-http 0.200.0 → 0.217.0 (GHSA-q7rr-3cgh-j5r3 / CVE-2026-44902, Prometheus exporter DoS)
- align @opentelemetry/sdk-trace-base, sdk-trace-node, resources to ^2.7.0 to keep all @opentelemetry/* packages on a single core@2.7.1 instance
* feat(table): live cell updates via SSE + per-table event buffer
Replaces the polling-based row refetch with a push-based SSE stream that
patches the React Query cache directly as cell-state events arrive.
Architecture:
- New per-table event buffer in apps/sim/lib/table/events.ts. Redis sorted-set
with monotonic eventId, 1h TTL, 5000-event cap, in-memory fallback. Modeled
after apps/sim/lib/execution/event-buffer.ts but stripped of the complexity
tables don't need (no per-execution lifecycle, no id-batching, no write
queue serialization). ~150 lines instead of 700.
- writeWorkflowGroupState appends a fat event after each successful 'wrote'.
Status transitions carry executionId + jobId; terminal/partial transitions
also include the new output values inline so the client can patch row data
without a follow-up refetch.
- New SSE route at /api/table/[tableId]/events/stream?from=<lastEventId>.
Replays from buffer on connect, polls at 500ms (mirrors workflow execution
stream), heartbeat every 15s, signals 'pruned' if the caller fell off the
back of the buffer.
- Client hook useTableEventStream subscribes via EventSource. Reconnect-resume
with last-seen eventId. On 'pruned', invalidates the rows query and resumes
from the new earliest. Cache patches walk every cached query under
rowsRoot(tableId) so filter/sort variants all stay live.
- Removes refetchInterval from useTableRows and the per-page polling effect
from useInfiniteTableRows. React Query's refetchOnWindowFocus +
refetchOnReconnect cover the durability gap if any push is dropped.
Out of scope:
- Bulk-cancel events (cancellation path is being redesigned separately).
- Generalizing the workflow event-buffer module to a shared primitive (defer
until a third use case appears; for now the table buffer is the simpler
cousin of the workflow one).
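A hedged sketch of the client hook's reconnect-resume loop described under Architecture above; the hook and route names are from the commit, while the event payload shape (`type`, `eventId`, `earliestEventId`) and query keys are assumptions:

```ts
import { useEffect, useRef } from 'react'
import { useQueryClient } from '@tanstack/react-query'

export function useTableEventStream(tableId: string) {
  const queryClient = useQueryClient()
  const lastEventId = useRef(0)

  useEffect(() => {
    let source: EventSource | null = null
    let closed = false

    const connect = () => {
      if (closed) return
      source = new EventSource(
        `/api/table/${tableId}/events/stream?from=${lastEventId.current}`
      )
      source.onmessage = (msg) => {
        const event = JSON.parse(msg.data) // payload shape is illustrative
        if (event.type === 'pruned') {
          // Fell off the back of the buffer: refetch rows, resume from earliest.
          queryClient.invalidateQueries({ queryKey: ['table', tableId, 'rows'] })
          lastEventId.current = event.earliestEventId
          source?.close() // close client-side to skip the error-backoff path
          connect()
          return
        }
        lastEventId.current = event.eventId
        // Patch every cached rows-query variant for this table here, e.g. via
        // queryClient.setQueriesData(...) walking rowsRoot(tableId).
      }
    }

    connect()
    return () => {
      closed = true
      source?.close()
    }
  }, [tableId, queryClient])
}
```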
* fix(table): drop run-mutation refetch so SSE patches aren't overwritten
useRunColumn.onSettled was canceling in-flight queries and invalidating the
rows query — leftover behavior from the polling era. With the SSE stream
now keeping the cache live via incremental patches, this refetch races the
stream and snaps the cache back to whatever the DB shows at the refetch moment,
which can lag the just-arrived queued/running events. Cells appeared stuck
on the optimistic 'pending' even though the SSE was delivering the real
transitions.
* chore(table): simplify SSE plumbing — reuse helpers, drop dead polling code
- Reuse snapshotAndMutateRows for SSE cache patches instead of reimplementing
the page-walk + cache-shape detection. Adds a {cancelInFlight: false} opt
for the SSE caller (mutations still cancel as before).
- Drop client-side type duplication in use-table-event-stream — import
TableEvent and TableEventEntry from lib/table/events directly.
- Drop the now-dead mergePagePreservingIdentity + rowEqual from tables.ts;
their only caller was the polling effect that was removed earlier.
- Drop the defensive try/catch around appendTableEvent in cell-write — the
function is documented as never-throwing (returns null on failure).
- Combine INCR + ZADD into one Lua eval in events.ts. Halves Redis RTT per
cell-write. Lua returns the new eventId; the script splices it into the
pre-built entry JSON.
- Trim refs to plain let bindings inside the effect; trim stale
comments referencing the old polling implementation.
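The combined INCR + ZADD append might look like this sketch (ioredis-style eval); the key names and the placeholder-splice convention are assumptions, while the 5000-event cap and 1h TTL come from the commits above:

```ts
import type { Redis } from 'ioredis'
declare const redis: Redis // illustrative client

const APPEND_EVENT_LUA = `
local id = redis.call('INCR', KEYS[1])                   -- monotonic eventId
local entry = string.gsub(ARGV[1], '"__EVENT_ID__"', id) -- splice id into JSON
redis.call('ZADD', KEYS[2], id, entry)                   -- score = eventId
redis.call('ZREMRANGEBYRANK', KEYS[2], 0, -5001)         -- keep newest 5000
redis.call('EXPIRE', KEYS[1], 3600)                      -- 1h TTL
redis.call('EXPIRE', KEYS[2], 3600)
return id
`

export async function appendTableEvent(tableId: string, event: object): Promise<number | null> {
  const entryJson = JSON.stringify({ ...event, eventId: '__EVENT_ID__' })
  try {
    return (await redis.eval(
      APPEND_EVENT_LUA,
      2,
      `table:${tableId}:seq`, // illustrative key names
      `table:${tableId}:events`,
      entryJson
    )) as number
  } catch {
    return null // documented above as never-throwing
  }
}
```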
* fix(table): address PR review on SSE buffer
- TTL-expiry silent miss: when all keys expire, hgetall(meta) returns empty
so earliestEventId is undefined and the prune branch was skipped. Reconnect
with non-zero afterEventId now checks the seq counter — its absence (TTL
expired) signals pruned so the client refetches. The in-memory fallback mirrors this behavior.
- Unbounded ZRANGEBYSCORE: cap reads at TABLE_EVENT_READ_CHUNK = 500 events
per call. The route's 500ms poll loop drains chunks across ticks instead of
flushing 5000 entries (multi-MB) in one tick after a long disconnect.
- Pruned handler closes EventSource client-side: server-side close was firing
onerror and routing through the 500ms backoff path. Now we close
proactively, reset the reconnect attempt counter, and reconnect immediately
from the new earliest.
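A sketch of the capped read described above (ioredis-style); the constant comes from the commit, key naming is an assumption:

```ts
import type { Redis } from 'ioredis'
declare const redis: Redis // illustrative client

const TABLE_EVENT_READ_CHUNK = 500 // cap per poll tick, per the fix above

export async function readTableEventsAfter(tableId: string, afterEventId: number) {
  // Exclusive lower bound: only events strictly newer than the cursor.
  const raw = await redis.zrangebyscore(
    `table:${tableId}:events`, // illustrative key
    `(${afterEventId}`,
    '+inf',
    'LIMIT',
    0,
    TABLE_EVENT_READ_CHUNK
  )
  // A long disconnect drains in 500-event chunks across 500ms ticks instead
  // of flushing the whole 5000-entry buffer in one multi-MB tick.
  return raw.map((entry) => JSON.parse(entry))
}
```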
* Cross env copilot
* Force deploy
* Run migration
* Updates
* Fix migration
* Redeploy
* Make dev db push
* restore old migs
* Cross env copilot
* Add custom tools, skills, mcps to mothership
* Update migration
* Fix migs
* Update
* Fix types
* Fix
---------
Co-authored-by: Theodore Li <theo@sim.ai>
…dog destinations (#4552)
* feat(data-drains): add GCS, Azure Blob, BigQuery, Snowflake, and Datadog destinations
* fix(data-drains): address PR review comments
* fix(data-drains): extract sleepUntilAborted, honor abort across all destinations
* fix(data-drains): widen BigQuery projectId max and dedupe parseServiceAccount
* fix(data-drains): tighten GCS bucket contract and expose Azure endpointSuffix
* improvement(data-drains): extract normalizePrefix and buildObjectKey to shared utils
* fix(data-drains): retry BigQuery network errors; tighten Azure accountKey contract
- BigQuery insertAll now wraps the fetch in try/catch inside the retry loop so DNS failures, socket resets, and timeouts are retried with backoff instead of propagating immediately.
- Align azureBlobCredentialsBodySchema with the runtime schema (min 64 / max 120 / base64 regex) so obviously invalid keys are rejected at the API boundary rather than at drain-run time.
* improvement(data-drains): consolidate parseRetryAfter; add Datadog NDJSON line context
- Extract a single parseRetryAfter helper (capped at 30s, returns number | null) into lib/data-drains/destinations/utils.ts and remove the five local copies in bigquery, datadog, gcs, snowflake, and webhook.
- Datadog parseNdjson now wraps JSON.parse in try/catch and surfaces the failing line index, matching BigQuery's parser.
* fix(data-drains): correct Datadog size guard and Snowflake VARIANT limit
- Datadog payload guard now checks the uncompressed size against the 5 MB limit and the wire size against the 6 MB compressed limit, so gzip cannot smuggle an oversized body past the client-side check.
- Snowflake VARIANT limit is 16 MiB (16,777,216 bytes), not 16,000,000 bytes — small payloads between 16 MB and 16 MiB were being rejected unnecessarily.
- Drop the unused apiKey field on Datadog PostInput; the key is already embedded in the prepared request headers.
* improvement(data-drains): consolidate backoffWithJitter into shared utils
Datadog, GCS, and webhook each had byte-identical backoff helpers (BASE 500ms, MAX 30s, jitter ±20%, Retry-After floor). Lift the helper into lib/data-drains/destinations/utils.ts alongside parseRetryAfter and sleepUntilAborted, and drop the per-file copies and their BASE_BACKOFF_MS/MAX_BACKOFF_MS constants.
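The consolidated helpers plausibly look like this sketch; the names and the 500ms/30s/±20% constants come from the commit text, and the Retry-After parser below handles only the numeric-seconds form as a simplification:

```ts
// lib/data-drains/destinations/utils.ts (sketch)
export function parseRetryAfter(header: string | null): number | null {
  if (!header) return null
  const seconds = Number(header) // HTTP-date form omitted in this sketch
  if (!Number.isFinite(seconds) || seconds < 0) return null
  return Math.min(seconds * 1000, 30_000) // capped at 30s
}

export function backoffWithJitter(attempt: number, retryAfterMs: number | null): number {
  const base = Math.min(500 * 2 ** attempt, 30_000) // BASE 500ms, MAX 30s
  const jittered = base * (0.8 + Math.random() * 0.4) // jitter ±20%
  return retryAfterMs === null ? jittered : Math.max(jittered, retryAfterMs) // Retry-After floor
}
```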
* fix(data-drains): align destinations with live provider specs
Audited every destination against live AWS/GCS/Azure/BigQuery/Snowflake/Datadog/webhook docs and applied spec-correctness fixes:
- S3: reserved bucket prefix amzn-s3-demo-, suffixes --x-s3/--table-s3; metadata byte formula excludes x-amz-meta- prefix per AWS spec
- GCS: reject -./.- adjacency; UTF-8 prefix cap; forbid .well-known/acme-challenge/ prefix; ASCII-only x-goog-meta-* enforcement
- BigQuery: insertId is 128 chars (not bytes); split DATASET_RE (ASCII) and TABLE_RE (Unicode L/M/N + connectors); UTF-8 byte cap on tableId
- Snowflake: disambiguate org-account vs legacy locator account formats; requestId+retry=true for idempotent retries; server-side timeout=600; default column DATA uppercase to match unquoted canonical form
- Azure: endpoint suffix allowlist (4 sovereign clouds); accountKey length(88) base64
- Webhook: url max(2048); CRLF/NUL rejection on bearer/secret/sig header
* fix(data-drains): address PR review on snowflake poll + shared NDJSON parsing
- snowflake pollStatement: per-attempt timeout via AbortSignal.any, retry on 429/5xx with Retry-After + jitter
- bigquery parseNdjson error messages now 1-indexed
- consolidate parseNdjson variants into shared parseNdjsonLines/parseNdjsonObjects in utils
* fix(data-drains): per-attempt fetch timeouts in gcs/bigquery, snowflake poll double-sleep
- gcs.fetchWithRetry + bigquery.postInsertAll now use AbortSignal.any with a per-attempt timeout so a hung TCP connection cannot stall the drain
- snowflake.pollStatement skips the next interval sleep when it just slept for retry backoff
* fix(data-drains): bigquery probe timeout + jittered retries, align Snowflake column default UI/docs
- bigquery test() probe now uses AbortSignal.any + per-attempt timeout
- bigquery insertAll retry switches to backoffWithJitter for thundering-herd avoidance
- Snowflake column placeholder + docs say DATA (uppercase) to match the code default
* fix(data-drains): mirror webhook signingSecret min length in form gate
isComplete now requires signingSecret >= 32 to match the contract/runtime schema so the Save button can't enable on a value that will fail server-side.
* fix(data-drains): validate JSON client-side for Snowflake before binding
Switch Snowflake to parseNdjsonObjects so malformed rows are caught locally with 1-indexed line numbers instead of failing the whole INSERT server-side. Re-stringify each parsed object before binding to PARSE_JSON(?). Drop the now-unused parseNdjsonLines helper.
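The per-attempt timeout is standard AbortSignal composition (Node 20.3+); a sketch, with the 30s budget and wiring as assumptions:

```ts
async function fetchWithAttemptTimeout(
  url: string,
  init: RequestInit,
  drainSignal: AbortSignal
): Promise<Response> {
  // A hung TCP connection aborts after the per-attempt budget even when the
  // drain-level signal stays quiet; either signal firing cancels the fetch.
  const perAttempt = AbortSignal.timeout(30_000) // illustrative budget
  return fetch(url, { ...init, signal: AbortSignal.any([drainSignal, perAttempt]) })
}
```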
* fix(data-drains): cross-cutting audit pass against live provider docs
- Azure: bound retryOptions on BlobServiceClient (SDK default tryTimeoutInMs is per-try unbounded; cap at 30s x 5 tries)
- Webhook contract: mirror runtime — signingSecret.max(512), bearerToken.max(4096) + CRLF/NUL refine, signatureHeader charset + CRLF/NUL refine
- S3 (lib + contract): reject bucket names with dash adjacent to dot; require https:// endpoint at the schema layer
- Snowflake: bind original NDJSON line bytes (re-stringifying a JSON.parse'd value loses bigint precision beyond 2^53-1); check pollStatement 200 body for the SQL error envelope (sqlState/code)
- Datadog: entry builder writes defaults first then user attrs then forced ddtags/message so user rows can't clobber routing fields; validate config.tags as comma-separated key:value pairs
- registry.tsx: tighten isComplete predicates to mirror contract minimums (GCS bucket >= 3, Azure containerName >= 3 / accountKey === 88, BigQuery projectId >= 6, Snowflake account >= 3)
* fix(data-drains): force ddsource/service overrides on Datadog entries
Previous fix placed ddsource/service before ...attrs, leaving them clobberable by a user row field. Per Datadog docs, service + ddsource pick the processing pipeline, so a drain's routing config must not be overridable per-row. Spread attrs first, then force all four reserved fields (ddsource, service, ddtags, message).
* fix(data-drains): preserve row-distinguishing index when BigQuery insertId overflows
Truncating from the left dropped the index suffix, so any overflow would collapse all rows in a chunk to the same insertId and BigQuery would silently dedupe them. Path is unreachable today (UUIDs keep raw ~85 chars), but the overflow branch is now correct: hash the prefix, keep the index intact.
* fix(data-drains): refresh GCS token per retry, tighten Azure key regex
- gcs: rebuild Authorization header per attempt via buildHeaders so token refresh from google-auth-library kicks in if a 5xx retry crosses the hour-long token lifetime
- azure_blob: pin account-key regex to {0,2} trailing '=' (base64 of 64 bytes = exactly 88 chars with up to two '=' pad chars)
* fix(data-drains): address bugbot review of 6336948
- gcs: allow 1-char dot-separated bucket components (e.g. "a.bucket") to match GCS naming rules — overall name is 3-63 (or up to 222 with dots), but per-component minimum is 1 per Google's spec
- bigquery: drain the 401 response body before re-issuing the request with a refreshed token so undici can return the socket to the keep-alive pool
- snowflake: hoist getJwt() above the perAttempt timer in executeStatement so JWT signing doesn't eat the network budget (matches the order already used in pollStatement)
* fix(data-drains): allow org-account Snowflake identifier with region suffix
The account validation rejected `<orgname>-<acctname>.<region>.<cloud>` because `ACCOUNT_LOCATOR_RE`'s first segment forbade hyphens, while `ACCOUNT_ORG_RE` forbade dots. `normalizeAccountForJwt` already handles this composite form. Widen the first segment of `ACCOUNT_LOCATOR_RE` to allow hyphens so the boundary contract and the runtime schema accept what the JWT layer was already designed to process.
* fix(data-drains): drain retryable response bodies in datadog/gcs loops
Mirrors the bigquery 401 fix. Without consuming the body before sleeping, undici can't return the socket to the keep-alive pool, so each retry leaks a TCP connection instead of reusing it.
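The final ordering for Datadog entries, as a sketch; the four reserved field names are from the commit text, the config shape is illustrative:

```ts
function buildDatadogEntry(
  row: Record<string, unknown>,
  config: { source: string; service: string; tags: string }
) {
  return {
    ...row, // user attributes first...
    // ...then force the reserved routing fields so no row field can clobber
    // them; per Datadog docs, service + ddsource pick the processing pipeline.
    ddsource: config.source,
    service: config.service,
    ddtags: config.tags,
    message: JSON.stringify(row),
  }
}
```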
* fix(data-drains): drain snowflake poll bodies on 202 and retryable status
Mirrors the bigquery/datadog/gcs drains. Long async statements can poll many times against the same connection; without consuming the body undici can't return the socket to the keep-alive pool, so each iteration leaks a connection until GC.
* fix(data-drains): consume success bodies; check Snowflake sqlState on 200
- gcs: drain the body on success paths so undici can return the socket to the keep-alive pool
- snowflake: drain the body on synchronous 200 OK and run the same sqlState envelope check pollStatement already does — otherwise a statement-level failure that completes synchronously would silently return success
* fix(data-drains): drain datadog and bigquery probe success bodies
Same undici keep-alive issue as the prior fixes: postWithRetries returned the Response on success without draining (callers only read headers); the BigQuery `test()` probe returned without consuming the body. Both now drain before returning.
* chore(data-drains): regenerate enum migration as 0206 after staging rebase
* fix(data-drains): cap snowflake poll retries; tighten datadog tags min length
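The recurring drain-before-retry pattern, sketched using the parseRetryAfter/backoffWithJitter helpers sketched earlier; `sleepUntilAborted` is named in the commits but only declared here:

```ts
declare function sleepUntilAborted(ms: number, signal: AbortSignal): Promise<void>
declare function parseRetryAfter(header: string | null): number | null
declare function backoffWithJitter(attempt: number, retryAfterMs: number | null): number

async function backOffAfterRetryable(res: Response, attempt: number, signal: AbortSignal) {
  // undici (Node's fetch) can only return the socket to the keep-alive pool
  // once the body is consumed; draining here prevents leaking one TCP
  // connection per retry.
  await res.arrayBuffer().catch(() => {})
  const retryAfterMs = parseRetryAfter(res.headers.get('retry-after'))
  await sleepUntilAborted(backoffWithJitter(attempt, retryAfterMs), signal)
}
```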
…ing (#4559)
* fix(file-viewer): prevent scroll jump to top during Mothership streaming
- Fix root cause: MarkdownCheckboxCtx.Provider was conditionally rendered, causing the scroll container to unmount/remount when isStreaming flipped, resetting scrollTop to 0 on every stream start
- Add useScrollAnchor hook with spacer element to preserve scroll position when streamed content temporarily shrinks the scroll container
- Linger active session ID on complete to prevent streamingContent→undefined flicker between consecutive tool calls on the same file
- Gate upsert activation on incoming session having renderable content
- Fix shouldShowStreamingFilePanel to keep panel mounted during linger
- Fix use-chat post-write navigation to work with lingered completed session
- Fix useAutoScroll to check proximity before pinning to bottom on stream start
* fix(file-viewer): preserve session linger in hydrate/upsert and clear spacer after stream
* chore(file-viewer): trim verbose comments to match codebase style
* fix(file-viewer): re-engage auto-scroll when user scrolls back to bottom
* chore(file-viewer): update scroll-anchor tsdoc, remove test separators, add hydrate linger tests
* fix(file-viewer): prevent false re-engage when spacer restoration triggers onScroll
* refactor(file-viewer): extract shouldReengage as pure tested function
Pulls the spacer-guard re-engage condition out of onScroll into an exported pure function so the false-re-engage invariant (spacer active → no re-engage) is covered by automated tests rather than relying on manual QA. Adds 8 unit tests for shouldReengage alongside the existing 15 for computeSpacerShortage.
* chore(file-viewer): trim verbose inline comments in use-scroll-anchor
* chore(file-viewer): final comment cleanup before merge
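The extracted predicate plausibly reads like this sketch; the invariant (spacer active means never re-engage) is from the commit text, parameter names are assumptions:

```ts
export function shouldReengage(opts: {
  spacerActive: boolean
  distanceFromBottom: number
  threshold: number
}): boolean {
  // Spacer restoration fires onScroll without user intent; never treat that
  // as the user scrolling back to the bottom.
  if (opts.spacerActive) return false
  return opts.distanceFromBottom <= opts.threshold
}
```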
* fix(tables): cmd+a always selects all cells on the table page
* fix(tables): move preventDefault inside non-empty guard
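A sketch of the combined fix; the handler wiring and names are illustrative, and the point is that preventDefault moved inside the non-empty guard so an empty table still gets the browser's native select-all:

```ts
function handleKeyDown(e: KeyboardEvent, cellCount: number, selectAllCells: () => void) {
  if ((e.metaKey || e.ctrlKey) && e.key === 'a') {
    if (cellCount === 0) return // nothing to select: don't swallow the shortcut
    e.preventDefault() // only claim cmd+a when we actually select cells
    selectAllCells()
  }
}
```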