Builder
App Quality Report
Powered by Testers.AI
B-80%
Quality Score
7
Pages
151
Issues
8.1
Avg Confidence
8.2
Avg Priority
75 Critical · 63 High · 13 Medium
>_ Testers.AI AI Analysis

Builder was tested and 151 issues were detected across the site. The most critical finding: AI/LLM endpoint calls on page load without explicit consent. Issues span the Security, Legal, A11y, and Performance categories. Persona feedback rated Visual highest (9/10) and Accessibility lowest (5/10).

Qualitative Quality
Builder
Category Avg
Best in Category
Issue Count by Type
Content
29
UX
23
Security
23
A11y
6
Legal
1
Pages Tested · 7 screenshots
Detected Issues · 151 total
1
AI/LLM endpoint calls detected on page load without explicit consent
CRIT P10
Conf 9/10 SecurityOther
Prompt to Fix
Wrap all on-load AI/LLM calls behind user consent or interaction gates. Delay non-essential AI requests until after user action and display a privacy notice describing AI usage and data collection.
Why it's a bug
Console logs show '⚠️ AI/LLM ENDPOINT DETECTED', indicating on-load AI-service calls, which can surprise users and raise privacy and performance concerns in the absence of visible consent or disclosure.
Why it might not be a bug
These calls may power essential analytics or dynamic content; even so, consent and disclosure would still be expected, and the screenshot suggests no explicit consent gating is in place.
Suggested Fix
Gate AI calls behind user interaction or explicit consent, lazy-load them after initial paint, and add a clear privacy notice about AI usage.
Why Fix
Reduces privacy risk, improves load performance, and aligns with user expectations for third-party AI usage.
Route To
Security/Privacy Engineer or Frontend Performance Engineer
Page
Tester
Jason · GenAI Code Analyzer
Technical Evidence
Console: ⚠️ AI/LLM ENDPOINT DETECTED
Network: GET https://www.builder.io/assets/main-DPeExoxH.js - Status: 200
2
Sensitive token-like data exposed in LaunchDarkly URL (token in URL)
CRIT P9
Conf 9/10 SecurityOther
Prompt to Fix
In the frontend code, stop exposing the LaunchDarkly evaluation URL payload containing user context in logs. Remove or redact the base64-encoded JSON segment from any console logs or error messages. If user context must be provided, pass it through secure means (headers, in-memory context, or server-provided tokens) and avoid placing it in the URL. Implement a log sanitizer to scrub sensitive query parameters and ensure the LaunchDarkly SDK usage does not reveal tokens in navigation or network logs.
Why it's a bug
A LaunchDarkly evaluation URL contains a base64-encoded JSON payload that appears to include a user key and context (e.g., anonymous: true, key: UUID). Exposing token-like data in URLs can be captured in browser history, server logs, analytics, or referer headers, enabling correlation or misuse.
Why it might not be a bug
The payload may not be a secret credential and could be a non-sensitive user key used for feature targeting. However, including any token-like data in URLs is still a security risk since it can be logged or reused if intercepted.
Suggested Fix
Avoid including sensitive data or tokens in URLs. Do not log full request URLs containing encoded payloads. Pass user context to LaunchDarkly via non-URL mechanisms (e.g., in-memory context, secure headers, or server-mediated context) and ensure any client-side logs scrub this data. If necessary, redact the payload before logging and use opaque identifiers instead of raw JSON in the URL.
Why Fix
Preventing exposure of tokens in URLs reduces the risk of leakage through browser history, analytics, or logs, which could be exploited for session correlation or targeted abuse.
Route To
Frontend Security Engineer
Page
Tester
Sharon · Security Console Log Analyzer
Technical Evidence
Console: [INFO] [LaunchDarkly] Opening stream connection to https://clientstream.launchdarkly.com/eval/615353df27e723246ce15b07/eyJhbm9ueW1vdXMiOnRydWUsImtleSI6IjcyY2FlOTgwLTI4ODgtMTFmMS1iM2Q0LTkxM2I4M2M1YTNlNCJ9
Network: https://clientstream.launchdarkly.com/eval/615353df27e723246ce15b07/eyJhbm9ueW1vdXMiOnRydWUsImtleSI6IjcyY2FlOTgwLTI4ODgtMTFmMS1iM2Q0LTkxM2I4M2M1YTNlNCJ9
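The suggested log sanitizer could look like the following sketch, which strips the base64-encoded context segment from LaunchDarkly-style eval/stream URLs before they are logged. The regex shape is an assumption based on the URL format in the evidence above, not on LaunchDarkly's documented URL grammar.

```typescript
// Sketch: redact the trailing base64-encoded context payload from
// LaunchDarkly eval/stream URLs before logging. Assumes the URL shape
// seen in the evidence: .../eval/<env-id>/<base64-payload>.
const LD_EVAL_URL =
  /(https:\/\/[^\s/]*launchdarkly\.com\/eval\/[^/\s]+\/)[A-Za-z0-9+\/=_-]+/g;

function redactLdUrls(message: string): string {
  // Keep the host and environment ID, replace the encoded context.
  return message.replace(LD_EVAL_URL, "$1<REDACTED>");
}
```

Routing all console and error-report messages through `redactLdUrls` (or an equivalent scrubber applied to known-sensitive query parameters) keeps the encoded user context out of browser logs while leaving the rest of the URL intact for debugging.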
3
LaunchDarkly user key exposed in console logs via URL payload
CRIT P9
Conf 9/10 Other
Prompt to Fix
Identify and redact any sensitive tokens or identifiers that appear in client-side console logs. Specifically remove or mask internal tracking keys in URLs or payloads used by third-party services (e.g., LaunchDarkly eval URLs). Implement a log sanitizer that replaces keys with redacted placeholders (e.g., "key":"REDACTED"), or avoid logging full request URLs containing identifiers. Ensure that no PII or unique tracking identifiers are exposed in browser consoles or error reports. Provide code changes for JS/TS to detect and redact such patterns before console.log or error capture, and verify that logs no longer reveal the encoded payload or the actual tracking key.
Why it's a bug
The console log reveals a LaunchDarkly streaming URL containing an encoded payload with an internal user key (example payload decodes to {"anonymous":true,"key":"<uuid>"}). This exposes a persistent tracking identifier that could be used to correlate user activity across sessions/services, constituting a privacy risk and potential for user profiling if logs are accessed or leaked.
Why it might not be a bug
The key appears to be an internal, non-PII tracking identifier used by a third-party service for feature flag evaluation. If the value is a non-identifying token and logs are strictly internal or ephemeral, the privacy risk is lower. However, due to visibility in client-side logs and potential log exposure, it should still be treated as a privacy risk and mitigated.
Suggested Fix
Redact or avoid logging internal tracking identifiers in console logs. Do not include encoded user keys in URLs or logs. Use non-identifying session references or server-side evaluation where possible. Implement a log redaction utility that masks values matching patterns like "key":<value> and avoid printing full request URLs with sensitive tokens.
Why Fix
Protecting user privacy requires preventing exposure of tracking identifiers in client-side logs. Redacting or removing such data from logs reduces risk of correlation, profiling, or leakage through browser consoles, error reports, or devtools sharing.
Route To
Security Engineer
Page
Tester
Pete · Privacy Console Log Analyzer
Technical Evidence
Console: [INFO] [LaunchDarkly] Opening stream connection to https://clientstream.launchdarkly.com/eval/615353df27e723246ce15b07/eyJhbm9ueW1vdXMiOnRydWUsImtleSI6IjcyY2FlOTgwLTI4ODgtMTFmMS1iM2Q0LTkxM2I4M2M1YTNlNCJ9
Network: https://clientstream.launchdarkly.com/eval/615353df27e723246ce15b07/eyJhbm9ueW1vdXMiOnRydWUsImtleSI6IjcyY2FlOTgwLTI4ODgtMTFmMS1iM2Q0LTkxM2I4M2M1YTNlNCJ9
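The redaction utility described above might be sketched as a `console.log` wrapper that masks values matching a `"key":"<value>"` pattern. This only catches plaintext JSON strings, not base64-encoded payloads (those should be redacted at the URL level before logging); the pattern and wrapper are illustrative, not a drop-in implementation.

```typescript
// Sketch: mask "key":"<value>" pairs in string arguments before they
// reach the console. Plaintext JSON only; base64-encoded payloads must
// be handled separately (e.g. by redacting the URL itself).
const KEY_PATTERN = /("key"\s*:\s*")[^"]+(")/g;

function sanitize(arg: unknown): unknown {
  if (typeof arg === "string") {
    return arg.replace(KEY_PATTERN, "$1REDACTED$2");
  }
  return arg;
}

// Wrap console.log so every string argument is scrubbed first.
const originalLog = console.log.bind(console);
console.log = (...args: unknown[]) => originalLog(...args.map(sanitize));
```

The same `sanitize` helper can be reused in error-capture hooks so identifiers never reach crash reports or devtools exports.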
44 more issues detected
Exposed asset access token in URL for image delivery
Credentials in URL expose tokens and API keys in CDN asset r...
API Key exposed in URL query parameter on pixel/analytics en...
and 41 more...
Unlock All 151 Issues
You're viewing the top 3 issues for Builder.
Sign up at Testers.AI to access the full report with all 151 detected issues, detailed fixes, and continuous monitoring.
Sign Up at Testers.AI or let us run the tests for you