Braingrid
App Quality Report
Powered by Testers.AI
B · 84%
Quality Score
7
Pages
110
Issues
7.9
Avg Confidence
7.9
Avg Priority
51 Critical · 36 High · 22 Medium · 1 Low
Testers.AI Analysis

Braingrid was tested and 110 issues were detected across the site. The most critical finding: unconsented third-party analytics and tracking scripts are loaded. Issues span the Security, Performance, A11y, and Other categories. Persona feedback rated Visual highest (8/10) and Accessibility lowest (6/10).

Qualitative Quality
Chart: Braingrid vs. Category Avg vs. Best in Category
Issue Count by Type
Content: 29 · A11y: 21 · UX: 12 · Security: 6
Pages Tested · 7 screenshots
Detected Issues · 110 total
1
Unconsented third-party analytics and tracking scripts loaded
CRIT P9
Conf 9/10 · Other
Prompt to Fix
Actionable fix: integrate a Consent Management Platform (CMP) and gate all external analytics behind user consent. Do not load the Ahrefs, Plausible, GTM, or Dubcdn scripts until consent is granted. Where possible, switch to first-party or privacy-preserving analytics that do not share detailed user data with external domains. Minimize the data sent by analytics calls (avoid PII, anonymize IPs, set cookies with the SameSite and Secure attributes, respect Do Not Track). Update the privacy policy to disclose third-party data sharing and provide clear opt-in/opt-out options.
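A consent-gated loader for this fix might be sketched as follows; the consent object shape (`{ analytics: boolean }`) and the helper names are assumptions made for illustration, while the script URLs are those observed in the network evidence for this issue:

```javascript
// Script URLs observed in the network evidence; each is tagged with the
// consent category it should be gated behind.
const ANALYTICS_SCRIPTS = [
  { src: 'https://analytics.ahrefs.com/analytics.js', category: 'analytics' },
  { src: 'https://www.googletagmanager.com/gtm.js?id=GTM-NJPSLP5W', category: 'analytics' },
  { src: 'https://plausible.io/js/script.js', category: 'analytics' },
  { src: 'https://www.dubcdn.com/analytics/script.js', category: 'analytics' },
];

// Pure helper: which scripts the current consent state permits.
function permittedScripts(consent, scripts = ANALYTICS_SCRIPTS) {
  return scripts.filter((s) => consent[s.category] === true);
}

// Browser-side loader: inject only permitted scripts, called from the
// CMP's consent callback rather than on initial page load.
function loadPermittedScripts(consent, doc = globalThis.document) {
  for (const { src } of permittedScripts(consent)) {
    const el = doc.createElement('script');
    el.src = src;
    el.async = true;
    doc.head.appendChild(el);
  }
}
```

With this shape, the page ships no third-party analytics by default; `loadPermittedScripts` runs only after the CMP reports an opt-in.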
Why it's a bug
The page loads multiple third-party analytics/tracking scripts (e.g., Ahrefs analytics.js, Google Tag Manager, gtag, Plausible, Dubcdn analytics) with no visible consent gating. This enables cross-site data collection and profiling by external services, potentially violating data protection regulations and user expectations.
Why it might not be a bug
Some sites rely on third-party analytics by default; however, without explicit user consent and disclosure, this constitutes a privacy risk and potential regulatory non-compliance.
Suggested Fix
Implement a consent management platform (CMP) and gate all non-essential analytics behind user consent. Lazy-load or defer third-party scripts until consent is given. Prefer privacy-preserving or first-party analytics where possible, and ensure data minimization (no PII in requests, anonymize data, and restrict data sharing). Add explicit privacy disclosures and a clear opt-in/opt-out UI.
Why Fix
Reduces privacy risk, improves regulatory compliance, and increases user trust by avoiding unconsented data sharing with third parties.
Route To
Privacy Engineer / Frontend Lead
Page
Tester
Pete · Privacy Networking Analyzer
Technical Evidence
Console: ⚠️ POTENTIAL ISSUE: Tracking request detected; multiple third-party trackers detected on page load.
Network: GET https://analytics.ahrefs.com/analytics.js; GET https://www.googletagmanager.com/gtm.js?id=GTM-NJPSLP5W; GET https://www.googletagmanager.com/gtag/js?id=G-WED1BW2RGQ&cx=c&gtm=4e63o0; GET https://plausible.io/js/script.js; GET https://www.dubcdn.com/analytics/script.js
2
AI/LLM endpoints invoked on page load (privacy/UX risk)
CRIT P9
Conf 9/10 · Other
Prompt to Fix
Audit the page for all calls to AI/LLM endpoints and ensure they are not executed on initial load. Implement a consent gate before any data is sent to external AI services. Move calls behind user actions (click/hover) or feature flags, and ensure proper throttling/backoff if retries are needed.
Why it's a bug
Network activity shows multiple AI/LLM endpoints being queried during or before user interaction (flagged as AI/LLM ENDPOINT DETECTED). This implies embedded AI calls on initial paint, which can leak prompts or data, hurt performance, and violate user consent expectations. It also raises concerns about unnecessary data transmission to third-party AI services before the user opts in.
Why it might not be a bug
If explicit consent or an opt-in flow is present, this could be acceptable; however, the screenshot shows no evidence of a consent flow, and the endpoints appear to fire automatically on load, which is a high-priority usability/privacy risk.
Suggested Fix
Move all AI/LLM calls to occur after user interaction (lazy-load or on-demand). Introduce a consent modal or opt-in mechanism before any data is sent to AI endpoints. Defer non-critical analytics/LLM calls and remove any prompts from initial paint. Centralize AI endpoint usage behind feature flags and respect privacy by default.
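One way to sketch the deferred, consent-gated call pattern described above; `createDeferredCall` and its trigger semantics are illustrative assumptions, and the endpoint argument would stand in for the observed /ingest/flags request:

```javascript
// Sketch: wrap an AI/flags request so it can fire only after consent,
// and at most once. `fetchFn` is injectable so the pattern can be
// exercised without a network; in the browser it defaults to real fetch.
function createDeferredCall(endpoint, fetchFn = globalThis.fetch) {
  let fired = false;
  return {
    // Call from a user-gesture handler (click, submit), never on load.
    trigger(consentGranted) {
      if (fired || !consentGranted) return false; // nothing sent before opt-in
      fired = true;
      fetchFn(endpoint, { method: 'POST' }); // fires at most once
      return true;
    },
  };
}
```

In a page this would be wired to the CMP's consent state and a user interaction, e.g. `button.addEventListener('click', () => call.trigger(cmp.hasConsent('ai')))`, where `cmp.hasConsent` is a hypothetical CMP helper.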
Why Fix
Protects user privacy, improves performance on initial load, and aligns with best practices for GenAI integrations by avoiding unsolicited data transmission.
Route To
Frontend/Infra Engineer, Security/Privacy Engineer
Page
Tester
Jason · GenAI Code Analyzer
Technical Evidence
Console: ⚠️ AI/LLM ENDPOINT DETECTED
Network: POST https://www.braingrid.ai/ingest/flags/?v=2&config=true&ip=0&_=1774475392599&ver=1.310.1&compression=base64
3
Authentication token exposed in URL for surveys endpoint
CRIT P9
Conf 9/10 · Security · Other
Prompt to Fix
In the client code that calls the surveys endpoint, ensure authentication tokens are never appended as query parameters. Send tokens via an Authorization: Bearer <token> header (or an HttpOnly cookie) and make sure the server validates that header. Remove the token from the URL, and sanitize logs so URLs containing tokens are never stored. Example fix: fetch('/ingest/api/surveys/', { method: 'GET', headers: { 'Authorization': 'Bearer ' + token } }), with server-side token handling updated to read the Authorization header. Enforce TLS and enable token rotation with short expiration times.
Why it's a bug
URL-based tokens are exposed in browser history, referer headers, logs, and analytics, enabling session hijacking and replay attacks. The request to /ingest/api/surveys/ includes a token in the query string, which can be logged or leaked through various channels.
Why it might not be a bug
Some systems historically pass tokens in URLs, but this practice is insecure and widely discouraged; there is no legitimate security rationale for sending sensitive tokens in query strings.
Suggested Fix
Remove tokens from URL query parameters. Use Authorization: Bearer <token> headers or HttpOnly cookies for authentication. If body data is needed, include tokens in the request body for POST/PUT (not in the URL). Enforce TLS, rotate tokens, and implement short-lived credentials with proper audience/scope validation. Audit logs to avoid logging full URLs with tokens.
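The header-based fix can be sketched as a small request builder; the endpoint and the `token` parameter name come from the network evidence for this issue, while the helper name and token value are placeholders:

```javascript
// Sketch: move the credential out of the query string and into an
// Authorization header, so it never reaches browser history, referer
// headers, or URL-based logs.
function buildSurveysRequest(rawUrl, token) {
  const url = new URL(rawUrl);
  url.searchParams.delete('token'); // never leave credentials in the URL
  return {
    url: url.toString(),
    options: {
      method: 'GET',
      headers: { Authorization: `Bearer ${token}` },
    },
  };
}
```

The caller then passes `r.url` and `r.options` to fetch; any log sanitizer can safely record `r.url` because the token is no longer part of it.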
Why Fix
Fixing this prevents credential leakage via browser history, server logs, and referer headers, reducing risk of token theft and unauthorized access.
Route To
Security Engineer
Page
Tester
Sharon · Security Networking Analyzer
Technical Evidence
Console: "POTENTIAL ISSUE: Credentials in URL"
Network: https://www.braingrid.ai/ingest/api/surveys/?token=phc_sU8ozZzWLJb09dT3LmyeFrhWAzCiZwyfToh1eowhR9e&ip=0&_=1774475400312&ver=1.310.1
32 more issues detected
Cross-site tracking risk from multiple external analytics pr...
Resource load failed due to DNS resolution error (ERR_NAME_N...
Buttons rendered with empty text labels
and 29 more...
Unlock All 110 Issues
You're viewing the top 3 issues for Braingrid.
Sign up at Testers.AI to access the full report with all 110 detected issues, detailed fixes, and continuous monitoring.
Sign Up at Testers.AI or let us run the tests for you