Hadoseo
App Quality Report
Powered by Testers.AI
B- · 81%
Quality Score
6
Pages
76
Issues
7.8
Avg Confidence
7.7
Avg Priority
25 Critical · 34 High · 17 Medium
Testers.AI AI Analysis

Testing of Hadoseo detected 76 issues across the site. The most critical finding was unconsented third-party telemetry to Sentry (potential PII exposure). Issues span the Security, Performance, A11y, and Other categories. Persona feedback rated Visual highest (7/10) and Accessibility lowest (5/10).

Qualitative Quality
Hadoseo
Category Avg
Best in Category
Issue Count by Type
Content
24
A11y
17
UX
8
Security
4
Pages Tested · 6 screenshots
Detected Issues · 76 total
1
Unconsented third-party telemetry to Sentry (potential PII exposure)
CRIT P9
Conf 9/10 Other
Prompt to Fix
Audit all telemetry/instrumentation calls to third-party services (Sentry in this case). For each event, identify whether PII could be included (names, emails, IDs, IP addresses, auth tokens, etc.). Implement a beforeSend hook to redact or drop PII fields, enable IP anonymization, and minimize data captured in error/usage events. Add a user consent toggle and privacy notice in the UI; ensure Sentry is only enabled after consent. Provide a concrete patch that updates the telemetry initialization to scrub data, and document the consent flow in the privacy policy.
Why it's a bug
There is a POST request to a third-party Sentry ingestion endpoint (o4506071217143808.ingest.us.sentry.io) without visible consent indicators in the network activity. Sentry events can include error details, stack traces, environment/data context, and potentially PII. Without explicit consent controls or data scrubbing, this could leak user data to a third party and enable profiling across sites.
Why it might not be a bug
Sentry is often used for error tracking and performance monitoring; if telemetry is fully scrubbed of PII, IPs are anonymized, and user consent is clearly obtained via a privacy notice, this can be acceptable. However, the current logs show a direct third-party call with no visible consent mechanism or data-scrubbing details, making it a high-priority concern to confirm and fix.
Suggested Fix
Implement explicit user consent gating for telemetry collection. Configure Sentry to scrub PII (remove emails, names, IDs, and sensitive fields) and anonymize IP addresses. Use a beforeSend hook or equivalent to redact or drop sensitive fields from all events. Ensure a clear privacy notice and a consent toggle in the UI before enabling telemetry. Consider loading Sentry DSN only after consent and enabling privacy-friendly defaults (no automatic user data capture).
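The suggested fix can be sketched as a small scrubbing hook. This is a minimal sketch, not the app's actual code: `scrubEvent` and `hasTelemetryConsent` are hypothetical names, and the commented wiring assumes the `@sentry/nextjs` SDK (whose real `beforeSend` and `sendDefaultPii` options it mirrors).

```typescript
// Minimal sketch of a beforeSend-style PII scrubber. The names scrubEvent and
// hasTelemetryConsent are illustrative assumptions, not the app's real API.

type SentryEventLike = {
  user?: { id?: string; email?: string; username?: string; ip_address?: string };
  request?: { headers?: Record<string, string>; cookies?: unknown; [k: string]: unknown };
  [key: string]: unknown;
};

// Strip identifying fields before the event leaves the browser.
export function scrubEvent(event: SentryEventLike): SentryEventLike {
  const scrubbed: SentryEventLike = { ...event };
  delete scrubbed.user; // never forward user identity
  if (scrubbed.request) {
    const { headers, cookies, ...rest } = scrubbed.request;
    scrubbed.request = rest; // headers and cookies may carry tokens or IP hints
  }
  return scrubbed;
}

// Illustrative wiring (requires @sentry/nextjs; hasTelemetryConsent is hypothetical):
// Sentry.init({
//   dsn: SENTRY_DSN,
//   sendDefaultPii: false, // keep IPs and default user data out of events
//   beforeSend: (event) => (hasTelemetryConsent() ? scrubEvent(event) : null),
// });
```

Returning `null` from `beforeSend` drops the event entirely, which is how the consent gate prevents any pre-consent telemetry from being sent.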
Why Fix
Protect user privacy by preventing leakage of personal data to a third party, reduce risk of regulatory non-compliance, and align with privacy-by-design principles. Explicit consent and data minimization reduce potential trust and legal issues.
Route To
Frontend Privacy Engineer
Page
Tester
Pete · Privacy Networking Analyzer
Technical Evidence
Network: POST https://o4506071217143808.ingest.us.sentry.io/api/4506071220944896/envelope/?sentry_version=7&sentry_key=58ff8fddcbe1303f19bc19fbfed46f0f&sentry_client=sentry.javascript.nextjs%2F10.28.0
2
Early AI/LLM endpoint calls on page load
CRIT P9
Conf 9/10 Other
Prompt to Fix
Review the page code and identify all AI/LLM endpoint calls that execute on initial page load. Remove or lazy-load these calls behind a user-initiated event, add a consent banner before any external AI communication, and implement a feature flag that disables AI calls by default until the user explicitly enables them. Provide a minimal, concrete code patch showing how to wrap AI calls in a function that requires user interaction (e.g., a button click) and how to gate network requests with a consent state.
Why it's a bug
Console output shows frequent notices of AI/LLM endpoints detected on page load, implying automatic or unconsented calls to AI services before user interaction. This can lead to privacy leaks, added latency, and degraded user trust if data is sent to external AI endpoints without explicit consent.
Why it might not be a bug
If AI calls are intentionally triggered only after user action or are behind a consent banner, this would be expected. The screenshot/logs indicate detection rather than explicit user-triggered activity, which still signals potential unconsented behavior.
Suggested Fix
Defer all AI/LLM requests until explicit user action or a clear opt-in. Add a privacy/consent prompt before any external AI calls. Ensure requests are logged with user consent and consider lazy-loading or batching AI calls to minimize impact on initial render.
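One way to sketch the suggested fix: a single consent flag that every AI request must pass, with the request itself triggered only by user action. All names here (`grantAiConsent`, `canCallAi`, `askAssistant`) and the endpoint URL are hypothetical illustrations, not the app's real code.

```typescript
// Minimal consent-gating sketch. Names and the endpoint URL are assumptions.

let aiConsentGranted = false;

// Wire this to the "Accept" action of a consent banner.
export function grantAiConsent(): void {
  aiConsentGranted = true;
}

// Single gate every AI request must pass; nothing fires on page load.
export function canCallAi(): boolean {
  return aiConsentGranted;
}

export async function askAssistant(prompt: string): Promise<string> {
  if (!canCallAi()) {
    throw new Error("AI features require explicit user consent");
  }
  // The request leaves the browser only after consent and a user action.
  const res = await fetch("https://ai.example.com/v1/chat", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  return res.text();
}

// Usage: trigger from a click handler, never from page load:
// askButton.addEventListener("click", () => { void askAssistant(input.value); });
```

Keeping the gate in one function makes it easy to audit: any AI call that bypasses `canCallAi()` is a bug by construction.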
Why Fix
Protects user privacy, reduces initial load latency, and aligns with best practices for AI-enabled features.
Route To
Frontend Engineer / Privacy & Security Engineer
Page
Tester
Jason · GenAI Code Analyzer
Technical Evidence
Console: ⚠️ AI/LLM ENDPOINT DETECTED
Network: Unknown AI endpoint URL(s) detected on page load (URLs not explicitly shown in the screenshot).
3
AI endpoint detection markers exposed in frontend logs
CRIT P9
Conf 9/10 SecurityOther
Prompt to Fix
Remove all production UI markers that indicate AI/LLM endpoints. Do not print or expose 'AI endpoint detected' messages in the browser console or network logs. If telemetry is needed, send it to a backend service with proper authentication, and do not surface endpoint URLs or internal IDs in any UI or logs.
Why it's a bug
The page/network activity includes explicit markers '⚠️ AI/LLM ENDPOINT DETECTED' next to resource requests. This reveals internal AI backend endpoints and instrumentation in production UI/logs, which is a security/privacy and reliability concern and could confuse users or expose architecture.
Why it might not be a bug
If this is internal telemetry, the logs could be harmless in a development environment, but they should still be stripped from production builds. Visible markers in production traces remain risky and noteworthy.
Suggested Fix
Remove or gate AI-endpoint detection markers from production builds; ensure any telemetry is sent to a secure backend and not surfaced in UI or public logs. Obfuscate or standardize logs without exposing endpoint details.
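A sketch of the environment gate described above. `debugLog` is a hypothetical wrapper (the app may log differently); in a real Next.js app you would pass `process.env.NODE_ENV` as `env`, or go further and strip console calls at build time via the Next.js compiler's `removeConsole` option.

```typescript
// Minimal sketch: gate debug markers so they never reach production consoles.
// debugLog is a hypothetical wrapper, not an existing helper in the app.

export function shouldSurfaceDebugMarkers(env: string | undefined): boolean {
  return env !== "production";
}

// Replaces raw calls such as console.debug("⚠️ AI/LLM ENDPOINT DETECTED").
export function debugLog(message: string, env: string | undefined): void {
  if (!shouldSurfaceDebugMarkers(env)) return; // silent in production
  console.debug(message);
}
```

Build-time stripping is the stronger option where available, since it removes the strings from the shipped bundle rather than merely silencing them at runtime.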
Why Fix
Prevents leakage of internal AI infrastructure details, reducing attack surface and improving user trust and compliance with privacy expectations.
Route To
Security Engineer
Page
Tester
Jason · GenAI Code Analyzer
Technical Evidence
Console: [DEBUG] JSHandle@error ⚠️ AI/LLM ENDPOINT DETECTED
Network: GET https://lovable.dev/fonts/CameraPlainVariable-c48bd243.woff2 - Status: N/A
31 more issues detected
Exposed /security endpoint accessible without clear authenti...
Repeated srcset parsing warnings and dropped candidates due ...
Incomplete Page Content JSON (unclosed object)
and 28 more...
Unlock All 76 Issues
You're viewing the top 3 issues for Hadoseo.
Sign up at Testers.AI to access the full report with all 76 detected issues, detailed fixes, and continuous monitoring.
Sign Up at Testers.AI or let us run the tests for you