Officeworks
App Quality Report
Powered by Testers.AI
B- (82%)
Quality Score
10
Pages
173
Issues
7.9
Avg Confidence
7.7
Avg Priority
63 Critical · 78 High · 32 Medium
>_ Testers.AI AI Analysis

Officeworks scored B- (82%) with 173 issues across 7 tested pages, ranking #4 of 7 Australian retail sites. That's 52 more issues than the category average of 120.6 (14th percentile).

Top issues to fix immediately:
- "Global DNS resolution failures blocking resource loading": audit and fix DNS/host configuration for all asset and API domains used by the app.
- "404 errors loading critical resources (missing assets)": identify the 404-causing resources by examining the network panel or server logs.
- "Broken/missing hero and product images due to empty image sources": provide valid, hosted image URLs for all banner/product tiles; verify assets exist at the given paths; remove any emp...

Weakest area: accessibility (6/10). Potential issues include small text in dense areas, varied image sizes, and unclear focus indicators on some controls; ensure k...

Quick wins: simplify the above-the-fold area with a clear search bar, category shortcuts, and a prominent call-to-action for quick.... Improve accessibility by increasing type sizes, ensuring color contrast, providing keyboard focus indicators, and adding alt text for....

Qualitative Quality
Officeworks
Category Avg
Best in Category
Issue Count by Type
Content
35
UX
23
A11y
13
Visual
9
Security
4
Pages Tested · 10 screenshots
Detected Issues ยท 173 total
1
Client-side AI/LLM endpoint calls on page load
CRIT P9
Conf 9/10 Other
Prompt to Fix
Audit all JavaScript that triggers AI/LLM endpoints on page load. Remove or lazy-load non-critical calls. Add a user consent flow before any data is sent to external AI services. Replace hard-coded endpoint calls with feature-flag-controlled, opt-in APIs.
Why it's a bug
Console shows repeated AI/LLM endpoint detections on page load, implying client-side calls to AI services are made without user interaction or explicit consent. This can degrade performance, leak data, and surprise users.
Why it might not be a bug
If the site legitimately uses on-demand AI features, these calls could be justified; however, there is no visible user consent prompt or obvious interaction triggering these requests in the static view.
Suggested Fix
Defer all AI/LLM calls until user action or explicit consent is obtained. Introduce feature flags, lazy-load scripts, and clearly communicate data usage in a privacy-consent dialog. Remove non-essential AI calls from initial paint.
Why Fix
Reduces performance impact, avoids unintended data exposure, and improves user trust by making AI usage opt-in.
Route To
Frontend Engineer / Security & Privacy Engineer
Page
Tester
Jason · GenAI Code Analyzer
Technical Evidence
Console: [WARN] AI/LLM ENDPOINT DETECTED
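The suggested fix above (defer AI/LLM calls until user action and explicit consent) could be sketched as follows. This is a minimal illustration, not code from the Officeworks site; the function names, the gate structure, and the endpoint URL are all assumptions:

```typescript
// Sketch of a consent/user-action gate for AI/LLM requests.
// Names (shouldSendAiRequest, callAiEndpoint) are hypothetical.

// Pure gate: a request may fire only when the user has both consented
// and explicitly triggered the feature — never on page load.
function shouldSendAiRequest(consentGranted: boolean, userInitiated: boolean): boolean {
  return consentGranted && userInitiated;
}

// Wrapper around the actual call. Until the gate passes, no network
// traffic is generated and the prompt is never sent or logged.
async function callAiEndpoint(
  prompt: string,
  consentGranted: boolean,
  userInitiated: boolean
): Promise<string | null> {
  if (!shouldSendAiRequest(consentGranted, userInitiated)) {
    return null; // silently skip: no request, no console output
  }
  const res = await fetch("https://ai.example.com/complete", { // placeholder endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  return res.text();
}
```

In this pattern the consent dialog's "Accept" handler is the only place that flips `consentGranted`, which keeps the initial paint free of AI traffic by construction.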
2
Third-Party Tracking: Session ID Shared in Omniture mbox Requests
CRIT P9
Conf 9/10 Other
Prompt to Fix
On the client side, intercept requests to mboxedge.tt.omtrdc.net and strip the sessionId parameter. Replace with a non-identifying token or use server-side aggregation for analytics. Add a consent check before loading third-party tracking scripts. Document data minimization rules and implement CSP/Privacy Policy changes accordingly.
Why it's a bug
Requests to mboxedge.tt.omtrdc.net include a sessionId (and client) in the query string. This exposes a unique user/session identifier to a third-party domain, enabling cross-site tracking without a clear consent indicator or disclosure.
Why it might not be a bug
Third-party analytics/tracking is common; however, including a persistent sessionId in URL parameters to a third party increases privacy risk and may violate user expectations and data minimization principles.
Suggested Fix
Remove or sanitize session identifiers from all third-party tracking requests. Prefer first-party analytics or server-side event processing. If third-party integration is required, implement consent banners, minimize data sent (no session IDs in URLs), and consider using hashed/anonymized identifiers or server-to-server integrations.
Why Fix
Reducing sharing of persistent identifiers with third parties lowers risk of user profiling, data leakage, and regulatory concerns while preserving essential analytics with privacy-friendly alternatives.
Route To
Privacy Engineer
Page
Tester
Pete · Privacy Networking Analyzer
Technical Evidence
Console: ⚠️ POTENTIAL ISSUE: Tracking request detected
Network: POST https://mboxedge35.tt.omtrdc.net/rest/v1/delivery?client=officeworks&sessionId=53003f8a4d6043f8b2c5a5bc5e17eaf2&version=2.11.4
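The sanitization step in the suggested fix could look like the sketch below. `sessionId` matches the parameter observed in the captured mbox request; the other entries in `BLOCKED_PARAMS` are illustrative additions, and the function name is hypothetical:

```typescript
// Sketch: strip identifying parameters from an outbound tracking URL
// before the request leaves the page.

const BLOCKED_PARAMS = ["sessionId", "mboxSession", "mboxPC"];

function sanitizeTrackingUrl(rawUrl: string): string {
  const url = new URL(rawUrl);
  for (const param of BLOCKED_PARAMS) {
    url.searchParams.delete(param); // drop identifier, keep other params
  }
  return url.toString();
}
```

In a real build this would run inside a fetch/XHR interceptor, or the request would be replaced entirely by a server-side, consent-checked pipeline, so the session identifier never reaches the third-party domain.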
3
AI/LLM endpoint calls detected on page load (defer to user action)
CRIT P9
Conf 8/10 Security · Other
Prompt to Fix
Remove on-load LLM endpoint calls. Gate AI requests behind a user action or explicit consent. Do not log prompts to the console. Implement lazy-loading and proper privacy guards.
Why it's a bug
Console shows 'AI/LLM ENDPOINT DETECTED', indicating LLM calls occur during initial page load. This can introduce latency, data exposure, and prompts being inadvertently logged. Should be deferred until user action or with explicit consent.
Why it might not be a bug
If the LLM usage is essential for initial rendering, the on-load calls could be intentional; even so, they should be gated and clearly documented, and the static view shows no such disclosure or consent UI.
Suggested Fix
Move LLM interactions to explicit user-initiated events or lazy-load after user action. Remove any sensitive prompts from console/logs and implement proper consent/privacy controls.
Why Fix
Reduces latency, improves privacy, and aligns with best practices for dynamic AI usage in UI.
Route To
Frontend/Platform Engineer
Page
Tester
Jason · GenAI Code Analyzer
Technical Evidence
Console: "⚠️ AI/LLM ENDPOINT DETECTED"
+170
170 more issues detected  View all →
AI/LLM endpoints loaded/detected on page load (privacy risk)
Hardcoded/ubiquitous analytics and AI-related endpoints with...
AI/LLM endpoints loaded/detected on page load
and 167 more...
Unlock All 173 Issues
You're viewing the top 3 issues for Officeworks.
Sign up at Testers.AI to access the full report with all 173 detected issues, detailed fixes, and continuous monitoring.
Sign Up at Testers.AI or let us run the tests for you