🧩 Components
The project consists of:
- Django app: `apps.search_console_tools`
- Playwright core: `apps/search_console_tools/core/`
- Models: `Account`, `CommandRequest`, `DomainAccountMapping`
- Celery tasks: `apps/search_console_tools/tasks.py` (callbacks + signals)
- API: `apps.api` (DRF)
- v1 endpoints: `apps/api/v1/gsc/`
- Admin panel: Django Admin (`/admin/`)
- Background execution: Celery worker + Celery Beat + `command_request_starter`
- Infrastructure: PostgreSQL + Redis
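The three models listed above can be sketched as plain data structures. This is a simplified illustration only — field names beyond those mentioned in this document (e.g. `username`, `domain`) are assumptions; the real definitions are Django models in `apps.search_console_tools`:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Account:
    # Google account driven by the Playwright engine; the username is
    # assumed to key the browser-data profile directory.
    username: str


@dataclass
class CommandRequest:
    # One queued command. Status values come from the execution flow:
    # NEW -> IN_PROGRESS -> COMPLETED / FAILED.
    account: Account
    command: str
    status: str = "NEW"
    result: Optional[dict] = None
    errors: Optional[str] = None
    callback_url: Optional[str] = None


@dataclass
class DomainAccountMapping:
    # Updated after a successful get_account_domains command.
    domain: str
    account: Account


acc = Account(username="user@example.com")
req = CommandRequest(account=acc, command="get_account_domains")
```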
🔁 End-to-end command execution flow
High-level pipeline:
- A `CommandRequest` is created with status `NEW` (initiator: API/ADMIN/SCHEDULER)
- `command_request_starter` periodically scans `NEW` requests and dispatches them to Celery
  - At most one `NEW` request per account is scheduled at a time
  - If an account has an `IN_PROGRESS` request, new ones are not scheduled
- The Celery worker executes `run_command_request_task(request_id)`
- `CommandRequestExecutor` sets `IN_PROGRESS` and runs `GoogleSearchConsoleEngine.run_command(...)`
- The engine starts a persistent Playwright Chromium session, authenticates to Google, and executes the command
- When finished, the system stores:
  - `status=COMPLETED` or `FAILED`
  - `result` and/or `errors`
  - a screenshot (best-effort)
- Signals:
  - after a successful `get_account_domains`, `DomainAccountMapping` is updated
  - if `callback_url` is set, an HTTP callback is sent (async)
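The status lifecycle in the pipeline above can be illustrated with a minimal executor sketch. The function names here are illustrative assumptions; in the real system this logic lives in `CommandRequestExecutor`, which delegates to `GoogleSearchConsoleEngine.run_command(...)`:

```python
def execute(request: dict, run_command) -> dict:
    """Drive one request through IN_PROGRESS -> COMPLETED / FAILED."""
    request["status"] = "IN_PROGRESS"
    try:
        # Real system: GoogleSearchConsoleEngine.run_command(...)
        request["result"] = run_command(request["command"])
        request["status"] = "COMPLETED"
    except Exception as exc:
        # On any failure, store the error text instead of a result.
        request["errors"] = str(exc)
        request["status"] = "FAILED"
    return request


def run_ok(cmd):
    return {"domains": []}


def run_fail(cmd):
    raise RuntimeError("login failed")


ok = execute({"command": "get_account_domains"}, run_ok)
bad = execute({"command": "get_account_domains"}, run_fail)
```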
🧠 Concurrency model
The “one command per account at a time” rule is enforced by `command_request_starter`.
This reduces UI conflicts and lowers the risk of Google-side blocks.
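The scheduling rule can be sketched as a pure function. The actual query inside `command_request_starter` is not shown in this document, so the data shapes below are assumptions:

```python
def schedulable(requests: list[dict]) -> list[dict]:
    """Pick at most one NEW request per account, skipping accounts
    that already have an IN_PROGRESS request."""
    busy = {r["account"] for r in requests if r["status"] == "IN_PROGRESS"}
    picked, seen = [], set()
    for r in requests:  # assumed FIFO order by creation time
        if (r["status"] == "NEW"
                and r["account"] not in busy
                and r["account"] not in seen):
            picked.append(r)
            seen.add(r["account"])
    return picked


queue = [
    {"id": 1, "account": "a", "status": "IN_PROGRESS"},
    {"id": 2, "account": "a", "status": "NEW"},  # skipped: account "a" is busy
    {"id": 3, "account": "b", "status": "NEW"},  # picked
    {"id": 4, "account": "b", "status": "NEW"},  # skipped: one per account
]
# -> only request id 3 is dispatched this scan
```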
🗂️ Browser profile storage (browser-data)
The engine uses a Playwright persistent context and stores profile data in:
`browser-data/<account_username>`
This allows session reuse (cookies/localStorage) and reduces repeated logins.
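A sketch of how such a per-account profile directory is typically wired into Playwright. The directory naming follows the path above; the helper function and the engine's actual launch options are assumptions:

```python
from pathlib import Path


def profile_dir(account_username: str, root: str = "browser-data") -> Path:
    """Per-account profile dir, e.g. browser-data/user@example.com."""
    d = Path(root) / account_username
    d.mkdir(parents=True, exist_ok=True)
    return d


# With Playwright, a persistent context reuses cookies/localStorage from
# this directory across runs (sketch; requires `pip install playwright`):
#
# from playwright.sync_api import sync_playwright
# with sync_playwright() as p:
#     ctx = p.chromium.launch_persistent_context(
#         user_data_dir=str(profile_dir("user@example.com")),
#         headless=True,
#     )
#     page = ctx.new_page()
#     ...
#     ctx.close()
```

Because the context is persistent, a successful Google login in one run carries over to the next, which is what keeps repeated logins (and the associated challenge prompts) to a minimum.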