🧩 Components

The project consists of:

  • Django app apps.search_console_tools
    • Playwright core: apps/search_console_tools/core/
    • Models: Account, CommandRequest, DomainAccountMapping (see the model sketch after this list)
    • Celery tasks: apps/search_console_tools/tasks.py
    • Callbacks + signals
  • API: apps.api (DRF)
    • v1 endpoints: apps/api/v1/gsc/
  • Admin panel: Django Admin (/admin/)
  • Background execution: Celery worker + Celery Beat + command_request_starter
  • Infrastructure: PostgreSQL + Redis
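
The CommandRequest model is the unit of work that the rest of the pipeline operates on. A minimal sketch of what it might look like, assuming the statuses and fields named in this document (status, initiator, result, errors, screenshot, callback_url); everything else (field types, the `finished_at`/`created_at` fields, the app label used in the ForeignKey) is an illustrative assumption, not the actual schema:

```python
# Illustrative sketch only -- not the real schema of apps.search_console_tools.
from django.db import models


class CommandRequest(models.Model):
    class Status(models.TextChoices):
        NEW = "NEW"
        IN_PROGRESS = "IN_PROGRESS"
        COMPLETED = "COMPLETED"
        FAILED = "FAILED"

    class Initiator(models.TextChoices):
        API = "API"
        ADMIN = "ADMIN"
        SCHEDULER = "SCHEDULER"

    # "search_console_tools.Account" is an assumed app label / model path.
    account = models.ForeignKey("search_console_tools.Account", on_delete=models.CASCADE)
    command = models.CharField(max_length=100)  # e.g. "get_account_domains"
    status = models.CharField(max_length=20, choices=Status.choices, default=Status.NEW)
    initiator = models.CharField(max_length=20, choices=Initiator.choices)
    result = models.JSONField(null=True, blank=True)
    errors = models.JSONField(null=True, blank=True)
    screenshot = models.FileField(upload_to="screenshots/", null=True, blank=True)
    callback_url = models.URLField(blank=True)
    created_at = models.DateTimeField(auto_now_add=True)
    finished_at = models.DateTimeField(null=True, blank=True)
```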

🔁 End-to-end command execution flow

High-level pipeline:

  1. A CommandRequest is created with status NEW (initiator: API/ADMIN/SCHEDULER)
  2. command_request_starter periodically scans NEW requests and dispatches them to Celery
    • At most one NEW request per account is scheduled at a time
    • If an account has an IN_PROGRESS request, new ones are not scheduled
  3. Celery worker executes run_command_request_task(request_id) (sketched after this list)
  4. CommandRequestExecutor sets IN_PROGRESS and runs GoogleSearchConsoleEngine.run_command(...)
  5. The engine starts a persistent Playwright Chromium session, authenticates to Google, and executes the command
  6. When finished, the system stores:
    • status = COMPLETED or FAILED
    • result and/or errors
    • a screenshot (best-effort)
  7. Signals:
    • after a successful get_account_domains, DomainAccountMapping is updated
    • if callback_url is set, an HTTP callback is sent (async)
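
The worker side of steps 3–6 can be sketched roughly as follows, assuming the CommandRequest fields shown earlier. In the real code the task delegates to CommandRequestExecutor and GoogleSearchConsoleEngine; the import path of the engine and the exact signatures are assumptions:

```python
# Rough sketch of the worker-side flow; the real task/executor split may differ.
from celery import shared_task
from django.utils import timezone

from apps.search_console_tools.models import CommandRequest
# Assumed module path for the engine living under apps/search_console_tools/core/.
from apps.search_console_tools.core.engine import GoogleSearchConsoleEngine


@shared_task
def run_command_request_task(request_id: int) -> None:
    request = CommandRequest.objects.get(pk=request_id)

    # Mark the request as running before touching the browser.
    request.status = CommandRequest.Status.IN_PROGRESS
    request.save(update_fields=["status"])

    try:
        # The engine owns the persistent Playwright session and Google auth.
        engine = GoogleSearchConsoleEngine(account=request.account)
        request.result = engine.run_command(request.command)
        request.status = CommandRequest.Status.COMPLETED
    except Exception as exc:  # store the failure on the request instead of retrying here
        request.errors = {"message": str(exc)}
        request.status = CommandRequest.Status.FAILED
    finally:
        request.finished_at = timezone.now()
        request.save()
        # post_save signals then update DomainAccountMapping (after a successful
        # get_account_domains) and, if callback_url is set, send the async callback.
```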

🧠 Concurrency model

The “one command per account at a time” rule is enforced by command_request_starter. This reduces UI conflicts and lowers the risk of Google-side blocks.
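A sketch of the gating idea, assuming the CommandRequest fields shown earlier; the actual command_request_starter may select and dispatch requests differently:

```python
# Sketch of per-account gating: skip accounts with an IN_PROGRESS request,
# dispatch at most one (the oldest) NEW request per idle account.
from apps.search_console_tools.models import CommandRequest
from apps.search_console_tools.tasks import run_command_request_task


def dispatch_pending_requests() -> None:
    busy_accounts = CommandRequest.objects.filter(
        status=CommandRequest.Status.IN_PROGRESS
    ).values_list("account_id", flat=True)

    idle_accounts_with_new = (
        CommandRequest.objects.filter(status=CommandRequest.Status.NEW)
        .exclude(account_id__in=busy_accounts)
        .values_list("account_id", flat=True)
        .distinct()
    )

    for account_id in idle_accounts_with_new:
        request = (
            CommandRequest.objects.filter(
                account_id=account_id, status=CommandRequest.Status.NEW
            )
            .order_by("created_at")
            .first()
        )
        run_command_request_task.delay(request.id)
```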

🗂️ Browser profile storage (browser-data)

The engine uses a Playwright persistent context and stores profile data in:

  • browser-data/<account_username>

This enables session reuse (cookies/localStorage) and avoids repeated logins.
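
A minimal sketch of how a per-account persistent profile could be opened with Playwright's sync API; the actual engine configures the context, authentication, and command execution itself:

```python
# Sketch only: open a persistent Chromium context keyed by account username.
from pathlib import Path

from playwright.sync_api import sync_playwright


def open_account_context(account_username: str) -> None:
    user_data_dir = Path("browser-data") / account_username

    with sync_playwright() as p:
        # launch_persistent_context reuses cookies/localStorage stored in
        # user_data_dir, so an already-authenticated account skips login.
        context = p.chromium.launch_persistent_context(
            user_data_dir=str(user_data_dir),
            headless=True,
        )
        page = context.new_page()
        page.goto("https://search.google.com/search-console")
        # ... run the command against the page here ...
        context.close()
```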
