Architecture¶
This page is a 10-minute read that maps the conceptual pieces of QQA4CO to concrete source files. After reading it you should know exactly where to look in the code for any feature.
The big picture¶
QQA4CO is a thin set of orthogonal contracts plus one solver loop:
```
            ┌────────────────────────────────────────────────┐
            │                  qqa.anneal()                  │
            │ (the only solver loop in src/qqa/annealing.py) │
            └────────────────────────┬───────────────────────┘
                                     │ delegates to
          ┌──────────────────────────┼──────────────────────────┐
          ▼                          ▼                          ▼
┌──────────────────┐  ┌────────────────────────────┐  ┌────────────────────┐
│ COProblem        │  │ Relaxation (Protocol)      │  │ Schedule (callable)│
│  loss_fn(x)      │  │  init / forward / project  │  │  (epoch, T) -> bg  │
│  score_summary() │  │  penalty / diversity       │  └────────────────────┘
└──────────────────┘  │  perturb_ / num_variables  │
                      └────────────────────────────┘
                                     │
                                     ▼
                  ┌─────────────────────────────────────┐
                  │ Callback (Callback ABC)             │
                  │  on_train_begin / on_epoch_end /    │
                  │  on_train_end (mutates hyperparams) │
                  └─────────────────────────────────────┘
```
Everything else — the CLI, the Streamlit dashboard, the visualisation
module, the optional `qqa.pignn` backend — is a consumer of these
four contracts.
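Rendered as rough Python stubs, the contracts look like the sketch below. The signatures are paraphrased from the diagram and the data-flow description on this page, not copied from `src/qqa/`; argument lists, return types, and the `Callback` import path are assumptions.

```python
# Rough shape of the four contracts. Paraphrased from this page only --
# argument lists and return types are assumptions, not the verbatim
# definitions in src/qqa/.
from typing import Callable, Protocol
import torch

class COProblem(Protocol):
    def loss_fn(self, x: torch.Tensor) -> torch.Tensor: ...       # per-sample losses
    def score_summary(self, best_sol: torch.Tensor) -> str: ...   # human-readable result

class Relaxation(Protocol):
    num_variables: int
    def init(self, sol_size: int, problem: COProblem, device) -> torch.Tensor: ...
    def forward(self, x: torch.Tensor) -> torch.Tensor: ...
    def project(self, x: torch.Tensor) -> torch.Tensor: ...       # continuous -> discrete
    def penalty(self, x: torch.Tensor) -> torch.Tensor: ...       # QQA relaxation penalty
    def diversity(self, x: torch.Tensor) -> torch.Tensor: ...
    def perturb_(self, x: torch.Tensor, lr: float, temp: float) -> None: ...  # in-place noise

# A schedule is just a callable mapping (epoch, num_epochs) to the penalty weight bg.
Schedule = Callable[[int, int], float]

class Callback:  # ABC living in src/qqa/callbacks.py
    def on_train_begin(self, state) -> None: ...
    def on_epoch_end(self, state) -> None: ...
    def on_train_end(self, state) -> None: ...
```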
Data flow for a single solve¶
- User builds a problem. `qqa.MaximumIndependentSet(g, penalty=2, device='cuda')` constructs a `COProblem` whose `Q_mat` lives on the right device and whose `relaxation` is a `BinaryRelaxation()`.
- User calls `qqa.anneal(problem, sol_size=128, num_epochs=2000)`. `anneal` initialises the latent tensor `x = relax.init(sol_size, problem, device)` — shape `(B, N)` for binary, `(B, N, K)` for categorical, `(B, I, N)` for batched-instance.
- For every epoch the loop:
    - computes `bg = schedule(epoch, num_epochs)`,
    - forwards `x_fwd = relax.forward(x)` and gets `losses = problem.loss_fn(x_fwd)`,
    - adds `penalties * bg` (the QQA continuous-relaxation penalty) and a diversity term scaled by `div_param`,
    - back-propagates and steps AdamW,
    - applies an in-place `relax.perturb_(x, lr, temp)` step (Langevin noise + clamping),
    - projects to discrete with `relax.project(x)` to evaluate the true objective and update the running best,
    - fires `on_epoch_end(state)` on every callback.
- At the end `anneal` calls `problem.score_summary(best_sol)` to produce the human-readable result and packages everything into an `AnnealResult` dataclass. A minimal end-to-end call is sketched below.
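Put together, a single solve is just the two calls quoted above plus reading the result. The sketch below is assembled from those calls; building `g` with networkx is an illustrative assumption, and this page does not document the individual fields of `AnnealResult`.

```python
# Minimal end-to-end solve, assembled from the calls quoted in the steps above.
# Constructing `g` with networkx is an assumption made for illustration.
import networkx as nx
import qqa

g = nx.erdos_renyi_graph(n=100, p=0.05, seed=0)

problem = qqa.MaximumIndependentSet(g, penalty=2, device='cuda')  # or device='cpu'
result = qqa.anneal(problem, sol_size=128, num_epochs=2000)       # AnnealResult dataclass

print(result)  # includes the human-readable summary from problem.score_summary(best_sol)
```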
Why this decomposition¶
- One annealer for every variable kind. Binary, spin, categorical, permutation problems all use the same `anneal()` because the variable-specific bits live behind the `Relaxation` protocol. Adding a fifth variable kind is a single new class — no edits to the loop.
- Problems are pure functions of `x`. A problem only has to know how to compute its loss; it never sees the optimiser, the schedule, or the parallel batch dimension semantics. This is what makes `qqa.UserProblem` work — wrap any `loss_fn(x)` and you have a first-class problem (see the sketch after this list).
- Callbacks are read-only by default. They see the full state but the only sanctioned write target is `state.hyperparams` (a mutable dict). This is enough to implement `AutoDivTuner` and others without inviting callbacks to silently corrupt the training loop.
- Backends are functions, not frameworks. A "backend" is anything that takes a `COProblem` and returns an `AnnealResult`. The `qqa.pignn` trainers do not subclass anything; they just satisfy that contract, which is why they reuse the same downstream tooling.
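As a concrete illustration of the second point, wrapping a hand-written loss is all it takes to get a problem the annealer accepts. The exact `qqa.UserProblem` constructor arguments below are assumptions; this page only promises that any `loss_fn(x)` can be wrapped.

```python
# Sketch of qqa.UserProblem: wrap a plain loss function and anneal it.
# The keyword arguments (loss_fn=, num_variables=) are assumptions made for
# illustration; only "wrap any loss_fn(x)" is stated on this page.
import torch
import qqa

def my_loss_fn(x: torch.Tensor) -> torch.Tensor:
    # x arrives batched, shape (B, N); return one loss value per batch entry.
    # Toy objective: reward ones while penalising fractional values.
    return (x * (1.0 - x)).sum(dim=-1) - x.sum(dim=-1)

problem = qqa.UserProblem(loss_fn=my_loss_fn, num_variables=64)  # hypothetical signature
result = qqa.anneal(problem, sol_size=128, num_epochs=500)
```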
Where extension points live in the source¶
| Extension | File | Lines | Note |
|---|---|---|---|
| New problem | `src/qqa/problems/*.py` | varies | Subclass `COProblem` |
| New relaxation | `src/qqa/relaxation.py` | ~220 | Implement the `Relaxation` Protocol |
| New schedule | anywhere | n/a | Any `(epoch, T) -> float` callable |
| New callback | `src/qqa/callbacks.py` (or external) | ~170 | Subclass `Callback` |
| New backend | `src/qqa/<name>/` (e.g. `pignn/`) | ~700 (reference) | Return `AnnealResult` |
See Extending QQA4CO for worked examples of each.
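The schedule row is the cheapest one to try: any `(epoch, T) -> float` callable qualifies. A minimal sketch follows; how the callable is handed to `anneal` (e.g. via a `schedule=` keyword) is an assumption, not something this page documents.

```python
# A custom schedule is just a callable (epoch, T) -> float; the returned value
# is the penalty weight bg used inside the epoch loop. Passing it via a
# `schedule=` keyword is an assumption, not documented on this page.
def linear_ramp(epoch: int, num_epochs: int) -> float:
    """Grow bg linearly from 0 to 1 over the run."""
    return epoch / max(num_epochs - 1, 1)

# result = qqa.anneal(problem, sol_size=128, num_epochs=2000, schedule=linear_ramp)
```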
The optional pignn backend¶
`qqa.pignn` is the canonical "second backend" example. It illustrates three idioms worth copying:
- Heavy deps stay opt-in. `torch_geometric` is never imported from the top-level `qqa.__init__`; `qqa.pignn._import.require_pyg` raises an actionable error if the extra is missing (the idiom is sketched below).
- Trainers reuse `BinaryRelaxation.penalty` so the CRA loss and the QQA loss are numerically identical for `curve_rate=2`, making head-to-head comparisons trustworthy.
- They return `qqa.AnnealResult`, which is why `qqa solve --backend pignn ...` and the Streamlit dashboard work with no extra code.
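The first idiom looks roughly like the sketch below. This is not the actual body of `qqa.pignn._import.require_pyg`; it only illustrates keeping the heavy import out of `qqa.__init__` and failing with an actionable message when the optional extra is missing.

```python
# Illustration of the opt-in heavy-dependency idiom, NOT the real
# qqa.pignn._import.require_pyg. The point is that torch_geometric is only
# imported when the pignn backend is actually used.
def require_pyg():
    try:
        import torch_geometric
    except ImportError as err:
        raise ImportError(
            "The qqa.pignn backend needs torch_geometric. "
            "Install the optional extra for it (see the install docs)."
        ) from err
    return torch_geometric
```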
CLI / GUI / scripts as "external consumers"¶
The CLI, the Streamlit app, and the `scripts/` benchmarks all call
`qqa.anneal` (or `qqa.pignn.train_*`) and then read `AnnealResult`.
None of them peek inside the solver loop. That separation is what lets
you extend the solver without touching the user-facing tooling — the
tooling is bound to the contract, not the implementation.
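In code, "bound to the contract" means a consumer can be written against any solve function that returns an `AnnealResult`, without knowing which backend ran. A minimal sketch:

```python
# Sketch of a backend-agnostic consumer: it depends only on the AnnealResult
# contract, so solve_fn may be qqa.anneal or any qqa.pignn.train_* function.
from typing import Any, Callable

def run_and_report(problem, solve_fn: Callable[..., Any], **solver_kwargs):
    result = solve_fn(problem, **solver_kwargs)  # an AnnealResult, whatever the backend
    print(result)                                # read only the returned dataclass
    return result

# run_and_report(problem, qqa.anneal, sol_size=128, num_epochs=2000)
```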