The trek goes on

0048

Specifying queue order using parametric types, abstractly

Posted at 16:31 on 19 January 2026, and revised at 16:31 on 29 January 2026

Continuing 0047, I’ve finally worked out a proof that any two implementations of a parametrically quantified abstract Queue type are bisimilar, so the type acts as a tight specification that fully determines the extensional behaviour. In the meantime I’ve changed the definition a couple of times. At one point I switched to the ‘time-indexed’ version:

data ListF (T : Set) (S : T → T → Set) (A : T → Set) (X : T → T → Set)
       : T → T → Set where
  nil  : ∀ {u}                                     → ListF T S A X u u
  cons : ∀ {l m n u} → S l m → S m n → A m → X n u → ListF T S A X l u

record Queue (T : Set) (S : T → T → Set) (A : T → Set) : Set₁ where
  field
    State : T → T → Set
    empty : ∀ {u} → State u u
    push  : ∀ {l u m v} → S u m → S m v → A m → State l u → State l v
    pop   : ∀ {l u} → State l u → ListF T S A State l u

But applying the parametricity translation to this definition resulted in scarily complex definitions (and I had to write parametricity translation combinators to make the translation manageable), which slowed me down for a while. Then it came to me that I can actually simplify this definition: instead of always pushing or popping two S-proofs enclosing an element, with the whole thing having type S u m × S m v × A m, I can just fuse them into a single R u v, where R can be instantiated to store the element too. So the definition is changed to

data ListF (I : Set) (R : I → I → Set) (X : I → I → Set)
       : I → I → Set where
  nil  : ∀ {u}                     → ListF I R X u u
  cons : ∀ {l m u} → R l m → X m u → ListF I R X l u

record Queue (I : Set) (i₀ : I) (R : I → I → Set) : Set₁ where
  field
    State : I → I → Set
    empty : State i₀ i₀
    push  : ∀ {l u v} → R u v → State l u → State l v
    pop   : ∀ {l u} → State l u → ListF I R State l u

Notably, the least fixed-point of ListF I R becomes the familiar definition of reflexive transitive closure. I rename the parameter T (short for ‘time’) to the more generic I since the time metaphor may be wrongly associated with the order in which pushes and pops happen — instead, the indices should be thought of as serial numbers assigned to elements as they come. A more substantial change (suggested by Shu-Hung You) is that empty starts from a fixed i₀ — this helps to simplify the bisimilarity proof later.
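
Spelled out directly (a sketch of mine; the name Star is only for illustration), that least fixed-point is the usual inductive closure:

data Star {I : Set} (R : I → I → Set) : I → I → Set where
  nil  : ∀ {u}                          → Star R u u
  cons : ∀ {l m u} → R l m → Star R m u → Star R l u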

Our subject of study is the parametrically quantified type

PQueue : Set₁
PQueue = ∀ I i₀ R → Queue I i₀ R

A queue implementation should have this type, but to actually use the implementation, we instantiate it to a type of ‘simple queues’ with elements having a given type A : Set:

SQueue : Set → Set₁
SQueue A = Queue ⊤ tt (λ _ _ → A)

inst : PQueue → (A : Set) → SQueue A
inst Q A = Q ⊤ tt (λ _ _ → A)
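
To make the interface concrete, here is a sketch of one implementation having type PQueue (my own example, not from the post): it stores the whole queue as the Star closure sketched above, pushing at the back and popping at the front.

listQueue : PQueue
listQueue I i₀ R = record
  { State = Star R
  ; empty = nil
  ; push  = λ r s → snoc s r
  ; pop   = popFront
  }
  where
    -- append a new element at the back of the queue
    snoc : ∀ {l u v} → Star R l u → R u v → Star R l v
    snoc nil         r = cons r nil
    snoc (cons r' s) r = cons r' (snoc s r)

    -- expose the front of the queue
    popFront : ∀ {l u} → Star R l u → ListF I R (Star R) l u
    popFront nil        = nil
    popFront (cons r s) = cons r s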

For simplicity, I formulate bisimilarity (as existence of a bisimulation relation) only for SQueue:

data ListFᴿ {A : Set} {X X' : Set} (Xᴿ : X → X' → Set)
        : ListF ⊤ (λ _ _ → A) (λ _ _ → X ) tt tt
        → ListF ⊤ (λ _ _ → A) (λ _ _ → X') tt tt → Set where
  nilᴿ  : fx ≡ nil      → fx' ≡ nil
        → ListFᴿ Xᴿ fx fx'
  consᴿ : fx ≡ cons a x → fx' ≡ cons a' x' → a ≡ a' → Xᴿ x x'
        → ListFᴿ Xᴿ fx fx'

record QueueBisim (q q' : SQueue A) : Set₁ where
  field
    Stateᴿ : q .State tt tt → q' .State tt tt → Set
    emptyᴿ : Stateᴿ (q .empty) (q' .empty)
    pushᴿ  : (a : A)
           → Stateᴿ s s' → Stateᴿ (q .push a s) (q' .push a s')
    popᴿ   : Stateᴿ s s' → ListFᴿ Stateᴿ (q .pop s) (q' .pop s')

And the theorem we’ll prove is

bisim : (Q Q' : PQueue) → Param Q → Param Q'
      → (A : Set) → QueueBisim (inst Q A) (inst Q' A)

where Param, the (unary) parametricity of PQueue, is standard (and will be given later when we need it). Note that QueueBisim is a specialised result of applying the binary parametricity translation to PQueue, so an instantiation of any Q : PQueue will necessarily be related to itself due to (binary) parametricity, that is, QueueBisim (inst Q A) (inst Q A); what’s interesting/non-trivial here is that QueueBisim holds even for instantiations of different PQueues.

As I surmised from the beginning, the proof idea is to instantiate the parameter I of a PQueue to ℕ to produce a ‘counting queue’ that assigns distinct serial numbers to element types, and relate this counting queue to the SQueue instantiated from the same PQueue by parametricity. To be concrete, consider

CQueue = (P : ℕ → Set) → Queue ℕ 0 (λ i j → suc i ≡ j × P i)

Even though CQueue is more concrete than PQueue in the sense that it abstracts over only the element types P (indexed by serial numbers) and can be derived by instantiating PQueue, our intuition about PQueue being a tight specification is preserved: push of CQueue has type

{l u v : ℕ} → suc u ≡ v × P u → State l u → State l v

so starting from empty : State 0 0, we are required to push elements of distinct types P 0, P 1, P 2, and so on in order — each push increments the second index in the state type to record the next available serial number. Subsequently we can pop from a State 0 3; the result cannot be nil since 0 is not 3, so it has to be a cons with an element of type P 0 and a new State 1 3 — each successful pop increments the first index in the state type, which is the serial number (in the type) of the element to be popped next. Intuitively, since the elements previously pushed into the queue are assigned distinct serial numbers, specifying the serial number of the element to be popped should determine which element is popped. However, I went too far and made a wrong conjecture that the indices alone would completely determine the extensional behaviour of the counting queue regardless of the implementation detail — specifically, I couldn’t rule out the possibility that popping from some State u u might produce a cons. (The wrong conjecture got me stuck for a long time — I went as far as attempting to use transitivity of general bisimulations (which turned out to be very tedious to formulate) to concatenate the parametricity-induced bisimulations between the two SQueues and their corresponding counting queues with a hand-crafted bisimulation between the counting queues in the middle, but that didn’t work.)

Eventually I think the proof should be understood mainly as extending classic parametricity-based proofs, which we’ll revisit here as warm-up exercises. Suppose that we push two elements of types P 0 and P 1 into a CQueue and then pop an element, which has type P 0 (if the pop is successful). This situation is comparable with a parametric function f : {P : ℕ → Set} → P 0 → P 1 → P 0, whose result can be uniquely determined using parametricity. What’s noteworthy is that the proof needs to invoke (unary) parametricity for every input: Suppose that the unary parametricity of f is established by a function

fᴾ : {P  : ℕ → Set} (Pᴾ : (n : ℕ) → P n → Set)
   → {p₀ : P 0}    → Pᴾ 0 p₀
   → {p₁ : P 1}    → Pᴾ 1 p₁
                   → Pᴾ 0 (f p₀ p₁)

That is, any invariant Pᴾ on elements is preserved by f. Then for every p₀ : P 0 and p₁ : P 1, choose the invariant

PMap : (n : ℕ) → P n → Set
PMap  0     p  =  p ≡ p₀
PMap  1     p  =  p ≡ p₁
PMap (2+ _) _  =  ⊥

and we can construct fᴾ PMap refl refl : f p₀ p₁ ≡ p₀, that is, prove that PMap holds for the inputs p₀ and p₁ and conclude that PMap holds for the output f p₀ p₁ too. The same invariant also works for determining the behaviour of {P : ℕ → Set} → P 0 → P 1 → P 1, which is comparable to a successful second pop. If we successfully popped a third time, the situation would be comparable to g : {P : ℕ → Set} → P 0 → P 1 → P 2, which we can prove to be non-existent by deriving a contradiction (so popping a third element is impossible): if g existed, there would be a proof gᴾ of its unary parametricity having almost the same type as fᴾ except ending with Pᴾ 2 (g p₀ p₁), which is ⊥, so gᴾ PMap refl refl : ⊥. (We could also prove more directly that a particular instance of g leads to contradiction by instantiating the parameter P to, for example, _< 2. But reasoning with gᴾ works too.)
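
Packaged as a self-contained sketch (mine, with f and fᴾ taken as module parameters), the warm-up argument looks like this:

module Warmup {P : ℕ → Set} (p₀ : P 0) (p₁ : P 1)
  (f  : {Q : ℕ → Set} → Q 0 → Q 1 → Q 0)
  (fᴾ : {Q : ℕ → Set} (Qᴾ : (n : ℕ) → Q n → Set)
      → {q₀ : Q 0} → Qᴾ 0 q₀
      → {q₁ : Q 1} → Qᴾ 1 q₁
                   → Qᴾ 0 (f q₀ q₁)) where

  PMap : (n : ℕ) → P n → Set
  PMap  0     p  =  p ≡ p₀
  PMap  1     p  =  p ≡ p₁
  PMap (2+ _) _  =  ⊥

  -- instantiating parametricity at PMap pins down the output of f
  f-determined : f p₀ p₁ ≡ p₀
  f-determined = fᴾ PMap refl refl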

Back to proving the bisim theorem. We’ll invoke parametricity for every input to a queue implementation, which is a sequence of operations that have been performed on the queue. Such an operation sequence arises naturally when using a QueueBisim proof, which says that two queue states are kept related whenever the same operation is performed on both states (and gives the same result if the operation produces something observable — the result of a pop in the case of queues). As more operations are performed on both queues, we get an operation sequence. In fact, we will define that two queue states are related exactly when they are obtained by performing the same operation sequence. First define a data type of operation sequences, parametrised by an element type A and indexed by the numbers of pops and pushes:

data OpSeq (A : Set) : ℕ → ℕ → Set where
  emptyOp :                          OpSeq A  zero      zero
  pushOp  : A → OpSeq A #pop #push → OpSeq A      #pop (suc #push)
  popOp   :     OpSeq A #pop #push → OpSeq A (suc #pop)     #push

We can interpret an OpSeq as a predicate on states of an SQueue, whose proof is a sequence of state transition equalities witnessing that a state is reached by performing the operation sequence leftwards/outwards:

data RunOpSeq : OpSeq A #pop #push → (q : SQueue A)
              → q .State tt tt → Set where
  emptyOp : q .empty ≡ s
          → RunOpSeq emptyOp q s
  pushOp  : RunOpSeq ops q s' → q .push a s' ≡ s
          → RunOpSeq (pushOp a ops) q s
  popOp   : RunOpSeq ops q s' → q .pop s' ≡ cons a s
          → RunOpSeq (popOp ops) q s

(RunOpSeq could also be defined as a function by induction on OpSeq, but the constructor syntax allows us to hide existentially quantified variables as implicit arguments; see the aside after the next definition.) Now we can write down the definition that two queue states s and s' are related exactly when there is an operation sequence which, when run using the respective queue implementations q and q' : SQueue A, leads to s and s':

record BisimState (q q' : SQueue A)
  (s : q .State tt tt) (s' : q' .State tt tt) : Set where
  field
    {#pop #push} : ℕ
    ops  : OpSeq A #pop #push
    run  : RunOpSeq ops q  s
    run' : RunOpSeq ops q' s'
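
(As an aside, the function version of RunOpSeq mentioned above could look like the following sketch of mine, where the hidden intermediate states and popped elements become explicit Σ-types from Data.Product:

RunOpSeqF : OpSeq A #pop #push → (q : SQueue A) → q .State tt tt → Set
RunOpSeqF          emptyOp        q s = q .empty ≡ s
RunOpSeqF          (pushOp a ops) q s =
  Σ[ s' ∈ q .State tt tt ] RunOpSeqF ops q s' × (q .push a s' ≡ s)
RunOpSeqF {A = A}  (popOp ops)    q s =
  Σ[ s' ∈ q .State tt tt ] Σ[ a ∈ A ]
    RunOpSeqF ops q s' × (q .pop s' ≡ cons a s)

End of aside.)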

We can then start proving bisim by supplying BisimState above as the definition of Stateᴿ. The emptyᴿ/pushᴿ case simply starts/extends the operation sequence, whereas the popᴿ case is where the real work is done.

bisim : (Q Q' : PQueue) → Param Q → Param Q'
      → (A : Set) → QueueBisim (inst Q A) (inst Q' A)
bisim Q Q' param param' A = record
  { Stateᴿ = BisimState q q'
  ; emptyᴿ = record { ops  = emptyOp
                    ; run  = emptyOp refl
                    ; run' = emptyOp refl
                    }
  ; pushᴿ  = λ a sᴿ → record
                    { ops  = pushOp a (sᴿ .ops )
                    ; run  = pushOp   (sᴿ .run ) refl
                    ; run' = pushOp   (sᴿ .run') refl
                    }
  ; popᴿ   = popᴿ-proof
  }
  where

  q q' : SQueue A
  q  = inst Q  A
  q' = inst Q' A

  popᴿ-proof : BisimState q q' s s'
             → ListFᴿ (BisimState q q') (q .pop s) (q' .pop s')
  popᴿ-proof sᴿ = ?

In popᴿ-proof, to fill the hole with either the nilᴿ or consᴿ constructor of ListFᴿ, we need to establish that q .pop s and q' .pop s' are both nil or both cons, and in the latter case the popped elements should be the same. A case analysis of the two ListF-values would be unhelpful since the type provides little information. Instead, we will perform a case analysis on a counting queue derived from q/q' using unary parametricity; the counting queue will be related to q/q', so the case analysis will tell us how q/q' behaves as well. Below we’ll go through the counting queue argument more slowly.

First of all, we need to use the parametricity translation to write down Param, the unary parametricity of PQueue:

data ListFᴾ {I : Set}         (Iᴾ : I → Set)
            {R : I → I → Set} (Rᴾ : ∀ {i j} → Iᴾ i → Iᴾ j → R i j → Set)
            {X : I → I → Set} (Xᴾ : ∀ {l u} → Iᴾ l → Iᴾ u → X l u → Set)
        : ∀ {l u} → Iᴾ l → Iᴾ u → ListF I R X l u → Set where
  nilᴾ  : ∀ {u fx} → {uᴾ : Iᴾ u}
        → fx ≡ nil → ListFᴾ Iᴾ Rᴾ Xᴾ uᴾ uᴾ fx
  consᴾ : ∀ {l m u r x fx} → {lᴾ : Iᴾ l} {mᴾ : Iᴾ m} {uᴾ : Iᴾ u}
        → Rᴾ lᴾ mᴾ r → Xᴾ mᴾ uᴾ x → fx ≡ cons r x
        → ListFᴾ Iᴾ Rᴾ Xᴾ lᴾ uᴾ fx

record Queueᴾ
  {I  : Set}         (Iᴾ  : I → Set)
  {i₀ : I}           (i₀ᴾ : Iᴾ i₀)
  {R  : I → I → Set} (Rᴾ  : ∀ {i j} → Iᴾ i → Iᴾ j → R i j → Set)
  (q : Queue I i₀ R) : Set₁ where
  field
    Stateᴾ : ∀ {l u} → Iᴾ l → Iᴾ u → q .State l u → Set
    emptyᴾ : Stateᴾ i₀ᴾ i₀ᴾ (q .empty)
    pushᴾ  : ∀ {l u v r s} → {lᴾ : Iᴾ l} {uᴾ : Iᴾ u} {vᴾ : Iᴾ v}
           → Rᴾ uᴾ vᴾ r → Stateᴾ lᴾ uᴾ s → Stateᴾ lᴾ vᴾ (q .push r s)
    popᴾ   : ∀ {l u s} → {lᴾ : Iᴾ l} {uᴾ : Iᴾ u}
           → Stateᴾ lᴾ uᴾ s → ListFᴾ Iᴾ Rᴾ Stateᴾ lᴾ uᴾ (q .pop s)

Param : PQueue → Set₁
Param Q = {I  : Set}         (Iᴾ  : I → Set)
          {i₀ : I}           (i₀ᴾ : Iᴾ i₀)
          {R  : I → I → Set} (Rᴾ  : ∀ {i j} → Iᴾ i → Iᴾ j → R i j → Set)
                           → Queueᴾ Iᴾ i₀ᴾ Rᴾ (Q I i₀ R)

Param says that for any queue Q I i₀ R instantiated from Q : PQueue we can derive a Queueᴾ related to it. We can think of the derived Queueᴾ as a queue itself, to/from which we push/pop additional information about I-indices and R-elements as we perform the same operations on the instantiated queue, with the additional information specified by the type families Iᴾ and Rᴾ — in short, we supplement every I/R by Iᴾ/Rᴾ, with the latter managed separately by the derived Queueᴾ. Take the type of pushᴾ as an example: with an instantiated queue q : Queue I i₀ R, if we push r : R u v into s : q .State l u and there is a derived queue state sᴾ : Stateᴾ lᴾ uᴾ s, where the type remembers that sᴾ is related to s (and lᴾ : Iᴾ l, uᴾ : Iᴾ u are additional information about l and u respectively), then we can invoke pushᴾ on sᴾ by supplying vᴾ : Iᴾ v (since v is a newly appeared index, we need to supply additional information about v) and an Rᴾ uᴾ vᴾ r (additional information about the pushed element r and the indices u and v in its type), and get a derived queue state of type Stateᴾ lᴾ vᴾ (q .push r s), now related to the queue state q .push r s after the push. And popᴾ is similar: a Queue successfully pops an element if and only if a related Queueᴾ successfully pops additional information about the element.

Remark. Iᴾ/Rᴾ can be a predicate on I/R restricting which I/R-values may appear throughout the use of a queue, in which case Param can be thought of as an induction principle for all possible uses of a Queue. But we will also use Iᴾ/Rᴾ proof-relevantly to carry information that doesn’t count as proofs, so ‘additional information’ is a more encompassing intuition about Iᴾ/Rᴾ. The same can be said for induction principles in general though if we interpret ‘induction principles’ not just logically but also computationally. (End of Remark.)

Back to popᴿ-proof, from q = inst Q A : SQueue A we can use Param Q to derive a related queue of type

CCQueue : SQueue A → OpSeq A #pop #push → Set₁
CCQueue q ops = Queueᴾ (λ _ → ℕ)
                       0
                       (λ l u a → suc l ≡ u × PushMap ops l a)
                       q

This derived queue ‘counts and checks’: It counts by supplementing every ⊤-index with a natural number and requiring suc l ≡ u whenever l and u are the natural numbers supplementing the two indices of an element type. And it checks using the relation PushMap, which assigns distinct serial numbers to pushed elements in an operation sequence, and plays a similar role to PMap above:

data PushMap : OpSeq A #pop #push → ℕ → A → Set where
  pushOp-here  : {ops : OpSeq A #pop #push}
                                  → PushMap (pushOp a ops) #push a
  pushOp-there : PushMap ops k a' → PushMap (pushOp a ops) k     a'
  popOp        : PushMap ops k a' → PushMap (popOp    ops) k     a'

Take opsₑₓ = pushOp a₁ (popOp (pushOp a₀ emptyOp)) as an example, where there are two pushed elements a₁ and a₀, to which we expect to assign the serial numbers 1 and 0; indeed, we can construct pushOp-here : PushMap opsₑₓ 1 a₁ and pushOp-there (popOp pushOp-here) : PushMap opsₑₓ 0 a₀ but not anything else (for opsₑₓ).
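
For reference, here is the example written out as code (my own check; the elements a₀ and a₁ are taken as parameters):

module _ {A : Set} (a₀ a₁ : A) where

  opsₑₓ : OpSeq A 1 2
  opsₑₓ = pushOp a₁ (popOp (pushOp a₀ emptyOp))

  -- the element pushed last gets serial number 1 …
  map₁ : PushMap opsₑₓ 1 a₁
  map₁ = pushOp-here

  -- … and the element pushed first gets serial number 0
  map₀ : PushMap opsₑₓ 0 a₀
  map₀ = pushOp-there (popOp pushOp-here)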

Now we will proceed in a way that’s similar to the PMap argument. Observe that the PMap argument proceeds in two stages for any given input: check that the input satisfies PMap, and then deduce what the output should be from the fact that it must also satisfy PMap. For a queue q : SQueue A, a given input is an operation sequence ops, which, when run on q, leads to a state s satisfying RunOpSeq ops q s. The first stage is to fast-forward a derived queue qᴾ : CCQueue q ops to a state related to s, checking all the PushMap constraints along the way:

simulate : {ops : OpSeq A #pop #push} → RunOpSeq ops q s
         → (qᴾ : CCQueue q ops) → qᴾ .Stateᴾ #pop #push s

A concrete example may help. Again consider the operation sequence opsₑₓ = pushOp a₁ (popOp (pushOp a₀ emptyOp)). Start from the state s₀ = q .empty, to which the derived state s₀ᴾ = qᴾ .emptyᴾ : qᴾ .Stateᴾ 0 0 s₀ is related. First push the element a₀ and reach the state s₁ = q .push a₀ s₀. The related derived state is

s₁ᴾ = qᴾ .pushᴾ {! suc 0 ≡ 1 × PushMap opsₑₓ 0 a₀ ⊣ … !} s₀ᴾ
    : qᴾ .Stateᴾ 0 1 s₁

In the hole (showing the goal type), the equality constraint forces the second index in the Stateᴾ type to increment, and we need to prove the PushMap constraint saying that the 0th pushed element is a₀. Next we pop from s₁, which is guaranteed to be successful due to the given RunOpSeq, so let cons a s₂ = q .pop s₁. Since s₁ᴾ is related to s₁, popping from s₁ᴾ succeeds as well, and we get consᴾ (ieq , m) s₂ᴾ _ = qᴾ .popᴾ s₁ᴾ where ieq : suc 0 ≡ 1 forces the first index in the type of

s₂ᴾ : qᴾ .Stateᴾ 1 1 s₂

to increment. (At this point, if we want, we can deduce a ≡ a₀ from m : PushMap opsₑₓ 0 a.) Finally, push a₁ and reach s₃ = q .push a₁ s₂, and similarly advance the related state to

s₃ᴾ = qᴾ .pushᴾ {! suc 1 ≡ 2 × PushMap opsₑₓ 1 a₁ ⊣ … !} s₂ᴾ
    : qᴾ .Stateᴾ 1 2 s₃

where the PushMap constraint we prove this time says that the 1st pushed element is a₁. (And if there’s a next pop, which will be numbered 1, the popped proof of the constraint will let us deduce that the popped element is a₁.) This completes the first stage. Note that the natural number indices in the type of s₃ᴾ are exactly the indices in the type of opsₑₓ : OpSeq A 1 2, since in both CCQueue and OpSeq the indices are counts of pops and pushes; this holds in general and is stated in the type of simulate.

Back to popᴿ-proof, to find out what happens when we pop from both s and s', instead of popping from s and s' directly, we perform the above first-stage construction: derive CCQueues qᴾ and q'ᴾ, fast-forward them to states related to s and s' (by invoking simulate), and pop from these states.

popᴿ-proof : BisimState q q' s s'
           → ListFᴿ (BisimState q q') (q .pop s) (q' .pop s')
popᴿ-proof sᴿ with qᴾ  ← CCQueue q  (sᴿ .ops) ∋ param  _ _ _
                 | q'ᴾ ← CCQueue q' (sᴿ .ops) ∋ param' _ _ _
    | qᴾ  .popᴾ (simulate (sᴿ .run ) qᴾ )
    | q'ᴾ .popᴾ (simulate (sᴿ .run') q'ᴾ)
... | fsᴾ | fs'ᴾ = ?

The result fsᴾ has type

ListFᴾ (λ _ → ℕ) (λ l u a → suc l ≡ u × PushMap (sᴿ .ops) l a)
       (qᴾ .Stateᴾ) (sᴿ .#pop) (sᴿ .#push) (q .pop s)

and fs'ᴾ has the same type except that q, qᴾ, and s are the primed versions. Importantly, the two types include the same counts of pops and pushes. This helps us to rule out the cases where fsᴾ and fs'ᴾ match different constructors: when fsᴾ matches nilᴾ, the indices sᴿ .#pop and sᴿ .#push are unified, and the type of fs'ᴾ becomes

ListFᴾ (λ _ → ℕ) (λ l u a → suc l ≡ u × PushMap (sᴿ .ops) l a)
       (q'ᴾ .Stateᴾ) (sᴿ .#push) (sᴿ .#push) (q' .pop s')

Now, if fs'ᴾ matched consᴾ, we’d get a proof

PushMap (sᴿ .ops) (sᴿ .#push) a

but sᴿ .#push is a serial number that’s yet to be assigned to any element, so this proof cannot possibly exist. Formally, we can prove

PushMap-domain-bounded :
  {ops : OpSeq A #pop #push} → PushMap ops k a → k < #push
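
A proof sketch (mine, assuming ≤-refl, ≤-trans, and n≤1+n from Data.Nat.Properties) goes by induction on the PushMap proof:

PushMap-domain-bounded :
  {ops : OpSeq A #pop #push} → PushMap ops k a → k < #push
PushMap-domain-bounded  pushOp-here     = ≤-refl
PushMap-domain-bounded (pushOp-there m) =
  ≤-trans (PushMap-domain-bounded m) (n≤1+n _)
PushMap-domain-bounded (popOp m)        = PushMap-domain-bounded m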

We can then use this lemma to derive a contradiction in the cases where the two values match different constructors:

popᴿ-proof : BisimState q q' s s'
           → ListFᴿ (BisimState q q') (q .pop s) (q' .pop s')
popᴿ-proof sᴿ with qᴾ  ← CCQueue q  (sᴿ .ops) ∋ param  _ _ _
                 | q'ᴾ ← CCQueue q' (sᴿ .ops) ∋ param' _ _ _
    | qᴾ  .popᴾ (simulate (sᴿ .run ) qᴾ )
    | q'ᴾ .popᴾ (simulate (sᴿ .run') q'ᴾ)
... | nilᴾ eq            | nilᴾ eq'             = {! ListFᴿ … ⊣ eq : q .pop s ≡ nil; eq' : q' .pop s' ≡ nil; … !}
... | nilᴾ _             | consᴾ (_ , m') _ _   with () ← <-irrefl refl (PushMap-domain-bounded m')
... | consᴾ (_ , m) _ _  | nilᴾ _               with () ← <-irrefl refl (PushMap-domain-bounded m )
... | consᴾ (_ , m) _ eq | consᴾ (_ , m') _ eq' = {! ListFᴿ … ⊣ eq : q .pop s ≡ cons a t; eq' : q' .pop s' ≡ cons a' t'; … !}

In the remaining two cases, eq and eq' (whose types are shown in the holes) are the state transition equalities we need to invoke nilᴿ or consᴿ:

popᴿ-proof : BisimState q q' s s'
           → ListFᴿ (BisimState q q') (q .pop s) (q' .pop s')
popᴿ-proof sᴿ with qᴾ  ← CCQueue q  (sᴿ .ops) ∋ param  _ _ _
                 | q'ᴾ ← CCQueue q' (sᴿ .ops) ∋ param' _ _ _
    | qᴾ  .popᴾ (simulate (sᴿ .run ) qᴾ )
    | q'ᴾ .popᴾ (simulate (sᴿ .run') q'ᴾ)
... | nilᴾ eq            | nilᴾ eq'             = nilᴿ eq eq'
... | nilᴾ _             | consᴾ (_ , m') _ _   with () ← <-irrefl refl (PushMap-domain-bounded m')
... | consᴾ (_ , m) _ _  | nilᴾ _               with () ← <-irrefl refl (PushMap-domain-bounded m )
... | consᴾ (_ , m) _ eq | consᴾ (_ , m') _ eq' = consᴿ eq eq' ? ?

For the consᴿ, we need to prove that the two elements a and a' popped respectively from s and s' are the same, which is true because we have m : PushMap (sᴿ .ops) (sᴿ .#pop) a and m' : PushMap (sᴿ .ops) (sᴿ .#pop) a' and can prove

PushMap-functional : PushMap ops k a → PushMap ops k a' → a ≡ a'
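
A possible proof (my sketch) goes by induction on both PushMap proofs, reusing PushMap-domain-bounded together with <-irrefl and ⊥-elim for the mismatched cases:

PushMap-functional : PushMap ops k a → PushMap ops k a' → a ≡ a'
PushMap-functional  pushOp-here       pushOp-here       = refl
PushMap-functional  pushOp-here      (pushOp-there m')  =
  ⊥-elim (<-irrefl refl (PushMap-domain-bounded m'))
PushMap-functional (pushOp-there m)   pushOp-here       =
  ⊥-elim (<-irrefl refl (PushMap-domain-bounded m))
PushMap-functional (pushOp-there m)  (pushOp-there m')  = PushMap-functional m m'
PushMap-functional (popOp m)         (popOp m')         = PushMap-functional m m'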

Finally we need to show that the new states (t and t') are in bisimulation, which is done by extending the operation sequence stored in sᴿ. This completes the proof!

popᴿ-proof : BisimState q q' s s'
           → ListFᴿ (BisimState q q') (q .pop s) (q' .pop s')
popᴿ-proof sᴿ with qᴾ  ← CCQueue q  (sᴿ .ops) ∋ param  _ _ _
                 | q'ᴾ ← CCQueue q' (sᴿ .ops) ∋ param' _ _ _
    | qᴾ  .popᴾ (simulate (sᴿ .run ) qᴾ )
    | q'ᴾ .popᴾ (simulate (sᴿ .run') q'ᴾ)
... | nilᴾ eq            | nilᴾ eq'             = nilᴿ eq eq'
... | nilᴾ _             | consᴾ (_ , m') _ _   with () ← <-irrefl refl (PushMap-domain-bounded m')
... | consᴾ (_ , m) _ _  | nilᴾ _               with () ← <-irrefl refl (PushMap-domain-bounded m )
... | consᴾ (_ , m) _ eq | consᴾ (_ , m') _ eq' =
  consᴿ eq eq' (PushMap-functional m m') record
    { ops  = popOp (sᴿ .ops )
    ; run  = popOp (sᴿ .run ) eq
    ; run' = popOp (sᴿ .run') eq'
    }

An obvious next step is to adapt the proof for other data structure interfaces. Stacks and deques are probably worth working out: For queues it could be argued that the quantification over I in PQueue isn’t really necessary — we could use a more concrete interface, such as CQueue, to achieve uniqueness as well; but for the stack interface

data ListF (I : Set) (R : I → I → Set) (X : I → I → Set)
       : I → I → Set where
  nil  : ∀ {u}                     → ListF I R X u u
  cons : ∀ {l m u} → R l m → X m u → ListF I R X l u

record Stack (I : Set) (i₀ : I) (R : I → I → Set) : Set₁ where
  field
    State : I → I → Set
    empty : State i₀ i₀
    push  : ∀ {k l u} → R k l → State l u → State k u
    pop   : ∀ {l u} → State l u → ListF I R State l u

(which is different from Queue only in the type of push), specialising it in the same way to a CStack is probably going to destroy uniqueness — my gut feeling is that I needs to be specialised to a more complex model than natural numbers. So for the stack interface, the parametric quantification will be able to hide more complexity. Deques will presumably need an even more complex model for I, but hopefully we’ll have more clues about how to construct that model after dealing with stacks. And deques are probably the most complex data structure interface we can specify using this indexing pattern, which I think expresses a consecutive list of elements that can be added or deleted from the ends; also there’s more variety in deque implementations, so it can be convincingly demonstrated how easy it has become to prove that all these implementations are extensionally equal. (I’m pessimistic about data structures beyond deques, such as random-access lists and priority queues, but who knows.) After I figure out how to prove ‘simple uniqueness’ for all three interfaces, that is, only in terms of SQueue, SStack, and SDeque, for the sake of completeness I may need to generalise the proofs to PQueue, PStack, and PDeque, pushing and popping not the same but related elements and indices. Hopefully, these proofs will only be more tedious rather than requiring new ideas, and can be omitted in the paper that will follow to report all these developments.

Uniqueness doesn’t tell us directly what the PQueue interface does — to actually prove something about PQueue, we can write a simple reference implementation having the type, prove (extensional) properties we need for that reference implementation, and then use uniqueness to say that those properties hold for all implementations. Alternatively, we should be able to prove properties of PQueue directly using parametricity. This in general amounts to (conservatively) extending the interface, so we can consider adding new operations too. For example, we can extend the interface to include an operation size : State l u → ℕ and traditional equational laws such as size empty ≡ 0, size (push a s) ≡ suc (size s), and how size interacts with pop (which I don’t bother to formulate here).
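
One way such an extension could be written down (a sketch of mine, wrapping the existing interface rather than modifying it) is

record SizedQueue (I : Set) (i₀ : I) (R : I → I → Set) : Set₁ where
  field
    queue      : Queue I i₀ R
    size       : ∀ {l u} → queue .State l u → ℕ
    size-empty : size (queue .empty) ≡ 0
    size-push  : ∀ {l u v} (r : R u v) (s : queue .State l u)
               → size (queue .push r s) ≡ suc (size s)
    -- a law relating size and pop is omitted here, as in the text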

Even though the proof ends up being neither long nor tedious, I found this post very difficult to write (and spent much more time on it than I anticipated), probably because I’ve been driven by my intuition and not some systematic theory (apart from the parametricity translation). I do believe there’s some intuitive principle that I can try to formulate and verify. For example, I think when we write parametric programs we actually have some concrete models in mind for the parametrically quantified stuff, and proofs about the behaviour of parametric programs are done by explicitly formulating those concrete models so that we can instantiate the parametric programs concretely and reason about their behaviour. On the other hand, principles are by their nature vague and not directly reusable, so I wonder whether I can extract a lemma or two from the proof I have. For example, SQueue by itself doesn’t give rise to uniqueness — only SQueues instantiated from a PQueue are bisimilar. Maybe there’s some form of data that can help us remember that an SQueue is instantiated from a PQueue and prove that any parametricity-based constructions still work for the SQueue. And I suspect that RunOpSeq is an important part of that piece of data (saying that an SQueue has been manipulated only through the PQueue interface, so invariants established by parametricity haven’t been broken).

Incidentally, I wonder whether indexed types are essential in this development — if I switch to a displayed style (using simple, non-indexed types and separately formulated constraints), will parametricity continue to work? I guess the answer is yes (so choosing between indexed vs displayed types is only a matter of convenience), but I can’t really see what the displayed version will be like without actually writing something down. And, finally, Zhixuan’s previous observation about the relationship with substructural typing is still intriguing, although I don’t really see how to redo the proof in that setting (since my proof seems entirely parametricity-based) — this I’ll leave for some other people to figure out.

Agda file: QueueBisimilarity.agda
Follow-up: 0049 (Specifying stack order using parametric types)