# Bidifferential calculus

This is a formalism that embraces a large class of (nonlinear) partial differential equations (PDEs) and, more generally, partial differential-difference equations (PDDEs), which are integrable in the sense that they possess a Lax pair, i.e. a pair of linear equations for which the nonlinear equation is the compatibility (or integrability) condition. The bidifferential calculus approach to integrable PDDEs is an attempt to extract a kind of essence from (at best all) such equations. In particular, it avoids starting from specific assumptions about the form of the equations under consideration. Instead, it uses an abstract and therefore very flexible framework, which is nevertheless extremely simple to work with, because its computational rules could hardly be simpler.

Let **A** be an associative algebra (over the real or complex numbers). Examples of particular
interest are algebras of functions (on some space), or matrices of functions. But there are also
important cases where **A** includes operators (like partial derivative or shift operators).
Let **Ω** = **A** ⊕ **Ω**^{1} ⊕ **Ω**^{2} ⊕ ... (direct sum) be a graded associative algebra, and let **d** and **d̄** be two graded derivations **Ω** → **Ω** of degree one, so that **d** **Ω**^{r} ⊂ **Ω**^{r+1} and **d̄** **Ω**^{r} ⊂ **Ω**^{r+1}. This means that the two operators satisfy the graded Leibniz rule

**d**(χ χ') = (**d**χ) χ' + (-1)^{r} χ **d**χ'

for χ ∈ **Ω**^{r} and χ' ∈ **Ω**, and correspondingly for **d̄**.
Furthermore, we require that the two operators satisfy

**d**^{2} = **d̄**^{2} = **d** **d̄** + **d̄** **d** = 0.

The graded algebra **Ω** generalizes the algebra of differential forms on a
differentiable manifold. **d** and **d̄** generalize the exterior derivative. In this differential-geometric case, **d** and **d̄** are actually related by a Nijenhuis tensor, according to *Frölicher-Nijenhuis* theory.

It is important, however, to deviate from this special setting. A class of particularly convenient graded algebras is given by

**Ω** = **A** ⊗ **Λ**,

where **Λ** is the exterior (Grassmann) algebra of a vector space. A technical advantage is that in this case it is sufficient to define the action of **d** and **d̄** on **A**, since it then extends in a natural way to **Ω**.
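To illustrate this extension, here is a minimal sketch (our own illustration, not from the article; it assumes Python with sympy). Forms in **Ω** = **A** ⊗ **Λ** are stored as dictionaries mapping tuples of Grassmann generator indices to coefficients in **A**, where **A** is taken to be scalar functions of two variables. **d** is defined on **A** only, and its extension to **Ω** then satisfies the graded Leibniz rule automatically:

```python
# Sketch: forms in Omega = A ⊗ Λ as {index tuple: coefficient}; d defined on A,
# extended by d(c xi_I) = (d c) ∧ xi_I. We check the graded Leibniz rule.
import sympy as sp

x, y = sp.symbols('x y')

def wedge(a, b):
    out = {}
    for I, c in a.items():
        for J, e in b.items():
            if set(I) & set(J):                  # xi_i xi_i = 0
                continue
            K, sign = list(I + J), 1
            for i in range(len(K)):              # sign of the sorting permutation
                for j in range(len(K) - 1 - i):
                    if K[j] > K[j + 1]:
                        K[j], K[j + 1] = K[j + 1], K[j]
                        sign = -sign
            key = tuple(K)
            out[key] = out.get(key, 0) + sign * c * e
    return {I: c for I, c in out.items() if sp.simplify(c) != 0}

def add(a, b):
    out = dict(a)
    for I, c in b.items():
        out[I] = out.get(I, 0) + c
    return {I: c for I, c in out.items() if sp.simplify(c) != 0}

def scale(a, s):
    return {I: s * c for I, c in a.items()}

def d(a):
    # d on A: d f = f_x xi1 + f_y xi2; extended to Omega term by term
    out = {}
    for I, c in a.items():
        out = add(out, wedge({(1,): sp.diff(c, x), (2,): sp.diff(c, y)}, {I: 1}))
    return out

alpha = {(1,): x**2 * y}   # a sample 1-form
beta = {(): x * y**2}      # a sample 0-form (element of A)

# graded Leibniz rule, with r = deg(alpha) = 1:
lhs = d(wedge(alpha, beta))
rhs = add(wedge(d(alpha), beta), scale(wedge(alpha, d(beta)), -1))
assert add(lhs, scale(rhs, -1)) == {}

# d^2 = 0 on the samples
assert d(d(alpha)) == {} and d(d(beta)) == {}
```

The point of the design is that only the action of **d** on coefficients had to be specified; the wedge product takes care of the rest.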

There are two natural equations in the bidifferential calculus framework:

**d̄** **d**Φ = **d**Φ **d**Φ

and

**d**[ (**d̄**g) g^{-1} ] = 0 .

Here Φ and g are typically matrix variables. If, for a given PDDE, there is a choice of (**Ω**, **d**, **d̄**) such that (a certain reduction of) one of these equations is equivalent to it, we say we have a *bidifferential calculus formulation* of that PDDE.

Whenever a PDDE possesses such a bidifferential calculus formulation, a theorem with a very simple proof (see arXiv:1207.1308) typically allows the construction of a large class of exact solutions. This theorem may be regarded as an abstract, universal version of results about binary Darboux transformations that appeared previously in the literature for various individual equations.

The above equations may be regarded as abstractions of two familiar potential forms of the
(anti-) *self-dual Yang-Mills* equation. The latter is known to generate many integrable PDEs as
reductions. A bidifferential calculus formulation is achieved by choosing **A** as
the algebra of smooth functions on the four-dimensional Euclidean space, and setting

**d**f = f_{x} **ξ**_{1} + f_{y} **ξ**_{2} ,  **d̄**f = f_{w} **ξ**_{1} + f_{z} **ξ**_{2} ,

where w, x, y, z are independent variables, a corresponding subscript denotes the partial derivative with respect to that variable, and **ξ**_{1}, **ξ**_{2} form a basis of **Λ**^{1}, here chosen two-dimensional.
We observe that we do have to go beyond differential geometry. Although **d** could be interpreted as an exterior derivative (by replacing **ξ**_{1} by the differential dx and **ξ**_{2} by dy), there is no way to give **d̄** a differential-geometric meaning in this example.
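The claimed reduction can be checked by direct computation. The following sketch (our own check, assuming Python with sympy; the matrix size 2 and the function names are arbitrary) verifies the conditions **d**^{2} = **d̄**^{2} = **d** **d̄** + **d̄** **d** = 0 for this choice, and that **d̄** **d**Φ = **d**Φ **d**Φ reduces (up to sign and coordinate conventions) to the second-order potential form Φ_{yw} - Φ_{xz} = [Φ_{x}, Φ_{y}] of the self-dual Yang-Mills equation:

```python
# Sketch: component computation for d f = f_x xi1 + f_y xi2,
# dbar f = f_w xi1 + f_z xi2, acting on a matrix variable Phi.
import sympy as sp

w, x, y, z = sp.symbols('w x y z')

# a generic 2x2 matrix of functions of (w, x, y, z); size is an arbitrary choice
Phi = sp.Matrix(2, 2, lambda i, j: sp.Function('p%d%d' % (i, j))(w, x, y, z))

# a one-form is a pair (a1, a2) = a1 xi1 + a2 xi2;
# a two-form is represented by its coefficient of xi1 xi2
def d(f):        return (f.diff(x), f.diff(y))       # d on A
def dbar(f):     return (f.diff(w), f.diff(z))       # dbar on A
def d2(a):       return a[1].diff(x) - a[0].diff(y)  # d on one-forms
def dbar2(a):    return a[1].diff(w) - a[0].diff(z)  # dbar on one-forms
def wedge(a, b): return a[0] * b[1] - a[1] * b[0]    # product of one-forms

zero = sp.zeros(2, 2)

# d^2 = dbar^2 = d dbar + dbar d = 0
assert d2(d(Phi)) == zero
assert dbar2(dbar(Phi)) == zero
assert d2(dbar(Phi)) + dbar2(d(Phi)) == zero

# dbar d Phi = dPhi dPhi  is  Phi_yw - Phi_xz = [Phi_x, Phi_y]
lhs = dbar2(d(Phi)) - wedge(d(Phi), d(Phi))
rhs = Phi.diff(y, w) - Phi.diff(x, z) \
    - (Phi.diff(x) * Phi.diff(y) - Phi.diff(y) * Phi.diff(x))
assert sp.simplify(lhs - rhs) == zero
```

Matrix multiplication supplies the noncommutativity here; for scalar Φ the commutator on the right-hand side would vanish.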

## Example: KdV

Let **A**_{0} be the algebra of smooth functions of two variables, x and t. Let
**A** be its extension obtained by adjoining the partial derivative operator ∂
with respect to x (satisfying ∂ f = f_{x} + f ∂ for f ∈ **A**_{0}).
We define **d** and **d̄** on **A** by

**d**f = [ ∂ , f ] **ξ**_{1} + ½ [ ∂^{2} , f ] **ξ**_{2} ,

**d̄**f = - ½ [ ∂^{2} , f ] **ξ**_{1} + ⅓ [ ∂_{t} - ∂^{3} , f ] **ξ**_{2} .

For Φ ∈ **A**_{0}, in terms of **u** = ½ Φ_{x} the equation
**d̄** **d**Φ = **d**Φ **d**Φ becomes the KdV equation

**u**_{t} - **u**_{xxx} - 3 (**u**^{2})_{x} = 0 .
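As a consistency check, here is a sketch (our own illustration, assuming Python with sympy; not code from the article) in which elements of **A** are represented as polynomials Σ c_{k}(x,t) ∂^{k}, stored as dictionaries {k: c_{k}}, with composition given by the Leibniz rule. It verifies **d**^{2} = **d̄**^{2} = **d** **d̄** + **d̄** **d** = 0 for the operators defined above, and that **d̄** **d**Φ - **d**Φ **d**Φ contains no leftover ∂-terms, so the equation closes as a single scalar PDE for Φ:

```python
# Sketch: A as polynomial differential operators {power of ∂: coefficient(x, t)}.
import sympy as sp

x, t = sp.symbols('x t')

def clean(A):
    out = {k: sp.simplify(c) for k, c in A.items()}
    return {k: c for k, c in out.items() if c != 0}

def add(A, B):
    out = dict(A)
    for k, c in B.items():
        out[k] = out.get(k, 0) + c
    return clean(out)

def scale(A, s):
    return {k: s * c for k, c in A.items()}

def mul(A, B):
    # composition: ∂^m ∘ (b ∂^n) = sum_j binom(m, j) b^(j) ∂^(m+n-j)
    out = {}
    for m, a in A.items():
        for n, b in B.items():
            for j in range(m + 1):
                out[m + n - j] = out.get(m + n - j, 0) \
                    + a * sp.binomial(m, j) * sp.diff(b, x, j)
    return clean(out)

def comm(A, B):
    return add(mul(A, B), scale(mul(B, A), -1))

P1, P2, P3 = {1: sp.Integer(1)}, {2: sp.Integer(1)}, {3: sp.Integer(1)}

def dt(A):
    # [∂_t, A]: differentiate all coefficients with respect to t
    return clean({k: sp.diff(c, t) for k, c in A.items()})

def d(A):     # pair of xi1-, xi2-components of d(A)
    return (comm(P1, A), scale(comm(P2, A), sp.Rational(1, 2)))

def dbar(A):  # pair of xi1-, xi2-components of dbar(A)
    return (scale(comm(P2, A), sp.Rational(-1, 2)),
            scale(add(dt(A), scale(comm(P3, A), -1)), sp.Rational(1, 3)))

def two_form(D2, D1, A):
    # xi1 xi2 coefficient of D2 applied to the one-form D1(A)
    a1, a2 = D1(A)
    return add(D2(a2)[0], scale(D2(a1)[1], -1))

f = {0: sp.Function('f')(x, t)}   # a generic function, as a multiplication operator

# d^2 = dbar^2 = d dbar + dbar d = 0
assert two_form(d, d, f) == {}
assert two_form(dbar, dbar, f) == {}
assert add(two_form(d, dbar, f), two_form(dbar, d, f)) == {}

# dbar d f - df df has only a ∂^0 part, i.e. it is a scalar PDE for f
df1, df2 = d(f)
res = add(two_form(dbar, d, f),
          scale(add(mul(df1, df2), scale(mul(df2, df1), -1)), -1))
assert set(res) <= {0}
```

The same representation could be extended with shift operators to treat differential-difference examples, though that is beyond this sketch.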