% Groupoidification Made Easy
% John Baez, Alex Hoffnung, and Christopher Walker
% in LaTeX
% 9/6/10 JB - like first arXiv version, but better: includes hyperref
% Also: I changed "essential inverse image" to "full inverse image"
\documentclass{article}
\usepackage{hyperref}
\usepackage{amsfonts,amsmath,amssymb,amscd,amsthm}
\usepackage{latexsym}
\usepackage{undertilde}
\usepackage[all]{xy}
\usepackage{graphicx}
\usepackage{epsfig}
%\usepackage{psfrag}
\input{epsf}
%categories
\newcommand{\Set}{{\rm Set}}
\newcommand{\Cat}{{\rm Cat}}
\newcommand{\LieGrp}{{\rm LieGrp}}
\newcommand{\Sh}{{\rm Sh}}
\newcommand{\Top}{{\rm Top}}
\newcommand{\Grp}{{\rm Grp}}
\newcommand{\Conv}{{\rm Conv}}
\newcommand{\Cov}{{\rm Cov}}
\newcommand{\Space}{{\rm Space}}
\newcommand{\Diff}{{\rm Diff}}
\newcommand{\Vect}{{\rm Vect}}
\newcommand{\FinSet}{{\rm FinSet}}
\newcommand{\FinGrpd}{{\rm FinGrpd}}
\newcommand{\Span}{{\rm Span}}
\newcommand{\Grpd}{{\rm Grpd}}
%ncats
\newcommand{\nCat}{{\rm nCat} }
\newcommand{\nminusoneCat}{{\rm (n-1)Cat} }
%maps and arrows
\newcommand{\im}{{\rm im}}
\renewcommand{\hom}{{\rm hom}}
\newcommand{\To}{\Rightarrow}
\renewcommand{\to}{\rightarrow}
\newcommand{\tensor}{\otimes}
\newcommand{\ten}{\otimes }
\newcommand{\maps}{\colon}
%miscellaneous junk
\newcommand{\Conc}{\mathrm{Conc}}
\newcommand{\X}{{\mathbf{X}}}
\newcommand{\Y}{{\mathbf{Y}}}
\newcommand{\f}{{\mathbf{f}}}
\renewcommand{\P}{\mathcal{P}}
\newcommand{\inv}{{\rm inv}}
\newcommand{\id}{{\rm id}}
\newcommand{\op}{{\rm op}}
\newcommand{\hf}{\frac{1}{2}}
\newcommand{\iso}{\cong}
\newcommand{\ev}{{\rm ev}}
\newcommand{\ad}{{\rm ad}}
\newcommand{\tr}{{\rm tr}}
\renewcommand{\deg}{{\rm deg}}
\newcommand{\we}{\wedge}
\renewcommand{\c}{{\mathfrak c}}
\newcommand{\goth}[1]{\mathfrak{#1}}
\newcommand{\gl}{\mathfrak{gl}}
\newcommand{\SL}{\mathrm{SL}}
\newcommand{\g}{{\mathfrak g}}
\newcommand{\h}{{\mathfrak h}}
\newcommand{\R}{{\mathbb R}}
\newcommand{\C}{{\mathbb C}}
\newcommand{\F}{{\mathbb F}}
\newcommand{\N}{{\mathbb N}}
\newcommand{\Q}{{\mathbb Q}}
\newcommand{\Z}{{\mathbb Z}}
\newcommand{\U}{{\rm U}}
\newcommand{\scs}{\scriptstyle}
\newcommand{\Aut}{{\rm Aut}}
\newcommand{\Src}{{\rm Src}}
\newcommand{\Tar}{{\rm Tar}}
\newcommand{\ip}[2]{\langle #1,#2 \rangle}
\newcommand{\Over}{/\!/}
\newcommand{\w}{\! \colon \!}
%Christopher's macros
\newcommand{\zHom}[1]{H_0(#1)}
\newcommand{\zPi}[1]{\underline{#1}}
\newcommand{\linop}[1]{\widehat{#1}}
\newcommand{\linopstuff}[1]{\utilde{#1}}
\newcommand{\gen}[1]{\utilde{#1}}
\newcommand{\Ob}{{\rm Ob}}
\newcommand{\Mor}{{\rm Mor}}
\newcommand{\comment}[1]{}
\renewcommand{\u}[1]{\underline{#1}}
\newtheorem{thm}{Theorem}
\newtheorem{definition}[thm]{Definition}
\newtheorem{lemma}[thm]{Lemma}
\newtheorem{corollary}[thm]{Corollary}
\newtheorem{proposition}[thm]{Proposition}
\newtheorem{example}[thm]{Example}
\newtheorem{conjecture}[thm]{Conjecture}
\newtheorem{remark}[thm]{Remark}
\newtheorem{notation}[thm]{Notation}
\newtheorem{convention}[thm]{Convention}
\newtheorem*{theorem*}{Theorem}
\newtheorem*{definition*}{Definition}
\newtheorem*{lemma*}{Lemma}
\newtheorem*{corollary*}{Corollary}
\newtheorem*{proposition*}{Proposition}
\newtheorem*{example*}{Example}
\newtheorem*{conjecture*}{Conjecture}
\newtheorem*{remark*}{Remark}
\newtheorem*{notation*}{Notation}
\newtheorem*{convention*}{Convention}
\newdir{ >}{{}*!/-7pt/@{>}}
\newdir{> >}{*!/0pt/@{>}*!/-5pt/@{>}}
\hfuzz=10pc
\begin{document}
\sloppy
\title{Groupoidification Made Easy}
\author{John C.\ Baez, Alexander E.\ Hoffnung, and Christopher D.\ Walker\\
Department of Mathematics, University of California\\
Riverside, CA 92521 USA}
\maketitle
\begin{abstract}
\noindent
Groupoidification is a form of categorification in which vector spaces
are replaced by groupoids, and linear operators are replaced by spans
of groupoids. We introduce this idea with a detailed exposition of
`degroupoidification': a systematic process that turns groupoids and
spans into vector spaces and linear operators. Then we present two
applications of groupoidification. The first is to Feynman diagrams.
The Hilbert space for the quantum harmonic oscillator arises naturally
from degroupoidifying the groupoid of finite sets and bijections.
This allows for a purely combinatorial interpretation of creation
and annihilation operators, their commutation relations, field
operators, their normal-ordered powers, and finally Feynman
diagrams. The second application is to Hecke algebras. We explain
how to groupoidify the Hecke algebra associated to a Dynkin diagram
whenever the deformation parameter $q$ is a prime power. We illustrate
this with the simplest nontrivial example, coming from the $A_2$
Dynkin diagram. In this example we show that the solution of the
Yang--Baxter equation built into the $A_2$ Hecke algebra arises
naturally from the axioms of projective geometry applied to the
projective plane over the finite field $\mathbb{F}_q$.
\end{abstract}
\section{Introduction}\label{intro}
`Groupoidification' is an attempt to expose the combinatorial
underpinnings of linear algebra --- the hard bones of set theory
underlying the flexibility of the continuum. One of the main lessons
of modern algebra is to avoid choosing bases for vector spaces until
you need them. As Hermann Weyl wrote, ``The introduction of a
coordinate system to geometry is an act of violence''. But vector
spaces often come equipped with a natural basis --- and when this
happens, there is no harm in taking advantage of it. The most obvious
example is when our vector space has been defined to consist of formal
linear combinations of the elements of some set. Then this set is our
basis. But surprisingly often, the elements of this set are {\it
isomorphism classes of objects in some groupoid}. This is when
groupoidification can be useful. It lets us work directly with the
groupoid, using tools analogous to those of linear algebra, without
bringing in the real numbers (or any other ground field).

For example, let $E$ be the groupoid of finite sets and bijections.
An isomorphism class of finite sets is just a natural number, so the
set of isomorphism classes of objects in $E$ can be identified with
$\N$. Indeed, this is why natural numbers were invented in the first
place: to count finite sets. The real vector space with $\N$ as basis
is usually identified with the polynomial algebra $\R[z]$, since that
has basis $z^0, z^1, z^2, \dots.$ Alternatively, we can work with
{\it infinite} formal linear combinations of natural numbers, which
form the algebra of formal power series, $\R[[z]]$. So, formal power
series should be important when we apply the tools of linear algebra
to study the groupoid of finite sets.

Indeed, formal power series have long been used as `generating
functions' in combinatorics \cite{Wilf}. Given a combinatorial
structure we can put on finite sets, its generating function is the
formal power series whose $n$th coefficient says how many ways we can
put this structure on an $n$-element set. Andr\'e Joyal formalized
the idea of `a structure we can put on finite sets' in terms of {\it
esp\`eces de structures}, or `structure types' \cite{BLL,Joyal,
Joyal2}. Later his work was generalized to `stuff types'
\cite{BaezDolan:2001}, which are a key example of groupoidification.
Heuristically, a stuff type is a way of equipping finite sets with a
specific type of extra stuff --- for example a 2-coloring, or a linear
ordering, or an additional finite set. Stuff types have generating
functions, which are formal power series. Combinatorially interesting
operations on stuff types correspond to interesting operations on their
generating functions: addition, multiplication, differentiation, and
so on. Joyal's great idea amounts to this: {\it work directly with
stuff types as much as possible, and put off taking their generating
functions.} As we shall see, this is an example of groupoidification.

To see how this works, we should be more precise. A {\bf stuff type}
is a groupoid over the groupoid of finite sets: that is, a groupoid
$\Psi$ equipped with a functor $v \maps \Psi \to E$. The reason for
the funny name is that we can think of $\Psi$ as a groupoid of finite
sets `equipped with extra stuff'. The functor $v$ is then the
`forgetful functor' that forgets this extra stuff and gives the
underlying set.

The generating function of a stuff type $v \maps \Psi \to E$ is the
formal power series
\begin{equation}
\label{eq:generating_function}
\utilde{\Psi}(z) = \sum_{n = 0}^\infty |v^{-1}(n)| \, z^n .
\end{equation}
Here $v^{-1}(n)$ is the `full inverse image' of any $n$-element
set, say $n \in E$. We define this term later, but the idea is
straightforward: $v^{-1}(n)$ is the groupoid of $n$-element sets
equipped with the given type of stuff. The $n$th coefficient of the
generating function measures the size of this groupoid.

But how? Here we need the concept of {\it groupoid cardinality}.
It seems this concept first appeared in algebraic geometry
\cite{Behrends:1993, Kim:1995}. We rediscovered it by
pondering the meaning of division \cite{BaezDolan:2001}. Addition of
natural numbers comes from disjoint union of finite sets, since
\[ |S + T| = |S| + |T| .\]
Multiplication comes from cartesian product:
\[ |S \times T| = |S| \times |T| .\]
But what about division?

If a group $G$ acts on a set $S$, we can `divide' the set by the group
and form the quotient $S/G$. If $S$ and $G$ are finite and $G$ acts
freely on $S$, $S/G$ really deserves the name `quotient', since then
\[ |S/G| = |S|/|G|. \]
Indeed, this fact captures some of our naive intuitions about division.
For example, why is $6/2 = 3$? We can take a 6-element
set $S$ with a free action of the group $G = \Z/2$ and
construct the set of orbits $S/G$:
\medskip
\centerline{\epsfysize=1.0in\epsfbox{6point.eps}}%{6point.eps}
\medskip
\noindent
Since we are `folding the 6-element set in half', we get $|S/G| = 3$.

The trouble starts when the action of $G$ on $S$ fails to
be free. Let's try the same trick starting with a 5-element set:
\medskip
\centerline{\epsfysize=1.0in\epsfbox{5point.eps}}
\medskip
\noindent
We don't obtain a set with $2\frac{1}{2}$ elements! The reason is
that the point in the middle gets mapped to itself. To get the
desired cardinality $2 \frac{1}{2}$, we would need a way to count this
point as `folded in half'.

To do this, we should first replace the ordinary quotient $S/G$ by the
`action groupoid' or `weak quotient' $S/\!/G$. This is the groupoid
where objects are elements of $S$, and a morphism from $s \in S$ to
$s' \in S$ is an element $g \in G$ with $gs = s'$. Composition of
morphisms works in the obvious way. Next, we should
define the `cardinality' of a groupoid as follows. For each
isomorphism class of objects, pick a representative $x$ and compute
the reciprocal of the number of automorphisms of this object; then sum
the result over isomorphism classes. In other words, define the
{\bf cardinality} of a groupoid $X$ to be
\begin{equation}
\label{eq:groupoid_cardinality}
|X| = \sum_{\rm isomorphism\; classes \; of \; objects\; [x]}
\frac{1}{|\Aut(x)|} \;.
\end{equation}

With these definitions, our problematic example gives a groupoid
$S/\!/G$ with cardinality $2\frac{1}{2}$, since the point in the
middle of the picture gets counted as `half a point'. In fact,
\[ |S/\!/G| = |S|/|G| \]
whenever $G$ is a finite group acting on a finite set $S$.
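The cardinality formula is easy to compute for action groupoids: each orbit contributes the reciprocal of the stabilizer size of a representative. The following Python sketch is our illustration, not part of the paper; the function name and data representation are ours.

```python
from fractions import Fraction

def action_groupoid_cardinality(S, G, act):
    """|S // G|: for each orbit, add 1/|stabilizer of a representative|."""
    seen, total = set(), Fraction(0)
    for s in S:
        if s in seen:
            continue
        orbit = {act(g, s) for g in G}            # the isomorphism class of s
        seen |= orbit
        stab = [g for g in G if act(g, s) == s]   # Aut(s) in S // G
        total += Fraction(1, len(stab))
    return total

# Z/2 acting on a 5-element set: swap 0<->1 and 2<->3, fix 4
S, G = [0, 1, 2, 3, 4], [0, 1]
act = lambda g, s: s if g == 0 or s == 4 else s ^ 1
print(action_groupoid_cardinality(S, G, act))   # 5/2
```

The fixed point has stabilizer of size 2 and so contributes $1/2$, giving $5/2 = |S|/|G|$, as claimed.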

The concept of groupoid cardinality gives an elegant definition of the
generating function of a stuff type --- Eq.\
\ref{eq:generating_function} --- which matches the usual `exponential
generating function' from combinatorics. For the details of how this
works, see Example \ref{generating_function}.

Even better, we can vastly generalize the notion of generating
function, by replacing $E$ with an arbitrary groupoid. For any
groupoid $X$ we get a vector space: namely $\R^{\underline{X}}$, the
space of functions $\psi \maps \underline{X} \to \R$, where
$\underline{X}$ is the set of isomorphism classes of objects in $X$.
Any sufficiently nice groupoid over $X$ gives a vector in this
vector space.

The question then arises: what about linear operators? Here it is
good to take a lesson from Heisenberg's matrix mechanics. In his
early work on quantum mechanics, Heisenberg did not know about matrices.
He reinvented them based on this idea: a matrix $S$ can describe
a quantum process by letting the matrix entry $S_{ji} \in \C$ stand
for the `amplitude' for a system to undergo a transition from its
$i$th state to its $j$th state.

The meaning of complex amplitudes was somewhat mysterious ---
and indeed it remains so, however much we have become accustomed to it.
However, the mystery evaporates if we have a matrix whose entries
are natural numbers. Then the matrix entry $S_{ji} \in \N$
simply counts the {\it number of ways} for the system to undergo
a transition from its $i$th state to its $j$th state.

Indeed, let $X$ be a set whose elements are possible `initial states'
for some system, and let $Y$ be a set whose elements are possible
`final states'. Suppose $S$ is a set equipped with maps to $X$ and
$Y$:
\[\xymatrix{
& S\ar[dl]_q\ar[dr]^p & \\
Y & & X \\
}\]
Mathematically, we call this setup a span of sets. Physically,
we can think of $S$ as a set of possible `events'. Points in $S$ sitting
over $i \in X$ and $j \in Y$ form a subset
\[ S_{ji} = \{ s \colon \; q(s) = j, \; p(s) = i \} . \]
We can think of this as the {\it set of ways} for the system to undergo a
transition from its $i$th state to its $j$th state.
Indeed, we can picture $S$ more vividly as a matrix of sets: \\ \\
\[
\begin{picture}(230,140)
\includegraphics[scale=0.5]{matrix.eps}
\put(-165,70){$q$}
\put(-55,70){$p$}
\put(-10,75){$X$}
\put(-215,75){$Y$}
\put(-90,160){$S$}
\end{picture}
\]
\noindent
If all the sets $S_{ji}$ are finite, we get a matrix of natural
numbers $|S_{ji}|$.
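Extracting this matrix of sets from a span of finite sets takes one comprehension. Here is a minimal Python sketch of ours; the span data is hypothetical.

```python
def matrix_of_sets(S, p, q, X, Y):
    """S_{ji} = {s in S : q(s) = j, p(s) = i}; the sizes |S_{ji}|
    form a matrix of natural numbers."""
    return {(j, i): [s for s in S if q(s) == j and p(s) == i]
            for j in Y for i in X}

# hypothetical span: three events over states X (initial) and Y (final)
X, Y = ['i1', 'i2'], ['j1']
S = ['a', 'b', 'c']
p = {'a': 'i1', 'b': 'i1', 'c': 'i2'}.get   # event -> initial state
q = {'a': 'j1', 'b': 'j1', 'c': 'j1'}.get   # event -> final state
M = matrix_of_sets(S, p, q, X, Y)
print({ji: len(events) for ji, events in M.items()})
```

In this example there are two ways to pass from `i1` to `j1`, and one way from `i2`.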

Of course, matrices of natural numbers only allow us to do a limited
portion of linear algebra. We can go further if we consider, not spans
of sets, but {\it spans of groupoids}. We can picture one of
these roughly as follows:
\[
\begin{picture}(230,180)
\includegraphics[scale=0.55]{grpdmatrix.eps}
\put(-185,70){$q$}
\put(-60,70){$p$}
\put(-10,80){$X$}
\put(-235,80){$Y$}
\put(-100,175){$S$}
\end{picture}
\]
\noindent
If a span of groupoids is sufficiently nice --- our technical term
will be `tame' --- we can convert it into a linear operator from
$\R^{\underline X}$ to $\R^{\underline Y}$. Viewed as a matrix, this
operator will have nonnegative real matrix entries. So, we have not
succeeded in `groupoidifying' full-fledged quantum mechanics, where
the matrices can be complex. Still, we have made some progress.

As a sign of this, it turns out that any groupoid $X$ gives not just a
vector space $\R^{\underline X}$, but a real Hilbert space $L^2(X)$.
If $X = E$, the complexification of this Hilbert space is the Hilbert
space of the quantum harmonic oscillator. The quantum harmonic
oscillator is the simplest system where we can see the usual tools of
quantum field theory at work: for example, Feynman diagrams. It turns
out that large portions of the theory of Feynman diagrams can be done
with spans of groupoids replacing operators \cite{BaezDolan:2001}.
The combinatorics of these diagrams then becomes vivid, stripped bare
of the trappings of analysis. We sketch how this works in Section
\ref{feynman}. A more detailed treatment can be found in the work of
Jeffrey Morton \cite{Morton:2006}.

To get complex numbers into the game, Morton generalizes groupoids to
`groupoids over $\U(1)$': that is, groupoids $X$ equipped with
functors $v \maps X \to \U(1)$, where $\U(1)$ is the
groupoid with unit complex numbers as objects and only identity
morphisms. The cardinality of a groupoid over $\U(1)$ can be complex.

Other generalizations of groupoid cardinality are also interesting.
For example, Leinster has generalized it to categories
\cite{Leinster:2008}. The cardinality of a category can be negative!
More recently, Weinstein has generalized it to Lie groupoids
\cite{Weinstein:2008}. Getting a useful generalization of groupoids
for which the cardinality is naturally complex, without putting in the
complex numbers `by hand', remains an elusive goal. However, the work
of Fiore and Leinster suggests it is possible
\cite{FioreLeinster:2005}.

In the last few years James Dolan, Todd Trimble and the authors have
applied groupoidification to structures related to quantum groups,
most notably Hecke algebras and Hall algebras. A beautiful story has
begun to emerge in which $q$-deformation arises naturally from
replacing the groupoid of finite sets by the groupoid of
finite-dimensional vector spaces over $\F_q$, where $q$ is a prime
power. To some extent this work is a reinterpretation of known facts.
However, groupoidification gives a conceptual framework for what
before might have seemed a strange set of coincidences.

We hope to write up this material and develop it further in the years to
come. For now, the reader can turn to the online videos and notes
available through U.\ C.\ Riverside \cite{Baez}. The present
paper has a limited goal: we wish to explain the basic machinery of
groupoidification as simply as possible.

In Section \ref{degroupoidification}, we present the basic facts about
`degroupoidification': the process that turns groupoids into vector
spaces and tame spans into linear operators. Section \ref{feynman}
describes how to groupoidify the theory of Feynman diagrams; Section
\ref{hecke} describes how to groupoidify the theory of Hecke algebras.
In Section \ref{processes} we prove that the process of
degroupoidifying a tame span gives a well-defined linear operator. We
also give an explicit criterion for when a span of groupoids is tame,
and an explicit formula for the operator coming from a tame span.
Section \ref{properties} proves many other results stated earlier in
the paper. Appendix \ref{appendix} gives some basic definitions and
proves some useful lemmas regarding groupoids and spans of groupoids. The goal is
to make it easy for readers to try their own hand at
groupoidification.
\section{Degroupoidification}\label{degroupoidification}
In this section we describe a systematic process for turning groupoids
into vector spaces and tame spans into linear operators. This
process, `degroupoidification', is in fact a kind of functor.
`Groupoidification' is the attempt to {\it undo} this functor. To
`groupoidify' a piece of linear algebra means to take some structure
built from vector spaces and linear operators and try to find
interesting groupoids and spans that degroupoidify to give this
structure. So, to understand groupoidification, we need to master
degroupoidification.

We begin by describing how to turn a groupoid into a vector space. In
what follows, all our groupoids will be `essentially small'. This
means that they have a {\it set} of isomorphism classes of objects,
not a proper class. We also assume our groupoids have finite homsets.
In other words, given any pair of objects, the set of morphisms from
one object to another is finite.
\begin{definition}
Given a groupoid $X$, let $\underline{X}$ be the set of isomorphism
classes of objects of $X$.
\end{definition}
\begin{definition}
\label{vectorspace}
Given a groupoid $X$, let the {\bf degroupoidification} of $X$ be
the vector space
\[ \R^{\underline{X}} = \lbrace \Psi \maps \underline{X} \to \R \rbrace. \]
\end{definition}
A nice example is the groupoid of finite sets and bijections:
\begin{example}
\label{power_series}
\textup{
Let $E$ be the groupoid of finite sets and bijections. Then
$\underline{E}\iso \N$, so
\[ \R^{\underline{E}} \iso \lbrace \psi \maps \N \to \R \rbrace
\cong \R[[z]] , \]
where the formal power series associated to a function $\psi
\maps \N \to \R$ is given by:
\[ \sum_{n\in\N}\psi(n)z^n. \]
}
\end{example}
A sufficiently nice groupoid over a groupoid $X$ will give a vector
in $\R^{\underline{X}}$. To construct this, we use the concept of
groupoid cardinality:
\begin{definition}
The {\bf cardinality} of a groupoid $X$ is
\[ |X| = \sum_{[x]\in \underline{X}} \frac{1}{|\Aut(x)|} \]
where $|\Aut(x)|$ is the cardinality of the automorphism group of an object
$x$ in $X$. If this sum diverges, we say $|X| = \infty$.
\end{definition}
The cardinality of a groupoid $X$ is a well-defined nonnegative rational
number whenever $\underline{X}$ and all the automorphism groups of
objects in $X$ are finite. More generally, we say:
\begin{definition}
A groupoid $X$ is {\bf tame} if $|X| < \infty$.
\end{definition}
We show in Lemma \ref{EQUIVGRPD} that given equivalent groupoids
$X$ and $Y$, $|X| = |Y|$. We describe a useful alternative method
for computing groupoid cardinality in Lemma \ref{ALTCARD}.

The reason we use $\R$ rather than $\Q$ as our ground field is that
there are interesting groupoids whose cardinalities are irrational numbers.
The following example is fundamental:
\begin{example}
\textup{
The groupoid of finite sets $E$ has cardinality
\[ |E| ~=~ \sum_{n \in \N} \frac{1}{|S_n|} ~=~
\sum_{n \in \N} \frac{1}{n!} ~=~ e. \]
}
\end{example}
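As a quick numerical sanity check (ours, not part of the text), the partial sums of this series converge rapidly to $e$:

```python
import math
from fractions import Fraction

# truncated groupoid cardinality of E: n-element sets contribute 1/|S_n| = 1/n!
partial = sum(Fraction(1, math.factorial(n)) for n in range(20))
print(float(partial))   # approaches e = 2.71828...
```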
With the concept of groupoid cardinality in hand, we now describe
how to obtain a vector in $\R^{\underline{X}}$
from a sufficiently nice groupoid over $X$.
\begin{definition}
Given a groupoid $X$, a {\bf groupoid over $X$} is a groupoid $\Psi$
equipped with a functor $v \maps \Psi \to X$.
\end{definition}
\begin{definition}
Given a groupoid over $X$, say $v \maps \Psi \to X$, and an object $x \in X$,
we define the {\bf full inverse image} of $x$,
denoted $v^{-1}(x)$, to be the groupoid where:
\begin{itemize}
\item
an object is an object $a \in \Psi$ such that $v(a) \cong x$;
\item
a morphism $f \maps a \to a'$ is any morphism in $\Psi$ from $a$ to
$a'$.
\end{itemize}
\end{definition}
\begin{definition}
A groupoid over $X$, say $v \maps \Psi \to X$, is {\bf tame} if the
groupoid $v^{-1}(x)$ is tame for all $x\in X$.
\end{definition}
\begin{definition}
\label{degroupoidification_of_vectors}
Given a tame groupoid over $X$, say $v \maps \Psi\to X$,
there is a vector $\utilde{\Psi} \in \R^{\underline{X}}$
defined by:
\[ \utilde{\Psi}([x]) = |v^{-1}(x)|. \]
\end{definition}
As discussed in Section \ref{intro}, the theory of generating functions
gives many examples of this construction. Here is one:
\begin{example}
\label{generating_function}
\textup{
Let $\Psi$ be the groupoid of 2-colored finite sets.
An object of $\Psi$ is a `2-colored finite set': that is,
a finite set $S$ equipped with a function $c: S \to 2$, where
$2 = \{0,1\}$. A morphism of $\Psi$ is a function between
2-colored finite sets preserving the 2-coloring: that is, a
commutative diagram of this sort:
\[
\xymatrix{
& S \ar[dr]_{c} \ar[rr]^{f} && S' \ar[dl]^{c'} \\
& & \{0,1\} & &
}
\]
There is a forgetful functor $v \maps \Psi \to E$ sending any
2-colored finite set $c \maps S \to 2$ to its underlying set $S$. It is a
fun exercise to check that for any $n$-element set, say $n$ for short,
the groupoid $v^{-1}(n)$ is equivalent to the weak quotient
$2^n/\!/S_n$, where $2^n$ is the set of functions $c: n \to 2$ and the
permutation group $S_n$ acts on $2^n$ in the obvious way. It follows
that
\[ \utilde{\Psi}(n) = |v^{-1}(n)| = |2^n /\!/ S_n | = 2^n/n! \]
so the corresponding power series is
\[ \utilde{\Psi} = \sum_{n \in \N} \frac{2^n}{n!}z^n =
e^{2z} \in \R[[z]]. \]
This is called the {\bf generating function} of $v \maps \Psi \to E$.
Note that the $n!$ in the denominator, often regarded as a convention,
arises naturally from the use of groupoid cardinality.
}
\end{example}
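One can also check numerically (a sketch of ours, with hypothetical function names) that the coefficients $2^n/n!$ really assemble into $e^{2z}$:

```python
import math

def coefficient(n):
    # |2^n // S_n| = 2^n / n!, the cardinality of the full inverse image of n
    return 2.0 ** n / math.factorial(n)

z = 0.5
series = sum(coefficient(n) * z ** n for n in range(60))
print(series, math.exp(2 * z))   # the two values agree
```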
Both addition and scalar multiplication of vectors have
groupoidified analogues. We can add two groupoids $\Phi$,
$\Psi$ over $X$ by taking their coproduct, i.e., the disjoint union of
$\Phi$ and $\Psi$ with the obvious map to $X$:
\[
\xymatrix{
\Phi + \Psi \ar[d] \\
X
}
\]
We then have:
\begin{proposition*}
Given tame groupoids $\Phi$ and $\Psi$ over $X$,
\[\utilde{\Phi + \Psi} = \utilde{\Phi} + \utilde{\Psi}.\]
\end{proposition*}
\begin{proof}
This will appear later as part of Lemma \ref{addvectors},
which also considers infinite sums.
\end{proof}
We can also multiply a groupoid over $X$ by a `scalar' --- that is,
a fixed groupoid. Given a groupoid over $X$, say $v \maps \Psi\to X$,
and a groupoid
$\Lambda$, the cartesian product $\Lambda\times \Psi$ becomes a
groupoid over $X$ as follows:
\[\xymatrix{
\Lambda\times \Psi \ar[d]^{v\pi_2}\\ X\\ }\]
where $\pi_2 \maps \Lambda\times \Psi\to \Psi$ is projection onto the
second factor. We then have:
\begin{proposition*}
Given a groupoid $\Lambda$ and a groupoid $\Psi$ over $X$, the groupoid
$\Lambda \times \Psi$ over $X$ satisfies
\[\utilde{\Lambda \times \Psi} = |\Lambda|\utilde{\Psi}.\]
\end{proposition*}
\begin{proof}
This is proved as Proposition \ref{scalarmult1}.
\end{proof}
We have seen how degroupoidification turns a groupoid $X$ into a vector space
$\R^{\underline{X}}$. Degroupoidification also turns any sufficiently
nice span of groupoids into a linear operator.
\begin{definition}
Given groupoids $X$ and $Y$, a {\bf span} from $X$ to $Y$ is a diagram
\[\xymatrix{
& S\ar[dl]_q\ar[dr]^p & \\
Y & & X \\
}\]
where $S$ is a groupoid and $p\maps S\to X$ and $q \maps S\to Y$ are functors.
\end{definition}
To turn a span of groupoids into a linear operator, we need a
construction called the `weak pullback'. This construction will let
us apply a span from $X$ to $Y$ to a groupoid over $X$ to obtain a
groupoid over $Y$. Then, since a tame groupoid over $X$ gives a
vector in $\R^{\underline{X}}$, while a tame groupoid over $Y$ gives a
vector in $\R^{\underline{Y}}$, a sufficiently nice span from $X$ to
$Y$ will give a map from $\R^{\underline{X}}$ to $\R^{\underline{Y}}$.
Moreover, this map will be linear.

As a warmup for understanding weak pullbacks for groupoids, we recall
ordinary pullbacks for sets, also called `fibered products'.
The data for constructing such a pullback is a pair of sets equipped
with functions to the same set:
\[
\xymatrix{
& T \ar[dr]_{q} & & S \ar[dl]^{p} & \\
& & X & &
}
\]
The pullback is the set
\[ P = \lbrace (s,t) \in S \times T \, | \; p(s) = q(t) \rbrace \]
together with the obvious projections $\pi_S \maps P \to S$ and
$\pi_T \maps P \to T$. The pullback makes this diamond commute:
\[
\xymatrix{
& & P \ar[dl]_{\pi_T} \ar[dr]^{\pi_S} & &\\
& T \ar[dr]_{q} & & S \ar[dl]^{p} & \\
& & X & &
}
\]
and indeed it is the `universal solution' to the problem of finding
such a commutative diamond \cite{Mac Lane}.
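The pullback of sets is immediate to realize in code. A minimal Python sketch (ours; the maps are hypothetical):

```python
def pullback(S, T, p, q):
    """Fibered product {(s, t) : p(s) == q(t)}, with its two projections."""
    P = [(s, t) for s in S for t in T if p(s) == q(t)]
    pi_S = lambda pair: pair[0]
    pi_T = lambda pair: pair[1]
    return P, pi_S, pi_T

# maps to X = {0, 1}: parity on S, identity on T
S, T = [1, 2, 3, 4], [0, 1]
P, pi_S, pi_T = pullback(S, T, p=lambda s: s % 2, q=lambda t: t)
print(P)   # [(1, 1), (2, 0), (3, 1), (4, 0)]
```

By construction $p(\pi_S(s,t)) = q(\pi_T(s,t))$ for every element of $P$, so the diamond commutes.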

To generalize the pullback to groupoids, we need to weaken one
condition. The data for constructing a weak pullback is a pair of
groupoids equipped with functors to the same groupoid:
\[
\xymatrix{
& T \ar[dr]_{q} & & S \ar[dl]^{p} & \\
& & X & &
}
\]

But now we replace the {\it equation} in the definition of pullback
by a {\it specified isomorphism}. So, we define the weak pullback
$P$ to be the groupoid where an object is a triple $(s,t,\alpha)$
consisting of an object $s \in S$, an object $t \in T$, and an
isomorphism $\alpha \maps p(s) \to q(t)$ in $X$. A morphism
in $P$ from $(s,t,\alpha)$ to $(s',t',\alpha')$ consists of a morphism
$f \maps s \to s'$ in $S$ and a morphism $g \maps t \to t'$ in $T$
such that the following square commutes:
\[
\xymatrix{
p(s) \ar[d]_{p(f)} \ar[r]^{\alpha} & q(t) \ar[d]^{q(g)} \\
p(s') \ar[r]_{\alpha'} & q(t')
}
\]

Note that any set can be regarded as a {\bf discrete} groupoid:
one with only identity morphisms. For discrete groupoids, the weak
pullback reduces to the ordinary pullback for sets.

Using the weak pullback, we can apply a span from $X$ to $Y$ to a groupoid
over $X$ and get a groupoid over $Y$. Given a span of groupoids:
\[
\xymatrix{
& S\ar[dl]_{q} \ar[dr]^{p} & \\
Y & & X
}
\]
and a groupoid over $X$:
\[
\xymatrix{
& \Phi \ar[dl]_{v} \\
X &
}
\]
we can take the weak pullback, which we call $S\Phi$:
\[
\xymatrix{
& & S\Phi \ar[dl]_{\pi_S}\ar[dr]^{\pi_{\Phi}} & \\
& S \ar[dl]_{q} \ar[dr]^{p} & & \Phi \ar[dl]_{v} \\
Y & & X &
}
\]
and think of $S\Phi$ as a groupoid over $Y$:
\[
\xymatrix{
& S\Phi \ar[dl]_{q \pi_S} \\
Y
}
\]
This process will determine a linear operator from $\R^{\underline X}$
to $\R^{\underline Y}$ if the span $S$ is sufficiently nice:
\begin{definition}
A span
\[
\xymatrix{
& S\ar[dl]_{q} \ar[dr]^{p} & \\
Y & & X
}
\]
is {\bf tame} if, for every tame groupoid $v \maps \Phi \to X$ over $X$,
the resulting groupoid $q\pi_{S} \maps S\Phi \to Y$ over $Y$ is also tame.
\end{definition}
\begin{theorem*}
Given a tame span:
\[
\xymatrix{
& S\ar[dl]_{q} \ar[dr]^{p} & \\
Y & & X
}
\]
there exists a unique linear operator
\[ \utilde{S} \maps \R^{\underline{X}} \to \R^{\underline{Y}} \]
such that
\[ \utilde{S}\utilde{\Phi} = \utilde{S\Phi} \]
whenever $\Phi$ is a tame groupoid over $X$.
\end{theorem*}
\begin{proof} This is Theorem \ref{PROCESS1}. \end{proof}
Theorem \ref{matrix} provides an explicit criterion for when a span is
tame. This theorem also gives an explicit formula for the
operator corresponding to a tame span $S$ from $X$ to $Y$. If
$\underline{X}$ and $\underline{Y}$ are finite, then
$\R^{\underline{X}}$ has a basis given by the isomorphism classes
$[x]$ in $X$, and similarly for $\R^{\underline{Y}}$. With respect to
these bases, the matrix entries of $\utilde{S}$ are given as follows:
\[
\utilde{S}_{[y][x]} =
\sum_{[s]\in\u{p^{-1}(x)}\cap \u{q^{-1}(y)}}\frac{|\Aut(x)|}{|\Aut(s)|} \]
where $|\Aut(x)|$ is the set cardinality of the automorphism group of
$x \in X$, and similarly for $|\Aut(s)|$. Even when $\underline{X}$
and $\underline{Y}$ are not finite, we have the following formula for
$\utilde S$ applied to $\psi \in \R^{\underline X}$:
\[
(\utilde{S} \psi)([y]) =
\sum_{[x] \in \u{X}} \;\,
\sum_{[s]\in\u{p^{-1}(x)}\cap \u{q^{-1}(y)}}
\frac{|\Aut(x)|}{|\Aut(s)|} \,\, \psi([x]) \,.
\]

As with vectors, there are groupoidified analogues of addition and scalar
multiplication for operators. Given two spans from $X$ to $Y$:
\[
\xymatrix{
& S\ar[dl]_{q_S} \ar[dr]^{p_S} & & & T\ar[dl]_{q_T} \ar[dr]^{p_T} & \\
Y & & X & Y & & X
}
\]
we can add them as follows. By the universal property of the
coproduct we obtain from the right legs of the above
spans a functor from the disjoint union $S + T$ to $X$. Similarly, from
the left legs of the above spans, we obtain a functor from $S + T$
to $Y$. Thus, we obtain a span
\[
\xymatrix{
& S + T\ar[dl] \ar[dr] & \\
Y & & X
}
\]
This addition of spans is compatible with degroupoidification:
\begin{proposition*}
If $S$ and $T$ are tame spans from $X$ to $Y$, then so is $S + T$, and
\[ \utilde{S+T} = \utilde{S} + \utilde{T} .\]
\end{proposition*}
\begin{proof} This is proved as Proposition \ref{add_spans}.
\end{proof}
We can also multiply a span by a `scalar': that is, a fixed groupoid.
Given a groupoid $\Lambda$ and a span
\[
\xymatrix{
& S\ar[dl]_q \ar[dr]^p & \\
Y & & X
}
\]
we can multiply them to obtain a span
\[
\xymatrix{
& \Lambda \times S \ar[dl]_{q\pi_2} \ar[dr]^{p\pi_2} & \\
Y & & X
}
\]
Again, we have compatibility with degroupoidification:
\begin{proposition*}
Given a tame groupoid $\Lambda$ and a tame span
\[
\xymatrix{
& S\ar[dl] \ar[dr] & \\
Y & & X
}
\]
the span $\Lambda \times S$ is tame and
\[\utilde{\Lambda \times S} = |\Lambda| \, \utilde{S}.\]
\end{proposition*}
\begin{proof} This is proved as Proposition \ref{scalarmult2}.
\end{proof}
Next we turn to the all-important process of {\it composing} spans.
This is the groupoidified analogue of matrix multiplication. Suppose
we have a span from $X$ to $Y$ and a span from $Y$ to $Z$:
\[
\xymatrix{
& T\ar[dl]_{q_T} \ar[dr]^{p_T} & & S \ar[dl]_{q_S} \ar[dr]^{p_S}& \\
Z & & Y & & X
}
\]
Then we say these spans are {\bf composable}.
In this case we can form a weak pullback in the middle:
\[
\xymatrix{
& & TS \ar[dl]_{\pi_T} \ar[dr]^{\pi_S} & &\\
& T\ar[dl]_{q_T} \ar[dr]^{p_T} & & S \ar[dl]_{q_S} \ar[dr]^{p_S}& \\
Z & & Y & & X
}
\]
which gives a span from $X$ to $Z$:
\[\xymatrix{
& TS \ar[dl]_{q_T \pi_T} \ar[dr]^{p_S\pi_S} & \\
Z & & X \\
}\]
called the {\bf composite} $TS$.
When all the groupoids involved are discrete, the spans $S$ and $T$
are just matrices of sets, as explained in Section \ref{intro}. We urge
the reader to check that in this case, the process of composing spans
is really just matrix multiplication, with cartesian product of sets
taking the place of multiplication of numbers, and disjoint union of
sets taking the place of addition:
\[ (TS)_{ki} = \coprod_{j \in Y} T_{kj} \times S_{ji} . \]
So, composing spans of groupoids is a generalization of matrix
multiplication.
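Readers who wish to experiment with this in the discrete case may find a short computation helpful. The following Python sketch (our own illustration, not part of the formal development; all names are invented) stores a span of sets as a `matrix of sets' and composes two spans using cartesian product as multiplication and disjoint union as addition, exactly as in the formula above:

```python
from itertools import product

# Discrete case: a span of sets from X to Y is a "matrix of sets" indexed
# by (row, col).  Composition is matrix multiplication with cartesian
# product as "times" and disjoint union as "plus":
#     (TS)_{ki} = coprod_j  T_{kj} x S_{ji}

def compose(T, S, Z, Y, X):
    """Compose a span S: X -> Y with a span T: Y -> Z, each given as a
    dict mapping (row, col) to a set of elements."""
    TS = {}
    for k in Z:
        for i in X:
            # tag each summand by j so the union is genuinely disjoint
            TS[(k, i)] = {(j, t, s)
                          for j in Y
                          for t, s in product(T.get((k, j), set()),
                                              S.get((j, i), set()))}
    return TS

# A tiny example with X = {0}, Y = {0, 1}, Z = {0}:
X, Y, Z = [0], [0, 1], [0]
S = {(0, 0): {'s1', 's2'}, (1, 0): {'s3'}}   # S: X -> Y
T = {(0, 0): {'t1'}, (0, 1): {'t2', 't3'}}   # T: Y -> Z
TS = compose(T, S, Z, Y, X)

# Cardinalities multiply and add just as in ordinary matrix multiplication:
# |TS_{00}| = |T_{00}||S_{00}| + |T_{01}||S_{10}| = 1*2 + 2*1 = 4
assert len(TS[(0, 0)]) == 4
```

Taking cardinalities of the entries recovers ordinary matrix multiplication of the corresponding matrices of natural numbers.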
Indeed, degroupoidification takes composition of tame
spans to composition of linear operators:
\begin{proposition*} If $S$ and $T$ are composable tame spans:
\[
\xymatrix{
& T\ar[dl]_{q_T} \ar[dr]^{p_T} & & S \ar[dl]_{q_S} \ar[dr]^{p_S}& \\
Z & & Y & & X
}
\]
then the composite span
\[\xymatrix{
& TS \ar[dl]_{q_T \pi_T} \ar[dr]^{p_S\pi_S} & \\
Z & & X \\
}\]
is also tame, and
\[ \utilde{TS} = \utilde{T} \utilde{S} .\]
\end{proposition*}
\begin{proof} This is proved as Lemma \ref{composition}.
\end{proof}
Besides addition and scalar multiplication, there
is an extra operation for groupoids over a groupoid $X$, which
is the reason groupoidification is connected to quantum
mechanics. Namely, we can take their inner product:
\begin{definition}
Given groupoids $\Phi$ and $\Psi$ over $X$, we define
the {\bf inner product} $\ip{\Phi}{\Psi}$ to be this weak pullback:
\[
\xymatrix{
& \ip{\Phi}{\Psi} \ar[dl] \ar[dr] & \\
\Phi \ar[dr] & & \Psi \ar[dl] \\
& X &
}
\]
\end{definition}
\begin{definition}
\label{L2}
A groupoid $\Psi$ over $X$ is called {\bf square-integrable} if
$\ip{\Psi}{\Psi}$ is tame. We define $L^2(X)$ to be the
subspace of $\R^{\underline{X}}$ consisting of finite
real linear combinations of vectors $\utilde{\Psi}$ where
$\Psi$ is square-integrable.
\end{definition}
Note that $L^2(X)$ is all of $\R^{\underline{X}}$ when
$\underline{X}$ is finite. The inner product of groupoids over $X$
makes $L^2(X)$ into a real Hilbert space:
\begin{theorem*}
Given a groupoid $X$, there is a unique inner product $\ip{\cdot}{\cdot}$
on the vector space $L^2(X)$ such that
\[ \ip{\utilde{\Phi}}{\utilde{\Psi}} = |\ip{\Phi}{\Psi}| \]
whenever $\Phi$ and $\Psi$ are square-integrable groupoids over $X$.
With this inner product $L^2(X)$ is a real Hilbert space.
\end{theorem*}
\begin{proof} This is proven later as Theorem \ref{innerprod_theorem}.
\end{proof}
We can always complexify $L^2(X)$ and obtain a complex
Hilbert space. We work with real coefficients simply to admit that
groupoidification as described here does not make essential use of the
complex numbers. Morton's generalization involving groupoids over
$\U(1)$ is one way to address this issue \cite{Morton:2006}.
The inner product of groupoids over $X$ has the properties one
would expect:
\begin{proposition*}
Given a groupoid $\Lambda$ and square-integrable
groupoids $\Phi$, $\Psi$, and $\Psi'$ over $X$, we have the
following equivalences of groupoids:
\begin{enumerate}
\item
\[\ip{\Phi}{\Psi} \simeq \ip{\Psi}{\Phi}.\]
\item
\[\ip{\Phi}{\Psi + \Psi'} \simeq \ip{\Phi}{\Psi} + \ip{\Phi}{\Psi'}.\]
\item
\[\ip{\Phi}{\Lambda \times \Psi} \simeq \Lambda \times \ip{\Phi}{\Psi}.\]
\end{enumerate}
\end{proposition*}
\begin{proof} Here equivalence of groupoids is defined in the usual
way --- see Definition \ref{equivalence_of_groupoids}. This result is
proved below as Proposition
\ref{innerprodandadjointprops}. \end{proof}
Finally, just as we can define the adjoint of an operator
between Hilbert spaces, we can define the adjoint of a span
of groupoids:
\begin{definition}
Given a span of groupoids from $X$ to $Y$:
\[
\xymatrix{
& S\ar[dl]_{q} \ar[dr]^{p} & \\
Y & & X
}
\]
its {\bf adjoint} $S^{\dagger}$ is the following span of
groupoids from $Y$ to $X$:
\[
\xymatrix{
& S\ar[dl]_{p} \ar[dr]^{q} & \\
X & & Y
}
\]
\end{definition}
We warn the reader that the adjoint of a tame span may not be
tame, due to an asymmetry in the criterion for tameness,
Theorem \ref{matrix}. However, we have:
\begin{proposition*}
Given a span
\[
\xymatrix{
& S \ar[dl]_{q} \ar[dr]^{p} & \\
Y & & X
}
\]
and a pair $v \maps \Psi \to X$, $w \maps \Phi \to Y$ of groupoids
over $X$ and $Y$, respectively, there is an equivalence of groupoids
\[ \langle \Phi,S\Psi \rangle
\simeq \langle S^\dagger\Phi,\Psi \rangle. \]
\end{proposition*}
\begin{proof}
This is proven as Proposition \ref{innerprod_adjoint}.
\end{proof}
We say what it means for spans to be `equivalent' in Definition
\ref{EQUIVALENCE}. Equivalent tame spans give the same linear
operator: $S \simeq T$ implies $\utilde{S} = \utilde{T}$. Spans of
groupoids obey many of the basic laws of linear algebra --- up to
equivalence. For example, we have these familiar properties of
adjoints:
\begin{proposition*}
Given spans
\[
\xymatrix{
& T \ar[dl]_{q_T} \ar[dr]^{p_T} & & & S \ar[dl]_{q_S} \ar[dr]^{p_S} & \\
Z & & Y & Y & & X
}
\]
and a groupoid $\Lambda$, we have the following equivalences of
spans:
\begin{enumerate}
\item $(TS)^\dagger \simeq S^\dagger T^\dagger$
\item $(S+T)^\dagger \simeq S^\dagger + T^\dagger$
\item $(\Lambda S)^\dagger \simeq \Lambda S^\dagger$
\end{enumerate}
\end{proposition*}
\begin{proof}
These will follow easily after we show addition and composition
of spans and scalar multiplication are well defined.
\end{proof}
\noindent
In fact, degroupoidification is a functor
\[ \utilde{\;\;} \,\maps \Span \to \Vect \]
where $\Vect$ is the category of real vector spaces and linear
operators, and $\Span$ is a category with
\begin{itemize}
\item
groupoids as objects,
\item
equivalence classes of tame spans as morphisms,
\end{itemize}
where composition comes from the method of composing spans we
have just described. We prove this fact in Theorem \ref{functor}.
A deeper approach, which we shall explain
elsewhere, is to think of $\Span$ as a bicategory with
\begin{itemize}
\item
groupoids as objects,
\item
tame spans as morphisms,
\item
isomorphism classes of maps of spans as 2-morphisms
\end{itemize}
Then degroupoidification becomes a map between bicategories:
\[ \utilde{\;\;} \, \maps \Span \to \Vect \]
where $\Vect$ is viewed as a bicategory with only identity 2-morphisms.
We can go even further and think of $\Span$ as a tricategory with
\begin{itemize}
\item
groupoids as objects,
\item
tame spans as morphisms,
\item
maps of spans as 2-morphisms,
\item
maps of maps of spans as 3-morphisms.
\end{itemize}
However, we have not yet found a use for this further structure.
In short, groupoidification is not merely a way of replacing
linear algebraic structures involving the real numbers with purely
combinatorial structures. It is also a form of `categorification'
\cite{BaezDolan:1998}, where we take structures defined in the category
$\Vect$ and find analogues that live in the bicategory $\Span$.
\section{Groupoidification}
\label{applications}
Degroupoidification is a systematic process; groupoidification is the
attempt to undo this process. The previous section explains
degroupoidification --- but not why groupoidification is interesting.
The interest lies in its applications to concrete examples. So,
let us sketch two: Feynman diagrams and Hecke algebras.
\subsection{Feynman Diagrams}
\label{feynman}
One of the first steps in developing quantum theory was Planck's new
treatment of electromagnetic radiation. Classically, electromagnetic
radiation in a box can be described as a collection of harmonic
oscillators, one for each vibrational mode of the field in the box.
Planck `quantized' the electromagnetic field by assuming that the energy
of each oscillator could only take discrete, evenly spaced values: if
by fiat we say the lowest possible energy is $0$, the allowed energies
take the form $n \hbar \omega$, where $n$ is any natural number,
$\omega$ is the frequency of the oscillator in question, and $\hbar$
is Planck's constant.
Planck did not know what to make of the number $n$, but Einstein and
others later interpreted it as the number of `quanta' occupying the
vibrational mode in question. However, far from being particles in
the traditional sense of tiny billiard balls, quanta are curiously
abstract entities --- for example, all the quanta occupying a given
mode are indistinguishable from each other.
In a modern treatment, states of a quantized harmonic oscillator are
described as vectors in a Hilbert space called `Fock space'. This
Hilbert space consists of formal power series. For a full treatment
of the electromagnetic field we would need power series in many
variables, one for each vibrational mode. But to keep things simple,
let us consider power series in one variable. In this case, the
vector $z^n/n!$ describes a state in which $n$ quanta are present.
A general vector in Fock space is a convergent linear combination of
these special vectors. More precisely, the {\bf Fock space}
consists of $\psi \in \C[[z]]$ with $\langle \psi, \psi \rangle <
\infty$, where the inner product is given by
\begin{equation}
\label{fock_inner_product}
\left\langle
\sum a_n z^n \, , \,
\sum b_n z^n
\right\rangle
\; = \;
\sum n! \,\, \overline{a}_n b_n \, .
\end{equation}
But what is the meaning of this inner product? It is precisely the
inner product in $L^2(E)$, where $E$ is the groupoid of finite sets!
This is no coincidence. In fact, there is a deep relationship between
the mathematics of the quantum harmonic oscillator and the
combinatorics of finite sets. This relation suggests a program of
{\it groupoidifying} mathematical tools from quantum theory, such as
annihilation and creation operators, field operators and their
normal-ordered products, Feynman diagrams, and so on. This program
was initiated by Dolan and one of the current authors
\cite{BaezDolan:2001}. Later, it was developed much further by Morton
\cite{Morton:2006}. Guta and Maassen \cite{GM:2002} and Aguiar and
Maharam \cite{AM:2008} have also done relevant work. Here we just
sketch some of the basic ideas.
First, let us see why the inner product on Fock space matches the
inner product on $L^2(E)$ as described in Theorem \ref{innerprod_theorem}.
We can compute the latter inner product using a convenient basis. Let
$\Psi_n$ be the groupoid with $n$-element sets as objects and
bijections as morphisms. Since all $n$-element sets are isomorphic
and each one has the permutation group $S_n$ as automorphisms, we have
an equivalence of groupoids
\[ \Psi_n \simeq 1 /\!/S_n . \]
Furthermore, $\Psi_n$ is a groupoid over $E$ in an obvious way:
\[ v \maps \Psi_n \to E . \]
We thus obtain a vector $\utilde{\Psi}_n \in \R^{\underline{E}}$
following the rule described in Definition
\ref{degroupoidification_of_vectors}. We can describe this vector as
a formal power series using the isomorphism
\[ \R^{\underline{E}} \cong \R[[z]]
\]
described in Example \ref{power_series}. To do this, note that
\[ v^{-1} (m) \simeq
\begin{cases}
1 /\!/S_n & m = n \\
0 & m \ne n
\end{cases}
\]
where $0$ stands for the empty groupoid. It follows that
\[ |v^{-1} (m)| =
\begin{cases}
1/n! & m = n \\
0 & m \ne n
\end{cases}
\]
and thus
\[ \utilde{\Psi}_n = \sum_{m \in \N} |v^{-1}(m)| \, z^m =
\frac{z^n}{n!} .\]
Next let us compute the inner product in $L^2(E)$. Since finite
linear combinations of vectors of the form $\utilde{\Psi}_n$ are dense
in $L^2(E)$ it suffices to compute the inner product of two vectors of
this form. We can use the recipe in Theorem \ref{innerprod_theorem}.
So, we start by taking the weak pullback of the corresponding groupoids
over $E$:
\[
\xymatrix{
& \ip{\Psi_m}{\Psi_n} \ar[dl] \ar[dr] & \\
\Psi_m \ar[dr] & & \Psi_n \ar[dl] \\
& E &
}
\]
An object of this weak pullback consists of an $m$-element set $S$, an
$n$-element set $T$, and a bijection $\alpha \maps S \to T$. A
morphism in this weak pullback consists of a commutative square of
bijections:
\[
\xymatrix{
S \ar[d]_{f} \ar[r]^{\alpha} & T \ar[d]^{g} \\
S' \ar[r]_{\alpha'} & T'
}
\]
So, there are no objects in $\ip{\Psi_m}{\Psi_n}$ when $n \ne m$.
When $n = m$, all objects in this groupoid are isomorphic, and each
one has $n!$ automorphisms. It follows that
\[ \langle \utilde{\Psi}_m, \utilde{\Psi}_n \rangle =
| \langle \Psi_m, \Psi_n \rangle | =
\begin{cases}
1/n! & m = n \\
0 & m \ne n
\end{cases}
\]
Using the fact that $\utilde{\Psi}_n = z^n/n!$, we see that
this is precisely the inner product in Eq.\ \ref{fock_inner_product}.
So, as a complex Hilbert space, Fock space is the complexification of
$L^2(E)$.
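As a purely computational aside (not part of the proof), the match between Eq.\ \ref{fock_inner_product} and the groupoid cardinality $|\langle \Psi_m, \Psi_n\rangle|$ can be checked numerically. In this Python sketch the basis vector $\utilde{\Psi}_n = z^n/n!$ is stored as a list of power series coefficients:

```python
from math import factorial

# Degroupoidified basis vectors: Psi_n corresponds to z^n / n!, i.e. the
# power series whose only nonzero coefficient is a_n = 1/n!.
def psi(n, N=10):
    return [1 / factorial(n) if m == n else 0.0 for m in range(N)]

# Fock space inner product:  < sum a_n z^n , sum b_n z^n > = sum n! a_n b_n
def fock_ip(a, b):
    return sum(factorial(n) * a[n] * b[n] for n in range(len(a)))

# Groupoid side: |<Psi_m, Psi_n>| is 1/n! when m = n (each object of the
# weak pullback has n! automorphisms) and 0 otherwise.
def groupoid_ip(m, n):
    return 1 / factorial(n) if m == n else 0.0

for m in range(6):
    for n in range(6):
        assert abs(fock_ip(psi(m), psi(n)) - groupoid_ip(m, n)) < 1e-12
```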
It is worth reflecting on the meaning of the computation we just did.
The vector $\utilde{\Psi}_n = z^n/n!$ describes a state of the quantum
harmonic oscillator in which $n$ quanta are present. Now we see that
this vector arises from the groupoid $\Psi_n$ over $E$. In Section
\ref{intro} we called a groupoid over $E$ a {\bf stuff type}, since it
describes a way of equipping finite sets with extra stuff. The stuff
type $\Psi_n$ is a very simple special case, where the stuff is simply
`being an $n$-element set'. So, groupoidification reveals the
mysterious `quanta' to be simply elements of finite sets. Moreover,
the formula for the inner product on Fock space arises from the fact
that there are $n!$ ways to identify two $n$-element sets.
The most important operators on Fock space are the annihilation and
creation operators. If we think of vectors in Fock space as formal
power series, the {\bf annihilation operator} is given by
\[ (a \psi)(z) = \frac{d}{dz} \psi(z) \]
while the {\bf creation operator} is given by
\[ (a^* \psi)(z) = z \psi(z) .\]
As operators on Fock space, these are only densely defined: for example,
they map the dense subspace $\C[z]$ to itself. However, we can also think
of them as operators from $\C[[z]]$ to itself. In physics these
operators decrease or increase the number of quanta in a state, since
\[ az^n = n z^{n-1} , \qquad a^* z^n = z^{n+1}.\]
Creating a quantum and then annihilating one is not the same as
annihilating and then creating one, since
\[ a a^* = a^* a + 1 .\]
This is one of the basic examples of noncommutativity in quantum theory.
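The relation $aa^* = a^*a + 1$ can be checked directly on truncated power series. Here is a small Python sketch (an illustration under our conventions, with an arbitrary truncation degree):

```python
# Represent a power series by its list of coefficients [c_0, c_1, ...],
# truncated at degree N.  The annihilation operator is d/dz and the
# creation operator is multiplication by z.

N = 12

def annihilate(c):          # (a psi)(z) = psi'(z)
    return [n * c[n] for n in range(1, len(c))] + [0.0]

def create(c):              # (a* psi)(z) = z psi(z)
    return [0.0] + c[:-1]

# Check a a* = a* a + 1 on the monomials z^n below the truncation degree:
for n in range(N - 1):
    zn = [1.0 if m == n else 0.0 for m in range(N)]
    lhs = annihilate(create(zn))                              # (n+1) z^n
    rhs = [x + y for x, y in zip(create(annihilate(zn)), zn)] # n z^n + z^n
    assert lhs == rhs
```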
The annihilation and creation operators arise from spans by
degroupoidification, using the recipe described in Theorem
\ref{PROCESS1}. The annihilation operator comes from this span:
\[
\xymatrix{
& E\ar[dl]_{1} \ar[dr]^{S \mapsto S+1} & \\
E & & E
}
\]
where the left leg is the identity functor and the right
leg is the functor `disjoint union with a 1-element set'.
Since it is ambiguous to refer to this span by the name of
the groupoid on top, as we have been doing, we instead
call it $A$. Similarly, we call its adjoint $A^*$:
\[
\xymatrix{
& E\ar[dl]_{S \mapsto S+1} \ar[dr]^{1} & \\
E & & E
}
\]
A calculation \cite{Morton:2006} shows that indeed:
\[ \utilde{A} = a, \qquad \utilde{A}^* = a^* .\]
Moreover, we have an equivalence of spans:
\[ AA^* \simeq A^*A + 1 . \]
Here we are using composition of spans, addition of spans and the
identity span as defined in Section \ref{degroupoidification}. If we
unravel the meaning of this equivalence, it turns out to be very
simple \cite{BaezDolan:2001}. If you have an urn with $n$ balls in
it, there is one more way to put in a ball and then take one out than
to take one out and then put one in. Why? Because in the first
scenario there are $n+1$ balls to choose from when you take one out,
while in the second scenario there are only $n$. So, the
noncommutativity of annihilation and creation operators is not a
mysterious thing: it has a simple, purely combinatorial explanation.
We can go further and define a span
\[ \Phi = A + A^* \]
which degroupoidifies to give the well-known {\bf field operator}
\[ \phi = \utilde{\Phi} = a + a^* \]
Our normalization here differs from the usual one in physics because
we wish to avoid dividing by $\sqrt{2}$, but all the usual physics
formulas can be adapted to this new normalization.
The powers of the span $\Phi$ have a nice combinatorial
interpretation. If we write its $n$th power as follows:
\[
\xymatrix{
& \Phi^n \ar[dl]_{q} \ar[dr]^{p} & \\
E & & E
}
\]
then we can reinterpret this span as a groupoid over $E \times E$:
\[
\xymatrix{
\Phi^n \ar[d]_{p \times q} \\
E \times E
}
\]
Just as a groupoid over $E$ describes a way of equipping a finite
set with extra stuff, a groupoid over $E \times E$ describes a way
of equipping a {\it pair} of finite sets with extra stuff.
And in this example, the extra stuff in question is a very simple
sort of diagram!
More precisely, we can draw an object of $\Phi^n$ as an $i$-element set
$S$, a $j$-element set $T$, a graph with $i+j$ univalent vertices and
a single $n$-valent vertex, together with a bijection between the
$i+j$ univalent vertices and the elements of $S + T$. It is against
the rules for vertices labelled by elements of $S$ to be connected by
an edge, and similarly for vertices labelled by elements of $T$. The
functor $p \times q \maps \Phi^n \to E \times E$ sends such an object
of $\Phi^n$ to the pair of sets $(S,T) \in E \times E$.
An object of $\Phi^n$ sounds like a complicated thing, but it can be
depicted quite simply as a {\bf Feynman diagram}. Physicists
traditionally read Feynman diagrams from bottom to top. So, we draw
the above graph so that the univalent vertices labelled by elements of
$S$ are at the bottom of the picture, and those labelled by elements
of $T$ are at the top. For example, here is an object of $\Phi^3$,
where $S = \{1,2,3\}$ and $T = \{4,5,6,7\}$:
\begin{center}
\begin{picture}(110,120)
\includegraphics[scale=0.7]{feynman_1.eps}
\put(-96,110){$5$}
\put(-66,110){$4$}
\put(-36,110){$7$}
\put(-6,110){$6$}
\put(-76,-13){$1$}
\put(-46,-13){$3$}
\put(-16,-13){$2$}
\end{picture}
\end{center}
\vskip 0.5em
\noindent
In physics, we think of this as a process where 3 particles come
in and 4 go out.
Feynman diagrams of this sort are allowed to have {\bf self-loops}:
edges with both ends at the same vertex. So, for example, this is
a perfectly fine object of $\Phi^5$ with $S = \{1,2,3\}$ and
$T = \{4,5,6,7\}$:
\begin{center}
\begin{picture}(110,120)
\includegraphics[scale=0.7]{feynman_2.eps}
\put(-96,110){$5$}
\put(-66,110){$4$}
\put(-36,110){$6$}
\put(-6,110){$7$}
\put(-76,-13){$2$}
\put(-46,-13){$3$}
\put(-16,-13){$1$}
\end{picture}
\end{center}
\vskip 0.5em
\noindent
To eliminate self-loops, we can work with the {\bf normal-ordered
powers} or `Wick powers' of $\Phi$, denoted $\w\Phi^n\,\w\,$. These
are the spans obtained by taking $\Phi^n$, expanding it in terms
of the annihilation and creation operators, and moving all the
annihilation operators to the right of all the creation operators
`by hand', ignoring the fact that they do not commute. For example:
\begin{eqnarray*}
\w\Phi^0\,\w &=& 1 \\
\w\Phi^1\,\w &=& A + A^* \\
\w\Phi^2\,\w &=& A^2 + 2A^* A + {A^*}^2 \\
\w\Phi^3\,\w &=& A^3 + 3A^* A^2 + 3{A^*}^2 A + {A^*}^3
\end{eqnarray*}
and so on. Objects of $\w \Phi^n \w$ can be drawn as Feynman
diagrams just as we did for objects of $\Phi^n$. There is just
one extra rule: self-loops are not allowed.
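The `by hand' reordering can be imitated computationally: expand $(A + A^*)^n$ as a sum of words in two letters, sort each word so that every creation letter precedes every annihilation letter, and tally the results. This Python sketch (ours, purely illustrative) recovers the binomial coefficients seen above:

```python
from itertools import product
from collections import Counter
from math import comb

def wick_power(n):
    """Expand Phi^n = (A + A*)^n as words in {'a', 'c'} (c stands for A*),
    then normal-order each word 'by hand': move every c to the left of
    every a, ignoring the failure of commutativity."""
    coeffs = Counter()
    for word in product('ac', repeat=n):
        normal = ''.join(sorted(word, key=lambda x: x != 'c'))
        coeffs[normal] += 1
    return dict(coeffs)

# :Phi^3: = A^3 + 3 A* A^2 + 3 (A*)^2 A + (A*)^3
assert wick_power(3) == {'aaa': 1, 'caa': 3, 'cca': 3, 'ccc': 1}
# In general the coefficient of (A*)^k A^(n-k) is the binomial coefficient:
assert all(wick_power(5)['c' * k + 'a' * (5 - k)] == comb(5, k)
           for k in range(6))
```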
In quantum field theory one does many calculations involving
products of normal-ordered powers of field operators. Feynman
diagrams make these calculations easy. In the groupoidified context,
a product of normal-ordered powers is a span
\[
\xymatrix{
& \w\Phi^{n_1}\,\w \; \cdots\; \w\Phi^{n_k}\,\w
\ar[dl]_>>>>>>>{q} \ar[dr]^>>>>>>>{p} & \\
E & & E \, .
}
\]
As before, we can draw an object of the groupoid $\w\Phi^{n_1}\,\w \;
\cdots\; \w\Phi^{n_k}\,\w$ as a Feynman diagram. But now these
diagrams are more complicated, and closer to those seen in physics
textbooks. For example, here is a typical object of $\w \Phi^3 \w \,
\w \Phi^3 \w \, \w \Phi^4 \w$, drawn as a Feynman diagram:
\begin{center}
\begin{picture}(110,130)
\includegraphics[scale=0.7]{feynman_3.eps}
\put(-100,125){$5$}
\put(-68,125){$8$}
\put(-36,125){$7$}
\put(-6,125){$6$}
\put(-101,-11){$1$}
\put(-68,-11){$4$}
\put(-36,-11){$2$}
\put(-5,-11){$3$}
\end{picture}
\end{center}
\vskip 0.5em
\noindent
In general, a {\bf Feynman diagram} for an object of $\w\Phi^{n_1}\,\w
\; \cdots\; \w\Phi^{n_k}\,\w$ consists of an $i$-element set $S$, a
$j$-element set $T$, a graph with $k$ vertices of valence $n_1, \dots,
n_k$ together with $i+j$ univalent vertices, and a bijection between
these univalent vertices and the elements of $S+T$. Self-loops are
forbidden; it is against the rules for two vertices labelled by
elements of $S$ to be connected by an edge, and similarly for two
vertices labelled by elements of $T$. As before, the forgetful
functor $p \times q$ sends any such object to the pair of sets $(S,T)
\in E \times E$.
The groupoid $\w\Phi^{n_1}\,\w \; \cdots\; \w\Phi^{n_k}\,\w$ also contains
interesting automorphisms. These come from {\it symmetries} of Feynman
diagrams: that is, graph automorphisms fixing the univalent
vertices labelled by elements of $S$ and $T$. These symmetries play
an important role in computing the operator corresponding to this span:
\[
\xymatrix{
& \w\Phi^{n_1}\,\w \; \cdots\; \w\Phi^{n_k}\,\w
\ar[dl]_>>>>>>>{q} \ar[dr]^>>>>>>>{p} & \\
E & & E \, .
}
\]
As is evident from Theorem \ref{matrix}, when a Feynman diagram has
symmetries, we need to divide by the number of symmetries when
determining its contribution to the operator coming from the above
span. This rule is well-known in quantum field theory; here we see it
arising as a natural consequence of groupoid cardinality.
\subsection{Hecke Algebras}
\label{hecke}
Hecke algebras are $q$-deformations of finite reflection groups, also
known as Coxeter groups \cite{Humphreys}. Any Dynkin diagram gives
rise to a simple Lie group, and the Weyl group of this Lie
group is a Coxeter group. Here we sketch how to groupoidify a Hecke
algebra when the parameter $q$ is a power of a prime number and the
finite reflection group comes from a Dynkin diagram in this way. More
details will appear in future work \cite{Baez}.
Let $D$ be a Dynkin diagram. We write $d \in D$ to mean that $d$ is a
dot in this diagram. Associated to each unordered pair of dots $d,d'
\in D$ is a number $m_{dd'} \in \{2,3,4,6\}$. In the usual Dynkin
diagram conventions:
\begin{itemize}
\item
$m_{dd'} = 2$ is drawn as no edge at all,
\item
$m_{dd'} = 3$ is drawn as a single edge,
\item
$m_{dd'} = 4$ is drawn as a double edge,
\item
$m_{dd'} = 6$ is drawn as a triple edge.
\end{itemize}
For any nonzero number $q$, our Dynkin diagram gives a Hecke algebra.
Since we are using real vector spaces in this paper, we work with the
Hecke algebra over $\R$:
\begin{definition}
Let $D$ be a Dynkin diagram and $q$ a nonzero real number. The
{\bf Hecke algebra} $H(D,q)$ corresponding to this data is the associative
algebra over $\R$ with one generator $\sigma_d$ for each $d \in D$,
and relations:
\[\sigma_d^2 = (q-1)\sigma_d + q \]
for all $d \in D$, and
\[\sigma_d\sigma_{d'}\sigma_d\ldots
= \sigma_{d'} \sigma_d \sigma_{d'}\ldots \]
for all $d,d'\in D$, where each side has $m_{dd'}$ factors.
\end{definition}
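As a small observation (not needed in what follows), the quadratic relation makes each generator invertible whenever $q \neq 0$, since
\[ \sigma_d \cdot q^{-1}\bigl(\sigma_d - (q-1)\bigr)
 \;=\; q^{-1}\bigl(\sigma_d^2 - (q-1)\sigma_d\bigr)
 \;=\; q^{-1} \cdot q \;=\; 1 ,\]
so $\sigma_d^{-1} = q^{-1}\bigl(\sigma_d - (q-1)\bigr)$. At $q = 1$ this reduces to $s_d^{-1} = s_d$, as one expects for a reflection.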
\noindent
When $q = 1$, this Hecke algebra is simply the group algebra of
the {\bf Coxeter group} associated to $D$: that is, the group
with one generator $s_d$ for each dot $d \in D$, and relations
\[ s_d^2 = 1, \qquad (s_d s_{d'})^{m_{dd'}} = 1 .\]
So, the Hecke algebra can be thought of as a $q$-deformation of this
Coxeter group.
If $q$ is a power of a prime number, the Dynkin diagram $D$ determines
a simple algebraic group $G$ over the field with $q$ elements, $\F_q$.
We choose a Borel subgroup $B \subseteq G$, i.e., a maximal solvable
subgroup. This in turn determines a transitive $G$-set $X = G/B$.
This set is a smooth algebraic variety called the {\bf flag variety}
of $G$, but we only need the fact that it is a finite set equipped
with a transitive action of the finite group $G$. Starting from just
this $G$-set $X$, we can groupoidify the Hecke algebra $H(D,q)$.
Recalling the concept of `action groupoid' from Section \ref{intro},
we define the {\bf groupoidified Hecke algebra} to be
\[ (X \times X)\Over G. \]
This groupoid has one isomorphism class of objects for each $G$-orbit
in $X \times X$:
\[ \underline{(X \times X)\Over G} \cong (X \times X)/G. \]
The well-known `Bruhat decomposition' of $G$ into double cosets of
$B$ shows there is one such orbit for each element of the Coxeter group
associated to $D$. Using this, one can check that $(X \times X)\Over G$
degroupoidifies to give the underlying vector space of the Hecke algebra.
In other words, there is a canonical isomorphism of vector spaces
\[ \R^{(X \times X)/G} \cong H(D,q) .\]
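This orbit count can be verified directly in a small case. With the $A_2$ diagram and $q = 2$ we have $G = \SL(3,\F_2)$ acting on the 21 complete flags in $\F_2^3$, and the six $G$-orbits on $X \times X$, one for each element of the Weyl group $S_3$, can be enumerated by brute force. The following Python sketch is a computational aside; the encoding of flags is our own:

```python
from itertools import product

# G = SL(3, F_2) acting on the complete flags in F_2^3.  Bruhat predicts
# one G-orbit on X x X for each of the |S_3| = 6 Weyl group elements.
q = 2
pts = [v for v in product(range(q), repeat=3) if any(v)]

def dot(v, w):
    return sum(a * b for a, b in zip(v, w)) % q

def mul(g, v):      # matrix (tuple of rows) times column vector, mod q
    return tuple(dot(row, v) for row in g)

def det(g):
    return (g[0][0]*(g[1][1]*g[2][2] - g[1][2]*g[2][1])
          - g[0][1]*(g[1][0]*g[2][2] - g[1][2]*g[2][0])
          + g[0][2]*(g[1][0]*g[2][1] - g[1][1]*g[2][0])) % q

G = [g for g in product(product(range(q), repeat=3), repeat=3)
     if det(g) == 1]
assert len(G) == 168        # |SL(3, F_2)|

# A line is a 2-dimensional subspace, stored as the set of its points;
# a flag is a point lying on a line.
lines = {frozenset(v for v in pts if dot(v, w) == 0) for w in pts}
flags = [(p, l) for l in lines for p in l]
assert len(flags) == 21

def act(g, flag):
    p, l = flag
    return (mul(g, p), frozenset(mul(g, x) for x in l))

# Count the G-orbits on pairs of flags:
pairs = set(product(flags, repeat=2))
orbits = 0
while pairs:
    f1, f2 = next(iter(pairs))
    pairs -= {(act(g, f1), act(g, f2)) for g in G}
    orbits += 1
assert orbits == 6
```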
Even better, we can groupoidify the {\it multiplication} in the Hecke
algebra. In other words, we can find a span that degroupoidifies to
give the linear operator
\[
\begin{array}{ccc}
H(D,q) \otimes H(D,q) & \to & H(D,q) \\
a \otimes b & \mapsto & ab
\end{array}
\]
This span is very simple:
\begin{equation}
\label{multiplication}
\def\objectstyle{\scriptstyle}
\def\labelstyle{\scriptstyle}
\xy
(0,30)*+{(X \times X \times X)\Over G}="1";
(-25,10)*+{(X\times X)\Over G \;\,\times\;\, (X\times X)\Over G}="2";
(25,10)*+{(X\times X)\Over G}="3";
{\ar_{(p_1,p_2)\times (p_2,p_3)} "1";"2"};
{\ar^{(p_1,p_3)} "1";"3"};
\endxy
\end{equation}
where $p_i$ is projection onto the $i$th factor.
One can check through explicit computation that this span does the
job. The key is that for each dot $d \in D$ there is a special
isomorphism class in $(X \times X)\Over G$, and the function
\[ \psi_d \maps (X \times X)/G \to \R \]
that equals 1 on this isomorphism class and 0 on the rest
corresponds to the generator $\sigma_d \in H(D,q)$.
To illustrate these ideas, let us consider the simplest nontrivial
example, the Dynkin diagram $A_2$:
\[\xymatrix{ \bullet\ar@{-}[r] & \bullet}\]
The Hecke algebra associated to $A_2$ has two generators, which we
call $P$ and $L$, for reasons soon to be revealed:
\[ P = \sigma_1 , \qquad L = \sigma_2 . \]
The relations are
\[ P^2 = (q-1) P + q, \qquad L^2 = (q-1) L + q, \qquad PLP=LPL .\]
It follows that this Hecke algebra is a quotient of the group algebra
of the 3-strand braid group, which has two generators $P$ and $L$ and
one relation $PLP = LPL$, called the {\bf Yang--Baxter equation} or
{\bf third Reidemeister move}. This is why Jones could use traces on
the $A_n$ Hecke algebras to construct invariants of knots
\cite{Jones}. This connection to knot theory makes it especially
interesting to groupoidify Hecke algebras.
So, let us see what the groupoidified Hecke algebra looks like, and
where the Yang--Baxter equation comes from. The algebraic group
corresponding to the $A_2$ Dynkin diagram and the prime power $q$ is
$G = \SL(3,\F_q)$, and we can choose the Borel subgroup $B$ to consist
of upper triangular matrices in $\SL(3,\F_q)$. Recall that a {\bf
complete flag} in the vector space $\F_q^3$ is a pair of subspaces
\[ 0 \subset V_1 \subset V_2 \subset \F_q^3 . \]
The subspace $V_1$ must have dimension 1, while $V_2$ must have
dimension 2. Since $G$ acts transitively on the set of complete
flags, while $B$ is the subgroup stabilizing a chosen flag, the flag
variety $X = G/B$ in this example is just the set of complete flags in
$\F_q^3$ --- hence its name.
We can think of $V_1 \subset \F_q^3$ as a point in the projective
plane $\F_q{\mathrm P}^2$, and $V_2 \subset \F_q^3$ as a line in this
projective plane. From this viewpoint, a complete flag is a chosen
point lying on a chosen line in $\F_q {\mathrm P}^2$. This viewpoint
is natural in the theory of `buildings', where each Dynkin diagram
corresponds to a type of geometry \cite{Brown,Garrett}. Each dot in
the Dynkin diagram then stands for a `type of geometrical figure',
while each edge stands for an `incidence relation'. The $A_2$ Dynkin
diagram corresponds to projective plane geometry. The dots in this
diagram stand for the figures `point' and `line':
\[\xymatrix{ {\rm point} \; \bullet \ar@{-}[r] &
\bullet \; {\rm line} }\]
The edge in this diagram stands for the incidence
relation `the point $p$ lies on the line $\ell$'.
We can think of $P$ and $L$ as special elements of the $A_2$ Hecke
algebra, as already described. But when we groupoidify the Hecke
algebra, $P$ and $L$ correspond to {\it objects} of $(X \times X)\Over
G$. Let us describe these objects and explain how the Hecke algebra
relations arise in this groupoidified setting.
As we have seen, an isomorphism class of objects in $(X \times X)\Over
G$ is just a $G$-orbit in $X \times X$. These orbits in turn
correspond to spans of $G$-sets from $X$ to $X$ that are {\bf
irreducible}: that is, not a coproduct of other spans of $G$-sets.
So, the objects $P$ and $L$ can be defined by giving irreducible spans
of $G$-sets:
\[
\xymatrix{
& P \ar[dl]\ar[dr] & && & L\ar[dl]\ar[dr] & \\
X & &X && X & &X
}
\]
In general, any span of $G$-sets
\[
\xymatrix{
& S \ar[dl]_{q} \ar[dr]^{p} & \\
X & & X}
\]
such that $q \times p \maps S \to X \times X$ is injective can be
thought of as a $G$-invariant binary relation between elements of $X$.
Irreducible $G$-invariant spans are always injective in this sense.
So, such spans can also be thought of as $G$-invariant relations between
flags. In these terms, we define $P$ to be the relation that says
two flags have the same line, but different points:
\[ P = \{((p,\ell),(p',\ell)) \in X \times X \mid p\neq p'\}
\]
Similarly, we think of $L$ as a relation saying two flags
have different lines, but the same point:
\[ L =
\{((p,\ell),(p,\ell')) \in X \times X \mid \ell\neq \ell'\}. \]
Given this, we can check that
\[ P^2 \cong (q-1) \times P + q \times 1, \qquad L^2 \cong (q-1) \times L +
q \times 1, \qquad PLP \cong LPL . \]
Here both sides refer to spans of $G$-sets, and we denote a span by
its apex. Addition of spans is defined using coproduct, while $1$
denotes the identity span from $X$ to $X$. We use `$q$' to stand for
a fixed $q$-element set, and similarly for `$q-1$'. We compose spans
of $G$-sets using the ordinary pullback. It takes a bit of thought to
check that this way of composing spans of $G$-sets matches the product
described by Eq.\ \ref{multiplication}, but it is indeed the case.
To check the existence of the first two isomorphisms above, we just
need to count. In $\F_q\mathrm{P}^2$, there are $q+1$ points on any
line. So, given a flag we can change the point in $q$ different ways.
To change it again, we have a choice: we can either send it back to
the original point, or change it to one of the $q-1$ other points.
So, $P^2 \cong (q-1) \times P + q \times 1$. Since there are also
$q+1$ lines through any point, similar reasoning shows that $L^2 \cong
(q-1) \times L + q \times 1$.
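For $q = 2$ this counting argument can be checked exhaustively: the projective plane over $\F_2$ is the Fano plane, with 21 complete flags. In the Python sketch below (our own encoding: a line is represented by its normal vector, which determines it uniquely over $\F_2$) we verify $P^2 \cong (q-1) \times P + q \times 1$ at the level of cardinalities:

```python
from itertools import product

# The projective plane over F_2: points and lines are both labelled by
# nonzero vectors in F_2^3, and the point v lies on the line with normal
# vector w exactly when v . w = 0 (mod 2).
q = 2
vecs = [v for v in product(range(q), repeat=3) if any(v)]

def incident(v, w):
    return sum(a * b for a, b in zip(v, w)) % q == 0

flags = [(p, l) for p in vecs for l in vecs if incident(p, l)]
assert len(flags) == 21     # (q^2 + q + 1)(q + 1) complete flags

# P: change the point but keep the line.
def P(f, g):
    return f[1] == g[1] and f[0] != g[0]

# Compose P with itself by counting intermediate flags, and compare the
# result with (q-1) x P + q x 1:
for f, h in product(flags, repeat=2):
    paths = sum(1 for g in flags if P(f, g) and P(g, h))
    if f == h:
        assert paths == q        # q ways to move away and come back
    elif P(f, h):
        assert paths == q - 1    # q-1 other points to pass through
    else:
        assert paths == 0
```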
The Yang--Baxter isomorphism
\[ PLP \cong LPL \]
is more interesting. We construct it as follows. First consider the
left-hand side, $PLP$. So, start with a complete flag called
$(p_1,\ell_1)$:
\hfill \break
\begin{center}
\begin{picture}(50,25)
\includegraphics[scale=0.35]{newflags0.eps}
\put(-20,2){$p_1$}
\put(0,-10){$\ell_1$}
\end{picture}
\end{center}
\medskip
Then, change the point to obtain a flag $(p_2,\ell_1)$.
Next, change the line to obtain a flag $(p_2,\ell_2)$.
Finally, change the point once more, which gives us the flag $(p_3,\ell_2)$:
\begin{center}
\begin{picture}(250,45)
\includegraphics[scale=0.35]{newflags3.eps}
\put(-245,4){$p_1$}
\put(-225,-10){$\ell_1$}
\put(-185,4){$p_1$}
\put(-165,-10){$\ell_1$}
\put(-200,25){$p_2$}
\put(-120,4){$p_1$}
\put(-100,-10){$\ell_1$}
\put(-130,23){$p_2$}
\put(-80,35){$\ell_2$}
\put(-40,4){$p_1$}
\put(-20,-10){$\ell_1$}
\put(-55,23){$p_2$}
\put(5,35){$\ell_2$}
\put(-15,40){$p_3$}
\end{picture}
\end{center}
\medskip
\noindent
The figure on the far right is a typical object of $PLP$.
On the other hand, consider $LPL$. So, start with the same flag as
before, but now change the line, obtaining $(p_1,\ell'_2)$. Next
change the point, obtaining the flag $(p'_2,\ell'_2)$. Finally,
change the line once more, obtaining the flag $(p'_2,\ell'_3)$:
\hfill \break
\medskip
\begin{center}
\begin{picture}(240,40)
\includegraphics[scale=0.35]{newflags2_3.eps}
\put(-245,3){$p_1$}
\put(-225,-10){$\ell_1$}
\put(-185,3){$p_1$}
\put(-170,-10){$\ell_1$}
\put(-150,45){$\ell_2'$}
\put(-120,3){$p_1$}
\put(-100,-10){$\ell_1$}
\put(-85,45){$\ell_2'$}
\put(-85,20){$p_2'$}
\put(-45,4){$p_1$}
\put(-25,-10){$\ell_1$}
\put(-10,43){$\ell_2'$}
\put(-10,20){$p_2'$}
\put(-60,25){$\ell_3'$}
\end{picture}
\end{center}
\medskip
\noindent
The figure on the far right is a typical object of $LPL$.
Now, the axioms of projective plane geometry say that any two distinct
points lie on a unique line, and any two distinct lines intersect in a
unique point. So, any figure of the sort shown on the left below
determines a unique figure of the sort shown on the right, and vice
versa:
\hfill\break
\begin{center}
\begin{picture}(150,50)
\includegraphics[scale=0.50]{flagcompare.eps}
\end{picture}
\end{center}
\medskip
\noindent
Comparing this with the pictures above, we see that this bijection induces
an isomorphism of spans $PLP \cong LPL$. So, we have derived the
Yang--Baxter isomorphism from the axioms of projective plane geometry!
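These flag relations can also be checked by brute force for small $q$. The
following sketch (ours, not part of the formal development; it assumes $q$
prime) builds the points and lines of $\mathbb{F}_q\mathrm{P}^2$, encodes
`change the point' and `change the line' as 0/1 matrices $P$ and $L$ on
flags, and verifies $P^2 = (q-1)P + qI$, $L^2 = (q-1)L + qI$, and
$PLP = LPL$ at the level of counts:

```python
from itertools import product

def proj_plane(q):  # q prime
    # Points of F_q P^2: nonzero vectors up to scaling, normalized so that
    # the first nonzero coordinate is 1.
    def normalize(v):
        for a in v:
            if a:
                inv = pow(a, q - 2, q)  # a^(q-2) = a^(-1) mod q, q prime
                return tuple((inv * b) % q for b in v)
    pts = sorted({normalize(v) for v in product(range(q), repeat=3) if any(v)})
    # By duality, lines are indexed by the same normalized vectors; a point
    # v lies on a line w iff v . w = 0 (mod q).
    incident = lambda v, w: sum(a * b for a, b in zip(v, w)) % q == 0
    return pts, pts, incident

def flag_relations(q):
    pts, lines, incident = proj_plane(q)
    flags = [(v, w) for v in pts for w in lines if incident(v, w)]
    # P: change the point (same line); L: change the line (same point).
    P = [[1 if f[1] == g[1] and f[0] != g[0] else 0 for g in flags] for f in flags]
    L = [[1 if f[0] == g[0] and f[1] != g[1] else 0 for g in flags] for f in flags]
    return len(flags), P, L

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

q = 3
n, P, L = flag_relations(q)
I = [[int(i == j) for j in range(n)] for i in range(n)]
add = lambda c, A, d, B: [[c * a + d * b for a, b in zip(ra, rb)]
                          for ra, rb in zip(A, B)]

assert matmul(P, P) == add(q - 1, P, q, I)                  # P^2 = (q-1)P + qI
assert matmul(L, L) == add(q - 1, L, q, I)                  # L^2 = (q-1)L + qI
assert matmul(P, matmul(L, P)) == matmul(L, matmul(P, L))   # Yang-Baxter
```

Each matrix entry counts the configurations of points and lines described
above, so these equalities are exactly the decategorified shadows of the
span isomorphisms.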
\section{Degroupoidifying a Tame Span}
\label{processes}
In Section \ref{degroupoidification} we described a process for
turning a tame span of groupoids into a linear operator. In this
section we show this process is well-defined. The calculations
in the proof yield an explicit criterion for when a span is tame.
They also give an explicit formula for the operator coming from a
tame span. As part of our work, we also show that equivalent spans
give the same operator.
\subsection{Tame Spans Give Operators}
To prove that a tame span gives a well-defined operator, we begin with
three lemmas that are of some interest in themselves. We postpone to
Appendix \ref{appendix} some well-known facts about groupoids that do
not involve the concept of degroupoidification. This Appendix also
recalls the familiar concept of `equivalence' of groupoids, which
serves as a basis for this:
\begin{definition}
Two groupoids over a fixed groupoid $X$, say $v \maps \Psi \to X$
and $w \maps \Phi \to X$, are {\bf equivalent} as groupoids
over $X$ if there is an equivalence $F \maps \Psi \to \Phi$
such that this diagram
\[
\xymatrix{
\Psi \ar[rr]^{F} \ar[dr]_{v} && \Phi \ar[dl]^{w} \\
&X&
}
\]
commutes up to natural isomorphism.
\end{definition}
\begin{lemma}\label{vectorswelldefined}
Let $v \maps \Psi \to X$ and $w \maps \Phi \to X$ be equivalent
groupoids over $X$. If either one is tame, then both are tame,
and $\utilde{\Psi} = \utilde{\Phi}$.
\end{lemma}
\begin{proof}
This follows directly from Lemmas \ref{EQUIVGRPD} and
\ref{ESSENTIALPULLBACK} in Appendix \ref{appendix}.
\end{proof}
\begin{lemma}
\label{addvectors}
Given tame groupoids $\Phi$ and $\Psi$ over $X$,
\[\utilde{\Phi + \Psi} = \utilde{\Phi} + \utilde{\Psi}.\]
More generally, given any collection of tame groupoids $\Psi_i$ over $X$,
the coproduct $\sum_i \Psi_i$ is naturally a
groupoid over $X$, and if it is tame, then
\[\utilde{\sum_i \Psi_i} = \sum_i \utilde{\Psi}_i \]
where the sum on the right hand side converges pointwise as
a function on $\u{X}$.
\end{lemma}
\begin{proof}
The full inverse image of any object $x \in X$ in the coproduct
$\sum_i \Psi_i$ is the coproduct of its full inverse images in
each groupoid $\Psi_i$. Since groupoid cardinality is additive under
coproduct, the result follows.
\end{proof}
\begin{lemma}
\label{linearity}
Given a span of groupoids
\[
\xymatrix{
& S\ar[dl]_{q} \ar[dr]^{p} & \\
Y & & X
}
\]
we have
\begin{enumerate}
\item $S(\sum_i \Psi_i)\simeq \sum_i S\Psi_i$
\item $S(\Lambda\times \Psi)\simeq \Lambda\times S\Psi$
\end{enumerate}
whenever $v_i \maps \Psi_i \to X$ are groupoids over $X$,
$v \maps \Psi \to X$ is a groupoid over $X$, and
$\Lambda$ is a groupoid.
\end{lemma}
\begin{proof}
To prove 1, we need to describe a functor
\[
F\maps \sum_i S \Psi_i \to S(\sum_i \Psi_i)
\]
that will provide our equivalence. For this, we simply need to
describe for each $i$ a functor $F_i \maps S \Psi_i \to S(\sum_i
\Psi_i)$. An object in $S \Psi_i$ is a triple $(s,z,\alpha)$ where $s
\in S$, $z \in \Psi_i$ and $\alpha \maps p(s) \to v_i(z)$. $F_i$
simply sends this triple to the same triple regarded as an object of
$S(\sum_i \Psi_i)$. One can check that $F$ extends to a functor and
that this functor extends to an equivalence of groupoids over $S$.
To prove 2, we need to describe a functor $F \maps S(\Lambda\times
\Psi) \to \Lambda\times S\Psi$. This functor simply re-orders the
entries in the quadruples which define the objects in each groupoid.
One can check that this functor extends to an equivalence of groupoids
over $X$.
\end{proof}
\noindent
Finally we need the following lemma, which simplifies the computation
of groupoid cardinality:
\begin{lemma}\label{ALTCARD}
We have
\[ |X|
= \sum_{x \in X} \frac{1}{|\Mor(x,-)|} \]
where $\Mor(x,-) = \bigcup_{y\in X}\hom(x,y)$ is
the set of morphisms whose source is the object $x \in X$.
\end{lemma}
\begin{proof}
We check the following equalities:
\[ \sum_{[x] \in \underline{X}} \frac{1}{|\Aut(x)|}
= \sum_{[x] \in \underline{X}} \frac{|[x]|}{|\Mor(x,-)|}
= \sum_{x \in X} \frac{1}{|\Mor(x,-)|} .\]
Here $[x]$ is the set of objects isomorphic to $x$, and $|[x]|$ is the
ordinary cardinality of this set. To check the above equations, we first
choose an isomorphism $\gamma_y \maps x \to y$ for each object $y$ isomorphic
to $x$. This gives a bijection from $[x] \times
\Aut(x)$ to $\Mor(x,-)$ that takes $(y, f \maps x \to x)$ to
$\gamma_y f \maps x \to y$. Thus
\[ |[x]| \, |\Aut(x)| = |\Mor(x,-)| , \]
and the first equality follows. We also get a bijection between
$\Mor(y,-)$ and $\Mor(x,-)$ that takes $f \maps y \to z$ to
$f\gamma_y \maps x \to z$. Thus, $|\Mor(y,-)| = |\Mor(x,-)|$ whenever
$y$ is isomorphic to $x$. The second equation follows from this.
\end{proof}
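For a finite group $G$ acting on a finite set, every object of the action
groupoid has $|\Mor(x,-)| = |G|$, so the lemma recovers the familiar
formula: the groupoid cardinality is the cardinality of the set divided
by the order of the group. A minimal computational sketch (the example
and all names are ours):

```python
from fractions import Fraction

# Action groupoid of G = Z/2 acting on {0, 1, 2}: the nontrivial element
# swaps 0 and 1 and fixes 2.  Objects are the set elements; a morphism
# x -> y is a group element g with g.x = y, so |Mor(x,-)| = |G| for all x.
act = {0: {0: 0, 1: 1, 2: 2},   # g = 0: identity
       1: {0: 1, 1: 0, 2: 2}}   # g = 1: swap 0 and 1
G, objs = [0, 1], [0, 1, 2]

aut = lambda x: [g for g in G if act[g][x] == x]    # Aut(x)
orbit = lambda x: frozenset(act[g][x] for g in G)   # isomorphism class of x

# |X| as a sum of 1/|Aut(x)| over isomorphism classes (the definition) ...
by_classes = sum(Fraction(1, len(aut(min(c)))) for c in {orbit(x) for x in objs})
# ... and as a sum of 1/|Mor(x,-)| over all objects (the lemma).
by_objects = sum(Fraction(1, len(G)) for x in objs)

assert by_classes == by_objects == Fraction(3, 2)   # = |{0,1,2}| / |Z/2|
```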
Now we are ready to prove the main theorem of this section:
\begin{thm}\label{PROCESS1}
Given a tame span of groupoids
\[
\xymatrix{
& S \ar[dl]_{q} \ar[dr]^{p} & \\
Y && X
}
\]
there exists a unique linear operator
$\utilde{S} \maps \R^{\underline{X}} \to \R^{\underline{Y}}$
such that $\utilde{S}\utilde{\Psi} = \utilde{S\Psi}$ for
any vector $\utilde{\Psi}$ obtained from a tame groupoid $\Psi$
over $X$.
\end{thm}
\begin{proof}
It is easy to see that these conditions uniquely determine
$\utilde{S}$. Suppose $\psi \maps \underline{X} \to \R$ is
any nonnegative function. Then we can find a groupoid $\Psi$
over $X$ such that $\utilde{\Psi} = \psi$. So, $\utilde{S}$
is determined on nonnegative functions by the condition that
$\utilde{S}\utilde{\Psi} = \utilde{S\Psi}$. Since every
function is a difference of two nonnegative functions and
$\utilde{S}$ is linear, this uniquely determines $\utilde{S}$.
The real work is proving that $\utilde{S}$ is well-defined.
For this, assume we have a collection $\lbrace v_i \maps \Psi_i \to X
\rbrace_{i\in I}$ of groupoids over $X$ and real numbers $\lbrace
\alpha_i \in \R \rbrace_{i\in I}$ such that
\begin{equation}
\label{assumption1}
\sum_i \alpha_i \, \utilde{\Psi_i} = 0.
\end{equation}
We need to show that
\begin{equation}
\label{fact0}
\sum_i\alpha_i \, \utilde{S\Psi_i} = 0.
\end{equation}
We can simplify our task as follows. First, recall that a {\bf
skeletal} groupoid is one where isomorphic objects are equal. Every
groupoid is equivalent to a skeletal one. Thanks to Lemmas
\ref{vectorswelldefined} and \ref{skeletal}, we may therefore assume
without loss of generality that $S$, $X$, $Y$ and all the groupoids
$\Psi_i$ are skeletal.
Second, recall that a skeletal groupoid is a coproduct of groupoids
with one object. By Lemma \ref{addvectors}, degroupoidification
converts coproducts of groupoids over $X$ into sums of vectors. Also,
by Lemma \ref{linearity}, the operation of taking weak pullback
distributes over coproduct. As a result, we may assume without loss
of generality that each groupoid $\Psi_i$ has one object. Write
$\ast_i$ for the one object of $\Psi_i$.
With these simplifying assumptions, Eq.\ \ref{assumption1} says
that for any $x \in X$,
\begin{equation}
\label{assumption2}
0 = \displaystyle{\sum_{i \in I} \alpha_i \, \utilde{\Psi_i}([x])}
= \displaystyle{\sum_{i \in I} \alpha_i \, |v_i^{-1} (x)|}
= \displaystyle{\sum_{i \in J} \frac{\alpha_i}{|\Aut(\ast_i)|}}
\end{equation}
where $J$ is the collection of $i \in I$ such that $v_i(\ast_i)$ is
isomorphic to $x$. Since all groupoids in sight
are now skeletal, this condition implies $v_i(\ast_i) = x$.
Now, to prove Eq.\ \ref{fact0}, we need to show that
\[ \sum_{i \in I} \alpha_i \, \utilde{S\Psi_i}([y]) = 0\]
for any $y \in Y$. But since the set $I$ is partitioned into sets
$J$, one for each $x \in X$, it suffices to show
\begin{equation}
\label{fact1}
\sum_{i \in J} \alpha_i \, \utilde{S\Psi_i}([y]) = 0
\end{equation}
for any fixed $x \in X$ and $y \in Y$.
To compute $\utilde{S\Psi_i}$, we need
to take this weak pullback:
\[
\xymatrix{
& & S\Psi_i \ar[dl]_{\pi_S}\ar[dr]^{\pi_{\Psi_i}} & \\
& S \ar[dl]_{q} \ar[dr]^{p} & & \Psi_i \ar[dl]_{v_i} \\
Y & & X &
}
\]
We then have
\begin{equation}
\label{formula1}
\utilde{S \Psi_i}([y]) = |(q \pi_S)^{-1}(y)| ,
\end{equation}
so to prove Eq.\ \ref{fact1} it suffices to show
\begin{equation}
\label{fact2}
\sum_{i \in J} \alpha_i \, |(q \pi_S)^{-1}(y)| = 0 .
\end{equation}
Using the definition of `weak pullback', and taking advantage of the
fact that $\Psi_i$ has just one object, which maps down to $x$, we can
see that an object of $S \Psi_i$ consists of an object $s \in S$ with
$p(s) = x$ together with an isomorphism $\alpha \maps x \to x$. This
object of $S\Psi_i$ lies in $(q \pi_S)^{-1}(y)$ precisely when we also
have $q(s) = y$.
So, we may briefly say that an object of $(q\pi_S)^{-1}(y)$ is a pair
$(s, \alpha)$, where $s \in S$ has $p(s) = x$, $q(s) = y$, and
$\alpha$ is an element of $\Aut(x)$. Since $S$ is skeletal, there is
a morphism between two such pairs only if they have the same first
entry. A morphism from $(s,\alpha)$ to $(s,\alpha')$ then consists of
a morphism $f \in \Aut(s)$ and a morphism $g \in \Aut(\ast_i)$ such
that
\[
\xymatrix{
x \ar[r]^{\alpha}\ar[d]_{p(f)} & x\ar[d]^{v_i(g)} \\
x \ar[r]_{\alpha'} & x
}
\]
commutes.
A morphism out of $(s,\alpha)$ thus consists of an arbitrary pair $f
\in \Aut(s)$, $g \in \Aut(\ast_i)$, since these determine the target
$(s,\alpha')$. This fact and Lemma \ref{ALTCARD} allow us to compute:
\begin{equation}
\label{formula2}
\begin{array}{ccl}
\nonumber |(q\pi_S)^{-1}(y)| & =
\displaystyle{
\sum_{(s,\alpha)\in (q\pi_S)^{-1}(y)}
\frac{1}{|\Mor((s,\alpha),-)|}
}
\\
\\
& =
\displaystyle{
\sum_{s \in p^{-1}(x) \cap q^{-1}(y)}
\frac{|\Aut(x)|}{|\Aut(s)||\Aut(\ast_i)|} \,.
}
\end{array}
\end{equation}
So, to prove Eq.\ \ref{fact2}, it suffices to show
\begin{equation}
\label{fact3}
\sum_{i \in J} \;\, \sum_{s \in p^{-1}(x) \cap q^{-1}(y) }
\frac{\alpha_i|\Aut(x)|}{|\Aut(s)||\Aut(\ast_i)|} = 0 \,.
\end{equation}
But this easily follows from Eq.\
\ref{assumption2}. So, the operator $\utilde{S}$ is well defined.
\end{proof}
In Definition \ref{EQUIVALENCE} we recall the natural concept of
`equivalence' for spans of groupoids. The next theorem says that our
process of turning spans of groupoids into linear operators sends
equivalent spans to the same operator:
\begin{thm}\label{linops_equivspans1}
Given equivalent spans
\[
\xymatrix{
& S\ar[dl]_{q_S} \ar[dr]^{p_S} & & & T\ar[dl]_{q_T} \ar[dr]^{p_T} &\\
Y & & X & Y & & X
}
\]
the linear operators $\utilde{S}$ and $\utilde{T}$ are equal.
\end{thm}
\begin{proof}
Since the spans are equivalent, there is a functor providing an
equivalence of groupoids $F \maps S \to T$ along with a pair of
natural isomorphisms $\alpha \maps p_S \Rightarrow p_TF$ and
$\beta \maps q_S \Rightarrow q_TF$. Thus, given any groupoid $\Psi$
over $X$, the diagrams
\[
\xymatrix{
S \ar[dr] && \Psi \ar[dl] & & T \ar[dr] && \Psi \ar[dl] \\
& X & && & X &
}
\]
are equivalent pointwise. It follows from Lemma \ref{skeletal} that
the weak pullbacks $S\Psi$ and $T\Psi$ are equivalent groupoids with
the equivalence given by a functor $\tilde{F} \maps S\Psi \to T\Psi$.
From the universal property of weak pullbacks, along with $F$, we
obtain a natural transformation $\gamma \maps F\pi_S \Rightarrow
\pi_T\tilde{F}$. We then have a triangle
\[ \def\objectstyle{\scriptstyle}
\def\labelstyle{\scriptstyle}
\xy
(20,20)*+{S\Psi}="1";
(-20,20)*+{T\Psi}="2";
(10,0)*+{S}="3";
(-10,0)*+{T}="4";
(0,-20)*+{Y}="5";
{\ar_{\tilde{F}} "1";"2"};
{\ar^{\pi_S} "1";"3"};
{\ar_{\pi_T} "2";"4"};
{\ar^{q_S} "3";"5"};
{\ar_{q_T} "4";"5"};
{\ar_{F} "3";"4"};
{\ar@{=>}_<<{\scriptstyle \gamma} (2,11); (-2,9)};
{\ar@{=>}_<<{\scriptstyle \beta} (2,-9); (-2,-11)};
\endxy
\]
where the composite of $\gamma$ and $\beta$ is $(q_T \cdot
\gamma)^{-1}\beta \maps q_S \pi_S \Rightarrow q_T\pi_T\tilde{F}$.
Here $\cdot$ stands for whiskering: see Definition \ref{whiskering}.
We can now apply Lemma \ref{ESSENTIALPULLBACK}. Thus, for every $y \in
Y$, the full inverse images $(q_S\pi_S)^{-1}(y)$ and
$(q_T\pi_T)^{-1}(y)$ are equivalent. It follows from Lemma
\ref{EQUIVGRPD} that for each $y \in Y$, the groupoid cardinalities
$|(q_S\pi_S)^{-1}(y)|$ and $|(q_T\pi_T)^{-1}(y)|$ are equal. Thus,
the linear operators $\utilde{S}$ and $\utilde{T}$ are the same.
\end{proof}
\subsection{An Explicit Formula}
\label{explicit}
Our calculations in the proof of Theorem \ref{PROCESS1} yield an explicit
formula for the operator coming from a tame span, and a criterion for
when a span is tame:
\begin{thm}
\label{matrix}
A span of groupoids
\[
\xymatrix{
& S\ar[dl]_{q} \ar[dr]^{p} & \\
Y & & X
}
\]
is tame if and only if:
\begin{enumerate}
\item
For any object $y\in Y$, the groupoid $p^{-1}(x)\cap q^{-1}(y)$ is
nonempty for objects $x$ in only a finite number of isomorphism classes
of $X$.
\item For every $x \in X$ and $y \in Y$, the groupoid
$p^{-1}(x)\cap q^{-1}(y)$ is tame.
\end{enumerate}
Here $p^{-1}(x)\cap q^{-1}(y)$ is the subgroupoid of $S$ whose objects
lie in both $p^{-1}(x)$ and $q^{-1}(y)$, and whose morphisms lie in
both $p^{-1}(x)$ and $q^{-1}(y)$.
If $S$ is tame, then for any $\psi \in \R^{\underline X}$ we
have
\[
(\utilde{S} \psi)([y]) =
\sum_{[x] \in \u{X}} \;\,
\sum_{[s]\in\u{p^{-1}(x)}\bigcap \u{q^{-1}(y)}}
\frac{|\Aut(x)|}{|\Aut(s)|} \,\, \psi([x]) \,.
\]
\end{thm}
\begin{proof} First suppose the span $S$ is tame
and $v \maps \Psi \to X$ is a tame groupoid over $X$. Equations
\ref{formula1} and \ref{formula2} show that if $S,X,Y,$ and $\Psi$ are
skeletal, and $\Psi$ has just one object $\ast$, then
\[ \utilde{S \Psi}([y]) = \sum_{s \in p^{-1}(x) \cap q^{-1}(y)}
\frac{|\Aut(v(\ast))|}{|\Aut(s)| |\Aut(\ast)|} \]
where $x = v(\ast)$.
On the other hand,
\[ \utilde{\Psi}([x]) =
\begin{cases}\displaystyle{\frac{1}{|\Aut(\ast)|}}
& \textrm{if} \; v(\ast) = x \\
\\
0 & \textrm{otherwise.}
\end{cases}
\]
So in this case, writing $\utilde{\Psi}$ as $\psi$, we have
\[
(\utilde{S} \psi)([y]) =
\sum_{x \in X} \;\,
\sum_{s \in p^{-1}(x) \cap q^{-1}(y)}
\frac{|\Aut(x)|}{|\Aut(s)|} \,\, \psi([x]) \,.
\]
Since both sides are linear in $\psi$, and every nonnegative function
in $\R^{\u{X}}$ is a pointwise convergent nonnegative linear
combination of functions of the form $\psi = \utilde{\Psi}$ with
$\Psi$ as above, the above equation in fact holds for {\it all} $\psi
\in \R^{\u{X}}$.
Since all groupoids in sight are skeletal, we may equivalently
write the above equation as
\[
(\utilde{S} \psi)([y]) =
\sum_{[x] \in \u{X}} \;\,
\sum_{[s]\in\u{p^{-1}(x)}\bigcap \u{q^{-1}(y)}}
\frac{|\Aut(x)|}{|\Aut(s)|} \,\, \psi([x]) \,.
\]
The advantage of this formulation is that now both sides are
unchanged when we replace $X$ and $Y$ by equivalent groupoids,
and replace $S$ by an equivalent span. So, this equation holds
for all tame spans, as was to be shown.
If the span $S$ is tame, the sum above must converge for all
functions $\psi$ of the form $\psi = \utilde{\Psi}$. Any
nonnegative function $\psi \maps \u{X} \to \R$ is of this form.
For the sum above to converge for {\it all} nonnegative $\psi$,
this sum:
\[ \sum_{[s]\in\u{p^{-1}(x)}\bigcap \u{q^{-1}(y)}}
\frac{|\Aut(x)|} {|\Aut(s)|}\]
must have the following two properties:
\begin{enumerate}
\item
For any object $y \in Y$, it is nonzero only for objects $x$ in
a finite number of isomorphism classes of $X$.
\item For every $x \in X$ and $y \in Y$, it converges to a finite
number.
\end{enumerate}
These conditions are equivalent to conditions 1) and 2) in the
statement of the theorem. We leave it as an exercise to check that
these conditions are not only necessary but also sufficient for
$S$ to be tame.
\end{proof}
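In the special case where $X$, $Y$, and $S$ are finite sets regarded as
discrete groupoids, all automorphism groups are trivial and the formula
of Theorem \ref{matrix} reduces to an integer matrix whose $(y,x)$ entry
counts the fiber $p^{-1}(x) \cap q^{-1}(y)$. A minimal sketch of this
special case (the data and names are ours):

```python
# A span of finite sets viewed as discrete groupoids: q: S -> Y, p: S -> X.
# Each element of S is recorded as the pair (q(s), p(s)).
X = ['x0', 'x1']
Y = ['y0', 'y1', 'y2']
S = [('y0', 'x0'), ('y0', 'x0'), ('y1', 'x0'), ('y2', 'x1')]

def degroupoidify(S, X, Y):
    # All |Aut| are 1, so each matrix entry counts p^{-1}(x) meet q^{-1}(y).
    return {(y, x): sum(1 for (qs, ps) in S if qs == y and ps == x)
            for y in Y for x in X}

M = degroupoidify(S, X, Y)

def apply_span(M, psi):
    # (S~ psi)([y]) = sum_x M[y, x] psi([x])
    return {y: sum(M[(y, x)] * psi[x] for x in X) for y in Y}

assert apply_span(M, {'x0': 1, 'x1': 2}) == {'y0': 2, 'y1': 1, 'y2': 2}
```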
The previous theorem has many nice consequences. For example:
\begin{proposition}
\label{add_spans}
Suppose $S$ and $T$ are tame spans from a groupoid $X$ to a groupoid
$Y$. Then $\utilde{S+T} = \utilde{S} + \utilde{T}$.
\end{proposition}
\begin{proof}
This follows from the explicit formula given in Theorem \ref{matrix}.
\end{proof}
\section{Properties of Degroupoidification}
\label{properties}
In this section we prove all the remaining results stated in Section
\ref{degroupoidification}. We start with results about scalar
multiplication. Then we show that degroupoidification is a functor.
Finally, we prove the results about inner products and adjoints.
\subsection{Scalar Multiplication}
\label{scalar_mult}
To prove facts about scalar multiplication, we use the following lemma:
\begin{lemma}\label{essinverseproduct}
Given a groupoid $\Lambda$ and a functor between groupoids $p\maps Y\to X$,
the functor $c\times p \maps \Lambda\times Y\to 1\times X$ (where
$c\maps \Lambda\to 1$ is the unique morphism from $\Lambda$ to the terminal
groupoid $1$) satisfies:
\[ |(c\times p)^{-1}(1, x)|=|\Lambda||p^{-1}(x)| \]
for all $x\in X$.
\end{lemma}
\begin{proof}
Recall that by definition of full inverse image
\[(c\times p)^{-1}(1, x)=
\{(\lambda,y)\in \Lambda\times Y\mid \, \exists
\gamma \maps (c\times p)(\lambda,y)\to (1,x)\}.\]
We notice that the element $\lambda$ plays no real role in determining
the morphism $\gamma$, and $(\lambda,y)\in (c\times p)^{-1}(1,x)$ for
all $\lambda$ if and only if $y\in p^{-1}(x)$. Now consider the
groupoid cardinality of this groupoid. By definition we have
\[ |(c\times p)^{-1}(1, x)|
= \sum_{[(\lambda,y)]}\frac{1}{|\Aut (\lambda,y)|} \]
Since we are working over the product $\Lambda\times Y$, an
automorphism of $(\lambda,y)$ is an automorphism of $\lambda$ together
with an automorphism of $y$. It follows that
\[ |\Aut (\lambda,y)| = |\Aut(\lambda)||\Aut(y)|. \]
For a given $y\in p^{-1}(x)$ we can combine all the terms containing
$|\Aut(y)|$ to obtain the sum
\[|(c\times p)^{-1}(1, x)| =
\sum_{[y]\in p^{-1}(x)}
\left(\sum_{[\lambda]}\frac{1}{|\Aut(\lambda)|}\right)\frac{1}{|\Aut(y)|}\]
which then after factoring is equal to $|\Lambda||p^{-1}(x)|$, as desired.
\end{proof}
\begin{proposition}
\label{scalarmult1}
Given a groupoid $\Lambda$ and a groupoid over $X$,
say $v \maps \Psi \to X$, the groupoid
$\Lambda \times \Psi$ over $X$ satisfies
\[\utilde{\Lambda \times \Psi} = |\Lambda|\utilde{\Psi}.\]
\end{proposition}
\begin{proof} This follows from Lemma \ref{essinverseproduct}.
\end{proof}
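A quick numerical check of this scaling on toy skeletal data (the example
is ours): take $\Lambda$ with a single object whose automorphism group has
order $2$, so $|\Lambda| = \hf$, and compute both sides directly from the
definition of groupoid cardinality.

```python
from fractions import Fraction

# Toy skeletal data (our own example).  A groupoid is recorded by the
# orders of its automorphism groups, one per isomorphism class; its
# cardinality is the sum of the reciprocals.
card = lambda grpd: sum(Fraction(1, a) for a in grpd)

Lam = [2]                              # one object with Aut = Z/2, |Lam| = 1/2
X = ['a', 'b']
Psi = [(1, 'a'), (3, 'a'), (2, 'b')]   # (|Aut(z)|, v(z)) per class of Psi

# The vector of a groupoid over X: psi([x]) = |v^{-1}(x)|.
vector = lambda psi: {x: sum(Fraction(1, a) for a, vx in psi if vx == x)
                      for x in X}

# Classes of Lam x Psi are pairs of classes; Aut(lam, z) = Aut(lam) x Aut(z).
product_psi = [(al * az, vx) for al in Lam for az, vx in Psi]

lhs = vector(product_psi)                                 # (Lam x Psi)~
rhs = {x: card(Lam) * t for x, t in vector(Psi).items()}  # |Lam| Psi~
assert lhs == rhs == {'a': Fraction(2, 3), 'b': Fraction(1, 4)}
```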
\begin{proposition}
\label{scalarmult2}
Given a tame groupoid $\Lambda$ and a tame span
\[
\xymatrix{
& S\ar[dl] \ar[dr] & \\
Y & & X
}
\]
then $\Lambda \times S$ is tame and
\[\utilde{\Lambda \times S} = |\Lambda| \, \utilde{S}.\]
\end{proposition}
\begin{proof} This follows from Lemma \ref{essinverseproduct}.
\end{proof}
\subsection{Functoriality of Degroupoidification}
In this section we prove that our process of turning groupoids into
vector spaces and spans of groupoids into linear operators is indeed a
functor. We first show that the process preserves identities, then
show associativity of composition, from which many other things
follow, including the preservation of composition. The lemmas in this
section add up to a proof of the following theorem:
\begin{thm}\label{functor}
Degroupoidification is a functor from the category of groupoids
and equivalence classes of tame spans to the category of real
vector spaces and linear operators.
\end{thm}
\begin{proof}
As mentioned above, the proof follows from Lemmas \ref{identities} and
\ref{composition}.
\end{proof}
\begin{lemma}\label{identities}
Degroupoidification preserves identities, i.e., given a groupoid $X$,
$\utilde{1_X} = 1_{\R^{\underline{X}}}$, where $1_X$ is the identity span
from $X$ to $X$ and $1_{\R^{\underline{X}}}$ is the identity operator on
$\R^{\underline{X}}$.
\end{lemma}
\begin{proof}
This follows from the explicit formula given in Theorem \ref{matrix}.
\end{proof}
We now want to prove that composition of tame spans is associative.
Among the consequences of this proposition is the fact that
degroupoidification preserves composition. Given a triple
of composable spans:
\[
\xymatrix{
& T \ar[dl]_{q_T}\ar[dr]^{p_T} && S \ar[dl]_{q_S}\ar[dr]^{p_S} && R \ar[dl]_{q_R}\ar[dr]^{p_R} &\\
Z && Y && X && W
}
\]
we want to show that composing in the two possible orders --- $T(SR)$
or $(TS)R$ --- will provide equivalent spans of groupoids. In fact,
since groupoids, spans of groupoids, and isomorphism classes of maps
between spans of groupoids naturally form a bicategory, there exists a
natural isomorphism called the {\bf associator}. This tells us that
the spans $T(SR)$ and $(TS)R$ are in fact equivalent. But since we
have not constructed this bicategory, we will instead give an explicit
construction of the equivalence $T(SR) \stackrel{\sim}\rightarrow
(TS)R$.
\begin{proposition}\label{associativity}
Given a composable triple of tame spans, the operation of composition
of tame spans by weak pullback is associative up to equivalence of
spans of groupoids.
\end{proposition}
\begin{proof}
We consider the above triple of spans in order to construct the
aforementioned equivalence. The equivalence is simple to describe if
we first take a close look at the groupoids $T(SR)$ and $(TS)R$. The
composite $T(SR)$ has objects $(t,(s,r,\alpha),\beta)$ such that $r\in
R$, $s\in S$, $t\in T$, $\alpha\maps q_R(r)\to p_S(s)$, and
$\beta\maps q_S(s)\to p_T(t)$, and morphisms $f \maps
(t,(s,r,\alpha),\beta) \to (t',(s',r',\alpha'),\beta')$, which consist
of a map $g \maps (s,r,\alpha) \to (s',r',\alpha')$ in $SR$ and a map
$h \maps t \to t'$ such that the following diagram commutes:
\[
\xymatrix{
q_S\pi_S((s,r,\alpha)) \ar[r]^{\beta} \ar[d]_{q_S\pi_S(g)} & p_T(t) \ar[d]^{p_T(h)} \\
q_S\pi_S((s',r',\alpha')) \ar[r]_{\beta'} & p_T(t')
}
\]
where $\pi_S$ maps the composite $SR$ to $S$. Further, $g$ consists of a
pair of maps $k\maps r \to r'$ and $j \maps s\to s'$ such that the following
diagram commutes:
\[
\xymatrix{
q_R(r) \ar[r]^{\alpha} \ar[d]_{q_R(k)} & p_S(s) \ar[d]^{p_S(j)} \\
q_R(r') \ar[r]_{\alpha'} & p_S(s')
}
\]
The groupoid $(TS)R$ has objects $((t,s,\alpha),r,\beta)$ such that
$r\in R$, $s\in S$, $t\in T$, $\alpha\maps q_S(s)\to p_T(t)$,
and $\beta\maps q_R(r)\to p_S(s)$, and morphisms $f \maps
((t,s,\alpha),r,\beta) \to ((t',s',\alpha'),r',\beta')$, which consist
of a map $g \maps (t,s,\alpha) \to (t',s',\alpha')$ in $TS$ and a map
$h \maps r \to r'$ such that the following diagram commutes:
\[
\xymatrix{
q_R(r) \ar[d]_{q_R(h)}\ar[r]^{\beta} & p_S\pi_S((t,s,\alpha)) \ar[d]^{p_S\pi_S(g)} \\
q_R(r') \ar[r]_{\beta'} & p_S\pi_S((t',s',\alpha'))
}
\]
Further, $g$ consists of a pair of maps $k\maps s \to s'$ and $j \maps
t\to t'$ such that the following diagram commutes:
\[
\xymatrix{
q_S(s) \ar[r]^{\alpha} \ar[d]_{q_S(k)} & p_T(t) \ar[d]^{p_T(j)} \\
q_S(s') \ar[r]_{\alpha'} & p_T(t')
}
\]
We can now write down a functor $F\maps T(SR) \to (TS)R$:
\[ (t,(s,r,\alpha),\beta) \mapsto ((t,s,\beta),r,\alpha) \]
Again, a morphism $f \maps (t,(s,r,\alpha),\beta) \to
(t',(s',r',\alpha'),\beta')$ consists of maps $k\maps r \to r'$, $j
\maps s\to s'$, and $h \maps t \to t'$. We need to define $F(f) \maps
((t,s,\beta),r,\alpha) \to ((t',s',\beta'),r',\alpha')$. The first
component $g' \maps (t,s,\beta) \to (t',s',\beta')$ consists of the
maps $j\maps s\to s'$ and $h\maps t \to t'$, and the following diagram
commutes:
\[
\xymatrix{
q_S(s) \ar[r]^{\beta}\ar[d]_{q_S(j)} & p_T(t) \ar[d]^{p_T(h)} \\
q_S(s') \ar[r]_{\beta'} & p_T(t')
}
\]
The other component map of $F(f)$ is $k \maps r \to r'$ and we see
that the following diagram also commutes:
\[
\xymatrix{
q_R(r) \ar[d]_{q_R(k)}\ar[rr]^{\alpha} && p_S\pi_S((t,s,\beta)) \ar[d]^{p_S\pi_S(g')} \\
q_R(r') \ar[rr]_{\alpha'} && p_S\pi_S((t',s',\beta'))
}
\]
thus defining a morphism in $(TS)R$.
We now just need to check that $F$ preserves identities and
composition and that it is indeed an isomorphism. We will then have
shown that the apexes of the two spans are isomorphic. First, given
an identity morphism $1\maps (t,(s,r,\alpha),\beta) \to
(t,(s,r,\alpha),\beta)$, then $F(1)$ is the identity morphism on
$((t,s,\beta),r,\alpha)$. The components of the identity morphism are
the respective identity morphisms on the objects $r$,$s$, and $t$. By
the construction of $F$, it is clear that $F(1)$ will then be an
identity morphism.
Given a pair of composable maps $f \maps (t,(s,r,\alpha),\beta) \to
(t',(s',r',\alpha'),\beta')$ and $f' \maps (t',(s',r',\alpha'),\beta')
\to (t'',(s'',r'',\alpha''),\beta'')$ in $T(SR)$, the composite is a
map $f'f$ with components $g'g \maps (s,r,\alpha) \to
(s'',r'',\alpha'')$ and $h'h\maps t \to t''$. Further, $g'g$ has
component morphisms $k'k \maps r\to r''$ and $j'j \maps s\to s''$. It
is then easy to check that under the image of $F$ this composition is
preserved.
The construction of the inverse of $F$ is implicit in the construction
of $F$, and it is easy to verify that each composite $FF^{-1}$ and
$F^{-1}F$ is an identity functor. Further, the natural isomorphisms
required for an equivalence of spans can each be taken to be the
identity.
\end{proof}
It follows from the associativity of composition that degroupoidification
preserves composition:
\begin{lemma}
\label{composition}
Degroupoidification preserves composition. That is, given a
pair of composable tame spans:
\[
\xymatrix{
& T\ar[dr]\ar[dl] & & S\ar[dr]\ar[dl] & \\
Z & & Y & & X
}
\]
we have
\[ \utilde{T}\utilde{S} = \utilde{TS}. \]
\end{lemma}
\begin{proof}
Consider the composable pair of spans above along with a groupoid $\Psi$
over $X$:
\[
\xymatrix{
& T\ar[dr]\ar[dl] & & S\ar[dr]\ar[dl] & & \Psi\ar[dr]\ar[dl] &\\
Z & & Y & & X & & 1
}
\]
We can consider the groupoid over $X$ as a span by taking the right leg
to be the unique map to the terminal groupoid. We can compose this triple
of spans in two ways: either $T(S\Psi)$ or $(TS)\Psi$. By
Proposition \ref{associativity}, these spans are equivalent.
By Theorem \ref{linops_equivspans1}, degroupoidification produces the same linear operators.
Thus, composition is preserved. That is,
\[ \utilde{T}\utilde{S}\utilde{\Psi} = \utilde{TS}\utilde{\Psi}. \]
\end{proof}
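For finite sets viewed as discrete groupoids, the weak pullback defining a
composite span is just the ordinary fiber product, and the lemma reduces
to the fact that the matrix of a composite span is the product of the
matrices. A minimal sketch with invented data:

```python
# Discrete spans of finite sets: S from X to Y and T from Y to Z, with
# each element recorded as (left leg, right leg), e.g. s = (q_S(s), p_S(s)).
X, Y, Z = ['x0', 'x1'], ['y0', 'y1'], ['z0']
S = [('y0', 'x0'), ('y1', 'x0'), ('y1', 'x1')]
T = [('z0', 'y0'), ('z0', 'y1'), ('z0', 'y1')]

# For discrete groupoids the weak pullback is the ordinary fiber product:
# pairs (t, s) with p_T(t) = q_S(s).
TS = [(qt, ps) for (qt, pt) in T for (qs, ps) in S if pt == qs]

def matrix(span, rows, cols):
    # Entry (a, b) counts the elements of the span lying over (a, b).
    return [[sum(1 for (l, r) in span if l == a and r == b) for b in cols]
            for a in rows]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

MS, MT, MTS = matrix(S, Y, X), matrix(T, Z, Y), matrix(TS, Z, X)
assert MTS == matmul(MT, MS)   # degroupoidification preserves composition
```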
\subsection{Inner Products and Adjoints}
\label{innerprodandadjoint}
Now we prove our results about the inner product of groupoids over
a fixed groupoid, and the adjoint of a span:
\begin{thm}\label{innerprod_theorem}
Given a groupoid $X$, there is a unique inner product $\ip{\cdot}{\cdot}$
on the vector space $L^2(X)$ such that
\[ \ip{\utilde{\Phi}}{\utilde{\Psi}} = |\ip{\Phi}{\Psi}| \]
whenever $\Phi$ and $\Psi$ are square-integrable groupoids over $X$.
With this inner product $L^2(X)$ is a real Hilbert space.
\end{thm}
\begin{proof}
Uniqueness of the inner product follows from the formula, since every vector
in $L^2(X)$ is a finite linear combination of vectors
$\utilde{\Psi}$ for square-integrable groupoids $\Psi$ over $X$.
To show the inner product exists, suppose that $\Psi_i, \Phi_i$ are
square-integrable groupoids over $X$ and $\alpha_i, \beta_i \in \R$
for $1 \le i \le n$. Then we need to check that
\[ \sum_{i}\alpha_i\utilde{\Psi}_i
= \sum_{j}\beta_j \utilde{\Phi}_j = 0 \]
implies
\[ \sum_{i,j} \alpha_i\beta_j \,
|\langle \Psi_i, \Phi_j \rangle | = 0.\]
The proof here closely resembles the proof of existence in
Theorem \ref{PROCESS1}. We leave to the reader the task of
checking that $L^2(X)$ is complete in the norm corresponding
to this inner product.
\end{proof}
\begin{proposition}\label{innerprod_adjoint}
Given a span
\[
\xymatrix{
& S \ar[dl]_{q} \ar[dr]^{p} & \\
Y & & X
}
\]
and a pair $v \maps \Psi \to X$, $w \maps \Phi \to Y$ of
groupoids over $X$ and $Y$, respectively, there is an equivalence of groupoids
\[ \langle \Phi,S\Psi \rangle \simeq \langle S^\dagger\Phi,\Psi \rangle. \]
\end{proposition}
\begin{proof}
We can consider the groupoids over $X$ and $Y$ as spans with one leg
over the terminal groupoid $1$. Then the result follows from the
equivalence given by associativity in Proposition \ref{associativity} and Theorem
\ref{linops_equivspans1}.
Explicitly, $ \langle \Phi,S\Psi \rangle$ is the composite of spans $S\Psi$
and $\Phi$, while $\langle S^\dagger\Phi,\Psi \rangle$ is the composite
of spans $S^\dagger\Phi$ and $\Psi$.
\end{proof}
\begin{proposition}\label{adjoint_comp}
Given spans
\[
\xymatrix{
& T \ar[dl]_{q_T} \ar[dr]^{p_T} & & & S \ar[dl]_{q_S} \ar[dr]^{p_S} &\\
Z & & Y & Y & & X
}
\]
there is an equivalence of spans
\[(ST)^\dagger \simeq T^\dagger S^\dagger. \]
\end{proposition}
\begin{proof}
This is clear from the definition of composition.
\end{proof}
\begin{proposition}\label{adjoint_add}
Given spans
\[
\xymatrix{
& S \ar[dl]_{q_S} \ar[dr]^{p_S} & & & T \ar[dl]_{q_T} \ar[dr]^{p_T} & \\
Y & & X & Y & & X
}
\]
there is an equivalence of spans
\[(S+T)^\dagger \simeq S^\dagger + T^\dagger. \]
\end{proposition}
\begin{proof}
This is clear since the addition of spans is given by
coproduct of groupoids. This construction is symmetric
with respect to swapping the legs of the span.
\end{proof}
\begin{proposition}
\label{innerprodandadjointprops}
Given a groupoid $\Lambda$ and square-integrable
groupoids $\Phi$, $\Psi$, and $\Psi'$ over $X$, we have the
following equivalences of groupoids:
\begin{enumerate}
\item
\[\ip{\Phi}{\Psi} \simeq \ip{\Psi}{\Phi}.\]
\item
\[\ip{\Phi}{\Psi + \Psi'} \simeq \ip{\Phi}{\Psi} + \ip{\Phi}{\Psi'}.\]
\item
\[\ip{\Phi}{\Lambda \times \Psi} \simeq \Lambda \times \ip{\Phi}{\Psi}.\]
\end{enumerate}
\end{proposition}
\begin{proof}
Each part will follow easily from the definition of weak
pullback. First we label the maps for the groupoids over $X$ as
$v\maps\Phi\to X$, $w\maps\Psi\to X$, and $w' \maps\Psi' \to X$.
\begin{enumerate}
\item $\ip{\Phi}{\Psi} \simeq \ip{\Psi}{\Phi}.$\\
By definition of
weak pullback, an object of $\ip{\Phi}{\Psi}$ is a triple
$(a,b,\alpha)$ such that $a\in\Phi, b\in\Psi,$ and $\alpha\maps
v(a)\to w(b)$. Similarly, an object of $\ip{\Psi}{\Phi}$ is a triple
$(b,a,\beta)$ such that $b\in\Psi, a\in\Phi,$ and $\beta\maps w(b) \to
v(a)$. Since $\alpha$ is invertible, there is an evident equivalence
of groupoids.
\item $\ip{\Phi}{\Psi + \Psi'} \simeq \ip{\Phi}{\Psi} + \ip{\Phi}{\Psi'}.$\\
Recall that in the category of groupoids, the coproduct is just the
disjoint union over objects and morphisms. With this it is easy to
see that the definition of weak pullback will `split' over union.
\item $\ip{\Phi}{\Lambda \times \Psi} \simeq \Lambda \times \ip{\Phi}{\Psi}.$\\
This follows from the associativity (up to isomorphism) of the
cartesian product.
\end{enumerate}
\end{proof}
\subsubsection*{Acknowledgements}
We thank James Dolan, Todd Trimble, and the denizens of the $n$-Category
Caf\'e for many helpful conversations. This work was supported by the
National Science Foundation under Grant No.\ 0653646.
\appendix
\section{Review of Groupoids}\label{appendix}
\begin{definition}
A {\bf groupoid} is a category in which all morphisms are invertible.
\end{definition}
\begin{notation}
We denote the set of objects in a groupoid $X$ by $\Ob(X)$ and the
set of morphisms by $\Mor(X)$.
\end{notation}
\begin{definition}
A {\bf functor} $F \maps X \to Y$ between categories is a pair of
functions $F \maps \Ob(X) \to \Ob(Y)$ and $F \maps \Mor(X) \to
\Mor(Y)$ such that $F(1_x) = 1_{F(x)}$ for $x\in \Ob(X)$ and $F(gh) =
F(g)F(h)$ for composable $g,h\in \Mor(X)$.
\end{definition}
\begin{definition}
A {\bf natural transformation} $\alpha \maps F \to G$ between
functors $F,G \maps X \to Y$ consists of a morphism $\alpha_x \maps F(x)
\to G(x)$ in $\Mor(Y)$ for each $x\in \Ob(X)$ such that for each
morphism $h\maps x \to x'$ in $\Mor(X)$ the following naturality
square commutes:
\[
\xymatrix{
F(x) \ar[r]^{\alpha_x} \ar[d]_{F(h)} & G(x) \ar[d]^{G(h)} \\
F(x') \ar[r]_{\alpha_{x'}} & G(x')
}
\]
\end{definition}
\begin{definition}
A {\bf natural isomorphism} is a natural transformation $\alpha \maps
F \to G$ between functors $F,G \maps X \to Y$ such that for each $x
\in X$, the morphism $\alpha_x$ is invertible.
\end{definition}
\noindent
Note that a natural transformation between functors between
{\it groupoids} is necessarily a natural isomorphism.
In what follows, and throughout the paper, we write $x \in X$ as
shorthand for $x \in \Ob(X)$. Also, in several places throughout this
paper we have used the notation $\alpha\cdot F$ or $F\cdot\alpha$ to
denote operations combining a functor $F$ and a natural transformation
$\alpha$. These operations are called `whiskering':
\begin{definition}
\label{whiskering}
Given groupoids $X$, $Y$ and $Z$, functors $F\maps X\to Y$, $G\maps
Y\to Z$ and $H\maps Y\to Z$, and a natural transformation $\alpha\maps
G\Rightarrow H$, there is a natural transformation $\alpha\cdot F\maps
GF \Rightarrow HF$ called the {\bf right whiskering} of $\alpha$ by
$F$. This assigns to any object $x \in X$ the morphism
$\alpha_{F(x)}\maps G(F(x)) \to H(F(x))$ in $Z$, which we denote as
$(\alpha\cdot F)_{x}$. Similarly, given a groupoid $W$ and a functor
$J \maps Z \to W$, there is a natural transformation $J\cdot \alpha\maps
JG \Rightarrow JH$ called the {\bf left whiskering} of $\alpha$ by $J$. This
assigns to any object $y \in Y$ the morphism $J(\alpha_y)\maps JG(y)
\to JH(y)$ in $W$, which we denote as $(J\cdot\alpha)_{y}$.
\end{definition}
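\noindent
For example (a quick illustration, not needed later), let $G$ and $H$
be groups regarded as one-object groupoids, so that functors $G \to H$
are just group homomorphisms. A natural transformation $\alpha \maps
\phi \Rightarrow \psi$ between homomorphisms $\phi, \psi \maps G \to
H$ amounts to a single element $a \in H$ with
\[ \psi(g) = a \, \phi(g) \, a^{-1} \quad \mbox{for all } g \in G. \]
Right whiskering by a homomorphism $\chi \maps G' \to G$ leaves this
element unchanged, now regarded as a natural transformation $\phi\chi
\Rightarrow \psi\chi$, while left whiskering by $\theta \maps H \to K$
replaces it by $\theta(a)$.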
\begin{definition}
\label{equivalence_of_groupoids}
A functor $F \maps X \to Y$ between groupoids is called an
{\bf equivalence} if there
exists a functor $G \maps Y \to X$, called a {\bf weak inverse} of $F$,
and natural isomorphisms $\eta \maps GF \to 1_X$ and $\rho \maps FG \to 1_Y$.
In this case we say $X$ and $Y$ are {\bf equivalent}.
\end{definition}
\begin{definition}
A functor $F \maps X \to Y$ between groupoids is called {\bf faithful}
if for each pair of objects $x,y \in X$ the function $F \maps
\hom(x,y) \to \hom(F(x),F(y))$ is injective.
\end{definition}
\begin{definition}
A functor $F \maps X \to Y$ between groupoids is called {\bf full} if
for each pair of objects $x,y \in X$, the function $F \maps
\hom(x,y) \to \hom(F(x),F(y))$ is surjective.
\end{definition}
\begin{definition}
A functor $F \maps X \to Y$ between groupoids is called {\bf
essentially surjective} if for each object $y \in Y$, there
exists an object $x \in X$ and a morphism $f \maps F(x) \to y$ in
$Y$.
\end{definition}
\noindent
A functor has all three of the above properties if and only if it
is an equivalence. It is often convenient to prove two groupoids are
equivalent by exhibiting a functor which is full, faithful and
essentially surjective.
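For example, every groupoid $X$ is equivalent to any {\it skeleton} of
$X$: choose one object in each isomorphism class, and let $X_0$ be the
full subgroupoid of $X$ on these objects. The inclusion $X_0
\hookrightarrow X$ is full and faithful because $X_0$ is a full
subgroupoid, and essentially surjective because every object of $X$ is
isomorphic to one of the chosen objects.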
\begin{definition}\label{MAP}
A {\bf map} from the span of groupoids
\[
\xymatrix{
& S\ar[dl]_{q} \ar[dr]^{p} \\
Y & & X
}
\]
to the span of groupoids
\[
\xymatrix{
& S'\ar[dl]_{q'} \ar[dr]^{p'} \\
Y & & X
}
\]
is a functor $F \maps S \to S'$ together with natural transformations
$\alpha \maps p \To p' F$, $\beta \maps q \To q' F$.
\end{definition}
\begin{definition}\label{EQUIVALENCE}
An {\bf equivalence} of spans of groupoids
\[
\xymatrix{
& S\ar[dl]_{q} \ar[dr]^{p} & & & S'\ar[dl]_{q'} \ar[dr]^{p'} &\\
Y & & X & Y & & X
}
\]
is a map of spans $(F,\alpha,\beta)$ from $S$ to $S'$ such that
$F \maps S \to S'$ is an equivalence of groupoids, together with a
map of spans $(G,\alpha',\beta')$ from $S'$ to $S$ and
a natural isomorphism $\gamma \maps GF \Rightarrow 1$ such
that the following equations hold:
\[ 1_p =
(p \cdot \gamma)\circ (\alpha'\cdot F)\circ \alpha \]
and
\[ 1_q =
(q \cdot \gamma)\circ (\beta'\cdot F)\circ \beta .\]
\end{definition}
\begin{lemma}\label{EQUIVGRPD}
Given equivalent groupoids $X$ and $Y$, $|X| = |Y|$.
\end{lemma}
\begin{proof}
From a functor $F \maps X \to Y$ between groupoids, we can
obtain a function $\underline{F} \maps \underline{X} \to
\underline{Y}$. If $F$ is an equivalence, $\underline{F}$
is a bijection. Since these are the indexing sets for the sum
in the definition of groupoid cardinality, we just need to check
that for a pair of elements $[x] \in \underline{X}$ and
$[y] \in \underline{Y}$ such that $\underline{F}([x]) = [y]$, we
have $|\Aut(x)| = |\Aut(y)|$. This follows from $F$ being full
and faithful, together with the fact that the cardinality of the
automorphism group is an invariant of the isomorphism class of an
object in a groupoid. Thus,
\[ |X| =
\sum_{x\in \underline{X}}\frac{1}{|\Aut(x)|} =
\sum_{y \in \underline{Y}}\frac{1}{|\Aut(y)|} =
|Y|. \]
\end{proof}
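\noindent
As an illustration, consider the groupoid $E$ of finite sets and
bijections. By Lemma \ref{EQUIVGRPD} we may compute $|E|$ using any
equivalent skeletal groupoid, with one $n$-element set for each $n \in
\N$; since an $n$-element set has $n!$ bijections to itself,
\[ |E| = \sum_{n \ge 0} \frac{1}{n!} = e , \]
a computation going back to Baez and Dolan \cite{BaezDolan:2001}.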
\begin{lemma}\label{ESSENTIALPULLBACK}
Given a diagram of groupoids
\[ \def\objectstyle{\scriptstyle}
\def\labelstyle{\scriptstyle}
\xy
(-20,10)*+{S}="1";
(0,-10)*+{B}="2";
(20,10)*+{T}="3";
{\ar_{p} "1";"2"};
{\ar^{q} "3";"2"};
{\ar^{F} "1";"3"};
{\ar@{=>}_<<{\scriptstyle \alpha} (-2,0); (2,4)};
\endxy
\]
where $F$ is an equivalence of groupoids and $b \in B$ is any object,
the restriction of $F$ to the full inverse image $p^{-1}(b)$,
\[ F|_{p^{-1}(b)} \maps p^{-1}(b) \to q^{-1}(b) , \]
is an equivalence of groupoids.
\end{lemma}
\begin{proof}
It is sufficient to check that $F|_{p^{-1}(b)}$ is a full, faithful,
and essentially surjective functor from $p^{-1}(b)$ to $q^{-1}(b)$.
First we check that the image of $F|_{p^{-1}(b)}$ indeed lies in
$q^{-1}(b)$. Given $x \in p^{-1}(b)$, there is
an isomorphism $\alpha_x \maps p(x) \to qF(x)$ in $B$. Since $p(x) \in
[b]$, it follows that $qF(x) \in [b]$, and hence $F(x) \in
q^{-1}(b)$. Next we check that $F|_{p^{-1}(b)}$ is full and
faithful. This follows from the fact that full inverse images are
full subgroupoids. It is clear that a full and faithful functor
restricted to a full subgroupoid will again be full and faithful. We
are left to check only that $F|_{p^{-1}(b)}$ is essentially
surjective. Let $y \in q^{-1}(b)$. Then, since $F$ is
essentially surjective, there exists $x \in S$ such that $F(x)
\in [y]$, and thus $qF(x) \in [q(y)] = [b]$. Since there is an
isomorphism $\alpha_x \maps p(x) \to qF(x)$, it follows that $p(x) \in
[b]$, that is, $x \in p^{-1}(b)$. As $F(x) \cong y$ in $q^{-1}(b)$,
$F|_{p^{-1}(b)}$ is essentially surjective. We have shown that
$F|_{p^{-1}(b)}$ is full, faithful, and essentially surjective, and,
thus, is an equivalence of groupoids.
\end{proof}
The data needed to construct a weak pullback of groupoids is a
`cospan':
\begin{definition}
Given groupoids $X$ and $Y$, a {\bf cospan} from $X$ to $Y$ is a diagram
\[\xymatrix{
Y\ar[dr]_g & & X\ar[dl]^f\\
& Z & \\
}\]
where $Z$ is a groupoid and $f\maps X\to Z$ and $g \maps Y\to Z$ are functors.
\end{definition}
\noindent
We next prove a lemma stating that the weak pullbacks of equivalent
cospans are equivalent. Weak pullbacks, also called {\it iso-comma
objects}, are part of a much larger family of limits called {\it
flexible limits}. To read more about flexible limits, see the work of
Street \cite{Street:1980} and of Bird {\em et al.}\ \cite{Bird:1989}. A vastly more
general theorem than the one we intend to prove holds in this class of
limits. Namely: for any pair of parallel functors $F,G$ from an
indexing category to $\Cat$ with a pseudonatural equivalence
$\eta\maps F \to G$, the pseudo-limits of $F$ and $G$ are equivalent.
But to make the paper self-contained, we strip this theorem down and
give a hands-on proof of the case we need.
To show that equivalent cospans of groupoids have equivalent weak
pullbacks, we need to say what it means for a pair of cospans to be
equivalent. As stated above, this means that they are given by a pair
of parallel functors $F,G$ to $\Cat$ from the category consisting of a
three-element set of objects $\lbrace 1,2,3 \rbrace$ and two morphisms
$a\maps 1 \to 3$ and $b \maps 2 \to 3$, together with a
pseudonatural equivalence $\eta \maps F \to G$. In simpler terms,
this means that we have equivalences $\eta_i \maps F(i) \to G(i)$ for
$i = 1,2,3$, and squares commuting up to natural isomorphism:
\[
\def\objectstyle{\scriptstyle}
\def\labelstyle{\scriptstyle}
\xy
(-30,10)*+{F(1)}="1";
(-30,-10)*+{F(3)}="2";
(-10,10)*+{G(1)}="3";
(-10,-10)*+{G(3)}="4";
(10,10)*+{F(2)}="5";
(10,-10)*+{F(3)}="6";
(30,10)*+{G(2)}="7";
(30,-10)*+{G(3)}="8";
{\ar_{F(a)} "1";"2"};
{\ar^{\eta_1} "1";"3"};
{\ar^{G(a)} "3";"4"};
{\ar_{\eta_3} "2";"4"};
{\ar_{F(b)} "5";"6"};
{\ar^{\eta_2} "5";"7"};
{\ar^{G(b)} "7";"8"};
{\ar_{\eta_3} "6";"8"};
{\ar@{=>}^{v} (-17,3)*{}; (-23,-3)*{}};
{\ar@{=>}^{w} (23,3)*{}; (17,-3)*{}};
\endxy
\]
For ease of notation we will consider the equivalent cospans:
\[
\xymatrix{
Y\ar[dr]_{g} && X\ar[dl]^{f} & \hat{Y}\ar[dr]_{\hat{g}} && \hat{X}\ar[dl]^{\hat{f}} \\
&Z& & &\hat{Z}&
}
\]
with equivalences $\hat{x}\maps X \to \hat{X}$, $\hat{y}\maps Y \to
\hat{Y}$, and $\hat{z}\maps Z \to \hat{Z}$ and natural isomorphisms
$v\maps \hat{f}\hat{x} \Rightarrow \hat{z}f$ and $w \maps \hat{g}\hat{y}
\Rightarrow \hat{z}g$.
\begin{lemma}\label{skeletal}
Given equivalent cospans of groupoids as described above, the weak
pullback of the cospan
\[
\xymatrix{
Y\ar[dr]_{g} & & X\ar[dl]^{f} \\
& Z &
}
\]
is equivalent to the weak pullback of the cospan
\[
\xymatrix{
\hat{Y}\ar[dr]_{\hat{g}} & & \hat{X}\ar[dl]^{\hat{f}} \\
& \hat{Z} &
}
\]
\end{lemma}
\begin{proof}
We construct a functor $F$ between the weak
pullbacks $XY$ and $\hat{X}\hat{Y}$ and show that this functor is an
equivalence of groupoids, i.e., that it is full, faithful and
essentially surjective. We recall that an object in the weak pullback
$XY$ is a triple $(r,s,\alpha)$ with $r\in X$, $s\in Y$ and
$\alpha \maps f(r) \to g(s)$. A morphism
$\rho \maps (r,s,\alpha) \to (r',s',\alpha')$ in $XY$ is given by a
pair of morphisms $j \maps r\to r'$ in $X$ and $k\maps s\to s'$ in $Y$
such that $g(k)\alpha = \alpha' f(j)$. We define
\[F \maps XY \to \hat{X}\hat{Y}\]
on objects by
\[(r,s,\alpha) \mapsto (\hat{x}(r),\hat{y}(s),w_s^{-1}\hat{z}(\alpha)v_r)\]
and on a morphism $\rho$ by sending $j$ to $\hat{x}(j)$ and $k$ to
$\hat{y}(k)$. To check that this functor is well-defined we consider
the following diagram:
\[
\xymatrix{
\hat{f}\hat{x}(r)\ar[r]^{v_r}\ar[d]_{\hat{f}\hat{x}(j)} & \hat{z}f(r)\ar[r]^{\hat{z}(\alpha)}\ar[d]_{\hat{z}f(j)} & \hat{z}g(s)\ar[r]^{w_s^{-1}}\ar[d]^{\hat{z}g(k)} & \hat{g}\hat{y}(s)\ar[d]^{\hat{g}\hat{y}(k)}\\
\hat{f}\hat{x}(r')\ar[r]_{v_{r'}} & \hat{z}f(r')\ar[r]_{\hat{z}(\alpha')} & \hat{z}g(s')\ar[r]_{w_{s'}^{-1}} & \hat{g}\hat{y}(s')
}
\]
The inner square commutes by the assumption that $\rho$ is a morphism
in $XY$. The outer squares commute by the naturality of $v$ and $w$.
Showing that $F$ respects identities and composition is
straightforward.
We first check that $F$ is faithful. Let $\rho,\sigma \maps
(r,s,\alpha) \to (r',s',\alpha')$ be morphisms in $XY$ such that
$F(\rho) = F(\sigma)$. Assume $\rho$ consists of
morphisms $j\maps r \to r'$, $k\maps s \to s'$ and $\sigma$ consists
of morphisms $l\maps r \to r'$ and $m \maps s \to s'$. It follows
that $\hat{x}(j) = \hat{x}(l)$ and $\hat{y}(k) = \hat{y}(m)$. Since
$\hat{x}$ and $\hat{y}$ are faithful we have that $j=l$ and $k=m$.
Thus, we have shown that $\rho = \sigma$ and $F$ is
faithful.
To show that $F$ is full, we assume $(r,s,\alpha)$ and
$(r',s',\alpha')$ are objects in $XY$ and
\[ \rho \maps (\hat{x}(r),\hat{y}(s),w_s^{-1}\hat{z}(\alpha)v_r) \to
(\hat{x}(r'),\hat{y}(s'),w_{s'}^{-1}\hat{z}(\alpha')v_{r'}) \]
is a morphism in $\hat{X}\hat{Y}$ consisting of morphisms $j \maps
\hat{x}(r) \to \hat{x}(r')$ and $k \maps \hat{y}(s) \to
\hat{y}(s')$. Since $\hat{x}$ and $\hat{y}$ are full, there exist
morphisms $\tilde{j} \maps r \to r'$ and $\tilde{k}\maps s \to s'$
such that $\hat{x}(\tilde{j}) = j$ and $\hat{y}(\tilde{k}) = k$. We
consider the following diagram:
\[
\xymatrix{
\hat{z}(f(r))\ar[r]^{v_r^{-1}} \ar[d]_{\hat{z}(f(\tilde{j}))} & \hat{f}\hat{x}(r)\ar[rr]^{w_s^{-1}\hat{z}(\alpha)v_r}\ar[d]_{\hat{f}\hat{x}(\tilde{j})} & & \hat{g}\hat{y}(s)\ar[r]^{w_s}\ar[d]^{\hat{g}\hat{y}(\tilde{k})} & \hat{z}(g(s)) \ar[d]^{\hat{z}(g(\tilde{k}))} \\
\hat{z}(f(r'))\ar[r]_{v_{r'}^{-1}} & \hat{f}\hat{x}(r') \ar[rr]_{w_{s'}^{-1}\hat{z}(\alpha')v_{r'}} & & \hat{g}\hat{y}(s')\ar[r]_{w_{s'}} & \hat{z}(g(s'))
}
\]
The middle square commutes by the assumption that $\rho$ is a morphism
in $\hat{X}\hat{Y}$, and the outer squares commute by naturality of
$v$ and $w$. The top and bottom rows of this diagram compose to
$\hat{z}(\alpha)$ and $\hat{z}(\alpha')$, respectively, so the outer
rectangle gives
\[ \hat{z}(g(\tilde{k}))\,\hat{z}(\alpha) =
\hat{z}(\alpha')\,\hat{z}(f(\tilde{j})) .\]
Now, since $\hat{z}$ is faithful, we have that
\[
\xymatrix{
f(r)\ar[r]^{\alpha} \ar[d]_{f(\tilde{j})} & g(s) \ar[d]^{g(\tilde{k})} \\
f(r') \ar[r]_{\alpha'} & g(s')
}
\]
commutes, so $(\tilde{j},\tilde{k})$ is a morphism $(r,s,\alpha) \to
(r',s',\alpha')$ in $XY$ with $F(\tilde{j},\tilde{k}) = \rho$. Hence,
$F$ is full.
To show $F$ is essentially surjective we let $(r,s,\alpha)$
be an object in $\hat{X}\hat{Y}$. Since $\hat{x}$ and $\hat{y}$ are
essentially surjective, there exist $\tilde{r} \in X$ and
$\tilde{s}\in Y$ with isomorphisms $\beta \maps
\hat{x}(\tilde{r}) \to r$ and $\gamma \maps \hat{y}(\tilde{s}) \to s$.
We thus have the isomorphism:
\[\hat{z}(f(\tilde{r}))\stackrel{v_{\tilde{r}}^{-1}}\longrightarrow \hat{f}(\hat{x}(\tilde{r}))\stackrel{\hat{f}(\beta)}\longrightarrow \hat{f}(r) \stackrel{\alpha}\longrightarrow \hat{g}(s) \stackrel{\hat{g}(\gamma^{-1})}\longrightarrow \hat{g}(\hat{y}(\tilde{s}))\stackrel{w_{\tilde{s}}}
\longrightarrow \hat{z}(g(\tilde{s}))\]
Since $\hat{z}$ is full, there exists an isomorphism $\mu \maps
f(\tilde{r}) \to g(\tilde{s})$ such that $\hat{z}(\mu) =
w_{\tilde{s}}\hat{g}(\gamma^{-1})\alpha\hat{f}(\beta)v_{\tilde{r}}^{-1}$. We have
constructed an object $(\tilde{r},\tilde{s},\mu)$ in $XY$, and we need
to find an isomorphism from $F(\tilde{r},\tilde{s},\mu) =
(\hat{x}(\tilde{r}), \hat{y}(\tilde{s}),w_{\tilde{s}}^{-1}\hat{z}(\mu)v_{\tilde{r}})$ to
$(r,s,\alpha)$. This morphism consists of $\beta \maps
\hat{x}(\tilde{r}) \to r$ and $\gamma \maps \hat{y}(\tilde{s}) \to s$.
That the pair $(\beta,\gamma)$ is a morphism in $\hat{X}\hat{Y}$
follows from the calculation
\begin{eqnarray*}
\hat{g}(\gamma)w_{\tilde{s}}^{-1}\hat{z}(\mu)v_{\tilde{r}} &=& \hat{g}(\gamma)w_{\tilde{s}}^{-1}w_{\tilde{s}}\hat{g}(\gamma^{-1})\alpha\hat{f}(\beta)v_{\tilde{r}}^{-1}v_{\tilde{r}}\\
&=& \alpha\hat{f}(\beta),
\end{eqnarray*}
and it is an isomorphism since $\beta$ and $\gamma$ are isomorphisms.
We have now shown that $F$ is essentially surjective, and
thus an equivalence of groupoids.
\end{proof}
\begin{thebibliography}{10}
\bibitem{AM:2008} M.\ Aguiar and S.\ Mahajan, Monoidal functors,
species and Hopf algebras, available at
\href{http://www.math.tamu.edu/~maguiar/a.pdf}
{http://www.math.tamu.edu/$\sim$maguiar/a.pdf}.
\bibitem{Baez}
J.\ Baez, Groupoidification. Available at\hfill\break
\href{http://math.ucr.edu/home/baez/groupoidification/}
{http://math.ucr.edu/home/baez/groupoidification/}.
\bibitem{BaezDolan:1998}
J.\ Baez and J.\ Dolan, Categorification, in
{\sl Higher Category Theory,} eds.\ E.\ Getzler and M.\ Kapranov,
Contemp.\ Math.\ 230, American Mathematical Society, Providence, Rhode
Island, 1998, pp.\ 1--36. Also available as
\href{http://arxiv.org/abs/math/9802029}
{arXiv:math/9802029}.
\bibitem{BaezDolan:2001}
J.\ Baez and J.\ Dolan, From finite sets to Feynman diagrams, in {\em
Mathematics Unlimited---2001 and Beyond}, eds.\ B.\ Engquist
and W.\ Schmid, Springer, Berlin, 2001, pp.\ 29--50. Also available as
\href{http://arxiv.org/abs/math/0004133}
{arXiv:math/0004133}.
\bibitem{Behrends:1993}
K.\ Behrend, The Lefschetz trace formula for algebraic stacks, {\em
Invent.\ Math.\ }{\bf 112} (1993), 127--149.
\bibitem{BLL}
F.\ Bergeron, G.\ Labelle, and P.\ Leroux, {\sl Combinatorial Species
and Tree-Like Structures}, Cambridge U.\ Press, Cambridge, 1998.
\bibitem{Bird:1989}
G.\ Bird, G.\ Kelly, A.\ Power, and R.\ Street, Flexible limits for
$2$-categories, {\sl J. Pure Appl. Algebra }{\bf 61} (1989),
1--27.
\bibitem{Brown} K.\ Brown, {\sl Buildings}, Springer, Berlin,
1989.
\bibitem{FioreLeinster:2005} M.\ Fiore and T.\ Leinster, Objects of
categories as complex numbers, {\sl Adv.\ Math.\ }{\bf 190} (2005),
264--277. Also available as
\href{http://arxiv.org/abs/math/0212377}
{arXiv:math/0212377}.
\bibitem{Humphreys} J.\ Humphreys, {\sl Reflection Groups and
Coxeter Groups}, Cambridge U.\ Press, Cambridge, 1992.
\bibitem{Garrett} P.\ Garrett, {\sl Buildings and Classical Groups},
CRC Press, New York, 1997. Preliminary version available at \hfill \break
\href{http://www.math.umn.edu/~garrett/m/buildings/}
{http://www.math.umn.edu/$\sim$garrett/m/buildings/}.
\bibitem{GM:2002} M.\ Guta and H.\ Maassen, Symmetric Hilbert spaces
arising from species of structures, {\sl Math.\ Zeit.\ }{\bf 239}
(2002), 477--513. Also available as
\href{http://arxiv.org/abs/math-ph/0007005}
{arXiv:math-ph/0007005}.
\bibitem{Jones} V.\ Jones, Hecke algebra representations of braid
groups and link polynomials, {\sl Ann.\ Math.\ }{\bf 126} (1987), 335--388.
\bibitem{Joyal} A.\ Joyal, Une th\'eorie combinatoire des s\'eries
formelles, {\sl Adv.\ Math.\ }{\bf 42} (1981), 1--82.
\bibitem{Joyal2} A.\ Joyal, Foncteurs analytiques et esp\`eces des
structures, in Springer Lecture Notes in Mathematics {\bf 1234},
Springer, Berlin, 1986, pp.\ 126--159.
\bibitem{Kim:1995}
M.\ Kim, A Lefschetz trace formula for equivariant cohomology, {\em
Ann.\ Sci.\ \'Ecole Norm.\ Sup.\ (4) }{\bf 28} (1995), no. 6, 669--688.
Also available at
\href{http://www.numdam.org/numdam-bin/fitem?id=ASENS_1995_4_28_6_669_0}
{http://www.numdam.org}.
\bibitem{Leinster:2008}
T.\ Leinster, The Euler characteristic of a category, {\em
Doc. Math.\ }{\bf 13} (2008), 21--49. Also available as
\href{http://arxiv.org/abs/math/0610260}
{arXiv:math/0610260}.
\bibitem{Mac Lane} S.\ Mac Lane, {\sl Categories for the
Working Mathematician}, Springer, Berlin, 1998.
\bibitem{Morton:2006}
J.\ Morton, Categorified algebra and quantum mechanics, {\em Theory
Appl.\ Categ.\ }{\bf 16} (2006), 785--854. Also available as
\href{http://arxiv.org/abs/math/0601458}
{arXiv:math/0601458}.
\bibitem{Street:1980}
R. Street, Fibrations in bicategories, {\em Cahiers Top.\
G\'eom. Diff.\ }{\bf 21} (1980), 111--160.
\bibitem{Wilf} H.\ Wilf, {\sl Generatingfunctionology}, Academic Press,
Boston, 1994. Also available at
\href{http://www.cis.upenn.edu/~wilf/}
{http://www.cis.upenn.edu/$\sim$wilf/}.
\bibitem{Weinstein:2008}
A.\ Weinstein, The volume of a differentiable stack.
Available as\hfill\break
\href{http://arxiv.org/abs/0809.2130}
{arXiv:0809.2130}.
\end{thebibliography}
\end{document}