\documentclass[11pt]{article}
\usepackage{alg}
\input{lecture}
%%%%%%%%%%%%%%%%%%%%%%%%
%Please move this environment to lecture.tex
%%An example usage is when the proof of a theorem contains some interesting
%insight, leading to some auxiliary result that is not directly connected with
%the theorem itself.
\newtheorem{fact}{Fact}
%Lecture specific macros
\newcommand\Omit[1]{}
\def\B{\mathcal{B}}
\def\A{\mathcal{A}}
\def\kna{K_N^{\A}}
\def\Nat{\mathbb{N}}
\newtheorem{qoi}{Question For Instructor}
%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\lecture{6}{9/28/2011}{Space Bounded Non-Determinism}{Prathmesh Prabhu}
%Introduction
Last lecture, we discussed time bounded non-determinism. Parallel to the
treatment of time bounded deterministic computation, we presented a hierarchy
result showing that given more time, one can perform strictly more computation.
We next explored an implication of the $\P = \NP$ question being settled in the
negative: if $\P \neq \NP$, then there exists a class of problems,
NP-intermediate, whose members are neither in $\P$ nor $\NP$-complete.
We ended the lecture by introducing the concept of {\it Relativization}, the
property that the truth of a statement remains unchanged if all the machines
involved are given access to a common oracle. In this lecture, we develop the
idea of relativization further, and use it to advance an explanation of why the
$\P = \NP$ problem can be expected to be hard to settle in either direction.
In the second part of the lecture, we continue our study of non-determinism by
looking at space bounded non-determinism. We present three main results in this
setting, noting that they are in some sense stronger than the corresponding
results in the time bounded case. We also state some
important consequences of these theorems and their proofs: they provide
natural complete problems for the classes $\NL$ and $\PSPACE$, and a
hierarchy theorem for $\NSPACE$ is obtained as a corollary of one of the
theorems.
\section{Relativization}
We discussed Relativization in the last lecture. Here, we restate the idea of
relativization, emphasizing the difference between the claims that ``a statement
relativizes'' and ``a proof (of some statement) relativizes''.
We say that a statement relativizes if, given an arbitrary oracle
$A$, the statement remains true when each Turing machine involved in the
statement is given access to $A$. A proof of a statement relativizes if
every step of the proof when considered a statement in itself relativizes
according to the definition above. Note that if a statement has a proof that
relativizes, then that implies that the statement itself relativizes. On the
other hand, it is possible that a given proof for a statement fails to
relativize, while the statement still relativizes.
\subsection{Relativization and the $\P$ vs $\NP$ problem}
Looking more broadly at the proof techniques used to demonstrate results in
complexity theory, we find that almost all of these techniques (and
therefore the results obtained with them) relativize. For example, the
techniques used thus far (diagonalization and delayed, or lazy,
diagonalization) all relativize, because the conceptual core of all of them
is simulation: when both the simulating and the simulated machine are given
access to the same oracle, the simulation goes through unchanged, with oracle
queries simply passed along. It is perhaps surprising just how many
proof techniques relativize---so many that one tends to assume (and hopefully
also verify) that a given statement relativizes unless proven otherwise.
This observation, together with the following result showing that neither of the
statements $\P = \NP$ and $\P \neq \NP$ relativizes, hints at the hardness of
settling the $\P$ vs $\NP$ question: because the statement fails to relativize
in either direction, a proof settling the question cannot relativize. This means
that some step in the proof must use a result that does not relativize, and such
results are very rare in complexity theory.
\begin{theorem}\label{06:thm:P-NP}
There exist oracles $\A$ and $\B$ such that $\P^{\A} = \NP^{\A}$ and
$\P^{\B} \ne \NP^{\B}$.
\end{theorem}
Existence of $\A$ implies that there is no relativizing proof of the statement
$\P \neq \NP$, because the statement fails to hold relative to the oracle $\A$.
Similarly, there is no relativizing proof for $\P = \NP$ because the statement
does not hold relative to oracle $\B$.
\begin{proof}
We will prove both parts of the theorem by constructing the two oracles, $\A$
and $\B$.
\subsubsection*{Construction of $\A$}
$\A$ is an oracle such that $\P^{\A} = \NP^{\A}$. One approach to constructing
$\A$ is to let $\A$ be a $\PSPACE$-complete language; such an $\A$ expands both
$\P^{\A}$ and $\NP^{\A}$ to $\PSPACE$. In
the following, we use an alternate proof technique, directly constructing an
$\A$ that lets a specific machine in $\P^{\A}$ decide a problem that is complete
for $\NP^{\A}$.
Recall from lecture 3 the $\NP$-complete language $K_N$. Below, we define the
corresponding language $\kna$ relative to oracle $\A$, and we write $N^{\A}$ for
the NTM that accepts $\kna$.
$\kna = \{\langle M, x, t\rangle \mid M$ is an NTM that accepts $x$ in $\le t$
computation steps when given access to oracle $\A\}$
The key observations for constructing $\A$ are
\begin{itemize}
\item
$\kna$ is complete for the class $\NP^{\A}$. Therefore, to show that
$\P^{\A}=\NP^{\A}$, it suffices to show that there is a machine in the class
$\P^{\A}$ that recognizes $\kna$. Let this specific machine be $M^{\A}$.
\item
Membership of a string $y$ in $\kna$ is influenced only by the responses of $\A$
on queries of length less than $|y|$. This is the case because $N^{\A}$
simulates the NTM encoded by the first part of $y$ on the input (also encoded in
$y$) for no more steps than the length of $y$. To query the oracle,
$N^{\A}$ must write down the query on the oracle tape, taking one step per
symbol, before executing the query. Therefore, any query must be of length $<
|y|$.
\end{itemize}
\end{proof}
The idea is now to encode all responses of $N^{\A}$ inside $\A$ so that $M^{\A}$
can trivially query $\A$ for the correct response for any string $x$. As noted
above, the response of $N^{\A}$ on strings of length $n$ depends on the response
of $\A$ on strings of length $< n$. Therefore, we can build $\A$ in stages, as
follows:
\begin{algorithm}
\begin{algtab}
$\A \longleftarrow \emptyset$ \\
\algforeach{phase $i = 0, 1, 2, ...$}
\algforeach{$x \in \Sigma^i$}
\algif{$N^{\A^{so\ far}}$ accepts $x$}
$\A \longleftarrow \A \cup \{x\}$
\end{algtab}
\end{algorithm}
Note that $\A^{so\ far}$, the part of $\A$ constructed before any phase, contains
enough information to determine the response of $N^\A$ on the strings considered
in that phase. Also, each phase fixes $\A$ on strings of increasing length.
Therefore, adding a string to $\A$ never affects the prior
construction of $\A$.
It then follows that for all $x \in \Sigma^*$, $x \in \kna \Leftrightarrow x
\in \A$. With access to an oracle for $\A$, $M^{\A}$ can simply query the oracle
on input $x$ and return the result. Since this can be done in linear time,
$\kna \in \P^{\A}$.
\subsubsection*{Construction of $\B$}
We prove the second half of the theorem by constructing a language $\B$ that
exploits the power of non-determinism in a way that no polytime DTM can
duplicate. Consider the language $L^{\B} = \{0^n \mid (\exists x \in \Sigma^n)$
such that $x \in \B\}$. Regardless of $\B$, $L^{\B} \in \NP^{\B}$, since on
input $0^n$ an oracle NTM $N^{\B}$ can non-deterministically guess a string of
length $n$ and query $\B$ on it. So we construct $\B$ such that $L^{\B} \notin \P^{\B}$.
Let $M^{\B}_1, M^{\B}_2, M^{\B}_3, \ldots$ be an enumeration of all DTMs clocked
at running times $n, n^2, n^3, ...$, \textit{i.e.} all polytime DTMs. Without
loss of generality, let $M_i$ have a running time of $n^i$. Every DTM $M$ occurs
infinitely often in this enumeration. Moreover, given any input $x$ such that
$M$ halts on $x$, there are infinitely many instances \{$M_i$\} of a given
machine $M$ in the enumeration such that $M_i$ halts on $x$ (and returns the
same answer). Therefore, it suffices to show that $\forall i \quad L^{\B} \ne
L(M_i^{\B})$.
We build $\B$ also in phases; in phase $i$ we realize the condition $C_i :
L^{\B} \ne L(M_i^{\B})$, and we fix the oracle only on strings longer than those
considered in the previous phases, to ensure that no prior computation results
are affected. Initially, $\B$ is empty. Each phase
computes the value of a function $f$, where $f(j)$ is greater than the length of
the longest string whose membership in $\B$ can be affected by the computations
considered in the $j^{th}$ phase. In the $i^{th}$ phase, we
test strings of length at least $f(i-1)$ (picking strings in a well-defined
order, say, lexicographic order) to find a string $x$ such that
$M^{\B}_i(0^{|x|})$ does not query $\B$ on $x$. (Note that such a string
necessarily exists, since $M^{\B}_i$ is clocked to run in time $n^i$:
eventually, we will hit a length $|x|$ such that $2^{|x|} > |x|^i$, {\it i.e.},
there are more strings of length $|x|$ than the number of steps $M^{\B}_i$ is
allowed to execute on $0^{|x|}$. Hence, there is some string of length $|x|$
that $M^{\B}_i$ does not query $\B$ on.) For this $x$, we are free to choose
$\B(x)$ without affecting the behaviour of $M^{\B}_i$ on $0^{|x|}$. We
diagonalize on this string: if $M^{\B}_i$ rejects $0^{|x|}$ ($M^{\B}_i$ claims
that there is no string of length $|x|$ in $\B$), we add $x$ to $\B$; otherwise
$M^{\B}_i$ accepts, claiming that there is a string of length $|x|$ in $\B$, and
we leave $x$ out. To ensure that no later phase adds a string that
$M^{\B}_i(0^{|x|})$ could query, we set $f(i)$ greater than $|x|^i$:
\[f(i) = |x|^i + 1\]
Since phase $i$ only changes membership in $\B$ of strings longer than
$f(i-1)$, the diagonalization set up in the earlier phases is maintained.
The algorithm is given below.
\begin{algorithm}
\begin{algtab}
$\B \longleftarrow \emptyset$\\
$f(0) \longleftarrow 0$\\
\algforeach{phase $i = 1, 2, 3, ...$}
let $\{y_j\}_{j \in J}$ be a lexicographically ordered list of the strings
$y \in \Sigma^*$ with $|y| \ge f(i-1)$\\
\algforeach{$y \in \{y_j\}_{j \in J}$}
\algif{$M_i^{\B}(0^{|y|})$ does not query $\B$ on $y$}
\algif{$M^{\B}_i(0^{|y|})$ rejects}
$\B \longleftarrow \B \cup \{y\}$\\
\algend
$f(i) \longleftarrow |y|^i + 1$\\
{\bf break}\\
\end{algtab}
\end{algorithm}
We note that, by construction, no polytime machine $M^{\B}_i$ recognizes
$L^{\B}$. This concludes the proof.
\section{Space Bounded Non-Determinism}
We now present three theorems relating to space bounded non-deterministic
computations, and some of their implications. We will see that the results here
are stronger than their counterparts in the time bounded case. In the following,
we assume that we have a function $s : \Nat \to \Nat$ such that $s(n) =
\Omega(\log(n))$. We require that the space bound (expressed via $s(n)$) is at
least logarithmic because sub-logarithmic space classes are not very well
behaved: logarithmic space is required by a random access machine just to
address a point in the input, and sub-logarithmic space severely restricts
our chosen computational model.
%
\begin{theorem}\label{06.thm.space_time}
$\NSPACE(s(n)) \subseteq \underset{c>0}{\bigcup} \DTIME(2^{c\cdot s(n)})$
\end{theorem}
Theorem (\ref{06.thm.space_time}) strengthens the earlier result that $\NTIME(s(n)) \subseteq
\underset{c>0}{\bigcup} \DTIME\left(2^{c\cdot s(n)}\right)$ by interjecting the
class $\NSPACE$, {\it i.e.},
$\NTIME(s(n)) \subseteq \NSPACE(s(n)) \subseteq \underset{c>0}{\bigcup}$
\DTIME(2^{c \cdot s(n)})$. Direct corollaries of the theorem are $\NL \subseteq
\P$ and $\NPSPACE \subseteq \EXP$.
%
\begin{theorem}\label{06.thm.space_sqr}
$\NSPACE(s(n)) \subseteq \DSPACE(s^2(n))$
\end{theorem}
Theorem (\ref{06.thm.space_sqr}) states that only quadratically more space is
needed to perform nondeterministic computation deterministically. No similar
result is known for time-bounded computation. This result is made possible due
to the fact that space is in a sense more ``flexible" than time, {\it i.e.}, we
can reuse space on tapes whereas we cannot reuse time steps. It is in a
qualitative sense ``easier" to establish definite bounds and relationships
between space complexity classes than in the time-bounded setting.
A corollary of the theorem is $\PSPACE = \NPSPACE$: $\PSPACE \subseteq \NPSPACE$
follows from the definition of the computational models; and from theorem
(\ref{06.thm.space_sqr}), $\NSPACE(p(n)) \subseteq \DSPACE(p(n)^2) \subseteq
\PSPACE$ for every polynomial $p$, since $p(n)^2$ is itself a polynomial. Hence
$\NPSPACE \subseteq \PSPACE$.
%
\begin{theorem}\label{06.thm.space_comp}
$co\NSPACE(s(n)) = \NSPACE(s(n))$
\end{theorem}
We will actually prove that $co\NSPACE(s(n)) \subseteq \NSPACE(s(n))$. The
equality is obtained by complementing twice.
Before proving these theorems, we present some direct corollaries of these
theorems.
\begin{corollary}\label{06.col.heirarchy}
$\L \subseteq \rm{NL} \subseteq \P \subseteq \NP \subseteq \PSPACE =
\rm{NPSPACE} \subseteq \EXP \subseteq \rm{NEXP}$.
\end{corollary}
\begin{proof}
We proved some of the inclusions above. The remaining inclusions are also easily
proved. $\L \subseteq \rm{NL}$, $\P \subseteq \NP$ and $\EXP \subseteq \NEXP$
follow trivially from the definition of the computational models. $\NP \subseteq
\PSPACE$ follows from a consideration of a $\PSPACE$ machine that iterates
through every possible witness verifying membership, and thus emulates the
computation of an $\NP$ machine.
\end{proof}
Whether these inclusions are strict is an open problem; the only
separations that are known come from the hierarchy results which we
discussed in previous lectures. For example, we know $\L \subsetneq
\PSPACE$ but not whether $\L \subsetneq \NP$.
%
\begin{corollary}[Hierarchy result for Non-Deterministic Space Bounded
computations]
If $s, s' : \mathbb{N} \rightarrow \mathbb{N}$ are such that $s(n)$ is space
constructible and $s(n) = \omega(s'(n))$, then $\NSPACE(s'(n)) \subsetneq
\NSPACE(s(n))$.
\end{corollary}
The proof of this corollary follows the same diagonalization strategy
as we have seen before in the proof for the hierarchy result for deterministic
space bounded computations. In the time-bounded setting, we had to employ
delayed diagonalization to extend the result from the deterministic model to the
non-deterministic one, because complementation is a hard problem in that
setting. For space bounded computations, the proof carries over directly from
the deterministic case because complementation is easy (Theorem
(\ref{06.thm.space_comp})).
The rest of this lecture is devoted to proving the first two theorems presented
in the section. We defer the proof for theorem (\ref{06.thm.space_comp}) to the
next lecture.
\subsection[SPACE inclusion in TIME]{$\NSPACE(s(n)) \subseteq
\underset{c>0}{\bigcup} \DTIME(2^{c\cdot s(n)})$}
\begin{proof}
We first assume that the function $s(n)$ is space constructible. The proof
hinges on the interpretation of runs of an NTM $N$ on the input $x$ as a search
problem on the infinite configuration graph defined as follows: The set of
vertices of the graph is exactly the set of all possible configurations of $N$
on input $x$; there is an edge from a vertex $c$ to $c'$ iff there is a valid
transition from the configuration $c$ to $c'$ for $N$ on the input $x$. For a
space bounded computation, this graph is finite. The proof proceeds by showing
that the size of the configuration graph is at worst exponential in the space
bound for $N$, and that a deterministic machine can solve the search problem on
this graph in polynomial time.
Formally, given an NTM $N = (Q,\Sigma,\Gamma,\Delta,s,t,k)$ (WLOG, we assume
that the machine has a unique start state $s$ and a unique terminal state $t$;
$k$ is the number of work tapes), the configuration graph corresponding to the
runs of $N$ on $x$ is defined as follows:
\begin{itemize}
\item[Vertices]
Each vertex must correspond to a unique configuration of the space bounded
machine. In particular, it must uniquely identify the state of the machine
($Q$), the position of the input tape-head ($|x|$), the position of the
work-tape tape-heads ($s(|x|)^k$), and the contents of the work tapes
($(|\Gamma|^{s(|x|)})^k$). Note that $x$ itself is not part of the
configuration of the machine. The space bounds on machines do not include
the size of the input string. Putting the possibilities together, the total
number of vertices is
\[|V| = |Q| \times |x| \times (|\Gamma|^{s(|x|)})^k \times (s(|x|))^k\]
Note that $|V| = 2^{O(s(|x|))}$, provided $s(|x|) \ge \log(|x|)$ (assumed to
be true).
\item[Edges]
$c \to c'$ iff there is a valid transition from $c$ to $c'$ for $N$ on input
$x$.
\end{itemize}
Now, $x$ is accepted by $N$ iff there is a path from $s$ to $t$ in the graph
defined above. This is an {\it s-t connectivity} problem on a graph of size
$2^{O(s(|x|))}$. It can be solved by a DTM in time linear in $|V|$, or
exponential in $s(|x|)$.
\end{proof}
An interesting consequence of the proof is a natural complete problem for the
class $\NL$. The problem in question, {\it s-t connectivity}, is defined below.
\begin{definition}[ST-CON]
Given a finite directed graph and two special vertices $s$ and $t$,
determine whether there is a path from $s$ to $t$.
\end{definition}
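For concreteness, the deterministic search underlying Theorem \ref{06.thm.space_time} is just breadth-first search on the (configuration) graph. A minimal sketch in Python; the function name and input encoding are our own illustrative choices:

```python
from collections import deque

def st_con(n, edges, s, t):
    """Decide ST-CON: is there a directed path from s to t?

    n: number of vertices (labeled 0..n-1); edges: list of (u, v) pairs.
    Runs in time linear in the size of the graph (plain BFS).
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    seen = [False] * n
    seen[s] = True
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                queue.append(v)
    return False
```

Run on the configuration graph of an $\NSPACE(s(n))$ machine, whose size is $2^{O(s(n))}$, this gives exactly the $\DTIME(2^{O(s(n))})$ bound of the theorem.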
\begin{fact}
ST-CON is complete for $\NL$ under $<^{log}_m$.
\end{fact}
\begin{proof}
We show that ST-CON is (a) in-$\NL$ and (b) $\NL$-hard.
\begin{itemize}
\item[in-$\NL$]
Given an ST-CON problem instance, an NTM $N$ can
non-deterministically choose the next node among the current node's successors,
creating a walk on the graph starting from $s$. If the walk ever hits $t$, $N$
accepts; to guarantee termination, $N$ also maintains a step counter and rejects
once the counter exceeds the number of nodes. The space required is that for
storing the current node and the counter. All nodes can be indexed with indices
$\le$ the size of the problem instance, so both fit in space logarithmic in the
input size.
\item[$\NL$-hard]
Given any problem in $\NL$, the corresponding configuration graph is in
the worst case exponential in the space bound, so each of its nodes can be
encoded in space logarithmic in the size of the graph, {\it i.e.}, linear in the
space bound of the original machine. So, any problem in $\NL$ can be converted
to an ST-CON instance by generating the configuration graph
completely; this conversion can be done by a log-space (deterministic)
machine. We can assume, WLOG, that the configuration graph has a unique
initial and a unique accepting configuration. Now, the initial machine accepts
iff the answer to the derived ST-CON instance is yes. Hence, there exists a
$<^{log}_{m}$ reduction from any problem in $\NL$ to ST-CON.
\end{itemize}
\end{proof}
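The nondeterministic walk from the in-$\NL$ part of the proof can be sketched as follows. A real $\NL$ machine would simply guess each next node; the sketch below simulates those guesses by exhaustive search, but note that each branch only ever stores the pair (current node, step counter), each of $O(\log n)$ bits. Names are illustrative:

```python
def nl_walk_accepts(n, edges, s, t):
    """ST-CON via the NL-style walk: per branch, the machine keeps only the
    current node and a step counter. Nondeterministic choice of the next node
    is simulated here by trying every successor."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)

    def walk(node, steps):
        if node == t:
            return True        # some sequence of guesses reached t: accept
        if steps >= n:
            return False       # counter exhausted: this branch rejects
        return any(walk(v, steps + 1) for v in adj[node])

    return walk(s, 0)
```

The step counter is what makes every branch terminate, even on graphs with cycles.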
Another natural complete problem for $\NL$ is 2-SAT.
\begin{definition}[2-SAT]
Given a formula in CNF (a conjunction of disjunctions) such that no clause
has more than 2 literals, decide whether there is an assignment to the
variables for which the formula evaluates to {\it true}.
\end{definition}
\begin{fact}
2-SAT is complete for $\NL$ under $<^{log}_m$.
\end{fact}
\begin{proof}
Again, there are two parts to the proof.
\begin{itemize}
\item[in-$\NL$]
Left as an exercise.
\item[$\NL$-hard]
By theorem (\ref{06.thm.space_comp}), the complement of
ST-CON is also in $\NL$. We show the following reduction:
\[ \overline{ST-CON} <^{log}_m 2-SAT \]
The reduction encodes a given $\overline{ST-CON}$ problem as a 2-SAT
formula. For every node $v$ of the graph, there is a variable $x_v$ in the
formula, with the intended meaning that $x_v$ is true iff $v$ is reachable from
the source node $s$. The following 2-SAT clauses encode the problem instance
$(V,E,s,t)$
\begin{itemize}
\item $ (u,v) \in E \Rightarrow (\overline{x}_u \vee x_v) \in \Phi$
\item $ x_s \in \Phi$
\item $ \overline{x}_t \in \Phi$
\end{itemize}
Note that this 2-SAT instance can be produced in log-space. Finally, $\Phi$ is
satisfiable iff $t$ is not reachable from $s$.
\end{itemize}
\end{proof}
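The clause construction above is mechanical. The sketch below (with illustrative names) builds the 2-SAT instance for a given graph instance, together with a brute-force satisfiability check that, on toy instances, confirms the formula is satisfiable exactly when $t$ is unreachable from $s$:

```python
from itertools import product

def co_stcon_to_2sat(n, edges, s, t):
    """Build the 2-SAT clauses for a co-ST-CON instance on nodes 0..n-1.
    A literal (v, True) stands for x_v, (v, False) for its negation."""
    clauses = [((u, False), (v, True)) for (u, v) in edges]  # edge (u,v): ~x_u \/ x_v
    clauses.append(((s, True),))                             # unit clause x_s
    clauses.append(((t, False),))                            # unit clause ~x_t
    return clauses

def brute_force_sat(n, clauses):
    """Exponential-time check, only for validating the reduction on toy inputs."""
    return any(
        all(any(bits[v] == sign for (v, sign) in cl) for cl in clauses)
        for bits in product([False, True], repeat=n)
    )
```

On the path $0 \to 1 \to 2$, the instance with $(s,t) = (0,2)$ yields an unsatisfiable formula (since $t$ is reachable), while $(s,t) = (2,0)$ yields a satisfiable one.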
\subsection[Non-Deterministic - Deterministic space relationship]{$\NSPACE(s(n))
\subseteq \DSPACE(s^2(n))$}
\begin{proof}
The main idea behind the proof is to search over all possible executions of
the given NTM $N$, in the form of computation tableaux, in order to check if any
valid tableau leads to acceptance. Figure \ref{06.fig.tabl} shows an example of
a computation tableau. We would like to determine if there is a sequence of
configurations that takes us from $c_0$ to $c_1$ using no more than $s(n)$
space. As noted in the last proof, any configuration of $N$ can be encoded in
$O(s(|x|))$ space for input $x$, so the width of the tableau is $O(s(|x|))$. On
the other hand, the height of the tableau in the worst case can be
$2^{O(s(n))}$, allowing for all possible configurations without repetition.
Thus, the straightforward simulation where all possible tableaux are generated
in order does not work, because it involves an exponential number of decisions
(there is a decision point at each row of the tableau).
\begin{figure}[t]
\centering
$\begin{array}{|c|}\hline \mathrm{Initial \ configuration} \ c_0 \\ \hline \mathrm{Next \ configuration} \\ \hline \cdots \\ \hline \cdots \\ \hline \cdots \\ \hline \cdots \\ \hline \mathrm{Final\ configuration} \ c_1 \\ \hline
\end{array}$
\caption{An illustration of the configuration tableau used in the
proof of Theorem~\ref{06.thm.space_sqr}. The tableau has width $O(s(n))$ and
height $t = 2^{O(s(n))}$.
\label{06.fig.tabl}}
\end{figure}
Again, a naive approach takes exponential space. To achieve the quadratic space
blow-up, we use a divide and conquer strategy. Instead of searching for a valid
computation tableau of height $t = 2^{O(s(|x|))}$ from $c_0$ to $c_1$, we guess
an intermediate configuration $c_{\frac{1}{2}}$ and verify independently that
there is a computation tableau of height $t/2$ from $c_0$ to $c_{\frac{1}{2}}$
and a computation tableau of height $t/2$ from $c_{\frac{1}{2}}$ to $c_1$.
The benefit of breaking the problem into these subproblems lies in the fact that
each of the subproblems is half the size of the original problem (we are
searching for computation tableaux of half the height), the two subproblems can
be solved in sequence, and the space required to solve the first instance can be
reused to solve the second instance.
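The divide-and-conquer search can be sketched directly. In the Python sketch below (illustrative; a real machine enumerates configurations in place on a work tape rather than taking them as a list), `reach` decides whether `c2` is reachable from `c1` in at most `t` steps, with recursion depth $\log t$ and one guessed midpoint stored per level:

```python
def reach(c1, c2, t, configs, step):
    """Is there a computation of length at most t from c1 to c2?
    step(a, b) reports whether a -> b is a single valid transition.
    Only one midpoint is live per recursion level, and the space used by the
    first recursive call is reused by the second."""
    if t <= 1:
        return c1 == c2 or step(c1, c2)
    return any(
        reach(c1, mid, t // 2, configs, step)
        and reach(mid, c2, t - t // 2, configs, step)
        for mid in configs  # "guess" the midpoint configuration
    )
```

With $2^{O(s(n))}$ configurations, the recursion depth is $O(s(n))$ and each level stores $O(s(n))$ bits, giving the $O(s(n)^2)$ space bound of the theorem.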
Moreover, this problem reduction can be stated as a Boolean formula:
\[x \in L(M) \Leftrightarrow (\exists c_{1/2})(\forall b \in \{0,1\})
c_{\frac{b}{2}} \vdash_M^{\frac{t}{2}} c_{\frac{b+1}{2}}\]
This reduction can be applied recursively, reusing the space that was taken up
by the computation of the first part of the problem. The base case is reached
after $\log(t)$ recursions (here $t$ is the height of the original tableau),
where we need only verify that one configuration transitions to another in one
step. The fully unrolled recursion can be stated as the following fully
quantified Boolean formula:
\Omit{
\begin{align*}
x &\in L(M) \Leftrightarrow\\
&c_0^0 = c_0 \wedge c_1^0 = c_1\\
&\wedge (\exists c^1_{\frac{1}{2}})(\forall b^1 \in \{0,1\})
[(b^1 = 0) => c^1_0 = c^0_0 \wedge c^1_1 = c^1_{\frac{1}{2}}]
[(b^1 = 1) => c^1_0 = c^1_{\frac{1}{2}} \wedge c^1_1 = c^0_1]\\
&\wedge (\exists c^2_{\frac{1}{2}})(\forall b^2 \in \{0,1\})
[(b^2 = 0) => c^2_0 = c^1_0 \wedge c^2_1 = c^2_{\frac{1}{2}}]
[(b^2 = 1) => c^2_0 = c^2_{\frac{1}{2}} \wedge c^2_1 = c^1_1]\\
&\ldots\\
&\wedge (\exists c^k_{\frac{1}{2}})(\forall b^k \in \{0,1\})
[(b^k = 0) => c^k_0 = c^{k-1}_0 \wedge c^k_1 = c^k_{\frac{1}{2}}]
[(b^k = 1) => c^k_0 = c^k_{\frac{1}{2}} \wedge c^k_1 = c^{k-1}_1]\\
&\wedge c^k_0 \vdash_M^1 c^k_1
\end{align*}
}
\begin{tabbing}
$x \in$\= $L(M) \Leftrightarrow$\+\\
$c_0^0 = c_0 \wedge c_1^0 = c_1$\\
$\wedge (\exists c^1_{\frac{1}{2}})(\forall b^1 \in \{0,1\})$\=\+\\
$[(b^1 = 0) \Rightarrow c^1_0 = c^0_0 \wedge c^1_1 = c^1_{\frac{1}{2}}]$\\
$[(b^1 = 1) \Rightarrow c^1_0 = c^1_{\frac{1}{2}} \wedge c^1_1 = c^0_1]$\-\\
$\wedge (\exists c^2_{\frac{1}{2}})(\forall b^2 \in \{0,1\})$\+\\
$[(b^2 = 0) \Rightarrow c^2_0 = c^1_0 \wedge c^2_1 = c^2_{\frac{1}{2}}]$\\
$[(b^2 = 1) \Rightarrow c^2_0 = c^2_{\frac{1}{2}} \wedge c^2_1 = c^1_1]$\-\\
$\ldots$\\
$\wedge (\exists c^k_{\frac{1}{2}})(\forall b^k \in \{0,1\})$\+\\
$[(b^k = 0) \Rightarrow c^k_0 = c^{k-1}_0 \wedge c^k_1 = c^k_{\frac{1}{2}}]$\\
$[(b^k = 1) \Rightarrow c^k_0 = c^k_{\frac{1}{2}} \wedge c^k_1 = c^{k-1}_1]$\-\\
$\wedge c^k_0 \vdash_M^1 c^k_1$
\end{tabbing}
The depth of this fully quantified Boolean formula is logarithmic in the height
of the original tableau, {\it i.e.}, $O(s(|x|))$. Each guessed configuration
takes $O(s(|x|))$ space. Therefore, the total space required to write this
formula down on one of the tapes is $O(s(|x|)^2)$.
The innermost predicate checks whether two guessed configurations are related
by a single valid transition of $M$. This check requires space no more than
linear in the configurations being checked, {\it i.e.}, $O(s(|x|))$. Finally,
iterating through all possible configurations at each guess point is trivial
(similar to incrementing an $O(s(|x|))$-bit counter).
Hence, the total space required to write down the Boolean formula and check
its validity is $O(s(|x|)^2)$.
\end{proof}
In both proofs above (for theorems \ref{06.thm.space_time} and
\ref{06.thm.space_sqr}), we implicitly assumed that $s(n)$ is space
constructible. This assumption is not a major handicap. If $s(n)$ is not space
constructible, we can run the procedure several times while increasing a fixed
space bound until the computation has enough space to complete. More precisely,
iterate over space bounds $s = 1,2,3,...$. If the computation reaches an
accepting state, then accept. If the computation does not accept but tries to
exceed the space bound, repeat this procedure with a larger space bound. If the
computation does not on any path try to exceed the space bound, then accept if
there is a path to an accepting state, otherwise reject. Once again, we see the
economy of being able to reuse space to attempt simulation through
trial-and-error --- a facility we did not have in the time bounded setting.
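The retry loop just described can be sketched as follows. The interface `simulate(bound)` is hypothetical: it stands for running the machine restricted to `bound` tape cells and reporting whether it accepted, rejected on every path, or attempted to exceed the bound on some path:

```python
def decide_without_constructibility(simulate):
    """Iterate over space bounds s = 1, 2, 3, ...
    `simulate` is a hypothetical callback returning 'accept', 'reject', or
    'exceeded'. Space is reused across iterations, so the final bound
    dominates the total space used."""
    bound = 1
    while True:
        outcome = simulate(bound)
        if outcome == 'accept':
            return True
        if outcome == 'reject':
            return False   # no path exceeds the bound and none accepts
        bound += 1         # some path ran out of space: retry with more
```

For instance, a computation that needs three cells reports 'exceeded' for bounds 1 and 2 and is decided at bound 3.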
Just as the proof of theorem \ref{06.thm.space_time} yielded a natural complete
problem for $\NL$, the proof above is closely connected to an important
problem: TQBF.
\begin{definition}[TQBF]
Given a fully quantified Boolean formula, determine whether the formula is
{\it true}.
\end{definition}
%
\begin{corollary}
TQBF is complete for $\PSPACE$ under $<^{log}_m$.
\end{corollary}
\begin{proof}
The proof of Theorem \ref{06.thm.space_sqr} gives a mapping reduction
transforming a general $\PSPACE$ problem into a TQBF instance; the formula
produced has polynomial size and can be generated in logarithmic space.
Therefore, TQBF is $\PSPACE$-hard. It is also easy to specify a machine that
solves TQBF in $\PSPACE$: the machine simply iterates over all possible
valuations of the variables in the Boolean formula. Both storing the current
valuation of the variables, and evaluating the formula for this valuation, take
space only polynomial in the size of the formula.
\end{proof}
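The $\PSPACE$ procedure from the proof can be sketched as a recursive evaluator. The representation below (a quantifier prefix over variables $0, 1, \ldots$ plus a CNF matrix) is an illustrative choice; note that only one partial valuation is live at any time, so the space used is polynomial in the size of the formula:

```python
def tqbf(prefix, clauses, assignment=()):
    """Evaluate a fully quantified Boolean formula.
    prefix: list of quantifiers ('A' or 'E'); the i-th one binds variable i.
    clauses: CNF matrix; literal (v, sign) is true iff assignment[v] == sign.
    Only the current partial assignment is stored: polynomial space."""
    if len(assignment) == len(prefix):
        return all(any(assignment[v] == sign for (v, sign) in cl)
                   for cl in clauses)
    branches = [tqbf(prefix, clauses, assignment + (b,))
                for b in (False, True)]
    return all(branches) if prefix[len(assignment)] == 'A' else any(branches)
```

For example, $\forall x\,\exists y\,(x \leftrightarrow y)$ is true, while $\exists x\,\forall y\,(x \leftrightarrow y)$ is false.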
The $\PSPACE$-complete problem gives us another way of showing that there exists
an oracle $\A$ such that $\P^{\A} = \NP^{\A}$. Note that $\PSPACE \subseteq \P^{\rm{TQBF}}$
by the corollary and $\P^{\rm{TQBF}} \subseteq \PSPACE$ because all queries to
the oracle can be constructed in polynomial time. Therefore $\P^{\rm{TQBF}} =
\PSPACE$. Taking this idea further we get the following containment:
\[\P^{\rm{TQBF}} = \PSPACE = \NP^{\rm{TQBF}} \subseteq (\PSPACE^{\rm{TQBF}} =
\PSPACE)\]
The last equality is model dependent because it matters whether we
count the cells used on the oracle tape or not (if the oracle tape is
not counted, a $\PSPACE^{\rm{TQBF}}$ machine could abuse the oracle tape to
compute more than an unassisted $\PSPACE$ machine could).
Many $\PSPACE$-complete problems can be interpreted as adversarial game
computations. TQBF can be thought of as a game between two players, $\forall$
and $\exists$, who alternately choose values for the quantified variables of a
Boolean formula. $\exists$ wins if, after no player can move, the formula is
true. Determining whether $\exists$ has a winning strategy for a given formula
is another $\PSPACE$-complete problem.
\section{Next Time}
In the next lecture, we will wrap up the discussion of space bounded
non-deterministic models with the proof of Theorem~\ref{06.thm.space_comp}.
We will then introduce the polynomial-time hierarchy, look at some properties of
these classes, and present complete problems for each of them.
\section*{Acknowledgements}
In writing the notes for this lecture, I perused the notes by Baris Aydinlioglu
and Matthew Anderson for lectures 4 and 5 from the Spring 2007 offering of
CS~810, and the notes by Dmitri Svetlov and Michael Correll for lectures 4 and 5
from the Spring 2010 offering of CS~710.
\end{document}