The importance of the Baker-Gill-Solovay result

### What are recursive functions?

Recursive functions are a kind of computational model. There are several definitions, each varying slightly, but they all give a way to discuss the computable functions. Much of the work on the theory of computation, before computational complexity came about, was done in the language of recursive functions. Things like the $S^m_n$-theorem and the recursion theorem are not exactly about recursive functions, but were originally expressed in this discourse. They are Turing complete, but number-theoretic in nature: recursive functions are exactly the computable functions $\mathbb{N} \rightarrow \mathbb{N}$. A Turing machine not halting on a specific input corresponds to its equivalent recursive function being partial there. Naturally, then, there is a correspondence between the set of decidable languages and the set of total recursive functions. I don’t want to get too involved in the definition since it is (for this post) uselessly complicated. We don’t need their whole life story to know why they were killed.

### Diagonalization

Diagonalization is a proof technique. You order some elements, construct a “fixed point”, or a “diagonal”, and take its opposite, reaching some kind of contradiction. You may be familiar with the use of diagonalization in analysis and set theory. This technique was useful in proving early theorems in computability and computational complexity.

I will give you two proofs using diagonalization to show you what I am talking about. Then we can discuss the nature of these proofs. Let’s give a proof of the halting problem, stated in the syntax of recursive function theory.

Let $\varphi_1, \varphi_2, \ldots$ be an enumeration of the recursive functions. Then there is no total recursive function

$$g(i,x) = \begin{cases} 1 & \text{if } \varphi_i(x) \text{ halts} \\ 0 & \text{if } \varphi_i(x) \text{ loops} \end{cases}$$

Assume that $g$ is total recursive. Then we construct

$$\psi(x) = \begin{cases} 1 & \text{if } g(x,x) = 0 \\ \text{undefined (loops)} & \text{if } g(x,x) = 1 \end{cases}$$

Certainly $\psi$ is recursive, by construction from $g$. Since it is recursive, it appears somewhere in our enumeration: there exists some index $i$ such that $\psi(x) = \varphi_i(x)$. Consider $\psi$ on its own index:

$$\psi(i) = 1 \iff g(i,i) = 0 \iff \varphi_i(i) \text{ loops } \iff \psi(i) \text{ loops }$$
$$\psi(i) \text{ loops } \iff g(i,i) = 1 \iff \varphi_i(i) \text{ halts } \iff \psi(i) \text{ halts }$$

Hence, no such total recursive function $g$ can exist.
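The diagonal construction above can be phrased as code, under one big assumption (which is the point of the proof): `g` is a claimed total halting decider with `g(i, x) == 1` iff program $i$ halts on input $x$. All names here are mine.

```python
# Sketch of the diagonal construction: from any claimed halting decider g,
# build the function psi from the proof. If g were total recursive, psi
# would be too, and running psi on its own index yields the contradiction.

def make_psi(g):
    def psi(x):
        if g(x, x) == 0:    # g claims program x loops on input x ...
            return 1        # ... so psi halts with output 1
        while True:         # g claims program x halts on input x ...
            pass            # ... so psi deliberately loops
    return psi
```

Note that `psi` is built from `g` purely by composition and wiring, without inspecting anything about `g` beyond its input/output behavior, which is exactly the black-box flavor discussed below.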

Now let me give you a more modern application of diagonalization: proving a very weak form of the time hierarchy theorem. Let $\mathsf{DTIME}(f(n))$ be the set of languages decidable by a deterministic Turing machine which takes no more than $O(f(n))$ steps. We show that $\forall k ~~ \mathsf{DTIME}(n^k) \subsetneq \mathsf{DTIME}(n^{k+1})$. The containment is obvious, so we only need to exhibit a language decidable in time $O(n^{k+1})$ but not in time $O(n^k)$. We will diagonalize over all languages decidable in time $n^k$ while making sure our own language runs in time $n^{k+1}$. Let $M_1, M_2, \ldots$ be an enumeration of the Turing machines deciding languages in $\mathsf{DTIME}(n^k)$. Construct a Turing machine $D$ as follows. On input $w$, compute $n = |w|$. Simulate $M_n$ on input $w$ for $n^k$ steps using a universal Turing machine. If $M_n$ halts within this time bound, output $1 - M_n(w)$; otherwise, output $0$. Certainly $L(D) \notin \mathsf{DTIME}(n^k)$: if it were, there would exist some $i$ such that $L(D) = L(M_i)$, but on any input $w$ of length $i$, $D(w) = 1 - M_i(w) \neq M_i(w)$. What is the running time of $D$? It depends on the cost of simulation and the actual machine construction, but you could be convinced that running such a universal simulator with a counter for $n^k$ steps takes no more than $n^{k+1}$. So $L(D) \in \mathsf{DTIME}(n^{k+1})$.
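The diagonalizer $D$ can be sketched concretely, with the hard parts assumed away: `machines[n]` stands in for $M_n$ in the enumeration, and `simulate(M, w, steps)` plays the step-counting universal machine, returning a pair `(halted_within_budget, output)`. Both names are mine, not a real library.

```python
# A sketch of the diagonalizing machine D, with the heavy machinery
# assumed: `machines` is the enumeration M_1, M_2, ..., and
# `simulate(M, w, steps)` runs M on w for at most `steps` steps.

def D(w, machines, simulate, k):
    n = len(w)
    M = machines[n]                        # look up M_n
    halted, out = simulate(M, w, n ** k)   # clocked simulation: n^k steps
    if halted:
        return 1 - out                     # disagree with M_n on input w
    return 0                               # budget exceeded: reject
```

A toy `simulate` could model each machine as a `(steps_needed, output)` pair; the only point is that $D$ flips the answer of the $n$-th machine whenever the clocked simulation finishes.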

### Remarks

Diagonalization was invented by Cantor, as a second proof that the reals are uncountable. Here it has been put to use in proving two foundational results in computability and complexity. You have a set of somethings, and you diagonalize out of that set. You couldn’t do the second proof over something more general, like all Turing machines, because you don’t know when a given machine will halt on a given input; the simulation counter is important here. Also notice that the tightness of our hierarchy depends on the cost of simulation, and tighter bounds can be proven for different variants of Turing machines.

I want to call attention to the syntax. I use two different kinds of notation, but the idea is similar: you have some sort of self-referential point, and a simulation. The simulation occurs in a “black-box” way. No intrinsic or deep properties of what is being simulated come into play, only the relationship between inputs and outputs. Objects are just wired together without using anything about them other than their interface.

Diagonalization had been a useful technique. It was also used to prove Ladner’s theorem, which (among other things) implies that if $\mathsf{P} \neq \mathsf{NP}$, there exist languages in $\mathsf{NP}$ which are not $\mathsf{NP}$-complete. So diagonalization was applicable just outside of $\mathsf{P}$ and within $\mathsf{NP}$. Why couldn’t we separate $\mathsf{P}$ from $\mathsf{NP}$ via diagonalization?

### Relativization

We will now show such a proof cannot separate $\mathsf{P}$ from $\mathsf{NP}$. This was one of the first barriers found, and part of why $\mathsf{P}$ vs $\mathsf{NP}$ is such a legendarily hard problem. We have no idea how to prove it, but we have several proofs showing what the proof can’t look like.

We will generalize these kinds of diagonalization proofs and call them relativizing proofs. An oracle machine is a kind of Turing machine, except it has oracle access to some language: it can determine membership in this language using a separate query tape. Think of it like an API. For a complexity class $\mathsf{C} \subseteq \mathcal{P}(\Sigma^*)$ and a language $L \subseteq \Sigma^*$, we write $\mathsf{C}^L$ for the analogue of $\mathsf{C}$ where the machines have an oracle for $L$, deciding membership in $L$ in constant time. This is of course not realistic. You could even take $L = HALT$ and discuss which languages are decidable relative to the halting problem. Notice that oracle access can only give you power, so you can prove things like $\mathsf{C} \subseteq \mathsf{C}^L$. As a warmup, consider $\mathsf{P}^{SAT}$: polynomial-time machines, except with access to an oracle which decides SAT instances. Certainly $SAT \in \mathsf{P}^{SAT}$, and since $SAT$ is $\mathsf{NP}$-complete, $\mathsf{NP} \subseteq \mathsf{P}^{SAT}$. Polynomial-time computation relative to $SAT$ must contain $\mathsf{NP}$.
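As a toy model of the oracle interface (the names are mine, and the “language” is just a finite set): the oracle is a constant-time membership test handed to the machine, playing the role of the query tape. The warmup observation, that a $\mathsf{P}^{SAT}$ machine decides $SAT$ with a single query, becomes a one-liner.

```python
def make_oracle(language):
    """Wrap a (toy, finite) language as a constant-time membership test."""
    return lambda w: w in language

def decide_sat_with_oracle(formula, sat_oracle):
    """SAT is in P^SAT: forward the input as a single oracle query."""
    return sat_oracle(formula)
```

The machine only sees the oracle through its interface, never its insides, which is exactly what makes these arguments relativize.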

Suppose that there was a diagonalization proof separating $\mathsf{P}$ from $\mathsf{NP}$. We would simulate all machines deciding languages in $\mathsf{P}$, and construct a machine which views the outputs of these simulations and behaves differently from each one. Then we would show this machine decides a language in $\mathsf{NP}$. Such a proof would still hold in a relativized world: a proof of this shape that $\mathsf{P \neq NP}$ would also show $\mathsf{P}^L \neq \mathsf{NP}^L$ for every $L$, since the simulator for each oracle machine in $\mathsf{P}^L$ would simply forward oracle queries to its own oracle. It follows that if we can construct an oracle $B$ such that $\mathsf{P}^B = \mathsf{NP}^B$, then no such diagonalization proof can exist. Similarly, any simulation-style proof of equality would relativize, so if we can construct an oracle $A$ such that $\mathsf{P}^A \neq \mathsf{NP}^A$, no such proof of equality can exist either. We can only conclude that relativizing techniques cannot resolve $\mathsf{P}$ vs $\mathsf{NP}$.

### The Proof

We construct an oracle $B$ such that $\mathsf{P}^B = \mathsf{NP}^B$. The idea is to choose $B$ to be a complete problem for a class large enough to contain both. By “lifting” both classes with the oracle to the same level, we have the side effect of making them equivalent. A natural choice, then, is any $\mathsf{PSPACE}$-complete problem, such as $B = TQBF$. The containment $\mathsf{P}^{TQBF} \subseteq \mathsf{NP}^{TQBF}$ is easy, so we only need to show the reverse.

$$\mathsf{NP}^{TQBF} \stackrel{1}{\subseteq} \mathsf{NPSPACE} \stackrel{2}{\subseteq} \mathsf{PSPACE} \stackrel{3}{\subseteq} \mathsf{P}^{TQBF}$$

The first containment holds by simulating the $TQBF$ oracle: each query can be answered directly in polynomial space, giving a nondeterministic machine that uses polynomial space, hence $\mathsf{NPSPACE}$. The second containment holds by Savitch’s theorem. The third holds because a $\mathsf{PSPACE}$ machine’s question can be reduced in polynomial time to our $\mathsf{PSPACE}$-complete problem and handed to the oracle.

We will construct an oracle $A$ and a language $L_A \in \mathsf{NP}^{A}\setminus \mathsf{P}^{A}$, ironically, by diagonalization. Let $L_A$ be the set of strings such that some string of the same length is in $A$; namely $L_A = \{w \mid \exists x \in A( |x| = |w| )\}$. Let $M_1, M_2, \ldots$ be an enumeration of the polynomial-time oracle machines. We may even suppose this enumeration is arranged so that $M_i$ halts in time $n^i$. We construct the oracle $A$ inductively, in a sequence of stages, where stage $i$ ensures that $M_i^A$ does not decide $L_A$. At each stage, only finitely many strings are determined to be in or out of $A$.
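For a finite stand-in for $A$, deciding $L_A$ is just a scan over the strings of length $|w|$; an $\mathsf{NP}^A$ machine would instead guess one such string and make a single query. A sketch (names mine):

```python
from itertools import product

def in_L_A(w, A):
    """w is in L_A iff some string of the same length as w belongs to A.
    Deterministic brute force over all 2^|w| binary strings -- exactly
    the amount of work a polynomial-time machine cannot afford."""
    return any("".join(bits) in A for bits in product("01", repeat=len(w)))
```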

Suppose we are at stage $i$. Let $w$ be the longest string whose membership in $A$ has been determined so far. Choose $n$ such that $2^n > n^i$ and $n > |w|$. We are going to extend our knowledge of $A$ so that $M^A_i$ accepts $1^n \iff 1^n \not\in L_A$. We run $M_i^A$ on $1^n$. When it queries the oracle with a string decided in an earlier stage, the oracle answers consistently; when it queries a fresh string $y$, the oracle prophesies “no”, declaring $y \not\in A$. The fact that $M_i^A$ is polynomially bounded is essential: it does not have time to query $A$ on all $2^n$ strings of length $n$. Every length-$n$ query so far received a “no” answer, so $M_i^A$ does not yet have enough information to correctly decide whether $1^n \in L_A$, and we can make sure it is wrong by extending our decisions about $A$. If $M_i^A$ accepted $1^n$, then all remaining strings of length $n$ are declared not to be in $A$, so $1^n \not\in L_A$. If $M_i^A$ rejected $1^n$, then there exists a string of length $n$ it never queried, which we declare to be in $A$, so $1^n \in L_A$. Either way $L(M_i^A) \neq L_A$, and since this holds for every $i$, $L_A \not\in \mathsf{P}^A$. All that’s left is to show $L_A \in \mathsf{NP}^A$, which is quite easy: on input $w$, nondeterministically guess a string $x$ of length $|w|$ and query the oracle to check whether $x \in A$.
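One stage of this construction can be sketched as follows, under an assumption of my own making: the machine $M_i$ is modeled as a Python callable `M(oracle, w)` that makes its oracle queries by calling `oracle`, and the partial oracle is a dict mapping decided strings to membership bits.

```python
from itertools import product

def all_strings(n):
    """All binary strings of length n."""
    return ("".join(bits) for bits in product("01", repeat=n))

def run_stage(M, n, decided):
    """Extend the partial oracle `decided` so that M is wrong about 1^n."""
    def oracle(x):
        if x not in decided:
            decided[x] = False      # fresh query: prophesy "no"
        return decided[x]           # otherwise answer consistently
    accepted = M(oracle, "1" * n)
    if accepted:
        # M accepted: declare every remaining length-n string out of A,
        # so 1^n is NOT in L_A even though M accepted.
        for x in all_strings(n):
            decided.setdefault(x, False)
    else:
        # M rejected: it made fewer than 2^n queries, so some length-n
        # string is still undecided; put it in A, so 1^n IS in L_A.
        for x in all_strings(n):
            if x not in decided:
                decided[x] = True
                break
    return decided
```

The polynomial bound is what guarantees the `else` branch always finds an undecided string: $M_i$ makes at most $n^i < 2^n$ queries.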

### The Impact

P vs NP had already been an open problem for about five years at this point, and people were beginning to get worried. Diagonalization had failed to separate $\mathsf{P}$ from $\mathsf{NP}$, but it was thought this was just a hiccup in the proof. This result was the first real, provable barrier to the problem; many more would follow.

Recursive function theory was already falling out of use, not by necessity but by fashion. Recursive functions made a bad computational model for step counting; the Turing machine model turned out to be almost perfect for it. This result really showed that recursive functions had to go. Consider anything you can do with a recursive function $\varphi_i(x)$: you can talk about its parameter(s) $x$, or its index $i$. Both of these use the recursive function in an opaque way! You can’t really discuss the inner workings of the computation. Even the three operations, composition, primitive recursion, and unbounded minimization, use their subfunctions in a black-box way. It’s all just wiring and no deeper introspection. This was sufficient for an amazing set of results, but it would no longer suffice for the future of complexity.

In “The History and Status of the P versus NP Question” [Sip92], Sipser writes:

In a sense, the work on relativization already suggests a sort of limited independence for the P versus NP question. One might be able to formulate an axiomatic system corresponding to pure recursive function theory, in which the results of Baker, Gill, and Solovay presumably would show that the question is unresolvable. However, this would be very far from establishing independence with respect to strong systems such as number theory or set theory.

In “The Complexity of Finite Functions” [BS90], Boppana and Sipser write:

We will not include the older work on lower bounds via the diagonalization method. Though important early progress was made in that way, the relativization results of Baker, Gill, and Solovay show that such methods from recursive function theory are inadequate for the remaining interesting questions.

Instead of retiring gracefully, recursive functions were basically killed off in complexity theory. The new promised future of the next decade was circuits! Unlike recursive functions, circuits are tiny and inspectable, and you can do all kinds of things to their insides. For example, consider the threshold function $TH_{n,k}$, which is true when at least $k$ of its $n$ input bits are set. A circuit for it can be built inductively: combining circuits for $TH_{n,k-1}$ and $TH_{n,k}$ yields one for $TH_{n+1,k}$. We would eventually hit circuit barriers as well.
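The inductive idea can be sketched by evaluating the recursion directly rather than building gates: the last bit is either spent (so $k-1$ of the remaining $n$ bits must be set) or ignored (so $k$ of them must be). A minimal sketch:

```python
def threshold(bits, k):
    """True iff at least k of the given bits are set, via the recursion:
    split off the last bit and either spend it (need k-1 from the rest)
    or ignore it (still need k from the rest)."""
    if k <= 0:
        return True        # nothing left to demand
    if not bits:
        return False       # still need k > 0 bits but none remain
    *rest, last = bits
    return (last and threshold(rest, k - 1)) or threshold(rest, k)
```

The same two-case split, rendered as an OR of two subcircuits with one extra AND gate, is the inductive circuit construction mentioned above.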

• Hartmanis and Hopcroft in [HH76] provide an oracle $C$ such that $\mathsf{P}^C \neq \mathsf{NP}^C$ is undecidable. The proof uses some theorems from recursive function theory.