ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)





Tomoyuki Yamakami:

Average Case Computational Complexity Theory


Department of Computer Science
University of Toronto
Toronto, Ontario
Canada M5S 3G4


Abstract

The hardest problems in the complexity class NP are called NP-complete. Not all NP-complete problems, however, are equally hard to solve on average. For example, the Hamiltonian circuit problem has been shown to be solvable deterministically in polynomial time on average, whereas the bounded tiling problem remains hard to solve even on average. A thorough analysis of the average-case behavior of algorithms is therefore needed.

In response to this need, L. Levin initiated a theory of average-case NP-completeness in 1984. Levin's theory treats average-case NP-complete problems via polynomial-time many-one reductions; this reducibility provides a means of classifying the distributional NP problems.
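The central definition of Levin's theory is that of a running time being "polynomial on average" with respect to an input distribution. The standard formulation, stated here as background rather than quoted from the thesis, is:

```latex
% Levin's "polynomial on \mu-average" (standard formulation; the thesis
% may state it with minor variations). Here \mu' denotes the density
% (probability weight) that the distribution \mu assigns to each string.
\[
  f \text{ is polynomial on } \mu\text{-average}
  \iff
  \exists\, \varepsilon > 0 :\;
  \sum_{x \in \Sigma^{*}} \frac{f(x)^{\varepsilon}}{|x|}\;\mu'(x) \;<\; \infty .
\]
```

The exponent ε makes the notion robust: unlike the naive requirement that the expected running time be polynomial, this definition is closed under polynomial composition, which is what allows completeness under polynomial-time many-one reductions to go through.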

In this thesis, we develop a more general theory of average-case complexity to determine the relative complexity of natural average-case intractable problems. We investigate the structure of reducibilities, including a bounded-error probabilistic truth-table reducibility. We introduce a variety of relativizations of fundamental average-case complexity classes of distributional decision problems. These relativizations are essential when we expand the notion of average polynomial-time computability to develop a hierarchy above the average NP problems.

Average-case analyses are very sensitive to the choice of probability distribution. We have observed, for instance, that if the input probability distribution decays exponentially with input size, then all NP-complete problems are solved "fast" on average. This phenomenon does not reflect a significant feature of average-case analysis. This thesis therefore includes a thorough analysis of the structural properties of feasibly computable distributions and feasibly samplable distributions.
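The two distribution classes mentioned above are usually formalized as follows; these are the standard definitions from the literature, given here as background and not quoted from the thesis:

```latex
% Standard definitions (an assumption; the thesis may use variants).
% \mu^{*}(x) = \sum_{y \le x} \mu'(y) is the cumulative distribution
% of \mu with respect to the lexicographic order on \Sigma^{*}.
\[
  \mu \text{ is P-computable}
  \iff
  \mu^{*}(x) \text{ is computable in time } |x|^{O(1)}
  \text{ to within } 2^{-k} \text{ for any given precision } k,
\]
\[
  \mu \text{ is P-samplable}
  \iff
  \exists \text{ a probabilistic polynomial-time sampler } S :\;
  \Pr[\,S \text{ outputs } x\,] = \mu'(x) \text{ for all } x .
\]
```

Every P-computable distribution is P-samplable, while the converse is believed to fail; the exponentially decaying distributions alluded to above concentrate so little weight on long inputs that even exponential-time algorithms become polynomial on average, which is why such distributions are uninformative for average-case analysis.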

In addition, one may ask how to extract the essential average-case behavior of an algorithm independently of the choice of probability distribution. To answer this question, this thesis introduces the new notion of quintessential computability, which expands the boundary of worst-case feasible computability (such as polynomial-time computability) and requires the average-case complexity of an algorithm to be invariant under every choice of feasibly computable distribution. The thesis examines the hardness of this notion of computability and its structural properties.


Preface

The theory of average-case NP-completeness came forcibly to my attention while I was a visiting scholar at the Universität Ulm from April to August of 1991. In June of 1991, the annual meeting of complexity theorists from the Universität Ulm and the Universitat Politècnica de Catalunya was held in Barcelona. Uwe Schöning, the director of the Abteilung Theoretische Informatik of the Universität Ulm, assigned to the young researchers the topics to be studied extensively at that year's meeting: average-case NP-complete problems and local search problems. Six years earlier, L. Levin had presented his idea of average-case NP-completeness, and several important studies had been done along these lines.

I started reading these papers and technical reports and enjoyed discussing Levin's definition of "polynomial on average" with Rainer Schuler, who was finishing his thesis on probabilistic computations. The foundations of this thesis were established during this time, and the results were presented at a conference in New Delhi in December 1992.

In June of 1994, I met Rainer Schuler again at a conference held in Amsterdam. He had with him a paper that solved a problem we had left open in our 1992 paper. We soon began working together, refining his key algorithm to construct hard sets that are not computable in feasible time. These results were later presented at a conference in Xi'an, China, in August of 1995 and are also included in this thesis.

This thesis demands little preparatory knowledge of the theory of computational complexity. Most concepts are thoroughly defined in each section of the thesis or are self-explanatory.

I am extremely grateful to Stephen A. Cook for his hospitality and expert supervision. I thank him also for his direction and support, without which I could not have come to Canada to pursue my Ph.D. degree. My thanks also go to my friend Rainer Schuler, who has been my collaborator since I visited the Abteilung Theoretische Informatik of the Universität Ulm in 1991. I would like to thank Jie Wang and Osamu Watanabe for helpful comments and fruitful criticism. Special thanks go to Yuri Gurevich and Alan Selman for their kindness and support. I am also indebted to Leonid Levin and Oded Goldreich for helpful comments. I greatly appreciate the input of my thesis committee members: Steve Cook, Allan Borodin, Alasdair Urquhart, Charlie Rackoff, Yuri Gurevich, Anthony J. Bonner, Rudolf Mathon, and Radford Neal.

I thank my friends Brian Nixon and Luis Dissett at the University of Toronto for their kind advice and encouragement. My special thanks also go to Eric Harley and Debby Repka for pointing out typos and grammatical errors in an early manuscript.

My parents, Fujio and Yoshiko, have supported me emotionally and financially during my studies in Toronto. I also thank my grandmother, Nawo, from the bottom of my heart for her spiritual guidance. My great appreciation goes to my fiancée Mitsue Nomura, who has helped me write this thesis.

Tomoyuki Yamakami

Toronto, Canada
May 7, 1997


Submitted in conformity with the requirements
for the Degree of Doctor of Philosophy
Graduate Department of Computer Science
University of Toronto


ISSN 1433-8092