C. Umans. Pseudo-Random Generators for All Hardnesses.

## Explicit Constructions

In many cases a computational procedure's reliance on randomness is implicitly confined to exploiting properties of some combinatorial structure with specific "random-like" properties. Perhaps the most venerable example is that of expander graphs, which are sparse yet highly connected networks.

Other fundamental combinatorial objects include error-correcting codes, families of hash functions, and randomness extractors. All of these objects have proven to be extremely useful tools in the design of algorithms, network and distributed protocols, and a diverse array of applications within computational complexity itself. Indeed, it is not surprising that objects possessing such extremal combinatorial properties naturally appear in arguments that explore the boundary of what is computationally possible.

Probabilistic constructions exist for all of these objects; however, most applications require something much stronger: an efficient deterministic, or explicit, construction. Explicit constructions are typically much harder to achieve, and indeed constitute a form of derandomization. My research has focused primarily on obtaining explicit constructions of randomness extractors, which are bipartite graphs with the following "random-like" property: any distribution on the left-hand nodes with sufficient entropy induces a nearly-uniform distribution on the right-hand nodes.
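To make the definition concrete, here is a toy sketch of the classical one-bit inner-product (Hadamard) extractor; the parameters (4-bit source strings and seeds, min-entropy 3) are illustrative choices of mine, not a construction from the research described here.

```python
# Toy demonstration of the extractor property, using the classical
# one-bit inner-product (Hadamard) extractor. Parameters are
# illustrative only: 4-bit source strings and 4-bit seeds.
d = 4  # seed length; source strings are also 4 bits

def ext(x, s):
    # Output bit is the inner product <x, s> over GF(2).
    return bin(x & s).count("1") % 2

# A "flat" weak source: uniform over the 4-bit strings whose top bit
# is 0, i.e. min-entropy 3 (support of size 2^3 = 8).
support = list(range(8))

# Distribution of the output bit when the seed is chosen uniformly.
ones = sum(ext(x, s) for x in support for s in range(2 ** d))
p1 = ones / (len(support) * 2 ** d)
print(f"Pr[output bit = 1] = {p1}")  # 0.4375, i.e. within 1/16 of uniform
```

A true extractor guarantees near-uniformity simultaneously for every source of sufficient min-entropy, and typically outputs many bits; this sketch only checks a single flat source to illustrate the definition.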

The original motivation for extractors was to supply the algorithmic component needed to execute randomized computations on an otherwise inherently deterministic machine using "real" randomness emanating from a physical source, rather than an ad-hoc simulation of it. But extractors have since emerged as a fundamental combinatorial object in their own right, by dint of numerous applications in a wide variety of settings unrelated to their original motivation.

Explicit extractor constructions have been the subject of a long and intensive line of research spanning more than 15 years.

V. Guruswami, C. Umans, and S. Vadhan. Unbalanced Expanders and Randomness Extractors from Parvaresh-Vardy Codes.

A. Ta-Shma, C. Umans, and D. Zuckerman. Loss-less Condensers, Unbalanced Expanders, and Extractors.

## Algorithms for Algebraic Problems

Matrix multiplication is perhaps the most fundamental problem in algorithmic linear algebra.

This claim is justified by the fact that numerous other important problems are known to have the same algorithmic complexity as matrix multiplication, including computing the matrix inverse, computing the determinant, solving a system of linear equations, and computing various matrix decompositions. These problems lie at the core of application areas within scientific computing, information indexing and retrieval, cryptanalysis, data mining, network routing, and others.
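As a small illustration of one direction of these reductions, the following sketch (the name `inverse` and the assumption that leading blocks are invertible are mine, purely for exposition) computes a matrix inverse using only matrix multiplications, subtractions, and recursion on half-size blocks, via block Gaussian elimination with the Schur complement.

```python
def matmul(X, Y):
    # Schoolbook matrix product (the multiplication "oracle" being reduced to).
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def matsub(X, Y):
    return [[a - b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def matneg(X):
    return [[-a for a in row] for row in X]

def inverse(M):
    # Invert an n x n matrix (n a power of 2, generic leading blocks)
    # using only matrix multiplications, subtractions, and recursion
    # on half-size blocks: block elimination via the Schur complement.
    n = len(M)
    if n == 1:
        return [[1.0 / M[0][0]]]
    h = n // 2
    A = [row[:h] for row in M[:h]]
    B = [row[h:] for row in M[:h]]
    C = [row[:h] for row in M[h:]]
    D = [row[h:] for row in M[h:]]
    Ai = inverse(A)                               # recurse on half size
    S = matsub(D, matmul(C, matmul(Ai, B)))       # Schur complement
    Si = inverse(S)                               # recurse on half size
    AiB, CAi = matmul(Ai, B), matmul(C, Ai)
    corr = matmul(AiB, matmul(Si, CAi))
    TL = [[Ai[i][j] + corr[i][j] for j in range(h)] for i in range(h)]
    TR = matneg(matmul(AiB, Si))
    BL = matneg(matmul(Si, CAi))
    return [TL[i] + TR[i] for i in range(h)] + [BL[i] + Si[i] for i in range(h)]

print(inverse([[4.0, 7.0], [2.0, 6.0]]))  # ~[[0.6, -0.7], [-0.2, 0.4]]
```

Two half-size inversions plus a constant number of multiplications give the recurrence I(n) = 2 I(n/2) + O(M(n)), so inversion inherits matrix multiplication's exponent.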

So, resolving the complexity of matrix multiplication automatically impacts the complexity of a host of other problems, via known reductions. Given the importance of matrix multiplication, it is surprising that the best known algorithms are far from optimal.

## Chris Umans Research Summary

Coppersmith and Winograd's algorithm, first published in 1987, marked the end of a sequence of improvements that began with Strassen's breakthrough algorithm in 1969. These works developed increasingly sophisticated algorithms for matrix multiplication, in dozens of papers by numerous authors. We propose a new approach that utilizes the Discrete Fourier Transform (DFT) over non-abelian finite groups to perform fast matrix multiplication. This approach imports the problem into the domain of Group Theory and Representation Theory; the challenge of designing a fast algorithm for matrix multiplication is thereby exposed to a rich set of mathematical tools and techniques.
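For reference, Strassen's original idea can be sketched as follows: multiply 2x2 block matrices with 7 recursive multiplications instead of 8, giving an O(n^2.81) algorithm. This is a plain pedagogical rendering for power-of-two sizes, not the Coppersmith-Winograd algorithm or the group-theoretic approach discussed here.

```python
def strassen(X, Y):
    # Strassen's algorithm for n x n matrices, n a power of 2:
    # 7 recursive multiplications on half-size blocks instead of 8.
    n = len(X)
    if n == 1:
        return [[X[0][0] * Y[0][0]]]
    h = n // 2

    def quad(M):  # split into four h x h quadrants
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])

    A, B, C, D = quad(X)
    E, F, G, H = quad(Y)
    add = lambda P, Q: [[a + b for a, b in zip(p, q)] for p, q in zip(P, Q)]
    sub = lambda P, Q: [[a - b for a, b in zip(p, q)] for p, q in zip(P, Q)]
    m1 = strassen(add(A, D), add(E, H))
    m2 = strassen(add(C, D), E)
    m3 = strassen(A, sub(F, H))
    m4 = strassen(D, sub(G, E))
    m5 = strassen(add(A, B), H)
    m6 = strassen(sub(C, A), add(E, F))
    m7 = strassen(sub(B, D), add(G, H))
    TL = add(sub(add(m1, m4), m5), m7)
    TR = add(m3, m5)
    BL = add(m2, m4)
    BR = add(sub(add(m1, m3), m2), m6)
    return [TL[i] + TR[i] for i in range(h)] + [BL[i] + BR[i] for i in range(h)]

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```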

H. Cohn, R. Kleinberg, B. Szegedy, and C. Umans. Group-Theoretic Algorithms for Matrix Multiplication.

H. Cohn and C. Umans. A Group-Theoretic Approach to Fast Matrix Multiplication.

In the same way that matrix multiplication is a fundamental problem in the domain of linear algebraic computations, modular composition is a fundamental problem in the domain of computations with polynomials.

It plays a similar role in that the fastest known algorithms for a diverse array of problems (most notably polynomial factorization, but also irreducibility testing, computing minimal polynomials, and manipulating normal bases) depend on fast algorithms for modular composition.
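To pin down the problem: modular composition asks, given polynomials f, g, h over a finite field, for f(g(x)) mod h(x). Below is a naive Horner-style sketch over GF(p); the coefficient-list representation and function names are illustrative conventions of mine, and this is the straightforward algorithm, not a fast one.

```python
def polymul(a, b, p):
    # Product of coefficient lists (lowest degree first), coefficients mod p.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def polymod(a, h, p):
    # Remainder of a modulo a monic polynomial h, over GF(p).
    a = [c % p for c in a]
    d = len(h) - 1
    for i in range(len(a) - 1, d - 1, -1):
        c = a[i]
        if c:
            for j in range(d + 1):
                a[i - d + j] = (a[i - d + j] - c * h[j]) % p
    return a[:d] or [0]

def modcomp(f, g, h, p):
    # f(g(x)) mod (h(x), p) by Horner's rule: deg(f) modular multiplications.
    res = [0]
    for c in reversed(f):
        res = polymod(polymul(res, g, p), h, p)
        res[0] = (res[0] + c) % p
    return res

# f = x^2, g = x + 1, h = x^2 + 1, p = 5: (x+1)^2 = x^2+2x+1 ≡ 2x (mod x^2+1)
print(modcomp([0, 0, 1], [1, 1], [1, 0, 1], 5))  # → [0, 2]
```

Fast algorithms for this task do asymptotically better than the deg(f) modular multiplications used here, which is exactly what makes the problem interesting.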
