Showed that for various optimization criteria, the optimal polygon containing k of n points must be near one of the points, so time bounds involving several factors of n can be transformed into bounds linear in n but polynomial in k. Used as subroutines are data structures for finding several nearest neighbors in rectilinear metric spaces, and algorithms for finding the deepest point in an arrangement of cubes or spheres.
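As a rough illustration of the nearest-neighbor subroutine's job (a minimal brute force, not the paper's data structure; all names here are ours):

```python
from typing import List, Tuple

Point = Tuple[float, float]

def l1_distance(p: Point, q: Point) -> float:
    """Rectilinear (L1) distance between two planar points."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def k_nearest_l1(site: Point, points: List[Point], k: int) -> List[Point]:
    """Return the k points closest to `site` in the rectilinear metric.

    O(n log n) brute force; the paper uses data structures that answer
    such queries much faster, sketched here only by their interface.
    """
    return sorted(points, key=lambda q: l1_distance(site, q))[:k]
```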
Teng and others previously showed that certain geometric graphs had small separators that could be found by lifting the graph to a sphere one dimension up and choosing a random great circle. Here we show that epsilon-cuttings and the method of conditional expectations can be used to guide a deterministic prune-and-search method for the same problem. Applications include finding the intersection graph of a collection of spheres and computing or approximating the maximum number of spheres having a common intersection.
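For concreteness, here is a minimal sketch of the randomized baseline that the deterministic prune-and-search method replaces: lift the points to a sphere one dimension up and cut with a random great circle. The conformal recentering step of the full Miller-Teng-style method is omitted, and the function names are ours:

```python
import random
from typing import List, Tuple

Point = Tuple[float, float]

def lift_to_sphere(p: Point) -> Tuple[float, float, float]:
    """Inverse stereographic lift of a planar point onto the unit sphere."""
    x, y = p
    s = x * x + y * y
    return (2 * x / (s + 1), 2 * y / (s + 1), (s - 1) / (s + 1))

def random_great_circle_cut(points: List[Point]) -> Tuple[List[Point], List[Point]]:
    """Partition planar points by a random great circle of the lifted sphere.

    A random hyperplane through the origin meets the sphere in a great
    circle; the sign of the dot product with its normal vector decides
    which side each lifted point falls on.
    """
    normal = [random.gauss(0, 1) for _ in range(3)]
    left, right = [], []
    for p in points:
        q = lift_to_sphere(p)
        side = sum(a * b for a, b in zip(normal, q))
        (left if side < 0 else right).append(p)
    return left, right
```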
Measures how well a finite sample of points from a region works as a discrete approximation to the continuous measure of shapes within the region, using algorithms based on Overmars and van Leeuwen's dynamic convex hull data structure. Some versions of the problem also involve subroutines for finding the deepest point in an arrangement of quadrants or orthants.
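As a toy version of the quantity being computed, the following brute force (our code) evaluates the discrepancy of a planar sample against origin-anchored axis-parallel boxes, i.e. the star discrepancy; the paper's algorithms handle richer shape families and are far faster:

```python
from itertools import product
from typing import List, Tuple

def star_discrepancy_2d(points: List[Tuple[float, float]]) -> float:
    """Approximate the star discrepancy of a sample in the unit square.

    For each candidate box [0,x) x [0,y) with corner coordinates taken
    from the sample, compare the fraction of sample points inside to
    the box's area; the discrepancy is the largest gap.  This is the
    O(n^3) brute force, not the dynamic-convex-hull algorithm.
    """
    n = len(points)
    xs = sorted({x for x, _ in points} | {1.0})
    ys = sorted({y for _, y in points} | {1.0})
    worst = 0.0
    for x, y in product(xs, ys):
        inside = sum(1 for px, py in points if px <= x and py <= y)
        worst = max(worst, abs(inside / n - x * y))
    return worst
```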
This paper was merged with results of Mitchell to form the journal version, "Computing the discrepancy with applications to supersampling patterns".
Combines "Computing the discrepancy" with experimental results of Mitchell on the discrepancies of various point sets, emphasizing the application of low-discrepancy sets to anti-aliasing in raytraced graphics.
Given a collection of n sites, a center point is a point (not necessarily a site) such that no hyperplane through the center point partitions the collection into a very small and a very large subset. Center points have been used by Teng and others as a key step in the construction of geometric separators. One can find a point with this property by choosing a random sample of the collection and applying linear programming, but the complexity of that method grows exponentially with the dimension. This paper proposes an alternative method that produces lower-quality approximations (in terms of the size of the worst hyperplane partition) but takes time polynomial in both n and d.
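A sketch of the underlying depth notion in the plane: the depth of a candidate point is the smallest number of sites in any closed halfplane bounded by a line through the candidate, and a good center point has depth close to n/3. The exact brute force below is our own checking code, not the paper's polynomial-time construction; it exploits the fact that the count can only change at directions perpendicular to a site-to-candidate vector:

```python
import math
from typing import List, Tuple

def tukey_depth_2d(p: Tuple[float, float],
                   sites: List[Tuple[float, float]]) -> int:
    """Exact Tukey (halfspace) depth of p among planar sites.

    Tests one direction inside each interval between critical angles;
    the site count of the closed halfplane is constant on each interval.
    """
    angles = [math.atan2(sy - p[1], sx - p[0]) for sx, sy in sites
              if (sx, sy) != tuple(p)]
    breaks = sorted({(a + s * math.pi / 2) % (2 * math.pi)
                     for a in angles for s in (-1, 1)})
    n = len(sites)
    best = n
    m = len(breaks)
    for i in range(m):
        # Midpoint of the interval, wrapping around at 2*pi.
        hi = breaks[(i + 1) % m] + (2 * math.pi if i == m - 1 else 0)
        ux, uy = math.cos((breaks[i] + hi) / 2), math.sin((breaks[i] + hi) / 2)
        count = sum(1 for sx, sy in sites
                    if (sx - p[0]) * ux + (sy - p[1]) * uy >= 0)
        best = min(best, count)
    return best
```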
We show that, for any set of n points in d dimensions, there exists a hyperplane with regression depth at least ceiling(n/(d+1)), as conjectured by Rousseeuw and Hubert. Dually, for any arrangement of n hyperplanes in d dimensions there exists a point that cannot escape to infinity without crossing at least ceiling(n/(d+1)) hyperplanes. We also apply our approach to related questions on the existence of partitions of the data into subsets such that a common plane has nonzero regression depth in each subset, and to the computational complexity of regression depth problems.
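To make the dual statement concrete in the plane, the sketch below (our code, assuming the standard fact that a straight ray realizes the minimum escape cost in a line arrangement) counts the fewest lines a ray from a query point must cross; the theorem says some point achieves value at least ceiling(n/3):

```python
import math
from typing import List, Tuple

Line = Tuple[float, float, float]  # (a, b, c) encoding the line a*x + b*y = c

def escape_depth(p: Tuple[float, float], lines: List[Line]) -> int:
    """Minimum number of lines crossed by a ray from p to infinity.

    The crossing count only changes at ray directions parallel to some
    line, so one test direction per interval between those angles
    suffices.  Quadratic brute force for illustration only.
    """
    breaks = sorted({math.atan2(-a, b) % math.pi + k * math.pi
                     for a, b, c in lines for k in (0, 1)})
    best = len(lines)
    m = len(breaks)
    for i in range(m):
        hi = breaks[(i + 1) % m] + (2 * math.pi if i == m - 1 else 0)
        mid = (breaks[i] + hi) / 2
        ux, uy = math.cos(mid), math.sin(mid)
        crossings = 0
        for a, b, c in lines:
            denom = a * ux + b * uy
            # The ray p + t*(ux, uy) meets the line at parameter t; it
            # crosses the line exactly when t is nonnegative.
            if denom != 0 and (c - a * p[0] - b * p[1]) / denom >= 0:
                crossings += 1
        best = min(best, crossings)
    return best
```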
We generalize regression depth to k-flats. The k=0 case gives the classical notion of center points. We prove that for any set of n points in R^d there always exists a k-flat with depth at least a constant fraction of n. As a consequence, we derive a linear-time (1+epsilon)-approximation algorithm for the deepest flat. The full version of this paper also includes results from "Computing the Depth of a Flat".
We compute the regression depth of a k-flat in a set of n d-dimensional points, in time O(n^(d-2)), an order of magnitude faster than the best known algorithms for computing the depth of a point or of a hyperplane. The results from this conference paper have been merged into the full version of "Multivariate Regression Depth".
(SODA talk slides – SODA paper)
Surveys projective duality and its uses in algorithms for robust regression. The MSRI talk used the alternative title "Computational geometry and robust statistics" but contained essentially the same content.
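As a worked instance of the tool being surveyed: the standard planar duality maps the point (a,b) to the line y = ax - b and the line y = cx + d to the point (c,-d). This transform preserves point-line incidences and above-below relations, so fitting a regression line to sample points becomes a question about the depth of a point in the dual arrangement of lines.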
(DIMACS talk slides – video and slides of MSRI talk)
We describe an efficient streaming-model construction of epsilon-nets and epsilon-approximations, and use it to find deterministic streaming-model approximation algorithms for iceberg range queries and for various robust statistics problems.
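A minimal sketch of the merge-and-reduce idea behind such streaming constructions, specialized to one-dimensional interval ranges (our code and names; the paper's construction handles general range spaces with deterministic guarantees):

```python
from typing import Iterable, List

def halve(sample: List[float]) -> List[float]:
    """Keep every other element of the sorted sample.

    For 1D interval ranges this perturbs every range count only
    slightly, which is the key fact behind merge-and-reduce
    epsilon-approximation constructions.
    """
    return sorted(sample)[::2]

def stream_approximation(stream: Iterable[float], block: int = 64) -> List[float]:
    """Deterministic streaming epsilon-approximation, 1D sketch.

    Buffer the stream in blocks; whenever two summaries of equal size
    meet at the same level, merge them and halve, Bentley-Saxe style.
    The result is a small subset whose interval counts approximate
    those of the full stream.
    """
    levels: List[List[float]] = []  # levels[i] holds one summary or is empty
    buffer: List[float] = []
    for x in stream:
        buffer.append(x)
        if len(buffer) == block:
            carry, i = buffer, 0
            buffer = []
            while i < len(levels) and levels[i]:
                carry = halve(carry + levels[i])  # merge equal-size summaries
                levels[i] = []
                i += 1
            if i == len(levels):
                levels.append(carry)
            else:
                levels[i] = carry
    # Flush: return whatever remains across all levels plus the buffer.
    result = buffer[:]
    for summary in levels:
        result += summary
    return result
```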
Discusses a paper by Mizera and Müller on depth-based methods for simultaneously fitting both a center and a radius to a set of sample points, by viewing the points as lying on the boundary of a model of a higher-dimensional hyperbolic space. Reformulates their method in combinatorial terms more likely to be familiar to computational geometers, and discusses the algorithmic implications of their work.