Abstract: This talk considers the use of spherical designs and non-convex minimization for the recovery of sparse signals on the unit sphere S^2. The available information consists of low-order, potentially noisy, Fourier coefficients on S^2. As Fourier coefficients are integrals of the product of a function and spherical harmonics, a good cubature rule is essential for the recovery. A spherical t-design is a set of points on S^2 which are the nodes of an equal-weight cubature rule integrating exactly all spherical polynomials of degree ≤ t. We will show that a spherical t-design provides a sharp error bound for the approximation of signals. Moreover, the resulting coefficient matrix has orthonormal rows. In general, the L_1 minimization model for recovery of sparse signals on S^2 using spherical harmonics has infinitely many minimizers, which means that most existing sufficient conditions for sparse recovery do not hold. To induce sparsity, we replace the L_1-norm by the L_q-norm (0 < q < 1).
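To make the L_q idea concrete, here is a minimal, generic sketch of sparse recovery by L_q minimization (0 < q < 1) under linear measurement constraints, using iteratively reweighted least squares. The matrix sizes, smoothing parameter, and data are illustrative assumptions, not the speaker's setting on S^2.

```python
import numpy as np

def lq_irls(A, b, q=0.5, eps=1e-8, iters=100):
    """Approximately minimize ||x||_q^q subject to Ax = b, 0 < q < 1,
    by iteratively reweighted least squares (a standard smoothed
    surrogate for the non-convex L_q quasi-norm)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # minimum-norm start
    for _ in range(iters):
        w_inv = (x**2 + eps) ** (1 - q / 2)       # inverse IRLS weights
        G = (A * w_inv) @ A.T                     # A W^{-1} A^T
        x = w_inv * (A.T @ np.linalg.solve(G, b)) # weighted LS step
    return x

# Recover a 2-sparse vector from 12 random Gaussian measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((12, 20))
x_true = np.zeros(20)
x_true[[3, 11]] = [1.0, -2.0]
x_hat = lq_irls(A, A @ x_true)
```

Each IRLS step satisfies the measurement constraints exactly; the reweighting drives small entries toward zero.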
Workshop on scientific computing and
optimization with applications to
image and data analysis
12 January 2019
Hong Kong Baptist University
Venue: WLB208, The Wing Lung Bank Building for Business Studies, Shaw Campus
Speakers:
Xiaojun Chen, The Hong Kong Polytechnic University, Hong Kong
Roland Glowinski, Hong Kong Baptist University, Hong Kong
Nicholas J. Higham, University of Manchester, England
Serena Morigi, University of Bologna, Italy
Michael Ng, Hong Kong Baptist University, Hong Kong
Fiorella Sgallari, University of Bologna, Italy
Defeng Sun, The Hong Kong Polytechnic University, Hong Kong
Xue-Cheng Tai, Hong Kong Baptist University, Hong Kong
Abstract: We show that for a class of linearly constrained convex composite optimization problems, an (inexact) symmetric Gauss-Seidel based majorized multi-block proximal alternating direction method of multipliers (ADMM) is equivalent to an inexact proximal augmented Lagrangian method (ALM). This equivalence not only provides new perspectives for understanding some ADMM-type algorithms but also supplies meaningful guidelines on implementing them to achieve better computational efficiency. Even for the two-block case, a by-product of this equivalence is the convergence of the whole sequence generated by the classic ADMM with a step-length that exceeds the conventional upper bound of (1+√5)/2, if one part of the objective is linear. This is exactly the problem setting in which the very first convergence analysis of ADMM was conducted by Gabay and Mercier in 1976, but, even under notably stronger assumptions, only the convergence of the primal sequence was known. A collection of illustrative examples is provided to demonstrate the breadth of applications for which our results can be used. Numerical experiments on solving a large number of semidefinite programming problems are conducted to illustrate how the theoretical results established here can lead to improvements on the corresponding practical implementations.
This is a joint work with Liang Chen, Xudong Li, and Kim-Chuan Toh.
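For a concrete reference point, below is a textbook two-block ADMM for the lasso, written with an explicit dual step length tau (whose conventional convergence bound is the (1+√5)/2 mentioned above). The problem data and parameters are illustrative; this is not the multi-block scheme of the talk.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal map of the l1-norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=10.0, tau=1.0, iters=500):
    """Classic two-block ADMM for min_x 0.5||Ax - b||^2 + lam||z||_1
    subject to x - z = 0. tau is the dual step length; convergence is
    classically guaranteed for tau in (0, (1 + sqrt(5))/2)."""
    n = A.shape[1]
    x, z, u = (np.zeros(n) for _ in range(3))
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update
        z = soft(x + u, lam / rho)                         # z-update
        u = u + tau * (x - z)                              # dual update
    return x, z

# Sanity check: for lam >= ||A^T b||_inf the lasso solution is 0
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
lam = 10 * np.max(np.abs(A.T @ b))
x, z = admm_lasso(A, b, lam)
```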
Abstract: The main goal of this lecture is to discuss the numerical solution of nonlinear eigenvalue problems for the Monge-Ampère operator. Such problems were investigated by P. L. Lions in the eighties. We will focus on
(MAEV1): find u ≠ 0 and convex, λ > 0, such that
det D^2 u = λ|u| in Ω,
u = 0 on ∂Ω.
(MAEV2): find u ≠ 0 and convex, λ > 0, such that
det D^2 u = λ e^(|u|) in Ω,
u = 0 on ∂Ω.
The basic idea is to associate with the above eigenvalue problems an initial value problem that we time-discretize by operator-splitting. This allows the decoupling of the Monge-Ampère nonlinearity from the term involving the eigenvalue. The resulting algorithm is closely related to the inverse power method and is easy to implement via piecewise affine continuous finite element approximations. We will present the results of numerical experiments validating the above approach.
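For comparison, the inverse power method that the algorithm is related to can be sketched for a linear eigenproblem; the Monge-Ampère version replaces the linear solve by the nonlinear step handled through operator splitting. The matrix below (a 1-D discrete Laplacian) is purely illustrative.

```python
import numpy as np

def inverse_power(A, iters=100):
    """Inverse power iteration: converges to the eigenpair of A with
    the smallest-magnitude eigenvalue. Each step solves a system with
    A and renormalizes; the eigenvalue is read off as the Rayleigh
    quotient."""
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        w = np.linalg.solve(A, v)    # "inverse" step
        v = w / np.linalg.norm(w)    # renormalize
    return v @ A @ v, v              # Rayleigh quotient, eigenvector

# 1-D discrete Laplacian: smallest eigenvalue is 2 - 2 cos(pi/(n+1))
n = 10
A = 2 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
lam, v = inverse_power(A)
```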
Abstract: Traditional rounding error analysis in numerical linear algebra leads to backward error bounds involving the constant γ_n = nu/(1 − nu), for a problem size n and unit roundoff u. In the light of large-scale and possibly low-precision computations, such bounds can struggle to provide any useful information. We develop a new probabilistic rounding error analysis that can be applied to a wide range of algorithms. By using a concentration inequality and making probabilistic assumptions about the rounding errors, we show that in several core linear algebra computations γ_n can be replaced by a relaxed constant γ̃_n proportional to √(n log n) u with a probability bounded below by a quantity independent of n. The new constant γ̃_n grows much more slowly with n than γ_n. Our results have three key features: they are backward error bounds; they are exact, not first order; and they are valid for any n, unlike results obtained by applying the central limit theorem, which apply only as n → ∞. We provide numerical experiments that show that for both random and real-life matrices the bounds can be much smaller than the standard deterministic bounds and can have the correct asymptotic growth with n. We also identify two special situations in which the assumptions underlying the analysis are not valid and the bounds do not hold. Our analysis provides, for the first time, a rigorous foundation for the rule of thumb that "one can take the square root of an error constant because of statistical effects in rounding error propagation".
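A quick numerical comparison of the two constants illustrates the gap; the proportionality constant in the probabilistic bound is omitted here, so the figures are indicative of growth rates only.

```python
import numpy as np

# Compare the worst-case constant gamma_n = n*u/(1 - n*u) with a
# quantity proportional to the probabilistic constant sqrt(n log n)*u,
# for IEEE double precision (u = 2^-53).
u = 2.0**-53
for n in [10**4, 10**6, 10**8]:
    gamma = n * u / (1 - n * u)
    gamma_tilde = np.sqrt(n * np.log(n)) * u
    print(f"n = {n:>9}: gamma_n = {gamma:.2e}, sqrt(n log n) u = {gamma_tilde:.2e}")
```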
Abstract: In this talk, we report the results of robust tensor completion using tubal singular value decomposition, and its applications. Several applications and theoretical results are discussed. Numerical examples are also presented for demonstration.
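As background, here is a minimal sketch of the tubal (t-)SVD in its standard form: FFT along the tube mode, slice-wise matrix SVDs, inverse FFT. This is the textbook construction, not the speaker's robust-completion code.

```python
import numpy as np

def tsvd(X):
    """Tubal SVD (t-SVD) of a real n1 x n2 x n3 tensor: FFT along the
    third (tube) mode, matrix SVD of each frontal slice, inverse FFT.
    Conjugate symmetry of the FFT of a real tensor is enforced so the
    factors come back real."""
    n1, n2, n3 = X.shape
    k = min(n1, n2)
    Xf = np.fft.fft(X, axis=2)
    Uf = np.zeros((n1, k, n3), dtype=complex)
    Sf = np.zeros((k, k, n3), dtype=complex)
    Vf = np.zeros((n2, k, n3), dtype=complex)
    for i in range(n3 // 2 + 1):
        u, s, vh = np.linalg.svd(Xf[:, :, i], full_matrices=False)
        Uf[:, :, i], Sf[:, :, i], Vf[:, :, i] = u, np.diag(s), vh.conj().T
    for i in range(n3 // 2 + 1, n3):          # mirror slices by symmetry
        Uf[:, :, i] = Uf[:, :, n3 - i].conj()
        Sf[:, :, i] = Sf[:, :, n3 - i].conj()
        Vf[:, :, i] = Vf[:, :, n3 - i].conj()
    back = lambda T: np.real(np.fft.ifft(T, axis=2))
    return back(Uf), back(Sf), back(Vf)

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 5, 3))
U, S, V = tsvd(X)
```

In the Fourier domain each frontal slice of X factors as U_i S_i V_i^H, which is the sense in which X = U * S * V^T under the t-product.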
Abstract: A real captured image may be distorted by many expected or unexpected factors, among which blur and random noise are typical and often unavoidable examples. Hence, image deblurring and denoising are fundamental tasks in the field of image processing. Over the years, one of the most studied classes of noise is that of additive, independent identically distributed (in short, i.i.d.) noises, which affect all the pixels by independent random corruptions coming from the same distribution. This class includes important noises such as those characterized by Gaussian, Uniform, Laplacian and Cauchy distributions, which can be found in many applications, such as medical and astronomical imaging. For each of these noise distributions, ad hoc variational models have been devised in the past for image restoration.
However, in many practical applications it is difficult to know a priori the noise distribution and, in any case, the noise might be the outcome of several sources, thus giving rise to mixed noise models with very specific/complex distributions.
To overcome these inherent difficulties, in this talk we discuss a robust variational model aimed at restoring images corrupted by blur and by the generic wide class of additive white (that is, uncorrelated) noises, which includes i.i.d. noises. The key idea behind our proposal relies on a novel hard constraint imposed on the residual of the restoration: namely, we characterize a residual whiteness set to which the restored image must belong. As the feasible set is unbounded, solution existence results for the proposed variational model are given. Moreover, based on theoretical derivations as well as on Monte Carlo simulations, we provide well-founded guidelines for setting the whiteness constraint limits. The solution of the nontrivial optimization problem, due to the non-smooth non-convex proposed model, is efficiently obtained by an alternating direction method of multipliers, which in particular reduces the solution to a sequence of convex optimization subproblems. Numerical results show the potential of the proposed model for restoring blurred images corrupted by several kinds of additive white noises.
Joint work with Alessandro Lanza, Serena Morigi and Federica Sciacchitano.
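To illustrate what "residual whiteness" measures, the sketch below computes the normalized sample autocorrelation of a residual image; a whiteness constraint bounds quantities of this kind. The normalization and the synthetic data are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def sample_autocorrelation(r):
    """Normalized circular sample autocorrelation of a residual image r,
    computed via the FFT. For white (uncorrelated) noise, all lags away
    from the origin should be close to zero."""
    r = r - r.mean()
    F = np.fft.fft2(r)
    acf = np.real(np.fft.ifft2(F * np.conj(F)))  # circular autocorrelation
    return acf / acf[0, 0]                        # normalize lag (0,0) to 1

rng = np.random.default_rng(0)
white = rng.standard_normal((64, 64))
a = sample_autocorrelation(white)
off = np.abs(a.copy())
off[0, 0] = 0.0
print("max off-origin autocorrelation for white noise:", off.max())
```

A correlated (e.g. blurred) residual would show large off-origin lags, which is what the whiteness set rules out.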
Abstract: This talk concerns the computation of approximate solutions of non-convex non-smooth minimization problems which involve a sparsity-inducing term as a prior to improve the solution quality. These models arise in a wide variety of research areas, and we are interested in their application to the restoration of images corrupted by blur and noise, and the segmentation of images, surfaces, and images over surfaces.
We present the Convex-NonConvex (CNC) strategy, which relies on the idea of constructing and then optimizing convex functionals containing nonconvex (sparsity-promoting) regularization terms, so that well-known, reliable convex minimization approaches can be used to compute the (unique) solution.
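A classical scalar instance of the CNC principle is the minimax-concave (MC) penalty: for a unit quadratic data term the total objective remains convex whenever the concavity parameter γ exceeds 1, and its proximal map is the firm-threshold function. This is a standard textbook example, not the functionals of the talk.

```python
import numpy as np

def firm_threshold(y, lam, gamma):
    """Proximal map of the minimax-concave (MC) penalty for the scalar
    problem min_x 0.5*(x - y)^2 + MC(x; lam, gamma), with gamma > 1 so
    that the overall objective is convex (the CNC idea: a nonconvex
    sparsity penalty inside a convex total functional)."""
    a = np.abs(y)
    return np.where(a <= lam, 0.0,
           np.where(a <= gamma * lam,
                    np.sign(y) * gamma * (a - lam) / (gamma - 1),
                    y))
```

Unlike soft thresholding, the firm threshold leaves large entries unbiased (it returns y exactly for |y| > γ·lam), while still mapping small entries to zero.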
Abstract: For many applications, we need techniques to represent convex shapes and objects. In this work, we use the level set method to represent shapes and find a necessary and sufficient condition on the level set function that guarantees the convexity of the represented shapes. We take image segmentation as an example to apply our techniques. An efficient numerical algorithm is developed to solve the variational model. In order to improve the performance of segmentation for complex images, we also incorporate landmarks into the model. One option is to specify points that the object boundary must contain. Another option is to specify points that the foreground (the object) and the background must contain. Numerical experiments on different images validate the efficiency of the proposed models and algorithms. We want to emphasize that the proposed technique could be used for general shape optimization with convex shapes. For other applications, the numerical algorithms would need to be extended and modified.
This talk is based on joint works with S. Luo, S. Yan, J. Liu, Yang Wang and H. Huang.
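One classical special case of such a convexity condition can be checked numerically: for a smooth planar shape represented as the sublevel set {φ < 0}, convexity is equivalent to nonnegative curvature div(∇φ/|∇φ|) along the zero level set. The sketch below verifies this for a unit disk; the grid and test shape are illustrative, and the talk's condition on the level set function itself is more general.

```python
import numpy as np

# Level set function of the unit disk (signed distance), and the
# curvature kappa = div(grad phi / |grad phi|) of its level sets.
n = 200
x = np.linspace(-2, 2, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 1.0

gx, gy = np.gradient(phi, x, x)
norm = np.sqrt(gx**2 + gy**2) + 1e-12
kappa = (np.gradient(gx / norm, x, axis=0)
         + np.gradient(gy / norm, x, axis=1))

# For the unit disk the boundary curvature is 1 everywhere, hence >= 0,
# consistent with convexity of the represented shape.
near_boundary = np.abs(phi) < 2 * (x[1] - x[0])
print("min curvature near the zero level set:", kappa[near_boundary].min())
```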
The activity is organised with the generosity of Dr Kennedy Y.H. Wong, who supported the establishment of the "Dr Kennedy Y.H. Wong Distinguished Visiting Professorship Scheme" to promote international academic exchange and strengthen HKBU's status in the international academic community.