The second smallest eigenvalue of the Laplacian matrix, known as the algebraic connectivity, determines many network properties. This paper investigates the optimal design of interconnections that maximizes algebraic connectivity in multilayer networks. We identify an upper bound on the maximum algebraic connectivity for total weight below a threshold, independent of the interconnection pattern and attainable only under a particular regularity condition. For efficient numerical approaches in regions with no analytical solution, we cast the problem into a convex framework and an equivalent graph embedding problem associated with the optimum diffusion phases in the multilayer network. Allowing more general settings for interconnections entails regions of multiple transitions, giving more diverse diffusion phases than the more studied one-to-one interconnection case. When there is no restriction on the interconnection pattern, we derive several analytical results characterizing the optimal weights using individual Fiedler vectors. We use the ratio of algebraic connectivity to layer size to explain the results. Finally, we study the placement of a limited number of interlinks heuristically, guided by each layer's Fiedler vector components.
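As a concrete illustration of the quantity being optimized, the following minimal sketch (assuming NumPy; the function name and example graph are ours, not the paper's) computes the algebraic connectivity of a small graph from its Laplacian:

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    adj = np.asarray(adj, dtype=float)
    degrees = adj.sum(axis=1)
    laplacian = np.diag(degrees) - adj
    eigvals = np.linalg.eigvalsh(laplacian)  # sorted in ascending order
    return eigvals[1]

# Path graph on 3 nodes: 0 - 1 - 2
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
fiedler_value = algebraic_connectivity(A)  # approx. 1.0 for the 3-node path
```

A disconnected graph has algebraic connectivity zero, which is why this eigenvalue is a natural target when designing interconnections between layers.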
We address the problem of optimal transport with a quadratic cost functional and a constraint on the flux through a constriction along the path. The constriction, conceptually represented by a toll station, limits the flow rate across. We provide a precise formulation which, in addition, is amenable to generalization in higher dimensions. We work out in detail the case of transport in one dimension by proving existence and uniqueness of solution. Under suitable regularity assumptions, we give an explicit construction of the transport plan. Generalization of flux constraints to higher dimensions and possible extensions of the theory are discussed.
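For background, in one dimension the unconstrained quadratic-cost optimal transport between two equal-size empirical measures is achieved by the monotone (sorted) matching; the flux constraint studied in the paper is not modeled here, and the function name is ours:

```python
import numpy as np

def ot_cost_1d(x, y):
    """Quadratic-cost optimal transport between two equal-size empirical
    measures on the line: the optimal plan is the monotone matching,
    i.e. pair the sorted samples."""
    xs, ys = np.sort(x), np.sort(y)
    return np.mean((xs - ys) ** 2)

x = np.array([0.0, 1.0, 2.0])
y = np.array([3.0, 1.0, 2.0])
cost = ot_cost_1d(x, y)  # sorted matching (0,1), (1,2), (2,3) gives cost 1.0
```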
Collaborative robots are becoming intelligent assistants of humans in industrial settings and daily life. Dynamic model identification is an active topic for collaborative robots because it provides effective ways to achieve precise control, fast collision detection and smooth lead-through programming. In this research, an improved iterative approach with a comprehensive friction model is proposed for the dynamic model identification of collaborative robots when joint velocity, temperature and load torque effects are considered. Experiments are conducted on the AUBO I5 collaborative robot. Two other existing identification algorithms are adopted for comparison with the proposed approach. It is verified that the average error of the proposed I-IRLS algorithm is reduced by over 14% relative to that of the classical IRLS algorithm. The proposed I-IRLS method can be widely used in various application scenarios of collaborative robots.
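The IRLS family referenced above can be illustrated with a generic textbook instance; the sketch below is not the paper's I-IRLS (which handles friction, temperature and load effects), just iteratively reweighted least squares for robust L1 line fitting, with names and data of our own:

```python
import numpy as np

def irls_l1(A, b, iters=100, eps=1e-8):
    """Iteratively reweighted least squares (IRLS) for L1 regression:
    repeatedly solve a weighted least-squares problem whose weights
    down-weight samples with large residuals."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # ordinary LS as warm start
    for _ in range(iters):
        r = np.abs(b - A @ x)
        w = 1.0 / np.sqrt(np.maximum(r, eps))  # sqrt of the IRLS weights
        x = np.linalg.lstsq(w[:, None] * A, w * b, rcond=None)[0]
    return x

# Fit y = 2 t + 1 from data corrupted by two gross outliers.
t = np.arange(10.0)
y = 2.0 * t + 1.0
y[0] += 50.0
y[1] -= 40.0
A = np.column_stack([t, np.ones_like(t)])
coef = irls_l1(A, y)  # close to [2.0, 1.0] despite the outliers
```

Each pass solves a weighted least-squares problem, so the per-iteration cost is that of one linear solve; the improved variants in the paper refine how the weights and model terms are constructed.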
The alternating direction method of multipliers (ADMM) receives much attention in optimization, computer science and related fields. The generalized ADMM (G-ADMM) proposed by Eckstein and Bertsekas incorporates an acceleration factor and is more efficient than the original ADMM. However, G-ADMM is not applicable to models where the objective function value (or its gradient) is computationally costly or even impossible to compute. In this paper, we consider the two-block separable convex optimization problem with linear constraints, where only noisy estimates of the gradient of the objective function are accessible. Under this setting, we propose a stochastic linearized generalized ADMM (called SLG-ADMM) in which the two subproblems are approximated by linearization strategies. We analyze the expected convergence rates and large-deviation properties of SLG-ADMM. In particular, we show that the worst-case expected convergence rates of SLG-ADMM are $\mathcal{O}\left( {{N}^{-1/2}}\right)$ and $\mathcal{O}\left({\ln N} \cdot {N}^{-1}\right)$ for solving general convex and strongly convex problems, respectively, where $N$ is the iteration number (similarly hereinafter), and that, with high probability, SLG-ADMM has $\mathcal{O}\left ( \ln N \cdot N^{-1/2} \right )$ and $\mathcal{O}\left ( \left ( \ln N \right )^{2} \cdot N^{-1} \right )$ constraint-violation and objective-error bounds for general convex and strongly convex problems, respectively.
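For readers unfamiliar with the two-block structure, here is a minimal deterministic ADMM sketch for the lasso, a standard example rather than the SLG-ADMM of the paper; the problem data, names and parameters are ours:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=300):
    """Two-block ADMM for min_x 0.5||Ax - b||^2 + lam||z||_1, s.t. x = z."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    Atb = A.T @ b
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached factor for x-step
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))          # x-subproblem (quadratic)
        z = soft_threshold(x + u, lam / rho)   # z-subproblem (l1 prox)
        u = u + x - z                          # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = admm_lasso(A, b, lam=0.01)  # recovers x_true closely (noiseless data)
```

SLG-ADMM replaces the exact subproblem solves above with linearized steps driven by noisy gradient estimates, which is what makes the stochastic analysis necessary.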
In this section, we discuss fundamental methods, mostly based on gradient information, that yield descent, that is, the function value decreases at each iteration. We start with the most basic method, the steepest-descent method, analyzing its convergence under different convexity/nonconvexity assumptions on the objective function. We then discuss more general descent methods, based on descent directions other than the negative gradient, showing conditions on the search direction and the steplength that allow convergence results to be proved. We also discuss a method that makes use of Hessian information, showing that it can find a point satisfying approximate second-order optimality conditions, and we derive an upper bound on the number of iterations required to do so. We then discuss mirror descent, a class of gradient methods based on more general distance metrics that are particularly useful in optimizing over the unit simplex – a problem that arises often in data science. We conclude by discussing the Polyak–Łojasiewicz (PL) condition, a generalization of the strong convexity condition that allows linear convergence rates to be proved.
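The steepest-descent method with a backtracking (Armijo) line search discussed above can be sketched as follows; the parameter defaults and the quadratic test problem are illustrative, not taken from the text:

```python
import numpy as np

def steepest_descent(f, grad, x0, alpha0=1.0, beta=0.5, c=1e-4,
                     tol=1e-8, max_iter=500):
    """Steepest descent with an Armijo backtracking line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        alpha = alpha0
        # Shrink the step until the sufficient-decrease condition holds:
        # f(x - alpha g) <= f(x) - c * alpha * ||g||^2.
        while f(x - alpha * g) > f(x) - c * alpha * (g @ g):
            alpha *= beta
        x = x - alpha * g
    return x

# Strongly convex quadratic f(x) = 0.5 x^T Q x - b^T x, minimizer Q^{-1} b.
Q = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b
x_star = steepest_descent(f, grad, np.zeros(2))
```

On a strongly convex quadratic such as this, the iterates converge linearly to the unique minimizer, consistent with the convergence theory outlined above.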
Accurate, robust and fast image reconstruction is a critical task in many scientific, industrial and medical applications. Over the last decade, image reconstruction has been revolutionized by the rise of compressive imaging. It has fundamentally changed the way modern image reconstruction is performed. This in-depth treatment of the subject commences with a practical introduction to compressive imaging, supplemented with examples and downloadable code, intended for readers without extensive background in the subject. Next, it introduces core topics in compressive imaging – including compressed sensing, wavelets and optimization – in a concise yet rigorous way, before providing a detailed treatment of the mathematics of compressive imaging. The final part is devoted to recent trends in compressive imaging: deep learning and neural networks. With an eye to the next decade of imaging research, and using both empirical and mathematical insights, it examines the potential benefits and the pitfalls of these latest approaches.
Approximate computation methods with provable performance guarantees are becoming important and relevant tools in practice. In this chapter we focus on sketching methods designed to reduce data dimensionality in computationally intensive tasks. Sketching can often provide better space, time, and communication complexity trade-offs by sacrificing minimal accuracy. This chapter discusses the role of information theory in sketching methods for solving large-scale statistical estimation and optimization problems. We investigate fundamental lower bounds on the performance of sketching. By exploring these lower bounds, we obtain interesting trade-offs in computation and accuracy. We employ Fano’s inequality and metric entropy to understand fundamental lower bounds on the accuracy of sketching, which is parallel to the information-theoretic techniques used in statistical minimax theory.
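A minimal example of the sketching idea discussed above: compress a tall least-squares problem with a Gaussian sketch and compare against the full solution. The dimensions, seed and names are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 2000, 20, 200  # tall n x d problem, sketched down to m rows

A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)  # small observation noise

# Gaussian sketch S (m x n, entries N(0, 1/m)); solve the sketched problem.
S = rng.standard_normal((m, n)) / np.sqrt(m)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The sketched solve works on an m x d system instead of n x d, trading a small, quantifiable loss of accuracy for a 10x reduction in the number of rows; lower bounds of the kind studied in this chapter say how small m can be before that trade-off must degrade.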
In this paper, we use convex optimization to maximize power efficiency through cascaded multi-coil wireless power transfer systems and investigate the resulting characteristic spacing. We show that although the efficiency is generally a non-convex function of the coil spacing, it can be approximated by a convex function when the effects of higher-order couplings are small. We present a method to optimize the spacing of cascaded coils for maximum efficiency by perturbing the solution of the convex approximation to account for higher-order interactions. The method relies on two consecutive applications of a local optimization algorithm in order to enable fast convergence to the global optimum. We present the optimal configurations of coil systems containing up to 20 identical coils that transfer power over distances up to 4.0 m. We show that when spacing alone is optimized, there exists an optimal number of coils that maximizes transfer efficiency across a given distance. We also demonstrate the use of this method in optimizing the placement of a select number of high-Q coils within a system of low-Q relay coils, with the highest efficiencies occurring when the high-Q coils are placed on either side of the largest gaps within the relay coil chain.
This paper proposes signal detection methods for frequency-domain equalization (FDE) based overloaded multiuser multiple-input multiple-output (MU-MIMO) systems in uplink Internet of Things (IoT) environments, where many IoT terminals are served by a base station having fewer antennas than terminals. Using the fact that the transmitted signal vector is discrete-valued and group sparse, we propose a convex discreteness and group-sparsity aware (DGS) optimization problem for signal detection. We provide an optimization algorithm for the DGS problem on the basis of the alternating direction method of multipliers (ADMM). Moreover, we extend the DGS optimization into weighted DGS (W-DGS) optimization and propose an iterative approach named iterative weighted DGS (IW-DGS), in which we iteratively solve the W-DGS optimization problem while updating the parameters of the objective function. We also discuss the computational complexity of the proposed IW-DGS and show that the order of the complexity can be reduced by exploiting the structure of the channel matrix. Simulation results show that the symbol error rate (SER) performance of the proposed method is close to that of the oracle zero-forcing (ZF) method, which perfectly knows the activity of each IoT terminal.
In computational auditory scene analysis, accurate estimation of the binary mask or ratio mask plays a key role in noise masking. An inaccurate estimation often leads to artifacts and temporal discontinuity in the synthesized speech. To overcome this problem, we propose a new ratio mask estimation method based on Wiener filtering in each Gammatone channel. In the reconstruction of the Wiener filter, we utilize the relationship between the speech and noise power spectra in each Gammatone channel to build the objective function for the convex optimization of speech power. To improve the accuracy of estimation, the estimated ratio mask is further modified based on its adjacent time–frequency units, and then smoothed by interpolating with the estimated binary masks. Objective tests, including signal-to-noise ratio improvement, spectral distortion and intelligibility, together with a subjective listening test, demonstrate the superiority of the proposed method over the reference methods.
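The ratio mask at the heart of such methods is the per-unit Wiener gain S/(S+N); a minimal sketch of that gain (the function name is ours, and it operates on toy per-channel powers rather than actual Gammatone filterbank outputs):

```python
import numpy as np

def wiener_ratio_mask(speech_power, noise_power, floor=1e-12):
    """Ideal ratio mask: per-unit Wiener gain S / (S + N),
    floored to avoid division by zero in silent units."""
    s = np.maximum(speech_power, 0.0)
    n = np.maximum(noise_power, 0.0)
    return s / np.maximum(s + n, floor)

S = np.array([4.0, 1.0, 0.0])   # speech power per time-frequency unit
N = np.array([1.0, 1.0, 2.0])   # noise power per time-frequency unit
mask = wiener_ratio_mask(S, N)  # -> [0.8, 0.5, 0.0]
```

In practice neither S nor N is known, which is exactly why the paper estimates the mask via convex optimization and then refines it across neighboring time–frequency units.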
To fully utilize the dynamic performance of robotic manipulators and achieve minimum motion time in path tracking, the problem of minimum-time path tracking for robotic manipulators under bounded torque, torque rate of change, and DC motor voltage is considered. The main contribution is the introduction of the concepts of virtual torque rate of change and virtual voltage, which are linear functions of the state and control variables and are shown to be very tight approximations of the real quantities. As a result, the computationally challenging non-convex minimum-time path tracking problem is reduced to a convex optimization problem which can be solved efficiently. It is also shown that introducing these dynamics constraints can significantly improve motion precision without costing much motion time, especially in the case of high-speed motion. Extensive simulations are presented to demonstrate the effectiveness of the proposed approach.
We consider a distributed optimization problem over a multi-agent network, in which
the sum of several local convex objective functions is minimized subject to global
convex inequality constraints. We first transform the constrained optimization
problem to an unconstrained one, using the exact penalty function method. Our
transformed problem has a smaller number of variables and a simpler structure than
the existing distributed primal–dual subgradient methods for constrained
distributed optimization problems. Using the special structure of this problem, we
then propose a distributed proximal-gradient algorithm over a network with
time-varying connectivity, and establish a convergence rate that depends on the
number of iterations, the network topology and the number of agents. Although the transformed
problem is nonsmooth by nature, our method can still achieve a convergence rate, ${\mathcal{O}}(1/k)$, after $k$ iterations, which is faster than the rate, ${\mathcal{O}}(1/\sqrt{k})$, of existing distributed subgradient-based methods. Simulation
experiments on a distributed state estimation problem illustrate the excellent
performance of our proposed method.
Image fusion is an imaging technique to visualize information from multiple imaging sources in a single image; it is widely used in remote sensing, medical imaging, etc. In this work, we study two variational approaches to image fusion which are closely related to the standard TV-L2 and TV-L1 image approximation methods. We investigate their convex optimization formulations from the primal and dual perspectives, and propose their associated new image decomposition models. In addition, we consider the TV-L1 based image fusion approach and study the specific problem of fusing two discrete-constrained images whose values lie in given sets of linearly-ordered discrete values. We prove that the TV-L1 based image fusion actually gives rise to an exact convex relaxation of the corresponding nonconvex image fusion problem constrained by the discrete-valued set. This extends the results for the global optimization of the discrete-constrained TV-L1 image approximation [8, 36] to the case of image fusion. As a major numerical advantage of the two proposed dual models, we show that both directly lead to new fast and reliable algorithms based on modern convex optimization techniques. Experiments with medical images, remote sensing images and multi-focus images clearly show the qualitative differences between the two studied variational models of image fusion. We also apply the new variational approaches to fusing 3D medical images.
We study the TV-L1 image approximation model from primal and dual perspectives, based on a proposed equivalent convex formulation. More specifically, we apply a convex TV-L1 based approach to globally solve the discrete constrained optimization problem of image approximation, where the unknown image function satisfies u(x) ∈ {f1,…,fn}, ∀x ∈ Ω. We show that the TV-L1 formulation provides an exact convex relaxation model for the non-convex optimization problem considered. This result greatly extends recent studies of Chan et al. from the simplest binary constrained case to the general gray-value constrained case, through the proposed rounding scheme. In addition, we construct a fast multiplier-based algorithm based on the proposed primal-dual model, which properly handles the nonsmoothness of the TV-L1 energy function. Numerical experiments validate the theoretical results and show that the proposed algorithm is reliable and effective.
The Monge-Kantorovich problem is revisited by means of a variant of the saddle-point method without appealing to c-conjugates. A new abstract characterization of the optimal plans is obtained in the case where the cost function takes infinite values. It leads us to new explicit sufficient and necessary optimality conditions. As by-products, we obtain a new proof of the well-known Kantorovich dual equality and an improvement of the convergence of the minimizing sequences.
Entropic projections and dominating points are solutions to convex minimization problems related to conditional laws of large numbers. They appear in many areas of applied mathematics such as statistical physics, information theory, mathematical statistics, ill-posed inverse problems and large deviation theory. By means of convex conjugate duality and functional analysis, criteria are derived for the existence of entropic projections, generalized entropic projections and dominating points. Representations of the generalized entropic projections are obtained. It is shown that they are the "measure component" of the solutions to some extended entropy minimization problem. This approach leads to new results and offers a unifying point of view. It also permits extending previous results on the subject by removing unnecessary topological restrictions. As a by-product, new proofs of already known results are provided.
In dimension one it is proved that the solution to a total variation-regularized least-squares problem is always a function which is "constant almost everywhere", provided that the data are in a certain sense outside the range of the operator to be inverted. A similar, but weaker result is derived in dimension two.