Last updated on December 15, 2014. This conference program is tentative and subject to change
In this paper we illustrate the use of these techniques for two different types of optimization problems over SO(d). The first type of problem arises in jointly estimating the attitude and spin-rate of a spin-stabilized satellite. We show how to exactly reformulate such problems as semidefinite programs. The second type of problem arises when estimating the orientations of a network of objects (such as cameras, satellites or molecules) from noisy relative orientation measurements. For this class of problems we formulate new semidefinite relaxations that are tighter than those existing in the literature, and show that they are exact when the underlying graph is a tree.
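The tree-exactness claim has a simple noiseless analogue: on a tree, relative orientation measurements can be chained edge by edge to recover every absolute orientation up to one global rotation. A minimal numpy sketch in SO(2) (the path graph, angles, and anchoring below are illustrative assumptions, not the paper's semidefinite relaxation):

```python
import numpy as np

def rot2(theta):
    """2-D rotation matrix in SO(2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Ground-truth orientations of 4 objects (angles are arbitrary).
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, size=4)
R = [rot2(a) for a in angles]

# Noiseless relative measurements R_ij = R_i R_j^T along the edges
# of the path (tree) 0-1-2-3.
edges = [(0, 1), (1, 2), (2, 3)]
meas = {(i, j): R[i] @ R[j].T for (i, j) in edges}

# Recover orientations by propagating along the tree, anchoring node 0
# at the identity; since R_ij^T R_i = R_j, each child follows its parent.
R_hat = {0: np.eye(2)}
for (i, j) in edges:
    R_hat[j] = meas[(i, j)].T @ R_hat[i]

# Estimates match the truth up to the global rotation R_0^T on the right.
```

With noisy measurements the chaining above accumulates error, which is where the semidefinite relaxation and its tightness on trees become relevant.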
The first method, when applied to continuous-time Itô Stochastic Differential Equations (SDEs), involves function space integration with respect to the Wiener measure. Its corresponding discrete-time analog generalizes Witsenhausen's 1998 "Common Denominator Condition" and "Change of Variables", and relates them to the Radon-Nikodym Derivative and Bayes' theorem associated with Girsanov's theorem.
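For reference, the Radon-Nikodym derivative supplied by Girsanov's theorem takes the standard textbook form below (notation assumed here, not taken from the paper: drift $f$, dispersion $\sigma$, Wiener process $W$, controlled measure $\mu^u$ versus driftless measure $\mu^0$):

```latex
\left.\frac{d\mu^{u}}{d\mu^{0}}\right|_{\mathcal{F}_T}
  = \exp\!\Big( \int_0^T \big(\sigma^{-1} f(t, x_t, u_t)\big)^{\!\top} dW_t
  \;-\; \tfrac{1}{2}\int_0^T \big|\sigma^{-1} f(t, x_t, u_t)\big|^2\, dt \Big)
```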
The second method is based on stochastic Pontryagin's maximum principle, and applies to continuous and discrete-time stochastic dynamic systems. The decentralized optimality conditions are given by a "Hamiltonian System" consisting of forward and backward SDEs, and conditional variational Hamiltonians, conditioned on the information structure of each of the decision makers.
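The Hamiltonian system referred to above can be sketched in its standard stochastic-maximum-principle form (the notation is assumed, not the paper's: running cost $\ell$, terminal cost $\varphi$, adjoint pair $(\psi, q)$, and information $\sigma$-algebra $\mathcal{G}_t^i$ of decision maker $i$; sign conventions vary across references):

```latex
\begin{aligned}
dx_t &= f(t, x_t, u_t)\, dt + \sigma(t, x_t, u_t)\, dW_t, \qquad x_0 \ \text{given},\\
d\psi_t &= -\nabla_x H(t, x_t, u_t, \psi_t, q_t)\, dt + q_t\, dW_t, \qquad \psi_T = \nabla_x \varphi(x_T),\\
H(t, x, u, \psi, q) &\triangleq \langle f(t, x, u), \psi \rangle
  + \operatorname{tr}\!\big(\sigma(t, x, u)^{\top} q\big) + \ell(t, x, u),
\end{aligned}
```

with each decision maker's control optimal for its conditional variational Hamiltonian, schematically $u_t^i \in \arg\min_v \mathbb{E}\big[ H(t, x_t, (v, u_t^{-i}), \psi_t, q_t) \,\big|\, \mathcal{G}_t^i \big]$.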
In the paper we provide a characterization of the asymptotic behaviour of the gradient method when applied to a general concave-convex function. We prove that, for any initial conditions, the gradient method is guaranteed to converge to a trajectory described by an explicit linear ODE. We further show that this result has a natural extension to subgradient methods, where the dynamics are constrained in a prescribed convex set. The results are used to provide simple characterizations of the limiting solutions for special classes of optimization problems, and modifications of the problem so as to avoid oscillations are also discussed.
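The simplest instance of this behaviour arises for the bilinear saddle function L(x, y) = xy: the gradient method orbits the saddle point instead of converging to it, and its limit is exactly a trajectory of a linear ODE. A minimal numerical sketch (step size and iteration count are illustrative choices):

```python
import numpy as np

# Saddle function L(x, y) = x*y: concave (linear) in x, convex (linear) in y.
# The gradient method ascends in x and descends in y:
#   x_{k+1} = x_k + h * dL/dx = x_k + h * y_k
#   y_{k+1} = y_k - h * dL/dy = y_k - h * x_k
# Its continuous-time limit is the linear ODE d/dt (x, y) = (y, -x),
# whose trajectories are circles around the saddle point (0, 0).
h = 0.01
A = np.array([[1.0, h], [-h, 1.0]])   # one explicit-Euler step of that ODE
z = np.array([1.0, 0.0])
radii = []
for _ in range(2000):
    z = A @ z
    radii.append(np.linalg.norm(z))

# The distance to the saddle never decays: the iterates oscillate forever,
# illustrating why modifications to avoid oscillations are of interest.
print(min(radii), max(radii))
```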
We model our system using a game theoretic framework. Based on this model, we find pure and mixed equilibria, and evaluate the performance of the equilibria by finding the Price of Anarchy (PoA) in several network topologies. Finally, we give numerical illustrations of our results.
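As a reminder of what a Price of Anarchy computation looks like, the classic Pigou example (a standard textbook network, not the topology studied in the paper) gives PoA = 4/3:

```python
import numpy as np

# Pigou network: one unit of traffic over two parallel links with
# latencies l1(x) = 1 (constant) and l2(x) = x (load-dependent).
def total_cost(x2):
    """Total latency when a fraction x2 of the traffic uses link 2."""
    return (1 - x2) * 1.0 + x2 * x2

# Nash equilibrium: link 2 is never worse (latency x2 <= 1), so all
# traffic takes it and every user experiences latency 1.
eq_cost = total_cost(1.0)            # = 1.0

# Social optimum: minimize total cost over the split x2 in [0, 1].
grid = np.linspace(0.0, 1.0, 100001)
opt_cost = total_cost(grid).min()    # = 3/4, attained at x2 = 1/2

poa = eq_cost / opt_cost
print(poa)   # 4/3: equilibrium cost is 33% above the optimum
```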
We provide sufficient conditions for GC, and demonstrate their usefulness with examples of systems that are not contractive with respect to any norm, yet are GC.
In this paper we develop a new max-plus based randomized algorithm to solve the same class of infinite horizon optimal control problems. The major difference between the new algorithm and the previous SDP-based curse-of-dimensionality-free method is that, instead of adding a large number of functions and then pruning the less useful ones, the new algorithm uses a randomized procedure to find useful quadratic functions, at a computational cost linear in the current number of basis functions, and adds only those functions to the set of basis functions. Experimental results show that the max-plus randomized algorithm can reach the same precision order obtained by the SDP-based method with a speed-up ranging from 10 to 100, and that the maximal precision order attainable by the new algorithm is much better than what can be achieved by the SDP-based algorithm in reasonable computation time. Moreover, with the randomized algorithm we are now able to tackle switched problems with a larger number of switches, which will allow us to extend the algorithm to more general classes of optimal control problems.
In this work, we use the concept of average dwell time (ADT) of switched control systems to develop a sufficient condition that guarantees the stability of our adaptive switching algorithm. Using this condition, we formulate a feasibility problem to compute the minimum average dwell time for the set of designed compensators. We apply the algorithm to a thyristor-controlled series compensator on a two-area power system and show that the adaptive controller is stable for time-varying latency.
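The standard average-dwell-time bound that such feasibility problems build on can be stated in one line. A sketch with illustrative numbers (the factor mu and decay rate below are assumptions, not values from the paper):

```python
import math

# Standard ADT bound (Hespanha-Morse style): if every mode admits a Lyapunov
# function with dV/dt <= -2*lam0*V, and V may grow by at most a factor mu >= 1
# at each switch, then stability holds for any switching signal whose average
# dwell time satisfies tau_a > ln(mu) / (2 * lam0).
def min_average_dwell_time(mu, lam0):
    """Smallest average dwell time guaranteeing stability under this bound."""
    if mu < 1 or lam0 <= 0:
        raise ValueError("need mu >= 1 and lam0 > 0")
    return math.log(mu) / (2.0 * lam0)

# Illustrative: Lyapunov functions may jump by mu = 4 at a switch, and each
# mode decays with rate lam0 = 0.5.
tau_a = min_average_dwell_time(mu=4.0, lam0=0.5)
print(tau_a)   # ln(4) ~ 1.386: at most one switch per ~1.4 s on average
```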
This paper demonstrates an inherent trade-off between the ability of the first-stage optimization to dynamically adapt to updates of the capacity forecast, and the flexibility available to airlines in the second stage. A new stochastic optimization model that balances this tradeoff is proposed. In addition to increasing the flexibility available to the airlines and the resultant delay reduction, the proposed formulation is shown to have attractive computational properties.
In this work we study the use of L1-regularization methods for high-resolution spectral estimation. In this problem, the dictionary is typically coherent and existing theory for robust/exact recovery does not apply. In fact, robustness cannot be guaranteed in the usual strong sense. Instead, we consider metrics inspired by the Monge-Kantorovich transportation problem and show that the magnitude can be robustly recovered if the original signal is sufficiently sparse and separated. We derive both worst-case error bounds and error bounds based on assumptions on the noise distribution.
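The basic setup can be sketched with ISTA (proximal gradient) applied to a coherent dictionary of sampled sinusoids. All sizes, the regularization weight, and the two spectral lines below are illustrative assumptions, not the paper's analysis:

```python
import numpy as np

# L1-regularized (LASSO) recovery over a coherent sinusoid dictionary,
# minimized with ISTA: 0.5*||A x - y||^2 + lam*||x||_1.
rng = np.random.default_rng(1)
m, K = 64, 256                        # 64 time samples, 256-point frequency grid
n = np.arange(m)[:, None]
freqs = np.arange(K)[None, :] / K
A = np.cos(2 * np.pi * n * freqs)     # coherent: neighboring columns overlap

x_true = np.zeros(K)
x_true[40], x_true[120] = 1.0, 2.0    # two well-separated spectral lines
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.1
L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the smooth part
x = np.zeros(K)
objs = []
for _ in range(300):
    z = x - (A.T @ (A @ x - y)) / L               # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    objs.append(0.5 * np.sum((A @ x - y) ** 2) + lam * np.sum(np.abs(x)))
```

Because neighboring columns of A are nearly identical, the recovered mass can shift to adjacent grid frequencies; this is exactly the kind of error that a transportation-type metric measures gracefully while a pointwise metric does not.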
In previous work, we solved precisely the decentralized optimal estimation problem for a finite population of agents. In particular, we developed a non-sequential bulk estimation algorithm whereby at every time step, all past and present available measurements are considered. In this paper, a Kalman-type recursive approximate filtering approach using exchangeability arguments is presented and tested numerically.
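The contrast between bulk and recursive estimation is the familiar one from Kalman filtering: a recursive filter touches only the newest measurement per step. A minimal scalar sketch (the model and parameters are illustrative, not the paper's decentralized setting):

```python
import numpy as np

# Scalar linear-Gaussian model (illustrative):
#   x_{k+1} = a x_k + w_k,  w_k ~ N(0, q)      (state)
#   y_k     = x_k + v_k,    v_k ~ N(0, r)      (measurement)
a, q, r = 0.9, 0.1, 0.5
rng = np.random.default_rng(2)

x, xs, ys = 0.0, [], []
for _ in range(200):
    x = a * x + np.sqrt(q) * rng.standard_normal()
    xs.append(x)
    ys.append(x + np.sqrt(r) * rng.standard_normal())

# Kalman-type recursion: predict, then correct with the newest measurement
# only, rather than reprocessing all past data in bulk.
x_hat, p = 0.0, 1.0
est = []
for y in ys:
    x_hat, p = a * x_hat, a * a * p + q              # predict
    k = p / (p + r)                                  # Kalman gain
    x_hat, p = x_hat + k * (y - x_hat), (1 - k) * p  # correct
    est.append(x_hat)

mse_filter = np.mean((np.array(est) - np.array(xs)) ** 2)
mse_raw = np.mean((np.array(ys) - np.array(xs)) ** 2)
```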
To illustrate, we numerically solve for the optimal control of a Linear Quadratic Gaussian (LQG) system with state constraints. A reasonably accurate solution is obtained even with a very small number of collocation points (three in each dimension), which suggests that the method could be used on high order systems, mitigating the curse of dimensionality. Source code for the example is available online.
The sufficient conditions are given in terms of a convexity condition of the Hamiltonian, and it is shown that PbP optimality implies team optimality. We also show existence of relaxed team optimal strategies.
Throughout the presentation we discuss similarities to problems of centralized stochastic optimal control and to mean field stochastic optimal control.
In this work, we study evolutionary dynamics in network games where the payoff of each player is influenced both by the actions of her neighbors in the network, and by the aggregate of the actions of all the players in the network. In particular, we consider cases where the payoff increases in the number of neighbors who choose the same action (local coordination effect) and decreases in the total number of players choosing the same action (global congestion effect). We study noisy best-response dynamics in networks which are the union of two complete graphs, and prove that the asymptotic behavior of the invariant probability distribution is characterized by two phase transitions with respect to a parameter measuring the relative strength of the local coordination and the global congestion effects. Extensions to connected networks with strong community structure are studied through Monte Carlo simulations.
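A noisy best-response (logit) simulation on two cliques is straightforward to set up. The payoff form matches the description above, but all parameter values, sizes, and the update schedule are illustrative assumptions:

```python
import numpy as np

# Noisy best-response (logit) dynamics on the union of two complete graphs.
# Payoff of player i choosing action a:
#   u_i(a) = alpha * (# neighbors of i playing a)          (local coordination)
#          - beta  * (# players in the network playing a)  (global congestion)
rng = np.random.default_rng(3)
n_half = 25
community = np.repeat([0, 1], n_half)          # two cliques of 25 players
actions = rng.integers(0, 2, size=2 * n_half)  # binary actions

alpha, beta, eta = 1.0, 0.05, 2.0              # eta = inverse noise level
for _ in range(5000):                          # asynchronous logit updates
    i = rng.integers(2 * n_half)
    nbrs = (community == community[i]) & (np.arange(2 * n_half) != i)
    u = np.empty(2)
    for a in (0, 1):
        local = np.sum(actions[nbrs] == a)
        # players choosing a if i switches to a (others' count plus i itself)
        global_ = np.sum(actions == a) - (actions[i] == a) + 1
        u[a] = alpha * local - beta * global_
    p1 = 1.0 / (1.0 + np.exp(-eta * (u[1] - u[0])))   # logit choice rule
    actions[i] = int(rng.random() < p1)
```

Sweeping the ratio alpha/beta in such a simulation is one way to observe numerically the phase transitions established analytically in the paper.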
We show that these systems correspond to the projection of specific trajectories of classical opinion dynamics systems involving twice as many agents, to which a large number of existing results apply. We take advantage of this to prove several convergence results for models with antagonisms, extending those previously available. Our approach can be applied in both discrete and continuous time.
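The lifting to twice as many agents can be verified numerically in a few lines. Sketch for a discrete-time signed update x(t+1) = W x(t) (the random weights below are an illustrative assumption):

```python
import numpy as np

# Opinion dynamics with antagonisms: x(t+1) = W x(t), where W may have
# negative (antagonistic) entries. Split W = Wp - Wm into its positive and
# negative parts and lift to a nonnegative system on 2n agents:
#     W_lift = [[Wp, Wm], [Wm, Wp]].
# If z(0) = (x(0), -x(0)), then z(t) = (x(t), -x(t)) for all t, so the signed
# system is the projection of a classical opinion dynamics with 2n agents.
rng = np.random.default_rng(4)
n = 4
W = rng.uniform(-1, 1, size=(n, n)) / n        # illustrative signed weights
Wp, Wm = np.maximum(W, 0), np.maximum(-W, 0)   # W = Wp - Wm, both nonnegative
W_lift = np.block([[Wp, Wm], [Wm, Wp]])

x = rng.uniform(-1, 1, size=n)
z = np.concatenate([x, -x])
for _ in range(10):
    x = W @ x
    z = W_lift @ z
# The first n coordinates of z track x exactly; the last n track -x.
```

The check works because W_lift maps (x, -x) to (Wp x - Wm x, Wm x - Wp x) = (W x, -W x), which is exactly the mirror structure the projection argument exploits.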