Graph convolutional networks are powerful tools for analyzing graph-structured data; unfortunately, they are vulnerable to adversarial attacks on topology. We develop a new method based on robust spectral theory and a training paradigm called graph augmentation, which generates a sequence of higher-order graphs from the original graph, spanning a range of spectral and spatial behaviors, to facilitate learning a transferable representation. We show that our method simultaneously improves accuracy in both benign and adversarial settings against an array of strong attackers.
Ming Jin, Heng Chang, Wenwu Zhu, Somayeh Sojoudi
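A minimal sketch of the graph-augmentation idea, under the assumption that higher-order graphs are generated as powers of the symmetrically normalized adjacency matrix (the graph, the power orders, and the normalization here are illustrative, not the paper's exact construction):

```python
import numpy as np

# Toy 4-cycle graph; each matrix power of the normalized adjacency
# emphasizes different spectral/spatial behavior (longer-range hops,
# compressed spectrum), yielding a sequence of "augmented" graphs.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                      # add self-loops
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))   # symmetric normalization
augmented = [np.linalg.matrix_power(A_norm, k) for k in (1, 2, 3)]
for k, G in zip((1, 2, 3), augmented):
    print(k, np.round(np.linalg.eigvalsh(G), 3))
```

Higher powers shrink the spectrum toward the dominant eigenvalue, which is one way a sequence of graphs can cover distinct spectral regimes.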
We propose a method to efficiently synthesize neural network policies with stability and safety guarantees through imitation learning. The proposed approach merges Lyapunov theory with local quadratic constraints to bound the nonlinear activation functions in the NN, and convexifies the resulting conditions for efficient parameter search. The method is illustrated on an inverted pendulum system and aircraft longitudinal dynamics, and is shown to improve safety over the demonstrators (e.g., an optimal LQR controller).
He Yin, Peter Seiler, Ming Jin, and Murat Arcak
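To illustrate the Lyapunov ingredient of this approach in its simplest form, here is a sketch for a linear closed-loop system (a hypothetical stand-in for the paper's NN-specific LMI conditions; the matrix `A` is an arbitrary stable example):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Solve A^T P + P A = -Q for a chosen Q > 0; if the solution P is
# positive definite, V(x) = x^T P x is a Lyapunov function certifying
# asymptotic stability of xdot = A x.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # example stable closed-loop matrix
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)
eigs = np.linalg.eigvalsh(P)
print("P positive definite:", bool(np.all(eigs > 0)))
```

The paper's setting replaces the linear dynamics with an NN-in-the-loop system and the exact equation with convexified matrix inequalities, but the certificate has the same shape: find `P > 0` making a quadratic function decrease along trajectories.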
We study the problem of power system state estimation under adversarial attacks, which can be formulated as a nonconvex optimization problem. We develop a boundary defense mechanism and a notion of graphical mutual incoherence (gMI) to study the vulnerability of the grid. For any given attacked region, if all the lines on the boundary of the region satisfy gMI, then the states outside the region can be recovered accurately. With our tool, one can for the first time generate a vulnerability map of the U.S. grid, which can be used to enhance the security of the power grid against cyberattacks.
Ming Jin, Javad Lavaei, Somayeh Sojoudi, Ross Baldick
We investigate the important problem of certifying the stability of RL policies when interconnected with nonlinear dynamical systems. We show that by regulating the input-output gradients of policies, strong guarantees of robust stability can be obtained from a semidefinite programming feasibility problem. Empirical evaluations demonstrate that RL agents achieve high performance within the stability-certified parameter space, and also exhibit stable learning behaviors in the long run.
Ming Jin and Javad Lavaei
IEEE Access (2020) (pdf)
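A hypothetical small-gain sketch of the certification idea: if the plant's L2 gain times the policy's input-output gradient bound is below 1, the interconnection is robustly stable. The paper's actual certificate is an SDP feasibility problem; here the plant, its gain (estimated by a frequency sweep), and the Lipschitz bound are all illustrative assumptions:

```python
import numpy as np

# Example stable plant x' = Ax + Bu, y = Cx, in feedback with a policy
# whose input-output gradient is bounded by lipschitz_bound.
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
lipschitz_bound = 0.4   # assumed bound on the policy's gradient

# Estimate the plant's L2 gain as the peak frequency-response magnitude.
freqs = np.logspace(-2, 2, 400)
gains = [abs((C @ np.linalg.inv(1j * w * np.eye(2) - A) @ B)[0, 0])
         for w in freqs]
plant_gain = max(gains)

# Small-gain condition: loop gain below 1 certifies robust stability.
certified = plant_gain * lipschitz_bound < 1.0
print(f"plant gain ~ {plant_gain:.3f}, certified: {certified}")
```

In the paper the gradient bound is a constraint on the learned policy rather than a fixed constant, and the stability test is expressed as matrix inequalities, but the underlying logic of trading policy smoothness for a stability certificate is the same.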