Entropy of trees & tree automata.
Entropy of tree automata = joint spectral radius.
The zero-error coding problem with states.
Approximating zero-error capacities of codes.
Open problems, etc.
Theoretical problem statement
Lift the Shannon/Parry Markov chain of a strongly connected
finite graph to the timed automata setting
(a.k.a. the measure of maximal entropy (MME) of an irreducible SFT).
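For the finite-graph object being lifted, the Shannon-Parry construction is standard: with A the adjacency matrix, ρ its Perron eigenvalue and u, v its left/right Perron eigenvectors (notation mine, not from the slides),

    P_{ij} = \frac{A_{ij}\, v_j}{\rho\, v_i}, \qquad \pi_i = \frac{u_i v_i}{\sum_k u_k v_k}, \qquad h(P) = \log\rho,

i.e. P is the unique measure of maximal entropy of the underlying SFT, and its entropy rate equals the topological entropy log ρ.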
Practical problem statement
Generate runs of a timed automaton quickly and as uniformly as
possible.
◮ quickly: step-by-step simulation, as with a finite-state Markov
chain → Stochastic Process Over Runs (SPOR)
◮ ≈ uniformly → SPOR of maximal entropy + asymptotic
equipartition property (a finite-state sketch follows below)
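The following Python sketch illustrates the finite-state version of this step-by-step scheme only: it samples runs of a graph under its maximal-entropy (Shannon-Parry) Markov chain. The timed-automaton case needs clock-dependent densities and is not attempted here; the function names and the example graph are illustrative, not taken from the slides.

import numpy as np

def parry_chain(A):
    """Maximal-entropy (Shannon-Parry) Markov chain of a strongly connected
    graph with nonnegative adjacency matrix A. Returns (P, pi)."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                    # Perron eigenvalue index
    rho = vals[k].real
    v = np.abs(vecs[:, k].real)                 # right Perron eigenvector
    vals_l, vecs_l = np.linalg.eig(A.T)
    u = np.abs(vecs_l[:, np.argmax(vals_l.real)].real)  # left Perron eigenvector
    P = A * v[None, :] / (rho * v[:, None])     # P[i, j] = A[i, j] * v[j] / (rho * v[i])
    pi = u * v / np.dot(u, v)                   # stationary distribution
    return P, pi

def sample_run(A, length, seed=None):
    """Simulate a run step by step under the Parry chain; long runs of a fixed
    length are then asymptotically equidistributed (AEP)."""
    rng = np.random.default_rng(seed)
    P, pi = parry_chain(A)
    n = A.shape[0]
    state = rng.choice(n, p=pi / pi.sum())
    run = [state]
    for _ in range(length - 1):
        row = P[state]
        state = rng.choice(n, p=row / row.sum())
        run.append(state)
    return run

# Example: golden-mean graph (no two consecutive visits to state 1);
# its entropy is log((1 + 5**0.5) / 2).
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
print(sample_run(A, 20, seed=1))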
Timed automata
• A model for verification of real-time systems (a minimal
encoding is sketched after this list)
• Invented by Alur and Dill in the early 1990s
• Precursors: time Petri nets (Berthomieu)
• Now: an efficient model for verification, supported by
tools (Uppaal)
• A popular research topic (>8000 citations for the papers by Alur
and Dill)
• modeling and verification
• decidability and algorithmics
• automata and language theory
• very recent: dynamics
• Inspired by TA: hybrid automata, data automata,
automata on nominal sets
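For readers new to the model: a timed automaton is a finite automaton extended with real-valued clocks, where edges carry clock guards and reset sets. The Python sketch below is only an illustrative encoding of this standard definition; the class and field names are assumptions, not any tool's API.

from dataclasses import dataclass

@dataclass(frozen=True)
class Guard:
    """Clock constraint of the form lower <= x <= upper for clock x."""
    clock: str
    lower: float = 0.0
    upper: float = float("inf")

@dataclass(frozen=True)
class Edge:
    source: str
    target: str
    action: str
    guards: tuple       # all guards must hold at the moment the edge is taken
    resets: frozenset   # clocks set back to 0 when the edge is taken

@dataclass
class TimedAutomaton:
    locations: frozenset
    initial: str
    clocks: frozenset
    edges: tuple

# A two-location example: 'a' is enabled only when 1 <= x <= 2 and resets x.
ta = TimedAutomaton(
    locations=frozenset({"p", "q"}),
    initial="p",
    clocks=frozenset({"x"}),
    edges=(Edge("p", "q", "a", (Guard("x", 1.0, 2.0),), frozenset({"x"})),
           Edge("q", "p", "b", (Guard("x", 0.0, 1.0),), frozenset({"x"}))),
)
print(ta.edges[0])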
International Workshop on the Perspectives on High Dimensional Data Analysis III
Abstract:
Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions, and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue remains: it is not clear whether the local optimal solution computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap, open for over a decade, we provide a unified theory that shows explicitly how to obtain the oracle solution using the local linear approximation (LLA) algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the LLA algorithm converges, namely it produces the same estimator in the next iteration. We show that the LASSO is a good initial estimator, which produces the oracle estimator via the one-step LLA algorithm for folded concave penalization methods. This is demonstrated on three classical sparse estimation problems, namely sparse linear regression, sparse logistic regression and sparse precision matrix estimation, and illustrates the power of combining the LASSO and SCAD to solve sparse statistical estimation problems. (Joint work with Lingzhou Xue and Hui Zou.)
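The Python sketch below illustrates the one-step LLA idea for SCAD-penalized linear regression only: a LASSO initial fit, SCAD-derivative weights, and one weighted-LASSO step. It is a minimal illustration under my own assumptions (a simple proximal-gradient solver, illustrative function names, and a synthetic example), not the authors' implementation.

import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_lasso(X, y, weights, n_iter=2000):
    """Proximal gradient (ISTA) for (1/2n)||y - Xb||^2 + sum_j weights[j]*|b_j|."""
    n, p = X.shape
    step = n / (np.linalg.norm(X, 2) ** 2)        # 1 / Lipschitz constant of the smooth part
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        b = soft_threshold(b - step * grad, step * weights)
    return b

def scad_derivative(t, lam, a=3.7):
    """Derivative of the SCAD penalty (Fan & Li), used as LLA weights."""
    t = np.abs(t)
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1))

def one_step_lla(X, y, lam, a=3.7):
    """LASSO initial estimator, then one weighted-LASSO (LLA) step."""
    p = X.shape[1]
    beta_lasso = weighted_lasso(X, y, lam * np.ones(p))   # initial LASSO fit
    w = scad_derivative(beta_lasso, lam, a)               # SCAD-derivative weights
    return weighted_lasso(X, y, w)                        # one LLA step

# Tiny synthetic check: 3 true nonzero coefficients out of 20.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta_true = np.zeros(20)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.5 * rng.standard_normal(100)
print(np.round(one_step_lla(X, y, lam=0.2), 2))

The weighted step is what removes the LASSO's shrinkage bias on large coefficients: the SCAD derivative vanishes beyond a*lam, so strong signals are left unpenalized in the LLA step.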