A strange property of the described abstractions is that as you move to tighter domains (interval -> zonotope -> polyhedra), the certifiably trained networks counterintuitively become less robust. Why does more precision in the verification step hurt training?
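For anyone unfamiliar with the loosest of those three domains: the interval (box) abstraction propagates elementwise lower/upper bounds through the network, which is what certified training with intervals (IBP) optimizes against. A minimal NumPy sketch, with toy weights of my own choosing just for illustration:

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b.

    Exact for affine layers: split the box into center and radius,
    push the center through the layer, and bound the radius with |W|.
    """
    center = (lo + hi) / 2
    radius = (hi - lo) / 2
    out_center = W @ center + b
    out_radius = np.abs(W) @ radius  # worst case over the input box
    return out_center - out_radius, out_center + out_radius

def interval_relu(lo, hi):
    """Propagate the box through an elementwise ReLU (monotone, so trivial)."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy 2-d layer: bounds on the outputs for all inputs in the unit box [0,1]^2.
W = np.array([[1.0, -1.0], [2.0, 0.0]])
b = np.zeros(2)
lo, hi = interval_affine(np.array([0.0, 0.0]), np.array([1.0, 1.0]), W, b)
lo, hi = interval_relu(lo, hi)
```

The looseness shows up already here: the box treats each output coordinate independently, so correlations between them (which zonotopes and polyhedra track) are thrown away after every layer.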
A recent work, not mentioned in the last chapter "Adversarial Training with Abstraction", is [1], which goes some way toward explaining this issue using the notions of continuity and sensitivity of the abstractions.
Does anyone know of employers hiring for this kind of work in industry? I can’t imagine many startups can afford it, since research in this area is obviously orders of magnitude more computationally expensive than simply training neural networks.
Any serious self-driving or other advanced robotics company should do. At my company (specialized autonomous vehicles) we’ll probably have such a role soon.
Maybe Bosch? Prof. Zico Kolter from CMU is a chief scientist associated with them, and his group does a lot of really good work in the ML verification space (e.g. the first randomized smoothing results and the Wong & Kolter certificates).
This is something I’m very interested in. There’s a lot of work to be done when it comes to building verified and explainable learning systems (not just neural networks).
I think the verification tools are finally getting good enough to be useful for this kind of thing.
[1]: https://arxiv.org/abs/2102.06700