Eugene Vinitsky

Assistant Professor, NYU

Eugene Vinitsky is an Assistant Professor of Civil and Urban Engineering and an affiliated professor of Computer Science and Engineering. He has a BS in physics from Caltech and received his PhD from UC Berkeley in Controls Engineering with a specialization in Reinforcement Learning, after which he worked as a research scientist in the Apple Special Projects Group.

Prior to that, he received his MS in physics from UC Santa Barbara. At UC Berkeley, he focused on scaling multi-agent reinforcement learning to tackle the challenges associated with transportation system optimization. As a member of the CIRCLES consortium, he is responsible for the reinforcement learning algorithms and simulators used to train and deploy energy-smoothing cruise controllers onto Tennessee highways. He is currently a visiting researcher in reinforcement learning at Facebook AI and has interned at Tesla Autopilot and on the Multi-Agent Artificial Intelligence Team at DeepMind. His research has been published at ML venues such as CoRL, NeurIPS, and ICRA, and at transportation venues such as ITSC. He is the recipient of an NSF Graduate Student Research Award, a two-time recipient of the Dwight David Eisenhower Transportation Fellowship, and received an ITS Outstanding Graduate Student award. He is one of the primary developers of Flow, a library for benchmarking and training autonomous vehicle controllers.

His research goal is to see complex, human-like behavior emerge from unsupervised interaction between groups of learning agents, with an applications focus on enabling autonomous vehicles to operate in rich scenarios. Concretely, his research pursues questions such as:

  • How can we use RL to design models of human agents?
  • How can we ensure that RL designed agents are human-compatible?
  • How can we synthesize environments that push and test the capabilities of our agents?
  • What algorithmic advances and software tools are needed to address these questions?

In practice, this means working to push the state of the art in multi-agent RL algorithms, designing new data-driven simulators, and deploying simulator-designed controllers into real-world systems.
