I am currently enrolled in a PhD program in Computer Science at the University of Manchester under the supervision of Richard Allmendinger, Wei Pan, and Theodore Papamarkou. My research focuses on applying Reinforcement Learning (RL) in non-stationary environments, an area that is critical to deploying RL in real-world settings.
Concurrently, I am employed at Virtual Vehicle Research GmbH in Graz, Austria, as a Senior Research and Development Engineer. My key responsibilities include leading the design and implementation of RL policies for legged robots in search and rescue operations, and developing an RL agent for energy management in Plug-in Hybrid Electric Vehicles, integrating cutting-edge AI with eco-friendly vehicle technology.
Master of Engineering Management, 2022
Arizona State University
Master of Automotive Mechatronics, 2021
FH Oberösterreich
Bachelor of Aerospace Engineering, 2018
Zewail City, Egypt
Key projects include:
Developing a robust RL-based locomotion policy for legged robots:
Using Meta-Reinforcement Learning to develop a locomotion policy for the quadruped robot Unitree Go1, enabling it to robustly navigate diverse environments in search and rescue missions.
RL-based Energy Management Strategy for P2-PHEVs:
Implementing a novel Asynchronous Advantage Actor-Critic (A3C) agent that outperformed the existing rule-based control strategy and was integrated into the vehicle's hybrid control unit (HCU) for hardware-in-the-loop (HiL) and vehicle-in-the-loop (ViL) testing. The project resulted in two publications and a patent.
Key projects include:
Adaptive interior control with a German OEM - Project Lead:
Developing an imitation-learning RL agent that learns and predicts driver preferences based on the driver's mood, driving behavior, and vehicle situation.
RL-based Thermal Management for Battery Electric Vehicles:
Improving vehicle energy consumption by utilizing available heat sources to meet cabin comfort requirements while operating the powertrain and battery at their optimal efficiencies.
This is a list of featured publications only.
Quickly discover relevant content by filtering all publications.