The core of my PhD research involves modelling AI evaluation as a prediction problem: what it would mean to maximise predictive power, and how we might do that. More broadly, I am interested in AI evaluation and everything related to it: testing, auditing, metrics, environment & benchmark design, capability measurement, etc.
As for more specific applications and domains, I strongly prefer sequential decision problems, RL, planning & control, and the like, particularly in relation to world-model learning, goal conditioning, and multi-task systems.
Other concepts that spark my imagination include causality, embodied cognition, knowledge representation, grounding, AI safety, and artificial life.
Most things, really.