Felix Hill


Research Scientist, DeepMind, London


I work on replicating in artificial systems how we as humans learn, represent and use language and semantic concepts.


Embodied language learning

Children learn to understand language while learning to perceive, interact with, explain and make predictions about the world around them. Our linguistic knowledge depends critically on sensory-motor and perceptual processes, which in turn are influenced and shaped by our language. My work simulates this process of acquiring language jointly with perceptual and motor processes, as a path to realistic language understanding in fully embodied systems. With many brilliant collaborators at DeepMind, I have developed agents that can learn the meaning of words and short phrases as they pertain to perceptual stimuli and complex action sequences in continuous 3D worlds. These agents naturally compose known words to successfully interpret never-seen-before phrases, a trait that matches the productivity of human language understanding. We also showed that learning is much more efficient if agents exploit multiple complementary learning algorithms, another property of human language learning.


Perception and abstraction

A child might learn what growing means by observing a sibling, a pet or a plant get physically bigger, but once understood, the same idea of growing can be applied to pocket money, a tummy ache or Dad’s age. This ability to represent relations, principles or ideas like ‘growing’ with sufficient abstraction that they can be flexibly (re-)applied in disparate, and potentially unfamiliar, contexts and domains is central to human cognition and language. Our work studies how this ability can be replicated in distributed learning systems like neural networks.



Language understanding from text


During my PhD, I worked with Anna Korhonen on ways to extract and represent meaning from text and other language data in distributed representations. I developed FastSent and Sequential Denoising Auto-Encoders, two methods for learning sentence representations from unlabelled text. With Yoshua Bengio and Kyunghyun Cho, I noticed that you can train a network on dictionary definitions to solve general-knowledge crossword clues. With Jase Weston and Antoine Bordes, I applied neural networks with external memory components to answer questions about passages in books. I also made SimLex-999, a way to measure how well distributed representations reflect human semantic intuitions.
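To give a flavour of how an evaluation like SimLex-999 works, here is a minimal sketch (not the official evaluation script): it scores a set of word embeddings by computing the Spearman correlation between the model's cosine similarities and human similarity ratings for a list of word pairs. The vectors and ratings below are illustrative placeholders, not real SimLex-999 data.

```python
# Sketch of a SimLex-999-style intrinsic evaluation:
# rank-correlate model similarities with human judgements.
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical pre-trained word vectors (in practice, load word2vec, GloVe, etc.).
embeddings = {
    "coast": np.array([0.20, 0.90, 0.10]),
    "shore": np.array([0.25, 0.85, 0.15]),
    "car":   np.array([0.90, 0.10, 0.30]),
    "truck": np.array([0.85, 0.20, 0.35]),
}

# (word1, word2, human_rating) triples in the style of SimLex-999 (0-10 scale).
pairs = [
    ("coast", "shore", 8.6),
    ("car",   "truck", 7.6),
    ("coast", "car",   0.5),
]

model_scores = [cosine(embeddings[w1], embeddings[w2]) for w1, w2, _ in pairs]
human_scores = [rating for _, _, rating in pairs]

# Spearman's rho: how well the model's similarity ranking matches humans'.
rho, _ = spearmanr(model_scores, human_scores)
print(f"Spearman correlation with human ratings: {rho:.3f}")
```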


Teaching

With Steve Clark, I taught a Master's course, Deep Learning for NLP, at the Computer Laboratory, Cambridge University in 2018. If you follow that link you can find the synopsis, lecture slides and TensorFlow code for training neural networks on dictionary definitions. We got nice feedback, and hope to do the course again (somewhere) soon.


Recent talks



Other stuff

I started doing cognitive science before I did any computational linguistics. And before that, I got a Master’s in pure maths.

See Google Scholar for a list of publications.

When not working, things I like to do include football, running, yoga, travelling (but not arriving) and relaxing.