Castle Machine Learning Reading Group, Cambridge, UK
Once or twice a month on weekday evenings at a local pub. Open to all.
Organisers: Frank Morley (data scientist); Robert Henderson (computer scientist, Fivey Software).
Each session we'll discuss a different paper in the field of machine learning, typically covering a foundational concept, a recent application, or an ethical or policy issue. You'll need to read the paper before the session.
Contact e-mail (Robert Henderson): rob at robjhen dot com
We also have a page on Meetup.com. (You don't need to join the Meetup group to attend the sessions; just come along!)
When: Wednesday 29th October 2025, 6 pm – 8 pm.
Where: The Castle Inn, 36 Castle Street, Cambridge, UK. (Most likely we'll be at one of the large tables upstairs.)
Paper: P. F. Christiano et al. Deep Reinforcement Learning from Human Preferences. In Advances in Neural Information Processing Systems 30, 2017.
We'll discuss Reinforcement Learning from Human Feedback (RLHF), a foundational technique for converting a deep learning model from a mere pattern predictor into an agent that acts according to the wishes of a human operator. (A short illustrative sketch of the core preference-learning idea appears after the session details below.)
You'll need to read the paper in advance. Ideally, please also bring your own copy of the paper to refer to during the session.
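If you'd like to preview the core mechanism before the session, here is a minimal, illustrative sketch (not the authors' code) of how a scalar reward model can be trained from pairwise human preferences using a Bradley-Terry-style loss. The class names, network sizes, and tensor shapes are assumptions chosen for brevity, not details taken from the paper.

    # Illustrative sketch only (not the paper's implementation): learning a
    # scalar reward model from pairwise human preferences over trajectory
    # segments. Names, dimensions, and the tiny MLP are assumptions.
    import torch
    import torch.nn as nn

    class RewardModel(nn.Module):
        """Maps each observation in a trajectory segment to a per-step reward."""
        def __init__(self, obs_dim: int, hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
            )

        def forward(self, segment: torch.Tensor) -> torch.Tensor:
            # segment: (timesteps, obs_dim) -> total predicted reward for the segment
            return self.net(segment).sum()

    def preference_loss(rm: RewardModel, seg_a, seg_b, human_prefers_a: bool):
        # Bradley-Terry-style objective: the human-preferred segment should
        # receive the higher total predicted reward (cross-entropy over the
        # softmax of the two segment-level reward sums).
        logits = torch.stack([rm(seg_a), rm(seg_b)]).unsqueeze(0)   # shape (1, 2)
        target = torch.tensor([0 if human_prefers_a else 1])        # shape (1,)
        return nn.functional.cross_entropy(logits, target)

In the paper, the learned reward model is then used as the reward signal for a standard deep RL algorithm; that second step is what turns the preference data into behaviour.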
When: Wednesday 26th November 2025, 6 pm – 8 pm.
Where: The Castle Inn, 36 Castle Street, Cambridge, UK. (Most likely we'll be at one of the large tables upstairs.)
Book: E. Yudkowsky and N. Soares. If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI. The Bodley Head, 2025.
Please read the book and bring a copy with you to the discussion. We'll be following it closely throughout the two-hour session.
We encourage well-informed, good-faith critiques of the books we read (but please make sure you've read the book if you arrive with a strong opinion!).
Penguin description:
AI is the greatest threat to our existence that we have ever faced.
The scramble to create superhuman AI has put us on the path to extinction – but it’s not too late to change course. Two pioneering researchers in the field, Eliezer Yudkowsky and Nate Soares, explain why artificial superintelligence would be a global suicide bomb and call for an immediate halt to its development.
The technology may be complex, but the facts are simple: companies and countries are in a race to build machines that will be smarter than any person, and the world is devastatingly unprepared for what will come next.
How could a machine superintelligence wipe out our entire species? Will it want to? Will it want anything at all? In this urgent book, Yudkowsky and Soares explore the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive.
The world is racing to build something truly new – and if anyone builds it, everyone dies.
When: Wednesday 25th February 2026, 6 pm – 8 pm.
Where: The Castle Inn, 36 Castle Street, Cambridge, UK. (Most likely we'll be at one of the large tables upstairs.)
Paper: S. Hochreiter and J. Schmidhuber. Long Short-Term Memory. In Neural Computation, 9, 1735–1780, 1997.
Long Short-Term Memory (LSTM) networks are a type of recurrent neural network capable of learning long-range dependencies in sequential data. They were the state of the art in language modelling before transformers, and they remain highly relevant to modern machine learning. (A short sketch of a single LSTM step appears after the session details below.)
You'll need to read the paper in advance. Ideally, please also bring your own copy of the paper to refer to during the session.
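For a concrete picture of what the architecture computes, here is a minimal sketch of a single step of an LSTM cell in its now-standard form. Note that the forget gate comes from later work and is not in the 1997 paper; the weight names and dictionary layout below are assumptions for illustration.

    # Illustrative sketch only: one step of an LSTM cell in its modern form.
    # The forget gate is from later work (Gers et al., 1999), not the original
    # 1997 paper. Weight names and shapes are assumptions for brevity.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x, h_prev, c_prev, W, U, b):
        # x: input vector; h_prev, c_prev: previous hidden and cell states.
        # W, U, b: dicts of input weights, recurrent weights, and biases,
        # keyed by gate: 'i' (input), 'f' (forget), 'o' (output), 'g' (candidate).
        i = sigmoid(W['i'] @ x + U['i'] @ h_prev + b['i'])   # input gate
        f = sigmoid(W['f'] @ x + U['f'] @ h_prev + b['f'])   # forget gate
        o = sigmoid(W['o'] @ x + U['o'] @ h_prev + b['o'])   # output gate
        g = np.tanh(W['g'] @ x + U['g'] @ h_prev + b['g'])   # candidate update
        c = f * c_prev + i * g        # cell state: the long-range memory
        h = o * np.tanh(c)            # hidden state passed to the next step
        return h, c

The additive cell-state update (c = f * c_prev + i * g) is what lets error signals flow across many timesteps without vanishing, which is the paper's central contribution.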
Paper: A. Vaswani et al. Attention Is All You Need. In Advances in Neural Information Processing Systems 30, 2017.
Paper: D. E. Rumelhart et al. Learning representations by back-propagating errors. In Nature, 323, 533–536, 1986.
Paper: Y. Chaudhary and J. Penn. Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models. In Harvard Data Science Review, Special Issue 5, 2024.
Paper: D. Guo et al. DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. 2025.
Paper: A. Aguirre. Keep the Future Human: Why and How We Should Close the Gates to AGI and Superintelligence, and What We Should Build Instead. 2025.
Paper: Y. LeCun et al. Deep Learning. In Nature, 521, 436–444, 2015.
Paper: Gemini Robotics Team, Google DeepMind. Gemini Robotics: Bringing AI into the Physical World. 2025.
Paper: S. Hao et al. Training Large Language Models to Reason in a Continuous Latent Space. 2024.
Book: N. Bostrom. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.
Paper: A. Templeton et al. Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet. In Transformer Circuits Thread, 2024.
Paper: D. Kokotajlo et al. AI 2027. 2025.
Some related groups in Cambridge: