AI – Magic or Science?
With such a diverse audience as our alumni, it’s always tricky deciding how much detail to put into articles. Here is some further detail if you are interested in what’s behind AI, but by all means skip down to the actual example I’ve put in the next reply.
What actually is AI?
All forms of AI rest on two core elements:
- The work of the wondrously obscure 18th-century British mathematician Thomas Bayes (1701-1761), whose profound insight into probability theory gave us what we now call Bayesian stats. His obscurity was hard-earned: his famous essay on probability wasn’t even published until after his death. Impressive for someone who is shaping the 21st century.
- And secondly, AI needs an almighty amount of computing power: huge data farms that can process, in parallel, the vast amounts of data required to make Mr Bayes’s sums work. The software models these centres run are what techs call ‘neural networks’; the rest of humanity tends to struggle in disbelief, ‘How much did you say they cost?’
What are Bayesian stats – how different are they to what I did at school?
It’s a slightly different – and, many people argue, more intuitive – way of looking at stats. As so often, it’s best described with an example:
Step 1 – I toss a coin twice and it’s heads both times. What are the odds when I toss the coin a third time?
Most people would say ’50:50’ as the coin has no memory of what has occurred before.
Step 2 – I continue the experiment and toss the coin a further 18 times. It’s heads each time. What are the odds when I go to toss the coin a 21st time?
Traditionally, statisticians would say ’50:50’ because the coin still doesn’t have a memory. Quite a few people, though, might say ‘Hang on, something’s up here – I reckon that coin is fixed.’
And that is the application of Bayesian statistics.
With more data, an eventuality that was so unlikely after 2 coin tosses that the scenario wasn’t even worth a mention becomes the most probable explanation of the events.
Thomas Bayes wrote down a simple formula to calculate this. He seized on what might seem an obvious point: if the coin was fixed after 20 throws, it was obviously fixed after 1 – we just had no evidence. So there WAS a probability of the coin being fixed at the start; it was simply so small – perhaps a million to one – that it wasn’t worth mentioning. With each toss of the coin, though, those million-to-one odds shorten sharply, to the point where after 20 tosses (or before) it’s actually more likely that the coin is fake than that a true coin is returning all those heads.
The formula Thomas Bayes worked out to do this is in fact very simple – a lot easier than those various distribution curves you might have suffered with. You’d expect a 15- or 16-year-old today to do the calculation fairly easily as part of their schoolwork.
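For the curious, the calculation described above really does fit in a few lines of code. This is a minimal sketch, not anything official: the one-in-a-million starting probability and the ‘a fixed coin always lands heads’ assumption are just the illustrative figures from the example.

```python
def prob_fixed(heads_in_a_row, prior_fixed=1e-6):
    """Posterior probability the coin is fixed, having seen only heads."""
    # How likely is this run of heads under each explanation?
    p_run_if_fixed = 1.0                        # a fixed coin always shows heads
    p_run_if_fair = 0.5 ** heads_in_a_row       # a fair coin halves the odds each toss
    # Bayes's formula: weight each explanation's prior by how well it
    # predicted the evidence, then normalise.
    numerator = prior_fixed * p_run_if_fixed
    denominator = numerator + (1 - prior_fixed) * p_run_if_fair
    return numerator / denominator

for n in (2, 10, 20, 30):
    print(n, "heads in a row -> probability fixed:", round(prob_fixed(n), 4))
```

After 2 heads the fixed-coin explanation is still vanishingly unlikely; by 20 heads it has edged past 50% – exactly the crossover the example describes.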
And why does this matter? Because the formula can be put into a computer program, and then, with enough data, we can move away from the idea that computers are binary ‘yes/no’ machines to a world where they think in terms of probabilities. And that is so important, because it enables pattern recognition – whether it’s written words, images or voices.
AI is essentially pattern recognition on steroids. It answers questions not by knowledge, but by finding similar patterns of words in the truly vast amount of text it has access to.
How does that tie to the types of AI?
One day there might be just one thing called AI – though if we get there, we’ll probably call it AGI (Artificial General Intelligence).
Until then, most of what we call AI today falls into three broad buckets:
1) Predictive AI (the quiet kind)
This is the AI that’s been around for years inside many organisations: spotting patterns in data and making predictions. For example, forecasting demand, flagging anomalies, or helping prioritise risk. You often don’t ‘chat’ to it — it runs in the background. Pattern recognition is about probability, which is where our friend Thomas comes in.
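The ‘flagging anomalies’ part of this quiet kind of AI can be surprisingly simple at its core. Here is one classic, minimal approach – a standard-deviation check – with invented numbers; real systems are far more elaborate, but the idea is the same.

```python
from statistics import mean, stdev

def is_anomaly(history, new_value, threshold=3.0):
    """Flag a value more than `threshold` standard deviations from the
    historical mean – a simple, classic anomaly check."""
    m, s = mean(history), stdev(history)
    return abs(new_value - m) > threshold * s

# Invented daily order counts for a steady week or so.
history = [102, 98, 105, 99, 101, 97, 103, 100]

print(is_anomaly(history, 104))   # an ordinary day
print(is_anomaly(history, 250))   # something worth a human's attention
```

The pattern – ‘how surprising is this, given everything seen so far?’ – is our friend Thomas again, just wearing a business suit.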
2) Generative AI (the chatty kind — GPT and friends)
This is what people mean when they say ChatGPT. It can write, summarise, translate, draft emails, and help you think through problems. It’s powerful, but it can also sound confident while being spectacularly wrong if it’s not carefully grounded in real sources. It doesn’t actually ‘understand’ very much at all. It has, though, read more or less everything humans have ever written (technically that makes it a Large Language Model, or LLM), and is really good at putting together sentences that appear to make sense. It does this by spotting patterns in the sentences it’s read compared to what you’ve asked … which is of course just a probability problem involving huge amounts of data. Mr Bayes and those expensive neural networks again.
3) Specialist AI (the superhuman-at-one-thing kind)
These are systems designed to excel at a specific task with clear rules and feedback — like chess. That’s why AI chess engines can be astonishingly strong, while a general chat system like ChatGPT will struggle if you ask it to play.
But while chess has a fairly narrow set of rules, the boundaries are being pushed month by month. Google’s Deep Think is trained to solve mathematical problems. It can now (early 2026) compete at elite IMO (International Mathematical Olympiad) gold-medal level – roughly equivalent to the top 50 18-year-old mathematicians on the planet. A year ago, it couldn’t compete with the top 5,000. It’s quite possible that within a year it will out-perform every human who has ever lived at algebra (admittedly the easiest of the seven major mathematical disciplines, though it didn’t feel like that at school).
Could this form of AI find a cure for cancer? Yes, and many people wish everyone would just focus on that and stop all the rest of the hype.
How do they do this? The answer should now be obvious: it’s Bayesian stats with more computing power than even Alan Turing ever dreamed of. And yes, he did dream of AI – in many senses this belongs to him at least as much as it does to Mr Bayes.
The key takeaway: AI today isn’t one thing. It’s a family of tools. It works through the combination of a clever insight into probability – that the weight we give to possible explanations changes as we consider more data – with a ridiculous amount of computing power.