The Future Stack AI Newsletter #2
Welcome to the second edition of the newsletter. We bring you the latest developments in AI from all over the world.
⚡New this week
Be the first to know about technological trends in AI and related areas. This week: where AI research is heading, what measures companies are taking to become AI-first, why you should care about distributed workloads in ML, and how the Australian government is supporting the commercialization of research.
💗Mental Health Check-in
Information is all around us, and what we choose to focus on grows. Let us choose to live simply and focus on a few key things this month. Doing one thing at a time, with full attention, will help ground you in the present moment.
🌍AI in the world
What does Yann LeCun, the Turing Award winner, think about the future of AI?
Yann LeCun has contributed a lot to the world of deep learning. His latest contribution is an architecture for autonomous systems that tries to break down how “common sense” can be incorporated into our AI models. This is a largely unsolved problem and one of the grand challenges of AI. The theory is that to build a general AI that can operate and learn as a human does, two main areas of research need to be expanded: building a better model of the world, and using self-supervised learning to improve that world model (thereby infusing common sense). I think the paper has created more questions than it has answered, but that is what most excites me about this field.
📰 I highly recommend reading the paper here and its references for anyone looking to dive deep into a theory for creating AGI (Artificial General Intelligence).
🔗For those looking to dive even deeper, I would point to studying biopsychology and neuroscience to form your own theories.
Where is Machine Learning training heading?
As most of you know, it takes a lot of computational power to train large ML models. Another issue is that the data used to train ML models has to be accessible to the people who are developing these models.
But, what if the data needs to be private? And what if smaller companies want to train bigger models and simply do not have the money to train them?
Enter - Federated Learning!
Basically, Federated Learning is an ML training technique in which models are trained where the data lives, on edge devices, so only model updates, never the raw data, are sent back to a central server for aggregation. Data scientists can train on private, distributed datasets without ever pooling them in one place.
🔗To know more about Federated Learning, click here.
Now, why do I think this is going to be the future? We all know that whoever holds the data holds the power. The democratization of data and AI is therefore important if we want to prevent the polarization of technology; the consequences of that polarization would be bad for everyone, regardless of where they live or their social status. Federated learning is a good step toward letting users retain power over their own data while still enjoying the benefits of AI-enabled technologies.
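To make the idea concrete, here is a minimal sketch of federated averaging (FedAvg) in plain Python/NumPy. Everything here is illustrative, not a production setup: three simulated clients each hold a private data shard, take a few local gradient steps on a simple linear model, and send only their updated weights to a "server" that averages them. No raw data ever leaves a client.

```python
import numpy as np

def local_update(w_global, X, y, lr=0.1, steps=5):
    """One client: a few gradient steps on its private shard.
    The raw data (X, y) never leaves the device."""
    w = w_global.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # linear-regression gradient
        w -= lr * grad
    return w

def fed_avg(client_weights, shard_sizes):
    """Server: average client models, weighted by local data size."""
    total = sum(shard_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, shard_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three clients, each holding a private shard the server never sees.
shards = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    y = X @ true_w + rng.normal(scale=0.05, size=40)
    shards.append((X, y))

w_global = np.zeros(2)
for _ in range(20):  # communication rounds
    local = [local_update(w_global, X, y) for X, y in shards]
    w_global = fed_avg(local, [len(y) for _, y in shards])

print(np.round(w_global, 2))  # converges close to true_w
```

Real deployments add a lot on top of this (secure aggregation, compression, handling clients that drop out), but the core loop is exactly this: local training, then a weighted average on the server.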
Meta developed a system to verify Wikipedia entries using AI
Meta (previously Facebook) AI just released a system that can verify Wikipedia content. How does it do it? Using Natural Language Processing.
It is critical that any knowledge on Wikipedia is verifiable. This means users should be able to check citations provided on Wikipedia pages and confirm that the background material to support any claim exists and is trustworthy.
They developed an AI system that checks the relevance of a citation against the claim it is meant to support, acting as an assistant that flags the trustworthiness of a source for human editors. This Wikipedia AI assistant can do a lot to fight misinformation on one of the biggest knowledge bases in the world. It makes me wonder how something similar could be used on social media to help reduce misinformation all over the internet.
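Meta's system uses neural models far more sophisticated than this, but the underlying idea of scoring how relevant a source is to a claim can be sketched with a toy example. The bag-of-words cosine similarity below, and all the sample sentences, are my own illustration, not Meta's method:

```python
import numpy as np
from collections import Counter

def bow_vector(text, vocab):
    """Represent text as word counts over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return np.array([counts[w] for w in vocab], dtype=float)

def relevance(claim, source, vocab):
    """Cosine similarity between claim and source: a crude relevance score."""
    a, b = bow_vector(claim, vocab), bow_vector(source, vocab)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0 else float(a @ b / denom)

claim = "the platypus is a venomous mammal native to australia"
good = "australia is home to the platypus a venomous egg laying mammal"
bad = "the stock market closed higher on strong earnings reports"
vocab = sorted(set((claim + " " + good + " " + bad).lower().split()))

print(relevance(claim, good, vocab))  # high: many shared terms
print(relevance(claim, bad, vocab))   # low: almost no overlap
```

A production verifier would replace the word counts with learned dense embeddings so that paraphrases score highly too, but the shape of the problem, "rank candidate sources by how well they support a claim", is the same.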
🦘AI in Australia
Australia’s AI Action Plan - what is the niche that Australia is already filling in the world of AI?
The Australian government wants to position Australia as a world leader in secure, responsible, and trustworthy AI. Australia has a huge advantage because of the world-class research happening at its universities: by rankings and research impact, Australia is at the top of the field in Computer Vision, Deep Learning, and Robotics. The Australian National University is also minting leaders in applied Cybernetics (🔗link here), a program that welcomes people not only from tech but from law, policy, history, and other areas. The idea is to create AI-enabled physical systems and ensure they can operate safely in the real world. Australia is also home to some of the best systems engineering firms, e.g., Shoal. In my opinion, the systems engineering capability of Australian engineers is going to be a huge asset for the country, especially in an increasingly complex world.
The Australian government’s AI plan focuses on four key areas: boosting Australian businesses, attracting world-class talent, increasing the commercialization of research, and pursuing responsible and inclusive AI. The plan is hitting all the right notes; in this newsletter, I will keep bringing you updates on where the government money is flowing and how these plans progress.
AIDCC - AI and Digital Capability Centres to be established soon all across Australia!
The government is putting in around $52 million to establish four AI and Digital Capability Centres across Australia. Universities, SMEs (small-to-medium enterprises), and large companies participated in the grant process. The centres will focus largely on providing AI services and expertise to help SMEs and startups develop their products, and will also provide a launchpad for Australian companies to access overseas markets. You might be thinking the money is not enough, but if budgeted properly, the centres will have enough runway for about two years, which can be used to demonstrate the viability of a lot of AI research and to establish a market. Once that happens, scaling up can follow with private investors or further grants.
🔧Upskilling
If you are a deep learning engineer looking for some advice on upskilling, follow my LinkedIn here.
This week I am going to talk about writing neural networks from scratch. Writing a neural network from scratch used to intimidate me when I was starting out in deep learning.
How did I get over it?
I read a lot of code by different research groups working in my area and even talked to a few researchers to understand why they wrote the code the way they did.
There is another resource that I have found quite useful recently.
🔗Labml.ai - They have PyTorch implementations of major neural network architectures, following the original papers.
Now whenever I am reading a paper or revisiting an old one, I open labml.ai in a different tab. It helps me understand the architectural choices and see them implemented in a concrete way. It also helps me deliberate with myself on how I could make changes to the current architecture and possibly improve it. I would highly recommend trying this approach. Let me know how you go!
Hot tip: If you are writing a complex network from scratch, think in terms of abstractions: build small, reusable blocks and keep composing them into bigger ones.
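As a sketch of what "building blocks" looks like in practice, here is a tiny from-scratch forward pass in NumPy (deliberately framework-free so it stays self-contained; the class names and layer sizes are just for illustration). Each block does one thing, and a `Sequential` container composes them, which is the same pattern PyTorch's `nn.Module` and `nn.Sequential` use:

```python
import numpy as np

class Dense:
    """Fully connected layer: one small, reusable building block."""
    def __init__(self, n_in, n_out, rng):
        self.W = rng.normal(scale=np.sqrt(2 / n_in), size=(n_in, n_out))
        self.b = np.zeros(n_out)
    def __call__(self, x):
        return x @ self.W + self.b

class ReLU:
    """Nonlinearity as its own block, so it can be swapped out freely."""
    def __call__(self, x):
        return np.maximum(0, x)

class Sequential:
    """Compose blocks into a network; a Sequential is itself a block,
    so bigger architectures are built the same way."""
    def __init__(self, *blocks):
        self.blocks = blocks
    def __call__(self, x):
        for block in self.blocks:
            x = block(x)
        return x

rng = np.random.default_rng(0)
net = Sequential(Dense(4, 8, rng), ReLU(), Dense(8, 2, rng))
out = net(rng.normal(size=(5, 4)))  # batch of 5 inputs with 4 features
print(out.shape)  # (5, 2)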
👩‍💻 About me
I am an ML engineer in Canberra and my most intense obsession is to help companies and individuals become competitive by investing in data and AI. Subscribe to my newsletter if you have similar ideas on making a dent in the universe. All the content that I create is free and I intend to keep it so!
Follow me on LinkedIn here!