Hey everyone and welcome to the first-ever AITU blog post 🚀. Today marks the day of our very first meeting - and we couldn’t be more excited. In this small post, we want to explain some of the thoughts behind the founding of the organisation, give a behind-the-scenes look at the foundational work that has been done, and share the vision for what lies ahead.
🧑🚀 How it all started
As a data science student at ITU, learning about AI is inevitable. For us, the three founding members of AITU (Mika, Ludek and Lukas), this happened during the machine learning course. While it felt great to learn about AI, we soon realised that we had only seen the tip of the iceberg and that there was much more to learn. So we did. In the Deep Learning course, we got to learn about the Transformer, the architecture introduced in the groundbreaking paper Attention Is All You Need. (catchy, heh? 💯) Being able to understand state-of-the-art technology felt great, and all of a sudden reading research papers started to feel enjoyable. Yet, reading them alone felt boring. And that’s when Mika came to us with the proposal for AITU. The idea of the organisation is to gather the most engaged, interested and dedicated people at ITU into a strong community around the field of AI.
Today, it is exactly two months since the initial idea and I am more than happy to say that we are ready for the very first meeting of AITU! 🥳
🤖 AITU Activities
AITU is built around three main activities. First, in our weekly reading group we summarise and discuss one selected research paper. Each week, the group decides on the paper to read. Second, in the future, we plan to run projects for our members and participate in various competitions. Last but not least, we will also host talks from academia and industry. If this all sounds exciting and you believe that you would be a valuable member of AITU, feel free to write to us!
📆 First event
During our first event, we go back to the year 2017 and to the beginning of our journey in deep learning, and read Attention Is All You Need. The paper took the AI community by storm, and it has been fascinating to watch domain after domain of deep learning adopt transformer-based architectures step by step. Today, Transformers are in use far beyond their initial purpose of machine translation, forming the backbone of many state-of-the-art models in computer vision (image and video processing), audio intelligence and much more. Due to its wide adoption, we expect to read many papers building on this architecture in the future of the organisation. A little refresher therefore doesn’t seem like a bad idea. 💫
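Speaking of refreshers: the core operation of the paper is scaled dot-product attention, softmax(QKᵀ/√dₖ)V. Here is a minimal NumPy sketch of just that operation (the shapes and the toy example are our own illustration, not taken from the paper):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # weighted sum of values

# Toy example: 3 tokens, embedding dimension 4
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4) — one output vector per token
```

Each output row is a convex combination of the value vectors, with the mixing weights determined by how well the corresponding query matches each key. The full Transformer wraps this in multi-head attention, but the idea above is the heart of it.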
📣 Stay in touch
To stay updated on our activities, make sure you give us a follow on LinkedIn and subscribe to our newsletter. Any questions or ideas for talks, collaborations, etc.? Drop us a message at email@example.com.