Welcome to the 4th and final part of the “Image Classification novel techniques” series. So far, we have covered transfer learning, progressive image resizing and attention mechanisms in CNN models. Below are the reference links for the same, in case a refresher is needed.
In this post, I’ll touch on ensemble models and test-time augmentation, and their significance in making model predictions more robust.
Ensemble modelling is a process where multiple diverse models are created to predict an outcome, either by using many different modelling algorithms or using different training data sets. …
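A minimal numpy sketch of both ideas, assuming each model is represented simply by its predicted class probabilities and `model_fn` is a hypothetical stand-in for a real trained model:

```python
import numpy as np

def ensemble_predict(prediction_sets):
    """Soft voting: average class probabilities from several models."""
    return np.mean(prediction_sets, axis=0)

def tta_predict(model_fn, image):
    """Test-time augmentation: predict on simple flipped views of the
    image and average the results."""
    augmented = [image, np.fliplr(image), np.flipud(image)]
    preds = [model_fn(view) for view in augmented]
    return np.mean(preds, axis=0)

# Toy usage: two "models" disagreeing on a 3-class problem.
model_a = np.array([0.7, 0.2, 0.1])
model_b = np.array([0.5, 0.4, 0.1])
combined = ensemble_predict([model_a, model_b])
print(combined)  # [0.6 0.3 0.1]
```

Averaging smooths out the individual models’ mistakes, which is why both tricks tend to nudge a leaderboard score upward even when each model alone is mediocre.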
Welcome to the 3rd part of the “Image Classification novel techniques” series. In the first post and second post, we discussed transfer learning using pre-trained models and progressive image resizing for image classification. In this post, I’ll touch on the “attention mechanism” in CNN models and its significance in making CNN models more robust and accurate.
The general idea is to take inspiration from how the human eye, specifically the retina, works, as illustrated below.
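One popular flavour of this idea is squeeze-and-excitation-style channel attention. Below is a minimal numpy sketch of it, with random (untrained) projection matrices used purely for illustration:

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation-style channel attention.
    feature_map: (channels, height, width) activations from a CNN layer.
    w1, w2: learned projection matrices (random stand-ins here).
    """
    # Squeeze: global average pool each channel down to one number.
    squeezed = feature_map.mean(axis=(1, 2))              # (C,)
    # Excitation: tiny bottleneck MLP followed by a sigmoid gate.
    hidden = np.maximum(0, w1 @ squeezed)                 # ReLU
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))          # (C,) in (0, 1)
    # Re-weight: scale each channel by its attention gate.
    return feature_map * gates[:, None, None]

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 4, 4))   # 8 channels, 4x4 spatial
w1 = rng.standard_normal((2, 8))        # bottleneck down to 2
w2 = rng.standard_normal((8, 2))        # and back up to 8
out = channel_attention(fmap, w1, w2)
print(out.shape)  # (8, 4, 4)
```

The gates let the network amplify informative channels and suppress noisy ones, which is the “focus” behaviour the retina analogy is pointing at.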
Welcome to the 2nd part of the “Image Classification novel techniques” series. In the first post, we discussed transfer learning and using pre-trained models for image classification. In this post, I’ll touch on “progressive image resizing” and its significance in extracting the last bit of accuracy juice out of our stand-alone CNN models.
Progressive image resizing is a technique in which the training images are sequentially resized, so that the CNN model is trained on smaller image sizes first and then on bigger ones. Jeremy Howard describes progressive resizing briefly in his terrific fastai course, “Practical Deep Learning for Coders”.
A great way to use this technique…
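To make the idea concrete, here is a minimal numpy sketch of a progressive-resizing loop; `train_fn` is a hypothetical stand-in for whatever training step your framework provides, and the nearest-neighbour resize is a toy substitute for a proper image library:

```python
import numpy as np

def resize_nearest(image, size):
    """Nearest-neighbour resize of a square (H, W) image, numpy only."""
    h, w = image.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return image[np.ix_(rows, cols)]

def progressive_training(images, sizes=(64, 128, 224), train_fn=None):
    """Train on small images first, then fine-tune on larger ones.

    The same model is reused at every stage, so the cheap low-resolution
    epochs act as a warm start for the expensive high-resolution ones.
    """
    for size in sizes:
        batch = [resize_nearest(img, size) for img in images]
        if train_fn is not None:
            train_fn(batch, size)  # fine-tune the model at this size
    return sizes
```

In practice you would pass fastai or Keras training steps as `train_fn`; the key point is only that the image size grows across stages while the model weights carry over.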
In this series of posts, I’ll discuss some of the modern hacks developed in recent times to win an image classification hackathon. Traditionally, we have always relied on vanilla CNN architectures for image classification. However, that will only take you halfway up the leaderboard. To get to the top, something special is required.
If you check the Kaggle leaderboards in recent times, the top 10 percentile are fighting within a 0.0001–0.001 score range.
Keeping that in mind, below are some of the novel methods that I have used to consistently get a good LB score in different image classification hackathons.
Today, software code is available in the public domain in several forms, and every large enterprise has many software applications developed in multiple programming languages.
There are more than 500 programming languages, and each follows a specific structure. It is always difficult to identify the programming language from a piece of code, especially if it comes without a file extension.
Every programming language today consists of a set of keywords and a standard language syntax. …
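As a toy illustration of the idea, a crude detector can score a snippet against hand-picked keyword signatures; the `SIGNATURES` table below is a hypothetical example, not a trained model:

```python
# Hypothetical keyword signatures; a real detector would learn these
# from labelled code rather than hard-code them.
SIGNATURES = {
    "python": {"def", "import", "self", "elif"},
    "java": {"public", "class", "void", "static"},
    "c": {"#include", "printf", "int", "return"},
}

def guess_language(snippet):
    """Score each language by how many of its keywords appear."""
    tokens = set(snippet.split())
    scores = {lang: len(tokens & kws) for lang, kws in SIGNATURES.items()}
    return max(scores, key=scores.get)

print(guess_language("import numpy\ndef add(x): pass"))  # python
```

A real classifier would replace the keyword counts with learned features, but the principle is the same: the keywords and syntax of each language leave a recognisable fingerprint.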
Sentiment analysis is the interpretation and classification of emotions (positive, negative and neutral) within text data using machine learning techniques.
Understanding people’s emotions is essential for businesses since customers are able to express their thoughts and feelings more openly than ever before. By automating the process of analysing customer feedback, from survey responses to social media conversations, brands are able to listen attentively to their customers, and tailor products and services to meet their needs.
Traditionally in machine learning, it’s considered best practice to use Recurrent Neural Networks (RNNs) or Long Short-Term Memory (LSTM) networks for text analysis and…
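To sketch the idea, here is a minimal vanilla RNN cell in numpy that reads a sequence of token embeddings and emits a sentiment score; the weights are random stand-ins for what training would learn:

```python
import numpy as np

def rnn_sentiment(embeddings, w_x, w_h, w_out):
    """Run a single-layer vanilla RNN over token embeddings and map the
    final hidden state to a sentiment score in (0, 1)."""
    hidden = np.zeros(w_h.shape[0])
    for x in embeddings:                     # one step per token
        hidden = np.tanh(w_x @ x + w_h @ hidden)
    logit = w_out @ hidden
    return 1.0 / (1.0 + np.exp(-logit))      # sigmoid: >0.5 ~ positive

rng = np.random.default_rng(1)
w_x = rng.standard_normal((4, 3))        # 3-dim embeddings, 4-dim hidden
w_h = rng.standard_normal((4, 4))
w_out = rng.standard_normal(4)
sentence = rng.standard_normal((5, 3))   # 5 tokens
score = rnn_sentiment(sentence, w_x, w_h, w_out)
```

An LSTM replaces the single `tanh` update with gated cell-state updates, which is what lets it remember sentiment cues from early in a long sentence.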
Welcome to the 3rd and final part of the “Comprehensive Guide to Machine Learning” series. Over the course of this series, we looked at several crucial concepts which play a significant role in developing a good machine learning model.
Concepts like data cleansing and EDA help a great deal in acquiring a deeper understanding of the data. Similarly, feature engineering and feature selection help ensure that only useful and relevant data is fed to the machine learning model. You can get a quick recap of these concepts by visiting the links below:
In this final post…
Arch Linux is one of the most popular minimalist Linux distributions in use today. It is a general-purpose (bleeding-edge) rolling-release Linux distribution which truly follows the KISS (“Keep It Simple, Silly”) ideology.
Arch Linux adheres to five principles:
Arch, however, does have a steep learning curve, and while the documentation for Arch Linux is comprehensive, many new users find it overwhelming and complicated.
The default Arch installation covers only a minimal base system and expects the end users to configure the system by themselves. …
Welcome to the 2nd part of the “Comprehensive Guide to Machine Learning” series. In the first part of this series, we explored the machine learning concepts below.
I hope you guys had fun playing around with these concepts on your own datasets. In this post, let’s dive deeper and explore the concepts below, which will significantly help in making a robust machine learning model.
Note: These 3 concepts form the backbone of the entire machine learning model development life-cycle. Any machine learning model is only as good as the features
In this comprehensive guide, I’ll try to explore the different gears and pinions that make a machine learning model tick. If you google the definition of “machine learning”, most sources will show the statement below:
“Machine Learning is the field of study that gives computers the capability to learn without being explicitly programmed”
I like to think of machine learning as raising a newborn baby. At first, it’s innocent of the ways of the world; it doesn’t know what to do and requires help at every step. …
Assistant Consultant at Tata Consultancy Services