Although nothing really changes except for the date, a new year fills everyone with the hope of starting things afresh. Adding a bit of planning, well-envisioned goals and a learning roadmap makes for a great recipe for a year full of growth.
This post intends to strengthen your plan by providing you with a learning framework, resources, and project ideas to build a solid portfolio of work showcasing expertise in data science.
The roadmap is based on my own experience in data science. This is not the be-all and end-all learning plan; it may change to better suit a specific domain or field of study. …
Back when I was working as a Systems Development Engineer at an investment management firm, one thing I learned is that quantitative finance requires you to be good at mathematics, programming, and data analysis.
Algorithmic or Quantitative trading can be defined as the process of designing and developing statistical and mathematical trading strategies. It is an extremely sophisticated area of finance.
So, the question is how does one get started with Algorithmic Trading?
I am going to walk you through five essential topics that you should start with to pave your way into this fascinating world of trading. I personally prefer Python as it offers the right degree of customization, ease and speed of development, testing frameworks, and execution speed, so all of these topics are focused on Python for trading. …
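To make the idea of a statistical trading strategy concrete, here is a minimal, hypothetical sketch of one classic starting point: a moving-average crossover signal computed with pandas. The prices below are made up purely for illustration; a real strategy would use actual market data, transaction costs, and proper backtesting.

```python
# Sketch of a moving-average crossover signal (illustrative only).
import pandas as pd

# Hypothetical closing prices for ten days.
prices = pd.Series([100, 101, 103, 102, 105, 107, 106, 108, 110, 109.0])

fast = prices.rolling(window=3).mean()  # short-term trend
slow = prices.rolling(window=5).mean()  # long-term trend

# Signal: 1 = be long while the fast average sits above the slow one, else 0.
# (Comparisons against the initial NaN windows evaluate to False, i.e. 0.)
signal = (fast > slow).astype(int)
print(signal.tolist())
```

Even this toy example touches all three skills mentioned above: the mathematics of rolling averages, the programming to compute them, and the data analysis to interpret the resulting signal.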
If the Skills section on your Resume states Python, R, SQL, Machine Learning, Deep Learning and you’re wondering why you get REJECTED every time, you should keep reading.
There are millions seeking a job in Data Science and the opportunities are limited. So, the important question is how would you stand apart from the pack?
This post tries to capture everything you need to build a kickass portfolio — so good that they can’t ignore you!
Wait, but why do I need a portfolio in the first place?
For someone who has received a Master’s degree or a Ph.D. from a top tier college, getting a job might not be that difficult. The institute adds credibility to your profile which employers look for. …
It’s only in hindsight that you can tell how your decisions and action plans have turned out.
I am a big-time consumer of self-help content — books, podcasts, blogs, newsletters; I have immersed myself in nearly every kind of resource.
I enjoy making new year resolutions, but I had never deeply examined the overall outcome of this habit of mine. So here I am, looking back over my failures, achievements, and struggles of 2020.
Before I start reviewing the year, it’s important to set the context and for you to have an idea of where my thoughts are coming from.
I left my first and only job in August 2018. …
So, November is over and I’ve come across a good list of blogs, research papers, books, and datasets that are worth deep-diving into.
This is the second part of the AI Monthly webcast, you can find the first one here.
Here’s what we are going to cover in the November AI updates:
First, we’ll look at two interesting pieces of news that you may have heard:
This is the second blog in the Stats series after explaining the taxonomy of data in the first blog. Here, we’ll learn to apply a few essential foundational concepts that help us describe the data using a set of statistical methods.
A sample is a snapshot of data from a larger dataset; this larger dataset, which is all of the data that could possibly be collected, is called the population. In statistics, the population is a broad, defined, and often theoretical set of all possible observations generated from an experiment or a domain.
The observations in a sample often fit a certain kind of distribution, commonly called the normal distribution and formally the Gaussian distribution. It is the most studied distribution, so much so that there is a subfield of statistics dedicated purely to Gaussian data. …
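The sample-versus-population idea can be sketched in a few lines of code. This is an illustrative example (assuming NumPy is available): we treat the standard normal distribution N(0, 1) as a theoretical population, draw a finite sample from it, and check that the sample's descriptive statistics approximate the population's parameters.

```python
import numpy as np

# A reproducible random generator so the "experiment" can be repeated.
rng = np.random.default_rng(42)

# The population here is the theoretical N(0, 1) distribution;
# the sample is a finite snapshot of 1,000 observations drawn from it.
sample = rng.normal(loc=0.0, scale=1.0, size=1_000)

# Descriptive statistics of the sample approximate the population parameters.
print(f"sample mean: {sample.mean():.3f}")       # close to the population mean, 0
print(f"sample std:  {sample.std(ddof=1):.3f}")  # close to the population std, 1
```

The larger the sample, the closer these estimates tend to get to the true population values — a preview of ideas we will formalize later in the series.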
After making the need for statistics in data science apparent in my previous blog, it’s time to dive right in and get hands-on with the statistical methods. This is going to be a series of blog posts and videos (on my YouTube channel).
I am starting this series of blogs on statistics and probability to help coders and analysts understand these concepts and methods. It is aimed at anyone who is familiar with Python programming and wants a better grip on statistics to master data science skills.
We know that data analysis has evolved well beyond its originally expected extent, driven by the rapid development of technology, the generation of more and bigger data, and the aggressive use of quantitative analysis across a variety of disciplines. …
The world of AI and Data Science is accelerating at an alarming rate. It becomes very hard for AI enthusiasts and learners to keep abreast of meaningful advances in the field. Applications, Research & Development, individual projects, proprietary software — every sector is applying DS and AI in its own remarkable way.
There are two main reasons I am starting this monthly AI webcast:
In this hyper-connected world, data is being generated and consumed at an unprecedented pace. As much as we enjoy this free flow of data, it invites abuse as well. Data professionals need to be trained in statistical methods not only to interpret numbers but to uncover such abuse and protect us from being misled.
Not many data scientists are formally trained in statistics, and there are very few good books and courses that teach these statistical methods from a data science perspective.
Through this post, I intend to shed some light on
Python 3.9.0 — the latest stable release of Python is out!
Open-source enthusiasts from all over the world have been working on new, enhanced, and deprecated features for the past year. Though beta versions had been rolling out for quite some time, the official release of Python 3.9.0 happened on October 5, 2020.
The official documentation contains all the details of the latest features and the changelog. Through this post, I’ll walk you through a few cool features that may come in handy in our day-to-day programming tasks.
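As a taste of what's covered, here is a short sketch of three of the new conveniences (it requires Python 3.9 or later to run):

```python
# 1. Dict union operators (PEP 584): merge mappings without unpacking tricks.
defaults = {"theme": "light", "lang": "en"}
overrides = {"theme": "dark"}
settings = defaults | overrides  # right-hand side wins on duplicate keys

# 2. String prefix/suffix removal (PEP 616).
filename = "report_2020.csv"
stem = filename.removesuffix(".csv")

# 3. Built-in generic types (PEP 585): hints like list[str] without typing.List.
def tag_all(items: list[str], tag: str) -> list[str]:
    return [f"{tag}:{item}" for item in items]

print(settings)
print(stem)
print(tag_all(["a", "b"], "x"))
```

On older interpreters the `|` operator on dicts and `str.removesuffix` raise errors, which is a quick way to confirm which version you're running.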