Prerequisites:
You must have Python 3.7 or later installed on your system.
You must have Hadoop and PySpark installed on your system.
You must have Spyder or Jupyter Notebook on your system. Both come with Anaconda; you just need to launch them after installing Anaconda.
If you work in Google Colab, there is no need to install Python or any other IDE; just sign in to Google Colab and install PySpark with the command “!pip install pyspark”.
You can also use a Jupyter notebook to build this project.
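Once PySpark is installed, a quick way to verify the setup is to start a SparkSession. A minimal sketch (the application name “ChurnModeling” is an arbitrary choice for this example):

```python
# Minimal setup check: start a SparkSession and print the Spark version.
from pyspark.sql import SparkSession

# "ChurnModeling" is an arbitrary app name chosen for illustration.
spark = SparkSession.builder.appName("ChurnModeling").getOrCreate()
print(spark.version)  # confirms PySpark is installed and working
```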
Skills required:
Python programming language
Basic statistical analysis skills
Machine learning concepts
What you’ll learn:
How to read data into a PySpark DataFrame (a sketch of this and the EDA step follows this list)
How to perform basic exploratory data analysis using PySpark
How to apply random forest classification using PySpark
How to evaluate the model in PySpark
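As a preview of the first two items, here is a minimal sketch of reading the data and running some basic EDA. The file name “Churn_Modelling.csv” is an assumption for illustration; substitute the path to your copy of the dataset:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ChurnModeling").getOrCreate()

# Read the CSV into a PySpark DataFrame, inferring column types from the file.
# "Churn_Modelling.csv" is a placeholder file name.
df = spark.read.csv("Churn_Modelling.csv", header=True, inferSchema=True)

# Basic exploratory data analysis.
df.printSchema()                     # column names and types
df.show(5)                           # first five rows
df.describe().show()                 # summary statistics for numeric columns
df.groupBy("Exited").count().show()  # class balance of the target column
```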
Problem Statement or Description:
This project shows how to apply the random forest classification algorithm using PySpark on a churn modeling dataset. The dataset contains 12 feature columns and a target column, “Exited”, which indicates whether or not a customer closed their account with the bank. In this project, you build a model that predicts whether a customer will close their account, based on the feature columns. A sketch of these modeling steps follows.
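The sketch below makes two assumptions: Spark 3.x (where OneHotEncoder accepts multiple input columns), and the column names of the public churn modeling dataset (CreditScore, Geography, Gender, Age, Tenure, Balance, NumOfProducts, HasCrCard, IsActiveMember, EstimatedSalary). Only “Exited” is named in the description above, so treat the rest as assumptions about your copy of the data:

```python
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler
from pyspark.ml.classification import RandomForestClassifier

# Index the categorical columns, then one-hot encode the indices.
indexers = [StringIndexer(inputCol=c, outputCol=c + "_idx")
            for c in ["Geography", "Gender"]]
encoder = OneHotEncoder(inputCols=["Geography_idx", "Gender_idx"],
                        outputCols=["Geography_vec", "Gender_vec"])

# Combine the numeric and encoded columns into a single feature vector.
assembler = VectorAssembler(
    inputCols=["CreditScore", "Age", "Tenure", "Balance", "NumOfProducts",
               "HasCrCard", "IsActiveMember", "EstimatedSalary",
               "Geography_vec", "Gender_vec"],
    outputCol="features")

rf = RandomForestClassifier(labelCol="Exited", featuresCol="features",
                            numTrees=100, seed=42)

# Chain preprocessing and the classifier into one pipeline.
pipeline = Pipeline(stages=indexers + [encoder, assembler, rf])

# "df" is the DataFrame from the reading step sketched earlier.
train_df, test_df = df.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train_df)
predictions = model.transform(test_df)
```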
Key highlights of the project or essence:
This project is about classification analysis.
This project shows you how to read the data and perform some basic exploratory data analysis using PySpark.
This project shows you how to perform data preprocessing.
This project shows you how to apply random forest classification using PySpark.
At the end of the project, you evaluate the model (a sketch of this step follows the list).
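A minimal sketch of the evaluation step, assuming “predictions” is the test-set output of the fitted pipeline shown earlier:

```python
from pyspark.ml.evaluation import (BinaryClassificationEvaluator,
                                   MulticlassClassificationEvaluator)

# Area under the ROC curve, computed from the model's raw prediction column.
auc = BinaryClassificationEvaluator(labelCol="Exited").evaluate(predictions)

# Accuracy of the hard 0/1 predictions on the test set.
acc = MulticlassClassificationEvaluator(labelCol="Exited",
                                        metricName="accuracy").evaluate(predictions)

print(f"AUC: {auc:.3f}  accuracy: {acc:.3f}")
```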
Packages and modules used:
pyspark
VectorAssembler
StringIndexer
OneHotEncoder
RandomForestClassifier
BinaryClassificationEvaluator
MulticlassClassificationEvaluator
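For reference, a sketch of where these classes live inside the pyspark package (assuming Spark 3.x):

```python
# Import paths for the classes listed above (Spark 3.x).
from pyspark.sql import SparkSession                # entry point to PySpark
from pyspark.ml.feature import (VectorAssembler,    # assembles the feature vector
                                StringIndexer,      # categorical -> numeric index
                                OneHotEncoder)      # index -> one-hot vector
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import (BinaryClassificationEvaluator,
                                   MulticlassClassificationEvaluator)
```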
Recommended projects:
Chicago crime data analysis
Census income dataset
Student performance
Divorce prediction
Cervical cancer risk factor
Skills:
Data Science, Machine Learning, PySpark, pandas, Classification