What is PySpark?
PySpark is the Python API for Apache Spark, released by the Apache Spark community to support Python with Spark. Using PySpark, you can easily create and work with RDDs from the Python programming language. Numerous features make PySpark a great framework for working with huge datasets, and whether the goal is to run computations on large datasets or simply to analyze them, Data Engineers are increasingly switching to this tool.
Key Features of PySpark
Real-time computations: Because of the in-memory processing in the PySpark framework, it shows low latency.
Polyglot: The underlying Spark framework is compatible with several languages, such as Scala, Java, Python, and R, which makes it one of the most preferred frameworks for processing huge datasets.
Caching and disk persistence: The framework provides powerful caching and good disk persistence (a short sketch follows this list).
Fast processing: The PySpark framework is way faster than other traditional frameworks for Big Data processing.
Works well with RDDs: Python programming language is dynamically typed, which helps when working with RDDs.
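To make the caching and disk persistence point concrete, here is a minimal, hypothetical sketch that persists an RDD in memory and spills to disk when it does not fit. It assumes PySpark is already installed and a local SparkContext can be created:

from pyspark import SparkContext, StorageLevel

sc = SparkContext("local", "Caching example")
rdd = sc.parallelize(range(1000))

# keep the RDD in memory and spill to disk if it does not fit
rdd.persist(StorageLevel.MEMORY_AND_DISK)

print(rdd.count())  # the first action computes and caches the RDD
print(rdd.count())  # later actions reuse the cached data
sc.stop()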
Spark with Python vs Spark with Scala
As already discussed, Python is not the only programming language that can be used with Apache Spark. Data Scientists already prefer Spark because of the several benefits it has over other Big Data tools, but choosing which language to use with Spark is a dilemma they face.
Being one of the most popular languages for Big Data Analytics, Python has gained so much popularity that it would not be surprising if it became the de facto language for evaluating and working with large datasets and Machine Learning in the coming years.
The most commonly used languages with Spark are Python and Scala. If you are going to learn PySpark (Spark with Python), it is important to know why and when to use Spark with Python instead of Spark with Scala. This section explains the basic criteria to keep in mind when choosing between Python and Scala for working with Apache Spark.
Installation on Windows: In this section, you will learn how to install PySpark on a Windows system step by step.
Download the latest version of Spark from the official Spark website
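Once Spark is downloaded and your environment points to it (or if you have instead installed the pyspark package from PyPI), a quick way to confirm that the installation is visible to Python is to import it. This is only a minimal sanity check; the exact setup steps depend on how you installed Spark:

# quick sanity check that PySpark is importable
import pyspark

print(pyspark.__version__)  # prints the installed PySpark version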
What is SparkConf?
Before running any Spark application on a local cluster or on a dataset, you need to set some configurations and parameters. This is done with the help of SparkConf. As the name suggests, it offers configurations for any Spark application.
Features of SparkConf and Their Uses
Here is a list of some of the most commonly used methods of SparkConf when working with PySpark:
set(key, value): This method is used to set a configuration property.
setMaster(value): This method is used to set the master URL.
setAppName(value): This method is used to set the application name.
get(key, defaultValue=None): This method is used to get the configuration value of a key.
setSparkHome(value): This method is used to set the Spark installation path on worker nodes.
Code to Run SparkConf:
>>> from pyspark.conf import SparkConf
>>> from pyspark.context import SparkContext
>>> conf = SparkConf().setAppName("PySpark App").setMaster("local[2]")
>>> conf.get("spark.master")
'local[2]'
>>> conf.get("spark.app.name")
'PySpark App'
What is PySpark SparkContext?
SparkContext is the entry point for any Spark application or functionality. It is the first and foremost thing that gets initiated when you run any Spark application. In the PySpark shell, a SparkContext is already available as sc by default, so creating another SparkContext there will throw an error.
Parameters
SparkContext has some parameters that are listed below:
Master: The URL of the cluster SparkContext connects to
AppName: The name of your job
SparkHome: A Spark installation directory
PyFiles: The .zip or .py files to send to the cluster and add to the PYTHONPATH
Environment: Worker node environment variables
BatchSize: The number of Python objects represented as a single Java object. Set it to 1 to disable batching, to 0 to choose the batch size automatically based on object sizes, or to -1 to use an unlimited batch size
Serializer: The RDD serializer to use
Conf: An object of SparkConf to set all the Spark properties
profiler_cls: A class of custom profiler used to do profiling; pyspark.profiler.BasicProfiler is the default
Code to Run SparkContext:
from pyspark import SparkContext

# create a SparkContext with master URL "local" and application name "First App"
sc = SparkContext("local", "First App")
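If you have already built a SparkConf object like the one shown in the previous section, you can pass it to the SparkContext constructor instead of supplying the master URL and application name directly. A minimal sketch:

from pyspark import SparkConf, SparkContext

# build the configuration first, then hand it to the context
conf = SparkConf().setAppName("First App").setMaster("local[2]")
sc = SparkContext(conf=conf)

print(sc.appName)  # First App
sc.stop()          # release the context when you are done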
Classes of Spark SQL and DataFrames:
pyspark.sql.SparkSession: The main entry point for DataFrame and SQL functionality.
pyspark.sql.DataFrame: A distributed collection of data grouped into named columns.
pyspark.sql.Column: A column expression in a DataFrame.
pyspark.sql.Row: A row of data in a DataFrame.
pyspark.sql.GroupedData: Aggregation methods, returned by DataFrame.groupBy().
pyspark.sql.DataFrameNaFunctions: Methods for handling missing data (null values).
pyspark.sql.DataFrameStatFunctions: Methods for statistics functionality.
pyspark.sql.functions: A list of built-in functions available for DataFrames.
pyspark.sql.types: A list of available data types.
pyspark.sql.Window: For working with window functions.
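To see how a few of these classes fit together, here is a small, hypothetical sketch that creates a SparkSession, builds a DataFrame from local data, and uses GroupedData via groupBy(). The column names and values are made up for illustration:

from pyspark.sql import SparkSession

# the SparkSession is the entry point for DataFrame and SQL functionality
spark = SparkSession.builder.appName("Spark SQL classes").getOrCreate()

# build a small DataFrame from local data (hypothetical columns)
df = spark.createDataFrame(
    [("sales", 100), ("sales", 250), ("hr", 80)],
    ["dept", "amount"],
)

# groupBy() returns a GroupedData object; sum() aggregates it back into a DataFrame
df.groupBy("dept").sum("amount").show()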
Analyze Data using Spark SQL
Relational databases are used by almost all organizations for various tasks – from managing and tracking a huge amount of information to organizing and processing transactions. It’s one of the first concepts we are taught in coding school.
And let’s be grateful for that because this is a crucial cog in a data scientist’s skillset! You simply cannot get by without knowing how databases work. It’s a key aspect of any machine learning project.
Structured Query Language (SQL) is easily the most popular language when it comes to databases. Unlike many other programming languages, it is easy to learn and helps us get started with the data extraction process. For most data science jobs, proficiency in SQL ranks higher than proficiency in most other programming languages.
Features of Spark SQL
Spark SQL has a ton of awesome features but I wanted to highlight a few key ones that you’ll be using a lot in your role:
Query Structured Data within Spark Programs: Most of you might already be familiar with SQL, so you are not required to learn how to define a complex function in Python or Scala to use Spark. You can use the exact same query to get results from your bigger datasets!
Compatible with Hive: Not only SQL, but you can also run the same Hive queries using the Spark SQL Engine. It allows full compatibility with current Hive queries
One Way to Access Data: In typical enterprise-level projects, you do not have a common source of data. Instead, you need to handle multiple types of files and databases. Spark SQL supports almost every type of file and gives you a common way to access a variety of data sources, like Hive, Avro, Parquet, JSON, and JDBC
Performance and Scalability: When working with large datasets, there is a chance that faults occur while a query is running. Spark SQL supports full mid-query fault tolerance, so it can work with even a thousand nodes simultaneously
User-Defined Functions: UDFs are a feature of Spark SQL that define new column-based functions, extending the vocabulary of Spark SQL for transforming datasets (a short sketch follows this list)
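As a brief illustration of the UDF feature mentioned above, the following hypothetical sketch registers a small Python function as a Spark SQL UDF and applies it to a DataFrame column. The data and column names are made up for illustration:

from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("UDF example").getOrCreate()
df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])

# wrap a plain Python function as a column-based UDF
capitalize = udf(lambda s: s.capitalize(), StringType())

# apply the UDF to create a new column
df.withColumn("name_capitalized", capitalize(df["name"])).show()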
Executing SQL Commands with Spark
I have created a random dataset of 25 million rows. You can download the entire dataset here. We have a text file with comma-separated values. So, first, we will import the required libraries, read the dataset, and see how Spark will divide the data into partitions:
# importing required libraries
from pyspark.sql import SQLContext
from pyspark.sql import Row

# 'sc' is the SparkContext, available by default in the PySpark shell
# read the text data and cache it in memory
raw_data = sc.textFile('sample_data_final_wh.txt').cache()

# check how many partitions Spark has divided the data into
raw_data.getNumPartitions()
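Since this section is about executing SQL commands, here is one way the raw RDD above could be turned into a DataFrame and queried with SQL. This is a sketch: the field layout (id, category, value) and the query are assumptions for illustration, not details taken from the sample file:

from pyspark.sql import SQLContext, Row

# split each comma-separated line into fields (assumed layout: id, category, value)
parts = raw_data.map(lambda line: line.split(','))
rows = parts.map(lambda p: Row(id=p[0], category=p[1], value=float(p[2])))

# build a DataFrame and register it as a temporary SQL view
sqlContext = SQLContext(sc)
df = sqlContext.createDataFrame(rows)
df.createOrReplaceTempView("sample_data")

# run a plain SQL query against the view
sqlContext.sql(
    "SELECT category, COUNT(*) AS cnt FROM sample_data GROUP BY category"
).show()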
How to Manage Python Dependencies in PySpark
There are different methods that can be used to manage Python dependencies in PySpark, as described below:
Using Conda
Conda is one of the most widely-used Python package management systems. PySpark users can directly use a Conda environment to ship their third-party Python packages by leveraging conda-pack which is a command line tool creating relocatable Conda environments.
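As an illustration, once a Conda environment has been packed with conda-pack into an archive (the file name pyspark_conda_env.tar.gz below is an assumed example), the archive can be shipped to the executors on Spark 3.1 and above through the spark.archives configuration (spark.yarn.dist.archives on YARN):

import os
from pyspark.sql import SparkSession

# point the executors at the Python interpreter inside the unpacked archive
os.environ["PYSPARK_PYTHON"] = "./environment/bin/python"

# ship the packed Conda environment; '#environment' is the directory it unpacks into
spark = SparkSession.builder.config(
    "spark.archives",  # use 'spark.yarn.dist.archives' on YARN
    "pyspark_conda_env.tar.gz#environment",
).getOrCreate()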
Using Virtualenv
Virtualenv is a Python tool to create isolated Python environments. Since Python 3.3, a subset of its features has been integrated into Python as a standard library under the venv module. In the upcoming Apache Spark 3.1, PySpark users can use virtualenv to manage Python dependencies in their clusters by using venv-pack in a similar way as conda-pack. In the case of Apache Spark 3.0 and lower versions, it can be used only with YARN.
Using PEX
PySpark can also use PEX to ship the Python packages together. PEX is a tool that creates a self-contained Python environment. This is similar to Conda or virtualenv, but a .pex file is executable by itself.
Contact us to get PySpark Assignment Help, PySpark Homework Help, and PySpark Project Help, and get instant help at an affordable price. You can send your request directly to contact@codersarts.com