Posts

Showing posts with the label pyspark

Split Datasets

The objective of this article is to transform a dataset from rows to columns using the explode() method. The scope is to understand how to unnest, or explode, a dataset using the parallel-processing framework PySpark and the native Python library Pandas. The dataset looks like this:

dept,name
10,vivek#ruby#aniket
20,rahul#john#amy
30,shankar#jagdish
40,
50,yug#alex#alexa

Pandas explode()

import pandas as pd
pan_df = pd.read_csv(r'explode.csv')
df_exp = pan_df.assign(name=pan_df['name'].str.split('#')).explode('name')
df_exp

Output: the dataset is transformed successfully and we are able to create new rows from the nested dataset. The Pandas way of exploding is simple, crisp, and straightforward unless the dataset is complex. The next section of this article covers the PySpark way of exploding, or unnesting, a dataset.

PySpark explode()

Import libraries and connect to Spark:

from pyspark import SparkContext, SparkConf
import pyspark
from pyspark.sql import SparkSes...
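The excerpt cuts off mid-import; a minimal sketch of how the PySpark side might look, assuming the same explode.csv file (the app name and column handling here are illustrative, not the article's exact code):

from pyspark.sql import SparkSession
from pyspark.sql.functions import split, explode

spark = SparkSession.builder.appName('explode_demo').getOrCreate()

# Read the same CSV used in the Pandas example
df = spark.read.csv('explode.csv', header=True, inferSchema=True)

# Split the '#'-delimited name column into an array, then explode it into
# one row per element; rows with a null name (dept 40) are dropped by explode()
df_exp = df.withColumn('name', explode(split(df['name'], '#')))
df_exp.show()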

Spark Window Functions

The objective of this article is to understand PySpark window functions. The blog does a comparative study of PySpark window functions and the analytical functions of relational database systems such as Oracle Database. Spark window functions operate on a group of rows (a frame or partition) and return a single value for every input row. To perform an operation on a group, we first need to partition the data using Window.partitionBy(), and for the row number and rank functions we additionally need to order the partitioned data using the orderBy() clause.

Connect to Spark:

import pyspark
from pyspark.sql import SparkSession
print('modules imported')
spark = SparkSession.builder.appName('Spark_window_functions').getOrCreate()

Load the dataset:

emp_df = spark.read.csv(r'emp.csv', header=True, inferSchema=True)
emp_df.show(10)

Import the necessary libraries:

from pyspark.sql.window import Window
from pyspark.sql.functions import col, row_number, rank, dense_rank
from pyspark.sql import functions as ...
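To illustrate the pattern the excerpt describes, a minimal sketch using the emp_df loaded above; the dept and salary column names are assumptions about emp.csv, not confirmed by the article:

from pyspark.sql.window import Window
from pyspark.sql.functions import col, row_number, rank, dense_rank

# Partition by department, order by salary descending within each partition
# ('dept' and 'salary' are assumed column names)
win = Window.partitionBy('dept').orderBy(col('salary').desc())

# Each function returns a single value for every input row
emp_df.withColumn('row_num', row_number().over(win)) \
      .withColumn('rank', rank().over(win)) \
      .withColumn('dense_rank', dense_rank().over(win)) \
      .show()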

Ingest Excel Data

Data can come in any format; recently I got the chance to work on Excel datasets. Pandas is a very efficient library for working with different types of datasets, but its performance degrades once the data size grows beyond MBs into GBs. Parallel computing was designed for efficient processing of GB-scale datasets, and Spark shines here. The library com.crealytics:spark-excel_xxx allows querying Excel spreadsheets as Spark DataFrames, leveraging the parallel computing infrastructure. The objective of this article is to understand the usage of the spark-excel library with the Python version of Spark, PySpark.

Connect to Spark (standalone cluster):

import pyspark
from pyspark.sql import SparkSession
spark = SparkSession.builder \
    .appName('Spark_DB') \
    .config("spark.jars.packages", "com.crealytics:spark-excel_2.11:0.12.2") \
    .getOrCreate()

com.crealytics:spark-excel_2.11:0.12.2 is the crealytics spark-excel package used fo...
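With the package on the classpath, reading a workbook is a format/load call. A minimal sketch, assuming a hypothetical sales.xlsx in the working directory; option names vary between spark-excel releases (older 0.x versions like the one above use useHeader, newer ones use header), so treat these options as assumptions to verify against the version you load:

# 'sales.xlsx' is a hypothetical file name for illustration
excel_df = spark.read \
    .format("com.crealytics.spark.excel") \
    .option("useHeader", "true") \
    .option("inferSchema", "true") \
    .load("sales.xlsx")
excel_df.show(5)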

Append Datasets

In the data universe, joins and unions are the most critical and frequently performed operations. In my experience, almost every other operation is either a join or a union; joins are inevitable, and so are unions. A previous article covered how joins work in Pandas: https://letscodewithvivek.blogspot.com/2021/12/python-joins.html. The scope of this article is to understand how the concat() method helps us achieve the union of data frames.

concat()

Concatenate pandas objects along a particular axis, with optional set logic along the other axes. First, create two data frames to understand how the concat method works, then concat the data frames on axis=0, the default operation (union):

import pandas as pd
df1 = pd.DataFrame({'Name': ['Vivek', 'Amy', 'Vishakha', 'Alice', 'Ayoung'],
                    'subject_id': ['sub1','sub2','sub4','sub6','sub5'],
                    'Marks_scored': [98,90,87,69,...
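Since the excerpt truncates mid-definition, here is a minimal self-contained sketch of the same concat pattern; the final mark in df1 and the whole of df2 are illustrative stand-ins, not the article's data:

import pandas as pd

df1 = pd.DataFrame({'Name': ['Vivek', 'Amy', 'Vishakha', 'Alice', 'Ayoung'],
                    'subject_id': ['sub1', 'sub2', 'sub4', 'sub6', 'sub5'],
                    'Marks_scored': [98, 90, 87, 69, 78]})  # last value assumed
df2 = pd.DataFrame({'Name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
                    'subject_id': ['sub2', 'sub4', 'sub3', 'sub6', 'sub5'],
                    'Marks_scored': [89, 80, 79, 97, 88]})  # illustrative frame

# axis=0 is the default: rows of df2 are appended below df1 (a union)
union_df = pd.concat([df1, df2], ignore_index=True)
print(union_df)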
