Computing the footprint by need of any EXIOBASE year and region¶

This notebook references an Aggregations folder containing the aggregation and Support Excel files needed to run it. See this repository for the files and further details: https://github.com/eNextHub/Footprint

Start by importing pandas, mario and numpy

In [ ]:
import pandas as pd
import mario
import numpy as np

Importing the chosen EXIOBASE database and country with MARIO¶

In [ ]:
#%% Now it is time to import our databases
user = r'your path' # Your path to the folder containing the EXIOBASE archives
Modes = ['pxp'] # Which EXIOBASE versions do you want to use?
Years = [2019] # Which years?
Worlds = {} # Initializing the dictionary of all the worlds
sN = slice(None) # Useful to include all levels when slicing dataframes
Coun_info = pd.read_excel('Aggregations/Support.xlsx', sheet_name='Countries', index_col=[0], header=[0]) # Some information is derived from a support file. See the GitHub repository for more information.

# Select the levels of information of interest
Consumption_cats = ['Final consumption expenditure by households']
Countries = ['IT']

Importing a version of the chosen database using the mario.parse_exiobase function

In [ ]:
World = mario.parse_exiobase(path=user+f'/IOT_{Years[0]}_{Modes[0]}.zip', unit='Monetary', table='IOT') # Import the right EXIOBASE version and year
World.aggregate('Aggregations/Aggregation.xlsx') # Aggregate the database using a predefined aggregation file

Building the indices used to store the results extracted from MARIO's output

In [ ]:
Regions = World.get_index('Region')
Sectors = World.get_index('Sector')
Sat_accounts = World.get_index('Satellite account')
Res_col = pd.Index(Sat_accounts, name='Satellite accounts')
Res_row = pd.MultiIndex.from_product([Modes,Years,Regions,Sectors], names=['Mode','Year','Region','Sector']) # Building the index
Res = pd.DataFrame(0, index=Res_row, columns=Res_col) # For the results database we want to create
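
As an optional sanity check (a small sketch, not part of the original workflow), the shape of the empty results frame can be verified against the index levels:

In [ ]:
# One row per (mode, year, region, sector) combination, one column per satellite account
assert Res.shape == (len(Modes)*len(Years)*len(Regions)*len(Sectors), len(Sat_accounts))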

Computing, for each satellite account, the footprint associated with the country and consumption category chosen above¶

Loop through each combination of mode and year, parse the corresponding EXIOBASE data, and compute the footprints; the formula applied inside the loop is sketched below.
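
For each satellite account e, MARIO's footprint coefficient row f_e (which, as the log message after the cell suggests, MARIO derives from the Leontief inverse w) is diagonalised and multiplied by the final-demand vector y of the selected country and consumption category, so the impact embodied in each supplied product i is

$$\text{footprint}_{e,i} = f_{e,i}\,y_i \qquad\text{and}\qquad \text{footprint}_{e}^{\text{tot}} = \sum_i f_{e,i}\,y_i$$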

In [ ]:
for m in Modes:
    for y in Years:
        path = user+f'/IOT_{y}_{m}.zip' # Complete the path
        Worlds[m,y] = mario.parse_exiobase_3(path, name=f'{m} - {y}') # Import the right EXIOBASE version and year
        Worlds[m,y].aggregate('Aggregations/Aggregation.xlsx')

        for e in Sat_accounts:
            f = Worlds[m,y].f.loc[e] # Footprint intensity row for satellite account e
            f_diag = np.diag(f)
            Y = Worlds[m,y].Y.loc[:,(Countries,sN,Consumption_cats)].sum(1) # Final demand of the selected country and consumption category
            Calc = pd.DataFrame(f_diag@Y.values, index=Worlds[m,y].Y.index, columns=[e])

            for r in Regions:
                for p in Sectors:
                    Res.loc[(m,y,r,p),e] = Calc.loc[(r,sN,p),e].iloc[0] # Writing the results of the calculation in the results database
            for c in Countries:
                # Reallocation of the households' direct emissions to two dedicated categories,
                # based on the shares assumed in the support file
                Res.loc[(m,y,c,'Heating'),e] = Coun_info.loc[c,'GHG emiss Heating share']*Worlds[m,y].EY.loc[e,(Countries,sN,Consumption_cats)].sum()
                Res.loc[(m,y,c,'Driving'),e] = Coun_info.loc[c,'GHG emiss Driving share']*Worlds[m,y].EY.loc[e,(Countries,sN,Consumption_cats)].sum()
Database: to calculate f following matrices are need.
['w'].Trying to calculate dependencies.

Adding a GHG indicator using 100-year GWP factors

In [ ]:
Res['GHG'] = Res['CH4']*25 + Res['CO2']*1 + Res['N2O']*298 # 100-year GWPs (IPCC AR4)
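
As an alternative sketch (not in the original notebook), the characterisation factors can be stored in a dictionary, which keeps them explicit and easy to swap for a different GWP set:

In [ ]:
GWP100 = {'CO2': 1, 'CH4': 25, 'N2O': 298} # 100-year GWPs (IPCC AR4), as used above
Res['GHG'] = sum(Res[gas]*factor for gas, factor in GWP100.items())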

Mapping the needs of final consumers to the sectors of the database

In [ ]:
Map1 = pd.read_excel('Aggregations/Support.xlsx', sheet_name='Sectors to needs', index_col=[0], header=[0]).to_dict()['Need']
Map2 = pd.read_excel('Aggregations/Support.xlsx', sheet_name='Sectors to needs', index_col=[0], header=[0]).to_dict()['Settori']
Res['Need'] = Res.index.get_level_values('Sector').map(Map1)
Res['Settori'] = Res.index.get_level_values('Sector').map(Map2)
RES = Res.reset_index().set_index(['Mode','Year','Region','Sector','Settori','Need'])
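
An optional check (a sketch, assuming every sector should be covered by the 'Sectors to needs' sheet): sectors missing from the mapping would appear as NaN in the 'Need' column.

In [ ]:
# List any sectors left without a mapped need
unmapped = Res.loc[Res['Need'].isna()].index.get_level_values('Sector').unique()
if len(unmapped) > 0:
    print('Sectors without a mapped need:', list(unmapped))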

Exploring results using MARIO's query function¶

World.query can be used to extract absolute differences between matrices in different scenarios.

In this case there are no scenarios other than the baseline, but you can see that when a scenario is compared with itself no difference is detected.

In [ ]:
World.query('E', scenarios=['baseline'], type='absolute', base_scenario='baseline')
Out[ ]:
A 3 × 9800 DataFrame of zeros: rows are the satellite accounts CO2, CH4 and N2O, columns are all Region × Sector pairs. All values are zero, confirming that comparing the baseline scenario with itself yields no differences.

Importing the color palette (here the eNextGen one is used)

In [ ]:
Colors = pd.read_excel('Aggregations/Support.xlsx', sheet_name='Needs colors', index_col=[0], header=[0]).to_dict()['Color'] # Some information is derived from a support file. See the GitHub repository for more information.

Plotting the results¶

The output labels are in Italian because the example is built for an application to Italy.

In [ ]:
import plotly.express as px

plot = RES.groupby(['Need','Settori']).sum().reset_index()
plot['% GHG'] = round(plot['GHG'] / plot['GHG'].sum()*100,1).astype(str) + '%'
plot['GHG pc'] = round(plot['GHG']/Coun_info.loc[Countries[0],'Population']).astype(str) + ' kgCO2eq per capita'


# Make a dataframe with GHG emissions per capita by need, in tonnes
GHG_need = round(plot.groupby('Need').sum(numeric_only=True)/Coun_info.loc[Countries[0],'Population']/1000,1).reset_index()
plot['GHG_need'] = plot['Need'].map(GHG_need.set_index('Need')['GHG'])

# Add a column to plot in which the name of the need and the GHG_need are displayed together
plot['Need and GHG'] = plot['Need'] + ' ~' + plot['GHG_need'].astype(str) + ' ton'

fig = px.treemap(plot, path=['Need and GHG','Settori'], values='GHG', color='Need', color_discrete_map=Colors, hover_data=['% GHG','GHG pc'])
fig.update_layout(template='plotly_white', font_family='HelveticaNeue')
fig.update_layout(
    plot_bgcolor='black', # Set dark background
    paper_bgcolor='black')
fig.update_traces(marker=dict(cornerradius=15))

# Add percentage in each section of the treemap
fig.data[0].textinfo = 'label+percent root'

# Increase the size and set the color of the text inside the treemap
fig.data[0].insidetextfont.size = 30
fig.data[0].insidetextfont.color = 'black'

# Add title showing the total emissions
fig.update_layout(title_text=f"Emissioni totali di gas serra: ~{round(plot['GHG'].sum()/Coun_info.loc[Countries[0],'Population']*1e-3,1)} tonCO2eq per italiano all'anno")
fig.update_layout(title_x=0.5)

# Decrease distance between title and treemap
fig.update_layout(title_y=0.95)

# Make title white
fig.update_layout(title_font_color='white')

# Make the figure available in the notebook
import plotly.offline as pyo
pyo.init_notebook_mode(connected=True)
pyo.iplot(fig)
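
Optionally, the figure can also be saved as a standalone HTML file for sharing (the file name below is just an example):

In [ ]:
fig.write_html('footprint_treemap.html') # hypothetical output file name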