solrad package#

Subpackages#

Submodules#

solrad.Site module#

This module contains all functions, methods and classes related to the computation and manipulation of a site’s geographical and meteorological data.

class solrad.Site.Site(latitude, longitude, altitude, tz, name, SF_model='Rural')#

Bases: object

compute_aerosol_asymmetry_factor_using_SF_model(interp_method='linear')#

Compute the Aerosol Asymmetry Factor within the spectral range 300 nm - 4000 nm, for the site, using the Shettle and Fenn model, as detailed in [1].

Parameters:

interp_method ({"linear", "nearest", "cubic"}) – Method of interpolation to use on the data. Default is “linear”.

Return type:

None

Produces

Filled self.aerosol_asymmetry_factor attribute. More specifically, it completely fills all the DataFrames contained by the self.aerosol_asymmetry_factor dict.

Notes

1) Uses the “RH” column in the dataframes stored by the self.climate_and_air_data attribute for the calculation.

Warns:

1) Warning – f”NaN/None values detected in self.climate_and_air_df[{date}][‘RH’]. NaN input values will produce NaN output values. None input values will raise Exceptions.”

References

[1] Shettle, Eric & Fenn, Robert. (1979). Models for the Aerosols of the Lower Atmosphere and the Effects of Humidity Variations on their Optical Properties. Environ. Res.. 94.

compute_angstrom_turbidity_exponent_500nm_using_SF_model()#

Compute the Angstrom turbidity exponent at 500 nm for the site, using the Shettle and Fenn model, as detailed in [1].

Parameters:

None

Return type:

None

Produces

Partially filled self.climate_and_air_data attribute. Specifically, it fills the “alpha_500nm” column of all DataFrames contained by the self.climate_and_air_data dict.

Warns:

1) Warning – f”NaN/None values detected in self.climate_and_air_df[{date}][‘RH’]. NaN input values will produce NaN output values. None input values will raise Exceptions.”

References

[1] Shettle, Eric & Fenn, Robert. (1979). Models for the Aerosols of the Lower Atmosphere and the Effects of Humidity Variations on their Optical Properties. Environ. Res.. 94.

compute_aod_500nm_using_satelite_data(path, percentile=0.5, interp_method='linear')#

Computes the monthly year-wise average or ‘percentile’-th percentile of Aerosol Optical Depth at 500nm (aod_500nm) for the site. The raw data used for the calculation is extracted from the database referenced in [1].

Parameters:
  • path (path-str) – Path of the folder where the aod_500nm raw.npy and filled_NaNs.npy files are stored. That is, the path to the local aod_500nm database.

  • percentile (float or None, optional) – If float, it computes the monthly year-wise ‘percentile’-th percentile of aod_500nm. If NONE, it computes the monthly year-wise average of aod_500nm. Default is 0.5.

  • interp_method ({'linear', 'nearest', 'slinear', 'cubic', 'quintic'}, optional) – The method of interpolation to perform when computing the res[“filled_nans_data_funcs”], res[“avg_data_funcs”] and res[“percentile_data_funcs”] dictionaries. Supported methods are the same as supported by scipy’s RegularGridInterpolator. Default is “linear”.

Return type:

None

Produces

Partially filled self.climate_and_air_data attribute. Specifically, it fills the “aod_500nm” column of all the DataFrames contained by the self.climate_and_air_data dict.

Notes

1) Uses the “alpha_500nm” column in the dataframes stored by the self.climate_and_air_data attribute for the calculation.

Warns:

1) Warning – f”NaN/None values detected in self.climate_and_air_df[{date}][‘alpha_500nm’]. NaN input values will produce NaN output values. None input values will raise Exceptions.”

References

[1] Copernicus Climate Change Service, Climate Data Store, (2019): Aerosol properties gridded data from 1995 to present derived from satellite observation. Copernicus Climate Change Service (C3S) Climate Data Store (CDS). DOI: 10.24381/cds.239d815c

compute_cummulative_time_integral_of_irradiances()#

Computes the cumulative time integral of the columns {“G(h)”, “Gb(n)”, “Gd(h)”} in self.climate_and_air_data[date], for all dates.

Parameters:

None

Return type:

None

Produces

Partially filled self.climate_and_air_data attribute. Specifically, it adds the columns {“int G(h)”, “int Gb(n)”, “int Gd(h)”} to each DataFrame of the dict self.climate_and_air_data, containing the cumulative time integral of the columns {“G(h)”, “Gb(n)”, “Gd(h)”}, respectively. Units of these new columns are Wh/m^2.

Warns:

1) Warning – f”NaN/None values detected in self.climate_and_air_df[{date}][{col}]. NaN input values will produce NaN output values. None input values will raise Exceptions.”
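The conversion from W/m^2 samples to a cumulative Wh/m^2 column can be sketched with plain NumPy. The sample values below are hypothetical and this is only an illustration of the idea, not solrad’s actual implementation:

```python
import numpy as np

# Hypothetical GHI samples (W/m^2) at fractional hours of the day.
hours = np.array([6.0, 7.0, 8.0, 9.0, 10.0])
ghi = np.array([0.0, 100.0, 300.0, 500.0, 650.0])

# Cumulative trapezoidal time integral, with a leading zero so that the
# result aligns with the original samples. Units: Wh/m^2.
increments = 0.5 * (ghi[:-1] + ghi[1:]) * np.diff(hours)
int_ghi = np.concatenate([[0.0], np.cumsum(increments)])
```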

compute_extraterrestrial_normal_irradiance(method='nrel')#

Determines extraterrestrial radiation from day of year, using pvlib’s get_extra_radiation function.

Parameters:

method ({"pyephem", "spencer", "asce", "nrel"}, optional) – The method by which the extraterrestrial radiation should be calculated. The default is “nrel”.

Return type:

None.

Produces

Partially filled self.climate_and_air_data attribute. Specifically, it fills the “extra_Gbn” column of all the DataFrames contained by the self.climate_and_air_data dict.
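For reference, the “spencer” option corresponds to Spencer’s (1971) Fourier-series fit of the Sun-Earth distance correction. A stand-alone sketch of that series (not pvlib’s code; pvlib’s get_extra_radiation applies the same kind of correction with its own solar-constant default):

```python
import math

def extra_radiation_spencer(doy, solar_constant=1367.0):
    """Extraterrestrial normal irradiance (W/m^2) for a day of year,
    using Spencer's (1971) Fourier series for (R0/R)^2."""
    b = 2.0 * math.pi * (doy - 1) / 365.0
    roverr0_sqrd = (1.00011 + 0.034221 * math.cos(b) + 0.00128 * math.sin(b)
                    + 0.000719 * math.cos(2.0 * b) + 0.000077 * math.sin(2.0 * b))
    return solar_constant * roverr0_sqrd
```

The correction peaks near early January (Earth at perihelion) and dips near early July.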

compute_ozone_column_using_satelite_data(path, percentile=0.5, interp_method='linear')#

Computes the monthly year-wise average or ‘percentile’-th percentile of atmospheric ozone column for the site. The raw data used for the calculation is extracted from the database referenced in [1].

Parameters:
  • path (path-str) – Path of the folder where the ozone column raw.npy and filled_NaNs.npy files are stored. That is, the path to the local ozone column database.

  • percentile (float or None, optional) – If float, it computes the monthly year-wise ‘percentile’-th percentile of ozone column. If NONE, it computes the monthly year-wise average of ozone column. Default is 0.5.

  • interp_method ({'linear', 'nearest', 'slinear', 'cubic', 'quintic'}, optional) – The method of interpolation to perform when computing the res[“filled_nans_data_funcs”], res[“avg_data_funcs”] and res[“percentile_data_funcs”] dictionaries. Supported methods are the same as supported by scipy’s RegularGridInterpolator. Default is “linear”.

Return type:

None

Produces

Partially filled self.climate_and_air_data attribute. Specifically, it fills the “O3” column of all the DataFrames contained by the self.climate_and_air_data dict.

References

[1] Copernicus Climate Change Service, Climate Data Store, (2020): Ozone monthly gridded data from 1970 to present derived from satellite observations. Copernicus Climate Change Service (C3S) Climate Data Store (CDS). DOI: 10.24381/cds.4ebfe4eb

compute_ozone_column_using_van_Heuklon_model()#

Computes the ozone column values (in atm-cm) for the site, using van Heuklon’s Ozone model.

Parameters:

None

Return type:

None

Produces

Partially filled self.climate_and_air_data attribute. Specifically, it fills the “O3” column of all the DataFrames contained by the self.climate_and_air_data dict.

compute_single_scattering_albedo_using_SF_model(interp_method='linear')#

Compute the Single Scattering Albedo within the spectral range 300 nm - 4000 nm, for the site, using the Shettle and Fenn model, as detailed in [1].

Parameters:

interp_method ({"linear", "nearest", "cubic"}) – Method of interpolation to use on the data. Default is “linear”.

Return type:

None

Produces

Filled self.single_scattering_albedo attribute. More specifically, it completely fills all the DataFrames contained by the self.single_scattering_albedo dict.

Notes

1) Uses the “RH” column in the dataframes stored by the self.climate_and_air_data attribute for the calculation.

Warns:

1) Warning – f”NaN/None values detected in self.climate_and_air_df[{date}][‘RH’]. NaN input values will produce NaN output values. None input values will raise Exceptions.”

References

[1] Shettle, Eric & Fenn, Robert. (1979). Models for the Aerosols of the Lower Atmosphere and the Effects of Humidity Variations on their Optical Properties. Environ. Res.. 94.

compute_spectrally_averaged_aerosol_asymmetry_factor(spectral_range=(300, 4000))#

Compute the spectral average of the aerosol asymmetry factor, for the interval of wavelengths specified.

We take the self.aerosol_asymmetry_factor attribute, loop over all the DataFrames stored in it and compute the row-wise mean of the values for the interval of wavelengths specified by spectral_range. We then use the computed values to fill the “spectrally_averaged_aaf” column in all DataFrames of the self.climate_and_air_data attribute.

Parameters:

spectral_range (2-tuple of float) – Tuple containing the lower and upper bounds of wavelengths (in nm) that make up the spectral range meant for averaging the aerosol asymmetry factor.

Return type:

None

Produces

Partially filled self.climate_and_air_data attribute. Specifically, it fills the “spectrally_averaged_aaf” column of all DataFrames contained by the self.climate_and_air_data dict.

Notes

  1. Uses the self.aerosol_asymmetry_factor attribute for the calculation.
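The row-wise spectral averaging described above can be sketched stand-alone. The DataFrame layout here (one column per wavelength, in nm) and the values are hypothetical and may differ from solrad’s internal layout:

```python
import numpy as np
import pandas as pd

# Hypothetical aerosol-asymmetry-factor DataFrame: one row per time step,
# one column per wavelength in nm.
wavelengths = np.array([280.0, 300.0, 500.0, 4000.0, 4100.0])
df = pd.DataFrame([[0.60, 0.65, 0.70, 0.75, 0.80]], columns=wavelengths)

# Keep only the columns inside the requested spectral range, then average
# row-wise to get one value per time step.
lo, hi = 300, 4000
cols = [w for w in df.columns if lo <= w <= hi]
spectrally_averaged_aaf = df[cols].mean(axis=1)
```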

compute_sun_data(NaN_handling='strict')#

Compute solar position data and related parameters for a specific location and time.

Parameters:

NaN_handling ({"strict", "loose", "null"}, optional) – How to handle NaN and None values when present in the “SP” and “T2m” columns of the DataFrames stored in self.climate_and_air_data. If “strict”, an Exception is raised. If “loose”, default values are used instead (see Notes for more info). If “null”, nothing is done about it and NaN/None values are passed directly into the calculation, which may produce NaN results or raise another Exception. Default is “strict”.

Return type:

None

Produces

Filled self.sun_data attribute.

See also

pvlib.solarposition.get_solarposition

Notes

1) In case that NaN_handling is “loose”, the default value of temperature used is 15°C and the default value of pressure is computed from altitude using the function pvlib.atmosphere.alt2pres.
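The altitude-to-pressure default mentioned above follows the standard-atmosphere relation. A stand-alone version of that relation (matching, to our knowledge, the formula pvlib.atmosphere.alt2pres implements):

```python
def alt2pres(altitude_m: float) -> float:
    """Standard-atmosphere pressure (Pa) from altitude (m)."""
    return 100.0 * ((44331.514 - altitude_m) / 11880.516) ** (1.0 / 0.1902632)
```

At sea level this yields the standard pressure of roughly 101325 Pa, decreasing with altitude.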

compute_water_column_using_gueymard94_model()#

Computes the Precipitable Water Column values (in atm-cm) for the site, using pvlib’s implementation of the gueymard94 model.

Parameters:

None

Return type:

None

Produces

Partially filled self.climate_and_air_data attribute. More specifically, it fills the “H2O” column of all the DataFrames contained by the self.climate_and_air_data dict.

Notes

1) Uses the “RH” and “T2m” columns in the dataframes stored by the self.climate_and_air_data attribute for the calculation.

Warns:
  • 1) Warning – f”NaN/None values detected in self.climate_and_air_df[{date}][‘T2m’]. NaN input values will produce NaN output values. None input values will raise Exceptions.”

  • 2) Warning – f”NaN/None values detected in self.climate_and_air_df[{date}][‘RH’]. NaN input values will produce NaN output values. None input values will raise Exceptions.”

compute_water_column_using_satelite_data(path, percentile=0.5, interp_method='linear')#

Computes the monthly year-wise average or ‘percentile’-th percentile of atmospheric water column for the site. The raw data used for the calculation is extracted from the database referenced in [1].

Parameters:
  • path (path-str) – Path of the folder where the water column raw.npy and filled_NaNs.npy files are stored. That is, the path to the local water column database.

  • percentile (float or None, optional) – If float, it computes the monthly year-wise ‘percentile’-th percentile of water_column. If NONE, it computes the monthly year-wise average of water_column. Default is 0.5.

  • interp_method ({'linear', 'nearest', 'slinear', 'cubic', 'quintic'}, optional) – The method of interpolation to perform when computing the res[“filled_nans_data_funcs”], res[“avg_data_funcs”] and res[“percentile_data_funcs”] dictionaries. Supported methods are the same as supported by scipy’s RegularGridInterpolator. Default is “linear”.

Return type:

None

Produces

Partially filled self.climate_and_air_data attribute. Specifically, it fills the “H2O” column of all the DataFrames contained by the self.climate_and_air_data dict.

References

[1] Preusker, R., El Kassar, R. (2022): Monthly total column water vapour over land and ocean from 2002 to 2012 derived from satellite observation. Copernicus Climate Change Service (C3S) Climate Data Store (CDS). DOI: 10.24381/cds.8e0e4724
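The “monthly year-wise” statistic reduces, for each month, the samples available across years to either a percentile or a mean, depending on the percentile argument. A stand-alone sketch with hypothetical values:

```python
import numpy as np

# Hypothetical water-column samples for the same month across several years
# (one value per year).
samples_for_january = np.array([1.2, 1.5, 1.1, 1.8, 1.4])

percentile = 0.5  # as in the method: float -> percentile, None -> average
if percentile is None:
    stat = samples_for_january.mean()
else:
    stat = np.quantile(samples_for_january, percentile)
```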

define_simulation_time_data(start_time, end_time, freq, min_hms, max_hms, inclusive=False)#

Define time data used for simulation.

It generates a date range based on geographical coordinates and specified time parameters, with optional filtering for each day based on user input or sunrise and sunset times.

Parameters:
  • start_time (str) – The starting date and time in the format ‘YYYY-MM-DD HH:MM:SS’.

  • end_time (str) – The ending date and time in the format ‘YYYY-MM-DD HH:MM:SS’.

  • freq (str) – The frequency at which the date range should be generated. Any frequency accepted by pandas.date_range is valid for geo_date_range.

  • min_hms (str or None) – A string representing the minimum hour-minute-second (HH:MM:SS) value for a Timestamp within each day’s time series. If the hms values are below this threshold, they are removed. It can also be set to None to ignore this condition, or to “sunrise” to use the computed sunrise time for the location as the value for min_hms.

  • max_hms (str or None) – A string representing the maximum hour-minute-second (HH:MM:SS) value for a Timestamp within each day’s time series. If the hms values are above this threshold, they are removed. It can also be set to None to ignore this condition, or to “sunset” to use the computed sunset time for the location as the value for max_hms.

  • inclusive (bool, optional) – Whether to forcibly include the end_time in the generated date range, in case it’s left out. Defaults to False.

Return type:

None

Produces

self.simulation_time_data : dict

A dictionary containing the filtered date ranges/time series, separated by day, based on the specified parameters. Its structure is as follows: each key is a 3-tuple of (year : int, month : int, day : int) and each corresponding value is a pandas.DatetimeIndex object containing the time series associated with that date.

Notes

1) This function also initializes other attributes such as: self.climate_and_air_data, self.sun_data, self.single_scattering_albedo, self.aerosol_asymmetry_factor.
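The per-day splitting and hour-minute-second clipping can be sketched with pandas alone. The bounds below are fixed example values; the real method also accepts “sunrise”/“sunset” and computes them per day:

```python
import pandas as pd

# Build a date range, split it into one series per day, and clip each
# day's series to the [min_hms, max_hms] window.
times = pd.date_range("2023-02-01 00:00:00", "2023-02-02 23:55:00", freq="5min")
min_hms, max_hms = "06:00:00", "18:00:00"

simulation_time_data = {}
for day, day_series in times.to_series().groupby(times.date):
    clipped = day_series.between_time(min_hms, max_hms).index
    simulation_time_data[(day.year, day.month, day.day)] = clipped
```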

plot_data(col, years, months, days, hours, mode=2, interp_method='linear', figsize=(16, 12))#

Plot site variable specified by col, for the period of time specified, and using the mode selected.

Parameters:
  • col (str) – Name of the variable to be plotted. Must be one of the keys of self.variables_info[“descriptions”].

  • years (list or None) – If list, it must be a list of 2 elements, containing the lower and upper bounds of the years to plot. The first element of years is the lower bound, while the second element is the upper bound. If None, the lower and upper bounds for the years variable are automatically selected by the program so that all available years are included.

  • months (list or None) – If list, it must be a list of 2 elements, containing the lower and upper bounds of the months to plot. The first element of months is the lower bound, while the second element is the upper bound. If None, the lower and upper bounds for the months variable are automatically selected by the program so that all available months are included.

  • days (list or None) – If list, it must be a list of 2 elements, containing the lower and upper bounds of the days to plot. The first element of days is the lower bound, while the second element is the upper bound. If None, the lower and upper bounds for the days variable are automatically selected by the program so that all available days are included.

  • hours (list) – A list of 2 elements, containing the lower and upper bounds of the hours to plot. The first element of hours is the lower bound, while the second element is the upper bound.

  • mode (int, optional) –

    Mode of plotting. There are 3 options:

    1. mode=1 : Plot all variable curves for all times.

    2. mode=2 : Plot all variable curves for all times + Plot average and 25th, 50th, 75th percentiles.

    3. mode=3 : Plot average and 25th, 50th, 75th percentiles.

    Default is mode=2.

  • interp_method ({'linear', 'nearest', 'cubic'}, optional) – Method to use for the interpolation of data before plotting. The methods supported are the same ones supported by the scipy.interpolate.griddata function. Default is “linear”.

  • figsize (2-tuple of int, optional) – Figure size of the plot. Default is (16, 12).

plot_horizon(azimuth, config=None)#

Plots site’s horizon profile based on the provided azimuth data.

Parameters:

  • azimuth (array_like) – Azimuth values (in degrees) over which the horizon profile is plotted.

  • config (None or dict, optional) –

Configuration settings of the plot. When equal to None (which is the default) the default plot settings are used. When not equal to None, it must be a dict containing some or all of the following key-value pairs:

“projection” : “polar” or “cartesian”

If equal to “polar”, the horizon profile is plotted using a polar plot. If equal to “cartesian”, it is plotted using a cartesian plot. Default is “polar”.

“show_polar_elevation” : bool

If True, it shows the elevation angle markers for the polar plot. If False, it does not. Default is False.

“title” : str

Title of the plot. Default is ‘Horizon Profile’.

“facecolor” : str

Background color of the Horizon Height part of the plot. Must be equal to str(x), where x is a float between 0 and 1. 0 means that the background color is black. 1 means that it is white. Any value in between represents a shade of gray. Default is “0.5”.

“figsize” : tuple of float

Figure size of the plot.

Return type:

None

reset_horizon()#

Resets horizon profile, such that elevation is 0° everywhere.

Parameters:

None

Return type:

None

Produces

self.horizon attribute : dict

Dictionary with information about the horizon. It has the following key-value pairs:

‘func’ : Callable

Horizon function. Its input is an array of azimuth values (in degrees) and its output is an array of horizon elevation angle values (in degrees).

‘is_clear’ : bool

Whether the current horizon is clear or not (i.e., null elevation everywhere).

‘is_pvgis’ : bool

Whether the current horizon was obtained from pvgis.

‘was_used_for_climate_data’ : bool

Whether the current horizon was used for the computation of climate data.

Notes

1) Horizon height is the angle between the local horizontal plane and the horizon. In other words, the Horizon height is equal to the horizon’s elevation angle.

set_climate_data_from_pvgis_tmy_data(startyear, endyear, interp_method='linear', use_site_horizon=False)#

Use the Typical Meteorological Year (TMY) data from PVGIS to partially fill the self.climate_and_air_data attribute.

Parameters:
  • startyear (int or None) – First year to calculate TMY.

  • endyear (int or None) – Last year to calculate TMY, must be at least 10 years from first year.

  • interp_method ({'linear', 'quadratic', 'cubic'}, optional) – The interpolation method to be used. Defaults is ‘linear’.

  • use_site_horizon (bool, optional) – Whether to include effects of the site’s horizon. Default is False.

Return type:

None

Produces

Partially filled self.climate_and_air_data attribute. Specifically, it fills the “G(h)”, “Gb(n)”, “Gd(h)”, “T2m”, “SP”, “RH” columns of all the DataFrames contained by the self.climate_and_air_data dict.

See also

pvlib.iotools.get_pvgis_tmy

set_horizon_from_arrays(azimuth, elevation, interp_method='linear')#

Set site’s horizon function by interpolating provided data points.

Parameters:
  • azimuth (array_like (npoints,)) – Array of azimuth angle values in degrees, from 0 to 360. Must be monotonic-increasing.

  • elevation (array_like (npoints,)) – Array of horizon elevation angle values in degrees. Elevation values must lie between 0 and 90.

  • interp_method ({"linear", "quadratic", "cubic"}, optional) – Order of the spline interpolator to use. Default is ‘linear’.

Return type:

None

Produces

self.horizon attribute : dict

Dictionary with information about the horizon. It has the following key-value pairs:

‘func’ : Callable

Horizon function. Its input is an array of azimuth values (in degrees) and its output is an array of horizon elevation angle values (in degrees).

‘is_clear’ : bool

Whether the current horizon is clear or not (i.e., null elevation everywhere).

‘is_pvgis’ : bool

Whether the current horizon was obtained from pvgis.

‘was_used_for_climate_data’ : bool

Whether the current horizon was used for the computation of climate data.

Notes

1) Horizon height is the angle between the local horizontal plane and the horizon. In other words, the Horizon height is equal to the horizon’s elevation angle.
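A horizon function built from sampled (azimuth, elevation) points amounts to interpolation. A minimal stand-alone sketch assuming linear interpolation (the sample values here are hypothetical; solrad also supports higher spline orders via interp_method):

```python
import numpy as np

# Hypothetical horizon samples: azimuth must be monotonic-increasing in
# [0, 360], elevation in [0, 90] degrees.
azimuth = np.array([0.0, 90.0, 180.0, 270.0, 360.0])
elevation = np.array([0.0, 10.0, 5.0, 20.0, 0.0])

def horizon_func(az):
    """Horizon elevation (deg) for azimuth(s) az, wrapped into [0, 360)."""
    return np.interp(np.asarray(az) % 360.0, azimuth, elevation)
```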

set_horizon_from_func(func)#

Set site’s horizon function by directly passing a function.

Parameters:

func (callable) – Horizon profile function. It should take in an array of azimuth values (in degrees) and return an array of elevation angle values (in degrees).

Return type:

None

Produces

self.horizon attribute : dict

Dictionary with information about the horizon. It has the following key-value pairs:

‘func’ : Callable

Horizon function. Its input is an array of azimuth values (in degrees) and its output is an array of horizon elevation angle values (in degrees).

‘is_clear’ : bool

Whether the current horizon is clear or not (i.e., null elevation everywhere).

‘is_pvgis’ : bool

Whether the current horizon was obtained from pvgis.

‘was_used_for_climate_data’ : bool

Whether the current horizon was used for the computation of climate data.

Notes

1) Horizon height is the angle between the local horizontal plane and the horizon. In other words, the Horizon height is equal to the horizon’s elevation angle.

set_horizon_from_pvgis(interp_method='linear', timeout=30)#

Get a site’s horizon profile from PVGIS’s API and use it for the current site.

Parameters:
  • interp_method ({'linear', 'quadratic', 'cubic'}, optional) – Order of the spline interpolator to use. Default is ‘linear’.

  • timeout (float) – Number of seconds after which the requests library will stop waiting for a response from the server. That is, if the requests library does not receive a response within the specified number of seconds, it will raise a Timeout error.

Return type:

None

Produces

self.horizon : dict

Dictionary with information about the horizon. It has the following key-value pairs:

‘func’ : Callable

Horizon function. Its input is an array of azimuth values (in degrees) and its output is an array of horizon elevation angle values (in degrees).

‘is_clear’ : bool

Whether the current horizon is clear or not (i.e., null elevation everywhere).

‘is_pvgis’ : bool

Whether the current horizon was obtained from pvgis.

‘was_used_for_climate_data’ : bool

Whether the current horizon was used for the computation of climate data.

Notes

1) Horizon height is the angle between the local horizontal plane and the horizon. In other words, the Horizon height is equal to the horizon’s elevation angle.

time_interpolate_variable(col, year, month, day, new_hms_float, interp_method='linear')#

Interpolate a site variable across time.

Parameters:
  • col (str) – Name of the variable to be interpolated. Must be one of the keys of self.variables_info[“descriptions”].

  • year (int) – Year at which the variable is defined.

  • month (int) – Month at which the variable is defined.

  • day (int) – Day at which the variable is defined.

  • new_hms_float (array-like of floats (npoints,)) – Fractional hours at which to evaluate the interpolated variable. The range of new_hms_float must be the same as the variable’s original hms_float.

  • interp_method ({'linear', 'slinear', 'quadratic', 'cubic'}, optional) – Interpolation method. Any str supported by the scipy.interpolate.interp1d function is accepted. Default is “linear”.

Returns:

interpd_y – Array of interpolated values for the variable specified by “col”, at the times specified by “year”, “month”, “day”.

Return type:

numpy.array of floats (npoints,)
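The interpolation step can be sketched stand-alone. The variable samples below are hypothetical; the real method pulls the column from its stored DataFrames and uses scipy.interpolate.interp1d:

```python
import numpy as np

# Hypothetical site variable sampled at fractional hours of a given day.
hms_float = np.array([6.0, 9.0, 12.0, 15.0, 18.0])
values = np.array([0.0, 300.0, 800.0, 400.0, 0.0])

# Evaluate the (linearly) interpolated variable at new fractional hours,
# which must lie within the original range.
new_hms_float = np.array([7.5, 12.0, 16.5])
interpd_y = np.interp(new_hms_float, hms_float, values)
```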

solrad.Sky module#

This module contains all functions, methods and classes related to the modelling of radiation coming from the sky.

class solrad.Sky.Sky(Site_obj, num_divisions=400)#

Bases: object

azimuth_to_patch(zone_num, az, start=None, end=None)#

Bin azimuth value into the correct sky patch via binary search.

Parameters:
  • zone_num (int) – Sky zone to which the azimuth value “belongs”.

  • az (float) – Azimuth value in degrees. Must be between 0 and 360.

  • start (int or None) – Lower search bound for patch. If None, it defaults to the lowest bound possible.

  • end (int or None) – Upper search bound for patch. If None, it defaults to the highest bound possible.

Returns:

local_patch_num – Sky patch (int) to which the azimuth value belongs, or “not found” if the search failed.

Return type:

int
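The binning itself can be sketched with the standard library’s bisect module. The patch edges below are hypothetical; the real method searches among the patches of the given sky zone:

```python
from bisect import bisect_right

# Hypothetical azimuth edges delimiting 4 patches within one sky zone.
patch_edges = [0.0, 90.0, 180.0, 270.0, 360.0]

def azimuth_to_patch(az):
    """Return the local patch index containing azimuth az, or 'not found'."""
    if not (patch_edges[0] <= az <= patch_edges[-1]):
        return "not found"
    # bisect_right finds the first edge strictly greater than az; clamp so
    # that az == 360 still falls in the last patch.
    return min(bisect_right(patch_edges, az) - 1, len(patch_edges) - 2)
```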

compute_absorbed_energy_by_unit_plane(n_uvec, absorption_func=1, component='global')#

Compute the total energy absorbed by a unit plane from the sky for a specified component of radiation.

Parameters:
  • n_uvec (numpy.array of float with shape (3,)) – The normal unit vector of the unit plane.

  • absorption_func (int, float, or callable, optional) – The absorption function. If int or float, a constant absorption coefficient is used. If callable, the function should take an array of arguments representing angle of incidence and wavelength and return the absorption coefficient. Default is 1.

  • component ({"global", "direct", "diffuse"}, optional) – The spectral component to consider. Default is “global”.

Returns:

total_absorbed_incident_energy – The total absorbed incident energy by the unit plane.

Return type:

float

Raises:

ValueError – If the absorption_func is not a valid type.

Notes

This method computes the total absorbed energy by a unit plane for a specified spectral component. The computation considers the spectral exposure vectors and absorption function. The total absorbed energy is returned as a scalar value.
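The core of this computation is a projection of the exposure vectors onto the plane’s normal. A broadband sketch with a constant absorption coefficient and hypothetical exposure vectors (the real method also handles the spectral case and angle/wavelength-dependent absorption functions):

```python
import numpy as np

n_uvec = np.array([0.0, 0.0, 1.0])        # horizontal plane facing up

# Hypothetical radiant exposure vectors, one per sky patch (Wh/m^2).
exposure_vectors = np.array([
    [0.0, 0.0, 500.0],
    [100.0, 0.0, 300.0],
])
absorption = 0.9                          # constant absorption coefficient

# Project each exposure vector onto the normal; only patches "seen" by
# the plane (positive projection) contribute to the absorbed energy.
proj = exposure_vectors @ n_uvec
total_absorbed_incident_energy = absorption * proj[proj > 0].sum()
```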

compute_exposure_vectors_for_a_date_interval(start_date=None, end_date=None, nel=46, naz=181, num_iterations=150, use_site_horizon=False, int_nzen=20, int_naz=30)#

Compute the radiant exposure and spectral radiant exposure vectors of each sky patch. That is, for a given date interval, obtain the time-integrated spectral irradiance over each sky patch by integrating the time-integrated spectral radiance with respect to the solid angle, over each sky patch of the discretised sky vault.

Parameters:
  • start_date (3-tuple of int or None, optional) – Start date of computation. If None, the first date in self.Site_obj.simulation_times_data.keys() is chosen. Otherwise, it must be a 3-tuple of (year, month, day) indicating the start date.

  • end_date (3-tuple of int or None, optional) – End date of computation. If None, the last date in self.Site_obj.simulation_times_data.keys() is chosen. Otherwise, it must be a 3-tuple of (year, month, day) indicating the end date.

  • nel (int, optional) – Number of samples for discretizing the sky vault with regard to the elevation coordinate. Default is 46.

  • naz (int, optional) – Number of samples for discretizing the sky vault with regard to the azimuth coordinate. Default is 181.

  • num_iterations (int, optional) – Number of iterations to use when filling NaN data. Default is 150.

  • use_site_horizon (bool) – Include horizon effects. Default is False.

  • int_nzen (int) – Number of samples for discretizing each sky patch, with regard to the zenith coordinate, in order to compute the diffuse spectral irradiance via integration. Default is 20.

  • int_naz (int) – Number of samples for discretizing each sky patch, with regard to the azimuth coordinate, in order to compute the diffuse spectral irradiance via integration. Default is 30.

Return type:

None

Produces

self.patch_data[(zone_num, local_patch_num)][“exposure”] : dict of dicts

Each sky patch receives a new key in its database called “exposure”. This is a dict with keys: “direct”, “diffuse”, “global”, “spectral_direct”, “spectral_diffuse”, “spectral_global” and “wavelengths”. Aside from “wavelengths”, every other key holds another dictionary that stores some relevant information about the direct, diffuse and global radiant and spectral radiant exposure (respectively) related to that particular sky patch.

For keys “spectral_direct”, “spectral_diffuse”, “spectral_global”, each of these dicts contains the following key-value pairs:

“vector” : np.array of floats with shape (3,122)

spectral direct/spectral diffuse/spectral global (depending on the case) radiant exposure vector. “vector”[0,:], “vector”[1,:] and “vector”[2,:], hold the x, y and z components of the spectral radiant exposure vector, respectively, for all wavelengths in key “wavelengths”. Each component has units of Wh/m^2/nm.

“magnitude” : np.array of floats with shape (122,)

Magnitude of the spectral direct/spectral diffuse/spectral global (depending on the case) radiant exposure vector. It has units of Wh/m^2/nm.

“spectrally_averaged_unit_vector” : np.array of floats with shape (3,)

Average position of irradiance within a sky patch. That is, the unit vector version of the spectrally averaged spectral direct/spectral diffuse/spectral global (depending on the case) radiant exposure vector. If, however, said spectral radiant exposure vector is zero, we default to using the unit vector pointing to the center of the current sky patch. It is adimensional.

Now, for keys “direct”, “diffuse”, “global”, each of these dicts contains the following key-value pairs:

“vector” : np.array of floats with shape (3,)

Direct/diffuse/global (depending on the case) radiant exposure vector. “vector”[0], “vector”[1] and “vector”[2] hold the x, y and z components of the radiant exposure vector, respectively. Each component has units of Wh/m^2.

“magnitude” : float

Magnitude of the direct/diffuse/global (depending on the case) radiant exposure vector. It has units of Wh/m^2.

“spectrally_averaged_unit_vector” : np.array of floats with shape (3,)

Average position of irradiance within a sky patch. That is, the unit vector version of the spectrally averaged direct/diffuse/global (depending on the case) radiant exposure vector. If, however, said radiant exposure vector is zero, we default to using the unit vector pointing to the center of the current sky patch. It is adimensional.

Finally, the key “wavelengths” does not store a dict but rather a numpy.array of float values:

np.array of floats with shape (122,)

Array of wavelengths over which the spectral irradiances are defined.

self.exposure_vectors : dict of numpy.arrays

Dict containing the same info as above, but in another format that is handier for other things. It has the following key-value pairs:

“spectral_direct” : numpy.array of floats with shape (self.num_divisions, 3, 122)

Direct spectral radiant exposure vector for each of the sky patches. It has units of Wh/m^2/nm.

“spectral_direct_mag” : numpy.array of floats with shape (self.num_divisions, 122)

Magnitude of the direct spectral radiant exposure vector for each of the sky patches. It has units of Wh/m^2/nm.

“spectral_direct_unit_savgd” : numpy.array of floats with shape (self.num_divisions, 3)

Average position of radiant exposure within a sky patch. That is, the unit vector version of the spectrally averaged spectral direct radiant exposure vector. If, however, said spectral radiant exposure vector is zero, we default to using the unit vector pointing to the center of the current sky patch. It is adimensional.

“spectral_diffuse” : numpy.array of floats with shape (self.num_divisions, 3, 122)

Diffuse spectral radiant exposure vector for each of the sky patches. It has units of Wh/m^2/nm.

“spectral_diffuse_mag”numpy.array of floats with shape (self.num_divisions, 122)

Magnitude of the diffuse spectral radiant exposure vector for each of the sky patches. It has units of Wh/m^2/nm.

“spectral_diffuse_unit_savgd”numpy.array of floats with shape (self.num_divisions, 3)

Average position of radiant exposure within a sky patch. That is, the unit vector version of the spectrally averaged diffuse spectral radiant exposure vector. If, however, said spectral radiant exposure vector is zero, we default to the unit vector pointing to the center of the current sky patch. It is dimensionless.

“spectral_global”numpy.array of floats with shape (self.num_divisions, 3, 122)

Global spectral radiant exposure vector for each of the sky patches. It has units of Wh/m^2/nm.

“spectral_global_mag”numpy.array of floats with shape (self.num_divisions, 122)

Magnitude of the global spectral radiant exposure vector for each of the sky patches. It has units of Wh/m^2/nm.

“spectral_global_unit_savgd”numpy.array of floats with shape (self.num_divisions, 3)

Average position of radiant exposure within a sky patch. That is, the unit vector version of the spectrally averaged global spectral radiant exposure vector. If, however, said spectral radiant exposure vector is zero, we default to the unit vector pointing to the center of the current sky patch. It is dimensionless.

“direct”numpy.array of floats with shape (self.num_divisions, 3)

Direct radiant exposure vector for each of the sky patches. It has units of Wh/m^2.

“direct_mag”numpy.array of floats with shape (self.num_divisions,)

Magnitude of the direct radiant exposure vector for each of the sky patches. It has units of Wh/m^2.

“direct_unit”numpy.array of floats with shape (self.num_divisions, 3)

Average position of radiant exposure within a sky patch. That is, the unit vector version of the direct radiant exposure vector. If, however, said radiant exposure vector is zero, we default to the unit vector pointing to the center of the current sky patch. It is dimensionless.

“diffuse”numpy.array of floats with shape (self.num_divisions, 3)

Diffuse radiant exposure vector for each of the sky patches. It has units of Wh/m^2.

“diffuse_mag”numpy.array of floats with shape (self.num_divisions,)

Magnitude of the diffuse radiant exposure vector for each of the sky patches. It has units of Wh/m^2.

“diffuse_unit”numpy.array of floats with shape (self.num_divisions, 3)

Average position of radiant exposure within a sky patch. That is, the unit vector version of the diffuse radiant exposure vector. If, however, said radiant exposure vector is zero, we default to the unit vector pointing to the center of the current sky patch. It is dimensionless.

“global”numpy.array of floats with shape (self.num_divisions, 3)

Global radiant exposure vector for each of the sky patches. It has units of Wh/m^2.

“global_mag”numpy.array of floats with shape (self.num_divisions,)

Magnitude of the global radiant exposure vector for each of the sky patches. It has units of Wh/m^2.

“global_unit”numpy.array of floats with shape (self.num_divisions, 3)

Average position of radiant exposure within a sky patch. That is, the unit vector version of the global radiant exposure vector. If, however, said radiant exposure vector is zero, we default to the unit vector pointing to the center of the current sky patch. It is dimensionless.

“wavelengths”np.array of floats with shape (122,)

Array of wavelengths over which the spectral irradiances are defined.

compute_optimal_plane_orientation(min_res=0.5, naz=13, nel=4, absorption_func=1, component='global')#

Compute the optimal orientation of a plane for maximum absorbed energy.

Parameters:
  • min_res (float, optional) – Minimum angular resolution (in degrees) to be achieved during the optimization process. Default is 0.5.

  • naz (int, optional) – Number of azimuthal divisions for each iteration. Default is 13.

  • nel (int, optional) – Number of elevation divisions. Default is 4.

  • absorption_func (int, float, or callable, optional) – The absorption function. If int or float, a constant absorption coefficient is used. If callable, the function should take arrays of angle-of-incidence and wavelength values and return the absorption coefficient. Default is 1.

  • component ({"global", "direct", "diffuse"}, optional) – The spectral component to consider. Default is “global”.

Returns:

opti_vals

A dictionary containing the optimal orientation information:

  • ”energy”: Total absorbed incident energy.

  • ”az”: Optimal azimuth angle in degrees.

  • ”zen”: Optimal zenith angle in degrees.

  • ”el”: Optimal elevation angle in degrees.

  • ”n_uvec”: Normalized unit vector of the optimal orientation.

  • ”az_res”: Azimuth resolution in degrees.

  • ”el_res”: Elevation resolution in degrees.

Return type:

dict

Notes

This method iteratively searches for the optimal plane orientation by dividing the azimuth and zenith angles into divisions and computing the absorbed energy for each combination. The resolution is refined until it reaches the specified minimum.
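The refinement loop described in the note above can be sketched as follows. This is a minimal illustration under stated assumptions, not the package's actual implementation: `grid_refine_search` and `energy_func` are hypothetical names, with `energy_func` standing in for the absorbed-energy evaluation over the sky model.

```python
import numpy as np

def grid_refine_search(energy_func, min_res=0.5, naz=13, nel=4):
    # Coarse-to-fine search over (azimuth, elevation). `energy_func`
    # is a hypothetical stand-in for the absorbed-energy evaluation.
    az_lo, az_hi = 0.0, 360.0
    el_lo, el_hi = 0.0, 90.0
    while True:
        azs = np.linspace(az_lo, az_hi, naz)
        els = np.linspace(el_lo, el_hi, nel)
        # Evaluate the energy on the current grid of candidate orientations
        E = np.array([[energy_func(a, e) for a in azs] for e in els])
        i, j = np.unravel_index(np.argmax(E), E.shape)
        az_res, el_res = azs[1] - azs[0], els[1] - els[0]
        if max(az_res, el_res) <= min_res:
            return {"energy": E[i, j], "az": azs[j], "el": els[i],
                    "az_res": az_res, "el_res": el_res}
        # Shrink the search window around the current best orientation
        az_lo, az_hi = max(0.0, azs[j] - az_res), min(360.0, azs[j] + az_res)
        el_lo, el_hi = max(0.0, els[i] - el_res), min(90.0, els[i] + el_res)
```

Each pass the window halves around the best grid point, so both resolutions shrink geometrically until they drop below `min_res`.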

compute_radiance_for_a_date()#

Compute the radiance for a given date by integrating the (already computed) spectral radiance over the wavelength axis.

Parameters:

None

Return type:

None

Produces

self.radiancesdict

Dictionary containing result variables. It has the following Key-Value pairs:

“Az”float or numpy.array of floats with shape (nel, naz)

Grid of azimuth coordinates (in degrees) of the sky elements for which the radiance was calculated. Its values should vary along axis 1. In any case, all values should be between 0 and 360 (inclusive).

“El”float or numpy.array of floats with shape (nel, naz)

Grid of elevation coordinates (in degrees) of the sky elements for which the radiance was calculated. Its values should vary along axis 0. In any case, all values should be between 0 and 90 (inclusive).

“Siv”numpy.array of floats with shape (nt,)

Igawa’s ‘Sky Index’ parameter across time.

“Kc”numpy.array of floats with shape (nt,)

Igawa’s ‘Clear Sky Index’ parameter across time.

“Cle”numpy.array of floats with shape (nt,)

Igawa’s ‘Cloudless Index’ parameter across time.

“wavelengths”numpy.array of floats with shape (122,)

Wavelengths in nanometers.

“DatetimeIndex_obj”pandas.Series of pandas.Timestamp objects.

Series of Timestamp values detailing the times at which each of the samples of the time-dependent variables were taken. We denote its length as nt.

“spectral_direct”List with length nt of numpy.arrays of floats with shape (nel,naz,122)

Direct component of spectral radiance across time. It has units of W/m^2/sr/nm.

“spectral_diffuse”List with length nt of numpy.arrays of floats with shape (nel,naz,122)

Diffuse component of spectral radiance across time. It has units of W/m^2/sr/nm.

“direct”numpy.array of floats with shape (nel,naz,nt)

Direct component of radiance across time. It has units of W/m^2/sr.

“diffuse”numpy.array of floats with shape (nel,naz,nt)

Diffuse component of radiance across time. It has units of W/m^2/sr.

Notes

  1. This method requires the attribute self.radiances to already be defined. For this, please check out compute_spectral_radiance_for_a_date().
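The wavelength integration this method performs can be illustrated with plain NumPy. All array names below are hypothetical stand-ins for the quantities described above (a flat unit spectrum is used purely for illustration):

```python
import numpy as np

# Hypothetical stand-ins: a (nel, naz, 122)-shaped spectral radiance
# in W/m^2/sr/nm and the 122 wavelengths (in nm) it is defined over.
wavelengths = np.linspace(300, 4000, 122)
spectral_radiance = np.ones((46, 181, wavelengths.size))  # flat test spectrum

# Trapezoidal integration over the wavelength axis collapses the last
# axis and yields radiance in W/m^2/sr.
mids = 0.5 * (spectral_radiance[..., 1:] + spectral_radiance[..., :-1])
radiance = (mids * np.diff(wavelengths)).sum(axis=-1)
```

For the flat unit spectrum, the integral simply equals the width of the wavelength range (4000 − 300 = 3700).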

compute_radiances_for_a_date(year, month, day, nel=46, naz=181, num_iterations=150, use_site_horizon=False)#

Compute radiance and spectral radiance across time for a complete day on a specific date, using the data stored in the self.Site_obj attribute.

Parameters:
  • year (int) – Year for which the spectral radiance is to be computed. Must be present in ‘self.Site_obj.simulation_times_data’.

  • month (int) – Month for which the spectral radiance is to be computed. Must be present in ‘self.Site_obj.simulation_times_data’.

  • day (int) – Day for which the spectral radiance is to be computed. Must be present in ‘self.Site_obj.simulation_times_data’.

  • nel (int, optional) – Number of samples for discretizing the sky vault with respect to the elevation coordinate. Default is 46.

  • naz (int, optional) – Number of samples for discretizing the sky vault with respect to the azimuth coordinate. Default is 181.

  • num_iterations (int, optional) – Number of iterations to use when filling NaN data. Default is 150.

  • use_site_horizon (bool) – Include horizon effects. Default is False.

Return type:

None

Produces

self.radiancesdict

Dictionary containing result variables. It has the following Key-Value pairs:

“Az”float or numpy.array of floats with shape (nel, naz)

Grid of azimuth coordinates (in degrees) of the sky elements for which the radiance was calculated. Its values should vary along axis 1. In any case, all values should be between 0 and 360 (inclusive).

“El”float or numpy.array of floats with shape (nel, naz)

Grid of elevation coordinates (in degrees) of the sky elements for which the radiance was calculated. Its values should vary along axis 0. In any case, all values should be between 0 and 90 (inclusive).

“Siv”numpy.array of floats with shape (nt,)

Igawa’s ‘Sky Index’ parameter across time.

“Kc”numpy.array of floats with shape (nt,)

Igawa’s ‘Clear Sky Index’ parameter across time.

“Cle”numpy.array of floats with shape (nt,)

Igawa’s ‘Cloudless Index’ parameter across time.

“wavelengths”numpy.array of floats with shape (122,)

Wavelengths in nanometers.

“DatetimeIndex_obj”pandas.Series of pandas.Timestamp objects.

Series of Timestamp values detailing the times at which each of the samples of the time-dependent variables were taken. We denote its length as nt.

“spectral_direct”List with length nt of numpy.arrays of floats with shape (nel,naz,122)

Direct component of spectral radiance across time. It has units of W/m^2/sr/nm.

“spectral_diffuse”List with length nt of numpy.arrays of floats with shape (nel,naz,122)

Diffuse component of spectral radiance across time. It has units of W/m^2/sr/nm.

“direct”numpy.array of floats with shape (nel,naz,nt)

Direct component of radiance across time. It has units of W/m^2/sr.

“diffuse”numpy.array of floats with shape (nel,naz,nt)

Diffuse component of radiance across time. It has units of W/m^2/sr.

compute_spectral_radiance_for_a_date(year, month, day, nel=46, naz=181, num_iterations=150, use_site_horizon=False)#

Compute spectral radiance across time for a complete day on a specific date, using the data stored in the self.Site_obj attribute.

Parameters:
  • year (int) – Year for which the spectral radiance is to be computed. Must be present in ‘self.Site_obj.simulation_times_data’.

  • month (int) – Month for which the spectral radiance is to be computed. Must be present in ‘self.Site_obj.simulation_times_data’.

  • day (int) – Day for which the spectral radiance is to be computed. Must be present in ‘self.Site_obj.simulation_times_data’.

  • nel (int, optional) – Number of samples for discretizing the sky vault with respect to the elevation coordinate. Default is 46.

  • naz (int, optional) – Number of samples for discretizing the sky vault with respect to the azimuth coordinate. Default is 181.

  • num_iterations (int, optional) – Number of iterations to use when filling NaN data. Default is 150.

  • use_site_horizon (bool) – Include horizon effects. Default is False.

Return type:

None

Produces

self.radiancesdict

Dictionary containing result variables. It has the following Key-Value pairs:

“Az”float or numpy.array of floats with shape (nel, naz)

Grid of azimuth coordinates (in degrees) of the sky elements for which the spectral radiance was calculated. Its values should vary along axis 1. In any case, all values should be between 0 and 360 (inclusive).

“El”float or numpy.array of floats with shape (nel, naz)

Grid of elevation coordinates (in degrees) of the sky elements for which the spectral radiance was calculated. Its values should vary along axis 0. In any case, all values should be between 0 and 90 (inclusive).

“Siv”numpy.array of floats with shape (nt,)

Igawa’s ‘Sky Index’ parameter across time.

“Kc”numpy.array of floats with shape (nt,)

Igawa’s ‘Clear Sky Index’ parameter across time.

“Cle”numpy.array of floats with shape (nt,)

Igawa’s ‘Cloudless Index’ parameter across time.

“wavelengths”numpy.array of floats with shape (122,)

Wavelengths in nanometers.

“DatetimeIndex_obj”pandas.Series of pandas.Timestamp objects.

Series of Timestamp values detailing the times at which each of the samples of the time-dependent variables were taken. We denote its length as nt.

“spectral_direct”List with length nt of numpy.arrays of floats with shape (nel,naz,122)

Direct component of spectral radiance across time. It has units of W/m^2/sr/nm.

“spectral_diffuse”List with length nt of numpy.arrays of floats with shape (nel,naz,122)

Diffuse component of spectral radiance across time. It has units of W/m^2/sr/nm.

Notes

1) Initial time and final time of the simulation are taken to be self.Site_obj.simulation_times_data[(year, month, day)][0] and self.Site_obj.simulation_times_data[(year, month, day)][-1], respectively.

2) Angular resolution in the elevation coordinate is equal to 90/(nel - 1).

3) Angular resolution in the azimuth coordinate is equal to 360/(naz - 1).

4) The time resolution used is the same as that of self.Site_obj.simulation_times_data.
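As a quick sanity check of the resolution formulas above, the default sampling works out to 2° in both coordinates:

```python
nel, naz = 46, 181        # default sky-vault sampling
el_res = 90 / (nel - 1)   # elevation resolution, in degrees
az_res = 360 / (naz - 1)  # azimuth resolution, in degrees
```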

discretise(num_divisions)#

Discretise the Sky Vault into non-congruent square-like patches of similar area, according to the procedure proposed in the paper “A general rule for disk and hemisphere partition into equal-area cells” (see reference [1]).

Parameters:

num_divisions (int) – Number of patches into which the Sky Vault will be discretised.

Return type:

None

Produces

self.zone_datadict of dicts

Dictionary containing information about the discretization zones. Each key of self.zone_data corresponds to a unique zone number. The component dictionaries (stored at self.zone_data[zone_num]) have the following Key-Value pairs:

“num_patches”int

Number of sky patches contained inside the sky zone.

“inf_zen”float

Inferior zenith angle bound delimiting the sky zone, in degrees.

“sup_zen”float

Superior zenith angle bound delimiting the sky zone, in degrees.

“inf_rad”float

Inferior radius bound delimiting the sky zone’s plane projection [adm].

“sup_rad”float

Superior radius bound delimiting the sky zone’s plane projection [adm].

“azimuths”numpy.array of floats

Array containing the azimuth angle intervals delimiting each sky patch inside the zone, in degrees.

“patch_area”float

Solid angle/area, taken up by each sky patch inside the sky zone, in steradians.

“zone_area”float

Total solid angle/area of the whole sky zone, in steradians.

self.patch_datadict of dicts

Dictionary containing information about the discretization patches. Each key of self.patch_data is a 2-tuple of ints corresponding to the patch (zone number, local patch number). The component dictionaries (stored at self.patch_data[(zone_num, local_patch_num)]) have the following Key-Value pairs:

“inf_zen”float

Inferior zenith angle bound delimiting the sky patch, in degrees.

“sup_zen”float

Superior zenith angle bound delimiting the sky patch, in degrees.

“inf_az”float

Inferior azimuth angle bound delimiting the sky patch, in degrees.

“sup_az”float

Superior azimuth angle bound delimiting the sky patch, in degrees.

“patch_area”float

Solid angle/area, taken up by the sky patch, in steradians.

“unit_vector”np.array of floats with shape (3,)

Unit solid angle vector of the center of the sky patch. It is a unit vector with its tail at the origin, pointing to the center position of the sky patch. “unit_vector”[i], with i = 0, 1, 2, gives the unit vector’s x, y, z component, respectively.

Notes

Calling this method after initialization will also erase all other radiation quantities computed up to that point.

References

[1] Benoit Beckers, Pierre Beckers, A general rule for disk and hemisphere partition into equal-area cells, Computational Geometry, Volume 45, Issue 7, 2012, Pages 275-283, ISSN 0925-7721, https://doi.org/10.1016/j.comgeo.2012.01.011. (https://www.sciencedirect.com/science/article/pii/S0925772112000296)
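The idea behind equal-area partitioning can be sketched in a simplified form (this is only an illustration of the equal-solid-angle principle, not the exact rule of Beckers & Beckers, and `equal_area_rings` is a hypothetical name): split the hemisphere into zenith bands of equal solid angle, then subdivide each band azimuthally so every patch has roughly the target area.

```python
import numpy as np

def equal_area_rings(num_divisions, n_rings):
    # Split the hemisphere (2*pi sr) into n_rings zenith bands of equal
    # solid angle, then give each band enough azimuthal patches so that
    # every patch has (roughly) the target solid angle.
    total = 2 * np.pi
    patch_area = total / num_divisions
    # Equal-solid-angle zenith bounds follow from Omega(z) = 2*pi*(1 - cos z)
    bounds = np.degrees(np.arccos(1 - np.arange(n_rings + 1) / n_rings))
    zones = []
    for k in range(n_rings):
        band_area = total / n_rings
        n_patches = max(1, round(band_area / patch_area))
        zones.append({"inf_zen": bounds[k], "sup_zen": bounds[k + 1],
                      "num_patches": n_patches,
                      "patch_area": band_area / n_patches})
    return zones
```

By construction, the per-zone patch areas sum back to the full hemisphere's solid angle.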

disk_point_to_zone_patch(rad, az)#

Bin disk point into the correct sky patch. That is, given a disk point represented by a tuple of (radius, azimuth) values, return the sky patch, represented by a tuple of (zone_num, local_patch_num), to which said disk point belongs.

Parameters:
  • rad (float) – Radius of disk point [adm]. Must be between 0 and 1.

  • az (float) – Azimuth of disk point in degrees. Must be between 0 and 360.

Returns:

  • zone_num (int or str) – Sky zone (int) to which the disk point belongs, or “not found” if the search failed.

  • local_patch_num (int or str) – Sky patch (int) (identified by its local patch number in reference to the sky zone) to which the disk point belongs, or “not found” if the search failed.

plot_disk_patches(figsize=(12, 12))#

Visualize discretized Sky Vault in 2D.

Parameters#

figsize2-tuple of int

Size of figure.

Notes

1) This method requires the sky vault to have already been discretised. Check out discretise()

plot_exposures(config=None)#

Plot radiant exposures.

Parameters:

config (dict or None) –

Dict of plot configuration options. If None (the default), it uses the default plot configuration options. If dict, it should include one or more of the following key-value pairs:

“projection”{‘disk’, ‘sphere’}, optional

Type of plot projection. Supported are “disk”, which plots the radiant exposure in 2D, and “sphere”, which plots it in 3D. Default is “disk”.

”mode”{‘direct’, ‘diffuse’, ‘global’}, optional

Component of radiant exposure to plot. Default is ‘global’.

”figsize”2-tuple of int

Figure size. Default is (13,13).

”unit”{“Wh/m^2”, “kWh/m^2”, “kJ/m^2”, “MJ/m^2”}

Units with which to display the radiant exposure. In order, these mean: ‘watt-hours per square meter’, ‘kilowatt-hours per square meter’, ‘kilojoules per square meter’, and ‘megajoules per square meter’. Default is “Wh/m^2”.

”n”int

Number of samples per axis to use for the plot. A greater number means a more detailed plot (i.e., greater resolution) but is more resource-intensive. Default is 1000.

”view”2-tuple of int

Elevation and azimuth of the plot camera in degrees. Applies only to the “sphere” projection. Default is (45, 120).

Return type:

None

Produces

None

Notes

1) This method requires the radiant exposure vectors to have been calculated. Check out compute_exposure_vectors_for_a_date_interval()

plot_radiance_for_a_date(component, nt=None, projection='disk', figsize=(16, 12), view_init=(45, 180))#

Plot radiance for a specific component at a given time.

Parameters:
  • component (str) – The radiance component to plot (e.g., “direct”, “diffuse”).

  • nt (int or None, optional) – The time index for which to plot the radiance. If None, plots for all times.

  • projection ({'disk', 'sphere'}, optional) – The type of projection for the plot. Options are “disk” (polar) or “sphere” (3D). Default is “disk”.

  • figsize (tuple, optional) – Figure size. Default is (16, 12).

  • view_init (tuple, optional) – Elevation and azimuth of the axes in degrees. Default is (45, 180).

Return type:

None

Notes

1) This method requires the attribute self.radiances to already be defined. For this, please check out compute_radiances_for_a_date().

2) The method generates plots of radiance for the specified component at the given time. It supports two types of projections: “disk” (polar plot) and “sphere” (3D plot).

plot_spectral_radiance_for_a_date(component, nt=None, config=None)#

Plot spectral radiance for a specific component at a given time.

Parameters:
  • component ({'direct', 'diffuse'}) – The spectral radiance component to plot.

  • nt (int or None, optional) – The time index for which to plot the spectral radiance. If None, plots for all times.

  • config (dict or None, optional) –

    Configuration parameters for the plot. If None (the default), it uses default parameters. If provided, it may contain the following key-value pairs:

    ‘figsize’: tuple, optional, default: (16, 12)

    Figure size.

    ’wavelength_idxs’: numpy.ndarray, optional

    2D array specifying the wavelength indices to plot.

Return type:

None

Notes

1) This method requires the attribute self.radiances to already be defined. For this, please check out compute_radiances_for_a_date().

2) The method generates polar plots of spectral radiance for the specified component at the given time. The plots show color-contoured radiance values at different azimuth and elevation angles.

plot_sphere_patches(figsize=(12, 12), axis_view=(25, 30))#

Visualize discretized Sky Vault in 3D.

Parameters#

figsize2-tuple of int

Size of figure.

axis_view2-tuple of int

Plot’s elevation, azimuth in degrees.

Notes

1) This method requires the sky vault to have already been discretised. Check out discretise()

rad_to_zone(rad, start=None, end=None)#

Bin radius value into the correct sky zone via binary search.

Parameters:
  • rad (float) – radius value [adm]. Must be between 0 and 1.

  • start (int or None) – Lower search bound for zone. If None, it defaults to the lowest bound possible.

  • end (int or None) – Upper search bound for zone. If None, it defaults to the highest bound possible.

Returns:

zone_num – Sky zone (int) to which the radius coordinate belongs, or “not found” if search failed.

Return type:

int or str

sky_point_to_zone_patch(zen, az)#

Bin sky point into the correct sky patch. That is, given a sky point represented by a tuple of (zenith, azimuth) values, return the sky patch, represented by a tuple of (zone_num, local_patch_num), to which said sky point belongs.

Parameters:
  • zen (float) – Zenith of sky point in degrees. Must be between 0 and 90.

  • az (float) – Azimuth of sky point in degrees. Must be between 0 and 360.

Returns:

  • zone_num (int or str) – Sky zone (int) to which the sky point belongs, or “not found” if search failed.

  • local_patch_num (int or str) – Sky patch (int) (identified by its local patch number in reference to the sky zone) to which the sky point belongs, or “not found” if search failed.

zenith_to_zone(zen, start=None, end=None)#

Bin zenith value into the correct sky zone via binary search.

Parameters:
  • zen (float) – Zenith value in degrees. Must be between 0 and 90.

  • start (int or None) – Lower search bound for zone. If None, it defaults to the lowest bound possible.

  • end (int or None) – Upper search bound for zone. If None, it defaults to the highest bound possible.

Returns:

zone_num – Sky zone (int) to which the zenith coordinate belongs, or “not found” if search failed.

Return type:

int or str
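Both rad_to_zone and zenith_to_zone amount to binary-search binning of a scalar into sorted interval bounds. A minimal sketch of that idea, assuming a hypothetical list holding each zone's superior bound (`bin_value_to_zone` and `sup_bounds` are illustrative names, not the package's API):

```python
import bisect

def bin_value_to_zone(value, sup_bounds):
    # sup_bounds holds each zone's superior bound (zenith or radius),
    # sorted ascending; the last entry is the domain's upper limit.
    if not 0 <= value <= sup_bounds[-1]:
        return "not found"
    # bisect_left finds the first bound >= value, i.e. the containing zone
    return bisect.bisect_left(sup_bounds, value)
```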

solrad.auxiliary_funcs module#

Simple module containing auxiliary functions that are commonly used in other modules.

solrad.auxiliary_funcs.fill_CDS_globe_nans_using_laplace(data, iterations=20000)#

Depending on the specific database used, some files downloaded from the Climate Data Store (CDS) may contain missing values. This is a problem as the NaN values complicate the computation of other relevant quantities, as well as the easy interpolation of the desired dataset via scipy’s RegularGridInterpolator function, which is crucial for later procedures. This function attempts to solve this problem by filling the missing values with the average of their neighbours, iteratively. The process, broadly speaking, is:

  1. Store the indices of all nan values of the input array.

  2. Make a copy of the array.

  3. Fill all nan values with zeros in the copied array.

  4. Loop over all the elements of the copied array that used to be NaN values and set the value of each element equal to the average of the values of its neighbours. Do this as many times as required to achieve the desired level of convergence.

Some specific, yet important, things to note are:

  1. Updated values of an element x during iteration i should not be accessible to other elements until iteration i is finished. That is, the updated value of element x should not be used in the computation of the new value of element y until the next iteration; during the same iteration, element y’s new value is computed using the un-updated value of element x.

  2. This code is intended to only be used on arrays which store a scalar quantity over the whole globe. This is because the boundary conditions used here are that of the surface of a sphere (see the ‘Notes’ section.).

Parameters:

data (2D numpy.array of floats) – Array of scalar data with NaN values sprinkled throughout. The array should contain values corresponding to the whole globe, with the axis 0, accounting for the variation of said values with respect to the latitude, while the axis 1 accounts for the variation with respect to the longitude. Axes 0 and 1 must have the same constant spacing, meaning that axis 1 should be twice the length of axis 0. That is, data must be defined over an equally-spaced regular rectangular grid of latitude vs longitude. Finally, regarding the coordinate system: let data be a Nx2N numpy.array of floats. Then data[0,:] is the circle of constant latitude equal to -90° (i.e, the geographic south pole), data[-1,:] is the circle of constant latitude equal to 90° (i.e, the geographic north pole), data[:,0] is the arc of constant longitude equal to -180° and data[:,-1] is the arc of constant longitude equal to 180°.

iterations: int

Number of iterations that the code should perform before stopping (must be non-negative). The greater the number of iterations, the greater the chance that convergence has been reached. However, the time of computation also increases. Default is 20000.

Returns:

new_data – Array of scalar data with the NaN values having been filled with numerical values based on the average value of their non-NaN neighbours, as per the procedure explained above.

Return type:

2D numpy.array of floats

Warns:

1) Warning – “WARNING: Length of axis 1 is not equal to 2 times the length of axis 0. This function requires for the data to be equally spaced and encompass the whole earth. That is only possible if data.shape[1] == 2*data.shape[0]. If these conditions are not satisfied, results may be incorrect or misleading.”

Notes

  1. This function is equivalent to discretely solving Laplace’s equation on the surface of a sphere. In this case, the domain of solution is the NaN-filled elements, while the boundary conditions are given by all the non-NaN values.

  2. Since we are operating on the surface of a sphere, two additional particular boundary conditions should be satisfied:

    1. The values of the array should ‘wrap’ along the longitudinal direction. That is, for an infinitely fine mesh: data[i,0] == data[i,-1], for all i.

    2. The values of the array at the each geographic pole should be the same for all longitudes. That is, for an infinitely fine mesh: data[0,:] and data[-1,:] are constant arrays.

    However, for finitely fine meshes we implement these conditions slightly differently, through the way the averages are computed. Namely: data[i,j] = 0.25*( data[i-1,j] + data[i+1,j] + data[i,j-1] + data[i,j+1] ) in most cases, except when:

    1. j =  0,  data[i,j-1] equals data[i, -1]

    2. j = -1,  data[i,j+1] equals data[i, 0]

    3. i = 0,   data[i-1,j] equals data[0, bcj]

    4. i = -1,  data[i+1,j] equals data[-1, bcj]

where bcj is an index such that lon[bcj] == lon[j] + 180 if lon[j] < 0, and lon[bcj] == lon[j] - 180 if lon[j] >= 0; with lon = numpy.linspace(-180, 180, data.shape[1]).
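The update rule and boundary handling described above can be sketched as follows. This is a simplified, unoptimized illustration (Jacobi-style looping in pure Python) under the stated boundary conditions, not the package's implementation; `fill_globe_nans` is a hypothetical name:

```python
import numpy as np

def fill_globe_nans(data, iterations=200):
    # Jacobi-style averaging: updates only become visible on the next
    # iteration. Longitude wraps around; the pole rows use the
    # antipodal column index bcj described in the notes.
    nlat, nlon = data.shape
    nan_i, nan_j = np.where(np.isnan(data))
    filled = np.nan_to_num(data, nan=0.0)
    # Antipodal column: lon[bcj] == lon[j] +/- 180
    lon = np.linspace(-180, 180, nlon)
    bcj = np.array([np.argmin(np.abs(lon - (l + 180 if l < 0 else l - 180)))
                    for l in lon])
    for _ in range(iterations):
        prev = filled.copy()           # read only from the old array
        for i, j in zip(nan_i, nan_j):
            up = prev[i - 1, j] if i > 0 else prev[0, bcj[j]]
            down = prev[i + 1, j] if i < nlat - 1 else prev[-1, bcj[j]]
            left = prev[i, j - 1]      # j - 1 == -1 wraps automatically
            right = prev[i, (j + 1) % nlon]
            filled[i, j] = 0.25 * (up + down + left + right)
    return filled
```

A constant field with a single NaN converges back to the constant after one pass, since all four neighbours already carry the boundary value.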

solrad.auxiliary_funcs.fill_nans_using_laplace_1D(arr, iterations=500)#

Fill NaN values of a flat numpy.array using the average value of its non-NaN neighbours. This procedure is iterative and, as such, the function performs the averaging until the specified number of iterations is reached.

Parameters:
  • arr (numpy.array of floats) – Array of scalar data with NaN values sprinkled throughout.

  • iterations (int) – Number of iterations that the code should perform before stopping (must be non-negative). The greater the number of iterations, the greater the chance that convergence has been reached. However, the time of computation also increases. Default is 500.

Returns:

filled_nans_arr – Array of scalar data with NaN values having been filled using the average of their non-NaN neighbours.

Return type:

numpy.array of floats

Notes

  1. This procedure is very similar to discretely solving the 1D version of Laplace’s equation. In this case, the domain of solution is the NaN-filled elements, while the boundary conditions are given by the neighbouring non-NaN values. The boundary conditions here amount, more or less, to zero outward flux at the endpoints. That said, the code takes inspiration from the Laplace procedure and is not fully mathematically rigorous.
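A minimal sketch of the 1D procedure, assuming the zero outward-flux treatment at the endpoints described in the note above (`fill_nans_1d` is a hypothetical name, not the package's implementation):

```python
import numpy as np

def fill_nans_1d(arr, iterations=500):
    # NaNs start at zero and are repeatedly replaced by the mean of
    # their two neighbours; endpoints copy their single inner
    # neighbour (zero outward flux).
    nan_idx = np.where(np.isnan(arr))[0]
    out = np.nan_to_num(arr, nan=0.0)
    for _ in range(iterations):
        prev = out.copy()  # Jacobi-style: read only from the old array
        for i in nan_idx:
            if i == 0:
                out[i] = prev[1]
            elif i == len(out) - 1:
                out[i] = prev[-2]
            else:
                out[i] = 0.5 * (prev[i - 1] + prev[i + 1])
    return out
```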

solrad.auxiliary_funcs.load_obj_with_pickle(path)#

Load any Class instance saved with pickle.

Parameters:

path (path-str) – Path of the .pkl file corresponding to Class obj that is to be loaded.

Returns:

class_obj – Class instance/object loaded using pickle.

Return type:

Class obj.

solrad.auxiliary_funcs.save_obj_with_pickle(class_obj, path)#

Save any Class instance using pickle.

Parameters:
  • class_obj (Class obj.) – Class instance/object to save using pickle.

  • path (path-str) – Path of the .pkl file corresponding to Class obj that is to be saved.

Return type:

None.
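The two helpers pair up as a straightforward pickle round trip. A minimal sketch: the function bodies shown are an illustrative guess at the behaviour documented above, and `Dummy` is a hypothetical stand-in for any class instance (e.g. a Site object):

```python
import os
import pickle
import tempfile

class Dummy:  # hypothetical stand-in for any class instance
    def __init__(self, name):
        self.name = name

def save_obj_with_pickle(class_obj, path):
    # Serialize the instance to a .pkl file
    with open(path, "wb") as f:
        pickle.dump(class_obj, f)

def load_obj_with_pickle(path):
    # Restore the instance from the .pkl file
    with open(path, "rb") as f:
        return pickle.load(f)

path = os.path.join(tempfile.gettempdir(), "dummy_obj.pkl")
save_obj_with_pickle(Dummy("Medellin"), path)
restored = load_obj_with_pickle(path)
```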

solrad.geotime module#

This is a module containing functions, methods and classes related to the computation of time and date quantities.

solrad.geotime.compute_sunrise_sunset(latitude, longitude, tz, start, end=None)#

Compute sunrise and sunset times for a given location and time period, mainly using the NREL SPA algorithm [1].

Parameters:
  • latitude (float) – The latitude of the location in degrees. Must be a number between -90 and 90.

  • longitude (float) – The longitude of the location in degrees. Must be a number between -180 and 180.

  • tz (str) – Timezone information of the location in the format of ‘+/-HH:MM’.

  • start (str or pandas.DatetimeIndex) – The starting date or datetime index for which to compute sunrise and sunset times. If providing a string, it should be in the format ‘YYYY-MM-DD’.

  • end (None or str, default is None) – If providing a string, it is the ending date for which to compute sunrise and sunset times (in the format ‘YYYY-MM-DD’) and can only be used if ‘start’ is also a string. If None, only the sunrise and sunset times for the start date will be computed.

Returns:

A DataFrame containing the computed sunrise, sunset, and day duration times for the specified location and time period.

Return type:

pandas.DataFrame

Examples

>>> # Return DataFrame with computed sunrise and sunset times (using local time) from January 1st 2023, to January 10th 2023, for the city of Medellín, Colombia
>>> compute_sunrise_sunset(6.25184, -75.56359, '-05:00', '2023-01-01', '2023-01-10')
>>>
>>> # Return DataFrame with computed sunrise and sunset times (using local time) for June 1st 2023, for the city of Sydney, Australia.
>>> compute_sunrise_sunset(-33.86785, 151.20732, '+11:00', '2023-06-01')
>>>
>>> # Return DataFrame with computed sunrise and sunset times (using local time) for January 1st 2023, January 2nd 2023 and January 3rd 2023, for Medellín, Colombia.
>>> idx = pandas.DatetimeIndex(['2023-01-01', '2023-01-02', '2023-01-03'])
>>> compute_sunrise_sunset(6.25184, -75.56359, '-05:00', idx)
>>>
>>> # Return DataFrame with computed sunrise and sunset times (using local time) for the whole year of 2023 for a place inside the Antarctic Circle.
>>> compute_sunrise_sunset(-82, -75.56359, '-05:00', "2023-01-01", "2023-12-31")

Notes

  1. A latitude of -90° corresponds to the geographic South Pole, while a latitude of 90° corresponds to the geographic North Pole.

  2. A negative longitude corresponds to a point west of the Greenwich meridian, while a positive longitude corresponds to a point east of it.

  3. A sunrise/sunset equal to “PD” means that the place in question is experiencing a polar day, while a sunrise/sunset equal to “PN” means it is experiencing a polar night.

  4. This algorithm is based on the NREL SPA algorithm. As such, it calculates sunrise and sunset times without taking the altitude of the location into account. A higher altitude translates to an earlier sunrise and a later sunset compared to the same location at sea level. Nevertheless, the effects of altitude are quite small: for every 1500 meters of elevation, a site’s sunrise occurs about 1 minute earlier and its sunset about 1 minute later than at sea level [2].

  5. This function also does not take into account the effect of mountains or surrounding terrain/structures on the time when the sun first becomes visible to an observer.
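The rule of thumb in note 4 can be sketched as a simple linear correction. This is a rough back-of-the-envelope estimate for illustration only; the function name below is hypothetical and not part of solrad’s API:

```python
def altitude_sunrise_correction_minutes(altitude_m):
    """Rough estimate of how many minutes earlier sunrise occurs
    (and how many minutes later sunset occurs) at a given altitude,
    using the ~1 minute per 1500 m rule of thumb described above."""
    return altitude_m / 1500.0

# A site at 1500 m above sea level sees sunrise roughly 1 minute
# earlier than it would at sea level.
print(altitude_sunrise_correction_minutes(1500))  # -> 1.0
print(altitude_sunrise_correction_minutes(3000))  # -> 2.0
```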

References

[1] https://pvlib-python.readthedocs.io/en/stable/reference/generated/pvlib.solarposition.sun_rise_set_transit_spa.html

[2] https://www.chicagotribune.com/weather/ct-wea-0928-asktom-20160927-column.html

[3] https://pypi.org/project/suntimes/

solrad.geotime.geo_date_range(latitude, longitude, tz, start_time, end_time, freq, min_hms, max_hms, skip_polar_nights=True, inclusive=False)#

Generate a date range based on geographical coordinates and specified time parameters, with optional filtering for each day based on user input or sunrise and sunset times.

Parameters:
  • latitude (float) – The latitude of the location in degrees. Must be a number between -90 and 90.

  • longitude (float) – The longitude of the location in degrees. Must be a number between -180 and 180.

  • tz (str) – Timezone information of the location in the format of +/-HH:MM.

  • start_time (str) – The starting date and time in the format ‘YYYY-MM-DD HH:MM:SS’.

  • end_time (str) – The ending date and time in the format ‘YYYY-MM-DD HH:MM:SS’.

  • freq (str) – The frequency at which the date range should be generated. Any frequency accepted by pandas.date_range is valid for geo_date_range.

  • min_hms (str or None) – A string representing the minimum hour-minute-second (HH:MM:SS) value for a Timestamp within each day’s time series. If the hms values are below this threshold, they are removed. It can also be set to None to ignore this condition, or to “sunrise” to use the computed sunrise time for the location as the value for min_hms.

  • max_hms (str or None) – A string representing the maximum hour-minute-second (HH:MM:SS) value for a Timestamp within each day’s time series. If the hms values are above this threshold, they are removed. It can also be set to None to ignore this condition, or to “sunset” to use the computed sunset time for the location as the value for max_hms.

  • skip_polar_nights (bool, optional) – Whether to skip polar night periods during filtering. Defaults to True.

  • inclusive (bool, optional) – Whether to forcibly include the end_time in the generated date range, in case it’s left out. Defaults to False.

Returns:

res – A dictionary containing the filtered date ranges/time series, separated by day, based on the specified parameters. Its structure is as follows: each key is a 3-tuple of (year : int, month : int, day : int) and each corresponding value is a pandas.DatetimeIndex object containing the time series associated with that date.

Return type:

dict

Notes

This function depends on the ‘compute_sunrise_sunset’ function to compute the sunrise and sunset times if required.

Examples

>>> # Generates a date range from '2023-1-1 00:00:00' to '2023-12-31 23:59:59.999' UTC-5 time, using a frequency of 5 min.
>>> # No filtering of the day hours is carried out.
>>> latitude = 6.230833
>>> longitude = -75.56359
>>> tz = '-05:00'
>>> start_time = "2023-1-1 00:00:00"
>>> end_time   = "2023-12-31 23:59:59.999"
>>> freq = "5min"
>>> min_hms = None
>>> max_hms = None
>>> skip_polar_nights = True
>>> inclusive = False
>>> res = geo_date_range(latitude, longitude, tz, start_time, end_time, freq, min_hms, max_hms, skip_polar_nights, inclusive)
>>>
>>>
>>> # Generates a date range from '2023-1-1 00:00:00' to '2023-12-31 23:59:59.999' UTC-5 time, using a frequency of 5 min.
>>> # Filtering of the day hours is carried out using sunrise and sunset times calculated for Medellín, Colombia.
>>> latitude = 6.230833
>>> longitude = -75.56359
>>> tz = '-05:00'
>>> start_time = "2023-1-1 00:00:00"
>>> end_time   = "2023-12-31 23:59:59.999"
>>> freq = "5min"
>>> min_hms = "sunrise"
>>> max_hms = "sunset"
>>> skip_polar_nights = True
>>> inclusive = False
>>> res = geo_date_range(latitude, longitude, tz, start_time, end_time, freq, min_hms, max_hms, skip_polar_nights, inclusive)
>>>
>>>
>>> # Generates a date range from '2023-1-1 00:00:00' to '2023-12-31 23:59:59.999' UTC-5 time, using a frequency of 5 min.
>>> # Filtering of the day hours is carried out using the range specified by the user.
>>> latitude = 6.230833
>>> longitude = -75.56359
>>> tz = '-05:00'
>>> start_time = "2023-1-1 00:00:00"
>>> end_time   = "2023-12-31 23:59:59.999"
>>> freq = "5min"
>>> min_hms = "06:23:50"
>>> max_hms = "17:50:00"
>>> skip_polar_nights = True
>>> inclusive = False
>>> res = geo_date_range(latitude, longitude, tz, start_time, end_time, freq, min_hms, max_hms, skip_polar_nights, inclusive)
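The dictionary returned by geo_date_range can then be consumed day by day. A minimal sketch, using a hand-built dictionary of the same shape so it runs without the library itself:

```python
import pandas as pd

# Hand-built stand-in for the dictionary geo_date_range returns:
# each key is a (year, month, day) tuple and each value is a
# pandas.DatetimeIndex covering that day.
res = {
    (2023, 1, 1): pd.date_range("2023-01-01 06:00", "2023-01-01 18:00", freq="1h"),
    (2023, 1, 2): pd.date_range("2023-01-02 06:00", "2023-01-02 18:00", freq="1h"),
}

# Iterate day by day, e.g. to count the timestamps in each day's series.
samples_per_day = {key: len(idx) for key, idx in res.items()}
print(samples_per_day)  # {(2023, 1, 1): 13, (2023, 1, 2): 13}
```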

solrad.geotime.timestamp_hms_to_float(timestamp, unit='h')#

Convert Timestamp Hour:Minutes:Seconds information to float.

Example: timestamp_hms_to_float(timestamp, unit = “h”), where timestamp = pd.Timestamp(“2023-03-08 14:25:36”) returns 14.426667. That is, it turns the 14h 25min 36s of the timestamp to an equivalent number of hours. Had we used timestamp_hms_to_float(timestamp, unit = “s”), the result would have been 51936. That is, the equivalent of 14h 25min 36s in seconds.

Parameters:
  • timestamp (pandas.Timestamp object) – Timestamp to convert to float.

  • unit (str, optional) – Time unit to which the timestamp is to be converted. It can either be ‘d’ (day), ‘h’ (hour), ‘m’ (minute) or ‘s’ (second). Default is ‘h’.

Returns:

res – The timestamp’s hour-minute-second information converted to a float in the specified unit.

Return type:

float
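The conversion described in the worked example above can be sketched directly. This is a hypothetical re-implementation for illustration, not solrad’s source:

```python
import pandas as pd

def hms_to_float(timestamp, unit="h"):
    """Convert a Timestamp's HH:MM:SS information to a float
    in the given unit ('d', 'h', 'm' or 's')."""
    # Total seconds elapsed since the start of the timestamp's day.
    seconds = timestamp.hour * 3600 + timestamp.minute * 60 + timestamp.second
    divisor = {"d": 86400, "h": 3600, "m": 60, "s": 1}[unit]
    return seconds / divisor

ts = pd.Timestamp("2023-03-08 14:25:36")
print(round(hms_to_float(ts, "h"), 6))  # -> 14.426667
print(hms_to_float(ts, "s"))            # -> 51936.0
```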

solrad.geotime.utc_hour_to_tz_name(utc_hour)#

Converts a float representing the time zone into a time-zone string accepted by pandas.

Parameters:

utc_hour (float) – Timezone number. Must be a number between -12 and 12.

Returns:

tz_name – Time zone string accepted by pandas.

Return type:

str

Notes

  1. For more information about the time zone strings accepted by pandas, see the link: https://pvlib-python.readthedocs.io/en/v0.3.0/timetimezones.html
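One plausible way to perform this mapping (a sketch under the assumption that pandas-compatible IANA zone names are the target; solrad’s actual implementation may differ) uses the ‘Etc/GMT±N’ zone names, whose signs are inverted relative to the usual UTC offset convention. The function name is hypothetical, and the sketch handles whole-hour offsets only:

```python
def utc_hour_to_tz(utc_hour):
    """Map a whole-hour UTC offset (e.g. -5) to an 'Etc/GMT+/-N'
    zone name accepted by pandas. The IANA 'Etc/GMT' zones use the
    opposite sign: UTC-5 corresponds to 'Etc/GMT+5'."""
    if not -12 <= utc_hour <= 12:
        raise ValueError("utc_hour must be between -12 and 12")
    n = int(utc_hour)  # whole-hour offsets only in this sketch
    # Invert the sign, per the Etc/GMT naming convention.
    return f"Etc/GMT{-n:+d}"

print(utc_hour_to_tz(-5))  # -> Etc/GMT+5
print(utc_hour_to_tz(3))   # -> Etc/GMT-3
```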

Module contents#