Module netapp_ontap.resources.aggregate
Copyright © 2023 NetApp Inc. All rights reserved.
This file has been automatically generated based on the ONTAP REST API documentation.
Updating storage aggregates
The PATCH operation is used to modify properties of the aggregate. Several properties can be modified on an aggregate, but only one property can be modified per PATCH request. PATCH operations on the aggregate's disk count will be blocked while one or more nodes in the cluster are simulating or implementing automatic aggregate creation. The following properties can be modified using the PATCH operation, with a brief description of each:
- name - This property can be changed to rename the aggregate.
- node.name and node.uuid - Either property can be updated in order to relocate the aggregate to a different node in the cluster.
- state - This property can be changed to 'online' or 'offline'. Setting an aggregate 'offline' would automatically offline all the volumes currently hosted on the aggregate.
- block_storage.mirror.enabled - This property can be changed from 'false' to 'true' in order to mirror the aggregate, if the system is capable of doing so.
- block_storage.primary.disk_count - This property can be updated to increase the number of disks in an aggregate.
- block_storage.primary.raid_size - This property can be updated to set the desired RAID size.
- block_storage.primary.raid_type - This property can be updated to set the desired RAID type.
- cloud_storage.tiering_fullness_threshold - This property can be updated to set the desired tiering fullness threshold if using FabricPool.
- cloud_storage.migrate_threshold - This property can be updated to set the desired migrate threshold if using FabricPool.
- data_encryption.software_encryption_enabled - This property enables or disables NAE on the aggregate.
- block_storage.hybrid_cache.storage_pools.allocation_units_count - This property can be updated to add a storage pool to the aggregate specifying the number of allocation units.
- block_storage.hybrid_cache.storage_pools.name - This property can be updated to add a storage pool to the aggregate, specifying the storage pool by name. Either this field or block_storage.hybrid_cache.storage_pools.uuid must be specified together with block_storage.hybrid_cache.storage_pools.allocation_units_count.
- block_storage.hybrid_cache.storage_pools.uuid - This property can be updated to add a storage pool to the aggregate, specifying the storage pool by UUID. Either this field or block_storage.hybrid_cache.storage_pools.name must be specified together with block_storage.hybrid_cache.storage_pools.allocation_units_count.
- block_storage.hybrid_cache.raid_size - This property can be updated to set the desired RAID size. It can also be specified on the first-time addition of a storage pool to the aggregate.
- block_storage.hybrid_cache.raid_type - This property can be updated to set the desired RAID type of a physical SSD Flash Pool. It can also be specified on the first-time addition of a storage pool to the aggregate. When specifying a RAID type of raid4, the node is also required to have spare SSDs for the storage pool.
- block_storage.hybrid_cache.disk_count - This property can be specified on the first-time addition of a physical SSD cache to the aggregate. It can also be updated to increase the number of disks in the physical SSD cache of a hybrid aggregate.
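As a minimal sketch of a single-property PATCH, the following renames an aggregate (the UUID and new name below are illustrative):

```python
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

# Hypothetical UUID and name; substitute values from your own cluster.
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="19425837-f2fa-4a9f-8f01-712f626c983c")
    resource.name = "test1_renamed"  # only one property per PATCH request
    resource.patch()
```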
Aggregate expansion
The PATCH operation also supports automatically expanding an aggregate based on the spare disks which are present within the system. Running PATCH with the query "auto_provision_policy" set to "expand" starts the recommended expansion job. In order to see the expected change in capacity before starting the job, call GET on an aggregate instance with the query "auto_provision_policy" set to "expand".
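A sketch of starting the recommended expansion job, assuming query parameters are passed as keyword arguments to patch() as in the simulate examples later in this document (the UUID is illustrative):

```python
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

# Start the recommended expansion job for this aggregate (hypothetical UUID).
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="cae60cfe-deae-42bd-babb-ef437d118314")
    resource.patch(hydrate=True, auto_provision_policy="expand")
```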
Manual simulated aggregate expansion
The PATCH operation also supports simulated manual expansion of an aggregate. Running PATCH with the query "simulate" set to "true" and "block_storage.primary.disk_count" set to the final disk count will start running the prechecks associated with expanding the aggregate to the proposed size. The response body will include information on how many disks the aggregate can be expanded to, any associated warnings, along with the proposed final size of the aggregate.
Deleting storage aggregates
If volumes exist on an aggregate, they must be deleted or moved before the aggregate can be deleted. See the /storage/volumes API for details on moving or deleting volumes.
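Once an aggregate hosts no volumes, it can be deleted with a DELETE request; a minimal sketch (the UUID is illustrative):

```python
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

# Hypothetical UUID; the aggregate must not host any volumes.
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="19425837-f2fa-4a9f-8f01-712f626c983c")
    resource.delete()  # starts a job; the library polls it to completion by default
```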
Adding a storage pool to an aggregate
A storage pool can be added to an aggregate by patching the field "block_storage.hybrid_cache.storage_pools.allocation_units_count" while also identifying the storage pool with "block_storage.hybrid_cache.storage_pools.name" or "block_storage.hybrid_cache.storage_pools.uuid". Subsequent patches to the aggregate can increase the allocation unit count or add additional storage pools. On the first-time addition of a storage pool to the aggregate, the RAID type can optionally be specified using the "block_storage.hybrid_cache.raid_type" field.
Adding physical SSD cache capacity to an aggregate
The PATCH operation supports addition of a new physical SSD cache to an aggregate. It also supports expansion of existing physical SSD cache in the hybrid aggregate. Running PATCH with "block_storage.hybrid_cache.disk_count" set to the final disk count will expand the physical SSD cache of the hybrid aggregate to the proposed size. The RAID type can be optionally specified using the "block_storage.hybrid_cache.raid_type" field. The RAID size can be optionally specified using the "block_storage.hybrid_cache.raid_size" field. These operations can also be simulated by setting the query "simulate" to "true".
Examples
Retrieving a specific aggregate from the cluster
The following example shows the response of the requested aggregate. If there is no aggregate with the requested UUID, an error is returned.
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="870dd9f2-bdfa-4167-b692-57d1cec874d4")
    resource.get()
    print(resource)
Aggregate(
    {
        "name": "test1",
        "state": "online",
        "snapshot": {
            "files_total": 10,
            "max_files_available": 5,
            "files_used": 3,
            "max_files_used": 50,
        },
        "home_node": {"name": "node-1", "uuid": "caf95bec-f801-11e8-8af9-005056bbe5c1"},
        "create_time": "2018-12-04T15:40:38-05:00",
        "space": {
            "block_storage": {
                "size": 235003904,
                "full_threshold_percent": 98,
                "physical_used_percent": 1,
                "volume_footprints_percent": 14,
                "volume_deduplication_shared_count": 567543,
                "used_percent": 50,
                "physical_used": 5271552,
                "aggregate_metadata": 2655,
                "data_compaction_space_saved_percent": 47,
                "data_compacted_count": 666666,
                "volume_deduplication_space_saved": 23765,
                "volume_deduplication_space_saved_percent": 32,
                "used_including_snapshot_reserve": 674685,
                "available": 191942656,
                "aggregate_metadata_percent": 8,
                "used_including_snapshot_reserve_percent": 35,
                "used": 43061248,
                "data_compaction_space_saved": 654566,
            },
            "cloud_storage": {"used": 0},
            "efficiency_without_snapshots_flexclones": {
                "ratio": 2.0,
                "savings": 5000,
                "logical_used": 10000,
            },
            "efficiency_without_snapshots": {
                "ratio": 1.0,
                "savings": 0,
                "logical_used": 737280,
            },
            "efficiency": {
                "cross_volume_inline_dedupe": False,
                "cross_volume_dedupe_savings": True,
                "ratio": 6.908119720880661,
                "auto_adaptive_compression_savings": False,
                "wise_tsse_min_used_capacity_pct": 2,
                "cross_volume_background_dedupe": True,
                "logical_used": 1646350,
                "savings": 1408029,
                "enable_workload_informed_tsse": True,
            },
            "snapshot": {
                "reserve_percent": 20,
                "total": 5000,
                "used": 3000,
                "used_percent": 45,
                "available": 2000,
            },
        },
        "data_encryption": {
            "software_encryption_enabled": False,
            "drive_protection_enabled": False,
        },
        "volume-count": 0,
        "uuid": "19425837-f2fa-4a9f-8f01-712f626c983c",
        "snaplock_type": "non_snaplock",
        "node": {"name": "node-1", "uuid": "caf95bec-f801-11e8-8af9-005056bbe5c1"},
        "block_storage": {
            "hybrid_cache": {"enabled": False},
            "mirror": {"enabled": False, "state": "unmirrored"},
            "storage_type": "vmdisk",
            "uses_partitions": False,
            "primary": {
                "raid_type": "raid_dp",
                "disk_count": 6,
                "raid_size": 24,
                "disk_type": "ssd",
                "disk_class": "solid_state",
                "checksum_style": "block",
            },
            "plexes": [{"name": "plex0"}],
        },
        "cloud_storage": {"attach_eligible": False},
        "inode_attributes": {
            "files_total": 31136,
            "max_files_possible": 2844525,
            "max_files_available": 31136,
            "used_percent": 5,
            "max_files_used": 97,
            "files_used": 97,
        },
    }
)
Retrieving statistics and metric for an aggregate
In this example, the API returns the "statistics" and "metric" properties for the aggregate requested.
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="538bf337-1b2c-11e8-bad0-005056b48388")
    resource.get(fields="statistics,metric")
    print(resource)
Aggregate(
    {
        "name": "aggr4",
        "statistics": {
            "timestamp": "2019-07-08T22:17:09+00:00",
            "iops_raw": {
                "other": 1586535,
                "write": 1137230,
                "read": 328267,
                "total": 3052032,
            },
            "throughput_raw": {
                "other": 146185560064,
                "write": 63771742208,
                "read": 3106045952,
                "total": 213063348224,
            },
            "latency_raw": {
                "other": 477201985,
                "write": 313354426,
                "read": 54072313,
                "total": 844628724,
            },
            "status": "ok",
        },
        "metric": {
            "duration": "PT15S",
            "timestamp": "2019-07-08T22:16:45+00:00",
            "iops": {"other": 11663, "write": 17, "read": 1, "total": 11682},
            "throughput": {
                "other": 193293789,
                "write": 840226,
                "read": 7099,
                "total": 194141115,
            },
            "latency": {"other": 123, "write": 230, "read": 149, "total": 124},
            "status": "ok",
        },
        "uuid": "538bf337-1b2c-11e8-bad0-005056b48388",
    }
)
For more information and examples on viewing historical performance metrics for any given aggregate, see DOC /storage/aggregates/{uuid}/metrics
Simulating aggregate expansion
The following example shows the response for a simulated data aggregate expansion based on the values of the 'block_storage.primary.disk_count' attribute passed in. The query does not modify the existing aggregate but returns how the aggregate will look after the expansion along with any associated warnings. Simulated data aggregate expansion will be blocked while one or more nodes in the cluster are simulating or implementing automatic aggregate creation. This will be reflected in the following attributes:
- space.block_storage.size - Total usable space in bytes, not including WAFL reserve and aggregate Snapshot copy reserve.
- block_storage.primary.disk_count - Number of disks that could be used to create the aggregate.
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="cae60cfe-deae-42bd-babb-ef437d118314")
    resource.block_storage = {"primary": {"disk_count": 13}}
    resource.patch(hydrate=True, simulate=True)
Manual aggregate expansion with disk size query
The following example shows the response for an aggregate expansion based on the value of the 'block_storage.hybrid_cache.disk_count' attribute and the disk size passed in.
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="cae60cfe-deae-42bd-babb-ef437d118314")
    resource.block_storage = {"hybrid_cache": {"disk_count": 4}}
    resource.patch(hydrate=True, disk_size=1902379008)
Simulating a manual aggregate expansion with disk size query
The following example shows the response for a simulated manual aggregate expansion based on the value of the 'block_storage.hybrid_cache.disk_count' attribute and the disk size passed in. The query internally maps out the appropriate expansion, as well as any warnings that may be associated with the hybrid-enabled aggregate.
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="cae60cfe-deae-42bd-babb-ef437d118314")
    resource.block_storage = {"hybrid_cache": {"disk_count": 4}}
    resource.patch(hydrate=True, simulate=True, disk_size=1902379008)
Simulating a manual aggregate expansion with raid group query
The following example shows the response for a simulated manual aggregate expansion based on the value of the 'block_storage.primary.disk_count' attribute passed in. The query internally maps out the appropriate expansion, along with any associated warnings, and lays out the new RAID groups in a more detailed view. An additional query can be passed in to specify RAID group addition by new RAID group, all RAID groups, or a specific RAID group.
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="cae60cfe-deae-42bd-babb-ef437d118314")
    resource.block_storage = {"primary": {"disk_count": 24}}
    resource.patch(hydrate=True, simulate=True, raid_group="new")
Retrieving the usable spare information for the cluster
The following example shows the response from retrieving usable spare information for the expansion of this particular aggregate. The output is restricted to only spares that are compatible with this aggregate.
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    print(
        list(
            Aggregate.get_collection(
                uuid="cae60cfe-deae-42bd-babb-ef437d118314", show_spares=True
            )
        )
    )
[]
Retrieving the SSD spare count for the cluster
The following example shows the response from retrieving SSD spare count information for the expansion of this particular aggregate's hybrid cache tier. The output is restricted to only spares that are compatible with this aggregate.
# The API:
/api/storage/aggregates?show_spares=true&uuid={uuid}&flash_pool_eligible=true
# The response:
{
  "records": [],
  "num_records": 0,
  "spares": [
    {
      "node": {
        "uuid": "c35c5975-cbcb-11ec-a3e1-005056bbdb46",
        "name": "node-2"
      },
      "disk_class": "solid_state",
      "disk_type": "ssd",
      "size": 1902379008,
      "checksum_style": "block",
      "syncmirror_pool": "pool0",
      "is_partition": false,
      "usable": 1,
      "layout_requirements": [
        {
          "raid_type": "raid4",
          "default": true,
          "aggregate_min_disks": 2,
          "raid_group": {
            "min": 2,
            "max": 14,
            "default": 8
          }
        }
      ]
    }
  ]
}
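The same query can be issued through the library by passing the query parameters to get_collection(), as in the usable-spares example above (a sketch; the UUID is illustrative):

```python
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

# Hypothetical UUID; restrict spares to those eligible for the Flash Pool cache tier.
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    print(
        list(
            Aggregate.get_collection(
                uuid="cae60cfe-deae-42bd-babb-ef437d118314",
                show_spares=True,
                flash_pool_eligible=True,
            )
        )
    )
```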
Retrieving a recommendation for an aggregate expansion
The following example shows the response with the recommended data aggregate expansion based on what disks are present within the system. The query does not modify the existing aggregate but returns how the aggregate will look after the expansion. The recommendation will be reflected in the attributes - 'space.block_storage.size' and 'block_storage.primary.disk_count'. Recommended data aggregate expansion will be blocked while one or more nodes in the cluster are simulating or implementing automatic aggregate creation.
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="cae60cfe-deae-42bd-babb-ef437d118314")
    resource.get(auto_provision_policy="expand")
    print(resource)
Aggregate(
    {
        "name": "node_2_SSD_1",
        "space": {"block_storage": {"size": 1116180480}},
        "uuid": "cae60cfe-deae-42bd-babb-ef437d118314",
        "node": {"name": "node-2", "uuid": "4046dda8-f802-11e8-8f6d-005056bb2030"},
        "block_storage": {
            "hybrid_cache": {"enabled": False},
            "mirror": {"enabled": False},
            "primary": {
                "raid_type": "raid_dp",
                "disk_count": 12,
                "simulated_raid_groups": [
                    {
                        "is_partition": False,
                        "name": "test/plex0/rg0",
                        "data_disk_count": 10,
                        "usable_size": 12309487,
                        "parity_disk_count": 2,
                    }
                ],
                "raid_size": 24,
                "disk_type": "ssd",
                "disk_class": "solid_state",
            },
        },
    }
)
Updating an aggregate in the cluster
The following example shows the workflow of adding disks to the aggregate.
Step 1: Check the current disk count on the aggregate.
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="19425837-f2fa-4a9f-8f01-712f626c983c")
    resource.get(fields="block_storage.primary.disk_count")
    print(resource)
Aggregate(
    {
        "name": "test1",
        "uuid": "19425837-f2fa-4a9f-8f01-712f626c983c",
        "block_storage": {"primary": {"disk_count": 6}},
    }
)
Step 2: Update the aggregate with the new disk count in 'block_storage.primary.disk_count'. The response to PATCH is a job unless the request is invalid.
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="19425837-f2fa-4a9f-8f01-712f626c983c")
    resource.block_storage = {"primary": {"disk_count": 8}}
    resource.patch()
Step 3: Wait for the job to finish, then call GET to see the reflected change.
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="19425837-f2fa-4a9f-8f01-712f626c983c")
    resource.get(fields="block_storage.primary.disk_count")
    print(resource)
Aggregate(
    {
        "name": "test1",
        "uuid": "19425837-f2fa-4a9f-8f01-712f626c983c",
        "block_storage": {"primary": {"disk_count": 8}},
    }
)
Adding a storage pool to an aggregate
The following example shows how to add cache capacity from an existing storage pool to an aggregate. Step 1: Update the aggregate with the new storage pool allocation unit count in 'block_storage.hybrid_cache.storage_pools.allocation_units_count'. Additionally, specify 'block_storage.hybrid_cache.storage_pools.name' or 'block_storage.hybrid_cache.storage_pools.uuid' to identify the storage pool. On the first storage pool, 'block_storage.hybrid_cache.raid_type' can be specified as the RAID type of the hybrid cache. The response to PATCH is a job unless the request is invalid.
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="19425837-f2fa-4a9f-8f01-712f626c983c")
    resource.block_storage = {
        "hybrid_cache": {
            "raid_type": "raid_dp",
            "storage_pools": [
                {"allocation_units_count": 2, "storage_pool": {"name": "sp1"}}
            ],
        }
    }
    resource.patch()
Step 2: Wait for the job to finish, then call GET to see the reflected change.
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="19425837-f2fa-4a9f-8f01-712f626c983c")
    resource.get(fields="block_storage.hybrid_cache")
    print(resource)
Aggregate({"name": "test1", "uuid": "19425837-f2fa-4a9f-8f01-712f626c983c"})
Adding physical SSD cache capacity to an aggregate
The following example shows how to add physical SSD cache capacity to an aggregate. Step 1: Specify the number of disks to be added to cache in 'block_storage.hybrid_cache.disk_count'. 'block_storage.hybrid_cache.raid_type' can be specified for the RAID type of the hybrid cache. 'block_storage.hybrid_cache.raid_size' can be specified for the RAID size of the hybrid cache. The response to PATCH is a job unless the request is invalid.
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="caa8a9f1-0219-4eaf-bcad-e29c05042fe1")
    resource.block_storage.hybrid_cache.disk_count = 3
    resource.block_storage.hybrid_cache.raid_type = "raid4"
    resource.patch()
Step 2: Wait for the job to finish, then call GET to see the reflected change.
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="caa8a9f1-0219-4eaf-bcad-e29c05042fe1")
    resource.get(fields="block_storage.hybrid_cache")
    print(resource)
Aggregate({"name": "test1", "uuid": "caa8a9f1-0219-4eaf-bcad-e29c05042fe1"})
Simulated addition of physical SSD cache capacity to an aggregate
The following example shows the response for a simulated addition of physical SSD cache capacity to an aggregate based on the values of the 'block_storage.hybrid_cache.disk_count', 'block_storage.hybrid_cache.raid_type' and 'block_storage.hybrid_cache.raid_size' attributes passed in. The query does not modify the existing aggregate but returns how the aggregate will look after the expansion along with any associated warnings. Simulated addition of physical SSD cache capacity to an aggregate will be blocked while one or more nodes in the cluster are simulating or implementing automatic aggregate creation. This will be reflected in the following attributes:
- block_storage.hybrid_cache.size - Total usable cache space in bytes, not including WAFL reserve and aggregate Snapshot copy reserve.
- block_storage.hybrid_cache.disk_count - Number of disks that can be added to the aggregate's cache tier.
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="7eb630d1-0e55-4cb6-8d90-957d6f4db54e")
    resource.block_storage.hybrid_cache.disk_count = 6
    resource.block_storage.hybrid_cache.raid_type = "raid4"
    resource.block_storage.hybrid_cache.raid_size = 3
    resource.patch(hydrate=True, simulate=True)
The following example shows the workflow to enable software encryption on an aggregate.
Step 1: Check the current software encryption status of the aggregate.
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="f3aafdc6-be35-4d93-9590-5a402bffbe4b")
    resource.get(fields="data_encryption.software_encryption_enabled")
    print(resource)
Aggregate(
    {
        "name": "aggr5",
        "data_encryption": {"software_encryption_enabled": False},
        "uuid": "f3aafdc6-be35-4d93-9590-5a402bffbe4b",
    }
)
Step 2: Update the aggregate with the encryption status in 'data_encryption.software_encryption_enabled'. The response to PATCH is a job unless the request is invalid.
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="f3aafdc6-be35-4d93-9590-5a402bffbe4b")
    resource.data_encryption = {"software_encryption_enabled": True}
    resource.patch()
Step 3: Wait for the job to finish, then call GET to see the reflected change.
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="f3aafdc6-be35-4d93-9590-5a402bffbe4b")
    resource.get(fields="data_encryption.software_encryption_enabled")
    print(resource)
Aggregate(
    {
        "name": "aggr5",
        "data_encryption": {"software_encryption_enabled": True},
        "uuid": "f3aafdc6-be35-4d93-9590-5a402bffbe4b",
    }
)
Classes
class Aggregate (*args, **kwargs)
-
Allows interaction with Aggregate objects on the host
Initialize the instance of the resource.
Any keyword arguments are set on the instance as properties. For example, if the class was named 'MyResource', then this statement would be true:
MyResource(name='foo').name == 'foo'
Args
*args
- Each positional argument represents a parent key as used in the URL of the object. That is, each value will be used to fill in a segment of the URL which refers to some parent object. The order of these arguments must match the order they are specified in the URL, from left to right.
**kwargs
- each entry will have its key set as an attribute name on the instance and its value will be the value of that attribute.
Ancestors
Static methods
def count_collection (*args, connection: HostConnection = None, **kwargs) -> int
-
Returns a count of all Aggregate resources that match the provided query
This calls GET on the object to determine the number of records. It is more efficient than calling get_collection() because it will not construct any objects. Query parameters can be passed in as kwargs to determine a count of objects that match some filtered criteria.
Args
*args
- Each entry represents a parent key which is used to build the path to the child object. If the URL definition were /api/foos/{foo.name}/bars, then to get the count of bars for a particular foo, the foo.name value should be passed.
connection
- The HostConnection object to use for this API call. If unset, tries to use the connection which is set globally for the library or from the current context.
**kwargs
- Any key/value pairs passed will be sent as query parameters to the host. These query parameters can affect the count. A return_records query param will be ignored.
Returns
On success, returns an integer count of the objects of this type. On failure, returns -1.
Raises
NetAppRestError
: If the API call returned a status code >= 400, or if there is no connection available to use either passed in or on the library.
def delete_collection (*args, records: Iterable[_ForwardRef('Aggregate')] = None, body: Union[Resource, dict] = None, poll: bool = True, poll_interval: Optional[int] = None, poll_timeout: Optional[int] = None, connection: HostConnection = None, **kwargs) -> NetAppResponse
-
Deletes the aggregate specified by the UUID. This request starts a job and returns a link to that job.
Related ONTAP commands
storage aggregate delete
Learn more
Delete all objects in a collection which match the given query.
All records on the host which match the query will be deleted.
Args
*args
- Each entry represents a parent key which is used to build the path to the child object. If the URL definition were /api/foos/{foo.name}/bars, then to delete the collection of bars for a particular foo, the foo.name value should be passed.
records
- Can be provided in place of a query. If so, this list of objects will be deleted from the host.
body
- The body of the delete request. This could be a Resource instance or a dictionary object.
poll
- If set to True, the call will not return until the asynchronous job on the host has completed. Has no effect if the host did not return a job response.
poll_interval
- If the operation returns a job, this specifies how often to query the job for updates.
poll_timeout
- If the operation returns a job, this specifies how long to continue monitoring the job's status for completion.
connection
- The HostConnection object to use for this API call. If unset, tries to use the connection which is set globally for the library or from the current context.
**kwargs
- Any key/value pairs passed will be sent as query parameters to the host. Only resources matching this query will be deleted.
Returns
A NetAppResponse object containing the details of the HTTP response.
Raises
NetAppRestError
: If the API call returned a status code >= 400
def find (*args, connection: HostConnection = None, **kwargs) -> Resource
-
Retrieves the collection of aggregates for the entire cluster.
Expensive properties
There is an added computational cost to retrieving values for these properties. They are not included by default in GET results and must be explicitly requested using the fields query parameter. See Requesting specific fields to learn more.
- metric.*
- space.block_storage.inactive_user_data
- space.block_storage.inactive_user_data_percent
- space.footprint
- is_spare_low
- statistics.*
Related ONTAP commands
storage aggregate show
Learn more
Find an instance of an object on the host given a query.
The host will be queried with the provided key/value pairs to find a matching resource. If 0 are found, None will be returned. If more than 1 is found, an error will be raised or returned. If there is exactly 1 matching record, then it will be returned.
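A minimal usage sketch of find() (the aggregate name is illustrative):

```python
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

# Find the single aggregate named "aggr1" (hypothetical name); None if no match.
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    aggr = Aggregate.find(name="aggr1")
    if aggr is not None:
        print(aggr.uuid)
```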
Args
*args
- Each entry represents a parent key which is used to build the path to the child object. If the URL definition were /api/foos/{foo.name}/bars, then to find a bar for a particular foo, the foo.name value should be passed.
connection
- The HostConnection object to use for this API call. If unset, tries to use the connection which is set globally for the library or from the current context.
**kwargs
- Any key/value pairs passed will be sent as query parameters to the host.
Returns
A Resource object containing the details of the object or None if no matches were found.
Raises
NetAppRestError
: If the API call returned more than 1 matching resource.
def get_collection (*args, connection: HostConnection = None, max_records: int = None, **kwargs) -> Iterable[Resource]
-
Retrieves the collection of aggregates for the entire cluster.
Expensive properties
There is an added computational cost to retrieving values for these properties. They are not included by default in GET results and must be explicitly requested using the fields query parameter. See Requesting specific fields to learn more.
- metric.*
- space.block_storage.inactive_user_data
- space.block_storage.inactive_user_data_percent
- space.footprint
- is_spare_low
- statistics.*
Related ONTAP commands
storage aggregate show
Learn more
Fetch a list of all objects of this type from the host.
This is a lazy fetch, making API calls only as necessary when the result of this call is iterated over. For instance, if max_records is set to 5, then iterating over the collection causes an API call to be sent to the server once for every 5 records. If the client stops iterating before getting to the 6th record, then no additional API calls are made.
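The lazy fetch described above can be sketched as follows (field selection shown is illustrative):

```python
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

# One API call is made per 5 records as the iterator advances;
# stopping early avoids further calls to the host.
with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    for aggr in Aggregate.get_collection(max_records=5, fields="name"):
        print(aggr.name)
```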
Args
*args
- Each entry represents a parent key which is used to build the path to the child object. If the URL definition were /api/foos/{foo.name}/bars, then to get the collection of bars for a particular foo, the foo.name value should be passed.
connection
- The HostConnection object to use for this API call. If unset, tries to use the connection which is set globally for the library or from the current context.
max_records
- The maximum number of records to return per call
**kwargs
- Any key/value pairs passed will be sent as query parameters to the host.
Returns
A list of Resource objects
Raises
NetAppRestError
: If there is no connection available to use either passed in or on the library. This would not be raised when get_collection() is called, but rather when the result is iterated.
def patch_collection (body: dict, *args, records: Iterable[_ForwardRef('Aggregate')] = None, poll: bool = True, poll_interval: Optional[int] = None, poll_timeout: Optional[int] = None, connection: HostConnection = None, **kwargs) -> NetAppResponse
-
Updates the aggregate specified by the UUID with the properties in the body. This request starts a job and returns a link to that job.
Related ONTAP commands
storage aggregate add-disks
storage aggregate mirror
storage aggregate modify
storage aggregate relocation start
storage aggregate rename
Learn more
Patch all objects in a collection which match the given query.
All records on the host which match the query will be patched with the provided body.
Args
body
- A dictionary of name/value pairs to set on all matching members of the collection. The body argument will be ignored if records is provided.
*args
- Each entry represents a parent key which is used to build the path to the child object. If the URL definition were /api/foos/{foo.name}/bars, then to patch the collection of bars for a particular foo, the foo.name value should be passed.
records
- Can be provided in place of a query. If so, this list of objects will be patched on the host.
poll
- If set to True, the call will not return until the asynchronous job on the host has completed. Has no effect if the host did not return a job response.
poll_interval
- If the operation returns a job, this specifies how often to query the job for updates.
poll_timeout
- If the operation returns a job, this specifies how long to continue monitoring the job's status for completion.
connection
- The
HostConnection
object to use for this API call. If unset, tries to use the connection which is set globally for the library or from the current context.
**kwargs
- Any key/value pairs passed will be sent as query parameters to the host. Only resources matching this query will be patched.
Returns
A
NetAppResponse
object containing the details of the HTTP response.
Raises
NetAppRestError
: If the API call returned a status code >= 400
def post_collection (records: Iterable[_ForwardRef('Aggregate')], *args, hydrate: bool = False, poll: bool = True, poll_interval: Optional[int] = None, poll_timeout: Optional[int] = None, connection: HostConnection = None, **kwargs) -> Union[List[Aggregate], NetAppResponse]
-
Automatically creates aggregates based on an optimal layout recommended by the system. Alternatively, properties can be provided to create an aggregate according to the requested specification. This request starts a job and returns a link to that job. POST operations will be blocked while one or more nodes in the cluster are simulating or implementing automatic aggregate creation.
Required properties
Properties are not required for this API. The following properties are only required if you want to specify properties for aggregate creation:
- name - Name of the aggregate.
- node.name or node.uuid - Node on which the aggregate will be created.
- block_storage.primary.disk_count - Number of disks to be used to create the aggregate.
Default values
If not specified in POST, the following default values are assigned. The remaining unspecified properties will receive system-dependent default values.
- block_storage.mirror.enabled - false
- snaplock_type - non_snaplock
Related ONTAP commands
storage aggregate auto-provision
storage aggregate create
Example:
POST /api/storage/aggregates {"node": {"name": "node1"}, "name": "test", "block_storage": {"primary": {"disk_count": "10"}}}
Learn more
Send this collection of objects to the host as a creation request.
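A sketch of a batched creation request; aggregate names, node names, and disk counts are placeholders:

```python
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("cluster-mgmt.example.com", username="admin",
                    password="secret", verify=False):
    # Each record carries its own creation properties
    records = [
        Aggregate(name="aggr_a", node={"name": "node1"},
                  block_storage={"primary": {"disk_count": 10}}),
        Aggregate(name="aggr_b", node={"name": "node2"},
                  block_storage={"primary": {"disk_count": 10}}),
    ]
    # Returns a new list of created objects; save this list rather than
    # continuing to use the original records
    created = Aggregate.post_collection(records)
```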
Args
records
- A list of
Resource
objects to send to the server to be created. *args
- Each entry represents a parent key which is used to build the path to the child object. If the URL definition were /api/foos/{foo.name}/bars, then to create a bar for a particular foo, the foo.name value should be passed.
hydrate
- If set to True, after the response is received from the call, a GET call will be made to refresh all fields of each object. When hydrate is set to True, poll must also be set to True.
poll
- If set to True, the call will not return until the asynchronous job on the host has completed. Has no effect if the host did not return a job response.
poll_interval
- If the operation returns a job, this specifies how often to query the job for updates.
poll_timeout
- If the operation returns a job, this specifies how long to continue monitoring the job's status for completion.
connection
- The
HostConnection
object to use for this API call. If unset, tries to use the connection which is set globally for the library or from the current context.
**kwargs
- Any key/value pairs passed will be sent as query parameters to the host.
Returns
A list of
Resource
objects matching the provided type which have been created by the host and returned. This is not the same list that was provided, so to continue using the object, you should save this list. If poll is set to False, then a
NetAppResponse
object is returned instead.
Raises
NetAppRestError
: If the API call returned a status code >= 400
Methods
def delete (self, body: Union[Resource, dict] = None, poll: bool = True, poll_interval: Optional[int] = None, poll_timeout: Optional[int] = None, **kwargs) -> NetAppResponse
-
Deletes the aggregate specified by the UUID. This request starts a job and returns a link to that job.
Related ONTAP commands
storage aggregate delete
Learn more
Send a deletion request to the host for this object.
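A minimal deletion sketch; the host, credentials, and UUID are placeholders, and the key must be set on the instance before calling delete():

```python
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("cluster-mgmt.example.com", username="admin",
                    password="secret", verify=False):
    # Placeholder UUID identifying the aggregate to delete
    aggregate = Aggregate(uuid="1cd8a442-86d1-11e0-ae1c-123478563412")
    response = aggregate.delete()  # polls the deletion job until it completes
```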
Args
body
- The body of the delete request. This could be a Resource instance or a dictionary object.
poll
- If set to True, the call will not return until the asynchronous job on the host has completed. Has no effect if the host did not return a job response.
poll_interval
- If the operation returns a job, this specifies how often to query the job for updates.
poll_timeout
- If the operation returns a job, this specifies how long to continue monitoring the job's status for completion.
connection
- The
HostConnection
object to use for this API call. If unset, tries to use the connection which is set globally for the library or from the current context.
**kwargs
- Any key/value pairs passed will be sent as query parameters to the host.
Returns
A
NetAppResponse
object containing the details of the HTTP response.
Raises
NetAppRestError
: If the API call returned a status code >= 400
def get (self, **kwargs) -> NetAppResponse
-
Retrieves the aggregate specified by the UUID. The recommend query cannot be used for this operation.
Expensive properties
There is an added computational cost to retrieving values for these properties. They are not included by default in GET results and must be explicitly requested using the
fields
query parameter. See
Requesting specific fields
to learn more.
- metric.*
- space.block_storage.inactive_user_data
- space.block_storage.inactive_user_data_percent
- space.footprint
- is_spare_low
- statistics.*
Related ONTAP commands
storage aggregate show
Learn more
Fetch the details of the object from the host.
Requires the keys to be set (if any). After returning, new or changed properties from the host will be set on the instance.
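A sketch of fetching an instance, including expensive properties via the fields parameter; the host, credentials, and UUID are placeholders:

```python
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("cluster-mgmt.example.com", username="admin",
                    password="secret", verify=False):
    # Placeholder UUID; the key must be set before calling get()
    aggregate = Aggregate(uuid="1cd8a442-86d1-11e0-ae1c-123478563412")
    # Expensive properties are omitted by default and must be requested explicitly
    aggregate.get(fields="space.footprint,is_spare_low")
    print(aggregate.space.footprint, aggregate.is_spare_low)
```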
Returns
A
NetAppResponse
object containing the details of the HTTP response.
Raises
NetAppRestError
: If the API call returned a status code >= 400
def patch (self, hydrate: bool = False, poll: bool = True, poll_interval: Optional[int] = None, poll_timeout: Optional[int] = None, **kwargs) -> NetAppResponse
-
Updates the aggregate specified by the UUID with the properties in the body. This request starts a job and returns a link to that job.
Related ONTAP commands
storage aggregate add-disks
storage aggregate mirror
storage aggregate modify
storage aggregate relocation start
storage aggregate rename
Learn more
Send the difference in the object's state to the host as a modification request.
Calculates the difference in the object's state since the last time we interacted with the host and sends this in the request body.
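A rename sketch showing the difference-based PATCH described above; the host, credentials, UUID, and new name are placeholders:

```python
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("cluster-mgmt.example.com", username="admin",
                    password="secret", verify=False):
    aggregate = Aggregate(uuid="1cd8a442-86d1-11e0-ae1c-123478563412")  # placeholder key
    aggregate.get()
    aggregate.name = "aggr_renamed"  # only the changed property is sent in the body
    aggregate.patch()                # polls the rename job until it completes
```

Recall from the section above that only one property can be modified per PATCH request.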
Args
hydrate
- If set to True, after the response is received from the call, a GET call will be made to refresh all fields of the object.
poll
- If set to True, the call will not return until the asynchronous job on the host has completed. Has no effect if the host did not return a job response.
poll_interval
- If the operation returns a job, this specifies how often to query the job for updates.
poll_timeout
- If the operation returns a job, this specifies how long to continue monitoring the job's status for completion.
connection
- The
HostConnection
object to use for this API call. If unset, tries to use the connection which is set globally for the library or from the current context.
**kwargs
- Any key/value pairs passed will normally be sent as query parameters to the host. If any of these pairs are parameters that are sent as formdata, then only parameters of that type will be accepted and all others will be discarded.
Returns
A
NetAppResponse
object containing the details of the HTTP response.
Raises
NetAppRestError
: If the API call returned a status code >= 400
def post (self, hydrate: bool = False, poll: bool = True, poll_interval: Optional[int] = None, poll_timeout: Optional[int] = None, **kwargs) -> NetAppResponse
-
Automatically creates aggregates based on an optimal layout recommended by the system. Alternatively, properties can be provided to create an aggregate according to the requested specification. This request starts a job and returns a link to that job. POST operations will be blocked while one or more nodes in the cluster are simulating or implementing automatic aggregate creation.
Required properties
Properties are not required for this API. The following properties are only required if you want to specify properties for aggregate creation:
- name - Name of the aggregate.
- node.name or node.uuid - Node on which the aggregate will be created.
- block_storage.primary.disk_count - Number of disks to be used to create the aggregate.
Default values
If not specified in POST, the following default values are assigned. The remaining unspecified properties will receive system-dependent default values.
- block_storage.mirror.enabled - false
- snaplock_type - non_snaplock
Related ONTAP commands
storage aggregate auto-provision
storage aggregate create
Example:
POST /api/storage/aggregates {"node": {"name": "node1"}, "name": "test", "block_storage": {"primary": {"disk_count": "10"}}}
Learn more
Send this object to the host as a creation request.
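A creation sketch mirroring the REST example above; the host, credentials, node name, and disk count are illustrative:

```python
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("cluster-mgmt.example.com", username="admin",
                    password="secret", verify=False):
    aggregate = Aggregate(name="test", node={"name": "node1"},
                          block_storage={"primary": {"disk_count": 10}})
    aggregate.post()  # starts the creation job and polls until it completes
```

Calling post() with no properties set would instead trigger the system-recommended automatic layout.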
Args
hydrate
- If set to True, after the response is received from the call, a GET call will be made to refresh all fields of the object.
poll
- If set to True, the call will not return until the asynchronous job on the host has completed. Has no effect if the host did not return a job response.
poll_interval
- If the operation returns a job, this specifies how often to query the job for updates.
poll_timeout
- If the operation returns a job, this specifies how long to continue monitoring the job's status for completion.
connection
- The
HostConnection
object to use for this API call. If unset, tries to use the connection which is set globally for the library or from the current context.
**kwargs
- Any key/value pairs passed will normally be sent as query parameters to the host. If any of these pairs are parameters that are sent as formdata, then only parameters of that type will be accepted and all others will be discarded.
Returns
A
NetAppResponse
object containing the details of the HTTP response.
Raises
NetAppRestError
: If the API call returned a status code >= 400
Inherited members
class AggregateSchema (*, only: Union[Sequence[str], Set[str]] = None, exclude: Union[Sequence[str], Set[str]] = (), many: bool = False, context: Dict = None, load_only: Union[Sequence[str], Set[str]] = (), dump_only: Union[Sequence[str], Set[str]] = (), partial: Union[bool, Sequence[str], Set[str]] = False, unknown: str = None)
-
The fields of the Aggregate object
Ancestors
- netapp_ontap.resource.ResourceSchema
- marshmallow.schema.Schema
- marshmallow.base.SchemaABC
Class variables
-
block_storage: AggregateBlockStorage GET POST PATCH
-
The block_storage field of the aggregate.
-
cloud_storage: AggregateCloudStorage PATCH
-
The cloud_storage field of the aggregate.
-
create_time: str GET
-
Timestamp of aggregate creation.
Example: 2018-01-01T16:00:00.000+0000
-
data_encryption: AggregateDataEncryption GET POST PATCH
-
The data_encryption field of the aggregate.
-
dr_home_node: DrNode GET POST PATCH
-
The dr_home_node field of the aggregate.
-
home_node: Node GET POST PATCH
-
The home_node field of the aggregate.
-
inactive_data_reporting: AggregateInactiveDataReporting GET POST PATCH
-
The inactive_data_reporting field of the aggregate.
-
inode_attributes: AggregateInodeAttributes GET
-
The inode_attributes field of the aggregate.
-
is_spare_low: bool GET
-
Specifies whether the aggregate is in a spares low condition on any of the RAID groups. This is an advanced property; there is an added computational cost to retrieving its value. The field is not populated for either a collection GET or an instance GET unless it is explicitly requested using the fields query parameter containing either footprint or **.
Example: false
-
links: SelfLink GET
-
The links field of the aggregate.
-
metric: PerformanceMetric GET
-
The metric field of the aggregate.
-
name: str GET POST PATCH
-
Aggregate name.
Example: node1_aggr_1
-
node: Node GET POST PATCH
-
The node field of the aggregate.
-
recommendation_spares: List[AggregateSpare] GET POST PATCH
-
Information on the aggregate's remaining hot spare disks.
-
sidl_enabled: bool GET POST PATCH
-
Specifies whether or not SIDL is enabled on the aggregate.
-
snaplock_type: str GET POST
-
SnapLock type.
Valid choices:
- non_snaplock
- compliance
- enterprise
-
snapshot: AggregateSnapshot GET POST PATCH
-
The snapshot field of the aggregate.
-
space: AggregateSpace GET POST PATCH
-
The space field of the aggregate.
-
state: str GET POST PATCH
-
Operational state of the aggregate.
Valid choices:
- online
- onlining
- offline
- offlining
- relocating
- unmounted
- restricted
- inconsistent
- failed
- unknown
-
statistics: PerformanceMetricRaw GET
-
The statistics field of the aggregate.
-
tags: List[str] GET POST PATCH
-
Tags are an optional way to track the uses of a resource. Tag values must be formatted as key:value strings.
Example: ["team:csi","environment:test"]
-
uuid: str GET
-
Aggregate UUID.
-
volume_count: Size GET
-
Number of volumes in the aggregate.