When talking about attaching a Cinder volume there are three steps that must happen before the volume is available in the host:

1. Retrieve the connection information from the host where the volume is going to be attached.
2. Use that connection information to export and map the volume in the storage backend.
3. Attach the volume on the host using the export and mapping information.
If we are running cinderlib and doing the attach on the same host, then all steps will be done on that host. But in many cases you may want to manage the storage backend on one host and attach a volume on another. In such cases, steps 1 and 3 will happen on the host that needs the attach and step 2 on the node running cinderlib.
Projects in OpenStack use the OS-Brick library to manage the attaching and detaching processes, and so does cinderlib. The only difference is that some connection types are handled by the hypervisors in OpenStack, so cinderlib needs some alternative code to manage them.
Connection objects' most interesting attributes are:

- connected: boolean reflecting whether the connection is complete.
- connection_info: dictionary with the connection information, holding the attaching host's connector properties under the 'connector' key and the driver's connection data under the 'conn' key.
- device: dictionary with the local attachment information returned by OS-Brick, needed for the detach.
- path: string with the path of the device created on the host when the volume was attached.
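As a quick illustration, here is a minimal sketch, assuming lvm is a Backend instance like the one configured in the multipath example later in this document, that prints these attributes after a local attach:

vol = lvm.create_volume(size=1)
conn = vol.attach()

print(conn.connected)        # True once the attachment completes
print(conn.connection_info)  # {'connector': {...}, 'conn': {...}}
print(conn.path)             # e.g. /dev/sdb on this host

vol.detach()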
Once we have created a volume with cinderlib, doing a local attachment is really simple: we just call the attach method on the Volume, which returns a Connection instance with the attachment information, and once we are done we call the detach method on the Volume.
vol = lvm.create_volume(size=1)
attach = vol.attach()
with open(attach.path, 'w') as f:
    f.write('*' * 100)
vol.detach()
This attach method will take care of everything, from gathering our local connection information, to exporting the volume, initializing the connection, and finally doing the local attachment of the volume to our host.
The detach operation works in a similar way, but performing the exact opposite steps and in reverse. It will detach the volume from our host, terminate the connection, and if there are no more connections to the volume it will also remove the export of the volume.
Attention
The Connection instance returned by the Volume attach method also has a detach method, but it behaves differently from the one on the Volume: it only performs the local detach step, not the terminate connection or remove export steps.
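A short sketch of the difference, assuming vol is a Volume attached on this host:

conn = vol.attach()

# Full cleanup: local detach, terminate the connection, and remove the export.
vol.detach()

# Alternatively, only remove the local attachment, leaving the volume
# exported and mapped:
# conn.detach()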
For a remote connection, where you don't have the driver configuration or access to the storage management network, attaching and detaching volumes is a little more inconvenient, and how you do it will depend on whether or not you have access to the metadata persistence storage.
In any case the general attach flow looks something like this:

1. The consumer node gathers its connector properties.
2. The controller node receives the connector properties and uses them to export and map the volume.
3. The connection information generated by the export is passed back to the consumer node.
4. The consumer node uses the connection information to attach the volume locally.
When you have access to the metadata persistence storage things are easier, as you can use the persistence storage itself to pass information between the consumer and the controller node.
Assuming you have the following variables:

- persistence_config: the metadata persistence plugin configuration.
- node_id: a unique string identifying the consumer node.
- cinderlib_driver_configuration: dictionary with the backend driver configuration.
- volume_id: ID of the volume we want to attach.
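For illustration purposes, these variables could look something like the following sketch; all values are examples only and must be adapted to your deployment:

persistence_config = {'storage': 'db',
                      'connection': 'sqlite:///cinderlib.sqlite'}

node_id = 'consumer-node-1'

cinderlib_driver_configuration = {
    'volume_driver': 'cinder.volume.drivers.lvm.LVMVolumeDriver',
    'volume_group': 'cinder-volumes',
    'target_protocol': 'iscsi',
    'target_helper': 'lioadm',
    'volume_backend_name': 'lvm_iscsi',
}

volume_id = '00000000-0000-0000-0000-000000000000'  # ID of an existing volume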
The consumer node must store its connector properties on start using the key-value storage provided by the persistence plugin:
import json
import socket

import cinderlib as cl

cl.setup(persistence_config=persistence_config)

# Only store the connector properties if they aren't there already.
kv = cl.Backend.persistence.get_key_values(node_id)
if not kv:
    storage_nw_ip = socket.gethostbyname(socket.gethostname())
    connector_dict = cl.get_connector_properties('sudo', storage_nw_ip,
                                                 True, False)
    value = json.dumps(connector_dict, separators=(',', ':'))
    kv = cl.KeyValue(node_id, value)
    cl.Backend.persistence.set_key_value(kv)
Then when we want to attach a volume to node_id the controller can retrieve this information using the persistence plugin and export and map the volume for the specific host.
import json

import cinderlib as cl

cl.setup(persistence_config=persistence_config)
storage = cl.Backend(**cinderlib_driver_configuration)

# Retrieve the connector properties the consumer node stored on start.
kv = cl.Backend.persistence.get_key_values(node_id)
if not kv:
    raise Exception('Unknown node')
connector_info = json.loads(kv[0].value)
vol = storage.Volume.get_by_id(volume_id)
vol.connect(connector_info, attached_host=node_id)
Once the volume has been exported and mapped, the connection information is automatically stored by the persistence plugin and the consumer host can attach the volume:
vol = storage.Volume.get_by_id(volume_id)
connection = vol.connections[0]
connection.attach()
print('Volume %s attached to %s' % (vol.id, connection.path))
When attaching the volume, the metadata plugin will store the changes to the Connection instance that are needed for detaching.
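To complete the picture, here is a hedged sketch of the detach counterpart under the same assumptions: the consumer node removes the local attachment, and afterwards the connection can be terminated and the export removed with the Connection's disconnect method:

# On the consumer node: remove the local attachment.
vol = storage.Volume.get_by_id(volume_id)
vol.connections[0].detach()

# On the controller node: terminate the connection and remove the export.
vol = storage.Volume.get_by_id(volume_id)
vol.connections[0].disconnect()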
Without access to the metadata persistence storage things are more inconvenient, as you'll have to handle the data exchange manually as well as the OS-Brick library calls to do the attach/detach.
First we need to get the connector information on the host that is going to do the attach:
from os_brick.initiator import connector
connector_dict = connector.get_connector_properties('sudo', storage_nw_ip,
                                                    True, False)
Now we need to pass this connector information dictionary to the controller node. This part will depend on your specific application/system.
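For example, one purely illustrative approach is to serialize the dictionary to JSON and send it over whatever channel your application already has, such as a REST call or a message queue:

import json

# On the consumer node: serialize the connector properties.
payload = json.dumps(connector_dict)
# ... send payload to the controller node over your own channel ...

# On the controller node: recover the dictionary.
connector_dict = json.loads(payload)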
On the controller node, once we have the contents of the connector_dict variable, we can export and map the volume and get the information needed by the consumer:
import cinderlib as cl
cl.setup(persistence_config=persistence_config)
storage = cl.Backend(**cinderlib_driver_configuration)
vol = storage.Volume.get_by_id(volume_id)
conn = vol.connect(connector_dict, attached_host=node_id)
connection_info = conn.connection_info
We have to pass the contents of connection_info to the consumer node, which will use it to attach the volume:
from os_brick.initiator import connector

connector_dict = connection_info['connector']
conn_info = connection_info['conn']
protocol = conn_info['driver_volume_type']

# Build the right OS-Brick connector for the volume's transport protocol.
conn = connector.InitiatorConnector.factory(
    protocol, 'sudo', use_multipath=True,
    device_scan_attempts=3, conn=connector_dict)
device = conn.connect_volume(conn_info['data'])
print('Volume attached to %s' % device.get('path'))
At this point we have the device variable that needs to be stored for the disconnection, so we have to either store it on the consumer node or pass it to the controller node so it can be saved together with the connector info.
Here’s an example on how to save it on the controller node:
conn = vol.connections[0]
conn.device = device
conn.save()
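For completeness, here is a hedged sketch of the manual disconnection on the consumer node, assuming the connection_info and device dictionaries are available there; OS-Brick's disconnect_volume takes the connection data and the attachment information that connect_volume returned:

from os_brick.initiator import connector

conn_info = connection_info['conn']
conn = connector.InitiatorConnector.factory(
    conn_info['driver_volume_type'], 'sudo', use_multipath=True,
    device_scan_attempts=3, conn=connection_info['connector'])

# Remove the local attachment using the saved device information.
conn.disconnect_volume(conn_info['data'], device)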
Warning
At the time of this writing this mechanism doesn't support RBD connections, since that support is implemented in cinderlib itself rather than in OS-Brick.
If we want to use multipathing for local attachments we must let the Backend know when instantiating the driver by passing use_multipath_for_image_xfer=True:
import cinderlib
lvm = cinderlib.Backend(
volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi',
use_multipath_for_image_xfer=True,
)
The Connection object has an extend method that will refresh the host’s view of an attached volume to reflect the latest size of the volume and return the new size in bytes.
There is no need to manually call this method for volumes that are locally attached to the node that calls the Volume’s extend method, since that call takes care of it.
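For example, a minimal sketch for a locally attached volume, assuming lvm is the Backend instance from the multipath example and that Volume's extend takes the new size in GB:

vol = lvm.create_volume(size=1)
vol.attach()
vol.extend(2)  # the locally attached device is refreshed automatically
vol.detach()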
When extending volumes that are attached to nodes other than the one calling the Volume's extend method, we will need to either detach and re-attach the volume on the host following the mechanisms explained above, or refresh the current view of the volume.
How we refresh the host’s view of an attached volume will depend on how we are attaching the volumes.
With access to the metadata persistence storage things are easier, just like they were for the remote connection.
Assuming we have a volume_id variable with the volume's ID, and storage has the Backend instance, all we need to do is:
vol = storage.Volume.get_by_id(volume_id)
vol.connections[0].extend()
Without the metadata persistence storage this is more inconvenient, as you'll have to handle the data exchange manually as well as the OS-Brick library calls to do the extend.
We'll need the same connection_info dictionary we used on the host that did the attach. Assuming it is available in connection_info, the code would look like this:
from os_brick.initiator import connector

connector_dict = connection_info['connector']
conn_info = connection_info['conn']
protocol = conn_info['driver_volume_type']

# Build the OS-Brick connector and refresh the host's view of the volume.
conn = connector.InitiatorConnector.factory(
    protocol, 'sudo', use_multipath=True,
    device_scan_attempts=3, conn=connector_dict)
conn.extend_volume(conn_info['data'])
Multi attach support was added to Cinder in the Queens cycle; it is not currently supported by cinderlib.
All other methods available in the Connection class are explained in their relevant sections.