-rw-r--r--  unit5/assets/bcwn.png         bin 0 -> 51660 bytes
-rw-r--r--  unit5/assets/cachetier.png    bin 0 -> 22488 bytes
-rw-r--r--  unit5/assets/virtstorpro.png  bin 0 -> 80188 bytes
-rw-r--r--  unit5/unit5.typ               74
4 files changed, 74 insertions, 0 deletions
diff --git a/unit5/assets/bcwn.png b/unit5/assets/bcwn.png
new file mode 100644
index 0000000..da1fef8
--- /dev/null
+++ b/unit5/assets/bcwn.png
Binary files differ
diff --git a/unit5/assets/cachetier.png b/unit5/assets/cachetier.png
new file mode 100644
index 0000000..9e15a95
--- /dev/null
+++ b/unit5/assets/cachetier.png
Binary files differ
diff --git a/unit5/assets/virtstorpro.png b/unit5/assets/virtstorpro.png
new file mode 100644
index 0000000..ef4c881
--- /dev/null
+++ b/unit5/assets/virtstorpro.png
Binary files differ
diff --git a/unit5/unit5.typ b/unit5/unit5.typ
index 069a164..0844ea1 100644
--- a/unit5/unit5.typ
+++ b/unit5/unit5.typ
@@ -233,3 +233,77 @@ _Resource Management is the process of allocating resources effectively to a serv
- Copy of the hottest data resides on the flash cache.
- Server flash cache needs warm up time before performance increase is observed.
- Warm up time is the time required to move significant amount of data into server flash cache.
+== Storage
+=== Virtual Storage Provisioning
+#figure(image("./assets/virtstorpro.png", width: 50%))
+_Virtual storage provisioning is the process that enables the presentation of a LUN to an application with more storage than is physically allocated to it on the storage system._\
+- Administrators often over-anticipate storage requirements.
+- This leads to unused space and lower capacity utilization.
+- It also leads to excess storage capacity, which increases cost, power consumption, cooling, and floor space requirements.
+- With virtual storage provisioning, physical storage is allocated on demand.
+- This improves storage utilization by reducing allocated-but-unused storage.
+- It provides rapid elasticity, adapting to workload variations by dynamically expanding or reducing storage levels.
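The on-demand allocation described above can be sketched with a minimal extent-based model. The class names, extent size, and pool interface below are illustrative assumptions, not any vendor's API: the LUN reports its full virtual size to the application, but draws physical extents from the shared pool only on first write.

```python
EXTENT_SIZE = 8  # MiB per extent (illustrative value)

class StoragePool:
    def __init__(self, physical_mib):
        self.free_extents = physical_mib // EXTENT_SIZE

    def allocate_extent(self):
        if self.free_extents == 0:
            raise RuntimeError("pool exhausted")
        self.free_extents -= 1

class ThinLUN:
    def __init__(self, pool, virtual_mib):
        self.pool = pool
        self.virtual_mib = virtual_mib  # size presented to the application
        self.allocated = set()          # extent indexes backed by physical storage

    def write(self, offset_mib):
        extent = offset_mib // EXTENT_SIZE
        if extent not in self.allocated:  # allocate physical space on demand
            self.pool.allocate_extent()
            self.allocated.add(extent)

    @property
    def physical_mib(self):
        return len(self.allocated) * EXTENT_SIZE

pool = StoragePool(physical_mib=64)
lun = ThinLUN(pool, virtual_mib=1024)  # presents 1 GiB, backed by only 64 MiB
lun.write(0)
lun.write(500)
```

After the two writes, only two extents (16 MiB) are physically allocated even though the application sees a 1024 MiB LUN.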
+=== Storage pool rebalancing
+_Storage pool rebalancing is a technique that automatically rebalances allocated extents across the physical disk drives of the entire pool when new drives are added to the pool._\
+- When a storage pool is expanded, the sudden introduction of empty drives alongside old, full drives causes data imbalance.
+- It also impacts performance, because new data would be written mostly to the new drives.
+- Restripes data across all drives in the storage pool.
+- This enables spreading out data equally on all physical disks within the shared pool.
+- This ensures that used capacity of each drive is uniform across the pool and helps in increasing overall pool performance.
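The restriping step can be illustrated with a toy model in which extents are opaque tokens. The round-robin policy is a simplifying assumption; real arrays rebalance incrementally in the background:

```python
def rebalance(drives):
    """drives: list of per-drive extent lists; returns restriped contents."""
    all_extents = [e for d in drives for e in d]
    restriped = [[] for _ in drives]
    for i, extent in enumerate(all_extents):
        restriped[i % len(drives)].append(extent)  # spread extents evenly
    return restriped

# Two full drives, then two new empty drives are added to the pool.
old = [["e0", "e1", "e2", "e3"], ["e4", "e5", "e6", "e7"]]
expanded = old + [[], []]
balanced = rebalance(expanded)
```

After restriping, each of the four drives holds two extents, so used capacity is uniform across the pool.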
+=== Storage space reclamation
+- Identifies unused space in thin LUNs and returns it to the storage pools.
+- Two options for storage reclamation are:
+ 1. *Zero extent reclamation*\
+ - Commonly implemented at the storage level.
+ - Provides the ability to free storage extents that only contain zeros.
+ 2. *API based reclamation*\
+ - Uses APIs to communicate locations of unused space in LUNs to storage system.
+      - Allows the storage system to reclaim all unused physical storage back into the storage pools.
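Zero extent reclamation can be sketched as a scan over a LUN's extents, modelled here as byte buffers (a simplifying assumption): any extent containing only zeros is freed back to the pool.

```python
def reclaim_zero_extents(lun_extents):
    """lun_extents: dict of extent index -> bytes. Returns (kept, reclaimed_count)."""
    kept = {idx: data for idx, data in lun_extents.items()
            if any(data)}  # keep only extents with at least one non-zero byte
    return kept, len(lun_extents) - len(kept)

extents = {0: b"\x00" * 8, 1: b"data\x00\x00\x00\x00", 2: b"\x00" * 8}
kept, reclaimed = reclaim_zero_extents(extents)
```

Extents 0 and 2 contain only zeros and are reclaimed; extent 1 holds live data and is kept.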
+=== Automated storage tiering
+_Automated storage tiering is the technique of establishing a hierarchy of different storage types for different categories of data that enables storing the right data automatically to the right tier, to meet the service level requirements._\
+- Many services have predictable spikes in activity with lower activity at other times.
+- An automated storage tiering solution addresses these cyclical fluctuations as well as unpredictable spikes.
+- It can replace manual storage management and significantly benefits cloud environments.
+- Tiers are differentiated based on protection, performance, and cost.
+- For example, tier 0 SSDs store hot data while tier 1 HDDs store cold data.
+- It optimizes utilization across all storage types.
+- Data is moved between tiers based on defined tiering policies.
+- These policies are based on file type, frequency of access, etc.
+- Data movement between tiers can happen within or between storage arrays.
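A tiering policy based on access frequency can be sketched as below. The threshold value and tier names are illustrative assumptions; real policies may also consider file type, age, and service-level requirements:

```python
def tier_placement(access_counts, hot_threshold):
    """Map each extent to a tier: hot extents go to SSD (tier 0), the rest to HDD (tier 1)."""
    placement = {}
    for extent, count in access_counts.items():
        placement[extent] = "ssd" if count >= hot_threshold else "hdd"
    return placement

counts = {"a": 120, "b": 3, "c": 45}   # accesses observed per extent
placement = tier_placement(counts, hot_threshold=40)
```

Extents "a" and "c" exceed the threshold and are promoted to the SSD tier; "b" stays on the HDD tier.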
+=== Cache tiering
+#figure(image("./assets/cachetier.png", width: 50%))
+- Tiering can also happen at the cache tier.
+- A large cache improves performance by retaining a large amount of frequently accessed data.
+- A high proportion of reads is then served from the cache.
+- Configuring a large DRAM cache can be costly.
+- Cache size can be increased by using SSDs on the storage system to create a large-capacity secondary cache, separate from the compute system's primary cache.
+- This enables tiering between the DRAM primary cache and the SSD secondary cache.
+- It enables the storage system to keep a large amount of hot data on the cache tier.
+- Most reads can then be served from the cache tier, increasing performance.
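The two-level arrangement can be sketched with an LRU model (a simplifying assumption about the eviction policy): blocks evicted from the small DRAM primary cache are demoted to the larger SSD secondary cache, so later reads still hit cache instead of disk.

```python
from collections import OrderedDict

class TieredCache:
    def __init__(self, dram_slots, ssd_slots):
        self.dram = OrderedDict()  # small, fast primary cache
        self.ssd = OrderedDict()   # large, cheaper secondary cache
        self.dram_slots, self.ssd_slots = dram_slots, ssd_slots

    def read(self, block):
        if block in self.dram:
            self.dram.move_to_end(block)
            return "dram"
        if block in self.ssd:      # secondary hit: promote back into DRAM
            self.ssd.pop(block)
            self._insert(block)
            return "ssd"
        self._insert(block)        # miss: fetch from disk, then cache
        return "disk"

    def _insert(self, block):
        self.dram[block] = True
        if len(self.dram) > self.dram_slots:
            demoted, _ = self.dram.popitem(last=False)  # evict LRU block
            self.ssd[demoted] = True                    # demote to SSD tier
            if len(self.ssd) > self.ssd_slots:
                self.ssd.popitem(last=False)            # fall out of cache

cache = TieredCache(dram_slots=2, ssd_slots=4)
for b in ["a", "b", "c"]:
    cache.read(b)  # "a" is demoted to the SSD tier when "c" arrives
```

A subsequent read of "a" is served from the SSD tier rather than disk, and the block is promoted back into DRAM.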
+=== Dynamic VM load balancing across storage volumes
+- During provisioning for VMs, volumes are selected randomly.
+- This can lead to underutilized volumes.
+- Dynamic VM load balancing across storage volumes enables intelligent placement of VMs during creation.
+- It does so based on I/O load and available storage capacity in the hypervisor's native file system or NAS file system.
+- It is implemented in a centralized management server that manages virtualized environments.
+- The server performs ongoing dynamic VM load balancing within a cluster of volumes.
+_A cluster volume is a pool of a hypervisor's native FS or NAS FS volumes that are aggregated as a single volume to enable efficient and rapid placement of new virtual machines._\
+- User-configurable space-utilization and I/O-latency thresholds are defined to ensure space efficiency.
+- This also avoids I/O bottlenecks.
+- The thresholds are set when the cluster of volumes is configured.
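Placement against those thresholds can be sketched as follows. The volume attributes, threshold defaults, and the "lowest latency wins" tie-break are illustrative assumptions, not a specific hypervisor's algorithm:

```python
def place_vm(volumes, vm_size_gb, space_threshold=0.8, latency_threshold_ms=20):
    """volumes: list of dicts with name, capacity_gb, used_gb, latency_ms.

    Returns the name of the chosen volume, or None if every volume would
    breach a threshold."""
    candidates = []
    for vol in volumes:
        utilization = (vol["used_gb"] + vm_size_gb) / vol["capacity_gb"]
        if utilization <= space_threshold and vol["latency_ms"] <= latency_threshold_ms:
            candidates.append(vol)
    if not candidates:
        return None
    return min(candidates, key=lambda v: v["latency_ms"])["name"]  # least-loaded

volumes = [
    {"name": "vol1", "capacity_gb": 100, "used_gb": 85, "latency_ms": 5},
    {"name": "vol2", "capacity_gb": 100, "used_gb": 40, "latency_ms": 12},
    {"name": "vol3", "capacity_gb": 100, "used_gb": 30, "latency_ms": 18},
]
choice = place_vm(volumes, vm_size_gb=10)
```

vol1 is excluded because adding the VM would push it past the space threshold; of the remaining candidates, vol2 has the lower I/O latency and is chosen.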
+== Network
+_Network traffic flow in a cloud network infrastructure is controlled to optimize both the performance and the availability of cloud services._\
+- Administrators use several traffic management techniques.
+- Some enable distribution of traffic load across nodes or parallel network links.
+- This prevents overutilization and underutilization of resources.
+- Others enable automatic failover of network traffic from a failed network component.
+- Some also ensure guaranteed service levels for a class of traffic contending with other classes for network bandwidth.
+=== Balancing client workload across nodes
+#figure(image("./assets/bcwn.png"))
+
+=== Network storm control
+=== Quality of Service (QoS)
+=== Traffic shaping
+=== Link aggregation
+=== NIC Teaming
+=== Multipathing