#let title = [
*Unit 3: Physical Layer*
]
#set text(12pt)
#set heading(numbering: "1.1")
#set page(
header: [
#box()[
_*Knowledge not shared, remains unknown.*_
]
#h(1fr)
#box()[#title]
],
numbering: "1 of 1",
)
#align(center, text(20pt)[
*#title*
])
#show table.cell.where(y: 0): strong
#outline()
#pagebreak()
= Physical Layer Overview
_Physical compute systems host applications that a provider offers as services to consumers and also execute the software used by the provider to manage the cloud infrastructure and deliver services._
- Consists of compute, storage and network resources.
- A provider offers compute systems to consumers to execute their own applications.
- Storage systems store business data and the data generated or processed by compute systems.
- Networks connect various compute systems and storage systems with each other.
- Networks can also connect various clouds to one another.
= Compute
_A compute system is a computing platform that runs platform and application software._
- Consists of the following:
1. Processors
2. Memory
3. IO devices
4. OS
5. File system
6. Logical volume manager
7. Device drivers
- Providers typically deploy on x86 hosts.
- Compute systems provided in two main ways:
- *Shared hosting*: Multiple consumers share compute systems.
- *Dedicated hosting*: Individual consumers have dedicated compute systems.
- Compute virtualization is usually used to create virtual compute.
== Key components of compute system
1. *Processor*
- IC that executes the instructions of software by performing:
- Arithmetical operations
- Logical operations
- Input/Output operations
- x86 is a common architecture, used in 32-bit and 64-bit variants.
- Many have multiple cores capable of functioning as individual processors.
2. *RAM*
- Volatile internal data storage
- Holds software programs for execution and the data used by the processor.
3. *ROM*
- Semiconductor memory containing:
- Boot firmware
- Power management firmware
- Device specific firmware
4. *Motherboard*
- PCB to which all compute system components connect.
- Contains sockets to hold components
- Contains network ports, I/O ports, etc.
- May contain additional integrated components such as a GPU, NIC, and adapters to connect storage drives.
5. *Chipset*
- Collection of microchips on a motherboard designed to perform specific functions.
- Two main types are:
- _Northbridge_: Manages processor access to RAM and GPU
- _Southbridge_: Connects processor to peripheral ports
== Software on Compute Systems
#table(
columns: (auto,auto),
table.header([Software], [Description]),
[Self-service portal], [Enables consumers to view and request cloud services],
[Platform software], [Includes software that the provider offers through PaaS],
[Application software], [Includes applications that the provider offers through SaaS],
[Virtualization software], [Enables resource pooling and creation of virtual resources],
[Cloud management software], [Enables a provider to manage the cloud infrastructure and services],
[Consumer software], [Includes a consumer's platform software and business applications]
)
== Types of compute systems
=== Tower compute system
- Built in an upright enclosure called "tower".
- Has integrated power and cooling.
- Requires significant floorspace and complex cabling, and generates a lot of noise.
- Deploying in large environments may require substantial expenditure.
=== Rack compute system
- Designed to fit on a frame called "rack".
- A rack is a standardized system enclosure containing multiple mounting slots called "bays", each holding a server with the help of screws.
- A single rack contains multiple servers stacked.
- This simplifies network cabling, consolidates network equipment and reduces floorspace use.
- Each rack has its own power and cooling.
- Administrators may use a console mounted on the rack to manage the computer systems.
- Cumbersome to work with; generates a lot of heat and increases power costs.
=== Blade compute system
- Electronic circuit board containing only core processing components.
- Each is a self-contained compute system dedicated to a single application.
- Housed inside a blade enclosure which holds multiple blade servers.
- Blade enclosures provide power, cooling, networking, management functions.
- The modular design minimizes floorspace usage, increases compute density and scalability.
- Best energy efficiency.
- Simplifies compute infrastructure management.
- High in cost and proprietary architecture.
= Storage
_Data created by individuals, businesses, and applications needs to be persistently stored so that it can be retrieved when required for processing or analysis. A storage system is a repository for saving and retrieving data._
- Providers offer storage capacity along with compute systems, or as a service.
- Storage as a service allows for data backup and long term data retention.
- Cloud storage provides massive scalability and rapid elasticity.
- Typically, a provider uses virtualization to create storage pools that are shared by multiple consumers.
== Types of Storage Devices
#table(
columns: (auto, auto),
table.header([Type], [Description]),
[Magnetic disk drive], [
- Stores data on a circular disk with a ferromagnetic coating.
- Provides random read/write access.
- Most popular storage device with large storage capacity.
], [Solid-State drive], [
- Stores data on a Semiconductor-based memory.
- Very low-latency per I/O, low power requirements, and very high throughput.
], [Magnetic tape drive], [
- Stores data on a thin plastic film with a magnetic coating.
- Provides only sequential data access.
- Low-cost solution for long-term data storage.
], [Optical disk drive], [
- Stores data on a polycarbonate disk with a reflective coating.
- Write once and read many capability: CD, DVD, BD.
- Low-cost solution for long-term storage.
]
)
== Redundant Array of Independent Disks
_RAID is a storage technology in which data is written in blocks across multiple disk drives that are combined into a logical unit called a RAID group._
- Improves storage system performance as I/O is served simultaneously across multiple disks.
- Implemented using a specialized hardware controller present on the host or the array.
- Functions of RAID are:
1. Management and control of drive aggregations
2. Translation of I/O requests between logical and physical drives.
3. Data regeneration in the event of drive failures.
=== Types of RAID
==== Striping
#figure(
image("./assets/striping.png")
)
_Striping is a technique to spread data across multiple drives in order to use drives in parallel and increase performance as compared to the use of a single drive._
- Each drive has a predefined number of contiguously addressable blocks, called a strip.
- Stripe is a set of aligned strips that span across all the drives.
- All strips in a stripe have the same number of blocks.
- Does not provide any data protection.
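The strip/stripe layout above implies a simple arithmetic mapping from a logical block address to a physical location. A minimal sketch (the strip size and drive count below are assumed values for illustration, not taken from the text):

```python
STRIP_BLOCKS = 4   # blocks per strip (assumed)
NUM_DRIVES = 3     # drives in the RAID group (assumed)

def locate_block(lba: int) -> tuple[int, int, int]:
    """Map a logical block address to (drive index, stripe number,
    block offset within the strip) for a striped RAID group."""
    stripe, within_stripe = divmod(lba, STRIP_BLOCKS * NUM_DRIVES)
    drive, offset = divmod(within_stripe, STRIP_BLOCKS)
    return drive, stripe, offset
```

Consecutive blocks fill one strip, then move to the next drive, so sequential I/O is spread across all drives in parallel.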
==== Mirroring
#figure(
image("./assets/mirroring.png")
)
_Mirroring is a technique in which the same data is stored simultaneously on two different drives, resulting in two copies of the data. The pair of drives is called a "mirrored pair"._
- Even if one fails, the data is safe in the surviving drive.
- When a failed disk is replaced, the controller copies the data from the surviving drive to the replacement drive, restoring the mirrored pair.
- Mirroring provides the following:
- Data redundancy
- Fast recovery from disk failure
- Twice the number of drives are required.
- Increase in costs.
- Mirroring used for mission critical operations.
- Better read performance, worse write performance.
==== Parity
_Parity is a RAID technique to protect striped data from drive failure by performing a mathematical operation on individual strips and storing the result on a portion of the RAID group._
- RAID controller finds parity using techniques like XOR.
- Parity data can be stored on separate drives or distributed across the drives in a RAID group.
- Parity is recalculated every time data is modified, which affects write performance.
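The XOR-based parity and data-regeneration steps described above can be sketched as follows (the strip contents are illustrative):

```python
from functools import reduce

def parity(strips: list[bytes]) -> bytes:
    """XOR all data strips byte-by-byte to produce the parity strip."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*strips))

def rebuild(surviving: list[bytes], parity_strip: bytes) -> bytes:
    """Regenerate a lost strip: XOR of the parity strip with the survivors."""
    return parity(surviving + [parity_strip])
```

Because XOR is its own inverse, any single lost strip equals the XOR of everything that remains, which is exactly how the controller regenerates data after a drive failure.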
=== RAID levels
#table(
columns: (auto, auto),
table.header([ RAID Level ], [ Meaning ]),
[ RAID 0 ], [ Striped set with no fault tolerance. ],
[ RAID 1 ], [ Disk Mirroring ],
[ RAID 1+0 ], [ Nested RAID ( striping and mirroring ). ],
[ RAID 3 ], [ Striped set with parallel access and a dedicated parity disk. ],
[ RAID 5 ], [ Striped set with independent disk access and distributed parity. ],
[ RAID 6 ], [ Striped set with independent disk access and dual distributed parity. ]
)
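The levels in the table trade capacity for protection in predictable ways. A sketch of the textbook usable-capacity formulas (the function name and inputs are illustrative):

```python
def usable_capacity(level: str, drives: int, drive_tb: float) -> float:
    """Usable capacity in TB for common RAID levels (textbook formulas)."""
    if level == "RAID 0":
        return drives * drive_tb              # no redundancy overhead
    if level == "RAID 1":
        return drives * drive_tb / 2          # mirrored pairs: half is copies
    if level == "RAID 5":
        return (drives - 1) * drive_tb        # one drive's worth of parity
    if level == "RAID 6":
        return (drives - 2) * drive_tb        # dual parity costs two drives
    raise ValueError(f"unsupported level: {level}")
```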
== Data Access methods
- External storage can be connected directly or over network.
- Applications request data by specifying file name and location.
- File systems map file attributes to logical block address (LBA).
- LBA simplifies addressing by using a linear address to access a block of data.
- The file system converts the LBA to a physical cylinder-head-sector (CHS) address and fetches the data.
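The LBA-to-CHS translation mentioned above follows the classic formula LBA = (cylinder × heads + head) × sectors_per_track + (sector − 1). A sketch with an assumed disk geometry:

```python
HEADS = 16     # heads per cylinder (assumed geometry)
SECTORS = 63   # sectors per track (assumed geometry)

def lba_to_chs(lba: int) -> tuple[int, int, int]:
    """Convert a linear block address to (cylinder, head, sector)."""
    cylinder, rem = divmod(lba, HEADS * SECTORS)
    head, sector0 = divmod(rem, SECTORS)
    return cylinder, head, sector0 + 1   # sectors are numbered from 1
```

For example, `lba_to_chs(0)` gives `(0, 0, 1)` and `lba_to_chs(63)` gives `(0, 1, 1)`: the linear address rolls over to the next head once a track is exhausted, which is why LBA simplifies addressing for software.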
=== Three schemes of data access.
#figure(
image("./assets/dataaccessmethods.png")
)
==== Block Level Access
- Storage volume is created and assigned to the compute system.
- Application data request is sent to file system and converted to block-level request.
- Request sent to the storage system.
- The storage system converts the LBA to a CHS address and fetches data in block-sized units.
==== File Level Access
- File system is created on a separate file server.
- File-level request sent to file server.
- File server converts file-level request to block-level request.
- Then block-level request is sent to storage.
==== Object Level Access
- Data is accessed over the network in terms of self contained objects.
- Each object has a unique object identifier.
- Application request is sent to file system.
- File system communicates with the object-based storage device (OSD) interface.
- OSD interface sends the request to the storage system.
- Storage system has OSD storage component.
- This component manages access to the object on the storage system.
- OSD storage component converts object-level request to block-level request.
== Storage System Architecture
- Critical design consideration for building cloud infrastructure.
- Provider must choose additional storage and ensure capacity to maintain overall performance of the environment.
- Based on data access methods.
=== Types of storage system architectures.
==== Block-Based storage system.
#figure(
image("./assets/blockedbasedstoragesystem.png")
)
- Enables the creation and assigning of storage volumes to compute systems.
- The compute OS discovers these storage volumes as local drives.
- A file system can be created on these storage volumes.
- Block-based storage system consists of:
1. *Front-end controller*
- This provides interface between storage system and compute system.
- Typically, redundant controllers with additional ports are present for high availability.
- Each controller has processing logic that executes the appropriate transport protocol for storage connections.
- These controllers route data to and from a cache memory via an internal data bus.
2. *Cache memory*
- Semiconductor memory where data is placed temporarily to reduce time required to service I/O requests from the compute system.
- Improves performance by isolating compute from mechanical delays of disk writes.
- Accessing data from cache takes less than a millisecond.
- If requested data is found in cache, it is a cache hit else it is a cache miss.
- Write operation is implemented in two ways:
1. In *write-back*, data of several write operations is placed in cache and an acknowledgement is sent immediately. The data is written to disk later.
2. In *write-through*, data is placed in cache and immediately written to disk, after which an acknowledgement is sent to the compute system.
3. *Back-end controller*
- Provides an interface between cache and physical disk.
- The data in the cache is sent to the back-end, which routes it to the destination disk.
4. *Physical disk*
- Connect to ports on the back-end.
- In some cases the front-end, cache, back-end are integrated on a single board called a storage controller.
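The write-back versus write-through behavior described under cache memory can be contrasted in a toy model (illustrative only, not any vendor's implementation):

```python
class CacheController:
    """Minimal sketch: write-back acknowledges once data is in cache and
    destages to disk later; write-through commits to disk before acking."""

    def __init__(self, write_back: bool):
        self.write_back = write_back
        self.cache: dict[int, bytes] = {}
        self.disk: dict[int, bytes] = {}

    def write(self, block: int, data: bytes) -> str:
        self.cache[block] = data
        if self.write_back:
            return "ack"            # disk write deferred until flush()
        self.disk[block] = data     # write-through: hit the disk first
        return "ack"

    def flush(self) -> None:
        """Destage cached writes to disk (the deferred write-back path)."""
        self.disk.update(self.cache)
```

The write-back path returns the acknowledgement before any disk I/O, which is why it isolates the compute system from mechanical disk delays, at the cost of data in cache being vulnerable until destaged.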
==== File-based storage system
#figure(
image("./assets/filebasedstorage.png")
)
- Dedicated high-performance file storage, known as network-attached storage (NAS), with internal or external storage.
- Enables clients to share files over IP network.
- Supports NFS and CIFS protocols to work with Unix and Windows systems.
- Uses a specialized OS that is optimal for file I/O.
- Consolidates distributed data into a large, centralized data pool accessible to and shared by heterogeneous clients and applications across a network.
- Results in efficient management and improved storage utilization.
- Lowers operating and maintenance costs.
===== NAS deployment options
- Two common ways of NAS deployment:
1. *Scale-up/Traditional*
- Scales capacity and performance of singular NAS system.
- Involves upgrading or adding components to the NAS.
- Fixed ceiling for capacity and performance.
2. *Scale-out*
- Designed to address Big Data.
- Enables creation of clustered NAS systems by pooling multiple processing and storage nodes.
- Works as a single NAS system and is managed centrally.
- Capacity can be increased by adding nodes to it.
- Each added node increases the aggregated disk, cache, processor, and network capacity of the cluster.
- Nodes can be added to the cluster non-disruptively.
==== Object-Based Storage
#figure(
image("./assets/objectbasedstorage1.png")
)
- Stores data in the form of objects based on the content and other attributes rather than the name and location.
- Objects contain user data, metadata, user defined attributes.
- Additional metadata allows for optimized search, retention and deletion of objects.
- Each object identified by an object ID. This allows easy access to objects without having to specify storage location.
- The object ID is generated using specialized algorithms on the data, which guarantees that all object IDs are unique.
- Changes to an object result in a new object ID. This makes object storage preferred for long-term archiving to meet regulatory or compliance requirements.
- Uses a flat, non-hierarchical address space to store data, providing the flexibility to scale massively.
- Providers leverage object-based storage to offer Storage as a Service because of its inherent security, scalability, and automated data management capability.
- Supports web service access via REST and SOAP.
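One common way to realize the content-derived, collision-resistant object IDs described above is a cryptographic hash of the object's data; SHA-256 is an assumed choice here for illustration, not specified in the text:

```python
import hashlib

def object_id(data: bytes) -> str:
    """Derive a content-based object ID: identical content yields an
    identical ID, and any modification yields a different ID."""
    return hashlib.sha256(data).hexdigest()
```

This also illustrates why changing an object produces a new object ID, the property that makes object storage attractive for compliance archiving.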
===== Components of Object-Based Storage
#figure(
image("./assets/objectbased2.png")
)
1. *Nodes*\
- Node is a server that runs the OBS environment and provides services to store, retrieve, and manage data in the system.
- OBS is composed of one or more nodes.
- Each node has two key services:
1. Metadata
- Responsible for generating the object ID from the contents of a file.
- Maintains the mapping between object IDs and file system namespace.
2. Storage service
- Manages a set of drives on which data is stored.
2. *Private network*\
- Nodes connect to the storage via the private network.
- Private network provides node-to-node connectivity and node-to-storage connectivity.
3. *Storage*\
- Application server accesses the object-based storage node to store and retrieve data over an external network.
- In some implementations, the metadata might reside on the application server or a separate server.
==== Unified Storage System
#figure(
image("./assets/unifiedstorage.png")
)
- Consolidates block, file, and object-based access into a single storage platform.
- Supports multiple protocols for data access.
- Managed using a single interface.
- Consists of the following components:
1. *Storage controller*\
- Provides block-level access to compute systems through various protocols.
- Contains front-end ports for direct block access.
- Responsible for managing the back-end pool.
- Configures storage volumes and presents them to the NAS head, OSD node, and compute systems.
2. *NAS head*\
- Dedicated file server that provides access to NAS clients.
- Connects to the storage via the storage controller.
- Usually two or more are present for redundancy.
- Configures file systems on assigned volumes; creates NFS, CIFS, or mixed shares; and exports the shares to the NAS clients.
3. *OSD node*\
- Also accesses storage through storage controller.
- Volumes assigned to the OSD node appear as physical disks.
- These disks are configured by the OSD node to store object data.
4. *Storage*\
 - Physical drives that hold the data, configured and presented by the storage controller.
= Network
- Establishes communication pathways between devices.
- Devices in a network are called nodes.
- Enables information exchange and resource sharing among large number of nodes over long distances.
- Networks can connect to other networks to enable data transfer between nodes.
- Providers leverage different types of networks, with different protocols supporting different classes of network traffic.
- Cloud requires reliable and secure network connectivity to access cloud services.
- A provider connects the cloud to a network, enabling clients to connect to the cloud over the network and use cloud services.
- Providers may also use IT resources at one or more data centers to provide cloud services.
- If multiple data centers are deployed, IT resources are logically aggregated by connecting them in a WAN.
== Types of network communications
=== Compute-to-Compute Communication.
#figure(
image("./assets/compute to compute.png")
)
- Interconnecting physical compute systems enables compute-to-compute communication.
- Typically uses IP-based protocols.
- Each physical compute is connected to the network over a physical NIC.
- Physical switches and routers are commonly used.
- Providers have to ensure that switches and routers with appropriate bandwidth and port counts are provided.
=== Compute-to-Storage Communication
_A Storage Area Network (SAN) is a network that interconnects storage systems with compute systems, enabling compute systems to access and share the storage systems._
- Sharing improves the utilization of the storage systems.
- Using a SAN facilitates centralized storage management.
- This simplifies and standardizes management efforts.
==== Types of SAN
===== Fibre Channel SAN (FC SAN)
_FC SAN is a high speed, dedicated network of compute systems and shared storage systems that uses Fibre Channel protocols to transport data, commands, and status information between compute and storage systems._
- FC protocol implements the Small Computer System Interface (SCSI) command set.
- It also supports:
1. Asynchronous Transfer Mode (ATM)
2. Fibre Connection (FICON)
3. IP
- SCSI over FC overcomes the distance and accessibility limitations associated with traditional SCSI.
- FC protocol provides block-level access to storage systems.
- It provides a serial data transfer interface.
- FC architecture is highly scalable, and a single FC SAN can accommodate approximately 15 million nodes.
====== FC SAN Components
#table(
columns: (auto, auto),
table.header([ Component ], [ Description ]),
[ Network Adapters ],[
- Provide physical interface to a node for communicating with other nodes.
- Examples: FC HBAs and storage system front-end adapters.
], [ Cables and connectors ], [
- Optical fiber cables are predominantly used to provide connectivity.
- Connectors enable cables to be swiftly connected to and disconnected from ports.
], [ Interconnecting devices ], [
- FC switches and directors.
- Directors have a modular design, higher port count, and better fault tolerance.
- Switches have either a fixed port count or modular design.
]
)
====== Fabric Connect and Addressing
- A fabric is created with an FC switch/director/network of switches that enable all nodes to connect and communicate.
- Each switch has unique domain identifier (ID).
- Each network adapter and port has a globally unique 64-bit identifier called a World Wide Name (WWN).
- WWN is a static name.
- WWNs are burned into hardware or assigned through software.
- An FC network adapter is physically identified by a World Wide Node Name (WWNN).
- Each port on the adapter has a unique World Wide Port Name (WWPN).
- Each FC adapter port in the fabric has a unique 24-bit FC address for communication.
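The 24-bit FC address is conventionally divided into three 8-bit fields: the Domain ID of the switch, an Area ID, and a Port ID. A small sketch of decoding it (the sample address is illustrative):

```python
def parse_fc_address(addr: int) -> dict[str, int]:
    """Split a 24-bit FC address into its Domain ID, Area ID, and
    Port ID fields (8 bits each, most significant first)."""
    return {
        "domain": (addr >> 16) & 0xFF,
        "area":   (addr >> 8) & 0xFF,
        "port":   addr & 0xFF,
    }
```

Unlike the static WWN, this address is assigned by the fabric and can change when a node moves to a different switch or port.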
====== Fabric Port Types
#figure(
image("./assets/fabricporttypes.png")
)
A port in a switched fabric can be one of the following types:
1. *N_Port* is an end-point in fabric. This is also known as the node port. Typically, it is a compute system port or a storage system port connected to a switch in fabric.
2. *E_Port* is a switch port that forms a connection between two FC switches. This port is also known as an expansion port. The E_Port of an FC switch connects to the E_Port of another FC switch in the fabric through ISLs.
3. *F_Port* is a port on a switch that connects an N_Port. It is also known as a fabric port.
4. *G_Port* is a generic port on some vendors' switches. It can operate as an E_Port or an F_Port, with its functionality determined automatically during initialization.
====== Zoning
_An FC switch function that enables node ports within a fabric to be logically segmented into groups and to communicate with each other within the group._
- When the fabric is changed, it sends a Registered State Change Notification (RSCN) to the nodes in the fabric.
- Without zoning, RSCNs are received by all nodes, even those not impacted by the change.
- This results in increased traffic.
- For large fabrics, this increase in traffic can be significant and impact compute-to-storage data traffic.
- Zoning limits the number of RSCNs in fabric.
- Zoning allows for fabric to send RSCNs only to affected nodes.
- Both node and switch ports can be part of a zone.
- A port or node can be part of multiple zones.
- HBA ports are called initiator ports, and storage system ports are called target ports.
- Single-initiator-single-target zoning is considered industry standard.
- Single-initiator-single-target zoning eliminates unnecessary compute-compute interactions and minimizes RSCNs.
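Single-initiator-single-target zoning can be modeled as membership in a shared zone: traffic is permitted only between ports that appear together in at least one zone. The zone name and WWPNs below are hypothetical:

```python
# Each zone pairs exactly one initiator (HBA) WWPN with one target
# (storage) WWPN, per the single-initiator-single-target practice.
zones: dict[str, set[str]] = {
    "zone_hba1_array1": {
        "10:00:00:00:c9:00:00:01",   # hypothetical initiator WWPN
        "50:06:01:60:00:00:00:01",   # hypothetical target WWPN
    },
}

def can_communicate(wwpn_a: str, wwpn_b: str) -> bool:
    """Two ports may talk only if some zone contains both of them."""
    return any(wwpn_a in z and wwpn_b in z for z in zones.values())
```

Because no zone contains two initiators, compute-to-compute chatter is structurally impossible, which is what eliminates the unnecessary interactions and extra RSCNs mentioned above.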
====== Types of Zoning
#figure(
image("./assets/typesofzoning.png")
)
#table(
columns: (auto, auto),
table.header([ Type of zoning ], [ Description ]),
[ WWN Zoning],[
- Uses WWNs to define zones.
- Zone members are the WWPN addresses of the HBA ports and their targets.
- A major advantage of WWN zoning is its flexibility.
- It allows nodes to be moved to another switch port in the fabric while maintaining connectivity to their zone partners without modifying the zone configuration.
- This is possible because the WWN is static to the node port.
], [ Port Zoning ], [
- Uses switch port identifier to define zones.
- Access to data is determined by the physical switch port to which the node is connected.
- Zone members are the port identifiers to which the HBA and targets are connected.
- If node is moved to another switch port, zone configuration must be altered.
- If an HBA fails, it can be replaced without changing zoning configuration.
], [ Mixed Zoning ], [
- Combines qualities of WWN zoning and Port zoning.
- Enables specific node ports to be tied to the WWN of a node.
]
)
===== Internet Protocol SAN (IP SAN)
_A SAN that uses Internet Protocol (IP) for the transport of storage traffic. It transports block I/O over an IP-based network._
- Providers may have existing IP-based network infrastructure which could be used for storage networking.
- More economical as it leverages existing IP-based network instead of creating a new FC SAN network.
- Robust and mature security options available for IP networks.
- Many long-range disaster recovery solutions leverage IP-based networks.
- Two Main Protocols are iSCSI and FCIP.
====== iSCSI
#figure(
image("./assets/iSCSi.png")
)
_iSCSI encapsulates SCSI commands and data into IP packets that are transported over an IP-based network._
- The network components include:
1. *iSCSI initiators* such as software iSCSI adapter and iSCSI HBA.
2. *iSCSI targets* such as a storage system with iSCSI port or an iSCSI gateway.
3. *IP-based network*.
- The initiator sends commands and associated data to a target, and the target returns data and responses to the initiator.
- The software iSCSI adapter is OS kernel-resident software that uses an existing NIC of the compute system to emulate an iSCSI initiator.
- An iSCSI HBA has a built-in iSCSI initiator and is capable of providing performance benefits over software iSCSI adapters.
- This is done by offloading the entire iSCSI and TCP/IP processing from the processor of the compute system.
- If an iSCSI-capable storage system is deployed, the iSCSI initiator can directly communicate with the storage system over an IP-based network.
- If storage is not compatible then iSCSI gateways are used.
- The gateway transforms the IP packets into FC frames and vice versa.
- If a gateway is present, the implementation is called bridged iSCSI; otherwise, it is native iSCSI.
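The encapsulation idea above, SCSI commands and data wrapped for transport over IP, can be sketched with a toy framing scheme. This is illustrative only: real iSCSI PDUs carry a 48-byte Basic Header Segment defined by the protocol, which this simplification omits:

```python
import struct

def encapsulate(opcode: int, payload: bytes) -> bytes:
    """Prepend a tiny header (1-byte opcode + 4-byte length, big-endian)
    to a SCSI payload before handing it to the TCP/IP stack."""
    return struct.pack("!BI", opcode, len(payload)) + payload

def decapsulate(pdu: bytes) -> tuple[int, bytes]:
    """Recover the opcode and payload on the receiving side."""
    opcode, length = struct.unpack("!BI", pdu[:5])
    return opcode, pdu[5:5 + length]
```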
====== iSCSI Name
_A worldwide unique iSCSI identifier that identifies the initiators and the targets within an iSCSI network to facilitate communication._
- Can be a combination of:
1. Department name
2. Application
3. Manufacturer
4. Serial number
5. Asset number
6. Tag used to recognise and manage devices.
- Allowed special characters are dots, dashes and blank spaces.
- Two types of iSCSI names are:
1. *iSCSI Qualified Name (IQN)*
- An organization must own a registered domain name to generate IQN.
- Domain need not be active or resolve to an address.
- Needs to be reserved to prevent domain reuse.
- Date is included with the name to prevent collisions.
- Any identifiers like serial number, asset number are added to the end.
- Example: _iqn.2025-12.com.example:optional_string_
2. *Extended Unique Identifier (EUI)*
- Globally unique identifier based on IEEE EUI-64 standard.
- Composed of "eui" prefix and 16-character hexadecimal name.
- Example: _eui.0123456789ABCDEF_
====== FCIP
#figure(
image("./assets/fcip.png")
)
_FCIP is an encapsulation of FC frames into IP packets that are transported between FC SANs over an IP-based network through FCIP tunnel._
- Enables data transfer between disparate FC SANs.
- An FCIP entity (gateway) is deployed at either end of a tunnel between two FC SAN islands.
- The gateway encapsulates FC frames into IP packets and transfers them through the tunnel.
- The remote gateway decapsulates the FC frames from the IP packets and sends them to the FC SAN.
- Used extensively for disaster recovery, in which data is replicated at a remote site.
- Capable of merging interconnected fabrics into a single fabric.
- In a merged fabric, the traffic travels between interconnected FC SANs through FCIP tunnel.
- Only a small subset of nodes need to be connected via FCIP.
- Majority of FCIP implementations use some switch-specific feature to prevent the fabrics from merging.
- They also restrict nodes allowed to communicate across fabrics.
===== Fibre Channel over Ethernet SAN (FCoE SAN)
_FCoE SAN is a converged enhanced ethernet (CEE) network that uses the FCoE protocol to transport FC data along with regular ethernet traffic over high speed ethernet links. FCoE encapsulates FC frames into ethernet frames._
- Supports Data Center Bridging (DCB) functionalities.
- DCB ensures lossless transmission of FC traffic over Ethernet.
- Allows us to deploy the same network components for transferring compute-to-compute and FC storage traffic.
- Reduces the complexity of managing multiple discrete networks.
- Uses multi-functional network adapters and switches.
- Reduces the infrastructure, power, and space consumed in a data center.
#figure(
image("./assets/fcoesan.png")
)
#table(
columns: (auto, auto),
table.header([ Component ], [ Description ]),
[ Converged Network Adapter (CNA) ], [
- Provides functionality of both NIC and FC HBA in a single device.
- Encapsulates FC traffic onto Ethernet frames.
- Consolidates both FC and regular Ethernet traffic over CEE links.
], [ Software FCoE adapter ], [
- A software on the compute system that performs FCoE processing.
- Supported NICs transfer both FCoE and regular Ethernet traffic.
], [ FCoE Switch ], [
- Contains Fibre Channel Forwarder (FCF), Ethernet Bridge, and a set of ports for FC, Ethernet, or FCoE connectivity.
- FCF encapsulates FC frames into Ethernet frames and vice versa.
], [ FCoE storage port ], [
- Connects to FCoE switch
- Enables end-to-end FCoE environment.
]
)
=== Inter-Cloud communication
#figure(
image("./assets/icc.png")
)
- The cloud tenets of rapid elasticity, resource pooling, and broad network access create a sense of limitless resources.
- They also create a sense that these resources can be accessed from anywhere over a network.
- However, a single cloud does not have infinite resources.
- A cloud without adequate resources may still be able to satisfy requests if it can access resources from another cloud.
- Several combinations of inter-cloud communication exist.
- Inter-cloud communication allows clouds to balance workloads by accessing and using computing resources from other cloud infrastructures.
- Providers must ensure network connectivity of the cloud infrastructure over a WAN to other clouds for resource access and workload distribution.