Detailed about Fibre Channel | DAS | SAN | NAS | Components of a Storage System

Intelligent Storage System

Hi Friends,

Now we will get an overview of the intelligent storage system, DAS, SAN, and NAS. To know about storage RAID technology, refer to the link below.

Basic Introduction

Business-critical applications require high levels of performance, availability, security, and scalability. RAID technology made an important contribution to enhancing storage performance and reliability, but hard disk drives, even with a RAID implementation, cannot meet the performance requirements of today's applications.

With advancements in technology, a new breed of storage solutions known as an intelligent storage system has evolved. These arrays have an operating environment that controls the management, allocation, and utilization of storage resources. These storage systems are configured with large amounts of memory called cache.

Components of an Intelligent Storage System

An intelligent storage system consists of four key components: front end, cache, back end, and physical disks. An I/O request received from the host at the front-end port is processed through cache and the back end, to enable storage and retrieval of data from the physical disk. A read request can be serviced directly from cache if the requested data is found in cache.

[Figure: Components of an intelligent storage system]

Front End

The front end provides the interface between the storage system and the host. It consists of two components: front-end ports and front-end controllers. The front-end ports enable hosts to connect to the intelligent storage system.

Front-end controllers route data to and from cache via the internal data bus. When cache receives write data, the controller sends an acknowledgment message back to the host.


Cache

Cache is an important component that enhances the I/O performance in an intelligent storage system. Cache is semiconductor memory where data is placed temporarily to reduce the time required to service I/O requests from the host.

Structure of Cache

Cache is organized into pages or slots, the smallest units of cache allocation. The size of a cache page is configured according to the application I/O size. Cache consists of the data store and tag RAM. The data store holds the data, while the tag RAM tracks the location of the data in the data store and on disk. Entries in the tag RAM indicate where data is found in cache and where the data belongs on disk. The tag RAM also includes a dirty bit flag, which indicates whether the data in cache has been committed to disk or not.
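The data store, tag RAM, and dirty bit described above can be sketched in Python. This is a minimal illustration only; the class and field names (Cache, TagEntry, cache_slot) are invented for the example and are not taken from any real array's firmware.

```python
PAGE_SIZE = 8 * 1024  # cache page size, tuned to the application I/O size (assumed)

class TagEntry:
    """One tag RAM entry: where the data lives in cache and where it belongs on disk."""
    def __init__(self, cache_slot, disk_lba, dirty=False):
        self.cache_slot = cache_slot  # location of the data in the data store
        self.disk_lba = disk_lba      # where the data belongs on disk
        self.dirty = dirty            # True if not yet committed to disk

class Cache:
    def __init__(self, num_pages):
        self.data_store = [None] * num_pages  # holds the data itself
        self.tag_ram = {}                     # disk_lba -> TagEntry

    def write(self, disk_lba, data, slot):
        self.data_store[slot] = data
        # Dirty bit set: the data is in cache but not yet de-staged to disk.
        self.tag_ram[disk_lba] = TagEntry(slot, disk_lba, dirty=True)

    def lookup(self, disk_lba):
        # Consult the tag RAM to locate the data in the data store, if present.
        entry = self.tag_ram.get(disk_lba)
        return None if entry is None else self.data_store[entry.cache_slot]
```

A lookup that finds an entry in the tag RAM is a cache hit; a missing entry means the data must come from disk.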

[Figure: Structure of cache]
Read Operation with Cache

When a host issues a read request, the front-end controller accesses the tag RAM to determine whether the required data is available in cache. If the requested data is found in cache, it is called a read cache hit, or read hit, and the data is sent directly to the host. If the requested data is not found in cache, it is called a read cache miss, or read miss; the data must first be read from disk into cache and then sent to the host.

Read performance is measured in terms of the read hit ratio, or the hit rate, usually expressed as a percentage. This ratio is the number of read hits with respect to the total number of read requests. A higher read hit ratio improves the read performance.
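The read hit ratio described above is a simple percentage; a small sketch (the function name is illustrative):

```python
def read_hit_ratio(read_hits, total_reads):
    """Read hit ratio: read hits as a percentage of all read requests."""
    if total_reads == 0:
        return 0.0  # no reads yet, so no meaningful ratio
    return 100.0 * read_hits / total_reads

# e.g. 850 hits out of 1000 read requests gives an 85 percent hit ratio
```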

[Figure: (a) Read hit (b) Read miss]

Write Operation with Cache

Write operations with cache provide performance advantages over writing directly to disks. When an I/O is written to cache and acknowledged, it is completed in far less time (from the host’s perspective) than it would take to write directly to disk.

A write operation with cache is implemented in the following ways:

Write-back cache: Data is placed in cache and an acknowledgment is sent to the host immediately. Later, data from several writes are committed (de-staged) to the disk. Write response times are much faster. However, uncommitted data is at risk of loss in the event of cache failures.

Write-through cache: Data is placed in the cache and immediately written to the disk, and an acknowledgment is sent to the host. Because data is committed to disk as it arrives, the risks of data loss are low but write response time is longer because of the disk operations.
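The two write policies above can be contrasted in a short sketch. The function names, and the use of plain dictionaries to stand in for cache and disk, are illustrative assumptions:

```python
def write_through(cache, disk, lba, data):
    cache[lba] = data
    disk[lba] = data   # committed to disk as it arrives
    return "ack"       # acknowledgment sent only after the disk write

def write_back(cache, dirty, disk, lba, data):
    cache[lba] = data
    dirty.add(lba)     # marked dirty; de-staged to disk later
    return "ack"       # acknowledgment sent immediately, before any disk I/O

def destage(cache, dirty, disk):
    """Later, data from several writes is committed (de-staged) to disk in one pass."""
    for lba in sorted(dirty):
        disk[lba] = cache[lba]
    dirty.clear()
```

Write-back acknowledges before the disk is touched, which is why its response time is faster but its uncommitted data is at risk if cache fails.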

Cache Management

Cache is a finite and expensive resource that needs proper management. Even though intelligent storage systems can be configured with large amounts of cache, when all cache pages are filled, some pages have to be freed up to accommodate new data and avoid performance degradation.

Least Recently Used (LRU): An algorithm that continuously monitors data access in cache and identifies the cache pages that have not been accessed for a long time. LRU either frees up these pages or marks them for reuse.

Most Recently Used (MRU): An algorithm that is the converse of LRU. In MRU, the pages that have been accessed most recently are freed up or marked for reuse.
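The LRU algorithm described above can be sketched with Python's OrderedDict. This is an illustrative model of page reuse, not a real array's cache manager:

```python
from collections import OrderedDict

class LRUCache:
    """When the cache is full, the least recently used page is freed for reuse."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # ordered from least to most recently used

    def access(self, lba, data=None):
        if lba in self.pages:
            self.pages.move_to_end(lba)         # mark as most recently used
            return self.pages[lba]
        if data is not None:
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)  # evict the least recently used page
            self.pages[lba] = data
        return data
```

An MRU policy would differ only in the eviction line: it would pop the most recently used entry (`last=True`) instead.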

As cache fills, the storage system must take action to flush dirty pages (data written into the cache but not yet written to the disk) in order to manage its availability. Flushing is the process of committing data from cache to the disk.

On the basis of the I/O access rate and pattern, high and low levels called watermarks are set in cache to manage the flushing process. The high watermark (HWM) is the cache utilization level at which the storage system starts high-speed flushing of cache data. The low watermark (LWM) is the point at which the storage system stops high-speed or forced flushing and returns to idle flush behavior.

Idle flushing: Occurs continuously, at a modest rate, when the cache utilization level is between the high and low watermark.

High watermark flushing: Activated when cache utilization hits the high watermark. The storage system dedicates some additional resources to flushing. This type of flushing has minimal impact on host I/O processing.

Forced flushing: Occurs in the event of a large I/O burst when cache reaches 100 percent of its capacity, which significantly affects the I/O response time. In forced flushing, dirty pages are forcibly flushed to disk.
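The watermark-driven choice among the three flushing modes can be sketched as follows. The watermark values 0.80 and 0.60 are assumed purely for illustration, since real systems set them based on the I/O access rate and pattern:

```python
HWM = 0.80  # high watermark: start high-speed flushing (assumed value)
LWM = 0.60  # low watermark: return to idle flushing (assumed value)

def flush_mode(utilization):
    """Pick a flushing mode from the current cache utilization (0.0 to 1.0)."""
    if utilization >= 1.0:
        return "forced"          # cache is full: dirty pages flushed forcibly
    if utilization >= HWM:
        return "high_watermark"  # extra resources dedicated to flushing
    return "idle"                # continuous flushing at a modest rate
```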

[Figure: Types of flushing]
Back End

The back end provides an interface between cache and the physical disks. It consists of two components: back-end ports and back-end controllers. The back end controls data transfers between cache and the physical disks. From cache, data is sent to the back end and then routed to the destination disk. Physical disks are connected to ports on the back end.

Storage Topologies 

There are three storage topologies:

1. Direct Attached Storage (DAS)
2. Storage Area Network (SAN)
3. Network Attached Storage (NAS)

Direct Attached Storage: The storage device is directly connected to the host by a serial or parallel bus. The physical bus has distance limitations; high-speed connectivity can be sustained only over short distances.

[Figure: DAS architecture]

Storage Area Network: A storage area network (SAN) carries data between servers (also known as hosts) and storage devices through fibre channel switches. A SAN enables storage consolidation and allows storage to be shared across multiple servers.

[Figure: SAN implementation]

[Figure: Structure of SAN]

Before learning more about SAN, let us take an overview of Fibre Channel.

The FC architecture forms the fundamental construct of the SAN infrastructure. Fibre Channel is a high-speed network technology that runs on high-speed optical fiber cables (preferred for front-end SAN connectivity) and serial copper cables (preferred for back-end disk connectivity). The FC technology was created to meet the demand for increased speeds of data transfer among computers and servers.

FC Connectivity

The FC architecture supports three basic interconnectivity options: point-to-point, arbitrated loop (FC-AL), and Fibre Channel switched fabric (FC-SW).


Point-to-point is the simplest FC configuration — two devices are connected directly to each other. This configuration provides a dedicated connection for data transmission between nodes. However, the point-to-point configuration offers limited connectivity, as only two devices can communicate with each other at a given time.
[Figure: Point-to-point topology]

Fibre Channel Arbitrated Loop

In the FC-AL configuration, devices are attached to a shared loop. FC-AL has the characteristics of a token ring topology and a physical star topology. In FC-AL, each device contends with other devices to perform I/O operations. At any given time, only one device can perform I/O operations on the loop.

FC-AL shares the loop's bandwidth among the attached devices. Because each device in a loop has to wait for its turn to process an I/O request, the speed of data transmission is low in an FC-AL topology.

FC-AL uses 8-bit addressing. It can support up to 127 devices on a loop.

[Figure: Fibre Channel arbitrated loop]

Fibre Channel Switched Fabric

A Fibre Channel switched fabric (FC-SW) network provides interconnected devices, dedicated bandwidth, and scalability. The addition or removal of a device in a switched fabric is minimally disruptive; it does not affect the ongoing traffic between other devices.

[Figure: Fibre Channel switched fabric]

Fibre Channel Ports

Ports in a switched fabric can be one of the following types:

N_port: An end point in the fabric. This port is also known as the node port. Typically, it is a host port (HBA) or a storage array port that is connected to a switch in a switched fabric.

NL_port: A node port that supports the arbitrated loop topology. This port is also known as the node loop port.

E_port: An FC port that forms the connection between two FC switches. This port is also known as the expansion port. The E_port on an FC switch connects to the E_port of another FC switch in the fabric through a link, which is called an Inter-Switch Link (ISL). ISLs are used to transfer host-to-storage data as well as the fabric management traffic from one switch to another. ISL is also one of the scaling mechanisms in SAN connectivity.

F_port: A port on a switch that connects to an N_port. It is also known as a fabric port and cannot participate in FC-AL.

FL_port: A fabric port that participates in FC-AL. This port is connected to the NL_ports on an FC-AL loop. An FL_port also connects a loop to a switch in a switched fabric; as a result, all NL_ports in the loop can participate in FC-SW. This configuration is referred to as a public loop. In contrast, an arbitrated loop without any switches is referred to as a private loop. A private loop contains nodes with NL_ports and does not contain an FL_port.

G_port: A generic port that can operate as an E_port or an F_port and determines its functionality automatically during initialization.

[Figure: SAN topologies]
World Wide Names

Each device in the FC environment is assigned a 64-bit unique identifier called the World Wide Name (WWN).

The Fibre Channel environment uses two types of WWNs: World Wide Node Name (WWNN) and World Wide Port Name (WWPN).
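As a sketch of how a WWPN breaks into fields, the following assumes the IEEE Registered (NAA 5) format, in which the 64 bits split into a 4-bit NAA identifier, a 24-bit vendor OUI, and a 36-bit vendor-assigned sequence. The example WWPN value is made up for illustration:

```python
def parse_wwpn(wwpn):
    """Split a 64-bit WWPN such as '50:06:01:60:3b:60:24:5d' into its fields.
    The field layout here assumes the IEEE Registered (NAA 5) format."""
    octets = bytes.fromhex(wwpn.replace(":", ""))
    value = int.from_bytes(octets, "big")
    naa = value >> 60                  # 4-bit Name Address Authority field
    oui = (value >> 36) & 0xFFFFFF     # 24-bit vendor OUI
    vendor_seq = value & 0xFFFFFFFFF   # 36-bit vendor-assigned sequence
    return naa, oui, vendor_seq
```

The OUI identifies the vendor that manufactured the port, which is why WWPNs from the same array family share a common prefix.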

[Figure: Structure of a WWPN]

Network-Attached Storage: Network-attached storage (NAS) is an IP-based file-sharing device attached to a local area network. It provides storage consolidation through file-level data access and sharing.

NAS uses network and file-sharing protocols to perform filing and storage functions. These protocols include TCP/IP for data transfer, and CIFS and NFS for remote file service. NAS enables both UNIX and Microsoft Windows users to share the same data seamlessly; to enable this sharing, NAS typically uses NFS for UNIX clients and CIFS for Windows clients. A NAS device is a dedicated, high-performance, high-speed, single-purpose file-serving and storage system.

Benefits of NAS

NAS offers the following benefits:

Improved efficiency
Improved flexibility
Centralized storage
Simplified management
High availability

FC Protocol Architecture

[Figure: FC protocol architecture]

FC Cables and Transceivers

[Figure: Fibre Channel cabling]

[Figure: Multimode step-index fiber]

[Figure: Single-mode step-index fiber]

Fibre Channel Frame

[Figure: Structure of a Fibre Channel frame]

To know more about the differences between SAN and NAS, the manufacturers of storage arrays, FC SAN switches, and other hardware components, refer to the link below.
