This is a discussion on Implementing iSCSI - Input Desired - Storage.
Our company needs to expand storage capacity, for various reasons.
Users need more hard drive space, we need more hard drive space, we need
databases to have access to lots of hard drive space... etc.
A SAN seems to be the solution to these problems, so I have been
researching the options.
I'll start off by saying cost *IS* a concern. From the white papers I
have seen, FC vs. iSCSI is a 4-6ms difference in latency, and thousands
of dollars in cost difference.
We have a GigE network currently, and would like to set up an iSCSI SAN
on the back end of that network. We are talking about starting with 1TB
of storage, with a definite need to carve all sorts of tiny pools out of
it and allocate them out. It will definitely need to be able to expand
to more storage as time goes on.
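For what it's worth, the way I picture carving those small pools out of one big array is LVM on the storage server. A rough sketch of what I mean, with device names and sizes purely made up:

```shell
# Hypothetical devices -- substitute whatever the in-house RAID array shows up as.
pvcreate /dev/md0                     # tag the RAID array as an LVM physical volume
vgcreate san_vg /dev/md0              # pool it into one ~1TB volume group
lvcreate -L 100G -n users_lv san_vg   # carve out a small pool for user shares
lvcreate -L 250G -n db_lv san_vg      # and another for the databases
# Expansion later should just be adding another array to the group:
vgextend san_vg /dev/md1
lvextend -L +100G /dev/san_vg/db_lv   # then grow an individual pool as needed
```

Each logical volume would then be exported over iSCSI as its own LUN.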
Okay, so that was the brief synopsis. My question comes in several
phases. First, I am trying to decide: hardware or software iSCSI? I am
currently waiting on some white papers from SANRAD about hardware vs.
software iSCSI performance issues.
Would it be possible (or wise) to sit a single server in front of my
SAN ("in-house built") with, say, a QLogic HBA? (So far that is the only
iSCSI target device I have found. It seems the Emulex GN9000 is EOL, the
Adaptec ASA 7211 is an initiator only, and the Alacritech SES1001 is
more of a TOE, not something I could use as a target HBA.) Is that how
"it's done"?
Then have all of my servers (let's say 4 servers) that need access to
the SAN use either a software (or hardware) iSCSI initiator, with all of
them using the single server with the hardware HBA as their iSCSI
target. Is that server with the single HBA, sitting in front of my SAN,
going to be a huge bottleneck?
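On the bottleneck question, I did some back-of-the-envelope math while waiting on those white papers. The 90% figure is just my assumed efficiency after TCP/IP and iSCSI framing overhead, not a measured number:

```shell
# Raw GigE link rate: 1000 Mbit/s divided by 8 bits per byte
echo $(( 1000 / 8 ))        # 125 MB/s raw on the single port
# Assume ~90% efficiency after TCP/IP + iSCSI framing overhead
echo $(( 125 * 90 / 100 ))  # 112 MB/s usable
# Split evenly across 4 servers all streaming at once
echo $(( 112 / 4 ))         # 28 MB/s per server, worst case
```

So even before the target hardware enters the picture, one GigE port in front of the SAN caps what four busy initiators can pull.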
When I look at SANRAD or FalconStor, I see them trying to sell me what
the company wants to build: the system that sits in front of the SAN
but behind the "network". Has anyone had any success building such a
machine, or is this a "you ought to buy that part" sort of solution?
What sort of specs did your system have if you built this in house?
The storage itself will most likely be 30-100 SATA drives in a hardware
RAID configuration, built in house.
Has anyone had any success building a homegrown iSCSI solution using the
free iSCSI software at http://linux-iscsi.sourceforge.net/? Even if I am
using a software initiator, will we *HAVE* to purchase an iSCSI target
device (like the QLogic 4010)?
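From what I can tell, a pure software target such as the iSCSI Enterprise Target (iscsitarget.sourceforge.net, a sibling SourceForge project) needs no special HBA at all; it exports ordinary block devices over a standard NIC via /etc/ietd.conf. Something like this, with the target name and device path invented for illustration:

```
Target iqn.2005-04.com.example:san.pool1
        # Export one carved-out pool as LUN 0 over the ordinary GigE NIC
        Lun 0 Path=/dev/san_vg/pool1,Type=fileio
```

If that works in practice, the QLogic 4010 would only be needed on the target side for offload, not for basic functionality, but I would like confirmation from anyone who has run it.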
Any more input would be grand. I am still trying to ensure 1) I have a
firm grasp on iSCSI conceptually, and 2) I understand what would be
needed to implement it functionally & successfully.