Good practical solution for a switching architecture - Connectivity

Hi All,
I have 13 app servers that need to talk to 2 storage servers. We are
currently using a 24-port switch and want to replace it with a faster,
better one. We also predict we will grow to 30-40 servers within the
next year or so. So I want to invest in a good scalable solution, but I
haven't been able to find many guides out there on which path to choose.
I know chaining switches isn't a good idea. I know some switches can be
stacked, though there don't seem to be many of those models out there.
I also know that the ideal way to go (assuming you have tens of
servers) is to use a distribution switch that talks to a bunch of
24-port switches. However, since we right now only have enough servers
to fill up one switch, it seems overkill to go with a distribution setup.
We do need 1 Gb interfaces and decent backplane speed. It seems the
best thing for us to do for now would be to get a stackable 24-port
gigabit switch (e.g. the Foundry FGS624P). When we end up with 30
servers, we can get another one of those and stack them together. If we
need to support 100 servers down the road, we can unstack the switches
we have so far and buy a good distribution switch to route the traffic.
This way we end up throwing no hardware away and don't have to spend a
lot of cash up front.
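To sanity-check the plan, here is a back-of-the-envelope sketch of the port and backplane math. The numbers and helper functions are my own assumptions (one 1 Gbps NIC per server, 24-port fixed switches, full-duplex non-blocking backplane), not vendor specs:

```python
# Rough capacity check for a fixed-config gigabit switch plan.
# Assumptions: 1 Gbps NIC per server, 24 ports per switch,
# "non-blocking" = every port can run full duplex at line rate.

def ports_needed(app_servers, storage_servers, uplinks=0):
    """Total switch ports required, including any inter-switch uplinks."""
    return app_servers + storage_servers + uplinks

def switches_needed(total_ports, ports_per_switch=24):
    """How many fixed-config switches cover the port count (ceiling)."""
    return -(-total_ports // ports_per_switch)

def nonblocking_backplane_gbps(ports, port_speed_gbps=1):
    """Backplane capacity a switch needs to be non-blocking (full duplex)."""
    return ports * port_speed_gbps * 2

# Today: 13 app + 2 storage servers fit in one 24-port switch.
print(switches_needed(ports_needed(13, 2)))    # one switch is enough
# At roughly 40 servers, two stacked 24-port switches are needed.
print(switches_needed(ports_needed(38, 2)))
# A 24-port gigabit switch should quote ~48 Gbps of backplane.
print(nonblocking_backplane_gbps(24))
```

The point of the backplane figure is that a cheap switch advertising much less than 48 Gbps for 24 gigabit ports will oversubscribe under load, which matters for storage traffic.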
So if you have any thoughts on this or can recommend a book or (better
yet) online guide, please let me know. What do companies do out there
for typical situations like this?
Cheers & thanks.