Thread: Scaling my web server's content delivery capacity?

  1. Scaling my web server's content delivery capacity?

    I have been running an HTTP mirror site for free software on a single
    Sun server for the past 13 months and it has now grown to the
    stage where I need to scale up, but I can't actually scale this
    machine any further (vertically).

    My first instinct was to find another machine of similar specs, get
    another RAID array and mirror my system entirely, then round-robin
    DNS them. This is affordable but I'm not convinced it is the most
    cost-effective solution, nor an efficient use of the resources. It
    would also become increasingly difficult to manage with the addition
    of each node.
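
    For what it's worth, the round-robin part itself is trivial: it is
    just multiple A records on one name, something like this zone-file
    fragment (hypothetical label, documentation-range addresses):

        ; two identical mirrors behind one name
        mirror    IN  A   192.0.2.10
        mirror    IN  A   192.0.2.11

    BIND rotates the answer order by default, so clients spread roughly
    evenly across the hosts, but there is no health checking; a dead
    node keeps receiving its share of requests until someone pulls its
    record, which is one more thing to manage per node.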

    Right now I deliver files directly to the client from direct-attached
    RAID storage. My file set is large and growing and I have plenty of
    room to scale there by simply adding more RAID enclosures. What I am
    yet to settle on is a method of scaling my delivery capacity as my
    audience expands.

    I am imagining I would place a cluster of reverse-proxy servers in
    front of my current server, effectively making it the "origin" server.

    The proxy servers would each have a fraction of the disk space of my
    current server, just enough to handle current downloads. Perhaps a
    couple of small 15KRPM disks in RAID-0.

    Each new request should be forwarded to the proxy server bearing the
    least load at the time of the request (using something like
    mod_backhand for Apache 1.3). The proxy will then fetch the file
    from the origin server and deliver it to the client.
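
    To make that concrete, this is roughly the shape of the per-proxy
    Apache 1.3 config I have been sketching from the mod_backhand
    documentation (the paths, multicast address, and subnet are
    placeholders, and I have not tested any of this):

        # mod_backhand under Apache 1.3
        LoadModule backhand_module libexec/mod_backhand.so
        AddModule mod_backhand.c

        # where stats are kept locally, and how peers exchange them
        UnixSocketDir /var/backhand/backhand
        MulticastStats 224.225.0.1:4445
        AcceptStats 10.0.0.0/255.255.255.0

        <Location "/">
            # discard peers whose stats are stale, then prefer the
            # least-loaded remaining candidate
            Backhand byAge
            Backhand byLoad
        </Location>

    My understanding is that byAge filters out peers that have stopped
    reporting and byLoad then sorts the survivors by reported load,
    which is the "send each request to the least-loaded proxy"
    behaviour I described above.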

    The reason I am holding off on deploying such an architecture is
    that I am not totally convinced by it. Also,
    mod_backhand/Wackamole is the only software I have found that would
    enable this kind of setup and they seem to be best supported on
    FreeBSD, so I'm interested in what other methods are being used out
    there on Solaris and Linux.

    I realize that most of the big corporations have moved towards CDNs,
    but how would somewhere like CNET have scaled their delivery capacity
    before they outsourced it to Kontiki/Akamai?

    If anyone can offer me some advice, feedback on my architecture, or
    topics I should read up on, I would really appreciate it.

    Cheers

    -colin


  2. Re: Scaling my web server's content delivery capacity?

    colindermott@gmail.com wrote:

    > The reason I am holding off on deploying such an architecture is
    > that I am not totally convinced by it. Also,
    > mod_backhand/Wackamole is the only software I have found that would
    > enable this kind of setup and they seem to be best supported on
    > FreeBSD, so I'm interested in what other methods are being used out
    > there on Solaris and Linux.
    >


    I think Squid can do exactly this sort of thing. (As in: I've
    worked at a large website which used a heavily customised Squid
    setup to do this, and it worked very well, much better than the
    commercial web cache layer it replaced, but I don't know the
    details and probably shouldn't tell you them if I did...)
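
    The basic accelerator setup is in the Squid documentation, though.
    Something like this in squid.conf is the usual starting point
    (Squid 2.5-era directive names and a hypothetical origin hostname,
    so check it against whatever version you deploy):

        # run Squid as an accelerator in front of a single origin
        http_port 80
        httpd_accel_host origin.example.com
        httpd_accel_port 80
        httpd_accel_single_host on
        httpd_accel_with_proxy off

        # a file mirror serves big objects; the default cacheable-size
        # ceiling (4 MB) is far too small for a software mirror
        maximum_object_size 2097151 KB
        cache_dir ufs /cache0 20000 16 256

        # let anyone fetch through the accelerator
        acl all src 0.0.0.0/0.0.0.0
        http_access allow all

    Each proxy keeps its hot set on its own disks and only troubles the
    origin on a miss, which is the division of labour you described.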

    --tim

