Thread: Status of Intel's Common System Interconnect ?

  1. Status of Intel's Common System Interconnect ?

    I lost track of what has and hasn't been released yet. Has Intel
    released any CSI (or whatever its name might be this week) based systems
    yet on either the 8086 or IA64 architectures?

    If so, have any vendors started to assemble such systems and started to
    market/sell them?

    Is this something that is coming real soon now, or have there been
    delays that put this well into next year?

    During a presentation 2 years ago, an HP guy said that HP might not
    adopt CSI for its IA64 based systems since it has its own proprietary
    chips. Does anyone know if this is still true? Or will HP adopt CSI
    for both 8086 and IA64 systems?

    At the motherboard level, would components such as those used to build
    "blades" have to be changed to interface between the blade system
    architecture and the motherboard's CSI interface? Or are those
    connected at a higher level of abstraction and not affected by CSI?

  2. Re: Status of Intel's Common System Interconnect ?

    On Oct 12, 1:20 am, JF Mezei wrote:
    > I lost track of what has and hasn't been released yet. Has Intel
    > released any CSI (or whatever its name might be this week) based systems
    > yet on either the 8086 or IA64 architectures?
    > [...]



    Several seconds spent searching intel.com revealed that the Common
    System Interconnect is now called QuickPath (or QuickPath Interconnect,
    or even QPI), and:

    "Starting in 2008 with its next generation microarchitectures—code
    named Nehalem and Tukwila—Intel is incorporating a scalable shared
    memory architecture (also known as non-uniform memory access, or NUMA).
    Intel's new system architecture and platform technology will be called
    Intel® QuickPath Technology."
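    For anyone wondering what NUMA means in practice: memory hangs off
    individual processors rather than off one shared hub, so a CPU reaches
    its own "local" memory faster than memory attached to another CPU, and
    software that cares about placement can ask the OS about it. A minimal
    sketch of doing that on Linux with libnuma (my illustration, nothing
    to do with Intel's paper; assumes a NUMA-capable box):

    /* numa_probe.c - print NUMA node count and inter-node distances,
     * then show a deliberate local allocation.
     * Build with: cc numa_probe.c -lnuma */
    #include <stdio.h>
    #include <numa.h>

    int main(void)
    {
        if (numa_available() < 0) {
            printf("No NUMA support here.\n");
            return 1;
        }
        int nodes = numa_max_node() + 1;
        printf("%d NUMA node(s)\n", nodes);

        /* Relative latency: 10 means local, 20+ means remote hop(s). */
        for (int from = 0; from < nodes; from++)
            for (int to = 0; to < nodes; to++)
                printf("node %d -> node %d: distance %d\n",
                       from, to, numa_distance(from, to));

        void *buf = numa_alloc_local(1 << 20);  /* 1 MiB near this CPU */
        numa_free(buf, 1 << 20);
        return 0;
    }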

    I leave it as an exercise for the reader to search hp.com for
    QuickPath, Nehalem, and Tukwila.

    According to the public VMS roadmap, VMS V8.4 will run on Tukwila.

  3. Re: Status of Intel's Common System Interconnect ?

    IanMiller wrote:

    > I leave it as an exercise for the reader to search hp.com for
    > QuickPath, Nehalem, and Tukwila.


    I searched "Tukwila quickpath". It gave me 9 results. The most likely
    document (a roadmap) was neat Intel marketing garbage, devoid of
    specific information.

    You have obviously way overestimated my capabilities if you thought I
    could easily find the results from an HP.COM search.



    > According to the public VMS roadmap, VMS V8.4 will run on Tukwila.


    Woopty doo. The VMS roadmap is not a document where I would expect
    details on HP's hardware, or on whether HP's own use of Tukwila will
    also involve QuickPath or will have some HP proprietary chipset, as I
    had been led to believe before.

  4. Re: Status of Intel's Common System Interconnect ?

    On Oct 13, 5:30 am, JF Mezei wrote:
    > [...]
    > Woopty doo. The VMS roadmap is not a document where I would expect
    > details on HP's hardware, or on whether HP's own use of Tukwila will
    > also involve QuickPath or will have some HP proprietary chipset, as I
    > had been led to believe before.


    This week's name has been QuickPath for a while now, but CSI is
    shorter.

    I'm not 100% sure I'm up to date either, but if I recall correctly,
    CSI-based systems require Nehalem (x86-64) or Tukwila (IA64) chips.
    And if I recall correctly from coverage of the Intel Developer Forum
    in August this year, both of them are late by months or even a year
    or more. Nehalem was then said to be shipping in Q4 2008, and Tukwila
    in early 2009; this may have changed, correction welcome.

    If those timescales are accurate, they presumably mean that Those In
    The Know already have access to early chips and boards, and are maybe
    even playing with systems. But nobody will really know when they'll
    hit the market for real, and any pre-release information which is
    floating around (be it from HP or anyone else) will likely either be
    covered by NDA and subject to change, or be content-free. Examples to
    the contrary most welcome.

    I don't recall seeing any worthwhile pre-publicity for *systems* based
    around either x86 or IA64 variants of CSI.

    Realistically, what does CSI buy anyone anyway that HyperTransport
    hasn't offered for years (and before that there was its close
    relative, the EV7 bus)? Access to nicely interconnectable chips from
    Intel as well as different and electrically incompatible chips from
    AMD, I suppose?
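    For the raw-bandwidth half of that comparison, the back-of-envelope
    sums using the per-link figures the vendors have published (my
    arithmetic, corrections welcome) go roughly like this:

    /* Peak per-link bandwidth from published figures: QPI at 6.4 GT/s
     * with 16 data bits (2 bytes) per direction; HyperTransport 3.0 at
     * 2.6 GHz double-pumped (5.2 GT/s) with a 32-bit (4-byte) link.
     * Both are full duplex, hence the final x2. */
    #include <stdio.h>

    int main(void)
    {
        double qpi  = 6.4e9 * 2 * 2;   /* 25.6 GB/s per link */
        double ht30 = 5.2e9 * 4 * 2;   /* 41.6 GB/s per link */
        printf("QPI link:    %.1f GB/s aggregate\n", qpi  / 1e9);
        printf("HT 3.0 link: %.1f GB/s aggregate\n", ht30 / 1e9);
        return 0;
    }

    So on paper CSI mostly pulls Intel level with what AMD already ships;
    the difference is whose chips you get to wire together.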

    One other thing I imagine CSI will do *if* it succeeds is make life
    even harder for the relatively high-cost, low-return IA64-specific
    sections of Intel and HP; why continue to duplicate two sets of
    engineering effort, one for x86-64 and one for electrically
    compatible, near-identical IA64 kit? When the consolidation does
    happen, maybe we'll eventually see a CSI-based Proliant-class system,
    and there'll be a second chance for IA64 to have a go in the
    Proliant-class market (the first attempt was in 2003), without the
    IA64 specifics making those systems cost a relative fortune to design
    and build as they did back then. Then what happens with entry-level
    VMS systems?

    Incidentally, speaking of the cost of Itanium: elsewhere I've recently
    seen it written that Itanium has brought down the cost of high-quality
    servers. I'm not sure about that myself. For most purposes an entry or
    mid-range Itanium (the "volume market" ones) offers no features I can
    see that a suitable Proliant hasn't offered for years, with Proliants
    at a very realistic price (just ask the many people whose businesses
    depend on them). Except Proliants don't do VMS; you need IA64 if you
    want to buy a VMS box today.

    Equally, other than the CPU cost, there's been no need for any huge
    difference in the bill-of-materials cost between an Alpha and an x86
    box since the days of the PWS family (or before that, the AlphaStation
    400 and its PC equivalent whose name I forget). Any difference in the
    price these technically similar boxes sold at was down to things other
    than the cost to manufacture (i.e. it was a political decision). Thus
    any difference in today's cost of a VMS box vs the cost a few years
    ago is also a political effect, not a technical one; it's not
    necessarily to do with the box having "Itanium Inside" rather than
    "Alpha Instead", but more to do with things going on in the market in
    general. Maybe CSI will change that too, but when EV6 had the same
    interconnect as AMD64 Hammer, did it help EV6 conquer the world, or
    not? EV7 and LDT/HyperTransport? Soon, Itanium and CSI?

    If CSI does catch on for both x86-64 and IA64, there'll be some
    interesting questions to answer (again) about whether VMS still needs
    VMS-specific system hardware, and that will inevitably lead right back
    to why it needs a VMS-specific CPU architecture... VMS running on VAX,
    Alpha, Itanium, Charon, SIMH, and maybe others not so well known says
    that the CPU architecture isn't the showstopper; politics is.

  5. Re: Status of Intel's Common System Interconnect ?

    On Oct 13, 7:59 am, johnwalla...@yahoo.co.uk wrote:
    > [...]
    > If CSI does catch on for both x86-64 and IA64, there'll be some
    > interesting questions to answer (again) about whether VMS still needs
    > VMS-specific system hardware, and that will inevitably lead right back
    > to why it needs a VMS-specific CPU architecture... VMS running on VAX,
    > Alpha, Itanium, Charon, SIMH, and maybe others not so well known says
    > that the CPU architecture isn't the showstopper; politics is.



    VMS does not need VMS-specific hardware - the current Itanium systems
    run VMS, Unix, or Windows on the same hardware/firmware. It seems
    there is a smaller range of configurations which are qualified for
    VMS.

    HP have said there will be increasing amounts of common hardware
    between the x86 (IA32) systems and the Itanium systems, and this will
    result in cheaper Itanium systems. VMS now runs on the cheapest
    hardware it has ever run on, and I expect it to get cheaper. Is it
    cheap enough? I don't know - perhaps not.

  6. Re: Status of Intel's Common System Interconnect ?


    "IanMiller" wrote in message
    news:c272542e-f885-47e4-a0bc-e31ad891529b@k7g2000hsd.googlegroups.com...

    > VMS now runs on the cheapest hardware it ever has run on
    > and I expect it to get cheaper. Is it cheap enough? - I don't know
    > - perhaps not.


    I'm not convinced about the PCL licence model, where the licence
    cost doubles every chip generation. But that's another story...
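    (To put numbers on "doubles every chip generation": a licence that
    costs C today would cost 2C, 4C, then 8C over the next three
    generations - 16C after four. My arithmetic, not anyone's price list.)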



  7. Re: Status of Intel's Common System Interconnect ?

    johnwallace4@yahoo.co.uk wrote:

    > I'm not 100% sure I'm up to date either, but if I recall correctly,
    > CSI-based systems require Nehalem (x86-64) or Tukwila (IA64) chips.





    The question is whether Tukwila requires CSI or whether HP can have
    its own proprietary chipset. As I recall, HP has had its own chipset
    for a while. (Then again, if they shipped all their chip designers to
    Intel, perhaps they don't have that capability anymore.)

    > floating around (be it from HP or anyone else) will likely either be
    > covered by NDA and subject to change, or be content-free. Examples to
    > the contrary most welcome.


    OK, so the answer to my question should have been "the information has
    not been made public yet" instead of "any 9-year-old could look it up
    by searching X and Y on the HP.com web site".



    > Realistically, what does CSI buy anyone anyway that HyperTransport
    > hasn't offered for years (and before that there was its close
    > relative, the EV7 bus)?


    AMD may now lag Intel in raw CPU horsepower, but they still have the
    HyperTransport advantage. With CSI, Intel catches up on that aspect of
    system performance. And it will allow the 8086 to scale to much larger
    systems.


    > For most purposes an entry or
    > mid-range Itanium (the "volume market" ones) offers no features I can
    > see that a suitable Proliant hasn't offered for years,


    Do 8086-based servers have the equivalent of the management console
    card that gives telnet/terminal access to the system firmware prompts?

    Also, HP's 8086 servers are BIOS-based. You would need to buy an Apple
    server to get an EFI console.

    > If CSI does catch on for both x86-64 and IA64,


    I don't think there is a need for the "if" there. The market will
    consume whatever Intel produces for the 8086. CSI should propagate
    down the chain of systems and eventually reach laptops (for
    Intel-based machines, of course).


    > interesting questions to answer (again) about whether VMS still needs
    > VMS-specific system hardware, and that will inevitably lead right back
    > to why it needs a VMS-specific CPU architecture...


    These questions are no longer relevant. As long as HP refuses to
    discuss porting VMS beyond IA64, the assumption is that VMS dies with
    IA64, just like MPE and Tru64 died when their platforms stopped being
    developed.

  8. Re: Status of Intel's Common System Interconnect ?

    On Oct 13, 8:59 am, johnwalla...@yahoo.co.uk wrote:
    > [...]
    > Maybe CSI will change that too, but when EV6 had the same interconnect
    > as AMD64 Hammer, did it help EV6 conquer the world, or not? EV7 and
    > LDT/HyperTransport? Soon, Itanium and CSI?


    Nitpick: it's the K7/Athlon (the 32-bit predecessor of Hammer) that
    had the EV6-compatible external interface.

    Hammer is built around HyperTransport 2.x, which is neither related to
    nor similar to the EV7 interconnect. The two are different at all
    layers, from the physical layer, to packets, to the cache-coherence
    protocol (EV7's coherence is directory-based, while Hammer uses
    broadcasting, which delivers better small-system performance at the
    cost of less-than-stellar scalability).
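    To make that trade-off concrete, here's a toy message count per read
    miss under the two schemes (my sketch, not any real protocol):

    /* Broadcast (snoopy) coherence probes every other node on a miss;
     * directory coherence consults the directory, then probes only the
     * nodes recorded as sharers. Counts are illustrative only. */
    #include <stdio.h>

    static int broadcast_probes(int nodes)   { return nodes - 1; }
    static int directory_probes(int sharers) { return 1 + sharers; }

    int main(void)
    {
        /* Assume a line cached by 2 sharers, across growing system sizes. */
        for (int nodes = 4; nodes <= 64; nodes *= 2)
            printf("%2d nodes: broadcast=%2d probes, directory=%d probes\n",
                   nodes, broadcast_probes(nodes), directory_probes(2));
        return 0;
    }

    Broadcast traffic grows with the node count, which is fine for a 2-4
    socket box and painful beyond that; the directory adds an indirection
    (hence the small-system latency cost) but stays flat as you scale.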



  9. Re: Status of Intel's Common System Interconnect ?

    On Oct 13, 7:52 pm, JF Mezei wrote:
    > johnwalla...@yahoo.co.uk wrote:
    > > I'm not 100% sure I'm up to date either, but if I recall correctly,
    > > CSI-based systems require Nehalem (x86-64) or Tukwila (IA64) chips.
    >
    > The question is whether Tukwila requires CSI or whether HP can have
    > its own proprietary chipset. As I recall, HP has had its own chipset
    > for a while. (Then again, if they shipped all their chip designers to
    > Intel, perhaps they don't have that capability anymore.)


    You should realize that QPI is the only way Tukwila can talk to the
    outside world. The GTL+, demultiplexed, double-pumped, 128-bit-wide,
    etc. etc. McKinley bus is gone. Completely gone.

    So in theory HP doesn't have to connect Tukwilas to each other
    directly. They don't even have to connect Tukwila's integrated memory
    controller to actual DIMMs (although not doing so would be extremely
    stupid). In theory they could build a 4-way Tukwila-based system with
    a topology similar to current 4-way Xeon systems, where all CPUs and
    all memory channels are connected to the same huge chip called a
    memory controller hub (MCH). Stupid, but possible. However, even in
    theory, the MCH has to be connected through QPI.
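    Roughly, the two topologies being contrasted (my sketch):

        MCH star (like 4-way Xeon):          Glueless QPI:

        CPU   CPU                            CPU --- CPU
           \ /                                |  \ /  |
           MCH --- all DIMMs                  |  / \  |
           / \                               CPU --- CPU
        CPU   CPU                            (DIMMs on each socket)

    Either way, every hop off a Tukwila socket is a QPI link.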

  10. Re: Status of Intel's Common System Interconnect ?

    On Oct 11, 8:20 pm, JF Mezei wrote:
    > I lost track of what has and hasn't been released yet. Has Intel
    > released any CSI (or whatever its name might be this week) based systems
    > yet on either the 8086 or IA64 architectures?
    > [...]


    As others have already posted, Intel's new "Core i7" product will
    support QuickPath:

    http://en.wikipedia.org/wiki/Intel_Core_i7
    http://en.wikipedia.org/wiki/Intel_N...roarchitecture)

    BTW, under the Nehalem brand you will see both quad-core and octa-core
    variants. Quad-cores should be out by Q4 of 2008.

    Neil Rieck
    Kitchener/Waterloo/Cambridge,
    Ontario, Canada.
    http://www3.sympatico.ca/n.rieck/OpenVMS.html
