Re: ramtest.zip Memory test utility, v 0.3 does not work - OS2


Thread: Re: ramtest.zip Memory test utility, v 0.3 does not work

  1. Re: ramtest.zip Memory test utility, v 0.3 does not work

    On Wed, 12 Jan 2005 15:18:55 UTC in comp.os.os2.apps, "Yuri Proniakin"
    wrote:

    > In your situation (467 MB of free memory) error 8 will be returned only if
    > DosAllocMem() with the OBJ_ANY flag fails. He has no idea why this may
    > happen with your 14.100j_W4 kernel.
    >
    > Perhaps, someone else there can explain it.


    467M is probably larger than QSV_MAXHPRMEM so it can't be satisfied in a
    single DosAllocMem(). To test this, the OP could try adding
    VIRTUALADDRESSLIMIT=2048 to his config.sys to expand QSV_MAXHPRMEM to a
    large enough value such that 467MB can be allocated from it in one
    chunk. My system has VIRTUALADDRESSLIMIT=1024 defaulted and I have

    QSV_MAXHPRMEM = 469762048

    or 448 MB, so it would also fail. Current values can be obtained by
    downloading http://www.os2warp.org/sysbench/qsysinfo.zip which simply
    prints all values returned by DosQuerySysInfo().

    --
    Trevor Hemsley, Brighton, UK.
    Trevor-Hemsley at dsl dot pipex dot com

  2. Re: ramtest.zip Memory test utility, v 0.3 does not work

    Trevor Hemsley wrote:
    > On Wed, 12 Jan 2005 15:18:55 UTC in comp.os.os2.apps, "Yuri Proniakin"
    > wrote:
    >
    >
    >>In your situation (467 MB of free memory) error 8 will be returned only if
    >>DosAllocMem() with the OBJ_ANY flag fails. He has no idea why this may
    >>happen with your 14.100j_W4 kernel.
    >>
    >>Perhaps, someone else there can explain it.

    >
    >
    > 467M is probably larger than QSV_MAXHPRMEM so it can't be satisfied in a
    > single DosAllocMem(). To test this, the OP could try adding
    > VIRTUALADDRESSLIMIT=2048 to his config.sys to expand QSV_MAXHPRMEM to a
    > large enough value such that 467MB can be allocated from it in one
    > chunk. My system has VIRTUALADDRESSLIMIT=1024 defaulted and I have
    >
    > QSV_MAXHPRMEM = 469762048
    >
    > or 448MB so would also fail. Current values can be obtained by
    > downloading http://www.os2warp.org/sysbench/qsysinfo.zip which just
    > prints all values returned by DosQuerySysInfo().
    >


    Currently I also have those values:

    QSV_MAXHPRMEM = 469762048
    QSV_MAXHSHMEM = 433586176
    QSV_MAXPROCESSES = 256
    QSV_VIRTUALADDRESSLIMIT = 1024

    I'm changing VIRTUALADDRESSLIMIT= in Config.sys as suggested and will
    report back once I'm next forced to reboot.

    Thanks so far all involved for their help :-)

    Wolfi

  3. Re: ramtest.zip Memory test utility, v 0.3 does not work

    Wolfi wrote:
    > Trevor Hemsley wrote:
    >
    >> 467M is probably larger than QSV_MAXHPRMEM so it can't be satisfied in a
    >> single DosAllocMem(). To test this, the OP could try adding
    >> VIRTUALADDRESSLIMIT=2048 to his config.sys to expand QSV_MAXHPRMEM to
    >> a large enough value such that 467MB can be allocated from it in one
    >> chunk. My system has VIRTUALADDRESSLIMIT=1024 defaulted and I have
    >> QSV_MAXHPRMEM = 469762048


    I just found a note in one of my older Config.sys versions that, around a
    year ago, I had some trouble after changing VIRTUALADDRESSLIMIT= to
    something other than the default of 1024 (the valid range goes up to a
    maximum of 3072).

    For Odin/W32 the limit, or suggested value, is apparently 2048; but back
    at that time there were some strange side effects, e.g. on the maximum
    amount of data that RamFS can hold.

    I'm trying it now with =2048, but won't see the outcome until the next
    forced reboot is required.

    Wolfi

  4. Re: ramtest.zip Memory test utility, v 0.3 does not work

    Wolfi wrote:
    > Trevor Hemsley wrote:
    >
    >> 467M is probably larger than QSV_MAXHPRMEM so it can't be satisfied in a
    >> single DosAllocMem(). To test this, the OP could try adding
    >> VIRTUALADDRESSLIMIT=2048 to his config.sys to expand QSV_MAXHPRMEM to
    >> a large enough value such that 467MB can be allocated from it in one
    >> chunk. My system has VIRTUALADDRESSLIMIT=1024 defaulted and I have
    >> QSV_MAXHPRMEM = 469762048
    >>
    >> or 448MB so would also fail. Current values can be obtained by
    >> downloading http://www.os2warp.org/sysbench/qsysinfo.zip which just
    >> prints all values returned by DosQuerySysInfo().
    >>

    >
    > Currently I also have those values:
    >
    > QSV_MAXHPRMEM = 469762048
    > QSV_MAXHSHMEM = 433586176
    > QSV_MAXPROCESSES = 256
    > QSV_VIRTUALADDRESSLIMIT = 1024
    >
    > I'm changing VIRTUALADDRESSLIMIT= in Config.sys as suggested and will
    > report back once I'm next forced to reboot.


    Well, in the meantime I had to reboot due to a short power outage, and
    used the opportunity to play around a bit with the VIRTUALADDRESSLIMIT=
    setting.

    With a value of VIRTUALADDRESSLIMIT=1408 it now works fine :-), also from
    within Config.sys, so big thanks for pointing me straight in the right
    direction. QSysInfo.exe is now reporting:
    QSV_MAXHPRMEM = 822083584
    QSV_MAXHSHMEM = 785907712
    QSV_MAXPROCESSES = 256
    QSV_VIRTUALADDRESSLIMIT = 1408

    This now covers the maximum amount of 768 MB of memory possible with
    my mobo.

    Now I still have to find out whether, and if so which, bad side effects
    this change might cause with Odin and/or RamFS.

    Wolfi

  5. Re: ramtest.zip Memory test utility, v 0.3 does not work

    [A complimentary Cc of this posting was sent to
    Wolfi
    ], who wrote in article <34u96iF4fneopU1@individual.net>:
    > QSV_MAXHPRMEM = 822083584
    > QSV_MAXHSHMEM = 785907712
    > QSV_MAXPROCESSES = 256
    > QSV_VIRTUALADDRESSLIMIT = 1408
    >
    > This now covers up to the maximum amount of 768MB of memory possible with
    > my mobo.


    These numbers are about virtual memory; they have nothing to do with
    the available physical memory.

    A real life example: I was one of the first people to be beaten by low
    QSV_MAXHPRMEM on Warp 3. This was on a system with 32M of physical
    RAM.

    Hope this helps,
    Ilya

  6. Re: ramtest.zip Memory test utility, v 0.3 does not work

    Ilya Zakharevich wrote:
    > [A complimentary Cc of this posting was sent to
    > Wolfi
    > ], who wrote in article <34u96iF4fneopU1@individual.net>:
    >
    >> QSV_MAXHPRMEM = 822083584
    >> QSV_MAXHSHMEM = 785907712
    >> QSV_MAXPROCESSES = 256
    >> QSV_VIRTUALADDRESSLIMIT = 1408
    >>
    >>This now covers up to the maximum amount of 768MB of memory possible with
    >>my mobo.

    >
    >
    > These numbers are about virtual memory; they have nothing to do with
    > the available physical memory.
    >
    > A real life example: I was one of the first people to be beaten by low
    > QSV_MAXHPRMEM on Warp 3. This was on a system with 32M of physical
    > RAM.


    Well, due to a lack of expertise in this area, I cannot comment on your
    arguments.
    But while fiddling around with the "right" size of the
    VIRTUALADDRESSLIMIT= value, I found that values between 1024 and some
    1344, if I recall correctly, which resulted in a QSV_MAXHPRMEM value
    slightly smaller than my physical RAM, still caused Ramtest to fail.
    Hence I concluded that VIRTUALADDRESSLIMIT= has to be set so large that
    the resulting QSV_MAXHPRMEM value exceeds the amount of physically
    installed memory.

    Wolfi

  7. Re: ramtest.zip Memory test utility, v 0.3 does not work

    On Sun, 16 Jan 2005 04:44:57 UTC in comp.os.os2.apps, Ilya Zakharevich
    wrote:

    > These numbers are about virtual memory; they have nothing to do with
    > the available physical memory.


    The way the program he's been trying to run works is that it does a
    single DosAllocMem() for the size of physical RAM minus the size of RAM
    that OS/2 has locked. It does this to force the rest of the data in
    physical RAM to be paged out. In his situation, the largest single
    contiguous virtual memory chunk in the system was that available in high
    memory and that was not large enough to satisfy the DosAllocMem.
    Expanding VIRTUALADDRESSLIMIT= has expanded the size of the largest chunk
    of virtual memory available and allows the DosAllocMem() to succeed.

    --
    Trevor Hemsley, Brighton, UK.
    Trevor-Hemsley at dsl dot pipex dot com

  8. Re: ramtest.zip Memory test utility, v 0.3 does not work

    [A complimentary Cc of this posting was sent to
    Trevor Hemsley
    ], who wrote in article :
    > The way the program works that he's been trying to run is that it does a
    > single DosAllocMem() for the size of physical RAM minus the size of RAM
    > that OS/2 has locked.


    So the purpose of tuning the memory was to avoid a bug in a program?
    Moreover, in a program which produces 1 bit (binary ;-) of new
    knowledge? ;-)

    Thanks for disambiguating,
    Ilya

  9. Re: ramtest.zip Memory test utility, v 0.3 does not work

    Ilya Zakharevich wrote:
    > [A complimentary Cc of this posting was sent to
    > Wolfi
    > ], who wrote in article <34u96iF4fneopU1@individual.net>:
    >
    >> QSV_MAXHPRMEM = 822083584
    >> QSV_MAXHSHMEM = 785907712
    >> QSV_MAXPROCESSES = 256
    >> QSV_VIRTUALADDRESSLIMIT = 1408
    >>
    >>This now covers up to the maximum amount of 768MB of memory possible with
    >>my mobo.

    >
    > These numbers are about virtual memory; they have nothing to do with
    > the available physical memory.
    >
    > A real life example: I was one of the first people to be beaten by low
    > QSV_MAXHPRMEM on Warp 3. This was on a system with 32M of physical
    > RAM.


    While we're on the subject, I just had an experience lately that made me
    question how OS/2 is organizing its memory.

    I've got my VIRTUALADDRESSLIMIT set to 3072. I went to run LEECH to rip
    a CD into a WAV file and told it to buffer up to 400MB (since I have
    512MB of RAM and OS/2 was using up less than 100MB). It yelled at me
    and said that it couldn't be allocated. So I fiddled around until I
    found the max number that it would allow. This number wound up being
    around 307MB.

    After goofing around updating my IDE and CDROM drivers (because I was
    disgusted with how slow the CD ripping was going), I rebooted and
    started the process again. This time I could only get 115MB allocated
    for the buffer! (Memory fragmentation, I figure, but still a bit weird.)

    When I was done with my session, I pushed the lockup button, expecting
    to see my lockup bitmap on the screen (which is 704x490 24bpp bitmap,
    stretched to 2560x1024). The bitmap never came up. It was just a solid
    color background. I was scratching my head for a while, and finally
    once when the CD stopped ripping, I tried it again, and this time the
    bitmap displayed. Just to confirm, I started burning another CD and
    went to lockup. The bitmap wasn't there.

    From what I understand, bitmap data is stored in PMShell's shared arena.
    But how is LEECH's storage buffer affecting the availability of memory
    in the shared arena? Is it also allocating from the shared arena for
    some unknown reason?

    And one more thing, more appropriate to the person to whom I'm responding...

    I had a Perl script that I wrote and ran at work on Solaris which is
    nothing more than a really involved text filter. I needed it to operate
    on a 800MB file, and the resulting file would be approximately the same
    size. While it is running on Solaris, it has a memory footprint of
    500MB. Tried to run it on OS/2 and I got a SIGSEGV. Any ideas why?
    This was running on version 5.004_55 (which, I'll admit, is ancient).

    --
    [Reverse the parts of the e-mail address to reply.]


  10. Re: ramtest.zip Memory test utility, v 0.3 does not work

    On Sun, 16 Jan 2005 21:22:40 UTC in comp.os.os2.apps, Marty
    wrote:

    > I've got my VIRTUALADDRESSLIMIT set to 3072. I went to run LEECH to rip
    > a CD into a WAV file and told it to buffer up to 400MB (since I have
    > 512MB of RAM and OS/2 was using up less than 100MB). It yelled at me
    > and said that it couldn't be allocated. So I fiddled around until I
    > found the max number that it would allow. This number wound up being
    > around 307MB.


    I would assume that the program is compiled with a compiler whose
    runtime either does not know about or does not use the OBJ_ANY parameter
    to DosAllocMem(). Without it, the program is restricted to the largest
    amount of space it can allocate from low memory:

    QSV_MAXPRMEM = 300613632

    (http://www.os2warp.org/sysbench/qsysinfo.zip)

    You'd have to grab the source to Leech and bodge it to use DosAllocMem
    and test if the underlying o/s supports OBJ_ANY and then grab the buffer
    directly. Either that or amend the CRT to use it!

    --
    Trevor Hemsley, Brighton, UK.
    Trevor-Hemsley at dsl dot pipex dot com

  11. Re: ramtest.zip Memory test utility, v 0.3 does not work

    Sir:

    Marty wrote:
    > Ilya Zakharevich wrote:
    >
    >> [A complimentary Cc of this posting was sent to
    >> Wolfi ], who wrote in article
    >> <34u96iF4fneopU1@individual.net>:
    >>
    >>> QSV_MAXHPRMEM = 822083584
    >>> QSV_MAXHSHMEM = 785907712
    >>> QSV_MAXPROCESSES = 256
    >>> QSV_VIRTUALADDRESSLIMIT = 1408
    >>>
    >>> This now covers up to the maximum amount of 768MB of memory possible
    >>> with my mobo.

    >>
    >>
    >> These numbers are about virtual memory; they have nothing to do with
    >> the available physical memory.
    >>
    >> A real life example: I was one of the first people to be beaten by low
    >> QSV_MAXHPRMEM on Warp 3. This was on a system with 32M of physical
    >> RAM.

    >
    >
    > While we're on the subject, I just had an experience lately that made me
    > question how OS/2 is organizing its memory.
    >
    > I've got my VIRTUALADDRESSLIMIT set to 3072. I went to run LEECH to rip
    > a CD into a WAV file and told it to buffer up to 400MB (since I have
    > 512MB of RAM and OS/2 was using up less than 100MB). It yelled at me
    > and said that it couldn't be allocated. So I fiddled around until I
    > found the max number that it would allow. This number wound up being
    > around 307MB.
    >
    > After goofing around updating my IDE and CDROM drivers (because I was
    > disgusted with how slow the CD ripping was going), I rebooted and
    > started the process again. This time I could only get 115MB allocated
    > for the buffer! (Memory fragmentation, I figure, but still a bit weird.)
    >
    > When I was done with my session, I pushed the lockup button, expecting
    > to see my lockup bitmap on the screen (which is 704x490 24bpp bitmap,
    > stretched to 2560x1024). The bitmap never came up. It was just a solid
    > color background. I was scratching my head for a while, and finally
    > once when the CD stopped ripping, I tried it again, and this time the
    > bitmap displayed. Just to confirm, I started burning another CD and
    > went to lockup. The bitmap wasn't there.
    >
    > From what I understand bitmap data is stored in the shared arena in
    > PMShell. But how is LEECH's storage buffer affecting the availability
    > of memory in the shared arena?? Is it also allocating from the shared
    > arena for some unknown reason?
    >
    > And one more thing, more appropriate to the person to whom I'm
    > responding...
    >
    > I had a Perl script that I wrote and ran at work on Solaris which is
    > nothing more than a really involved text filter. I needed it to operate
    > on a 800MB file, and the resulting file would be approximately the same
    > size. While it is running on Solaris, it has a memory footprint of
    > 500MB. Tried to run it on OS/2 and I got a SIGSEGV. Any ideas why?
    > This was running on version 5.004_55 (which, I'll admit, is ancient).
    >

    You've got to remember that OS/2 has both a low memory area and a high
    memory area, due to the DOS compatibility limits. The dividing line
    between them is at 512 MiB. Unless you have high memory enabled and the
    program is fixed to allocate from there, it will default to the low
    memory area. Most of OS/2 will allocate its own storage in the low
    memory area just because no one has updated it. This includes loading
    the bitmap for the lockup function into video memory, which is mapped
    below that 512 MiB limit. It would be nice if Perl allocated its memory
    requirements from the high memory area first. Then it would not get in
    the way of that limited resource, the low memory area. Maybe the latest
    version does.
    --
    Bill
    Thanks a Million!

  12. Re: ramtest.zip Memory test utility, v 0.3 does not work

    Trevor Hemsley wrote:
    > On Sun, 16 Jan 2005 21:22:40 UTC in comp.os.os2.apps, Marty
    > wrote:
    >
    >>I've got my VIRTUALADDRESSLIMIT set to 3072. I went to run LEECH to rip
    >>a CD into a WAV file and told it to buffer up to 400MB (since I have
    >>512MB of RAM and OS/2 was using up less than 100MB). It yelled at me
    >>and said that it couldn't be allocated. So I fiddled around until I
    >>found the max number that it would allow. This number wound up being
    >>around 307MB.

    >
    > I would assume that the program is compiled with a compiler whose
    > runtime either does not know about or does not use the OBJ_ANY parameter
    > to DosAllocMem(). Without this it'll be restricted to the largest amount
    > of space it can allocate from
    >
    > QSV_MAXPRMEM = 300613632
    >
    > (http://www.os2warp.org/sysbench/qsysinfo.zip)
    >
    > You'd have to grab the source to Leech and bodge it to use DosAllocMem
    > and test if the underlying o/s supports OBJ_ANY and then grab the buffer
    > directly. Either that or amend the CRT to use it!


    Right. I don't have a problem with the fact that there was a
    limitation. What I was really weirded out about was the fact that my
    bitmap wouldn't load in PMShell while this large chunk was allocated.
    That didn't make any sense to me, unless Leech was also using the shared
    arena, which it doesn't seem like there's any reason it should.

    --
    [Reverse the parts of the e-mail address to reply.]


  13. Re: ramtest.zip Memory test utility, v 0.3 does not work

    On Sun, 16 Jan 2005 22:29:57 UTC in comp.os.os2.apps, Marty
    wrote:

    > What I was really weirded out about was the fact that my
    > bitmap wouldn't load in PMShell while this large chunk was allocated.
    > That didn't make any sense to me, unless Leech was also using the shared
    > arena, which it doesn't seem like there's any reason it should.


    Shared allocation starts up top and moves down, private starts low and
    works up. You caused them to meet and then shared can't be expanded
    downwards - though I would have expected there to be 'holes' available
    that could have been used, maybe just not big enough.

    --
    Trevor Hemsley, Brighton, UK.
    Trevor-Hemsley at dsl dot pipex dot com

  14. Re: ramtest.zip Memory test utility, v 0.3 does not work

    [A complimentary Cc of this posting was sent to
    Marty
    ], who wrote in article :
    > I had a Perl script that I wrote and ran at work on Solaris which is
    > nothing more than a really involved text filter. I needed it to operate
    > on a 800MB file, and the resulting file would be approximately the same
    > size. While it is running on Solaris, it has a memory footprint of
    > 500MB. Tried to run it on OS/2 and I got a SIGSEGV. Any ideas why?
    > This was running on version 5.004_55 (which, I'll admit, is ancient).


    Same reason. EMX applications can't use memory above 512M without a
    major rewrite.

    Hope this helps,
    Ilya


  15. Re: ramtest.zip Memory test utility, v 0.3 does not work

    [A complimentary Cc of this posting was sent to
    Trevor Hemsley
    ], who wrote in article :
    > You'd have to grab the source to Leech and bodge it to use DosAllocMem
    > and test if the underlying o/s supports OBJ_ANY and then grab the buffer
    > directly. Either that or amend the CRT to use it!


    a) Probably you mean cdrecord;

    b) Won't work. High memory is not "simply" usable; so if the CRT does
    not jump through hoops to make this memory "transparent" to the
    program, it does not make sense to allocate high memory for
    "general purpose" usage.

    [Using high memory is easy if you do not pass it to any external API,
    the CRT's or OS/2's.]

    Hope this helps,
    Ilya

  16. Re: ramtest.zip Memory test utility, v 0.3 does not work

    Ilya Zakharevich wrote:
    > [A complimentary Cc of this posting was sent to
    > Marty
    > ], who wrote in article :
    >
    >>I had a Perl script that I wrote and ran at work on Solaris which is
    >>nothing more than a really involved text filter. I needed it to operate
    >>on a 800MB file, and the resulting file would be approximately the same
    >>size. While it is running on Solaris, it has a memory footprint of
    >>500MB. Tried to run it on OS/2 and I got a SIGSEGV. Any ideas why?
    >>This was running on version 5.004_55 (which, I'll admit, is ancient).

    >
    > Same reason. EMX applications can't use memory above 512M without a
    > major rewrite.


    Has an OBJ_ANY Perl been written that will use high memory for
    user-defined variables and arrays? I would imagine that anything in the
    user space of variables is never passed directly to device drivers or
    16-bit API calls anyway, given that the way Perl stores variables is
    often not in the format needed for such things, so a copy would have to
    be made up front regardless.

    --
    [Reverse the parts of the e-mail address to reply.]


  17. Re: ramtest.zip Memory test utility, v 0.3 does not work

    On Sun, 16 Jan 2005 22:16:20 UTC "William L. Hartzell"
    wrote:

    > You got to remember that OS/2 has both a low memory area and a high
    > memory area, due to the DOS compatibility limits. The dividing line
    > between them is at 512 MiB. Unless you have high memory enabled and the
    > program is fixed to allocate from there, it will default to the low
    > memory area. Most of OS/2 will allocate its own use in the low memory
    > area just because no one has updated it. This includes loading that
    > bitmap for the lockup function into video memory, which is mapped to
    > below that 512 MiB limit. It would be nice if Perl allocated its memory
    > requirements from High Memory area first. Then it would not get in the
    > way of this limited resource of the low memory area. Maybe the latest
    > version does.


    Not so much DOS but OS/2 vs OS/2. The 512 MiB division, as far as
    OS/2 is concerned, is in consideration of tiled memory in order to
    stay compatible with OS/2 apps going back to day one.

    --
    Will Honea

  18. Re: ramtest.zip Memory test utility, v 0.3 does not work

    [A complimentary Cc of this posting was sent to
    Marty
    ], who wrote in article :
    > > Same reason. EMX applications can't use memory above 512M without a
    > > major rewrite.


    > Has an OBJ_ANY Perl been written that will use high memory for
    > user-defined variables and arrays?


    This has nothing to do with Perl. EMX must be modified, not Perl. To
    do this, one needs a QA framework. Nobody wants to participate in one
    (except myself, but I do not trust QA done by one pair of eyes).

    > I would imagine that anything in the user space of variables is
    > never passed directly to device drivers or 16 bit API calls anyway,


    This is reported to have nothing to do with device drivers. However,
    most OS/2 API calls do (or *may*, on some systems) pass through
    some 16-bit subsystem. Of the thousand(s) of API calls, fewer than a
    hundred are documented to be safe with high memory. (Some of those
    so documented are actually not fully safe!)

    Hope this helps,
    Ilya

  19. Re: ramtest.zip Memory test utility, v 0.3 does not work

    Ilya Zakharevich wrote:
    > [A complimentary Cc of this posting was sent to
    > Marty
    > ], who wrote in article :
    >
    >>>Same reason. EMX applications can't use memory above 512M without a
    >>>major rewrite.

    >
    >>Has an OBJ_ANY Perl been written that will use high memory for
    >>user-defined variables and arrays?

    >
    > This has nothing to do with Perl. EMX must be modified, not Perl. To
    > do this, one needs a QA framework. Nobody wants to participate in one
    > (except myself, but I do not trust QA done by one pair of eyes).


    That would apply to application-wide usage. I was theorizing out loud
    that limiting it to user variables would limit the scope to something
    that could be done.

    >>I would imagine that anything in the user space of variables is
    >>never passed directly to device drivers or 16 bit API calls anyway,

    >
    > This is reported to have nothing to do with device drivers. However,
    > most OS/2 API calls do (or *may*, on some systems) pass through
    > some 16-bit subsystem. Of the thousand(s) of API calls, fewer than a
    > hundred are documented to be safe with high memory. (Some of those
    > so documented are actually not fully safe!)


    How many of these API calls, as called from Perl, are pointed directly
    to the area in which user variables are stored without being moved,
    translated, or duplicated in some way to another (potentially safe) area?

    For example, are you ever going to pass a Perl associative array, with
    its incredibly extensible, yet downright potentially wacky indices to an
    OS/2 API call? Certainly there will be some pre-processing involved.

    Please note that I'm not suggesting, recommending, or demanding that you
    or any one given person does this work. I think Perl on OS/2 works very
    well and I'm very grateful to even have it at all. I'm just curious
    what would be involved in even doing this "half way", because I think it
    would make Perl on OS/2 a lot more powerful than it currently is.

    --
    [Reverse the parts of the e-mail address to reply.]


  20. Re: ramtest.zip Memory test utility, v 0.3 does not work

    [A complimentary Cc of this posting was sent to
    Marty
    ], who wrote in article <3LOdndJRRNnNw3bcRVn-1Q@comcast.com>:
    > That would apply to application-wide usage. I was theorizing out loud
    > that limiting it to user variables would limit the scope to something
    > that could be done.


    a) The "string" part of a Perl variable is more or less a window into
    computer memory (if you forget about encodings). So it must be
    in low memory.

    b) The other stuff could theoretically be moved to upper memory
    independently of CRT library modifications. But this would mean a
    significant rewrite of Perl memory allocation routines: while a
    memory allocation request *is* already accompanied by a "purpose"
    index (it looks like

    long *buf;
    New(765, buf, array_length, long);

    here 765 is a [more or less random] id of the request), one needs to
    identify which ids correspond to "string parts", and one needs a
    special framework for realloc().

    Then, given the size of the perl interpreter code, one needs to pray
    that nothing was missed.

    c) IDs of memory allocation requests from Perl extensions written in
    C are also more or less random. One would need to decide what to
    do with them on an extension-per-extension basis.

    Then pray again that everything works.

    So it boils down to the same question: QA. And, by my estimate, it
    would be much easier to update EMX than to mutilate one particular
    large application.

    > > This is reported to have nothing to do with device drivers. However,
    > > most OS/2 API calls do (or *may*, on some systems) pass through
    > > some 16-bit subsystem. Of the thousand(s) of API calls, fewer than a
    > > hundred are documented to be safe with high memory. (Some of those
    > > so documented are actually not fully safe!)

    >
    > How many of these API calls, as called from Perl, are pointed directly
    > to the area in which user variables are stored without being moved,
    > translated, or duplicated in some way to another (potentially safe) area?


    Perl is slow, but not because it is lousily coded; it is the
    multifaceted nature of Perl variables, and the overhead of dispatching
    the "virtual machine opcodes", which make it slow. When Perl knows how
    to translate a request into an API call, it does it as quickly as
    possible.

    No extraneous buffers, no nothing.

    > For example, are you ever going to pass a Perl associative array, with
    > its incredibly extensible, yet downright potentially wacky indices to an
    > OS/2 API call? Certainly there will be some pre-processing involved.


    The indices of the array are not \0-terminated, so they cannot be
    directly passed to external programs. But the values of the array are
    just normal Perl variables. They are required to be \0-terminated
    exactly for the purpose of passing them to API calls without any
    translation.

    > Please note that I'm not suggesting, recommending, or demanding that you
    > or any one given person does this work. I think Perl on OS/2 works very
    > well and I'm very grateful to even have it at all. I'm just curious
    > what would be involved in even doing this "half way", because I think it
    > would make Perl on OS/2 a lot more powerful than it currently is.


    You want a good Perl, you participate in QA of EMX. ;-) As simple as
    this.

    Hope this helps,
    Ilya
