Large array size in C++ - SGI


Thread: Large array size in C++

  1. Large array size in C++

    Hi all!

    I am trying to run a neural network program written in C++ on my SGI
    Prism server, but for larger array sizes it gives me a segmentation
    fault.

    Is there any way I can make the compiler accept a larger array size?



    Regards,
    swagat


  2. Re: Large array size in C++

    In article <1151644162.350979.284950@h44g2000cwa.googlegroups.com>,
    swagat wrote:
    >I am trying to run a neural network program written in C++ on my SGI
    >Prism server, but for larger array sizes it gives me a segmentation
    >fault.


    >Is there any way I can make the compiler accept a larger array size?


    Maybe. Is the array a static variable or an automatic variable
    (local to a function), or is it dynamically allocated memory?

    Have you attempted to adjust your ulimit values at the shell level?
    $ ulimit -a
    time(seconds) unlimited
    file(blocks) unlimited
    data(kbytes) 524288
    stack(kbytes) 65536
    memory(kbytes) 121916
    coredump(blocks) unlimited
    nofiles(descriptors) 200
    vmemory(kbytes) 524288

    Have you checked with systune to see what the maximum resources are?

    Have you used the linker options to reposition the libraries to
    give you more room? See the 'dso' man page; if you need to
    do the repositioning for an existing executable, see rqs.

  3. Re: Large array size in C++


    > Have you attempted to adjust your ulimit values at the shell level?


    'ulimit -a' gives me the following output on my system:
    ---------------------------------------------------------------------
    $ ulimit -a
    core file size (blocks, -c) 0
    data seg size (kbytes, -d) unlimited
    file size (blocks, -f) unlimited
    max locked memory (kbytes, -l) unlimited
    max memory size (kbytes, -m) unlimited
    open files (-n) 1024
    pipe size (512 bytes, -p) 8
    stack size (kbytes, -s) 8192
    cpu time (seconds, -t) unlimited
    max user processes (-u) unlimited
    virtual memory (kbytes, -v) unlimited
    ---------------------------------------------------------------------------

    How do I adjust the 'ulimit' values? I read the man page but it does
    not provide much information. In fact, it does not say anything about
    how to set these values.


    > Have you checked with systune to see what the maximum resources are?


    The file '/etc/systune.conf' contains the following:
    --------------------------------------------------------------------------------
    # /etc/systune.conf

    # Format:
    # <filename>:<value>
    # <filename>:<value>
    # ...

    # Filesystem tuning

    # defaults*10 for kernel 2.0
    # /proc/sys/kernel/file-max:10240
    # /proc/sys/kernel/inode-max:30720

    # defaults*10 for kernel 2.2
    # /proc/sys/fs/file-max:40960
    # /proc/sys/fs/inode-max:81920
    -------------------------------------------------------------------------

    The rest of the file deals with VM tuning, network tuning, network
    settings, etc., which I think is not relevant to this discussion.

    >
    > Have you used the linker options to reposition the libraries to
    > give you more room? See the 'dso' man page; if you need to
    > do the repositioning for an existing executable, see rqs .


    'dso' is not available on my system. Can you give me some more
    information on how I can increase the memory available to the arrays
    used by C++ programs?

    Regards,
    swagat


  4. Re: Large array size in C++

    In article <1151731364.454706.270480@p79g2000cwp.googlegroups.com>,
    "swagat" wrote:

    : 'ulimit -a' gives me following output on my system
    : ---------------------------------------------------------------------
    : $ ulimit -a
    : core file size (blocks, -c) 0
    : data seg size (kbytes, -d) unlimited
    : file size (blocks, -f) unlimited
    : max locked memory (kbytes, -l) unlimited
    : max memory size (kbytes, -m) unlimited
    : open files (-n) 1024
    : pipe size (512 bytes, -p) 8
    : stack size (kbytes, -s) 8192
    : cpu time (seconds, -t) unlimited
    : max user processes (-u) unlimited
    : virtual memory (kbytes, -v) unlimited
    : ---------------------------------------------------------------------------
    :
    : How do I adjust the 'ulimit' values? I read the man page but it does
    : not provide much information. In fact, it does not say anything about
    : how to set these values.

    ulimit is normally a shell builtin, so you'll want to consult the manpage for
    your shell to determine how to set different values.

    It would appear you're running bash (on Linux, would be my guess from
    the values you've pasted). I've excerpted the relevant section of the
    bash manpage below for your reading convenience.

    However, I don't suggest you change the ulimit values. You shouldn't
    be allocating huge structures on the stack; I would suggest that the
    best plan is to change your code to allocate them on the heap with
    malloc(3) and friends.


    : ulimit [-SHacdefilmnpqrstuvx [limit]]
    : Provides control over the resources available to the shell and
    : to processes started by it, on systems that allow such control.
    : The -H and -S options specify that the hard or soft limit is set
    : for the given resource. A hard limit cannot be increased once
    : it is set; a soft limit may be increased up to the value of the
    : hard limit. If neither -H nor -S is specified, both the soft
    : and hard limits are set. The value of limit can be a number in
    : the unit specified for the resource or one of the special values
    : hard, soft, or unlimited, which stand for the current hard
    : limit, the current soft limit, and no limit, respectively. If
    : limit is omitted, the current value of the soft limit of the
    : resource is printed, unless the -H option is given. When more
    : than one resource is specified, the limit name and unit are
    : printed before the value. Other options are interpreted as follows:
    : -a All current limits are reported
    : -c The maximum size of core files created
    : -d The maximum size of a process's data segment
    : -e The maximum scheduling priority (`nice')
    : -f The maximum size of files created by the shell
    : -i The maximum number of pending signals
    : -l The maximum size that may be locked into memory
    : -m The maximum resident set size
    : -n The maximum number of open file descriptors (most systems
    : do not allow this value to be set)
    : -p The pipe size in 512-byte blocks (this may not be set)
    : -q The maximum number of bytes in POSIX message queues
    : -r The maximum rt priority
    : -s The maximum stack size
    : -t The maximum amount of cpu time in seconds
    : -u The maximum number of processes available to a single
    : user
    : -v The maximum amount of virtual memory available to the
    : shell
    : -x The maximum number of file locks
    :
    : If limit is given, it is the new value of the specified resource
    : (the -a option is display only). If no option is given, then -f
    : is assumed. Values are in 1024-byte increments, except for -t,
    : which is in seconds, -p, which is in units of 512-byte blocks,
    : and -n and -u, which are unscaled values. The return status is
    : 0 unless an invalid option or argument is supplied, or an error
    : occurs while setting a new limit.


    Cheers - Tony 'Nicoya' Mantler

    --
    Tony 'Nicoya' Mantler - Master of Code-fu
    -- nicoya@ubb.ca -- http://www.ubb.ca/ --

  5. Re: Large array size in C++

    In article <1151731364.454706.270480@p79g2000cwp.googlegroups.com>,
    swagat wrote:

    >> Have you checked with systune to see what the maximum resources are?


    >The file '/etc/systune.conf' contains the following:
    >--------------------------------------------------------------------------------
    ># defaults*10 for kernel 2.2
    ># /proc/sys/fs/file-max:40960


    Sorry, I didn't notice the 'prism' part of your question before.
    IRIX has a bunch of rlimit_* in systune; I have no idea what the
    Linux equivalent is. Similarly, the dso man page is for IRIX.

    Tony Mantler is correct that large items should not be allocated
    on the stack -- but you did not answer my question about how the
    variable was being allocated, so we can't yet tell whether his
    advice is best for this situation.
