Very slow performances (NBU5.1 MP3) - Veritas Net Backup

Thread: Very slow performances (NBU5.1 MP3)

  1. Re: Very slow performances (NBU5.1 MP3)

    in article 4374cd56@ROSASTDMZ05., Francois at Francois.Abbet@swissquote.ch
    wrote on 11/11/05 8:56 AM:

    > The disk staging is only 6-7 MBytes/sec !
    > [full configuration snipped; see the original post below]


    Perhaps the client you are backing up to disk is only capable of 6-7 MB/sec.


  2. Very slow performances (NBU5.1 MP3)


    Hi,
    With a good config:
    Master (Sun V240, Solaris 9, 2x 1.28 GHz) with 2x LTO-3, Gbit/sec network.
    Media server (same HW, with an EMC CX500, RAID-3 configured), Gbit/sec network.

    The disk staging runs at only 6-7 MBytes/sec!
    In parallel with the disk staging I can do an FTP from the media server to
    the master and get 30-40 MB/s...
    The problem is not the SAN.

    /etc/system :
    * Message queues
    set msgsys:msginfo_msgmap=500
    set msgsys:msginfo_msgmax=8192
    set msgsys:msginfo_msgmnb=65536
    set msgsys:msginfo_msgmni=256
    set msgsys:msginfo_msgssz=32
    set msgsys:msginfo_msgtql=500
    set msgsys:msginfo_msgseg=8192

    * Semaphores
    set semsys:seminfo_semmap=64
    set semsys:seminfo_semmni=1024
    set semsys:seminfo_semmns=1024
    set semsys:seminfo_semmnu=1024
    set semsys:seminfo_semmsl=300
    set semsys:seminfo_semopm=32
    set semsys:seminfo_semume=64

    * Shared memory
    set shmsys:shminfo_shmmax=16777216
    set shmsys:shminfo_shmmin=1
    set shmsys:shminfo_shmmni=230
    set shmsys:shminfo_shmseg=100

    NUMBER_DATA_BUFFERS : 4
    NUMBER_DATA_BUFFERS_DISK : 16
    SIZE_DATA_BUFFERS : 65536
    SIZE_DATA_BUFFERS_DISK : 1048576

    Where is the bug?

    Many thanks for any help.

  3. Re: Very slow performances (NBU5.1 MP3)


    ps wrote:
    > in article 4374cd56@ROSASTDMZ05., Francois at Francois.Abbet@swissquote.ch
    > wrote on 11/11/05 8:56 AM:
    >
    >> The disk staging is only 6-7 MBytes/sec !
    >> [configuration snipped; see the original post above]
    >
    > Perhaps the client you are backing up to disk is only capable of 6-7 MB/sec.


    Did something change in your environment?
    NIC card config?
    Switch config?
    Ask the network admin to check the port stats on the two cards below.

    Where is the "disk staging is only 6-7 MBytes/sec" taking place?
    On the master or the media server?
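
    If you want to rule out a duplex or link-speed mismatch yourself, something
    like this on Solaris is a quick check (a sketch; bge0 is an assumption based
    on the V240's onboard ports, and the exact ndd parameter names vary by driver):

    ndd -get /dev/bge0 link_speed              # expect 1000 on a GigE link
    ndd -get /dev/bge0 link_duplex             # 2 = full duplex on bge
    netstat -i                                 # input/output errors, collisions
    kstat -p bge:0 | egrep -i 'err|collision'  # per-instance error counters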

    You said that:
    Master (SUN V240 Sol9, 2x1.28GHz) with 2x LTO-3, network Gbit/sec.
    Media server (same HW, with an EMC CX500, RAID-3 configured), net Gbit/sec.

    If the kernel were set wrong, NetBackup would not run correctly, so I
    doubt it's the kernel. (Look at the NetBackup install PDF for Solaris.)

    Pedro For president!

    J

  4. Re: Very slow performances (NBU5.1 MP3)


    "Francois" wrote:
    >
    >Hi,
    >With a good config :
    >Master (SUN V240 Sol9, 2x1.28GHz) with 2x LTO-3, network Gbits/sec.
    >Media server (same HW, with a EMC CX500, RAID-3 configured), net Gbit/sec.
    >
    >The disk staging is only 6-7 MBytes/sec !
    >In parallel of the disk-staging, I can do a ftp from the Media server to
    >the Master, and I can have 30-40 MB/s...
    >The problem is not the SAN.
    >
    >/etc/system :
    >* Message queues
    >set msgsys:msginfo_msgmap=500
    >set msgsys:msginfo_msgmax=8192
    >set msgsys:msginfo_msgmnb=65536
    >set msgsys:msginfo_msgmni=256
    >set msgsys:msginfo_msgssz=32
    >set msgsys:msginfo_msgtql=500
    >set msgsys:msginfo_msgseg=8192
    >
    >* Semaphores
    >set semsys:seminfo_semmap=64
    >set semsys:seminfo_semmni=1024
    >set semsys:seminfo_semmns=1024
    >set semsys:seminfo_semmnu=1024
    >set semsys:seminfo_semmsl=300
    >set semsys:seminfo_semopm=32
    >set semsys:seminfo_semume=64
    >
    >* Shared memory
    >set shmsys:shminfo_shmmax=16777216
    >set shmsys:shminfo_shmmin=1
    >set shmsys:shminfo_shmmni=230
    >set shmsys:shminfo_shmseg=100
    >
    >NUMBER_DATA_BUFFERS : 4
    >NUMBER_DATA_BUFFERS_DISK : 16
    >SIZE_DATA_BUFFERS : 65536
    >SIZE_DATA_BUFFERS_DISK : 1048576
    >
    >Where is the bug ?
    >
    >Many thanks for help


    I doubt the kernel parameters are the culprit; usually jobs fail completely
    when those are set too low. (In the case of message queues, this can be
    easily checked with ipcs.)
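
    For instance, a quick sanity check of the IPC limits versus what is actually
    in use (a sketch; sysdef output format varies a bit between Solaris releases):

    sysdef | egrep -i 'msg|sem|shm'   # kernel IPC tunables the box booted with
    ipcs -a                           # queues, semaphores and shared memory in use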

    I'm no guru with the backup-to-disk method (we are currently starting to
    test the feasibility of it in our data center), but I do know that the buffer
    settings can have huge impacts on performance. I would start looking there.

    For example, your NUMBER_DATA_BUFFERS seems low; we run most of our servers
    at 32, with a buffer size of 262144. (We use Solaris 8/9 and LTO-1 and LTO-2
    drives.)

    How I locked down the best buffer settings was this (a rough sketch follows
    the list):
    1) Create two test data sets: one large 100 GB file and one 100 GB collection
    of small files.
    2) Starting with the default number and size of buffers, run each backup on
    an idle system and record the results.
    3) Rinse and repeat with incremental changes to each setting.
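
    The sketch below shows roughly what that looks like in practice; the paths
    and the TEST_POLICY/TEST_SCHED names are placeholders for a user-backup
    policy and schedule you would set up yourself:

    # Build the two test data sets (paths are placeholders)
    mkdir -p /stage/testdata/small
    dd if=/dev/zero of=/stage/testdata/bigfile bs=1024k count=102400    # one ~100 GB file
    i=0
    while [ $i -lt 10000 ]; do                                          # ~100 GB of 10 MB files
        dd if=/dev/zero of=/stage/testdata/small/f$i bs=1024k count=10 2>/dev/null
        i=`expr $i + 1`
    done

    # Run a user backup of the test data and wait for it to complete
    /usr/openv/netbackup/bin/bpbackup -p TEST_POLICY -s TEST_SCHED -w /stage/testdata

    # Record the elapsed time and KB/sec reported for the job
    /usr/openv/netbackup/bin/admincmd/bpdbjobs -report | head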

    Time consuming, but worth the effort in the long run. We improved performance
    by a factor of 5 in some cases. Another setting you might need to look at
    is NET_BUFFER_SZ, which affects the buffers used when reading/writing data
    over the network. (I couldn't determine which way and to where your data is
    traveling based on your OP.) Also, make sure these files are in the correct
    directory. I have found information via Google that listed wrong locations
    for these files. They are (and anyone correct me if I'm wrong, I'm working
    from memory here):
    /usr/openv/netbackup/NET_BUFFER_SZ
    /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
    /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
    ...
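
    For what it's worth, these are just plain text files containing a single
    number, so setting them on the media server is a one-liner each (the values
    below are only the examples discussed above, not recommendations):

    echo 32     > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
    echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
    echo 262144 > /usr/openv/netbackup/NET_BUFFER_SZ
    # bptm reads these when a job starts, so no daemon restart should be needed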

    You can tell for sure what buffer settings were used for a particular backup
    by looking at your bptm log. You can also tell if your buffers are too few/too
    many or too large/too small. Look for something along these lines:
    13:58:15.452 [11491] <2> write_data: waited for full buffer 4892 times, delayed 6536 times
    15:03:05.924 [15682] <2> write_data: waited for full buffer 5365 times, delayed 7085 times
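
    To pull those lines out quickly: the legacy bptm log is one date-stamped file
    per day under /usr/openv/netbackup/logs/bptm (the directory has to exist for
    bptm to log at all), so something like:

    grep "waited for" /usr/openv/netbackup/logs/bptm/log.* | tail -20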

    Note: there is a huge grey area as to the number of delays and when a buffer
    is too large or too small. (It is a moving target because your backups are
    not all the same, etc.) I have seen some people try to keep delays under
    1,000; I have seen others keep them under 100,000. The numbers above are
    from one of our servers that has been tuned to a level we are happy with.

    Lastly, make sure you are not saturating the PCI buses on your servers.
    This is easy to do on smaller-tier Sun systems, as buses are shared. (I.e.,
    don't put your SCSI controller and an HBA with 3 LTO-2 drives attached on
    the same 256 MB/s bus.) We use Sun V1280s for most of our backup servers;
    I put a dual-port 2 Gb QLogic HBA in the 66 MHz PCI slot and make certain
    that no other PCI adapters share that bus. (Also be aware of which bus the
    onboard GigE ports are on.)

    Hope this helps; I just went through all this with our servers, so it's
    quasi-fresh in my head.

    Most important of all, make sure you test several restores with each
    buffer setting as well. Remember to record the current buffer settings, just
    in case you have trouble restoring from backups done prior to the buffer
    tweaking. I have heard stories of shops that made drastic changes to buffer
    settings only to have restore troubles later on.

    Hope this helps,
    CC



  5. Re: Very slow performances (NBU5.1 MP3)


    "Chuck" wrote:
    >
    >"Francois" wrote:
    >>
    >>Hi,
    >>With a good config :
    >>Master (SUN V240 Sol9, 2x1.28GHz) with 2x LTO-3, network Gbits/sec.
    >>Media server (same HW, with a EMC CX500, RAID-3 configured), net Gbit/sec.
    >>
    >>The disk staging is only 6-7 MBytes/sec !
    >>In parallel of the disk-staging, I can do a ftp from the Media server to
    >>the Master, and I can have 30-40 MB/s...
    >>The problem is not the SAN.
    >>
    >>/etc/system :
    >>* Message queues
    >>set msgsys:msginfo_msgmap=500
    >>set msgsys:msginfo_msgmax=8192
    >>set msgsys:msginfo_msgmnb=65536
    >>set msgsys:msginfo_msgmni=256
    >>set msgsys:msginfo_msgssz=32
    >>set msgsys:msginfo_msgtql=500
    >>set msgsys:msginfo_msgseg=8192
    >>
    >>* Semaphores
    >>set semsys:seminfo_semmap=64
    >>set semsys:seminfo_semmni=1024
    >>set semsys:seminfo_semmns=1024
    >>set semsys:seminfo_semmnu=1024
    >>set semsys:seminfo_semmsl=300
    >>set semsys:seminfo_semopm=32
    >>set semsys:seminfo_semume=64
    >>
    >>* Shared memory
    >>set shmsys:shminfo_shmmax=16777216
    >>set shmsys:shminfo_shmmin=1
    >>set shmsys:shminfo_shmmni=230
    >>set shmsys:shminfo_shmseg=100
    >>
    >>NUMBER_DATA_BUFFERS : 4
    >>NUMBER_DATA_BUFFERS_DISK : 16
    >>SIZE_DATA_BUFFERS : 65536
    >>SIZE_DATA_BUFFERS_DISK : 1048576
    >>
    >>Where is the bug ?
    >>
    >>Many thanks for help

    >
    >I doubt the kernel parameters are the culprit, usually jobs fail completely
    >when these are set to low. (as in the case of message queues, this can be
    >easily checked with ipcs)
    >
    >I'm no guru with the backup to disk method (currently we are starting to
    >test the feasbility of this in our data center), I do know that the buffer
    >settings can have huge impacts on performance. I would start looking there.
    >
    >For example, your NUMBER_DATA_BUFFERS seems low, we run most of our servers
    >at 32, with a buffer size of 262144. (We use Solaris 8/9 and LTO 1 and LTO
    >2 drives)
    >
    >How I locked down the best buffer settings was this:
    >1) Create 2 test data sets: 1 large 100gb file and 1 100gb collection of
    >small files.
    >2) Starting with default number of buffers and size of buffers, run each
    >backup on an idle system and record results.
    >3) Rinse and repeat with incremental changes to each setting.
    >
    >Time consuming but worth the effort in the long run. We improved performance
    >by a factor of 5 in some cases. Another setting you might need to look at
    >is NET_BUFER_SZ, which affects the buffers used when reading/writing data
    >from a network. (I couldn't determine which way and to where your data is
    >traveling based on your OP) Also, make sure these files are in the correct
    >directory. I have found information via google that listed wrong locations
    >for these files. They are, and anyone correct me if Im wrong, Im working
    >from memory here:
    >/usr/openv/netbackup/NET_BUFFER_SZ
    >/usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
    >/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
    >...
    >
    >You can tell for sure what buffer settings were used for a particuliar backup
    >by looking at your bptm log. You can also tell if your buffers are too few/too
    >many or loo large/too small. Look for something along these lines:
    >13:58:15.452 [11491] <2> write_data: waited for full buffer 4892 times,

    delayed
    >6536 times
    >15:03:05.924 [15682] <2> write_data: waited for full buffer 5365 times,

    delayed
    >7085 times
    >
    >Note: There is a huge grey area as to the number of delays and when a buffer
    >is too large or too small. (It is a moving target because your backups are
    >not all the same, etc...) I have seen some people try and keep delays under
    >1000, I have seen others keep these under 100,000. These numbers here are
    >from one of our servers that has been tuned to a level we are happy with.
    >
    >Lastly, make sure you are not saturating the PCI busses on your servers.
    >This is easy to do on smaller tier Sun system as busses are shared. (IE:
    >Dont put your SCSI controller and your HBA with 3 LTO2 drives attached on
    >the same 256mb/s bus) We use Sun v1280 for most of our backup servers, and
    >I use a dual-port 2gb QLogic HBA in the 66Mhz PCI and make certain that

    no
    >other PCI adapters share that bus. (Also beware what bus the onbaord gige
    >ports are on as well)
    >
    >Hope this helps, I just went thru all this with our servers so it's quasi-fresh
    >in my head.
    >
    >Most important of all though, make sure you test several restores with each
    >buffer setting as well. Remember to record the current buffer settings just
    >in case you have trouble restoring from backups done prior to the buffer
    >tweaking. I have heard stories of shops that made drastic changes to buffer
    >settings only to have restore troubles later on.
    >
    >Hope this helps,
    >CC
    >
    >


    One thing I forgot to add: from your bptm log, look for these entries to
    determine if you have enough buffers (NUMBER_DATA_BUFFERS):
    09:30:54.760 [5504] <2> mpx_read_data: waited for empty buffer 0 times, delayed 0 times
    08:12:20.453 [16453] <2> mpx_read_data: waited for empty buffer 471 times, delayed 647 times

    In the case of no available buffers, each delay results in about 30 ms of
    latency (if my memory serves me correctly), so you may want to be a little
    more granular with this setting.
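
    As a back-of-the-envelope check, those counters can be turned into an
    estimated wait time (a sketch, taking the ~30 ms per delay figure at face
    value; nawk being the "new awk" shipped with Solaris):

    grep "waited for" /usr/openv/netbackup/logs/bptm/log.* |
      nawk '{ for (i = 1; i <= NF; i++) if ($i == "delayed") d += $(i + 1) }
            END { printf("total delays: %d (~%.0f seconds spent waiting)\n", d, d * 0.03) }'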

    -Chuck




  6. Re: Very slow performances (NBU5.1 MP3)


    "Chuck" wrote:
    >
    >"Chuck" wrote:
    >>
    >>"Francois" wrote:
    >>>
    >>>Hi,
    >>>With a good config :
    >>>Master (SUN V240 Sol9, 2x1.28GHz) with 2x LTO-3, network Gbits/sec.
    >>>Media server (same HW, with a EMC CX500, RAID-3 configured), net Gbit/sec.
    >>>
    >>>The disk staging is only 6-7 MBytes/sec !
    >>>In parallel of the disk-staging, I can do a ftp from the Media server

    to
    >>>the Master, and I can have 30-40 MB/s...
    >>>The problem is not the SAN.
    >>>
    >>>/etc/system :
    >>>* Message queues
    >>>set msgsys:msginfo_msgmap=500
    >>>set msgsys:msginfo_msgmax=8192
    >>>set msgsys:msginfo_msgmnb=65536
    >>>set msgsys:msginfo_msgmni=256
    >>>set msgsys:msginfo_msgssz=32
    >>>set msgsys:msginfo_msgtql=500
    >>>set msgsys:msginfo_msgseg=8192
    >>>
    >>>* Semaphores
    >>>set semsys:seminfo_semmap=64
    >>>set semsys:seminfo_semmni=1024
    >>>set semsys:seminfo_semmns=1024
    >>>set semsys:seminfo_semmnu=1024
    >>>set semsys:seminfo_semmsl=300
    >>>set semsys:seminfo_semopm=32
    >>>set semsys:seminfo_semume=64
    >>>
    >>>* Shared memory
    >>>set shmsys:shminfo_shmmax=16777216
    >>>set shmsys:shminfo_shmmin=1
    >>>set shmsys:shminfo_shmmni=230
    >>>set shmsys:shminfo_shmseg=100
    >>>
    >>>NUMBER_DATA_BUFFERS : 4
    >>>NUMBER_DATA_BUFFERS_DISK : 16
    >>>SIZE_DATA_BUFFERS : 65536
    >>>SIZE_DATA_BUFFERS_DISK : 1048576
    >>>
    >>>Where is the bug ?
    >>>
    >>>Many thanks for help

    >>
    >>I doubt the kernel parameters are the culprit, usually jobs fail completely
    >>when these are set to low. (as in the case of message queues, this can

    be
    >>easily checked with ipcs)
    >>
    >>I'm no guru with the backup to disk method (currently we are starting to
    >>test the feasbility of this in our data center), I do know that the buffer
    >>settings can have huge impacts on performance. I would start looking there.
    >>
    >>For example, your NUMBER_DATA_BUFFERS seems low, we run most of our servers
    >>at 32, with a buffer size of 262144. (We use Solaris 8/9 and LTO 1 and

    LTO
    >>2 drives)
    >>
    >>How I locked down the best buffer settings was this:
    >>1) Create 2 test data sets: 1 large 100gb file and 1 100gb collection of
    >>small files.
    >>2) Starting with default number of buffers and size of buffers, run each
    >>backup on an idle system and record results.
    >>3) Rinse and repeat with incremental changes to each setting.
    >>
    >>Time consuming but worth the effort in the long run. We improved performance
    >>by a factor of 5 in some cases. Another setting you might need to look

    at
    >>is NET_BUFER_SZ, which affects the buffers used when reading/writing data
    >>from a network. (I couldn't determine which way and to where your data

    is
    >>traveling based on your OP) Also, make sure these files are in the correct
    >>directory. I have found information via google that listed wrong locations
    >>for these files. They are, and anyone correct me if Im wrong, Im working
    >>from memory here:
    >>/usr/openv/netbackup/NET_BUFFER_SZ
    >>/usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
    >>/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
    >>...
    >>
    >>You can tell for sure what buffer settings were used for a particuliar

    backup
    >>by looking at your bptm log. You can also tell if your buffers are too

    few/too
    >>many or loo large/too small. Look for something along these lines:
    >>13:58:15.452 [11491] <2> write_data: waited for full buffer 4892 times,

    >delayed
    >>6536 times
    >>15:03:05.924 [15682] <2> write_data: waited for full buffer 5365 times,

    >delayed
    >>7085 times
    >>
    >>Note: There is a huge grey area as to the number of delays and when a buffer
    >>is too large or too small. (It is a moving target because your backups

    are
    >>not all the same, etc...) I have seen some people try and keep delays under
    >>1000, I have seen others keep these under 100,000. These numbers here

    are
    >>from one of our servers that has been tuned to a level we are happy with.
    >>
    >>Lastly, make sure you are not saturating the PCI busses on your servers.
    >>This is easy to do on smaller tier Sun system as busses are shared. (IE:
    >>Dont put your SCSI controller and your HBA with 3 LTO2 drives attached

    on
    >>the same 256mb/s bus) We use Sun v1280 for most of our backup servers,

    and
    >>I use a dual-port 2gb QLogic HBA in the 66Mhz PCI and make certain that

    >no
    >>other PCI adapters share that bus. (Also beware what bus the onbaord gige
    >>ports are on as well)
    >>
    >>Hope this helps, I just went thru all this with our servers so it's quasi-fresh
    >>in my head.
    >>
    >>Most important of all though, make sure you test several restores with

    each
    >>buffer setting as well. Remember to record the current buffer settings

    just
    >>in case you have trouble restoring from backups done prior to the buffer
    >>tweaking. I have heard stories of shops that made drastic changes to buffer
    >>settings only to have restore troubles later on.
    >>
    >>Hope this helps,
    >>CC
    >>
    >>

    >
    >One thing I forgot to add: From your bptm log, look for these entries to
    >determine if you have enough buffers (NUMBER_DATA_BUFFERS):
    >09:30:54.760 [5504] <2> mpx_read_data: waited for empty buffer 0 times,

    delayed
    >0 times
    >08:12:20.453 [16453] <2> mpx_read_data: waited for empty buffer 471 times,
    >delayed 647 times
    >
    >In the case of no available buffers, each delay resutls in 30ms latency.
    >(If my memory serves me correctly) So you may want to be a little more granular
    >with this setting.
    >
    >-Chuck
    >
    >
    >

    We have made these modifications:
    NUMBER_DATA_BUFFERS : 48
    SIZE_DATA_BUFFERS : 262144
    Performance has grown to a maximum of 32 MBytes/s.
    I'm going to check NET_BUFFER_SZ as well, although according to
    http://seer.support.veritas.com/docs/183702.htm
    that parameter only applies to versions <= 4.5.

    Thank you for your help,
    Francois

  7. Very slow performances (NBU5.1 MP2)


    Hi,
    I have the following configuration:
    Master (IBM p570 series, AIX 5.2, 2x 1.65 GHz) with 10x LTO-2 via SAN
    Media server (IBM p570 series, AIX 5.2, 8x 1.65 GHz) with SAP 4.7
    (with an HDS TagmaStore, RAID-5 configured)

    From the master I can get 30-60 MB/s making file system backups, and from
    the media server too, without problem. The problem is that when I use the
    SAP module for NBU the performance drops to 15-18 MB/s.

    I am using the following buffers:

    NUMBER_DATA_BUFFERS : 64
    SIZE_DATA_BUFFERS : 262144

    Many thanks for any help...

  8. Re: Very slow performances (NBU5.1 MP2)


    "Adrian" wrote:
    >
    >Hi,
    >I have the follow configuration :
    >Master (IBM P570 series AIX 5.2 2x1,65Ghz) with 10 xLTO-2 via SAN
    >Media server (IBM P570 series AIX 5.2 8x1,65Ghz) with SAP 4.7
    > with a HDS Tagma Store, RAID-5 configured)
    >
    >the Master, and I can have 30-60 MBps making file system backup... and the
    >media server too without problem. The problem is when I use the SAP module
    >for NBU the performnace drop to 15-18 MBps.
    >
    >I am using the follow Buffers:
    >
    >NUMBER_DATA_BUFFERS : 64
    >SIZE_DATA_BUFFERS : 262144
    >
    >Many thanks for help...



    Set NUMBER_DATA_BUFFERS to 128, then try the backup again.
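
    On the media server that is a one-line change (standard touch-file location
    assumed). Note that 128 buffers of 262144 bytes is roughly 32 MB of shared
    memory per drive, so make sure the host can spare it:

    echo 128 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
    # Takes effect on the next backup job; no restart should be needed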
