Search Results

Search found 7 results on 1 page for 'httperf'.

Page 1/1

  • running autobench (httperf)

    - by Matthew
    So I ran apt-get install httperf on my system and I can now run httperf. But how can I run autobench? I downloaded the file and unpacked it, and if I go into the directory and run autobench it says "-bash: command not found". I think it is a Perl script, but if I run perl autobench, it says:

      root@example:/tmp/autobench-2.1.2# perl autobench
      Autobench configuration file not found - installing new copy in /root/.autobench.conf
      cp: cannot stat `/etc/autobench.conf': No such file or directory
      Installation complete - please rerun autobench

    Even if I run it again it says the same thing. (One possible fix is sketched after this entry.)

    Read the article
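
    One possible fix, as a minimal sketch: autobench aborts because it wants to seed ~/.autobench.conf from /etc/autobench.conf, which does not exist yet. Assuming the unpacked autobench-2.1.2 tarball ships a sample autobench.conf and a standard Makefile (both assumptions; check with ls), either of the following should get past the error:

      cd /tmp/autobench-2.1.2

      # Option 1: install the scripts, man pages and the global config
      # via the bundled Makefile.
      sudo make install

      # Option 2: just provide the global config autobench is looking for.
      sudo cp autobench.conf /etc/autobench.conf
      perl autobench        # rerun; it should now copy the config and proceed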

  • How do I Benchmark RESTful Service with Variable Parameters?

    - by Eli
    I'm currently working on benchmarking a RESTful service I've made, and part of that is making sure it runs in a reasonable amount of time for a large array of parameters. For example, say I have a RESTful API of the form some_site.com/item?item_id=y. To be sure my service is working as fast as I'd like, I want to try out many values for y one by one, preferably coming from a text file. I can't figure out any way of doing this in ab or httperf. I'm open to using a different benchmarking program if I have to, but would prefer something simple and light. What I want to do seems pretty standard, so I'm guessing there must already be a program that lets me do it, but an hour or so of googling hasn't gotten me an answer. Ideas? (One approach is sketched after this entry.)

    Read the article
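
    The variable-parameter run is doable without a dedicated tool. Below is a minimal sketch using curl in a shell loop; ids.txt, the host name and the output file are placeholders, not from the thread. (httperf's --wlog option, which reads request URIs from a file, may also fit if your build has it.)

      # ids.txt: one item_id per line (assumed file name).
      # Time a GET for each id and record the total time in seconds.
      while read -r y; do
          t=$(curl -s -o /dev/null -w '%{time_total}' \
              "http://some_site.com/item?item_id=${y}")
          echo "${y} ${t}"
      done < ids.txt > timings.txt

      # Quick summary: average response time across all ids.
      awk '{ sum += $2 } END { if (NR) print "avg:", sum / NR, "s" }' timings.txt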

  • Applications getting killed automatically

    - by nebi
    I am running an httperf client on my machine and after a few seconds it gets killed. The command is:

      httperf --hog --client=0/1 --server=39.0.0.2 --port=80 --uri=/50kb --rate=20000 --send-buffer=4096 --recv-buffer=16384 --num-conns=6000000 --num-calls=1

    I have run this test a number of times before and never hit this error; I have only been seeing it for the last two days. My Ubuntu version is 10.04 and the httperf version is 0.9.0. dmesg shows:

      [ 2997.180620] Out of memory: kill process 7977 (apache2) score 70532 or a child
      [ 2997.180632] Killed process 7977 (apache2)
      [ 2997.184837] Out of memory: kill process 7971 (rsyslogd) score 8702 or a child
      [ 2997.184844] Killed process 7971 (rsyslogd)
      [ 2997.188823] Out of memory: kill process 7978 (apache2) score 1354 or a child
      [ 2997.188829] Killed process 7978 (apache2)
      [ 2997.192817] Out of memory: kill process 7973 (atd) score 561 or a child
      [ 2997.192822] Killed process 7973 (atd)
      [ 2997.196805] Out of memory: kill process 8102 (httperf) score 471 or a child
      [ 2997.196811] Killed process 8102 (httperf)

    Output of the free command:

                     total       used       free     shared    buffers     cached
      Mem:         3862768     163000    3699768          0       2384      13068
      -/+ buffers/cache:       147548    3715220
      Swap:        3905528          0    3905528

    (A way to track down the memory consumer is sketched after this entry.)

    Read the article
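
    A sketch for tracking down the memory consumer before the OOM killer fires (the lower --rate below is an arbitrary starting point, not a recommendation from the thread). Note that with --send-buffer=4096 and --recv-buffer=16384 every outstanding connection needs roughly 20 KB of buffer space alone, so if the server cannot keep up with 20,000 requests/s the connection backlog can plausibly eat through the ~3.6 GB of free RAM shown above.

      # Terminal 1: watch free memory and the biggest consumers every 2 s.
      watch -n 2 'free -m; ps -eo rss,comm --sort=-rss | head -n 5'

      # Terminal 2: rerun the same test at a lower rate and ramp it up
      # until memory starts climbing, to find a sustainable rate.
      httperf --hog --client=0/1 --server=39.0.0.2 --port=80 --uri=/50kb \
              --rate=2000 --send-buffer=4096 --recv-buffer=16384 \
              --num-conns=600000 --num-calls=1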

  • autobench in ubuntu 8.10

    - by mamathahl
    Hi, I'm using Ubuntu 8.10 and I want to do benchmarking with autobench. I was able to install httperf with sudo apt-get install httperf, and I assumed I could install autobench the same way with apt-get, but the package was not found. Can anybody suggest what I should do to make the autobench command work on Ubuntu? Any help in this regard will be appreciated. Thanks in advance. (A build-from-source sketch follows this entry.)

    Read the article
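
    Since Ubuntu 8.10 has no autobench package, building from the upstream tarball is the usual route. A sketch, assuming the 2.1.2 tarball mentioned elsewhere on this page and a standard make install target; the download URL is an assumption, so verify it against the autobench homepage:

      # httperf is already installed; gnuplot is only needed for the
      # graphing helper (bench2graph) that ships with autobench.
      sudo apt-get install gnuplot

      wget http://www.xenoclast.org/autobench/downloads/autobench-2.1.2.tar.gz
      tar xzf autobench-2.1.2.tar.gz
      cd autobench-2.1.2
      make
      sudo make install      # installs autobench and /etc/autobench.conf

      autobench --help       # should now be on your PATH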

  • Specifying --host1 as localhost with port 8983 in autobench

    - by mamatha
    I am using autobench for benchmarking on Ubuntu 8.10. This is the command I ran:

      autobench --single_host --host1 localhost --uri1 /solr/admin --low_rate 20 --high_rate 200 --rate_step 20 --num_call 10 --num_conn 5000 --timeout 5 --file bench1.tsv

    It takes the default port 80, and the numbers of replies and requests are as shown below:

      Errors: total 5000 client-timo 0 socket-timo 0 connrefused 5000 connreset 0
      Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
      Zero replies received, test invalid: rate 20
      httperf --timeout=5 --client=0/1 --server=localhost --port=80 --uri=/solr/admin --rate=40 --send-buffer=4096 --recv-buffer=16384 --num-conns=5000 --num-calls=10
      Maximum connect burst length: 4
      Total: connections 5000 requests 0 replies 0 test-duration 124.976 s

    But I want the port to be 8983. In all the examples I have seen in the autobench tutorial, --host1 is a website (such as www.test.com). Can anyone suggest how to use localhost with port 8983? Thanks in advance. (One way to pass the port is sketched after this entry.)

    Read the article
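
    autobench passes a port through to httperf, so the usual fix is to give it explicitly instead of relying on the default of 80; the connrefused 5000 above is consistent with nothing listening on port 80 while Solr sits on 8983. A sketch, assuming the installed autobench supports a --port1 option (check autobench --help or the man page); the same value can normally also be set as port1 in ~/.autobench.conf:

      autobench --single_host --host1 localhost --port1 8983 \
                --uri1 /solr/admin \
                --low_rate 20 --high_rate 200 --rate_step 20 \
                --num_call 10 --num_conn 5000 --timeout 5 \
                --file bench1.tsv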

  • MongoDB on 128mb 32-bit VPS (plus Tornado and Redis)

    - by apito
    I am curious about how MongoDB will perform on a limited VPS. Specifically, I'll deploy this configuration on a 32-bit Ubuntu 9.04 server with 128 MB of memory (UPDATE: now I'm considering 360 MB too): nginx and Redis; three instances of Tornado apps (one is for a mobile site; a limited app, not my primary audience), which has around 8 collections and is a social webapp for my community; and MongoDB. Everything besides MongoDB seems to have a small footprint; memory-mapping-wise, I don't know how MongoDB will behave. I know it's a bit of a stretch to use this kind of config on a tiny VPS, but that's what I can afford for now. I expect to have... hmm... maybe ~50 15rps. I did my homework with a lot of frontend optimizations, and YSlow gives it grade A 91 (ruleset V2). :-) Is anyone willing to share experiences, e.g. how big the data set was when Mongo hit the ceiling, performance when Mongo does a lot of disk I/O, etc.? Thanks. UPDATE: this is my pet project. I'll get back to you when I next have spare time to do some httperf runs in a VirtualBox VM with the exact spec. Suggestions on how to do the stress testing are welcome; I'm new to this kind of stuff. (One stress-test sketch follows this entry.)

    Read the article
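
    For the stress-testing part of the question, a small sketch: watch how much memory mongod actually maps and keeps resident while driving load at one of the Tornado apps with httperf. The host, port, URI and rates below are placeholders.

      # How much memory mongod maps and keeps resident (values in MB).
      mongo --eval 'printjson(db.serverStatus().mem)'

      # Drive a modest load at one Tornado app and ramp the rate.
      for rate in 5 10 25 50; do
          httperf --hog --server=example.com --port=8888 --uri=/ \
                  --rate=$rate --num-conns=500 --num-calls=1
      done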

  • PHP-FPM Pool, Child Processes and Memory Consumption

    - by Jhilke Dai
    In my PHP-FPM configuration I have 3 pools. Pool 1 is configured as:

      [www1]
      user = www
      group = www
      listen = /tmp/php-fpm1.sock
      listen.backlog = -1
      listen.owner = www
      listen.group = www
      listen.mode = 0666
      pm = dynamic
      pm.max_children = 40
      pm.start_servers = 6
      pm.min_spare_servers = 6
      pm.max_spare_servers = 12
      pm.max_requests = 250
      slowlog = /var/log/php/$pool.log.slow
      request_slowlog_timeout = 5s
      request_terminate_timeout = 120s
      rlimit_files = 131072

    Pools 2 and 3 ([www2] and [www3]) are identical except that they listen on /tmp/php-fpm2.sock and /tmp/php-fpm3.sock respectively. I calculated pm.max_children according to some example calculations on the web, like 40 x 40 MB = 1600 MB. I have set aside 4 GB of RAM for PHP; according to those calculations that is 40 child processes per socket, and I have a total of 3 sockets in my Nginx and FPM configuration. My doubt is about the amount of memory consumed by those child processes. I tried to create high load on the server via httperf --hog and siege, but I could not work out the exact memory usage of all the PHP processes (other processes like MySQL and Nginx were also running), and all the sockets were in use. So I am looking for guidance from anyone who has done this before or knows exactly how pm.max_children works in PHP. Since I have 3 pools/sockets with 40 child processes each, does that add up to 3 x 40 x 40 MB of memory usage? Or is it just 40 max child processes sharing the 3 sockets (so the total memory usage is just 40 x 40 MB)? (See the sketch after this entry for one way to measure it.)

    Read the article
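
    On the arithmetic in the question: pm.max_children is a per-pool limit, so the three pools do not share one set of 40 workers; the worst case is 3 x 40 = 120 children, i.e. roughly 3 x 40 x 40 MB = 4800 MB if every child really uses 40 MB. A sketch for measuring what the children actually use while httperf or siege is running (the process name php-fpm is an assumption; some systems use php5-fpm, and RSS overstates the total somewhat because children share memory with the master):

      # Total resident memory of all php-fpm workers, in MB.
      ps --no-headers -o rss -C php-fpm | \
          awk '{ sum += $1 } END { printf "%.0f MB over %d processes\n", sum/1024, NR }'

      # How many children each pool has spawned (the pool name shows up
      # in the process title on most builds).
      ps -C php-fpm -o args= | sort | uniq -c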
