Search Results

Search found 8687 results on 348 pages for 'per'.

Page 6 of 348

  • Free space on SSD (over provisioning) per disk or per partition?

    - by Horst Walter
    It is recommended to keep some percentage of an SSD free for relocation (see: Is free space required on an SSD for performance?). However, is this rule meant per partition or per disk (the whole SSD)? If I want to keep 20% free for performance reasons, is it acceptable for one partition to be 95% full while another is almost empty, as long as the overall free disk space is still 20%? Or does each partition have to fulfill the 20% free-space rule on its own?
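
    A quick way to check the whole-disk figure the question asks about, as a minimal sketch (assuming the SSD is /dev/sda, and noting that filesystem free space only helps the controller if TRIM/discard is in use):

      # Sum size and free space across all filesystems on /dev/sda
      # and report the whole-disk free percentage.
      df -B1 --output=source,size,avail | awk '/^\/dev\/sda/ { size += $2; avail += $3 }
          END { printf "whole-disk free: %.1f%%\n", 100 * avail / size }'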

    Read the article

  • Per connection bandwidth limit

    - by Kyr
    Apparently, our server box running Windows Server 2008 R2 has a per-connection bandwidth limit of 0.2 MB/s: while one TCP connection can pull at most 0.2 MB/s, 60 parallel connections can pull 12 MB/s. We first noticed this when trying to check out a large SVN repository from this server. I used a simple Java application to test it, transferring data from the server to a workstation with a variable number of threads (one connection per thread). The server part of the application simply writes a 1 MB memory buffer to the socket 100 times, so there is no disk involvement. Each connection topped out at 0.2 MB/s, and the per-connection limit was the same for a single connection as for 60 parallel ones. The problem is that I have no idea where this limit comes from. I have very little experience administering Windows Server, so I was mostly trying to find something by googling. I have checked the following: Local Computer Policy > QoS Packet Scheduler > Limit reservable bandwidth is Not configured; in the Group Policy Management Console we have two GPOs, but neither has any Policy-based QoS defined; and there isn't any bandwidth-limiter program installed, as far as I can tell. We're using the standard Windows Firewall. I can update this question with any additional information if needed.
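
    Not an answer, but one more setting that may be worth ruling out on the server, sketched below with stock commands (a disabled TCP receive-window auto-tuning level can cap per-connection throughput while parallel connections still scale):

      rem Show the current global TCP settings, including the auto-tuning level
      netsh interface tcp show global
      rem If "Receive Window Auto-Tuning Level" shows as disabled, re-enabling it
      rem is one experiment (revert afterwards with autotuninglevel=disabled):
      netsh interface tcp set global autotuninglevel=normal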

    Read the article

  • Requests per second slower when using nginx for load balancing

    - by Ed Eliot
    I've set up nginx as a load balancer that reverse proxies requests to 2 Apache servers. I've benchmarked the setup with ab and am getting approx 35 requests per second with requests distributed between the 2 backend servers (not using ip_hash). What is confusing me is that if I query either of the backend servers directly via ab, I get around 50 requests per second. I've experimented with a number of different values in ab, the most common being 1000 requests with 100 concurrent connections. Any idea why traffic distributed across 2 servers would result in fewer requests per second than hitting either directly? Additional info: I've experimented with worker_processes values between 1 and 8, worker_connections between 1024 and 8092, and have also tried keepalive 0 and 65. My main conf currently looks like this: user www-data; worker_processes 1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; worker_rlimit_nofile 8192; events { worker_connections 2048; use epoll; } http { include /etc/nginx/mime.types; sendfile on; keepalive_timeout 0; tcp_nodelay on; gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } I've got one virtual host (in sites-available) that proxies everything under / to the 2 backends across a local network.
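
    For reference, a minimal sketch of the proxy side with upstream keep-alive enabled (backend addresses are placeholders, and the keepalive directive inside upstream needs a reasonably recent nginx):

      upstream backends {
          server 10.0.0.1:80;              # placeholder backend addresses
          server 10.0.0.2:80;
          keepalive 16;                    # pool of idle connections to the backends
      }
      server {
          listen 80;
          location / {
              proxy_pass http://backends;
              proxy_http_version 1.1;          # required for upstream keep-alive
              proxy_set_header Connection "";  # don't forward "Connection: close"
          }
      }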

    Read the article

  • Amazon EC2 c1.medium: Apache requests per second terrible

    - by TheDayIsDone
    EDITED -- test now running from localhost to rule out the network. I have a c1.medium using EBS. When I run an Apache benchmark from localhost that just prints a "hello" for the test -- no database hits -- it's very slow. I can repeat this test many times with the same results. Any thoughts? Thanks in advance.

      ab -n 1000 -c 100 http://localhost/home/test/

      Benchmarking localhost (be patient)
      Completed 100 requests
      Completed 200 requests
      Completed 300 requests
      Completed 400 requests
      Completed 500 requests
      Completed 600 requests
      Completed 700 requests
      Completed 800 requests
      Completed 900 requests
      Completed 1000 requests
      Finished 1000 requests

      Server Software:        Apache/2.2.23
      Server Hostname:        localhost
      Server Port:            80
      Document Path:          /home/test/
      Document Length:        5 bytes
      Concurrency Level:      100
      Time taken for tests:   25.300 seconds
      Complete requests:      1000
      Failed requests:        0
      Write errors:           0
      Total transferred:      816000 bytes
      HTML transferred:       5000 bytes
      Requests per second:    39.53 [#/sec] (mean)
      Time per request:       2530.037 [ms] (mean)
      Time per request:       25.300 [ms] (mean, across all concurrent requests)
      Transfer rate:          31.50 [Kbytes/sec] received

      Connection Times (ms)
                    min  mean[+/-sd] median   max
      Connect:        0    7   21.0      0      73
      Processing:    81 2489  665.7   2500    4057
      Waiting:       80 2443  654.0   2445    4057
      Total:         85 2496  653.5   2500    4057

      Percentage of the requests served within a certain time (ms)
        50%   2500
        66%   2651
        75%   2842
        80%   2932
        90%   3301
        95%   3506
        98%   3762
        99%   3838
       100%   4057 (longest request)
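
    One hedged comparison that might help narrow things down (the flag values below are just examples): re-run the test with HTTP keep-alive on and lower concurrency, to see whether Apache's prefork/MaxClients side is what's queueing requests.

      ab -k -n 1000 -c 10 http://localhost/home/test/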

    Read the article

  • Video capture tool: capture n frames per second

    - by Keikoku
    Is there a program that lets me pass in a video and capture a frame every n frames? For example, I might want to export screenshots at 24 frames per second from point A to point B, so the program would export 24 images per second. The number of frames captured should be configurable (maybe I only want 1 fps, maybe 10 fps, maybe 24, 30, 60, ...). Preferably, it would be part of a larger program that supports various video formats; at the very least, it should support the more common formats out there.
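
    A command-line sketch of roughly this with ffmpeg (file names and rates are placeholders):

      ffmpeg -i input.mp4 -vf fps=1  frame_%05d.png    # 1 image per second
      ffmpeg -i input.mp4 -vf fps=24 frame_%05d.png    # 24 images per second
      # restrict to a point-A-to-point-B range with -ss (start) and -t (duration):
      ffmpeg -ss 00:01:00 -t 00:00:30 -i input.mp4 -vf fps=10 frame_%05d.png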

    Read the article

  • Import PDF in Inkscape without per character spacing

    - by Morinar
    I have a PDF form I imported into Inkscape. It did a pretty great job of creating an SVG from it; however, it appears to have given per-character x coordinates to each tspan in every text block. I'm not sure, but I think this is making the FOP rendering our software does ridiculously slow (running it through other renderers, like IBEX, it seems fine). I'd like to import it without the per-character positions, but I can't seem to find any PDF import options at all. Do such options exist? Or is there perhaps some other, better freeware application I could use to generate the SVG from the PDF, and then use Inkscape for the adjustments from there? Thanks in advance.
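
    Two hedged alternatives for generating the SVG outside Inkscape's importer, which could then be adjusted in Inkscape (file names are placeholders; whether they avoid the per-character x offsets depends on the PDF):

      pdftocairo -svg form.pdf form.svg    # from poppler-utils
      pdf2svg form.pdf form.svg            # standalone pdf2svg tool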

    Read the article

  • Okular (on Ubuntu 9.10) prints multiple pages per sheet (n-up) very small

    - by dgleich
    I'm trying to print a set of beamer slides with multiple slides per page (4-up or 6-up). When I select 4 or 6 pages per sheet in the Okular print dialog, the pages print quite small (tiny, even -- about 1.75" by 1.25") and leave significant white space on the page. I can get around this behavior by using the pdfnup utility (in the pdfjam package), which correctly generates a 4- or 6-up PDF file, but it's annoying to generate a second PDF when I should be able to accomplish this from the print dialog. Details: Ubuntu 9.10 (Karmic), 64-bit, color PostScript printer.
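
    For reference, the pdfnup workaround mentioned above looks roughly like this (file names are placeholders):

      pdfnup --nup 2x2 --outfile slides-4up.pdf slides.pdf    # 4 slides per sheet
      pdfnup --nup 2x3 --outfile slides-6up.pdf slides.pdf    # 6 slides per sheet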

    Read the article

  • Gmail - Ways to have more than 20 items per page in the search results

    - by Andrei
    Gmail has the notorious limitation of 20 results per page when searching your mail. Is there an extension (Chrome preferred; Firefox, etc.) that can fix this, i.e. allow more than 20 items per page? Based on my experience this should be entirely doable: the extension could move across pages in the background and then collect the results. Is there an extension that already does this? I'm asking because I couldn't find one.

    Read the article

  • Throttling bandwidth on a per group basis

    - by Robreylen
    I am wondering if it is possible to create a bandwidth shaping/throttling script that shapes traffic based on user group. That is, if user1 and user2 are in group group1, they get 1mb/s download and 1mb/s upload, while if user3 and user4 are in group2, they get 256kb/s download and 256kb/s upload. I've read a bit about this and found some iptables and tc implementations of a per-user solution, but I have not seen anything for a user group. Hopefully it can be implemented simply in the form of custom iptables rules and a script using tc or the like. Here is a script I was looking at that does a system-wide throttle: http://atmail.com/kb/2009/throttling-bandwidth/ I assume per-group throttling is possible since per-user throttling is. Thanks for any info you can provide.
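
    A hedged sketch of the upload side only (the iptables owner match only sees locally generated traffic, so shaping downloads per group needs a different approach); the interface name is an assumption and the rates are taken loosely from the question:

      tc qdisc add dev eth0 root handle 1: htb
      tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit
      tc class add dev eth0 parent 1: classid 1:20 htb rate 256kbit
      # mark outgoing packets by owning group, then map the marks to classes
      iptables -t mangle -A OUTPUT -m owner --gid-owner group1 -j MARK --set-mark 10
      iptables -t mangle -A OUTPUT -m owner --gid-owner group2 -j MARK --set-mark 20
      tc filter add dev eth0 parent 1: protocol ip prio 1 handle 10 fw flowid 1:10
      tc filter add dev eth0 parent 1: protocol ip prio 1 handle 20 fw flowid 1:20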

    Read the article

  • Suhosin per-URL exceptions?

    - by STATUS_ACCESS_DENIED
    I am using SimpleID as my OpenID provider, and it turns out that if I log on via pages like those on StackExchange, one of the parameters of the GET request gets dropped by Suhosin. The name of the variable is s, and I presume it's responsible for the "return to URL" part after login. None of this is a problem as long as I am already logged into SimpleID beforehand. However, as soon as the site on which I want to log in via OpenID lands me at the SimpleID login screen, the redirect back to the site I came from no longer works because of the dropped variable. Is there a way to configure, on a per-virtual-host or per-URL basis, an exception to the maximum GET parameter length, so that the parameter s can exceed the (globally) set limit? I'm using Apache 2.2, so I was wondering whether a mechanism similar to setting PHP ini variables from within the server configuration exists for Suhosin.
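
    One hedged sketch of a per-URL override, assuming the dropped parameter is tripping suhosin.get.max_value_length, that this setting is honoured per directory, and that mod_php is in use (php_admin_value does not apply under CGI/FastCGI); the path and limit are placeholders:

      # inside the relevant <VirtualHost> block
      <Location /simpleid/>
          php_admin_value suhosin.get.max_value_length 2048
      </Location>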

    Read the article

  • 1K incoming http post requests per second, each with a 10-50K file

    - by Blankman
    I'm trying to figure out what kind of server setup I will need to support 1K HTTP POST requests per second, where each post contains an XML file between 5 and 50K (average of 25 kilobytes). Even if I get a 100 Mb/s connection for my dedicated box (they usually give 10 Mb/s, but you can upgrade), by my calculations that is about 12K kb/s, which means about 480 25-kb files per second. So this means I need around 3 servers, each with a 100 Mb/s connection. Would a single server running HAProxy be able to distribute the requests to the other servers, or do I need something that can handle more than 100 Mb/s to proxy things out to the other servers? If my math is off I'd appreciate any corrections you may have.
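
    A back-of-envelope check of those numbers, as a sketch:

      # 1000 req/s x 25 KB average  = ~25,000 KB/s ~= 200 Mbit/s of payload (plus HTTP overhead)
      # one 100 Mbit/s uplink       = ~12,500 KB/s ~= 500 files of 25 KB per second
      # so roughly 2-3 such uplinks for the payload alone. Note that a proxy such as
      # HAProxy relays every byte, so its own uplink must carry the full aggregate
      # unless traffic is split before it reaches the proxy (e.g. DNS round-robin).
      echo "$(( 1000 * 25 )) KB/s aggregate payload"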

    Read the article

  • vsftpd per group configuration

    - by roqs
    I want to configure vsftpd in a per-group fashion instead of per-user. Is that possible? Suppose I have two groups, groupA and groupB; my goal is: users in groupA have permission (rwx) to all files in directory dir1, users in groupB have permission (rwx) to all files in directory dir2, and all users of the system have permission (rwx) to all files in directory dir3. For example:

      ftp@test:/home/ftp# ls -l
      drwxrwxr-x 16 root groupA 4096 Jun 3 10:45 dir1
      drwxrwxr-x 2 root groupB 4096 Jun 3 10:56 dir2
      drwxrwxr-x 8 root users 4096 Jun 3 11:01 dir3

    How can I do that with vsftpd?
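
    A hedged sketch of how this is usually approached: for local users, vsftpd largely defers to Unix ownership and permissions, so the per-group part lives on the filesystem. Directory and group names are the ones from the question; the config values are typical rather than definitive.

      chgrp groupA /home/ftp/dir1 && chmod 2775 /home/ftp/dir1   # setgid keeps group ownership
      chgrp groupB /home/ftp/dir2 && chmod 2775 /home/ftp/dir2
      chgrp users  /home/ftp/dir3 && chmod 2775 /home/ftp/dir3
      # and in /etc/vsftpd.conf, for local users:
      #   local_enable=YES
      #   write_enable=YES
      #   local_umask=002   # so new files stay group-writable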

    Read the article

  • How do you apply proxy settings per computer instead of per user?

    - by Oliver Salzburg
    So far, I've used a user Group Policy Object utilizing Internet Explorer Maintenance to set a proxy for the user in IE. We have now deployed the Enterprise Client (EC) starter group policy to our domain, and this policy affects that behavior. The EC group policy enables the policy Make proxy settings per-machine (rather than per-user), which describes itself as: "This policy is intended to ensure that proxy settings apply uniformly to the same computer and do not vary from user to user." Great! This seems to be an improvement over my previous setup. "If you enable this policy, users cannot set user-specific proxy settings. They must use the zones created for all users of the computer." What zones, and where do I configure the proxy settings for them? I assumed the policy would simply take the user settings and apply them, but that's not what's happening: now no proxy server is set at all, so my previous settings obviously no longer have any effect. So far, I've only come up with solutions that involve direct manipulation of the Windows registry, which is fine, I guess, but the way the proxy is configured for users makes it look as if there should be a higher-level approach.
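
    For what it's worth, a registry-level sketch of the per-machine values involved (the proxy address is a placeholder): with the per-machine policy enabled, IE should read these from HKLM rather than HKCU, and the same values can be deployed via Group Policy Preferences instead of by hand.

      reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyEnable /t REG_DWORD /d 1 /f
      reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyServer /t REG_SZ /d "proxy.example.com:8080" /f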

    Read the article

  • rsync per-site configuration file?

    - by Scott
    I know how to configure a per-site entry for ssh, but is there any kind of client configuration for rsync that allows per-site configuration options and aliases, or similar shortcuts to .ssh/config? I'm curious because I have a minimal ssh server installed on my Android phone, and I have a minimal rsync tool on it as well. I'm getting tired of having to log in as root on the phone and symlink both tools to the standard places the Android OS looks for executables, as the ssh server is bare-bones and ships a typical busybox-style multi-link binary for the basic Unix commands (which does not include rsync). I end up having to include --rsync-path=/path/to/rsync/android/files/rsync every time I want to rsync files on my phone, but this path is always the same. I've gotten around it in the meantime with a glob approach in a shell script wrapper, but this sometimes limits the customization I can do with the rsync call. I'm just wondering if there is anything similar to the .ssh/config file where I can create an alias for my phone (e.g. 'android') so that specifying rsync android:/mnt/sdcard will automatically assume --rsync-path=/blah/blah/blah --no-g --no-p --no-t etc. Tre`
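
    As far as I know there is no rsync-side equivalent of .ssh/config, but a hedged sketch of a common workaround pairs an ssh host alias with a small shell wrapper (the host details are assumptions; the --rsync-path is the one from the question):

      # ~/.ssh/config
      #   Host android
      #       HostName 192.168.1.50    # assumed address of the phone
      #       User root

      # shell wrapper, e.g. in ~/.bashrc:
      rsync_android() {
          rsync --rsync-path=/path/to/rsync/android/files/rsync --no-g --no-p --no-t "$@"
      }
      # usage: rsync_android -av android:/mnt/sdcard/ ./sdcard-backup/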

    Read the article

  • How many MSDN accounts per MSDN subscription?

    - by ladenedge
    We recently bought a handful of MSDN subscriptions. Should we have received one MSDN account per subscription? Or must all of our subscriptions be managed through a single account? (In case you're curious about why we can't just examine whatever we received from Microsoft, the problem is that engineering and IT have a very antagonistic relationship around here.)

    Read the article

  • Keyboard layout per window

    - by Giorgi
    Hello. As you know, when you change the keyboard layout with Alt-Shift, it affects all windows of the current application. Is there any program that changes this behavior to per-window? OS: Windows 7

    Read the article

  • Per-application volume control in Windows XP?

    - by Chris Gow
    Is there a way, either built-in (I don't think there is) or via a third party, to set the volume on a per-application basis in Windows XP? For example, right now I am listening to some music fairly loudly and I would like all of the alerts I'm getting from other apps (email, Twitter updates, IMs) to be quieter.

    Read the article

  • Getting a per thread cpu stats

    - by viraptor
    I'm interested in the current CPU usage - specifically cpu% and wait% - for each thread in a specific application. Is it possible to get that information from somewhere? I know that top can split information per real thread (the ones with a PID), but it doesn't show the system/user/wait CPU usage split for each of them. I would also like some way to log that info. Do you know of any apps (or APIs) that can do that?
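
    A few hedged options, sketched with a placeholder PID of 1234:

      top -H -p 1234                  # one row per thread, but %CPU only
      pidstat -t -p 1234 1            # sysstat: per-thread %usr/%system (newer versions add %wait)
      cat /proc/1234/task/*/stat      # raw per-thread utime/stime counters, handy for custom logging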

    Read the article
