Search Results

Search found 4681 results on 188 pages for 'handling'.


  • How do MaxSpareServers work in Apache?

    - by John Hunt
    I've scoured the web but I can't find out how MaxSpareServers works in Apache's prefork MPM. The MaxSpareServers directive sets the desired maximum number of idle child server processes. An idle process is one which is not handling a request. If there are more than MaxSpareServers idle, then the parent process will kill off the excess processes. Great, but what causes a spare server to be created? More importantly, when does a spare server go away? I understand that MinSpareServers processes are created gradually after the server is started. How does MaxSpareServers relate to MaxClients? Basically I'm at a bit of a loss on how best to configure Apache; there's a lot of documentation out there, but it isn't that clear. Thanks, John.
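
    For reference, a typical prefork MPM block looks something like the sketch below (the numbers are illustrative, not a recommendation). Roughly: Apache starts StartServers children, keeps between MinSpareServers and MaxSpareServers idle children on hand, forks more as load rises (never exceeding MaxClients), and periodically kills idle children above the MaxSpareServers ceiling.

        <IfModule mpm_prefork_module>
            StartServers          5     # children created at startup
            MinSpareServers       5     # keep at least this many idle children around
            MaxSpareServers      10     # kill off idle children above this count
            MaxClients          150     # hard cap on simultaneous children (= concurrent requests)
            MaxRequestsPerChild   0     # recycle a child after N requests (0 = never)
        </IfModule>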


  • DHCP for Multiple Subnets

    - by TheD
    So this is the current setup - essentially I would like to get my DHCP server serving DHCP requests for two separate subnets. A Netgear DG834G acting as a modem is connected to a SonicWall Pro 2040: X0 - LAN - 192.168.1.0/24; X1 - WAN - <WAN-IP>; X2 - WLAN - 192.168.10.0/24. At the moment, I have a 2008 R2 server with DHCP installed, with an IP address in the 192.168.1.0/24 range, handling DHCP fine for this subnet. The SonicWall is configured correctly - anything connected to the WLAN has Full Allow to anything in the LAN, and vice versa - but it will not lease an IP from my server. I've also added another IP address to the server, so the physical NIC now has two IPs, 192.168.1.2 and 192.168.10.2, with a DHCP scope configured for each. Still no luck! Any ideas? Thanks!


  • Is FreeBSD more suitable than CentOS for firing 40k concurrent connections (for Jmeter)?

    - by blacklotus
    Hi, I am trying to run JMeter to simulate 40k concurrent users and stress test a particular system. Putting aside the possibility that JMeter may not be able to push such a high number (although I have read that it is at least possible to handle 10k concurrent threads on a very powerful machine), is FreeBSD a more suitable OS than CentOS for my JMeter machine when handling 40k (or as many as possible) concurrent outbound connections? The reason for asking is that I have found articles on tuning and optimizing FreeBSD for maximum outbound connections, but have had little luck finding the same for CentOS. It makes me wonder whether, for some specific reason, people don't use CentOS for such a high number of outbound connections. Personally, however, I am more familiar with CentOS and would like to stick with it if possible. Any input is greatly appreciated!
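
    For comparison, the Linux-side tuning people usually apply to a load-generator box looks roughly like the sketch below; the values are illustrative, the "jmeter" user name is an assumption, and the exact limits needed depend on the test.

        # /etc/sysctl.conf - headroom for many outbound connections
        net.ipv4.ip_local_port_range = 1024 65535   # widen the ephemeral port range
        net.ipv4.tcp_tw_reuse = 1                    # allow reuse of TIME_WAIT sockets for new outbound connections
        fs.file-max = 200000                         # raise the system-wide file descriptor ceiling

        # /etc/security/limits.conf - per-process descriptor limit for the user running JMeter
        jmeter  soft  nofile  65535
        jmeter  hard  nofile  65535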


  • Apache 410 Gone instructions not working with mod_alias nor mod_rewrite

    - by Peter Boughton
    Apache 2.2 seems to be ignoring instructions to return a 410 status. This happens both for mod_alias's Redirect (using 410 or gone) and for mod_rewrite's RewriteRule (using [G]), when used inside a .htaccess file. This works:

        Redirect 302 /somewhere /gone

    But this doesn't:

        Redirect 410 /somewhere

    That line is ignored (as if it had been commented out) and the request falls through to other rules (which direct it to an unrelated generic error-handling script). Similarly, trying to use a RewriteRule with a [G] flag doesn't work, but the same rule rewriting to a script that generates a 410 does - so the rules aren't the problem; it seems instead to be something about 410/gone that isn't behaving. I can work around it by having a script send the 410, but that's annoying and I don't get why it's not working. Any ideas?
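
    For reference, the documented forms for returning a 410 from a .htaccess file look like the sketch below; note that Redirect takes no destination URL when the status is gone/410, which is the syntax the poster appears to be using already.

        # mod_alias: "gone" (or 410) takes no target argument
        Redirect gone /somewhere

        # mod_rewrite: the G flag marks the URL as gone, L stops further rewriting
        RewriteEngine On
        RewriteRule ^somewhere$ - [G,L]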


  • RJ45 female to male fault

    - by GeoPhoenix
    I have the following (common) problem. On one end of the cable there are T568A RJ45 male plugs, which are connected to an 8-port switch; on the other end there are supposed to be T568A RJ45 female jacks, which in turn will allow another RJ45 male plug to be connected. The commonly used colour scheme was followed on the males (holding the plug head down): blue, white-blue, orange, white-green, green, white-orange, brown, white-brown. The problem, I believe, is on the female end of the cable; I tried multiple times to follow the numbers designated by the module - both the list above in reverse and as listed - resulting in no signal transmission. I also tried T568B on both ends for a device-to-device run once, but no results. What is the proper way of handling this wiring? There were also additional RJ45 females marked with the numbers 6-5-4-3, but isn't 1 to 8 supposed to be used for this?


  • Disable CD eject button on Windows laptop

    - by Marc Gravell
    Most laptops have a CD eject button that is very sensitive and placed such that it regularly gets triggered when handling the laptop. This is in particular a problem (for clumsy-handed me) when picking up the laptop to stow it in a laptop bag; I've lost count of the number of times it has ejected just as I am lowering it into the case! I rarely use a CD, but I am wondering whether some crafty software hack (or other trick) might make it less vulnerable - perhaps fooling it into thinking it is busy (but ideally without destroying my battery). Otherwise, I might as well bow to the inevitable and snap the darned thing off. I'm not making this brand-specific, as I've seen this problem on a range of both branded and re-badged laptops. I am, however, mainly interested in Windows-friendly solutions.


  • "This file can't be previewed because of an error in the Microsoft Word previewer."

    - by danielson
    The issue is: Outlook 2013 simply will not give a preview of Word (nor Excel) docs in attachments. I never had the issue with Outlook 2010. I'm using Outlook 2013 on Windows 7 64-bit (on an SSD) with Word 2010. I did notice that "Microsoft Word" is not listed specifically in the Trust Center attachment handling list - could that be part of the problem? Excel, Visio, RTF and many more are there. Update: strangely, search can be performed within Word attachments... but the Word file can't be previewed. So Outlook can 'see' Word docs but won't give us that preview. For reference, here is a similar question I posted in the Microsoft Answers forum.


  • Web Server for SVN+PHP+Django+Rails

    - by NetStudent
    Foreword: I am not asking for the differences between Nginx and Apache, nor do I want to start a "which one is better" discussion. I would like to ask for help with choosing the most adequate solution for this particular situation. I need to set up one or more SVN repositories accessible via HTTP, plus some PHP, Django and Ruby websites. However, since I only have 512MB of RAM at my disposal, I fear that Apache will be too heavy a choice... On the other hand, I have heard that Nginx does not fully support SVN (WebDAV) and Django without reverse proxying to Apache. Is this still true? Should I go for Apache or Nginx alone? Or should I set up both and have Nginx handle static content and proxy to Apache for dynamic content?
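
    The hybrid setup mentioned at the end usually looks something like the sketch below: Nginx serves static files itself and forwards everything else to an Apache instance bound to localhost (which would carry mod_dav_svn, mod_wsgi/Passenger, etc.). The port and paths here are assumptions.

        server {
            listen 80;
            server_name example.com;

            location /static/ {
                root /var/www/example.com;            # assumed location of static assets
            }

            location / {
                proxy_pass http://127.0.0.1:8080;     # Apache listening on localhost only
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }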


  • What are some good IP Address Management solutions for IPv6? [closed]

    - by Russell Heilling
    There are a number of open source IPAM tools available for IPv4 address management; however, there seems to be a distinct lack of actively updated tools for IPv6. The only ones I have found are FreeIPdb (code no longer maintained) and the RIPE Database (I have seen some customisations to the RIPE DB that allow for enterprise/ISP IPAM, but it seems like overkill for a system that will probably only ever handle one /32 worth of space). Are there any other options that I'm missing? (Database-backed only, please. I know vi can be used for flat-text IPAM - that's how I'm handling our /32 at the moment - but I don't see it scaling for much longer.) It doesn't have to be open source, but what are folks doing to manage IPv6 in a dual-stack environment?


  • Mail.app doesn't see mail changes; requires mailbox "rebuild"

    - by vy32
    I use Mail.app on 4 different computers to access the same mailboxes. I see the same problem on all of them: if Mail.app is running, it catches all of the mail updates, but if a machine is turned off, when it turns back on the mailbox doesn't synchronize properly. I might delete 20 mail messages on one system and they are still on system #2 until I "rebuild." I have seen this happen with the MobileMe IMAP server, with Gmail, with Courier IMAP, and with Dovecot. Does anybody else see this? What should I do? Is there a mail server this happens less with? It's very frustrating, as I get 200-300 mail messages a day and frequently end up handling email messages multiple times.


  • Good set of web hosting permissions?

    - by Jorge Israel Peña
    Hey guys, I just got a Linode and I'm in the process of configuring it. It's running nginx with php-fpm and Passenger. nginx was compiled and is running as user nginx. php-fpm (PHP with the FastCGI process manager) is running as www-data (in group www-data). My sites are currently in /var/www, so for example /var/www/test.com. I'm just wondering what the general 'flow' of things is. For example, /var/www is owned by root; should I chown /var/www/test.com to nginx or www-data? Or should I put nginx in the www-data group? How should site uploading work - do I just transfer files to the /var/www/test.com directory as root (sudo) and then chown -R www-data:www-data .? Thanks. I'm capable of figuring things out on my own, I'm just wondering what the typical/general way of handling users/groups/permissions/site files is on Linux with a webserver.
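
    One common arrangement - a sketch of one option, not the only valid layout; the user, group and path names are taken from the question - is to let the php-fpm user own the site files, give the nginx worker read access through the group, and reset ownership as a deploy step after uploading:

        # site directory owned by the php-fpm user/group
        sudo mkdir -p /var/www/test.com
        sudo chown -R www-data:www-data /var/www/test.com

        # let the nginx worker read the files by putting its user in the www-data group
        sudo usermod -aG www-data nginx

        # after uploading files as root (sudo), reset ownership and sane modes
        sudo chown -R www-data:www-data /var/www/test.com
        sudo find /var/www/test.com -type d -exec chmod 750 {} \;
        sudo find /var/www/test.com -type f -exec chmod 640 {} \;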


  • How to present shared storage for MS Cluster Services running on vSphere 5

    - by MDMarra
    I've seen two approaches to handling the presentation of shared storage to Windows Server 2008 R2 cluster VMs on VMware vSphere. One is the traditional method of carving out a LUN on your SAN and presenting it to both hosts through the Microsoft iSCSI software initiator. The other method is to make a VMDK on an existing LUN, attach it to both hosts, and make it an independent disk so that it isn't affected by snapshots. Is one way the "correct" way, or are both viable? Is there any advantage or disadvantage to doing either?


  • windows server secondary plex or windows server default

    - by shiva
    I am new to handling issues on servers. When my system tried to reboot, I could see two entries to boot from: "Windows Server 2012" (the current OS) and "Windows Server secondary plex". So whenever there is a system restart, the system stops at this screen, and since I am connecting to this server using RDP, I have to wait for the Hetzner console to click one of the OS entries to boot. Even though the current OS is set as the default and the timeout is 30 seconds, it still waits for user input. So I want to know which of the two I should be booting from - and I just want one OS.
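
    For reference, the boot menu entries, default and timeout can be inspected and changed with bcdedit from an elevated command prompt; a sketch is below. The {identifier} values are placeholders copied from the enum output, and which entry is safe to delete depends on which plex the machine actually boots from, so don't remove one until that is certain.

        rem list all boot entries with their identifiers and descriptions
        bcdedit /enum

        rem make the entry you normally boot the default one
        bcdedit /default {identifier-of-current-os}

        rem shorten the wait at the boot menu
        bcdedit /timeout 5

        rem remove the unwanted entry once you are sure which one it is
        bcdedit /delete {identifier-of-secondary-plex}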


  • NGINX 301 and 302 serving small nginx document body. Any way to remove this behaviour?

    - by anonymous-one
    We have noticed that when using nginx's internal 301 and 302 handling, nginx will serve a small document body along with the appropriate Location: ... header - something along the lines of (in HTML): "301 redirect - nginx". Along with this, Content-Type: text/html and Content-Length headers are also sent. We do a lot of 302 and some 301 redirects, and this behaviour is wasted bandwidth in our opinion. Is there any way to disable it? One idea that crossed our minds was to set error_page 301 302 to an empty text file. We have not tested this yet, but I am assuming that even then, the Content-Type and Content-Length (0) headers will still be sent. So, is there a clean way to send a "body-less" 301/302 redirect with nginx?
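
    The error_page idea floated above would look roughly like the sketch below. This is untested here, and even with a zero-byte body nginx would presumably still send the Content-Type and Content-Length: 0 headers, so at best it only trims the body itself.

        server {
            error_page 301 302 /empty.html;

            location = /empty.html {
                internal;
                root /usr/share/nginx/html;   # assumed path to a zero-byte empty.html
            }
        }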


  • 504 Gateway Time-out after php fatal error

    - by tiagojsag
    I'm using nginx and php-fpm to develop a Symfony2-based website, under Ubuntu 12.10 (yes, I know I'm using a beta OS). Everything was working out fine until, due to an error in my code, I called a nonexistent function and got the following: Fatal error: Call to a member function (....) This isn't a problem (it's a bug in my code, easily fixable), but after this, no other page loads. My browser just keeps trying to load the page from the webserver until nginx times out (after about 30s, which should be some default timeout) and returns: 504 Gateway Time-out Restarting php-fpm solves the issue. The nginx logs show a timeout message, and nothing appears in the php-fpm logs, even if I set them to debug level. I tried switching from fpm to fastcgi, and the same thing happens. I've looked around, but all similar errors are related to big requests/file handling, which isn't the case here. All the pages on my website load in a few seconds, even under development conditions (no caching, etc.).


  • SVN command that returns whether a user has a valid login for a repository?

    - by Zárate
    Hi there, I'm trying to find an SVN command that would return some kind of true/false value depending on whether the user running it has access to a given repository. I'm building a tool for automated deployment, and part of the process is checking out the code from the SVN repository. I'd like to find out if the user running the tool already has a valid login. If there's no valid login, just show a message and exit the tool (handling the SVN login internally is not an option at the moment). Plan B would be parsing the file in svn.simple looking for the repo URL, but I thought I'd ask first. Thanks, Juan
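
    One approach - a sketch; the repository URL is a placeholder - is to run a cheap read-only command with --non-interactive so that svn fails instead of prompting for credentials, and then test the exit status:

        #!/bin/sh
        # exits 0 if the cached credentials (or anonymous access) work for the repository
        REPO_URL="https://svn.example.com/repo/trunk"

        if svn info --non-interactive "$REPO_URL" > /dev/null 2>&1; then
            echo "valid login / access OK"
        else
            echo "no valid login for $REPO_URL" >&2
            exit 1
        fi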


  • Puppet templates and undefined/nil variables

    - by larsks
    I often want to include default values in Puppet templates. I was hoping that, given a class like this:

        class myclass ($a_variable = undef) {
          file { '/tmp/myfile':
            content => template('myclass/myfile.erb'),
          }
        }

    I could make a template like this:

        a_variable = <%= a_variable || "a default value" %>

    Unfortunately, undef in Puppet doesn't translate to a Ruby nil value in the context of the template, so this doesn't actually work. What is the canonical way of handling default values in Puppet templates? I can set the default value to an empty string and then use the empty? test...

        a_variable = <%= a_variable.empty? ? "a default value" : a_variable %>

    ...but that seems a little clunky.
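
    One commonly suggested alternative - a sketch, and not necessarily the canonical answer being asked for - is to move the default out of the template and into the class parameter list, so the template never has to care whether a value was supplied:

        class myclass ($a_variable = 'a default value') {
          file { '/tmp/myfile':
            content => template('myclass/myfile.erb'),
          }
        }

        # myfile.erb can then use the value directly
        a_variable = <%= a_variable %>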


  • Web Folder size/quota reporting tool?

    - by nctrnl
    I am currently using a Visual Basic script to determine how big the web folders are and what quota is assigned to each folder. The quota is in no way a physical limit, just a value inserted by me to decide whether a user is using too much space or not. The script does the job quite neatly and sends an HTML file by mail on a regular basis. The problem is that it's such a hassle to insert new quotas, since I have to fiddle around with the code. A central "control panel" with an overview and the ability to insert new quotas would be more suitable. Is there any software that can do the following: scan specified folders/subfolders; report the file sizes and present them in some sort of interface (could be a PHP/MySQL solution); and allow a quota to be specified and show the difference? It is really important that the quota handling is made simple, so that a non-technician can handle it.
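
    As a stopgap, the size-versus-quota report itself is easy to script; a PowerShell sketch is below (the folder paths and quota values are made-up examples, and the question about a central control panel still stands).

        # quotas in MB per web folder - example values only
        $quotas = @{ "D:\www\site1" = 500; "D:\www\site2" = 1024 }

        foreach ($folder in $quotas.Keys) {
            $bytes = (Get-ChildItem -Path $folder -Recurse -ErrorAction SilentlyContinue |
                      Where-Object { -not $_.PSIsContainer } |
                      Measure-Object -Property Length -Sum).Sum
            $usedMB  = [math]::Round($bytes / 1MB, 1)
            $quotaMB = $quotas[$folder]
            "{0}: {1} MB used of {2} MB quota ({3} MB to spare)" -f $folder, $usedMB, $quotaMB, ($quotaMB - $usedMB)
        }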



  • 301 redirect, canonical question

    - by Dave
    I've designed my own 'latest news' page for my site, and I'm trying to keep the URLs clean. For example, it should look like this: http://www.domain.com/21/this-is-a-clean-url/ When someone links to the article, they sometimes mess it up and do: http://www.domain.com/21/this-is-a-clean-url/#random-hash-tag So what I have been doing is looking for "http://www.domain.com/21" and 301 (Moved Permanently) redirecting to the proper URL, plus adding a canonical meta tag for it. Is this going overboard? Should I instead be using a 302 (Found) header and just let the canonical tag tell search engines what the proper URL for the article is? What is the best way of handling this?
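
    For reference, the two pieces being combined look like the sketch below; the redirect is shown in Apache mod_alias syntax as an assumption, since the question doesn't say which web server is in use.

        <!-- in the article's <head>: tell search engines the preferred URL -->
        <link rel="canonical" href="http://www.domain.com/21/this-is-a-clean-url/" />

        # Apache-style permanent redirect from the bare article id to the clean URL
        RedirectMatch 301 ^/21$ http://www.domain.com/21/this-is-a-clean-url/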


  • How do I rewrite *.example.com to www.example.com?

    - by Lekensteyn
    In my network I have some Ubuntu machines which need to download files from nl.archive.ubuntu.com. Since it's quite a waste of time to download everything multiple times, I've set up a squid proxy for caching the data. Another use for this proxy was rewriting requests for archive.ubuntu.com or *.archive.ubuntu.com to nl.archive.ubuntu.com, because this mirror is faster than the US mirrors. This worked quite well, but after a recent reinstall of my caching machine the configuration was lost. I remember having a separate Perl program for handling this rewrite. How do I set up such a squid proxy, one that rewrites the host *.example.com to www.example.com and caches the result of the latter?
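
    The separate Perl helper probably looked something like the minimal sketch below, wired in with squid's url_rewrite_program directive. This is written against the classic rewrite-helper protocol (squid sends one request per line and expects the rewritten URL back, or an empty line for "no change"); newer squid releases prefer the "OK rewrite-url=..." reply format, so adjust for your version.

        # squid.conf
        url_rewrite_program /etc/squid/rewrite-mirror.pl
        url_rewrite_children 5

        #!/usr/bin/perl
        # /etc/squid/rewrite-mirror.pl - rewrite any *.example.com host to www.example.com
        $| = 1;                                   # squid requires unbuffered replies
        while (<STDIN>) {
            my ($url) = split;
            if ($url =~ m{^http://(?:[\w-]+\.)*example\.com(/.*)?$}i) {
                my $path = defined $1 ? $1 : "/";
                print "http://www.example.com$path\n";
            } else {
                print "\n";                       # leave the request unchanged
            }
        }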


  • Linux: Alternative to rsync? (ie, scp with resume)

    - by Joernsn
    I've been using rsync to automatically send files from one box to another, which is great compared to scp, since it supports resuming. However, when resuming a very large file (10 GB), rsync has to read both files and compare them, which is very slow. I don't need fancy error handling, just "scp with resume", so here's my question: is there an alternative to rsync/scp that supports resuming without having to read both the source and destination files? I've read the manuals without finding anything I can use; please let me know if I've missed something. This is the rsync line I've been using: rsync -av --partial --progress --inplace SRC DST
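
    For what it's worth, newer rsync releases have options aimed at exactly this; a sketch is below. Plain --append trusts the data already at the destination and skips the comparison entirely, while --append-verify (rsync 3.0+) still checksums the existing portion once at the end - check your rsync version before relying on either.

        # resume by appending to the shorter destination file, without delta-comparing what's already there
        rsync -av --partial --progress --append SRC DST

        # rsync >= 3.0: same, but verify the already-transferred portion when the file completes
        rsync -av --partial --progress --append-verify SRC DST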


  • Lockless queue implementation ends up having a loop under stress

    - by Fozi
    I have lockless queues written in C, in the form of a linked list that contains requests posted from several threads and handled in a single thread. After a few hours of stress I end up with the last request's next pointer pointing to itself, which creates an endless loop and locks up the handling thread. The application runs (and fails) on both Linux and Windows. I'm debugging on Windows, where my COMPARE_EXCHANGE_PTR maps to InterlockedCompareExchangePointer. This is the code that pushes a request to the head of the list, and is called from several threads:

        void push_request(struct request * volatile * root, struct request * request)
        {
            assert(request);
            do {
                request->next = *root;
            } while(COMPARE_EXCHANGE_PTR(root, request, request->next) != request->next);
        }

    This is the code that gets a request from the end of the list, and is only called by the single thread that is handling them:

        struct request * pop_request(struct request * volatile * root)
        {
            struct request * volatile * p;
            struct request * request;
            do {
                p = root;
                while(*p && (*p)->next) p = &(*p)->next; // <- loops here
                request = *p;
            } while(COMPARE_EXCHANGE_PTR(p, NULL, request) != request);
            assert(request->next == NULL);
            return request;
        }

    Note that I'm not using a tail pointer because I wanted to avoid the complication of having to deal with the tail pointer in push_request. However, I suspect that the problem might be in the way I find the end of the list. There are several places that push a request into the queue, but they all look generally like this:

        // device->requests is defined as struct request * volatile requests;
        struct request * request = malloc(sizeof(struct request));
        if(request)
        {
            // fill out request fields
            push_request(&device->requests, request);
            sem_post(device->request_sem);
        }

    The code that handles the request is doing more than that, but in essence does this in a loop:

        if(sem_wait_timeout(device->request_sem, timeout) == sem_success)
        {
            struct request * request = pop_request(&device->requests);
            // handle request
            free(request);
        }

    I also just added a function that checks the list for duplicates before and after each operation, but I'm afraid that this check will change the timing so that I will never encounter the point where it fails. (I'm waiting for it to break as I'm writing this.) When I break the hanging program, the handler thread loops in pop_request at the marked position. I have a valid list of one or more requests, and the last one's next pointer points to itself. The request queues are usually short; I've never seen more than 10 entries, and only 1 and 3 the two times I could take a look at this failure in the debugger. I thought this through as much as I could, and I came to the conclusion that I should never be able to end up with a loop in my list unless I push the same request twice. I'm quite sure that this never happens. I'm also fairly sure (although not completely) that it's not the ABA problem. I know that I might pop more than one request at the same time, but I believe this is irrelevant here, and I've never seen it happening. (I'll fix this as well.) I thought long and hard about how I can break my function, but I don't see a way to end up with a loop. So the question is: can someone see a way this can break? Can someone prove that it cannot?
    Eventually I will solve this (maybe by using a tail pointer or some other solution - locking would be a problem because the threads that post should not be locked; I do have a RW lock at hand, though), but I would like to make sure that changing the list actually solves my problem (as opposed to just making it less likely because of different timing).


  • Identify Long Running or Slow PHP Scripts

    - by Kirk
    I have a web server that gets around 25K visits a day at yougetsignal.com. Sometimes the site feels a bit sluggish. I am hosting it on nginx with php5-fpm. Is there a way for me to see a list of all of the long-running requests that are coming to the site? I'd love to have a real-time list of all of the active requests that PHP is handling and how long they have been running - kind of like top, but just for the web server. This would let me know how long requests are taking and which script is the culprit. Anyone have any ideas on how I can do this?
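
    php-fpm itself can get most of the way there; a sketch of the relevant pool settings is below (the paths and threshold are illustrative). The status page gives a near-real-time list of workers and the request each one is handling, and the slow log records a backtrace for any script that runs past the threshold. The status URL also has to be routed to php-fpm in the nginx config, and ideally restricted to your own IP.

        ; in the php-fpm pool config (e.g. /etc/php5/fpm/pool.d/www.conf)
        pm.status_path = /fpm-status            ; live view of workers, their state and current request
        request_slowlog_timeout = 5s            ; log a backtrace for requests running longer than this
        slowlog = /var/log/php-fpm/slow.log     ; where those backtraces are written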


  • Ubuntu server loses network connection after ADSL2+ modem reset

    - by squashbuff
    I am using an Ubuntu 10.04 server (running on a Lenovo ThinkPad notebook) as my webserver. It is performing well in terms of handling the traffic, etc. However, my internet connection is ADSL2+ (using a Thomson TG782T modem-router), and if the modem is reset, my server loses its network connection. The NetworkManager icon shows a red exclamation mark indicating that it has no connection, but as soon as I click on it and tell it to connect to eth0, the connection is back. It must be something that NetworkManager is failing to do, and because of this the reliability of my webserver is suffering. Any advice on how this can be fixed?
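
    One commonly suggested change for a headless 10.04 server is to take the interface away from NetworkManager and configure it statically in /etc/network/interfaces, so the link comes back with plain ifup/ifdown handling instead of depending on the desktop applet. A sketch with assumed addresses is below.

        # /etc/network/interfaces - static configuration (addresses are examples)
        auto eth0
        iface eth0 inet static
            address 192.168.1.10
            netmask 255.255.255.0
            gateway 192.168.1.1
            dns-nameservers 192.168.1.1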

