Search Results

Search found 1087 results on 44 pages for 'serving'.


  • LVM2 vs MDADM performance

    - by archer
    I've used MDADM + LVM2 on many boxes for quite a while. MDADM handled both the RAID0 and RAID1 arrays, while LVM2 provided the logical volumes on top of MDADM. Recently I've found that LVM2 can be used without MDADM (one layer fewer, and therefore less overhead) for both mirroring and striping. However, some people claim that read performance on an LVM2 mirrored volume is not as fast as on a linear LVM2 volume on top of an MDADM RAID1, because LVM2 does not read from two or more devices at a time; it only uses the second and later devices when the first device fails. MDADM reads from both devices at a time, even in mirrored mode. Can anyone confirm that?
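
    For reference, a sketch of the two stacks being compared (device names, sizes, and the in-memory mirror log are placeholder assumptions):

        # 1) mirroring done directly in LVM2
        pvcreate /dev/sda1 /dev/sdb1
        vgcreate vg0 /dev/sda1 /dev/sdb1
        lvcreate -m 1 --mirrorlog core -L 100G -n data vg0

        # 2) linear LVM2 on top of an MDADM RAID1
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        pvcreate /dev/md0
        vgcreate vg0 /dev/md0
        lvcreate -L 100G -n data vg0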

    Read the article

  • Will using Apache's ProxyPass directive on persistent Ajax connections alleviate the connection limit error?

    - by naurus
    I've got some JavaScript that keeps a persistent Ajax connection open for each client, and I know that this can cause some serious issues for Apache, but not for lighttpd. One thing I learned from researching how to get around this is the ProxyPass directive, which sends all requests for a certain directory to another address:port combination without the user knowing. What I want to know is: if I put my PHP in a directory that is proxied to lighttpd and call that with JavaScript, will those requests still count against my Apache connection limit? The reason I wonder is that Apache is still serving the content, just not processing it, which seems to me like it would still hold a connection. Thanks
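
    For context, a ProxyPass setup of the kind described might look like this (the path and port are hypothetical, and mod_proxy plus mod_proxy_http must be enabled):

        # in the Apache vhost: hand everything under /ajax/ to a local lighttpd instance
        ProxyPass        /ajax/ http://127.0.0.1:8081/ajax/
        ProxyPassReverse /ajax/ http://127.0.0.1:8081/ajax/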

    Read the article

  • Nexenta/OpenSolaris filer kernel panic/crash

    - by ewwhite
    I have an x4540 Sun storage server running NexentaStor Enterprise. It serves NFS over 10GbE CX4 to several VMware vSphere hosts, with 30 virtual machines running. For the past few weeks, I've had random crashes spaced 10-14 days apart. This system used to run OpenSolaris and was stable in that arrangement. The crashes trigger the automated system recovery feature on the hardware, forcing a hard system reset. Here's the output from the mdb debugger:

        panic[cpu5]/thread=ffffff003fefbc60: Deadlock: cycle in blocking chain
        ffffff003fefb570 genunix:turnstile_block+795 ()
        ffffff003fefb5d0 unix:mutex_vector_enter+261 ()
        ffffff003fefb630 zfs:dbuf_find+5d ()
        ffffff003fefb6c0 zfs:dbuf_hold_impl+59 ()
        ffffff003fefb700 zfs:dbuf_hold+2e ()
        ffffff003fefb780 zfs:dmu_buf_hold+8e ()
        ffffff003fefb820 zfs:zap_lockdir+6d ()
        ffffff003fefb8b0 zfs:zap_update+5b ()
        ffffff003fefb930 zfs:zap_increment+9b ()
        ffffff003fefb9b0 zfs:zap_increment_int+68 ()
        ffffff003fefba10 zfs:do_userquota_update+8a ()
        ffffff003fefba70 zfs:dmu_objset_do_userquota_updates+de ()
        ffffff003fefbaf0 zfs:dsl_pool_sync+112 ()
        ffffff003fefbba0 zfs:spa_sync+37b ()
        ffffff003fefbc40 zfs:txg_sync_thread+247 ()
        ffffff003fefbc50 unix:thread_start+8 ()

    Any ideas what this means?

    Read the article

  • Apache hanging when MaxClients is reached

    - by Ash White
    My Apache 2.2 (prefork MPM) hangs when MaxClients is reached, rather than queueing up requests and serving them when child processes become free. When this happens, the web server is totally unresponsive until it is manually restarted. The server stack is Ubuntu 8, MySQL 5, PHP 5. Hardware is dual Xeons (2.8) with 2GB of RAM. It serves 30,000 - 50,000 pageviews per day. Static images, CSS, and JS are offloaded to a separate server, PHP is cached using eAccelerator, and the HTML output of many pages is cached to the filesystem. Relevant Apache directives:

        KeepAlive On
        MaxKeepAliveRequests 50
        KeepAliveTimeout 2
        StartServers 2
        MaxClients 150
        MinSpareThreads 25
        MaxSpareThreads 75
        ThreadsPerChild 25
        MaxRequestsPerChild 2000

    Read the article

  • Purpose of "computer" section in MySQL Cluster 7.2?

    - by dpk
    According to the cluster documentation, you can either define data nodes with:

        [ndbd]
        NodeId=n
        HostName=1.2.3.4

    or with:

        [ndbd]
        NodeId=n
        ExecuteOnComputer=m

        [computer]
        Id=m
        HostName=1.2.3.4

    I don't see a substantial difference between the two. The documentation has this to say: "The [computer] section has no real significance other than serving as a way to avoid the need of defining host names for each node in the system." I'm stumped. If I have to define a hostname, what benefit is there to defining it in [computer] instead of [ndbd]?

    Read the article

  • Retrieving an RSA key from a running instance of Apache?

    - by Nathan Osman
    I created an RSA keypair for an SSL certificate and stored the private key in /etc/ssl/private/server.key. Unfortunately this was the only copy of the private key that I had. Then I accidentally overwrote the file on disk (yes, I know). Apache is still running and still serving SSL requests, leading me to believe that there may be hope in recovering the private key. (Perhaps there is a symbolic link somewhere in /proc or something?) This server is running Ubuntu 12.04 LTS.
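
    One avenue sometimes tried in this situation (a sketch only, with no guarantee of success) is dumping the memory of the running Apache process and scanning it for the key material, since mod_ssl holds the private key in memory:

        # assumes gdb (which provides gcore) is installed; the process name is an assumption
        sudo gcore -o /tmp/apache-core $(pgrep -o apache2)
        # then scan the dump with a key-recovery tool such as rsakeyfind,
        # or try a live-extraction tool such as passe-partout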

    Read the article

  • What version of SCO OpenServer are people using out there, and on what hardware?

    - by Gath
    I have some old applications running on SCO OpenServer 5.0.5, and I would love to move them to SCO OpenServer 5.0.7 on modern server hardware. Currently I run SCO on an old IBM PL 300 personal computer with 92MB of memory and one processor, and it has been serving the clients pretty well. Now I have new, modern IBM xSeries servers and I would love to migrate the same applications to them. The problem is that SCO 5.0.5 is unable to detect some of the hardware components in the new servers. I read somewhere that SCO 5.0.7 is able to detect the newer hardware, even the USB ports, etc. Is anyone out there running SCO OpenServer, and on what hardware architecture? Gath

    Read the article

  • XAMPP: Access Forbidden!

    - by Yar
    I just installed a fresh XAMPP on OS X. Apache runs and I can see the splash page. I opened httpd.conf and set both places that point to htdocs to somewhere else, which results in Apache showing an "Access Forbidden!" message. I plugged my directory in here:

        <Directory "/Applications/XAMPP/xamppfiles/htdocs">

    and here:

        DocumentRoot "/Applications/XAMPP/xamppfiles/htdocs"

    I have set the permissions to 777 on everything, including the enclosing directory, but to no avail. Strangely, I just did this whole thing with MAMP and had no problem serving that directory, but it was slow.
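
    For comparison, a Directory block for a non-default document root usually also needs its own access directives under Apache 2.2; a sketch with a placeholder path:

        DocumentRoot "/Users/yar/Sites/mysite"
        <Directory "/Users/yar/Sites/mysite">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>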

    Read the article

  • IP route ppp0 + eth0 access to outside network

    - by Vitor
    I need some help defining a route. I have two connections: one on eth0 and one on ppp0 (a 3G card). With the ppp0 connection inactive, my route table is:

        Destination     Gateway         Genmask         Flags Metric Ref  Use Iface
        default         DD-WRT          0.0.0.0         UG    100    0    0   eth0
        192.168.1.0     *               255.255.255.0   U     0      0    0   eth0

    and I can access my web server from outside networks through the Ethernet interface. With the ppp0 3G connection active as well, the route table becomes:

        Destination     Gateway         Genmask         Flags Metric Ref  Use Iface
        default         10.64.64.64     0.0.0.0         UG    0      0    0   ppp0
        10.64.64.64     *               255.255.255.255 UH    0      0    0   ppp0
        192.168.1.0     *               255.255.255.0   U     0      0    0   eth0

    and now I can only reach my web server from outside networks through the IP of the 3G connection. Note that my server listens on 0.0.0.0 (all interfaces). I need the web server to be reachable over both interfaces, Ethernet and 3G; at the moment both only work from the local network. Any help configuring this setup with two gateways, so that outside networks can reach both the eth0 IP 192.168.1.149 and the ppp0 IP 89.214.60.196, is welcome. Thanks
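
    The usual approach to this is source-based policy routing with iproute2; a rough sketch, assuming the eth0 gateway (the DD-WRT box) is 192.168.1.1 and that table numbers 100/101 are free (all assumptions):

        # replies from the eth0 address leave via the DD-WRT gateway
        ip route add default via 192.168.1.1 dev eth0 table 100
        ip rule  add from 192.168.1.149 table 100
        # replies from the ppp0 address leave via the 3G link
        ip route add default dev ppp0 table 101
        ip rule  add from 89.214.60.196 table 101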

    Read the article

  • Can I use adsutil.vbs to remove the 'X-Powered-By' response header from IIS6

    - by Anthony
    Can I use adsutil.vbs to remove the 'X-Powered-By' response header from the IIS6 configuration? We have a site running IIS6; migration to IIS7 is at least a few months away. In the meantime, I have noticed that it is serving the header X-Powered-By: ASP.NET on all responses. I want this gone, and I would strongly prefer to script the change so that future deployments also ensure that the header is not present. Is there a way to do this using adsutil.vbs?
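
    For reference, that header lives in the HttpCustomHeaders metabase list, so one untested starting point (the metabase path and the idea of clearing the list to an empty value are assumptions) would be:

        REM inspect the current list, then write it back without the X-Powered-By entry
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs GET W3SVC/HttpCustomHeaders
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/HttpCustomHeaders ""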

    Read the article

  • Good/Better config for MySQL on an EC2 Large Instance

    - by Tim Reynolds
    I have an EC2 Large instance dedicated to MySQL. It will be serving a Joomla/Magento combo, so it has a blend of InnoDB and MyISAM tables. I have only worked with MyISAM in the past and am therefore unfamiliar with the settings InnoDB uses. Experiments so far have been less than fruitful, as I keep causing the InnoDB engine to be disabled. My instance is running Ubuntu 10.04 64-bit server edition and has ~7.5GB of RAM. MySQL is currently using ~0.6% of that, with somewhat poor performance. I would like to configure it to use as much of the system RAM as is reasonable. While testing some settings, I learned that the InnoDB logs can't collectively be larger than 4G. Would anyone be able to provide some base InnoDB and MyISAM settings to get me started? Thank you Tim
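
    To make the question concrete, a hedged my.cnf sketch for a box of this size (the values are illustrative assumptions, not tuned recommendations):

        [mysqld]
        innodb_buffer_pool_size = 4G     # main InnoDB cache; the bulk of RAM on a dedicated box
        innodb_log_file_size    = 256M   # two log files by default, well under the 4G combined limit
        innodb_flush_method     = O_DIRECT
        key_buffer_size         = 512M   # MyISAM index cache
        table_open_cache        = 1024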

    Read the article

  • Nginx user subdomains, should I proxy_pass?

    - by Kevin L.
    I am trying to set up user subdomains, serving content from specific folders: www.example.com/username served from username.example.com (just like GitHub Pages). I've looked at nginx rewrites, but I don't want the browser to redirect; I want the domain to stay username.example.com. Anyway, a comment on this question says that I cannot rewrite the host, only proxy to it. I tried to set up a proxy_pass, but all of the documentation and examples show it being used to (obviously) proxy to a service on another host or port, whereas in my case I want to proxy to another location on the same host and port. Is this the appropriate way to tackle the problem, and if so, what is the right nginx config syntax?
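
    For comparison, the folder-per-user case is often handled without any proxying, by capturing the username from the Host header; a sketch that assumes a wildcard DNS record and a per-user directory layout:

        server {
            listen 80;
            server_name ~^(?<user>[^.]+)\.example\.com$;
            root /var/www/example.com/$user;
        }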

    Read the article

  • How can I effectively block torrenting?

    - by Chauncellor
    My Netgear WNR1000v3 is serving six people, and two of them have decided that, despite my warnings, they're going to torrent heavily all day. Not wanting to deal with that, I reserved their IPs and set up port blocking for 1000-65535 at all times of day. However, looking at the log reveals that traffic is still getting through. Half of the entries say:

        [LAN access from remote] from <externalIP>:16001 to 192.168.1.7:18946 Friday, Oct 12,2012 22:47:05

    and half say:

        [Service blocked: BlockTorrents] from source 192.168.1.7, Friday, Oct 12,2012 22:46:26

    Is this because of UPnP? Or does Netgear's 'block services' feature only work on outgoing connections? Is there something I'm missing? If it is indeed UPnP, how can I effectively block their torrenting without hurting everyone's use of services like Skype, PlayStation Network, etc.?

    Read the article

  • How can I tell which config file Apache is using?

    - by Claudiu
    I'm trying to set up virtual hosts on Mac OS X. I've been modifying httpd.conf and restarting the server, but haven't had any luck getting it to work. Furthermore, I notice that it's not serving files from the DocumentRoot mentioned in httpd.conf (Libraries/WebServer/Documents), but from a different directory (/usr/local/apache2/htdocs), and I don't see that folder mentioned anywhere in httpd.conf. Also, PHP works even though the "LoadModule php5_module" line is commented out. This makes me think it's using another .conf file. How can I figure out which config is actually being loaded? Update: I just deleted that httpd.conf and Apache behaves the same after a restart, so it definitely wasn't using it!
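
    A couple of quick checks that usually settle this (assuming the Apache binaries are on the PATH; the binary may be named httpd or apache2):

        httpd -V | grep -i server_config_file   # config path compiled into this httpd binary
        ps aux | grep '[h]ttpd'                 # a running instance may show an explicit -f <file> flag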

    Read the article

  • Webserver: chrooted PHP gives mysql.sock error when attempting to reach mysql

    - by Jon L.
    Hey guys, I've configured an Ubuntu web server with Nginx + PHP5-FPM. I've created a chrooted environment (using jailkit) that I put my developers into, where they can develop their test applications. The chroot jail is /home/jail. Nginx and PHP5-FPM run outside the chroot, but are configured to serve websites located within the chrooted environment. So far, Nginx and PHP5-FPM are serving up files without issue, except for the following: when attempting to connect to MySQL, we receive this error:

        SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'

    I believe the issue is that the non-chrooted php.ini references mysqld.sock outside of the chroot environment (it's actually using the MySQL default setting currently). My question is: how can I configure PHP to access MySQL via loopback or similar? (I found that as a suggestion in a Google result, but without any instructions.) Or, if I'm missing some other obvious setting, let me know. If there's an option of creating a hard link that would remain available even if MySQL is restarted, that would be handy as well.
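
    Two approaches commonly suggested for this situation (sketches only; the jail and socket paths come from the question, everything else is an assumption):

        # 1) force TCP instead of the unix socket by connecting to 127.0.0.1, e.g. in PHP:
        #       new PDO('mysql:host=127.0.0.1;dbname=app', $user, $pass);
        #    (host=localhost would still try the socket)
        # 2) expose the socket directory inside the jail with a bind mount, which keeps
        #    working across MySQL restarts because the directory, not the file, is mounted:
        mkdir -p /home/jail/var/run/mysqld
        mount --bind /var/run/mysqld /home/jail/var/run/mysqld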

    Read the article

  • zero-config CGI enabled web server

    - by halp
    To serve the static content of a directory over HTTP, one can simply navigate to that directory and type:

        python -m SimpleHTTPServer 11111

    which starts an HTTP server on port 11111. This hack is nice because it requires zero config: no stand-alone web server, no config files at all. Is it possible to extend this example, or is there an alternate way to achieve the same goal, but with CGI support? The final goal is to have a quick and lazy way of serving a website from a certain directory. The site has static content (HTML pages, images) but also a CGI script, and the CGI script must work properly when accessed via a browser. Of course I could set up a virtual host in Apache, allow CGI inside it, etc., but that's not a zero-config approach.
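
    For what it's worth, the standard library has a CGI-enabled variant of the same trick (by default it expects the CGI scripts to live under ./cgi-bin):

        python -m CGIHTTPServer 11111          # Python 2
        python3 -m http.server --cgi 11111     # Python 3 equivalent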

    Read the article

  • automatic IIS worker process recycle fails

    - by Sander Rijken
    The server is set to its default configuration of recycling the app pool every 1740 minutes. When this happens, the following message is logged:

        A worker process with process id of '1234' serving application pool 'XX' has requested a recycle because the worker process reached its allowed processing time limit.

    Directly after logging this message, the website is unresponsive. The only way to get it back online is by running iisreset manually. Does anyone know a fix for this behavior, other than turning the recycle feature off? Is it a known problem?

    Read the article

  • Can't access VirtualBox host-only network from windows host

    - by Markus Orreilly
    I've got two VMs running on a Windows host, each with a host-only network and IPs in the 192.168.56.xxx range. One of them is running Apache and serving some content that I want to access from the Windows host. However, the Windows host can't reach the Apache server at all. The server is running on 192.168.56.103, and ipconfig on Windows says the host's IP for the VirtualBox interface is 169.254.143.37. I tried route add to route the 192.168.56.xx traffic, but nothing I tried worked and I was probably using it wrong. Any ideas on how to make this work?
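
    One detail worth noting: 169.254.x.x is an autoconfiguration address, which suggests the host-only adapter never got its usual 192.168.56.1 address. A hedged way to assign it by hand (the adapter name below is the VirtualBox default and may differ):

        netsh interface ip set address "VirtualBox Host-Only Network" static 192.168.56.1 255.255.255.0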

    Read the article

  • Multiple domains, one config, hosted on apache2

    - by Kristoffer Sall Hansen
    First a quick disclaimer: I'm not a 'server guy' or a 'unix pro' or anything like that; I'm a web programmer who got stuck doing server work because I run Linux (Ubuntu) on my netbook. I'm trying to set up an Apache server on Debian that automagically serves multiple domains, where each domain has its own directory in /var/www. Since this is the last thing I'm doing for this company, I really need it to be easy for my successor (who is even more of a beginner at servers than I am) to add more domains without having to muck around with SSH or /etc/apache2/sites-available. What I'm looking for is basically any magic mumbo-jumbo in the default config (or apt-get, or conf.d) that makes the server start serving any domain that has a matching folder in /var/www; they will of course still have to initiate domain transfers the usual way. I have no problem setting up domains individually. Ick... hope the above makes sense to someone.
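
    For reference, the module usually reached for here is mod_vhost_alias; a rough sketch (the one-folder-per-hostname layout in /var/www is taken from the question, the rest is an assumption):

        # a2enmod vhost_alias, then in a single catch-all vhost:
        <VirtualHost *:80>
            ServerAlias *
            UseCanonicalName Off
            VirtualDocumentRoot /var/www/%0
        </VirtualHost>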

    Read the article

  • Setup asp.net mvc application as subdomain website

    - by a_m0d
    I'm trying to set up a local application on a subdomain on our company server. There is already an installation of SharePoint running on http://companyweb/, but I would like my application to run on http://orders.companyweb/. I tried creating a new website, leaving the IP address the same as it is for http://companyweb, and just changing the host header value to orders.companyweb. However, no matter where I try to access the site from (different computers around the network, including the server itself), I keep getting 404 errors. I then tried setting up a simple index.html and serving that up as the highest priority; however, I still got 404 errors. This makes me think that I have actually set up the site itself wrong. What should I change to be able to access this application correctly from all the local computers?
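
    One thing worth ruling out first: the host header only comes into play once the name actually resolves to the server, so orders.companyweb needs its own DNS record (or, for a quick test, a hosts-file entry on a client; the IP below is a placeholder):

        # C:\Windows\System32\drivers\etc\hosts on a test client
        192.168.0.10    orders.companyweb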

    Read the article

  • How to change from own Internal/Extrernal DNS to use an outsourced service like DNS Made Easy?

    - by Joakim
    Our current setup is a co-located Linux box with an OpenVZ kernel and a handful of virtual containers for www, mail, etc.; one container runs Bind9 with a split-views configuration serving external and internal DNS. The hardware node runs a Shorewall firewall and all containers use private IPs. The box (and its DNS) basically handles web and mail for a handful of domains, and it works well, but we still think it would be a good idea to outsource the public DNS, which brings me to my question. Although I am fairly comfortable with the server side and with DNS, I'm far from a pro, and I basically need confirmation that I am thinking in the right direction: I move the contents of our external view (with its zone files) to the external service, keep the internal view (or actually remove the view construct), set up the new external DNS with the provider's name servers, update the info at my registrar, and wait for propagation. Or have I missed something? Maybe someone else here already runs something similar and can share some experiences? I found this question, which at least confirms it can be done.

    Read the article

  • How can I chainload a USB drive from GRUB2?

    - by magic.plane
    I'm using GNU GRUB version 1.99-12ubuntu5, booted over the network using PXE. I used grub-mknetdir to generate the GRUB image and directory tree, which I'm serving from a TFTP server using Tftpd32 on Windows. I've put the latest version of Clonezilla on my USB drive using Tuxboot. Right now, in GRUB's CLI, ls lists only the (pxe) device, even if the USB drive is plugged in before the computer is powered on. Is there any way I can chainload Clonezilla on my USB drive from GRUB when GRUB itself was booted over the network?
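
    One experiment sometimes suggested (untested here, and it assumes the relevant modules are present under the netboot prefix) is to load GRUB's native USB drivers so the drive appears as a device that can be chainloaded:

        insmod ehci
        insmod usbms
        insmod part_msdos
        ls                          # a (usb0) device may now appear
        set root=(usb0,msdos1)
        chainloader +1
        boot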

    Read the article

  • Mac OSX server command equivalent for dhclient?

    - by John Hall
    Is there a Mac OS command that makes a DHCP request and either renews the old lease, drops it for a new one, or usefully reports errors or a lack of response from the DHCP server? This would help fix networking on the machine after network problems without rebooting, and would also be useful for diagnosing wider networking problems from a Mac. I cannot find any equivalent of dhclient, though obviously some component must be serving this purpose. The question is: is that component exposed through a command-line interface? I am biased toward the command line for these features and may have overlooked settings panels or tools that might solve it through a GUI. I believe this question is at the heart of this other question: Is there an equivalent command for 'init.d/networking restart' in OS X
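
    For what it's worth, OS X ships its own ipconfig utility (unrelated to the Windows command of the same name) that covers much of this; a couple of hedged examples, assuming the interface in question is en0:

        sudo ipconfig set en0 DHCP      # re-run DHCP on en0, effectively re-requesting a lease
        ipconfig getpacket en0          # dump the last DHCP reply received (prints nothing if there was none)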

    Read the article

  • Run one virtual machine on a Linux server + standard Linux functions

    - by fistameeny
    Hi, I am looking for a way to set up a Linux server (running Ubuntu Server) that uses Samba for file sharing and also hosts a Windows virtual machine (in this case, Windows Small Business Server 2003, which in turn hosts SQL Server Express; Exchange won't be used on this). I would like the Linux server to serve the files over Samba and host the virtual machine. This obviously rules out ESXi, since it can't run Samba at the same time. What would be the next best solution that gives reasonable speed: VMware Server 2.0, VirtualBox, Xen? There will be 10-15 users accessing the Samba shares and the SQL Express virtual machine. Matt

    Read the article

  • Nginx static files exclude one or some file extensions

    - by Evgeniy
    I'm serving up a static site via nginx:

        location ~* \.(avi|bin|bmp|dmg|doc|docx|dpkg|exe|flv|gif|htm|html|ico|ics|img|jpeg|jpg|m2a|m2v|mov|mp3|mp4|mpeg|mpg|msi|pdf|pkg|png|ppt|pptx|ps|rar|rss|rtf|swf|tif|tiff|txt|wmv|xhtml|xls|xml|zip)$ {
            root /var/www/html1;
            access_log off;
            expires 1d;
        }

    My goal is to exclude requests like http://connect1.webinar.ru/converter/task/. A full URL looks like http://mydomain.tld/converter/task/setComplete/fid/34330/fn/7c2cfed32ec2eef6788e728fa46f7a80.ppt.swf. Even though these URLs end in one of those extensions, they are not static files but script requests, so this location causes problems for them. What is the best way to handle this? Can I add an exclusion for this URL path, or perhaps exclude the specific file extensions (.ppt.swf, .pptx.swf) from this nginx location? Thanks.
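
    One common pattern (a sketch; how the dynamic requests are actually passed on depends on the backend, which isn't described in the question) is to add a prefix location with the ^~ modifier, which takes precedence over regex locations, so anything under /converter/ never hits the static block:

        location ^~ /converter/ {
            # hand off to the application backend here (proxy_pass / fastcgi_pass ...)
        }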

    Read the article
