Search Results

Search found 5955 results on 239 pages for 'clr hosting'.

Page 201/239

  • apache2 + mod_fastcgi + suexec + php5.2 = unstable on high load

    I am hosting several (~30) different sites on one server with apache2 + fastcgi + suexec + php5. The sites have different loads and different script execution times (some requests take 5-7 seconds, some under 1 second). Sometimes, when a single site receives very high load (all PHP instances for that site are created and in use), the whole Apache server hangs. Apache (worker MPM) creates new processes up to the upper limit, and it appears to start queuing ALL new requests for EVERY site, not just the one under high load that has quickly hit its process limit. A restart of Apache solves the problem. Config:

        FastCgiConfig -singleThreshold 1 -multiThreshold 10 -listen-queue-depth 30 -maxProcesses 80 -maxClassProcesses 12 -idle-timeout 30 -pass-header HTTP_AUTHORIZATION -pass-header If-Modified-Since -pass-header If-None-Match

    (Earlier I had the default -listen-queue-depth of 100, but it didn't change anything.) Any suggestions? Another question: how is this listen queue implemented? Is it one queue for the whole of Apache, or a separate queue for every defined PHP application (suexec site)? I would like to achieve something like this: when one site receives high load and its queue is full, the server bounces the next request, but only for that one site; other sites should keep working properly. (A hedged per-site config sketch follows this entry.)

    Read the article
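    A minimal sketch of how per-application FastCGI limits might look with mod_fastcgi's static FastCgiServer directive, so that one busy site can only exhaust its own process class and its own listen queue; the wrapper paths and numbers are illustrative assumptions, not taken from the question, and whether static classes play well with suexec in this setup needs testing:

        # Hypothetical per-site application classes; each gets its own process pool and listen queue
        FastCgiServer /var/www/site1/cgi-bin/php-wrapper -processes 12 -listen-queue-depth 30 -idle-timeout 30
        FastCgiServer /var/www/site2/cgi-bin/php-wrapper -processes 4  -listen-queue-depth 10 -idle-timeout 30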

  • Using 1and1.com Servers, SMTP Mail is Limited - Local XAMPP Server Works As Expected

    - by nicorellius
    I'm starting to not like 1and1.com that much. I've used them for years, but mainly for simple sites without much need for configuration. I know there are better hosting companies out there and I may go looking for them. The problem here is that on my local XAMPP server (sitting on a network with Comcast as the ISP), I have a PHP script that uses PEAR::Mail to send mail using MIME. The script works fine locally with either smtp.1and1.com or smtp.gmail.com, with the corresponding credentials, appropriate ports, etc. 1and1 tells me that I have to change the MX record on the domain where this script runs in order to make this work. This doesn't make sense to me. Now, I'm pretty new to all this, but how is it that this is the case? Why can my local server work just fine, out of the box, but their servers not? I have asked them these questions, but they are very vague and I cannot get any good answers from them. Versions: PEAR 1.5.0, PHP 4.4.9, Zend Engine 1.3.0. My apologies in advance for my ignorance, and thanks for the help. (A hedged PEAR::Mail sketch follows below.)

    Read the article
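    For reference, a minimal sketch of the kind of PEAR::Mail + Mail_mime call the question describes; the hostname, port and credentials are placeholders, and port 587 with authentication is an assumption about what the provider's SMTP submission expects:

        <?php
        require_once 'Mail.php';
        require_once 'Mail/mime.php';

        // Build a MIME message (plain text + HTML)
        $mime = new Mail_mime();
        $mime->setTXTBody('Plain-text body');
        $mime->setHTMLBody('<p>HTML body</p>');
        $body    = $mime->get();
        $headers = $mime->headers(array(
            'From'    => 'sender@example.com',
            'To'      => 'recipient@example.com',
            'Subject' => 'Test message',
        ));

        // Send via an authenticated SMTP relay (placeholders throughout)
        $smtp = Mail::factory('smtp', array(
            'host'     => 'smtp.example.com',
            'port'     => 587,
            'auth'     => true,
            'username' => 'sender@example.com',
            'password' => 'secret',
        ));
        $result = $smtp->send('recipient@example.com', $headers, $body);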

  • apache performance timing out

    - by Mike
    I'm running a webserver where I'm hosting about 6-7 websites. Most of these websites get their content from MySQL, which is hosted on the same server. Traffic averages about 500-600 unique visitors per day, about 150K hits per week. But for some reason the websites sometimes return a timeout, or sometimes don't load all images. I know that I should perhaps separate static content from dynamic content, but for now that's not a possibility. I would appreciate any suggestions on how I could improve Apache's performance so it doesn't keep timing out. The server is a Sempron LE 1300 (2.3 GHz, 512 KB cache), 2 GB RAM, 10 Mbps/1 Mbps. Services: MySQL, ProFTPD, Apache.

         Private  +   Shared   =  RAM used   Program
        ----------------------------------------------------
         1.2 MiB  +  54.0 KiB  =   1.2 MiB   proftpd
         4.1 MiB  +  23.0 KiB  =   4.1 MiB   munin-node
        20.8 MiB  + 120.5 KiB  =  20.9 MiB   mysqld
        47.3 MiB  +   9.9 MiB  =  57.3 MiB   apache2 (22)

        top: Mem: 2075356k total, 1826196k used, 249160k free

        Timeout 35
        KeepAlive On
        MaxKeepAliveRequests 300
        KeepAliveTimeout 5
        <IfModule mpm_prefork_module>
            StartServers 10
            MinSpareServers 20
            MaxSpareServers 20
            MaxClients 60
            MaxRequestsPerChild 1000
        </IfModule>
        <IfModule mpm_worker_module>
            StartServers 2
            MaxClients 150
            MinSpareThreads 25
            MaxSpareThreads 75
            ThreadsPerChild 25
            MaxRequestsPerChild 0
        </IfModule>

    (A short sizing sketch follows this entry.)

    Read the article
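    A small sketch of the usual first check before tuning MaxClients: measure what an Apache child actually costs in RAM, then make sure (MaxClients x average child size) plus MySQL and everything else still fits in the 2 GB, so the box never swaps under load; the one-liner is an assumption about a Linux ps and may need adjusting:

        # Average resident size of the apache2 children, in MiB
        ps -o rss= -C apache2 | awk '{ sum += $1; n++ } END { if (n) printf "%d children, avg %.1f MiB each\n", n, sum/n/1024 }'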

  • internet-based sync software that will keep running after Windows Live Sync stops doing PC-to-PC-syncs?

    - by Warren P
    According to the Wikipedia page, Microsoft Live Sync will shortly stop offering the PC-to-PC sync service. There are lots of apps to sync two PCs on the same LAN, but I want to sync two PCs that are in different cities, across the internet, traversing two different NATs, and that requires some kind of service running on the internet that both connect into. There are already a few questions about syncing folders and files, but this is not a duplicate, because none of them answer this basic question: Microsoft Live Sync works better than rsync, or any of the linked sync solutions in any of the "not really duplicates", because it works even when the two PCs have NATs and firewalls between them that forbid direct connectivity; Windows Live Sync has a free, always-on internet server that all the client PCs connect into. I'm looking for a FREE (no fees) Microsoft Live Sync work-alike PC-to-PC sync solution that works between PCs and Macs at least as well as between PCs, and works behind NATs and firewalls at least as well as Microsoft's solution. (Note that Microsoft's solution makes only outbound socket calls to a Microsoft server, so this solution must necessarily include a server-hub component that is hosted publicly on a free site and does not require that I set up, manage and pay for my own public internet hosting site.) Hint: none of the answers in the linked duplicate are equivalent (PureSync, FreeFileSync, BestSync 2010, SyncButler, Comodo BackUp, QuickShadow, Gbridge), in that none of them work for the PC-to-Mac situation where firewalls and NATs prevent direct connection, or else they require money to be paid. When Microsoft Live Sync / Live Mesh finally kills direct PC-to-PC mode, the limitation will be that you will have to pay for more than 25 GB of cloud service, and you can then only sync PC #1 to PC #2 if you first sync to the cloud and then down to the other clients. I can currently sync 100 GB of data from one computer to another, only temporarily "moving the data" through Microsoft's data servers without using up my SkyDrive storage quota.

    Read the article

  • IIS 7 doesn't do a 301 redirect

    - by rohde
    Hi there! I have just migrated some sites from IIS6 to IIS7 and am experiencing problems with one of the sites. When a request comes in for the site with a trailing slash (www.ourdomain.com/site1/), everything is fine. But if the trailing slash is left out (www.ourdomain.com/site1), the request fails. Apparently no 301 redirect is issued for the slash-less URL, resulting in ASP.NET throwing an exception. This is not happening with the other sites on the same domain. What can be the cause of this? EDIT: The exception I get is this:

        System.Web.HttpException: Failed to Execute URL.
            at System.Web.Hosting.ISAPIWorkerRequestInProcForIIS6.BeginExecuteUrl(String url, String method, String childHeaders, Boolean sendHeaders, Boolean addUserIndo, IntPtr token, String name, String authType, Byte[] entity, AsyncCallback cb, Object state)
            at System.Web.HttpResponse.BeginExecuteUrlForEntireResponse(String pathOverride, NameValueCollection requestHeaders, AsyncCallback cb, Object state)
            at System.Web.DefaultHttpHandler.BeginProcessRequest(HttpContext context, AsyncCallback callback, Object state)
            at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
            at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    (A hedged rewrite-rule sketch follows below.)

    Read the article
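    A sketch of how a trailing-slash 301 can be expressed declaratively, assuming the IIS URL Rewrite module is installed (it is not part of IIS 7 out of the box); this follows the module's stock "append trailing slash" template and is offered as one workaround to test, not as the root cause of the exception:

        <!-- web.config fragment; rule name and conditions are the standard template -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Add trailing slash" stopProcessing="true">
                <match url="(.*[^/])$" />
                <conditions>
                  <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                  <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                </conditions>
                <action type="Redirect" redirectType="Permanent" url="{R:1}/" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>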

  • Java VM problem in OpenVZ

    - by Ginnun
    Hi, I bought a VPS for hosting my Java needs, but I can't run Java on it. Everything related to Java is correctly installed, but when I try to run it ("java -version", for example) I get this error:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    I don't think this is a Java-specific problem; it is running out of memory for sure. I contacted the VPS admin, but he says everything is fine: "you have 2 GB RAM, expandable to 4 GB!" I did a bit of searching on the subject; here is my beancounters file (numbers converted to human-readable form using a script). By the way, do JVM heap allocations count against kmemsize or privvmpages? How much RAM does this configuration allow me to allocate with the JVM for a single process?

        resource       held       maxheld    barrier     limit       failcnt
        kmemsize       2.25 mb    2.35 mb    13.71 mb    14.10 mb    0
        lockedpages    0          0          1024.00 kb  1024.00 kb  0
        privvmpages    20.54 mb   21.33 mb   256.00 mb   272.00 mb   156
        shmpages       5.00 mb    5.00 mb    84.00 mb    84.00 mb    0
        numproc        13         14         240         240         0
        physpages      9.36 mb    9.45 mb    0           MAX_ULONG   0
        vmguarpages    0          0          132.00 mb   MAX_ULONG   0
        oomguarpages   9.36 mb    9.45 mb    MAX_ULONG   MAX_ULONG   0
        numtcpsock     3          3          360         360         0
        numflock       3          3          188         206         0
        numpty         2          2          16          16          0
        numsiginfo     0          1          256         256         0
        tcpsndbuf      69.17 kb   69.17 kb   1.64 mb     2.58 mb     0
        tcprcvbuf      48.00 kb   48.00 kb   1.64 mb     2.58 mb     0
        othersockbuf   6.80 kb    6.80 kb    1.07 mb     2.00 mb     0
        dgramrcvbuf    0.00 kb    0.00 kb    256.00 kb   256.00 kb   0
        numothersock   9          10         360         360         0
        dcachesize     0.00 kb    0.00 kb    3.25 mb     3.46 mb     0
        numfile        704        746        9312        9312        0
        numiptent      10         10         128         128         0

    Thanks in advance! (A hedged JVM-flags sketch follows this entry.)

    Read the article
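    For reference, a minimal sketch of starting the JVM with an explicit, small heap so that its reservation stays inside the container's privvmpages barrier (about 256 MB above); the exact values are illustrative and would need tuning:

        # Cap the initial/maximum heap and the per-thread stack; without -Xmx the JVM tries to
        # reserve a default heap based on "physical" RAM, which an OpenVZ container may refuse.
        java -Xms16m -Xmx128m -Xss256k -version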

  • How to replace the domain name in a Wordpress database?

    - by Cristian
    I have a WordPress database which was installed in a development environment, so all references to the site itself use a fixed IP address (say 192.168.16.2). Now I have to migrate that database to a new WordPress installation on a hosting provider. The problem is that the SQL dump contains a lot of references to the IP address, and I have to replace it with my_domain.com. I could use sed or some other command to change that from the command line; the problem is that a lot of the configuration data is stored as serialized values. So what? Well, as you may know, those serialized arrays use markers like s:4: to record how many characters an element has, so if I just replace the IP with the domain name, the configuration values will get corrupted. I used a Windows app some years ago that could change values in a database while taking care of these arrays. Unfortunately, I forgot the name of the app, so the question is: do you know any app that lets me do what I want? (A hedged length-aware replace sketch follows below.)

    Read the article
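    A hedged sketch of the idea behind serialization-aware search/replace tools (for example interconnect/it's "Search Replace DB" script, or WP-CLI's wp search-replace command): unserialize, replace recursively, then re-serialize so the length prefixes are rebuilt. The function name and usage are illustrative, not a real API:

        <?php
        // Sketch: replace $from with $to inside possibly-serialized WordPress values.
        function deep_replace($data, $from, $to) {
            if (is_string($data)) {
                $maybe = @unserialize($data);
                if ($maybe !== false || $data === serialize(false)) {
                    // Nested serialized blob: recurse, then re-serialize so s:<len>: prefixes stay correct.
                    return serialize(deep_replace($maybe, $from, $to));
                }
                return str_replace($from, $to, $data);
            }
            if (is_array($data)) {
                foreach ($data as $key => $value) {
                    $data[$key] = deep_replace($value, $from, $to);
                }
            } elseif (is_object($data)) {
                foreach ($data as $key => $value) {
                    $data->$key = deep_replace($value, $from, $to);
                }
            }
            return $data;
        }
        // Usage sketch: read option_value rows (e.g. from wp_options), run them through
        // deep_replace($value, '192.168.16.2', 'my_domain.com'), and write them back.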

  • How to fake ip at localhost without LoopBack.

    - by sexer
    How can I fake an IP on my own PC? For example, suppose there is an IP address, say 201.91.81.71; that host is somewhere outside my network and is hosting a webserver. How can I set up a website on my own PC so that when I point a browser at 201.91.81.71, it actually gets the website on my own PC? PS: I need this to work with IP addresses, not domain names, since I need to implement it for a non-web service. My first guess was installing a loopback adapter with 201.91.81.71 as its IP, but since the subnet sometimes works and sometimes doesn't, that isn't a stable solution. My second guess was adding a route to the routing table:

        route add 201.91.81.71 mask 255.255.255.255 192.168.1.2

    192.168.1.2 is the IP address of my NIC. If I could add this route it would work, but Windows doesn't let me do so.

        route add 201.91.81.71 mask 255.255.255.255 127.0.0.1

    It doesn't let me set 127.0.0.1 as the gateway if 201.91.81.71 isn't assigned to a NIC; that's why I sometimes set up the loopback adapter, and then this route add happens automatically, but it needs a subnet mask which doesn't match the IP, and I cannot set 255.255.255.255. I'm in real trouble here; can I get some help? Thanks.

    Read the article

  • Setting up Windows network on Xen

    - by samyboy
    I'm trying to install a Windows XP server in a Xen environment. The OS boots fine; unfortunately, I can't figure out how to set up the network. Dom0 is a Debian Lenny box currently hosting around 10 Linux virtual servers. Windows tells me I have a "limited connection": it gets no DHCP response, nor can it reach other hosts on the network. Here is the Xen client config file:

        kernel = '/usr/lib/xen-3.2-1/boot/hvmloader'
        builder = 'hvm'
        memory = '1024'
        device_model='/usr/lib/xen-3.2-1/bin/qemu-dm'
        acpi=1
        apic=1
        pae=1
        vcpus=1
        name = 'winexchange'
        # Disks
        disk = [ 'phy:/dev/wnghosts/exchange-disk,ioemu:hda,w', 'file:/mnt/freespace/ISO/DVD1_Installation.iso,ioemu:hdc:cdrom,r' ]
        # Networking
        vif = [ 'mac=00:16:3E:0A:D0:1B, type=ioemu, bridge=xenbr0']
        # video
        stdvga=0
        serial='pty'
        ne2000=0
        # Behaviour
        boot='c'
        sdl=0
        # VNC
        vfb = [ 'type=vnc' ]
        vnc=1
        vncdisplay=1
        vncunused=1
        usbdevice='tablet'

    Server config (/etc/xen/xend-config.sxp):

        (network-script network-bridge)
        (network-script network-dummy)
        (vif-script vif-bridge)
        (dom0-min-mem 512)
        (dom0-cpus 0)
        (vnc-listen '0.0.0.0')

    Since I use Debian I had to create a link like this: /etc/xen/qemu-ifup -> /etc/xen/scripts/qemu-ifup. What did I do wrong? Please tell me if you need more info (logs, etc.).

    Read the article

  • Configuring CakePHP on Hostgator

    - by yaeger
    I have absolutely no idea what I am doing wrong here. I have followed just about every guide there is on installing CakePHP on shared hosting and I am still having problems, starting over each time I followed a new guide. Maybe someone can help me out here, as I am out of options. Here is my current layout (directories and files as they exist on the account):

        /
        app
        webroot
        vendors
        lib
        cake
        public_html
        .htaccess
        index.php
        plugins

    I have configured the index.php file in public_html to point to the correct files, and I have also done this in the index.php file located in the webroot folder. I am getting an internal 500 server error, and it says to check my logs for what the error specifically is; however, no logs are being generated. If I remove the .htaccess file from the public_html folder, I get the following errors:

        Warning: require(/app/webroot/index.php) [function.require]: failed to open stream: No such file or directory in /home/user/public_html/index.php on line 40
        Fatal error: require() [function.require]: Failed opening required '/app/webroot/index.php' (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/user/public_html/index.php on line 40

    Line 40 is:

        require APP_DIR . DS . WEBROOT_DIR . DS . 'index.php';

    with DS = "/" and WEBROOT_DIR = "app". Does anyone have any suggestions? I am at a loss as to what I am doing wrong. (A hedged index.php sketch follows this entry.)

    Read the article
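    A minimal sketch of the path constants that public_html/index.php usually needs to define for a layout like the one above; the absolute paths and account name are assumptions, and the point is that ROOT plus APP_DIR must resolve to the real app directory (in a stock CakePHP install, WEBROOT_DIR normally stays 'webroot' rather than 'app'):

        <?php
        // Sketch only: '/home/user' is a placeholder for the real account path.
        if (!defined('ROOT')) {
            define('ROOT', '/home/user');                        // directory that contains app/
        }
        if (!defined('APP_DIR')) {
            define('APP_DIR', 'app');
        }
        if (!defined('CAKE_CORE_INCLUDE_PATH')) {
            define('CAKE_CORE_INCLUDE_PATH', '/home/user/lib');  // directory holding Cake's core (lib/ or cake/)
        }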

  • How to choose the most optimal RAID settings on PE2950

    - by javano
    I have some Dell PowerEdge 2950s with 4x 15k, 150 GB Cheetah SAS drives in them. They are going to be VM hosts: CentOS running ESXi with Windows Server 2k8 guests. Some guests will be hosting IIS servers, others MSSQL servers. I am trying to choose the RAID virtual disk settings and can't decide which is more optimal given this situation.

    Read Policy: the options are Read-Ahead, No-Read-Ahead and Adaptive Read-Ahead; the default is Read-Ahead. I will initially be making large sequential writes, writing out blank images for virtual machine hard drives (say 30 GB from /dev/zero, for example), so Read-Ahead seems good at first. But within the virtual machines, reads could come randomly from anywhere within their file systems, as they are IIS and MSSQL servers, so perhaps No-Read-Ahead is a better idea? Now I think Adaptive Read-Ahead would be a better compromise, but I don't know much about this option; how does it compare in performance to the others?

    Write Policy: the options are write-back caching and write-through caching; the default is write-back caching. Write-through caching is safer than the default write-back caching, but at a performance expense. My thinking here is that in the event of power loss, for example, it seems more likely (this is why I need some clarification!) that damage will occur to a guest VM with write-back caching enabled, so I should favour write-through?

    I have searched around and there is obviously no definitive answer, so I would like to find out what is best for my situation.

    Read the article

  • switching dns server providers

    - by Yoav Aner
    I'm trying to wrap my head around something that I thought I kind of understood, but clearly there's a piece missing. We're currently using Zerigo as our primary DNS, with slave DNS running on Linode. This works quite well. However, recent DDoS attacks on Zerigo meant that while DNS queries were still resolved, we were unable to make any DNS changes. Since we rely on DNS changes for our own infrastructure, I'm looking to improve this somehow. I'd rather not ditch Zerigo completely, and I realise that this or a similar problem can happen with ANY primary DNS hosting provider. It might not be DDoS, but a bug on their servers, or something else that means we can no longer issue updates. For this I want a fallback option: a completely independent (primary) DNS provider (maybe AWS), which we will keep in sync manually and switch over to when there's a problem. This brings me to my question: how do I make sure we can switch providers quickly enough? Specifically, at our registrar there's a list of name servers, but no settings like TTL etc. How do DNS clients know to use the newly updated name server records? Is this configured in the SOA? The SOA itself, however, is hosted with the DNS provider, and we might not be able to update it. This is not a question about a one-time move, which can be planned, scheduled and tested, but rather about being able to switch when things are half-broken. (A hedged dig sketch follows below.)

    Read the article
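    A small sketch of where the switchover speed is actually decided: the NS set and TTL published by the parent (.com) servers, which the registrar controls, versus what the zone's current primary advertises; the domain and server names below are placeholders:

        # Delegation as the parent zone publishes it (registrar-controlled NS records and their TTL)
        dig +norecurse @a.gtld-servers.net example.com NS
        # What the current primary answers for the zone's own NS set and SOA
        dig @ns1.current-provider.example example.com NS
        dig @ns1.current-provider.example example.com SOA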

  • Write permissions LAMP (Debian Lenny)

    - by letseatfood
    I am working on a PHP script that transfers files using FTP functions. It has always worked on my production server (which is at a hosting service). The development server I have just set up (I am a novice with servers) is Debian Lenny with Apache2, PHP5 and MySQL5. The file transfer works correctly, but once the file has been written to the server it has permissions of 600. This makes it impossible for me to view the file (a JPEG) in the web browser, as permission is denied. I have scoured the internet and even broken my server installation and reinstalled it trying to figure this out (which has been fun, nonetheless!). I know it is unwise to set 777 permissions on publicly accessible files, but even that will not solve the problem. The only thing that works is if I chmod 777 thefile.jpg after it has been transferred, which is not a workable solution. I tried changing the owner of my site files to www-data per this post, but that also does not work. My user is mike, and it still does not work whether the owner of the files is mike or root. Would somebody point me in the right direction? Thanks! And, of course, let me know if I can clarify anything. (A hedged ftp_chmod sketch follows this entry.)

    Read the article
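    A small sketch of doing the mode change from the same PHP script, right after the upload, so files arrive world-readable; ftp_chmod() requires the FTP server to support SITE CHMOD, and the host, credentials and paths here are placeholders (the cleaner long-term fix is often the FTP daemon's umask, e.g. ProFTPD's Umask directive):

        <?php
        // Upload, then explicitly set 0644 so Apache (www-data) can read the file.
        $conn = ftp_connect('ftp.example.com');
        ftp_login($conn, 'mike', 'secret');
        ftp_pasv($conn, true);
        ftp_put($conn, 'htdocs/images/photo.jpg', '/tmp/photo.jpg', FTP_BINARY);
        ftp_chmod($conn, 0644, 'htdocs/images/photo.jpg');
        ftp_close($conn);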

  • Possible to IPSec VPN Tunnel Public IP Addresses?

    - by caleban
    A customer uses an IBM SAS product over the internet. Traffic flows from the IBM hosting data center to the customer network through Juniper VPN appliances. IBM says they're not tunneling private IP addresses; they say they're tunneling public IP addresses. Is this possible? What does this look like in the VPN configuration and in the packets? I'd like to know what the source/destination IPs and ports would look like in the encrypted, tunneled IPSec payload and in the IP packet carrying the IPSec payload, for example:

        IPSec payload: source 1.1.1.101:1001, destination 2.2.2.101:2001
        IP packet:     source 1.1.1.1:101,   destination 2.2.2.1:201

    Is it possible to send public IP addresses through an IPSec VPN tunnel? Is it possible for IBM to send a print job from a server on their network, using its static-NAT public address, over a VPN to a printer at a customer network using the printer's static-NAT public address? Or can a VPN not do this? Can a VPN only work with interesting traffic to and from private IP addresses?

    Read the article

  • HTTP Error: 413 Request Entity Too Large

    - by Torben Gundtofte-Bruun
    What I have: an iPhone app that sends HTTP POST requests (XML format) to a web service written in PHP. This is on a hosted virtual private server, so I can edit httpd.conf and other files on the server, and restart Apache.

    The problem: the web service works perfectly as long as the request is not too large, but around 1 MB is the limit. After that, the server responds with:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>413 Request Entity Too Large</title>
        </head><body>
        <h1>Request Entity Too Large</h1>
        The requested resource<br />/<br />
        does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit.
        </body></html>

    The web service writes its own log file, and I can see that small messages are processed fine. Larger messages are not logged at all, so I guess something in Apache rejects them before they even reach the web service? Things I've tried without success (I restarted Apache after every change; these steps are incremental):

    1. Hosting provider's web-based configuration panel: disable mod_security.
    2. httpd.conf: LimitXMLRequestBody 0 and LimitRequestBody 0
    3. httpd.conf: LimitXMLRequestBody 100000000 and LimitRequestBody 100000000
    4. httpd.conf: SecRequestBodyLimit 100000000

    At this stage, Apache's error.log contains the message:

        ModSecurity: Request body no files data length is larger than the configured limit (1048576)

    It looks like my step #4 didn't really take, which is consistent with step #1 but does not explain why mod_security appears to be active after all. What more can I try, to get the web service to receive large messages? (A hedged mod_security sketch follows below.)

    Read the article
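    A sketch of the ModSecurity 2.x directives the quoted error points at: the 1048576 in the log matches ModSecurity's usual default for SecRequestBodyNoFilesLimit, which is a separate knob from SecRequestBodyLimit, so both may need raising in whichever config file actually wins (values below are illustrative, in bytes):

        # e.g. in the vhost or the mod_security include that is really loaded
        SecRequestBodyLimit        104857600
        SecRequestBodyNoFilesLimit 104857600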

  • Port forwarding + shared connection with Ubuntu

    - by Joey Adams
    Because my wireless router's ethernet ports are defective, I set up a shared wireless connection from my laptop (which has wifi) to my eMac (which does not) via a crossover ethernet cable. The laptop is behind the router as 192.168.1.131, and the eMac is behind the laptop as 10.42.43.1. The laptop is running Ubuntu 9.10 (Karmic). I achieved the shared connection through NetworkManager Applet: I right-clicked on the network icon at the top right, went to Edit Connections, selected the wired connection named "Auto eth0", clicked "Edit...", went to the "IPv4 Settings" tab, and selected the method "Shared to other computers". The eMac can now access the Internet. Now I want to enable port forwarding. There's a game I want to play that needs port 6112 forwarded (both TCP and UDP) in order to host games. I set up the router to forward the port to 192.168.1.131 (the laptop), but port forwarding still isn't available on the eMac. I suppose I need to pretend my laptop is a router and configure port forwarding on it, indicating that incoming connections to the laptop (192.168.1.131) should be forwarded to the eMac on the shared connection (10.42.43.1). Thus, packets coming into the router on port 6112 would be redirected to the laptop (by the router), then to the eMac (by the laptop). My question is: how would I do that on Ubuntu, in light of NetworkManager's presence? Also, if I can't get this to work, does anyone mind hosting a comp stomp? :D (A hedged iptables sketch follows this entry.)

    Read the article
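    A minimal sketch of the DNAT rules the laptop would need on top of the MASQUERADE that NetworkManager's connection sharing already sets up; it assumes wlan0 is the Wi-Fi interface facing the router, and that the rules get re-added at boot (for example from /etc/rc.local), since NetworkManager will not persist them:

        sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 6112 -j DNAT --to-destination 10.42.43.1
        sudo iptables -t nat -A PREROUTING -i wlan0 -p udp --dport 6112 -j DNAT --to-destination 10.42.43.1
        sudo iptables -A FORWARD -d 10.42.43.1 -p tcp --dport 6112 -j ACCEPT
        sudo iptables -A FORWARD -d 10.42.43.1 -p udp --dport 6112 -j ACCEPT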

  • Can any postfix guru assist me determine how emails are still being sent via my server from unauthorized sources?

    - by Dave
    Hi all, I'm getting a little concerned, as I run a small server hosting a number of websites and manage the email for a few dozen people. Just recently I've had a couple of notifications from SpamCop alerting me that spam has been sent from my server, and when I look over the logs from time to time I can indeed see many repeated attempts to send mail from my server. Most of the time it gets knocked back by the destination servers, but sometimes it gets through. Unfortunately I'm not a Linux or Postfix expert; I can get by, but I had thought I had my machine locked down quite securely. I don't allow relaying, and when I check the online DNS/MX tools they tend to report my server as being OK, so I'm not sure where to take it now and am hoping someone might be able to throw me a few pointers. I get lots of entries like this in my MAIL.INFO log:

        Jan 2 08:39:34 Debian-50-lenny-64-LAMP postfix/qmgr[15993]: 66B88257C12F: from=<>, size=3116, nrcpt=1 (queue active)
        Jan 2 08:39:34 Debian-50-lenny-64-LAMP postfix/qmgr[15993]: 614C2257C1BC: from=<[email protected]>, size=2490, nrcpt=3 (queue active)

    and:

        Jan 7 16:09:37 Debian-50-lenny-64-LAMP postfix/error[6471]: 0A316257C204: to=<[email protected]>, relay=none, delay=384387, delays=384384/3/0/0.01, dsn=4.0.0, status=deferred (delivery temporarily suspended: host mx.fakemx.net[46.4.35.23] refused to talk to me: 421 mx.fakemx.net Service Unavailable)
        Jan 7 16:09:37 Debian-50-lenny-64-LAMP postfix/error[6470]: 5848C257C20D: to=<[email protected]>, relay=none, delay=384373, delays=384370/3/0/0.01, dsn=4.0.0, status=deferred (delivery temporarily suspended: host mx.fakemx.net[46.4.35.23] refused to talk to me: 421 mx.fakemx.net Service Unavailable)

    Then there tend to be connection timeouts. So, from what I can see, even though I have relaying disabled, something is getting by and trying to send. If you can help, that will be greatly appreciated, and I can supply any further logging/config info. Thanks. (A hedged sketch of first checks follows below.)

    Read the article
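    A small sketch of the first things worth checking in a case like this: whether Postfix is effectively an open relay in config terms, and whether the spam is being injected locally (a compromised web script that submits mail via the sendmail binary shows up under postfix/pickup rather than smtpd); these are standard Postfix/Debian tools, and the log path is an assumption:

        # Who is allowed to relay?
        postconf mynetworks smtpd_recipient_restrictions
        # Mail injected locally (e.g. by a PHP script) enters the queue via pickup:
        grep 'postfix/pickup' /var/log/mail.log | tail -n 20
        # What is sitting in the queue right now?
        mailq | tail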

  • Nginx: Serve static files out of a given directory - one level too deep

    - by Joe J
    I'm pretty new to nginx configs. I'm having some difficulty with a pretty basic problem. I'd like to host some static files at /doc (index.html, some images, etc.). The files are located in a directory called /sites/mysite/proj/doc/. The problem is that with the nginx config below, nginx tries to look for a directory called "/sites/mysite/proj/doc/doc". Perhaps this could be fixed by setting the root to /sites/mysite/proj/, but I don't want to potentially expose other (non-static) assets in the proj/ directory, and for various reasons I can't really move the doc/ directory from where it is. I think there is a way to use a rewrite rule to solve this situation, but I don't really understand all the parts, so I'm having some difficulty formulating the rule:

        rewrite ^/doc/(.*)$ /$1 permanent;

    I've also included a working example of hosting files out of a /sites/mysite/htdocs/static/ directory.

        > vim locations.conf
        location /static {
            root /sites/mysite/htdocs/;
            access_log off;
            autoindex on;
        }
        location /doc {
            root /sites/mysite/proj/doc/;
            access_log on;
            autoindex on;
        }

        2011/11/19 23:49:00 [error] 2314#0: *42 open() "/sites/mysite/proj/doc/doc" failed (2: No such file or directory), client: 100.100.100.100, server: , request: "GET /doc HTTP/1.1", host: "myhost.com"

    Does anyone have any ideas how I might go about serving this static content? Any help is much appreciated. Thanks, Joe (A hedged alias sketch follows this entry.)

    Read the article
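    A minimal sketch of the usual fix for this pattern: with root, nginx appends the full request URI (/doc) to the path, which is what produces /sites/mysite/proj/doc/doc, whereas alias substitutes the matched location prefix instead; the directives mirror the question's own block:

        location /doc/ {
            alias /sites/mysite/proj/doc/;
            access_log on;
            autoindex on;
        }

    With both the location and the alias ending in a slash, a request for /doc/index.html maps to /sites/mysite/proj/doc/index.html; a bare /doc may additionally need a redirect to /doc/.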

  • What is /etc/apache2/sites-available used for and is it necessary?

    - by Mariane
    I have 3 sites, each with a specific IP, running on Apache2 (up-to-date Ubuntu). To put a site online, I just created a file in /etc/apache2/sites-enabled, and in this file I told Apache which directory was the root directory for the site and which IP it should correspond to. So I have:

        000-default
        001-www.lapf.eu
        002-www.felkin.info
        003-www.seidhr.fr

    in this directory. My first site, lapf, suddenly lost contact with its database after the domain name was transferred from another registrar to the registrar that is also hosting the site's data. Then I did an update, and I reinstalled mysql-server and mysql-common, and I did I-have-forgotten-what to reinstall the locales (utf8 and such), which had vanished for some reason. This fixed my first site. Now I have noticed that the other two sites are offline; pointing a browser at them just hangs until timeout. They used to work, their domain names did not move, and they are still registered at the same place. The files are still in /etc/apache2/sites-enabled. I noticed another directory, /etc/apache2/sites-available, with just default and default.ssl in it. Why are there two directories, sites-enabled and sites-available? Should I copy the files from "sites-enabled" into "sites-available"? Or should I put a modified version of each in "sites-available"? Output of "apache2ctl -S":

        VirtualHost configuration:
        92.243.20.169:80       Charlotte (/etc/apache2/sites-enabled/001-www.lapf.eu:1)
        92.243.21.141:80       xvm-21-141.ghst.net (/etc/apache2/sites-enabled/002-www.felkin.info:1)
        92.243.4.114:80        xvm-4-114.ghst.net (/etc/apache2/sites-enabled/003-www.seidhr.fr:1)
        wildcard NameVirtualHosts and default servers:
        *:80                   is a NameVirtualHost
                 default server Charlotte (/etc/apache2/sites-enabled/000-default:1)
                 port 80 namevhost Charlotte (/etc/apache2/sites-enabled/000-default:1)
        Syntax OK

    (A hedged a2ensite sketch follows below.)

    Read the article
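    A sketch of the conventional Debian/Ubuntu layout the question is asking about: the real vhost files live in sites-available, and sites-enabled holds symlinks that a2ensite/a2dissite manage; the file name below is taken from the question, the rest is the standard workflow:

        sudo mv /etc/apache2/sites-enabled/001-www.lapf.eu /etc/apache2/sites-available/
        sudo a2ensite 001-www.lapf.eu
        sudo apache2ctl configtest && sudo /etc/init.d/apache2 reload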

  • On what should I not be pennywise when buying a machine for SQL Server 2008?

    - by Michel
    Hi, I'm going to do a project for a client and I'll be hosting the database server myself. Normally it would be on my dev machine, but there will also be data pushed into it during development and testing, so I would like to set up a dedicated test SQL Server machine. But, as you might guess, I can't afford to go to Dell and buy one mega 16-core, 16 GB, 10 TB RAID 5 machine (wow, that sounds cool), so I have to save the money somewhere. The hardware only has to live for a year (longer is nice, of course), and the SQL Server won't be hit too hard: I guess the average server would only register the load as an occasional cough. But I do want the machine to be somewhat performant: if it does get some data, it must be reasonably responsive. So my question is: where can I leave out the expensive parts? Is 2 GB enough, or should I go for 4 GB? Is an average processor enough, or should it be top of the line? Is SQL Server a heavy resource user, or is a simple desktop PC good enough? It will run on Windows Server 2008, by the way.

    Read the article

  • DAS vs SAN storage for serving 2 to 4 nodes

    - by Luke404
    We currently have 4 Linux nodes with local storage, arranged in two active/passive pairs with storage mirrored using DRBD, running virtual machines (actually using Xen Hypervisor) for typical hosting workloads (mail, web, a couple VPS, etc.). We're approaching the (presumed) maximum IOPS of those servers, and we're planning to migrate to an external storage solution with two active nodes, with capacity for up to four active nodes.

    Since we're an all-Dell shop I've done some research and found the MD3200 / MD3200i products should be the ones we're looking for. We are pretty sure we won't be attaching more than 4 hosts on a single storage and I'm wondering if there is any clear advantage for one or the other.

    In theory I should be able to attach 4 SAS hosts to a single MD3200 (single links on a single-controller MD3200, or dual redundant SAS links from each host to a dual-controller MD3200), or 4 iSCSI hosts to a single MD3200i (directly on its 4 GigE ports without any switch, again with dual links for the dual-controller option). Both setups should let us implement live VM migration since all hosts can access all the LUNs at the same time, and also some shared filesystem like GFS2 or OCFS2. Also, both setups should allow full redundancy of the whole system (assuming dual controllers in the storage).

    One difference I can see is that the DAS solution is actually limited to 4 hosts while the iSCSI one should be able to grow to more hosts (adding two GigE switches to the mix). One point for the iSCSI solution is that it would allow us to start out with our current nodes and upgrade them at a later time (we can't add other SAS controllers, but they already have 4 GigE ports each). With the right (iSCSI|SAS) controllers I should be able to connect diskless nodes and boot them off the external storage, which I think is a good thing (get rid of any local storage). On the other hand, I would have thought the SAS one to be cheaper, but it seems like an MD3200 actually costs a little less than an MD3200i (?)

    (Please note: I've used Dell gear in my examples since that's what we're looking for, but I assume the same goes with other vendors.) I would like to know if my assumptions above are correct, and if I'm missing any important difference between the two setups.

    Read the article

  • Setting up WHM nameservers

    - by Miskone
    I am new to server administration and I've just got a dedicated root server from Hetzner. First, in Hetzner's Robot I set up the DNS entries. Registered nameservers:

        ns1.raybear.com  88.198.32.57
        ns2.raybear.com  88.198.32.57

    Under DNS entries I have buzz-buzz.me pointing to 88.198.32.57 (my server's IP address), and in my WHM I have this DNS zone for buzz-buzz.me:

        ; cPanel first:11.42.1.17 (update_time):1402062640 Cpanel::ZoneFile::VERSION:1.3 hostname:hosting.raybear.com latest:11.42.1.17
        ; Zone file for buzz-buzz.me
        $TTL 14400
        buzz-buzz.me.   86400   IN   SOA   ns1.raybear.com.   miskone.gmail.com. (
                        2014060605   ;Serial Number
                        86400        ;refresh
                        7200         ;retry
                        3600000      ;expire
                        86400 )
        buzz-buzz.me.   86400   IN   NS    ns1.raybear.com.
        buzz-buzz.me.   86400   IN   NS    ns2.raybear.com.
        buzz-buzz.me.   14400   IN   A     88.198.32.57
        localhost       14400   IN   A     127.0.0.1
        buzz-buzz.me.   14400   IN   MX    0   buzz-buzz.me.
        mail            14400   IN   CNAME buzz-buzz.me.
        www             14400   IN   CNAME buzz-buzz.me.
        ftp             14400   IN   CNAME buzz-buzz.me.
        agent           14400   IN   A     88.198.32.57
        src             14400   IN   A     88.198.32.57
        platform        14400   IN   A     88.198.32.57

    But I still have problems accessing buzz-buzz.me, agent.buzz-buzz.me and platform.buzz-buzz.me. I also have a problem with mail to my Google account: I can send but not receive emails. How do I solve this? As I said, I am completely new here and I need urgent help. :( (A hedged dig sketch follows this entry.)

    Read the article
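    A small sketch of checks that separate delegation problems from zone problems, run from a machine outside the server; the public resolver used here (8.8.8.8) is just an example:

        # Is the registrar actually delegating buzz-buzz.me to ns1/ns2.raybear.com?
        dig buzz-buzz.me NS +short @8.8.8.8
        # Do the nameservers answer authoritatively for the records in the zone?
        dig agent.buzz-buzz.me A @ns1.raybear.com
        # Where is inbound mail being sent? (the zone's MX currently points at the server itself)
        dig buzz-buzz.me MX +short @8.8.8.8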

  • Cannot resolve Hostname to IP, but IP to hostname works

    - by blade
    Hi, I have deployed a bunch of Windows Server VMs with a cloud hosting service. These machines are all joined to a domain controller hosted on the same service, which also hosts DNS. All of the domain-joined machines have dynamic IPs (as does the DC). If I try to resolve any of the hostnames remotely, it fails. For example, I am in SQL Server Reporting Services and I need to connect to a remote server: I provide the hostname of the desired target server and this fails, but if I provide the IP instead, it works. How can I pass the hostname and have it resolve to the IP? Is there anything I need to look for in the DNS server? It has records of the hostnames (in forward lookup, I think), but reverse is empty. Isn't it the case that forward lookup resolves IP to hostname and reverse resolves hostname to IP? Also, I don't know the subnet mask, because this is not in my control, so the machines may not be in the same subnet; can this be a cause of the problem? Where is the problem? Thanks. (A hedged nslookup sketch follows below.)

    Read the article
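    A small sketch of how to check whether the failing machine is even asking the right DNS server; these are stock Windows commands, and 10.0.0.5 stands in for the domain controller's actual address:

        rem The DC's DNS server should be the first DNS server listed on every domain-joined VM:
        ipconfig /all
        rem Ask the default resolver, then ask the DC's DNS directly; if only the second works,
        rem the NIC's DNS settings are the problem:
        nslookup targetserver
        nslookup targetserver 10.0.0.5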

  • How to route public static IP to a virtual machine on a vmware ESXi host?

    - by Kevin Southworth
    I have 5 static IPs from my ISP (Comcast), and I have a physical machine with VMware ESXi 4.0 on it that is hosting multiple virtual machines. Right now I am just using the default VMware virtual network (vSwitch0) with DHCP from the Comcast IP gateway router, and everything is working fine: each virtual machine can access the internet, etc. One of my virtual machines is a webserver (Windows Server 2008) and I want to assign it one of my 5 static IPs so it's accessible from the public internet, while leaving the other VMs on the internal LAN still using DHCP. If I just plug my laptop directly into the Comcast IP gateway (it has 4 ports on the back) and assign my laptop a static IP using the Windows networking dialogs, I can hit my laptop from the public internet and it works great. However, if I try the same steps to set a static IP config on my Windows Server 2008 VM, it does not work. The VM cannot access the internet (open Firefox and try to visit google.com), and I cannot see the VM from the public internet either. I'm assuming I'm missing something in the ESXi config somewhere, but I'm pretty new to ESXi and I'm not sure how to configure it to work this way.

    Read the article

  • Looking for a host based network monitor solution

    - by Ole Martin Handeland
    Hi all!

    Problem
    My hosting company provides a network usage graph for my dedicated server. It seems that one day earlier this month, my network usage suddenly spiked, with several hundred megabytes transferred (usually it's in the tens, not hundreds). It was probably me, but I just can't be sure who or what it was.

    Question
    So my question is: does anyone know of a host-based solution for monitoring network usage that would tell me the client's IP address and the port/service he or she used?

    What I don't want
    I'm just guessing that someone will suggest I use Nagios, Munin, Zabbix, Cacti or MRTG; I've also looked at those, but a graph of network usage will not give me the answers I'm looking for. :-)

    Almost there
    I've already looked at a lot of monitoring solutions, and I've tried ntop (http://www.ntop.org/), darkstat (http://unix4lyfe.org/darkstat/) and others. Darkstat just didn't give me the answers: although it lists a lot of statistics, and I can list the clients, it doesn't show me the network usage for a particular period. Ntop is by far the best I've seen so far, but I think it mostly shows current network usage, not the historical part. I could run apt-get upgrade and download a whole bunch of software, yet not see it in the log afterwards.

    Read the article
