Search Results

Search found 12836 results on 514 pages for 'host mechanic'.


  • mod_jk problem: Tomcat is probably not started or is listening on the wrong port

    - by Konrad
    Hi, I am running an application on Tomcat 6.0.26, with Apache in front of it talking to Tomcat over mod_jk. Every few hours, when I try to access the application, the browser simply spins and no content is returned. No error is reported in the Tomcat logs, but I found errors like these in the mod_jk log:

        [Sun Jul 04 21:19:13 2010][error] ajp_service::jk_ajp_common.c (1758): Error connecting to tomcat. Tomcat is probably not started or is listening on the wrong port. worker=***** failed
        [Sun Jul 04 21:19:13 2010][info] jk_handler::mod_jk.c (1985): Service error=0 for worker==*****
        [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 46
        [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 46
        [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 46
        [Sun Jul 04 21:19:13 2010][error] ajp_get_reply::jk_ajp_common.c (1503): Tomcat is down or refused connection. No response has been sent to the client (yet)
        [Sun Jul 04 21:19:13 2010][error] ajp_get_reply::jk_ajp_common.c (1503): Tomcat is down or refused connection. No response has been sent to the client (yet)
        [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 46
        [Sun Jul 04 21:19:13 2010][error] ajp_get_reply::jk_ajp_common.c (1503): Tomcat is down or refused connection. No response has been sent to the client (yet)
        [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 45
        [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 46
        [Sun Jul 04 21:19:13 2010][info] ajp_service::jk_ajp_common.c (1721): Receiving from tomcat failed, recoverable operation attempt=0

    My worker is configured as follows:

        worker.admanagonode.port=8009
        worker.admanagonode.host=*****.com
        worker.admanagonode.type=ajp13
        worker.admanagonode.ping_mode=A
        worker.admanagonode.socket_timeout=60
        worker.admanagonode.prepost_timeout=10000
        worker.admanagonode.connect_timeout=10000
        worker.admanagonode.connection_pool_size=200
        worker.admanagonode.connection_pool_timeout=300
        worker.admanagonode.retries=20
        worker.admanagonode.socket_keepalive=1
        worker.admanagonode.cachesize=10
        worker.admanagonode.cache_timeout=600

    Tomcat has the same port number in its Connector configuration:

        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" address="*********" />

    Does anyone have an idea what I am missing? What can cause such problems? Cheers, Konrad
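    A quick reachability probe, added here as a sketch rather than something from the original post: it connects to the AJP port from the Apache host and sends an AJP13 CPing, which a healthy connector answers with CPong. The host name below is a placeholder for the masked worker host above.

        # Minimal AJP connector check - a diagnostic sketch, not part of the post.
        import socket

        HOST, PORT = "tomcat.example.com", 8009   # placeholders for the worker above

        with socket.create_connection((HOST, PORT), timeout=5) as s:
            s.sendall(b"\x12\x34\x00\x01\x0a")    # AJP13 packet: magic 0x1234, length 1, CPing (10)
            reply = s.recv(5)
            # A healthy connector replies with an "AB" packet whose payload byte is CPong (9).
            if reply[:2] == b"AB" and reply[4:5] == b"\x09":
                print("CPong received - connector is up")
            else:
                print("Unexpected reply: %r" % reply)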

    Read the article

  • Apache2 - 500 internal server error

    - by Lucio Coire Galibone
    I'm running a VPS with Linux CentOS 6, 4 GB of RAM, 10 GB of disk and 2 virtual Intel(R) Xeon(R) CPU L5640 @ 2.27GHz CPUs. According to my host, each virtual CPU is at least 0.5 physical CPU. At certain times of the day, the busier ones, accessing my PHP script intermittently returns "500 Internal Server Error". I turned Apache logging up to debug level and set PHP logging to E_ALL, but I can't find any reference to the 500 error in any of the logs (and I checked the right logs). There is no .htaccess file in the script's path.

    The strange thing is that the error starts at the first PHP line of the script: the preceding HTML renders correctly, but at the first PHP line the script returns a 500. CPU load is always fine (max 0.15 0.08 0.01) and RAM sits close to 95%, but it has only hit swap twice in a month, using 2-5 MB. Apache runs with prefork and these values:

        <IfModule prefork.c>
            StartServers          8
            MinSpareServers       5
            MaxSpareServers      20
            ServerLimit         280
            MaxClients          280
            MaxRequestsPerChild 4000
        </IfModule>

    Everything works correctly and I get no errors during quiet periods, but the errors start when traffic rises (6,000-9,000 visits per hour). Can I solve the problem by adding resources? (I can upgrade the RAM up to 16 GB.) Could it be caused by reaching MaxClients (though Apache should log that, right)? If I upgrade the RAM to 6 or 8 GB, should I recalculate MaxClients with this formula?

        MaxClients = total RAM dedicated to the web server / max child process size

    The max child process size is around 20 MB. What else could the problem be? Thanks in advance.
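    A worked version of the MaxClients rule of thumb quoted above, only as a sketch: the 20 MB child size comes from the question, the other figures are assumptions, and the real resident size of the httpd children should be measured before relying on the result.

        # Rough MaxClients estimate from the formula quoted above (all figures approximate).
        total_ram_mb  = 8 * 1024   # hypothetical upgrade to 8 GB
        reserved_mb   = 1024       # assumed headroom for the OS, MySQL, PHP, etc.
        child_size_mb = 20         # approximate size of one prefork child, from the question

        max_clients = (total_ram_mb - reserved_mb) // child_size_mb
        print(max_clients)         # ~358 with these numbers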

    Read the article

  • Instabilities with Bridged and bonded interfaces

    - by Henry-Nicolas Tourneur
    I posted yesterday to get a working setup with several bridged interfaces used for virtual machines (KVM/libvirt). One bridge uses just eth3 as its port, while the second one (public traffic) sits on a bonded Ethernet interface. The setup works, but not all the time: I can start a download from a VM, then it stops and freezes! So I don't know whether my bridge parameters are correct. Could you check the config below?

        iface eth3 inet manual

        auto bond0
        iface bond0 inet manual
            slaves eth1 eth2
            pre-up ip link set bond0 up
            down ip link set bond0 down

        auto br0
        iface br0 inet static
            address 10.160.0.7
            netmask 255.255.255.128
            bridge_ports eth3
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp on

        auto br0:1
        iface br0:1 inet static
            address 10.160.0.9
            netmask 255.255.255.255

        auto br0:2
        iface br0:2 inet static
            address 10.160.0.10
            netmask 255.255.255.255

        auto br1
        iface br1 inet static
            address 217.4.40.242
            netmask 255.255.255.240
            gateway 217.4.40.241
            pre-up /etc/network/firewall start
            bridge_ports bond0
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp on

        auto br1:1
        iface br1:1 inet static
            address 217.4.40.252
            netmask 255.255.255.255

        auto br1:2
        iface br1:2 inet static
            address 217.4.40.253
            netmask 255.255.255.255

    And yes, the host also sometimes logs martian packets:

        kernel: [249146.055172] martian source 10.160.0.17 from 10.160.0.10, on dev vnet2
        kernel: [249146.073122] ll header: ff:ff:ff:ff:ff:ff:54:52:00:76:c3:5c:08:06
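    Not from the original post, but a quick way to confirm what the kernel actually applied to each bridge, so it can be compared with /etc/network/interfaces above. A sketch assuming the standard Linux /sys bridge layout.

        # Dump the live bridge parameters - a diagnostic sketch, not part of the post.
        from pathlib import Path

        for br in ("br0", "br1"):                       # bridge names from the config above
            params = Path("/sys/class/net") / br / "bridge"
            if not params.is_dir():
                print(br, "is not a bridge or does not exist")
                continue
            for name in ("stp_state", "forward_delay", "hello_time", "max_age"):
                value = (params / name).read_text().strip()
                print(f"{br} {name} = {value}")         # times are typically in hundredths of a second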

    Read the article

  • Are relative-path symlinks reliable on Rackspace Cloud Sites?

    - by Jakobud
    Rackspace's Cloud Sites have a lot of stupid limitations. For example, no SSH (in or out), no shell, no rsync, etc. (even through cron). Recently I learned that you also can't reliably use symlinks in Cloud Sites. Apparently this is because the absolute path of your sites could change at any moment, since it's a shared hosting environment split across many disks/servers. I gather that different accounts' sites get moved from disk to disk whenever Rackspace decides to, supposedly to increase efficiency across the board.

    After talking with a Rackspace tech, he said they cannot guarantee that symlinks will always work. Obviously that makes sense for a symlink that uses an absolute path like this:

        /mnt/disk-34566/home/user34566/files/sites/www.mysite.com/mydir

    If your files get moved to a different disk (or whatever they do), the absolute path would be different and the link would now be broken. That makes sense. So next, I asked the Rackspace tech whether relative-path symlinks are reliable. Say I have the following link:

        files/sites/www.mysite.com/mylink --> ../www.myothersite.com/anotherdir

    The symlink simply points to a sub-directory of a nearby directory. He said they cannot guarantee that even those will always work. Since it uses a relative path to another nearby directory, I'm not sure how it could ever break from something Rackspace does. Do relative symlinks somehow rely on absolute paths underneath? Or is Rackspace using some strange custom filesystem where they break when absolute paths change? It seems like a relative-path symlink would be fine and would only break if the user did something to mess up the directories involved. But when the techs say they "don't officially support symlinks of any kind", that makes me hesitant to use them for large commercial websites in Cloud Sites. Can anyone with Rackspace experience give input on this topic?
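    A small demonstration, not from the post, of how a relative symlink is stored on an ordinary POSIX filesystem: the link records only the relative target string and is resolved against the link's own directory, so moving the whole tree leaves it intact as long as the two directories move together. The temporary paths are made up for the example.

        # Relative symlink behaviour sketch (standard library only).
        import os, shutil, tempfile

        root = tempfile.mkdtemp()
        os.makedirs(f"{root}/sites/www.myothersite.com/anotherdir")
        os.makedirs(f"{root}/sites/www.mysite.com")
        os.symlink("../www.myothersite.com/anotherdir", f"{root}/sites/www.mysite.com/mylink")

        # The link stores only the relative target string.
        print(os.readlink(f"{root}/sites/www.mysite.com/mylink"))

        # "Move the disk": relocate the whole tree and the link still resolves.
        new_root = root + "-moved"
        shutil.move(root, new_root)
        print(os.path.realpath(f"{new_root}/sites/www.mysite.com/mylink"))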

    Read the article

  • How do I get a subdomain on Xampp Apache @ localhost?

    - by jasondavis
    UPDATE: I got it working now. I just had to change the port; the port number is important here.

    I modified my Windows hosts file at C:\Windows\System32\drivers\etc and added this to the end of it:

        127.0.0.1 images.localhost
        127.0.0.1 www.friendproject.com
        127.0.0.1 friendproject.com

    Then I modified the httpd-vhosts.conf file for Apache under XAMPP at C:\webserver\apache\conf\extra. Under the part that shows examples for adding virtual hosts, I added the code below:

        NameVirtualHost *:80

        <VirtualHost *:80>
            DocumentRoot /htdocs/images/
            ServerName images.localhost
        </VirtualHost>

        <VirtualHost *:80>
            DocumentRoot /htdocs/
            ServerName friendproject.com/
        </VirtualHost>

        <VirtualHost *:80>
            DocumentRoot /htdocs/
            ServerName www.friendproject.com/
        </VirtualHost>

    Now the problem is that when I go to any of the newly added domains in the browser, I get the error below. Even worse, I now get this error even when going to http://localhost/, which worked fine before this change. I realize I can change everything back, but I really need to at least get http://images.localhost to work. What do I do?

        Access forbidden!
        You don't have permission to access the requested directory. There is either no index document or the directory is read-protected.
        If you think this is a server error, please contact the webmaster.
        Error 403
        localhost
        07/25/09 21:20:14
        Apache/2.2.11 (Win32) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.8i PHP/5.2.9
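    A sketch, not part of the original post: request each configured ServerName and print the HTTP status, which separates names that do not resolve at all from names that resolve but hit the 403 page shown above.

        # Probe each vhost name - a diagnostic sketch with the names from the post.
        from urllib.request import urlopen
        from urllib.error import HTTPError, URLError

        for name in ("localhost", "images.localhost", "friendproject.com", "www.friendproject.com"):
            try:
                with urlopen(f"http://{name}/", timeout=5) as resp:
                    print(name, resp.status)
            except HTTPError as err:           # e.g. the "Access forbidden!" 403 page
                print(name, err.code)
            except URLError as err:            # name does not resolve or nothing is listening
                print(name, "failed:", err.reason)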

    Read the article

  • Can't ping a DNS zone on Windows Server 2008 R2

    - by Roberto Fernandes
    I've just configured a Windows Server 2008 R2 machine, but I have run into a lot of problems with the DNS role. About the server configuration:

        name: fdserver
        IP address: 192.168.0.10

    I have a DNS zone called "fd.local". This is my domain and it's working fine. I also created a zone called fdserver, and inside this zone a record (A) with "*" as the host, because this is a web server: I configured Apache so that if you enter something like "site.fdserver" it points you to the "site" folder. This works fine, but ONLY inside the server.

    The server is a DNS server too, and the clients have 3 DNS entries: 192.168.0.10 (the server's own IP), 8.8.8.8 and 8.8.4.4 (Google public DNS). Now the problems start. Most of the computers on my network CAN join the domain without problems, but they just CAN'T ping "something.fdserver". And here is the strange part: if I remove the two secondary DNS entries (8.8.8.8 and 8.8.4.4), the machines obviously stop resolving external websites (like microsoft.com), but now they CAN ping "something.fdserver".

    I don't know if I explained this correctly, and my English is terrible, but inside the server everything works as it is supposed to. On the workstation machines it only works if I remove the secondary DNS servers! If you need any details, just ask. Thanks!
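    A sketch that assumes the third-party dnspython package (not something from the original question): it asks each configured DNS server for the wildcard name directly, which shows that only 192.168.0.10 can answer for the private fdserver zone while the public resolvers return NXDOMAIN. The host name "site.fdserver" is a hypothetical example.

        # Compare answers from the internal DNS server and a public resolver.
        import dns.resolver

        for server in ("192.168.0.10", "8.8.8.8"):
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [server]
            try:
                answer = r.resolve("site.fdserver", "A")
                print(server, "->", [a.to_text() for a in answer])
            except Exception as err:             # NXDOMAIN, timeout, etc.
                print(server, "->", type(err).__name__)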

    Read the article

  • Mac OS X Client With Static DHCP Assignment Requests Wrong IP via Option 50

    - by Starchy
    I have a number of Mac (and a few Linux) laptops getting DHCP from a Force10 layer 3 switch, the only DHCP server on the subnet. There's a global dynamic pool, and for each full-time employee's laptop I have a single-IP static pool keyed by MAC address. One and only one of the clients, running OS X 10.7.5, consistently fails to get its static assignment. The MAC address in the static pool definition has been carefully re-checked.

    Running tcpdump on a mirrored port when the laptop connects, I see that it is specifically requesting 10.100.0.252 (a dynamic address):

        11:32:10.108280 IP (tos 0x0, ttl 255, id 28293, offset 0, flags [none], proto UDP (17), length 328)
            0.0.0.0.bootpc > broadcasthost.bootps: [udp sum ok] BOOTP/DHCP, Request from 3c:07:54:xx:xx:xx (oui Unknown), length 300, xid 0x1399da89, Flags [none] (0x0000)
            Client-Ethernet-Address 3c:07:54:xx:xx:xx (oui Unknown)
            Vendor-rfc1048 Extensions
              Magic Cookie 0x63825363
              DHCP-Message Option 53, length 1: Request
              Parameter-Request Option 55, length 9:
                Subnet-Mask, Default-Gateway, Domain-Name-Server, Domain-Name
                Option 119, LDAP, Option 252, Netbios-Name-Server, Netbios-Node
              MSZ Option 57, length 2: 1500
              Client-ID Option 61, length 7: ether 3c:07:54:xx:xx:xx
              Requested-IP Option 50, length 4: 10.100.0.252
              Lease-Time Option 51, length 4: 7776000
              Hostname Option 12, length 10: "host-name"
              END Option 255, length 0
              PAD Option 0, length 0, occurs 8

    I haven't been able to find any extra system prefs or unusual software on the laptop. Disabling the interface and rebooting, or temporarily setting the IP manually, both fail to make any difference. Any suggestions appreciated.

    Read the article

  • Recommendations for good FTP server for Win 2008 x64

    - by sfhtimssf1970
    I spent a bunch of time learning and configuring the "all new and better" FTP feature for IIS 7. In my opinion, it still fails hard:

    1. In order to have multiple FTP sites on the same machine, you have to use host|user usernames (like domain.com|jason) for every account. Using IIS Manager authentication doesn't seem to work at all. I'm sure I'm doing something wrong, but I can't figure out what it is; I've read all the official articles and configured it a hundred different ways.
    2. It doesn't play well with passive connection types; passive mode has to be disabled on the client for it to work.
    3. There is no way to let one user see multiple sites regardless of which binding they connect to. For instance, if "jason" connects to ftp.domain.com, he should be able to see domain2.com and domain3.com without seeing domain4.com and domain5.com. It takes an act of God to set this up with IIS 7.

    So I'm looking to install a third-party FTP server instead. I've looked at both FileZilla and ZFTPServer. Does anyone know any pros/cons of these? Any other recommendations?

    Read the article

  • Fatal Execution Engine Error on Windows Server 2008 R2, IIS 7.5

    - by user66524
    Hi guys, we are running some ASP.NET (3.5) applications on Windows Server 2008 R2 with IIS 7.5. Recently we started getting event log entries we find hard to interpret; we have no idea what is wrong and hope someone can help.

    1. EventID: 1334 (9-1-2011 8:41:57). Error message:

        An error occurred during a process host idle check.
        Exception: System.AccessViolationException
        Message: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
        StackTrace:
           at System.Collections.Hashtable.GetEnumerator()
           at System.Web.Hosting.ApplicationManager.IsIdle()
           at System.Web.Hosting.ProcessHost.IsIdle()

    2. EventID: 1023 (9-1-2011 19:44:02). Error message:

        .NET Runtime version 2.0.50727.4952 - Fatal Execution Engine Error (742B851A) (80131506)

    3. EventID: 1000 (9-1-2011 19:44:03). Error message:

        Faulting application name: w3wp.exe, version: 7.5.7600.16385, time stamp: 0x4a5bcd2b
        Faulting module name: mscorwks.dll, version: 2.0.50727.4952, time stamp: 0x4bebd49a
        Exception code: 0xc0000005
        Fault offset: 0x0000c262
        Faulting process id: 0x%9
        Faulting application start time: 0x%10
        Faulting application path: %11
        Faulting module path: %12
        Report Id: %13

    4. EventID: 5011 (9-1-2011 19:44:03). Error message:

        A process serving application pool 'AppPoolName' suffered a fatal communication error with the Windows Process Activation Service. The process id was '2552'. The data field contains the error number.

    5. Some additional info: we collected memory.hdmp (234 MB) and minidump.mdmp (19.2 MB) from Control Panel's Action Center, but I do not know how to use them. :(

    Read the article

  • SocketException (Timeout) only when running as scheduled task

    - by BVartin
    I'm running a C# web-scraper application (that I wrote) on a Windows Server 2003 instance, under a user belonging to the local Administrators group. When I run it within a desktop/remote-desktop session the application runs successfully, but when I schedule it to run under the same user/security context outside of a desktop session, all socket connections time out. The scheduled task calls a batch file, which in turn calls the application. The Windows Server 2003 instance has a very basic configuration and isn't even joined to a domain. I cannot find anything in any firewall or security configuration that would prevent this, but maybe I have overlooked something. Can anyone be of any assistance?

        System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond X.X.X.X:443
           at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
           at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Int32 timeout, Exception& exception)
           --- End of inner exception stack trace ---
           at System.Net.HttpWebRequest.GetResponse()
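    A minimal outbound test, added as a sketch and not taken from the post: run it once interactively and once from the scheduled task to confirm whether the non-interactive session can reach the remote host at all (which helps separate a network/firewall rule from a per-session setting such as a proxy). The host name is a placeholder for the masked X.X.X.X:443 target.

        # Outbound HTTPS connectivity check with a timeout - a diagnostic sketch.
        import datetime, socket, ssl

        HOST, PORT = "remote.example.com", 443      # hypothetical target

        try:
            with socket.create_connection((HOST, PORT), timeout=15) as sock:
                with ssl.create_default_context().wrap_socket(sock, server_hostname=HOST) as tls:
                    print(datetime.datetime.now(), "connected, TLS", tls.version())
        except OSError as err:
            print(datetime.datetime.now(), "failed:", err)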

    Read the article

  • VirtualBox snapshot size

    - by intuited
    I've started using Windows 7 under VirtualBox on an Ubuntu 10.10 host. I took about 6 snapshots over the course of setting up the VM from the Windows restore image that came with the computer. My installations were more or less limited to Windows updates, antivirus, and the VirtualBox Guest Additions; I uninstalled much more than I installed. The VM was running for about 24 hours in total.

    The snapshots increased in size at a worrisome rate, even when the machine was idle: the snapshot .vdi file for the period between 11:22 PM and 9:02 AM is 6 GB in size, and during that time very little happened. The other .vdi files are between 0.5 and 3 GB, most between 1 and 2 GB. The corresponding .sav files are between 0.5 and 1 GB. The Internet connection where I was doing this is limited to 30 KB/s download, which, constantly saturated, works out to less than 3 GB per 24-hour period. Is this normal? Is there something that can be done to make snapshots more practical?

    Update: on starting up the VM again, I've noticed that mscorsvw is using significant processing time. Apparently this process [precompiles .NET assemblies]. This may have been going on during the period when I was taking snapshots, which might explain some of the snapshot size increase. I would be somewhat surprised to learn that this could be responsible for over 10 GB of additional disk usage, or that it would run for roughly 24 hours. Is this possible?
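    A sketch, not from the post, for correlating snapshot growth with time: list the differencing images and saved-state files with their sizes and modification times. The folder path is an assumption; substitute the VM's actual Snapshots directory.

        # List snapshot files by size and mtime - a diagnostic sketch.
        from datetime import datetime
        from pathlib import Path

        snapdir = Path.home() / "VirtualBox VMs" / "Windows 7" / "Snapshots"   # hypothetical path

        for f in sorted(snapdir.iterdir(), key=lambda p: p.stat().st_mtime):
            st = f.stat()
            print(f"{st.st_size / 2**30:6.2f} GiB  {datetime.fromtimestamp(st.st_mtime):%Y-%m-%d %H:%M}  {f.name}")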

    Read the article

  • PHP & IIS 6 Encoding problem

    - by Alexander
    The server is running Windows 2003 with IIS 6.0.3790.1830 x86 (iis.dll). The database server is Microsoft SQL Server 2000 and the PHP version is 5.3. The original application is hosted on appserv1 and its database is on dbserv1. It works fine: everything is tuned and running great.

    The same application (different modules) needed to be placed on another server for other uses, so I copied the database to dbserv2 and configured appserv2 to host the application, ending up with two almost identical copies. Both dbserv1 and dbserv2 use the same encoding, and both appserv1 and appserv2 run IIS 6 with the same PHP configuration. I also tried my best to use the same IIS settings, and I made sure the encoding is passed both in the HTTP headers and in the meta tags with http-equiv. Both applications use UTF-8.

    The problem is that the copy of the application does not display non-ASCII characters correctly in the browser, even though the browser correctly detects the page's UTF-8 encoding. At first I thought it was a database issue, given that MSSQL 2000 doesn't support UTF-8 and uses UCS-2 instead, but when I pointed the application on appserv2 at the database on dbserv1, it had the same encoding problems. How can I make this work? Thank you for reading.
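    A small illustration, not from the post, of the classic symptom when the two sides disagree about the character set on the way to a UTF-8 page: the same text decoded with the wrong codec turns into mojibake or replacement characters. The sample string is made up.

        # What a charset mismatch looks like, independent of the database.
        text = "São Paulo café"

        # UTF-8 bytes interpreted as Windows-1252: prints mojibake such as 'SÃ£o Paulo cafÃ©'.
        print(text.encode("utf-8").decode("cp1252"))

        # Single-byte (cp1252) data fed to a UTF-8 page: prints replacement characters.
        print(text.encode("cp1252").decode("utf-8", "replace"))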

    Read the article

  • Cannot connect to SQL Server Express from SQL Server Standard

    - by Jackson Sunuwar
    As my title says, I cannot connect to my SQL Server Express instance from SQL Server Standard. I have tried disabling the firewall and checked that SQL Browser is started, but for some reason I cannot connect to my database, named server_name\sqlexpress.

    I have a virtual machine with a full MS SQL Server 2008 R2 running on it, and several other VMs running SQL Express. Those run fine and I can connect to them using SQL Express, but when I try to access them from SQL Server Standard I get this error:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)

    Digging deeper into the error, I found this:

        Error Number: -1
        Severity: 20
        State: 0

    and finally this:

        Program Location:
           at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
           at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
           at System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity, SqlConnection owningObject)
           at System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject)
           at System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart)
           at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance)
           at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance)
           at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection)
           at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup)
           at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
           at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
           at System.Data.SqlClient.SqlConnection.Open()
           at Microsoft.SqlServer.Management.SqlStudio.Explorer.ObjectExplorerService.ValidateConnection(UIConnectionInfo ci, IServerType server)
           at Microsoft.SqlServer.Management.UI.ConnectionDlg.Connector.ConnectionThreadUser()

    The firewall is turned off on the VM running SQL Server Standard, and I turned off the firewall on one of the VMs running SQL Express as well, but I still get the error. Can someone please help? Thank you.
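    Error 26 means the client could not resolve the named instance via the SQL Server Browser service on UDP 1434. The sketch below, which is not from the post and assumes the SSRP request format (0x04 plus the instance name), asks the Browser service on the remote VM for the SQLEXPRESS instance directly; a silent timeout points at the Browser service or UDP 1434 being blocked rather than at the instance itself. The address is a placeholder.

        # Query the SQL Server Browser service for a named instance - a diagnostic sketch.
        import socket

        HOST, INSTANCE = "192.168.1.50", "SQLEXPRESS"    # hypothetical VM address and instance

        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(5)
        s.sendto(b"\x04" + INSTANCE.encode("ascii"), (HOST, 1434))   # SSRP CLNT_UCAST_INST
        try:
            data, _ = s.recvfrom(4096)
            print(data[3:].decode("ascii", "replace"))   # instance name, version, tcp port, ...
        except socket.timeout:
            print("No answer from SQL Browser (UDP 1434) - check the service and the firewall")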

    Read the article

  • NAT causes huge external (actually internal) bandwidth usage

    - by user67953
    We have 4 servers running in a data center, with internal IPs 192.168.3.* assigned. A hardware firewall (Fortigate) is configured with NAT and routes traffic like this:

        external IP 111.222.333.10 -> 192.168.3.10  www.server1.com
        external IP 111.222.333.11 -> 192.168.3.11  www.server2.com
        external IP 111.222.333.12 -> 192.168.3.12  www.server3.com

    In DNS, we have:

        www.server1.com  A  111.222.333.10

    Now if I send a lot of data to www.server1.com from www.server2.com, the data is sent through 111.222.333.10 (the external IP), and this makes our external bandwidth usage huge (and expensive!). The workaround I have is to add a local hosts-file mapping on server2 (192.168.3.10 www.server1.com); that way, files sent from server2 to www.server1.com stay internal. However, we keep adding servers, and it would be hard to add the mapping to every server manually.

    Is there another solution for this? Can we do something on the Fortigate firewall?

    P.S. The DNS servers being used are public ones, such as OpenDNS, Google DNS, etc.
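    One way to stop maintaining the mappings by hand, shown only as a sketch and not taken from the post: keep a single internal map and have each server refresh a managed block in its hosts file from a small script (or a configuration-management tool). The names and addresses are the ones from the question; the marker comments are an invented convention.

        # Rewrite a managed block of internal mappings in /etc/hosts (run as root).
        INTERNAL = {
            "www.server1.com": "192.168.3.10",
            "www.server2.com": "192.168.3.11",
            "www.server3.com": "192.168.3.12",
        }

        BEGIN, END = "# BEGIN internal-map", "# END internal-map"

        with open("/etc/hosts") as f:
            lines = f.read().splitlines()

        # Drop any previous managed block, then append a fresh one.
        if BEGIN in lines and END in lines:
            lines = lines[:lines.index(BEGIN)] + lines[lines.index(END) + 1:]
        lines += [BEGIN] + [f"{ip}\t{name}" for name, ip in INTERNAL.items()] + [END]

        with open("/etc/hosts", "w") as f:
            f.write("\n".join(lines) + "\n")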

    Read the article

  • Etag configuration with multiple apache servers or CDN / How does Google do ETags?

    - by perrierism
    I have an application that is served from two Apache 2 servers, and I want to configure ETags on static content. In the future I would also like to use a CDN. I have read that this is supposed to be a problem, because the ETag information will differ from server to server:

        The ETag format for Apache 1.3 and 2.x is inode-size-timestamp. Although a given file may reside in the same directory across multiple servers, and have the same file size, permissions, timestamp, etc., its inode is different from one server to the next.

    So if you're using more than one web server to host your app (like 90% of the web apps you use every day do), this is supposed to be an issue. However, I see that Google uses ETags, and they certainly use multiple servers, CDNs, edge caching, etc., yet I get a 304 response for any cached Google content. How do they do it? How do you get around the multiple-server issue? Is there a way to configure this with Apache?
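    A sketch, not from the post, for checking the behaviour end to end: fetch a static asset once, then replay the returned ETag in If-None-Match several times through the load balancer and see whether a 304 ever comes back when the request lands on a different backend. The URL is a placeholder.

        # Conditional-request check for ETag consistency behind a load balancer.
        from urllib.request import Request, urlopen
        from urllib.error import HTTPError

        URL = "http://www.example.com/static/logo.png"      # hypothetical asset

        etag = urlopen(Request(URL)).headers.get("ETag")
        print("ETag from first hit:", etag)

        if etag:
            for _ in range(5):
                try:
                    resp = urlopen(Request(URL, headers={"If-None-Match": etag}))
                    print(resp.status, resp.headers.get("ETag"))   # 200 means the ETag did not validate
                except HTTPError as err:
                    print(err.code)                                # 304 means it validated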

    Read the article

  • Unable to create new virtual hosts using MAMP with OSX Mavericks

    - by user2961676
    I have been using virtual hosts with MAMP on my Mac, and this worked up until now. I have 2 working virtual hosts that I created in the same manner, and they still work, but for some reason I am unable to create any new virtual hosts. When I attempt to go to a newly created virtual host in my browser, it generates a 404 Not Found error. The only thing I can think of is that this possibly started after I updated OS X to Mavericks, but I'm not sure what that would have changed, or why the old virtual hosts still work.

    See the excerpt below from my vhosts.conf file. franklin.dev works, jamiepjones.dev works, but sheilahixson.dev does not.

        <VirtualHost *:80>
            DocumentRoot "/Users/jamiejones/Sites/franklin"
            ServerName franklin.dev
            ErrorLog "logs/franlkin.dev-error_log"
            CustomLog "logs/franklin.dev-access_log" common
        </VirtualHost>

        <VirtualHost *:80>
            DocumentRoot "/Users/jamiejones/Sites/jamiepjones-wp"
            ServerName jamiepjones.dev
            ErrorLog "logs/jamiepjones.dev-error_log"
            CustomLog "logs/jamiepjones.dev-access_log" common
        </VirtualHost>

        <VirtualHost *:80>
            DocumentRoot "/Users/jamiejones/Sites/sheilahixson”
            ServerName sheilahixson.dev
            ServerAlias www.sheilahixson.dev
            ErrorLog "logs/sheilahixson.dev-error_log"
            CustomLog "logs/sheilahixson.dev-access_log" common
        </VirtualHost>

    and my hosts file:

        127.0.0.1       localhost
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost
        127.0.0.1       jamies-MacBook-Pro.Belkin # MAMP PRO - Do NOT remove this entry!
        127.0.0.1       hixson                    # MAMP PRO - Do NOT remove this entry!
        127.0.0.1       franklin.dev
        127.0.0.1       jamiepjones.dev
        127.0.0.1       sheilahixson.dev

    Please help!

    Read the article

  • How to create a VirtualHost in Ubuntu 12.10

    - by Mifas
    I have followed many articles on how to create a VirtualHost in Ubuntu. This is what I have done:

    1. Installed Apache: sudo apt-get install lamp-server^ phpmyadmin
    2. Created a folder called site1.com in /var/www/
    3. Created the file /etc/apache2/sites-available/site1.com
    4. Added the following code to that site1.com file:

        <VirtualHost *:80>
            ServerName www.site1.com
            ServerAdmin [email protected]
            ServerAlias site1.com
            DocumentRoot /var/www/site1.com

            # Other directives here
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>

            <Directory /var/www/site1.com/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride all
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    5. Edited the hosts file and added the following line:

        127.0.0.1 site1.com

    6. Enabled the site with sudo a2ensite site1.com and restarted the Apache service (I even restarted the PC).

    When I go to site1.com, I get the error message "The connection has timed out". But I can browse the site via localhost/site1.com. I have been trying for the last two days with no solution, and I have followed many articles and videos.
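    A sketch, not from the post: run this on the machine where the browser runs to see what site1.com actually resolves to and whether port 80 answers there. A timeout usually means the name is resolving somewhere other than 127.0.0.1 (for example, a registrar parking address) rather than pointing at an Apache problem.

        # Resolution and connectivity check for the new vhost name - a diagnostic sketch.
        import socket
        from urllib.request import Request, urlopen

        addr = socket.gethostbyname("site1.com")
        print("site1.com resolves to", addr)

        try:
            with socket.create_connection((addr, 80), timeout=5):
                print("port 80 is reachable at", addr)
            # Ask the local Apache for the vhost explicitly, bypassing name resolution.
            resp = urlopen(Request("http://127.0.0.1/", headers={"Host": "site1.com"}), timeout=5)
            print("vhost answered with status", resp.status)
        except OSError as err:
            print("connection failed:", err)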

    Read the article

  • Debian amd64 on Dell Studio 540 reboot hangs

    - by Shcheklein
    Hi, I have a Dell Studio 540 desktop with Debian Lenny installed on it:

        2.6.26-2-amd64 #1 SMP Tue Mar 9 22:29:32 UTC 2010 x86_64 GNU/Linux

    The problem is that I can't reboot it. It just hangs after the "Will now restart" message. I've already tried the reboot=b, reboot=a and reboot=h kernel options; nothing helps. Additional info below (I can provide any other information):

    dmidecode:

        System Information
            Manufacturer: Dell Inc.
            Product Name: Studio 540

    lspci:

        00:00.0 Host bridge: Intel Corporation 4 Series Chipset DRAM Controller (rev 03)
        00:02.0 VGA compatible controller: Intel Corporation 4 Series Chipset Integrated Graphics Controller (rev 03)
        00:02.1 Display controller: Intel Corporation 4 Series Chipset Integrated Graphics Controller (rev 03)
        00:1a.0 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #4
        00:1a.1 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #5
        00:1a.2 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #6
        00:1a.7 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #2
        00:1b.0 Audio device: Intel Corporation 82801JI (ICH10 Family) HD Audio Controller
        00:1c.0 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Port 1
        00:1c.2 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Port 3
        00:1c.5 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Port 6
        00:1d.0 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #1
        00:1d.1 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #2
        00:1d.2 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #3
        00:1d.7 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #1
        00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 90)
        00:1f.0 ISA bridge: Intel Corporation 82801JIR (ICH10R) LPC Interface Controller
        00:1f.2 IDE interface: Intel Corporation 82801JI (ICH10 Family) 4 port SATA IDE Controller
        00:1f.3 SMBus: Intel Corporation 82801JI (ICH10 Family) SMBus Controller
        00:1f.5 IDE interface: Intel Corporation 82801JI (ICH10 Family) 2 port SATA IDE Controller
        02:00.0 FireWire (IEEE 1394): JMicron Technologies, Inc. Device 2380
        03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02)

    Read the article

  • Azure load-balancing strategy

    - by growse
    I'm currently building out a small web deployment using VM instances on MS Azure. The main problem I'm facing at the moment is figuring out how to get the load balancing to detect whether a particular VM has failed and stop routing traffic to it. As far as I can tell, there are only two load-balancing options:

    1. Have multiple VMs (web01, web02, web03, etc.) within the same 'cloud service' behind a single VIP, and configure the endpoints to be load balanced.
    2. Create multiple 'cloud services', put a single web VM in each, and create a Traffic Manager profile across all those services.

    Option (1) appears to be extremely simplistic and doesn't attempt any host failure detection. Option (2) is much more varied, but requires me to put all my web servers into their own individual cloud services. Traffic Manager appears to be aimed much more at a geographic failover scenario, where you have multiple cloud services across different regions. This approach also has the disadvantage that my web servers won't be able to communicate with my databases on internal IP addresses, unlike scenario (1). What's the best approach here?
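    If the load-balanced endpoints can be pointed at a custom HTTP probe path (an assumption about the endpoint probe feature, not something stated in the question), a minimal health endpoint on each VM might look like the sketch below; returning anything other than 200 would take the VM out of rotation.

        # Tiny health-check endpoint a load-balancer probe could poll - a sketch.
        from http.server import BaseHTTPRequestHandler, HTTPServer

        def app_is_healthy():
            # Hypothetical check, e.g. touch the local site or a critical dependency.
            return True

        class Probe(BaseHTTPRequestHandler):
            def do_GET(self):
                code = 200 if self.path == "/healthcheck" and app_is_healthy() else 503
                self.send_response(code)
                self.end_headers()

        HTTPServer(("0.0.0.0", 8081), Probe).serve_forever()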

    Read the article

  • Why are snapshots considered as temporary backups not real backups?

    - by Samselvaprabu
    I am using VMware ESXi. Our team used to rely on snapshots for long-term backup. Then we faced issues like memory spillover and the server hanging. I started reading VMware knowledge base articles and elsewhere, and everywhere it is recommended not to keep snapshots for a long time; VMware itself advises keeping snapshots for a maximum of three days.

    But our team keeps asking for at least two permanent snapshots, kept until the VM is deleted (sometimes we may use a VM for a year):

    1. One snapshot of the fresh machine state, so that when we finish testing an application we can revert to a clean state and install another application. (If I did not allow that, I might often need to set the VM up again.)
    2. One snapshot to keep the VM in some particular state, for example when they have found an issue and want to preserve that state for a while, or when they have installed the prerequisites for an application and want the machine ready for testing.

    Logically, their needs seem fair. But if I allow this, I am permitting them to hold snapshots for a long time. We are not using our VMs as mail servers or database servers. Why does keeping snapshots for a long time have an adverse effect? Why are snapshots considered temporary backups rather than real backups?

    Read the article

  • What to do before connecting Ubuntu Server to the internet for the first time?

    - by CodeMonkey
    I just finished installing Ubuntu Server 12.10 from USB on an Asus Eee PC 1000H (to be used as a home server/sandbox). I installed this software during installation:

    1. OpenSSH server
    2. LAMP server
    3. Samba file server
    4. Virtual Machine host

    I won't use 2, 3 or 4 for a while, though. Can/should I turn these off somehow? I have turned on home directory encryption, security updates are installed automatically, and I have chosen a strong password for the single user. I have never plugged in the Internet cable so far, and before doing so I'd like to ask: what can/should I do or install to increase security before connecting to the Internet? A firewall? Fail2ban? Users/passwords? Encryption? Enabling/disabling functionality? Etc.

    I'm sorry if you get this question a lot. I've searched around for quite a while, but it still feels like I might be overlooking something important.

    Read the article

  • Registering a mail server and web server publicly with a free DNS service

    - by Bruno Vieira
    I'm trying to host our company's e-mail and website on our private server. I have already followed the Gentoo Virtual Mailhosting System with Postfix guide and my mail server is working (it delivers mail to local users; mail to external users currently lands in spam), and I know how to set up an Apache 2 server.

    What I really don't know is how to make them publicly reachable. I did some research and found that I should ask my ISP to change the reverse DNS to our company domain to keep my mail from being marked as spam; they are doing that. I also know I have to configure DNS. It seems my registrar already provides a DNS service, but I don't know how to configure the CNAME, A, MX and TXT entries (are "records" the right name?), or whether I must do any other configuration on my server.

    My server:

        Linux mail 3.2.21-gentoo #1 SMP

    My /etc/hosts:

        127.0.0.1 mail.example.com.br example example.com.br
        ::1       mail.example.com.br mail example.com.br

    My /etc/conf.d/hostname:

        hostname ="mail"

    What am I missing? If there is a guide on how to configure this, I would be really grateful. Thanks in advance for the help. Cheers.
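    A verification sketch that assumes the third-party dnspython package, not something from the original post: once the records are created at the registrar's DNS service, it checks from the outside that the A, MX and TXT entries, and the reverse (PTR) record the ISP sets, all line up. example.com.br stands in for the real domain, as in the question, and the public IP is a placeholder.

        # Check the published DNS records for the mail/web domain - a sketch.
        import dns.resolver, dns.reversename

        domain, mail_ip = "example.com.br", "203.0.113.10"     # hypothetical public IP

        for rtype in ("A", "MX", "TXT"):
            try:
                answers = dns.resolver.resolve(domain, rtype)
                print(rtype, [a.to_text() for a in answers])
            except Exception as err:
                print(rtype, "lookup failed:", type(err).__name__)

        ptr = dns.resolver.resolve(dns.reversename.from_address(mail_ip), "PTR")
        print("PTR", [a.to_text() for a in ptr])   # should point back at mail.example.com.br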

    Read the article

  • Request bursting during web application load tests

    - by MaseBase
    I'm migrating our web and database hosting to a new environment on all-new machines. I recently performed a load test using WAPT to generate load from multiple distributed clients. The server has plenty of room to handle the traffic, but I'm seeing an odd pattern of incoming traffic during the load tests. Here is the gist of our setup:

    - Firewall server running MS Forefront TMG 2010 on a Windows Server 2008 machine
    - Request routing done by IIS Application Request Routing on the firewall machine
    - Web server is a Hyper-V VM on the database server (which is the host OS)
    - These machines are hefty, with dual CPUs of six cores each (12 cores total)
    - Web server running IIS 7.5
    - Web applications built in ASP.NET 2.0, with one ISAPI filter (URL Rewrite) in front

    What I'm seeing during the load tests is that the requests all come through in bursts. Even though I have 7 distributed clients sending traffic, the requests arrive in groups of about 300-500 at a time. The performance monitor shows nearly all of the counters moving through this pattern: when a burst of requests comes in, requests/sec jumps to 70, queued requests jump to 500, current requests jump up, CPU jumps up, everything. Then, once that group of requests has been handled, there is a lull of nearly 10 seconds where almost nothing happens: 0-5 requests/sec, 0 queued requests, minimal CPU usage. After 10 seconds of inactivity, another burst comes through, spiking all of the counters once again.

    What I can't figure out is why the requests come through in bursts when I know the generated load is not sent that way, especially considering that the various load-generating clients send traffic at different intervals with random think times between requests. Is there something in the layers between Hyper-V, or perhaps in the hardware, that might cause this coalescing of requests? Here is what I'm looking at: the highlighted metric is Requests/sec, but the other critical counters move with it, including Requests Queued (which I'd obviously like to keep as close to 0 as possible). Any ideas on this?

    Read the article

  • ssh timeout issue connecting to an EC2 instance on OS X

    - by mamusr
    I am new to AWS and not a networking expert, but I am curious to learn more. I created a VPC with a public subnet only, then created an EC2 instance from an Ubuntu 14.04 64-bit PV AMI (ami-e84d8480), generating the key pair needed to connect to it through ssh. I followed Amazon's instructions for connecting to an EC2 instance via ssh, but it did not work. Here is my attempted input and the debug log, running on OS X 10.9.4:

        user$ ssh -vvv -i key.pem [email protected]
        OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: /etc/ssh_config line 102: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to xxx.xxx.xxx.xxx [xxx.xxx.xxx.xxx] port 22.
        debug1: connect to address xxx.xxx.xxx.xxx port 22: Operation timed out
        ssh: connect to host xxx.xxx.xxx.xxx port 22: Operation timed out

    To attempt to resolve the issue I have:

    1. Enabled the SSH port.
    2. Tried usernames other than ubuntu, like ec2-user and root.
    3. Initially set an inbound ssh rule in the security group allowing only my IP address; when that did not work, I changed it to allow any IP to connect.

    None of those actions fixed the problem. Here are my guesses as to what I might be missing:

    - My /etc/ssh_config file may be preventing the connection.
    - I may have missed an important networking detail when setting up the VPC.
    - I do not have a public IP address specified for the instance; I am connecting through the private IP address.

    My questions for the community: am I going about this the wrong way by connecting to the instance through the private IP address? If so, do I need to assign a public IP address for it to connect, or is there some other method?
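    A plain TCP check against port 22 from the same Mac, added as a sketch and not from the original post. The address is a placeholder; it would need to be an address that is actually reachable from outside the VPC (an Elastic or public IP, not the 10.x private one), which is the likeliest reason for the timeout above.

        # Port 22 reachability check - a diagnostic sketch.
        import socket

        TARGET = "203.0.113.25"          # placeholder for the instance's public IP

        try:
            with socket.create_connection((TARGET, 22), timeout=10) as s:
                print("port 22 open, banner:", s.recv(64).decode(errors="replace").strip())
        except OSError as err:
            print("port 22 unreachable:", err)   # same symptom as the ssh timeout above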

    Read the article

  • Unable to access stackexchange sites from this system

    - by Sandeepan Nath
    Earlier, I was not able to access most of the Stack Exchange sites, such as Stack Overflow and Programmers.SE, on my home Windows XP system. I was able to access only a few, like http://meta.stackexchange.com, and not even http://www.meta.stackexchange.com (note the www). I tried many other sites like http://www.stackoverflow.com and http://area51.stackexchange.com/ but got "page not found" errors in every browser. Even pinging from the terminal reported the destination host as unreachable. I have not checked recently, but maybe all SE sites are unreachable now.

    I was clueless about what the issue could be. I thought it might be a firewall issue, so I stopped AVG's firewall, then completely uninstalled it and even turned off Windows Firewall. The sites were still unreachable, even after a fresh installation of Windows 7. Then I noticed a "too many requests" notice from Google, at this page: http://www.google.co.in/sorry/?continue=http://www.google.co.in/#

    I don't know why this appeared, but I guess too many requests might somehow have been sent to these sites and they blocked me. In that case, though, SE would be smart enough to show a captcha the way Google does. So how do I confirm the problem and fix it? Similar questions don't look solved yet: "Unable to access certain websites" and "Unable to Access Certain Websites". I have lately started actively participating in lots of SE sites; new questions keep popping into my mind and I am not able to ask them. Please help! Thanks.

    Read the article
