Search Results

Search found 51847 results on 2074 pages for 'web browser'.


  • Serve web application error messages from HTTP server

    - by licorna
    I have nginx as an HTTP server with Tomcat as a backend (using proxy_pass). It works great, but I want to define my own error pages (404, 500, etc.) and have them served by nginx and not Tomcat. For example, I have the following resource: https://domain.com/resource which doesn't exist. If I [GET] that URL, I get a Not Found message from Tomcat, not from nginx. What I want is that every time Tomcat responds with a 404 (or any other error status), nginx itself sends a message to the user: some HTML file accessible by nginx. The way I have my nginx server configured is very simple, just: location / { proxy_pass http://localhost:8080/<webapp-name>/; } And I've configured port 8080, which is Tomcat, as not accessible from outside this machine. I don't think that using different location directives in the nginx configuration will work, because there are some resources that depend on the URL: https://domain.com/customer/<non-existent-customer-name>/ [GET] will always return 404 (or some other error status), while: https://domain.com/customer/<existent-customer>/ [GET] will return anything other than 404 (the customer exists). Is there any way of serving Tomcat (application server) error messages with nginx (HTTP server)? That is, to check the status returned by the proxy_pass backend and act upon it?
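
    A minimal nginx sketch of the usual approach (the /var/www/errors paths are assumptions): proxy_intercept_errors tells nginx to replace any backend response with a status of 300 or above by its own error_page, so Tomcat's body never reaches the client.

        location / {
            proxy_pass http://localhost:8080/<webapp-name>/;

            # serve nginx's own pages whenever the backend returns an error status
            proxy_intercept_errors on;
            error_page 404 /errors/404.html;
            error_page 500 502 503 504 /errors/50x.html;
        }

        # static error pages, never proxied; /var/www/errors is a hypothetical path
        location ^~ /errors/ {
            internal;
            root /var/www;
        }

    One caveat worth hedging: with proxy_intercept_errors on, every 404 from the webapp is replaced, including ones the application may have wanted to render itself.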

    Read the article

  • How to add a second domain to an EC2 instance with Elastic & Route 53

    - by memeLab
    I've got my domain site.com running on EC2, using an Elastic IP and Route 53. I want to park site.net so that it resolves to the same site. I've looked up Migrating an Existing Domain to Route 53 in the docs, but can't find any mention of how to add a second domain! I figured I'd have to create an A record, but when I do so, the record is created as site.net.site.com, which is not quite what I'm after! I've also done searches for mixes of 'route 53', 'park domain', 'addon domain', 'second domain', but no dice... My prior experience is with cPanel and Plesk, so I'm a bit lost! Any pointers would be appreciated! TIA
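
    A record named site.net ending up as site.net.site.com usually means it was added inside the site.com hosted zone, whose name gets appended. A hedged sketch of the usual fix, using the AWS CLI with a placeholder zone ID and IP: give site.net its own hosted zone, point its apex A record at the Elastic IP, and delegate the domain at the registrar to the name servers Route 53 assigns to that zone.

        # create a separate hosted zone for the parked domain
        aws route53 create-hosted-zone --name site.net --caller-reference park-site-net-1

        # point the zone apex at the same Elastic IP (ZEXAMPLE and 203.0.113.10 are placeholders)
        aws route53 change-resource-record-sets --hosted-zone-id ZEXAMPLE --change-batch '{
          "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
              "Name": "site.net.",
              "Type": "A",
              "TTL": 300,
              "ResourceRecords": [{ "Value": "203.0.113.10" }]
            }
          }]
        }'

    A www.site.net record (another A record, or a CNAME to site.net) covers the www form, and the web server on the instance still has to answer for the site.net host name.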

    Read the article

  • Session cache and tmp folder error on cPanel CentOS

    - by Danialzo
    One of my clients has come across multiple breakdowns in their websites with the following errors:

        PHP Code:
        Warning: session_start() [function.session-start]: open(/tmp/sess_1d6616afe1b8a0d91a8d9ec29254b453, O_RDWR) failed: No space left on device (28) in /home/***/public_html/system/library/session.php on line 11
        Warning: session_start() [function.session-start]: Cannot send session cookie - headers already sent by (output started at /home/***/public_html/index.php:104) in /home/***/public_html/system/library/session.php on line 11
        Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at /home/***/public_html/index.php:104) in /home/***/public_html/system/library/session.php on line 11
        Warning: Cannot modify header information - headers already sent by (output started at /home/***/public_html/index.php:104) in /home/***/public_html/index.php on line 177
        Warning: Cannot modify header information - headers already sent by (output started at /home/***/public_html/index.php:104) in /home/***/public_html/system/library/currency.php on line 45
        Notice: Error: Can't create/write to file '/tmp/#sql_4c5_0.MYI' (Errcode: 28) Error No: 1

    They are experiencing this problem while:
    - Nothing has been recently changed on the server
    - tmp and other folders are occupied with not more than 10% of their total space
    - The error comes and goes

    I would really appreciate it if anyone could guide me through. Thanks in advance
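
    Errno 28 comes from the filesystem backing /tmp, not from the folders the sites live in, so the usual first checks are a hedged sketch like this (run as root):

        df -h /tmp                # block usage of the filesystem mounted at /tmp
        df -i /tmp                # inode usage; errno 28 is also raised when inodes run out
        ls /tmp/sess_* | wc -l    # how many orphaned PHP session files have piled up

    On cPanel boxes /tmp is often a small loopback filesystem created by /scripts/securetmp (frequently well under 1 GB), which fills up and empties again as session files and MySQL temp tables come and go; that would match both the intermittent errors and the main partitions showing only 10% usage.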

    Read the article

  • Remove Windows 7's limitation on the number of concurrent TCP connections (HTTP web requests)

    - by Ghita
    I have an application that tries to open as many HTTP requests as possible (in order to stress test a proxy implementation). It seems to me that Win7 (SP1) may have a limitation on the number of concurrently opened connections (it may be the so-called half-open state limit, if I'm not wrong). Is there something I can do on the client? I also test using a Vista PC that acts as a proxy server. It would be great if I could configure it to sustain at least 50 new connections initiated per second on the client side and many more on the server. I made the modification according to this TechNet article by setting TcpNumConnections = 150, but it doesn't make a difference. I still only see about 20 TCP sockets associated with my HTTP client when using TcpView.

    Read the article

  • Multi-domain server with dedicated SSLs and HA

    - by user3692800
    I am hosting a server with 150 domain names (websites), and each of the SSL certificates requires a dedicated IP address. So: a Windows 2008 server with 150 IP addresses and 150 websites. I need a high-availability solution. I am thinking of setting up AWS, but ELB will not be a solution, and the maximum number of IPs I can get per instance is 12 addresses. So what can I do to have all 150 sites hosted on one instance and be HA with an instance in a different availability zone?

    Read the article

  • AWS RDS MySQL with Beanstalk Hibernate app: character encoding issue

    - by TeraTon
    I'm running a webapp against Amazon RDS, with Tomcat 7 and Spring, which uses Hibernate as the persistence layer. The application and UTF-8 encoding work properly on localhost, but for some reason when I deploy to Amazon, the UTF-8 encoding breaks. I use MySQL 5.5.27 on Amazon RDS, and the table that we wish to update has its collation set to utf8 - utf8_unicode_ci. And in Hibernate I have set:

        <prop key="hibernate.connection.charSet">UTF-8</prop>

    UTF-8 characters get replaced by ??? and this is of course especially bad for passwords and usernames + email, as it basically kills them. Has anyone else encountered character encoding breaking when deploying to Amazon?
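
    Two things commonly differ between localhost and RDS here, so a hedged sketch (the property name and flags are standard Connector/J, the endpoint is a placeholder): force UTF-8 on the JDBC connection itself, and check the server-side defaults, since RDS MySQL parameter groups ship with latin1 as character_set_server.

        <!-- JDBC URL with the Connector/J flags that force UTF-8 on the wire -->
        <prop key="hibernate.connection.url">jdbc:mysql://your-db.xxxxxxxx.us-east-1.rds.amazonaws.com:3306/yourdb?useUnicode=true&amp;characterEncoding=UTF-8</prop>

    On the server side, SHOW VARIABLES LIKE 'character_set_%' shows what RDS is actually using; character_set_server, character_set_client and character_set_connection can be set to utf8 in a custom DB parameter group attached to the instance.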

    Read the article

  • Troubleshooting web page load issues

    - by Richard Testani
    I'm having an odd issue where a particular page doesn't finish loading... all the time. Intermittently, this page doesn't finish loading; in particular, it seems maybe the JS file isn't downloaded or doesn't finish loading. The page in question: Marchelos Law. I am not sure how to troubleshoot this issue, since refreshing fixes it. I see it mostly in IE 7 & 8, rarely in Safari or Firefox. If you view the page in question and refresh it, you should see the issue. Any ideas?

    Read the article

  • Mitigating the 'firesheep' attack at the network layer?

    - by pobk
    What are sysadmins' thoughts on mitigating the 'Firesheep' attack for servers they manage? Firesheep is a new Firefox extension that allows anyone who installs it to sidejack any session it can discover. It does its discovery by sniffing packets on the network and looking for session cookies from known sites, and it is relatively easy to write plugins for the extension to listen for cookies from additional sites. From a systems/network perspective, we've discussed the possibility of encrypting the whole site, but this introduces additional load on servers and screws with site indexing, assets and general performance. One option we've investigated is to use our firewalls to do SSL offload, but as I mentioned earlier, this would require the whole site to be encrypted. What are the general thoughts on protecting against this attack vector? I've asked a similar question on StackOverflow; however, it would be interesting to see what the systems engineers think.
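
    Since the attack only works while the session cookie travels in cleartext, the usual mitigation is exactly the full-site SSL being discussed, plus marking the cookies themselves Secure so they are never sent over HTTP. A minimal nginx sketch of the redirect half (server names, certificate paths and the backend are placeholders):

        server {
            listen 80;
            server_name example.com;
            # send every plain-HTTP request to the encrypted site
            return 301 https://example.com$request_uri;
        }

        server {
            listen 443 ssl;
            server_name example.com;
            ssl_certificate     /etc/ssl/example.com.crt;   # placeholder paths
            ssl_certificate_key /etc/ssl/example.com.key;

            # ask browsers to stay on HTTPS on future visits
            add_header Strict-Transport-Security "max-age=31536000";

            location / {
                proxy_pass http://localhost:8080;
            }
        }

    The Secure flag on the session cookie itself has to be set by the application or framework; the redirect alone does not stop a cookie that is still marked as sendable over HTTP.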

    Read the article

  • VPC SSH port forward into private subnet

    - by CP510
    Ok, so I've been racking my brain for DAYS on this dilemma. I have a VPC set up with a public subnet and a private subnet. The NAT is in place, of course. I can SSH into an instance in the public subnet, as well as the NAT. I can even SSH to the private instance from the public instance. I changed the SSHD configuration on the private instance to accept both port 22 and an arbitrary port number, 1300. That works fine. But I need to set it up so that I can connect to the private instance directly using the 1300 port number, i.e.

        ssh -i keyfile.pem [email protected] -p 1300

    and 1.2.3.4 should route it to the internal server 10.10.10.10. Now I heard iptables is the job for this, so I went ahead and researched and played around with some routing with that. These are the rules I have set up on the public instance (not the NAT). I didn't want to use the NAT for this since AWS apparently pre-configures the NAT instances when you set them up, and I heard using iptables can mess that up.

        *filter
        :INPUT ACCEPT [129:12186]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [84:10472]
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i eth0 -p tcp -m state --state NEW -m tcp --dport 1300 -j ACCEPT
        -A INPUT -d 10.10.10.10/32 -p tcp -m limit --limit 5/min -j LOG --log-prefix "SSH Dropped: "
        -A FORWARD -d 10.10.10.10/32 -p tcp -m tcp --dport 1300 -j ACCEPT
        -A OUTPUT -o lo -j ACCEPT
        COMMIT
        # Completed on Wed Apr 17 04:19:29 2013
        # Generated by iptables-save v1.4.12 on Wed Apr 17 04:19:29 2013
        *nat
        :PREROUTING ACCEPT [2:104]
        :INPUT ACCEPT [2:104]
        :OUTPUT ACCEPT [6:681]
        :POSTROUTING ACCEPT [7:745]
        -A PREROUTING -i eth0 -p tcp -m tcp --dport 1300 -j DNAT --to-destination 10.10.10.10:1300
        -A POSTROUTING -p tcp -m tcp --dport 1300 -j MASQUERADE
        COMMIT

    So when I try this from home, it just times out. No connection refused messages or anything. And I can't seem to find any log messages about dropped packets. My security groups and ACL settings allow communications on these ports in both directions in both subnets and on the NAT. I'm at a loss. What am I doing wrong?
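
    The most common missing piece with an iptables DNAT like this is that the kernel never forwards the packet at all: packet forwarding is off by default on stock Ubuntu/Amazon Linux AMIs, which matches a plain timeout with nothing logged. A hedged sketch (the instance ID is a placeholder); the source/destination check is the other EC2-specific setting usually disabled on a box that forwards traffic:

        # on the public instance: enable kernel packet forwarding
        sysctl -w net.ipv4.ip_forward=1
        echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf   # persist across reboots

        # from a machine with the AWS CLI configured: let the instance handle
        # traffic that is not addressed to itself (i-12345678 is a placeholder)
        aws ec2 modify-instance-attribute --instance-id i-12345678 --no-source-dest-check

    The private instance's security group also has to accept port 1300 from the public instance's private address, since the MASQUERADE rule makes the connection appear to come from there rather than from home.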

    Read the article

  • Fix Fatal Error Condition showing system path

    - by JMC
    I've noticed there are a large number of servers running Magento Commerce that will return a fatal error showing the system path:

        Fatal error: Uncaught exception 'Exception' with message 'File '/usr/local/www/magento/data1702/media/css' does not exists.' in /usr/local/www/magento/data1702/lib/Varien/File/Transfer/Adapter/Http.php:96
        Stack trace:
        #0 /usr/local/www/magento/data1702/get.php(205): Varien_File_Transfer_Adapter_Http->send('/usr/local/www/...')
        #1 /usr/local/www/magento/data1702/get.php(165): sendFile('/usr/local/www/...')
        #2 {main}
        thrown in /usr/local/www/magento/data1702/lib/Varien/File/Transfer/Adapter/Http.php on line 96

    Magento as an application is generally good about suppressing error messages. How can a Linux server running Apache be configured to avoid returning this error message, since the app has problems suppressing it?
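
    Fatal errors bypass an application's own error handler, so the usual belt-and-braces fix is to stop PHP from printing errors to the browser at the server level. A hedged php.ini sketch (the log path is a placeholder; the same flags can be forced per-vhost with php_admin_flag under mod_php):

        ; /etc/php.ini (or the relevant conf.d file, depending on the distribution)
        display_errors = Off
        display_startup_errors = Off
        log_errors = On
        error_log = /var/log/php_errors.log   ; placeholder path

    With display_errors off the visitor gets a blank or generic page instead of the path, and the full message still lands in the log for debugging.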

    Read the article

  • How should I interpret site analytics with 11 pageviews in a 3-second visit?

    - by Juank
    I'm using Google Analytics, and recently I've noticed some weird trends going on. I have a lot of visits that last mere seconds but record several pageviews... more than a normal human can see in that range of time. A specific case: the only visitor from Ireland I've had until now recorded 11 pageviews in a 3-second visit. Are these crawlers? Shouldn't Google Analytics filter those out?

    Read the article

  • Chef command to create a new EC2 instance with a second EBS volume attached and mounted instead of the default ephemeral volume?

    - by runamok
    We currently use this command to create a new EC2 instance with Chef:

        knife ec2 server create --node-name=prod-apache-1 --availability-zone us-east-1c --image ami-3d4ff254 --distro ubuntu12.04-gems --groups "default" --ssh-key foo --identity-file ~/.ssh/id_rsa --ssh-user ubuntu --flavor m1.small

    After this command we then run further Chef commands to finish provisioning the server. I was wondering if it would be possible, while first setting up the instance, to have a 100 GB volume created and mounted at /mnt, and to have the ephemeral storage mounted at /tmp or /mnt-ephemeral instead. If not, what further commands in Chef would you advise running? I know how to do this via the AWS console and can probably figure out how to do it via the EC2 command line tools, but I am new to Chef and a bit overwhelmed.
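
    Depending on the knife-ec2 version, the create call may accept an --ebs-size flag for sizing the EBS storage (worth checking knife ec2 server create --help); either way, the formatting and mounting is usually left to a recipe or a post-create step. A hedged shell sketch of that step, assuming the new EBS volume shows up as /dev/xvdf and the ephemeral disk as /dev/xvdb (device names vary by AMI and instance type):

        # the ephemeral disk is often auto-mounted at /mnt; move it out of the way first
        umount /mnt 2>/dev/null || true
        mkdir -p /mnt-ephemeral
        mount /dev/xvdb /mnt-ephemeral       # ephemeral device name assumed

        # format and mount the 100 GB EBS volume at /mnt
        mkfs -t ext4 /dev/xvdf               # EBS device name assumed
        echo '/dev/xvdf /mnt ext4 defaults,nofail 0 2' >> /etc/fstab
        mount /mnt

    The same commands translate directly into a small Chef recipe (execute and mount resources) if you would rather keep the step in the run list than in a shell script.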

    Read the article

  • Does Parallels Plesk Panel 11 have a free built-in firewall for the Debian 6 Linux OS? [closed]

    - by Sampath
    Possible Duplicate: How strong is Parallels Plesk 11's inbuilt firewall for Linux OS? I am new to Linux dedicated server hosting and Plesk Panel 11, and I am looking for a built-in firewall module. Does it come free with Plesk 11, do I need to pay, or does Plesk 11 not support a firewall? I looked at the demo http://www.parallels.com/products/plesk/demos/, but I couldn't find any information about a firewall in Plesk 11.

    Read the article

  • Get a list of running EC2 instances programmatically

    - by user113981
    Hi, I have started with AWS and found out that we can get a list of running servers with the AWS PHP SDK. Is there any other way to get the list of all EC2 instances? After getting the list, I want to sync data from one main instance to all the other instances. Something like a button click should be able to trigger the operation. Are rsync and incron the only options, or can it also be done with the AWS PHP SDK? Please provide some tutorial links.
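
    The underlying API call is DescribeInstances in every SDK; a hedged sketch with the AWS CLI (credentials assumed to be configured, output fields chosen to feed an rsync loop):

        aws ec2 describe-instances \
            --filters "Name=instance-state-name,Values=running" \
            --query "Reservations[].Instances[].[InstanceId,PublicDnsName]" \
            --output text

        # push a directory from the main instance to one of them
        # (key path, user and directory are placeholders)
        rsync -az -e "ssh -i /path/to/key.pem" /var/www/data/ ubuntu@<public-dns>:/var/www/data/

    The PHP SDK exposes the same call as describeInstances() on its EC2 client, so the button-click version can stay in PHP; rsync (or incron for change-driven syncing) only covers the copying itself.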

    Read the article

  • Cannot connect to MySQL on RDS (Amazon Web Services) from my laptop

    - by Bruno Reis
    I'm having some trouble connecting to a MySQL 5.1 server on an RDS instance on AWS from my laptop. The detailed description of the problem is here: https://forums.aws.amazon.com/thread.jspa?messageID=323397 In short: I have 2 MySQL servers, both with the same db configuration and firewall (security group) configuration. One of them works fine: I can connect to it from my EC2 instances (ie, from inside the AWS cloud) and from my laptop. The other one doesn't: I can connect from my EC2 instances but not from my laptop. The symptom: a connection attempt from my laptop just hangs, and then times out, as if there was a firewall blocking me (ie, silently dropping my SYN packets). I must say that everything has been working fine for a very long time, and this problem began suddenly, 3 days ago, without any modifications to DB parameters or the security groups. My current analysis of the situation:
    - The firewall (ie, security group) cannot be the problem: both MySQL servers share the same firewall configuration -- I can connect to one of them but not to the other. Later on, I even added a rule to allow inbound connections from 0.0.0.0/0 (ie, I turned off the firewall), and nothing. Oh, I also created a new, fresh security group and changed this instance's SG to the new one (to which I first added my IP address, and then 0.0.0.0/0), but still nothing.
    - The credentials cannot be the problem: I use the same from my laptop and from my EC2 instances -- and the user (which is what Amazon calls master user), in the database, has a host of '%'.
    - MySQL is not blocking my IP due to, say, too many failed connection attempts: I've run FLUSH HOSTS on the database, and also I tried to connect using many different source IP addresses, even from all around the world through a VPN proxy service.
    What could I be missing? I'm asking here because it's been about 36 hours since I posted on the AWS forums but got no answer at all over there... someone here might have a solution! Any input is really appreciated, I'm out of ideas. Thanks!
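
    Given that EC2-internal connections still work and the SYNs from outside just vanish, a couple of hedged checks from the laptop narrow down whether the endpoint is even reachable from the public internet (the endpoint name is a placeholder):

        # does the RDS endpoint resolve to a public address or to a 10.x / 172.16-31.x private one?
        dig +short mydb.xxxxxxxx.us-east-1.rds.amazonaws.com

        # is anything answering on 3306 from outside AWS at all?
        nc -zv -w 5 mydb.xxxxxxxx.us-east-1.rds.amazonaws.com 3306

    If the name resolves to a private address from outside, the instance itself is no longer publicly reachable (for RDS inside a VPC that is governed by the instance's public-accessibility setting rather than by the security group), which would explain why no security-group change makes a difference.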

    Read the article

  • How to change the computer name on a server configured by Puppet

    - by David Sulpy
    I am new to Puppet and I'm trying to get Puppet to configure my EC2 instances after they're started from a CloudFormation template in AWS. The problem is that all the nodes started from the CloudFormation template have the same name (the name from the AMI that the new nodes derive from). I would love to find a way to have Puppet rename the nodes when they start up (although, as far as I know, a computer name change requires a reboot, which is a separate issue...). If you can point me to some documentation that can help me figure this out, or if you have any ideas, that would be great. My ultimate goal is to have each EC2 instance start with a unique name so that I can use New Relic server monitoring to report the different servers.

    Read the article

  • How can I match an AWS account number to a key and secret

    - by iwein
    My client gave me a key and a secret to manage his EC2 things, but to make one of my AMIs available to run I have to fill in the account number. Is it possible to deduce the account number from the key and the secret? Obviously I also asked the client for this information, but since it's the weekend and I'm not fond of waiting, I wanted to see if I could figure it out myself. Have you done this before?
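
    The account number is not printed inside the key pair itself, but the credentials can be used to ask AWS who they belong to. A hedged sketch: the first call is the modern one-liner, the second is an older workaround that reads the account number off the OwnerId of a resource the account owns.

        # current CLI: returns Account, UserId and Arn for the credentials in use
        aws sts get-caller-identity

        # older workaround: the OwnerId on your own security groups is the account number
        aws ec2 describe-security-groups --query "SecurityGroups[0].OwnerId" --output text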

    Read the article

  • Server Hosting + AWS

    - by ledy
    Since my dedicated servers are hosted at a "normal" hosting service, I wonder if there is a really cheap way to extend the server farm with AWS instances. E.g. it seems to be an efficient and flexible solution for data storage and resources for occasional data processing, too. However, it might be very inefficient to mix two data centres and transfer data from the current webhoster to Amazon and vice versa. In my case, the traffic for this continuous data exchange seems to be expensive, and the delay in moving the data back to the hoster leads to lag or delays. What are best practices for mixing non-AWS and AWS systems? E.g.: how to move the hoster's data to AWS as log file storage to run Urchin analysis and/or port the log file data into a bigtable for exhaustive analysis there. After working with the data: how to bring it back to the hoster and use the data with the webservers there? I am not going to move the whole server farm to Amazon, only "separate" parts or tasks, if the transfer/exchange does not lead to increased cost.
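
    For the log-shipping part specifically, the usual pattern is to push compressed logs to S3 on a schedule and pull results back the same way, so only the deltas cross between the two data centres. A hedged sketch (bucket name and paths are placeholders):

        # nightly cron on the dedicated server: ship rotated logs to S3
        aws s3 sync /var/log/apache2/ s3://my-log-bucket/webserver1/ --exclude "*" --include "*.gz"

        # later, pull processed results back from AWS to the hoster
        aws s3 sync s3://my-log-bucket/reports/ /srv/reports/

    Data transfer into AWS is free; what costs money is the data coming back out, so the more the analysis shrinks the data before the return trip, the cheaper the mixed setup stays.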

    Read the article

  • Is there anything like Heroku for PHP and/or .NET?

    - by Wayne M
    In my area PHP is very widespread, so is .NET. Ruby not so much; most places have never heard of it. For some personal things I am "forced" to choose Rails because I want to take advantage of Heroku - the ability to deploy and scale on the cloud very easily is the main reason. Also, they offer a small FREE plan, with no ads or strings attached, that I can use for demo sites or, in this case, for my business' static page; as a totally bootstrapped startup I have maybe $50 or so in initial capital and cannot afford to pay monthly fees while I'm getting started. Are there any similar offerings for other languages? Specifically, I really like the small, 5MB site for free that Heroku offers - is there anything like that for PHP and/or .NET? I'm not even that concerned about the "cloud" part, but that would be a nice bonus. If there is, I might be able to kill two birds with one stone and pick up a useful skill as I'm doing my own thing instead of using something that nobody else knows or cares about. I should add I'm specifically interested in something that offers a free plan. As I said, Heroku has a 5mb plan that you can have as many as you want for free; I have yet to find anything similar for any other platform (most of the "free" sites require you to have ugly banners on your page, or don't allow you to use your own domain name), and to be honest I'm not too thrilled about using Ruby on Rails for everything simply to take advantage of this. I'm asking this here because I already asked it on StackOverflow and someone suggested it would be better suited here.

    Read the article

  • AWS VPC - why have a private subnet at all?

    - by jkim
    In Amazon VPC, the VPC creation wizard allows one to create a single "public subnet" or have the wizard create a "public subnet" and a "private subnet". Initially, the public and private subnet option seemed good for security reasons, allowing webservers to be put in the public subnet and database servers to go in the private subnet. But I've since learned that EC2 instances in the public subnet are not reachable from the Internet unless you associate an Amazon ElasticIP with the EC2 instance. So it seems with just a single public subnet configuration, one could just opt to not associate an ElasticIP with the database servers and end up with the same sort of security. Can anyone explain the advantages of a public + private subnet configuration? Are the advantages of this config more to do with auto-scaling, or is it actually less secure to have a single public subnet?

    Read the article

  • I am getting a 400 Bad Request error when using Nginx and PHP-FPM, why?

    - by Bob
    I am trying to run a website (that requires PHP - it technically doesn't require MySQL at this time, but it may sometime in the near future as I continue developing it, so I went ahead and installed that as well) using nginx 1.2.4 and PHP-FPM 5.3.3 on Ubuntu 12.04.1 LTS. As far as I know, I haven't done anything wrong, but clearly something is not quite right: I seem to be getting a 400 Bad Request error whenever I try to browse to my website. I've been mostly following one guide, and I've done more or less everything it recommends, except that I did not set up PHP-FPM to use a Unix socket, and I used service as opposed to /etc/init.d/ when starting/stopping nginx, PHP, and MySQL. Anyways, here are my relevant configuration files (I have only censored personal/sensitive details, like my domain name, which contains my real name):

    /etc/nginx/nginx.conf

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##

            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 15;
            types_hash_max_size 2048;
            # server_tokens off;

            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;

            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##

            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##

            gzip on;
            gzip_disable "msie6";

            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # nginx-naxsi config
            ##
            # Uncomment it if you installed nginx-naxsi
            ##
            #include /etc/nginx/naxsi_core.rules;

            ##
            # nginx-passenger config
            ##
            # Uncomment it if you installed nginx-passenger
            ##
            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;

            ##
            # Virtual Host Configs
            ##

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    /etc/nginx/sites-enabled/subdomain.mydomain.net

        server {
            listen 80;       # listen for IPv4
            listen [::]:80;  # listen for IPv6

            server_name www.subdomain.mydomain.net subdomain.mydomain.net;

            access_log /srv/www/subdomain.mydomain.net/logs/access.log;
            error_log /srv/www/subdomain.mydomain.net/logs/error.log;

            location / {
                root /srv/www/subdomain.mydomain.net/public;
                index index.php;
            }

            location ~ \.php$ {
                try_files $uri =400;
                include fastcgi_params;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /srv/www/subdomain.mydomain.net/public$fastcgi_script_name;
            }
        }

    All the directories listed in the configuration files above are correct on my server (to the extent of my knowledge). I have not included /etc/php5/fpm/pool.d/www.conf or /etc/php5/fpm/php.ini in this post as they're rather long, but I have posted them on Pastebin: http://pastebin.com/ensErJD8 and http://pastebin.com/T23dt7vM, respectively. The only thing I've changed in either of the two files was in php.ini, where I set expose_php to off so as to hide the .php file extension from users. What can I do to resolve my issue? Please let me know if I need to supply any additional details.
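
    One hedged guess at the cause, based on the vhost above: the root directive only exists inside location /, so the location ~ \.php$ block has no root of its own, and its try_files $uri =400 looks for the script under nginx's compile-time default root, fails, and returns exactly this 400. A sketch of the same vhost with root and index hoisted to the server level (and the fallback changed to the more conventional 404):

        server {
            listen 80;
            listen [::]:80;
            server_name www.subdomain.mydomain.net subdomain.mydomain.net;

            access_log /srv/www/subdomain.mydomain.net/logs/access.log;
            error_log /srv/www/subdomain.mydomain.net/logs/error.log;

            # one document root for every location block
            root /srv/www/subdomain.mydomain.net/public;
            index index.php;

            location / {
                try_files $uri $uri/ /index.php;
            }

            location ~ \.php$ {
                try_files $uri =404;
                include fastcgi_params;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                # $document_root now points at the root declared above
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }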

    Read the article

  • How to move mail accounts when migrating webhosting

    - by pkswatch
    I am migrating my website abc.com from one webhosting company to another in a shared hosting environment. Both have cPanel. And the second hosting account, the one I am preparing to move to, is my multi-domain hosting account with 3 domains already in it. The problem is, I have many email accounts associated with my website abc.com, which are accessed using webmail. So if I move it to the other host, will I lose all those accounts and their emails? If yes, then how should I synchronise the email accounts so that all the accounts and the contained emails remain intact? I saw several sync tools, like IMAP Sync, etc. But these require two hosts while synchronizing, and as you see, I have just one domain name to be synchronized over 2 servers. PS, I do not have any SSH access on either of them, and I have made a complete backup of all files using the backup wizard in cPanel.
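
    IMAP-based tools don't actually need two different domain names, just two hosts they can reach, so the usual trick is to point them at the old and new servers by IP (or by the hosts' own server hostnames) while DNS still resolves abc.com to the old box. A hedged per-mailbox sketch with imapsync (IPs and passwords are placeholders):

        imapsync --host1 203.0.113.10  --user1 user@abc.com --password1 'old-pass' \
                 --host2 198.51.100.20 --user2 user@abc.com --password2 'new-pass'

    It runs from any machine with Perl installed (no SSH on either host needed), the accounts have to be created on the new cPanel first, and it has to be repeated per mailbox; a final pass just before switching the DNS picks up mail that arrived during the migration.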

    Read the article

  • How to set up Apache multi-site with multi-domain on EC2

    - by Esh
    Say I have two document roots, domain1/ and domain2/. I know how to access those two roots from my own computer if they are hosted on the same computer. My question is: if I want to do the same thing on my EC2 server, how should I configure my Elastic IPs for those two roots? I know that by default the Elastic IP will only associate with the root with the name localhost (127.0.0.1). Could anyone give me a detailed answer? An example would help, thanks!
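
    A single Elastic IP is enough for any number of plain-HTTP sites: both domains' DNS A records point at that one IP, and Apache picks the site by the Host header using name-based virtual hosts. A hedged sketch in Apache 2.2 syntax (domain names and paths are placeholders):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName domain1.com
            ServerAlias www.domain1.com
            DocumentRoot /var/www/domain1
        </VirtualHost>

        <VirtualHost *:80>
            ServerName domain2.com
            ServerAlias www.domain2.com
            DocumentRoot /var/www/domain2
        </VirtualHost>

    The Elastic IP itself is only associated with the instance, not with a document root; the mapping from name to directory lives entirely in the vhost config, and the EC2 security group just needs port 80 open.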

    Read the article

  • AWS Free Usage Tier + Cloudflare... possible?

    - by crashintoty
    If I throw my MySQL/PHP app up on an Amazon EC2 instance (using their AWS Free Usage Tier program) and couple it with CloudFlare (the free plan, of course), roughly how many daily visitors can I comfortably handle before performance starts to suffer? Just looking for a rough estimate or educated guess; I understand this setup might be less than ideal, but I'm still very curious nonetheless. Thanks in advance

    Read the article
