Search Results

Search found 11453 results on 459 pages for 'apache axis'.


  • SSL 3.0 warning in Chrome on Ubuntu 10.04 LTS

    - by Leopd
    I'm running Apache 2 with SSL on Ubuntu 10.04 LTS. Chrome gives me this annoying warning when I inspect the certificate: "The connection had to be retried using SSL 3.0. This typically means that the server is using very old software and may have other security issues." The relevant part of the Apache config looks like:

        SSLEngine on
        SSLCertificateFile /etc/ssl/...
        SSLCertificateKeyFile /etc/ssl/...
        SSLCACertificateFile /etc/ssl/...
        SSLProtocol -all +SSLv3 +TLSv1

    I added the last line to try to address this problem, but it's not working. Any advice on properly enabling TLS?
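
    A minimal sketch of a protocol setup that prefers TLS, assuming Apache 2.2's mod_ssl (adjust certificate paths to your own); note that dropping SSLv3 entirely will lock out very old clients:

        SSLEngine on
        # allow everything except the legacy SSL protocols
        SSLProtocol all -SSLv2 -SSLv3
        # prefer the server's cipher order over the client's
        SSLHonorCipherOrder on
        SSLCipherSuite HIGH:!aNULL:!MD5

    If older clients must keep working, "SSLProtocol all -SSLv2" is the softer variant; the warning usually disappears once TLS is negotiated first.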

    Read the article

  • FTP Error 550 when trying to access a folder via symbolic link

    - by OrangeTux
    I'm configuring vsftpd on a Linux machine. At the moment local users can log in via FTP and they will see their home dir listed, with write access to it. Now I want the users to be able to write in the /var/www dir. Therefore I created a new group, apache, added the users to the group, and gave the group write access to /var/www. Via the terminal all users can write to /var/www. I created a link in the home directory to /var/www via:

        ln -s /var/www/ /home/user/www

    ls gives:

        drwxr-xr-x 2 orangetux orangetux 4096 Jun 23 15:06 ftp
        lrwxrwxrwx 1 orangetux orangetux   21 Jun 23 15:00 www -> /var/www/

    But when I use FTP I see the link but I cannot follow it: error 550, which means file not found or bad access. How can I solve this, so that the users have access to /var/www via their home dir?
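
    FTP daemons that chroot users into their home directory typically refuse to follow symlinks that point outside the chroot, which matches the 550 here. The usual workaround is a bind mount instead of a symlink; a sketch, assuming vsftpd with chrooted local users:

        # replace the symlink with a bind mount (symlinks cannot escape a chroot)
        rm /home/user/www
        mkdir /home/user/www
        mount --bind /var/www /home/user/www

        # make it survive reboots with an /etc/fstab entry:
        # /var/www  /home/user/www  none  bind  0  0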

    Read the article

  • Safari, IIS and optional Client Certificates

    - by Philipp
    I have an ASP.NET web app running on IIS 7.5. The web server is configured to accept client certificates. Unfortunately, visitors using the Safari browser are unable to view the page. It is the same problem as described under the following link: http://www.mnxsolutions.com/apache/safari-providing-an-ssl-error-client-certificate-rejected%E2%80%9D-when-other-browsers-work.html Does anyone know how to solve this? I'd really appreciate your help. Edit: this seems to be the same problem: http://superuser.com/questions/231695/iis7-5-ssl-question-safari-users-get-a-prompt-of-certificate-to-select

    Read the article

  • Polling performance on shared host

    - by Azincourt
    I am planning on writing a small browser game. The web server is a shared server, with no root access and no way to install software. I want to use AJAX for client/server communication. There will be 12 players, and each player would poll the server for the current game status every 200 ms, i.e. 5 requests per second. That makes:

        12 players x 5 requests/second = 60 requests per second

    Can Apache handle those requests? What might be the bottlenecks when using this approach?

    Read the article

  • dig gets the right result from DNS server, but name still fails to resolve

    - by EMiller
    Under what conditions would the following occur? From a given OS X machine on an internal network:

        $ cat /etc/resolv.conf
        nameserver 10.102.120.7
        nameserver 10.102.120.2

    From the same machine:

        $ dig @10.102.120.7 in.local
        <snip> ...
        ;; QUESTION SECTION:
        ;in.local.     IN  A
        ;; ANSWER SECTION:
        in.local.      43200  IN  A  10.102.123.30
        <snip> ...

    And yet, this workstation cannot ping in.local, nor load pages hosted by Apache on that machine. 10.102.123.30 is definitely up. Two OS X machines I know of fail to resolve in.local, but other machines on the network can. I have also checked their /etc/hosts to see if anything there might interfere. Not sure what else to check...
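
    One likely cause on OS X: names ending in .local are reserved for multicast DNS (Bonjour), so the system resolver hands them to mDNS instead of the nameservers in resolv.conf. dig queries the DNS server directly and so succeeds, while ping and the browser go through the system resolver and fail. A diagnostic sketch:

        # query through the system resolver instead of the DNS server
        dscacheutil -q host -a name in.local

        # show how the resolvers are scoped; look for an mdns entry for "local"
        scutil --dns

    If that is the cause, renaming the internal zone away from .local is the usual long-term fix.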

    Read the article

  • Access my local server by hostname or servername

    - by S.M.09
    I have a local server hosting a few applications in Tomcat, which are served through an Apache proxy. The clients or users trying to access these applications have to reach them like this:

        10.XXX.XXX.XX:8080/appName
        10.XXX.XXX.XX/appName

    But I want to replace the IP address with some other name related to my applications. I cannot go and enter the host name of the server in each user's /etc/hosts, nor do I want to set up DNS. Is there another way to do this? I am using ProxyPass XXX YYY to redirect all Tomcat applications to port 80.
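
    Without DNS or per-client hosts entries, something still has to resolve the name, so the remaining option on a LAN is multicast DNS. A sketch, assuming a Debian/Ubuntu server and clients that speak mDNS (Linux with Avahi, OS X natively; Windows needs Bonjour installed):

        # on the server: advertise its hostname over mDNS
        sudo apt-get install avahi-daemon

        # clients can then browse http://<server-hostname>.local/appName

    The Apache ProxyPass rules stay exactly as they are; only the name the clients type changes.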

    Read the article

  • Why doesn't my htaccess redirect work?

    - by cosmicbdog
    I have set up a simple .htaccess redirect which looks like this (this is the whole .htaccess file):

        Options +FollowSymLinks
        RewriteEngine On
        Redirect 301 /something http://something.com/something.php

    If I then load the site which contains this .htaccess, i.e. myredirectsite.com/something, I end up with the following 404:

        The requested URL /something was not found on this server.
        Apache/2.2.3 (Red Hat) Server at myredirectsite.com Port 80

    And the logs:

        [Tue Jul 10 14:25:46 2012] [error] [client xx.xx.xxx.xx] File does not exist: /home/sites/scp/something

    "something" is not a file, and "something" does not exist. I assumed I could use Redirect the same as a Rewrite, but it looks like the redirect needs to point at a file that actually exists? I created the file "something" and it just attempts to load the blank file. No redirect. What am I missing in getting this working?
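
    mod_alias's Redirect matches on the URL path and does not require a real file, so a 404 like this usually means the Redirect line is never taking effect (for instance, AllowOverride not permitting FileInfo in .htaccess). Since the file already turns on mod_rewrite, one sketch is to express the redirect there instead:

        Options +FollowSymLinks
        RewriteEngine On
        # per-directory context: no leading slash in the pattern
        RewriteRule ^something$ http://something.com/something.php [R=301,L]

    With AllowOverride set to permit FileInfo, either form should work; the rewrite variant just avoids mixing two modules.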

    Read the article

  • ErrorDocument 404 not found in non-existent subdomain

    - by Question Overflow
    I am trying to get the Apache server to issue a custom 404 error for invalid subdomains. The following is the relevant part of the httpd configuration:

        Alias /err/ "/var/www/error/"
        ErrorDocument 404 /err/HTTP_NOT_FOUND.html.var

        <VirtualHost *:80>
            # the default virtual host
            ServerName site_not_found
            Redirect 404 /
        </VirtualHost>

        <VirtualHost *:80>
            ServerName example.com
            ServerAlias ??.example.com
        </VirtualHost>

    What I get instead is this:

        Not Found
        The requested URL / was not found on this server.
        Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.

    I don't understand why a URL to non-existent-subdomain.example.com produces a 404 error without the custom page, while a URL to e.g. example.com/non-existent-file produces the full custom 404. Can someone advise on this? Thanks.
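
    A plausible cause: "Redirect 404 /" matches by path prefix, so when Apache internally redirects to /err/HTTP_NOT_FOUND.html.var to render the error page, that request also matches and gets a bare 404 itself, which is exactly what the "Additionally, a 404 Not Found error was encountered..." message describes. A sketch that exempts the error directory, assuming Apache's PCRE-based RedirectMatch:

        <VirtualHost *:80>
            ServerName site_not_found
            Alias /err/ "/var/www/error/"
            ErrorDocument 404 /err/HTTP_NOT_FOUND.html.var
            # 404 everything except the error documents themselves
            RedirectMatch 404 ^/(?!err/)
        </VirtualHost>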

    Read the article

  • Best setup for a data server serving small (~40 KB) pictures

    - by Nicolas Manzini
    I'm designing the server structure for my application in case things go well. I have one DB server connected to multiple servers which process connections, all of them with lots of RAM and fast processors. (I'm still looking for a way to use multithreading, because right now it's plain Apache + PHP... so lots of RAM is needed.) Upon an answer from those servers, the client can then connect to another server to retrieve pictures, using the address it previously got from the DB. Is it a good idea to have one picture server with, say, nginx and an SSD disk, having to send all pictures to everybody? Or should I have multiple servers accessing a shared SSD drive, or multiple disks updating each other? Also, should I put a lot of RAM in the database server? Probably there won't be one picture much more popular than another.

    Read the article

  • How do we increase the maximum allowed HTTP GET query length in Jetty?

    - by Mike
    We are using Jetty to run an Apache Solr index. We've had some queries that have grown way beyond the previously expected maximum length, and are now having issues where most queries are not returning any data because the URL gets truncated. These requests are not being made through a browser; they're being made programmatically using the Apache_Solr_Service PHP library. The application expects queries to come in as HTTP GET requests, so simply switching to a POST will not solve this problem. How can we increase the maximum allowed HTTP GET query length in Jetty? Thanks!
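
    In Jetty the limit is the size of the request header buffer, set on the connector. A sketch of an etc/jetty.xml fragment, assuming the Jetty 6-style connector that shipped with Solr of that era (newer Jetty versions renamed the setting requestHeaderSize):

        <New class="org.mortbay.jetty.nio.SelectChannelConnector">
          <Set name="port">8983</Set>
          <!-- default is only a few KB; raise it so long query strings fit -->
          <Set name="headerBufferSize">65536</Set>
        </New>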

    Read the article

  • Migrating domains - 301 Redirect of all contents of directory

    - by Trufa
    I need to do a 301 redirect with Apache since I'm migrating domains. What I would need to do is the following: from certain directories, redirect all of their content to a different domain (where the files already exist). Let's say I have one.com/files/something.doc or one.com/files/other.php. I have already copied or backed up all the contents of the directory, so the following already exist: two.com/old/files/something.doc and two.com/old/files/other.php. So I would just need to redirect anything in the directory "files" (or whatever). I hope the question is clear enough; if not, please ask for any clarification needed! Thanks in advance!!
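
    mod_alias can do this in one line per directory, since Redirect maps a URL prefix and carries the rest of the path along. A sketch for one.com's vhost (or a .htaccess at its docroot), using the names from the question:

        # /files/anything -> http://two.com/old/files/anything
        Redirect permanent /files http://two.com/old/files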

    Read the article

  • mod_rewrite issue with GoDaddy web hosting

    - by MrFoh
    I'm trying to use Laravel to build a site, but my routes all redirect to the homepage. The Apache error logs show this:

        AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.

    And the .htaccess file is this:

        <IfModule mod_rewrite.c>
            Options -MultiViews
            Options +FollowSymLinks
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php/$1 [L]
        </IfModule>

    The webroot has multiple sub-folders which are document roots for different domains, and I am working in one of these sub-folders. What is causing this error and how can it be fixed?
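
    The loop means the rewritten URL keeps matching the rule again; a common guard on shared hosts is to stop rewriting once the request already targets index.php. A sketch of the same file with that condition added (a hypothetical fix, worth testing together with a RewriteBase that matches the sub-folder):

        <IfModule mod_rewrite.c>
            Options -MultiViews
            Options +FollowSymLinks
            RewriteEngine On
            RewriteBase /

            # don't re-rewrite requests that are already at index.php
            RewriteCond %{REQUEST_URI} !/index\.php
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php/$1 [L]
        </IfModule>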

    Read the article

  • nagiosgraph new services not showing

    - by Eleven-Two
    I am using Nagios Core with Nagiosgraph and had only enabled graphing for CPU usage for a while. This worked fine, but now I wanted to add some more services (for example memory usage). The new services are not working (no RRD data is generated). The Nagiosgraph site only says "no data available", and I get no error in the Apache log, nagiosgraph.log or nagiosgraph-cgi.log. The new services are standard services (NSClient++ MEMUSE for example) and of course they are included in the map file. If I execute the checks manually, they also show perfdata. I added the services by enabling the "graphed-service" use. Did I miss something?

    Read the article

  • Invalid command 'VirtualDocumentRoot'

    - by andy
    I'm unsure as to why I'm getting the following error when Apache is restarted:

        Invalid command 'VirtualDocumentRoot', perhaps misspelled or defined by a module not included in the server configuration
        Action 'start' failed.

    The snippet it is referring to is this:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            VirtualDocumentRoot /local/www/staging/%1
            ServerAlias *.staging.mydomain.com
        </VirtualHost>

    I assumed it was a misspelling, as the error says, but it was copied directly from another server of mine, where it works perfectly. Any ideas?
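
    VirtualDocumentRoot is provided by mod_vhost_alias, so "Invalid command" almost always means that module isn't loaded on this server (the working server presumably has it enabled). A sketch of the fix, assuming a Debian/Ubuntu layout:

        # Debian/Ubuntu:
        sudo a2enmod vhost_alias
        sudo apache2ctl configtest && sudo /etc/init.d/apache2 restart

        # or, with a plain httpd.conf build:
        # LoadModule vhost_alias_module modules/mod_vhost_alias.so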

    Read the article

  • Thin web server - single or multiple instances per IP address:port?

    - by wchrisjohnson
    I'm deploying a Rack/Sinatra/websocket app onto several servers and will use Thin as the web server (http://code.macournoyer.com/thin/). There are almost no views to show, so I am not front-ending it with a traditional web server like Apache or nginx. In general, you start Thin with a config file that sets the number of server instances to start, say 3, and the port to start with, say 5000. So, in my example, when Thin starts, it brings up three instances on a range of ports starting at port 5000. If I have a series of virtual machines, say 3, 6, 9, etc., that I treat as a cluster, would/should I choose to start a single Thin instance on each VM, or multiple instances on each VM? Why? Thanks - Chris
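
    Thin is single-threaded per instance, so a single instance per VM leaves cores idle; the usual sizing is roughly one instance per core on each VM, with whatever balances traffic spreading requests across all the VMs and ports. A sketch of a per-VM config, assuming a 2-core VM and a hypothetical app path:

        # /etc/thin/myapp.yml
        chdir: /var/apps/myapp
        rackup: config.ru
        address: 0.0.0.0
        port: 5000        # instances bind 5000 and 5001
        servers: 2        # ~one per core; Thin is single-threaded
        daemonize: true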

    Read the article

  • umask is being ignored on Gentoo while creating new files

    - by drcelus
    I have a server running Gentoo and hosting a Drupal installation. Whenever a Drupal update is executed, the directory permissions of the updated module turn from 755 to 744, preventing the application from accessing the files. The umask is defined as 022 under /etc/profile, and the Apache server is running under user and group nobody. I believe this has nothing to do with the Drupal installation, since if I create a directory as root, the same happens: it is created with 744 permissions. Since the umask is 022, shouldn't it be created as 755? Why is the umask being ignored, and how do I tell the server to create the directories with permission 755?
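
    A quick sanity check: the umask is subtracted from the mode the creating process requests, so mkdir's requested 0777 under a umask of 022 must come out as 755; a result of 744 means the effective umask is really 033, or the process requested a narrower mode itself. A sketch:

        $ umask                      # what this shell actually has
        0022
        $ mkdir t && stat -c '%a' t
        755                          # 0777 & ~0022

    Note that /etc/profile only affects login shells; a daemon like Apache inherits its umask from its init script, and code can request an explicit mode (PHP's mkdir($path, 0744), for instance) that yields 744 no matter what the umask would allow.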

    Read the article

  • IPv4 NameVirtualHost, IPv6 VirtualHost

    - by MadHatter
    Like many of us, I have an Apache server (2.2.15, plus patches) with a lot of virtual hosts on it. More than I have IPv4 addresses, to be sure, which is why I use NameVirtualHost to run lots of them on the same IPv4 address. I'm busily trying to get everything I do IPv6-enabled. This server now has a routed /64, which gives me an awful lot of v6 addresses to throw around. What I'm trying to find is a simple way to tell each v4 NameVirtualHost that it should also function as a VirtualHost on a unique IPv6 address. I really, really don't want to have to define each virtual host twice. Does anyone know of an elegant way to do this? Or to do something comparable, in case I've embedded any dangerously ignorant assumptions in my question?
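
    Apache accepts several address:port pairs on a single <VirtualHost> line, so each vhost can be defined once and answer on both the shared v4 address (name-based) and its own v6 address. A sketch with documentation addresses standing in for real ones:

        NameVirtualHost 192.0.2.1:80
        Listen [2001:db8::a]:80

        <VirtualHost 192.0.2.1:80 [2001:db8::a]:80>
            ServerName www.example.org
            DocumentRoot /var/www/example
        </VirtualHost>

    Since every vhost gets its own v6 address, the v6 side doesn't even need name-based matching; only the extra Listen lines and the second address per vhost change.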

    Read the article

  • Best server OS for running up-to-date software

    - by rjstelling
    I need to configure a server (*nix) that runs our (bespoke) CMS and applications. In the past I have defaulted to using CentOS 5, but I find it difficult to upgrade the software to the versions we require. For example, we need PHP 5.3, but CentOS 5 has 5.2. Updating is fine but breaks something else (normally MySQL support in PHP). Eventually it gets to a situation where I can't upgrade because of missing dependencies and incompatible versions:

        Error: Missing Dependency: httpd = 2.2.3-43.el5.centos.3 is needed by package httpd-devel-2.2.3-43.el5.centos.3.i386 (updates)

    Is there a better alternative OS for hassle-free updates? I need:

        Apache 2.2.17 (the development version, for apxs)
        MySQL 5.5.8
        PHP 5.3.5

    Read the article

  • I'd like to archive files from Ubuntu to Windows between two computers on a shared home network

    - by Wabbitseason
    I have an old laptop running Ubuntu 9.10 which I use as a LAMP environment for web development, and I have a comfortable, powerful desktop computer with Windows 7 installed on it. These two are connected to a home router so both can access the internet. I have been able to set up Samba so I can mount my Apache home directory, making it accessible from Windows and mapped as a network drive. What I'd like to do is access some Windows folders from Linux, so I could automatically create backups (with cron scripts) of my work to physically different locations on the Windows box. Perhaps at a later time I'd set up a local Subversion repository, but I'd love to keep backups of that on the Windows drives too. Using Ubuntu's Places/Network menu I can see my desktop, but I'm unable to log in to it, despite having created the correct username and password on Windows. All I can get is the following error message: "Unable to mount location. Failed to retrieve share list from server." What could be misconfigured?
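
    A first step that separates GVFS/file-browser problems from share or authentication problems is to talk to the Windows machine with smbclient from a terminal. A sketch, with hypothetical machine and user names:

        # list the shares the Windows 7 box is offering
        smbclient -L //WINDESKTOP -U winuser

        # if listing works, try mounting one share for the cron scripts
        sudo mount -t cifs //WINDESKTOP/Backups /mnt/backups -o username=winuser

    If smbclient fails too, the usual suspects are Windows 7's sharing settings (password-protected sharing, network discovery) rather than anything on the Ubuntu side.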

    Read the article

  • "Countersigning" a CA with openssl

    - by Tom O'Connor
    I'm pretty used to creating the PKI used for X.509 authentication, SSL client verification being the main reason for doing it. I've just started to dabble with OpenVPN (which I suppose is doing the same things as Apache would do with the Certificate Authority (CA) certificate). We've got a whole bunch of subdomains and appliances which currently all present their own self-signed certificates. We're tired of having to accept exceptions in Chrome, and we think it must look pretty rough for our clients having our address bar come up red. For that, I'm comfortable buying an SSL wildcard, CN=*.mycompany.com. That's no problem. What I don't seem to be able to find out is: can we have our internal CA root signed as a child of our wildcard certificate, so that installing that cert into guest devices/browsers/whatever doesn't present anything about an untrusted root? Also, on a bit of a side point, why does the addition of a wildcard double the cost of certificate purchase?
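
    What decides this is the basicConstraints extension: a commercial wildcard certificate is issued with CA:FALSE, so browsers will reject anything it signs, which is why subordinate-CA certificates are sold (expensively) as a separate product. A sketch of how to check any certificate:

        # show whether a certificate may act as a CA
        openssl x509 -in wildcard.crt -noout -text | grep -A1 'Basic Constraints'
        #     X509v3 Basic Constraints: critical
        #         CA:FALSE    <- cannot sign a child CA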

    Read the article

  • How to identify which website on my instance is receiving lots of traffic?

    - by Bob Flemming
    I am new to server administration and have just set up a new quad-core instance which hosts around 15 websites. Over the past couple of days my server load has been averaging around 15.00. I believe it is because one (or maybe more) of the websites is getting spammed by spambots. Typing 'top' at the command line shows many processes from user 'www-data', which indicates lots of web traffic. Is there an easy way to identify which one of my sites is taking a hammering? Reading the Apache error logs is a very difficult task, as most of the websites receive daily traffic of 10,000+ unique users. Any help would be appreciated!
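
    One low-effort approach is a shared access log that records which vhost served each request (the %v format code), then a one-liner to tally it. A sketch, assuming all sites run under one Apache:

        # httpd.conf: log the canonical vhost name first on every line
        LogFormat "%v %h %l %u %t \"%r\" %>s %b" vhostcombined
        CustomLog /var/log/apache2/all-vhosts.log vhostcombined

    and after a few minutes of traffic:

        # requests per site, busiest first
        awk '{print $1}' /var/log/apache2/all-vhosts.log | sort | uniq -c | sort -rn | head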

    Read the article

  • How to install Gitlab in a VM on a production server?

    - by Michaël Perrin
    I have a production server running Ubuntu 12.04, and I would like to install on it a VM with Gitlab (using Vagrant and VirtualBox). Let's say that the address to access Gitlab is gitlab.mydomain.com. The DNS zone has been configured to point to the IP address of the server. I want users to be able to access Gitlab (either for pushing to a repository or for accessing the web interface) from the outside. The VM has been configured to have an IP address. It means that when browsing http://gitlab.mydomain.com, for instance, the request has to be forwarded to the VM on the server, i.e. to the VM's IP address. What are the ways to configure this? Can Apache be used as a proxy? In this case, I guess it only works for HTTP requests, but not for pushing to a Git repository on the VM.
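
    Apache can reverse-proxy the web interface, and Git pushes over HTTP(S) ride through that same proxy; only SSH pushes bypass Apache and need a port forward of their own. A sketch, assuming mod_proxy/mod_proxy_http are enabled and a hypothetical Vagrant private address of 192.168.33.10:

        <VirtualHost *:80>
            ServerName gitlab.mydomain.com
            ProxyPreserveHost On
            ProxyPass        / http://192.168.33.10/
            ProxyPassReverse / http://192.168.33.10/
        </VirtualHost>

    For git@... SSH remotes, forward a port to the VM (a Vagrant forwarded_port or an iptables DNAT rule) or stick to HTTP remotes.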

    Read the article

  • Best blog package/platform (Java, PHP, etc.)?

    - by user50912
    Hi folks, I want to set up a blog, but I want it to reside on a URL I've bought. I also don't want any of the ads and such that sit around blogs on blog-specific sites like Blogspot, and generally I want more control. I was thinking of getting shared hosting with MySQL and such to get it going (as opposed to a VM, which would be overkill). Then I just need to decide on the easiest, quickest (and most secure) way of getting something up there. After some googling, I see b2evolution.net, which sits on PHP, and Apache Roller, which sits on Java. Could anyone offer any advice on what's my best approach here? Are there security concerns with either, or has anyone any experience in this area? I really want setup time to be minimal, so I can concentrate on the feel of the blog rather than what's under the hood. Many thanks.

    Read the article

  • What is the Optimal Server Configuration for Split-Path Testing?

    - by doug
    I am far from an expert on Apache, or any server for that matter, so I apologize if this question is poorly worded, which it likely is. We have always relied on a vendor for split-path testing (aka "A/B testing"). If you're not familiar with that term, it's a form of marketing research in which you slightly modify one of your web pages (usually one near the point of conversion), say, by changing the position of the "Buy Now" button or its color/contrast/texture, then serve one of those two pages to a given user based on random selection. By doing split-path testing ourselves, I suspect we can do it far more cheaply and shorten cycle times as well. What is the optimal set-up for these tests? "Optimal" is based on the following criteria: how quickly and easily new tests can be set up and put online, and minimal disruption to overall site performance.
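
    Apache alone can handle the random 50/50 assignment with a rnd-type RewriteMap, keeping each visitor on the assigned variant via a cookie. A sketch with hypothetical page names (the variant pages would set the "ab" cookie themselves, e.g. from PHP or JavaScript, so the assignment sticks):

        # httpd.conf -- pick one of two variants at random
        RewriteMap variant rnd:/etc/apache2/ab.map
        #   /etc/apache2/ab.map contains one line:
        #   buy buy_a.html|buy_b.html

        RewriteEngine On
        # returning visitors keep the variant recorded in their cookie
        RewriteCond %{HTTP_COOKIE} ab=([^;]+)
        RewriteRule ^/buy$ /%1 [L]
        # everyone else gets a random variant
        RewriteRule ^/buy$ /${variant:buy} [L]

    Conversion can then be measured by logging which variant each buyer saw, so overall site performance is barely touched.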

    Read the article

  • Squid stale-while-revalidate not working when max-age=0

    - by Wiliam
    Squid 2.7 always hits the backend; the expected behavior is for it to hit the backend (using stale-while-revalidate) only when the cache expires, not whenever the client sends max-age=0. Script:

        <?php
        header('Cache-Control: public, max-age=10, stale-if-error=200, stale-while-revalidate=500');
        header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");
        sleep(2);
        die("OK");

    And the Squid config:

        # http_port public_ip:port accel defaultsite= default hostname, if not provided
        http_port 80 accel defaultsite=mydomain.com
        # IP and port of your main application server (or multiple)
        cache_peer 127.0.0.1 parent 8000 0 no-query allow-miss originserver name=main
        # Do not tell the world which squid version we're running
        httpd_suppress_version_string on
        # Remove the Cache-Control header for upstream servers
        header_access Cache-Control deny all
        #header_access Last-Modified deny all
        # log all incoming traffic in Apache format
        logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
        access_log /usr/local/squid/var/logs/squid.log combined all
        cache_effective_user squid
        refresh_pattern . 10080 90% 999999 ignore-no-cache override-expire ignore-private
        icp_port 0
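
    A client reload sends Cache-Control: max-age=0, which Squid treats as "revalidate now" regardless of stale-while-revalidate; Squid 2.7's refresh_pattern has options aimed at exactly this. A hedged sketch (both options change caching behavior for every client, so test carefully):

        # turn forced client reloads into If-Modified-Since revalidations...
        refresh_pattern . 10080 90% 999999 ignore-no-cache override-expire ignore-private reload-into-ims
        # ...or ignore the client's reload entirely (stronger): use ignore-reload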

    Read the article
