Search Results

Search found 56465 results on 2259 pages for 'http status 401'.

Page 291 of 2259

  • multiple webapps in tomcat -- what is the optimal architecture?

    - by rvdb
    I am maintaining a growing base of mainly Cocoon-2.1-based web applications [http://cocoon.apache.org/2.1/], deployed in a Tomcat servlet container [http://tomcat.apache.org/], and proxied with an Apache http server [http://httpd.apache.org/docs/2.2/]. I am conceptually struggling with the best way to deploy multiple web applications in Tomcat. Since I'm not a Java programmer and we don't have any sysadmin staff, I have to figure out for myself what is the most sensible way to do this. My setup has evolved through 2 scenarios and I'm considering a third for maximal separation of the distinct webapps. [1] 1 Tomcat instance, 1 Cocoon instance, multiple webapps -tomcat |_ webapps |_ webapp1 |_ webapp2 |_ webapp[n] |_ WEB-INF (with Cocoon libs) This was my first approach: just drop all web applications inside a single Cocoon webapps folder inside a single Tomcat container. This seemed to run fine, and I did not encounter any memory issues. However, this poses a maintainability drawback, as some Cocoon components are subject to updates, which often affect the webapp coding. Hence, updating Cocoon becomes unwieldy: since all webapps share the same pool of Cocoon components, updating one of them would require the code in all web applications to be updated simultaneously. In order to isolate the web applications, I moved to the second scenario. [2] 1 Tomcat instance, each webapp in its dedicated Cocoon environment -tomcat |_ webapps |_ webapp1 | |_ WEB-INF (with Cocoon libs) |_ webapp2 | |_ WEB-INF (with Cocoon libs) |_ webapp[n] |_ WEB-INF (with Cocoon libs) This approach separates all webapps into their own Cocoon environment, run inside a single Tomcat container. In theory, this works fine: all webapps can be updated independently. However, this soon results in PermGenSpace errors. It seemed that I could manage the problem by increasing memory allocation for Tomcat, but I realise this isn't a structural solution, and that overloading a single Tomcat in this way is prone to future memory errors. This set me thinking about the third scenario. [3] multiple Tomcat instances, each with a single webapp in its dedicated Cocoon environment -tomcat |_ webapps |_ webapp1 |_ WEB-INF (with Cocoon libs) -tomcat |_ webapps |_ webapp2 |_ WEB-INF (with Cocoon libs) -tomcat |_ webapps |_ webapp[n] |_ WEB-INF (with Cocoon libs) I haven't tried this approach, but am thinking of the $CATALINA_BASE variable. A single Tomcat distribution can be instantiated multiple times with different $CATALINA_BASE environments, each pointing to a Cocoon instance with its own webapp. I wonder whether such an approach could avoid the structural memory-related problems of approach [2], or will the same issues apply? On the other hand, this approach would complicate management of the Apache http frontend, as it will require the AJP connectors of the different Tomcat instances to listen on different ports. Hence, Apache's worker configuration has to be updated and reloaded whenever a new webapp (in its own Tomcat instance) is added. And there seems to be no way to reload workers.properties without restarting the entire Apache http server. Is there perhaps another / more dynamic way of 'modularizing' multiple Tomcat-served webapps, or can one of these scenarios be refined? Any thoughts, suggestions, advice much appreciated. Ron
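
    A minimal sketch of how scenario [3] is usually wired up with $CATALINA_BASE, offered as an assumption rather than a tested recipe; the paths and port numbers below are placeholders, not taken from the post:

        # one shared CATALINA_HOME (the Tomcat distribution), one CATALINA_BASE per webapp
        export CATALINA_HOME=/opt/tomcat           # assumed install path
        export CATALINA_BASE=/srv/tomcat-webapp1   # assumed per-app instance directory
        mkdir -p "$CATALINA_BASE"/{conf,logs,temp,webapps,work}
        cp "$CATALINA_HOME"/conf/server.xml "$CATALINA_HOME"/conf/web.xml "$CATALINA_BASE"/conf/
        # edit $CATALINA_BASE/conf/server.xml so each instance gets unique shutdown/HTTP/AJP
        # ports, e.g. 8005/8080/8009 for webapp1, 8006/8081/8010 for webapp2, and so on
        "$CATALINA_HOME"/bin/startup.sh            # reads CATALINA_BASE from the environment

    Each instance then needs its own AJP worker on the Apache side, which is exactly the management overhead described above.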

    Read the article

  • mod_rewrite: redirect from subdomain to main domain

    - by Bald
    I have two domains - domain.com and forum.domain.com - that point to the same directory. I'd like to redirect all requests from forum.domain.com to domain.com (for example: forum.domain.com/foo to domain.com/forum/foo) without changing the address in the address bar (a hidden redirect). I wrote something like this and put it into the .htaccess file: Options +FollowSymlinks RewriteEngine on RewriteCond %{HTTP_HOST} ^forum\.example\.net$ RewriteRule (.*) http://example.com/forum/$1 [L] RewriteCond %{REQUEST_FILENAME} !-s [NC] RewriteCond %{REQUEST_FILENAME} !-d [NC] RewriteRule ^(.+) index.php/$1 [L] That works only if I add the [R] flag: RewriteRule (.*) http://example.com/forum/$1 [R,L] But it changes the address in the address bar. EDIT: Ok, let's make it simple. I added these two lines at the end of c:\windows\system32\drivers\etc\hosts on my local computer: 127.0.0.3 foo.net 127.0.0.3 forum.foo.net Now, I created two virtual hosts: <VirtualHost foo.net:80> ServerAdmin [email protected] ServerName foo.net DocumentRoot "C:/usr/src/foo" </VirtualHost> <VirtualHost forum.foo.net:80> ServerAdmin [email protected] ServerName forum.foo.net DocumentRoot "C:/usr/src/foo" </VirtualHost> ...and a directory called "foo", where I put two files: .htaccess and index.php. Index.php: <?php echo $_SERVER['PATH_INFO']; ?> .htaccess: Options +FollowSymlinks RewriteEngine on RewriteBase / RewriteCond %{HTTP_HOST} ^forum\.foo\.net$ RewriteCond %{REQUEST_URI} !^/forum/ RewriteCond %{REQUEST_FILENAME} !-s [NC] RewriteCond %{REQUEST_FILENAME} !-d [NC] RewriteRule ^(.+)$ /index.php/forum/$1 [L] RewriteCond %{HTTP_HOST} !^forum\.foo\.net$ RewriteCond %{REQUEST_FILENAME} !-s [NC] RewriteCond %{REQUEST_FILENAME} !-d [NC] RewriteRule ^(.+) index.php/$1 [L] When I type the address http://forum.foo.net/test in the address bar, it displays /forum/test, which is good. http://foo.net/a/b/c shows /a/b/c, which is good. But! http://forum.foo.net/ displays an empty value (it should display /forum).
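
    One hedged guess at the last symptom: in a per-directory (.htaccess) context the rewritten path for a request to the bare root is an empty string, so the pattern ^(.+)$ can never match it. A sketch of one extra rule for that single case, reusing the hostnames from the test setup above:

        RewriteCond %{HTTP_HOST} ^forum\.foo\.net$ [NC]
        RewriteRule ^$ /index.php/forum [L]   # handle the bare "/" request explicitly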

    Read the article

  • Confusion on networking service start/stop in Ubuntu

    - by Daniel Ball
    I'm preparing to move and took down two of my servers, leaving only one with some essential services running. What I neglected to consider was that one was the DHCP server(which I realized when somebody contacted me saying they couldn't connect. Whups). So because I only have a few hosts on this small network, I opted to just statically configure them for now. One of these is a new Ubuntu 11.04 server, where I have very little experience. I edited /etc/network/interfaces and /etc/hosts to reflect my changes. I ran $sudo /etc/init.d/networking stop *deconfiguring network interfaces ... So yay. Then I try to start, it gives me the mumbo jumbo about using services (why didn't it do that for the stop?) So instead I run ... $sudo service networking start networking stop/waiting Now, to me that says the status of the service is stopped. But when I ping another computer, I get a successful reply. So is it not actually stopped? More importantly, am I doing something wrong? Edit daniel@FOOBAR:~$ sudo service networking status networking stop/waiting daniel@FOOBAR:~$ sudo service networking stop stop: Unknown instance: daniel@FOOBAR:~$ sudo service networking status networking stop/waiting daniel@FOOBAR:~$ sudo service networking start networking stop/waiting daniel@FOOBAR:~$ sudo service networking status networking stop/waiting So you can see why I ran /etc/init.d/networking stop instead. For some reason upstart (that is what "services" is, right?) isn't working with stop. cat /etc/hosts 127.0.0.1 localhost 127.0.1.1 FOOBAR 198.3.9.2 FOOBAR #Added entry July 19 2011 # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters cat /etc/network/interfaces # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback # The primary network interface #auto eth0 #iface eth0 inet dhcp # hostname FOOBAR auto eth0 iface eth0 inet static address 198.3.9.2 netmask 255.255.255.0 network 198.3.9.0 broadcast 198.3.9.255 gateway 198.3.9.15 No I didn't save backups, it was just a minor change so I just commented out the old DHCP setting. Edit I set everything back to original settings and set up a DHCP server. "starting" networking does the same thing. I can only assume this is normal, I just don't know WHY. It can't be anything to do with the configuration files, since they've been restored.
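
    For what it's worth, on 11.04 the "networking" job under upstart is a short-lived task (it essentially runs ifup -a and exits), so "stop/waiting" is its normal resting state rather than a sign of failure, which would explain why pings keep working. A hedged sketch for re-applying changes to a single interface instead (the interface name eth0 is taken from the config above):

        sudo ifdown eth0 && sudo ifup eth0   # re-reads /etc/network/interfaces for eth0
        ip addr show eth0                    # confirm the static address actually took effect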

    Read the article

  • PHP+Apache as forward/reverse proxy: how to process client requests and server responses in PHP?

    - by Lightworker
    Hi! I'm having a lot of trouble with the proper configuration of Apache mod_proxy.so to work as desired... The main idea is to create a proxy on a local machine in a network which will have the ability to process a client request (client connected through this Apache prepared proxy) in PHP. And also, it will have the capacity to process the server responses in PHP too. Those are the 2 functionalities, and they are independent of each other. Let me present a little schema of what I need to achieve: As you can see here, there are 2 ways: the blue one and the red one. For the blue one, I basically connected a client (Machine B - cell phone) on my local network (home) and configured it to go through a proxy, which is Machine A (personal computer) on exactly the same network. So let's say (not DHCP): Machine A: 192.168.1.40 -- Apache is running on this machine, and configured to listen on port 80. Machine B (cell phone): 192.168.1.75 -- configured to go through a proxy, which is IP 192.168.1.40 and port 80 (basically, Machine A). After configuring Apache properly, which is basically to remove the "#" from httpd.conf on the lines for mod_proxy.so (main worker), mod_proxy_connect.so (SSL, allowCONNECT, ...) and mod_proxy_http.so (needed to handle HTTP requests/responses), and having in my case lines like this: # Implements a proxy/gateway for Apache. Include "conf/extra/httpd-proxy.conf" # Various default settings Include "conf/extra/httpd-default.conf" # Secure (SSL/TLS) connections Include "conf/extra/httpd-ssl.conf" which gives me the ability to configure the file httpd-proxy.conf to prepare the forward proxy or the reverse proxy. So I'm not sure if what I need is a forward proxy or a reverse one. For a forward proxy I've done this: <IfModule proxy_module> <IfModule proxy_http_module> # # FORWARD Proxy # #ProxyRequests Off ProxyRequests On ProxyVia On <Proxy *> Order deny,allow # Allow from all Deny from all Allow from 192.168.1 </Proxy> </IfModule> </IfModule> which basically passes all the packets normally to the server and back to the client. I can trace it perfectly (and test that it works) looking at the "access.log" from Apache. Any request I make with the cell phone then appears in the Apache log. So it works. But here comes the problem: I need to process those client requests. And I need to do it in PHP. I have read a lot about this. I've read in detail the official site from Apache about mod_proxy. And I've searched a lot on forums, but without luck. So I thought about a first approximation: 1) A forward proxy in Apache passes all the packets and it's not possible to process them. This seems to be true, so what about a reverse proxy? So I envisioned something like: ProxyRequests Off <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPass http://www.google.com http://www.yahoo.com ProxyPassReverse http://www.google.com http://www.yahoo.com which is just a test, but this should mean that when my cell phone tries to navigate to Google, it ends up at Yahoo, shouldn't it? But no. It doesn't work. So you can see that ALL the examples of an Apache reverse proxy go like: ProxyPass /foo http://foo.example.com/bar ProxyPassReverse /foo http://foo.example.com/bar which means that any kind of request in a local context will be resolved at a remote location. But what I need is the inverse! When my phone asks for a remote site, I want to resolve that request on my local server (the Apache one) so I can process it with a PHP module. So, if it's a forward proxy, I need to pass through PHP first. If it's a reverse proxy, I need to change the "going" direction towards my local server so the request is processed first in PHP. Then a second option comes to mind: 2) I've seen something like: <Proxy http://example.com/foo/*> SetOutputFilter INCLUDES </Proxy> And I started to search for SetOutputFilter, SetInputFilter, AddOutputFilter and AddInputFilter. But I don't really know how I can use them. This seems promising, because with something like this I should be able to add an input filter to process the client requests in PHP and send back to the client whatever I program/want (not the remote server response), which is the BLUE path on the schema, and I should be able to add an output filter which seems to give me the ability to process the remote server response before sending it to the client, which should be the RED path on the schema. The red path is just to read server responses and play with them. Nothing more. The blue path is the important one, because I will send to the client whatever I want after processing the requests. I'm so sorry for this amazingly big post, but I needed to explain it as well as I can. I hope someone will understand my problem and help me solve it! Lots of thanks in advance!! :)
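
    One workaround that is often suggested for this kind of setup, offered here as an assumption rather than something mod_proxy provides out of the box: don't let mod_proxy fetch the remote resource at all, and instead rewrite every proxied request to a local PHP script that does the fetching itself, so both the client request (blue path) and the upstream response (red path) pass through PHP. A rough sketch, where handler.php is a made-up placeholder name and the cURL extension is assumed to be available:

        <?php
        // handler.php - minimal sketch of a PHP fetch-and-filter step
        $url = $_SERVER['REQUEST_URI'];   // for a forward-proxied request this holds the absolute URL
        // ... inspect or rewrite the client request here (blue path) ...
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        $body = curl_exec($ch);
        curl_close($ch);
        // ... inspect or rewrite the upstream response here (red path) ...
        echo $body;

    On the Apache side something like RewriteRule ^ /handler.php [L] inside the proxy virtual host would hand the traffic to that script, but treat that as an untested sketch as well.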

    Read the article

  • How much network latency is "typical" for east - west coast USA?

    - by Jeff Atwood
    At the moment we're trying to decide whether to move our datacenter from the west coast (Corvallis, OR) to the east coast (NY, NY). However, I am seeing some disturbing latency numbers from my location (Berkeley, CA) to the NYC host. Here's a sample result, retrieving a small .png logo file in Google Chrome and using the dev tools to see how long the request takes: Berkeley to NYC server: 215 ms latency, 46ms transfer time, 261ms total Berkeley to Corvallis server: 114ms latency, 41ms transfer time, 155ms total some URLs if you want to try yourself: http://careers.stackoverflow.com/content/cso/img/logo.png (NY, NY) http://serverfault.com/cache/logo.png (Corvallis, OR) It makes sense that Corvallis, OR is geographically closer to Berkeley, CA so I expect the connection to be a bit faster.. but I'm seeing an increase in latency of +100ms when I perform the same test to the NYC server. That seems .. excessive to me. Particularly since the time spent transferring the actual data only went up 10%, yet the latency went up ten times as much! That feels... wrong... to me. I found a few links here that were helpful (through Google no less!) ... http://serverfault.com/questions/63531/does-routing-distance-affect-performance-significantly http://serverfault.com/questions/61719/how-does-geography-affect-network-latency http://serverfault.com/questions/6210/latency-in-internet-connections-from-europe-to-usa ... but nothing authoritative. So, is this normal? It doesn't feel normal. What is the "typical" latency I should expect when moving network packets from the east coast <--> west coast of the USA?
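
    For anyone who wants to reproduce the measurement without the browser dev tools, a quick command-line sketch using curl's built-in timing variables (the URLs are the two from the question):

        curl -o /dev/null -s -w 'connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' \
            http://careers.stackoverflow.com/content/cso/img/logo.png
        curl -o /dev/null -s -w 'connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' \
            http://serverfault.com/cache/logo.png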

    Read the article

  • Electronics Recycling

    - by Evan Carroll
    Is it common practice to not pay for recycling electronics? I find it kind of funny that scrap metal demands such a premium but electronics recycling gets nothing. My dad owns a stereo store and throws away hundreds of radio head-units a month. Is there anything useful or profitable that can be done with electronics on such a mass scale? http://stselectronicrecyclinginc.com/index.php/electronics-recycling http://www.basscomputerrecycling.com/?page_id=30 http://www.computerrecyclecenter.com/services.htm None of them seem to pay anything for the scrap.

    Read the article

  • Cannot install g++ on ubuntu

    - by Erel Segal
    I don't have g++: erelsgl@ubuntu:/etc/apt$ which g++ erelsgl@ubuntu:/etc/apt$ erelsgl@ubuntu:/etc/apt$ g++ The program 'g++' can be found in the following packages: * g++ * pentium-builder Try: sudo apt-get install <selected package> So I try to install it: erelsgl@ubuntu:~/srilm$ sudo apt-get install g++ Reading package lists... Done Building dependency tree Reading state information... Done g++ is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 5 not upgraded. 2 not fully installed or removed. After this operation, 0B of additional disk space will be used. Setting up g++ (4:4.4.3-1ubuntu1) ... update-alternatives: error: alternative path /usr/bin/g++ doesn't exist. dpkg: error processing g++ (--configure): subprocess installed post-installation script returned error exit status 2 dpkg: dependency problems prevent configuration of build-essential: build-essential depends on g++ (>= 4:4.3.1); however: Package g++ is not configured yet. dpkg: error processing build-essential (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: g++ build-essential E: Sub-process /usr/bin/dpkg returned an error code (1) I also try to install build-essential, and get same results. I also tried "sudo apt-get update" - didn't help. This is my apt-cache: erelsgl@ubuntu:/etc/apt$ apt-cache policy g++ build-essential g++: Installed: 4:4.4.3-1ubuntu1 Candidate: 4:4.4.3-1ubuntu1 Version table: *** 4:4.4.3-1ubuntu1 0 500 http://il.archive.ubuntu.com/ubuntu/ lucid/main Packages 100 /var/lib/dpkg/status build-essential: Installed: 11.4build1 Candidate: 11.4build1 Version table: *** 11.4build1 0 500 http://il.archive.ubuntu.com/ubuntu/ lucid/main Packages 100 /var/lib/dpkg/status erelsgl@ubuntu:/etc/apt$ I also tried this and got the same error: erelsgl@ubuntu:~/Ace/Files/corpus$ sudo dpkg --configure -a Setting up g++ (4:4.4.3-1ubuntu1) ... update-alternatives: error: alternative path /usr/bin/g++ doesn't exist. dpkg: error processing g++ (--configure): subprocess installed post-installation script returned error exit status 2 dpkg: dependency problems prevent configuration of build-essential: build-essential depends on g++ (>= 4:4.3.1); however: Package g++ is not configured yet. dpkg: error processing build-essential (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: g++ build-essential
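
    The postinst failure ("alternative path /usr/bin/g++ doesn't exist") suggests the versioned compiler binary that the g++ wrapper package points at has gone missing, even though the meta-package itself is marked as installed. A hedged recovery sketch for Lucid (package names are assumptions based on the 4:4.4.3 version shown above):

        sudo apt-get install --reinstall gcc-4.4 g++-4.4   # restore /usr/bin/g++-4.4 itself
        sudo apt-get install -f                            # let apt retry the 2 half-configured packages
        sudo dpkg --configure -a                           # re-run the failed postinst scripts
        which g++ && g++ --version                         # verify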

    Read the article

  • Apache/Django subdomains problem

    - by thomasgg
    Right now I have an Apache configuration which works only with the localhost domain (http://localhost/). Alias /media/ "/sciezka/do/instalacji/django/contrib/admin/media/" Alias /site_media/ "/sciezka/do/plikow/site_media/" <Location "/"> SetHandler python-program PythonHandler django.core.handlers.modpython SetEnv DJANGO_SETTINGS_MODULE settings PythonPath "['/thomas/django_projects/project'] + sys.path" PythonDebug On </Location> <Location "/site_media"> SetHandler none </Location> How can I make it work for subdomains like pl.localhost or uk.localhost? These subdomains should display the same page as the main domain (localhost). Second question: is it possible to change the default localhost address (http://localhost/) to (http://localhost.com/), (http://www.localhost.com/) or something else?
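
    A hedged sketch of the usual way to do this: wrap the existing directives in a name-based virtual host and list the subdomains as ServerAlias entries (the Alias/Location directives are the ones from the post; the VirtualHost wrapper is the assumed addition). For the second question, names like localhost.com only resolve to your own machine if you add them to your hosts file or local DNS.

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName  localhost
            ServerAlias pl.localhost uk.localhost *.localhost
            # ... the Alias and <Location> blocks shown above go here unchanged ...
        </VirtualHost>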

    Read the article

  • How can I install Satchmo?

    - by Jonathan Hayward
    I am trying to install Satchmo 0.9 on an Ubuntu 9.10 32-bit guest off of the instructions at http://bitbucket.org/chris1610/satchmo/downloads/Satchmo.pdf. I run into difficulties at 2.1.2: pip install -r http://bitbucket.org/chris1610/satchmo/raw/tip/scripts/requirements.txt pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo The first command fails because a compile error for how it's trying to build PIL. So I ran an "aptitude install python-imaging", locally copy the first line's requirements.text, and remove the line that's unsuccessfully trying to build PIL. The first line completes without reported error, as does the second. The next step tells me to change directory to the /path/to/new/store, and run: python clonesatchmo.py A little bit of trouble here; I am told that clonesatchmo.py will be in /bin by now, and it isn't there, but I put some Satchmo stuff under /usr/local, create a symlink in /bin, and run: python /bin/clonesatchmo.py This gives: jonathan@ubuntu:~/store$ python /bin/clonesatchmo.py Creating the Satchmo Application Traceback (most recent call last): File "/bin/clonesatchmo.py", line 108, in <module> create_satchmo_site(opts.site_name) File "/bin/clonesatchmo.py", line 47, in create_satchmo_site import satchmo_skeleton ImportError: No module named satchmo_skeleton A find after apparently checking out the repository reveals that there is no file with a name like satchmo*skeleton* on my system. I thought that bash might be prone to take part of the second pip invocation's URL as the beginning of a comment; I tried both: pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9\#egg=satchmo pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo Neither way of doing it seems to take care of the import error mentioned above. How can I get a Satchmo installation under Ubuntu, or at least enough of a Satchmo installation that I am able to start with a skeleton of a store and then flesh it out the way I want? Thanks, Jonathan

    Read the article

  • Nginx location regex is not matching

    - by shtuff.it
    The following has been working to cache css and js for me: location ~ "^(.*)\.(min.)?(css|js)$" { expires max; } results: $ curl -I http://mysite.com/test.css HTTP/1.1 200 OK Server: nginx Date: Thu, 16 Jan 2014 18:55:28 GMT Content-Type: text/css Content-Length: 19578 Last-Modified: Mon, 13 Jan 2014 18:54:53 GMT Connection: keep-alive Expires: Thu, 31 Dec 2037 23:55:55 GMT Cache-Control: max-age=315360000 X-Backend: stage01 Accept-Ranges: bytes I am trying to get versioning set up for my js / css using a 10-digit unix timestamp, and am having issues getting a regex match with the following (valid) regex. location ~ "^(.*)([\d]{10})\.(min\.)?(css|js)$" { expires max; } results: $ curl -I http://mysite.com/test_1234567890.css HTTP/1.1 200 OK Server: nginx Date: Thu, 16 Jan 2014 19:05:03 GMT Content-Type: text/css Content-Length: 19578 Last-Modified: Mon, 13 Jan 2014 18:54:53 GMT Connection: keep-alive X-Backend: stage01 Accept-Ranges: bytes
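
    Two hedged guesses: nginx uses the first regex location that matches, so if the original ^(.*)\.(min.)?(css|js)$ block (or any other regex location) still sits earlier in the config it wins and the new block is never consulted; and the second response has no Expires header at all, which usually means the request was handled by some other location entirely. One way to sidestep the ordering question is a single location that covers both plain and versioned names (a sketch, not a drop-in):

        location ~ "^.+?(_\d{10})?\.(min\.)?(css|js)$" {
            expires max;
        }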

    Read the article

  • Does the nginx “upstream” directive have a port setting?

    - by user55467
    moved from:http://stackoverflow.com/questions/3748517/does-nginx-upstream-has-a-port-setting I use upstream and proxy for load balancing. The directive proxy_pass http://upstream_name uses the default port, which is 80. However, if the upstream server does not listen on this port, then the request fails. How do I specify an alternate port? my configuration: http{ #... upstream myups{ server 192.168.1.100:6666; server 192.168.1.101:9999; } #.... server{ listen 81; #..... location ~ /myapp { proxy_pass http://myups:81/; } } nginx -t: [warn]: upstream "myups" may not have port 81 in /opt/nginx/conf/nginx.conf:78.
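
    For reference, the port cannot be attached to the upstream group name in proxy_pass (hence the "may not have port 81" warning); backend ports belong on the individual server lines inside the upstream block, which the configuration above already has. A hedged sketch of the intended shape:

        upstream myups {
            server 192.168.1.100:6666;   # backend ports are declared here, per server
            server 192.168.1.101:9999;
        }
        server {
            listen 81;
            location /myapp {
                proxy_pass http://myups;   # no ":81" on the group name
            }
        }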

    Read the article

  • DIY NAS - links for Instructions

    - by Kaushik Gopal
    Good folk of SU, I'm planning to build a NAS (Network-Attached Storage). I'm planning to do it cheap (read: old PC config + open-source software). I was looking for good DIY links. Before you shoot this down as a repost, I'm only looking for good links containing detailed instructions for setting up a NAS. I did a fair bit of searching and found these links (so please suggest others; while these links are great, they delve more into the hardware side, and I'm looking for more instructions on the software side). For the sake of the interwebs: Ubuntu: http://snarfquest.com/wiki/index.php/Setting_up_a_Home_NAS http://www.smallnetbuilder.com/content/view/27962/77/ http://jonpeck.blogspot.com/2006/11/how-to-configure-80-fileserver-in-45.html FreeNAS http://www.smallbusinesscomputing.com/webmaster/article.php/3719706 http://www.codeproject.com/KB/system/homemade_nas.aspx There was one at rubbervir.us that everyone points to, but apparently the site has gone down. A couple of other queries: Is printer/scanner sharing a possibility with NAS devices? Many talk of torrent support with NAS devices; could someone shed a little more light on this? Does this mean an auto-download of torrents through a feed into the NAS, or just support for storing torrent download files onto the NAS (I don't see the difference between the latter and a normal file transfer)?

    Read the article

  • Security Resources

    - by dr.pooter
    What types of sites do you visit, on a regular basis, to stay current on information security issues? Some examples from my list include: http://isc.sans.org/ http://www.kaspersky.com/viruswatch3 http://www.schneier.com/blog/ http://blog.fireeye.com/research/ As well as following the security heavyweights on twitter. I'm curious to hear what resources you recommend for daily monitoring. Anything specific to particular operating systems or other software. Are mailing lists still considered valuable. My goal would be to trim the cruft of all the things I'm currently subscribed to and focus on the essentials.

    Read the article

  • Apache > 2.2.22 rewrite rule not working?

    - by EBAH
    since yesterday I'm trying to figure out how to fix the following: running phpipam (http://www.phpipam.net/) with WAMP (Windows environment). The problem I am facing is related with RewriteRule functionality, so forget phpipam for a moment and concentrate on few lines of code. Here is the directory structure of my test website that emulate the first steps phpipam does (you can download http://goo.gl/ksvuGc): C:\wamp\www\rewrite-tst\ C:\wamp\www\rewrite-tst\.htaccess C:\wamp\www\rewrite-tst\index.php C:\wamp\www\rewrite-tst\install C:\wamp\www\rewrite-tst\install\index.php It seems that the following rewrite rule in .htaccess doesn't work: C:\wamp\www\rewrite-tst\.htaccess # install RewriteRule ^install$ install/ [R] RewriteRule ^install/$ index.php?page=install When opening C:\wamp\www\rewrite-tst\index.php the first step check the URL for "install" argument. Since the URL is: http://localhost/rewrite-tst no arguments are supplied and the browser is redirected to: header("Location: /rewrite-tst/install/") At this point the browser opens the page: C:\wamp\www\rewrite-tst\install\index.php >> http://localhost/rewrite-tst/install Apache, thanks to C:\wamp\www\rewrite-tst.htaccess should intercept this URL and redirect to: http://localhost/rewrite-tst/index.php?page=install Here are my tries: Win Apache 2.2.22: works Win Apache 2.4.4: KO Win Apache 2.4.6: KO In the attached zip file you can also find two traces from apache RewriteLog which I can't understand very well. Why Apache 2.4 doesn't work on Windows? Is it possible that there's a bug on Windows version of Apache (2.4.4 and 2.4.6) or am I wrong someway? Thanks for your help!!! Evan -- UPDATE 12 oct 2013 Now I'm really confused! Working on Linux, Kubuntu 13.04. Linux Apache 2.2.22: works Linux Apache 2.4.6: KO I guess there's something wrong in my rules at this point, or some change happened from Apache 2.2 to 2.4 ...
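
    A hedged guess at the 2.2 vs 2.4 difference: Apache 2.4 changed the default AllowOverride from All to None, so an .htaccess file that was silently honoured under 2.2 can be silently ignored under 2.4, and the old Order/Allow/Deny access control was replaced by Require. A sketch of the httpd.conf block to check (the WAMP document root path is taken from the post):

        <Directory "C:/wamp/www">
            Options Indexes FollowSymLinks
            AllowOverride All        # the 2.4 default of None disables .htaccess entirely
            Require all granted      # 2.4 replacement for Order/Allow from all
        </Directory>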

    Read the article

  • Can I alias all directory requests to a single file in nginx?

    - by user749618
    I'm trying to figure out how to take all requests made to a particular directory and return a json string without a redirect, in nginx. Example: curl -i http://example.com/api/call1/ Expected result: HTTP/1.1 200 OK Accept-Ranges: bytes Content-Type: application/json Date: Fri, 13 Apr 2012 23:48:21 GMT Last-Modified: Fri, 13 Apr 2012 22:58:56 GMT Server: nginx X-UA-Compatible: IE=Edge,chrome=1 Content-Length: 38 Connection: keep-alive {"logout": true} Here's what I have so far in my nginx conf: location ~ ^/api/(.*)$ { index /api_logout.json; alias /path/to/file/api_logout.json; types { } default_type "application/json; charset=utf-8"; break; } However, when I try to make the request the Content-Type doesn't stick: $ curl -i http://example.com/api/call1/ HTTP/1.1 200 OK Accept-Ranges: bytes Content-Type: application/octet-stream Date: Fri, 13 Apr 2012 23:48:21 GMT Last-Modified: Fri, 13 Apr 2012 22:58:56 GMT Server: nginx X-UA-Compatible: IE=Edge,chrome=1 Content-Length: 38 Connection: keep-alive {"logout": true} Is there a better way to do this? How can I get the application/json type to stick?
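
    A hedged alternative: the index directive triggers an internal redirect, after which this location (and its types/default_type) no longer applies, which is one plausible reason the Content-Type does not stick. Returning the body directly avoids both the file and the redirect (a sketch, assuming a static string is acceptable):

        location ~ ^/api/ {
            default_type "application/json; charset=utf-8";
            return 200 '{"logout": true}';
        }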

    Read the article

  • SharePoint 2010 Server Configuration Error -> "Cannot connect to database master"

    - by Chrish Riis
    I receive the following error when I try to configure SharePoint 2010 Server: "Cannot connect to the database master at SQL server at [computer.domain]. The database might not exist, or the current user does not have permission to connect to it." I run the following setup: Windows Server 2008 R2 Standard with SP1 and all the updates SQL Server 2008 R2 with SP1 SharePoint Server 2010 with SP1 Everything is installed on the same server (it's a test server) I have tried the following: Rebooting the server Checking the install account's DB rights (dbcreator, securityadmin - I even let it have sysadmin) Opened up the firewall on port 1433 and 1434 Uninstalled both SQL and SP, then reinstalled them both Enabled all client protocols in SQL Server Configuration Made sure I used the correct account for installing SharePoint (local admin) Useful links: TCP/IP settings – http://blog.vanmeeuwen-online.nl/2010/10/cannot-connect-to-database-master-at.html http://ybbest.wordpress.com/2011/04/22/cannot-connect-to-database-master-at-sql-server-at-sql2008r2/ Wrong slash - http://yakimadev.com/2010/11/cannot-connect-to-database-master-at-sql-server-at-serverdbname-error-during-sharepoint-2010-products-configuration-wizard-and-installation/ Port error - http://www.knowsharepoint.com/2011/08/error-connecting-to-database-server.html
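
    A hedged sanity check from the SharePoint box itself, to separate a connectivity or permission problem from a SharePoint configuration problem (this assumes a default SQL instance reachable under that name):

        sqlcmd -S computer.domain -d master -E -Q "SELECT SUSER_SNAME(), IS_SRVROLEMEMBER('dbcreator'), IS_SRVROLEMEMBER('securityadmin')"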

    Read the article

  • IIS 7 rewriting subdomain to point at a specific port

    - by Tommy Jakobsen
    Having installed Team Foundation Server 2010 on Windows Server 2008, I need an easy URL for our developers to access their repositories. The default URL for the TFS repositories is http://localhost:8080/tfs Now I want the subdomain domain tfs.server.domain.com to point at http://localhost:8080/tfs. And when you access tfs.server.domain.com/repos_name it should redirect to http://localhost:8080/tfs/repos_name. How can I do this in IIS7? I already tried using the following rule, but it does not work. I get a 404. <rewrite> <globalRules> <rule name="TFS" stopProcessing="true"> <match url="^(?:tfs/)?(.*)" /> <conditions> <add input="{HTTP_HOST}" pattern="^tfs.server.domain.com$" /> </conditions> <action type="Rewrite" url="http://localhost:8080/tfs/{R:1}" /> </rule> </globalRules> </rewrite> EDIT I actually got this working by adding a binding for the site on port 80 with host name tfs.server.domain.com. But using tfs.server.domain.com, I can't authenticate using Windows Authentication. Is there something that I need to configure for Windows Authentication? You can see a trace here: http://pastebin.com/k0QrnL0m
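
    On the Windows Authentication failure, one hedged suspicion: Kerberos needs a service principal name registered for the friendly host name, otherwise authentication against tfs.server.domain.com can fail where the machine-name URL works. A sketch (the account is a placeholder; if the application pool runs as Network Service, the SPN goes on the machine account instead):

        setspn -S HTTP/tfs.server.domain.com DOMAIN\tfs-service-account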

    Read the article

  • Making Apache 2.2 on SuSE Linux case-insensitive. Which is the better approach?

    - by pingu
    Problem: http://<server>/home/APPLE.html http://<server>/hoME/APPLE.html http://<server>/HOME/aPPLE.html http://<server>/hoME/aPPLE.html All the above should pick this http://<server>/home/apple.html I implemented 2 solutions and both are working fine. Not sure which one is better(performance). Please Suggest..Also Directive - CheckCaseOnly on never worked Option 1: a)Enable:mod_speling In /etc/sysconfig/apache2 - APACHE_MODULES="rewrite speling apparmor......" b) Add directive - CheckSpelling on (Either in .htaccess or add in httpd.conf) In httpd.conf <Directory srv/www/htdcos/home> Order allow,deny CheckSpelling on Allow from all </Directory> or In .htaccess inside /srv/www/htdcos/home(your content folder) CheckSpelling on Option 2: a) Enable: mod_rewrite b) Write the rule vhost(you can not write RewriteMap in directory. check apache docs ) <VirtualHost _default_:80> <IfModule mod_rewrite.c> Options +FollowSymLinks RewriteEngine on RewriteMap lc int:tolower RewriteCond %{REQUEST_URI} [A-Z] RewriteRule (.*) ${lc:$1} [R=301,L] </IfModule> </VirtualHost> <VirtualHost _default_:80> <IfModule mod_rewrite.c> Options +FollowSymLinks RewriteEngine on RewriteMap lc int:tolower RewriteCond %{REQUEST_URI} [A-Z] RewriteRule (.*) ${lc:$1} [R=301,L] </IfModule> </VirtualHost> This changes the entire request uri into lowercase. I want this to happen for specific folder, but RewriteMap doesn't work in .htaccess. I am novice in regex and Rewrite. I need a RewriteCond which checks only /css//. can any body help
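
    If the goal is to lowercase only one folder while keeping option 2 (RewriteMap does indeed have to live in the server or virtual-host config, not in .htaccess), a hedged sketch that simply puts a path condition in front of the existing rule; the /home/ prefix is an assumption, swap in whichever folder is meant:

        RewriteEngine On
        RewriteMap lc int:tolower
        RewriteCond %{REQUEST_URI} ^/home/ [NC]
        RewriteCond %{REQUEST_URI} [A-Z]
        RewriteRule (.*) ${lc:$1} [R=301,L]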

    Read the article

  • Kerberos authentication not working for one single domain

    - by Buddy Casino
    We have a strange problem regarding Kerberos authentication with Apache mod_auth_kerb. We use a very simple krb5.conf, where only a single (main) AD server is configured. There are many domains in the forest, and it seems that SSO is working for most of them, except one. I don't know what is special about that domain, the error message that I see in the Apache logs is "Server not found in Kerberos database": [Wed Aug 31 14:56:02 2011] [debug] src/mod_auth_kerb.c(1025): [client xx.xxx.xxx.xxx] Using HTTP/[email protected] as server principal for password verification [Wed Aug 31 14:56:02 2011] [debug] src/mod_auth_kerb.c(714): [client xx.xxx.xxx.xxx] Trying to get TGT for user [email protected] [Wed Aug 31 14:56:02 2011] [debug] src/mod_auth_kerb.c(625): [client xx.xxx.xxx.xxx] Trying to verify authenticity of KDC using principal HTTP/[email protected] [Wed Aug 31 14:56:02 2011] [debug] src/mod_auth_kerb.c(640): [client xx.xxx.xxx.xxx] krb5_get_credentials() failed when verifying KDC [Wed Aug 31 14:56:02 2011] [error] [client xx.xxx.xxx.xxx] failed to verify krb5 credentials: Server not found in Kerberos database [Wed Aug 31 14:56:02 2011] [debug] src/mod_auth_kerb.c(1110): [client xx.xxx.xxx.xxx] kerb_authenticate_user_krb5pwd ret=401 user=(NULL) authtype=(NULL) When I try to kinit that user on the machine on which Apache is running, it works. I also checked that DNS lookups work, including reverse lookup. Who can tell me whats going?
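
    One hedged workaround that is frequently mentioned for exactly this message from mod_auth_kerb: the step that fails is the optional verify-against-the-local-KDC pass, which apparently cannot obtain a ticket for the web server's principal when the user comes from that one domain. It can be switched off, with the usual caveat that this drops the KDC-spoofing protection it provides:

        # inside the existing <Location>/<Directory> block that already configures mod_auth_kerb
        KrbVerifyKDC off   # skips the krb5_get_credentials() verification pass that is failing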

    Read the article

  • Firefox does not load certificate chain

    - by TimWolla
    I'm running lighttpd/1.4.28 (ssl) on Debian Squeeze. I just created a http://startssl.com certificate. It runs fine in all of my browsers (Firefox, Chrome, Opera), but my users are reporting certificate errors in Firefox. I have already nailed it down to a failure to load the certificate chain: Certificate in my Firefox: http://i.stack.imgur.com/moR5x.png Certificate in others' Firefox: http://i.stack.imgur.com/ZVoIu.png (Note the missing StartCom certificates here) I followed this tutorial for embedding the certificate in my lighttpd: https://forum.startcom.org/viewtopic.php?t=719 The relevant parts of my lighttpd.conf look like this: $SERVER["socket"] == ":443" { ssl.engine = "enable" ssl.ca-file = "/etc/lighttpd/certs/ca-bundle.pem" ssl.pemfile = "/etc/lighttpd/certs/www.bisaboard.crt" } ca-bundle.pem was created like this: cat ca.pem sub.class1.server.ca.pem > ca-bundle.pem I grabbed the relevant files from here: http://www.startssl.com/certs/ www.bisaboard.crt was created like this: cat certificate.pem ssl.key > www.bisaboard.crt Where certificate.pem is my StartSSL Class 1 certificate and ssl.key is my server's private key. Do you have any idea why the second Firefox does not correctly load the certificate chain?
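
    A hedged observation: ssl.ca-file is what actually delivers the intermediate certificate to clients, and browsers that already have (or can fetch) the StartCom intermediates will mask a server-side omission, which would explain why only some Firefox profiles complain. A sketch with only the intermediate in the ca-file (the file names match the post; splitting out an intermediate-only file is the assumed change):

        $SERVER["socket"] == ":443" {
            ssl.engine  = "enable"
            ssl.pemfile = "/etc/lighttpd/certs/www.bisaboard.crt"          # server cert + private key
            ssl.ca-file = "/etc/lighttpd/certs/sub.class1.server.ca.pem"   # intermediate(s) only, no root needed
        }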

    Read the article

  • KVM-admin tools required

    - by Dr. Death
    I require the kvm-admin tools that are supposed to be on the KVM home page http://www.linux-kvm.org/page/Kvmtools#Download but I am unable to find any reference to them. Can anybody from the forum tell me where I can get them? If somebody has a copy of them, please share a link so that I can get them. I specifically require the tools below: kvm-admin boot domain_name kvm-admin status domain_name kvm-admin status all kvm-admin monitor domain_name kvm-admin show domain_name

    Read the article

  • Disable ProxyPass rules within a virtual host on apache 2

    - by chinto
    I have global ProxyPass rules in httpd.conf: ProxyPass /test/css http://myserver:7788/test/css ProxyPassReverse /test/css http://myserver:7788/test/css and I have a virtual host: Listen localhost:7788 NameVirtualHost localhost:7788 <VirtualHost localhost:7788> Alias /test/css/ "C:/jboss/server/default/deploy/test.ear/test-web-app.war/css/" </VirtualHost> How can I disable all global ProxyPass rules from applying inside this virtual host? NoProxy doesn't seem to work. (The reason I would like to do this is that I have the global rules below, which create a 502 proxy loop if applied within this virtual host: #pass all requests to application server ProxyPass /test http://localhost:8080/test ProxyPassReverse /test http://localhost:8080/test ) What I'm trying to do is serve all static content (like CSS) using Apache, while still proxying all other requests to the application server.
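
    One commonly used alternative, offered as a hedged sketch: drop the extra 7788 virtual host altogether and let the main server serve the static files via an Alias, with a ProxyPass exclusion placed before the general rule (ProxyPass directives are matched in order, most specific first):

        Alias /test/css "C:/jboss/server/default/deploy/test.ear/test-web-app.war/css"
        ProxyPass        /test/css !
        ProxyPass        /test http://localhost:8080/test
        ProxyPassReverse /test http://localhost:8080/test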

    Read the article

  • MySQL: Pacemaker cannot start the failed master as a new slave?

    - by quanta
    I'm going to setup failover for MySQL replication (1 master and 1 slave) follow this guide: https://github.com/jayjanssen/Percona-Pacemaker-Resource-Agents/blob/master/doc/PRM-setup-guide.rst Here're the output of crm configure show: node serving-6192 \ attributes p_mysql_mysql_master_IP="192.168.6.192" node svr184R-638.localdomain \ attributes p_mysql_mysql_master_IP="192.168.6.38" primitive p_mysql ocf:percona:mysql \ params config="/etc/my.cnf" pid="/var/run/mysqld/mysqld.pid" socket="/var/lib/mysql/mysql.sock" replication_user="repl" replication_passwd="x" test_user="test_user" test_passwd="x" \ op monitor interval="5s" role="Master" OCF_CHECK_LEVEL="1" \ op monitor interval="2s" role="Slave" timeout="30s" OCF_CHECK_LEVEL="1" \ op start interval="0" timeout="120s" \ op stop interval="0" timeout="120s" primitive writer_vip ocf:heartbeat:IPaddr2 \ params ip="192.168.6.8" cidr_netmask="32" \ op monitor interval="10s" \ meta is-managed="true" ms ms_MySQL p_mysql \ meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" globally-unique="false" target-role="Master" is-managed="true" colocation writer_vip_on_master inf: writer_vip ms_MySQL:Master order ms_MySQL_promote_before_vip inf: ms_MySQL:promote writer_vip:start property $id="cib-bootstrap-options" \ dc-version="1.0.12-unknown" \ cluster-infrastructure="openais" \ expected-quorum-votes="2" \ no-quorum-policy="ignore" \ stonith-enabled="false" \ last-lrm-refresh="1341801689" property $id="mysql_replication" \ p_mysql_REPL_INFO="192.168.6.192|mysql-bin.000006|338" crm_mon: Last updated: Mon Jul 9 10:30:01 2012 Stack: openais Current DC: serving-6192 - partition with quorum Version: 1.0.12-unknown 2 Nodes configured, 2 expected votes 2 Resources configured. ============ Online: [ serving-6192 svr184R-638.localdomain ] Master/Slave Set: ms_MySQL Masters: [ serving-6192 ] Slaves: [ svr184R-638.localdomain ] writer_vip (ocf::heartbeat:IPaddr2): Started serving-6192 Editing /etc/my.cnf on the serving-6192 of wrong syntax to test failover and it's working fine: svr184R-638.localdomain being promoted to become the master writer_vip switch to svr184R-638.localdomain Current state: Last updated: Mon Jul 9 10:35:57 2012 Stack: openais Current DC: serving-6192 - partition with quorum Version: 1.0.12-unknown 2 Nodes configured, 2 expected votes 2 Resources configured. 
============ Online: [ serving-6192 svr184R-638.localdomain ] Master/Slave Set: ms_MySQL Masters: [ svr184R-638.localdomain ] Stopped: [ p_mysql:0 ] writer_vip (ocf::heartbeat:IPaddr2): Started svr184R-638.localdomain Failed actions: p_mysql:0_monitor_5000 (node=serving-6192, call=15, rc=7, status=complete): not running p_mysql:0_demote_0 (node=serving-6192, call=22, rc=7, status=complete): not running p_mysql:0_start_0 (node=serving-6192, call=26, rc=-2, status=Timed Out): unknown exec error Remove the wrong syntax from /etc/my.cnf on serving-6192, and restart corosync, what I would like to see is serving-6192 was started as a new slave but it doesn't: Failed actions: p_mysql:0_start_0 (node=serving-6192, call=4, rc=1, status=complete): unknown error Here're snippet of the logs which I'm suspecting: Jul 09 10:46:32 serving-6192 lrmd: [7321]: info: rsc:p_mysql:0:4: start Jul 09 10:46:32 serving-6192 lrmd: [7321]: info: RA output: (p_mysql:0:start:stderr) Error performing operation: The object/attribute does not exist Jul 09 10:46:32 serving-6192 crm_attribute: [7420]: info: Invoked: /usr/sbin/crm_attribute -N serving-6192 -l reboot --name readable -v 0 The full logs: http://fpaste.org/AyOZ/ The strange thing is I can starting it manually: export OCF_ROOT=/usr/lib/ocf export OCF_RESKEY_config="/etc/my.cnf" export OCF_RESKEY_pid="/var/run/mysqld/mysqld.pid" export OCF_RESKEY_socket="/var/lib/mysql/mysql.sock" export OCF_RESKEY_replication_user="repl" export OCF_RESKEY_replication_passwd="x" export OCF_RESKEY_test_user="test_user" export OCF_RESKEY_test_passwd="x" sh -x /usr/lib/ocf/resource.d/percona/mysql start: http://fpaste.org/RVGh/ Did I make something wrong?
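
    A hedged note: after a failed start Pacemaker keeps the failure record for that node and will not retry the resource there until it is cleared (or a failure-timeout expires), so fixing /etc/my.cnf and restarting corosync is not enough on its own. A sketch:

        crm resource cleanup p_mysql   # clear the failed actions / failcount recorded for serving-6192
        crm_mon -1                     # then check whether serving-6192 comes back as a slave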

    Read the article

  • Recompiling Nginx 1.4.3 with "--with-http_gzip_static_module" error

    - by Elijah Paul
    I'm trying to enable the 'ngx_http_gzip_static_module' module in Nginx 1.4.3 by adding the --with-http_gzip_static_module to my ./configure configuration. But I recieve the following error when i try to recompile (make): # make make -f objs/Makefile make[1]: Entering directory `/tmp/nginx-1.4.3' make[1]: *** No rule to make target `src/os/unix/ngx_gcc_atomic_x86.h', needed by `objs/src/core/nginx.o'. Stop. make[1]: Leaving directory `/tmp/nginx-1.4.3' make: *** [build] Error 2 My current config (CentOS 6.4): # nginx -V nginx version: nginx/1.4.3 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) TLS SNI support enabled configure arguments: --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx/error.log --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --http-log-path=/var/log/nginx/access.log --with-http_ssl_module --prefix=/usr --add-module=./nginx-sticky-module-1.1 --add-module=./headers-more-nginx-module-0.23 i was under the impression that this was a module that just need to be 'enabled' as opposed to 'added'. What am I missing here?

    Read the article
