Search Results

Search found 21350 results on 854 pages for 'url parsing'.


  • How to grant read/write to specific user in any existent or future subdirectory of a given directory? [migrated]

    - by Samuel Rossille
    I'm a complete newbie in system administration and I'm doing this as a hobby. I host my own git repository on a VPS. Let's say my user is john. I'm using the ssh protocol to access my git repository, so my URL is something like ssh://[email protected]/path/to/git/myrepo/. Root is the owner of everything under /path/to/git. I'm attempting to give john read/write access to everything under /path/to/git/myrepo. I've tried both chmod and setfacl to control access, but both fail the same way: they apply rights recursively (with the right options) to all the currently existing subdirectories of /path/to/git/myrepo, but as soon as a new directory is created, my user cannot write in it. I know there are hooks in git that would let me reapply the rights after each commit, but I'm starting to think I'm going about this the wrong way, because it seems too complicated for a very basic purpose. Q: How should I set up my permissions to give john read/write access to anything under /path/to/git/myrepo and make it resilient to changes in the tree structure? Q2: If I should take a step back and change the general approach, please tell me.
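    One standard way to make permissions survive newly created directories is a default ACL. A minimal sketch, assuming the paths and username from the question and a filesystem mounted with ACL support (setfacl ships with the acl package):

        # grant john read/write/traverse on everything that exists now
        setfacl -R -m u:john:rwX /path/to/git/myrepo
        # make that ACL the default, so files and directories created later inherit it
        setfacl -R -d -m u:john:rwX /path/to/git/myrepo

    The capital X grants execute only on directories (and on files that are already executable), which keeps ordinary files non-executable.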

    Read the article

  • both ssl and non-ssl on single port

    - by Zulakis
    I would like to make my apache2 webserver serve both http and https on the same port. With the different methods I tried, it was either not working on http or not on https. How can I do this?
    Update: If I enable SSL and then visit the site with http, I get a page like this:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
        Instead use the HTTPS scheme to access this URL, please.<br />
        <blockquote>Hint: <a href="https://server/"><b>https://server/</b></a></blockquote></p>
        <hr>
        <address>Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny16 with Suhosin-Patch mod_ssl/2.2.9 OpenSSL/0.9.8g Server at server Port 443</address>
        </body></html>

    Because of this, it seems very much possible to have both http and https on the same port. A first step would be to change this default page so that it presents a 301 Moved header.
    Update 2: According to this, it is possible. Now the question is just how to configure apache to do it.
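    A frequently suggested workaround for exactly this 400 page is to override Apache's 400 error document on the SSL virtual host, so plain-HTTP requests hitting the SSL port get bounced to the https:// site. A sketch, not a tested configuration:

        <VirtualHost *:443>
            SSLEngine on
            # ... certificate directives ...
            # Plain-HTTP requests on this port trigger a 400; answer them with a
            # redirect to the https:// site instead of the default error page.
            ErrorDocument 400 https://server/
        </VirtualHost>

    The redirect target is fixed, so the originally requested path is generally not preserved.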

    Read the article

  • Apache Reverse proxy Http to https

    - by Coppes
    I have a website which is fully running on HTTPS. For some reason I was given the task of finding a way to convert a URL, for example http://www.domain.com/a/e-nc/youless, to the https version of it, without losing the HTTP POST data (the POST values that come with the request). So I thought (not even sure about this), let's try to set up a reverse proxy in Apache and see how that works. Anyway, after a lot of struggling I came to the point of asking it here. So, to be specific, my goal is: convert http://www.domain.com/a/e-nc/youless to https://www.domain.com/a/e-nc/youless without losing the POST data. What I have tried until now: I created a file called proxiedhosts in my apache2/sites-enabled folder with the following contents:

        SSLProxyEngine On
        SSLProxyCACertificateFile /etc/apache2/ssl/certificate****.pem
        ProxyRequests Off
        ProxyPreserveHost On
        <Proxy *>
            Order allow,deny
            Allow from all
        </Proxy>
        ProxyPass /a/e-nc/youless/ https://www.domain.com/a/e-nc/youless/
        ProxyPassReverse /a/e-nc/youless/ https://www.domain.com/a/e-nc/youless/

    Thanks in advance!
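    For reference, a sketch of how those directives are usually scoped inside the plain-HTTP virtual host (hostnames taken from the question, everything else an assumption). mod_proxy forwards the request body, so POST data is carried across; mod_proxy, mod_proxy_http and mod_ssl all need to be enabled:

        <VirtualHost *:80>
            ServerName www.domain.com
            SSLProxyEngine On
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass        /a/e-nc/youless/ https://www.domain.com/a/e-nc/youless/
            ProxyPassReverse /a/e-nc/youless/ https://www.domain.com/a/e-nc/youless/
        </VirtualHost>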

    Read the article

  • Insufficient channel capacity of 1GBit

    - by Roman S
    There is a Caching Server (Varnish): it fetches data from Amazon S3 on request, stores it for some time and serves it to the client. We have run into the problem of insufficient channel capacity: 1 Gbit. Peak load within 4 hours completely saturates the channel. Server performance is sufficient for now. Approximately 4.5 TB of data are transmitted per day, and more than 100 TB accumulate per month. The first thought that comes to mind is simply to add one more 1 Gbit port and sleep peacefully until 2 Gbit is no longer enough (which may happen quite quickly) or until one server can no longer handle the load. Then we would just add new Caching Servers. But then we need a Load Balancer that always sends requests for the same URL to the same server (to avoid holding multiple copies of the same cached objects). Here are the questions: Does the Balancer need bandwidth equal to the sum of the bandwidth of all Caching Servers? What should we do if the Balancer runs out of ports? Should we add more Balancers, or solve the problem with round-robin DNS? What are the standard approaches to such problems? Can anyone recommend hosting companies that can solve this problem? We are interested in the American and European markets.
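    On the "same URL, same server" requirement: URL-hash balancing is the usual technique. A sketch of what that looks like in nginx (hostnames and ports are made up for illustration; Varnish, HAProxy and most hardware balancers offer an equivalent):

        upstream varnish_pool {
            # hash on the request URI so each object lives on exactly one cache node;
            # "consistent" limits reshuffling when a node is added or removed
            hash $request_uri consistent;
            server cache1.example.com:6081;
            server cache2.example.com:6081;
        }

        server {
            listen 80;
            location / {
                proxy_pass http://varnish_pool;
            }
        }

    Note that a balancer configured this way proxies the full response, so it needs roughly as much bandwidth as the caches behind it; balancing at the DNS level avoids that, at the cost of coarser control.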

    Read the article

  • Can I convert an existing Firefox installation to ESR without a re-install?

    - by Iszi
    It took some jumping through hoops (including a mailing list subscription that I apparently didn't need) but I finally found where to download the Firefox ESR. This is great for fresh installs, but I was wondering if there's a way to simply convert existing installations to the ESR configuration without having to do a full install. As I understand it, the only difference between ESR and regular Firefox will be how they receive updates. After the new standard version of Firefox comes out, ESR releases will only receive critical security updates and bug fixes for the remainder of their support life. Newer versions of Firefox's standard build will have all the latest and greatest features, while ESR releases are meant to provide stability for environments that can't be expected to keep up with a new full version number change as often as Mozilla does them. In regular Firefox, the About screen shows that I am using the "release" update channel. Is switching to ESR really just a matter of switching the update channel? I presume this can be done in about:config by changing app.update.channel and probably also app.update.url. However, I don't know what these values should be for ESR or if anything else should be tweaked. So, is it possible to switch to ESR without a reinstall and, if so, how? (Note: While this question was written originally for Firefox 10, I expect any answers will apply to future ESR versions as well.)

    Read the article

  • Trying to run a codeigniter app on custom php

    - by hamstar
    I have a CodeIgniter app that I deployed to a server with PHP 5.2, while my dev box has 5.3, and some stuff doesn't work anymore. I didn't want to upgrade PHP and risk the other app on the server having issues. So I compiled a custom PHP and added the following to a single .conf file, /etc/httpd/conf.d/zcid.conf, alongside all the other conf files:

        <VirtualHost *:80>
            DocumentRoot /var/www/cid/app
            ServerName sub.example.co.nz
        </VirtualHost>

        <Directory "/var/www/cid/app">
            authtype Basic
            authname "oh dear how did this get here i am no good with computer"
            authuserfile /path/to/auth
            require valid-user

            RewriteEngine on
            RewriteCond $1 !^(index\.php|robots\.txt|createEvent\.php|/cgi-bin)
            RewriteRule ^(.*)$ /index.php/$1 [L]

            AddHandler custom-php .php
            Action custom-php /cgi-bin/php53.cgi
        </Directory>

    In /var/www/cid/app I have the cgi-bin folder and the php53.cgi binary that I copied from /usr/local/php53/bin/php-cgi. But now when I navigate to the subdomain it says:

        The requested URL /cgi-bin/php53.cgi/index.php/ was not found on this server.

    And if I try to browse to /cgi-bin it says (which is what it's supposed to say?):

        You don't have permission to access /cgi-bin/ on this server.

    Quite confused now. Anyone know what to do here? Thanks :)
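    The 404 suggests Apache is resolving the /cgi-bin/php53.cgi URL against its default cgi-bin rather than the one inside the app. A sketch of the pieces usually added for this kind of Action/CGI wrapper setup, assuming the wrapper really lives in /var/www/cid/app/cgi-bin (the directory path is taken from the question; the rest is an assumption):

        # map the /cgi-bin/ URL used by the Action directive onto the app's own cgi-bin
        ScriptAlias /cgi-bin/ /var/www/cid/app/cgi-bin/

        <Directory "/var/www/cid/app/cgi-bin">
            Options +ExecCGI
            Order allow,deny
            Allow from all
        </Directory>

    The wrapper also needs to be executable by the Apache user (chmod 755 php53.cgi).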

    Read the article

  • Unable to browse to apache service, Service is running

    - by Jeff
    Summary: I have a very peculiar problem. I am not able to open the "It works!" page after installing a fresh server with Apache. I am able to ssh to the box (from outside the network). Apache seems to be running on my CentOS 6.4 x86_64 box just fine. Nothing useful in /var/log/httpd/*. What am I missing?
    The setup: I am outside the network right now. The "server" is a VM on my home computer running in bridged mode. Public IP: A.B.C.D, host: 192.168.1.5, VM: 192.168.1.8. I have a Verizon FiOS router that is forwarding ports 22, 80 and 8888 to the VM. I am able to ssh over port 22, but I am not able to browse to the public URL over port 80. So A.B.C.D:22 is working, but http://A.B.C.D:80 is not.
    What I've tried: nmap to see if it is listening:

        nmap -sT -O localhost
        Starting Nmap 5.51 ( http://nmap.org ) at 2013-10-25 11:10 EDT
        Nmap scan report for localhost (127.0.0.1)
        Host is up (0.000040s latency).
        Other addresses for localhost (not scanned): 127.0.0.1
        Not shown: 996 closed ports
        PORT     STATE SERVICE
        22/tcp   open  ssh
        25/tcp   open  smtp
        80/tcp   open  http
        3306/tcp open  mysql

    I also tried going to it locally (with lynx) and it does work. So, is the problem in my ports?
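    Since the site answers locally but not from outside, the usual next suspect on CentOS 6 is iptables (the stock ruleset accepts ssh but not http). A sketch of how to check and open it, assuming the default iptables service is in use:

        # is anything rejecting port 80?
        iptables -L INPUT -n --line-numbers
        # if so, accept http ahead of the reject rule (adjust the position to your ruleset)
        iptables -I INPUT -p tcp --dport 80 -j ACCEPT
        service iptables save

    If that is clean, testing from the LAN host (http://192.168.1.8) helps separate a firewall problem from a router port-forwarding problem.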

    Read the article

  • Apache HTTPd FollowSymLinks path permission

    - by apast
    Hi, I'm configuring my development environment with a basic Apache HTTPd configuration, but to avoid a common problem I want to map my test URL to my development folder. I'm using Ubuntu. My development path is located under the following example path: /home/myusername/myworkspace/hptargetpath/src/pages. Consider the following symbolic link mapping:

        # ls -l /opt/share/www/mydevelopmentrootpath
        lrwxrwxrwx 1 root root 77 2011-02-13 18:53 /opt/share/www/mydevelopmentrootpath -> /home/myusername/myworkspace/hptargetpath/src/pages

    With this folder mapping, I configured Apache HTTPd as follows:

        <VirtualHost *:*>
            ServerName local.server.com
            ServerAdmin [email protected]
            DirectoryIndex index.html
            DocumentRoot /opt/share/www/mydevelopmentrootpath
            <Directory /opt/share/www/mydevelopmentrootpath/ >
                Options +Indexes
                Options +FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    But I'm receiving a 403 Forbidden error when I try to access index.html at http://local.server.com/index.html:

        403 Forbidden
        You don't have permission to access /index.html on this server.

    In the httpd error log I found the following message:

        [Sun Feb 13 19:34:47 2011] [error] [client 127.0.1.1] Symbolic link not allowed or link target not accessible: /opt/share/www/mydevelopmentrootpath

    I think this problem is caused by some path permission: not a permission on the target directory itself, but on some intermediate directory in the path. There is a directive in httpd core, Options SymLinksIfOwnerMatch: "The server will only follow symbolic links for which the target file or directory is owned by the same user id as the link." But I tested it without effect. Can somebody help me? I think this is a trivial configuration for a development environment. Best regards, And Past
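    A quick sketch of how this is usually tracked down: the Apache user (www-data on Ubuntu) needs execute (traverse) permission on every directory along the link target's path, and home directories often block that.

        # show the permissions of every component of the target path
        namei -m /home/myusername/myworkspace/hptargetpath/src/pages
        # a common fix is to allow traversal of the home directory (consider the security implications)
        chmod o+x /home/myusername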

    Read the article

  • Nginx wont send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but nginx is happy sending a GET to the fastcgi backend (Mercurial hgwebdir in this case), yet simply falls back to the filesystem if the request is a POST. Relevant parts of nginx.conf:

        location / {
            root /var/www/htdocs/;
            index index.html;
            autoindex on;
        }

        location /hg {
            fastcgi_pass unix:/var/run/hg-fastcgi.socket;
            include fastcgi_params;

            if ($request_uri ~ ^/hg([^?#]*)) {
                set $rewritten_uri $1;
            }

            limit_except GET {
                allow all;
                deny all;
                auth_basic "hg secured repos";
                auth_basic_user_file /var/trac.htpasswd;
            }

            fastcgi_param SCRIPT_NAME "/hg";
            fastcgi_param PATH_INFO $rewritten_uri;

            # for authentication
            fastcgi_param AUTH_USER $remote_user;
            fastcgi_param REMOTE_USER $remote_user;

            #fastcgi_pass_header Authorization;
            #fastcgi_intercept_errors on;
        }

    GETs work fine, but a POST delivers this error to the error_log:

        2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory), client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com"

    What could possibly be the issue? I'm trying to allow read-only access to the pages via GET, but require authorization when using hg push to the same URL, which sends a POST request.
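    For comparison, the pattern usually shown for "anonymous pull, authenticated push" keeps only the auth directives inside limit_except; everything inside the block applies to the methods other than those listed, and in the config above "allow all" comes first, so "deny all" is never reached. A sketch, not a verified fix for the open() error above:

        limit_except GET HEAD {
            auth_basic "hg secured repos";
            auth_basic_user_file /var/trac.htpasswd;
        }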

    Read the article

  • Site hanging in iis7 - how do I troubleshoot?

    - by Chris Foot
    I am currently having a problem with a Windows 2008 server running IIS 7. The server runs several sites but only seems to have the issue with one particular site. Every so often, the whole server slows to a crawl with nearly all requests timing out! Invariably, when we log in to take a look there is always an IIS process using around 90% CPU. Looking into the worker processes in IIS, there are usually one or two requests that have been running for a long time. They are always in the ExecuteRequestHandler state with ManagedPipeline as the module name, and the current ones I'm looking at have been running for 7686248 (what units is this in? it doesn't say). It is also not always the same page; in fact, we have seen at least three different pages listed under URL when this has happened. It seems that the only way to bring the server back to life is to kill the 90% process! The site is running under .NET 4.0 and the code on it is very similar to other sites on the server which do not have the problem! How do I start troubleshooting this?
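    One low-impact way to capture what those stuck requests are doing at the moment of a hang is appcmd (its elapsed filter is specified in milliseconds):

        %windir%\system32\inetsrv\appcmd list requests /elapsed:30000

    That lists every request that has been executing for more than 30 seconds, including its URL and current module; pairing it with a memory dump of the 90% w3wp.exe (Task Manager or the Debug Diagnostics tool) usually narrows the code path down.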

    Read the article

  • Use of backreferences in fail2ban filters possible?

    - by Izzy
    From time to time, I see collections of suspect "File not found" errors in my Apache logs, basically following the pattern:

        File does not exist: /var/www/file, referer: http://my.server.com/file

    In human terms: the file was not found, even though it names itself as the referer. A clear hacking attempt, as that's hardly possible (and the REQUEST_URIs often enough suggest the same). In my eyes, a clear case for fail2ban - if I could get backreferences to work here:

        failregex = ^%(_apache_error_client)s File does not exist: /var/www(.+), referer: http://.+\1$

    (Justin Case: the above examples assume the DIRECTORY_ROOT of that webserver is /var/www.) I googled for hours and searched the fail2ban wiki up and down, but nowhere could I find a statement concerning backreferences in its filters. Are they not supported, or did I do it the wrong way? Any hints on how to make it work (apart from "dirty hacks" like first sending the request to another fake URL using mod_rewrite and then catching on that - if anyone is interested, I can elaborate on that approach in an answer - or doing something similar using mod_security)? As an entire log line was requested:

        [Fri Nov 08 14:57:28 2013] [error] [client 50.67.234.213] File does not exist: /var/www/text/files.htm++++++++++++++++++++++++++Result:+using+proxy+27.34.142.47:9090;+no+post+sending+forms+are+found;, referer: http://www.myserver.com/text/files.htm++++++++++++++++++++++++++Result:+using+proxy+27.34.142.47:9090;+no+post+sending+forms+are+found;

    (Sorry, logs were just switched, so this long candidate was the only one left at the moment; minor adjustments were made for privacy reasons.)
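    Since failregex entries are ordinary Python regular expressions evaluated line by line, the quickest way to find out whether a given backreference behaves as hoped is fail2ban-regex, which tests an expression against a real log file without touching the running jails. A sketch (the pattern is the one from the question minus the %(_apache_error_client)s prefix, whose own capture groups would otherwise shift the \1 numbering):

        fail2ban-regex /var/log/apache2/error.log \
            'File does not exist: /var/www(.+), referer: http://[^ ]+\1$'

    The tool prints how many lines matched and which ones, which makes it easy to iterate on the group numbering.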

    Read the article

  • Blocking HTTPS and P2P Traffic

    - by Genboy
    I have a Debian server running at the gateway level on a LAN. It runs squid for creating block lists of websites - for example, blocking social networking on the LAN. It also uses iptables. I am able to do a lot of things with squid and iptables, but a few things seem difficult to achieve.
    1) If I block facebook through its http URL, people can still access https://www.facebook.com because squid doesn't proxy https traffic by default. However, if the users set the gateway IP address as the proxy in their web browser, then https is also blocked. So I can do one thing: using iptables, drop all outgoing 443 traffic, so that people are forced to set the proxy in their browser in order to browse any HTTPS site. However, is there a better solution for this?
    2) As the number of blocked URLs increases in squid, I am planning to integrate squidGuard. However, the good squidGuard lists are not free for commercial use. Does anyone know of a good squidGuard list which is free?
    3) Blocking Yahoo Messenger, gtalk etc. There are so many ports on which these instant messenger programs work that you need to drop lots of outgoing ports in iptables. However, new ports get added, so you have to keep adding them. And even if your list of ports is current, people can still use the web version of gtalk etc.
    4) Blocking P2P. I haven't been able to figure out how to do this so far.
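    On point 1, a sketch of the iptables piece of that approach (the LAN-facing interface name is an assumption): dropping forwarded client traffic to port 443 leaves the gateway's own OUTPUT chain alone, so squid's CONNECT tunnels keep working while direct browsing is blocked. Transparent HTTPS interception would instead need squid built with ssl_bump, which brings its own certificate problems.

        # LAN clients may not reach port 443 directly; they must use the proxy
        iptables -A FORWARD -i eth1 -p tcp --dport 443 -j DROP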

    Read the article

  • Correct MySQL username/password, but getting Access Denied error when run from script

    - by Nick
    I'm currently trying to run the following command from within a shell script:

        /usr/bin/mysql -u username -ppassword -h localhost database

    It works perfectly fine when executed manually, and not from within a script. When I try to execute a script that contains that command, I get the following error:

        ERROR 1045 (28000) at line 3: Access denied for user 'username'@'localhost' (using password: YES)

    I literally copied and pasted the working command into the script. Why the error? As a side note: the ultimate intent is to run the script with cron.
    EDIT: Here is a stripped-down version of the script I'm trying to run. You can ignore most of it up until the point where it connects to MySQL around line 19.

        #!/bin/sh

        #Run download script to download product data
        cd /home/dir/Scripts/Linux
        /bin/sh script1.sh

        #Run import script to import product data to MySQL
        cd /home/dir/Mysql
        /bin/sh script2.sh

        #Download inventory stats spreadsheet and rename it
        cd /home/dir
        /usr/bin/wget http://www.url.com/file1.txt
        mv file1.txt sheet1.csv

        #Remove existing export spreadsheet
        rm /tmp/sheet2.csv

        #Run MySQL queries in "here document" format
        /usr/bin/mysql -u username -ppassword -h localhost database << EOF
        --Drop old inventory stats table
        truncate table table_name1;
        --Load new inventory stats into table
        Load data local infile '/home/dir/sheet1.csv' into table table_name1
        fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n';
        --MySQL queries to combine product data and inventory stats here
        --Export combined data in spreadsheet format
        group by p.value
        into outfile '/tmp/sheet2.csv'
        fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n';
        EOF

    EDIT 2: After some more testing, the issue is with the << EOF at the end of the command, which is there for the "here document". When removed, the command works fine. The problem is that I need << EOF there so that the MySQL queries will run.
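    Not necessarily the cause of this particular error, but a common pattern for cron-driven MySQL scripts is to move the credentials into an option file so they are read the same way regardless of how the script is invoked (the filename and contents below are illustrative):

        # ~/.my.cnf, mode 600, owned by the user the cron job runs as
        [client]
        user=username
        password=password
        host=localhost

    The script can then call /usr/bin/mysql database << EOF without -u/-p, and quoting the delimiter as << 'EOF' stops the shell from expanding anything inside the here document before MySQL sees it.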

    Read the article

  • More than 10k connections on linux vps

    - by Sash_007
    My question: what is causing this, and how do I check? We use a URL masking script on the website - is it causing this? Please help. This is the message from the host:

        We have noticed that you are abusing our network, as you have made more than 10k connections
        on our node; due to this our node became unstable and all of our customers faced downtime
        because of your VPS. Please find the log details below for your reference.
        ==============================
        593 src=199.231.227.56 dst=58.2.236.196
        465 src=199.231.227.56 dst=192.223.243.6
        396 src=199.231.227.56 dst=58.2.238.191
        217 src=199.231.227.56 dst=58.2.236.197
        161 src=199.231.227.56 dst=20.139.83.50
        145 src=199.231.227.56 dst=192.223.163.6
        136 src=199.231.227.56 dst=125.21.230.68
        134 src=199.231.227.56 dst=125.21.230.132
        131 src=199.231.227.56 dst=20.139.67.50
        117 src=199.231.227.56 dst=110.234.29.210
        112 src=199.231.227.56 dst=65.52.0.51
        104 src=199.231.227.56 dst=202.46.23.55
        100 src=199.231.227.56 dst=202.3.120.4
        94 src=199.231.227.56 dst=117.198.39.22
        69 src=203.197.253.62 dst=199.231.227.56
        62 src=14.194.248.225 dst=199.231.227.56
        53 src=199.231.227.56 dst=192.223.136.5
        52 src=49.248.11.195 dst=199.231.227.56
        51 src=199.231.227.56 dst=117.198.38.15
        50 src=199.231.227.56 dst=192.71.175.2
        47 src=199.231.227.56 dst=61.16.189.76
        45 src=199.231.227.56 dst=122.177.222.17
        43 src=199.231.227.56 dst=115.242.89.40
        42 src=199.231.227.56 dst=103.22.237.215
        41 src=125.16.9.2 dst=199.231.227.56
        39 src=199.231.227.56 dst=117.198.35.90
        38 src=199.231.227.56 dst=203.91.201.54
        38 src=199.231.227.56 dst=14.139.241.89
        38 src=199.231.227.56 dst=111.93.85.82
        37 src=199.231.227.56 dst=65.52.0.56

        Note: the 1st column indicates the total number of connections to a particular IP.
        In total you have made more than 10k connections.
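    To see for yourself what the VPS is doing, a couple of stock commands give the same per-destination counts the host quoted (run on the VPS itself):

        # count current TCP connections grouped by remote address
        netstat -ntu | awk 'NR>2 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head -20
        # see which local processes own connections to the busiest destination
        lsof -nP -i @58.2.236.196

    If the busy destinations line up with whatever the URL masking script proxies or frames, that script is the likely source.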

    Read the article

  • When I try to access a website without www I get access denied.

    - by madphp
    I have an Apache web server on a Debian machine and I'm using Virtualmin to administer virtual hosts. I have two sites on this server right now; when I try to access one site without the www in the URL, I get an access denied error. The other site is fine. The site with the problem is a CakePHP app and has the following .htaccess file in the public_html folder:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule ^$ app/webroot/ [L]
            RewriteRule (.*) app/webroot/$1 [L]
        </IfModule>

    Below are the directives for the problem domain:

        SuexecUserGroup "#1001" "#1001"
        ServerName mydomain.net
        ServerAlias www.mydomain.net
        ServerAlias webmail.mydomain.net
        ServerAlias admin.mydomain.net
        DocumentRoot /home/mydomain/public_html
        ErrorLog /var/log/virtualmin/mydomain.net_error_log
        CustomLog /var/log/virtualmin/mydomain.net_access_log combined
        ScriptAlias /cgi-bin/ /home/mydomain/cgi-bin/
        ScriptAlias /awstats/ /home/mydomain/cgi-bin/
        DirectoryIndex index.html index.htm index.php index.php4 index.php5
        <Directory /home/mydomain/public_html>
            Options -Indexes +IncludesNOEXEC +FollowSymLinks +ExecCGI
            allow from all
            AllowOverride All
            AddHandler fcgid-script .php
            AddHandler fcgid-script .php5
            FCGIWrapper /home/mydomain/fcgi-bin/php5.fcgi .php
            FCGIWrapper /home/mydomain/fcgi-bin/php5.fcgi .php5
        </Directory>
        <Directory /home/mydomain/cgi-bin>
            allow from all
        </Directory>
        RewriteEngine on
        RewriteCond %{HTTP_HOST} =webmail.mydomain.net
        RewriteRule ^(.*) https://mydomain.net:20000/ [R]
        RewriteCond %{HTTP_HOST} =admin.mydomain.net
        RewriteRule ^(.*) https://mydomain.net:10000/ [R]
        RemoveHandler .php
        RemoveHandler .php5
        IPCCommTimeout 31
        <Files awstats.pl>
            AuthName "mydomain.net statistics"
            AuthType Basic
            AuthUserFile /home/mydomain/.awstats-htpasswd
            require valid-user
        </Files>
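    A quick way to rule out the request landing in a different virtual host (a common cause of one hostname behaving differently) is to dump Apache's vhost mapping and compare what the two names actually return; a sketch:

        apache2ctl -S
        curl -sI http://mydomain.net/ | head -n 1
        curl -sI http://www.mydomain.net/ | head -n 1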

    Read the article

  • wget crawling search results of news website

    - by kiltek
    I am trying to crawl the search results of a news website using wget. The name of the website is www.voanews.com. After typing in my search keyword and clicking search, it proceeds to the results. Then I can specify a "to" and a "from" date and hit search again. After this, the URL becomes:

        http://www.voanews.com/search/?st=article&k=mykeyword&df=10%2F01%2F2013&dt=09%2F20%2F2013&ob=dt#article

    and the actual content of the results is what I want to download. To achieve this I created the following wget command:

        wget --reject=js,txt,gif,jpeg,jpg \
             --accept=html \
             --user-agent=My-Browser \
             --recursive --level=2 \
             www.voanews.com/search/?st=article&k=germany&df=08%2F21%2F2013&dt=09%2F20%2F2013&ob=dt#article

    Unfortunately, the crawler doesn't download the search results. It only gets into the upper link bar, which contains the "Home, USA, Africa, Asia, ..." links, and saves the articles they link to. It seems like the crawler doesn't check the search result links at all. What am I doing wrong, and how can I modify the wget command to download only the results list links (and of course the pages they link to)?
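    One thing worth checking before anything wget-specific: unquoted, every & in that URL tells the shell to background the command, so wget only ever sees the URL up to the first &, i.e. a keyword-less search. A sketch of the same invocation with the URL quoted (the #article fragment can be dropped, since fragments are never sent to the server):

        wget --reject=js,txt,gif,jpeg,jpg \
             --accept=html \
             --user-agent=My-Browser \
             --recursive --level=2 \
             'http://www.voanews.com/search/?st=article&k=germany&df=08%2F21%2F2013&dt=09%2F20%2F2013&ob=dt'

    If the result list is paginated or rendered by JavaScript, wget's recursion still will not see it, but quoting is the first prerequisite.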

    Read the article

  • IIS / Virtual Directory authentication.

    - by Chris L
    I have an IIS 6 / Windows 2003 / .NET 3.5 (app code, libraries etc.) server hosting a website at www.mywebsite.com, mapped to E:\Inetpub\wwwroot\mywebsite. We also have a virtual directory (VirtDir) mapped to E:\Inetpub\wwwroot\mywebsite\files (although in theory this could be a different directory or a separate machine) where we store a customer's files (a bunch of .pdf and .xls). Currently, to access a file you can enter a URL like www.mywebsite.com/VirtDir/Customer/myFile.pdf and get the file. The problem is that the user doesn't have to log into www.mywebsite.com to get the file; we would prefer them to log in first. We would like the user to log in via the website and, if valid, let them download files from the virtual directory. www.mywebsite.com and VirtDir are separate sites on the same farm, with Allow Anonymous Access and Integrated Windows Authentication both enabled. I'm more of a developer and less of a sysadmin, but hopefully I'm in the right spot; any help would be appreciated.
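    If the goal is to reuse the website's login, one commonly cited approach is ASP.NET URL authorization on the virtual directory; a sketch of a web.config dropped into VirtDir. Note this only covers requests that ASP.NET actually handles - on IIS 6, static .pdf/.xls files need a wildcard script map to aspnet_isapi.dll before this applies to them:

        <?xml version="1.0"?>
        <configuration>
          <system.web>
            <authorization>
              <!-- deny anonymous users; authenticated users fall through to allow -->
              <deny users="?" />
              <allow users="*" />
            </authorization>
          </system.web>
        </configuration>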

    Read the article

  • Free, simple, configurable SOCKS5 server

    - by Pooria Azimi
    I've been looking (for the past 6-7 hours) for a fast, free and configurable SOCKS5 server. I haven't found anything that matches my needs. They are either too complicated, too bare-bones or simply buggy as hell. This is (all) I need:
    - I want it to run on Linux (and also OS X, preferably).
    - I want it to listen on localhost:8888.
    - When my app (say wget, or curl --socks5=localhost:8888) requests http://www.google.com/search?q=asd (or any other URL - both http and https), I want it to fetch the page not from google's servers, but from http://localhost:4444/cached?uri=http://www.google.com/search%3Fq%3Dasd.
    Nothing more! I don't need caching, or anything else. I just want a SOCKS5 server, running locally, which redirects all queries to my own (local) server. It could be written in C, C++, Python, PHP, Perl, Node.js or any other language. I don't care, as long as it supports my (very limited) needs, or I can easily change the source to make it so. Thanks a lot

    Read the article

  • How do you make Google's interface always be in your chosen language?

    - by Michael Wolf
    Google's interface and search results don't always appear in my preferred language, English. I'm located in Mexico City and, although I generally have no problem with Spanish, I would prefer search results in English most of the time. (The exception is when I'm using search terms in Spanish.) I'd also prefer the interface to be in English, but that's far less important to me than the search results. Google looks at your IP to decide where you're coming from and thus what language to present results in. So, when I type www.google.com into the URL bar, it redirects me to www.google.com.mx. Is there a way to force Google to use one language all the time? Here are some things I've done and tried:
    0) I have a Google account, and I've configured it such that it should know that English is my preferred language. I don't often explicitly log out of Google, so generally Google knows I'm me and knows my preferences when I access its services.
    1) I've configured my browser to ask for pages in English. Very few sites support this feature at all; Google isn't one of them.
    2) From www.google.com.mx, I can click on "Google.com in English". This works until, I think, I close the browser.
    2a) From www.google.com.mx, I can go into account configuration, which is in English. From then on, everything's in English.
    3) I can append &hl=en (Human Language = English) to the end of the URLs of result pages.
    2, 2a, and 3 all "work", but they're all mildly annoying. I'd rather avoid them if I could. (At the risk of stating the obvious, English and Spanish are the languages I'm dealing with, but I imagine that, say, a francophone using Google from Korea would run into basically the same issue.)

    Read the article

  • Authority Information Access local path being ignored

    - by Kevin
    I have a CA set up in Server 2008 R2, and generally it is working, but I can't control the local path/filename it writes its own certificate to for Authority Information Access publishing. Here's a screen shot of the dialog I'm trying to set this on. From these settings I would expect to get the file:

        C:\Windows\system32\CertSrv\CertEnroll\DAMNIT.crt

    But instead I get:

        C:\Windows\system32\CertSrv\CertEnroll\SERVER.domain.com_My Issuing Authority(1).crt

    Of course, the actual change shown wouldn't be very useful, but it's illustrative; no matter what path/filename I use, it always ends up in the same place and with the same name. I actually wanted to change the name from <ServerDNSName>_<CaName><CertificateName>.crt to <CaName><CertificateName>.crt, since the latter corresponds to the HTTP URL whereas the former does not. Admittedly, I haven't set up many CAs, so perhaps I'm just deluded as to what this dialog is supposed to be setting, but if so, this is notoriously bad UI design. (Incidentally, I have a couple of other complaints about the same dialog.) What's going on here, and is there some way to get the filename pattern I want?
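    To see what the CA has actually stored for those publication points (the GUI is only a front end for a registry value), certutil can dump and edit the list directly; a sketch, including the service restart the CA needs before changes take effect:

        certutil -getreg CA\CACertPublicationURLs
        :: after adjusting the entries (certutil -setreg, or the GUI), restart the CA
        net stop certsvc
        net start certsvc

    Comparing that output with the filename produced in CertEnroll makes it easier to tell whether the local-path entry is being ignored or just rewritten.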

    Read the article

  • Apache reports a 200 status for non-existent WordPress URLs

    - by Jonah Bishop
    The WordPress .htaccess generally has the following rewrite rules:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule . /index.php [L]
        </IfModule>

    When I access a non-existent URL at my website, this rewrite rule gets hit, redirects to index.php, and serves up my custom 404.php template file. The status code that gets sent back to the client is the correct 404, as shown in this HTTP Live Headers output example:

        http://www.borngeek.com/nothere/
        GET /nothere/ HTTP/1.1
        Host: www.borngeek.com
        {...}
        HTTP/1.1 404 Not Found

    However, Apache reports the entire exchange with a 200 status code in my server log, as shown here in a log snippet (trimmed for simplicity):

        {...} "GET /nothere/ HTTP/1.1" 200 2155 "-" {...}

    This makes some sense to me, seeing as the original request was redirected to a page that exists (index.php). Is there a way to force Apache to report the exchange as a 404? My problem is that bogus requests coming from Bad Guys show up as "successful requests" in the various server statistics software I use (AWStats, Analog, etc.). I'd love to have them show up on the Apache side as 404s so that they get filtered out of the stat reports that get generated. I tried adding the following line to my .htaccess, but it had no effect (I'm guessing for the same reason as the previous redirect rules):

        ErrorDocument 404 /index.php?error=404

    Does anyone have a clever way to fix this annoyance? Additional info: the OS is Debian 6.0.4, and the Apache version looks to be 2.2.22-3 (hosted on DreamHost). The 404 being sent back to the client is being set by WordPress (i.e. I'm not manually calling header() anywhere).
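    A quick way to compare what the client receives with what ends up in the access log for the same request (these commands are generic; the log path is an assumption):

        # status as seen by the client
        curl -s -o /dev/null -w '%{http_code}\n' http://www.borngeek.com/nothere/
        # status as logged by Apache
        tail -n 1 /var/log/apache2/access.log

    If the client really gets a 404 while the log records 200, the discrepancy is in how Apache logs the internally rewritten request rather than in WordPress itself.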

    Read the article

  • Apache Name-based Virtual Hosts - configuring httpd.conf file

    - by Dave
    Hi there. I am running a web app on Tomcat at the following location on my server: /var/tomcat/webapps/SoccerApp. I am looking to update the Tomcat httpd.conf file with the following virtual host:

        <VirtualHost *:80>
            DocumentRoot /var/tomcat/webapps/SoccerApp/MyTeam
            ServerName www.mysoccerapp.com
        </VirtualHost>

    This gives me a 404 error, as the directory MyTeam does not exist. However, my application uses this URL directory as the name of the soccer team for which to display data, so it will never be a physical folder on the server. Nonetheless, I would like www.mysoccerapp.com to resolve to webapps/SoccerApp/MyTeam, even though the directory isn't there. Does this make any sense? Any ideas on how to get this working? At the end of the day, I want to do the following:

        www.teamone.com -> runs /webapps/SoccerApp/TeamOne
        www.teamtwo.com -> runs /webapps/SoccerApp/TeamTwo

    ...where TeamOne and TeamTwo are not physical directories, but are merely processed by my SoccerApp application as the current soccer team to display data for. Many many thanks! Dave.
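    One way this is often handled when Apache sits in front of Tomcat is to leave DocumentRoot alone and instead map each hostname onto an application path with mod_proxy (or mod_jk), so no physical MyTeam directory is ever needed. A sketch, assuming Tomcat's HTTP connector is on port 8080 and the context path is /SoccerApp (both assumptions):

        <VirtualHost *:80>
            ServerName www.teamone.com
            ProxyPass        / http://localhost:8080/SoccerApp/TeamOne/
            ProxyPassReverse / http://localhost:8080/SoccerApp/TeamOne/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.teamtwo.com
            ProxyPass        / http://localhost:8080/SoccerApp/TeamTwo/
            ProxyPassReverse / http://localhost:8080/SoccerApp/TeamTwo/
        </VirtualHost>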

    Read the article

  • Multiple munin-nodes per machine

    - by Alexander T
    I'm collecting statistics remotely through JMX. The munin JMX plugin allows you to select a URL to connect to when aggregating statistics. This lets me collect statistics from hosts which do not actually have munin-node installed, which I find desirable for some systems where I am unable to install munin-node. The way I work today is that if I want to collect JMX stats from machine A without munin-node, I install munin-node on machine B. Machine B then collects data from A via JMX and reports it to the munin server, which runs on machine C. This setup requires multiple B-type machines: one per C-type machine. I would like to avoid this and instead use only one B-type machine to collect the data from all A-type machines and report it to the only munin server (the C-type machine). As far as I understand, this requires running multiple munin-nodes on B, or in some other way telling the munin server that the B-type machine is reporting data from multiple sources. Is this possible? Thank you.
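    Munin's "virtual node" mechanism matches this setup: a single munin-node on B can serve data for several host names (if the JMX plugins are configured to report a host_name of their own, the same mechanism the SNMP plugins use), and the master is then told to fetch those hosts from B's address. A sketch of the munin.conf side on machine C, with illustrative names:

        # machine A has no munin-node; its data is served by the node on B
        [machineA]
            address B.example.com
            use_node_name no

        [anotherA]
            address B.example.com
            use_node_name no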

    Read the article

  • IIS crashes with unhandled exception in ASP.NET

    - by SnowCrash
    We had an issue recently with an unhandled exception in an ASP.NET C# application bringing down IIS and all the application pools it was hosting. IIS Manager was unable to restart or stop/start the service, and I was unable to start IIS again after killing w3wp.exe in the task manager. A system reboot restored IIS to a running state; as a primarily Linux admin, I generally consider an unplanned system reboot to resolve a software error to be an act of high heresy. Is there a way to "harden" IIS so that a faulting application does not affect anything but the request that exposes the fault? Some details on the server and the application fault:
    - IIS: 7.5
    - .NET: 4.0
    - Windows Server 2008 R2
    - Faulted on a call to System.Net.Dns.Resolve() with a URL pointing to a non-existent domain as the argument. (I'm aware that this method is deprecated, but the point that a page code issue shouldn't bring down the server still stands.)
    - The exception generated was SocketException.
    - The faulting module according to Event Viewer was KERNELBASE.dll
    The issue was resolved by wrapping the call in a try-catch, logging the exception, and displaying some generic content on the page. I'm hoping that I missed something in the IIS config that would switch it to "production" mode or something.
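    For reference, a minimal sketch of the guard described above (names like hostName and logger are illustrative; Dns.GetHostEntry is the non-deprecated replacement for Dns.Resolve and throws the same SocketException for unknown hosts):

        try
        {
            // an unresolvable host makes this throw SocketException
            var entry = System.Net.Dns.GetHostEntry(hostName);
            // ... use entry.AddressList ...
        }
        catch (System.Net.Sockets.SocketException ex)
        {
            logger.Warn("DNS lookup failed for {0}: {1}", hostName, ex.Message);
            // fall back to generic page content instead of letting the exception escape
        }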

    Read the article

  • Get Safari to use different autocompletion on different URLs on same hostname

    - by Luke404
    I have a webserver publishing different services over the same SSL VirtualHost, the two most commonly used being phpMyAdmin and Cacti. These (and others) use 'cookie' style authentication, asking for user and password in an HTML form (thus not using HTTP Authentication). Being on the same hostname, Safari didn't manage stored passwords too well: if I log in to one app with user foo and then go to app two, it would offer me user foo and its password in the login form. Changing just the username to bar used to be sufficient to let Safari autocomplete the correct password in its form field. Annoying, but I could live with it - usernames are short and easy to remember when compared to the passwords we use. After the update to Safari 5 this seems to be no longer true: if I store in Safari (actually the user keychain on OS X) credentials for https://www.foobarbaz.com/app1 AND credentials for https://www.foobarbaz.com/app2, there seems to be no way for it to autocomplete both based on the URL. Even editing the keychain to add the path (it will store only the hostname by default) does not help. Is there anything I can do to let it work the way I want while still keeping everything on one hostname? Modifying anything server side is of course possible, but I can't switch the apps to HTTP Auth (and not every one will support it anyway) to use different 'realms'.

    Read the article
