Search Results

Search found 3068 results on 123 pages for 'rewrite'.

  • nginx: dump HTTP requests for debugging

    - by Alexander Gladysh
    Ubuntu 10.04.2, nginx 0.7.65. I see some weird HTTP requests coming to my nginx server. To better understand what is going on, I want to dump the whole HTTP request data for such queries (i.e. dump all request headers and the body somewhere I can read them). Can I do this with nginx? Alternatively, is there some HTTP server that allows me to do this out of the box, to which I can proxy these requests by means of nginx?

    Update: Note that this box has a bunch of normal traffic, and I would like to avoid capturing all of it at a low level (say, with tcpdump) and filtering it out later. I think it would be much easier to filter the good traffic first in a rewrite rule (fortunately I can write one quite easily in this case), and then deal with the bogus traffic only. And I do not want to channel bogus traffic to another box just to be able to capture it there with tcpdump.

    Update 2: To give a bit more detail, the bogus requests have a parameter named (say) foo in their GET query (the value of the parameter can differ). Good requests are guaranteed never to have this parameter. If I can filter by this in tcpdump or ngrep somehow — no problem, I'll use those.
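
    A minimal sketch of the ngrep approach, assuming the parameter is literally named foo, traffic arrives on port 80, and eth0 is the public interface (all three are assumptions here):

        # Print, byline-formatted, only payloads whose query string carries foo=
        ngrep -q -W byline -d eth0 '[?&]foo=' 'tcp dst port 80'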

  • Using my own Postfix, filtering spam and getting all the mail into my ISP's inbox

    - by djechelon
    Hello, I currently own a domain bought via GoDaddy.com, which provides me a basic email setup for the most common needs. I configured it to forward all mail to [email protected] to [email protected]. I also own a virtual server with a running Postfix that I use for a specific website (all mail to somedomain.com gets forwarded via LMTP to a program written by me). Since I have recently been experiencing some harassment from spammers, since GoDaddy doesn't seem to filter spam, and since my Windows Phone's Pocket Outlook cannot filter spam, I would like to use SpamAssassin to filter inbound spam by changing my domain's MX records to point at my server. My ideal setup is the following:

    1. All mail delivered to somedomain.com gets redirected via LMTP as usual, via virtual transport, without any spam check.
    2. All mail to [email protected] gets redirected to [email protected] after a severe spam check.
    3. I don't care about [email protected], since I use just one address for now.
    4. I would like to train SpamAssassin with customized spam rules, possibly based on the presence of certain keywords (links to certain unsubscribe pages I found recurring).

    I currently configured Postfix with:

        transport:
            somedomain.com    lmtp:[127.0.0.1]:8025
            .somedomain.com   error: Cannot accept mail for this domain

        relay:
            somedomain.com    OK    (I guess I should add mydomain.com OK too)

        virtual:
            @mydomain.com     [email protected]    (looks like a catch-all rule; it's OK as requirement 3)

    I installed SpamAssassin; I can do rcspamd start and set it to boot with the server, but I don't know if there is anything else to do to use it from Postfix, and how to apply requirement 1 (only mail to mydomain.com gets filtered). I also tried to send an email via telnet to make sure my settings are ready for the MX change. I received the message into my account, but I found that it went through secureserver.net, as if Postfix didn't rewrite the destination but simply relayed the message. Thank you in advance. I'm no expert in SpamAssassin, and I have little experience with Postfix (enough to avoid making my server an open relay).
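
    A common way to hook SpamAssassin into Postfix is a content_filter entry in master.cf. This is only a sketch: it filters everything Postfix accepts rather than mydomain.com alone (restricting it per-domain would need a separate smtpd instance), and the spamd user is an assumption:

        # master.cf
        smtp      inet  n  -  n  -  -  smtpd
          -o content_filter=spamassassin
        spamassassin unix -  n  n  -  -  pipe
          user=spamd argv=/usr/bin/spamc -f -e /usr/sbin/sendmail -oi -f ${sender} ${recipient}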

  • Setup low-cost image storage server with 24x SSD array to get high IOPS?

    - by Nenad
    I want to build, let's name it, a low-cost Ra*san which would host the images for our social site (many millions); we have 5 sizes of every photo, at 3 KB, 7 KB, 15 KB, 25 KB and 80 KB per image. My idea is to build a server with 24x consumer 240 GB SSDs in RAID 6, which will give me some 5 TB of disk space for the photo storage. To have HA I can add a second one and use DRBD. I'm looking to get above 150'000 IOPS (4K random reads). As we mostly have read-only access and rarely delete photos, I think I can go with consumer MLC SSDs. I read many endurance reviews and don't see a problem there as long as we don't rewrite the cells. What do you think about my idea?

    - I'm not sure between RAID 6 and RAID 10 (more IOPS, SSD cost).
    - Is ext4 OK for the filesystem?
    - Would you use 1 or 2 RAID controllers, with an extender backplane?

    If anyone has built something similar, I would be happy to get real-world numbers.

    UPDATE: I have bought 12 (plus some spares) OCZ Talos 480 GB SAS SSD drives; they will be placed in a 12-bay DAS and attached to a PERC H800 (1 GB NV cache, manufactured by LSI, with FastPath) controller. I plan to set up RAID 50 with ext4. If someone is wondering about benchmarks, let me know what you would like to see.
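
    One hedged way to check the 150'000 IOPS target once the array exists (the device path, runtime and queue depths are all assumptions):

        # 4K random reads, direct I/O so the page cache cannot inflate the numbers
        fio --name=randread --filename=/dev/sdX --rw=randread --bs=4k \
            --direct=1 --ioengine=libaio --iodepth=32 --numjobs=4 \
            --runtime=60 --time_based --group_reporting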

  • Redirect port 10000 to https in Apache

    - by Hamid Elaosta
    I have been reading around and trying different configurations to get a request to my server on port 10000 to redirect from an http to an https request. For some reason I can't figure out how to make it happen when I use port 10000, although I can set a rewrite rule for port 80 (implicit) to do it. All I want is for a request as follows: http://127.0.0.1:10000 to redirect me to https://127.0.0.1:10000, but it needs to be written so that it also works when accessed via my domain name externally. My current vhost, the last of many different attempts, is currently set as follows, but it doesn't seem to work at all:

        <VirtualHost *:10000>
            RewriteEngine On
            RewriteCond %{HTTPS} off
            RewriteRule (.*) https://%{HTTP_POST}%{REQUEST_URI}
            ErrorLog "/var/log/httpd/webmin-redirect_error_log.log"
            CustomLog "/var/log/httpd/webmin-redirect_access_log.log" common
        </VirtualHost>

    I've also tried a few other things, but nothing seems to work; any help would be appreciated.

    EDIT: I already have a rewrite in my httpd.conf that redirects port 80 to https. If I access port 10000 externally it is redirected to https, but from the LAN (http://192.168.0.2:10000) it doesn't.
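
    A hedged working version of the rule: the quoted config references %{HTTP_POST}, which looks like a typo for %{HTTP_HOST}, so the substitution would build a URL with an empty host:

        <VirtualHost *:10000>
            RewriteEngine On
            RewriteCond %{HTTPS} off
            RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
        </VirtualHost>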

  • Missing access log for virtual host on Plesk

    - by Cummander Checkov
    For some reason I don't understand, after creating a new virtual host / domain in Plesk a few months back, I cannot seem to find the access log. I noticed this when running /usr/local/psa/admin/sbin/statistics. The host in question is being scanned:

        Main HTML page is 'awstats.<hostname_masked>-http.html'.
        Create/Update database for config "/opt/psa/etc/awstats/awstats.<hostname_masked>.com-https.conf" by AWStats version 6.95 (build 1.943)
        From data in log file "-"...
        Phase 1 : First bypass old records, searching new record...
        Searching new records from beginning of log file...
        Jumped lines in file: 0
        Parsed lines in file: 0
        Found 0 dropped records,
        Found 0 corrupted records,
        Found 0 old records,
        Found 0 new qualified records.

    So basically no access logs have been parsed/found. I then went on to check if I could find the log myself. I looked in /var/www/vhosts/<hostname_masked>.com/statistics/logs but all I find is error_log. Does anybody know what is wrong here and perhaps how I could fix it?

    Note: in the <hostname_masked>.com/conf/ folder I keep a custom vhost.conf file, which however contains only some rewrite conditions plus a directory statement that contains php_admin_flag and php_admin_value settings. None of them are related to logging, though.
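
    A hedged first check: confirm that the Apache config Plesk generated for the domain still carries a CustomLog directive, and that nothing in the custom vhost.conf shadows it (the second path assumes a RHEL-style Plesk layout):

        grep -R CustomLog /var/www/vhosts/<hostname_masked>.com/conf/ /etc/httpd/conf.d/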

  • How do I enable additional debugging output from Ansible and Vagrant?

    - by Brian Lyttle
    I'm investigating Ansible for server and application provisioning. My application is currently provisioned with shell scripts in Vagrant. Rather than rewrite my scripts, I've taken a sample and attempted to deploy it. It appears to deploy fine, but I'm seeing a failure message after what looks like a series of successful steps:

        » vagrant provision ~/vm/blvagrant 1 ?
        [default] Running provisioner: ansible...

        PLAY [web-servers] ************************************************************

        GATHERING FACTS ***************************************************************
        ok: [192.168.9.149]

        TASK: [install python-software-properties] ************************************
        ok: [192.168.9.149] => {"changed": false, "item": ""}

        TASK: [add nginx ppa if it ubuntu 10.04 and up] *******************************
        ok: [192.168.9.149] => {"changed": false, "item": "", "repo": "ppa:nginx/stable", "state": "present"}

        TASK: [update apt repo] *******************************************************
        ok: [192.168.9.149] => {"changed": false, "item": ""}

        TASK: [install nginx] *********************************************************
        ok: [192.168.9.149] => {"changed": false, "item": ""}

        TASK: [copy fixed init for nginx] *********************************************
        ok: [192.168.9.149] => {"changed": false, "gid": 0, "group": "root", "item": "", "mode": "0755", "owner": "root", "path": "/etc/init.d/nginx", "size": 2321, "state": "file", "uid": 0}

        TASK: [service nginx] *********************************************************
        ok: [192.168.9.149] => {"changed": false, "item": "", "name": "nginx", "state": "started"}

        TASK: [write nginx.conf] ******************************************************
        ok: [192.168.9.149] => {"changed": false, "gid": 0, "group": "root", "item": "", "mode": "0644", "owner": "root", "path": "/etc/nginx/nginx.conf", "size": 1067, "state": "file", "uid": 0}

        PLAY RECAP ********************************************************************
        192.168.9.149              : ok=8    changed=0    unreachable=0    failed=0

        Ansible failed to complete successfully. Any error output should be
        visible above. Please fix these errors and try again.

    How do I go about getting additional debug information? I've already added ansible.verbose = true to my Vagrant config, which results in the dictionaries being displayed within the output above.
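
    A hedged sketch of turning the verbosity up further in the Vagrantfile: the ansible provisioner accepts an Ansible-style verbosity string in addition to true (the playbook name is a placeholder):

        Vagrant.configure("2") do |config|
          config.vm.provision "ansible" do |ansible|
            ansible.playbook = "playbook.yml"   # placeholder
            ansible.verbose  = "vvvv"           # passes -vvvv through to ansible-playbook
          end
        end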

  • Apache 2.4 with PHP-FPM

    - by tubaguy50035
    I'm trying to set up Apache 2.4 with PHP-FPM 5.4 using the new modules in Apache 2.4. The following is what I currently have in my virtual host file:

        <VirtualHost *:80>
            ServerAdmin root@localhost
            DocumentRoot /var/www

            #Directory permissions
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Require all granted
            </Directory>

            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    I have PHP-FPM running using Unix sockets, with a sock file located at /var/run/php5-fpm.sock. How do I proxy my requests to this sock file? I've seen some sites say to use ProxyPassMatch and others say to use a rewrite rule. Are there pros or cons on either side? Also, most sites I'm seeing show ProxyPassMatch with a regex that only passes .php files. Could I also send it .html files? For whatever reason, we have a ton of PHP inside .html files.

    Edit: As noted in the comments, it looks like mod_proxy_fcgi doesn't support Unix sockets. Is there another module I should be using?
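
    A hedged sketch of the TCP fallback: mod_proxy_fcgi did not learn to speak to Unix sockets until later 2.4 releases, so the usual workaround is to point the PHP-FPM pool at a loopback port instead (port and paths are assumptions, and sending .html through assumes the pool's security.limit_extensions permits it):

        # PHP-FPM pool config: listen = 127.0.0.1:9000 instead of the .sock file
        ProxyPassMatch ^/(.*\.(?:php|html)(/.*)?)$ fcgi://127.0.0.1:9000/var/www/$1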

  • SQL Server Read Locking behavior

    - by Charles Bretana
    SQL Server Books Online says that "Shared (S) locks on a resource are released as soon as the read operation completes, unless the transaction isolation level is set to repeatable read or higher, or a locking hint is used to retain the shared (S) locks for the duration of the transaction."

    Assuming we're talking about a row-level lock, with no explicit transaction, at the default isolation level (Read Committed), what does "read operation" refer to? The reading of a single row of data? The reading of a single 8k I/O page? Or does the lock last until the complete SELECT statement in which it was created has finished executing, no matter how many other rows are involved?

    NOTE: The reason I need to know this is that we have a several-second read-only SELECT statement generated by a data-layer web service, which creates page-level shared read locks, generating a deadlock by conflicting with row-level exclusive update locks from a replication process that keeps the server updated. The SELECT statement is fairly large, with many sub-selects, and one DBA is proposing that we rewrite it to break it up into multiple smaller statements (shorter-running pieces), "to cut down on how long the locks are held". As this assumes that the shared read locks are held until the complete SELECT statement has finished, if that is wrong (if locks are released when the row, or the page, is read) then that approach would have no effect whatsoever.
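
    One hedged way to observe this directly: run the long SELECT in one session and watch its locks from another (the session id is a placeholder):

        SELECT resource_type, request_mode, request_status, COUNT(*) AS cnt
        FROM sys.dm_tran_locks
        WHERE request_session_id = 57   -- spid of the long-running SELECT
        GROUP BY resource_type, request_mode, request_status;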

  • IIS7 default document for urlMapped url throws 403 error

    - by MorningZ
    Hopefully this all makes sense: I have a Web Application project on an IIS7 server that is "theme-able" using different master pages. As a result of what I am trying to do, the root of the project has no aspx files, so I am using the web.config's ability to rewrite "~/default.aspx" to "~/themes/a/default.aspx". This works great as long as I type in "http://www.mysite.com/default.aspx", but typing just "http://www.mysite.com" results in a "403 - Forbidden: Access is denied" error. I was hoping that the combination of urlMapping and default document would be smart enough to handle this, but it's not:

        <system.webServer>
            <defaultDocument enabled="true">
                <files>
                    <clear />
                    <add value="default.aspx"/>
                </files>
            </defaultDocument>
        </system.webServer>

    I also tried:

        <system.webServer>
            <defaultDocument enabled="true">
                <files>
                    <clear />
                    <add value="~/themes/a/default.aspx"/>
                </files>
            </defaultDocument>
        </system.webServer>

    to no avail. I was hoping a browser would come in without a document defined, IIS7 would assume it was default.aspx, and then the urlMapping would map it accordingly, but nope. Any pointers? I've read a ton of posts here with similar issues, but not this exact issue.
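
    A hedged sketch, not verified against IIS7, of sidestepping the default-document lookup entirely by mapping the bare root in the same urlMappings section the existing rewrite presumably uses:

        <system.web>
            <urlMappings enabled="true">
                <!-- assumption: the existing default.aspx mapping lives here too -->
                <add url="~/" mappedUrl="~/themes/a/default.aspx" />
            </urlMappings>
        </system.web>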

  • Recover data from Dynamic Disk (MBR) bigger than 2TB

    - by Helder
    Here is the situation: a Promise FastTrak TX4310 array with 3 disks (750 GB each) in RAID 5. This comes to around 1500 GB of data. Last week I had the idea of expanding the RAID with an additional 750 GB disk, which would bring the volume to around 2250 GB. I plugged in the disk and used the WebPAM software to do the RAID expansion. However, I didn't account for the 2 TB MBR limit, as I didn't remember that the disk was using MBR instead of GPT, and I didn't check it prior to the expansion. After a couple of days of expansion, today when I got home the disk in Windows disk manager showed the message "Invalid disk", and when I try to activate it, it says "The operation is not allowed on the Invalid pack". From what I figured, the logical volume on the RAID expanded, passed that info to the Windows layer, and I ended up with a "larger than 2 TB" MBR disk. I'm hoping that somehow I can still recover some data from this, and I was wondering if I can "rewrite" the MBR structure back to the 1500 GB partition size, so I can access the partition in Windows. Right now I'm doing an "Analyse" with TestDisk, as I hope the program will pick up the old 1500 GB structure and allow me to somehow revert back to it. I think that even though the logical drive in the RAID is bigger than 2 TB, I can somehow correct the MBR to show the 1500 GB partition again. I had a similar problem once, and I was able to recover the data using a similar method. What do you guys think? Is it a dead end? Am I totally screwed because of the extra RAID layer that I'm not counting on? Or is there another way to deal with this? Thanks all!

  • Configure Apache + Passenger to serve static files from different directory

    - by Rory Fitzpatrick
    I'm trying to set up Apache and Passenger to serve a Rails app. However, I also need it to serve static files from a directory other than /public and give precedence to these static files over anything in the Rails app. The Rails app is in /home/user/apps/testapp and the static files are in /home/user/public_html. For various reasons the static files cannot simply be moved to the Rails public folder. Also note that the root http://domain.com/ should be served by the index.html file in the public_html folder. Here is the config I'm using:

        <VirtualHost *:80>
            ServerName domain.com
            DocumentRoot /home/user/apps/testapp/public
            RewriteEngine On
            RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -f
            RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -d
            RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]
        </VirtualHost>

    This serves the Rails application fine but gives a 404 for any static content from public_html. I have also tried a configuration that uses DocumentRoot /home/user/public_html, but this doesn't serve the Rails app at all, presumably because Passenger doesn't know to process the request. Interestingly, if I change the conditions to !-f and !-d and the rewrite rule to redirect to another domain, it works as expected (e.g. http://domain.com/doesnt_exist gets redirected to http://otherdomain.com/doesnt_exist). How can I configure Apache to serve static files like this, but allow all other requests to continue to Passenger?
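
    A hedged sketch of the intended rule: the two RewriteCond lines above are ANDed, so no path can ever be both a file (-f) and a directory (-d); joining them with [OR] lets static hits win. %{REQUEST_URI} is used here on the assumption that %{REQUEST_FILENAME} already expands to a full filesystem path:

        RewriteEngine On
        RewriteCond /home/user/public_html%{REQUEST_URI} -f [OR]
        RewriteCond /home/user/public_html%{REQUEST_URI} -d
        RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]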

  • Does anyone know how to "tcpdump" traffic decrypted by Mallory MITM? [migrated]

    - by chriv
    I'm looking for some help in capturing network traffic that I can analyze in Wireshark (or other tools). The tool I'm using is mallory. If anyone is familiar with mallory, I could use some help. I've got it configured and running correctly, but I don't know how to get the output that I want. The setup is on my private network. I have a VM (running Ubuntu 12.04 - precise) with two NICs:

    eth0 is on my "real" network
    eth1 is only on my "fake" network, and is using dnsmasq (for DNS and DHCP for other devices on the "fake" network)

    Effectively eth0 is the "WAN" on my VM, and eth1 is the "LAN" on my VM. I've set up mallory and iptables to intercept, decrypt, encrypt and rewrite all traffic coming in on destination port 443 on eth1. On the device I want intercepted, I have imported the ca.cer that mallory generated as a trusted root certificate. I need to analyze some strange behavior in the HTTPS stream between the client and server, which is why mallory is set up in between for this MITM. I would like to take the decrypted HTTPS traffic and dump it to either a logfile or a socket, in a format compatible with tcpdump/wireshark (so I can collect it later and analyze it). Running tcpdump on eth1 is too soon (it's encrypted), and running tcpdump on eth0 is too late (it's been re-encrypted). Is there a way to make mallory "tcpdump" the decrypted traffic (in both directions)?

  • SSLVerifyClient optional with location-based exceptions

    - by Ian Dunn
    I have a site that requires authentication in order to access certain directories, but not others. (The "directories" are really just rewrite rules that all pass through /index.php.) In order to authenticate, the user can either log in with a standard username/password, or submit a client-side x509 certificate. So, Apache's vhost conf looks something like this:

        SSLCACertificateFile /etc/pki/CA/certs/redacted-ca.crt
        SSLOptions +ExportCertData +StdEnvVars
        SSLVerifyClient none
        SSLVerifyDepth 1

        <LocationMatch "/(foo-one|foo-two|foo-three)">
            SSLVerifyClient optional
        </LocationMatch>

    That works fine, but then large file uploads fail because of the behavior documented in bug 12355. The workaround for that is to set SSLVerifyClient require (or optional) as the default, so now the conf looks like this:

        SSLCACertificateFile /etc/pki/CA/certs/redacted-ca.crt
        SSLOptions +ExportCertData +StdEnvVars
        SSLVerifyClient optional
        SSLVerifyDepth 1

        <LocationMatch "/(bar-one|bar-two|bar-three)">
            SSLVerifyClient none
        </LocationMatch>

    That fixes the upload problem, but the SSLVerifyClient none doesn't work for bar-one, bar-two, etc. Those directories still prompt for a certificate. Additionally, I also need the root URL to be accessible without the user being prompted for a certificate. I'm afraid that will cancel out the workaround, though.
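
    A hedged alternative that keeps the first (per-location) layout but enlarges mod_ssl's renegotiation buffer so large POST bodies survive the mid-request renegotiation; the buffer size here is an assumption:

        <LocationMatch "/(foo-one|foo-two|foo-three)">
            SSLVerifyClient optional
            SSLRenegBufferSize 10486000
        </LocationMatch>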

  • Requests are making it to my app server, but not into node.js -- why?

    - by Zane Claes
    I detailed in this question on StackOverflow how some random requests are not making it from the client to my Node.js app server, resulting in a gateway timeout. In summary, identical requests are, at random, not even making it far enough to trigger a console.log() in my first line of express middleware. I need to narrow down the problem, though, to find out WHERE the traffic is being lost, and it was suggested that I try a packet sniffer on my app servers. Here's my setup:

    2x load balancers (m1.larges)
    2x node.js servers (also m1.large)

    Here's what's interesting/unusual: the node.js servers started as PHP servers with an Apache stack and continue to serve PHP files for my domain (streamified.me). However, I use a little httpd.conf magic on the app servers so that requests to api.streamified.me get routed over port 8888 to the node.js server:

        RewriteCond %{HTTP_HOST} ^api.streamified.me
        RewriteRule ^(.*) http://localhost:8888$1 [P]

    So, the request hits the load balancer, goes to an app server, gets routed to port 8888 if it's intended for the API, and gets handled by node.js. So, in the same httpd.conf file, I turned on RewriteLogLevel 5 and then created a simple PHP+CURL script on my localhost to hit api.streamified.me with a random URL (which should cause node.js to trigger a simple "not found" response) until it resulted in a gateway timeout. Here, you can see that it has happened, and the rewrite log shows that the request was definitely received by the app server and forwarded to port 8888... but it was never received by node.js (or, at least, the first line of code in the first line of middleware never gets it...)

    Image link: http://i.stack.imgur.com/3OQxS.png
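
    A hedged way to sniff just that hop: capture loopback traffic to the node.js port, so the trace shows whether the proxied request ever leaves Apache (the interface name and file name are assumptions):

        tcpdump -i lo -s 0 -w node8888.pcap 'tcp port 8888'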

  • IIS virtual location path

    - by Worp
    Sorry in advance, for this seems to be a very noobish question and should be easily fixable, yet I can't work it out: I am setting up Windows authentication for a website running on IIS 8.5. The Yii framework of the website takes input like http://localhost/mywebsite/index.php/site/login and processes it, using the "login" action of the "site" controller. I flipped on URL Rewrite to have nicer URLs, leaving the URL as /mywebsite/site/login. Now I need to set up Windows authentication for this location. Very specifically, ONLY for this very exact location: only the /site/login location of mywebsite needs to have authentication. Every other location needs to have anonymous. Since it is a "virtual location", I don't know how to do it. I can set up win-auth for files, directories, virtual directories, etc., but not for virtual locations that do not map to any file, only to a controller/action in Yii. The working counterpart in Apache is but I can't "translate this into IIS". I have read that IIS does have the "~" symbol, but I just couldn't make it work yet. Could it be used to achieve authentication on a location basis? I have looked around virtual directories as well, which seem to simply be a kind of "symlink" to actual folders on the hard drive. Can they be used differently, to "create a virtual folder in a location that doesn't really exist to manage its properties"? Help is much appreciated.
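
    A hedged sketch using a <location> element in web.config, which matches a URL path rather than a physical file; whether the authentication sections are unlockable at site level depends on the server's applicationHost.config, which is an assumption here:

        <location path="site/login">
            <system.webServer>
                <security>
                    <authentication>
                        <anonymousAuthentication enabled="false" />
                        <windowsAuthentication enabled="true" />
                    </authentication>
                </security>
            </system.webServer>
        </location>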

  • nginx hackery: change image file every Xth request

    - by Vangel
    Let me describe what I am trying to do first. I have a bunch of pictures in a directory called /images/*.(jpg|gif|png|blah blah). Now say these images are embedded in an HTML page, and I don't really care which image or where it's embedded. For every 10th request for the same picture file (if possible), or for any picture, I want to display a fixed image (e.g. trollface.jpg). That's it! I have searched around a bit but I am not even sure what I am looking for. Rewrite might help, but then it's a permanent thing. This has got to do something with requests. I have heard perl scripts can be used with nginx. I can't write an nginx module (though I did bravely look up the docs and then gave up). Before you ask "But why don't you do it in the application, noob?": this is a static-files-only server. The point is to not execute any binary at all.
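
    A hedged approximation with stock nginx: split_clients can divert a fixed share of requests (a random 10%, not strictly every 10th; exact counting would need the embedded perl or lua module). It keys on $request_id, which assumes a much newer nginx than was current when this was asked:

        http {
            split_clients $request_id $troll {
                10%     1;
                *       0;
            }
            server {
                location /images/ {
                    if ($troll) {
                        rewrite ^ /images/trollface.jpg break;
                    }
                }
            }
        }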

  • Removing trailing slashes in WordPress blog hosted on IIS

    - by Zishan
    I have a WordPress blog hosted in my IIS virtual directory that has all URLs ending with a forward slash. For example: http://www.example.com/blog/. I have the following rules defined in my web.config:

        <rule name="wordpress" patternSyntax="Wildcard">
            <match url="*" />
            <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
            </conditions>
            <action type="Rewrite" url="index.php" />
        </rule>
        <rule name="Redirect-domain-to-www" patternSyntax="Wildcard" stopProcessing="true">
            <match url="*" />
            <conditions>
                <add input="{HTTP_HOST}" pattern="example.com" />
            </conditions>
            <action type="Redirect" url="http://www.example.com/blog/{R:0}" />
        </rule>

    In addition, I tried adding the following rule for removing trailing slashes:

        <rule name="Remove trailing slash" stopProcessing="true">
            <match url="(.*)/$" />
            <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
            </conditions>
            <action type="Redirect" redirectType="Permanent" url="{R:1}" />
        </rule>

    It seems that the last rule doesn't work at all. Anyone around here who has attempted to remove trailing slashes from WordPress blogs hosted on IIS?
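
    A hedged observation, sketched below: IIS URL Rewrite evaluates rules in order, and the "wordpress" rule above rewrites the URL to index.php before the slash rule ever runs, so its "(.*)/$" pattern no longer matches. Moving the trailing-slash redirect to the top of the <rules> collection lets it see the original URL:

        <rules>
            <!-- evaluated first, so it sees the original URL before the
                 wordpress rule rewrites everything to index.php -->
            <rule name="Remove trailing slash" stopProcessing="true">
                <match url="(.*)/$" />
                <conditions>
                    <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                </conditions>
                <action type="Redirect" redirectType="Permanent" url="{R:1}" />
            </rule>
            <!-- "Redirect-domain-to-www" and "wordpress" rules follow, unchanged -->
        </rules>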

  • default domain and first domain in apache2 causing trouble

    - by acidzombie24
    I have 3 sites and a default/test site using Mono's test page. I created aFirst, c, d, e, zLast. zLast has rewrite rules that should be evaluated last. Since the first VirtualHost seen is the default, I set it to this:

        --aFirst--
        <VirtualHost *:80>
            ServerName www.domain.tld
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/test
            DirectoryIndex index.html index.aspx index.php
            MonoDocumentRootDir "/var/www/test"
            MonoServerPath rootsite "/usr/local/bin/mod-mono-server2"
            MonoApplications rootsite "/:/var/www/test"
            <Directory /var/www/test>
                MonoSetServerAlias rootsite
                SetHandler mono
                AddHandler mod_mono .aspx .ascx .asax .ashx .config .cs .asmx
            </Directory>
        </VirtualHost>

    The problem is that my default page (the IP address of my server) and the first website (csite.ddomain.net) have problems (even though csite is defined in c and is not the first virtual host). The IP address of my server and csite.ddomain.net ALWAYS load the same site: either Mono's test page or the csite. It flips every time I restart Apache. Why isn't the server IP address always loading the default page (Mono test page), and why isn't csite.ddomain.net always loading the site I want?! Here's the config for --csite--:

        <VirtualHost *:80>
            ServerName csite.testdomain.net
            ServerAdmin webmaster@localhost
            ServerAlias s.csite.testdomain.net
            DocumentRoot /var/www/prjname
            DirectoryIndex index.html index.aspx
            MonoDocumentRootDir "/var/www/prjname"
            MonoServerPath rootsite "/usr/local/bin/mod-mono-server2"
            MonoApplications rootsite "/:/var/www/prjname"
            <Directory /var/www/prjname>
                MonoSetServerAlias rootsite
                SetHandler mono
                AddHandler mod_mono .aspx .ascx .asax .ashx .config .cs .asmx
            </Directory>
        </VirtualHost>

    aFirst, c, d, e, zLast are all enabled.
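
    A hedged first diagnostic: ask Apache how it actually ordered the name-based vhosts and which one it treats as the default for *:80 (the command name varies by distro; apache2ctl is the Debian/Ubuntu spelling):

        apache2ctl -S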

  • (updated) Subfolder needs whitelist and standard redirect for all others

    - by Superstrong
    How can I allow access to the foo.html files in the .com/song/private/ subfolder for:

    - a logged-in WordPress user; or
    - any referral domains (including subfolders) I add; or
    - any URL on our own domain from the com/song/private folder.

    For all others, the user should be redirected to the corresponding public version of the post, which has the same HTML filename and is structured .com/song/foo.html. (The private version uses a different template, with different custom fields for each post.)

    Update: Here's what I have so far:

        <Limit GET POST>
            order deny,allow
            deny from all
            allow from domain.com/song/private
            allow from otherdomain.com
        </Limit>
        RewriteRule ^(.*)$ ../$ [NC,L]

    More: Will that last rewrite rule take people back to the public version, from com/song/private/foo.html to com/song/foo.html? I found the following rule for detecting WordPress logged-in status, but what do I put afterward with a RewriteRule, and will it work anyway? (If not, is there an alternative?)

        RewriteCond %{HTTP_COOKIE} !^.*wordpress_logged_in_.*$

    N.B. I have added code to my root .htaccess allowing me to insert additional .htaccess files in other subfolders as needed. Copied from Stack Overflow, where they suggested I ask here.
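
    A hedged sketch of the redirect leg, meant for a .htaccess file inside /song/private/ (the domain names are placeholders, and a Referer check like this is easily spoofed):

        RewriteEngine On
        # no WordPress login cookie...
        RewriteCond %{HTTP_COOKIE} !wordpress_logged_in_ [NC]
        # ...and not coming from a whitelisted domain or our own /song/ pages
        RewriteCond %{HTTP_REFERER} !^https?://(www\.)?otherdomain\.com/ [NC]
        RewriteCond %{HTTP_REFERER} !^https?://(www\.)?domain\.com/song/ [NC]
        # send everyone else to the public copy of the same file
        RewriteRule ^(.+\.html)$ /song/$1 [R=302,L]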

  • Nginx: Disallow index.html in URL

    - by Martin Vilcans
    We're generating a site consisting of only static files (using Assemble). Having the .html extension on URLs looks so nineties, so we generate every static HTML file in its own directory and call it index.html. For example, the URL http://www.example.com/foo/bar/ is in the file /var/www/foo/bar/index.html. This works well, but there is one small thing nagging me: now there are two possible URLs to the same resource:

    http://www.example.com/foo/bar/ (slash URL)
    http://www.example.com/foo/bar/index.html (index.html URL)

    By accident someone may link to the index.html form of the URL, which is bad for SEO and looks ugly (remember the nineties?). Is it possible in Nginx to give a 404 error on the index.html URL, but serve the slash URL? I tried this:

        location ~ /index\.html$ {
            return 404;
        }

    But it seems that Nginx does some internal rewrite of the slash URL to the index.html URL, and then matches this location, so we get a 404 even on the slash URL. Note that to catch mistakes, we want index.html URLs to be an error, not just a redirect to the slash URL.
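
    A hedged sketch that tests the original request line instead of the internally rewritten URI: $request_uri keeps what the client actually asked for, while the index lookup only rewrites $uri:

        location / {
            if ($request_uri ~ "/index\.html$") {
                return 404;
            }
            index index.html;
        }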

  • Dual Boot Installing Ubuntu 12.04 with Windows 7 (64) on a non UEFI system fails

    - by Randnum
    I cannot seem to install the correct boot loader for a non-UEFI firmware system. I'm trying to install Ubuntu 12.04 and Windows 7 (64-bit), which are both technically compatible with GPT, but for Windows only if the firmware is UEFI enabled. My system uses the old BIOS system and does not support UEFI. Therefore, whenever I finish my Ubuntu install and try to install Windows, I get a "cannot install to GPT partition type" error. Even if I use GParted to format a special NTFS filesystem for Windows, it can't handle the GPT partition style, because it doesn't have UEFI. But my Ubuntu install always forces GPT during installation and never asks if I want to install the old BIOS-style MBR instead. How do I resolve this? Both OSes will install fine on their own; the problem is that when I try to install the second OS it doesn't recognize any of the other's partitions and tries to write its own on top of them. I've tried both OSes first and always run into the same problem. Since there is no way to make Windows recognize GPT without upgrading my motherboard, how do I tell Ubuntu to use the old BIOS MBR on install? Do I have to download a special Ubuntu with a specific GRUB version? Or should I manually configure my partition somehow to force it not to use GPT?

    Thank you,
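
    A hedged sketch of forcing an MBR (msdos) label before either installer runs, e.g. from the Ubuntu live session; /dev/sda is a placeholder and this wipes the whole disk:

        sudo parted /dev/sda mklabel msdos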

  • nginx giving 404 when using set in an if-block

    - by ba
    I've just started using nginx and I'm now trying to make it play nice with the WordPress plugin WP-SuperCache, which adds static files of my blog posts. To serve the static file I need to make sure that some cookies aren't set, that it's not a POST request, and that the cached/static file exists. I found this guide and it seems like a good fit. But I've noticed that as soon as I try to set something inside an if, my site starts giving 404s on a URL that isn't rewritten. The location block of the configuration:

        location /blog {
            index index.php;
            set $supercache_file '';
            set $supercache_ok 1;
            if ($request_method = POST) {
                set $supercache_ok 0;
            }
            if ($http_cookie ~* "(comment_author_|wordpress|wp-postpass_)") {
                set $supercache_ok '0';
            }
            if ($supercache_ok = '1') {
                set $supercache_file '$document_root/blog/wp-content/cache/supercache/$http_host/$1/index.html.gz';
            }
            if (-f $supercache_file) {
                rewrite ^(.*)$ $supercache_file break;
            }
            try_files $uri $uri/ @wordpress;
        }

    The above doesn't work, and if I remove all the ifs above and add

        if ($http_host = 'mydomain.tld') {
            set $supercache_ok = 1;
        }

    then I get the exact same message in the errors.log. Namely:

        2010/05/12 19:53:39 [error] 15977#0: *84 "/home/ba/www/domain.tld/blog/2010/05/blogpost/index.php" is not found (2: No such file or directory), client: <ip>, server: domain.tld, request: "GET /blog/2010/05/blogpost/ HTTP/1.1", host: "domain.tld", referrer: "http://domain.tld/blog/"

    Remove the if and everything works as it should. I'm stymied; no idea at all where I should start searching. =/

        ba@cell: ~> nginx -v
        nginx version: nginx/0.7.65
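
    A hedged rework that avoids setting variables inside if blocks in location context, a known trouble spot in nginx (the "if is evil" problem): compute the flags with map in the http block instead. The regex form of map needs a newer nginx than the 0.7.65 shown above, so this is a sketch, not a drop-in:

        http {
            map $request_method $not_post {
                default 1;
                POST    0;
            }
            map $http_cookie $no_wp_cookie {
                default 1;
                "~*(comment_author_|wordpress|wp-postpass_)" 0;
            }
        }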

  • Bonnie does not provide speed for Sequential Input / Block

    - by Lqp1
    I'm using Proxmox VE and I would like to run some benchmarks regarding the performance of this product. One of these benchmarks is bonnie++; it runs very well in a VM (qemu-kvm), but when I run it in a container (OpenVZ) it does not give me a reading speed (only writing). I don't understand why... Does anyone know what's happening? VMs and containers are Debian 7.4. Here's the output of bonnie in the container:

        root@ct2:/# bonnie++ -u root
        Using uid:0, gid:0.
        Writing a byte at a time...done
        Writing intelligently...done
        Rewriting...done
        Reading a byte at a time...done
        Reading intelligently...done
        start 'em...done...done...done...done...done...
        Create files in sequential order...done.
        Stat files in sequential order...done.
        Delete files in sequential order...done.
        Create files in random order...done.
        Stat files in random order...done.
        Delete files in random order...done.
        Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
        Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
        Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
        ct2              1G   843  99 59116   8 60351   4  4966  99 +++++ +++  2745   8
        Latency              9558us    3582ms     527ms    1672us     936us    5248us
        Version  1.96       ------Sequential Create------ --------Random Create--------
        ct2                 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                      files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                         16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
        Latency             19567us     358us     368us     107us      59us      25us
        1.96,1.96,ct2,1,1401810323,1G,,843,99,59116,8,60351,4,4966,99,+++++,+++,2745,8,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,9558us,3582ms,527ms,1672us,936us,5248us,19567us,358us,368us,107us,59us,25us

    The filesystem for / is of type "simfs", which is a pseudo-filesystem for OpenVZ. Maybe it's related to this issue, but I can't find anyone with the same issue with bonnie and OpenVZ... Thanks for your help. Regards, Thomas.
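
    One hedged explanation, with a sketch: bonnie++ prints "+++++ +++" when a phase finishes too quickly to measure reliably, which for Sequential Input usually means the test file fit in cache. Inside the container, bonnie++ may misdetect available RAM, so forcing a file set larger than the host's memory is worth a try (sizes are assumptions):

        # -s total test file size, -r the amount of RAM bonnie++ should assume
        bonnie++ -u root -s 16g -r 8g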

  • Nginx load balancing and maintaining URLs

    - by Steve Klabnik
    I'm trying to use nginx as a load balancer, and it's working great. One problem, though: the load-balancing box is at 123.123.123.123, and the backend box is 456.456.456.456. So I have this config:

        upstream backend {
            server 456.456.456.456;
        }

        server {
            listen 80;
            server_name 123.123.123.123;

            access_log off;
            error_log off;

            location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://backend;
            }
        }

    This works great. I hit 123.123.123.123 in my browser, and the page comes up. But now the URL in the browser says http://456.456.456.456. Do I need to use a rewrite rule or something to keep the URL correct? I don't want it to be different when going to different backend servers. None of the tutorials I've read have mentioned anything about this.
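
    A hedged sketch: if the backend application is issuing redirects that carry its own address, proxy_redirect can rewrite the Location headers on the way back through the balancer (the addresses below mirror the placeholders above):

        location / {
            proxy_set_header Host $host;
            proxy_pass http://backend;
            proxy_redirect http://456.456.456.456/ http://123.123.123.123/;
        }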

  • Nginx server 301 Moved permanently

    - by user145714
    When I did a curl -v http://site-wordpress.com:81 I received this result:

        * About to connect() to site-wordpress.com port 81 (#0)
        *   Trying ip... connected
        * Connected to site-wordpress.com (ip) port 81 (#0)
        > GET / HTTP/1.1
        > User-Agent: curl/7.19.7 (x86_64-unknown-linux-gnu) libcurl/7.19.7 NSS/3.12.6.2 zlib/1.2.3 libidn/1.18 libssh2/1.2.2
        > Host: site-wordpress.com:81
        > Accept: */*
        >
        < HTTP/1.1 301 Moved Permanently
        < Server: nginx/1.2.4
        < Date: Fri, 16 Nov 2012 16:28:19 GMT
        < Content-Type: text/html; charset=UTF-8
        < Transfer-Encoding: chunked
        < Connection: keep-alive
        < X-Pingback: The URL above/xmlrpc.php
        < Location: The URL above

    Seems like this line in my fastcgi_params is causing grief:

        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    If I remove this line, I get HTTP/1.1 200 OK, but I get a blank page. This is my config:

        server {
            listen 81;
            server_name site-wordpress.com;
            root /var/www/html/site;

            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            index index.php;

            if (!-e $request_filename){
                rewrite ^(.*)$ /index.php break;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000; # port where FastCGI processes were spawned
                fastcgi_index index.php;
                include /etc/nginx/fastcgi_params;
                include /etc/nginx/mime.types;
            }

            location ~ \.css {
                add_header Content-Type text/css;
            }

            location ~ \.js {
                add_header Content-Type application/x-javascript;
            }
        }

    This config works with the IP and port 80. But now I need to use a domain name and port 81, which doesn't work. Could someone please help? Thanks.
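
    A hedged guess at the cause, with a sketch: the 301 comes from WordPress itself (note the X-Pingback header in the response), whose canonical-redirect logic bounces requests to the site URL stored in its database. Pinning the ported URL in wp-config.php is one way to test that theory (values are placeholders):

        /* force WordPress to build URLs with the :81 port */
        define('WP_HOME',    'http://site-wordpress.com:81');
        define('WP_SITEURL', 'http://site-wordpress.com:81');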
