Search Results

Search found 8013 results on 321 pages for 'clean urls'.

Page 71/321 | < Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >

  • Error opening hyperlinks in Excel 2003

    - by richardtallent
    When clicking to follow hyperlinks from Excel, I'm now getting this error: Unable to open http://blah... Cannot download the information you requested. The hyperlinks in the Excel file are created using the HYPERLINK() formula. I use Google Chrome as my default browser. The web site in question uses Basic Authentication, and I've entered correct credentials when prompted (the dialog looked like an IE auth box, not Chrome's, but it's always been that way, even when it was working properly). This hasn't been an issue until recently. I'm guessing our IT department made some lame change to IE's configuration that is causing Office to not be able to open the URLs, despite having Chrome as my browser. Things I've checked already: URLs are good, they work fine when pasted manually into Chrome, IE, or Firefox. IE is not set to Work Offline (already found that suggestion on Google). I checked Program Access and Defaults and verified that Chrome is selected. Nothing in the URL requires URLEncoding, so it's no goofy issue with encoding. I've had reports from some other users now and then about the same problem, but this is the first time I've experienced it myself.

    Read the article

  • SquidGuard and Active Directory: how to deal with multiple groups?

    - by Massimo
    I'm setting up SquidGuard (1.4) to validate users against an Active Directory domain and apply ACLs based on group membership; this is an example of my squidGuard.conf: src AD_Group_A { ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_A%2cdc=domain%2cdc=com)) } src AD_Group_B { ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_B%2cdc=domain%2cdc=com)) } dest dest_a { domainlist dest_a/domains urllist dest_b/urls log dest_a.log } dest dest_b { domainlist dest_b/domains urllist dest_b/urls log dest_b.log } acl { AD_Group_A { pass dest_a !dest_b all redirect http://some.url } AD_Group_B { pass !dest_a dest_b all redirect http://some.url } default { pass !dest_a !dest_b all redirect http://some.url } } All works fine if a user is a member of Group_A OR Group_B. But if a user is a member of BOTH groups, only the first source rule is evaluated, thus applying only the first ACL. I understand this is due to how source rule matching works in SquidGuard (if one rule matches, evaluation stops there and then the related ACL is applied); so I tried this, too: src AD_Group_A_B { ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_A%2cdc=domain%2cdc=com)) ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_B%2cdc=domain%2cdc=com)) } acl { AD_Group_A_B { pass dest_a dest_b all redirect http://some.url } [...] } But this doesn't work either: if a user is a member of either one of those groups, the whole source rule is matched anyway, so he can reach both destinations (which is of course not what I want). The only solution I found so far is creating a THIRD group in AD, and assigning a source rule and an ACL to it; but this setup grows exponentially with more than two or three destination sets. Is there any way to handle this better?
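    One approach worth sketching, which avoids touching AD at all, is an extra src block whose LDAP filter ANDs both memberOf clauses, defined before the two single-group sources so dual-membership users match it first. This is only a sketch built from the filters above; it assumes those users should be allowed both destination sets, and it still needs one block per combination (just no extra AD groups):

        src AD_Group_A_and_B {
            # matches only users that are members of BOTH groups
            ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_A%2cdc=domain%2cdc=com)(memberOf=cn=Group_B%2cdc=domain%2cdc=com))
        }

        acl {
            # listed first, so users in both groups get this ACL instead of AD_Group_A's
            AD_Group_A_and_B {
                pass dest_a dest_b all
                redirect http://some.url
            }
            AD_Group_A { ... }
            AD_Group_B { ... }
            default { ... }
        }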

    Read the article

  • F5 Networks iRule/Tcl - Escaping UNICODE 6-character escape sequences so they are processed as and r

    - by openid.malcolmgin.com
    We are trying to get an F5 BIG-IP LTM iRule working properly with SharePoint 2007 in an SSL termination role. This architecture offloads all of the SSL processing to the F5 and the F5 forwards interactive requests/responses to the SharePoint front end servers via HTTP only (over a secure network). For the purposes of this discussion, iRules are parsed by a Tcl interpretation engine on the F5 Networks BIG-IP device. As such, the F5 does two things to traffic passing through it: Redirects any request to port 80 (HTTP) to port 443 (HTTPS) through HTTP 302 redirects and URL rewriting. Rewrites any response to the browser to selectively rewrite URLs embedded within the HTML so that they go to port 443 (HTTPS). This prevents the 302 redirects from breaking DHTML generated by SharePoint. We've got part 1 working fine. The main problem with part 2 is that in the response rewrite because of XML namespaces and other similar issues, not ALL matches for "http:" can be changed to "https:". Some have to remain "http:". Additionally, some of the "http:" URLs are difficult in that they live in SharePoint-generated JavaScript and their slashes (i.e. "/") are actually represented in the HTML by the UNICODE 6-character string, "\u002f". For example, in the case of these tricky ones, the literal string in the outgoing HTML is: http:\u002f\u002fservername.company.com\u002f And should be changed to: https:\u002f\u002fservername.company.com\u002f Currently we can't even figure out how to get a match in a search/replace expression on these UNICODE sequence string literals. It seems that no matter how we slice it, the Tcl interpreter is interpreting the "\u002f" string into the "/" translation before it does anything else. We've tried various combinations of Tcl escaping methods we know about (mainly double-quotes and using an extra "\" to escape the "\" in the UNICODE string) but are looking for more methods, preferably ones that work. Does anyone have any ideas or any pointers to where we can effectively self-educate about this? Thanks very much in advance.
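    For what it's worth, in plain Tcl the standard way to stop \u002f from being substituted before any comparison runs is brace quoting, since braces suppress backslash substitution (double quotes do not). A minimal sketch of the quoting idea, outside any iRule context, with a made-up variable standing in for the outgoing HTML:

        # inner braces keep each \u002f as a literal six-character sequence,
        # both in the sample data and in the string map pattern/replacement
        set html {<a href="http:\u002f\u002fservername.company.com\u002f">link</a>}
        set html [string map {{http:\u002f\u002f} {https:\u002f\u002f}} $html]

    The same brace-quoted map can then be applied to whatever variable holds the response payload inside the iRule, where the data itself is never subject to Tcl substitution anyway.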

    Read the article

  • Apache2 MPM-prefork MPM-worker multiple instances on same Ubuntu host machine possible?

    - by user60985
    I have a live Apache2/MPM-Worker instance running Django. I want to also run an Apache2/MPM-prefork instance to run some Drupal6 applications on the same host machine and utilize a vast selection of PHP modules that run on the prefork model. I plan to use my MPM-worker instance to reverse proxy to the Apache2-prefork instance for URLS starting with myhost.com/drupal6/. It seems theoretically doable/configurable by having the second Apache2-prefork instance configured to listen on an internal port, say 127.0.0.1:8080 and having my current Apache2-worker configured to proxy pass and reverse pass to it for the 'drupal6' URLs. However, how do I compile or install the apache2-prefork version so it has a different executable name than /usr/sbin/apache2, for example /usr/sbin/apache2p, and so apache2ctl has a different name, say apache2pctl, and that apache2pctl invokes the /usr/sbin/apache2p instead of /usr/sbin/apache2... and so on down the line (eg /etc/apache2p) so I can start and restart my two instances independently? As I understand it, no one executable of 'apache2' can be compiled with both the MPM-prefork and MPM-worker modules, so it seems I need two separate versions of the apache2 MPM flavors. But then I need to invoke and control them by separate names, I assume. I looked at the configuration options for apache2 and I am a bit queasy about compiling a second apache2 version with prefork because I am not sure I can set all the options so that none of my current apache2 files is overwritten. Is there a way? Is there a standard solution to separately installing and controlling prefork and worker apache2 executables on the same machine without them stepping on each other during installation or operation?
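    If the compile-from-source route is taken for the prefork copy, the usual way to keep it from stepping on the packaged apache2 is to give it its own prefix and program name so its binaries, config and logs live in a separate tree. A rough sketch against an Apache 2.2 source tree (the prefix and name here are only examples):

        # self-contained prefork build that never touches /usr/sbin/apache2 or /etc/apache2
        ./configure --prefix=/usr/local/apache2-prefork \
                    --with-mpm=prefork \
                    --with-program-name=apache2p
        make && make install

        # controlled by its own apachectl, independent of the system instance
        /usr/local/apache2-prefork/bin/apachectl start

    In that instance's httpd.conf you would then set Listen 127.0.0.1:8080 so only the worker instance ever talks to it directly.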

    Read the article

  • Unable to start tomcat6: Java error (Exception in thread "main")

    - by Nyxynyx
    After installing tomcat6 on CentOS 6.3, I am unable to start the tomcat6 server. root@host [/var/log/tomcat6]# service tomcat6 start Starting tomcat6: [ OK ] Although it says OK, I can't access http://mydomain.com:8080. catalina.out Exception in thread "main" java.lang.NullPointerException at java.lang.VMClassLoader.defineClass(libgcj.so.10) at java.lang.ClassLoader.defineClass(libgcj.so.10) at java.security.SecureClassLoader.defineClass(libgcj.so.10) at java.net.URLClassLoader.findClass(libgcj.so.10) at gnu.gcj.runtime.SystemClassLoader.findClass(libgcj.so.10) at java.lang.ClassLoader.loadClass(libgcj.so.10) at java.lang.ClassLoader.loadClass(libgcj.so.10) at gnu.java.lang.MainThread.run(libgcj.so.10) Tomcat6 was installed using yum: yum -y install java tomcat6 tomcat6-webapps tomcat6-admin-webapps When I tried to find the version: tomcat6 version: Exception in thread "main" java.lang.NoClassDefFoundError: org.apache.catalina.util.ServerInfo at gnu.java.lang.MainThread.run(libgcj.so.10) Caused by: java.lang.ClassNotFoundException: org.apache.catalina.util.ServerInfo not found in gnu.gcj.runtime.SystemClassLoader{urls=[], parent=gnu.gcj.runtime.ExtensionClassLoader{urls=[], parent=null}} at java.net.URLClassLoader.findClass(libgcj.so.10) at gnu.gcj.runtime.SystemClassLoader.findClass(libgcj.so.10) at java.lang.ClassLoader.loadClass(libgcj.so.10) at java.lang.ClassLoader.loadClass(libgcj.so.10) at gnu.java.lang.MainThread.run(libgcj.so.10) Any idea what I should do? Thanks!
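    Those libgcj frames are the giveaway: Tomcat is being started under GCJ (GNU's Java runtime) rather than a real JDK, and Tomcat 6 generally won't run on it. One thing to try, assuming stock CentOS 6 packages (exact package names, the JVM path and the config file may differ on your box):

        # install OpenJDK and make it the system default instead of gcj
        yum -y install java-1.6.0-openjdk java-1.6.0-openjdk-devel
        alternatives --config java        # choose the OpenJDK entry

        # point tomcat6 at the OpenJDK JRE explicitly, then restart
        # (the JAVA_HOME path is illustrative; /etc/tomcat6/tomcat6.conf also works on some setups)
        echo 'JAVA_HOME="/usr/lib/jvm/jre-1.6.0-openjdk.x86_64"' >> /etc/sysconfig/tomcat6
        service tomcat6 restart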

    Read the article

  • What's with the accesses to $random_existing_file/cache/df.php?

    - by Bernd Jendrissek
    Occasionally I eyeball Apache's access_log and lately I've been noticing these accesses to URLs that I don't serve. They're correctly 404'ed, but I'd like to know just who and what is involved here. "Obviously" it's some sort of vulnerability probing; I'd like to know which. (Not that it affects me, but I like to know the score.) Here's an example: 69.89.31.206 - - [28/Nov/2012:17:36:34 +0200] "GET /cvfull.pdf/cache/df.php HTTP/1.1" 404 489 "-" "-" Oddly, all 26 attempts are to either /cache/df.php, or to /cvfull.pdf/cache/df.php - they come in pairs. A few weeks ago it was zx.php, now it's df.php - I'm assuming the target is the same. Perhaps I should be flattered that a script is thinking of hiring me. Seriously, my CV is one of only two PDF files on my site, so I can only guess that non-PDF URLs aren't interesting? I've tried Googling for "cache df php", but my Google-fu is weak at the best of times, so I can only find a few reports of other script attacks. What's the vulnerability being scanned for here?

    Read the article

  • How to install a private user script in Chrome 21+?

    - by Mathias Bynens
    In Chrome 20 and older versions, you could simply open any .user.js file in Chrome and it would prompt you to install the user script. However, in Chrome 21 and up, it downloads the file instead, and displays a warning at the top saying “Extensions, apps, and user scripts can only be added from the Chrome Web Store”. The “Learn More” link points to http://support.google.com/chrome_webstore/bin/answer.py?hl=en&answer=2664769, but that page doesn’t say anything about user scripts, only about extensions in .crx format, apps, and themes. This part sounded interesting: Enterprise Administrators: You can specify URLs that are allowed to install extensions, apps, and themes directly through the ExtensionInstallSources policy. So, I ran the following commands, then restarted Chrome and Chrome Canary: defaults write com.google.Chrome ExtensionInstallSources -array "https://gist.github.com/*" defaults write com.google.Chrome.canary ExtensionInstallSources -array "https://gist.github.com/*" Sadly, these settings only seem to affect extensions, apps, and themes (as it says in the text), not user scripts. (I’ve filed a bug asking to make this setting affect user scripts as well.) Any ideas on how to install a private user script (that I don’t want to add to the Chrome Web Store) in Chrome 21+? Update: The problem was that gist.github.com’s raw URLs redirect to a different domain. So, use these commands instead: # Allow installing user scripts via GitHub or Userscripts.org defaults write com.google.Chrome ExtensionInstallSources -array "https://*.github.com/*" "http://userscripts.org/*" defaults write com.google.Chrome.canary ExtensionInstallSources -array "https://*.github.com/*" "http://userscripts.org/*" This works!

    Read the article

  • Server Directory Not Accessible

    - by GusDeCooL
    I've got strange things happening on the live server, but everything is normal on my local server. My local server is a Mac, and my live server is Linux. Consider when I try to access some files: http://redddor.babonmultimedia.com/assets/images/map-1.jpg This works correctly. http://redddor.babonmultimedia.com/assets/modules/evogallery/check.php Returns 404. I'm pretty sure my file is in there and there is no typo mistake. How come it gives me a 404? There is only one .htaccess on the root server and its configuration is like this. # For full documentation and other suggested options, please see # http://svn.modxcms.com/docs/display/MODx096/Friendly+URL+Solutions # including for unexpected logouts in multi-server/cloud environments # and especially for the first three commented out rules #php_flag register_globals Off #AddDefaultCharset utf-8 #php_value date.timezone Europe/Moscow Options +FollowSymlinks RewriteEngine On RewriteBase / <IfModule mod_security.c> SecFilterEngine Off </IfModule> # Fix Apache internal dummy connections from breaking [(site_url)] cache RewriteCond %{HTTP_USER_AGENT} ^.*internal\ dummy\ connection.*$ [NC] RewriteRule .* - [F,L] # Rewrite domain.com -> www.domain.com -- used with SEO Strict URLs plugin #RewriteCond %{HTTP_HOST} . #RewriteCond %{HTTP_HOST} !^www\.example\.com [NC] #RewriteRule (.*) http://www.example.com/$1 [R=301,L] # Exclude /assets and /manager directories and images from rewrite rules RewriteRule ^(manager|assets)/*$ - [L] RewriteRule \.(jpg|jpeg|png|gif|ico)$ - [L] # For Friendly URLs RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php?q=$1 [L,QSA] # Reduce server overhead by enabling output compression if supported. #php_flag zlib.output_compression On #php_value zlib.output_compression_level 5

    Read the article

  • Apache rewrite rules and special characters

    - by Massimo
    I have a server where some files have an actual %20 in their name (they are generated by an automated tool which handles spaces this way, and I can't do anything about this); this is not a space: it's "%" followed by "2" followed by "0". On this server, there is an Apache web server, and there are some web pages which link to those files, using their name in URLs like http://servername/file%20with%20a%20name%20like%20this.html; those pages are also generated by the same tool, so I (again!) can't do anything about that. A full search-and-replace on all files, pages and URLs is out of the question here. The problem: when Apache gets called with a URL like the one above, it (correctly) translates the "%20"s into spaces, and then of course it can't find the files, because they don't have actual spaces in their names. How can I solve this? I discovered that by using a URL like http://servername/file%2520name.html it works nicely, because then Apache translates "%25" into a "%" sign, and thus the correct filename gets built. I tried using an Apache rewrite rule, and I can successfully replace spaces with hyphens with a syntax like this: RewriteRule (.*)\ (.*) $1-$2 The problem: when I try to replace them with a "%2520" sequence, this just doesn't happen. If I use RewriteRule (.*)\ (.*) $1%2520$2 then the resulting URL is http://servername/file520name.html; I've tried "%25" too, but then I only get a "5"; it just looks like the initial "%2" gets somewhat discarded. The questions: How can I build such a regexp to replace spaces with "%2520"? Is this the only way I can deal with this issue (other than a full search-and-replace which, as I said, can't be done), or do you have any better idea?
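    For what it's worth, the vanishing characters look like mod_rewrite reading "%2" in the substitution as a RewriteCond back-reference (which is empty here) rather than literal text. The documented escape is a backslash before the percent sign, and the NE (noescape) flag keeps Apache from re-encoding the result if the rule ends up as an external redirect. A sketch of just that change, untested against this setup:

        # "\%" makes the percent literal instead of a %N back-reference
        RewriteRule (.*)\ (.*) $1\%2520$2 [NE]

    Whether %2520 or %20 is the right substitution depends on whether the rule stays an internal rewrite or becomes a redirect that the browser re-requests, so both are worth trying.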

    Read the article

  • Google: "302 Moved" in Firefox

    - by Virtlink
    For some Google search queries executed from the Firefox search bar, or manually by typing in the URL, I get a ''302 Moved'' page. I did a quick virus scan, have checked the hosts file and the plugins and add-ons that are installed in Firefox. Nothing is out of the ordinary. What could be the problem? These URLs (and any URL with google.com, empty and firefox-a in them) show me a 302 Moved page: https://www.google.com/search?q=c%23+empty+array&client=firefox-a https://www.google.com/search?q=empty&client=firefox-a Whereas these URLs work fine: https://www.google.com/search?q=c%23+empty+array (no firefox-a) https://www.google.com/search?q=empty (no firefox-a) https://www.google.com/search?q=c%23+array&client=firefox-a (no empty) https://www.google.nl/search?q=c%23+empty+array&client=firefox-a (no .com) By default Google redirects my queries to their .nl website, and uses HTTPS. I am currently executing a full system virus scan using Security Essentials. My Firefox plugins were up-to-date. No unfamiliar Firefox plugins or add-ons found. Restarting Firefox did not solve the issue. The issue does not occur in Internet Explorer. The hosts file did not contain any unfamiliar entries.

    Read the article

  • Google Chrome doesn't delete my browsing history correctly

    - by Derfder
    I have deleted everything that I could from my browser history: chrome://settings/clearBrowserData I checked everything and selected the beginning of time. Then when I access my browsing history: chrome://history/ There is nothing (as I expected), or to be precise No history entries found. The problem is that I still see my specific search URL with a very specific query I made a month ago, when I start typing the URL of the website into Chrome's address bar. How is that possible? Where is Google storing this data, and how do I get rid of it completely? I want to mention that my autosuggestion options look like this: So, what else should I delete to remove everything from autosuggestions? Right now it has some specific URLs (subpages, pages with a very specific search query I made a month or so ago). I have tried restarting Chrome and restarting my computer, but the URLs are still in the autosuggestion. Also I am unable to turn off the autosuggestion, even though I have unchecked that option in settings. My Google Chrome version is: Version 27.0.1453.116 m (probably the latest) Btw, in Firefox deleting the history works as expected. So, I guess that this has nothing to do with the operating system I am using (Windows 7), but is an issue with Chrome itself.

    Read the article

  • HTTP responses curl and wget different results

    - by Fab
    To check the HTTP response headers for a set of URLs, I send the following request headers with curl: foreach ( $urls as $url ) { // Setup headers - I used the same headers from Firefox version 2.0.0.6 $header[ ] = "Accept: text/xml,application/xml,application/xhtml+xml,"; $header[ ] = "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5"; $header[ ] = "Cache-Control: max-age=0"; $header[ ] = "Connection: keep-alive"; $header[ ] = "Keep-Alive: 300"; $header[ ] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7"; $header[ ] = "Accept-Language: en-us,en;q=0.5"; $header[ ] = "Pragma: "; // browsers keep this blank. curl_setopt( $ch, CURLOPT_URL, $url ); curl_setopt( $ch, CURLOPT_USERAGENT, 'Googlebot/2.1 (+http://www.google.com/bot.html)'); curl_setopt( $ch, CURLOPT_HTTPHEADER, $header); curl_setopt( $ch, CURLOPT_REFERER, 'http://www.google.com'); curl_setopt( $ch, CURLOPT_HEADER, true ); curl_setopt( $ch, CURLOPT_NOBODY, true ); curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true ); curl_setopt( $ch, CURLOPT_FOLLOWLOCATION, true ); curl_setopt( $ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY ); curl_setopt( $ch, CURLOPT_TIMEOUT, 10 ); //timeout 10 seconds } Sometimes I receive 200 OK, which is good, other times 301, 302, 307, which I consider good as well, but other times I receive weird statuses such as 406, 500, 504, which should indicate an invalid URL, yet when I open them in the browser they are fine. For example, the script returns http://www.awe.co.uk/ => HTTP/1.1 406 Not Acceptable while wget returns wget http://www.awe.co.uk/ --2011-06-23 15:26:26-- http://www.awe.co.uk/ Resolving www.awe.co.uk... 77.73.123.140 Connecting to www.awe.co.uk|77.73.123.140|:80... connected. HTTP request sent, awaiting response... 200 OK Does anyone know which request header I am missing or adding in excess?
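    One thing worth noting in the snippet (a hypothesis, not a confirmed cause of the 406): the Accept header is split over two array entries, so the second string has no header name and the Accept value that actually goes out is mangled or truncated; picky servers can answer a malformed Accept with 406 Not Acceptable. A sketch of sending it as a single entry, everything else unchanged:

        // keep the whole Accept header in one element so curl sends one well-formed line
        $header[] = "Accept: text/xml,application/xml,application/xhtml+xml,"
                  . "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
        $header[] = "Cache-Control: max-age=0";
        // ... rest of the headers as before ...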

    Read the article

  • How can Django/WSGI and PHP share / on Apache?

    - by Mark Snidovich
    I have a server running an established PHP site, as well as some Django apps. Currently, a VirtualHost set up for PHP listens on port 80, and requests to certain directories are proxied to a VirtualHost set up for Django with WSGI. I'd like to change it so Django handles anything that doesn't exist as a PHP script or static file. For example, / -parsed by PHP as index.php /page.php -parsed as PHP normally /images/border.jpg -served as a static file /johnfreep -handled by Django (interpreted by urls.py) /pages/john -handled by Django /(anything else) - handled by Django I have a few ideas. It seems the options are 'php first' or 'wsgi first': Set up Django on port 80, and set Apache to skip all the known PHP, CSS or image files. Maybe using SetHandler? Anything else goes to Django to be parsed by urls.py. Or set up a script referring everything to Django as a 404 handler on PHP. So, if a file is not found for a name, it sends the request path to a VirtualHost running Django to be parsed.
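    A rough sketch of the 'PHP first' variant with mod_wsgi in the same vhost, where anything that isn't an existing file or directory falls through to Django. The /_django alias and project path are made-up placeholders, and Django may also need FORCE_SCRIPT_NAME adjusted so it doesn't generate URLs containing the alias:

        # Django mounted out of the way under a private alias
        WSGIScriptAlias /_django /srv/myproject/wsgi.py

        RewriteEngine On
        # existing PHP scripts, images, CSS, etc. are served as before
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # everything else is handed to the WSGI application with its path intact
        RewriteRule ^/(.*)$ /_django/$1 [PT,L]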

    Read the article

  • 9000+ different subdomains 301 to main domain, .htaccess apache

    - by Karim
    I bought a domain that had various subdomains such as Kim.domain.com/whatever john.domain.com/whatever1 Lizo.domain.com/whatever2 Simon.domain.com/whatever1 And this was in the thousands, and there were also links to these pages. I'd like to do a 301 redirect for all these URLs into http://domain.com Any idea how this could be done? This is for an Apache web server and needs to be done via .htaccess. I have implemented the solution from reading the answer below. RewriteEngine On RewriteCond %{HTTP_HOST} !^www. domain.com$ RewriteCond %{HTTP_HOST} !^$ RewriteRule ^/(.*)$ http:/ / www. domain.com/$1 [L,R=301] However I have a slight problem: I would like to redirect all subdomains + subfolders to http://www. domain.com/ With the exception of http: //domain. com/subfolder/, in which case I would like to redirect to http: // www.domain. com/subfolder/ [i.e. exception for no subdomain] I'm guessing I need to add an exception; what can I do to implement this? Note: example URLs above have had spaces added to them to prevent spam blocks from blocking the post.
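    One way to sketch the exception in .htaccess, assuming wildcard DNS already points every subdomain at this vhost (untested): handle the bare domain first so its path is preserved, then let a catch-all send every other host to the homepage.

        RewriteEngine On

        # domain.com/subfolder/ -> www.domain.com/subfolder/ (path kept)
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.domain.com/$1 [L,R=301]

        # any other host (kim.domain.com, john.domain.com, ...) -> www.domain.com homepage
        RewriteCond %{HTTP_HOST} !^www\.domain\.com$ [NC]
        RewriteCond %{HTTP_HOST} !^$
        RewriteRule ^ http://www.domain.com/ [L,R=301]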

    Read the article

  • Solution to easily share large files with non-tech-savvy users?

    - by Tim
    Hey all, We've got a server setup at work which we'd like to use to exchange large files with known clients easily. We're looking into software to facilitate this, but somehow typing "large file hosting" into Google gives questionable results.. ;) We've come up with the following requirements, and I hope any of you can point us in the direction of a solution that offers this functionality, or is malleable to our needs. Synchronization / revision management is of no concern, it's mostly single large (up to 1+ GB) file uploads & downloads we'll need. We'd like to make the downloads expire & be removed after a certain number of days / downloads, to limit the amount of cleanup we'd have to do. The data files exchanged sometimes hold confidential information, so the URLs generated should be random and not publicly visible. Our users are of the less technically savvy variety, so a simple webform would be best over a desktop client (because we also have to support a mix of operating systems). As for use of the system we'd either like to send out generated random URLs for them to upload their files, or have an easy way to manage & expire users. Works on a Linux (Ubuntu) server (so nothing .Net-related please) Does anyone know of software that fits the above criteria? We've already seen a few instances of this within the scientific community, but nothing we could use directly.. Best regards, Tim

    Read the article

  • Map a URL bought with Dreamhost to Amazon EC2 (AWS)

    - by Edan Maor
    I have several URLs I purchased through Dreamhost. I'm starting to use Amazon's AWS, and I'd like to map the URLs to Amazon. This is something of a silly question, and I've already done the same thing several times to other services (mapping from Dreamhost to WebFaction). But for some reason when I tried to find the proper way to do the same mapping to Amazon, I found a lot of detailed writing talking about whether I should be using CNAME or A records, etc. So I wanted to ask in the simplest possible terms and hopefully get a simple, concrete answer: I bought a URL from Dreamhost, I have an EC2 server running on AWS (to which I already mapped an Elastic IP address). How do I make the URL map to AWS? And if there are several options, which one should I effectively be using? P.S. Meta-question - why are things so much more difficult with AWS? When I search Google for "Move from Dreamhost to WebFaction", I get very simple answers on how to do the mapping. In what way is AWS different?
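    In concrete terms, since an Elastic IP is already attached, the usual answer is simply an A record: in Dreamhost's DNS panel, point the domain's A record at the Elastic IP (and add a www entry as well). The CNAME-versus-A debate mostly matters when there is no fixed IP and you must point at an Amazon hostname that can change; an Elastic IP sidesteps that. Roughly, the records end up equivalent to this (names and IP are placeholders):

        example.com.       A      203.0.113.10     ; the Elastic IP of the EC2 instance
        www.example.com.   CNAME  example.com.     ; or a second A record to the same IP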

    Read the article

  • How can i export a folder of Firefox Bookmarks into a text file

    - by Memor-X
    I want to clean up my bookmark folders by getting rid of duplicate links. I have created a program which will import 2 text files which contain URLs like this File 1: http://www.google/com http://anime.stackexchange.com/ https://www.fanfiction.net/guidelines/ https://www.fanfiction.net/anime/Magical-Girl-Lyrical-Nanoha/?&srt=1&g1=2&lan=1&r=103&s=2 File 2: http://scifi.stackexchange.com/ http://scifi.stackexchange.com/questions/56142/why-didnt-dumbledore-just-hunt-voldemort-down http://anime.stackexchange.com/ http://scifi.stackexchange.com/questions/5650/how-can-the-doctor-be-poisoned The program compares the 2 lists and creates a single master list with duplicate URLs removed. Now I have a few backup bookmark folders in Firefox; on occasion I'll bookmark all tabs into a new folder with the date of the backup before I close off tabs or reset my PC. Each folder can have between 1000-2000 bookmarks, and sometimes there are a bunch of pages which keep getting bookmarked, e.g., I have ~50 pages on the Magical Girl Lyrical Nanoha Wiki on different spells, characters and terminology which I commonly look back on. I'd like to know how I can export a bookmark folder so I have a list of URLs similar to what I use in my program.
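    Firefox itself only exports the whole bookmark tree (Bookmarks Library, Import and Backup, Export Bookmarks to HTML), but the URLs can then be pulled out of that file; in the Netscape bookmark format Firefox writes, each folder is an H3 heading followed by its links, so a folder's block can be cut out of the HTML first if only one folder is wanted. A rough sketch for extracting every URL into a plain list on a Unix-like shell (the upper-case HREF is how Firefox usually writes the attribute; adjust if yours differs):

        # one URL per line, ready to feed into the de-duplication program
        grep -o 'HREF="[^"]*"' bookmarks.html | sed -e 's/^HREF="//' -e 's/"$//' > urls.txt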

    Read the article

  • plesk: how to configure reverse proxy rules properly?

    - by rvdb
    I'm trying to configure reverse proxy rules in vhost.conf. I have Apache-2.2.8 on Ubuntu-8.04, monitored by Plesk-10.4.4. What I'm trying to achieve is defining a reverse proxy rule that sends all traffic for -say- http://mydomain/tomcat/ to the Tomcat server running on port 8080. I have mod_rewrite and mod_proxy loaded in Apache. As far as I understand mod_proxy docs, entering the following rules in /var/www/vhosts/mydomain/conf/vhost.conf should work: <Proxy *> Order deny,allow Allow from all </Proxy> ProxyRequests off RewriteRule ^/tomcat/(.*)$ http://mydomain:8080/$1 [P] Yet, I am getting an HTTP 500: internal server error when requesting the above URL. (Note: I decided to use a rewrite rule in order to at least get some information logged.) I have made mod_rewrite log extensively, and found the following entries in the logs [note: due to a limitation of max. 2 URLs in posts of new users, I have modified all following URLs so that they only contain 1 slash after http:. In case you're suspecting typos: this was done on purpose): 81.241.230.23 - - [19/Mar/2012:16:42:59 +0100] [mydomain/sid#b06ab8][rid#1024af8/initial] (2) init rewrite engine with requested uri /tomcat/testApp/ 81.241.230.23 - - [19/Mar/2012:16:42:59 +0100] [mydomain/sid#b06ab8][rid#1024af8/initial] (3) applying pattern '^/tomcat/(.*)$' to uri '/tomcat/testApp/' 81.241.230.23 - - [19/Mar/2012:16:42:59 +0100] [mydomain/sid#b06ab8][rid#1024af8/initial] (2) rewrite '/tomcat/testApp/' - 'http:/mydomain:8080/testApp/' 81.241.230.23 - - [19/Mar/2012:16:42:59 +0100] [mydomain/sid#b06ab8][rid#1024af8/initial] (2) forcing proxy-throughput with http:/mydomain:8080/testApp/ 81.241.230.23 - - [19/Mar/2012:16:42:59 +0100] [mydomain/sid#b06ab8][rid#1024af8/initial] (1) go-ahead with proxy request proxy:http:/mydomain:8080/testApp/ [OK] This suggests that the rewrite and proxy part is processed ok; still the proxied request produces a 500 error. Yet: Addressing the testApp directly via http:/mydomain:8080/testApp does work. The same setup does work on my local computer. Is there something else (Plesk-related, perhaps?) I should configure? Many thanks for any pointers! Ron
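    Two things that may be worth trying here. First, a 500 on a [P] rule is very often mod_proxy being loaded without mod_proxy_http, so it's worth confirming that module is actually enabled. Second, for a plain path prefix like this, the straight mod_proxy directives are usually simpler and also fix up Location headers on redirects coming back from Tomcat; a sketch for the same vhost.conf, with the host and port from above:

        ProxyRequests Off
        ProxyPreserveHost On
        # forward /tomcat/... to the Tomcat connector, and rewrite redirects on the way back
        ProxyPass        /tomcat/ http://mydomain:8080/
        ProxyPassReverse /tomcat/ http://mydomain:8080/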

    Read the article

  • URL autocomplete no longer working in Chrome

    - by Yuji Tomita
    The browser URL autocomplete has started behaving differently since yesterday. I used to access my top URLs by typing the first one or two letters of a URL then pressing enter. Now, I have to visually fish for the right one and push the down arrow to select the URL. Big difference. Anybody know if I can get the old functionality back somehow? Have I messed up a setting? Example of how my browser used to work: Gmail.com: CMD + L Type G Enter Stackoverflow.com CMD + L Type S Enter Normally, the browser bar would already be highlighted with gmail.com after typing the first g. It would narrow the matches depending on what characters were typed next, or simply go to it if I pressed enter. UPDATE: I just realized my history tab looks suspicious. No entries. But clearly Chrome is pulling some data from my history, as I have very personalized recommendations when typing in a letter. UPDATE: Fixed! Saved my bookmarks, removed my ~/Library/Application\ Support/Google/Default directory (careful, it looks like absolutely everything is stored here), restarted Chrome, and within one visit to Gmail.com, my autocomplete was filling in my URLs like so: Beautiful.

    Read the article

  • Gitolite SSH URL Format

    - by KPthunder
    So I got gitolite set up. Simple. But there is one issue I am having. The SSH urls follow the format of git@host:repo. I'm used to Bitbucket / Github where the urls follow the format of git@host:user/repo. Is there a way to get the latter format using gitolite? Another question. I have my ~/.ssh/config file set up with the following entry: Host <host> User <user> IdentityFile <path/to/public/key> I don't have any configuration specifying git as a user, and yet I am able to clone git@host:repo without problem. Obviously, my ssh client is using my public key to access the server which is why gitolite is letting me clone the repo, but how does my ssh client know to use my public key which is only configured for the <user> user and not the git user?
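    On the second point: the User line in an ssh_config Host block is only a default, so the explicit git@ in the clone URL overrides it, while IdentityFile still applies because it is keyed on the host rather than the user; gitolite then works out who you are purely from which public key you present to the git account. On the first point, gitolite has no built-in per-user namespace, but repo names in gitolite.conf may contain slashes, and its "wild repos" feature gets close to the Bitbucket/GitHub feel. A sketch, with placeholder names and syntax as I recall it from the gitolite docs:

        # explicit repo with a slash in the name: clone as git@host:alice/website
        repo alice/website
            RW+     =   alice

        # wild repos: members of @devs can create repos under their own username on demand
        repo CREATOR/[a-zA-Z0-9].*
            C       =   @devs
            RW+     =   CREATOR
            R       =   READERS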

    Read the article

  • mod_rewrite adds .html when redirecting

    - by user12093810293812031
    I have a redirect situation where the site is part dynamic and part generated .html files. For example, mysite.com/homepage and mysite.com/products/42 are actually static html files Whereas other URLs are dynamically generated, like mysite.com/cart Both mysite.com and www.mysite.com are pointing to the same place. However I want to redirect all of the traffic from mysite.com to www.mysite.com. I'm so close but I'm running into an issue where Apache is adding .html to the end of my URLs for anything where a static .html file exists - which I don't want. I want to redirect this: http://mysite.com/products/42 To this: http://www.mysite.com/products/42 But Apache is making it this, instead (because 42.html is an actual html file): http://www.mysite.com/products/42.html I don't want that - I want it to redirect to www.mysite.com/products/42 Here's what I started with: RewriteCond %{HTTP_HOST} ^mysite\.com$ [NC] RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L] I tried making the parameters and the .html optional, but the .html is still getting added on the redirect: RewriteCond %{HTTP_HOST} ^mysite\.com$ [NC] RewriteRule ^(.*)?(\.html)?$ http://www.mysite.com/$1 [R=301,L] What am I doing wrong? Really appreciate it :)
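    The trailing .html is most likely MultiViews (content negotiation) mapping /products/42 to /products/42.html and the per-directory rewrite then running again on that mapped name, so $1 already contains the .html by the time the redirect is issued. Two things that may help, sketched below and untested: turn MultiViews off, and/or build the redirect from %{THE_REQUEST}, which always holds the path exactly as the client sent it.

        Options -MultiViews

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^mysite\.com$ [NC]
        # capture the original request path from e.g. "GET /products/42 HTTP/1.1"
        RewriteCond %{THE_REQUEST} ^[A-Z]+\ (/[^\ ]*)\ HTTP
        RewriteRule ^ http://www.mysite.com%1 [R=301,L]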

    Read the article

  • NGINX returning 404 error on a valid url

    - by Harrison
    We have a site that runs PHP-FPM and NGINX. The application sends invitations to site members that are keyed with 40 character random strings (alphanumerics only -- example below). Today for the first time we ran into an issue with this approach. The following url: http://oursite.com/notices/response/approve/1960/OzH0pedV3rJhefFlMezDuoOQSomlUVdhJUliAhjS is returning a 404 error. This url format has been working for 6 months now without an issue, and other urls following this exact format continue to resolve properly. We have a very basic config with a simple redirect to a front controller, and everything else has been running fine for a while now. Also, if we change the last character from an "S" to anything other than a lower-case "s", no 404 error and the site handles the request properly, so I'm wondering if there's some security module that might see something wrong with this specific string... Not sure if that makes any sense. We are not sure where to look to find out what specifically is causing the issue, so any direction would be greatly appreciated. Thanks! Update: Adding a slash to the end of the url allowed it to be handled properly... Would still like to get to the bottom of the issue though. Solved: The problem was caused by part of my configuration... Realized I should have posted, but was headed out of town and didn't have a chance. Any url that ended in say "css" or "js" and not necessarily preceded by a dot (so, for example, http://site.com/response/somerandomestringcss ) was interpreted as a request for a file and the request was not routed through the front controller. The problem was my regex for disabling logging and setting expiration headers on jpgs, gifs, icos, etc. I replaced this: location ~* ^.+(jpg|jpeg|gif|css|png|js|ico)$ { with this: location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ { And now urls ending in css, js, png, etc, are properly routed through the front controller. Hopefully that helps someone else out.

    Read the article

  • How do I tell Websphere 7 about a front end load balancer so that re-directs are handled correctly?

    - by TiGz
    On WebLogic 11G I can use the console to set the FrontendHost and FrontendPort on a server or on a cluster so that re-directs are handled correctly and end up resolving to the front end load balancer instead of the local host. The MBeans associated with this on WebLogic are, for example: MBean Name com.bea:Name=AdminServer,Type=WebServer,Server=AdminServer Attribute Name FrontendHost Description The name of the host to which all redirected URLs will be sent. If specified, WebLogic Server will use this value rather than the one in the HOST header. Sets the HTTP frontendHost. Provides a method to ensure that the webapp will always have the correct HOST information, even when the request is coming through a firewall or a proxy. If this parameter is configured, the HOST header will be ignored and the information in this parameter will be used in its place. Type java.lang.String Readable / Writable RW. How is the same thing achieved under WebSphere 7? Follow up info: So I have 2 use cases actually. One is that I have a web app running under WebSphere on host A on port 9002 and an LB running on host B at port 80. When I visit the home page of the app via the LB at http://hostb/app, the app redirects my browser to http://hostb:9002/app and it 404's. I think this is WebSphere's fault but I guess it could be the app's fault? The second is that the web app in question needs to send emails containing URLs that the customer can click on to get back into the web app - obviously this needs to be via the LB. On WebLogic the app uses MBeans to derive the LB URL and I was hoping to use a similar mechanism on WebSphere.

    Read the article
