Search Results

Search found 21350 results on 854 pages for 'url parsing'.


  • How to transform a production into an LL(1) grammar for a list separated by semicolons?

    - by Subb
    Hi, I'm reading this introductory book on parsing (which is pretty good, btw) and one of the exercises is to "build a parser for your favorite language." Since I don't want to die today, I thought I could write a parser for something relatively simple, i.e. a simplified CSS. Note: this book teaches you how to write an LL(1) parser using the recursive-descent algorithm. So, as a sub-exercise, I am building the grammar from what I know of CSS. But I'm stuck on a production that I can't transform into LL(1): //EBNF block = "{", declaration, {";", declaration}, [";"], "}" //BNF <block> =:: "{" <declaration> "}" <declaration> =:: <single-declaration> <opt-end> | <single-declaration> ";" <declaration> <opt-end> =:: "" | ";" This describes a CSS block. Valid blocks can have the forms: { property : value } { property : value; } { property : value; property : value } { property : value; property : value; } ... The problem is with the optional ";" at the end, because it overlaps with the starting character of {";", declaration}, so when my parser meets a semicolon in this context, it doesn't know what to do. The book talks about this problem, but in its example the semicolon is mandatory, so the rule can be modified like this: block = "{", declaration, ";", {declaration, ";"}, "}" So, is it possible to achieve what I'm trying to do using an LL(1) parser?
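
    A minimal Python sketch (my own illustration, not from the book) of the usual answer: the grammar stays LL(1) if the optional trailing ";" is folded into the repetition, so that after each ";" a single token of lookahead decides whether "}" follows (the semicolon was the optional trailer) or another declaration starts.

      def parse_block(tokens):
          # tokens: a flat token list, e.g. ["{", "color", ":", "red", ";", "}"]
          pos = 0

          def peek():
              return tokens[pos] if pos < len(tokens) else None

          def expect(tok):
              nonlocal pos
              if peek() != tok:
                  raise SyntaxError(f"expected {tok!r}, got {peek()!r}")
              pos += 1

          def parse_declaration():
              nonlocal pos
              prop = tokens[pos]; pos += 1      # property name
              expect(":")
              value = tokens[pos]; pos += 1     # property value
              return (prop, value)

          # block = "{" declaration { ";" declaration } [";"] "}"
          expect("{")
          declarations = [parse_declaration()]
          while peek() == ";":
              expect(";")
              if peek() == "}":                 # the ";" was the optional trailing one
                  break
              declarations.append(parse_declaration())
          expect("}")
          return declarations

      # prints [('color', 'red'), ('width', '10px')]
      print(parse_block(["{", "color", ":", "red", ";", "width", ":", "10px", ";", "}"]))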

  • If HTML File Has No Ending "/tr" Tag OR "/td" Tag Then HTML Agility Pack Does Not Read That Information

    - by Harikrishna
    I am using HTML Agility Pack to parse HTML content, specifically to extract table information. It works, but if there is no closing "/tr" tag or "/td" tag, then it does not parse that information correctly. For example: <TABLE border=0><TBODY><TR height=20><TD class=xl27boL noWrap width="7%">01890345&nbsp;</TD> <TD class=xl27boL noWrap width="4%">1416</TD> <TD class=xl27boL noWrap width="7%">kjlkjkls&nbsp;</TD><TD class=xl27boL noWrap width="4%">14:01:57&nbsp;</TD> <TD class=xl27boL noWrap align=left width="15%">Football</TD><TD class=xl27boL noWrap align=right width="5%">&nbsp;</TD> <TD class=xl27boL noWrap align=right width="5%">50&nbsp;</TD> <TD class=xl27boL noWrap align=right width="5%">4997.2500</TD><TD class=xl27boL noWrap align=right width="7%">249862.50&nbsp;</TD><TD class=xl27boL noWrap align=right width="5%">&nbsp;</TD><TD class=xl27boL noWrap align=right width="5%">&nbsp;</TD><TD class=xl27boRLnoWrap align=right width="8%">249612.64&nbsp;</TD><TD class=xl27boL noWrap align=right width="5%">4997.2500</TD><TD class=xl27boL noWrap align=right width="7%">249862.50&nbsp;</TD><TD class=xl27boL noWrap align=right width="5%">249.86</TD><TD class=xl27boL noWrap align=right width="5%">4992.2528</TD><TD class=xl27boL noWrap align=right width="5%">&nbsp;</TD><TD class=xl27boL noWrap align=right width="5%">&nbsp;</TD> <TD class=xl27boRL noWrap align=right width="8%">249612.64&nbsp;</TD> </table> What should I do about that?
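
    The question itself is about HTML Agility Pack (C#); purely as a side note, the same recovery idea can be sketched in Python: let a lenient, spec-compliant parser imply the missing closing tags before walking the table. BeautifulSoup with the html5lib builder is an assumption here (pip install beautifulsoup4 html5lib), not part of the asker's setup.

      from bs4 import BeautifulSoup

      # A fragment with no closing </td>/</tr>/</tbody>, like the one above.
      html = '<TABLE border=0><TBODY><TR height=20><TD>01890345&nbsp;</TD><TD>1416</TABLE>'

      # html5lib follows the HTML5 error-recovery rules, so the unclosed cells
      # are closed implicitly and each <td> can still be read on its own.
      soup = BeautifulSoup(html, "html5lib")
      for row in soup.find_all("tr"):
          print([td.get_text(strip=True) for td in row.find_all("td")])
      # ['01890345', '1416']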

  • How to cache code in PHP?

    - by Janis Peisenieks
    I am creating a custom form-building system which includes various tokens. These tokens are found using regular expressions and, depending on the type of token, parsed. Some require simple replacement, some require loops, and so forth. Now, I know that regular expressions are quite resource- and time-consuming, so I would like to parse the code for the form once, generate PHP code from it, and then save that PHP code for later use. How would I go about doing this? So far I have only seen output caching. Is there a way to cache commands like echo and loops like foreach()? To avoid misunderstandings, I'll give an example. Unparsed template data: Thank You for Your interest, [*Title*] [*Firstname*] [*Lastname*]. Here are the details of Your order! [*KeyValuePairs*] Here is the link to Your request: [*LinkToRequest*]. Parsed template: "Thank You for Your interest, <?php echo $data->title;?> <?php echo $data->firstname;?> <?php echo $data->lastname;?>. Here are the details of Your order! <?php foreach($data->values as $key=>$value){ echo $key."-".$value }?> Here is the link to Your request: <?php echo $data->linkToRequest;?>. I would then save the parsed template and, instead of parsing the template every time, just pass the $data variable to the already-parsed one, which would generate the output.
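
    The idea being asked about (run the expensive token-parsing pass once, emit executable code, and cache that generated code so later requests only execute it) is language-independent. Below is a rough sketch in Python rather than PHP, purely as an illustration; the token syntax is the question's, while the cache file name and the use of a format string as the "compiled" form are my assumptions.

      import os
      import re

      TOKEN = re.compile(r"\[\*(\w+)\*\]")

      def compile_template(source, cache_path):
          # The expensive regex pass happens only here: [*Firstname*] -> {firstname}.
          compiled = TOKEN.sub(lambda m: "{" + m.group(1).lower() + "}", source)
          with open(cache_path, "w") as f:
              f.write(compiled)
          return compiled

      def render(source, cache_path, data):
          # Reuse the cached, already-parsed template when it exists.
          if os.path.exists(cache_path):
              with open(cache_path) as f:
                  compiled = f.read()
          else:
              compiled = compile_template(source, cache_path)
          return compiled.format(**data)

      source = "Thank You for Your interest, [*Title*] [*Firstname*] [*Lastname*]."
      print(render(source, "thanks.tpl.cache", {"title": "Dr.", "firstname": "Janis", "lastname": "P."}))

    The PHP equivalent of the same shape would write the generated <?php ... ?> file once and include it on later requests, which is essentially what compiling template engines such as Smarty do.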

  • How to deal with unknown entity references?

    - by Chris
    I'm parsing (a lot of) XML files that contain entity references which I don't know in advance (I can't change that fact). For example: xml = "<tag>I'm content with &funny; &entity; &references;.</tag>" When I try to parse this using the following code: final DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); final DocumentBuilder db = dbf.newDocumentBuilder(); final InputSource is = new InputSource(new StringReader(xml)); final Document d = db.parse(is); I get the following exception: org.xml.sax.SAXParseException: The entity "funny" was referenced, but not declared. What I want to achieve is that the parser replaces every entity that is not declared (i.e. unknown to the parser) with an empty String ''. Or even better, is there a way to pass a map to the parser, like: Map<String,String> entityMapping = ... entityMapping.put("funny","very"); entityMapping.put("entity","important"); entityMapping.put("references","stuff"); so that I could do the following: final DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); final DocumentBuilder db = dbf.newDocumentBuilder(); final InputSource is = new InputSource(new StringReader(xml)); db.setEntityResolver(entityMapping); final Document d = db.parse(is); If I then obtained the text from the document using this example code, I should receive: I'm content with very important stuff. Any suggestions? Of course, I would already be happy to just replace the unknown entities with empty strings. Thanks,
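
    The asker is looking for a Java EntityResolver-style hook; for comparison only, here is the same map-based substitution done as a pre-pass in Python (my own sketch, not the Java API in question): known entities come from the map, unknown ones collapse to an empty string, and only then is the document parsed.

      import re
      import xml.etree.ElementTree as ET

      entity_map = {"funny": "very", "entity": "important", "references": "stuff"}
      PREDEFINED = {"amp", "lt", "gt", "quot", "apos"}   # leave the standard XML entities alone

      def substitute_entities(text, mapping):
          def repl(match):
              name = match.group(1)
              if name in PREDEFINED:
                  return match.group(0)
              return mapping.get(name, "")               # unknown entities become ''
          return re.sub(r"&(\w+);", repl, text)

      xml = "<tag>I'm content with &funny; &entity; &references;.</tag>"
      doc = ET.fromstring(substitute_entities(xml, entity_map))
      print(doc.text)   # I'm content with very important stuff.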

  • Error when feeding a MySQL db with Python-parsed data

    - by Barnabe
    I use this bit of code to feed some data I have parsed from a web page into a MySQL database: c=db.cursor() c.executemany( """INSERT INTO data (SID, Time, Value1, Level1, Value2, Level2, Value3, Level3, Value4, Level4, Value5, Level5, ObsDate) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)""", clean_data ) The parsed data looks like this (there are several hundred such lines): clean_data = [(161,00:00:00,8.19,1,4.46,4,7.87,4,6.54,null,4.45,6,2010-04-12),(162,00:00:00,7.55,1,9.52,1,1.90,1,4.76,null,0.14,1,2010-04-12),(164,00:00:00,8.01,1,8.09,1,0,null,8.49,null,0.20,2,2010-04-12),(166,00:00:00,8.30,1,4.77,4,10.99,5,9.11,null,0.36,2,2010-04-12)] If I hard-code the data as above, MySQL accepts my request (except for some quibbles about formatting), but if the variable clean_data is instead defined as the result of the parsing code, like this: cleaner = [(""" $!!'""", ')]'),(' $!!', ') etc etc] def processThis(str,lst): for find, replace in lst: str = str.replace(find, replace) return str clean_data = processThis(data,cleaner) then I get the dreaded "TypeError: not enough arguments for format string". After playing with formatting options for a few hours (I am very new to this) I am confused... what is the difference between the hard-coded data and the result of the processThis function as far as MySQL is concerned? Any ideas greatly appreciated...
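
    A guess at the cause, illustrated in Python since that is the language of the question: executemany() expects a sequence of row tuples, but a replace()-based cleaner returns one big string, and the driver then tries to use each character as the parameter set for the 13 %s placeholders. A sketch of building real tuples from the scraped text instead (the csv module and the None-for-null handling are my assumptions):

      import csv
      import io

      raw = """161,00:00:00,8.19,1,4.46,4,7.87,4,6.54,null,4.45,6,2010-04-12
      162,00:00:00,7.55,1,9.52,1,1.90,1,4.76,null,0.14,1,2010-04-12"""

      # One 13-element tuple per row -- the shape executemany() expects --
      # with "null" mapped to None so MySQL stores a real NULL.
      clean_data = [
          tuple(None if field.strip() == "null" else field.strip() for field in row)
          for row in csv.reader(io.StringIO(raw))
      ]

      for row in clean_data:
          assert len(row) == 13, row   # one value per %s placeholder

      # c.executemany("INSERT INTO data (...) VALUES (%s, ..., %s)", clean_data)
      print(clean_data[0])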

  • How would I sort files into directories based on filenames?

    - by gnomed
    I have a huge number of files to sort, all named in some terrible convention. Here are some examples: (4)_mr__mcloughlin____.txt 12__sir_john_farr____.txt (b)mr__chope____.txt dame_elaine_kellett-bowman____.txt dr__blackburn______.txt Each of these names is supposed to be a different person (speaker). Someone in another IT department produced these from a ton of XML files using some script, but the naming is unfathomably stupid, as you can see. I need to sort literally tens of thousands of these files, with multiple text files for each person, each with something stupid making the filename different, be it more underscores or some random number. They need to be sorted by speaker. This would be easier with a script to do most of the work; then I could just go back and merge folders that should be under the same name, or whatever. There are a number of ways I was thinking about doing this: (1) parse the names from each file and sort them into folders for each unique name; (2) get a list of all the unique names from the filenames, then look through this simplified list of unique names for similar ones, ask me whether they are the same, and once that has been determined, sort them all accordingly. I plan on using Perl, but I can try a new language if it's worth it. I'm not sure how to go about reading each filename in a directory, one at a time, into a string for parsing into an actual name. I'm not completely sure how to parse with regex in Perl either, but that might be googleable. For the sorting, I was just gonna use the shell command: `cp filename.txt /example/destination/filename.txt` but just because that's all I know, so it's easiest. I don't even have a pseudocode idea of what I'm going to do, so if someone knows the best sequence of actions, I'm all ears. I guess I am looking for a lot of help; I am open to any suggestions. Many many many thanks to anyone who can help. B.
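
    The asker plans to use Perl but is open to another language; here is a minimal Python sketch of option (1), with the normalisation rules guessed from the five sample names (they would need adjusting against the real data):

      import os
      import re
      import shutil

      def speaker_from_filename(fname):
          # e.g. "(4)_mr__mcloughlin____.txt" -> "mr mcloughlin"
          name = re.sub(r"\.txt$", "", fname)       # drop the extension
          name = re.sub(r"^\(\w*\)", "", name)      # drop leading "(4)", "(b)"
          name = re.sub(r"^\d+", "", name)          # drop leading "12"
          name = re.sub(r"_+", " ", name).strip()   # collapse the runs of underscores
          return name or "unknown"

      def sort_into_folders(src_dir, dest_dir):
          for fname in os.listdir(src_dir):
              if not fname.endswith(".txt"):
                  continue
              folder = os.path.join(dest_dir, speaker_from_filename(fname))
              os.makedirs(folder, exist_ok=True)    # one folder per (guessed) speaker
              shutil.copy(os.path.join(src_dir, fname), folder)

    Near-duplicate folder names left over after this pass could then be merged by hand, as the asker suggests, or flagged for review with something like difflib.get_close_matches.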

  • Downloading a file from the internet with '&' in URL using wget

    - by matt_tm
    Hi, I'm trying to download a file from a URL that looks like this: http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf Within the browser, this link prompts me to download a file called x.pdf irrespective of what DEF is (but 'x.pdf' is the right content). However, using wget, I get the following: >wget.exe http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc syswgetrc = C:\Program Files\GnuWin32/etc/wgetrc --2011-01-06 07:52:05-- http://pdf.example.com/filehandle.ashx?p1=ABC Resolving pdf.example.com... 99.99.99.99 Connecting to pdf.example.com|99.99.99.99|:80... connected. HTTP request sent, awaiting response... 500 Internal Server Error 2011-01-06 07:52:08 ERROR 500: Internal Server Error. 'p2' is not recognized as an internal or external command, operable program or batch file. This is on a Windows Vista system. Edit 1: >wget.exe "http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf" SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc syswgetrc = C:\Program Files\GnuWin32/etc/wgetrc --2011-02-06 10:18:31-- http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf Resolving pdf.example.com... 99.99.99.99 Connecting to pdf.example.com|99.99.99.99|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 4568 (4.5K) [image/JPEG] Saving to: `filehandle.ashx@p1=ABC&p2=DEF.pdf' 100%[======================================>] 4,568 --.-K/s in 0.1s 2011-02-06 10:18:33 (30.0 KB/s) - `filehandle.ashx@p1=ABC&p2=DEF.pdf' saved [4568/4568]

  • Cannot access certain URL on my wireless

    - by dehmann
    Problem: On my wireless network at home, there is one URL that I just cannot access with my browser: http://research.microsoft.com/ I have no problems with the Internet connection otherwise. But on that address I just get "The connection was reset. The connection to the server was reset while the page was loading." from Firefox. I am using a DSL modem (Westell) and Linksys wireless router (using DHCP). When I use my neighbor's wireless connection I can access the Microsoft site without a problem. Additional technical details: with my connection, here is what I get from nslookup. It is weird: it first cannot find the address, but after I look up another address it can find it: $ nslookup research.microsoft.com ;; connection timed out; no servers could be reached $ nslookup google.com Non-authoritative answer: Name: google.com Address: 72.14.204.104 Name: google.com Address: 72.14.204.147 Name: google.com Address: 72.14.204.99 Name: google.com Address: 72.14.204.103 $ nslookup research.microsoft.com Non-authoritative answer: Name: research.microsoft.com Address: 131.107.65.14 But even after nslookup finds it, Firefox still cannot access it. Here is what traceroute says: $ traceroute http://research.microsoft.com/ traceroute: Warning: http://research.microsoft.com/ has multiple addresses; using 8.15.7.117 traceroute to http://research.microsoft.com/ (8.15.7.117), 64 hops max, 40 byte packets 1 dslrouter.westell.com (1XX.XXX.X.X) 4.515 ms 2.760 ms 3.072 ms 2 * * * Traceroute just to the IP: $ traceroute 131.107.65.14 traceroute to 131.107.65.14 (131.107.65.14), 64 hops max, 40 byte packets 1 dslrouter.westell.com (1XX.XXX.X.X) 11.912 ms 2.684 ms 2.808 ms 2 * * * Comparison: Traceroute to the google.com IP: $ traceroute 72.14.204.99 traceroute to 72.14.204.99 (72.14.204.99), 64 hops max, 40 byte packets 1 dslrouter.westell.com (1XX.XXX.X.X) 6.428 ms 6.981 ms 117.099 ms 2 * * * Any comments / help?

  • Apache Logs - Not Showing Requested URL or User IP

    - by iarfhlaith
    Hey all, I'm having a problem with a server that keeps falling over. Looking through the Apache error logs, it appears to come from a rogue PHP script. I'm trying to track this down using Apache's error_log and access_log, but the server log format isn't giving me the detail I need. I suspect the log format isn't sufficient, but I've reviewed the Apache documentation and I've included the switches that I think I need. Here's my LogFormat configuration in the httpd.conf file: `LogFormat "%h %l %u %t \"%r\" %s %b %U %q %T \"%{Referer}i\" \"%{User-Agent}i\"" extended CustomLog logs/access_log extended` Using the %U %q %T switches I expected to see the requested URL, query string, and the time it took to serve the request, but I'm not seeing any of this information when I tail the log. Here's an example: 127.0.0.1 - - [01/Jun/2010:14:12:04 +0100] "OPTIONS * HTTP/1.0" 200 - * 0 "-" "Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 (internal dummy connection)" 127.0.0.1 - - [01/Jun/2010:14:12:05 +0100] "OPTIONS * HTTP/1.0" 200 - * 0 "-" "Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 (internal dummy connection)" 127.0.0.1 - - [01/Jun/2010:14:12:06 +0100] "OPTIONS * HTTP/1.0" 200 - * 0 "-" "Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 (internal dummy connection)" 127.0.0.1 - - [01/Jun/2010:14:12:07 +0100] "OPTIONS * HTTP/1.0" 200 - * 0 "-" "Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 (internal dummy connection)" 127.0.0.1 - - [01/Jun/2010:14:12:08 +0100] "OPTIONS * HTTP/1.0" 200 - * 0 "-" "Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 (internal dummy connection)" 127.0.0.1 - - [01/Jun/2010:14:12:09 +0100] "OPTIONS * HTTP/1.0" 200 - * 0 "-" "Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 (internal dummy connection)" Have I made a mistake in configuring the LogFormat, or is it something else? Also, each request appears to come from localhost. How come it's not giving me the remote user's IP address? Thanks, Iarfhlaith

  • Make Google Chrome's address bar prefer page titles to domain names when offering completions?

    - by Ryan Thompson
    I've recently switched from Firefox to Chrome, and the thing I miss most from Firefox is the "Awesome Bar" that suggests completions for what I type primarily based on page titles, and then secondarily based on domain names. Chrome offers both matching URLs and titles, just like Firefox, but Chrome seems to always prefer a matching domain name over a matching page title or a match to another part of the URL (besides the domain), no matter how many times I pass over the former for the latter. In fact, Chrome also prefers to suggest a search rather than matching anything other than a domain name. So is there any hidden preference I can change to tell Chrome that I care more about page titles than domain names? Example: I want to go to Google Reader, so I press Control+L and begin typing "reader". The URL for Google Reader is http://www.google.com/reader/view/#overview-page, so the domain name is www.google.com, which does not contain the word "reader". So the first option that Chrome suggests is either another site that has "reader" as part of the domain, or a search for "reader" with the default search engine. No matter how many times I scroll down and select Google Reader, Chrome never "learns" that that's what I want.

  • Exchange 2007 OWA returns blank page with URL xxxxx&reason=0

    - by Dayton Brown
    Hi All: I've just run into an issue with my Exchange OWA. It returns a blank page with the URL string https://www.xxxxxxxx/&reason=0. Nothing in the logs gives me any good reasons. Here's what I've done so far: 1) reinstalled Exchange roll-up 7; 2) recreated the virtual directories; 3) rebooted (this was mostly a shot in the dark, but what the hell). Exchange via RPC/HTTPS is still working great. Anyone run into this before? EDIT: Here is the last snippet from the OWASetupLog; it doesn't look like anything blew up. [09:45:36] ******************************************* [09:45:36] * UpdateOwa.ps1: 5/27/2009 9:45:36 AM [09:45:40] Updating OWA on server HOMER [09:45:40] Finding OWA install path on the filesystem [09:45:40] Updating OWA to version 8.1.375.2 [09:45:40] Copying files from 'C:\Program Files\Microsoft\Exchange Server\ClientAccess\owa\Current' to 'C:\Program Files\Microsoft\Exchange Server\ClientAccess\owa\8.1.375.2' [09:45:41] Getting all Exchange 2007 OWA virtual directories [09:45:42] Found 1 OWA virtual directories. [09:45:42] Updating OWA virtual directories [09:45:42] Processing virtual directory with metabase path 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa'. [09:45:42] Metabase entry 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2' exists. Removing it. [09:45:42] Creating metabase entry IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2. [09:45:42] Configuring metabase entry 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2'. [09:45:43] Saving changes to 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2' [09:45:43] Saving changes to 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa'

  • Nginx multiple upstream servers on the same domain via different URLs

    - by Barry
    Hello. I am trying to route traffic to different upstream servers (they serve different applications; this is not for load balancing). The incoming traffic has the same domain name but different URLs. Here is an example of my configuration: http { upstream backend1 { server 127.0.0.1:8080 fail_timeout=0; server 127.0.0.1:8081 fail_timeout=0; } upstream backend2 { server 127.0.0.1:8090 fail_timeout=0; server 127.0.0.1:8091 fail_timeout=0; } server { listen 80; server_name my_server.com; root /home/my_server; location /serve_me { fastcgi_pass backend1; include fastcgi_params; } location / { fastcgi_pass backend2; include fastcgi_params; } } } It seems that whatever traffic comes in (including "my_server.com/serve_me") goes to backend2. How do I make requests that start with /serve_me get directed to backend1? Thanks, Barry.

  • Redirect landing pages without changing the URL

    - by Gerard
    I'm working with Apache and Joomla. I would like to have some URIs that land on the same landing page without the URL changing. So, for example: examplecom/luggage/delay/iberia examplecom/luggage/delay/aireuropa examplecom/luggage/delay/ryanair need to point to examplecom/luggage/delay, but the address bar must show the different airline companies. I've been messing around with mod_rewrite with the following codes: 1: RewriteCond %{REQUEST_URI} /equipaje/problemas-de-equipaje/perdida-equipaje-iberia [NC,OR] RewriteCond %{REQUEST_URI} /equipaje/problemas-de-equipaje/perdida-equipaje-vueling [NC,OR] RewriteCond %{REQUEST_URI} /equipaje/problemas-de-equipaje/perdida-equipaje-ryanair [NC,OR] RewriteCond %{REQUEST_URI} /equipaje/problemas-de-equipaje/perdida-equipaje-aireuropa [NC,OR] RewriteRule ^(.*)$ /equipaje/problemas-de-equipaje/perdida-equipaje [P] 2: RewriteRule ^equipaje/problemas-de-equipaje/perdida-equipaje-iberia equipaje/problemas-de-equipaje/perdida-equipaje [NC,L] RewriteRule ^equipaje/problemas-de-equipaje/perdida-equipaje-vueling equipaje/problemas-de-equipaje/perdida-equipaje [NC,L] RewriteRule ^equipaje/problemas-de-equipaje/perdida-equipaje-ryanair equipaje/problemas-de-equipaje/perdida-equipaje [NC,L] RewriteRule ^equipaje/problemas-de-equipaje/perdida-equipaje-aireuropa equipaje/problemas-de-equipaje/perdida-equipaje [NC,L] The idea is to have different URLs for AdWords, but then I'll prevent indexing of those airline landing pages to avoid duplicated content. I've been searching through several examples and trying to understand how the module works, but so far I haven't managed it... Is it possible to do that with mod_rewrite? Many thanks!

  • Combining URL mapping and Access-Control-Allow-Origin: *

    - by ksangers
    I am in the process of migrating an old banner system to a new one, and in doing so I want to rewrite the old banner system's URLs to the new one's. I load my banners via an AJAX request, and therefore I require the Access-Control-Allow-Origin header to be set to *. I have the following VirtualHost configuration: <VirtualHost *:80> ServerAdmin [email protected] ServerName banner.studenten.net # we want to allow XMLHTTPRequests Header set Access-Control-Allow-Origin "*" RewriteEngine on RewriteMap bannersOldToNew txt:/home/user/banner.studenten.net/banner-studenten-net-to-ads-all4students-nl-map # check whether a zoneid exists in the query string RewriteCond %{QUERY_STRING} ^(.*)zoneid=([1-9][0-9]*)(.*) # make sure the requested banner has been mapped RewriteCond ${bannersOldToNew:%2|NOTFOUND} !=NOTFOUND # rewrite to ads.all4students.nl RewriteRule ^/ads/.* http://ads.all4students.nl/delivery/ajs.php?%1zoneid=${bannersOldToNew:%2}%3 [R] # else 404 or something ErrorLog ${APACHE_LOG_DIR}/banner.studenten.net-error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/banner.studenten.net-access.log combined </VirtualHost> My map file, /home/user/banner.studenten.net/banner-studenten-net-to-ads-all4students-nl-map, contains something like: # oldId newId 140 11 141 12 142 13 Based on the above configuration I was expecting the following: GET /ads/ajs.php?zoneid=140 HTTP/1.1 Host: banner.studenten.net HTTP/1.1 302 Found ... Access-Control-Allow-Origin: * Location: http://ads.all4students.nl/delivery/ajs.php?zoneid=11 But instead I get the following: GET /ads/ajs.php?zoneid=140 HTTP/1.1 Host: banner.studenten.net HTTP/1.1 302 Found ... Location: http://ads.all4students.nl/delivery/ajs.php?zoneid=11 Note the missing Access-Control-Allow-Origin header; this means the XMLHttpRequest is denied and the banner is not displayed. Any suggestions on how to fix this in Apache?

  • NGINX: remove index.php so /index.php/something/more/ becomes /something/more/

    - by Gaston
    I'm trying to clean URLs in NGINX using the DooPHP framework. This: http://example.com/index.php/something/more/ should become this: http://example.com/something/more/ I want to remove the "index.php" from the URL if someone tries to enter it in the first form, like a permanent redirect. How do I configure this in NGINX? Thanks. [Update: actual nginx config] server { listen 80; server_name vip.example.com; rewrite ^/(.*) https://vip.example.com/$1 permanent; } server { listen 443; server_name vip.example.com; error_page 404 /vip.example.com/404.html; error_page 403 /vip.example.com/403.html; error_page 401 /vip.example.com/401.html; location /vip.example.com { root /sites/errors; } ssl on; ssl_certificate /etc/nginx/config/server.csr; ssl_certificate_key /etc/nginx/config/server.sky; if (!-e $request_filename){ rewrite /.* /index.php; } location / { auth_basic "example Team Access"; auth_basic_user_file config/htpasswd; root /sites/vip.example.com; index index.php; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /sites/vip.example.com$fastcgi_script_name; include fastcgi_params; fastcgi_param PATH_INFO $fastcgi_script_name; } }

  • How can I disable Kerberos authentication for only the root of my site?

    - by petRUShka
    I have Kerberos-based authentication and I want to disable it for only the root URL, http://mysite.com/, while it continues to work fine on any other page like http://mysite.com/page1. I have this in my .htaccess: AuthType Kerberos AuthName "Domain login" KrbAuthRealms DOMAIN.COM KrbMethodK5Passwd on Krb5KeyTab /etc/httpd/httpd.keytab require valid-user I want to turn it off only for the root URL. As a workaround it should be possible to turn it off in the virtual host config instead of .htaccess; unfortunately I don't know how to do that either. Part of my vhost.conf: <Directory /home/user/www/current/public/> Options -MultiViews +FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> UPD: I'm using Apache/2.2.3 (Linux/SUSE). I tried this version of .htaccess: SetEnvIf Request_URI ^/$ rootdir=1 Allow from env=rootdir Satisfy Any AuthType Kerberos AuthName "Domain login" KrbAuthRealms DOMAIN.COM KrbMethodK5Passwd on Krb5KeyTab /etc/httpd/httpd.keytab require valid-user Unfortunately that config still applies the Kerberos AuthType to all URLs. I tried placing the first 3 lines SetEnvIf Request_URI ^/$ rootdir=1 Allow from env=rootdir Satisfy Any after the main block, but it didn't help.

  • certificate working on IP but not on URL

    - by Stephan
    I asked this question on Stack Overflow, and it was suggested that I repost it here. I have a problem accessing my site (over HTTPS) with IEMobile 9 (WP 7.5). It says it has a problem with the certificate, as if it weren't valid. Everything works in any other browser or platform I tested (Android (several phones and a Galaxy Tab with the stock browser, Firefox, Opera, Dolphin), iOS (iPhone and iPad with Safari and Chrome), an old Nokia with Symbian, Windows 7, Linux and Mac). To try to solve this I saved the certificate (.cer) on the server and accessed it from the phone browser. It always complained, except when I accessed it through the server IP (192.168.xx.xx). At that point it (said it) installed the certificate correctly. If I then try to access the index.html, still using the IP, all works fine and it doesn't complain about the certificate. If, though, I try to access the index using the actual URL (blah.myblah.com), it complains again about the certificate, as if it weren't installed! It isn't a DNS problem, because that's up and serving the right IP, and the phone is correctly set up to use it. The certificate is signed by GeoTrust/RapidSSL for *.myblah.com.

  • Setting Outlook Web Access as default mail client in Firefox

    - by Barton Chittenden
    The company that I work for is nice enough to let me use Linux for work, but they use Outlook. I've been using Outlook Web Access (OWA) as my mail client, which is more or less acceptable. The only problem is that whenever I click on a mailto link or use the "Send Link" menu option in Firefox, I'm prompted to use Evolution. Since connecting to an Exchange server through Evolution seems to be sketchy at best, I would like to set OWA as my default mail client. I'm using Firefox 3.6.13. Here's what I've found so far: the default mail client can be set at Edit menu -> Preferences -> Applications tab -> mailto. When I click on the drop-down menu, one of the options is "Application Details". This shows two options by default: Google and Yahoo! Mail. Each of these shows how to launch that service. For Gmail: https://mail.google.com/mail/?extsrc=mailto&url=%s For Yahoo!: http://compose.mail.yahoo.com/?To=%s I presume that Outlook Web Access has something similar. Based on the googling that I've done so far, I think it should look something like this: https://<server name>/owa/?cmd=compose... A little experimentation on my part shows that the following will compose a message: https://<email server>/owa/?ae=Item&a=New&t=IPM.Note but I still don't know how to specify the recipient, subject or body of the email to be composed... What I want to know is: a) does anyone know the URL parameters to compose a mailto in Outlook Web Access, including subject, recipient and body? or b) can someone give me a decent pointer for where to get this information?
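
    As background on how those handler templates work (using the Gmail URL quoted above as the model): Firefox percent-encodes the entire mailto: URI and substitutes it for the %s. The OWA compose URL's own parameter names are exactly the open question here, so none are invented in this small Python illustration.

      from urllib.parse import quote

      template = "https://mail.google.com/mail/?extsrc=mailto&url=%s"   # from the question
      mailto = "mailto:[email protected]?subject=Status"

      # The whole mailto: URI is escaped and dropped into the %s slot.
      print(template % quote(mailto, safe=""))
      # https://mail.google.com/mail/?extsrc=mailto&url=mailto%3Asomeone%40example.com%3Fsubject%3DStatus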

  • Adding a GET parameter to URL causes 404 error

    - by Adrian Grigore
    I'm trying to install the SyntaxHighlighter Evolved plugin on my WordPress blog. I've uploaded and activated the plugin, but it did not work. I've looked at the page source code and found that the plugin style is loaded from the following URL: http://devermind.com/wp-content/plugins/syntaxhighlighter/syntaxhighlighter/styles/shCore.css?ver=2.0.320 This causes a 404 error (page not found). The strange thing, though, is that when I remove the GET parameters, the CSS loads OK: http://devermind.com/wp-content/plugins/syntaxhighlighter/syntaxhighlighter/styles/shCore.css What could be causing this problem and how can I fix it? Unfortunately I don't know how to make WordPress drop the GET parameters when loading the stylesheet. EDIT: As I just found out, this happens only in Firefox (3.0.11). IE loads both URLs above just fine. Not that this would be of any help though, so any suggestions would be appreciated. SECOND EDIT: I tried this on my laptop and it works fine with Firefox 3.08. So this really seems to be a browser problem after all.

  • Apache Proxy and Rewrite, append directory to URL

    - by ionFish
    I have a backend server running on http://10.0.2.20/ on the local network, which serves a directory layout similar to this: / (root) | |_user1/ | |_www/ | |_private/ | |_user2/ |_www/ |_private/ (etc.) Accessing http://10.0.2.20/user1/, of course, shows the 'www' and 'private' folders, and the backend is proxied through a public server using Apache's reverse proxy. I'd like it so the following happens: http://public-proxy-server/user1/ actually shows the content from http://10.0.2.20/user1/www/ without indicating it in the URL. (/private/ would not be accessible via the public proxy server). The key here is for it to be dynamic, so all requests to http://public-proxy-server/*/ should show content from http://10.0.2.20/*/www/. Again, the proxy currently works fine; below is the config: (On the public server) <VirtualHost *:80> ServerName www.domain.com ProxyRequests Off ProxyPreserveHost On ProxyVia full ProxyPass / http://10.0.2.20/ ProxyPassReverse / http://10.0.2.20/ </VirtualHost> (On the backend server) <VirtualHost *:80> ... #this directory contains folders 'user1' and 'user2' DocumentRoot /var/www/ ... </VirtualHost>

  • How to test an HTTPS URL with a given IP address

    - by GreatFire
    Let's say a website is load-balanced between several servers. I want to run a command to test whether it's working, such as curl DOMAIN.TLD. So, to isolate each IP address, I specify the IP manually. But many websites may be hosted on the server, so I still provide a host header, like this: curl IP_ADDRESS -H 'Host: DOMAIN.TLD'. In my understanding, these two commands create the exact same HTTP request. The only difference is that in the latter one I take the DNS lookup part out of cURL and do it manually (please correct me if I'm wrong). All is well so far. But now I want to do the same for an HTTPS URL. Again, I could test it like this: curl https://DOMAIN.TLD. But I want to specify the IP manually, so I run curl https://IP_ADDRESS -H 'Host: DOMAIN.TLD'. Now I get a cURL error: curl: (51) SSL: certificate subject name 'DOMAIN.TLD' does not match target host name 'IP_ADDRESS'. I can of course get around this by telling cURL not to care about the certificate (the "-k" option), but it's not ideal. Is there a way to isolate the IP address being connected to from the host being certified by SSL?
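
    One way to express that isolation (a Python sketch of the underlying idea, not part of the question): dial the chosen IP yourself, but hand the real hostname to the TLS layer so SNI and certificate verification still use DOMAIN.TLD. Newer curl releases can do the same with --resolve 'DOMAIN.TLD:443:IP_ADDRESS', if that option is available in the installed version.

      import socket
      import ssl

      def https_status_via_ip(ip, host, path="/"):
          """Connect to a specific backend IP while still validating the certificate for `host`."""
          ctx = ssl.create_default_context()             # full verification, no -k needed
          with socket.create_connection((ip, 443)) as raw:
              # server_hostname drives both SNI and hostname checking, so the
              # certificate for DOMAIN.TLD matches even though we dialed the IP.
              with ctx.wrap_socket(raw, server_hostname=host) as tls:
                  request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
                  tls.sendall(request.encode("ascii"))
                  return tls.recv(4096).split(b"\r\n", 1)[0].decode()

      # print(https_status_via_ip("93.184.216.34", "example.com"))   # e.g. "HTTP/1.1 200 OK"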

  • Redirect from folder containing website

    - by Sam
    I have a website reached from this URL: http://www.mysite.com/cms/index.php being served from this directory: public_html/cms/index.php In public_html I have this .htaccess: RewriteRule (.*) cms/$1 [L] Which lets me get to the site like this: http://www.mysite.com/index.php But now, if I reference the 'old' address, I'd like to redirect to the rewritten address with a permanent redirect code. For example: http://www.mysite.com/cms/?q=node/1 is redirected to... http://www.mysite.com/?q=node/1 How can I make this happen? EDIT: Also, in the .htaccess file supplied with Drupal (the CMS), this is written. I've tried enabling it, but it doesn't seem to have any effect. # Modify the RewriteBase if you are using Drupal in a subdirectory or in a # VirtualDocumentRoot and the rewrite rules are not working properly. # For example if your site is at http://example.com/drupal uncomment and # modify the following line: # RewriteBase /drupal EDIT: Including more of my .htaccess file - seems relevant. # Block access to "hidden" directories whose names begin with a period. RewriteRule "(^|/)\." - [F] #Strip cms folder from url RewriteRule (.*) cms/$1 RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_URI} !=/favicon.ico RewriteRule ^ index.php [L] # Rules to correctly serve gzip compressed CSS and JS files. # Requires both mod_rewrite and mod_headers to be enabled. <IfModule mod_headers.c> # Serve gzip compressed CSS files if they exist and the client accepts gzip. RewriteCond %{HTTP:Accept-encoding} gzip RewriteCond %{REQUEST_FILENAME}\.gz -s RewriteRule ^(.*)\.css $1\.css\.gz [QSA] # Serve gzip compressed JS files if they exist and the client accepts gzip. RewriteCond %{HTTP:Accept-encoding} gzip RewriteCond %{REQUEST_FILENAME}\.gz -s RewriteRule ^(.*)\.js $1\.js\.gz [QSA] # Serve correct content types, and prevent mod_deflate double gzip. RewriteRule \.css\.gz$ - [T=text/css,E=no-gzip:1] RewriteRule \.js\.gz$ - [T=text/javascript,E=no-gzip:1] <FilesMatch "(\.js\.gz|\.css\.gz)$"> # Serve correct encoding type. Header append Content-Encoding gzip # Force proxies to cache gzipped & non-gzipped css/js files separately. Header append Vary Accept-Encoding </FilesMatch>

  • .htaccess: modify rules and redirect if there's .php in the URL

    - by Ron
    Hello everyone. I have the following code in my .htaccess: Options +FollowSymlinks RewriteBase /temp/test/ RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME}\.php -f RewriteRule ^about/(.*)/$ $1.php [L] RewriteRule ^(.*)/download/(.*)/(.*)/(.*)/downloadfile/$ file-download.php?product=$1&version=$2&os=$3&method=$4 [L] RewriteRule ^(.*)/download/(.*)/(.*)/(.*)/$ download-donate.php?product=$1&version=$2&os=$3&method=$4 [L] RewriteRule ^(.*)/download/(.*)/$ download.php?product=$1&version=$2 [L] RewriteRule ^newsletter-confirm/(.*)/$ newsletter-confirm.php?email=$1 [L] RewriteRule ^newsletter-remove/(.*)/$ newsletter-remove.php?email=$1 [L] RewriteRule ^(.*)/screenshots/$ screenshots.php?product=$1 [L] RewriteRule ^(.*)/(.*)/$ products.php?product=$1&page=$2 [L] RewriteRule ^schedule-manager/$ products.php?product=schedule-manager&page=view [L] RewriteRule ^visual-command-line/$ products.php?product=visual-command-line&page=view [L] RewriteRule ^windows-hider/$ products.php?product=windows-hider&page=view [L] RewriteRule ^(.*)/$ $1.php [L] RewriteRule ^products/$ products.php [L] Everything works perfectly. I would like to know how I can modify it so it has fewer lines. I am pretty sure I can remove at least 4-5 lines, but I don't know how (merge the schedule-manager, visual-command-line and windows-hider rules, and some more). I know that the order of the rules is important; this order works, although I have no idea why. I just played with the rules until it worked. If you think there'll be a bug with this order, please tell me where. Another thing: I would like to redirect, for example, www.myweb.com/products.php to www.myweb.com/products/ (I mean that the URL in the address bar will change). I don't know if the redirect can go along with my rewrite rules. Thank you.

  • Can't remove index.php from URL in CodeIgniter

    - by Ashiq
    I am new to the CodeIgniter framework. I want to remove index.php from the URL and have tried many times, but it's not working... Here is my .htaccess file: RewriteEngine on RewriteBase /test/ RewriteCond $1 !^(index\.php|resources|robots\.txt) RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ test/index.php/$1 [L,QSA] I also changed $config['index_page'] = ''; but when running this I got an error message... Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator at [email protected] to inform them of the time this error occurred, and the actions you performed just before this error. More information about this error may be available in the server error log. Here is my Apache error log: [Sat Jan 05 16:59:53.265625 2013] [core:error] [pid 3976:tid 1152] [client ] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace. Please help me solve this. Thanks

  • Manual HTTP error response code in non-existent folder via routing

    - by Slytherin
    Apache server running on an Ubuntu-like Linux. I am getting unexpected behaviour when I try to manually send an error response. If my .htaccess is responsible for the error response, then the appropriate error document is loaded and displayed, with the corresponding response code in the browser console. However, if my router is the origin of the response code, then I get a blank screen, but the correct response code. My .htaccess looks like this: RewriteEngine On # RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule !\.(css|js|icon|zip|rar|png|jpg|gif|pdf)$ index.php [L] ErrorDocument 404 /err/404.html ErrorDocument 403 /err/403.html ErrorDocument 500 /err/500.html The part of my router that sends the response is the following: header("HTTP/1.1 403 Forbidden"); Trying this format didn't help either: header("HTTP/1.1 403 Forbidden", TRUE, 403); I also tried HTTP/1.0. Furthermore, I was thinking that maybe the relative path to the error page might be an issue, but discarded this idea after attempting to access a document that is forbidden via .htaccess. EDIT: I should also point out that this scenario happens when the URL for a non-existent article is requested. Is it possible that the server is looking for a .htaccess file in a folder based on the URL? E.g., for domain/blog/non-existent, is the server looking for a blog folder? I am specifically asking this because there is no blog folder.
