Search Results

Search found 21350 results on 854 pages for 'url parsing'.


  • Using subdomains or directories for main categories?

    - by Matthieu
    I have a website that references places to travel to around the world. Those places are (of course) grouped by country. Here is an example of an actual URL on my website: http://awespot.org/country/105/iceland

    I am wondering whether it would be better, in terms of SEO, to separate the countries into subdomains: http://iceland.awespot.org/

    I know Google considers subdomains to be different websites, so I am weighing the two options: separating would also mean splitting the PageRank and the benefit of links to the website, but it would let me create a "web" of related websites (all about travel) that link to and benefit each other.

    I am only asking about SEO here; I know this possibility raises other questions (user experience, administration... even password completion by web browsers).

    Read the article

  • Advanced Rails routing of short URLs and usernames off of the root URL

    - by Michael Waxman
    I want to have username URLs and Base58 short URLs to resources, both off of the root URL, like this:

        http://mydomain.com/username  #=> goes to the given user
        http://mydomain.com/a3x9      #=> goes to the given story

    I am aware of the possibility of usernames conflicting with short URLs, and I have a workaround, but what I can't figure out is the best way to set this up in Rails. Can I do it in the Rails routes? Should I do something with a piece of Rack middleware? Should I set up a routing controller? Please let me know the best way to do this. Thanks so much!

    Read the article

  • Apache URL rewriting doesn't work (using GoDaddy hosting)

    - by AzizAG
    I'm using a framework (CodeIgniter) to create my website. By default the URLs look like this: mysite.com/index.php?/etc/etc/etc, and I'm trying to remove the index.php?. I tried this (which didn't work):

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /index.php?$1 [L]

    Note: it works on my localhost (with my website's files in the root directory). So, is this issue on my end or with the hosting company (GoDaddy)?
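    For what it's worth, here is a variant that often helps CodeIgniter on shared hosts; treat it as a sketch to try rather than a guaranteed GoDaddy fix (the changes are the added RewriteBase and the ? kept in the substitution):

        RewriteEngine on
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?/$1 [L]

    If the framework still can't read the URI, CodeIgniter's $config['uri_protocol'] setting (e.g. QUERY_STRING instead of AUTO) is the other knob commonly adjusted.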

    Read the article

  • Masking a domain URL with a subdomain URL

    - by Vijay Kumar
    Hi, I have a subdomain (abc.example.in), but it refers to mydomain/folder. The thing is, I want to mask the URL links, and the masked URLs should be displayed under the subdomain:

        mydomain/folder/path  ->  abc.example.in/path

    Can anyone help me out?

    Read the article

  • The Mysterious ARR Server Farm to URL Rewrite link

    - by OWScott
    Application Request Routing (ARR) is a reverse proxy plug-in for IIS7+ that does many things, including functioning as a load balancer. For this post, I'm assuming that you already have an understanding of ARR. Today I wanted to find out how the mysterious link between ARR and URL Rewrite is maintained. Let me explain…

    ARR is unique in that it doesn't work by itself. It sits on top of IIS7 and uses URL Rewrite. As a result, ARR depends on URL Rewrite to 'catch' the traffic and redirect it to an ARR Server Farm.

    As the last step of creating a new Server Farm, ARR will prompt you to let it create a URL Rewrite rule for you. If you accept the prompt, it creates the rule; if you say 'No', then you're on your own to create a URL Rewrite rule. When you say 'Yes', the Server Farm's checkbox for "Use URL Rewrite to inspect incoming requests" will be checked.

    However, I'm not a fan of this auto-rule. The problem is that if I make any changes to the URL Rewrite rule, which I always do, and then make the wrong change in ARR, it will blow away my settings. So I prefer to create my own rule and manage it myself.

    Since I had some old rules that were managed by ARR, I wanted to update them so that they were no longer managed that way. I took a look at the config in applicationHost.config to try to find out what property binds the two together. I assumed that there would be a property on the Server Farm called something like urlRewriteRuleName that would serve as the link between ARR and URL Rewrite. I found no such property. After a bit of testing, I found that the name of the URL Rewrite rule is the only link between ARR and URL Rewrite. I wouldn't have guessed. The URL Rewrite rule needs to be named exactly ARR_{ServerFarm Name}_loadBalance, although it's not case sensitive.

    With the auto-created URL Rewrite rule in place, the link between ARR and URL Rewrite exists. As soon as I rename that rule to anything else, for example "site.com ARR Binding", the link between ARR and URL Rewrite is broken. To be certain of the relationship, I renamed it back again, and sure enough, the relationship was reestablished.

    Why is this important? It's only important if you want to decouple the relationship between ARR and the URL Rewrite rule, but if you want to do so, the best way is to rename the URL Rewrite rule. If you uncheck the "Use URL Rewrite to inspect incoming requests" checkbox instead, it will delete your rule for you without prompting.

    Conclusion

    The mysterious link between ARR and URL Rewrite exists only through the ARR rule name. If you want to break the link, simply rename the URL Rewrite rule. It's completely safe to do so, and, in my opinion, this is a rule that you should manage yourself anyway.
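    For reference, the auto-created rule in applicationHost.config looks roughly like the sketch below (written from memory for a hypothetical Server Farm named myServerFarm; the generated rule on your server may differ slightly):

        <rule name="ARR_myServerFarm_loadBalance" patternSyntax="Wildcard" stopProcessing="true">
          <match url="*" />
          <action type="Rewrite" url="http://myServerFarm/{R:0}" />
        </rule>

    Renaming this rule, per the findings above, is what breaks the link between ARR and URL Rewrite.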

    Read the article

  • Are there disadvantages to a literal + instead of an encoded + (%2B) in a URL?

    - by M_rk
    A client of mine has a product ending with a plus sign (e.g. Google+) and would like the webpage for this product to have a URL that is human-readable (i.e. a URL that doesn't contain %2B). Since our projects use the following .htaccess RewriteRule:

        RewriteRule ^(.*)$ index.php?$1

    it is possible to use a urlencoded space in a URL like that. However, while the URL would read like /google+, the actual meaning of the URL would be /google[space]. (The markup won't let me place a real space there.) Now my concern is that this would have disadvantages for SEO. Is this concern valid, and/or are there other culprits to this approach?
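    To see why the rewritten path behaves this way, note how PHP's two decoders treat a plus sign; the RewriteRule above lands the path in the query string, where '+' traditionally means a space. A quick illustration:

        <?php
        // urldecode() follows query-string rules: '+' becomes a space
        var_dump(urldecode('google+'));     // string(7) "google "
        // rawurldecode() follows path rules: '+' stays a plus
        var_dump(rawurldecode('google+'));  // string(7) "google+"
        ?>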

    Read the article

  • How can I unshorten a URL using Python?

    - by Andrew
    I want to be able to take a shortened or non-shortened URL and return its un-shortened form. How can I write a Python program to do this?

    Additional clarification:

        Case 1: shortened -> unshortened
        Case 2: unshortened -> unshortened

    E.g. bit.ly/silly in the input array should be google.com in the output array, and google.com in the input array should stay google.com in the output array. Thanks for the help!
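    One common approach is to issue a request and let the library follow the redirects for you; the final URL is the unshortened one. A minimal sketch (assuming Python 3's standard library, a working network connection, and a shortener that redirects via HTTP 301/302):

        import urllib.request

        def unshorten(url):
            """Follow any HTTP redirects and return the final URL."""
            if not url.startswith(("http://", "https://")):
                url = "http://" + url  # bare hosts like bit.ly/silly need a scheme
            req = urllib.request.Request(url, method="HEAD")  # headers only, no body
            with urllib.request.urlopen(req) as resp:
                return resp.geturl()  # URL after redirects; unchanged if there were none

        print(unshorten("bit.ly/silly"))   # e.g. http://www.google.com/
        print(unshorten("google.com"))     # stays on google.com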

    Read the article

  • URL Rewrite – Protocol (http/https) in the Action

    - by OWScott
    IIS URL Rewrite supports server variables for pretty much every part of the URL and HTTP header. However, there is one commonly used server variable that isn't readily available: the protocol, HTTP or HTTPS.

    You can easily check whether a page request uses HTTP or HTTPS, but that only works in the conditions part of the rule. There isn't a variable available to dynamically set the protocol in the action part of the rule. What I wish is that there were a variable like {HTTP_PROTOCOL} which would have a value of 'HTTP' or 'HTTPS'. There is a server variable called {HTTPS}, but its values of 'on' and 'off' aren't practical in the action. You can also use {SERVER_PORT} or {SERVER_PORT_SECURE}, but again, they aren't useful in the action.

    Let me illustrate. The following rule will redirect traffic for http(s)://localtest.me/ to http://www.localtest.me/.

        <rule name="Redirect to www">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
          </conditions>
          <action type="Redirect" url="http://www.localtest.me/{R:1}" />
        </rule>

    The problem is that it forces the request to HTTP even if the original request was for HTTPS.

    Interestingly enough, I planned to blog about this topic this week when I noticed in my Twitter feed yesterday that Jeff Graves, a former colleague of mine, just wrote an excellent blog post about this very topic. He beat me to the punch by just a couple days. However, I figured I would still write my blog post on this topic. While his solution is an excellent one, I personally handle this another way most of the time. Plus, it's a commonly asked question that isn't documented well enough on the web yet, so having another article on the web won't hurt.

    I can think of four different ways to handle this, and depending on your situation you may lean towards any of the four. Don't let the choices overwhelm you, though. Let's keep it simple: Option 1 is what I use most of the time, Option 2 is what Jeff proposed and is the safest option, and Option 3 and Option 4 need only be considered if you have a more unique situation. All four options will work for most situations.

    Option 1 – CACHE_URL, single rule

    There is a server variable that has the protocol in it: {CACHE_URL}. This server variable contains the entire URL string (e.g. http://www.localtest.me:80/info.aspx?id=5). All we need to do is extract the HTTP or HTTPS and we'll be set. This tends to be my preferred way to handle this situation.

    Indeed, Jeff did briefly mention this in his blog post:

        … you could use a condition on the CACHE_URL variable and a back reference in the rewritten URL. The problem there is that you then need to match all of the conditions which could be a problem if your rule depends on a logical "or" match for conditions.

    Thus the problem. If you have multiple conditions set to "Match Any" rather than "Match All", then this option won't work. However, I find that 95% of all rules that I write use "Match All", and therefore, being the lazy administrator that I am, I like this simple solution that only requires adding a single condition to a rule. The caveat is that if you use "Match Any" then you must consider one of the next two options.

    Enough with the preamble. Here's how it works. Add a condition that checks {CACHE_URL} with a pattern of "^(.+)://". Now you have a back-reference to the part before the ://, which is our treasured HTTP or HTTPS. In URL Rewrite 2.0 or greater you can check "Track capture groups across conditions", make that condition the first condition, and you have yourself a back-reference of {C:1}.

    The "Redirect to www" example, with support for maintaining the protocol, becomes:

        <rule name="Redirect to www" stopProcessing="true">
          <match url="(.*)" />
          <conditions trackAllCaptures="true">
            <add input="{CACHE_URL}" pattern="^(.+)://" />
            <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
          </conditions>
          <action type="Redirect" url="{C:1}://www.localtest.me/{R:1}" />
        </rule>

    It's not as easy as it would be if Microsoft gave us a built-in {HTTP_PROTOCOL} variable, but it's pretty close. I also like this option since I often create rule examples for other people, and this type of rule is portable since it's self-contained within a single rule.

    Option 2 – Using a Rewrite Map

    For a safer rule that works for both "Match Any" and "Match All" situations, you can use the Rewrite Map solution that Jeff proposed. It's a perfectly good solution, with the only drawback being the ever so slight extra effort to set it up, since you need to create a rewrite map before you create the rule. In other words, if you choose to use this as your sole method of handling the protocol, you'll be safe.

    After you create a Rewrite Map called MapProtocol, you can use "{MapProtocol:{HTTPS}}" for the protocol within any rule action. Following is an example using a Rewrite Map.

        <rewrite>
          <rules>
            <rule name="Redirect to www" stopProcessing="true">
              <match url="(.*)" />
              <conditions trackAllCaptures="false">
                <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
              </conditions>
              <action type="Redirect" url="{MapProtocol:{HTTPS}}://www.localtest.me/{R:1}" />
            </rule>
          </rules>
          <rewriteMaps>
            <rewriteMap name="MapProtocol">
              <add key="on" value="https" />
              <add key="off" value="http" />
            </rewriteMap>
          </rewriteMaps>
        </rewrite>

    Option 3 – CACHE_URL, multi-rule

    If you have many rules that will use the protocol, you can create your own server variable which can be used in subsequent rules. This option is no easier to set up than Option 2 above, but you can use it if you prefer the easier-to-remember syntax of {HTTP_PROTOCOL} vs. {MapProtocol:{HTTPS}}. The potential issue with this option is that if you don't have access to the server level (e.g. in a shared environment), then you cannot set server variables without permission.

    First, create a rule and place it at the top of the set of rules. You can create this at the server, site or subfolder level. However, if you create it at the site or subfolder level, then the HTTP_PROTOCOL server variable needs to be approved at the server level. This can be achieved in IIS Manager by navigating to URL Rewrite at the server level, clicking on "View Server Variables" from the Actions pane, and adding HTTP_PROTOCOL. If you create the rule at the server level, then this step is not necessary.

    Following is an example of the first rule, which creates HTTP_PROTOCOL, and then a rule that uses it. The "Create HTTP_PROTOCOL" rule only needs to be created once on the server.

        <rule name="Create HTTP_PROTOCOL">
          <match url=".*" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{CACHE_URL}" pattern="^(.+)://" />
          </conditions>
          <serverVariables>
            <set name="HTTP_PROTOCOL" value="{C:1}" />
          </serverVariables>
          <action type="None" />
        </rule>

        <rule name="Redirect to www" stopProcessing="true">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
          </conditions>
          <action type="Redirect" url="{HTTP_PROTOCOL}://www.localtest.me/{R:1}" />
        </rule>

    Option 4 – Multi-rule

    Just to be complete, I'll include an example of how to achieve the same thing with multiple rules. I don't see any reason to use it over the previous examples, but I'll include an example anyway. Note that it will only work with the "Match All" setting for the conditions.

        <rule name="Redirect to www - http" stopProcessing="true">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
            <add input="{HTTPS}" pattern="off" />
          </conditions>
          <action type="Redirect" url="http://www.localtest.me/{R:1}" />
        </rule>

        <rule name="Redirect to www - https" stopProcessing="true">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
            <add input="{HTTPS}" pattern="on" />
          </conditions>
          <action type="Redirect" url="https://www.localtest.me/{R:1}" />
        </rule>

    Conclusion

    Above are four working examples of methods to call the protocol (HTTP or HTTPS) from the action of a URL Rewrite rule. You can use whichever method you most prefer. I've listed them in the order that I favor them, although I could see some people preferring Option 2 as their first choice. In any case, hopefully you can use this as a reference for when you need the protocol in the rule's action when writing your URL Rewrite rules.

    Further information:
    - Viewing all Server Variables for a site
    - URL parts available to URL Rewrite rules
    - Further URL Rewrite articles

    Read the article

  • What is the best URL strategy to handle multiple search parameters and operators?

    - by Jon Winstanley
    Searching with multiple parameters: in my app I would like to allow the user to do complex searches based on several parameters, using a simple syntax similar to the GMail functionality where a user can search for "in:inbox is:unread" etc. However, GMail does a POST with this information, and I would like the form to be a GET so that the information is in the URL of the search results page. Therefore I need the parameters to be formatted in the URL.

    Requirements:

    - Keep the URL as clean as possible
    - Avoid the use of invalid URL chars, such as square brackets
    - Allow lots of search functionality
    - Have the ability to add more functions later

    I know StackOverflow allows the user to search by multiple tags in this way: http://stackoverflow.com/questions/tagged/c+sql. However, I'd also like to allow users to search with multiple additional parameters.

    Initial design: my current design uses URLs such as these:

        http://example.com/search/tagged/c+sql/searchterm/transactions
        http://example.com/search/searchterm/transactions
        http://example.com/search/tagged/c+sql
        http://example.com/search/tagged/c+sql/not-tagged/java
        http://example.com/search/tagged/c+sql/created/yesterday
        http://example.com/search/created_by/user1234

    I intend to parse the URL after the search segment, then decide how to construct my search query. Has anyone seen URL parameters like this implemented well on a website? If so, which ones do it best?
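    If it helps to see the parsing side, the alternating name/value segments pair up neatly; below is a minimal sketch (Python chosen arbitrarily here, and the '+' multi-value convention is an assumption borrowed from the tag examples above):

        from urllib.parse import unquote

        def parse_search_path(path):
            """Turn /search/tagged/c+sql/created/yesterday into a dict of criteria."""
            segments = [unquote(s) for s in path.strip("/").split("/")]
            assert segments[0] == "search"
            pairs = segments[1:]
            # alternating name/value segments; '+' separates multiple values
            return {name: value.split("+") for name, value in zip(pairs[::2], pairs[1::2])}

        print(parse_search_path("/search/tagged/c+sql/not-tagged/java"))
        # {'tagged': ['c', 'sql'], 'not-tagged': ['java']}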

    Read the article

  • Parsing scripts that use curly braces

    - by Keikoku
    To give an idea of what I'm doing: I am writing a Python parser that will parse DirectX .x text files. The problem I have deals with how the files are formatted. Although I'm writing it in Python, I'm looking for general algorithms for dealing with this sort of parsing.

    .x files define data using templates. The format of a template is:

        template_name {
            [some_data]
        }

    The goal is to parse the file line by line and, whenever I come across a template, deal with it accordingly. My initial approach was to check whether a line contains an opening or closing brace. If it's an open brace, then I check what the template name is. The catch is that the open brace doesn't have to occur on the same line as the template name. It could just as well be:

        template_name
        {
            [some_data]
        }

    So if I were to use my "open brace exists" criterion, it wouldn't work for any files that use the latter format. A lot of languages also use curly braces (though I'm not sure when people would be parsing the scripts themselves), so I was wondering if anyone knows how to accurately get the template name (in some other languages it could just as well be a function name, though there aren't any keywords to look for).
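    A common way around the line-by-line limitation is to tokenize the whole stream instead, remembering the identifier that precedes each opening brace regardless of where the newline falls. A minimal sketch (not tied to the full .x grammar):

        import re

        # a token is a lone brace, or a run of anything that isn't whitespace or a brace
        TOKEN = re.compile(r"[{}]|[^\s{}]+")

        def template_names(text):
            """Yield (name, depth) for every '{' block, wherever the line breaks fall."""
            prev = None
            depth = 0
            for tok in TOKEN.findall(text):
                if tok == "{":
                    depth += 1
                    yield prev, depth
                elif tok == "}":
                    depth -= 1
                else:
                    prev = tok  # candidate name for the next '{'

        src = "Mesh {\n}\nFrame\n{\n}"
        for name, depth in template_names(src):
            print(name, depth)  # Mesh 1, then Frame 1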

    Read the article

  • Should a URL match the page's title?

    - by Yottatron
    Should the URL of a page match its title? For example:

        http://example.com/about-cats.html
        <title>About Cats</title>

    Furthermore, if that title were changed by the page's author, should the URL change to match, with the old URL redirected (301) to the new URL?

    Edit: Also, if the page's author were to revert the change after several days, would it be right to remove the redirect and set up a new redirect from the amended URL back to the old one?
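    If the slug does change, the old-to-new redirect can be a one-liner. A sketch assuming Apache's mod_alias (about-felines.html is a hypothetical renamed page):

        Redirect 301 /about-cats.html http://example.com/about-felines.html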

    Read the article

  • URL slugs: ideal length, and the real SEO effects of these slugs

    - by tattvamasi
    This question is addressed widely on SO and outside it, but for some reason, instead of taking it all as a good load of great advice, I find the information confusing.

    ** Problem **

    I already had "prettified" URLs on one of my sites. I had taken out the query strings and rewritten the URLs, and the links were short enough for me, but there was a problem: the ID of the item or post in the URL isn't good for users. One of the users asked if there's a way to get rid of the numbers, and I thought it would be better for users to see a clue of the page content in the URL.

    ** Solution **

    With this in mind, I am experimenting with a section of the site. Armed with 301 redirects, some parsing work, and a lot of patience, I have added URL slugs to some blog entries, and the slug reports the title of the article (something close to http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/).

    ** Problems after Solution **

    The problem, as I see it, is that the URLs of those blog articles are now very descriptive for sure, but also impossible to remember. So this brings me back to the same issue I had before: if numbers say nothing and can't be remembered, what's the use of these slugs? I prefer to see http://example.com/my-news/1/ rather than http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/. To avoid forcing my users to memorize my URLs, I have added a script that finds the closest match to the URL you type and redirects there. This is something I like, because the page now acts as a sort of little search engine, and users can play with the URLs to find articles.

    ** Open questions **

    I still have some open questions, and don't seem to be able to find answers, because the answers tend to contradict one another.

    1) How long should a URL ideally be? I've read the magic number of 115 characters and am sticking to that, but am not sure.

    2) Is this really good for SEO? One of those blog articles I have redirected, with the ID number in the URL and all, ranked second on Google. I've just found this question, and the answer seems to be consistent with what I think: URL slug and SEO - structure (but see this other question with the opposite opinion).

    3) To make the question specific with an example: would this URL risk being penalized? Is it acceptable? Is it too long? StackOverflow seems to have comparably long URLs, but I'm not sure it's a winning strategy in my case. I just want to help my users without running afoul of Google's algorithms.

    Read the article

  • Convert a Relative URL to an Absolute URL in ActionScript / Flex

    - by Bear
    I am working with Flex, and I need to take a relative URL source property and convert it to an absolute URL before loading it. The specific case I am working with involves tweaking SoundEffect's load method. I need to determine whether a file will be loaded from the local file system or over the network by looking at the source property, and the easiest way I've found to do this is to generate the absolute URL. I'm having trouble generating the absolute URL for SoundEffect in particular. Here were my initial thoughts, which haven't worked:

    1. Look for the DisplayObject that the SoundEffect targets, and use its loaderInfo property. The target is null when the SoundEffect loads, so this doesn't work.
    2. Look at FlexGlobals.topLevelApplication, at the url or loaderInfo properties. Neither of these is set, however.
    3. Look at FlexGlobals.topLevelApplication.systemManager.loaderInfo. This was also not set.

    The SoundEffect.as code basically boils down to:

        var url:String = "mySound.mp3";
        // >> I'd like to convert the URL to absolute form here and tweak it as necessary <<
        var req:URLRequest = new URLRequest(url);
        var loader:Loader = new Loader();
        loader.load(req);

    Does anyone know how to do this? Any help clarifying the rules of how relative URLs are resolved for URLRequests in ActionScript would also be much appreciated.

    Edit: I would also be perfectly satisfied with some way to tell whether the URL will be loaded from the local file system or over the network. With an absolute URL it would be easy to just look at the prefix, like file:// or http://.

    Read the article

  • Creating a Reverse Proxy with URL Rewrite for IIS

    - by OWScott
    There are times when you need to reverse proxy through a server. The most common example is when you have an internal web server that isn't exposed to the internet, and you have a public web server accessible to the internet. If you want to serve up traffic from the internal web server, you can do this through the public web server by creating a tunnel (aka reverse proxy). Essentially, you can front the internal web server with a friendly URL, even hiding custom ports.

    For example, consider an internal web server with a URL of http://10.10.0.50:8111. You can make that available through a public URL like http://tools.mysite.com/. The URL can be made public, or it can be used by your internal staff and be password protected and/or locked down by IP address.

    This is easy to do with URL Rewrite and IIS. You will also need Application Request Routing (ARR) installed, even though for a simple reverse proxy you won't use most of ARR's functionality. If you don't already have URL Rewrite and ARR installed, you can do so easily with the Web Platform Installer.

    A lot can be said about reverse proxies: there are many different situations, ways to route the traffic, and ways to handle different URL patterns. However, my goal here is to get you up and going in the easiest way possible. Then you can dig in deeper after you get the base configuration in place.

    URL Rewrite makes a reverse proxy very easy to set up. Note that the URL Rewrite "Add Rules" template doesn't include Reverse Proxy at the server level. That's not to say that you can't create a server-level reverse proxy, but the URL Rewrite rules template doesn't help you with that.

    Getting Started

    First you must create a website on your public web server that has the public bindings that you need. Alternately, you can use an existing site and route certain traffic using conditions. After you've created your site, open up URL Rewrite at the site level. Using the "Add Rule(s)…" template that is opened from the right-hand Actions pane, create a new Reverse Proxy rule.

    If you receive a prompt (the first time) that the proxy functionality needs to be enabled, select OK. This is telling you that a proxy can route traffic outside of your web server, which happens to be our goal in this case. Be aware that reverse proxy rules can be dangerous if you open sites from inside your network to the world, so just be aware of what you're doing and why.

    The next and final step of the template asks a few questions. The first textbox asks the name of the internal web server. In our example, it's 10.10.0.50:8111. This can be any URL, including a subfolder like internal.mysite.com/blog. Don't include the http or https here; the template assumes that it's not entered.

    You can choose whether to perform SSL offloading or not. If you leave this checked, then all requests to the internal server will be over HTTP regardless of the original web request. This can help with performance and SSL bindings if all requests are within a trusted network. If the network path between the two web servers is not completely trusted and safe, then uncheck this.

    Next, the template enables you to create an outbound rule. This is used to rewrite links in the page to look like your public domain name rather than the internal domain name. Outbound rules have a lot of CPU overhead because the entire web content needs to be parsed and updated. However, if you need it, then it's well worth the extra CPU hit on the web server.

    If you check the "Rewrite the domain names of the links in HTTP responses" checkbox, then the From textbox will be filled in with what you entered for the inbound rule. You can enter your friendly public URL for the outbound rule. This will essentially replace any reference to 10.10.0.50:8111 (or whatever you enter) with tools.mysite.com in all <a>, <form>, and <img> tags on your site.

    That's it! Well, there is a lot more that you can do, but this will give you the base configuration. You can now visit tools.mysite.com on your public web server, and it will serve up the site from your internal web server.

    You should see two rules show up: one inbound and one outbound. You can edit these, add conditions, and tweak them further as needed.

    One common issue that can occur without outbound rules has to do with compression. If you run into errors with the new proxied site, try turning off compression to confirm whether that's the issue. Here's a link with details on how to deal with compression and outbound rules.

    I hope this was helpful to get started and to see how easy it is to create a simple reverse proxy using URL Rewrite for IIS.
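    For reference, the inbound rule that the template writes to the site's web.config looks roughly like the sketch below (assuming the 10.10.0.50:8111 example above; your generated rule may differ, and ARR's proxy must be enabled for it to work):

        <rule name="ReverseProxyInboundRule1" stopProcessing="true">
          <match url="(.*)" />
          <action type="Rewrite" url="http://10.10.0.50:8111/{R:1}" />
        </rule>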

    Read the article

  • Blocking a specific URL by IP (a URL created by mod_rewrite)

    - by Alex
    We need to block a specific URL for anyone not on a local IP (anyone without a 192.168.*.* address). We however cannot use Apache's

        <Directory /var/www/foo/bar>
            Order allow,deny
            Allow from 192.168
        </Directory>

        <Files /var/www/foo/bar>
            Order allow,deny
            Allow from 192.168
        </Files>

    because these would block specific files or directories; we need to block a specific URL which is created by mod_rewrite, where the page is dynamically created using PHP. Any ideas would be greatly appreciated.
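    One approach that avoids matching the filesystem entirely is a <Location> block, which matches the request URL rather than a file or directory. A sketch, assuming Apache 2.2-style access directives (substitute the real rewritten URL path for /foo/bar):

        <Location /foo/bar>
            Order deny,allow
            Deny from all
            Allow from 192.168
        </Location>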

    Read the article

  • Parsing query strings in Java

    - by Will
    J2EE has ServletRequest.getParameterValues(). On non-EE platforms, URL.getQuery() simply returns a string. What's the normal way to properly parse the query string in a URL when not on J2EE?
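    Outside the servlet API there is no standard query-string parser, so splitting by hand and decoding with java.net.URLDecoder is a common approach. A minimal sketch (assumptions: UTF-8 encoded parameters, and duplicate keys collected into a list):

        import java.io.UnsupportedEncodingException;
        import java.net.URLDecoder;
        import java.util.ArrayList;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        public class QueryStringParser {

            /** Parse a raw query string like "q=url+parsing&page=2" into a key -> values map. */
            public static Map<String, List<String>> parse(String query)
                    throws UnsupportedEncodingException {
                Map<String, List<String>> params = new LinkedHashMap<String, List<String>>();
                for (String pair : query.split("&")) {
                    int idx = pair.indexOf('=');
                    String key = URLDecoder.decode(idx >= 0 ? pair.substring(0, idx) : pair, "UTF-8");
                    String value = idx >= 0 ? URLDecoder.decode(pair.substring(idx + 1), "UTF-8") : "";
                    List<String> values = params.get(key);
                    if (values == null) {
                        values = new ArrayList<String>();
                        params.put(key, values);
                    }
                    values.add(value);
                }
                return params;
            }

            public static void main(String[] args) throws Exception {
                System.out.println(parse("q=url+parsing&page=2&page=3"));
                // prints {q=[url parsing], page=[2, 3]}
            }
        }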

    Read the article

  • Where does the URL parameter "?chocaid=397" come from?

    - by unor
    In Google Webmaster Tools, I noticed that my front page was indexed two times:

        example.com/
        example.com/?chocaid=397

    I know that I could fix this with the use of link type canonical, but I wonder: where does this parameter come from? There are various sites that have pages indexed with this very parameter/value: https://duckduckgo.com/?q=chocaid%3D397. I looked for similarities between these sites but couldn't find anything conclusive:

    - It's often the front page, but not in every case.
    - Some are NSFW, but not all.
    - When one domain's URL has this parameter, often other subdomains of the same domain have it, too.

    Examples: Wikipedia entry, Microsoft Codeplex

    Read the article

  • IIS 7 URL Rewrite to GeoServer running on Apache

    - by Maxim Zaslavsky
    I'm building a mapping application based on OpenLayers that uses GeoServer to serve up mapping data. The problem I'm having is that besides the map images I'm requesting through WMS, I'm using jQuery AJAX to get information from GeoServer. As GeoServer is running on a different port, my requests are being blocked due to cross-site scripting security policies in JavaScript. As a Java application, GeoServer runs on Apache on port 8080, while my IIS instance is running on port 80. Instead of building a proxy, I've decided to use URL Rewriting in IIS 7 to fix this problem. I'm following this guide, but it's still not working. Here are my URL Rewrite rule settings:

        Matches URL: (.*)
        Condition:   {HTTP_URL} matching /geoserver
        Action:      rewrite to http://localhost:8080/{R:1}, appending the query string

    However, when I request

        http://localhost/geoserver/wms?QUERY_LAYERS=SanDiego:FWSA_sandiego&LAYERS=SanDiego:FWSA_sandiego&SERVICE=WMS&VERSION=1.1.1&FEATURE_COUNT=20&REQUEST=GetFeatureInfo&EXCEPTIONS=application/vnd.ogc.se_xml&BBOX=-13009123.590156,3862057.2905992,-13006066.109025,3865114.7717302&INFO_FORMAT=text/html&x=20&y=20&width=40&height=40&srs=EPSG:900913

    all I get is a 404, although the same request on port 8080 returns the proper result. What am I doing wrong? Thanks in advance.
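    Translated into web.config form, a more targeted version of that rule might look like the sketch below. One assumption worth stating: Application Request Routing (ARR) must be installed and its proxy feature enabled at the server level, otherwise IIS serves a 404 instead of forwarding the request:

        <rule name="Proxy to GeoServer" stopProcessing="true">
          <match url="^geoserver/(.*)" />
          <action type="Rewrite" url="http://localhost:8080/geoserver/{R:1}" appendQueryString="true" />
        </rule>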

    Read the article

  • Parsing a string from a grammar file

    - by defn
    How would I separate the string below into its parts? What I need to separate is each <word>, including the angle brackets, from the rest of the string. So in the case below I would end up with these strings:

        1. "I have to break up with you because "
        2. "<reason>"
        3. " . But let's still "
        4. "<disclaimer>"
        5. " ."

    The string:

        I have to break up with you because <reason> . But let's still <disclaimer> .

    Below is what I currently have (it's ugly...):

        boolean complete = false;
        int begin = 0;
        int end = 0;
        while (complete == false) {
            if (s.charAt(end) == '<') {
                stack.add(new Terminal(s.substring(begin, end)));
                begin = end;
            } else if (s.charAt(end) == '>') {
                stack.add(new NonTerminal(s.substring(begin, end)));
                begin = end;
                end++;
            } else if (end == s.length()) {
                if (isTerminal(getSubstring(s, begin, end))) {
                    stack.add(new Terminal(s.substring(begin, end)));
                } else {
                    stack.add(new NonTerminal(s.substring(begin, end)));
                }
                complete = true;
            }
            end++;
        }
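    A regex-based alternative is to let java.util.regex alternate between bracketed and unbracketed runs. A sketch (Terminal/NonTerminal are the question's own classes, so plain printing stands in for stack.add):

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class GrammarSplitter {
            public static void main(String[] args) {
                String s = "I have to break up with you because <reason> . But let's still <disclaimer> .";
                // either a <...> group (non-terminal) or a run of anything that isn't '<' (terminal)
                Matcher m = Pattern.compile("<[^>]*>|[^<]+").matcher(s);
                while (m.find()) {
                    String part = m.group();
                    if (part.startsWith("<")) {
                        System.out.println("NonTerminal: " + part);
                    } else {
                        System.out.println("Terminal: \"" + part + "\"");
                    }
                }
            }
        }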

    Read the article

  • Parsing tab delimited file with double quotes in Perl

    - by sfactor
    I have a data set that is tab delimited, with the user-agent strings in double quotes. I need to parse each of these columns, and based on the answer to my other post I used the Text::CSV module.

        94410634  0  GET  "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; GTB6.6; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; AskTB5.5)"  1

    The code is a simple one:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Text::CSV;

        my $csv = Text::CSV->new(sep_char => "\t");

        while (<>) {
            if ($csv->parse($_)) {
                my @columns = $csv->fields();
                print "@columns\n";
            } else {
                my $err = $csv->error_input;
                print "Failed to parse line: $err";
            }
        }

    But I get the "Failed to parse line:" error when I try it on this data set. What am I doing wrong? I need to extract the 4th column containing the user-agent strings for further processing.
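    Worth checking, based on Text::CSV's documented interface: new() takes its attributes in a hash reference ({ sep_char => "\t" }); a flat list is not the documented calling form, so the separator may be left at the default comma. Adding binary => 1 also lets fields carry embedded special characters, which long quoted user-agent strings often do. A sketch of the adjusted constructor:

        my $csv = Text::CSV->new({ binary => 1, sep_char => "\t" })
            or die "Cannot use Text::CSV: " . Text::CSV->error_diag();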

    Read the article

  • Perl: parsing string enclosed by double quotes

    - by sfactor
    I need to parse tab/space delimited files that have a lot of columns in Perl. The values are such that there are large strings enclosed within double quotes. These strings can contain any characters, such as tabs and spaces or anything else. When I try to parse the files with the split function, it splits these strings as well. How can I make Perl understand that a string within " " is a single column entry? A simple example is:

        12 345546.67677 "Hello World!!!" -567.55656 0.5465767 "Hello_Again; "
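    The core module Text::ParseWords handles exactly this case. A minimal sketch (the second argument controls whether the surrounding quotes are kept in the returned fields):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Text::ParseWords;

        my $line = '12 345546.67677 "Hello World!!!" -567.55656 0.5465767 "Hello_Again; "';

        # parse_line(delimiter_regex, keep_quotes, line); 0 strips the quotes
        my @fields = parse_line('\s+', 0, $line);

        print "[$_]\n" for @fields;  # brackets make the kept spaces visible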

    Read the article

  • How to remove trailing slashes from URL with .htaccess?

    - by Matt
    The situation

    Across the entire domain, we'd like the URLs to hide file extensions and remove trailing slashes, independent of the domain name itself (as in, it should work on any domain).

    Sample of our directory structure

    We're not using index.* files except for the homepage:

        /
          index.php
          account.php
          account/
            subscriptions.php
          login.php
          login/
            reset-password.php

    The goal

    Some examples of how these files might be requested, and how they should look in the browser:

        / and /index.php                                  ->  mydomain.com (just the bare domain name)
        /account.php, /account/, /account                 ->  mydomain.com/account
        /account/subscriptions.php, /account/subscriptions/, /account/subscriptions
                                                          ->  mydomain.com/account/subscriptions

    As you can see, there are several ways to access each webpage, but no matter which of the 2 or 3 ways you use to get there, only the one preferred URL shows in the browser.

    The question

    How is this done with .htaccess using mod_rewrite? I've banged my head against the wall trying to figure this out, but in general, the rewrite flow would seem to be something like this:

    1. External 301 redirect (mydomain.com/account/ -> mydomain.com/account)
    2. Internally append .php (mydomain.com/account -> mydomain.com/account.php)

    I've been Googling this all day, read thousands of lines of documentation and config texts, and have tried several dozen times... I think more brains on this would help a lot.

    UPDATE: We found an answer to our question (see below).
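    One way to sketch that two-step flow in .htaccess (an illustration, not necessarily the answer referenced in the update; assumptions: mod_rewrite is enabled and no real directories need to keep their trailing slash):

        RewriteEngine on

        # step 1: external 301 redirect to drop any trailing slash
        # (real directory URLs may also need "DirectorySlash Off" so that
        # mod_dir does not immediately re-add the slash)
        RewriteRule ^(.+)/$ /$1 [R=301,L]

        # step 2: internally append .php when the matching file exists
        RewriteCond %{REQUEST_FILENAME}.php -f
        RewriteRule ^(.+)$ /$1.php [L]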

    Read the article

  • Parsing a URL - PHP question

    - by Adi Mathur
    I am using the following code:

        <?php
        $url = 'http://www.ewwsdf.org/012Q/rhod-05.php?arg=value#anchor';
        $parse = parse_url($url);
        $lnk = "http://" . $parse['host'] . $parse['path'];
        echo $lnk;
        ?>

    This gives me the output http://www.ewwsdf.org/012Q/rhod-05.php. How can I modify the code so that I get http://www.ewwsdf.org/012Q/ instead? I just need the directory name, without the file name. (I actually need the link so that I can link up the images on the pages by prepending it to the image name, e.g. http://www.ewwsdf.org/012Q/hi.jpg.)
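    One way to get there is PHP's built-in dirname(), which drops the last path segment. A minimal sketch:

        <?php
        $url = 'http://www.ewwsdf.org/012Q/rhod-05.php?arg=value#anchor';
        $parse = parse_url($url);
        // dirname('/012Q/rhod-05.php') gives '/012Q'; add the trailing slash back
        $lnk = "http://" . $parse['host'] . rtrim(dirname($parse['path']), '/') . '/';
        echo $lnk;  // http://www.ewwsdf.org/012Q/
        ?>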

    Read the article
