Search Results

Search found 21350 results on 854 pages for 'url parsing'.

Page 9/854 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • URL encoding for latin characters in Java

    - by sammichy
    I'm trying to read in an image URL. As mentioned in the Java documentation, I tried converting the URL to a URI:

        String imageURL = "http://www.shefinds.com/files/Christian-Louboutin-Décolleté-100-pumps.jpg";
        URL url = new URL(imageURL);
        url = new URI(url.getProtocol(), url.getHost(), url.getFile(), null).toURL();
        URLConnection conn = url.openConnection();
        InputStream is = conn.getInputStream();

    When the code executes I get the following error: http://www.shefinds.com/files/Christian-Louboutin-Décolleté-100-pumps.jpg. What am I doing wrong, and what is the right way to encode this URL?
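
    [Editor's note] A sketch of one commonly suggested fix, not taken from the article itself: build the URI with the seven-argument java.net.URI constructor, which percent-encodes each component separately, so the accented character in the path becomes legal. The URL is just the question's example:

        import java.io.InputStream;
        import java.net.URI;
        import java.net.URL;
        import java.net.URLConnection;

        public class EncodedFetch {
            public static void main(String[] args) throws Exception {
                URL raw = new URL("http://www.shefinds.com/files/Christian-Louboutin-Décolleté-100-pumps.jpg");
                // The multi-argument URI constructor percent-encodes characters
                // that are illegal in each component, including the accented "é".
                URI encoded = new URI(raw.getProtocol(), null, raw.getHost(), raw.getPort(),
                                      raw.getPath(), raw.getQuery(), raw.getRef());
                URLConnection conn = encoded.toURL().openConnection();
                try (InputStream is = conn.getInputStream()) {
                    System.out.println("Opened stream OK: " + encoded);
                }
            }
        }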

    Read the article

  • Google Webmaster Tools is showing duplicate URLs based on page title differences

    - by Praveen Reddy
    I have 700+ title tag duplicates showing in WMT, and every first link in that screenshot is a duplicate of the second one. I don't know where Google indexed the first link from, since it doesn't exist on the site; the report shows the title of every page as a link. Original link: http://www.sitename.com/job/407/Swedish-plus-Any-other-Nordic-Language-Customer-Service-Representative-Dublin-Ireland. Duplicate link: http://www.sitename.com/job/407/Swedish-plus-Any-other-Nordic-Language-Customer-Service-Representative-Dublin-Ireland-Ireland. How can this happen? I have checked the entire site and can't find anywhere the second version is linked, and no images link to the duplicated version of the URL either.
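
    [Editor's note] Whatever the source of the stray URLs turns out to be, the usual safeguard is a canonical tag on each job page, so that any variant Google discovers consolidates to the intended address. A minimal sketch using the question's own URL:

        <head>
          <link rel="canonical"
                href="http://www.sitename.com/job/407/Swedish-plus-Any-other-Nordic-Language-Customer-Service-Representative-Dublin-Ireland" />
        </head>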

    Read the article

  • Should I have link rel=next & prev on URLs which have query variables?

    - by user21100
    For example, I have link rel prev & next set up on these pages of products: site.com?page=2, site.com?page=3 (this is my preferred structure, by the way, and I'm trying to get all the ugly URLs that are littered with query variables deindexed, as they are causing duplicate content). So the above URLs are fine, but once a filter to narrow product results is selected, like "price", the URL looks like this: site.com?price[1000-1499]=on, site.com?page=2&price[1000-1499]=on. Right now, link rel prev & next is dynamically added to the header of these pages, but since I am working on getting these query-variable URLs deindexed, I am wondering if I should get rid of it on these pages? Any thoughts?
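
    [Editor's note] For reference, if the filtered pages stay indexable, the conventional pattern is to keep each filtered view as its own self-consistent prev/next chain, with the filter parameter carried through. A sketch using the question's URLs as written:

        <!-- on site.com?page=2&price[1000-1499]=on -->
        <link rel="prev" href="http://site.com/?price[1000-1499]=on" />
        <link rel="next" href="http://site.com/?page=3&price[1000-1499]=on" />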

    Read the article

  • android sdk main.out.xml parsing error?

    - by mobibob
    I just started a new Android project, "WeekendStudy", to continue learning Android development, and I got stumped on the default 'hello weekendstudy' build and run. I think I missed a step in configuration and setup, but I am at a loss to find where. I have an AVD configured, set and launched. When I press 'run', the SDK builds a file main.out.xml and then fails like this:

        [2010-03-06 09:46:47 - WeekendStudy] Error in an XML file: aborting build.
        [2010-03-06 09:46:48 - WeekendStudy] res/layout/main.xml:0: error: Resource entry main is already defined.
        [2010-03-06 09:46:48 - WeekendStudy] res/layout/main.out.xml:0: Originally defined here.
        [2010-03-06 09:46:48 - WeekendStudy] /Users/mobibob/Projects/workspace-weekend/WeekendStudy/res/layout/main.out.xml:1: error: Error parsing XML: no element found

    Essentially the same lines repeat on each later build attempt, at 09:48:16, 09:55:29 and 09:55:49.

    Read the article

  • what is the most elegant way in ruby to remove a parameter from url?

    - by dimus
    I would like to take a parameter out of a URL by its name, without knowing whether it is the first, middle or last parameter, and reassemble the URL again. I guess it is not that hard to write something on my own using CGI or URI, but I imagine such functionality exists already. Any suggestions?

        in:  http://example.com/path?param1=one&param2=2&param3=something3
        out: http://example.com/path?param2=2&param3=something3
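
    [Editor's note] In Ruby the usual pointers are the CGI/URI modules the asker mentions, or the Addressable gem; the underlying technique is the same everywhere: split the query, drop the key, re-encode. A minimal sketch of that idea, written here in Java purely for illustration:

        import java.net.URI;
        import java.util.ArrayList;
        import java.util.List;

        public class StripParam {
            // Remove one query parameter by name and reassemble the URL.
            static String removeParam(String url, String name) {
                URI uri = URI.create(url);
                if (uri.getQuery() == null) return url;   // nothing to strip
                List<String> kept = new ArrayList<>();
                for (String pair : uri.getRawQuery().split("&")) {
                    if (!pair.startsWith(name + "=") && !pair.equals(name)) {
                        kept.add(pair);
                    }
                }
                String query = kept.isEmpty() ? "" : "?" + String.join("&", kept);
                return url.substring(0, url.indexOf('?')) + query;
            }

            public static void main(String[] args) {
                System.out.println(removeParam(
                    "http://example.com/path?param1=one&param2=2&param3=something3", "param1"));
                // -> http://example.com/path?param2=2&param3=something3
            }
        }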

    Read the article

  • Address bar showing long URL

    - by Abel
    I recently upgraded my hosting account to Deluxe, where I can host multiple websites. I added a domain name, created a folder in the root directory with the same name as the domain, and uploaded my files. Now when I navigate the site, the address bar shows 'http://mywebsite/mywebsite/default.aspx', but I want it to display 'http://mywebsite/default.aspx'. My thinking in creating folders that match the domain names was to keep things somewhat organized; I never intended to have my domain name listed twice in the address bar.

    Read the article

  • Google shows subdomain of main site instead of add on domain URL

    - by Welsher
    I have my host (Lunarpages) set up with a few add-on domains to my main account. These show up as subdomains of my main account, but they can be reached by using the new domains I've created. So: subdomain1.domain.com -- www.mynewsite.com, subdomain2.domain.com -- www.myothersite.com, etc. The problem is, mynewsite.com shows up in Google with that domain, but myothersite.com shows up as subdomain2.domain.com. I don't have a clue what might be causing this to happen. If anyone has any advice or can point me in the right direction, I'd really appreciate it! Thanks.

    Read the article

  • Setting up a Reverse Proxy using IIS, URL Rewrite and ARR

    - by The Official Microsoft IIS Site
    Today there was a question in the IIS.net Forums asking how to expose two different Internet sites from another site, making them look as if they were subdirectories of the main site. So, for example, the goal was to have a site www.site.com expose www.site.com/company1 and www.site.com/company2, serving the content of "www.company1.com" for the first one and "www.company2.com" for the second. Furthermore, we would like to have the responses cached on the server for performance...(read more)
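
    [Editor's note] The excerpt is truncated before the walkthrough, but the heart of such a setup is one IIS URL Rewrite rule per company, with Application Request Routing's proxy mode enabled at the server level. A sketch of roughly what the final web.config rules look like (the rule names are placeholders; only the URLs come from the article's example):

        <rewrite>
          <rules>
            <!-- Requires ARR with proxy mode enabled. -->
            <rule name="Company1 proxy" stopProcessing="true">
              <match url="^company1/(.*)" />
              <action type="Rewrite" url="http://www.company1.com/{R:1}" />
            </rule>
            <rule name="Company2 proxy" stopProcessing="true">
              <match url="^company2/(.*)" />
              <action type="Rewrite" url="http://www.company2.com/{R:1}" />
            </rule>
          </rules>
        </rewrite>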

    Read the article

  • Domain forwarding with url substitution in the address bar

    - by Mario Duarte
    Hello, I have a blog being served by a machine I have at home. Since the IP can change, I set up a DynDNS domain to always point to that machine. However, I purchased a friendlier domain (at godaddy.com) and I would like to forward it to that blog. The problem is that if I simply forward it, users will see the DynDNS domain in the address bar and could potentially bookmark those URLs, and that's a problem. I noticed that godaddy.com has domain masking, and although it does hide the DynDNS domain in the address bar, it also keeps the same root address in the address bar even when I navigate to another page. I also have the feeling that search engines will not like this domain-masking thing. Does anyone know how I can accomplish what I want?

    Read the article

  • Using Url Rewrite to Block Page Requests

    - by The Official Microsoft IIS Site
    The other day I was checking the traffic stats for my WordPress blog to see which of my posts were the most popular. I was a little concerned to see that wp-login.php was in the top 5 total requests almost every month. Since I'm the only author on my blog, my logins could not possibly account for the traffic hitting that page; the only explanation is that the additional traffic was coming from automated hacking attempts. Any server administrator concerned about security knows that “footprinting...(read more)
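
    [Editor's note] The post is cut off before the author's own rule, but a request-blocking rule of the kind the title describes looks roughly like this in IIS URL Rewrite (a sketch; the rule name and response text are placeholders, not the author's exact configuration):

        <rule name="Block wp-login" stopProcessing="true">
          <match url="^wp-login\.php$" />
          <action type="CustomResponse" statusCode="403"
                  statusReason="Forbidden"
                  statusDescription="Login requests are not accepted here" />
        </rule>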

    Read the article

  • Please explain some of the features of URL Rewrite module for a newbie

    - by kunjaan
    I am learning to use the IIS URL Rewrite module, and some of the "features" listed on its page are confusing me. It would be great if somebody could explain them to me and give a first-hand account of when you would use each one. Thanks a lot!
      - Rewriting within the content of specific HTML tags
      - Access to server variables and HTTP headers
      - Rewriting of server variables and HTTP request headers (what are the "server variables", and when would you define or redefine them?)
      - Rewriting of HTTP response headers
      - HtmlEncode function (why would you use HtmlEncode on the server?)
      - Reverse proxy rule template
      - Support for IIS kernel-mode and user-mode output caching
      - Failed Request Tracing support
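
    [Editor's note] On the server-variables question above: server variables are the request metadata IIS exposes to handlers (HTTP_HOST, REMOTE_ADDR, and so on), and a rewrite rule can set custom ones, typically to preserve information that a later rewrite or proxy hop would otherwise destroy. A sketch (the variable must also be added to the module's allowed-server-variables list; the name here is a conventional example, not an official one):

        <rule name="Preserve original host">
          <match url=".*" />
          <serverVariables>
            <set name="HTTP_X_ORIGINAL_HOST" value="{HTTP_HOST}" />
          </serverVariables>
          <action type="Rewrite" url="{R:0}" />
        </rule>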

    Read the article

  • URL-rewriting on Plesk using ISAPI_rewrite3 Lite

    - by Anusha
    I am using a Plesk Windows-based web server, with Windows Server 2008 and IIS 6, for my e-commerce website. I want to rewrite the URLs for all dynamic pages, so I installed ISAPI_Rewrite 3 Lite on the web server and uploaded an .htaccess file with these basic rules:

        RewriteEngine on
        RewriteRule ^contact\.html$ contactus.php? [NC,R]

    I have never worked with ISAPI, or with URL rewriting, before. My doubt is how to proceed after installation: should I upload an .htaccess or an httpd.conf file, or should I write the rules in the ISAPI_Rewrite Manager, which provides a place to edit httpd.conf? I have tried all these steps, but unfortunately couldn't find a remedy. Any immediate solution would be appreciated.

    Read the article

  • URL subfolder rewrite without server access

    - by Duke03
    I am having trouble with the following. I have a site in development where every link points to the wrong folder. Example: from example.com/en/home/, a site link goes to example.com/en/, which throws a 404. The way the system is set up, fixing this requires server access, but I don't have that, and I/S is backlogged with requests and will take a week. I still need to develop the site in the meantime. So is there a way to have the browser recognize when example.com/en/ is clicked and automatically redirect it to example.com/en/home, so it bypasses the 404 and I can actually work? I'm looking for anything that gets the job done. I am considering developing a Chrome app to do this, but that would mean a ton of overtime and more work I don't want to do. Is there an easier way of doing this?

    Read the article

  • Multiple Google Analytics code for url under same domain

    - by will.i.am
    I have one domain, www.example.com, and www.example.com/sales. The analytics code on the two URLs is different, so when I log in to my Google account it shows two separate Analytics accounts. On www.example.com/sales I have a banner linked back to www.example.com. I clicked that banner, and I am sure other people have clicked it as well, but when I check the analytics for www.example.com, I don't see anything coming from example.com/sales. I assume the analytics on both URLs are working, so why doesn't it track the visits from /sales? Any idea?

    Read the article

  • How to Route URL from one domain to another..

    - by Magic
    Hello, I am a C# ASP.NET developer. I am trying to route URLs from one domain to another using a GoDaddy IIS virtual dedicated server or dedicated server. For example, I have a website application called A_Application on my server. An example URL: www.myserver.com/A_Application/product/bear/?productid=1, or using a pretty URL, www.myserver.com/A_Application/product/bear/1. I would like to set this up so that my client can point to A_Application using his/her own domain. My client's example URL would be: www.hisserver.com/product/bear/?productid=1, or using a pretty URL, www.hisserver.com/product/bear/1. Thanks!

    Read the article

  • Postback problem when using URL Rewrite and 404.aspx

    - by salle55
    I'm using URL rewriting on my site to get URLs like http://mysite.com/users/john instead of http://mysite.com/index.aspx?user=john. To achieve this extensionless rewrite with IIS 6 and no access to the hosting server, I use the "404 approach": when the server can't find a requested page, the mapped 404 page is executed, and since that is an .aspx page the rewrite can be performed (I can set up the 404 mapping using the control panel of the hosting service). This is the code in Global.asax:

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            string url = HttpContext.Current.Request.Url.AbsolutePath;
            if (url.Contains("404.aspx"))
            {
                string[] urlInfo404 = Request.Url.Query.ToString().Split(';');
                if (urlInfo404.Length > 1)
                {
                    string requestURL = urlInfo404[1];
                    if (requestURL.Contains("/users/"))
                    {
                        HttpContext.Current.RewritePath("~/index.aspx?user=" + GetPageID(requestURL));
                        StoreRequestURL(requestURL);
                    }
                    else if (requestURL.Contains("/picture/"))
                    {
                        HttpContext.Current.RewritePath("~/showPicture.aspx?pictureID=" + GetPageID(requestURL));
                        StoreRequestURL(requestURL);
                    }
                }
            }
        }

        private void StoreRequestURL(string url)
        {
            url = url.Replace("http://", "");
            url = url.Substring(url.IndexOf("/"));
            HttpContext.Current.Items["VirtualUrl"] = url;
        }

        private string GetPageID(string requestURL)
        {
            int idx = requestURL.LastIndexOf("/");
            string id = requestURL.Substring(idx + 1);
            id = id.Replace(".aspx", ""); // Only needed when testing without the 404-approach
            return id;
        }

    And in Page_Load on my master page, I set the correct URL in the action attribute of the form tag:

        protected void Page_Load(object sender, EventArgs e)
        {
            string virtualURL = (string)HttpContext.Current.Items["VirtualUrl"];
            if (!String.IsNullOrEmpty(virtualURL))
            {
                form1.Action = virtualURL;
            }
        }

    The rewrite works fine, but when I perform a postback on the page, the postback isn't executed. Can this be solved somehow? The problem seems to be the 404 approach, because when I try without it (and lose the extensionless feature) the postback works, that is, when I request http://mysite.com/users/john.aspx. Can this be solved, or is there any other solution that fulfills my requirements (IIS 6, no server access/ISAPI filter, and extensionless URLs)?

    Read the article

  • Memory Issues When DOM Parsing A Large XML File on Android Devices

    - by tonyc
    Hey awesome SO users, I have an Android application that parses an XML file for users and displays the results in a much more mobile-friendly format. The app works great for most users, but some users have lots and lots of data, and the app crashes on them because it runs out of memory. Is there any way to have a DOM-style XML parser quit after a certain amount of parsing? I only need the first 30 or so elements, so that would make the application much more efficient. I'd like to use a SAX or pull parser instead, but the XML I'm parsing is not valid and I have no control over it. Unless anyone has a good SAX solution that lets me parse messy, invalid XML, I think DOM is the only way to go. Thanks for reading!
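
    [Editor's note] A sketch of the usual workaround (it does assume the input can be made parseable, e.g. by running it through a tidying library first): a SAX handler can abort as soon as it has seen enough elements by throwing a SAXException, so only the first ~30 elements ever cost memory:

        import java.io.StringReader;
        import javax.xml.parsers.SAXParser;
        import javax.xml.parsers.SAXParserFactory;
        import org.xml.sax.Attributes;
        import org.xml.sax.InputSource;
        import org.xml.sax.SAXException;
        import org.xml.sax.helpers.DefaultHandler;

        public class FirstThirty {
            // Thrown to abort parsing once enough elements have been read.
            static class DoneParsing extends SAXException { }

            public static void main(String[] args) throws Exception {
                DefaultHandler handler = new DefaultHandler() {
                    private int count = 0;
                    @Override
                    public void startElement(String uri, String local, String qName,
                                             Attributes attrs) throws SAXException {
                        if (++count > 30) throw new DoneParsing(); // cap reached
                        System.out.println("element: " + qName);
                    }
                };
                SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
                try {
                    parser.parse(new InputSource(new StringReader("<a><b/><c/></a>")), handler);
                } catch (DoneParsing expected) {
                    // Normal early exit: the first 30 elements have been handled.
                }
            }
        }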

    Read the article

  • getting 502 proxy error while parsing

    - by developer
    I am parsing a page, and I'm getting a response from it, but after some time, i.e. after some of the parsing gets done, I get this error from the server:

        Proxy Error
        The proxy server received an invalid response from an upstream server.
        The proxy server could not handle the request GET /file.php.
        Reason: Error reading from remote server

    After this my parsing fails. I even tried the sleep() function, but it didn't help and the error still came. Are they temporarily blocking my IP, or what? What could be the reason for this, and how can I parse those pages without getting this error?

    Read the article

  • urllib2.Request() with data returns empty url

    - by Mr. Polywhirl
    My main concern is the function getUrlAndHtml(). If I manually build the query and append it to the end of the URI, I can get response.geturl(), but if I pass a dictionary as the request data, the URL does not come back. Is there any way to guarantee the redirected URL? In my example below, if thisWorks = True I get back a URL, but the returned URL is the request URL as opposed to a redirect link. On a side note, the encoding for .E2.80.93 does not translate to - for some reason?

        #!/usr/bin/python
        import pprint
        import urllib
        import urllib2
        from bs4 import BeautifulSoup
        from sys import argv

        URL = 'http://en.wikipedia.org/w/index.php?'

        def yesOrNo(boolVal):
            return 'yes' if boolVal else 'no'

        def getTitleFromRaw(page):
            return page.strip().replace(' ', '_')

        def getUrlAndHtml(title, printable=False):
            thisWorks = False
            if thisWorks:
                # Query built by hand: response.geturl() returns a URL here.
                query = 'title={:s}&printable={:s}'.format(title, yesOrNo(printable))
                opener = urllib2.build_opener()
                opener.addheaders = [('User-agent', 'Mozilla/5.0')]
                response = opener.open(URL + query)
            else:
                # Params passed as request data: the returned URL is the request URL.
                params = {'title': title, 'printable': yesOrNo(printable)}
                data = urllib.urlencode(params)
                headers = {'User-agent': 'Mozilla/5.0'}
                request = urllib2.Request(URL, data, headers)
                response = urllib2.urlopen(request)
            return response.geturl(), response.read()

        def getSoup(html, name=None, attrs=None):
            soup = BeautifulSoup(html)
            if name is None:
                return None
            return soup.find(name, attrs)

        def setTitle(soup, newTitle):
            title = soup.find('div', {'id': 'toctitle'})
            h2 = title.find('h2')
            h2.contents[0].replaceWith('{:s} for {:s}'.format(h2.getText(), newTitle))

        def updateLinks(soup, url):
            fragment = '#'
            for a in soup.findAll('a', href=True):
                a['href'] = a['href'].replace(fragment, url + fragment)

        def writeToFile(soup, filename='out.html', indentLevel=2):
            with open(filename, 'wt') as out:
                pp = pprint.PrettyPrinter(indent=indentLevel, stream=out)
                pp.pprint(soup)
            print('Wrote {:s} successfully.'.format(filename))

        if __name__ == '__main__':
            def exitPgrm():
                print('usage: {:s} "<PAGE>" <FILE>'.format(argv[0]))
                exit(0)

            if len(argv) == 2:
                help = argv[1]
                if help == '-h' or help == '--help':
                    exitPgrm()

            if False:  # argv checking disabled while testing
                if not len(argv) == 3:
                    exitPgrm()

            page = 'Led Zeppelin'    # argv[1]
            filename = 'test.html'   # argv[2]

            title = getTitleFromRaw(page)
            url, html = getUrlAndHtml(title)
            soup = getSoup(html, 'div', {'id': 'toc'})
            setTitle(soup, page)
            updateLinks(soup, url)
            writeToFile(soup, filename)

    Read the article

  • Java - How to find the redirected url of a url?

    - by Yatendra Goel
    I am accessing web pages through Java as follows:

        URLConnection con = url.openConnection();

    But in some cases a URL redirects to another URL, and I want to know the URL to which the previous URL redirected. These are the header fields I got as a response:

        null-->[HTTP/1.1 200 OK]
        Cache-control-->[public,max-age=3600]
        last-modified-->[Sat, 17 Apr 2010 13:45:35 GMT]
        Transfer-Encoding-->[chunked]
        Date-->[Sat, 17 Apr 2010 13:45:35 GMT]
        Vary-->[Accept-Encoding]
        Expires-->[Sat, 17 Apr 2010 14:45:35 GMT]
        Set-Cookie-->[cl_def_hp=copenhagen; domain=.craigslist.org; path=/; expires=Sun, 17 Apr 2011 13:45:35 GMT, cl_def_lang=en; domain=.craigslist.org; path=/; expires=Sun, 17 Apr 2011 13:45:35 GMT]
        Connection-->[close]
        Content-Type-->[text/html; charset=iso-8859-1;]
        Server-->[Apache]

    So at present I am constructing the redirected URL from the value of the Set-Cookie header field; in the above case, the redirected URL is copenhagen.craigslist.org. Is there any standard way to determine which URL a particular URL is going to redirect to? I know that when a URL redirects to another URL, the server sends an intermediate response containing a header field that names the redirect target, but I am not receiving that intermediate response through url.openConnection().
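
    [Editor's note] One standard approach, sketched here rather than taken from the question: disable automatic redirect-following on HttpURLConnection and read the Location header of the 3xx response yourself. (Note that the trace above shows a 200 plus a Set-Cookie, i.e. this particular site appears to pick the regional page via a cookie rather than an HTTP redirect, in which case there is no Location header to read.)

        import java.net.HttpURLConnection;
        import java.net.URL;

        public class RedirectTarget {
            public static void main(String[] args) throws Exception {
                URL url = new URL("http://www.craigslist.org/");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                // Keep the intermediate 3xx response instead of silently following it.
                conn.setInstanceFollowRedirects(false);
                int status = conn.getResponseCode();
                if (status >= 300 && status < 400) {
                    // The Location header carries the redirect target.
                    System.out.println("Redirects to: " + conn.getHeaderField("Location"));
                } else {
                    System.out.println("No HTTP redirect, status " + status);
                }
            }
        }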

    Read the article

  • What are the arguments against parsing the Cthulhu way?

    - by smarmy53
    I have been assigned the task of implementing a Domain Specific Language for a tool that may become quite important for the company. The language is simple but not trivial: it already allows nested loops, string concatenation, etc., and it is practically certain that other constructs will be added as the project advances. I know by experience that writing a lexer/parser by hand, unless the grammar is trivial, is a time-consuming and error-prone process. So I was left with two options: a parser generator à la yacc, or a combinator library like Parsec. The former is good as well, but I picked the latter for various reasons, and implemented the solution in a functional language. The result is pretty spectacular to my eyes: the code is very concise, elegant and readable/fluent. I concede it may look a bit weird if you have never programmed in anything other than Java/C#, but then that would be true of anything not written in Java/C#. At some point, however, I was literally attacked by a co-worker. After a quick glance at my screen he declared that the code is incomprehensible and that I should not reinvent parsing, but just use a stack and String.Split like everybody does. He made a lot of noise, and I could not convince him, partially because I was taken by surprise and had no clear explanation, partially because his opinion was immutable (no pun intended). I even offered to explain the language to him, but to no avail. I'm positive the discussion is going to resurface in front of management, so I'm preparing some solid arguments. These are the first few reasons that come to mind to avoid a String.Split-based solution:
      - you need lots of ifs to handle special cases, and things quickly spiral out of control
      - lots of hardcoded array indexes make maintenance painful
      - it is extremely difficult to handle things like a function call as a method argument (e.g. add(add(a, b), c))
      - it is very difficult to provide meaningful error messages in case of syntax errors (which are very likely to happen)
    I'm all for simplicity, clarity and avoiding unnecessary smart-cryptic stuff, but I also believe it's a mistake to dumb down every part of the codebase so that even a burger flipper could understand it. It's the same argument I hear for not using interfaces, not adopting separation of concerns, copying and pasting code around, etc. A minimum of technical competence and willingness to learn is required to work on a software project, after all. (I won't use this argument, as it will probably sound offensive, and starting a war is not going to help anybody.) What are your favorite arguments against parsing the Cthulhu way? (Of course, if you can convince me he's right, I'll be perfectly happy as well.)
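
    [Editor's note] As a concrete footnote to the first and third bullets above (an illustration added here, not part of the original question): even the simplest nested call defeats comma-splitting, because String.split knows nothing about balanced parentheses.

        public class SplitFails {
            public static void main(String[] args) {
                String expr = "add(add(a, b), c)";
                // Naive approach: strip the outer call, split the arguments on commas.
                String inner = expr.substring(expr.indexOf('(') + 1, expr.lastIndexOf(')'));
                for (String token : inner.split(",")) {
                    System.out.println("arg: <" + token.trim() + ">");
                }
                // Output:
                //   arg: <add(a>
                //   arg: <b)>
                //   arg: <c>
                // The nested call is torn in half, so every special case
                // ends up needing another hand-written if.
            }
        }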

    Read the article

  • .htaccess to redirect any URL from a domain to a fixed URL on another domain

    - by AlexV
    Can anyone help me out with an .htaccess file I'm trying to create? I want to redirect foo.com to foo.ca: any URL on foo.com (with or without www, under http or https) should be redirected to www.foo.ca. Some examples:

        http://www.foo.com/     -- http://www.foo.ca/  (http + www)
        https://www.foo.com/    -- http://www.foo.ca/  (https + www)
        http://foo.com/bar/     -- http://www.foo.ca/  (http + some url)
        https://foo.com/bar/    -- http://www.foo.ca/  (https + some url)
        http://www.foo.com/bar/ -- http://www.foo.ca/  (http + www + some url)
        https://www.foo.com/bar -- http://www.foo.ca/  (https + www + some url)

    Many thanks!
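
    [Editor's note] A minimal sketch of one way to do it with mod_rewrite, assuming both domains are served from the same Apache host (the https cases additionally require a valid certificate for foo.com, since the TLS handshake happens before any rewrite runs):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?foo\.com$ [NC]
        RewriteRule ^ http://www.foo.ca/ [R=301,L]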

    Read the article

  • Display using QtWebKit, whilst parsing xml

    - by Beren Scott
    I wish to use QtWebKit to load a URL for display. That's the easy part; I can do that. What I wish to do is record/log XML as I go. My intention here is to record certain details into a database on the fly. My problem is how to do all this without requesting the same URL from the server twice: once for the XML, and a second time to view the page. My hope is to implement a very fast way of recording set data as the user passes over it. For example, rather than having to type out details displayed by a website, I wish to have those details put into a database as the user views the website. Now, I am using QtWebKit, and I have everything pretty much solved viewing-wise. I have a loadUrl() routine which calls load(url) inside qwebview.h. The problem is, how do I piggyback XML parsing on top of this?

    Read the article

< Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >