Search Results

Search found 21350 results on 854 pages for 'url parsing'.

  • Parsing a CSV File to a Rails Database

    - by Schroedinger
    G'day guys, I'm using FasterCSV and a rake script to parse a CSV with about 30 columns into my Rails DB as 'Trade' records. The script works fine when all of the columns are treated as strings, but as soon as I switch one to a decimal, integer or other type, everything goes to hell. I'm wondering whether FasterCSV has built-in integer/decimal parsing or whether I'll have to handle the conversion in my model. Basically, I'm given a huge amount of trade data, need to import it, and then need to provide feedback on, say, the average trade volume, the times, etc. I understand I can do all of that with the wonderful records ActiveRecord gives me, but I wondered if there was an easier way to populate a rather large database from a given CSV. Several of the fields don't have values for certain rows; FasterCSV works perfectly when everything is a string, but not when I try to read a decimal or other type.
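
    One thing worth checking, not mentioned in the post: FasterCSV ships with field converters, so the cast can happen at parse time rather than in the model. A minimal sketch, assuming the :converters option behaves like Ruby's standard CSV library and using hypothetical column names:

        # Sketch only: column names in the header row are hypothetical.
        require 'fastercsv'

        FasterCSV.foreach("trades.csv", :headers => true, :converters => :numeric) do |row|
          # :numeric turns "42" into 42 and "12.5" into 12.5; blank cells come back as nil.
          Trade.create!(
            :symbol => row["symbol"],
            :volume => row["volume"],
            :price  => row["price"]
          )
        end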

    Read the article

  • Stuck with the first record while parsing an XML in Java

    - by Ritwik G
    I am parsing the following XML:

        <table ID="customer">
        <T><C_CUSTKEY>1</C_CUSTKEY><C_NAME>Customer#000000001</C_NAME><C_ADDRESS>IVhzIApeRb ot,c,E</C_ADDRESS><C_NATIONKEY>15</C_NATIONKEY><C_PHONE>25-989-741-2988</C_PHONE><C_ACCTBAL>711.56</C_ACCTBAL><C_MKTSEGMENT>BUILDING</C_MKTSEGMENT><C_COMMENT>regular, regular platelets are fluffily according to the even attainments. blithely iron</C_COMMENT></T>
        <T><C_CUSTKEY>2</C_CUSTKEY><C_NAME>Customer#000000002</C_NAME><C_ADDRESS>XSTf4,NCwDVaWNe6tEgvwfmRchLXak</C_ADDRESS><C_NATIONKEY>13</C_NATIONKEY><C_PHONE>23-768-687-3665</C_PHONE><C_ACCTBAL>121.65</C_ACCTBAL><C_MKTSEGMENT>AUTOMOBILE</C_MKTSEGMENT><C_COMMENT>furiously special deposits solve slyly. furiously even foxes wake alongside of the furiously ironic ideas. pending</C_COMMENT></T>
        <T><C_CUSTKEY>3</C_CUSTKEY><C_NAME>Customer#000000003</C_NAME><C_ADDRESS>MG9kdTD2WBHm</C_ADDRESS><C_NATIONKEY>1</C_NATIONKEY><C_PHONE>11-719-748-3364</C_PHONE><C_ACCTBAL>7498.12</C_ACCTBAL><C_MKTSEGMENT>AUTOMOBILE</C_MKTSEGMENT><C_COMMENT>special packages wake. slyly reg</C_COMMENT></T>
        <T><C_CUSTKEY>4</C_CUSTKEY><C_NAME>Customer#000000004</C_NAME><C_ADDRESS>XxVSJsLAGtn</C_ADDRESS><C_NATIONKEY>4</C_NATIONKEY><C_PHONE>14-128-190-5944</C_PHONE><C_ACCTBAL>2866.83</C_ACCTBAL><C_MKTSEGMENT>MACHINERY</C_MKTSEGMENT><C_COMMENT>slyly final accounts sublate carefully. slyly ironic asymptotes nod across the quickly regular pack</C_COMMENT></T>
        <T><C_CUSTKEY>5</C_CUSTKEY><C_NAME>Customer#000000005</C_NAME><C_ADDRESS>KvpyuHCplrB84WgAiGV6sYpZq7Tj</C_ADDRESS><C_NATIONKEY>3</C_NATIONKEY><C_PHONE>13-750-942-6364</C_PHONE><C_ACCTBAL>794.47</C_ACCTBAL><C_MKTSEGMENT>HOUSEHOLD</C_MKTSEGMENT><C_COMMENT>blithely final instructions haggle; stealthy sauternes nod; carefully regu</C_COMMENT></T>
        </table>

    with the following Java code:

        package xmlparserformining;

        import java.util.List;

        import org.dom4j.Document;
        import org.dom4j.DocumentException;
        import org.dom4j.Node;
        import org.dom4j.io.SAXReader;

        public class XmlParserForMining {

            public static Document getDocument(final String xmlFileName) {
                Document document = null;
                SAXReader reader = new SAXReader();
                try {
                    document = reader.read(xmlFileName);
                } catch (DocumentException e) {
                    e.printStackTrace();
                }
                return document;
            }

            public static void main(String[] args) {
                String xmlFileName = "/home/r/javaCodez/parsing in java/customer.xml";
                String xPath = "//table/T/C_ADDRESS";
                Document document = getDocument(xmlFileName);
                List<Node> nodes = document.selectNodes(xPath);
                System.out.println(nodes.size());
                for (Node node : nodes) {
                    String customer_address = node.valueOf(xPath);
                    System.out.println("Customer address: " + customer_address);
                }
            }
        }

    However, instead of getting all the various customer records, I am getting the following output:

        1500
        Customer address: IVhzIApeRb ot,c,E
        Customer address: IVhzIApeRb ot,c,E
        Customer address: IVhzIApeRb ot,c,E
        Customer address: IVhzIApeRb ot,c,E
        ...

    and so on. What is wrong here? Why is it printing only the first record?
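
    A note that is not part of the original post: inside the loop, node.valueOf(xPath) re-evaluates the absolute expression //table/T/C_ADDRESS, and an absolute XPath evaluated from any node still starts at the document root, so it always comes back with the first C_ADDRESS in the file. A minimal fix sketch, assuming dom4j's Node#getText():

        // Sketch: read each matched node's own text instead of re-running the absolute XPath.
        for (Node node : nodes) {
            String customerAddress = node.getText();   // or node.valueOf(".")
            System.out.println("Customer address: " + customerAddress);
        }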

    Read the article

  • Which of these URL scenarios is best for big link menus? [SEO / user-friendly URLs]

    - by Sam
    Hi folks, a question about URLs... A good friend of mine and I are exploring three possible scenarios for a website where each page has a menu system with about 130 links:

    Scenario 1: the menu system has SHORT non-descriptive hyperlinks as well as a SHORT canonical URL:

        <a href="design">dutch design</a>
        canonical URL: design

    Scenario 2: the menu system has SHORT non-descriptive hyperlinks with LONG canonical URLs:

        <a href="design">dutch design</a>
        canonical URL: dutch-design-crazy-yes-but-always-honest

    Scenario 3: the menu system has LONG descriptive hyperlinks with LONG canonical URLs:

        <a href="dutch-design-crazy-yes-but-always-honest">dutch design</a>
        canonical URL: dutch-design-crazy-yes-but-always-honest

    Currently we have scenario 2; should we move to scenario 3? All three work fine and point, via mod_rewrite, to the same page, which is fetched behind the scenes. Now, my question is which of these is better in terms of:

    - user-friendliness (page loading times, whether the full URL is visible in the address bar)
    - SEO-friendliness (proper indexing because the URLs contain descriptive, relevant terms)
    - other concerns we forgot, such as possible penalties for so many words in link hrefs?

    Thanks very much for your suggestions: much appreciated!

    Read the article

  • Complicated parsing in python

    - by Quazi Farhan
    I have a weird parsing problem in Python. I need to parse the following text, and I only need the section between (not including) the <pre> tag and the column of numbers (starting with 205 4 164). I have several pages in this format.

        <html>
        <pre>
        A Short Study of Notation Efficiency
        CACM August, 1960
        Smith Jr., H. J.
        CA600802 JB March 20, 1978 9:02 PM
        205 4 164
        210 4 164
        214 4 164
        642 4 164
        1 5 164
        </pre>
        </html>
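
    Not part of the original question, but one way to cut that slice out is to anchor on the <pre> tag and on the first line that consists of numbers only. A sketch that assumes the pages really do follow this exact shape:

        import re

        def extract_header(page):
            """Return the text after <pre> and before the first all-numeric row."""
            body = re.search(r"<pre>(.*?)</pre>", page, re.DOTALL | re.IGNORECASE).group(1)
            kept = []
            for line in body.splitlines():
                tokens = line.split()
                # Stop at the first line made up entirely of numbers, e.g. "205 4 164".
                if tokens and all(tok.isdigit() for tok in tokens):
                    break
                kept.append(line)
            return "\n".join(kept).strip()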

    Read the article

  • Fast parsing of PHP in C#

    - by Jessica Shea
    Hello there, I've got a requirement for parsing PHP files in C#. We essentially require some of the devs in another country to upload PHP files, and once uploaded we need to check the PHP files and get a list of all the methods, classes, functions, etc. I thought of using a regex, but I can't work out whether a function belongs to a class that way, so I was wondering if there's already something out there that will parse PHP files and spit out their functions (I'm trying to avoid writing a full-blown AST implementation). Does anyone have any ideas? I looked at Coco/R but I couldn't find a PHP grammar file. I'm using .NET 2.0 and C#.

    Read the article

  • Parsing the Youtube API with DOM

    - by Kirk
    I'm using the YouTube API and I can retrieve the date information without a problem, but I don't know how to retrieve the description. My code:

        <?php
        $v = "dQw4w9WgXcQ";
        $url = "http://gdata.youtube.com/feeds/api/videos/" . $v;
        $doc = new DOMDocument;
        $doc->load($url);
        $pub  = $doc->getElementsByTagName("published")->item(0)->nodeValue;
        $desc = $doc->getElementsByTagName("media:description")->item(0)->nodeValue;
        echo "<b>Video Uploaded:</b> ";
        echo date("F jS, Y", strtotime($pub));
        echo '<br>';
        if (isset($desc)) {
            echo "<b>Description:</b> ";
            echo $desc;
            echo '<br>';
        }
        ?>

    Here's a link to the feed: http://gdata.youtube.com/feeds/api/videos/dQw4w9WgXcQ?prettyprint=true

    And the excerpt of the feed I don't know how to read data from:

        <media:group>
          <media:description type='plain'>Music video by Rick Astley performing Never Gonna Give You Up. (C) 1987 PWL</media:description>
        </media:group>

    Thanks in advance.
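
    Not in the original post, but the likely reason getElementsByTagName("media:description") doesn't find the node is that "media" is a namespace prefix, not part of the tag name. DOMDocument has a namespace-aware lookup; a sketch, assuming the feed declares the standard Media RSS namespace (check the xmlns:media attribute in the feed):

        // Sketch: namespace-aware lookup for <media:description>; continues from the $doc above.
        $mediaNs = 'http://search.yahoo.com/mrss/';   // standard Media RSS namespace; verify against the feed
        $descNodes = $doc->getElementsByTagNameNS($mediaNs, 'description');
        if ($descNodes->length > 0) {
            $desc = $descNodes->item(0)->nodeValue;
        }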

    Read the article

  • I need help to debug my XML parsing please

    - by Griffo
    I'm parsing this line:

        <type>branch</type>

    with this code:

        if ([elementName isEqualToString:@"type"]) {
            [currentBranchDictionary setValue:currentText forKey:currentElementName];
        }

    When I test the value stored under the type key, it does not contain branch but instead contains branch\n. Here is the test I'm performing:

        if ([[currentBranchDictionary valueForKey:@"type"] isEqualToString:@"branch"]) {
            NSLog(@"no new-line");
        } else if ([[currentBranchDictionary valueForKey:@"type"] isEqualToString:@"branch\n"]) {
            NSLog(@"new-line");
        }

    This returns the "new-line" output. I don't understand where the trailing newline is being added; can anyone help?
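
    An observation that is not in the post: with NSXMLParser-style parsing, the character callback usually also delivers the whitespace that follows the element's text, so the accumulated string picks up the line break. A common fix, assuming currentText is the string accumulated in foundCharacters:, is to trim it before storing:

        // Sketch: strip leading/trailing whitespace and newlines before storing the value.
        NSString *trimmed = [currentText stringByTrimmingCharactersInSet:
                                [NSCharacterSet whitespaceAndNewlineCharacterSet]];
        [currentBranchDictionary setValue:trimmed forKey:currentElementName];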

    Read the article

  • How to color HTML elements based on parsing a user command string

    - by Anonymous the Great
    I'm working on a little parsing thing to color text. For example, you could type red:Hi!: and "Hi!" would be red. This is my (not working) code:

        <script type="text/javascript">
        function post() {
            var preview = document.getElementById("preview");
            var submit = document.getElementById("post");
            var text = submit.value;
            <?php str_replace("red:*:", '<i class="red">*</i>', text); ?>
            preview.value = text;
        }
        </script>
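
    A note that is not part of the original post: the PHP runs on the server before the browser ever sees the page, so it cannot touch the JavaScript variable text. The substitution can be done entirely on the client with a regular expression; a sketch (it assumes preview is an ordinary element rather than a form field, hence innerHTML):

        // Sketch: do the red:...: substitution in JavaScript instead of PHP.
        function post() {
            var preview = document.getElementById("preview");
            var submit = document.getElementById("post");
            // red:anything: -> <i class="red">anything</i>
            var text = submit.value.replace(/red:(.*?):/g, '<i class="red">$1</i>');
            preview.innerHTML = text;   // innerHTML so the markup is rendered, not shown literally
        }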

    Read the article

  • RegEx (or other) parsing of script

    - by jpmyob
    RegEx is powerful, it is true, but I have a little query for you. I want to parse out the FUNCTIONS from some old code in JS; however, I am RegEx-handicapped (mentally deficient in grasping the subtleties). The issue that makes me NOT EVEN TRY is two-fold:

    1) Both

        myVar = function(x){ yadda yadda }

    and

        function myVar(x) { yadda yadda }

    are found throughout. Could I write a parser for each? Sure, but that seems inefficient.

    2) MANY things may reside INSIDE the {}, including OTHER sets of {} or other function(){} blocks of text.

    HELP - does anyone have, or know of, some code-parsing snippets or examples that will parse out the info I want to collect? Thanks
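
    Not from the post itself: a regex can at least collect the names of both declaration styles, even though matching the full bodies (with their nested braces) really calls for a proper JavaScript parser. A sketch, with oldCode standing in for the source text:

        // Sketch: pull out function NAMES for both styles; it deliberately ignores the
        // bodies, since nested { } cannot be matched reliably with a regex alone.
        var src = oldCode;   // the JS source as a string (hypothetical variable)
        var names = [];
        var decl = /function\s+([A-Za-z_$][\w$]*)\s*\(/g;      // function myVar(x) { ... }
        var expr = /([A-Za-z_$][\w$]*)\s*=\s*function\s*\(/g;   // myVar = function(x) { ... }
        var m;
        while ((m = decl.exec(src)) !== null) names.push(m[1]);
        while ((m = expr.exec(src)) !== null) names.push(m[1]);
        console.log(names);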

    Read the article

  • Parsing a two-dimensional array in C

    - by gitter78
    I'm trying to parse an array that looks like the one below:

        char *arr[][2] = {
            { "1",  "Purple" },
            { "2",  "Blue" },
            { "22", "Red" },
            ...
        };

    I was thinking of having a loop such as:

        char *func(const char *a) {
            int i;
            for (i = 0; i < sizeof(arr) / sizeof(arr[0]); i++) {
                if (strstr(a, arr[i][0]) != NULL)
                    return arr[i][1];
            }
            return NULL;   /* no match */
        }

        char *out;
        const char *hello = "this is my 2 string";
        out = func(hello);

    In this case, I'm trying to get the second value based on the first one: Purple, Blue, Red, etc. The question is how I would go about parsing this and, instead of printing the value, returning it. UPDATE/FIXED: It has been fixed above. Thanks

    Read the article

  • building an XML service parsing library

    - by DanInDC
    This is more of a design question, I suppose. My company offers a web service to our clients that spits data out in a custom XML format. I'd like to build a Java library we can offer so our customers can just feed it the URL and have it turn the response into a set of POJOs. I can obviously just create a library that does some simple XML parsing and builds the POJOs, but I'm looking to build something a bit more robust. My brain is pulling me in a million directions, wondering if anyone has some pointers or some code to poke at. I was thinking about adding an Abdera extension, but it's not really a syndication format that fits the Abdera model. And most of the popular service libraries (Twitter, Facebook) rely on standard-format parsers, and ours isn't a standard format.

    Read the article

  • Turkish character problems while parsing (Android)

    - by alper35.5
    I am parsing some HTML content and printing the output on my screen. The website has Turkish characters such as çÇşŞöÖğĞıİüÜ. I am not able to show them as proper characters; they are still printed as question marks. Eclipse - Project - Properties - Resource - Text File Encoding = Inherited from container (Cp1254). I searched the web and found this solution: Eclipse - Project - Properties - Resource - Text File Encoding = Other: UTF-8. However, it's not working; it only changes my files' current characters. (I have titles with such characters in my activities.) Any help? Thanks in advance...
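
    A note that is not in the original post: the Eclipse setting only controls how the source files themselves are saved; it has no effect on how the downloaded HTML is decoded. The charset has to be applied where the response bytes are turned into a String. A sketch assuming a plain HttpURLConnection fetch (use "UTF-8" or "windows-1254", whichever encoding the site actually declares):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public final class PageFetcher {
            // Sketch: decode the HTTP response with an explicit charset instead of the platform default.
            static String fetch(String pageUrl, String charset) throws IOException {
                HttpURLConnection conn = (HttpURLConnection) new URL(pageUrl).openConnection();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), charset)); // e.g. "UTF-8" or "windows-1254"
                try {
                    StringBuilder html = new StringBuilder();
                    String line;
                    while ((line = in.readLine()) != null) {
                        html.append(line).append('\n');
                    }
                    return html.toString();
                } finally {
                    in.close();
                }
            }
        }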

    Read the article

  • Parsing C# Script into Java

    - by pantaryl
    I'm looking for a way to easily read in a C# file and place it into a Java object for database storage (storing the class name, functions, variables, etc.). I'm making a hierarchical state machine AI building tool for a game I'm creating, and I need to be able to import an existing C# file and store it in a database for retrieval in the future. Does anyone know of any pre-existing libraries for parsing C# files? Something similar to JavaParser? Thanks everyone! EDIT: This needs to be part of my Java project. I'll be loading the C# files through my Java application and saving them into my db4o database.

    Read the article

  • Problems parsing Google Data Booksearch API XML in Ruby

    - by FrogBot
    I'm trying to parse some XML I've gotten from the Google Data Booksearch API and I'm having trouble trying to target a specific element. Currently my code looks like so:

        require 'gdata'
        client = GData::Client::BookSearch.new
        feed = client.get("http://books.google.com/books/feeds/volumes?q=Foundation").to_xml
        books = []
        feed.elements.each('entry') do |entry|
          book = {
            :title   => entry.elements['title'].text,
            :author  => entry.elements['dc:creator'].text,
            :book_id => entry.elements['dc:identifier'].text
          }
          books.push(book)
        end
        p books

    and that all works fine, but I want to add a thumbnail URL to the book hash. The tag with each book's thumbnail URL looks like so:

        <feed>
          <entry>
            ...
            <link rel="http://schemas.google.com/books/2008/thumbnail" type="image/x-unknown" href="http://bks6.books.google.com/books?id=ID5P7xbmcO8C&printsec=frontcover&img=1&zoom=5&edge=curl&source=gbs_gdata"/>
            ...
          </entry>
        </feed>

    I want to grab the contents of the href attribute from this element, and I'm not exactly sure how. Can anyone help me out here?
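
    Not part of the original post, but since the code is already using REXML-style element lookups, an attribute predicate can select the right <link> element. A sketch to drop inside the existing loop (assuming feed really is a REXML document):

        # Sketch: pick the <link> whose rel attribute marks it as the thumbnail,
        # then read its href attribute.
        thumb = entry.elements["link[@rel='http://schemas.google.com/books/2008/thumbnail']"]
        book[:thumbnail] = thumb.attributes['href'] if thumb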

    Read the article

  • [Python] Detect destination of shortened, or "tiny" url

    - by conradlee
    I have just scraped a bunch of Google Buzz data, and I want to know which Buzz posts reference the same news articles. The problem is that many of the links in these posts have been modified by URL shorteners, so it could be the case that many distinct shortened URLs actually all point to the same news article. Given that I have millions of posts, what is the most efficient way (preferably in Python) for me to:

    1. detect whether a URL is a shortened URL (from any of the many URL-shortening services, or at least the largest);
    2. find the "destination" of the shortened URL, i.e., the long, original version of the shortened URL.

    Does anyone know if the URL shorteners impose strict request-rate limits? If I keep this down to 100/second (all coming from the same IP address), do you think I'll run into trouble?
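
    Not from the post itself: resolving the destination is usually just a matter of issuing a HEAD request and following the redirect chain, with a host whitelist standing in for "is this a shortener". A sketch using the third-party requests library; the shortener list is illustrative, not complete:

        from urllib.parse import urlparse

        import requests

        # Illustrative subset of shortener hosts; extend as needed.
        SHORTENER_HOSTS = {"bit.ly", "goo.gl", "t.co", "tinyurl.com", "ow.ly", "is.gd"}

        def is_shortened(url):
            return urlparse(url).netloc.lower() in SHORTENER_HOSTS

        def resolve(url, timeout=5.0):
            """Follow redirects with HEAD requests and return the final URL."""
            resp = requests.head(url, allow_redirects=True, timeout=timeout)
            return resp.url

        if __name__ == "__main__":
            u = "http://bit.ly/example"   # hypothetical shortened link
            if is_shortened(u):
                print(resolve(u))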

    Read the article

  • ASP.Net URL rewriting and redirecting....

    - by DDiVita
    I am trying to wrap my head around a URL rewrite / redirect project I need to work on. We currently have this URL:

        www.domain.com/Details/Detail.aspx?param1=8&param2=12345

    Here is what the rewritten URL will look like:

        www.domain.com/Param1/8/Param2/12345

    I am using the ISAPI_Rewrite filter to allow for the "nice" URL and make the page think it is still using the old URL. That works fine. Now I need to redirect users to the new URL if they use the old one. I figure I would need to use a combination of the filter and an HttpModule / handler to perform the redirect. Any ideas?
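
    Not from the original question, but the redirect half can live in a small HttpModule that watches for the old URL shape and issues a 301. A sketch: the parameter names come from the example URLs above, everything else is an assumption; the module would still need to be registered under <httpModules> in web.config, and if ISAPI_Rewrite rewrites the friendly URLs back to this same path before ASP.NET sees them, it also needs a guard against redirect loops (for example, a marker the filter sets on rewritten requests), which is left out of the sketch.

        // Sketch of an IHttpModule that 301-redirects the old query-string URL to the new form.
        using System;
        using System.Web;

        public class OldUrlRedirectModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += OnBeginRequest;
            }

            private static void OnBeginRequest(object sender, EventArgs e)
            {
                HttpContext ctx = ((HttpApplication)sender).Context;
                if (ctx.Request.Path.EndsWith("/Details/Detail.aspx", StringComparison.OrdinalIgnoreCase))
                {
                    string p1 = ctx.Request.QueryString["param1"];
                    string p2 = ctx.Request.QueryString["param2"];
                    if (p1 != null && p2 != null)
                    {
                        ctx.Response.StatusCode = 301;
                        ctx.Response.RedirectLocation = "/Param1/" + p1 + "/Param2/" + p2;
                        ctx.Response.End();
                    }
                }
            }

            public void Dispose() { }
        }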

    Read the article

  • Generate canonical / real URL based on base.href or location

    - by blueyed
    Is there a method/function to get the canonical / transformed URL, respecting any base.href setting of the page? I can get the base URL via (in jQuery) $("base").attr("href"), and I could use string methods to parse the URL that is meant to be made relative to it, but:

    - $("base").attr("href") has no host, path, etc. attributes (like window.location has)
    - manually putting this together is rather tedious

    E.g., given a base.href of "http://example.com/foo/" and a relative URL "/bar.js", the result should be "http://example.com/bar.js". If base.href is not present, the URL should be made relative to window.location, so this should handle a non-existing base.href (using location as the base in that case). Is there a standard method available for this already?

    (I'm looking for this because jQuery.getScript fails when using a relative URL like "/foo.js" while a BASE tag is being used: FF3.6 makes an OPTIONS request, and nginx cannot handle this. When using the full URL (base.href.host + "/foo.js") it works.)
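
    Not part of the original question, but browsers will do this resolution themselves if an anchor element is allowed to parse the URL: assigning a relative URL to an anchor's href and reading it back yields the absolute form, resolved against the document's <base> when one is present and against location otherwise. A sketch:

        // Sketch: let the browser resolve a relative URL against <base> (or location).
        function resolveUrl(relative) {
            var a = document.createElement('a');
            a.href = relative;   // assigning triggers the browser's own URL resolution
            return a.href;       // fully qualified result
        }

        // With <base href="http://example.com/foo/">:
        // resolveUrl("/bar.js") -> "http://example.com/bar.js"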

    Read the article

  • Transforming a URL with the URI module

    - by sid_com
    Do I gain something when I transform my $url like this: $url = URI->new( $url )?

        #!/usr/bin/env perl
        use warnings;
        use strict;
        use 5.012;
        use URI;
        use XML::LibXML;

        my $url = 'http://stackoverflow.com/';
        $url = URI->new( $url );
        my $doc = XML::LibXML->load_html( location => $url, recover => 2 );
        my @nodes = $doc->getElementsByTagName( 'a' );
        say scalar @nodes;

    Read the article

  • Domain name rewriting and URL rewriting at the same time in .htaccess

    - by Steven
    Ugly URLs:

        www.domainname.com/en/piece/piece.php?piece_id=1
        www.domainname.com/en/piece/piece.php?piece_id=2
        www.domainname.com/en/piece/piece.php?piece_id=3
        ...

    Friendly URLs:

        piece.domainname.com/en/1
        piece.domainname.com/en/2
        piece.domainname.com/en/3
        ...

    I want to present website users only the friendly URLs. When I apply

        RewriteEngine On
        RewriteRule ^en/([^/]*)$ /en/piece/piece.php?piece_id=$1 [L]

    only the path part of the URL is rewritten. Besides that, the CSS file cannot be found on pages served under the friendly URL. How do I rewrite the domain name and the URL at the same time? Do I have to use RedirectMatch, and if so, how do I do it?
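
    Not from the original post: mod_rewrite cannot change the hostname the visitor typed, but it can key on it, so the friendly subdomain form can be mapped to the script internally while requests arriving at the old host get redirected to the friendly one. A sketch; it assumes piece.domainname.com already points at the same document root, and the missing CSS usually also needs absolute or root-relative asset paths:

        RewriteEngine On

        # Serve the friendly subdomain URLs internally from piece.php.
        RewriteCond %{HTTP_HOST} ^piece\.domainname\.com$ [NC]
        RewriteRule ^en/([0-9]+)$ /en/piece/piece.php?piece_id=$1 [L]

        # Send anyone using the old ugly URL to the friendly one (external 301 redirect).
        RewriteCond %{HTTP_HOST} ^www\.domainname\.com$ [NC]
        RewriteCond %{QUERY_STRING} ^piece_id=([0-9]+)$
        RewriteRule ^en/piece/piece\.php$ http://piece.domainname.com/en/%1? [R=301,L]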

    Read the article

  • Can I use Zoneedit to do URL rewrite?

    - by chilly-child
    This is our scenario: our DNS is hosted by a company, but they don't manage it for us. We use Zoneedit (www.zoneedit.com) to manage the DNS ourselves (nameservers, CNAMEs, etc.). Then we have our web host, where our files are hosted. We have a subdomain created in Zoneedit. We would like to do a URL rewrite so that subdomain.ourdomain.com is displayed as www.ourdomain.com/subdomain. Do I use Zoneedit, the web host, or the DNS host to do the URL rewrite? I checked the Zoneedit docs but I could not find a way to do a URL rewrite. Need some advice. Thanks

    Read the article

  • Disabling URL decoding in nginx proxy

    - by Tomasz Nurkiewicz
    When I browse to this URL: http://localhost:8080/foo/%5B-%5D, the server (nc -l 8080) receives it as-is:

        GET /foo/%5B-%5D HTTP/1.1

    However, when I proxy this application via nginx:

        location /foo {
            proxy_pass http://localhost:8080/foo;
        }

    the same request routed through the nginx port is forwarded with the path decoded:

        GET /foo/[-] HTTP/1.1

    The decoded square brackets in the GET path are causing errors in the target server (HTTP Status 400 - Illegal character in path...) because they arrive unescaped. Is there a way to disable URL decoding, or to encode it back, so that the target server gets the exact same path when routed through nginx? Some clever URL rewrite rule?
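
    A note that is not in the original question: nginx only normalizes (and decodes) the URI when proxy_pass carries a URI part of its own; when proxy_pass is given just the scheme, host and port, the request URI is forwarded exactly as the client sent it. A sketch of that variant:

        # Sketch: no URI part after the upstream address, so the original
        # (still percent-encoded) request URI is passed through untouched.
        location /foo {
            proxy_pass http://localhost:8080;
        }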

    Read the article

  • Remove Trailing Slash from WordPress URL (the site also doesn't have www)

    - by mrintech
    I need help, as I am confused a lot by .htaccess. Some months back, I removed www from the URL of my domain name using the following .htaccess lines:

        RewriteCond %{HTTP_HOST} !^example.com$ [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

    Now I also want to remove the trailing slash from the URL, because I am using WordPress and a page/post will open whether or not there's a trailing slash. I request you to please provide the .htaccess code so that I can remove the trailing slash. Kindly remember, I don't want www either, and I have already set the .htaccess rule above for the removal of www. Note: three years back, when I started the blog, I set the permalink structure without a trailing slash. Now Google Webmaster Tools is suddenly showing warnings. Also, the URL in rel="canonical" is WITHOUT the trailing slash. If you require any more details, I will be happy to provide them.
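
    Not part of the original question: a common pattern is a 301 rule that strips the trailing slash from anything that is not a real directory, placed next to the existing no-www rule and before WordPress's own rewrite block. A sketch:

        RewriteEngine On

        # Strip a trailing slash from any URL that is not an actual directory (301 redirect).
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.+)/$ /$1 [R=301,L]

        # Existing rule from the post: force the no-www host.
        RewriteCond %{HTTP_HOST} !^example.com$ [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]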

    Read the article
