Search Results

Search found 10569 results on 423 pages for 'self extracting'.


  • Problem extracting text from RSS feeds

    - by Gautam
    Hi, I am new to the world of Ruby and Rails. I have seen Railscast 190 and I just started playing with it. I used SelectorGadget to find the CSS and XPath selectors, and I have the following code:

        require 'rubygems'
        require 'nokogiri'
        require 'open-uri'

        url = "http://www.telegraph.co.uk/sport/football/rss"
        doc = Nokogiri::HTML(open(url))
        doc.xpath('//a').each do |paragraph|
          puts paragraph.text
        end

    When I extracted text from a normal HTML page with CSS selectors, I could see the extracted text on the console. But when I try to do the same, with either CSS or XPath, against the RSS feed at the URL above, I don't get any output. How do you extract text from RSS feeds? I also have another silly question: is there a way to extract text from two different feeds and display both on the console? Something like:

        url1 = "http://www.telegraph.co.uk/sport/football/rss"
        url2 = "http://www.telegraph.co.uk/sport/cricket/rss"

    Looking forward to your help and suggestions. Thank you, Gautam
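
    A note on the likely cause: an RSS feed is XML, not HTML, and typically contains no <a> anchor elements, so the //a XPath matches nothing. Parsing with Nokogiri::XML and selecting item/title or item/description nodes is the usual route. For comparison, a minimal sketch of the same task in Python, assuming the third-party feedparser package (an assumption; the question itself is about Ruby/Nokogiri):

        import feedparser

        # Both feed URLs from the question; feedparser fetches and parses in one call.
        for url in ("http://www.telegraph.co.uk/sport/football/rss",
                    "http://www.telegraph.co.uk/sport/cricket/rss"):
            feed = feedparser.parse(url)
            for entry in feed.entries:
                print(entry.title)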

  • Possible loss of precision; extracting char from string

    - by Troy
    I am getting a string from the user and then doing some checking to make sure it is valid. Here is the code I have been using:

        char digit = userInput.charAt(0) - '0';

    This had been working fine until I did some work on another method; since then, whenever I compile I get a 'possible loss of precision' error. What am I doing wrong?

  • Extracting Demographic and Contact Information from unstructured text files

    - by jn29098
    I am looking to extract specific items out of a large pool of unstructured documents. These documents could be 1-5 pages of text formatted in various ways by the user, but in most cases would contain at least:

      - Name
      - Address (physical)
      - Email address
      - Phone number
      - Website URL

    I'm looking for a semantic parser that can attempt to extract these elements from the documents, so that I can load that information into a relational database and work with these records as contacts. Other services I've looked at, while valuable for other purposes, do not address this specific need:

      - Alchemy API
      - Open Calais
      - Saplo

    Any thoughts, suggestions or leads?
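
    Where a full semantic parser is overkill, a crude baseline for the more regular fields (email, phone, URL) is plain pattern matching. A minimal sketch in Python; the patterns are illustrative assumptions, good enough for triage but not for validation:

        import re

        text = "Jane Doe, 12 Main St, jane@example.com, (555) 123-4567, http://example.com"

        # Deliberately loose patterns -- they over-match rather than under-match.
        emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", text)
        phones = re.findall(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}", text)
        urls   = re.findall(r"https?://\S+", text)

        print(emails, phones, urls)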

  • Help with regex - extracting text.

    - by Yeti
    I have 3 separate strings:

        $d = 'Created on November 25, 2009';
        $v = 'Viewed 17,603 times';
        $h = '389 hits';

    which need to be converted to:

        $d1 = {unix timestamp of November 25, 2009};
        $v1 = "17603"; // commas stripped, if any exist
        $h1 = "389";

    What is the most efficient way to do this (possibly with regex)? Any code snippet would be great.
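
    The question is PHP (where strtotime and preg_match would do the work), but the logic is language-agnostic; here is a sketch of one approach in Python:

        import re
        from datetime import datetime, timezone

        d = "Created on November 25, 2009"
        v = "Viewed 17,603 times"
        h = "389 hits"

        # Pull out the date phrase, then parse it into a unix timestamp.
        date_str = re.search(r"Created on (.+)", d).group(1)
        d1 = int(datetime.strptime(date_str, "%B %d, %Y")
                 .replace(tzinfo=timezone.utc).timestamp())

        # Grab the digits (with optional thousands commas), then strip the commas.
        v1 = re.search(r"[\d,]+", v).group(0).replace(",", "")
        h1 = re.search(r"\d+", h).group(0)

        print(d1, v1, h1)  # 1259107200 17603 389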

  • Batch file: Extracting substring from input parameter to use in IF statement

    - by Neomoon
    This is a very basic example of what I am trying to implement in a more complex batch file. I would like to extract a substring from an input parameter (%1) and branch based on whether the substring was found or not.

        @echo off
        SETLOCAL enableextensions enabledelayedexpansion
        SET _testvariable=%1
        SET _testvariable=%_testvariable:~4,3%
        ECHO %_testvariable%
        IF %_testvariable%=act CALL :SOME
        IF NOT %_testvariable%=act CALL :ACTION
        :SOME
        ECHO Substring found
        GOTO :END
        :ACTION
        ECHO Substring not found
        GOTO :END
        ENDLOCAL
        :END

    This is what my output looks like:

        C:\>test someaction
        act
        =act was unexpected at this time.

    If possible I would like to turn this into an IF/ELSE statement and evaluate directly from %1. However I have not had success with either.

  • Extracting shell script from parameterised Hudson job

    - by Jonik
    I have a parameterised Hudson job, used for some AWS deployment stuff, which in one build step runs certain shell commands. However, that script has become sufficiently complicated that I want to "extract" it from Hudson into a separate script file, so that it can easily be versioned properly. The Hudson job would then simply update from VCS and execute the external script file.

    My main question is about passing parameters to the script. I have a Hudson parameter named AMI_ID and a few others. The script references those params as if they were environment variables:

        echo "Using AMI $AMI_ID and type $TYPE"

    Now, this works fine inside Hudson, but not if Hudson calls an external script. Could I somehow make Hudson set the params as environment variables so that I don't need to change the script? Or is my best option to alter the script to take command line parameters (and possibly assign those to named variables for readability: ami_id=$1; type=$2; ...)? I tried something like this, but the script doesn't pick up the correct values:

        export AMI_ID=$AMI_ID
        export TYPE=$TYPE
        external-script.sh   # this tries to use e.g. $AMI_ID

    Bonus question: when the script is inside Hudson, the "console output" will contain both the executed commands and their output. This is extremely useful for debugging when something goes wrong with a build! For example, here the line starting with "+" is part of the script and the following line is its output:

        + ec2-associate-address -K pk.pem -C cert.pem 77.125.116.139 -i i-aa3487fd
        ADDRESS 77.125.116.139 i-aa3487fd

    When calling an external script, Hudson output will only contain the latter line, making debugging harder. I could cat the script file to stdout before running it, but that's not optimal either. In effect, I'd like a kind of DOS-style "echo on" for the script which I'm calling from Hudson - anyone know a trick to achieve this?

  • Extracting fields from a define-type object in Scheme

    - by Mike
    Hi, I am trying to extract the field 'name' or 'named-expr' from the following object:

        (bind 'x (num 5)) ;; note that this is not a list, but a value of type Binding

    with the Binding definition:

        (define-type Binding
          (bind (name symbol?)
                (named-expr WAE?)))

    I have tried the following, but received the error "reference to an identifier before its definition: Binding-name":

        (begin
          (Binding-name (bind 'x (num 5))))

        (begin
          (define x (bind 'x (num 5)))
          (Binding-name x))

    Thank you!

  • WiX: Extracting Binary-string in Custom Action yields string like "???good data"

    - by leiflundgren
    I just found a weird behaviour when attempting to extract a string from the Binary table in the MSI. I have a file containing "Hello world", but the data I get back is "???Hello world" (literal question marks). Is this as intended? Will it always be exactly 3 characters at the beginning? Regards, Leif. Sample code:

        [CustomAction]
        public static ActionResult CustomAction2(Session session)
        {
            View v = session.Database.OpenView("SELECT `Name`,`Data` FROM `Binary`");
            v.Execute();
            Record r = v.Fetch();

            int datalen = r.GetDataSize("Data");
            System.IO.Stream strm = r.GetStream("Data");
            byte[] rawData = new byte[datalen];
            int res = strm.Read(rawData, 0, datalen);
            strm.Close();

            String s = System.Text.Encoding.ASCII.GetString(rawData);  // s == "???Hello World"
            return ActionResult.Success;
        }
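
    A hedged observation: three leading bytes that an ASCII decoder turns into placeholder characters are consistent with a UTF-8 byte order mark (EF BB BF) that the editor prepended when the source file was saved, rather than anything the MSI machinery added. A quick Python sketch of the effect and the usual fix, assuming the BOM is indeed the cause:

        # The raw bytes as they would sit in the Binary table if the stored
        # file was saved with a UTF-8 BOM (an assumption, not confirmed).
        raw = b"\xef\xbb\xbfHello world"

        print(raw.decode("ascii", errors="replace"))  # 3 replacement chars, like C#'s '???'
        print(raw.decode("utf-8-sig"))                # 'Hello world' -- BOM stripped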

  • Extracting URLs (to array) in Ruby

    - by FearMediocrity
    Good afternoon, I'm learning about using regexes in Ruby and have hit a point where I need some assistance. I am trying to extract 0 to many URLs from a string. This is the code I'm using:

        sStrings = ["hello world: http://www.google.com",
                    "There is only one url in this string http://yahoo.com . Did you get that?",
                    "The first URL in this string is http://www.bing.com and the second is http://digg.com",
                    "This one is more complicated http://is.gd/12345 http://is.gd/4567?q=1",
                    "This string contains no urls"]

        sStrings.each do |s|
          x = s.scan(/((http|https):\/\/[a-z0-9]+([\-\.]{1}[a-z0-9]+)*\.[a-z]{2,5}(([0-9]{1,5})?\/.[\w-]*)?)/ix)
          x.each do |url|
            puts url
          end
        end

    This is what is returned:

        http://www.google.com
        http
        .google
        nil
        nil
        http://yahoo.com
        http
        nil
        nil
        nil
        http://www.bing.com
        http
        .bing
        nil
        nil
        http://digg.com
        http
        nil
        nil
        nil
        http://is.gd/12345
        http
        nil
        /12345
        nil
        http://is.gd/4567
        http
        nil
        /4567
        nil

    What is the best way to extract only the full URLs and not the parts of the regex? Thanks, Jim
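
    The stray fragments and nils come from the capturing groups inside the pattern: when a pattern contains groups, scan returns the captures of each group rather than the whole match. Making the inner groups non-capturing with (?:...) leaves only the full URL. Python's re.findall behaves the same way, so here is a minimal sketch of the fix there:

        import re

        # With capturing groups, findall returns per-group tuples...
        grouped = re.findall(r"(https?://([\w.]+))", "see http://example.com")
        print(grouped)   # [('http://example.com', 'example.com')]

        # ...with non-capturing groups, it returns just the full matches.
        pattern = r"https?://[a-z0-9]+(?:[.-][a-z0-9]+)*\.[a-z]{2,5}(?:/[\w./?=-]*)?"
        urls = re.findall(pattern, "http://is.gd/12345 and http://www.bing.com here", re.I)
        print(urls)      # ['http://is.gd/12345', 'http://www.bing.com']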

  • Rails: Extracting the raw url from the request

    - by pankajbhageria
    I am working with Rails 2.2. The required behaviour is as follows:

      - I have a link (with an Ajax link embedded): xyz.com/admin#page1
      - When I go to the above page, I should be redirected to the login page if I am not logged in.
      - After I log in, I should be taken back to xyz.com/admin#page1

    For this I need to store the URL in the session whenever I visit a page. The problem is that when I call request.uri, I get xyz.com/admin, but I want to store xyz.com/admin#page1. Regards, Pankaj

  • extracting a quadrilateral image to a rectangle

    - by Will
    In the image below, the sign on the side of the van is not face-on to the camera. I want to calculate, as best I can with the pixels I have, what it would look like face-on. I imagine that this is some kind of loop through the x and y axes, doing a Bresenham's line on both dimensions at once, with some kind of mixing when pixels in the source image overlap - some sub-pixel mixing of some sort? What approaches are there, and how do you mix the pixels? Is there a standard approach for this?
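
    What the question describes is a perspective (projective) transform: map the four corners of the quadrilateral onto the corners of an axis-aligned rectangle and resample, letting the interpolation mode do the sub-pixel mixing. A sketch using OpenCV's Python bindings; the corner coordinates are made-up placeholders, since the actual image isn't available here:

        import cv2
        import numpy as np

        img = cv2.imread("van.jpg")

        # Corners of the sign in the source image (placeholder values),
        # ordered top-left, top-right, bottom-right, bottom-left.
        src = np.float32([[220, 130], [540, 170], [530, 320], [210, 270]])

        w, h = 400, 200  # desired output size of the flattened sign
        dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

        M = cv2.getPerspectiveTransform(src, dst)
        # Bilinear interpolation performs the sub-pixel "mixing" the question asks about.
        flat = cv2.warpPerspective(img, M, (w, h), flags=cv2.INTER_LINEAR)
        cv2.imwrite("sign_flat.jpg", flat)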

  • Extracting Mail from Microsoft Exchange server 2007 through IMAPS in java

    - by abhishekgem84
    props.put("mail.debug", "true"); props.setProperty("mail.store.protocol","imaps"); props.setProperty("mail.imaps.auth.plain.disable","false"); props.setProperty("mail.imaps.host","Mail3.connect.com"); props.setProperty("mail.imaps.port","135"); props.setProperty("mail.imaps.user","test"); props.setProperty("mail.imaps.pwd","123"); props.setProperty("mail.imaps.ssl.protocols","SSL"); props.setProperty("mail.imaps.socketFactory.class", "javax.net.ssl.SSLSocketFactory"); props.setProperty("mail.imaps.socketFactory.fallback", "false"); props.setProperty("mail.imaps.socketFactory.port", "135"); i have done all this but it still says javax.mail.AuthenticationFailedException: failed to connect, no password specified? kindly help me out thanks

  • Extracting data from jpeg files

    - by Ronny
    Hi, I want to extract some kind of bmp/rgb data from JPEG files. I had a look at libjpeg, but there seemed not to be any good documentation available. So my questions are:

      - Where is documentation on libjpeg?
      - Can you suggest other C-based JPEG-decompression libraries?

    Thanks in advance. PS: I am using GNU/Linux and C/C++.

  • Perl program for extracting the functions alone in a Ruby file

    - by thillaiselvan
    Hi all, I have the following Ruby program:

        puts "hai"
        def mult(a,b)
          a * b
        end
        puts "hello"
        def getCostAndMpg
          cost = 30000  # some fancy db calls go here
          mpg = 30
          return cost,mpg
        end
        AltimaCost, AltimaMpg = getCostAndMpg
        puts "AltimaCost = #{AltimaCost}, AltimaMpg = #{AltimaMpg}"

    I have written a Perl script which extracts the functions from a Ruby file, as follows (here <DATA> is reading from the Ruby file):

        while (<DATA>){
            print if ( /def/ .. /end/ );
        }

    This produces the following output:

        def mult(a,b)
          a * b
        end
        def getCostAndMpg
          cost = 30000  # some fancy db calls go here
          mpg = 30
          return cost,mpg
        end

    However, if a function contains a nested block - say an if/elsif/else block - it does not work: it takes the text only up to the "end" of the "if" block, not up to the "end" of the function. For example:

        def function
          if x > 2
            puts "x is greater than 2"
          elsif x <= 2 and x!=0
            puts "x is 1"
          else
            puts "I can't guess the number"
          end   # <-- my code parses only up to this "end"
        end

    Kindly provide solutions. Thanks in advance!
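
    The flip-flop range /def/ .. /end/ closes at the first "end" it sees, so any nested block ends the match early. A common fix is to count block keywords instead: increment a depth counter on lines that open a Ruby block and decrement on "end", emitting lines until the def's own "end" closes. A minimal sketch of that idea in Python; the keyword list is a rough heuristic, and one-line modifiers and do-blocks would slip past it:

        import re

        OPENERS = re.compile(r"^\s*(def|if|unless|while|until|case|begin|for|module|class)\b")

        def extract_functions(lines):
            """Yield the lines of each top-level def...end block."""
            depth, current = 0, []
            for line in lines:
                if depth == 0:
                    if re.match(r"^\s*def\b", line):
                        depth, current = 1, [line]
                    continue
                current.append(line)
                if OPENERS.match(line):
                    depth += 1
                elif re.match(r"^\s*end\b", line):
                    depth -= 1
                    if depth == 0:
                        yield current

        with open("program.rb") as f:
            for fn in extract_functions(f.read().splitlines(keepends=True)):
                print("".join(fn), end="")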

  • R: extracting "clean" UTF-8 text from a web page scraped with RCurl

    - by SlowLearner
    Using R, I am trying to scrape a web page and save the text, which is in Japanese, to a file. Ultimately this needs to scale to tackle hundreds of pages on a daily basis. I already have a workable solution in Perl, but I am trying to migrate the script to R to reduce the cognitive load of switching between multiple languages. So far I am not succeeding. Related questions seem to be this one on saving csv files and this one on writing Hebrew to a HTML file. However, I haven't been successful in cobbling together a solution based on the answers there. The pages are from Yahoo! Japan Finance, and my Perl code looks like this:

        use strict;
        use HTML::Tree;
        use LWP::Simple;
        #use Encode;
        use utf8;

        binmode STDOUT, ":utf8";

        my @arr_links = ();
        $arr_links[1] = "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7203";
        $arr_links[2] = "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7201";

        foreach my $link (@arr_links){
            $link =~ s/"//gi;
            print("$link\n");
            my $content = get($link);
            my $tree = HTML::Tree->new();
            $tree->parse($content);
            my $bar = $tree->as_text;
            open OUTFILE, ">>:utf8", join("","c:/", substr($link, -4),"_perl.txt") || die;
            print OUTFILE $bar;
        }

    This Perl script produces a file with proper kanji and kana that can be mined and manipulated offline. My R code, such as it is, looks like the following. The R script is not an exact duplicate of the Perl solution just given, as it doesn't strip out the HTML and leave the text (this answer suggests an approach using R, but it doesn't work for me in this case) and it doesn't have the loop and so on, but the intent is the same:

        require(RCurl)
        require(XML)

        links <- list()
        links[1] <- "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7203"
        links[2] <- "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7201"

        txt <- getURL(links, .encoding = "UTF-8")
        Encoding(txt) <- "bytes"
        write.table(txt, "c:/geturl_r.txt",
                    quote = FALSE, row.names = FALSE,
                    sep = "\t", fileEncoding = "UTF-8")

    This R script generates output that is basically rubbish. I assume that there is some combination of HTML, text and file encoding that will allow me to generate in R a similar result to that of the Perl solution, but I cannot find it. The header of the HTML page I'm trying to scrape says the charset is utf-8, and I have set the encoding in the getURL call and in the write.table function to utf-8, but this alone isn't enough.

    The question: how can I scrape the above web page using R and save the text as CSV in "well-formed" Japanese text rather than something that looks like line noise?

    Edit: when I omit the Encoding step, I get what look like Unicode code points, but not the graphical representation of the characters. So it may be some kind of locale-related issue, but in the exact same locale the Perl script does provide useful output. So this is still puzzling.

  • Extracting a string between specified characters in python

    - by Seth
    I'm a newbie to regular expressions and I have the following string:

        sequence = ["{\"First\":\"Belyuen,NT,0801\",\"Second\":\"Belyuen,NT,0801\"}",
                    "{\"First\":\"Larrakeyah,NT,0801\",\"Second\":\"Larrakeyah,NT,0801\"}"]

    I am trying to extract the text Belyuen,NT,0801 and Larrakeyah,NT,0801 in Python. I have the following code, which is not working:

        re.search('\:\\"...\\', ''.join(sequence))

    I.e. I want to get the string between the characters :\ and \.
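
    Each element of the list is a JSON object, so parsing it as JSON is more robust than a regex; both approaches are sketched below (the data is taken verbatim from the question):

        import json
        import re

        sequence = ["{\"First\":\"Belyuen,NT,0801\",\"Second\":\"Belyuen,NT,0801\"}",
                    "{\"First\":\"Larrakeyah,NT,0801\",\"Second\":\"Larrakeyah,NT,0801\"}"]

        # Option 1: the elements are JSON, so just parse them.
        for item in sequence:
            print(json.loads(item)["First"])   # Belyuen,NT,0801 then Larrakeyah,NT,0801

        # Option 2: a regex capturing whatever sits between "First":" and the next quote.
        for item in sequence:
            m = re.search(r'"First":"([^"]+)"', item)
            if m:
                print(m.group(1))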

  • SQL Server CTE referred in self joins slow

    - by Kharlos Dominguez
    Hello, I have written a table-valued UDF that starts with a CTE to return a subset of the rows from a large table. There are several joins in the CTE: a couple of inner joins and one left join to other tables, which don't contain a lot of rows. The CTE has a where clause that restricts the rows to a date range, in order to return only the rows needed. I'm then referencing this CTE in 4 self left joins, in order to build subtotals using different criteria. The query is quite complex, but here is a simplified pseudo-version of it:

        WITH DataCTE as (
            SELECT [columns] FROM table
            INNER JOIN table2 ON [...]
            INNER JOIN table3 ON [...]
            LEFT JOIN table3 ON [...]
        )
        SELECT [aggregates_columns of each subset]
        FROM DataCTE Main
        LEFT JOIN DataCTE BananasSubset
            ON [...]
            AND Product = 'Bananas'
            AND Quality = 100
        LEFT JOIN DataCTE DamagedBananasSubset
            ON [...]
            AND Product = 'Bananas'
            AND Quality < 20
        LEFT JOIN DataCTE MangosSubset ON [...]
        GROUP BY [...]

    I have the feeling that SQL Server gets confused and calls the CTE for each self join, which seems confirmed by looking at the execution plan, although I confess to not being an expert at reading those. I would have assumed SQL Server to be smart enough to perform the data retrieval from the CTE only once, rather than do it several times. I have tried the same approach but, rather than using a CTE to get the subset of the data, I used the same select query as in the CTE and made it output to a temp table instead. The version referencing the CTE takes 40 seconds; the version referencing the temp table takes between 1 and 2 seconds.

    Why isn't SQL Server smart enough to keep the CTE results in memory? I like CTEs, especially in this case, as my UDF is a table-valued one, so the CTE allowed me to keep everything in a single statement. To use a temp table, I would need to write a multi-statement table-valued UDF, which I find a slightly less elegant solution. Have any of you had this kind of performance issue with CTEs, and if so, how did you get it sorted? Thanks, Kharlos

  • Why is lua crashing after extracting zip files?

    - by Brian T Hannan
    I have the following code. It successfully extracts all the files and puts them in the right location, but it crashes every time it reaches the end of the function:

        require "zip"

        function ExtractZipAndCopyFiles(zipPath, zipFilename, destinationPath)
            local zfile, err = zip.open(zipPath .. zipFilename)

            -- iterate through each file inside the zip file
            for file in zfile:files() do
                local currFile, err = zfile:open(file.filename)
                local currFileContents = currFile:read("*a") -- read entire contents of current file

                -- write the current file inside the zip to a file outside the zip
                local hBinaryOutput = io.open(destinationPath .. file.filename, "wb")
                if hBinaryOutput then
                    hBinaryOutput:write(currFileContents)
                    hBinaryOutput:close()
                end
            end

            zfile:close()
        end

        -- call the function
        ExtractZipAndCopyFiles("C:\\Users\\bhannan\\Desktop\\LUA\\", "example.zip", "C:\\Users\\bhannan\\Desktop\\ZipExtractionOutput\\")

    Why does it crash every time it reaches the end?

  • Extracting Information from Images

    - by Khorkrak
    What are some fast and somewhat reliable ways to extract information about images? I've been tinkering with OpenCV, and so far it seems to be the best route; plus it has Python bindings. To be more specific, I'd like to determine what I can about what's in an image. The Haar face-detection and full-body-detection classifiers are great: now I can tell that most likely there are faces and/or people in the image, as well as roughly how many. Okay - what else? How about whether there are any buildings, and if so, what they seem to be - huts, office buildings etc.? Is there sky visible, grass, trees, and so forth?

    From what I've read about training classifiers to detect objects, it seems like a rather laborious process: 10,000 or so wrong images and 5,000 or so correct samples to train a classifier. I'm hoping that there are some decent ones around already, instead of having to do this all myself for a bunch of different objects - or is there some other way to go about this sort of thing?
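
    As a concrete starting point, OpenCV ships with several pre-trained Haar cascades (frontal faces, full bodies, eyes), so those need no training at all. A minimal face-counting sketch via the Python bindings, using the modern cv2 API (which postdates the question):

        import cv2

        img = cv2.imread("photo.jpg")
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # A pre-trained cascade bundled with the opencv-python package.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
        print(f"found {len(faces)} face(s)")
        for (x, y, w, h) in faces:
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imwrite("faces.jpg", img)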

  • XSLT: How to remove self-closed elements

    - by Daoming Yang
    I have a large XML file which contains a lot of self-closed tags. How can I remove all of them using XSLT? E.g.

        <?xml version="1.0" encoding="utf-8" ?>
        <Persons>
          <Person>
            <Name>user1</Name>
            <Tel />
            <Mobile>123</Mobile>
          </Person>
          <Person>
            <Name>user2</Name>
            <Tel>456</Tel>
            <Mobile />
          </Person>
          <Person>
            <Name />
            <Tel>123</Tel>
            <Mobile />
          </Person>
          <Person>
            <Name>user4</Name>
            <Tel />
            <Mobile />
          </Person>
        </Persons>

    I'm expecting the result:

        <?xml version="1.0" encoding="utf-8" ?>
        <Persons>
          <Person>
            <Name>user1</Name>
            <Mobile>123</Mobile>
          </Person>
          <Person>
            <Name>user2</Name>
            <Tel>456</Tel>
          </Person>
          <Person>
            <Tel>123</Tel>
          </Person>
          <Person>
            <Name>user4</Name>
          </Person>
        </Persons>

    Note: there are thousands of different elements; how can I programmatically remove all the self-closed tags? Another question: how can I remove empty elements such as <name></name> as well? Can anyone help me with this? Many thanks.

  • Extracting noun+noun or (adj|noun)+noun from Text

    - by ssuhan
    I would like to ask whether it is possible to extract noun+noun or (adj|noun)+noun sequences with the R package openNLP - that is, I would like to use linguistic filtering to extract candidate noun phrases. Could you show me how? Many thanks.

    Thanks for the responses; here is the code:

        library("openNLP")

        acq <- "Gulf Applied Technologies Inc said it sold its subsidiaries engaged in pipeline and terminal operations for 12.2 mln dlrs. The company said the sale is subject to certain post closing adjustments, which it did not explain. Reuter."

        acqTag <- tagPOS(acq)
        acqTagSplit = strsplit(acqTag, " ")
        acqTagSplit

        qq = 0
        tag = 0
        for (i in 1:length(acqTagSplit[[1]])) {
            qq[i] <- strsplit(acqTagSplit[[1]][i], '/')
            tag[i] = qq[i][[1]][2]
        }

        index = 0
        k = 0
        for (i in 1:(length(acqTagSplit[[1]]) - 1)) {
            if ((tag[i] == "NN"  && tag[i+1] == "NN")  |
                (tag[i] == "NNS" && tag[i+1] == "NNS") |
                (tag[i] == "NNS" && tag[i+1] == "NN")  |
                (tag[i] == "NN"  && tag[i+1] == "NNS") |
                (tag[i] == "JJ"  && tag[i+1] == "NN")  |
                (tag[i] == "JJ"  && tag[i+1] == "NNS")) {
                k = k + 1
                index[k] = i
            }
        }
        index

    Readers can refer to index on acqTagSplit to do the noun+noun or (adj|noun)+noun extraction. (The code is not optimal, but it works. If you have any ideas, please let me know.)

    Furthermore, I still have a problem. Justeson and Katz (1995) proposed another linguistic filter to extract candidate noun phrases:

        ((Adj|Noun)+|((Adj|Noun)*(Noun-Prep)?)(Adj|Noun)*)Noun

    I cannot quite understand its meaning. Could someone do me a favor and explain it, or translate such a representation into R?

  • Extracting specific files from ZIP in PHP

    - by Michael
    If I have a ZIP file whose structure is:

        directory1  (DIR)
            files in here
        directory2  (DIR)
            more files in here

    then, using pclzip.lib.php, how can I open up this ZIP file and extract directory1 (recursively) into one directory, and then extract directory2 (recursively) into another directory?
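
    For reference, the same routing is compact in Python's standard zipfile module (a different tool from the pclzip library the question asks about; directory and destination names are illustrative):

        import zipfile

        with zipfile.ZipFile("archive.zip") as zf:
            for name in zf.namelist():
                # Send each top-level directory of the archive to its own destination;
                # extract() recreates the nested paths, so extraction is recursive.
                if name.startswith("directory1/"):
                    zf.extract(name, "dest_one")
                elif name.startswith("directory2/"):
                    zf.extract(name, "dest_two")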

  • Extracting a Rails application into a plugin or engine

    - by Globalkeith
    I have a Rails 2.3 application which I would like to extract into a plugin or engine. The application has user authentication and basic CMS capabilities, supported by the ancestry plugin. I want to extract the logic of the application into a plugin/engine so that I can use this code for future projects, with a different "skin" or "theme" if required. I'm not entirely sure I actually understand the difference between the plugin and engine concepts, so that would be a good first point. What is the best approach? Are there any good starting points, links, explanations or examples that I should follow? Also, with the release of Rails 3 to consider, is there anything I should be aware of with regard to plugins etc.? I am going to start off by watching Ryan's http://railscasts.com/episodes/149-rails-engines, but obviously that's over a year old now, so one of the challenges I'm faced with is finding the most up-to-date and relevant information on this subject. All tips and help gratefully received.

  • Self Authenticating Links in Django

    - by awolf
    In my web app I would like to be able to email self-authenticating links to users. These links will contain a unique token (a UUID). When a user clicks the link, the token being present in the query string will be enough to authenticate them, and they won't have to enter their username and password. What's the best way to do this?
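
    One common pattern, sketched below in Django with an entirely hypothetical LoginToken model (a real version should also expire tokens, make them single-use as shown, and treat the links as bearer credentials):

        import uuid

        from django.contrib.auth import login
        from django.db import models
        from django.http import HttpResponseForbidden, HttpResponseRedirect

        class LoginToken(models.Model):
            """Hypothetical model: one single-use token per emailed link."""
            user = models.ForeignKey("auth.User", on_delete=models.CASCADE)
            token = models.UUIDField(default=uuid.uuid4, unique=True)
            used = models.BooleanField(default=False)

        def token_login(request):
            """View for a link like /magic-login/?token=<uuid>."""
            try:
                token = uuid.UUID(request.GET.get("token", ""))
            except ValueError:
                return HttpResponseForbidden("malformed token")
            try:
                t = LoginToken.objects.get(token=token, used=False)
            except LoginToken.DoesNotExist:
                return HttpResponseForbidden("unknown or already-used token")
            t.used = True
            t.save()
            # Tell Django which backend "authenticated" the user, then log them in.
            t.user.backend = "django.contrib.auth.backends.ModelBackend"
            login(request, t.user)
            return HttpResponseRedirect("/")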
