Search Results

Search found 3241 results on 130 pages for 'extract'.


  • Programming advice - Which Loops?

    - by GaxZE
    There's no easy way to say this, so I'll just say it in the form of a story. I'm looking for advice on which loops to use and where. Here goes: for each of the 200-odd fields in the database, I need to do the following: extract the allowed values using the extract function, place the allowed values into an array, then loop over the array and insert each value into a db table, first checking that the record doesn't already exist and inserting it only if it doesn't. I've found myself playing with this for the past two days and getting more and more tangled in loops. Wondering if anybody can guide me.
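
     One way to keep this from tangling is a single outer loop over the fields and an inner loop over each field's allowed values, with the existence check folded in next to the insert. Below is a minimal Python sketch of that shape; the table name, column names, and the field/allowed-value structure are all hypothetical, since the question gives no schema or language.

     import sqlite3

     # hypothetical field definitions; in practice these come from the database
     fields = [
         {"name": "status", "allowed": ["open", "closed", "pending"]},
         {"name": "priority", "allowed": ["low", "high"]},
     ]

     conn = sqlite3.connect("example.db")
     cur = conn.cursor()
     cur.execute("CREATE TABLE IF NOT EXISTS allowed_values (field_name TEXT, value TEXT)")

     for field in fields:                      # one pass per field
         for value in field["allowed"]:        # one pass per allowed value
             # only insert if the (field, value) pair is not already present
             cur.execute("SELECT 1 FROM allowed_values WHERE field_name = ? AND value = ?",
                         (field["name"], value))
             if cur.fetchone() is None:
                 cur.execute("INSERT INTO allowed_values (field_name, value) VALUES (?, ?)",
                             (field["name"], value))

     conn.commit()
     conn.close()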

    Read the article

  • How to get HTML elements with Python lxml

    - by Damiano
    Hello! I have this HTML code: <table> <tr> <td class="test"><b><a href="">aaa</a></b></td> <td class="test">bbb</td> <td class="test">ccc</td> <td class="test"><small>ddd</small></td> </tr> <tr> <td class="test"><b><a href="">eee</a></b></td> <td class="test">fff</td> <td class="test">ggg</td> <td class="test"><small>hhh</small></td> </tr> </table> I use this Python code to extract all <td class="test"> elements with the lxml module: import urllib2 import lxml.html code = urllib2.urlopen("http://www.example.com/page.html").read() html = lxml.html.fromstring(code) result = html.xpath('//td[@class="test"][position() = 1 or position() = 4]') It works well! The result is: <td class="test"><b><a href="">aaa</a></b></td> <td class="test"><small>ddd</small></td> <td class="test"><b><a href="">eee</a></b></td> <td class="test"><small>hhh</small></td> (so the first and the fourth column of each <tr>). Now I have to extract: aaa (the text of the link), ddd (the text inside the <small> tag), eee (the text of the link), hhh (the text inside the <small> tag). How can I extract these values? (The problem is that I have to strip the <b> tag and get the anchor text in the first column, and strip the <small> tag in the fourth column.) Thank you!
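
     Since the matches come back as lxml element objects, each one's nested markup can be flattened with .text_content(), which returns the concatenated text of the element and all of its descendants. A self-contained sketch using a cut-down version of the sample table from the question:

     import lxml.html

     code = """<table><tr>
     <td class="test"><b><a href="">aaa</a></b></td>
     <td class="test">bbb</td>
     <td class="test">ccc</td>
     <td class="test"><small>ddd</small></td>
     </tr></table>"""

     html = lxml.html.fromstring(code)
     for td in html.xpath('//td[@class="test"][position() = 1 or position() = 4]'):
         # text_content() strips the <b>, <a> and <small> wrappers, leaving just the text
         print(td.text_content().strip())   # prints "aaa", then "ddd"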

    Read the article

  • Problem extracting text from RSS feeds

    - by Gautam
    Hi, I am new to the world of Ruby and Rails. I have seen Railscast 190 and I just started playing with it. I used SelectorGadget to find out the CSS and XPath. I have the following code: require 'rubygems' require 'nokogiri' require 'open-uri' url = "http://www.telegraph.co.uk/sport/football/rss" doc = Nokogiri::HTML(open(url)) doc.xpath('//a').each do |paragraph| puts paragraph.text end When I extracted text from a normal HTML page with CSS, I could get the extracted text on the console. But when I try to do the same, either with CSS or XPath, for the RSS feed at the URL mentioned in the code above, I don't get any output. How do you extract text from RSS feeds? I also have another silly question. Is there a way to extract text from 2 different feeds and display it on the console, something like url1 = "http://www.telegraph.co.uk/sport/football/rss" url2 = "http://www.telegraph.co.uk/sport/cricket/rss" Looking forward to your help and suggestions. Thank You Gautam
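
     For what it's worth, RSS is plain XML, so the readable text usually lives in each item's title and description elements rather than in <a> elements. The shape of reading item text from two feeds, sketched here in Python purely as an illustration (the question itself is about Ruby/Nokogiri):

     import xml.etree.ElementTree as ET
     from urllib.request import urlopen

     urls = [
         "http://www.telegraph.co.uk/sport/football/rss",
         "http://www.telegraph.co.uk/sport/cricket/rss",
     ]

     for url in urls:
         tree = ET.fromstring(urlopen(url).read())
         # RSS items sit under channel/item; print each item's title text
         for item in tree.findall(".//item"):
             print(item.findtext("title"))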

    Read the article

  • Showing renames in hg status?

    - by Ryan Thompson
    I know that Mercurial can track renames of files, but how do I get it to show me renames instead of adds/removes when I do hg status? For instance, instead of: A bin/extract-csv-column.pl A bin/find-mirna-binding.pl A bin/xls2csv-separate-sheets.pl A lib/Text/CSV/Euclid.pm R src/extract-csv-column.pl R src/find-mirna-binding.pl R src/modules/Text/CSV/Euclid.pm R src/xls2csv-separate-sheets.pl I want some indication that four files have been moved. I think I read somewhere that the output is like this to preserve backward-compatibility with something-or-other, but I'm not worried about that.

    Read the article

  • Extracting Demographic and Contact Information from unstructured text files

    - by jn29098
    I am looking to extract specific items out of a large pool of unstructured documents. These documents could be 1-5 pages of text formatted in various ways by the user, but in most cases would contain at least: Name, Address (physical), Email Address, Phone number, website URL. I'm looking for a semantic parser that can attempt to extract these elements from the documents so that I can load that information into a relational database and work with these records as contacts. Other services I've looked at, while valuable for other purposes, do not address this specific need: Alchemy API, Open Calais, Saplo. Any thoughts, suggestions or leads?
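
     A proper semantic parser is the right tool for names and postal addresses, but the more regular items (email, phone, URL) can often be pulled with a plain regex pass first. A rough Python baseline; the patterns are deliberately simple and only illustrative, and the input filename is a placeholder:

     import re

     with open("document.txt", encoding="utf-8") as fh:   # placeholder input file
         text = fh.read()

     emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", text)
     phones = re.findall(r"\+?\d[\d\s().-]{7,}\d", text)
     urls   = re.findall(r"https?://\S+|www\.\S+", text)

     print(emails)
     print(phones)
     print(urls)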

    Read the article

  • Batch INSYNC help needed...

    - by Raja Reddy
    I have an INSYNC batch job to 'extract' certain conditioned data output. For instance, the INSYNC code below extracts the data if position 44 has a value of '25'. The question here is, I want to get the output in a sorted manner based on a particular field. Can we incorporate SORT criteria below? Suggestions are really appreciated. FUNCTION=EXTRACT INDD=#INDD OUTDD=#OUTDD RDW=OFF LINESPERPAGE=080 CASE SEARCHDATA=(00044,002,EQ,C'25') ENDCASE PS: We can achieve the same by means of the SORT utility through the 'SORT FIELDS' parameter.

    Read the article

  • Read the first 1 KB of a blob from Oracle

    - by Angus
    Hi, I wish to extract just the first 1024 bytes of a stored blob and not the whole file. The reason for this is that I want to extract just the metadata from a file as quickly as possible, without having to select the whole blob. I understand the following: select dbms_lob.substr(file_blob, 16, 1) from file_upload where file_upload_id=504; which returns it as hex. How can I do this so that it returns binary data, without selecting the whole blob? Thanks in advance.
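
     One thing worth noting: dbms_lob.substr on a BLOB returns a RAW, and it is SQL*Plus that displays RAW values as hex; a client driver receives the actual bytes. A hedged Python sketch using cx_Oracle with the table and column names from the question (the connection string is a placeholder):

     import cx_Oracle

     conn = cx_Oracle.connect("user/password@host/service")   # placeholder credentials
     cur = conn.cursor()

     # dbms_lob.substr(blob, amount, offset) -> RAW, returned to the client as bytes
     cur.execute(
         "SELECT dbms_lob.substr(file_blob, 1024, 1) "
         "FROM file_upload WHERE file_upload_id = :id",
         id=504,
     )
     first_kb, = cur.fetchone()
     print(len(first_kb), first_kb[:16])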

    Read the article

  • C# Regex on XML string handler

    - by Dan Sewell
    Hi guys. Trying to fiddle around with regex here, my first attempt. I'm trying to extract some figures out of the content of an XML tag. The content looks like this: www.blahblah.se/maps.aspx?isAlert=true&lat=51.958855252721&lon=-0.517657021473527 I need to extract the lat and lon numerical values out of each link. They will always be the same number of characters, and the lon may or may not have a "-" sign. I thought about doing something like the following (the string in question is in the "link" tag): var document = XDocument.Load(e.Result); if (document.Root == null) return; var events = from ev in document.Descendants("item1") select new { Title = (ev.Element("title").Value), Latitude = Regex.xxxxxxx(ev.Element("link").Value, @"lat=(?<Lat>[+-]?\d*\.\d*)", String.Empty), Longitude = Convert.ToDouble(ev.Element("link").Value), }; foreach (var ev in events) { do stuff } Many thanks!
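
     The pattern itself is the portable part of this; here is the same extraction sketched in Python against the sample link from the question (in C# the equivalent would be a Regex.Match against the link value with similarly named groups):

     import re

     link = ("www.blahblah.se/maps.aspx?isAlert=true"
             "&lat=51.958855252721&lon=-0.517657021473527")

     m = re.search(r"lat=([-+]?\d+\.\d+)&lon=([-+]?\d+\.\d+)", link)
     if m:
         lat, lon = float(m.group(1)), float(m.group(2))
         print(lat, lon)   # 51.958855252721 -0.517657021473527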

    Read the article

  • Jar extraction and verification in BlackBerry

    - by Basilio
    Hi All, The application I am currently working on requires me to extract contents from, and verify the authenticity of, a signed jar that is stored on the SD card. In Java [and Android] we have the java.util.jar and java.util.zip packages, which allow you to extract a jar. However, J2ME and BlackBerry® do not provide support for these packages. I have, however, successfully extracted the contents using the third-party ZipMe library. Can anyone let me know how to get the signature block from the .DSA/.RSA file to authenticate the jar? I have the certificate that was used to sign the jar as well. This is easily done in Java using the getCertificates() method available in java.util.jar.JarFile. Is there any 3rd-party API available that emulates JarFile for BlackBerry®? Any help in this regard will be deeply appreciated. Thanks & Regards Basilio John Vincent D'souza

    Read the article

  • Extracting URLs (to array) in Ruby

    - by FearMediocrity
    Good afternoon, I'm learning about using regexes in Ruby, and have hit a point where I need some assistance. I am trying to extract 0 to many URLs from a string. This is the code I'm using: sStrings = ["hello world: http://www.google.com", "There is only one url in this string http://yahoo.com . Did you get that?", "The first URL in this string is http://www.bing.com and the second is http://digg.com","This one is more complicated http://is.gd/12345 http://is.gd/4567?q=1", "This string contains no urls"] sStrings.each do |s| x = s.scan(/((http|https):\/\/[a-z0-9]+([\-\.]{1}[a-z0-9]+)*\.[a-z]{2,5}(([0-9]{1,5})?\/.[\w-]*)?)/ix) x.each do |url| puts url end end This is what is returned: http://www.google.com http .google nil nil http://yahoo.com http nil nil nil http://www.bing.com http .bing nil nil http://digg.com http nil nil nil http://is.gd/12345 http nil /12345 nil http://is.gd/4567 http nil /4567 nil What is the best way to extract only the full URLs and not the captured parts of the regex? Thanks Jim
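
     The extra fragments and nils come from the capture groups: when a pattern contains groups, scan returns the groups rather than the whole match, so every nested group shows up in the result. Making the inner groups non-capturing leaves only the full URL. The same idea sketched in Python, whose re.findall treats capture groups the same way (the pattern below is a lightly adjusted version of the one in the question):

     import re

     strings = [
         "hello world: http://www.google.com",
         "The first URL is http://www.bing.com and the second is http://digg.com",
         "This one is more complicated http://is.gd/12345 http://is.gd/4567?q=1",
         "This string contains no urls",
     ]

     # every group is non-capturing (?:...), so findall returns only the full matches
     pattern = re.compile(
         r"https?://[a-z0-9]+(?:[-.][a-z0-9]+)*\.[a-z]{2,5}(?::\d{1,5})?(?:/\S*)?",
         re.IGNORECASE,
     )

     for s in strings:
         for url in pattern.findall(s):
             print(url)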

    Read the article

  • How to pass a variable as an argument to a command with quotes in PowerShell

    - by da_ponc
    Hi there, My PowerShell script takes the following parameter: Param($BackedUpFilePath) The value that is getting passed into my script is: "\\123.123.123.123\Backups\Website.7z" I have another variable which is the location I want to extract the file to: $WebsiteDeploymentFolder = "C:\example" I am trying to extract the archive with the following command: 7z x $BackedUpFilePath -o$WebsiteDeploymentFolder -aoa I keep getting the following error: Error: cannot find archive The following works, but I need $BackedUpFilePath to be dynamic: 7z x '\\123.123.123.123\Backups\Website.7z' -o$WebsiteDeploymentFolder -aoa I think I need to pass $BackedUpFilePath to 7z with quotes, but they seem to get stripped out no matter what I try. I am in quote hell. Thanks.

    Read the article

  • Grails date from params in controller

    - by nils petersohn
    Why is it so hard to extract the date from the view via the params in a Grails controller? I don't want to extract the date by hand like this: instance.dateX = parseDate(params["dateX_value"]) // parseDate is from my helper class I just want to use instance.properties = params you know :) In the model the type is java.util.Date, and in the params is all the information (dateX_month, dateX_day, ...). I searched on the net and found nothing on this :( I hoped that Grails 1.3.0 could help, but it's still the same thing. I can't and won't believe that extracting the date by hand is necessary!

    Read the article

  • Is it possible to parse a stylesheet with Nokogiri?

    - by wbharding
    I've spent my requisite two hours Googling this, and I can not find any good answers, so let's see if humans can beat Google computers. I want to parse a stylesheet in Ruby so that I can apply those styles to elements in my document (to make the styles inlined). So, I want to take something like <style> .mystyle { color:white; } </style> And be able to extract it into a Nokogiri object of some sort. The Nokogiri class "CSS::Parser" (http://nokogiri.rubyforge.org/nokogiri/Nokogiri/CSS/Parser.html) certainly has a promising name, but I can't find any documentation on what it is or how it works, so I have no idea if it can do what I'm after here. My end goal is to be able to write code something like: a_web_page = Nokogiri::HTML(html_page_as_string) parsed_styles = Nokogiri::CSS.parse(html_page_as_string) parsed_styles.each do |style| existing_inlined_style = a_web_page.css(style.declaration) || '' a_web_page.css(style.declaration)['css'] = existing_inlined_style + style.definition end Which would extract styles from a stylesheet and add them all as inlined styles to my document.

    Read the article

  • Multi-module Maven build: different result from parent and from module

    - by Albaku
    I am migrating an application from an Ant build to a Maven 3 build. The app is composed of: a parent project specifying all the modules to build; a project generating classes with JAXB and building a jar with them; a project building an EJB module; three projects building war modules; and one project building an ear. Here is an extract from my parent pom:

      <groupId>com.test</groupId>
      <artifactId>P</artifactId>
      <packaging>pom</packaging>
      <version>04.01.00</version>
      <modules>
        <module>../PValidationJaxb</module> <-- jar
        <module>../PValidation</module> <-- ejb
        <module>../PImport</module> <-- war
        <module>../PTerminal</module> <-- war
        <module>../PWebService</module> <-- war
        <module>../PEAR</module> <-- ear
      </modules>

    I have several problems which I think have the same origin, probably a dependency management issue that I cannot figure out: The generated modules are different depending on whether I build from the parent pom or from a single module. Typically, if I build PImport alone, the generated war is similar to what I had with my Ant build, while if I build from the parent pom my war takes 20 MB, because a lot of dependencies from other modules have been added. Both wars run fine. My project PWebService has unit tests to be executed during the build. It uses mock-ejb, which has cglib as a dependency. Because of a ClassNotFound problem with that one, I had to exclude it and add a dependency on cglib-nodep (see the last pom extract). If I then build only this module, it works well. But if I build from the parent project, it fails because other dependencies in other modules also have an implicit dependency on cglib; I had to exclude cglib in every module's pom and add the cglib-nodep dependency everywhere to make it run. Am I missing something important in my configuration? The PValidation pom extract (it creates a jar containing an ejb with interfaces generated by XDoclet, as well as a client jar):

      <parent>
        <groupId>com.test</groupId>
        <artifactId>P</artifactId>
        <version>04.01.00</version>
      </parent>
      <artifactId>P-validation</artifactId>
      <packaging>ejb</packaging>
      <dependencies>
        <dependency>
          <groupId>com.test</groupId>
          <artifactId>P-jaxb</artifactId>
          <version>${project.version}</version>
        </dependency>
        <dependency>
          <groupId>org.hibernate</groupId>
          <artifactId>hibernate</artifactId>
          <version>3.2.5.ga</version>
          <exclusions>
            <exclusion>
              <groupId>cglib</groupId>
              <artifactId>cglib</artifactId>
            </exclusion>
          </exclusions>
        </dependency>
        <dependency>
          <groupId>cglib</groupId>
          <artifactId>cglib-nodep</artifactId>
          <version>2.2.2</version>
        </dependency>
        ... [other libs] ...
      </dependencies>
      <build>
        ...
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-ejb-plugin</artifactId>
            <configuration>
              <ejbVersion>2.0</ejbVersion>
              <generateClient>true</generateClient>
            </configuration>
          </plugin>
          <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>xdoclet-maven-plugin</artifactId>
            ...

    The PImport pom extract (it depends on both the JAXB-generated jar and the ejb client jar):

      <parent>
        <groupId>com.test</groupId>
        <artifactId>P</artifactId>
        <version>04.01.00</version>
      </parent>
      <artifactId>P-import</artifactId>
      <packaging>war</packaging>
      <dependencies>
        <dependency>
          <groupId>com.test</groupId>
          <artifactId>P-jaxb</artifactId>
          <version>${project.version}</version>
        </dependency>
        <dependency>
          <groupId>com.test</groupId>
          <artifactId>P-validation</artifactId>
          <version>${project.version}</version>
          <type>ejb-client</type>
        </dependency>
        ... [other libs] ...
      </dependencies>

    The PWebService pom extract:

      <parent>
        <groupId>com.test</groupId>
        <artifactId>P</artifactId>
        <version>04.01.00</version>
      </parent>
      <artifactId>P-webservice</artifactId>
      <packaging>war</packaging>
      <properties>
        <jersey.version>1.14</jersey.version>
      </properties>
      <dependencies>
        <dependency>
          <groupId>com.sun.jersey</groupId>
          <artifactId>jersey-servlet</artifactId>
          <version>${jersey.version}</version>
        </dependency>
        <dependency>
          <groupId>com.rte.etso</groupId>
          <artifactId>etso-validation</artifactId>
          <version>${project.version}</version>
          <type>ejb-client</type>
        </dependency>
        ... [other libs] ...
        <dependency>
          <groupId>org.mockejb</groupId>
          <artifactId>mockejb</artifactId>
          <version>0.6-beta2</version>
          <scope>test</scope>
          <exclusions>
            <exclusion>
              <groupId>cglib</groupId>
              <artifactId>cglib-full</artifactId>
            </exclusion>
          </exclusions>
        </dependency>
        <dependency>
          <groupId>cglib</groupId>
          <artifactId>cglib-nodep</artifactId>
          <version>2.2.2</version>
          <scope>test</scope>
        </dependency>
      </dependencies>

    Many thanks

    Read the article

  • Extraction Event

    - by Anicho
    So I have the following code: public override void Extract(object sender, ExtractionEventArgs e) { if (e.Response.HtmlDocument != null) { var myParam = e.Request.QueryStringParameters.Where(parameter => parameter.Name == QueryName).Select(parameter => parameter.Value).Distinct(); myParam. // add the extracted value to the web performance test context e.WebTest.Context.Add(this.ContextParameterName, myParam.ToString()); e.Success = true; return; } // If the extraction fails, set the error text that the user sees e.Success = false; e.Message = String.Format(CultureInfo.CurrentCulture, "Not Found: {0}", QueryName); } It's returning: System.Linq.Enumerable+<DistinctItem>d_81`1[system.string] I am expecting something along the lines of: 0152-1231-1231-123d My question is: how do I extract the query string's actual value from the ExtractionEventArgs? They say it's possible, but I have no idea.

    Read the article

  • Slow speeds when unzipping with PHP onto an NFS, how can I speed it up?

    - by bunwich
    Hi, I'm trying to figure out how to boost my NFS speed and PHP uploads. 1) The file is uploaded to the web server's local tmp dir. 2) With PHP I copy the file userxxx.zip to the NFS. 3) With PHP I extract userxxx.zip on the NFS to another dir on the NFS. What I'm finding is that in step 3 the file is being read across the NFS by the web server, processed by the web server, and written back across the NFS. Speeds, as expected, are very slow. Might a possible solution be to get the file server to extract the zip? a) Web server copies the file to the NFS b) Web server makes a web service call to the file server c) File server can now unzip the file as if it were local, and the speeds should be much faster. I would appreciate any suggestions on how people have approached this problem. (I'm aware that PHP's ZipArchive() is very slow, and I'll likely use Java or PHP exec unzip to speed it up.) Thanks

    Read the article

  • Parse text/html part of email source using JavaScript

    - by Ben McCormack
    Using JavaScript, I need to parse the Content-Type text/html portion of an email message and extract just the HTML part. Here's an example of the part of the mail source in question: ------=_Part_1504541_510475628.1327512846983 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: 7bit <html ... a bunch of html ... /html> I want to extract everything between (and including) the <html> tags after text/html. How do I do this? NOTE: I'm OK with a hacky regex. I don't expect this to be bulletproof.

    Read the article

  • Using a RegEx in a SQL Query

    - by Jim B
    Hey Everyone, Here's the situation I'm in: we have a field in our database that contains a 3-digit number surrounded by some text. This number is actually a PK in another table, and I need to extract it so I can implement a proper FK relationship. Here's an example of what would currently reside in the column: Some Text Goes Here - (305) Followed By Some More Text So, what I'm looking to do is extract the '305' from the column, and hopefully end up with a result that looks something like this (pseudo code): SELECT <My Extracted Value>, Original Column Text, Id FROM dbo.MyTable It seems to me that using a regex match in my query is the most effective way to do this. Can anybody point me in the right direction?
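
     For reference, the pattern being pulled is just a three-digit run inside literal parentheses; a Python sketch of that extraction on the sample value is below. (If the work has to stay inside SQL Server, a PATINDEX/SUBSTRING pair over a '%([0-9][0-9][0-9])%' pattern is the usual regex-free route, since T-SQL has no built-in regex support.)

     import re

     row = "Some Text Goes Here - (305) Followed By Some More Text"

     # a 3-digit number wrapped in literal parentheses
     m = re.search(r"\((\d{3})\)", row)
     if m:
         fk_value = int(m.group(1))
         print(fk_value)   # 305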

    Read the article

  • How to get everything in the string but a particular pattern

    - by José Leal
    Yet another regexp question: I have a string like the following: "This is a string, and I have a priority !1" I want to build a regexp that extracts my priority, which is the number 1 preceded by the "!". Extracting it is very easy: "!([1-4])". But now I want to extract the text, leaving the priority marker out. How can I do that? DETAIL: The !1 can be anywhere in the string, so this is also perfectly fine: "This is a string, !1 and I have a priority" Thanks! UPDATE: I'm using Scala
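
     The usual two-step approach is one match to pull the digit and one substitution to drop the marker from the text. Sketched here in Python for brevity; in Scala the same pattern can be applied with scala.util.matching.Regex via findFirstMatchIn and replaceAllIn.

     import re

     s = "This is a string, !1 and I have a priority"

     priority_match = re.search(r"!([1-4])", s)
     priority = priority_match.group(1) if priority_match else None

     # drop the "!n" marker (and surrounding spaces) to recover the plain text
     text = re.sub(r"\s*![1-4]\s*", " ", s).strip()

     print(priority)   # 1
     print(text)       # This is a string, and I have a priority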

    Read the article

  • Parsing a dynamic value with Lift-JSON

    - by Surya Suravarapu
    Let me explain this question with an example. If I have JSON like the following: {"person1":{"name": "Name One", "address": {"street": "Some Street","city": "Some City"}}, "person2":{"name": "Name Two", "address": {"street": "Some Other Street","city": "Some Other City"}}} [There is no restriction on the number of persons; the input JSON can have many more] I could extract this JSON into a Persons object by doing var persons = parse(res).extract[T] Here are the related case classes: case class Address(street: String, city: String) case class Person(name: String, address: Address, children: List[Child]) case class Persons(person1: Person, person2: Person) Question: The above scenario works perfectly fine. However, the keys in the key/value pairs need to be dynamic. So in the example JSON provided, person1 and person2 could be anything, and I need to read them dynamically. What's the best possible structure for the Persons class to account for that dynamic nature?

    Read the article

  • Extracting a Rails application into a plugin or engine

    - by Globalkeith
    I have a Rails 2.3 application which I would like to extract into a plugin or engine. The application has user authentication and basic CMS capabilities supported by the ancestry plugin. I want to extract the logic of the application into a plugin/engine so that I can use this code for future projects, with a different "skin" or "theme" if required. I'm not entirely sure I actually understand the difference between the plugin and engine concepts, so that would be a good first point. What is the best approach? Are there any good starting points, links, explanations, or examples that I should follow? Also, with the release of R3 to consider, is there anything that I should be aware of for that with regard to plugins etc.? I am going to start off by watching Ryan's http://railscasts.com/episodes/149-rails-engines but obviously that's over a year old now, so one of the challenges I'm faced with is finding the most up-to-date and relevant information on this subject. All tips and help gratefully received.

    Read the article

  • Static source code analysis with LLVM

    - by Phong
    I recently discovered the LLVM (low level virtual machine) project, and from what I have heard it can be used to perform static analysis on source code. I would like to know if it is possible to extract the different function calls made through function pointers (finding the caller function and the callee function) in a program. I could not find this kind of information on the website, so it would be really helpful if you could tell me whether such a library already exists in LLVM, or point me in the right direction on how to build it myself (existing source code, references, tutorials, examples...). EDIT: With my analysis I actually want to extract caller/callee function calls. In the case of a function pointer, I would like to return a set of possible callees. Both caller and callee must be defined in the source code (this does not include third-party functions in a library).

    Read the article

  • XQuery expression to return a link's text only if it contains a specific string

    - by Arvind
    I want to extract some links from an XML document (the links are in the same format as on HTML pages). Now, for example, one link is "http://xyz.com/start/tyu/a.html", another is "http://ert.com/tyu/b.html", while a third link is "http://asdf.com/ghjk/c.html". From the above 3 links (which I have via a for clause in a FLWOR expression), I want only the links which contain the string "tyu" to be selected. I thought of using substring for this, but substring requires start and end positions to be specified, whereas in my scenario I don't know at which position the desired string will occur. How do I do substring matching in such a scenario, i.e. where the exact position of the substring's occurrence is not known? I can use XQuery 1.0 for this purpose. Finally, I want to extract the link URL as well as the link text...

    Read the article

  • Which RDFa parser for Java supports currently used RDFa attributes?

    - by lennyks
    I am building an app in Java, using Jena, for semantic information scraping. I am looking for an RDFa parser that would allow me to correctly extract all the RDFa statements. Specifically, one that extracts info about the namespaces used and, presuming the RDFa tags in the page are correct, produces correct triples, ones that distinguish between object and data properties. I went through all the Java RDFa parsers listed on the site http://rdfa.info/wiki/Consume. They all struggle to extract any RDFa statements, and when they do not crash, the Jena RDFa parser shows plenty of errors and then dies a terrible death; the data is of little use, as it is incorrectly processed and generally mixed up. I am a newbie in this area, so please be gentle :) I was also thinking of using a library written in a different language, but then again I don't really know how to plug it into Java code. Any suggestions?

    Read the article
