Search Results

Search found 3176 results on 128 pages for 'parsing'.

Page 57/128

  • Get the package of a Java source file

    - by Oak
    My goal is to find the package (as a string) of a Java source file, given as plaintext and not already sorted into folders. I can't just locate the first instance of the keyword package in the file, because it may appear inside a comment. So I was thinking about two alternatives: Scan the file word by word, maintaining an "inside-a-comment" flag for the scanner. The first time the package keyword is encountered while not inside a comment, stop the scanning and report the result. Use a regex - this should be theoretically possible because block comments do not nest in Java, but I tried making such a regex and it turned out to be quite complicated - for me, at least. Another difference between the two approaches is that when scanning manually I can stop the scan when I am certain the package keyword can no longer appear, saving some time... and I'm not sure I can do something similar with regexes. On the other hand, the decision of when it can no longer appear is not necessarily simple, though I could use some heuristic for that. I would like to hear any input on this problem, and would welcome any help with the regex. My solution is written in Java as well.
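
    A minimal sketch of the regex route in Python (the question targets Java, but the pattern translates directly): strip block comments first, which works precisely because they do not nest, then strip line comments, then take the first package declaration. String literals are ignored here, which is generally safe for ordinary source files since strings do not legally precede the package declaration.

        import re

        def find_package(source: str):
            """Return the package of a Java source file, or None for the default package."""
            # Block comments cannot nest in Java, so a non-greedy regex removes them safely.
            text = re.sub(r"/\*.*?\*/", " ", source, flags=re.DOTALL)
            # Drop line comments as well.
            text = re.sub(r"//[^\n]*", " ", text)
            m = re.search(r"^\s*package\s+([\w.]+)\s*;", text, flags=re.MULTILINE)
            return m.group(1) if m else None

        print(find_package("/* package wrong; */ package com.example.app;"))  # com.example.app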

  • How to parse results from Google blog search?

    - by Jooj
    Hello! I'm trying to parse the number of results from a Google blog search. Could somebody please help me! http://blogsearch.google.com/blogsearch?hl=en&ie=UTF-8&q=a&btnG=Search+Blogs returns a complete page. On the right side you can see (Results 1 - 10 of about 2,504,830,546 for a. (0.05 seconds)). How could I get 2,504,830,546? Thanks. Regards.
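
    For what it's worth, since the count only appears as display text, the usual approach is a plain regex over the raw HTML. A hedged Python sketch; the "of about" phrasing is locale- and markup-dependent and may change (or the page may disappear) at Google's whim:

        import re
        import urllib.request

        url = ("http://blogsearch.google.com/blogsearch"
               "?hl=en&ie=UTF-8&q=a&btnG=Search+Blogs")
        html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")

        # The count sits in display text like "Results 1 - 10 of about 2,504,830,546 for a".
        m = re.search(r"of about ([\d,]+)", html)
        if m:
            print(int(m.group(1).replace(",", "")))  # -> 2504830546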

  • Get ffmpeg information in a friendly way

    - by JBernardo
    Every time I try to get some information about my video files with ffmpeg, it pukes a lot of useless information mixed with good things. I'm using ffmpeg -i name_of_the_video.mpg. Is there any possibility of getting that in a friendly way? I mean, JSON would be great (and even ugly XML is fine). For now, I've made my application parse the data with regexes, but there are lots of nasty corners that appear on some specific video files. I fixed all that I encountered, but there may be more. I wanted something like: { "Stream 0": { "type": "Video", "codec": "h264", "resolution": "720x480" }, "Stream 1": { "type": "Audio", "bitrate": "128 kbps", "channels": 2 } }
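
    Assuming a reasonably recent ffmpeg build, the bundled ffprobe tool can emit exactly this sort of structure directly, with no regex work. A sketch in Python:

        import json
        import subprocess

        def probe(path: str) -> dict:
            # ffprobe ships with ffmpeg and can print machine-readable JSON.
            out = subprocess.check_output([
                "ffprobe", "-v", "quiet",
                "-print_format", "json",
                "-show_format", "-show_streams",
                path,
            ])
            return json.loads(out)

        info = probe("name_of_the_video.mpg")
        for stream in info["streams"]:
            print(stream["codec_type"], stream.get("codec_name"))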

  • XSLT: I need to map XML with repeated element names, in order, into another XML with different element names

    - by Karuna
    As shown in the source XML below, each values/strings element value has to be mapped to a target element value. Could someone please help me out with how to create the XSLT to transform the source XML into the target XML? Please. Source XML: <PricingResultsV6> <subItems> <SubItem> <profiles> <ProfileValues> <values> <strings>800210</strings> <strings>THC</strings> <strings>10.0</strings> <strings>20.0</strings> <strings>30.0</strings> <strings>40.0</strings> <strings>550.0</strings> <strings>640.0</strings> </values> </ProfileValues> </profiles> </SubItem> </subItems> </PricingResultsV6> Target XML: <CalculationOutput> <PolicyNumber>800210</PolicyNumber> <CommissionFactorMultiplier>THC</CommissionFactorMultiplier> <PremiumValue>10.0</PremiumValue> <SalesmanCommissionValue>20.0</SalesmanCommissionValue> <ManagerCommissionValue>30.0</ManagerCommissionValue> <GL_COR>550.0</GL_COR> <GL_OPO>640.0</GL_OPO> </CalculationOutput>
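
    The question asks for XSLT, but at its core the transform is a positional mapping: the Nth <strings> value feeds a fixed target element. Here is that mapping sketched in Python with ElementTree; note the sample target skips the sixth source value (40.0), so the indices are spelled out explicitly and should be adjusted to the real spec:

        import xml.etree.ElementTree as ET

        # (target element name, zero-based index into the <strings> values)
        MAPPING = [
            ("PolicyNumber", 0), ("CommissionFactorMultiplier", 1),
            ("PremiumValue", 2), ("SalesmanCommissionValue", 3),
            ("ManagerCommissionValue", 4), ("GL_COR", 6), ("GL_OPO", 7),
        ]

        source = ET.parse("pricing_results.xml")
        values = [s.text for s in source.findall(".//values/strings")]

        output = ET.Element("CalculationOutput")
        for name, index in MAPPING:
            ET.SubElement(output, name).text = values[index]

        print(ET.tostring(output, encoding="unicode"))

    The XSLT equivalent selects by position the same way, e.g. strings[1] for PolicyNumber, since XPath indices are one-based.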

  • Weird CSS behavior... removing a 1px border makes <DIV> move about 20px

    - by John
    I have the following: CSS #pageBody { height: 500px; padding:0; margin:0; /*border: 1px solid #00ff00;*/ } #pageContent { height:460px; margin-left:35px; margin-right:35px; margin-top:30px; margin-bottom:30px; padding:0px 0 0 0; } HTML <div id="pageBody"> <div id="pageContent"> <p> blah blah blah </p> </div> </div> If I uncomment the border line in pageBody, everything fits sweetly... I had the border on to verify things were as expected. But with the border removed, pageBody drops down the page about 20px, while pageContent does not appear to move at all. Now this is not the real page, but a subset. If nothing's obvious I can attempt to generate a minimal working sample, but I thought there might be a quick, easy answer first. I see the same exact problem in Chrome and IE8, suggesting it's me, not the browser. Any tips on where to look? I wondered whether the 1px border was some tipping point making the contents of a div just too big, but changing the #pageContent height to e.g. 400 makes no difference, other than clipping the bottom off that div.

  • What language should I use to parse a lot of text?

    - by BicMan
    My company's proprietary software generates a log file that is much easier to use if it is parsed. The log parser we all use was written by another employee as a side project, and it has horrible performance. These log files can grow to tens of megabytes very quickly, and the parser we currently use has issues if a log file is bigger than 1 megabyte. So, I want to write a program that can parse this massive amount of text in the shortest amount of time possible. We use Windows exclusively, so running on Windows is a must. Our current implementation runs on a local web server, and I'm convinced that running it as an application would have to be faster. All suggestions will be helpful. Thanks.
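
    Whatever language ends up being chosen, the usual culprit at these sizes is slurping the whole file into memory instead of streaming it. A minimal Python sketch of the streaming shape; the filter pattern is hypothetical and stands in for the real parsing rules:

        import re

        PATTERN = re.compile(r"ERROR|WARN")  # hypothetical; substitute the real log format

        def parse_log(path: str):
            # Stream line by line: memory stays flat no matter how large the file grows.
            with open(path, encoding="utf-8", errors="replace") as f:
                for line in f:
                    if PATTERN.search(line):
                        yield line.rstrip("\n")

        for hit in parse_log("app.log"):
            print(hit)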

  • How to determine which element(s) are visible in an overflowed <div>

    - by jjross
    Basically, I'm trying to implement a system that behaves similarly to the reading pane built into the Google Reader interface. If you haven't seen it, Google Reader presents each article in a separate box, and as you scroll it highlights the current box (and marks the article as read). In addition to this, you can move forward or backward in the article list by clicking the previous and next buttons in the UI. I've basically figured out how to do most of the functionality. However, I'm not sure how I can determine which of my divs is currently visible in the scrollable pane. I have a div that is set to overflow:auto. Inside of this div there are other divs, each one containing a piece of content. I've used the following jQuery plugin to make everything scroll based on a click of the "next" or "previous" button, and it works like a charm: http://demos.flesler.com/jquery/serialScroll/ But I can't tell which div has "focus" in the scrollable pane. I'd like to be able to do this for two reasons: I'd like to highlight the item that the user is currently reading (similar to Google Reader). I need to do this regardless of whether they used the plugin to get there or used the browser's scroll bar. I need to be able to tell the plugin which item has focus so that my call to scroll to the "next" pane actually uses the currently viewed pane (and not just the previous pane that the plugin scrolled from). I've tried doing some searching but I can't seem to figure out a way to do this. I found lots of ways to scroll to a particular item, but I can't find a way to determine which element is visible in an overflowed div. If I can determine which items are visible, I can (probably) figure out the rest. I'm using jQuery, if that helps. Thanks!
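
    The question is about the DOM, but the visibility test itself is just interval overlap: an item is at least partly visible when its top sits above the bottom of the viewport and its bottom sits below the top. Sketched as plain arithmetic in Python; in jQuery the inputs would come from scrollTop(), height(), and each item's position().top:

        def visible_indices(scroll_top, pane_height, item_tops, item_heights):
            """Indices of items overlapping the viewport [scroll_top, scroll_top + pane_height)."""
            bottom = scroll_top + pane_height
            return [i for i, (top, h) in enumerate(zip(item_tops, item_heights))
                    if top < bottom and top + h > scroll_top]

        # Three 200px items in a 240px pane scrolled 150px down: items 0 and 1 overlap.
        print(visible_indices(150, 240, [0, 200, 400], [200, 200, 200]))  # -> [0, 1]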

  • Extracting XML data from a string var in Android

    - by ram
    Hi, I have posted some values using HttpPost and converted the response into a string using HttpEntity entity = response.getEntity(); String rrr = EntityUtils.toString(entity); rrr contains some XML tags: <root> <mytag>its my tag</mytag> </root> Now I have to extract the string "its my tag". I have tried to do it with a SAX parser but it gives a null output. Please help me solve this problem.
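
    The question is Android/Java, where XmlPullParser or a DOM parse of the response string would do the job; purely to illustrate how little is needed, here is the same extraction with Python's ElementTree:

        import xml.etree.ElementTree as ET

        rrr = "<root> <mytag>its my tag</mytag> </root>"
        root = ET.fromstring(rrr)
        print(root.findtext("mytag"))  # -> "its my tag"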

  • Custom Floating Point Representation

    - by Abion47
    I'm trying to write a parser that will read a particular file type, and I need to map the different data types to C# equivalents. Most of them aren't that difficult, but I'm having trouble wrapping my head around what "int16 with a bias of 14" means. I've deduced that it's some kind of floating point type, so my best bet would be to write a converter that would map it to a float, double, or decimal type. I'm not sure where to take it from here, though.
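
    "int16 with a bias of 14" most plausibly describes a fixed-point format with 14 fractional bits, i.e. value = raw / 2^14 - but that reading is an assumption, as is the little-endian byte order below, so check both against known sample values from the file. A sketch in Python:

        import struct

        def int16_bias14_to_float(raw_bytes: bytes) -> float:
            # Assuming "bias of 14" means 14 fractional bits (fixed point):
            # read a signed 16-bit integer, then scale by 2**-14.
            (raw,) = struct.unpack("<h", raw_bytes)  # "<h" = little-endian signed int16
            return raw / (1 << 14)

        print(int16_bias14_to_float(b"\x00\x40"))  # 0x4000 -> 1.0 under this assumption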

  • Help parsing a long (3.5 million line) text file, line by line, and storing data; need a strategy

    - by Jarrod
    This is a question about solving a particular problem I am struggling with. I am parsing a long list of text data, line by line, for a business app in PHP (a cron script on the CLI). The file follows the format: HD: Some text here {text here too} DC: A description here DC: the description continues here DC: and it ends here. DT: 2012-08-01 HD: Next header here {supplemental text} ... this repeats over and over for a few hundred megs. I have to read each line, parse out the HD: line and grab the text on this line. I then compare this text against data stored in a database. When a match is found, I want to then record the following DC: lines that succeed the matched HD:. Pseudo code: while ( the_file_pointer_isnt_end_of_file) { line = getCurrentLineFromFile title = parseTitleFrom(line) matched = searchForMatchInDB(line) if ( matched ) { recordTheDCLines // <- Best way to do this? } } My problem is that because I am reading line by line, what is the best way to trigger the script to start saving DC: lines, and then save them to the database once they are finished? I have a vague idea, but have yet to properly implement it. I would love to hear the community's ideas/suggestions! Thank you.
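
    The question is PHP, but the shape of the answer is a small state machine: remember whether the last HD: line matched, buffer DC: lines while it did, and flush the buffer whenever the next HD: line (or end of file) arrives. A sketch in Python, with is_match standing in for the database lookup:

        def parse(path, is_match):
            """Stream the file; yield (title, dc_lines) for each matching HD: record."""
            current_title, dc_lines = None, []
            with open(path, encoding="utf-8") as f:
                for line in f:
                    if line.startswith("HD:"):
                        if current_title is not None:
                            yield current_title, dc_lines  # flush the previous record
                        title = line[3:].strip()
                        current_title = title if is_match(title) else None
                        dc_lines = []
                    elif line.startswith("DC:") and current_title is not None:
                        dc_lines.append(line[3:].strip())
                if current_title is not None:
                    yield current_title, dc_lines  # flush the final record

    The PHP version is structurally identical: hold the buffer in an array and insert it into the database when the next HD: line or EOF is reached.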

  • Why does dojo parsing time depend on CSS and image availability?

    - by Kniganapolke
    I have been profiling JavaScript on a page that uses dojo widgets. I don't use explicit parsing - the parser runs on page load. What I noticed is that if I clear the browser cache before refreshing the page, dojo parsing takes much more time than if all the files are already cached. Note that we build all the required dojo modules into a layer (a single file), so we don't lazy-load any JS files. I wonder whether the dojo parsing process depends on image and CSS resources; as far as I know, it only instantiates widgets and injects DOM nodes. Do you have any ideas why the dojo parser runs longer (2-3 times longer in my case) when the cache is cleared?

  • Is there a function that can read a PHP file post-parsing?

    - by Rob
    I've got a PHP file echoing hashes from a MySQL database. This is necessary for a remote program I'm using, but at the same time I need my other PHP script to open it and check it for specified strings post-parsing. If it checks for the string pre-parsing, it'll just get the MySQL query rather than the strings to look for. I'm not sure if any functions do this. Does fopen() read the file prior to parsing? Or file_get_contents()? If so, is there a function that'll read the file after the PHP and MySQL code runs? The file with the hashes query and echo is in the same directory as the PHP file reading it, if that makes a difference. Perhaps fopen() reads it post-parse and I've done something wrong, but at first I was storing the hashes directly in the file, and it was working fine. After I changed it to echo the contents of the MySQL table, it bugged out.
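
    One way to frame the distinction: reading the file from disk always returns the raw, unexecuted PHP source, while requesting it through the web server runs it first, so what comes back is the echoed output. In PHP, file_get_contents() with an http:// URL (allow_url_fopen permitting) behaves the second way, whereas a filesystem path behaves the first way. Sketched in Python with a hypothetical local URL:

        import urllib.request

        # From disk: the raw PHP source, query and all.
        source = open("hashes.php", encoding="utf-8").read()

        # Over HTTP: the server executes the script, so this is the echoed
        # hash list instead. (URL is hypothetical.)
        rendered = urllib.request.urlopen("http://localhost/hashes.php").read().decode("utf-8")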

  • Any good libraries for parsing JSON in Classic ASP?

    - by Mark Biek
    I've been able to find a zillion libraries for generating JSON in Classic ASP (VBScript) but I haven't been able to find ANY for parsing. I want something that I can pass a JSON string and get back a VBScript object of some sort (Array, Scripting.Dictionary, etc.). Can anyone recommend a library for parsing JSON in Classic ASP?

  • jquery; django; parsing httpresponse

    - by Grinart
    hi_all() I have a problem with parsing an HTTP response. I try to send some values to the client: return HttpResponse(first=True,second=True) When parsing: $.post('get_values',"",function(data){ alert(data['first']); //The alert isn't shown!!! }); What is the right way to extract values from the HttpResponse? Maybe I made a mistake when creating my response...
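
    HttpResponse() has no first/second keyword arguments, which is likely the root problem. The usual pattern is to serialize a dict to JSON on the Django side and let jQuery parse it; a minimal sketch:

        import json
        from django.http import HttpResponse

        def get_values(request):
            payload = {"first": True, "second": True}
            # Serialize explicitly and label the content type so the client can parse it.
            return HttpResponse(json.dumps(payload), content_type="application/json")

    On the client, passing 'json' as the fourth argument to $.post (or using $.getJSON) tells jQuery to parse the body, after which data['first'] works as expected.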

  • What are the differences between mapping, binding and parsing?

    - by sfrj
    I am starting to learn web services in Java EE 6. I did web development before, but never anything related to web services. It's all new to me, and the books and tutorials I find on the web are too technical. I started learning about .xsd schemas and also .xml. In that topic I feel confident; I understand what schemas are used for and what validation means. Now my next step is learning about JAXB (Java Architecture for XML Binding). I read some about it and also did some practice in my IDE. But I have lots of basic doubts that keep me stuck, and I cannot feel confident to continue to the next topic. I'd appreciate it a lot if someone could clear up my doubts: What does mapping mean, and what is a mapping tool? What does binding mean, and what is a binding tool? What does parsing mean, and what is a parsing tool? How is JAXB related to mapping, binding and parsing? I am looking for a good answer written by you, not just a copy-paste from Google (I've already been online a few hours and only gotten more confused).

  • How relevant is UTF-7 when it comes to parsing emails?

    - by J. Pablo Fernández
    I recently implemented incoming emails for an application and boy, did I open the gates of hell. Since then, every other day an email arrives that makes the app fail in a different way. One of those things is emails encoded as UTF-7. Most emails come as ASCII, some in the Latin encodings, or thankfully, UTF-8. Hotmail error messages (like "email address doesn't exist" or "quota exceeded") seem to come as UTF-7. Unfortunately, UTF-7 is not an encoding Ruby understands: > "hello world".encode("utf-8", "utf-7") Encoding::ConverterNotFoundError: code converter not found (UTF-7 to UTF-8) > Encoding::UTF_7 => #<Encoding:UTF-7 (dummy)> My application doesn't crash - it actually handles the email quite well - but it does send me a notification about the potential error. I spent some time googling and I can't find anyone who has implemented the conversion, at least not as a Ruby 1.9.3 Encoding::Converter. So, my question is: since I never got an email with actual content, from an actual person, in UTF-7, how relevant is that encoding? Can I safely ignore it?
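
    UTF-7 mostly turns up in exactly this corner: machine-generated mail written with the most conservative 7-bit-safe encoding. Ruby's codec is a dummy, as shown above, but Python ships a working one, which makes it easy to see what the encoding actually looks like on the wire (the sample text is illustrative):

        # UTF-7 wraps non-ASCII characters in modified-base64 runs between '+' and '-'.
        raw = b"Hola +AKE-qu+AOk- tal"
        print(raw.decode("utf-7"))  # -> "Hola ¡qué tal"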

  • How should I deal with user agent parsing in logs?

    - by Mr. Jefferson
    My web app project includes logging functionality so we can see where visitors are coming from (referrer URL), what the popular user agents are, what pages are most popular, etc. The log is stored in SQL Server, and when I query the user agents I use a large (almost 100 lines) and growing CASE statement to separate the user agents using string matching (i.e. if the user agent contains the string "Firefox/9" then it's Firefox 9). Is there a better way to do this, so I don't have to continually add to that CASE statement to deal with new browser releases? Also, how should I deal with less common, weird/unknown user agents? I've seen the following in the logs and been unable to find good information online about what they are: WordPress/3.3.1; http://www.facecolony.org Mozilla/4.0 ( http://www.hairirons.org redips; <a href=http://hairirons.org/>chi hair iron</a>) I'd guess they're bots/crawlers, but the sites they point to don't appear to reference web crawlers (or even be available sometimes). I've seen other user agents that aren't familiar to me, but I know they're bots because they include "bot" or "spider" or something similar in them.
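
    One alternative to a growing CASE statement is to pull the matching out of SQL into an ordered, data-driven rule table (or an off-the-shelf user-agent library), so a new browser release means a new row of data rather than new code. A sketch of the rule-table idea in Python; the rules shown are illustrative, not exhaustive:

        import re

        # Ordered (pattern, label) rules; first match wins, so bots are checked early.
        BROWSER_RULES = [
            (re.compile(r"\b(bot|spider|crawl)", re.I), "Bot/crawler"),
            (re.compile(r"Firefox/(\d+)"), "Firefox {}"),
            (re.compile(r"Chrome/(\d+)"), "Chrome {}"),
            (re.compile(r"MSIE (\d+)"), "Internet Explorer {}"),
        ]

        def classify(user_agent: str) -> str:
            for pattern, label in BROWSER_RULES:
                m = pattern.search(user_agent)
                if m:
                    return label.format(*m.groups())
            return "Unknown"

        print(classify("Mozilla/5.0 ... Firefox/9.0.1"))  # -> "Firefox 9"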

  • Does IE have more strict Javascript parsing than Chrome?

    - by Clay Shannon
    This is not meant to start a religio-technical browser war - I still prefer Chrome, at least for now - but: Because of a perhaps Chrome-related problem with my web page (see https://code.google.com/p/chromium/issues/detail?can=2&start=0&num=100&q=&colspec=ID%20Pri%20M%20Iteration%20ReleaseBlock%20Cr%20Status%20Owner%20Summary%20OS%20Modified&groupby=&sort=&id=161473), I temporarily switched to IE (10) to see if it would also view the time value as invalid. However, I didn't even get to that point - IE stopped me in my tracks before I could get there; and I found that IE was right - it is more particular/precise in validating my code. For example, I got this from IE: SCRIPT5007: The value of the property '$' is null or undefined, not a Function object ...which was referring to this: <script src="/CommonLogin/Scripts/jquery-1.9.1.min.js" type="text/javascript"></script> <script type="text/javascript"> // body sometimes becomes white???? with jquery 1.6.1 $("body").css("background-color", "#405DA7"); This line is highlighted as the culprit: $("body").css("background-color", "#405DA7"); jQuery is referenced right above it - so why did it consider "$" to be undefined, especially when Chrome had no problem with it? Ah! I looked at that location (/CommonLogin/Scripts/) and saw that, sure enough, the version of jQuery there was actually jquery-1.6.2.min.js. I added the updated jQuery file (1.9.1) and it got past this. So now the question is: why does Chrome ignore this? Does it download the referenced version from its own CDN if it can't find it in the place you specify? IE did flag other errors after that, too; so I'm thinking perhaps IE is better at catching lurking problems than, at least, Chrome is. Haven't tested Firefox in this regard yet.

  • Parsing glGetShaderInfoLog() to get error info. Is this reliable, or is there a better way?

    - by m4ttbush
    I want to get a list of errors and their line numbers so I can display the error information differently from how it's formatted in the error string, and also show the line in error. It looks easy enough to just parse the result of glGetShaderInfoLog(): look for "ERROR:", then read the next number up to the colon, then the next, and then the error description up to the next newline. But the OpenGL docs say "Application developers should not expect different OpenGL implementations to produce identical information logs." That makes me worry that my code may behave incorrectly on different systems. I don't need the logs to be identical, I just need them to follow the same format. So is there a better way to get a list of errors with the line numbers separate? Is it safe to assume that they'll always follow the "ERROR: 0:123:" format, or is there simply no reliable way to do this? Thanks!
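
    A defensive middle ground is to parse the common "ERROR: <file>:<line>: <message>" shape but keep the raw log as a fallback for drivers that format differently. A sketch of that parse in Python; the format is an observed convention, not something the spec guarantees:

        import re

        LOG_LINE = re.compile(r"ERROR:\s*(\d+):(\d+):\s*(.+)")

        def parse_shader_log(log: str):
            errors = []
            for line in log.splitlines():
                m = LOG_LINE.match(line.strip())
                if m:
                    file_id, line_no, message = m.groups()
                    errors.append((int(file_id), int(line_no), message))
            return errors  # empty list -> fall back to showing the raw log

        print(parse_shader_log("ERROR: 0:123: 'foo' : undeclared identifier"))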

  • Does putting types/functions inside a namespace make the compiler's parsing work easier?

    - by iammilind
    Does retaining names inside a namespace make the compiler's work less stressful? For example: // test.cpp #include</*iostream,vector,string,map*/> class vec { /* ... */ }; Take 2 scenarios of main(): // scenario-1 using namespace std; // comment this line for scenario-2 int main () { vec obj; } In scenario-1, where using namespace std; is present, several type names from namespace std come into the global scope. Thus the compiler has to check vec against them to see whether any of those types collides with it, and generate an error if it does. In scenario-2, where there is no using namespace, the compiler just has to check vec against std, because that's the only other symbol in global scope. I am interested to know: shouldn't that make the compiler a little faster?

  • How to obtain the root of a tree without parsing the entire file?

    - by Matt.
    I'm making an XML parser to parse XML reports from different tools, and each tool generates a different report with different tags. For example: Arachni generates an XML report with <arachni_report></arachni_report> as the tree root tag. nmap generates an XML report with <nmaprun></nmaprun> as the tree root tag. I'm trying not to parse the entire file unless it's a valid report from one of the tools I want. The first thing I thought of was to use ElementTree, parse the entire XML file (supposing it contains valid XML), and then check, based on the tree root, whether the report belongs to Arachni or nmap. I'm currently using cElementTree, and as far as I know getroot() is not an option here, but my goal is to make this parser operate on recognized files only, without parsing unnecessary files. By the way, I'm still learning about XML parsing; thanks in advance.
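
    With iterparse (available in cElementTree as well) this is straightforward: the first "start" event fires as soon as the opening root tag has been read, so the rest of the file never has to be parsed. A sketch:

        import xml.etree.ElementTree as ET

        def root_tag(path: str):
            # Streamed parse: return at the very first start event, i.e. the root tag.
            for event, elem in ET.iterparse(path, events=("start",)):
                return elem.tag

        tag = root_tag("report.xml")
        if tag == "nmaprun":
            print("nmap report")
        elif tag == "arachni_report":
            print("Arachni report")
        else:
            print("unrecognized report:", tag)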

  • Advantage to parsing Excel Spreadsheet data vs. CSV?

    - by john
    I have tabulated data in an Excel spreadsheet (the file size will likely never be larger than 1 MB). I want to use PHP to parse the data and insert it into a MySQL database. Is there any advantage to keeping the file as an .xls/.xlsx and parsing it using a PHP Excel parsing library? If so, what are some good libraries to use? Obviously, I can save the .xls/.xlsx as a CSV and handle the file that way. Thanks!

  • When parsing XML with jQuery, how can we pass the current node from within each() to another function?

    - by Johusa
    Say we have an XML document with many book nodes... When parsing XML with jQuery, how can I pass the current node from each() iteration to another function that will do some stuff until something is reached, and then go back to the previous function (passing along the current node from this function back to the first function)? Here is something more descriptive (this is just an example off the top of my head, not accurate): function MyParser(x1,x2,dom) { // if I am called by anotherFunction(thisNode), proceed from the passed node dom.find('book').each(function() { var Letter = thisNode.find(author).charAt(0); if(x1 == Letter) { // print everything till the next letter (x2) anotherFunction(thisNode) } } } function anotherFunction(x2,thisNode) { //continue parsing here until you reached x2 //when x2 is reached, return to previous function passing again the current node }
