Search Results

Search found 4222 results on 169 pages for 'dtd parsing'.

Page 150/169 | < Previous Page | 146 147 148 149 150 151 152 153 154 155 156 157  | Next Page >

  • handle large Parcelable ArrayList in Android

    - by Gal Ben-Haim
    I'm developing an Android app that is a client to a JSON web service API. I have classes of resource objects (some are nested), and I pass results from an IntentService that accesses the web service, using the Parcelable interface for all the resource classes. The web service returns arrays of results that can potentially be large (because of the nesting; for example, a post object also contains a comments array, and each comment contains a user object). Currently I either insert the results into a SQLite database or display them in a ListView (my relevant methods accept ArrayList&lt;resourceClass&gt; as arguments; some data needs to be stored persistently and some does not). Since I don't know what size of lists I can handle this way without hitting the memory limits, is this a good practice? Is it a better idea to save the parsed JSON to a local file immediately and pass the file path to the ResultReceiver, then either insert into the database from that file or display the data? Is there a better way to handle this? By the way, I'm parsing the JSON as a stream with Gson's reader, so there shouldn't be memory issues at that stage.

    Read the article

  • iphone file download not working

    - by Anonymous
    Hi, in my app I'm first connecting to a web service, which in return sends a URL for a file. I use the URL to download the file and then display it on a new view. I get the correct URL but am not able to download the file from that location. I have another test app which downloads a file from the same location and it works like a charm. Following is my code for the web service / file download. This is a snippet of the code where I'm parsing the web service XML and then passing the result to NSData for the file download. Any suggestions where I am going wrong? I'm referring to the following tutorials: Web Service, PDF Viewer.

        if ([elementName isEqualToString:@"PRHPdfResultsResult"]) {
            NSLog(soapResults);
            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Report downloaded from:"
                                                            message:soapResults
                                                           delegate:self
                                                  cancelButtonTitle:@"OK"
                                                  otherButtonTitles:nil];
            NSData *pdfData = [[NSData alloc] initWithContentsOfURL:[NSURL URLWithString:soapResults]];
            // Store the data locally as a PDF file
            NSString *resourceDocPath = [[NSString alloc] initWithString:[[[[NSBundle mainBundle] resourcePath]
                stringByDeletingLastPathComponent] stringByAppendingPathComponent:@"Documents"]];
            NSString *filePath = [resourceDocPath stringByAppendingPathComponent:@"myPDF.pdf"];
            [pdfData writeToFile:filePath atomically:YES];
            [alert show];
            [alert release];
            [soapResults setString:@""];
            elementFound = FALSE;
        }

    Read the article

  • Manually Trigger or Prevent Javascript Lazy Loading in Website from Bookmarklet

    - by stwhite
    One of the problems with using a bookmarklet for grabbing images on a page is that if a website uses lazy loading, the bookmarklet won't detect the image, because the img tag holds a placeholder (e.g. "grey.gif") rather than the actual source of the image; JavaScript run on page load replaces these URLs. I'm looking for a way to retrieve the images that are not being displayed, by either triggering or preventing lazy loading from running. This bookmarklet isn't limited to one specific domain. Some ideas I've had so far:
    1. Fetch the page HTML again from the domain if no images are found the first time around. Problem: this then requires parsing the raw HTML. Problem: with lazy loading, a few images will always show, just none below the fold.
    2. Scroll the page to initiate lazy loading when the bookmarklet is clicked, then scroll back to the top.
    3. Trigger lazy loading from inside the bookmarklet using script. Lazy Loader adds the "original" attribute, so I could potentially check whether that attribute exists with a value. Problem: ???

    Read the article

  • Iterating through Event Log Entry Collection, IndexOutOfBoundsException

    - by fjdumont
    Hello, in a service application I am iterating through the Windows Application event log in order to parse events and react depending on the entry message. In the case that the event log is full (Windows usually makes sure there is enough space by deleting old entries; this is configurable in the eventvwr.exe settings), the service always runs into an IndexOutOfBoundsException while iterating through the EventLog.Entries collection. No matter how I iterate (for loop, using the collection's enumerator, copying the collection into an array, ...), I can't seem to get rid of this 'bug'. Currently I keep the service running by ensuring the log is never full: I regularly delete the last few items by parsing the event log file and removing the last few nodes (don't beat me up, I couldn't find a better alternative...). How can I iterate through the collection without trying to access already-deleted entries? Is there perhaps a more elegant method? I am only trying to access the logs written during the last x seconds (even LINQ failed to select those when the log is full - same exception); could this help? Thanks for any advice and hints, Frank. Edit: I forgot to mention that my assumption is the loops are accessing entries which are being deleted by Windows during iteration. Basically that is why I tried to clone the collection. Is there perhaps a way to lock the collection for a small amount of time, just for my application?

    Read the article

  • What good open source programs exist for fuzzing popular image file types?

    - by JohnnySoftware
    I am looking for a free, open source, portable fuzzing tool for popular image file types, written in Java, Python, or Jython. Ideally, it would accept specifications for the fuzzable fields using some kind of declarative constraints; a non-procedural grammar for specifying constraints is greatly preferred. Otherwise, I might as well write them all in Python or whatever, just specifying ranges of valid values or expressions for them. Ideally, it would support some kind of generative programming to export the fuzzer into various programming languages, for cases where more customization is required. If it supported a direct-manipulation GUI for controlling parameter values and ranges, that would be nice too. The file formats that should be supported are GIF, JPEG, and PNG. So basically, it should be a toolkit consisting of a ready-to-run utility and a framework or library, and it should be capable of generating the fuzzed files directly as well as from the programs it generates. It needs to be simple, so that test images can be created quickly, and it should have a batch capability for creating a series of images; creating just one at a time would be too painful. I do not want a hacking tool, just a QA tool. Basically, I just want to address concerns that it is taking too long to get commonplace image rendering/parsing libraries stable and trustworthy.
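    In the absence of a ready-made tool, a minimal sketch of the batch, mutation-based approach might look like the following (plain Python, no external libraries; the seed file names and output directory are placeholders, not part of the question):

        import os
        import random

        def fuzz_image(seed_path, out_path, flips=10):
            # Read a valid seed image and flip a few random bytes,
            # skipping the first 8 bytes so the file signature stays intact.
            data = bytearray(open(seed_path, "rb").read())
            for _ in range(flips):
                pos = random.randrange(8, len(data))
                data[pos] = random.randrange(256)
            with open(out_path, "wb") as out:
                out.write(data)

        # Batch mode: generate 100 fuzzed variants per seed image.
        os.makedirs("fuzzed", exist_ok=True)
        for seed in ("seed.gif", "seed.jpg", "seed.png"):
            base, ext = os.path.splitext(seed)
            for i in range(100):
                fuzz_image(seed, f"fuzzed/{base}_{i:03d}{ext}")

    This is dumb byte mutation rather than grammar-driven generation, so it does not satisfy the declarative-constraints requirement, but it does cover the batch use case.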

    Read the article

  • If I use Unicode on a ISO-8859-1 site, how will that be interpreted by a browser?

    - by grg-n-sox
    So I have a site that uses ISO-8859-1 encoding and I can't change that. I want to be sure that the content I enter into the web app on the site gets parsed correctly. The parser works on a character-by-character basis. I also cannot change the parser; I am just writing files for it to handle. The content in the file that I am telling the app to display after parsing contains Unicode characters (or at least I assume so, even if they were produced by Windows Alt codes mapped to CP437). Using entities is not an option because of the character-by-character operation of the parser. The only characters the parser escapes on output are markup-sensitive ones like the ampersand, less-than, and greater-than symbols. I would just go ahead and put this through to see what it looks like, but output can only be seen on a publish, which has to spend a couple of days getting approved and such, and that would be asking too much for just a test case. So, long story short: if I tell the site to output ?ÇÑ¥?? on a page whose meta tag states it uses ISO-8859-1, will a browser auto-detect the Unicode and display it, or will it literally interpret the bytes as ISO-8859-1 and show a different set of characters?
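    The difference can be simulated offline before the real publish. A small sketch of that check (the sample string is illustrative, not taken from the actual file):

        # Pretend the file contains UTF-8 bytes but the page declares ISO-8859-1:
        # a browser that honors the declared charset decodes the same bytes differently.
        text = "ÇÑ¥"
        utf8_bytes = text.encode("utf-8")

        as_utf8   = utf8_bytes.decode("utf-8")       # 'ÇÑ¥'  (what was meant)
        as_latin1 = utf8_bytes.decode("iso-8859-1")  # 'Ã\x87Ã\x91Â¥'  (mojibake if the declared charset is applied)

        print(repr(as_utf8), repr(as_latin1))

    Whether a real browser sniffs the encoding or trusts the meta tag varies by browser and content, so the mojibake case is the one worth planning for.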

    Read the article

  • An algo for generating code callgraphs

    - by Shrey
    I am working on a project which requires generating some metrics for a piece of code (it can be C/C++/Java/Python). One of the metrics is a call graph created after parsing the entered code (the programs are expected to be small - probably under 1,000 lines). As of now, I am looking for a way to create a program (it can be C/Python) which takes a file (C/C++/Python/Java) as input and produces a textual output containing the approximate calling sequence as well as the tokens in the code file. I have looked at some other tools which do the same thing - like splint, pylint, codeviz etc. So I have two ways of solving my problem: (1) read and understand the algorithms these tools use (tokenization, graph generation, etc.), or (2) start from a basic algorithm (something like very high-level steps) and then sit down and build each piece the way I want it. I know reinventing the wheel is not a good idea, but I would still like to give option (2) a shot. The only issue is that currently I am drawing a blank. My question: does anyone have any know-how about how to create code call graphs? Any hints as to what I should do? Any top-level steps I can follow? Thanks a lot.
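    For the Python case specifically, a rough sketch of the parse-and-collect step using the standard ast module (the function name and sample file are placeholders) could look like this:

        import ast

        def call_edges(source_path):
            # Map each function definition to the names it appears to call.
            tree = ast.parse(open(source_path).read())
            edges = []
            for node in ast.walk(tree):
                if isinstance(node, ast.FunctionDef):
                    for sub in ast.walk(node):
                        if isinstance(sub, ast.Call) and isinstance(sub.func, ast.Name):
                            edges.append((node.name, sub.func.id))
            return edges

        for caller, callee in call_edges("example.py"):
            print(caller, "->", callee)

    This is purely static and misses method calls, aliases, and dynamic dispatch, so it only approximates the calling sequence - which matches the "approximate" requirement in the question.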

    Read the article

  • python on apache - getting 404

    - by Kirby
    I edited this question after I found a solution... I need to understand why the solution worked instead of my method. This is likely to be a silly question. I tried searching other related questions... but to no avail. I am running Apache/2.2.11 (Ubuntu) DAV/2 SVN/1.5.4 PHP/5.2.6-3ubuntu4.5 with Suhosin-Patch mod_python/3.3.1 Python/2.6.2. I have a script called test.py:

        #! /usr/bin/python
        print "Content-Type: text/html"    # HTML is following
        print                              # blank line, end of headers
        print "hello world"

    Running it as an executable works:

        /var/www$ ./test.py
        Content-Type: text/html

        hello world

    But when I open http://localhost/test.py I get a 404 error. What am I missing? I used this resource to enable python parsing on apache: http://ubuntuforums.org/showthread.php?t=91101. From that same thread, the following code worked... why?

        #!/usr/bin/python
        import sys
        import time

        def index(req):
            # Following line causes errors to be sent to the browser
            # rather than to the log file (great for debug!)
            sys.stderr = sys.stdout
            #print "Content-type: text/html\n"
            #print """
            blah1 = """<html>
        <head><title>A page from Python</title></head>
        <body>
        <h4>This page is generated by a Python script!</h4>
        The current date and time is """
            now = time.gmtime()
            displaytime = time.strftime("%A %d %B %Y, %X", now)
            #print displaytime,
            blah1 += displaytime
            #print """
            blah1 += """
        <hr>
        Well House Consultants demonstration
        </body>
        </html>
        """
            return blah1

    Read the article

  • How does SQLite on Android handle long strings?

    - by Levara
    I'm wondering how Android's implementation of SQLite handles long strings. Reading the online SQLite documentation, it says that strings in SQLite are limited to 1 million characters; my strings are definitely smaller. I'm creating a simple RSS application, and after parsing an HTML document and extracting text, I'm having a problem saving it to the database. I have two tables in the database, feeds and articles. RSS feeds are correctly saved to and retrieved from the feeds table, but when saving to the articles table, logcat says that it cannot save the extracted text to its column. I don't know if other columns are making problems too; there is no mention of them in logcat. Since the text comes from an article on the web, I'm wondering: do characters like (", ', ;) create problems? Does Android escape them automatically, or do I have to do that? I'm using a technique for inserting similar to the one in the Notepad tutorial:

        public long insertArticle(long feedid, String title, String link, String description,
                                  String h1, String h2, String h3, String p, String image, long date) {
            ContentValues initialValues = new ContentValues();
            initialValues.put(KEY_FEEDID, feedid);
            initialValues.put(KEY_TITLE, title);
            initialValues.put(KEY_LINK, link);
            initialValues.put(KEY_DESCRIPTION, description);
            initialValues.put(KEY_H1, h1);
            initialValues.put(KEY_H2, h2);
            initialValues.put(KEY_H3, h3);
            initialValues.put(KEY_P, p);
            initialValues.put(KEY_IMAGE, image);
            initialValues.put(KEY_DATE, date);
            return mDb.insert(DATABASE_TABLE_ARTICLES, null, initialValues);
        }

    Column p is for the extracted text; h1, h2 and h3 are for headers from the page. Logcat reports only column p to be the problem. The table is created with the following statement:

        private static final String DATABASE_CREATE_ARTICLES =
            "create table articles( _id integer primary key autoincrement, feedid integer, " +
            "title text, link text not null, description text," +
            "h1 text, h2 text, h3 text, p text, image text, date integer);";

    Read the article

  • Deserialize xml which uses attribute name/value pairs

    - by Bodyloss
    My application receives a constant stream of XML files which are more or less a direct copy of the database record:

        <record type="update">
            <field name="id">987654321</field>
            <field name="user_id">4321</field>
            <field name="updated">2011-11-24 13:43:23</field>
        </record>

    And I need to deserialize this into a class which provides nullable properties for all columns:

        class Record
        {
            public long? Id { get; set; }
            public long? UserId { get; set; }
            public DateTime? Updated { get; set; }
        }

    I just can't work out a way of doing this without parsing the XML file manually and switching on the field's name attribute to store the values. Is there a way this can be achieved quickly using an XmlSerializer? And if not, is there a more efficient way of parsing it manually? Regards and thanks. My main problem is that the name attribute's value needs to map to a property name, with the property's value taken from the contents of the <field>..</field> element.
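    For comparison, the manual name/value mapping described in the question is only a few lines in any DOM-style API; a sketch of the shape of that fallback, shown here in Python with ElementTree for brevity (the dictionary keys mirror the sample record, the rest is illustrative):

        import xml.etree.ElementTree as ET

        def parse_record(xml_text):
            # Build a dict keyed by the "name" attribute, then pick out the known columns.
            root = ET.fromstring(xml_text)
            fields = {f.get("name"): f.text for f in root.findall("field")}
            return {
                "id": int(fields["id"]) if fields.get("id") else None,
                "user_id": int(fields["user_id"]) if fields.get("user_id") else None,
                "updated": fields.get("updated"),  # left as text; parse to a datetime if needed
            }

        record = parse_record("""<record type="update">
            <field name="id">987654321</field>
            <field name="user_id">4321</field>
            <field name="updated">2011-11-24 13:43:23</field>
        </record>""")
        print(record)

    The same dictionary-keyed-by-name idea carries over to whichever XML reader the client language provides.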

    Read the article

  • PHP: json_decode dumping NULL, BOM not found

    - by SerEnder
    I've been trying to find out why this json_encode'd string isn't parsing out correctly, and came across previously answered questions where a UTF BOM sequence was throwing the error, but that didn't help me here. Here's the code that isn't currently working:

        //Decode the notes attached to the sig
        $aNotes = json_decode($rule->getNotes(), true);

        $bom = pack("CCC", 0xef, 0xbb, 0xbf);
        if (0 == strncmp($rule->getNotes(), $bom, 3)) {
            print('BOM detected - json encoding in UTF-8<br/>');
        } else {
            print('BOM NOT detected - json encoding correctly<br/>');
        }

        print('rule->getNotes:<br/>' . $rule->getNotes() . '<br/>');
        var_dump($aNotes);

    Which generates this result:

        BOM NOT detected - json encoding correctly
        rule->getNotes:
        [{"lDate":"Unknown","sAuthor":"Unknown","sNote":"This is a general purpose Russian spam rule that matches anything starting with 2, 3 or 4 hex digits followed by a domain name ending with .ru -RSK 2010-05-10"},{"lDate":"1295031463082","sAuthor":"Drew Thorstenson","sNote":"this is Ryan's ru rule"}]
        NULL

    I've run it through JSON Lint, which said it was valid, and an online JSON parser, which parsed it correctly too. Any insight would be greatly appreciated.

    Read the article

  • node.js UDP data loss at high packet rates

    - by koleto
    I am observing significant data loss on a UDP connection with node.js 0.6.18 and 0.8.0. It appears at high packet rates, around 1200 packets per second with frames near the 1500 byte limit. Each data packet carries an incrementing number, so it is easy to track the number of lost packets.

        var server = dgram.createSocket("udp4");
        server.on("message", function (message, rinfo) {
            //~ processData(message);
            //~ writeData(message, null, 5000);
        }).bind(10001);

    In the receiving callback I tested two cases. First I simply saved 5000 packets to a file; the result was no dropped packets. After I included a data-processing routine, I got about a 50% drop rate. What I expected was that the data-processing routine should be completely asynchronous and should not introduce dead time into the system, since it is a simple parser that processes the binary data in the packet and emits events to a further processing routine. It seems that the parsing routine introduces dead time during which the event handler is unable to handle each packet. At low packet rates (< 1200 packets/sec) no data loss is observed! Is this a bug, or am I doing something wrong?

    Read the article

  • Incremental deploy from a shell script

    - by WishCow
    I have a project where I'm forced to use FTP as the means of deploying files to the live server. I'm developing on Linux, so I hacked together a bash script that makes a backup of the FTP server's contents, deletes all the files on the FTP server, and uploads all the fresh files from the Mercurial repository (taking care of user-uploaded files and folders, making post-deploy changes, etc.). It's working well, but the project is getting big enough that the deployment process takes too long. I'd like to modify the script to look up which files have changed and deploy only the modified files (the backup is fine as it is for now). I'm using Mercurial as the VCS, so my idea is to somehow request the changed files between two revisions, iterate over them, upload each modified file, and delete each removed file. I can use hg log -vr rev1:rev2 and carve the changed files out of the output with grep/sed/etc. Two problems: (1) I have heard the horror stories that parsing the output of ls leads to insanity, so my guess is that the same applies here; if I try to parse the output of hg log, the variables will undergo word splitting and all kinds of transformations. (2) hg log doesn't tell me whether a file was modified, added, or deleted, and differentiating between modified and deleted files is the least I need. So, what would be the correct way to do this? I'm using yafc as the FTP client, in case it matters, but I'm willing to switch.
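    A sketch of the revision-to-revision diff step, shown here in Python rather than bash only to sidestep the word-splitting worries (hg status between two revisions reports per-file M/A/R codes; the revision values are placeholders):

        import subprocess

        def changed_files(rev1, rev2):
            # "hg status --rev REV1 --rev REV2" prints one "CODE path" line per file,
            # where CODE is M (modified), A (added) or R (removed).
            out = subprocess.check_output(
                ["hg", "status", "--rev", str(rev1), "--rev", str(rev2)],
                universal_newlines=True)
            to_upload, to_delete = [], []
            for line in out.splitlines():
                code, path = line[0], line[2:]
                if code in ("M", "A"):
                    to_upload.append(path)
                elif code == "R":
                    to_delete.append(path)
            return to_upload, to_delete

        upload, delete = changed_files("rev1", "rev2")
        print("upload:", upload)
        print("delete:", delete)

    Feeding each list to the existing yafc calls would leave the rest of the deploy script unchanged.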

    Read the article

  • How to add an extra plist property using CMake?

    - by Jesse Beder
    I'm trying to add the item <key>UIStatusBarHidden</key><true/> to the plist that's auto-generated by CMake. For certain keys, it appears there are pre-defined ways to add an item; for example:

        set(MACOSX_BUNDLE_ICON_FILE ${ICON})

    But I can't find a way to add an arbitrary property. I tried using the MACOSX_BUNDLE_INFO_PLIST target property as follows: I'd like the resulting plist to be identical to the old one, except with the new property I want, so I just copied the auto-generated plist and set that as my template. But the plist uses some Xcode variables, which also look like ${foo}, and CMake grumbles about this:

        Syntax error in cmake code when parsing string
            <string>com.bedaire.${PRODUCT_NAME:identifier}</string>
        syntax error, unexpected cal_SYMBOL, expecting } (47)
        Policy CMP0010 is not set: Bad variable reference syntax is an error.
        Run "cmake --help-policy CMP0010" for policy details.
        Use the cmake_policy command to set the policy and suppress this warning.
        This warning is for project developers. Use -Wno-dev to suppress it.

    In any case, I'm not even sure that this is the right thing to do. I can't find a good example or any good documentation about this. Ideally, I'd just let CMake generate everything as before, and just add a single extra line. What can I do?

    Read the article

  • VB.net XML Parser loop

    - by StealthRT
    Hey all, I am new to XML parsing in VB.NET. This is the code I am using to parse an XML file I have:

        Dim output As StringBuilder = New StringBuilder()
        Dim xmlString As String = _
            "<ip_list>" & _
            "<ip>" & _
            "<ip>192.168.1.1</ip>" & _
            "<ping>9 ms</ping>" & _
            "<hostname>N/A</hostname>" & _
            "</ip>" & _
            "<ip>" & _
            "<ip>192.168.1.6</ip>" & _
            "<ping>0 ms</ping>" & _
            "<hostname>N/A</hostname>" & _
            "</ip>" & _
            "</ip_list>"

        Using reader As XmlReader = XmlReader.Create(New StringReader(xmlString))
            Do Until reader.EOF
                reader.ReadStartElement("ip_list")
                reader.ReadStartElement("ip")
                reader.ReadStartElement("ip")
                reader.MoveToFirstAttribute()
                Dim theIP As String = reader.Value.ToString
                reader.ReadToFollowing("ping")
                Dim thePing As String = reader.ReadElementContentAsString().ToString
                reader.ReadToFollowing("hostname")
                Dim theHN As String = reader.ReadElementContentAsString().ToString
                MsgBox(theIP & " " & thePing & " " & theHN)
            Loop
        End Using

    I added the Do Until reader.EOF loop myself, but it does not seem to work: it keeps giving an error after the first pass. I must be missing something? David

    Read the article

  • Releasing Autoreleasepool crashes on iOS 4.0 (and only on 4.0)

    - by samsam
    Hi there. I'm wondering what could cause this. I have several methods in my code that I call using performSelectorInBackground. Within each of these methods I have an autorelease pool that is alloc/init'ed at the beginning and released at the end of the method. This works perfectly on iOS 3.1.3 / 3.2 / 4.2 / 4.2.1, but it crashes fatally on iOS 4.0 with an EXC_BAD_ACCESS exception that happens after calling [myPool release]. After I noticed this strange behaviour I was thinking about rewriting portions of my code to make the app "less parallel" when the client OS is 4.0. After I did that, the next point where the app crashed was within the ReachabilityCallback method from Apple's Reachability "framework". Well, now I'm not quite sure what to do. The things I do within my threaded methods are pretty simple XML parsing (no Cocoa calls or stuff that would affect the UI). After each method finishes, it posts a notification which the coordinating thread listens to, and once all the parallelized methods have finished, the coordinating thread calls view controllers etc... I have absolutely no clue what could cause this weird behaviour, especially since Apple's code fails as well. Any help is greatly appreciated! Thanks, sam

    Read the article

  • What file format can I use to output a formatted text file straight from a program without having the markup be too complicated?

    - by Matt
    Premise: I am parsing a file that is quite nearly XML, but not quite. From this file I would like to extract data and output it in a file that a user could open up in some program and read. To make the data readable, I would almost certainly need to format the text. In case it matters, I will probably be using Java to write the program. Problem: I cannot find a file format that supports formatting without having terribly complex rules and encoding problems. Attempts: I looked into a basic .txt extension first, but it does not offer enough formatting. I then tried the .rtf extension, but the rules for outputting text seem terribly complicated. It was then suggested that I use XML, but I do not understand how such a file would be viewed. This appears to be probably the best solution, but I don't understand much about it; perhaps somebody could shed some light here. In other words: could somebody suggest an easy-to-use file format, and/or shed some light on how to use XML for text formatting and viewing?

    Read the article

  • Searching Natural Language Sentence Structure

    - by Cerin
    What's the best way to store and search a database of natural-language sentence structure trees? Using OpenNLP's English Treebank parser, I can get fairly reliable sentence structure parses for arbitrary sentences. What I'd like to do is create a tool that can extract all the docstrings from my source code, generate these trees for all sentences in the docstrings, store the trees and their associated function names in a database, and then allow a user to search the database using natural-language queries. So, given the sentence "This uploads files to a remote machine." for the function upload_files(), I'd have the tree:

        (TOP (S (NP (DT This)) (VP (VBZ uploads) (NP (NNS files)) (PP (TO to) (NP (DT a) (JJ remote) (NN machine)))) (. .)))

    If someone entered the query "How can I upload files?", equating to the tree:

        (TOP (SBARQ (WHADVP (WRB How)) (SQ (MD can) (NP (PRP I)) (VP (VB upload) (NP (NNS files)))) (. ?)))

    how would I store and query these trees in a SQL database? I've written a simple proof-of-concept script that can perform this search using a mix of regular expressions and network graph parsing, but I'm not sure how I'd implement this in a scalable way. And yes, I realize my example would be trivial to retrieve using a simple keyword search. The idea I'm trying to test is how I might take advantage of grammatical structure, so I can weed out entries with similar keywords but a different sentence structure. For example, with the above query, I wouldn't want to retrieve the entry associated with the sentence "Checks a remote machine to find a user that uploads files.", which has similar keywords but obviously describes a completely different behavior.
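    Whatever storage scheme is chosen, the bracketed Treebank output first has to be turned into a structure the program can walk. A small sketch of that step (pure Python, no NLP libraries; the storage question itself is left open):

        def parse_tree(s):
            # Turn "(TOP (NP (DT This)) ...)" into nested lists like ['TOP', ['NP', ['DT', 'This']], ...].
            tokens = s.replace("(", " ( ").replace(")", " ) ").split()
            def read(pos):
                if tokens[pos] == "(":
                    label = tokens[pos + 1]
                    children, pos = [], pos + 2
                    while tokens[pos] != ")":
                        child, pos = read(pos)
                        children.append(child)
                    return [label] + children, pos + 1
                return tokens[pos], pos + 1
            tree, _ = read(0)
            return tree

        tree = parse_tree("(TOP (S (NP (DT This)) (VP (VBZ uploads) (NP (NNS files)))))")
        print(tree)

    From a structure like this, one option is to serialize subtree paths (e.g. VP/VB/upload) into an indexed text column and match queries on shared paths rather than on raw keywords.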

    Read the article

  • SQL Server error handling: exceptions and the database-client contract

    - by gbn
    We're a team of SQL Server database developers. Our clients are a mixed bag of C#/ASP.NET, C# and Java web services, Java/Unix services, and some Excel. Our client developers only use stored procedures that we provide, and we expect that (where sensible, of course) they treat them like web service methods. Some of our client developers don't like SQL exceptions. They understand exceptions in their own languages, but they don't appreciate that SQL is limited in how we can communicate issues. I don't just mean SQL errors, such as trying to insert "bob" into an int column; I also mean exceptions such as telling them that a reference value is wrong, that data has already changed, or that they can't do this because an aggregate is not zero. They don't really have any concrete alternatives: they've mentioned output parameters, but we assume an exception means "processing stopped/rolled back". How do folks here handle the database-client contract? Either generally, or where there is separation between the DB and client code monkeys. Edits:
    - we use SQL Server 2005 TRY/CATCH exclusively
    - we already log all errors to an exception table after the rollback
    - we're concerned that some of our clients won't check output parameters and will assume everything is OK; we need errors flagged up for support to look at
    - everything is an exception... the clients are expected to do some message parsing to separate information from errors; to separate our exceptions from DB-engine and calling errors, they should use the error number (ours are all 50,000, of course)

    Read the article

  • import csv file/excel into sql database asp.net

    - by kiev
    Hi everyone! I am starting a project with ASP.NET Visual Studio 2008 / SQL 2000 (2005 in the future) using C#. The tricky part for me is that the existing DB schema changes often, and the import file's columns will all have to be matched up with the existing DB schema, since they may not be a one-to-one match on column names (there is a lookup table that provides the table schema with the column names I will use). I am exploring different ways to approach this and need some expert advice. Are there any existing controls or frameworks that I can leverage to do any of this? So far I have explored the FileUpload .NET control, as well as some third-party upload controls such as SlickUpload, to accomplish the upload; the uploaded files should be < 500 MB. The next part is reading my CSV/Excel file and parsing it for display to the user so they can match it with our DB schema. I saw CSVReader and others, but Excel is more difficult since I will need to support different versions. Essentially, the user performing this import will insert and/or update several tables from the import file. There are other, more advanced requirements like record matching and a preview of the import records, but I want to understand how to do this part first. Update: I ended up using CsvReader from LumenWorks.Framework for uploading the CSV files.

    Read the article

  • Ways to implement tags - pros and cons of each

    - by bobobobo
    Related: Using SO as an example, what is the most sensible way to manage tags if you anticipate they will change often?

    Way 1: Seriously denormalized (comma delimited)

        table posts
        +--------+-----------------+
        | postId | tags            |
        +--------+-----------------+
        | 1      | c++,search,code |

    Here tags are comma delimited. Pros: Tags are retrieved at once with a single select query. Updating tags is simple. Easy and cheap to update. Cons: Extra parsing on tag retrieval, difficult to count how many posts use which tags. (Alternatively, if limited to something like 5 tags:)

        table posts
        +--------+-------+-------+-------+-------+-------+
        | postId | tag_1 | tag_2 | tag_3 | tag_4 | tag_5 |
        +--------+-------+-------+-------+-------+-------+
        | 1      | c++   | search| code  |       |       |

    Way 2: "Slightly normalized" (separate table, no intersection)

        table posts
        +--------+-------------------+
        | postId | title             |
        +--------+-------------------+
        | 1      | How do u tag?     |

        table taggings
        +--------+---------+
        | postId | tagName |
        +--------+---------+
        | 1      | C++     |
        | 1      | search  |

    Pros: Easy to see tag counts (count(*) from taggings where tagName='C++'). Cons: tagName will likely be repeated many, many times.

    Way 3: The cool kid's (normalized with intersection table)

        table posts
        +--------+---------------------------------------+
        | postId | title                                 |
        +--------+---------------------------------------+
        | 1      | Why is a raven like a writing desk?   |

        table tags
        +-------+---------+
        | tagId | tagName |
        +-------+---------+
        | 1     | C++     |
        | 2     | search  |
        | 3     | foofle  |

        table taggings
        +--------+-------+
        | postId | tagId |
        +--------+-------+
        | 1      | 1     |
        | 1      | 2     |
        | 1      | 3     |

    Pros: No repeating tag names. More girls will like you. Cons: More expensive to change tags than way #1.

    Read the article

  • Checking if an SSH tunnel is up and running

    - by Jarmund
    I have a Perl script which, when distilled a bit, looks like this:

        my $randport = int(10000 + rand(1000)); # Random port, as other scripts like this run at the same time
        my $localip = '192.168.100.' . ($port - 4000); # Don't ask... backwards compatibility

        # Create the tunnel in the background
        system("ssh -NL $randport:$localip:23 root\@$ip -o ConnectTimeout=60 -i somekey &");

        sleep 10; # Give the tunnel some time to come up

        # Create the telnet object
        my $telnet = new Net::Telnet(
            Timeout    => 10,
            Host       => 'localhost',
            Port       => $randport,
            Telnetmode => 0,
            Errmode    => \&fail,
        );

        # SNIPPED... a bunch of parsing of data from $telnet

    The thing is that the target $ip is on a link with very unpredictable bandwidth, so the tunnel might come up right away, it might take a while, or it might not come up at all, and a sleep is needed to give the tunnel some time to get up and running. So the question is: how can I test whether the tunnel is up and running? Ten seconds is a really undesirable delay if the tunnel comes up straight away. Ideally, I would like to check whether it's up and continue with creating the telnet object as soon as it is, up to a maximum of, say, 30 seconds.
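    One language-agnostic way to avoid the fixed sleep is to poll the local end of the tunnel until a TCP connect succeeds or a deadline passes. A rough sketch of that idea (shown in Python for brevity; the port value is a placeholder):

        import socket
        import time

        def wait_for_tunnel(port, timeout=30, interval=0.5):
            # ssh only starts listening on the forwarded local port once the session
            # is established, so a successful connect means the tunnel is up.
            deadline = time.time() + timeout
            while time.time() < deadline:
                try:
                    with socket.create_connection(("localhost", port), timeout=2):
                        return True
                except OSError:
                    time.sleep(interval)
            return False

        if wait_for_tunnel(10123):
            print("tunnel is up, start the telnet session")
        else:
            print("tunnel did not come up within 30 seconds")

    One caveat: a successful connect to the local port confirms the ssh session, not that the telnet endpoint behind the forward answers, so the first read should still be prepared to fail.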

    Read the article

  • xml parameter in sql server stored procedure

    - by npalle
    I want to write a stored procedure that accepts an XML parameter, parses its elements, and inserts them into a SQL table. This is my XML:

        <Lines>
          <Line>
            <roomlist>
              <room>
                <namehotel>MeSa</namehotel>
                <typeroom>506671</typeroom>
                <typeroomname>Dbl Standard - Tip</typeroomname>
                <roomnumber>0</roomnumber>
                <priceroom>444.60</priceroom>
                <costroom>400.00</costroom>
                <boardtype/>
                <paxes>
                  <pax>
                    <name>EU</name>
                    <lastname>CADO</lastname>
                    <typepax>Adult</typepax>
                  </pax>
                  <pax>
                    <name>LIN</name>
                    <lastname>BAC</lastname>
                    <typepax>Adult</typepax>
                  </pax>
                </paxes>
              </room>
            </roomlist>
          </Line>
        </Lines>

    How can I do that?

    Read the article

  • Testing ActionMailer's receive method (Rails)

    - by Brian Armstrong
    There is good documentation out there on testing ActionMailer send methods which deliver mail, but I'm unable to figure out how to test a receive method that is used to parse incoming mail. I want to do something like this:

        require 'test_helper'

        class ReceiverTest < ActionMailer::TestCase
          test "parse incoming mail" do
            email = TMail::Mail.parse(File.open("test/fixtures/emails/example1.txt", 'r').read)
            assert_difference "ProcessedMail.count" do
              Receiver.receive email
            end
          end
        end

    But I get the following error on the line which calls Receiver.receive:

        NoMethodError: undefined method `index' for #<TMail::Mail:0x102c4a6f0>
            /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/stringio.rb:128:in `gets'
            /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/mail.rb:392:in `parse_header'
            /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/mail.rb:139:in `initialize'
            /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/stringio.rb:43:in `open'
            /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/port.rb:340:in `ropen'
            /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/mail.rb:138:in `initialize'
            /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/mail.rb:123:in `new'
            /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/mail.rb:123:in `parse'
            /Library/Ruby/Gems/1.8/gems/actionmailer-2.3.4/lib/action_mailer/base.rb:417:in `receive'

    TMail is parsing the test file I have correctly, so that's not it. Thanks!

    Read the article

  • PHP - Parse XML subitems using foreach()

    - by Nate Shoffner
    I have an XML file that needs parsing in PHP. I am currently using simplexml_load_file to load the file like so:

        $xml = simplexml_load_file('Project.xml');

    Inside that XML file lies a structure like this:

        <?xml version="1.0" encoding="ISO-8859-1"?>
        <project>
            <name>Project 1</name>
            <features>
                <feature>Feature 1</feature>
                <feature>Feature 2</feature>
                <feature>Feature 3</feature>
                <feature>Feature 4</feature>
            </features>
        </project>

    What I am trying to do is print the <name> contents along with EACH <feature> element within <features>. I'm not sure how to do this because there is more than one element called <feature>. Any help is greatly appreciated.

    Read the article
