Search Results

Search found 17966 results on 719 pages for 'xml parsing'.


  • How to create an AST with ANTLR from a hierarchical key-value syntax

    - by Brabster
    I've been looking at parsing a key-value data format with ANTLR. Pretty straightforward, but the keys represent a hierarchy. A simplified example of my input syntax:

        /a/b/c=2
        /a/b/d/e=3
        /a/b/d/f=4

    In my mind, this represents a tree structured as follows:

        (a (b (= c 2) (d (= e 3) (= f 4))))

    The nearest I can get is to use the following grammar:

        /* Parser Rules */
        start: (component NEWLINE?)* EOF -> (component)*;
        component: FORWARD_SLASH ALPHA_STRING component -> ^(ALPHA_STRING component)
                 | FORWARD_SLASH ALPHA_STRING EQUALS value -> ^(EQUALS ALPHA_STRING value);
        value: ALPHA_STRING;

        /* Lexer Rules */
        NEWLINE       : '\r'? '\n';
        ALPHA_STRING  : ('a'..'z'|'A'..'Z'|'0'..'9')+;
        EQUALS        : '=';
        FORWARD_SLASH : '/';

    which produces:

        (a (b (= c 2)))
        (a (b (d (= e 3))))
        (a (b (d (= f 4))))

    I'm not sure whether I'm asking too much of a generic tool such as ANTLR here, and this is as close as I can get with this approach. That is, from here I consume the parts of the tree and create the data structure I want by hand. So: can I produce the tree structure I want directly from a grammar? If so, how? If not, why not? Is it a technical limitation of ANTLR, or is it something more CS-y to do with the type of language involved?
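    The part the grammar alone can't express is the merging of repeated prefixes across lines, since that needs state shared between inputs. As a point of reference only (not from the original post, and sidestepping ANTLR entirely), a minimal Python sketch of that merge, folding each path/value line into one shared tree of nested dicts:

        # Fold "/a/b/c=2"-style lines into a single shared tree.
        def insert(tree, path, value):
            """Walk/create nested dicts for each path segment; the leaf gets the value."""
            *dirs, leaf = path
            node = tree
            for seg in dirs:
                node = node.setdefault(seg, {})
            node[leaf] = value

        def parse(text):
            tree = {}
            for line in text.strip().splitlines():
                path, _, value = line.partition("=")
                insert(tree, path.strip("/").split("/"), value)
            return tree

        print(parse("/a/b/c=2\n/a/b/d/e=3\n/a/b/d/f=4"))
        # {'a': {'b': {'c': '2', 'd': {'e': '3', 'f': '4'}}}}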


  • Parse a CSV file using python (to make a decision tree later)

    - by Margaret
    First off, full disclosure: this is going towards a uni assignment, so I don't want to receive code :). I'm more looking for approaches; I'm very new to Python, having read a book but not yet written any code. The entire task is to import the contents of a CSV file, create a decision tree from those contents (using the ID3 algorithm), and then parse a second CSV file to run against the tree. There's a big (understandable) preference for it to be capable of dealing with different CSV files (I asked if we were allowed to hard-code the column names, mostly to eliminate it as a possibility, and the answer was no). The CSV files are in a fairly standard format; the header row is marked with a #, then the column names are displayed, and every row after that is a simple series of values. Example:

        # Column1, Column2, Column3, Column4
        Value01, Value02, Value03, Value04
        Value11, Value12, Value13, Value14

    At the moment, I'm trying to work out the first part: parsing the CSV. To make the decisions for the decision tree, a dictionary structure seems like it's going to be the most logical, so I was thinking of doing something along these lines:

        Read in each line, character by character
          If the character is not a comma or a space
            Append character to temporary string
          If the character is a comma
            Append the temporary string to a list
            Empty string
        Once a line has been read
          Create a dictionary using the header row as the key (somehow!)
          Append that dictionary to a list

    However, if I do things that way, I'm not sure how to make the mapping between the keys and the values. I'm also wondering whether there is some way to perform an action on every dictionary in a list, since I'll need to do things to the effect of "everyone return their values for columns Column1 and Column4, so I can count up who has what!" I assume there is some mechanism for this, but I don't think I know how to do it. Is a dictionary the best way to do it? Would I be better off with some other data structure? If so, what?
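    For reference, the mapping being asked about is what Python's zip does: it pairs the header with each row, and a list comprehension is the usual way to "ask every dictionary" for its values. A minimal sketch, assuming the '# Col1, Col2' header convention shown above (the file name train.csv is made up):

        import csv

        def load_rows(path):
            """Read a '# Col1, Col2'-headed CSV into a list of dicts."""
            with open(path, newline="") as f:
                reader = csv.reader(f, skipinitialspace=True)
                header = next(reader)
                header[0] = header[0].lstrip("# ")  # drop the leading '#' marker
                return [dict(zip(header, row)) for row in reader]

        rows = load_rows("train.csv")
        # "Everyone return their values for Column1 and Column4":
        pairs = [(r["Column1"], r["Column4"]) for r in rows]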


  • What is the process involved in viewing a webservice in a browser from within visual studio?

    - by Sam Holder
    I have created a new VS2008 ASP.NET web service project, with the default name WebService1. If I right-click on the Service1.asmx file and select 'View in Browser', what are the processes that go on to make this happen? I am asking because I have a situation where, when I run this from a Visual Studio project started in our development shell (which sets up a common build environment), I cannot get the web service to show up in the browser. It starts the ASP.NET development server and creates a single file:

        C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\c43ddc22\268ae91b\hash\hash.web

    but when I start it from a standalone project I get a whole slew of files in here:

        C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\edad4eee\d198cf0e\App_Web_defaultwsdlhelpgenerator.aspx.cdcab7d2.vicgkf94.dll
        C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\edad4eee\d198cf0e\service1.asmx.cdcab7d2.compiled

    etc. I am trying to debug this but not really getting anywhere. I have inspected the output from VS, but the only option I get is the build output, which is basic and doesn't contain any information that is useful. I have tried running both versions with DebugView running, but no output there either. I would like to know if there are any log files I could look at, or if anyone has any suggestions on how I might be able to debug what is going wrong here. For completeness, the output I get when it doesn't work is:

        Parser Error
        Description: An error occurred during the parsing of a resource required to service
        this request. Please review the following specific parse error details and modify
        your source file appropriately.
        Parser Error Message: Could not create type 'WebService1.Service1'.
        Source Error: Line 1:
        Source File: /Service1.asmx  Line: 1
        Version Information: Microsoft .NET Framework Version: 2.0.50727.3603; ASP.NET Version: 2.0.50727.3082


  • handle large Parcelable ArrayList in Android

    - by Gal Ben-Haim
    I'm developing an Android app that is a client to a JSON web service API. I have classes of resource objects (some are nested) and I pass results from an IntentService that accesses the web service, using the Parcelable interface for all the resource classes. The web service returns arrays of results that can potentially be large (because of the nesting; for example, a post object also contains a comments array, and each comment also contains a user object). Currently I'm either inserting the results into a SQLite database or displaying them in a ListView; my relevant methods accept ArrayList<resourceClass> as arguments, and some data needs to be stored persistently while some should not. Since I don't know what size of lists I can handle this way without reaching the memory limits, is this a good practice? Is it a better idea to save the parsed JSON to a local file immediately and pass the file path to the ResultReceiver, then either insert into the database from that file or display the data? Is there a better way to handle this? BTW, I'm parsing the JSON as a stream with Gson's reader, so there shouldn't be memory issues at that stage.


  • NSInvocation object not getting allocated iphone sdk

    - by neha
    Hi all, I'm doing:

        NSString *_type_ = @"report";
        NSNumber *_id_ = [NSNumber numberWithInt:report.reportId];
        NSDictionary *paramObj = [NSDictionary dictionaryWithObjectsAndKeys:
                                      _id_, @"bla1",
                                      _type_, @"bla2", nil];
        _operation = [[NSInvocationOperation alloc] initWithTarget:self
                                                          selector:@selector(initParsersetId:)
                                                            object:paramObj];

    But my _operation object is nil even after processing this line. The selector here is actually a function I'm writing, which looks like:

        -(void)initParsersetId:(NSInteger)_id_ type:(NSString *)_type_ {
            NSString *urlStr = [NSString stringWithFormat:@"apimediadetails?id=624&type=report"];
            NSString *finalURLstr = [urlStr stringByAppendingString:URL];
            NSURL *url = [[NSURL alloc] initWithString:finalURLstr];
            NSXMLParser *xmlParser = [[NSXMLParser alloc] initWithContentsOfURL:url];
            // Initialize the delegate.
            DetailedViewObject *parser = [[DetailedViewObject alloc] initDetailedViewObject];
            // Set delegate
            [xmlParser setDelegate:parser];
            // Start parsing the XML file.
            BOOL success = [xmlParser parse];
            if (success)
                NSLog(@"No Errors");
            else
                NSLog(@"Error Error Error!!!");
        }

    Can anybody please point out where I'm going wrong? Thanks in advance.


  • Passing NULL value

    - by FFXIII
    Hi. I use an instance of NSXMLParser. I store found chars in NSMutableStrings that are stored in an NSMutableDictionary, and these dicts are then added to an NSMutableArray. When I test this, everything seems normal: I count 1 array, x dictionaries and x strings. In a detail view controller file I want to show my parsed results. I call the class where everything is stored, but I get (null) returned. This is what I do (wrong):

    XMLParser.h:

        @interface XMLParser : NSObject {
            NSMutableArray *array;
            NSMutableDictionary *dictionary;
            NSMutableString *element;
        }
        @property (nonatomic, retain) NSMutableArray *array;
        @property (nonatomic, retain) NSMutableDictionary *dictionary;
        @property (nonatomic, retain) NSMutableString *element;

    XMLParser.m:

        @synthesize array, dictionary, element;
        // parsing goes on here & works fine
        // so 'element' is filled with content, stored in a dict in an array,
        // and released at the end of the file

    In my controller file I do this:

    controller.h:

        @class XMLParser;
        @interface controller : UIViewController {
            XMLParser *aXMLParser;
        }
        @property (nonatomic, retain) XMLParser *aXMLParser;

    controller.m:

        #import "XMLParser.h"
        @synthesize aXMLParser;

        - (void)viewDidLoad {
            NSLog(@"test array: %@", aXMLParser.array);
            NSLog(@"test dict: %@", aXMLParser.dictionary);
            NSLog(@"test element: %@", aXMLParser.element);
        }

    When I test the value of my array, a dict or an element inside the XMLParser class I get my result. What am I doing wrong, so that I can't call my results in my controller file? Any help is welcome, because I'm pretty stuck right now :/


  • How to display recently viewed items in iPhone

    - by Pugal Devan
    Hi friends, I have created a tab bar with three views. Tab 1 is a table view that navigates to a detail table view. Tab 2 is a map view. Tab 3 is a table view (the recent view for the Tab 1 items). Now I display the names in the table view in Tab 1. The names change dynamically and are displayed using XML parsing. Tab 3 is the recent view. I want to display in the recent view the items that were recently visited in the Tab 1 view, limited to 10 items. So I have used an NSMutableArray, and I have set delegates to get the items into the recent view from the Tab 1 view. How do I avoid duplicate recent items, and how do I remove the older items and insert the new recent items in the recent view? Here is my sample code, in the recent view class:

        stories = [[NSMutableArray alloc] initWithCapacity:10];
        items = [[NSMutableDictionary alloc] init];
        NSString *preferName = [devDelegate getCurrentAuthor]; // (get name dynamically)
        [items setObject:preferName forKey:@"preferName"];
        [stories addObject:[items copy]];

        cell.textLabel.text = [[stories objectAtIndex:storyIndex] objectForKey:@"preferName"];

    So please guide me on how to achieve this. Thanks.
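    Language aside, the recent-items bookkeeping being asked for is a small move-to-front list: drop any existing duplicate, insert the new item at the front, and trim to ten. A sketch of just that logic, in Python for brevity:

        RECENT_LIMIT = 10

        def add_recent(recent, item):
            """Record item as most recently viewed: dedupe, newest first, capped length."""
            if item in recent:
                recent.remove(item)       # drop the older duplicate
            recent.insert(0, item)        # newest entry goes to the front
            del recent[RECENT_LIMIT:]     # keep at most RECENT_LIMIT entries

        recent = []
        for name in ["a", "b", "a", "c"]:
            add_recent(recent, name)
        print(recent)  # ['c', 'a', 'b']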


  • How to find Tomcat's PID and kill it in python?

    - by 4herpsand7derpsago
    Normally, one shuts down Apache Tomcat by running its shutdown.sh script (or batch file). In some cases, such as when Tomcat's web container is hosting a web app that does some crazy things with multithreading, running shutdown.sh gracefully shuts down some parts of Tomcat (as I can see more available memory returning to the system), but the Tomcat process keeps running. I'm trying to write a simple Python script that:

        1. Calls shutdown.sh
        2. Runs ps -aef | grep tomcat to find any process with Tomcat referenced
        3. If applicable, kills the process with kill -9 <PID>

    Here's what I've got so far (as a prototype; I'm brand new to Python, BTW):

        #!/usr/bin/python

        import sys
        import subprocess

        def main():
            # Shutdown Tomcat
            shutdownCmd = "sh ${TOMCAT_HOME}/bin/shutdown.sh"
            subprocess.call([shutdownCmd], shell=True)

            # Check for PID
            grepCmd = "ps -aef | grep tomcat"
            grepResults = subprocess.call([grepCmd], shell=True)
            if grepResults.length > 1:
                # Get PID and kill it.
                pid = ???
                killPidCmd = "kill -9 $pid"
                subprocess.call([killPidCmd], shell=True)

            # Exit.
            sys.exit()

        if __name__ == "__main__":
            main()

    I'm struggling with the middle part: obtaining the grep results, checking whether their size is greater than 1 (since grep always returns a reference to itself, at least one result will always be returned, methinks), and then parsing the returned PID and passing it into killPidCmd. Thanks in advance!
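    A sketch of the missing middle part, assuming a Unix-style ps: capture the output with subprocess instead of letting it go to the terminal, do the "tomcat" match in Python (which avoids grep matching itself), and read the PID from the second column:

        import os
        import signal
        import subprocess

        def tomcat_pids():
            """Return the PIDs of processes whose ps line mentions tomcat."""
            out = subprocess.check_output(["ps", "-aef"]).decode()
            pids = []
            for line in out.splitlines()[1:]:           # skip the header row
                if "tomcat" in line.lower():
                    pids.append(int(line.split()[1]))   # PID is the second column
            return pids

        for pid in tomcat_pids():
            os.kill(pid, signal.SIGKILL)                # the kill -9 equivalent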


  • Comparing lists in Standard ML

    - by user1050640
    I am extremely new to SML and we just got our first programming assignment for class, and I need a little insight. The question is: write an ML function, called minus: int list * int list -> int list, that takes two non-decreasing integer lists and produces a non-decreasing integer list obtained by removing the elements from the first input list which are also found in the second input list. For example:

        minus( [1,1,1,2,2], [1,1,2,3] ) = [1,2]
        minus( [1,1,2,3], [1,1,1,2,2] ) = [3]

    Here is my attempt at answering the question. Can anyone tell me what I am doing incorrectly? I don't quite understand parsing lists.

        fun minus(xs, nil) = []
          | minus(nil, ys) = []
          | minus(x::xs, y::ys) = if x = y then minus(xs, ys)
                                  else x :: minus(x, ys);

    Here is a fix I just did; I think this is right now?

        fun minus(L1, nil) = L1
          | minus(nil, L2) = []
          | minus(L1, L2) =
              if hd(L1) > hd(L2) then minus(L1, tl(L2))
              else if hd(L1) = hd(L2) then minus(tl(L1), tl(L2))
              else hd(L1) :: minus(tl(L1), L2);
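    Not SML, but for comparison, the merge logic of that second version sketched in Python: walk both sorted lists, and let each occurrence in the second list cancel exactly one occurrence in the first.

        def minus(xs, ys):
            """Multiset difference of two non-decreasing lists, itself non-decreasing."""
            if not ys:
                return list(xs)
            if not xs:
                return []
            if xs[0] > ys[0]:
                return minus(xs, ys[1:])        # ys[0] can't match anything later
            if xs[0] == ys[0]:
                return minus(xs[1:], ys[1:])    # cancel one occurrence from each
            return [xs[0]] + minus(xs[1:], ys)  # xs[0] survives

        print(minus([1, 1, 1, 2, 2], [1, 1, 2, 3]))  # [1, 2]
        print(minus([1, 1, 2, 3], [1, 1, 1, 2, 2]))  # [3]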


  • Iterating through Event Log Entry Collection, IndexOutOfBoundsException

    - by fjdumont
    Hello, in a service application I am iterating through the Windows application event log to parse events, in order to react depending on the entry message. When the event log is full (Windows usually makes sure there is enough space by deleting old entries; this is configurable in the eventvwr.exe settings), the service always runs into an IndexOutOfBoundsException while iterating through the EventLog.Entries collection. No matter how I iterate (for loop, using the collection's enumerator, copying the collection into an array, ...), I can't seem to get rid of this 'bug'. Currently, I ensure that the log is not full in order to keep the service running, by regularly deleting the last few items: I parse the event log file and delete the last few nodes (don't beat me up, I couldn't find a better alternative...). How can I iterate through the collection without trying to access already-deleted entries? Is there perhaps a more elegant method? I am only trying to access the logs written during the last x seconds (even LINQ failed to select those when the log is full; same exception). Could this help? Thanks for any advice and hints. Frank

    Edit: I forgot to mention that my assumption is that the loops are accessing entries which are being deleted during iteration by Windows. Basically, that is why I tried to clone the collection. Is there perhaps a way to lock the collection for a small amount of time, for just my application?


  • Manually Trigger or Prevent Javascript Lazy Loading in Website from Bookmarklet

    - by stwhite
    One of the problems with using a bookmarklet for grabbing images on a page is that if a website uses lazy loading, the bookmarklet won't detect the image, because the image will have a placeholder (e.g. "grey.gif") and not its actual source. JavaScript run on page load replaces these URLs. I'm looking for a solution that retrieves the images that are not being displayed, by either triggering or preventing lazy loading from running. This bookmarklet isn't limited to one specific domain. So far, some ideas I've had:

        1. Ping the domain and retrieve the page HTML if no images are found the first time around.
           Problem: this then requires parsing the actual HTML.
           Problem: with lazy loading, a few images will always show, just none below the fold.
        2. Scroll the page to initiate lazy loading when the bookmarklet is clicked, then scroll back to the top.
        3. Trigger lazy loading from inside the bookmarklet using script. Lazy Loader adds the "original"
           attribute, so I could potentially check whether that attribute exists with a value.
           Problem: ???


  • How does SQLite on Android handle long strings?

    - by Levara
    I'm wondering how Android's implementation of SQLite handles long strings. Reading the online SQLite documentation, it says that strings in SQLite are limited to 1 million characters; my strings are definitely smaller. I'm creating a simple RSS application, and after parsing an HTML document and extracting text, I'm having a problem saving it to a database. I have two tables in the database, feeds and articles. RSS feeds are correctly saved and retrieved from the feeds table, but when saving to the articles table, logcat says that it cannot save the extracted text to its column. I don't know if other columns are making problems too; there's no mention of them in logcat. I'm wondering, since the text is from an article on the web, are characters like (", ', ;) creating problems? Does Android automatically escape them, or do I have to? I'm using a technique for inserting similar to the one in the Notepad tutorial:

        public long insertArticle(long feedid, String title, String link, String description,
                                  String h1, String h2, String h3, String p, String image, long date) {
            ContentValues initialValues = new ContentValues();
            initialValues.put(KEY_FEEDID, feedid);
            initialValues.put(KEY_TITLE, title);
            initialValues.put(KEY_LINK, link);
            initialValues.put(KEY_DESCRIPTION, description);
            initialValues.put(KEY_H1, h1);
            initialValues.put(KEY_H2, h2);
            initialValues.put(KEY_H3, h3);
            initialValues.put(KEY_P, p);
            initialValues.put(KEY_IMAGE, image);
            initialValues.put(KEY_DATE, date);
            return mDb.insert(DATABASE_TABLE_ARTICLES, null, initialValues);
        }

    The column p is for the extracted text; h1, h2 and h3 are for headers from a page. Logcat reports only column p to be the problem. The table is created with the following statement:

        private static final String DATABASE_CREATE_ARTICLES =
            "create table articles( _id integer primary key autoincrement, feedid integer, " +
            "title text, link text not null, description text, " +
            "h1 text, h2 text, h3 text, p text, image text, date integer);";


  • iphone file download not working

    - by Anonymous
    Hi, in my app I'm first connecting to a web service, which in return sends a URL for a file. I use the URL to download the file and then display it on the new view. I get the correct URL but am not able to download the file from that location. I have another test app which will download a file from the same location, and it works like a charm. Following is my code for the web service / file download. This is a snippet of the code where I'm parsing the web service XML and then passing the result to NSData for the file download. Any suggestions where I'm going wrong? I'm referring to the following tutorials: Web Service, PDF Viewer.

        if ([elementName isEqualToString:@"PRHPdfResultsResult"]) {
            NSLog(soapResults);
            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Report downloaded from:"
                                                            message:soapResults
                                                           delegate:self
                                                  cancelButtonTitle:@"OK"
                                                  otherButtonTitles:nil];
            NSData *pdfData = [[NSData alloc] initWithContentsOfURL:[NSURL URLWithString:soapResults]];

            // Store the data locally as a PDF file
            NSString *resourceDocPath = [[NSString alloc] initWithString:
                [[[[NSBundle mainBundle] resourcePath] stringByDeletingLastPathComponent]
                    stringByAppendingPathComponent:@"Documents"]];
            NSString *filePath = [resourceDocPath stringByAppendingPathComponent:@"myPDF.pdf"];
            [pdfData writeToFile:filePath atomically:YES];

            [alert show];
            [alert release];
            [soapResults setString:@""];
            elementFound = FALSE;
        }


  • python on apache - getting 404

    - by Kirby
    I edited this question after I found a solution... I need to understand why the solution worked instead of my method. This is likely to be a silly question; I tried searching other related questions, but to no avail. I am running Apache/2.2.11 (Ubuntu) DAV/2 SVN/1.5.4 PHP/5.2.6-3ubuntu4.5 with Suhosin-Patch mod_python/3.3.1 Python/2.6.2. I have a script called test.py:

        #!/usr/bin/python
        print "Content-Type: text/html"    # HTML is following
        print                              # blank line, end of headers
        print "hello world"

    Running it as an executable works:

        /var/www$ ./test.py
        Content-Type: text/html

        hello world

    But when I go to http://localhost/test.py I get a 404 error. What am I missing? I used this resource to enable Python parsing on Apache: http://ubuntuforums.org/showthread.php?t=91101. From that same thread, the following code worked... why?

        #!/usr/bin/python
        import sys
        import time

        def index(req):
            # Following line causes errors to be sent to the browser
            # rather than to the log file (great for debug!)
            sys.stderr = sys.stdout
            #print "Content-type: text/html\n"
            #print """
            blah1 = """<html>
            <head><title>A page from Python</title></head>
            <body>
            <h4>This page is generated by a Python script!</h4>
            The current date and time is """
            now = time.gmtime()
            displaytime = time.strftime("%A %d %B %Y, %X", now)
            #print displaytime,
            blah1 += displaytime
            #print """
            blah1 += """
            <hr>
            Well House Consultants demonstration
            </body>
            </html>
            """
            return blah1


  • What good open source programs exist for fuzzing popular image file types?

    - by JohnnySoftware
    I am looking for a free, open source, portable fuzzing tool for popular image file types, written in Java, Python, or Jython. Ideally, it would accept specifications for the fuzzable fields using some kind of declarative constraints. A non-procedural grammar for specifying constraints is greatly preferred; otherwise I might as well write them all in Python or whatever, just specifying ranges of valid values or expressions for them. Ideally, it would support some kind of generative programming, to export the fuzzer into various programming languages for cases where more customization is required. If it supported a direct-manipulation GUI for controlling parameter values and ranges, that would be nice too. The file formats that should be supported are:

        GIF
        JPEG
        PNG

    So basically, it should be sort of a toolkit consisting of a ready-to-run utility and a framework or library, capable of generating the fuzzed files directly as well as from programs it generates. It needs to be simple, so that test images can be created quickly, and it should have a batch capability for creating a series of images; creating just one at a time would be too painful. I do not want a hacking tool, just a QA tool. Basically, I just want to address concerns that it is taking too long to get commonplace image rendering/parsing libraries stable and trustworthy.
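    No single tool is endorsed here, but the batch-generation core is small enough to sketch. A minimal random byte-flip mutator in Python, with no field awareness at all (file names are made up, and real fuzzers would respect chunk structure and checksums):

        import random
        from pathlib import Path

        def mutate(data: bytes, flips: int, rng: random.Random) -> bytes:
            """Flip a few random bytes, leaving the leading magic bytes alone."""
            buf = bytearray(data)
            for _ in range(flips):
                i = rng.randrange(8, len(buf))   # skip e.g. the 8-byte PNG signature
                buf[i] = rng.randrange(256)
            return bytes(buf)

        def fuzz_batch(seed_file: str, count: int = 100) -> None:
            data = Path(seed_file).read_bytes()
            rng = random.Random(42)              # fixed seed: reproducible test cases
            for n in range(count):
                out = mutate(data, flips=rng.randint(1, 16), rng=rng)
                Path(f"fuzz_{n:04d}.png").write_bytes(out)

        fuzz_batch("seed.png")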


  • Regex expression is too greedy

    - by alastairs
    I'm writing a regular expression to match data from the IMDb soundtracks data file. My regexes are mostly working, although in places they slurp too much text into my named groups. Take the following regex, for example:

        "^ Performed by '?(?<performer>.*)('? \(qv\))?$"

    The performer group includes the string ' (qv) as well as the performer's name. Unfortunately, because the records are not consistently formatted, some performers' names are surrounded by single quotation marks whilst others are not. This means they are optional as far as the regex is concerned. I've tried marking the last group as a greedy group using the ?> group specifier, but this appeared to have no effect on the results. I can improve the results by changing the performer group to match a small range of characters, but this reduces my chances of parsing the name out correctly. Furthermore, if I were to just exclude the apostrophe character, I would then be unable to parse, e.g., band names containing apostrophes, such as Elia's Lonely Friends Band, who performed Run For Your Life featured in Resident Evil: Apocalypse.
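    The usual cure is a lazy quantifier: with .+? the optional quote and (qv) tail get first claim on the end of the line, instead of being swallowed by a greedy .*. A sketch in Python (the post is .NET, where (?<performer>...) is the equivalent named-group syntax), which keeps apostrophes inside names intact:

        import re

        # Lazy .+? expands only as far as needed, so the optional '...(qv)'
        # tail can still match at the end of the line.
        PERFORMED_BY = re.compile(r"^ Performed by '?(?P<performer>.+?)'?( \(qv\))?$")

        for line in [" Performed by 'Elia's Lonely Friends Band' (qv)",
                     " Performed by Nina Hagen"]:
            print(PERFORMED_BY.match(line).group("performer"))
        # Elia's Lonely Friends Band
        # Nina Hagen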


  • An algo for generating code callgraphs

    - by Shrey
    I am working on a project which requires generating some metrics from code (it can be C/C++/Java/Python). One of the metrics can be a call graph, which I create after parsing the entered code (the programs are expected to be small, probably under 1000 lines). As of now, I am looking for a way to create a program (it can be C/Python) which takes a file (C/C++/Python/Java) as input and then creates a textual output containing the approximate calling sequence, as well as the tokens in the code file. I have looked at some other tools which do the same thing, like splint, pylint, codeviz, etc. So I have two ways of solving my problem:

        1. Read and understand the algorithms these tools use (tokenization, graph generation, etc.)
        2. Or, start from a basic algorithm (something like very high-level steps) and then sit down
           to create each part as I want it to be.

    I know reinventing the wheel is not a good idea, but I would still like to give option (2) a shot. The only issue is, currently I am drawing a blank. My question: does anyone have any know-how about how to create code graphs? Any hints as to what I should do? Any top-level steps which I can follow? Thanks a lot.
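    For the Python-input case at least, the standard library's ast module already covers the tokenization and parsing steps, so a first approximation of a call graph fits in a few lines. A sketch (it only sees direct calls by name, ignoring methods, aliasing, and dynamic dispatch, so it is very much an "approximate calling sequence"):

        import ast

        def callgraph(source: str) -> dict:
            """Map each function name to the set of names it calls."""
            graph = {}
            for node in ast.walk(ast.parse(source)):
                if isinstance(node, ast.FunctionDef):
                    calls = set()
                    for sub in ast.walk(node):
                        if isinstance(sub, ast.Call) and isinstance(sub.func, ast.Name):
                            calls.add(sub.func.id)
                    graph[node.name] = calls
            return graph

        src = """
        def helper():
            print("hi")

        def main():
            helper()
            helper()
        """
        print(callgraph(src))  # {'helper': {'print'}, 'main': {'helper'}}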


  • If I use Unicode on an ISO-8859-1 site, how will that be interpreted by a browser?

    - by grg-n-sox
    So I have a site that uses ISO-8859-1 encoding, and I can't change that. I want to be sure that the content I enter into the web app on the site gets parsed correctly. The parser works on a character-by-character basis. I also cannot change the parser; I am just writing files for it to handle. The content in the file I am telling the app to display after parsing contains Unicode characters (or at least I assume so, even if they were produced by Windows Alt codes mapped to CP437). Using entities is not an option, due to the character-by-character operation of the parser. The only characters the parser escapes upon output are markup-sensitive ones like the ampersand, less-than, and greater-than symbols. I would just go ahead and put this through to see what it looks like, but output can only be seen on a publishing, which has to spend a couple of days getting approved, and that would be asking too much for just a test case. So, long story short: if I told the site to output ?ÇÑ¥?? on a site with a meta tag stating it is supposed to use ISO-8859-1, will a browser auto-detect the Unicode and display it, or will it literally interpret it as ISO-8859-1 and show a different set of characters?
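    The crux is that the same characters are different byte sequences under the two encodings, and the browser decodes whatever bytes arrive using the declared charset. A small Python illustration of the mismatch (browsers treat ISO-8859-1 as windows-1252 in practice, hence cp1252 below):

        text = "ÇÑ¥"

        utf8_bytes = text.encode("utf-8")      # b'\xc3\x87\xc3\x91\xc2\xa5'
        latin1_bytes = text.encode("latin-1")  # b'\xc7\xd1\xa5'

        # UTF-8 bytes read under the declared 8-bit charset come out as mojibake:
        print(utf8_bytes.decode("cp1252"))     # Ã‡Ã‘Â¥
        # Bytes actually written as latin-1 round-trip cleanly:
        print(latin1_bytes.decode("latin-1"))  # ÇÑ¥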


  • node.js UDP data loss at high packet rates

    - by koleto
    I am observing significant data loss on a UDP connection with node.js 0.6.18 and 0.8.0. It appears at high packet rates, about 1200 packets per second with frames near the 1500-byte limit. Each data package has an incrementing number, so it is easy to track the number of lost packages.

        var server = dgram.createSocket("udp4");
        server.on("message", function (message, rinfo) {
            //~ processData(message);
            //~ writeData(message, null, 5000);
        }).bind(10001);

    In the receiving callback I tested two cases. First I saved 5000 packages to a file; the result was no dropped packages. Then I included a data-processing routine and got about a 50% drop rate. What I expected was that the processing routine should be completely asynchronous and should not introduce dead time into the system, since it is a simple parser that processes the binary data in the package and emits events to a further processing routine. It seems that the parsing routine introduces dead time, during which the event handler is unable to handle each packet. At low package rates (< 1200 packages/sec) there is no data loss observed! Is this a bug, or am I doing something wrong?


  • PHP: json_decode dumping NULL, BOM not found

    - by SerEnder
    I've been trying to find out why this json_encode'd string isn't parsing out correctly, and came across previously answered questions where the UTF BOM sequence was throwing the error, but that didn't help me here. Here's the code that isn't currently working:

        // Decode the notes attached to the sig
        $aNotes = json_decode($rule->getNotes(), true);

        $bom = pack("CCC", 0xef, 0xbb, 0xbf);
        if (0 == strncmp($rule->getNotes(), $bom, 3)) {
            print('BOM detected - json encoding in UTF-8<br/>');
        } else {
            print('BOM NOT detected - json encoding correctly<br/>');
        }

        print('rule->getNotes:<br/>' . $rule->getNotes() . '<br/>');
        var_dump($aNotes);

    Which generates this result:

        BOM NOT detected - json encoding correctly
        rule->getNotes:
        [{"lDate":"Unknown","sAuthor":"Unknown","sNote":"This is a general purpose Russian spam rule that matches anything starting with 2, 3 or 4 hex digits followed by a domain name ending with .ru -RSK 2010-05-10"},{"lDate":"1295031463082","sAuthor":"Drew Thorstenson","sNote":"this is Ryan's ru rule"}]
        NULL

    I've run it through JSON Lint, which said it was valid, and an online JSON parser, which parsed it correctly too. Any insight would be greatly appreciated.


  • import csv file/excel into sql database asp.net

    - by kiev
    Hi everyone! I am starting a project with ASP.NET / Visual Studio 2008 / SQL Server 2000 (2005 in future), using C#. The tricky part for me is that the existing DB schema changes often, and the import file's columns will all have to be matched up with the existing DB schema, since they may not be a one-to-one match on column names. (There is a lookup table that provides the table schema with the column names I will use.) I am exploring different ways to approach this and need some expert advice. Are there any existing controls or frameworks that I can leverage to do any of this? So far I have explored the FileUpload .NET control, as well as some third-party upload controls such as SlickUpload, to accomplish the upload; the files uploaded should be < 500 MB. The next part is reading my CSV/Excel files and parsing them for display to the user, so they can match the columns with our DB schema. I saw CsvReader and others, but Excel is more difficult, since I will need to support different versions. Essentially, the user performing this import will insert and/or update several tables from this import file. There are other, more advanced requirements, like record matching and a preview of the import records, but I wish to understand how to do this first. Update: I ended up using CsvReader from LumenWorks.Framework for uploading the CSV files.


  • Incremental deploy from a shell script

    - by WishCow
    I have a project where I'm forced to use FTP as the means of deploying files to the live server. I'm developing on Linux, so I hacked together a bash script that makes a backup of the FTP server's contents, deletes all the files on the FTP server, and uploads all the fresh files from the Mercurial repository (taking care of user-uploaded files and folders, making post-deploy changes, etc.). It's working well, but the project is starting to get big enough to make the deployment process too long. I'd like to modify the script to look up which files have changed and deploy only the modified files (the backup is fine as it is at the moment). I'm using Mercurial as the VCS, so my idea is to somehow request the changed files between two revisions from it, iterate over them, upload each modified file, and delete each removed file. I can use hg log -vr rev1:rev2 and, from the output, carve out the changed files with grep/sed/etc. Two problems:

        1. I have heard the horror stories that parsing the output of ls leads to insanity, so my
           guess is that the same applies here: if I try to parse the output of hg log, the variables
           will undergo word-splitting and all kinds of transformations.
        2. hg log doesn't tell me whether a file was modified, added, or deleted. Differentiating
           between modified and deleted files would be the least I need.

    So, what would be the correct way to do this? I'm using yafc as the FTP client, in case it's needed, but I'm willing to switch.
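    For problem 2, hg status (rather than hg log) reports exactly the per-file M/A/R flags between two revisions. A sketch of the lookup in Python, which also sidesteps the shell word-splitting worry (the revision arguments are placeholders; check hg help status for the flags in your version):

        import subprocess

        def changed_files(rev1, rev2):
            """Return (files to upload, files to delete) between two revisions."""
            out = subprocess.check_output(
                ["hg", "status", "--rev", rev1, "--rev", rev2]).decode()
            upload, delete = [], []
            for line in out.splitlines():
                status, path = line[0], line[2:]   # lines look like "M some/path"
                if status in ("M", "A"):
                    upload.append(path)            # modified or added: upload
                elif status == "R":
                    delete.append(path)            # removed: delete on the server
            return upload, delete

        upload, delete = changed_files("123", "tip")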


  • Searching Natural Language Sentence Structure

    - by Cerin
    What's the best way to store and search a database of natural-language sentence structure trees? Using OpenNLP's English Treebank parser, I can get fairly reliable sentence structure parses for arbitrary sentences. What I'd like to do is create a tool that can extract all the docstrings from my source code, generate these trees for all sentences in the docstrings, store these trees and their associated function names in a database, and then allow a user to search the database using natural-language queries. So, given the sentence "This uploads files to a remote machine." for the function upload_files(), I'd have the tree:

        (TOP (S (NP (DT This)) (VP (VBZ uploads) (NP (NNS files)) (PP (TO to) (NP (DT a) (JJ remote) (NN machine)))) (. .)))

    If someone entered the query "How can I upload files?", equating to the tree:

        (TOP (SBARQ (WHADVP (WRB How)) (SQ (MD can) (NP (PRP I)) (VP (VB upload) (NP (NNS files)))) (. ?)))

    how would I store and query these trees in a SQL database? I've written a simple proof-of-concept script that can perform this search using a mix of regular expressions and network graph parsing, but I'm not sure how I'd implement this in a scalable way. And yes, I realize my example would be trivial to retrieve using a simple keyword search. The idea I'm trying to test is how I might take advantage of grammatical structure, so I can weed out entries with similar keywords but a different sentence structure. For example, with the above query, I wouldn't want to retrieve the entry associated with the sentence "Checks a remote machine to find a user that uploads files.", which has similar keywords but is obviously describing completely different behavior.
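    One common normalization, offered as a sketch rather than the answer: flatten every tree into its parent-to-child label pairs and store those as rows (e.g. function_name, parent, child), so a query tree, flattened the same way, can be ranked in plain SQL by how many pairs it shares with each entry. A minimal Python flattener for well-formed s-expressions like the ones above:

        import re

        def edges(sexp):
            """Flatten an s-expression parse tree into (parent, child) label pairs."""
            stack, pairs = [], []
            for tok in re.findall(r"\(|\)|[^\s()]+", sexp):
                if tok == "(":
                    stack.append(None)      # placeholder until we see the label
                elif tok == ")":
                    stack.pop()
                elif stack and stack[-1] is None:
                    stack[-1] = tok         # this node's label
                    if len(stack) > 1:
                        pairs.append((stack[-2], tok))
                else:
                    pairs.append((stack[-1], tok))  # leaf word under current node
            return pairs

        print(edges("(VP (VBZ uploads) (NP (NNS files)))"))
        # [('VP', 'VBZ'), ('VBZ', 'uploads'), ('VP', 'NP'), ('NP', 'NNS'), ('NNS', 'files')]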


  • How to add an extra plist property using CMake?

    - by Jesse Beder
    I'm trying to add the item <key>UIStatusBarHidden</key><true/> to my plist, which is auto-generated by CMake. For certain keys, it appears there are pre-defined ways to add an item; for example:

        set(MACOSX_BUNDLE_ICON_FILE ${ICON})

    But I can't find a way to add an arbitrary property. I tried using the MACOSX_BUNDLE_INFO_PLIST target property as follows: I'd like the resulting plist to be identical to the old one, except with the new property I want, so I just copied the auto-generated plist and set that as my template. But the plist uses some Xcode variables, which also look like ${foo}, and CMake grumbles about this:

        Syntax error in cmake code when parsing string
            <string>com.bedaire.${PRODUCT_NAME:identifier}</string>
        syntax error, unexpected cal_SYMBOL, expecting } (47)
        Policy CMP0010 is not set: Bad variable reference syntax is an error.
        Run "cmake --help-policy CMP0010" for policy details. Use the cmake_policy
        command to set the policy and suppress this warning. This warning is for
        project developers. Use -Wno-dev to suppress it.

    In any case, I'm not even sure that this is the right thing to do. I can't find a good example or any good documentation about this. Ideally, I'd just let CMake generate everything as before and add a single extra line. What can I do?


  • SQL Server error handling: exceptions and the database-client contract

    - by gbn
    We're a team of SQL Server database developers. Our clients are a mixed bag of C#/ASP.NET, C# and Java web services, Java/Unix services, and some Excel. Our client developers only use stored procedures that we provide, and we expect that (where sensible, of course) they treat them like web service methods. Some of our client developers don't like SQL exceptions. They understand exceptions in their own languages, but they don't appreciate that SQL is limited in how we can communicate issues. I don't just mean SQL errors, such as trying to insert "bob" into an int column; I also mean exceptions such as telling them that a reference value is wrong, or that data has already changed, or that they can't do this because their aggregate is not zero. They don't really have any concrete alternatives: they've mentioned that we should use output parameters, but we assume an exception means "processing stopped/rolled back". How do folks here handle the database-client contract? Either generally, or where there is separation between the DB and client code monkeys. Edits:

        - We use SQL Server 2005 TRY/CATCH exclusively.
        - We already log all errors to an exception table after the rollback.
        - We're concerned that some of our clients won't check output parameters and will assume
          everything is OK. We need errors flagged up for support to look at.
        - Everything is an exception: the clients are expected to do some message parsing to separate
          information vs. errors. To separate our exceptions from DB engine and calling errors, they
          should use the error number (ours are all >= 50,000, of course).

