Search Results

Search found 4962 results on 199 pages for 'parse'.

Page 113/199

  • Creating a customized video using Flash and XML

    - by Aaron Ladage
    The problem: I have to create a Flash video (in CS3) that will query a MySQL database and display that data at certain points in the video. The bigger problem: I'm not a Flash/ActionScript developer, so this is all very foreign to me! I've divided this project into two parts: a.) dynamically generate an XML feed from the data using PHP (using an ID number passed in the URL's query string), and b.) work with that feed in Flash. I've got the first part working, but am pretty lost in Flash. I can parse the XML, but I'm not sure how to set the data up as variables and attach it to a video's cue points. Can anyone point me in the direction of a good tutorial or offer some advice?

    Read the article

  • Haskell parsec parsing a string of items

    - by Chris
    I have a list that I need to parse, where all but the last element needs to be parsed by one parser and the last element needs to be parsed by another parser:

        a = "p1 p1b ... p2"

    or

        a = "p2"

    Originally I tried:

        parser = do
          parse1 <- many parser1
          parse2 <- parser2
          return (AParse parse1 parse2)

    The problem is that parser1 can consume input meant for parser2, so parser1 always consumes the entire list and leaves parser2 with nothing. Is there a way to apply parser1 to everything besides the last element of the string, and then apply parser2?

    Read the article

  • using Dependency Parser in Stanford coreNLP

    - by Eddie Dovzhik
    I am using Stanford CoreNLP ( http://nlp.stanford.edu/software/corenlp.shtml ) to parse sentences and extract dependencies between the words. I have managed to create the dependency graph as in the example in the supplied link, but I don't know how to work with it. I can print the entire graph using the toString() method, but the problem I have is that the methods that search for certain words in the graph, such as getChildList, require an IndexedWord object as a parameter. It is clear why they do, because the nodes of the graph are of IndexedWord type, but it's not clear to me how to create such an object in order to search for a specific node. For example: I want to find the children of the node that represents the word "problem" in my sentence. How do I create an IndexedWord object that represents the word "problem" so I can search for it in the graph? (One possible lookup is sketched after this entry.)

    Read the article
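
    One way to get at a node without constructing an IndexedWord by hand is to search the graph's vertex set for the surface form you want and reuse the IndexedWord the graph already holds. This is a hedged sketch, not confirmed against the poster's CoreNLP version: it assumes SemanticGraph exposes vertexSet() and getChildList(), that IndexedWord exposes word(), and that the package is edu.stanford.nlp.semgraph (older releases use edu.stanford.nlp.trees.semgraph).

        import edu.stanford.nlp.ling.IndexedWord;
        import edu.stanford.nlp.semgraph.SemanticGraph;
        import java.util.List;

        class DependencyLookup {
            // Returns the first vertex whose surface form matches the given word, or null.
            static IndexedWord findByWord(SemanticGraph graph, String word) {
                for (IndexedWord node : graph.vertexSet()) {
                    if (word.equals(node.word())) {
                        return node;
                    }
                }
                return null;
            }

            // Prints the children of the node for "problem", if such a node exists.
            static void printChildrenOfProblem(SemanticGraph graph) {
                IndexedWord problem = findByWord(graph, "problem");
                if (problem != null) {
                    List<IndexedWord> children = graph.getChildList(problem);
                    for (IndexedWord child : children) {
                        System.out.println(child.word());
                    }
                }
            }
        }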

  • Error Message: Could not load type 'Global.Apps.Forms.NewVehicleRegistration.NewVehicleReg'.

    - by user279521
    I am getting an error message, and I'm not quite sure where the issue is; any ideas?

        Parser Error
        Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.

        Parser Error Message: Could not load type 'Global.Apps.Forms.NewVehicle.NewVehicleReg'.

        Source Error:
        Line 1: <%@ Page Language="C#" EnableViewState="false" AutoEventWireup="true" CodeBehind="NewVehicleReg.aspx.cs" Inherits="Global.Apps.Forms.NewVehicle.NewVehicleReg" %>
        Line 2:
        Line 3: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

        Source File: /globalprocurement/apps/Forms/NewVehicle/NewVehicleReg.aspx    Line: 1

        Version Information: Microsoft .NET Framework Version:2.0.50727.3082; ASP.NET Version:2.0.50727.3082

    Read the article

  • Linq to xml not able to add new elements

    - by Fore
    We save our XML in a "text" field in the database. So first I check whether any XML exists; if not, I create a new XDocument and fill it with the necessary XML, else I just add the new element. The code looks like this:

        XDocument doc = null;
        if (item.xmlString == null || item.xmlString == "")
        {
            doc = new XDocument(new XDeclaration("1.0", "utf-8", "yes"),
                new XElement("DataTalk",
                    new XAttribute(XNamespace.Xmlns + "xsi", "http://www.w3.org/2001/XMLSchema-instance"),
                    new XAttribute(XNamespace.Xmlns + "xsd", "http://www.w3.org/2001/XMLSchema"),
                    new XElement("Posts",
                        new XElement("TalkPost"))));
        }
        else
        {
            doc = XDocument.Parse(item.xmlString);
        }

    This works to create the structure, but then the problem appears when I want to add a new TalkPost: I get an error saying the document is incorrectly structured. The following code is used when adding new elements:

        doc.Add(new XElement("TalkPost",
            new XElement("PostType", newDialog.PostType),
            new XElement("User", newDialog.User),
            new XElement("Customer", newDialog.Customer),
            new XElement("PostedDate", newDialog.PostDate),
            new XElement("Message", newDialog.Message)));

    Read the article

  • How to get the changes on a branch in git

    - by Greg Hewgill
    What is the best way to get a log of commits on a branch since the time it was branched from the current branch? My solution so far is:

        git log $(git merge-base HEAD branch)..branch

    The documentation for git-diff indicates that "git diff A...B" is equivalent to "git diff $(git-merge-base A B) B". On the other hand, the documentation for git-rev-parse indicates that "r1...r2" is defined as "r1 r2 --not $(git merge-base --all r1 r2)". Why are these different? Note that "git diff HEAD...branch" gives me the diffs I want, but the corresponding git log command gives me more than what I want. In pictures, suppose this:

                  x---y---z---branch
                 /
        ---a---b---c---d---e---HEAD

    I would like to get a log containing commits x, y, z. "git diff HEAD...branch" gives these commits. However, "git log HEAD...branch" gives x, y, z, c, d, e.

    Read the article

  • Memory Issues When DOM Parsing A Large XML File on Android Devices

    - by tonyc
    Hey awesome SO users, I have an Android application that parses an XML file for users and displays the results in a much more mobile-friendly format. The app works great for most users, but some users have lots and lots of data, and the app crashes on them because it runs out of memory. Is there any way to have a DOM-style XML parser quit parsing after a certain amount of data? I only need the first 30 or so elements, so stopping early would make the application much more efficient. I'd like to use a SAX or pull parser instead, but the XML I'm parsing is not valid and I have no control over it. Unless anyone has some good SAX solutions that let me parse messy, invalid XML, I think DOM is the only way to go. Thanks for reading! (A sketch of stopping a streaming parse early follows this entry.)

    Read the article
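
    If a streaming parser does turn out to be usable on this feed, the usual way to stop after the first N elements is to throw from the handler once the limit is reached, since SAX has no built-in "stop" call. A hedged sketch; the element name "user" and the cut-off of 30 are assumptions, and it does not address the invalid-XML problem the poster mentions.

        import java.io.File;
        import javax.xml.parsers.SAXParserFactory;
        import org.xml.sax.Attributes;
        import org.xml.sax.SAXException;
        import org.xml.sax.helpers.DefaultHandler;

        class FirstNUsersHandler extends DefaultHandler {
            private final int limit;
            private int seen = 0;

            FirstNUsersHandler(int limit) {
                this.limit = limit;
            }

            @Override
            public void startElement(String uri, String localName, String qName, Attributes attrs)
                    throws SAXException {
                if ("user".equals(qName) && ++seen > limit) {
                    // Throwing is the conventional way to abort a SAX parse early.
                    throw new SAXException("reached element limit");
                }
                // ... collect the data needed for this element here ...
            }
        }

        // Usage: catch the deliberate SAXException and treat it as "done".
        // SAXParserFactory.newInstance().newSAXParser()
        //         .parse(new File("users.xml"), new FirstNUsersHandler(30));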

  • Creating a "less"-like console pager interface for pysqlite3 database

    - by Eric
    I would like to add some interactive capability to a Python CLI application I've written that stores data in a SQLite3 database. Currently, my app reads in a certain type of file, parses and analyzes it, puts the analysis data into the db, and spits the formatted records to stdout (which I generally pipe to a file). There are on the order of a million records in this file. Ideally, I would like to eliminate the text-file situation altogether and just loop after that "parse and analyze" part, displaying a screen's worth of records and allowing the user to page through them and enter commands that edit the records. The backend part I know how to do. Can anyone suggest a good starting point for creating that pager frontend, either directly in the console (like the pager "less"), through ncurses, or some other system?

    Read the article

  • Writing an app with Perl and Ruby?

    - by Jeff Erickson
    I am working on a project that is mostly Ruby on Rails. However, I need to generate and parse Excel files in this project (I know, I know...), so I've been using Perl's Spreadsheet::WriteExcel and Spreadsheet::ParseExcel, which work well. However, what is the best way to combine this use of Perl with the larger Ruby on Rails app? Is calling the Perl script with backticks the kosher way to go about this? It feels a little hacky to me, but if that is the only (or best) way, then that's what I'll do. I wanted to reach out and see if anyone else has some suggestions or advice. Thank you!

    Read the article

  • Android new Intent

    - by Sukitha
    Hi, I'm trying to start the Android Market from my app to search for similar products. I'm using this code:

        Intent intent = new Intent(Intent.ACTION_VIEW,
                Uri.parse("http://market.android.com/search?q=pub:\"some txt\""));
        c.startActivity(intent);

    This works fine, but when I hit the Home button while in the Market and go to the phone's home screen, then open the app again, it still shows the Market results (I want to go to my main menu). What's the solution? Thanks. (One commonly suggested tweak is sketched after this entry.)

    Read the article
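
    A tweak that is often suggested for this kind of back-stack behaviour, offered here as an assumption rather than a confirmed fix for this report, is to launch the Market in its own task so that relaunching the app from the launcher does not resume the Market screen:

        import android.content.Context;
        import android.content.Intent;
        import android.net.Uri;

        class MarketLauncher {
            // Opens a Market search in a separate task; returning to this app via the
            // launcher should then land on the app's own activity, not the Market.
            static void openMarketSearch(Context c, String query) {
                Intent intent = new Intent(Intent.ACTION_VIEW,
                        Uri.parse("http://market.android.com/search?q=" + Uri.encode(query)));
                intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
                c.startActivity(intent);
            }
        }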

  • How to use Netbeans platform syntax highlight with JEditorPane?

    - by Volta
    There are many tutorials online giving very complex or non-working examples of this. People seem to recommend using the syntax highlighters offered by NetBeans, but I am totally puzzled about how to do so! I have checked many, many sites on this, and the best I can find is http://www.antonioshome.net/kitchen/netbeans/nbms-standalone.php. However, I am still not able to use this example (as it is aimed at people who don't want to use the NetBeans platform but just a portion of it), and I am still not sure whether I can use syntax highlighting in a simple plug-and-play way. For example, NetBeans supports several language highlights by default; can I just use those highlighters in a JEditorPane to parse Ruby/Python/Java, or do I need to write my own parser? :-| I would really appreciate a small, simple example of how to plug syntax highlighting into a standalone application using the NetBeans platform. (A non-NetBeans fallback is sketched after this entry.)

    Read the article
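
    This is not the NetBeans-platform route the question asks about, but for comparison, a standalone Swing app can get editor syntax highlighting from the third-party RSyntaxTextArea library with very little code. A hedged sketch, assuming the rsyntaxtextarea jar is on the classpath:

        import javax.swing.JFrame;
        import javax.swing.SwingUtilities;
        import org.fife.ui.rsyntaxtextarea.RSyntaxTextArea;
        import org.fife.ui.rsyntaxtextarea.SyntaxConstants;
        import org.fife.ui.rtextarea.RTextScrollPane;

        class HighlightDemo {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        // A drop-in JTextArea replacement with built-in lexers.
                        RSyntaxTextArea textArea = new RSyntaxTextArea(20, 60);
                        textArea.setSyntaxEditingStyle(SyntaxConstants.SYNTAX_STYLE_JAVA);

                        JFrame frame = new JFrame("Syntax highlighting demo");
                        frame.add(new RTextScrollPane(textArea));
                        frame.pack();
                        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                        frame.setVisible(true);
                    }
                });
            }
        }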

  • How do Scala parser combinators compare to Haskell's Parsec?

    - by artif
    I have read that Haskell parser combinators (in Parsec) can parse context sensitive grammars. Is this also true for Scala parser combinators? If so, is this what the "into" (aka "") function is for? What are some strengths/weaknesses of Scala's implementation of parser combinators, vs Haskell's? Do they accept the same class of grammars? Is it easier to generate error messages or do other miscellaneous useful things with one or the other? How does packrat parsing (introduced in Scala 2.8) fit into this picture? Is there a webpage or some other resource that shows how different operators/functions/DSL-sugar from one language's implementation maps onto the other's?

    Read the article

  • Issue with JSON and jQuery

    - by Jason N. Gaylord
    I'm calling a web service and returning the following data in JSON format:

        ["OrderNumber":"12345","CustomerId":"555"]

    In my web service success method, I'm trying to parse both:

        $.ajax({
            type: "POST",
            url: "MyService.asmx/ServiceName",
            data: "{}",
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function(msg) {
                var data = msg.d;
                var rtn = "";
                $.each(data, function(list) {
                    rtn = rtn + this.OrderNumber + ", " + this.CustomerId + "<br/>";
                });
                rtn = rtn + "<br/>" + data;
                $("#test").html(rtn);
            }
        });

    but I'm getting a bunch of "undefined, undefined" rows followed by the correct JSON string. Any idea why? I've tried using the eval() method, but that didn't help, as I got an error message about ']' being expected.

    Read the article

  • Android - Using Camera Intent but not updating correctly?

    - by Tyler
    Hello - I am using an intent to capture a picture:

        Intent i = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
        i.putExtra(android.provider.MediaStore.EXTRA_OUTPUT,
                Uri.fromFile(new File(Environment.getExternalStorageDirectory(), "test.jpg")));
        startActivityForResult(i, 2);

    And then once the picture is taken I do the following:

        sendBroadcast(new Intent(Intent.ACTION_MEDIA_MOUNTED,
                Uri.parse("file://" + Environment.getExternalStorageDirectory())));
        launchGallery();

    While the above seems to work the first time without issue, whenever I run through it a second time (so test.jpg already exists) the image saves correctly to /sdcard/, but the thumbnail does not update, and when the gallery loads it shows the previous test.jpg image! I was under the impression that sendBroadcast should update the thumbnails, but it doesn't appear to. Is there some other way to go about this and ensure that when I call my launchGallery() method, the most recent image I just took appears? Thanks! (One alternative is sketched after this entry.)

    Read the article
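
    One alternative that is often suggested for exactly this symptom, offered as an assumption rather than a confirmed fix, is to have the media scanner rescan just the captured file instead of broadcasting ACTION_MEDIA_MOUNTED for the whole card, so the stale thumbnail for test.jpg gets regenerated. A minimal sketch, assuming an API level that includes MediaScannerConnection.scanFile (API 8+):

        import java.io.File;
        import android.content.Context;
        import android.media.MediaScannerConnection;
        import android.net.Uri;

        class ThumbnailRefresher {
            // Rescans a single file so the gallery's thumbnail for it is rebuilt.
            static void rescan(Context context, File photo) {
                MediaScannerConnection.scanFile(
                        context,
                        new String[] { photo.getAbsolutePath() },
                        new String[] { "image/jpeg" },
                        new MediaScannerConnection.OnScanCompletedListener() {
                            public void onScanCompleted(String path, Uri uri) {
                                // Scan finished: a safe point to open the gallery,
                                // e.g. by calling the app's launchGallery() here.
                            }
                        });
            }
        }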

  • How to improve my LDAP schema?

    - by asmaier
    Hello, I have an OpenLDAP database, and it holds some project objects that look like this:

        dn: cn=Proj1,ou=Project,ou=ua,dc=org
        cn: Proj1
        objectClass: top
        objectClass: posixGroup
        member: 001ag
        member: 002ag
        System: ABEL
        System: PCx
        Budget: ABEL:1000000:0.3
        Budget: PCx:300000:0.3

    One can see that the Budget attribute is a ":"-separated string, where the first part holds the name of the system the budget is for, the second part holds the budget itself (which may change every month), and the last part is a conversion factor for that system's budget. Seeing this, I thought it was bad database design, since attribute values should always be atomic. But how can I improve that in LDAP, so that I can do a direct ldapsearch or a direct ldapmodify on the budget of system "ABEL" instead of writing a script that has to parse and split the ":"-separated string?

    Read the article

  • Generate canonical / real URL based on base.href or location

    - by blueyed
    Is there a method/function to get the canonical / transformed URL, respecting any base.href setting of the page? I can get the base URL (in jQuery) via $("base").attr("href"), and I could use string methods to resolve the URL against it, but $("base").attr("href") has no host, path, etc. attributes (as window.location has), and manually putting this together is rather tedious. E.g., given a base.href of "http://example.com/foo/" and a relative URL "/bar.js", the result should be "http://example.com/bar.js". If base.href is not present, the URL should be resolved against window.location instead. Is there a standard method available for this already? (I'm asking because jQuery.getScript fails when a relative URL like "/foo.js" is used together with a BASE tag: FF3.6 makes an OPTIONS request, and nginx cannot handle this. When using the full URL (base.href.host + "/foo.js") it works.)

    Read the article

  • [PHP] DOMDocument load on a page returning 400 Bad Request status

    - by PeteWilliams
    Hiya, I'm trying to use the Last.fm API for an application I'm creating, but am having some problems with validation. If an API request gives an error, it returns a code and message in the response XML like this:

        <lfm status="failed">
          <error code="6">No user with that name</error>
        </lfm>

    However, the request also returns an HTTP status of 400 (or in some cases 403), which DOMDocument considers an error, so it then refuses to parse the XML. Is there any way round this, so that I can retrieve the error code and message? Thanks, Pete

    Read the article

  • NHibernate, the Parallel Framework, and SQL Server

    - by andy
    hey guys, we have a loop that:

    1. Loops over several thousand XML files. Altogether we're parsing millions of "user" nodes.
    2. In each iteration, parses a "user" XML node and does custom deserialization.
    3. Finally, in each iteration, sends our object to NHibernate for saving. We use:

        .SaveOrUpdateAndFlush(user);

    This is a lengthy process, and we thought it would be a perfect candidate for testing out the .NET 4.0 parallel libraries. So we wrapped the loop in a:

        Parallel.ForEach();

    After doing this, we start getting "random" timeout exceptions from SQL Server and finally, after leaving it running all night, unhandled OutOfMemory exceptions. I haven't done deep debugging on this yet, but what do you guys think? Is this simply a limitation of SQL Server, or could it be our NHibernate setup, or something else? cheers andy

    Read the article

  • A tool to find and fix incomplete source code documentation

    - by Pekka
    I have several finished, older PHP projects with a lot of includes that I would like to document in javadoc/phpDocumentor style. While working through each file manually and being forced to do a code review alongside the documenting would be the best thing, I am, simply out of time constraints, interested in tools to help me automate the task as much as possible. The tool I am thinking about would ideally have the following features:

    - Parse a PHP project tree and tell me where there are undocumented files, classes, and functions/methods (i.e. elements missing the appropriate docblock comment).
    - Provide a method to half-way easily add the missing docblocks by creating the empty structures and, ideally, opening the file in an editor (internal or external, I don't care) so I can put in the description.
    - Optional: automatic recognition of parameter types, return values and such. But that's not really required.

    The language in question is PHP, though I could imagine that a C/Java tool might be able to handle PHP files after some tweaking. Thanks for your great input!

    Read the article

  • Apache .htaccess

    - by Peter
    Hi! I have an .htaccess problem. My directory structure looks like this:

        /
        HEADER.html
        README.html
        /stackoverflow/
        /stackoverflow/.htaccess

    .htaccess:

        ServerSignature Off
        Options +Indexes
        HeaderName /HEADER.html
        IndexIgnore HEADER.html
        ReadmeName /README.html
        IndexIgnore /README.html
        IndexOptions +FancyIndexing
        AddCharset UTF-8 .txt
        IndexIgnore *.xml
        IndexIgnore *.php

    My primary directory is /stackoverflow/. When I navigate to this directory via browser, HEADER.html and README.html are included on every site/directory under /stackoverflow/; this works fine. I added some PHP code to my HEADER.html (which is in the root directory /), so I am trying to add this to the .htaccess:

        AddType application/x-httpd-php .html .php .htm

    This is not working, I think because HEADER.html is in the root. If I try to add the AddType... line to /.htaccess (and not to /stackoverflow/.htaccess), it overrides my /stackoverflow/.htaccess rules. Why? How can I add the AddType rule to my /stackoverflow/.htaccess so that Apache parses the HTML file as PHP?

    Read the article

  • Converting to Byte Array after reading a BLOB from SQL in C#

    - by Soham
    Hi All, I need to read a BLOB and store it in a byte[] before going forward with deserializing. Consider:

        // Reading the database with
        DataAdapterInstance.Fill(DataSet);
        DataTable dt = DataSet.Tables[0];
        foreach (DataRow row in dt.Rows)
        {
            byte[] BinDate = Byte.Parse(row["Date"].ToString()); // convert successfully to byte[]
        }

    I need help with this C# statement, as I am not able to convert an object type into a byte[]. Note, the "Date" field in the table is a BLOB and not of type Date. Help appreciated; Soham

    Read the article

  • boost::Spirit Grammar for unsorted schema

    - by Hassan Syed
    I have a section of a schema for a model that I need to parse. Let's say it looks like the following:

        {
            type = "Standard";
            hostname="x.y.z";
            port="123";
        }

    The properties are:

    - The elements may appear unordered.
    - All elements that are part of the schema must appear, and no other.
    - All of the elements' synthesised attributes go into a struct.
    - (Optional) The schema might in the future depend on the type field -- i.e., different fields based on type -- however I am not concerned about this at the moment.

    Read the article

  • Recreation of MySQL DB using "mysql mydb < mydb.sql" is really slow when the table has tens of milli

    - by Jian Lin
    It seems that a MySQL database that has a table with tens of millions of records will get one big INSERT INTO statement when

        mysqldump some_db > some_db.sql

    is done to back up the database. (Is it one INSERT statement that handles all the records?) So when reconstructing the DB using

        mysql some_db < some_db.sql

    the CPU is hardly busy (about 1.8% usage by the mysql process... I don't see a mysqld either?) and the hard disk doesn't seem to be too busy either... Last time, the whole restore process took 5 hours. Is there a way to make it faster? For example, when doing the mysqldump, can it break the INSERT statement into shorter ones, so that mysql doesn't have to parse such long lines when restoring the DB?

    Read the article

  • writing a fast parser in python

    - by panzi
    I've written a hands-on recursive pure-Python parser for a file format (ARFF) we use in one lecture. Now running my exercise submission is awfully slow, and it turns out by far the most time is spent in my parser. It's consuming a lot of CPU time; the HD is not the bottleneck. What performant ways are there to write a parser in Python? I'd rather not rewrite it in C. I tried to use Jython, but that decreased performance a lot! The files I parse are partially huge ( 150 MB) with very long lines, and my current parser only needs a look-ahead of one character. I'd post the source here, but I don't know if that's such a good idea; after all, the submission deadline has not yet passed. But then, the focus in this exercise is not the parser. You can choose whatever language you want to use, and there already is a parser for Java.

    Read the article

  • How to get a non-XML output using JDOM XSLTransformer?

    - by Neil McF
    Hello, I have an XML file which I'd like to transform into a non-XML (text) file based on an XSLT file. The code in both seems correct, and it works when testing manually, but I'm having a problem doing this programmatically. I'm using JDOM's XSLTransformer class to apply the XSLT to the XML, and it returns the result as a JDOM Document. The problem here is that I can't seem to access anything in the Document, as it is not a proper XML file, and I get a "java.lang.IllegalStateException: Root element not set" error. Is there a better way within Java to obtain a non-XML file as the result of an XSLT transformation? (One approach is sketched after this entry.)

    Read the article
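
    One way around the "Root element not set" problem, offered as a sketch rather than as what the poster's setup necessarily requires, is to bypass JDOM's XSLTransformer (which insists on building a Document) and let the standard javax.xml.transform API write the stylesheet's text output straight to a character stream; with <xsl:output method="text"/> there is no root element to build in the first place:

        import java.io.File;
        import java.io.StringWriter;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;

        class TextTransform {
            // Applies an XSLT with output method "text" and returns the raw text result.
            static String transformToText(File xml, File xslt) throws Exception {
                Transformer t = TransformerFactory.newInstance()
                        .newTransformer(new StreamSource(xslt));
                StringWriter out = new StringWriter();
                t.transform(new StreamSource(xml), new StreamResult(out));
                return out.toString();
            }
        }

    If the input already lives in a JDOM Document, JDOM's JDOMSource (if available in the version used) can be passed to transform() in place of the StreamSource for the XML.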
