Search Results

Search found 36925 results on 1477 pages for 'large xml document'.


  • Google I/O 2011: JavaScript Programming in the Large with Closure Tools

    Google I/O 2011: JavaScript Programming in the Large with Closure Tools (Michael Bolin). Most developers who have tinkered with JavaScript could not imagine writing 1,000 lines of code in such a language, let alone 100,000. Yet that is exactly what Google engineers have done, using a suite of JavaScript tools named "Closure", to produce many of the most popular and sophisticated applications on the Web, such as Gmail and Google Maps.
    From: GoogleDevelopers | Views: 4915 | 35 ratings | Time: 57:07 | More in Science & Technology

    Read the article

  • Oracle IRM video demonstration of separating duties of document security

    - by Simon Thorpe
    One thing an Information Rights Management technology should do well is separate out three main areas of responsibility:
    1. The business process of defining and controlling the classifications to which content is secured, and the definition of the roles employees, customers, partners and contractors have when accessing secured content.
    2. Allowing IT to manage the server and perform the role of authorizing the creation of new classifications to meet business needs, yet once a classification has been created and handed off to the business, IT no longer plays a role in its ongoing management.
    3. Empowering the business to take ownership of the classifications to which their own content is secured. For example, an employee who is leading an acquisition project should be responsible for defining who has access to confidential project documents. This person should be able to manage the rights users have in the classification and also be the point of contact for those wishing to gain rights.
    Oracle IRM has had this core model at the heart of its design since its creation in the late 1990s. Due in part to this important separation of rights from the documents themselves, Oracle IRM places the right functionality within the right parts of the business. For example, some IRM technologies allow the end user to make decisions about what users can print, edit or save a secured document. In practice this results in a wide variety of content secured with a plethora of options that don't conform to any policy. With Oracle IRM, users choose from a list of classifications against which they have been given the ability to secure information. Their role in the classification was given to them by the business owner of the classification, yet the definition of the role resides within the realm of corporate security, who own the overall business classification policies. It is this type of design and philosophy in Oracle IRM that makes it an enterprise solution that works beyond a few users and a few secured documents, scaling to hundreds of thousands of users and millions of documents. The following video shows how Oracle IRM 11g, the market leading document security solution, lets the security organization create and manage classifications whilst the business owns and manages them. If you want to experience using Oracle IRM secured content and the effects of the different roles users have, why not sign up for our free demonstration.

    Read the article

  • What is the best practice with KML files when adding a geositemap?

    - by Floran
    I'm not sure how to deal with KML files. This is now particularly important in reference to the Google Venice update. My site is basically a guide of many company listings (a sort of Yellow Pages). I want each company listing to have a geolocation associated with it. Which of the options I present below is the way to go?
    OPTION 1: all locations in a single KML file, with a reference to that KML file from a geositemap.xml
    MYGEOSITEMAP.xml
      <?xml version="1.0" encoding="UTF-8"?>
      <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:geo="http://www.google.com/geo/schemas/sitemap/1.0">
        <url>
          <loc>http://www.mysite.com/locations.kml</loc>
          <geo:geo><geo:format>kml</geo:format></geo:geo>
        </url>
      </urlset>
    ALLLOCATIONS.kml
      <kml xmlns="http://www.opengis.net/kml/2.2" xmlns:atom="http://www.w3.org/2005/Atom">
        <Document>
          <name>MyCompany</name>
          <atom:author>
            <atom:name>MyCompany</atom:name>
          </atom:author>
          <atom:link href="http://www.mysite.com/locations/3454/MyCompany" rel="related" />
          <Placemark>
            <name>MyCompany, Kalverstraat 26 Amsterdam 1000AG</name>
            <description><![CDATA[<address><a href="http://www.mysite.com/locations/3454/MyCompany">MyCompany</a><br />Address: Kalverstraat 26, Amsterdam 1000AG <br />Phone: 0646598787</address><p>hello there, im MyCompany</p>]]></description>
            <Point><coordinates>5.420686499999965,51.6298808,0</coordinates></Point>
          </Placemark>
        </Document>
      </kml>
      <kml xmlns="http://www.opengis.net/kml/2.2" xmlns:atom="http://www.w3.org/2005/Atom">
        <Document>
          <name>MyCompany</name>
          <atom:author><atom:name>MyCompany</atom:name></atom:author>
          <atom:link href="http://www.mysite.com/locations/22/companyX" rel="related" />
          <Placemark>
            <name>MyCompany, Rosestreet 45 Amsterdam 1001XF</name>
            <description><![CDATA[<address><a href="http://www.mysite.com/locations/22/companyX">companyX</a><br />Address: Rosestreet 45, Amsterdam 1001XF <br />Phone: 0642195493</address><p>some text about companyX</p>]]></description>
            <Point><coordinates>5.520686499889632,51.6197705,0</coordinates></Point>
          </Placemark>
        </Document>
      </kml>
    OPTION 2: a separate KML file for each location, with a reference to each KML file from a geositemap.xml (KML files placed in a \kmlfiles folder)
    MYGEOSITEMAP.xml
      <?xml version="1.0" encoding="UTF-8"?>
      <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:geo="http://www.google.com/geo/schemas/sitemap/1.0">
        <url>
          <loc>http://www.mysite.com/kmlfiles/3454_MyCompany.kml</loc>
          <geo:geo><geo:format>kml</geo:format></geo:geo>
        </url>
        <url>
          <loc>http://www.mysite.com/kmlfiles/22_companyX.kml</loc>
          <geo:geo><geo:format>kml</geo:format></geo:geo>
        </url>
      </urlset>
    3454_MyCompany.kml
      <kml xmlns="http://www.opengis.net/kml/2.2" xmlns:atom="http://www.w3.org/2005/Atom">
        <Document>
          <name>MyCompany</name>
          <atom:author><atom:name>MyCompany</atom:name></atom:author>
          <atom:link href="http://www.mysite.com/locations/3454/MyCompany" rel="related" />
          <Placemark>
            <name>MyCompany, Kalverstraat 26 Amsterdam 1000AG</name>
            <description><![CDATA[<address><a href="http://www.mysite.com/locations/3454/MyCompany">MyCompany</a><br />Address: Kalverstraat 26, Amsterdam 1000AG <br />Phone: 0646598787</address><p>hello there, im MyCompany</p>]]></description>
            <Point><coordinates>5.420686499999965,51.6298808,0</coordinates></Point>
          </Placemark>
        </Document>
      </kml>
    22_companyX.kml
      <kml xmlns="http://www.opengis.net/kml/2.2" xmlns:atom="http://www.w3.org/2005/Atom">
        <Document>
          <name>companyX</name>
          <atom:author><atom:name>companyX</atom:name></atom:author>
          <atom:link href="http://www.mysite.com/locations/22/companyX" rel="related" />
          <Placemark>
            <name>companyX, Rosestreet 45 Amsterdam 1001XF</name>
            <description><![CDATA[<address><a href="http://www.mysite.com/locations/22/companyX">companyX</a><br />Address: Rosestreet 45, Amsterdam 1001XF <br />Phone: 0642195493</address><p>some text about companyX</p>]]></description>
            <Point><coordinates>5.520686499889632,51.6197705,0</coordinates></Point>
          </Placemark>
        </Document>
      </kml>
    OPTION 3?

    Read the article

  • Implementing the Security Development Lifecycle (SDL) at a Large Insurance Company

    Find out how Cigital, an SDL Pro Network member, helped a large insurance company adopt the Microsoft SDL. The case study describes both the business drivers that led the company to recognize the need to incorporate the SDL within its development process and the initial rollout of the SDL.

    Read the article

  • Advice on designing a robust program to handle a large library of meta-information & programs

    - by Sam Bryant
    So this might be overly vague, but here it is anyway. I'm not really looking for a specific answer, but rather general design principles or direction towards resources that deal with problems like this. It's one of my first large-scale applications, and I would like to do it right.
    Brief Explanation: My basic problem is that I have to write an application that handles a large library of meta-data, can easily modify the meta-data on-the-fly, is robust with respect to crashing, and is very efficient. (Sorta like the design parameters of iTunes, although sometimes iTunes performs more poorly than I would like.) If you don't want to read the details, you can skip the rest.
    Long Explanation: Specifically, I am writing a program that creates a library of image files and meta-data about these files. There is a list of tags that may or may not apply to each image. The program needs to be able to add new images, new tags, assign tags to images, and detect duplicate images, all while operating. The program contains an image Viewer which has tagging operations. The idea is that if a given image A is viewed while the library has tags T1, T2, and T3, then that image will have boolean flags for each of those tags (depending on whether the user tagged that image while it was open in the Viewer). However, prior to being viewed in the Viewer, image A would have no value for tags T1, T2, and T3. Instead it would have a "dirty" flag indicating that it is unknown whether or not A has these tags. The program can introduce new tags at any time (which would automatically set all images to "dirty" with respect to the new tag).
    This program must be fast. It must easily be able to pull up a list of images with or without a certain tag, as well as images which are "dirty" with respect to a tag. It has to be crash-safe, in that if it suddenly crashes, all of the tagging information done in that session is not lost (though perhaps it's okay to lose some of it). Finally, it has to work with a lot of images (10,000).
    I am a fairly experienced programmer, but I have never tried to write a program with such demanding needs and I have never worked with databases. With respect to the meta-data storage, there seem to be a few design choices:
    Choice 1: Individual meta-data vs. centralized meta-data
    - Individual Meta-Data: have a separate meta-data file for each image. This way, as soon as you change the meta-data for an image, it can be written to the hard disk without having to rewrite the information for all of the other images.
    - Centralized Meta-Data: have a single file to hold the meta-data for every file. This would probably require meta-data writes in intervals as opposed to after every change. The benefit here is that you could keep a centralized list of all images with a given tag, etc., making the task of pulling up all images with a given tag very efficient.
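
    One way to make the "dirty" idea concrete is a small relational schema where an (image, tag) row exists only once the image has been reviewed for that tag, so the absence of a row is the dirty state. This is only a minimal sketch of that design, assuming SQLite as the centralized store; the table and column names are illustrative, not from the question:

      import sqlite3

      conn = sqlite3.connect("library.db")
      conn.executescript("""
          CREATE TABLE IF NOT EXISTS images (id INTEGER PRIMARY KEY, path TEXT UNIQUE);
          CREATE TABLE IF NOT EXISTS tags   (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
          -- A row here means the image has been reviewed for this tag; no row means "dirty".
          CREATE TABLE IF NOT EXISTS image_tags (
              image_id INTEGER REFERENCES images(id),
              tag_id   INTEGER REFERENCES tags(id),
              value    INTEGER NOT NULL,  -- 1 = tagged, 0 = reviewed but not tagged
              PRIMARY KEY (image_id, tag_id)
          );
      """)

      def dirty_images(tag_name):
          """Images that have not yet been reviewed for the given tag."""
          return conn.execute("""
              SELECT i.id, i.path FROM images i
              WHERE NOT EXISTS (
                  SELECT 1 FROM image_tags it
                  JOIN tags t ON t.id = it.tag_id
                  WHERE it.image_id = i.id AND t.name = ?)
          """, (tag_name,)).fetchall()

      conn.commit()  # commit after each tagging action; a crash then loses only uncommitted work

    With this layout, introducing a new tag is just an INSERT into tags; every image is automatically "dirty" for it because no image_tags rows exist yet.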

    Read the article

  • White Paper on Analysis Services Tabular Large-scale Solution #ssas #tabular

    - by Marco Russo (SQLBI)
    Since the first beta of Analysis Services 2012, I have worked with many companies designing and implementing solutions based on Analysis Services Tabular. I am glad that Microsoft published a white paper about a case study using one of these scenarios: An Analysis Services Case Study: Using Tabular Models in a Large-scale Commercial Solution. Alberto Ferrari is the author of the white paper and many people contributed to it. The final result is a very technical document based on a case study, which provides a level of detail that I don’t see often in other case studies (which are usually more marketing-oriented). This white paper has the following structure:
    - Requirements (data model, capacity planning, client tool)
    - Options considered (SQL Server Columnstore Indexes, SSAS Multidimensional, SSAS Tabular)
    - Data Model optimizations (memory compression, query performance, scalability)
    - Partitioning and Processing strategy for near real-time latency
    - Hardware selection (NUMA analysis, Azure VM tests)
    - Scalability tests (estimation of maximum users per node)
    If you are in charge of evaluating Tabular as an analytical engine, or if you have to design your solution based on Tabular, this white paper is a must-read. But if you just want to increase your knowledge of Analysis Services, you will find a lot of useful technical information. That said, my favorite quote of the document is the following one, funny but true: […] After several trials, the clear winner was a video gaming machine that one guy on the team used at home. That computer outperformed any available server, running twice as fast as the server-class machines we had in house. At that point, it was clear that the criteria for choosing the server would have to be expanded a bit, simply because it would have been impossible to convince the boss to build a cluster of gaming machines and trust it to serve our customers. But, honestly, if a business has the flexibility to buy gaming machines (assuming the machines can handle capacity), do this. (Owen Graupman, inContact)
    I want to write a longer discussion about how companies are adopting Tabular in scenarios where it is the hidden engine of a more complex solution (and not the classical “BI system”), because it is more frequent than you might expect (and has several advantages over many alternative approaches).

    Read the article

  • Can I get vim to correctly indent this Ruby code (Nokogiri)?

    - by Nathan Long
    The first XML Builder Example for Nokogiri looks like this:
      builder = Nokogiri::XML::Builder.new do |xml|
        xml.root {
          xml.products {
            xml.widget {
              xml.id_ "10"
              xml.name "Awesome widget"
            }
          }
        }
      end
      puts builder.to_xml
    Even though I have the Ruby Vim files installed, Vim's autoindent flattens the above example like this:
      builder = Nokogiri::XML::Builder.new do |xml|
      xml.root {
      xml.products {
      xml.widget {
      xml.id_ "10"
      xml.name "Awesome widget"
      }
      }
      }
      end
      puts builder.to_xml
    Does anybody know how to get Vim to autoindent this correctly?

    Read the article

  • Deleting Large Number of Records

    Often someone will try to perform a delete on a large number of records and run into a number of problems: slow performance, log growth, and more. Lynn Pettis shows us how to better handle this situation in SQL Server 2000 and SQL Server 2005.
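
    A common remedy for slow deletes and runaway log growth is to delete in small batches, so each batch is a short transaction and log space can be reused between batches. A minimal sketch of that pattern, assuming SQL Server 2005 or later and a pyodbc connection (the connection string, table name and filter are illustrative; on SQL Server 2000 the equivalent is SET ROWCOUNT):

      import pyodbc

      # Hypothetical connection string; adjust for your environment.
      conn = pyodbc.connect("DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes",
                            autocommit=True)
      cur = conn.cursor()

      BATCH = 10000
      while True:
          # Each iteration commits on its own, keeping individual transactions (and the log) small.
          cur.execute(f"DELETE TOP ({BATCH}) FROM dbo.BigTable WHERE CreatedOn < '2005-01-01'")
          if cur.rowcount == 0:
              break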

    Read the article

  • What Windows app can sort a huge XML file?

    - by Torben Gundtofte-Bruun
    I have some enormous XML-based configuration files, with 125000 lines in them. The problem is that they are auto-generated by the system I use, and "child" tags are in a random order within their respective parent tag. This means that a diff comparison is impossible. I want to recursively sort all tags within a parent tag by the value in name="". Some parent tags only appear once and don't have a name="" parameter; these should be sorted by the tag name itself. Once the files are sorted like this, they can be compared quite easily using normal tools. We are currently using ExamXML which can match unsorted XML files, but it fails because the files are too big. Is there an application that can do this? (Windows much preferred; Linux only as a last resort) I do not want to dive into development or XSLT jobs. I am thinking that someone must have made a simple sorting tool like this already - I just can't find it using Google. Update: With help from this site, I created a small package that I want to share: XML-Sorter_v0.3.zip Update: Follow-up question here.
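
    For reference, the recursive sort itself is only a few lines in a scripting language's standard XML support; a minimal sketch in Python (assuming the attribute is literally called name, as in the question, and that attribute order and whitespace details don't matter for the later diff):

      import xml.etree.ElementTree as ET

      def sort_key(elem):
          # Sort by the name="" attribute when present, otherwise by the tag name itself.
          return elem.get("name", elem.tag)

      def sort_children(elem):
          for child in elem:
              sort_children(child)
          elem[:] = sorted(elem, key=sort_key)

      tree = ET.parse("config.xml")
      sort_children(tree.getroot())
      tree.write("config_sorted.xml", encoding="utf-8", xml_declaration=True)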

    Read the article

  • Google Site Search (commercial) not indexing files in sitemap

    - by melat0nin
    I have a client for whom we have purchased Google Site Search. It works well for HTML pages served by the CMS, but files aren't being reliably indexed. I wrote a script to generate an XML feed (sitemap) of all the files in the CMS, which I've plugged in to Google Webmaster Tools for the site. It says that for that sitemap 923 URLs have been submitted, but only 26 have been indexed. The client relies heavily on searching within files, which is why we decided to use Google search, so this is a bit of a problem. Many of the files aren't linked to from any page on the site, as they are old and therefore don't merit having a page of their own. But they still need to be accessible through search for archiving purposes. The file archive XML can be found at www.sniffer.org.uk/file-archive and the standard XML sitemap (of pages) can be found at www.sniffer.org.uk/sitemap.xml. Any thoughts would be much appreciated!

    Read the article

  • Making document storage in Sharepoint a breeze (leave the Web UI behind)

    - by deadlydog
    Hey everyone, I know many of us regularly use Sharepoint for document storage in order to make documents available to several people, have them version controlled, etc. Doing this through the Web UI can be a real headache, especially when you have multiple documents you want to modify or upload, or when IE isn't your default browser. Luckily we can access the Sharepoint library like a regular network drive if we like. Open Sharepoint in Internet Explorer (other browsers don't support the Open with Explorer functionality), navigate to wherever your documents are stored, choose the Library tab, and then click Open with Explorer. This will open the document storage in Explorer and you can interact with the documents just like they were on any other network drive. This makes uploading large numbers of documents or directory structures super easy (a simple copy-paste), and modifying your files nice and easy. As an added bonus, you can drag and drop that location from the address bar in Explorer to the Favorites menu so that it's always easily accessible, and you can leave the Sharepoint Web UI behind completely for modifying your documents. Just click on the new favorite to go straight to your documents. You can even map this folder location as a network drive if you want to have it show up as another drive (e.g. the N: drive). I hope you found this as useful as I did.

    Read the article

  • XML Rules Engine and Validation Tutorial with NIEM

    - by drrwebber
    Our new XML Validation Framework tutorial video is now available. See how to easily integrate code-free adaptive XML validation services into your web services using the Java CAMV validation engine. CAMV allows you to build fault-tolerant content checking with XPath that can optionally use SQL data lookups. This can provide warnings as well as error conditions to tailor your validation layer to exactly meet your business application needs. Also covered is developing test suites using Apache ANT scripting of validations. This allows a community to share sets of conformance checking tests and tools. On the technical XML side, the video introduces XPath validation rules and illustrates the concepts of XML content and structure validation. CAM validation templates allow contextual, parameter-driven dynamic validation services to be implemented, compared to using a static and brittle XSD schema approach. The SQL table lookup and code list validation are discussed and examples presented. Features are highlighted along with a demonstration of the interactive generation of actual live XML data from a SQL data store, followed by validation processing complete with error and warning detection. The presentation provides a primer for developing web service XML validation and integration into a SOA approach, along with examples and resources. Alignment with the NIEM IEPD process for interoperable information exchanges is also discussed, along with NIEM rules services. The CAMV engine is a high performance, scalable Java component for rapidly implementing code-free validation services and methods. CAMV is a next generation WYSIWYG approach that builds on older Schematron coding-based interpretative runtime tools and provides a simpler declarative metaphor for rules definition. See: http://www.youtube.com/user/TheCAMeditor

    Read the article

  • Storing revisions of a document

    - by dev.e.loper
    This is a follow-up question to my original question. I'm thinking of going with generating diffs and storing those diffs in the database 'History' table. I'm using the diff-match-patch library to generate what is called a 'patch'. On every save, I compare the previous and new version and generate this patch. The patch can be used to regenerate the document at a specific point in time. My dilemma is how to store this data. Should I:
    a. Insert a new database record for every patch?
    b. Store these patches in a JavaScript array and store that array in the History table, so there is only one DB History record per document, holding an array of all the patches?
    Concerns:
    a. Too many DB records generated. It will be slow and CPU intensive to query.
    b. Only one record. If the record is somehow corrupted or deleted, the entire revision history is gone.
    I'm looking for suggestions or concerns with either approach.
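
    For what it's worth, the patch round-trip is straightforward with the library's text serialization, which makes option (a), one History row per save, easy to implement. A minimal sketch using the Python port of the diff-match-patch package (the History layout in the comment is illustrative, not prescribed):

      from diff_match_patch import diff_match_patch

      dmp = diff_match_patch()

      def make_patch(old_text, new_text):
          # Serialize the patch to plain text so it can live in a single DB column.
          return dmp.patch_toText(dmp.patch_make(old_text, new_text))

      def apply_patch(old_text, patch_text):
          new_text, results = dmp.patch_apply(dmp.patch_fromText(patch_text), old_text)
          if not all(results):
              raise ValueError("patch did not apply cleanly")
          return new_text

      # One History row per save: (document_id, revision_no, patch_text, saved_at).
      # Reconstructing revision N means applying patches 1..N in order to the base text.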

    Read the article

  • Serve up syntactic XHTML5 using the text/html MIME type?

    - by cboettig
    I have a site currently written with HTML5 tags. I'd like to be able to parse the site as XML, with support for namespaces, etc., to facilitate programmatic extraction of data. Currently I have
      <!DOCTYPE html>
    and
      <meta charset="utf-8">
    which I gather is equivalent in HTML5 to explicitly setting the content type as
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    for my current setup. In order to serve XML it sounds like the right thing to do is
      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml">
    Should I also change my Content-Type to
      <meta http-equiv="content-type" content="application/xhtml+xml; charset=iso-8859-1" />
    or is that not necessary? What is the advantage of having the content type be "application/xhtml+xml"? What is the disadvantage? (It sounds like it may break Internet Explorer rendering of the site, but maybe that information is out of date now?) Many thanks!
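
    The practical payoff of serving well-formed XHTML is that a strict XML toolchain can process the pages. A minimal sketch of namespace-aware extraction, assuming lxml is available and the markup really is well-formed (the file name and XPath are illustrative):

      from lxml import etree

      tree = etree.parse("page.xhtml")  # fails loudly if the markup is not well-formed XML
      ns = {"x": "http://www.w3.org/1999/xhtml"}

      # Namespace-aware queries, e.g. every link target in the document.
      hrefs = tree.xpath("//x:a/@href", namespaces=ns)
      print(hrefs)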

    Read the article

  • What methodology to use for planning/developing a Language conversion project?

    - by user 123
    I am trying to help my friend by creating a part of his ongoing project. What I'm going to do is create a Java parser to break the Java code up into operators, parameters, etc. to build an XML representation. Next I want to create a code generator to convert the parsed Java code to XML conforming to the schema I've created. Finally I want to use an XML stylesheet to transform the XML into another programming language. Basically I just wanted some advice on which methodology/model I should use for planning and developing this project. Is there some benefit to using Agile, for instance?
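
    The last step of that pipeline, transforming the generated XML with a stylesheet, is only a few lines with any XSLT processor. A minimal sketch, assuming lxml as the processor (the file names are illustrative placeholders):

      from lxml import etree

      # Apply an XSLT stylesheet to the XML produced by the parser/generator stages.
      transform = etree.XSLT(etree.parse("java_to_target.xsl"))
      result = transform(etree.parse("parsed_java.xml"))

      with open("output.txt", "wb") as f:
          f.write(bytes(result))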

    Read the article

  • MySQL Binary Storage using BLOB VS OS File System: large files, large quantities, large problems.

    - by Quantico773
    Hi guys,
    Versions I am running (basically the latest of everything): PHP 5.3.1, MySQL 5.1.41, Apache 2.2.14, OS: CentOS (latest).
    Here is the situation. I have thousands of very important documents, ranging from customer contracts to voice signatures (recordings of customer authorisation for contracts), with file types including, but not limited to, jpg, gif, png, tiff, doc, docx, xls, wav, mp3, pdf, etc. All of these documents are currently stored on several servers including Windows 32 bit, CentOS and Mac, among others. Some files are also stored on employees' desktop computers and laptops, and some are still hard copies stored in hundreds of boxes and filing cabinets. Now, because customers or lawyers could demand evidence of contracts at any time, my company has to be able to search and locate the correct document(s) effectively; for this reason ALL of these files have to be digitised (if not already) and correlated into some sort of order for searching and accessing.
    As the programmer, I have created a full Customer Relations Management tool that the whole company uses. This includes Customer Profile management, Order and Job Tracking tools, Job/Sale creation and management modules, etc., and at the moment any file that is needed at a customer profile level (drivers licence, credit authority, etc.) or at a job/sale level (contracts, voice signatures, etc.) can be uploaded to the server and sits in a parent/child hierarchy structure, just like Windows Explorer or any other typical file management model. The structure appears as such:
      drivers_license
      |- DL_123.jpg
      voice_signatures
      |- VS_123.wav
      |- VS_4567.wav
      contracts
    So the files are uploaded using PHP and Apache, and are stored in the file system of the OS. At the time of uploading, certain information about the file(s) is stored in a MySQL database. Some of the information stored is:
      TABLE: FileUploads
      - FileID
      - CustomerID (the customer id that the file belongs to; they all have this)
      - JobID/SaleID (the id of the job/sale associated, if any)
      - FileSize
      - FileType
      - UploadedDateTime
      - UploadedBy
      - FilePath (the directory path the file is stored in)
      - FileName (current file name of the uploaded file, a combination of CustomerID and JobID/SaleID if applicable)
      - FileDescription
      - OriginalFileName (original name of the source file when uploaded, including extension)
    So as you can see, the file is linked to the database by the file name. When I want to provide a customer's files for download to a user, all I have to do is "SELECT * FROM FileUploads WHERE CustomerID = 123 OR JobID = 2345;" and this will output all the file details I require, and with the FilePath and FileName I can provide the link for download: http... server / FilePath / FileName
    There are a number of problems with this method:
    - Storing files in this "database unconscious" environment means data integrity is not kept. If a record is deleted, the file may not be deleted also, or vice versa.
    - Files are strewn all over the place: different servers, computers, etc.
    - The file name is the ONLY thing matching the binary to the database and to the customer profile and customer records.
    - etc., etc.
    There are so many reasons, some of which are described here: http://www.dreamwerx.net/site/article01 . There is also an interesting article here: sietch.net/ViewNewsItem.aspx?NewsItemID=124 .
    SO, after much research I have pretty much decided I am going to store ALL of these files in the database, as a BLOB or LONGBLOB, but there are still many considerations before I do this.
    I know that storing them in the database is a viable option; however, there are a number of methods of storing them. I also know storing them is one thing; correlating and accessing them in a manageable way is another thing entirely. The article provided at this link: dreamwerx.net/site/article01 describes a way of splitting the uploaded binary files into 64kb chunks and storing each chunk with the FileID, and then streaming the actual binary file to the client using headers. This is a really cool idea since it alleviates pressure on the server's memory; instead of loading an entire 100mb file into RAM and then sending it to the client, it is done 64kb at a time. I have tried this (and updated his scripts) and it is totally successful, in a very small frame of testing. So if you agree that this method is a viable, stable and robust long-term option to store moderately large files (1kb to a couple of hundred megs), and large quantities of these files, let me know what other considerations or ideas you have. Also, I am considering getting a current "File Management" PHP script that gives an interface for managing files stored in the file system and converting it to manage files stored in the database. If there is already any software out there that does this, please let me know. I guess there are many questions I could ask, and all the information is up there, so please discuss all aspects of this and we can pass ideas back and forth and teach each other. Cheers, Quantico773
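
    The chunking idea in that article is language-agnostic; the article implements it in PHP, but the scheme itself is just one row per 64kb slice, ordered by a sequence number. A minimal Python sketch of the same idea, assuming the mysql-connector-python driver and a file_chunks(file_id, seq, data) table (table name and credentials are illustrative):

      import mysql.connector

      conn = mysql.connector.connect(user="app", password="secret", database="docs")  # hypothetical credentials
      cur = conn.cursor()

      CHUNK = 64 * 1024  # 64kb per row keeps each INSERT small and memory use flat

      def store_file(file_id, path):
          with open(path, "rb") as f:
              seq = 0
              while True:
                  data = f.read(CHUNK)
                  if not data:
                      break
                  cur.execute("INSERT INTO file_chunks (file_id, seq, data) VALUES (%s, %s, %s)",
                              (file_id, seq, data))
                  seq += 1
          conn.commit()

      def stream_file(file_id, out):
          # Write the file back out chunk by chunk, never holding the whole blob in memory.
          cur.execute("SELECT data FROM file_chunks WHERE file_id = %s ORDER BY seq", (file_id,))
          for (data,) in cur:
              out.write(data)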

    Read the article

  • Is there a distributed VCS that can manage large files?

    - by joelhardi
    Is there a distributed version control system (git, bazaar, mercurial, darcs etc.) that can handle files larger than available RAM? I need to be able to commit large binary files (i.e. datasets, source video/images, archives), but I don't need to be able to diff them, just be able to commit and then update when the file changes. I last looked at this about a year ago, and none of the obvious candidates allowed this, since they're all designed to diff in memory for speed. That left me with a VCS for managing code and something else ("asset management" software or just rsync and scripts) for large files, which is pretty ugly when the directory structures of the two overlap.

    Read the article

  • How to insert large files in mysql database using php? [closed]

    - by anjan
    Hi! I want to upload a large file, 10M max, to my MySQL database. Using .htaccess I changed PHP's own file upload limit to "10485760" = 10M, and I am able to upload files up to 10M in size without any problem. But I cannot insert the file into the database if it is more than 1M in size. I am using file_get_contents to read all the file data and pass it to the insert query as a string to be inserted into a LONGBLOB field. But files of more than 1M in size are not being added to the database, though I can use print_r($_FILES) to confirm that the file uploaded correctly. Any help will be appreciated, and I will need it within the next 6 hours. So, please help! Best regards, Anjan
    * This is a duplicate of http://stackoverflow.com/questions/492549/how-can-i-insert-large-files-in-mysql-db-using-php *

    Read the article

  • Iterating over a large data set in long running Python process - memory issues?

    - by user1094786
    I am working on a long running Python program (part of it is a Flask API, and the other part a realtime data fetcher). Both of my long running processes iterate, quite often (the API one might even do so hundreds of times a second), over large data sets (second-by-second observations of certain economic series, for example 1-5MB worth of data or even more). They also interpolate, compare and do calculations between series, etc. What techniques, for the sake of keeping my processes alive, can I practice when iterating over / passing as parameters / processing these large data sets? For instance, should I use the gc module and collect manually? Any advice would be appreciated. Thanks!
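
    One technique that usually matters more than manual garbage collection is making the iteration lazy, so a full series is never materialized at once; gc.collect() only helps after dropping large reference cycles. A minimal sketch of the generator style (the file format and window size are illustrative assumptions):

      import gc
      from collections import deque

      def observations(path):
          # Yield one parsed observation at a time instead of loading the whole series into a list.
          with open(path) as f:
              for line in f:
                  ts, value = line.rstrip("\n").split(",")
                  yield ts, float(value)

      def rolling_mean(series, window=60):
          buf = deque(maxlen=window)
          for ts, value in series:
              buf.append(value)
              yield ts, sum(buf) / len(buf)

      for ts, avg in rolling_mean(observations("series.csv")):
          pass  # compare / interpolate here; nothing above keeps the full series in memory

      gc.collect()  # rarely needed; measure before adding manual collection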

    Read the article

  • Why do I get an extra newline in the middle of a UTF-8 character with XML::Parser?

    - by René Nyffenegger
    I encountered a problem dealing with UTF-8, XML and Perl. The following is the smallest piece of code and data that reproduces the problem. Here's an XML file that needs to be parsed:
      <?xml version="1.0" encoding="utf-8"?>
      <test>
      <words>???????????? ??????? ????????? ???? ???????????? ??????</words>
      <words>???????????? ??????? ????????? ???? ???????????? ??????</words>
      <words>???????????? ??????? ????????? ???? ???????????? ??????</words>
      [<words> .... </words> repeated 148 times]
      <words>???????????? ??????? ????????? ???? ???????????? ??????</words>
      <words>???????????? ??????? ????????? ???? ???????????? ??????</words>
      </test>
    The parsing is done with this Perl script:
      use warnings;
      use strict;
      use XML::Parser;
      use Data::Dump;

      my $in_words = 0;
      my $xml_parser = new XML::Parser(Style => 'Stream');
      $xml_parser->setHandlers(
          Start   => \&start_element,
          End     => \&end_element,
          Char    => \&character_data,
          Default => \&default);

      open OUT, '>out.txt';
      binmode(OUT, ":utf8");
      open XML, 'xml_test.xml' or die;
      $xml_parser->parse(*XML);
      close XML;
      close OUT;

      sub start_element {
          my ($parseinst, $element, %attributes) = @_;
          if ($element eq 'words') {
              $in_words = 1;
          } else {
              $in_words = 0;
          }
      }

      sub end_element {
          my ($parseinst, $element, %attributes) = @_;
          if ($element eq 'words') {
              $in_words = 0;
          }
      }

      sub default {
          # nothing to see here;
      }

      sub character_data {
          my ($parseinst, $data) = @_;
          if ($in_words) {
              if ($in_words) {
                  print OUT "$data\n";
              }
          }
      }
    When the script is run, it produces the out.txt file. The problem is in this file on line 147. The 22nd character (which in UTF-8 consists of \xd6 \xb8) is split between the \xd6 and \xb8 by a newline. This should not happen. Now, I am interested in whether someone else has this problem or can reproduce it, and why I am getting it. I am running this script on Windows:
      C:\temp>perl -v
      This is perl, v5.10.0 built for MSWin32-x86-multi-thread (with 5 registered patches, see perl -V for more detail)
      Copyright 1987-2007, Larry Wall
      Binary build 1003 [285500] provided by ActiveState http://www.ActiveState.com
      Built May 13 2008 16:52:49

    Read the article

  • View problem - how to show integer from activity in XML?

    - by Oliver Goossens
    Hi there, I started two days ago with Android, went through the Hello Android stuff and also started to read the Hello Android book, which is great. PROBLEM: I use the XML output in my app - a VERY EASY APP. So basically the main activity just tells Android to show the XML layout of main. But what if I have an integer variable defined in the activity code and I want this integer variable to also be shown on the display? How do I PUSH the integer variable to the XML??? Referencing other strings in XML from the main XML is easy - @string/app_name ... but how do I use the integer variable from the activity? Please help. Thank you, Oliver

    Read the article
