Search Results

Search found 36925 results on 1477 pages for 'large xml document'.

Page 162/1477 | < Previous Page | 158 159 160 161 162 163 164 165 166 167 168 169  | Next Page >

  • Downloading large files with AFNetworking

    - by goodfella
    I'm trying to implement downloading of a large file and show the user the current progress, but the block set with -[AFURLConnectionOperation setDownloadProgressBlock:] returns incorrect bytesRead and totalBytesRead values (they are smaller than they should be). For example: if I have a 90 MB file, the last block invocation from setDownloadProgressBlock: gives me a totalBytesRead value of about 30 MB once the download completes. On the other hand, if the file is 2 MB, the last block invocation gives the correct totalBytesRead value of 2 MB. AFNetworking is updated to the latest version from GitHub. If AFNetworking can't do this correctly, what solution can I use? Edit: I've determined that even if the file is not downloaded completely (and this happens every time with a relatively big file), AFNetworking still calls the success block in -[AFHTTPRequestOperation setCompletionBlockWithSuccess:failure:]. I asked a similar question here about this situation, but didn't get any answers. I can compare the downloaded and actual file sizes in code, but AFNetworking has no API for resuming a partial download.

    Read the article

  • large test data for knapsack problem

    - by user347918
    I am a research student looking for large test data for the knapsack problem. I want to test my algorithm against it, but I couldn't find large data sets. I need data with 1000 items; the capacity doesn't matter. The point is that the larger the instances are, the better they exercise my algorithm. Is there any large data set available on the internet? Does anybody know? I need it urgently.
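
    If no published benchmark of that size turns up, one option is to generate random instances and test against those; below is a minimal TypeScript sketch. The weight and value ranges and the capacity-as-half-the-total-weight rule are arbitrary assumptions, not taken from any standard benchmark suite.

      // Minimal sketch: generate a random knapsack instance with n items.
      interface KnapsackInstance {
        weights: number[];
        values: number[];
        capacity: number;
      }

      function randomInstance(n: number, maxWeight = 1000, maxValue = 1000): KnapsackInstance {
        const weights: number[] = [];
        const values: number[] = [];
        for (let i = 0; i < n; i++) {
          weights.push(1 + Math.floor(Math.random() * maxWeight));
          values.push(1 + Math.floor(Math.random() * maxValue));
        }
        // Set capacity to a fraction of the total weight so that neither
        // "take everything" nor "take nothing" is trivially optimal.
        const capacity = Math.floor(weights.reduce((a, b) => a + b, 0) / 2);
        return { weights, values, capacity };
      }

      const instance = randomInstance(1000);
      console.log(instance.weights.length, instance.capacity);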

    Read the article

  • Best place to store large amounts of session data

    - by audiopleb
    I'm building an application that needs to store and re-use large amounts of data per session. For example, the user selects a large list of list items (say 2000 or significantly more), each keyed by a numeric value, then saves that selection, goes off to another page, does something else, and later comes back to the original page, which needs to load their selections again. What is the quickest and most efficient way of storing and reusing that data? In a text file saved with the session id? In a temp db table? In the session data itself (db sessions, so size isn't a limit), using a serialised string, or using gzcompress or gzencode? Any advice or insight would be great! Thank you!
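
    As a rough illustration of the gzcompress option mentioned above, the following TypeScript (Node.js) sketch compares the size of about 2,000 numeric keys stored as raw JSON against the gzipped equivalent. It is only a size comparison, not a recommendation for any particular storage backend; the random key range is an arbitrary assumption.

      import { gzipSync, gunzipSync } from "node:zlib";

      // Simulate ~2000 selected list items keyed by a numeric value.
      const selection: number[] = Array.from({ length: 2000 }, () =>
        Math.floor(Math.random() * 1_000_000)
      );

      const raw = JSON.stringify(selection);
      const compressed = gzipSync(raw);

      console.log(`raw JSON: ${Buffer.byteLength(raw)} bytes`);
      console.log(`gzipped:  ${compressed.length} bytes`);

      // Round-trip to confirm the data survives compression intact.
      const restored: number[] = JSON.parse(gunzipSync(compressed).toString("utf8"));
      console.log(`round-trip ok: ${restored.length === selection.length}`);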

    Read the article

  • document.getElementById returns null for my drop-down select menu

    - by tucson
    I am trying to create a drop-down select menu with custom CSS (similar to the drop-down language selection at http://translate.google.com/#). This is my current HTML code: <ul id="Select"> <li> <select id="myId" onmouseover="mopen('options')" onmouseout="mclosetime()"> <div id="options" onmouseover="mcancelclosetime()" onmouseout="mclosetime()"> <option value="1" selected="selected">One</option> <option value="2">Two</option> <option value="3">Three</option> </div> </select> </li> </ul> and the JavaScript: function mopen(id) { // cancel close timer mcancelclosetime(); // close old layer if(ddmenuitem) ddmenuitem.style.visibility = 'hidden'; // get new layer and show it ddmenuitem = document.getElementById(id); ddmenuitem.style.visibility = 'visible'; } But document.getElementById returns null. However, if I use the code with a div element that is not inside a select list, document.getElementById(id) returns the div properly. How do I fix this? Or is there a better way to create a drop-down select menu like the one at http://translate.google.com?
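
    A likely cause worth checking in the browser's DOM inspector: a div is not valid markup inside a select, so most browsers drop it while parsing, which would leave no element with id "options" in the DOM and make document.getElementById return null. Custom-styled dropdowns like the Google Translate one are typically built from ordinary elements (a list styled as a menu) rather than a native select. A minimal TypeScript sketch of that approach follows; the selectToggle id and the 300 ms close delay are made-up examples.

      // Minimal sketch of a custom dropdown built from plain elements.
      // Assumes markup like:
      //   <button id="selectToggle">Choose…</button>
      //   <ul id="options" style="visibility: hidden"> <li data-value="1">One</li> ... </ul>
      const toggle = document.getElementById("selectToggle");
      const options = document.getElementById("options");

      let closeTimer: number | undefined;

      function open(): void {
        if (closeTimer !== undefined) window.clearTimeout(closeTimer);
        if (options) options.style.visibility = "visible";
      }

      function scheduleClose(): void {
        closeTimer = window.setTimeout(() => {
          if (options) options.style.visibility = "hidden";
        }, 300);
      }

      toggle?.addEventListener("mouseover", open);
      toggle?.addEventListener("mouseout", scheduleClose);
      options?.addEventListener("mouseover", open);
      options?.addEventListener("mouseout", scheduleClose);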

    Read the article

  • setting syntax on in vim with large C file makes complete very slow

    - by skeept
    When I have syntax on in a large C file (about 8000 lines), the ctrl-p and ctrl-n completions are very slow (more than 20 seconds). When I turn syntax off, completion takes less than a second. Any ideas on how to solve this? Thanks! EDIT: I figured out a minimal way of reproducing this behaviour: with an empty .vimrc and .vim folder, the only changed settings are :set syntax on :set foldmethod=syntax and, with a large C file to edit, completion (and even general editing) becomes very, very slow.

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.

    Read the article

  • How do I correctly reference georss: point in my xsd?

    - by Chris Hinch
    I am putting together an XSD schema to describe an existing GeoRSS feed, but I am stumbling trying to use the external georss.xsd to validate an element of type georss:point. I've reduced the problem to the smallest components thusly: XML: <?xml version="1.0" encoding="utf-8"?> <this> <apoint>45.256 -71.92</apoint> </this> XSD: <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:georss="http://www.georss.org/georss"> <xs:import namespace="http://www.georss.org/georss" schemaLocation="http://georss.org/xml/1.1/georss.xsd"/> <xs:element name="this"> <xs:complexType> <xs:sequence> <xs:element name="apoint" type="georss:point"/> </xs:sequence> </xs:complexType> </xs:element> </xs:schema> If I make apoint type "xs:string" instead of "georss:point", the XML validates happily against the XSD, but as soon as I reference an imported type (georss:point), my XML validator (Notepad++ | XML Tools) "cannot parse the schema". What am I doing wrong?

    Read the article

  • Very fast document similarity

    - by peyton
    Hello, I am trying to determine document similarity between a single document and each of a large number of documents (n ~= 1 million) as quickly as possible. More specifically, the documents I'm comparing are e-mails; they are grouped (i.e., there are folders or tags) and I'd like to determine which group is most appropriate for a new e-mail. Fast performance is critical. My a priori assumption is that the cosine similarity between term vectors is appropriate for this application; please comment on whether this is a good measure to use or not! I have already taken into account the following possibilities for speeding up performance: Pre-normalize all the term vectors Calculate a term vector for each group (n ~= 10,000) rather than each e-mail (n ~= 1,000,000); this would probably be acceptable for my application, but if you can think of a reason not to do it, let me know! I have a few questions: If a new e-mail has a new term never before seen in any of the previous e-mails, does that mean I need to re-compute all of my term vectors? This seems expensive. Is there some clever way to only consider vectors which are likely to be close to the query document? Is there some way to be more frugal about the amount of memory I'm using for all these vectors? Thanks!
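
    For reference, cosine similarity over pre-normalized sparse term vectors reduces each comparison to a dot product over the shorter vector, as in the TypeScript sketch below (tokenization and term weighting are assumed to happen elsewhere). On the new-term question: if the vectors hold plain term frequencies, a previously unseen term only adds a dimension in which every existing vector is zero, so the old vectors and their norms need no recomputation; it is idf-style weights that shift when the vocabulary changes.

      // Sparse term vector: term -> weight.
      type TermVector = Map<string, number>;

      // Normalize once so that later comparisons are just dot products.
      function normalize(v: TermVector): TermVector {
        let norm = 0;
        for (const w of v.values()) norm += w * w;
        norm = Math.sqrt(norm) || 1;
        const out: TermVector = new Map();
        for (const [term, w] of v) out.set(term, w / norm);
        return out;
      }

      // Cosine similarity of two pre-normalized vectors = dot product.
      // Iterate over the smaller map; terms absent from either vector contribute 0.
      function cosine(a: TermVector, b: TermVector): number {
        const [small, large] = a.size <= b.size ? [a, b] : [b, a];
        let dot = 0;
        for (const [term, w] of small) {
          const other = large.get(term);
          if (other !== undefined) dot += w * other;
        }
        return dot;
      }

      const query = normalize(new Map([["invoice", 2], ["meeting", 1]]));
      const group = normalize(new Map([["invoice", 5], ["budget", 3]]));
      console.log(cosine(query, group));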

    Read the article

  • Javascript large number array compression

    - by gatapia
    Hi All, I've got a javascript application that sends a large amount of numerical data down the wire. This data is then stored in a database. I am having size issues (too much bandwidth, database getting too big). I am now ready to sacrifice some performance for compression. I was thinking of implementing a base 62 number.toString(62) and parseInt(compressed, 62). This would certainly reduce the size of the data but before I go ahead and do this I thought I would put it to the folks here as I know there must be some outside the box solution I have not considered. The basic specs are: - Compress large number arrays into strings for JSONP transfer (So I think UTF is out) - Be relatively fast, look I'm not expecting same performance as I have now but I also don't want gzip compression either. Any ideas would be greatly appreciated. Thanks Guido Tapia
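
    One caveat about the sketched approach: in JavaScript, Number.prototype.toString and parseInt only accept a radix between 2 and 36, so toString(62) throws a RangeError and a base-62 codec has to be hand-rolled. Below is a minimal TypeScript sketch for non-negative integers; the alphabet ordering and the comma delimiter are arbitrary choices.

      const ALPHABET =
        "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

      // Encode a non-negative integer as base-62 text.
      function encode62(n: number): string {
        if (n === 0) return "0";
        let out = "";
        while (n > 0) {
          out = ALPHABET[n % 62] + out;
          n = Math.floor(n / 62);
        }
        return out;
      }

      // Decode base-62 text back to a number.
      function decode62(s: string): number {
        let n = 0;
        for (const ch of s) n = n * 62 + ALPHABET.indexOf(ch);
        return n;
      }

      // Pack/unpack an array with a delimiter that is not in the alphabet.
      const pack = (nums: number[]): string => nums.map(encode62).join(",");
      const unpack = (s: string): number[] => s.split(",").map(decode62);

      const sample = [0, 61, 62, 123456789];
      console.log(pack(sample));          // "0,z,10,8M0kX"
      console.log(unpack(pack(sample)));  // [0, 61, 62, 123456789]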

    Read the article

  • Download Large Files using Java

    - by angelina
    Dear all, I am building an application in which I want to download large files to a handset (mobile), but if the size of the file is large I get an exception: SocketException - broken pipe. inputStream = new FileInputStream(path); byte[] buffer = new byte[1024]; int bytesRead = 0; do { bytesRead = inputStream.read(buffer, offset, buffer.length); resp.getOutputStream().write(buffer, 0, bytesRead); } while (bytesRead == buffer.length); resp.getOutputStream().flush(); }

    Read the article

  • Streaming large result sets with MySQL

    - by configurator
    I'm developing a Spring application that uses large MySQL tables. When loading large tables, I get an OutOfMemoryException, since the driver tries to load the entire table into application memory. I tried using statement.setFetchSize(Integer.MIN_VALUE); but then every ResultSet I open hangs on close(). Looking online, I found that this happens because it tries loading any unread rows before closing the ResultSet, but that is not the case, since I do this: ResultSet existingRecords = getTableData(tablename); try { while (existingRecords.next()) { // ... } } finally { existingRecords.close(); // this line is hanging, and there was no exception in the try clause } The hang happens for small tables (3 rows) as well, and if I don't close the ResultSet (which happened in one method) then connection.close() hangs.

    Read the article

  • Get count matches in query on large table very slow

    - by Roy Roes
    I have a MySQL table "items" with 2 integer fields: seid and tiid. The table has about 35,000,000 records, so it's very large. Sample (seid, tiid) rows: (1,1), (2,2), (2,3), (2,4), (3,4), (4,1), (4,2). The table has a primary key on both fields, an index on seid and an index on tiid. Someone types in 1 or more tiid values, and I would like to get the seid values with the most matches. For example, when someone types 1,2,3, I would like to get seid 2 and 4 as the result; they both have 2 matches on the tiid values. My query so far: SELECT COUNT(*) as c, seid FROM items WHERE tiid IN (1,2,3) GROUP BY seid HAVING c = (SELECT COUNT(*) as c, seid FROM items WHERE tiid IN (1,2,3) GROUP BY seid ORDER BY c DESC LIMIT 1) But this query is extremely slow, because of the large table. Does anyone know how to construct a better query for this purpose?

    Read the article

  • Browser, upload large file

    - by Mike
    I'm looking for a way to allow a user to upload a large file (~1gb) to my unix server using a web page and browser. There are a lot of examples that illustrate how to do this with a traditional post request, however this doesn't seem like a good idea when the file is this large. I'm looking for recommendations on the best approach. Bonus points if the method includes a way of providing progress information to the user. For now security is not a major concern, as most users who will be using the service can be trusted. We can also assume that the connection between client and host will not be interrupted (or if it is they have to start over).
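
    One common pattern, sketched below in TypeScript, is to split the file on the client with Blob.slice and send the pieces with XMLHttpRequest, whose upload object fires progress events that can drive a progress bar. The /upload-chunk endpoint and its form fields are hypothetical, and a real server would still have to reassemble the chunks.

      // Minimal client-side sketch: chunked upload with progress reporting.
      async function uploadInChunks(
        file: File,
        onProgress: (fraction: number) => void,
        chunkSize = 5 * 1024 * 1024 // 5 MB per request
      ): Promise<void> {
        const total = Math.ceil(file.size / chunkSize);
        for (let i = 0; i < total; i++) {
          const chunk = file.slice(i * chunkSize, (i + 1) * chunkSize);
          await new Promise<void>((resolve, reject) => {
            const xhr = new XMLHttpRequest();
            xhr.open("POST", "/upload-chunk");
            // Overall progress = finished chunks plus the fraction of the current one.
            xhr.upload.onprogress = (e) => {
              if (e.lengthComputable) {
                onProgress((i + e.loaded / e.total) / total);
              }
            };
            xhr.onload = () => (xhr.status < 300 ? resolve() : reject(xhr.status));
            xhr.onerror = () => reject(new Error("network error"));
            const form = new FormData();
            form.append("name", file.name);
            form.append("index", String(i));
            form.append("count", String(total));
            form.append("data", chunk);
            xhr.send(form);
          });
        }
      }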

    Read the article

  • opening a Word document containing macros using a textarea

    - by avani-nature
    Hi friends, I am Avani. 1. I have a Word document which contains macros, and I want to open it in a textarea. 2. I am able to open a Word document which does not contain macros. 3. I am not able to open the document which contains macros. 4. I am using the code below. 5. Please help me, anyone; I am starting to think it is somewhat impossible. { //echo $aud; $filename = 'C:/xampp/htdocs/mts/sites/default/files/a.doc'; //echo $filename; if(isset($_REQUEST['Save'])){ $somecontent = stripslashes($_POST['somecontent']); // Let's make sure the file exists and is writable first. if (is_writable($filename)) { // In our example we're opening $filename in append mode. // The file pointer is at the bottom of the file hence // that's where $somecontent will go when we fwrite() it. if (!$handle = fopen($filename, 'w')) { echo "Cannot open file ($filename)"; exit; } // Write $somecontent to our opened file. if (fwrite($handle, $somecontent) === FALSE) { echo "Cannot write to file ($filename)"; exit; } echo "Success, wrote ($somecontent) to file ($filename) - Continue - "; fclose($handle); } else { echo "The file $filename is not writable"; } } else{ // get contents of a file into a string $handle = fopen($filename, "r"); $somecontent = fread($handle, filesize($filename)); $word = new COM("word.application") or die ("Could not initialise MS Word object."); $word->Documents->Open(realpath("$filename")); // Extract content. $somecontent = (string) $word->ActiveDocument->Content; //echo $somecontent; $word->ActiveDocument->Close(false); $word->Quit(); $word = null; unset($word); fclose($handle); } ? Edit file -------- ?

    Read the article

  • Why do I get Error#2036 in flash AS3 and how to solve this?

    - by sydas
    I'm currently building an application in flash in relation to RSS/XML feeds for a project. I'm stuck at the moment because I keep getting this error: Error opening URL 'http://distilleryimage6.instagram.com/.jpg' Error #2044: Unhandled IOErrorEvent:. text=Error #2036: Load Never Completed. I know my string picURL is not functioning properly, but am I missing something that makes it not functional? This is my code: package { import flash.display.*; import flash.events.*; import flash.net.*; public class instagramFeed extends MovieClip { //link to #rit xml loader public var ritFileRequest = new URLRequest("http://instagram.com/tags/rit/feed/recent.rss"); public var ritXmlLoader = new URLLoader(); //link to #rochester xml loader // same as above public variables public function instagramFeed() { // constructor code trace("About to load..."); //Loads #rit hashtag xml data. ritXmlLoader.load(ritFileRequest); ritXmlLoader.addEventListener(Event.COMPLETE, displayRITInfo); } //focuses on data with #rit hashtag public function displayRITInfo(e:Event):void{ var ritInstagramData:XML = new XML(ritXmlLoader.data); var ritInfoList:XMLList = ritInstagramData.elements().item; trace(ritInfoList); //load the image var ritPic:String = ritInfoList.*::condition.@code; var picURL:String = "http://distilleryimage6.instagram.com/" + ritPic + ".jpg"; trace(picURL); //show the image on the stage! var ritImageRequest:URLRequest = new URLRequest(picURL); var ritImageLoader:Loader = new Loader(); ritImageLoader.load(ritImageRequest); addChild(ritImageLoader); ritImageLoader.x=200; ritImageLoader.y=100; ritImageLoader.scaleX = 3; ritImageLoader.scaleY = 3; } } }

    Read the article

  • Emailing a fixed document through Outlook

    - by MoominTroll
    I've added functionality to an application that prints out a bunch of information to a FixedDocument and sends this off to the printer. This works just fine; however, the request is that there be an in-application function that emails the document using Outlook, and it's here that I come unstuck. I'd very much like to just reuse the class that makes the fixed document for printing to generate the text for the email, but I'm struggling to do this. I've tried the following... Microsoft.Office.Interop.Outlook.Application oApp = new Microsoft.Office.Interop.Outlook.Application(); MailItem email = (MailItem)(oApp.CreateItem(OlItemType.olMailItem)); email.Recipients.Add("[email protected]"); email.Subject = "Hello"; email.Body = "TEST"; FixedDocument doc = CreateReport(); //make my fixed document //this doesn't work, and the parameters it takes suggest it never will email.Attachments.Add(doc, OlAttachmentType.olByValue, 1, null); email.Send(); I can't help but think I'm on completely the wrong tack here, but I don't really want to have to write a bunch of new text formatting (since email.Body only takes a string) when I've already got the content formatted how I want it. Note that the content is all textual, so I don't really care if it gets sent as an attachment or as text in the email's body. Ideally, if it's sent as an attachment it won't be saved anywhere permanently. Any pointers?

    Read the article

  • Secrets of delivering large .NET products?

    - by Joan Venge
    In software companies I have seen that it's really hard to work on very large products where everything depends on everything else. For instance, Microsoft works on C#, F#, .NET, WPF, and Visual Studio, where these things are interconnected. I don't know how many people are involved, but if it's in the hundreds, how do they keep in sync with everything, so that they design and implement features without conflicting with other dependencies and the future plans of other products? I am thinking that if MS is able to do this, they must have a very good system. Any guidelines or secrets, from MS or elsewhere, for delivering very large software products?

    Read the article

  • Alternative to Altova's MissionKit

    - by nomad311
    Anyone know of any good alternatives (other than those listed below, which really are only good at specific XML development tasks)? The why (if you're interested): I've been doing XML development on and off for years now, but someone brought XMLSpy to my attention recently, and it is awesome - the price isn't. Lately I've been using a combination of Notepad++ (modifying XML), EditX (validating/debugging XML), Eclipse (designing schemas) and MS Visual Studio (validating schemas), based on which makes the task(s) easiest. But I've just found out that we will be using XSL transformations to generate XML in the future. I've never used MissionKit before, but I'm just short of positive that XMLSpy replaces all the above-mentioned tools for XML development. And if their XSL tools are anywhere near the caliber of XMLSpy, then, simply put, I need it. I don't believe that I can convince the budgeting types to buy licenses for MissionKit at $1000 each (won't stop me from trying). In the meanwhile, some research on alternatives won't hurt, but a few Google queries have only revealed that not many people pay for Altova's (overpriced?) software, as there are mostly links to P2P sites for downloading a more free-like version of MissionKit.

    Read the article

  • jQuery blockUI in my document.ready function not working

    - by kumar
    Hello friends, I have this code. I added the JS script file to my master page: <script src="/Scripts/Jquery.blockUI.js" type="text/javascript"></script> The code below is in my master page, on document.ready: <script type="text/javascript"> $(document).ready(function () { $.blockUI({ message: $('#question'), css: { width: '275px'} }); }); </script> <div id="question" style="display:none; cursor: default"> <h2 class="padding"><br />An unexpected system error has occurred while processing your request.<br /></h2> <h3>We apologize for this inconvenience.<br /> Please report this error to your system administrator with the following information:<br /><br /> Session id is:</h3> <input type="button" id="OK" value="OK" /> </asp:Content> In my document.ready function, blockUI is not working. Can anybody tell me why it's not working? Thanks.

    Read the article

  • visibility false, XML, as3

    - by pixelGreaser
    Hey, I'm using external XML to set flash vars. Alpha works, but not Visibility. How do I get my swf to respond to visibility? Thanks. XML <?xml version="1.0" encoding="utf-8"?> <SESSION> <BGv TITLE="visible true">false</BGv> <BGa TITLE="alpha 50 percent">.5</BGa> </SESSION> SWF //LISTEN AND LOAD XML var myXML:*; var myLoad:URLLoader = new URLLoader(); myLoad.load(new URLRequest("visible.xml")); myLoad.addEventListener(Event.COMPLETE, parseXML); //PARSE XML function parseXML(e:Event):void { myXML = new XML(e.target.data); //MY TEST var bgA:*; var bgV:*; trace(myXML.BGa.text()); trace(myXML.BGv.text()); bgA =(myXML.BGa.text()); bgV =(myXML.BGv.text()); //MY OBJECT bg.alpha = bgA;//This works great bg.visible = bgV;//This has no effect } OUTPUT .5 false
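
    A hedged guess at the cause: XML text() always yields a string, and in ActionScript, as in JavaScript, any non-empty string, including "false", converts to true, so the visible assignment likely always ends up true, while alpha works because the string ".5" converts cleanly to a number. Comparing the text against "true" explicitly avoids this. A tiny TypeScript illustration of the same pitfall:

      // The same truthiness pitfall exists in JavaScript/TypeScript:
      const fromXml = "false";            // text pulled out of an XML node
      console.log(Boolean(fromXml));      // true  -- any non-empty string is truthy
      console.log(Number(".5"));          // 0.5   -- which is why the alpha case works

      // Explicitly parse the flag instead of assigning the raw string:
      const visible = fromXml.trim().toLowerCase() === "true";
      console.log(visible);               // false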

    Read the article

  • How to factorize common tags with nokogiri builder ?

    - by plafoucriere
    Hi, I'd like to create several builders, with common tags, in order to have xml docs like : <xml version="1.0"?> <a_kind_of_root> <!-- This part is common --> <event_date>20100514</event_date> <event_id>123</event_id> <event_type>Conference</event_type> <!-- This part is specific to the builder --> <my_tag>some text</my_tag> </a_kind_of_root> </xml> <xml version="1.0"?> <another_kind_of_root> <!-- This part is common --> <event_date>20100514</event_date> <event_id>123</event_id> <event_type>Conference</event_type> <!-- This part is specific to the builder --> <my_other_tag>some integer</my_other_tag> </another_kind_of_root> </xml> I don't know how to put the common part inside a Nokogiri::XML::Builder Thanks

    Read the article

  • Comparing large strings in JavaScript with a hash

    - by user4815162342
    I have a form with a textarea that can contain large amounts of content (say, articles for a blog) edited using one of a number of third party rich text editors. I'm trying to implement something like an autosave feature, which should submit the content through ajax if it's changed. However, I have to work around the fact that some of the editors I have as options don't support an "isdirty" flag, or an "onchange" event which I can use to see if the content has changed since the last save. So, as a workaround, what I'd like to do is keep a copy of the content in a variable (let's call it lastSaveContent), as of the last save, and compare it with the current text when the "autosave" function fires (on a timer) to see if it's different. However, I'm worried about how much memory that could take up with very large documents. Would it be more efficient to store some sort of hash in the lastSaveContent variable, instead of the entire string, and then compare the hash values? If so, can you recommend a good javascript library/jquery plugin that implements an appropriate hash for this requirement?
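
    For change detection alone, a short non-cryptographic hash is usually enough, and storing only the hash avoids keeping a second full copy of the document in memory. Below is a minimal TypeScript sketch using the FNV-1a algorithm; the 30-second interval and the helper names in the commented example wiring are hypothetical, and a hash collision would at worst cause a changed document to be skipped for that one autosave cycle.

      // FNV-1a 32-bit hash: cheap change detection, not for security.
      function fnv1a(text: string): number {
        let hash = 0x811c9dc5;
        for (let i = 0; i < text.length; i++) {
          hash ^= text.charCodeAt(i);
          // Multiply by the 32-bit FNV prime, kept in unsigned 32-bit range.
          hash = Math.imul(hash, 0x01000193) >>> 0;
        }
        return hash;
      }

      let lastSavedHash = fnv1a("");

      function autosave(getContent: () => string, save: (text: string) => void): void {
        const content = getContent();
        const hash = fnv1a(content);
        if (hash !== lastSavedHash) {
          save(content);          // e.g. the existing ajax submit
          lastSavedHash = hash;
        }
      }

      // Example wiring with a 30-second timer (interval is an arbitrary choice):
      // setInterval(() => autosave(readEditorContent, submitViaAjax), 30_000);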

    Read the article

  • Visual Studio App.config XML Transformation

    - by João Angelo
    Visual Studio 2010 introduced a much-anticipated feature, Web configuration transformations. This feature allows you to configure a web application project to transform the web.config file during deployment based on the current build configuration (Debug, Release, etc). If you haven’t already tried it, there is a nice step-by-step introduction post to XML transformations on the Visual Web Developer Team Blog, and for a quick reference on the supported syntax you have this MSDN entry. Unfortunately there is some bad news: this new feature is specific to web application projects, since it resides in the Web Publishing Pipeline (WPP), and is therefore not officially supported in other project types such as Windows applications. The keyword here is officially, because Vishal Joshi has a nice blog post on how to extend its support to app.config transformations. However, the proposed workaround requires that the build action for the app.config file be changed to Content instead of the default None. Also, from the comments on the said post, it seems that the workaround will not work for a ClickOnce deployment. Working around this, I tried to remove the build action change requirement and at the same time add ClickOnce support. This effort resulted in a single MSBuild project file (AppConfig.Transformation.targets) available for download from GitHub. It integrates itself into the build process, so in order to add app.config transformation support to an existing Windows Application Project you just need to import this targets file after all the other import directives that already exist in the *.csproj file. Before – Without App.config transformation support ... <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" /> <Target Name="BeforeBuild"> </Target> <Target Name="AfterBuild"> </Target> </Project> After – With App.config transformation support ... <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" /> <Import Project="C:\MyExtensions\AppConfig.Transformation.targets" /> <Target Name="BeforeBuild"> </Target> <Target Name="AfterBuild"> </Target> </Project> As a final disclaimer, the testing time was limited, so if you find any problems, let me know. The MSBuild project invokes the mage tool, so the Framework SDK must be installed. Update: I finally had some spare time and was able to check the problem reported by Geoff Smith, and I believe the problem is solved. The Publish command inside Visual Studio triggers a build workflow different from the one used by the MSBuild command line, and this was causing problems. I posted a new version on GitHub that should now support ClickOnce deployment with app.config transformation from within Visual Studio and the MSBuild command line. Also, here is a link to the sample application used to test the new version using the Publish command, with the install location set to be from a CD-ROM or DVD-ROM and the application set not to check for updates. Thanks to Geoff for spotting the problem.

    Read the article

  • Junk files appeared in my OSX root folder?

    - by user68732
    I see a bunch of junk files in the root folder of my hard drive running Snow Leopard. Here is a screen shot of the files: If I inspect these files, they appear to contain XML with DeviceCertificate, DevicePublicKey, and SystemBUID information, among other XML elements. I do iOS development on this machine. Can anyone explain from where these files came, and if they are an indication of anything serious, such as malware or spyware?

    Read the article
