Search Results

Search found 15401 results on 617 pages for 'memory optimization'.


  • How to optimize this javascript code?

    - by Andrija
    I have a JSP which uses a lot of JavaScript and it's just not fast enough. I would like to optimize it, so here's a part of the code. In the JSP I have the initialization:

        window.onload = function () {
            formCollection.pageSize.value = "<%= pagingSize%>";
            elemCollection = iDom3.Table.all["spis"].XML.DOM;
            <% if (resultList != null) { %>
                elementsNumber = <%= resultList.size() %>;
            <% } else { %>
                elementsNumber = 0;
            <% } %>
            contextPath = "<%= request.getContextPath() %>";
        }

    In my js file I have two types of functions:

        // gets the first element and sets its value to all the others;
        // the selectSingleNode function is used because I use an XSLT
        // transformation to generate the table
        _setTehJed = function(){
            var resultId = formCollection.elements["idTehJedinice_spis_1"].value;
            var resultText = formCollection.elements["tehnicka_spis_1"].value;
            if (resultId != ""){
                var counter = 1;
                while (counter < elementsNumber){
                    counter++;
                    if(formCollection.elements["idTehJedinice_spis_"+counter] != null){
                        formCollection.elements["idTehJedinice_spis_"+counter].value = resultId;
                        formCollection.elements["tehnicka_spis_"+counter].value = resultText;
                    }
                    var node = elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+counter+"']/data[@col = 'tehnicka']/title");
                    node.text = resultText;
                    var node2 = elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+counter+"']/data[@col = 'idTehJedinice']/title");
                    node2.text = resultId;
                }
            }
        }

        // sets the element's checkbox to checked or unchecked
        _SelectCheckRokCuvanja = {
            all : [],
            Item : function (oItem, sId) {
                this.all["spis_"+sId] = oItem.value;
                if (oItem.checked) {
                    elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+sId+"']/data[@col = 'rokCheck']").setAttribute("default", "true");
                } else {
                    elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+sId+"']/data[@col = 'rokCheck']").setAttribute("default", "false");
                }
            }
        }

    I've used these tips:

    http://blogs.msdn.com/b/ie/archive/2006/08/28/728654.aspx
    http://code.google.com/speed/articles/optimizing-javascript.html

    but I still think something could be done, like defining the functions this way. In the JSP:

        window.onload = function () {
            iDom3.DigitalnaArhivaPrihvat.formCollection = document.forms["controller"];
            iDom3.DigitalnaArhivaPrihvat.formCollection.pageSize.value = "<%= pagingSize%>";
            iDom3.DigitalnaArhivaPrihvat.elemCollection = iDom3.Table.all["spis"].XML.DOM;
            <% if (resultList != null) { %>
                iDom3.DigitalnaArhivaPrihvat.elementsNumber = <%= resultList.size() %>
            <% } else { %>
                iDom3.DigitalnaArhivaPrihvat.elementsNumber = 0;
            <% } %>
        }

    In the js:

        iDom3.DigitalnaArhivaPrihvat = {
            formCollection: null,
            elemCollection: null,
            elementsNumber: null,
            _setTehJed : function(){
                var resultId = this.formCollection.elements.idTehJedinice_spis_1.value;
                var resultText = this.formCollection.elements.tehnicka_spis_1.value;
                if (resultId != ""){
                    var counter = 1;
                    while (counter < this.elementsNumber){
                        counter++;
                        if(this.formCollection.elements["idTehJedinice_spis_"+counter] !== null){
                            this.formCollection.elements["idTehJedinice_spis_"+counter].value = resultId;
                            this.formCollection.elements["tehnicka_spis_"+counter].value = resultText;
                        }
                        var node = this.elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+counter+"']/data[@col = 'tehnicka']/title");
                        node.text = resultText;
                        var node2 = this.elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+counter+"']/data[@col = 'idTehJedinice']/title");
                        node2.text = resultId;
                    }
                }
            },
            _SelectCheckRokCuvanja = {
                all : [],
                Item : function (oItem, sId) {
                    this.all["spis_"+sId] = oItem.value;
                    if (oItem.checked) {
                        this.elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+sId+"']/data[@col = 'rokCheck']").setAttribute("default", "true");
                    } else {
                        this.elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+sId+"']/data[@col = 'rokCheck']").setAttribute("default", "false");
                    }
                }
            }
        }

    but the problem is scoping: if I do it like this, the second function does not execute properly. Any suggestions?
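    One possible way around the scoping issue - a minimal sketch only, reusing the names from the question and assuming the iDom3 namespace object already exists - is to keep the nested handler as a normal property (written with ":" rather than "=", since "=" inside an object literal is a syntax error) and to reach the shared fields through the namespace instead of through this, because this inside Item refers to the nested _SelectCheckRokCuvanja object, not to iDom3.DigitalnaArhivaPrihvat:

        iDom3.DigitalnaArhivaPrihvat = {
            formCollection: null,
            elemCollection: null,
            elementsNumber: 0,

            _setTehJed: function () {
                var ns = iDom3.DigitalnaArhivaPrihvat;   // one explicit reference instead of relying on "this"
                var resultId = ns.formCollection.elements["idTehJedinice_spis_1"].value;
                // ... same loop as above, reading ns.elemCollection and ns.elementsNumber ...
            },

            // ":" here, not "="
            _SelectCheckRokCuvanja: {
                all: {},
                Item: function (oItem, sId) {
                    var ns = iDom3.DigitalnaArhivaPrihvat;  // "this" would point at _SelectCheckRokCuvanja
                    this.all["spis_" + sId] = oItem.value;
                    ns.elemCollection
                      .selectSingleNode("/suite/table/rows/row[@id = 'spis_" + sId + "']/data[@col = 'rokCheck']")
                      .setAttribute("default", oItem.checked ? "true" : "false");
                }
            }
        };

    Much of the remaining time is likely spent in the repeated selectSingleNode XPath lookups, so caching those per row would be the next thing to measure.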

    Read the article

  • Improving MySQL Update Query Efficiency

    - by Russell C.
    In our database tables we keep a number of counting columns to help reduce the number of simple lookup queries. For example, in our users table we have columns for the number of reviews written, photos uploaded, friends, followers, etc. To help make sure these stay in sync we have a script that runs periodically to check and update these counting columns. The problem is that now that our database has grown significantly, the queries we have been using take forever to run since they are totally inefficient. I would appreciate someone with more MySQL knowledge than myself recommending how we can improve its efficiency:

        update users
        set photos = (select count(*) from photos
                      where photos.status = "A" AND photos.user_id = users.id)
        where users.status = "A";

    If this were a select statement I would just use a join, but I'm not sure if that is possible with update. Thanks in advance for your help!
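    For reference, MySQL does accept a join in UPDATE, so the counting update can be rewritten without the correlated subquery. A sketch, keeping the table, column, and status values from the query above:

        UPDATE users u
        LEFT JOIN (
            SELECT user_id, COUNT(*) AS photo_count
            FROM photos
            WHERE status = 'A'
            GROUP BY user_id
        ) p ON p.user_id = u.id
        SET u.photos = COALESCE(p.photo_count, 0)
        WHERE u.status = 'A';

    Whether this beats the subquery version depends mostly on having an index such as photos(user_id, status), so it is worth testing both on a copy of the data.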

    Read the article

  • How to batch retrieve documents with mongoDB?

    - by edude05
    Hello everyone, I have an application that queries data from a mongoDB using the mongoDB C# driver, something like this:

        public void main() {
            foreach (int i in listOfKey) {
                list.add(getObjectfromDB(i));
            }
        }

        public myObject getObjFromDb(int primaryKey) {
            document query = new document();
            query["primKey"] = primaryKey;
            document result = mongo["myDatabase"]["myCollection"].findOne(query);
            return parseObject(result);
        }

    On my local (development) machine, getting 100 objects this way takes less than a second. However, I recently moved the database to a server on the internet, and this query takes about 30 seconds to execute for the same number of objects. Furthermore, looking at the mongoDB log, it seems to open about 8-10 connections to the DB to perform this query. So what I'd like to do is query the database for an array of primaryKeys and get them all back at once, then do the parsing in a loop afterwards, using one connection if possible. How could I optimize my query to do so? Thanks, --Michael
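    One way to batch this into a single round trip is an $in query over all the keys, then parsing locally. A sketch against the same Document-based driver style the question uses; the exact method names (Find, Documents, Append) vary between driver versions, so treat them as assumptions:

        // Fetch every requested key in one query instead of one findOne() per key.
        public List<myObject> getObjectsFromDb(List<int> primaryKeys)
        {
            var query = new Document();
            query["primKey"] = new Document().Append("$in", primaryKeys.ToArray());

            var results = new List<myObject>();
            foreach (Document doc in mongo["myDatabase"]["myCollection"].Find(query).Documents)
            {
                results.Add(parseObject(doc));
            }
            return results;
        }

    Making sure primKey is indexed matters even more once the server is remote, since each lookup then costs index time rather than a scan plus network latency.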

    Read the article

  • How much faster is a database running in RAM?

    - by orokusaki
    I'm looking to run PostgreSQL in RAM for a performance enhancement. The database isn't more than 1GB and shouldn't ever grow to more than 5GB. Is it worth doing? Are there any benchmarks out there? Is it buggy? My second major concern is: how easy is it to back things up when it's running purely in RAM? Is this just like using RAM as a tier 1 HD, or is it much more complicated?

    Read the article

  • Anything wrong with this MySQL query? It takes 10+ seconds to load

    - by user345426
    I have a search that is taking 10+ seconds to execute! Keep in mind it is also searching over 200,000 products in the database. I posted the EXPLAIN output and the MySQL query here.

    EXPLAIN output, one row per line (columns: id, select_type, table, type, possible_keys, key, key_len, ref, rows, Extra):

        1  SIMPLE  p    ref     PRIMARY,products_status,prod_prodid_status,product...  products_status            1  const                                    9048  Using where; Using temporary; Using filesort
        1  SIMPLE  v    ref     PRIMARY,vendors_id,vendors_vendorid                    vendors_vendorid           4  rhinomar_rhinomartnew.p.vendors_id          1
        1  SIMPLE  s    ref     products_id                                            products_id                4  rhinomar_rhinomartnew.p.products_id         1
        1  SIMPLE  pd   ref     PRIMARY,products,prod_desc_prodid_prodname             prod_desc_prodid_prodname  4  rhinomar_rhinomartnew.p.products_id         1
        1  SIMPLE  p2c  ref     PRIMARY,ptc_catidx                                     PRIMARY                    4  rhinomar_rhinomartnew.p.products_id         1  Using where; Using index
        1  SIMPLE  c    eq_ref  PRIMARY                                                PRIMARY                    4  rhinomar_rhinomartnew.p2c.categories_id     1  Using where

    MySQL query:

        select p.products_id, p.products_image, p.products_price, p.products_weight,
               p.products_unit_quantity, s.specials_new_products_price, s.status,
               pd.products_name, pd.products_img_alt
        from products p
        left join vendors v ON v.vendors_id = p.vendors_id
        left join specials s on s.products_id = p.products_id
        left join products_description pd on pd.products_id = p.products_id
        left join products_to_categories p2c on p2c.products_id = p.products_id
        left join categories c on c.categories_id = p2c.categories_id
        where (
            (pd.products_name like '%apparel%')
            or p2c.categories_id IN (773, 132, 135, 136, 119, 122, 124, 125, 126, 1749, 1753, 1747, 123, 127, 130, 131, 178, 137, 140, 164, 165, 166, 167, 168, 169, 832, 2045)
            or p.products_id = 'apparel'
            or p.products_model = 'apparel'
            or CONCAT(v.vendors_prefix, '-') = 'apparel'
            or CONCAT(v.vendors_prefix, '-', p.products_id) = 'apparel'
        )
        and p.products_status = '1'
        and c.categories_status = '1'
        group by p.products_id
        order by pd.products_name

    Read the article

  • Fast lookup of an object by a string property

    - by Andrew Kalashnikov
    Hello, colleagues. My task is to quickly find an object by its string property. The object:

        class DicDomain {
            public virtual string Id { get; set; }
            public virtual string Name { get; set; }
        }

    For storing my objects I currently use a List<T> (the variable is named dictionary) where T is DicDomain. I've got 5-10 such lists, each containing about 500-20,000 items. The task is to find objects by Name. This is the code I use now:

        List<T> entities = dictionary.FindAll(s => s.Name.Equals(word, StringComparison.OrdinalIgnoreCase));

    I've got some questions: Is my search speed optimal? I don't think so. Is List a good data structure for this task, or should I look at a hashtable, something sorted, etc.? As for the Find method, maybe I should use string interning? I don't have much experience with these tasks. Can you give me good advice for increasing performance? Thanks
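    If the lookups are exact (case-insensitive) matches on Name, building a lookup table once per list turns each search from a full scan into a hash lookup. A sketch, assuming the list from the question is called domains and that several objects can share a Name:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Build once, reuse for every search against this list.
        ILookup<string, DicDomain> byName =
            domains.ToLookup(d => d.Name, StringComparer.OrdinalIgnoreCase);

        // Each query is now roughly O(1) instead of scanning 500-20,000 items.
        List<DicDomain> entities = byName[word].ToList();

    FindAll itself is fine for one-off scans; it only becomes a problem when the same lists are searched repeatedly, which is when the precomputed lookup pays off.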

    Read the article

  • Is COUNT(*) really expensive?

    - by Anil Namde
    I have a page where I have 4 tabs displaying 4 different reports based off different tables. I obtain the row count of each table using a select count(*) from <table> query and display number of rows available in each table on the tabs. As a result, each page postback causes 5 count(*) queries to be executed (4 to get counts and 1 for pagination) and 1 query for getting the report content. Now my question is: are count(*) queries really expensive -- should I keep the row counts (at least those that are displayed on the tab) in the view state of page instead of querying multiple times? How expensive are COUNT(*) queries ?

    Read the article

  • How can I optimize this loop?

    - by Moshe
    I've got a piece of code that returns a super-long string that represents "search results". Each result is separated by a double HTML break symbol. For example:

        Result1<br><br>Result 2<br><br>Result3

    I've got the following loop that takes each result and puts it into an array, stripping out the break indicator, kBreakIndicator (<br><br>). The problem is that this loop takes way too long to execute. With a few results it's fine, but once you hit a hundred results, it's about 20-30 seconds slower. That's unacceptable performance. What can I do to improve it? Here's my code; content is the original NSString:

        NSMutableArray *results = [[NSMutableArray alloc] init];
        // Loop through the string of results and take each result and put it into an array
        while (![content isEqualToString:@""]) {
            NSRange rangeOfResult = [content rangeOfString:kBreakIndicator];
            NSString *temp = (rangeOfResult.location != NSNotFound) ? [content substringToIndex:rangeOfResult.location] : nil;
            if (temp) {
                [results addObject:temp];
                content = [[[content stringByReplacingOccurrencesOfString:[NSString stringWithFormat:@"%@%@", temp, kBreakIndicator] withString:@""] mutableCopy] autorelease];
            } else {
                [results addObject:[content description]];
                content = [[@"" mutableCopy] autorelease];
            }
        }
        // Do something with the results array.
        [results release];
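    The repeated stringByReplacingOccurrencesOfString: rebuilds the remaining string on every pass, which is what makes the loop roughly quadratic. A sketch of the same split done in one pass with componentsSeparatedByString: (kBreakIndicator and content as in the question):

        NSArray *pieces = [content componentsSeparatedByString:kBreakIndicator];
        NSMutableArray *results = [[NSMutableArray alloc] initWithCapacity:[pieces count]];
        for (NSString *piece in pieces) {
            if ([piece length] > 0) {   // skip empty fragments from leading/trailing separators
                [results addObject:piece];
            }
        }
        // Do something with the results array.
        [results release];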

    Read the article

  • Animate screen while loading textures

    - by Omega
    My RPG-like game has random battles. When the player enters a random battle, it is necessary for my game to load the textures used within that battle (animated monsters, animations, etc). The textures are quite a lot, and rather big (the battles are very graphical intensive). Such process consumes significant time. And while it is loading, the whole screen freezes. The game's map freezes, and the wait time is significant - I personally find it annoying. I can't afford to preload the textures because, after doing some math, I realized: If I preload all the textures at the beginning of the game, the application will definitely crash. If I preload the textures that are used in a specific map when the player enters the map, the application is very likely to crash as well. I can only afford to load the textures when I need them, and dispose of them as soon as the battle ends. I'd prefer to not use a "loading screen" image because it affects my game's design and concept. I want to avoid this approach. If I could do some kind of animation while loading the textures, it would be great, which leads to my question: is that possible? What kind of animation, you ask? Well, how about... you remember when Final Fantasy used to distort the screen while apparently loading the textures? Something like that. But well, distorting is quite a time-consuming process as well, so maybe just a cool frame-by-frame animation or something. While writing this, I realized that I could make small pauses between textures (there are multiple textures), and during such pauses, I update the screen to represent the animation's state. However, this is very unlikely to happen, because each texture is 2048x2048, so the animation would be refreshed at a rather laggy (and annoying) rate. I'd prefer to avoid this as well.

    Read the article

  • Composite primary keys in N-M relation or not?

    - by BerggreenDK
    Let's say we have 3 tables (actually I have 2 at the moment, but this example might illustrate the thought better):

        [Person]
          ID:   int, primary key
          Name: nvarchar(xx)

        [Group]
          ID:   int, primary key
          Name: nvarchar(xx)

        [Role]
          ID:   int, primary key
          Name: nvarchar(xx)

        [PersonGroupRole]
          Person_ID: int, PRIMARY COMPOSITE OR NOT?
          Group_ID:  int, PRIMARY COMPOSITE OR NOT?
          Role_ID:   int, PRIMARY COMPOSITE OR NOT?

    Should any of the 3 IDs in the relation PersonGroupRole be marked as a primary key, or should all 3 be combined into one composite key? What's the real benefit of doing it or not? As far as I know I can join either way, so Person JOIN PersonGroupRole JOIN Group gives me which persons are in which groups, etc. I will be using LINQ/C#/.NET on top of SQL Express and SQL Server, so if there are any reasons regarding the language or SQL that might make the choice clearer, that's the platform I'm asking about. Looking forward to seeing what answers pop up, as I have thought about these primary keys/indexes many times when making combined ones.
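    For what it's worth, a T-SQL sketch of the junction table with the three-column composite primary key (names taken from the example above; the extra index is an assumption about querying by group):

        CREATE TABLE PersonGroupRole (
            Person_ID int NOT NULL,
            Group_ID  int NOT NULL,
            Role_ID   int NOT NULL,
            CONSTRAINT PK_PersonGroupRole
                PRIMARY KEY (Person_ID, Group_ID, Role_ID),  -- composite key: duplicate rows impossible
            CONSTRAINT FK_PGR_Person FOREIGN KEY (Person_ID) REFERENCES Person (ID),
            CONSTRAINT FK_PGR_Group  FOREIGN KEY (Group_ID)  REFERENCES [Group] (ID),
            CONSTRAINT FK_PGR_Role   FOREIGN KEY (Role_ID)   REFERENCES [Role] (ID)
        );

        -- Helps the "which persons are in this group" direction of the join.
        CREATE INDEX IX_PGR_Group ON PersonGroupRole (Group_ID, Person_ID);

    The common alternative is a surrogate identity column as the primary key plus a unique constraint over the three IDs; some ORM tooling is simpler with a single-column key, but the uniqueness guarantee is the practical benefit either way.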

    Read the article

  • Common causes of slow performing jQuery and how to optimize the code?

    - by Polaris878
    Hello, This might be a bit of a vague or general question, but I figure it might be able to serve as a good resource for other jQuery-ers. I'm interested in common causes of slow-running jQuery and how to optimize these cases. We have a good amount of jQuery/JavaScript performing actions on our page... and performance can really suffer with a large number of elements. What are some obvious performance pitfalls you know of with jQuery? What are some general optimizations a jQuery-er can do to squeeze every last bit of performance out of his/her scripts? One example: a developer may use a selector to access an element that is slower than some other way. Thanks
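    One hypothetical illustration of the selector point (not from any particular page): re-running a class-only selector inside a loop versus caching the jQuery object and anchoring the lookup to an id.

        // Slower: the selector is re-evaluated on every iteration, and a
        // class-only selector forces a wide DOM scan in older engines.
        for (var i = 0; i < rows.length; i++) {
            $(".results .item").eq(i).addClass("highlight");
        }

        // Faster: resolve the collection once, reuse it, and start from an id
        // so the engine can begin with getElementById.
        var $items = $("#results").find(".item");
        $items.addClass("highlight");

    The same idea applies to event handling (delegate from a container rather than binding to hundreds of elements) and to batching DOM writes so the page reflows once instead of once per element.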

    Read the article

  • Execute a method as few times as possible - PHP

    - by serhio
    I have a site in multiple languages. I have a method that returns today's currencies in an array, and I then display those currencies in a table.

        // --- en/index.php
        <?php
        include_once "../exchangeRates.php";
        $currencies = ReadExchangeRates();

        // --- fr/index.php
        <?php
        include_once "../exchangeRates.php";
        $currencies = ReadExchangeRates();

        ...
        // somewhere in the page
        <td><?php echo $currencies["eur"]["today"]; ?></td>

    So every time I load en/, fr/, or another language, I request the exchange rates from an external site. Can I optimize this behavior (reading once per day or per session)? Maybe by storing a global variable and checking the update date?
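    A minimal sketch of a once-per-day file cache wrapped around the existing function (ReadExchangeRates() is the name from the question; the cache path, TTL, and serialization format are assumptions):

        <?php
        // exchangeRates.php (sketch)
        function ReadExchangeRatesCached($ttl = 86400) {
            $cacheFile = __DIR__ . "/exchangeRates.cache";

            // Serve the cached copy while it is younger than $ttl seconds.
            if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
                return unserialize(file_get_contents($cacheFile));
            }

            // Otherwise hit the external site once and refresh the cache.
            $currencies = ReadExchangeRates();
            file_put_contents($cacheFile, serialize($currencies));
            return $currencies;
        }
        ?>

    Each language front page then calls ReadExchangeRatesCached() instead, so the external site is hit at most once per day no matter which locale is requested; APC, memcached, or a session entry would work the same way if a file is inconvenient.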

    Read the article

  • Optimize a string.Format + Replace call

    - by acidzombie24
    I have this function. The Visual Studio profiler marked the line with string.Format as hot and as the place where much of my time is spent. How can I write this loop more efficiently?

        public string EscapeNoPredicate(string sz)
        {
            var s = new StringBuilder(sz);
            s.Replace(sepStr, sepStr + sepStr);
            foreach (char v in IllegalChars)
            {
                string s2 = string.Format("{0}{1:X2}", seperator, (Int16)v);
                s.Replace(v.ToString(), s2);
            }
            return s.ToString();
        }
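    Since IllegalChars and seperator presumably do not change between calls, one sketch of a fix is to precompute the escape strings once so the loop never calls string.Format; the names come from the question, and the snippet assumes both fields are static (otherwise build the map in the constructor):

        // Pays the "{0}{1:X2}" formatting cost once per illegal character, not once per call.
        private static readonly Dictionary<char, string> escapeMap =
            IllegalChars.ToDictionary(c => c, c => seperator + ((short)c).ToString("X2"));

        public string EscapeNoPredicate(string sz)
        {
            var s = new StringBuilder(sz);
            s.Replace(sepStr, sepStr + sepStr);
            foreach (var pair in escapeMap)
            {
                s.Replace(pair.Key.ToString(), pair.Value);
            }
            return s.ToString();
        }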

    Read the article

  • MySQL query paralyzes site

    - by nute
    Once in a while, at random intervals, our website gets completely paralyzed. Looking at SHOW FULL PROCESSLIST;, I've noticed that when this happens, there is a specific query that is "Copying to tmp table" for a loooong time (sometimes 350 seconds), and almost all the other queries are "Locked". The part I don't understand is that 90% of the time, this query runs fine. I see it going through in the process list and it finishes pretty quickly most of the time. This query is being called by an ajax call on our homepage to display product recommendations based your browsing history (a la amazon). Just sometimes, randomly (but too often), it gets stuck at "copying to tmp table". Here is a caught instance of the query that was up 109 seconds when I looked: SELECT DISTINCT product_product.id, product_product.name, product_product.retailprice, product_product.imageurl, product_product.thumbnailurl, product_product.msrp FROM product_product, product_xref, product_viewhistory WHERE ( (product_viewhistory.productId = product_xref.product_id_1 AND product_xref.product_id_2 = product_product.id) OR (product_viewhistory.productId = product_xref.product_id_2 AND product_xref.product_id_1 = product_product.id) ) AND product_product.outofstock='N' AND product_viewhistory.cookieId = '188af1efad392c2adf82' AND product_viewhistory.productId IN (24976, 25873, 26067, 26073, 44949, 16209, 70528, 69784, 75171, 75172) ORDER BY product_xref.hits DESC LIMIT 10 Of course the "cookieId" and the list of "productId" changes dynamically depending on the request. I use php with PDO.

    Read the article

  • Writing to pointer out of bounds after malloc() not causing error

    - by marr
    Hi, when I try the code below it works fine. Am I missing something?

        main() {
            int *p;
            p = malloc(sizeof(int));
            printf("size of p=%d\n", sizeof(p));
            p[500] = 999999;
            printf("p[0]=%d", p[500]);
            return 0;
        }

    I tried it with malloc(0*sizeof(int)) or anything but it works just fine. The program only crashes when I don't use malloc at all. So even if I allocate 0 memory for the array p, it still stores values properly. So why am I even bothering with malloc then?
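    The write to p[500] is past the single int that was allocated, which is undefined behaviour; it only appears to work because nothing in a plain build checks heap bounds. One way to make the error visible (assuming gcc or clang; the file name is hypothetical):

        gcc -g -fsanitize=address oob.c -o oob
        ./oob    # AddressSanitizer aborts with a heap-buffer-overflow report at the p[500] write

    Running the unmodified binary under valgrind reports the same invalid write.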

    Read the article

  • How can I speed up line by line reading of an ASCII file? (C++)

    - by Jon
    Here's a bit of code that is a considerable bottleneck after doing some measuring:

        //-----------------------------------------------------------------------------
        // Construct dictionary hash set from dictionary file
        //-----------------------------------------------------------------------------
        void constructDictionary(unordered_set<string> &dict)
        {
            ifstream wordListFile;
            wordListFile.open("dictionary.txt");
            string word;
            while( wordListFile >> word ) {
                if( !word.empty() ) {
                    dict.insert(word);
                }
            }
            wordListFile.close();
        }

    I'm reading in ~200,000 words and this takes about 240 ms on my machine. Is the use of ifstream here efficient? Can I do better? I'm reading about mmap() implementations but I'm not understanding them 100%. The input file is simply text strings with *nix line terminations.
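    One sketch of a lower-overhead variant that avoids mmap(): read the whole file with a single bulk read, split it in memory, and reserve the hash set up front. The file name and the rough word count come from the question; whether this actually beats operator>> on a given machine is something to measure.

        #include <fstream>
        #include <iterator>
        #include <string>
        #include <unordered_set>

        void constructDictionary(std::unordered_set<std::string> &dict)
        {
            std::ifstream in("dictionary.txt", std::ios::binary);
            std::string contents((std::istreambuf_iterator<char>(in)),
                                 std::istreambuf_iterator<char>());   // one bulk read

            dict.reserve(200000);                                     // avoid repeated rehashing

            std::string::size_type start = 0;
            while (start < contents.size()) {
                std::string::size_type end = contents.find('\n', start);
                if (end == std::string::npos) end = contents.size();
                if (end > start)
                    dict.insert(contents.substr(start, end - start));
                start = end + 1;
            }
        }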

    Read the article

  • Scalably processing large amounts of complicated database data in PHP, many times a day

    - by Eph
    I'm soon to be working on a project that poses a problem for me. It's going to require, at regular intervals throughout the day, processing tens of thousands of records, potentially over a million. Processing is going to involve several (potentially complicated) formulas and the generation of several random factors, writing some new data to a separate table, and updating the original records with some results. This needs to occur for all records, ideally, every three hours. Each new user to the site will be adding between 50 and 500 records that need to be processed in such a fashion, so the number will not be steady. The code hasn't been written, yet, as I'm still in the design process, mostly because of this issue. I know I'm going to need to use cron jobs, but I'm concerned that processing records of this size may cause the site to freeze up, perform slowly, or just piss off my hosting company every three hours. I'd like to know if anyone has any experience or tips on similar subjects? I've never worked at this magnitude before, and for all I know, this will be trivial to the server and not pose much of an issue. As long as ALL records are processed before the next three hour period occurs, I don't care if they aren't processed simultaneously (though, ideally, all records belonging to a specific user should be processed in the same batch), so I've been wondering if I should process in batches every 5 minutes, 15 minutes, hour, whatever works, and how best to approach this (and make it scalable in a way that is fair to all users)?

    Read the article

  • C++ pimpl idiom wastes an instruction vs. C style?

    - by Rob
    (Yes, I know that one machine instruction usually doesn't matter. I'm asking this question because I want to understand the pimpl idiom, and use it in the best possible way; and because sometimes I do care about one machine instruction.) In the sample code below, there are two classes, Thing and OtherThing. Users would include "thing.hh". Thing uses the pimpl idiom to hide it's implementation. OtherThing uses a C style – non-member functions that return and take pointers. This style produces slightly better machine code. I'm wondering: is there a way to use C++ style – ie, make the functions into member functions – and yet still save the machine instruction. I like this style because it doesn't pollute the namespace outside the class. Note: I'm only looking at calling member functions (in this case, calc). I'm not looking at object allocation. Below are the files, commands, and the machine code, on my Mac. thing.hh: class ThingImpl; class Thing { ThingImpl *impl; public: Thing(); int calc(); }; class OtherThing; OtherThing *make_other(); int calc(OtherThing *); thing.cc: #include "thing.hh" struct ThingImpl { int x; }; Thing::Thing() { impl = new ThingImpl; impl->x = 5; } int Thing::calc() { return impl->x + 1; } struct OtherThing { int x; }; OtherThing *make_other() { OtherThing *t = new OtherThing; t->x = 5; } int calc(OtherThing *t) { return t->x + 1; } main.cc (just to test the code actually works...) #include "thing.hh" #include <cstdio> int main() { Thing *t = new Thing; printf("calc: %d\n", t->calc()); OtherThing *t2 = make_other(); printf("calc: %d\n", calc(t2)); } Makefile: all: main thing.o : thing.cc thing.hh g++ -fomit-frame-pointer -O2 -c thing.cc main.o : main.cc thing.hh g++ -fomit-frame-pointer -O2 -c main.cc main: main.o thing.o g++ -O2 -o $@ $^ clean: rm *.o rm main Run make and then look at the machine code. On the mac I use otool -tv thing.o | c++filt. On linux I think it's objdump -d thing.o. Here is the relevant output: Thing::calc(): 0000000000000000 movq (%rdi),%rax 0000000000000003 movl (%rax),%eax 0000000000000005 incl %eax 0000000000000007 ret calc(OtherThing*): 0000000000000010 movl (%rdi),%eax 0000000000000012 incl %eax 0000000000000014 ret Notice the extra instruction because of the pointer indirection. The first function looks up two fields (impl, then x), while the second only needs to get x. What can be done?

    Read the article

  • Working with PHP and MySQL - need a good and secure OO design

    - by Andrew
    I am new to PHP - first-time developer. I am working on my web application and it is nearly done; nevertheless, most of my SQL was done directly in code using direct mysql requests. This is the way I approached it: in classes_db.php I declared the db settings and created methods that I use to open and close DB connections. I declare those objects on my regular pages:

        class classes_db {
            public $dbserver   = 'server';
            public $dbusername = 'user';
            public $dbpassword = 'pass';
            public $dbname     = 'db';

            function openDb() {
                $dbhandle = mysql_connect($this->dbserver, $this->dbusername, $this->dbpassword);
                if (!$dbhandle) {
                    die('Could not connect: ' . mysql_error());
                }
                $selected = mysql_select_db($this->dbname, $dbhandle) or die("Could not select the database");
                return $dbhandle;
            }

            function closeDb($con) {
                mysql_close($con);
            }
        }

    On my regular page, I do this:

        <?php
        require 'classes_db.php';
        session_start();

        //create instance of the DB class
        $db = new classes_db();

        //get dbhandle
        $dbhandle = $db->openDb();

        //process query
        $result = mysql_query("update user set username = '" . $usernameFromForm . "' where iduser= " . $_SESSION['user']->iduser);

        //close the connection
        if (isset($dbhandle)) {
            $db->closeDb($dbhandle);
        }
        ?>

    My question is: how do I do this right and make it OO and secure? I know that I need to incorporate prepared queries - what is the best way to do that? Please provide some code.
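    A minimal sketch of the same update done through PDO with a prepared statement (connection values and the user/iduser names are carried over from the snippets above; error handling is kept to a bare minimum):

        <?php
        class Db
        {
            private $pdo;

            public function __construct($host, $name, $user, $pass)
            {
                // Exceptions instead of die(), and real prepared statements.
                $this->pdo = new PDO(
                    "mysql:host=$host;dbname=$name;charset=utf8",
                    $user,
                    $pass,
                    array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION)
                );
            }

            public function updateUsername($userId, $username)
            {
                // Placeholders keep $username out of the SQL string entirely.
                $stmt = $this->pdo->prepare(
                    'UPDATE user SET username = :username WHERE iduser = :id');
                return $stmt->execute(array(':username' => $username, ':id' => $userId));
            }
        }

        // Usage on a page:
        $db = new Db('server', 'db', 'user', 'pass');
        $db->updateUsername($_SESSION['user']->iduser, $usernameFromForm);
        ?>

    The essential points are that the connection is configured to throw exceptions instead of calling die(), and that form input reaches the query only through bound placeholders, never by string concatenation.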

    Read the article

  • Specific template for the first element.

    - by Kalinin
    I have a template:

        <xsl:template match="paragraph">
          ...
        </xsl:template>

    I call it:

        <xsl:apply-templates select="paragraph"/>

    For the first element I need to do:

        <xsl:template match="paragraph[1]">
          ...
          <xsl:apply-templates select="."/><!-- I understand that this does not work -->
          ...
        </xsl:template>

    How can I call <xsl:apply-templates select="paragraph"/> (for the first paragraph element) from the template <xsl:template match="paragraph[1]">? The way I have it now I end up with something like a loop. I currently solve the problem like this (but I do not like it):

        <xsl:for-each select="paragraph">
          <xsl:choose>
            <xsl:when test="position() = 1">
              ...
              <xsl:apply-templates select="."/>
              ...
            </xsl:when>
            <xsl:otherwise>
              <xsl:apply-templates select="."/>
            </xsl:otherwise>
          </xsl:choose>
        </xsl:for-each>
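    One common XSLT 1.0 arrangement that avoids the for-each is to keep the shared markup in a moded template and let a separate paragraph[1] rule add the first-element extras before handing off to it. A sketch (the mode name "common" is arbitrary):

        <!-- extra handling for the first paragraph, then reuse the common rule -->
        <xsl:template match="paragraph[1]">
          ... first-element-only output ...
          <xsl:apply-templates select="." mode="common"/>
          ... first-element-only output ...
        </xsl:template>

        <!-- every other paragraph goes straight to the common rule -->
        <xsl:template match="paragraph">
          <xsl:apply-templates select="." mode="common"/>
        </xsl:template>

        <!-- the shared markup lives here -->
        <xsl:template match="paragraph" mode="common">
          ...
        </xsl:template>

    Because match="paragraph[1]" has a higher default priority than match="paragraph", the first element picks up the special rule automatically; xsl:apply-imports would give the same reuse if the shared rule lived in an imported stylesheet.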

    Read the article

  • Force an object to be allocated on the heap

    - by Warren Seine
    A C++ class I'm writing uses shared_from_this() to return a valid boost::shared_ptr<>. Besides, I don't want to manage memory for this kind of object. At the moment, I'm not restricting the way the user allocates the object, which causes an error if shared_from_this() is called on a stack-allocated object. I'd like to force the object to be allocated with new and managed by a smart pointer, no matter how the user declares it. I thought it could be done through a proxy or an overloaded new operator, but I can't find a proper way of doing that. Is there a common design pattern for such usage? If it's not possible, how can I test it at compile time?
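    A common pattern for this, sketched with placeholder names: make the constructors inaccessible and expose a static factory that returns boost::shared_ptr, so neither a stack instance nor a bare new can exist outside the class:

        #include <boost/shared_ptr.hpp>
        #include <boost/enable_shared_from_this.hpp>

        class Widget : public boost::enable_shared_from_this<Widget>
        {
        public:
            // The only way to obtain a Widget is already wrapped in a shared_ptr,
            // so shared_from_this() is always safe to call on it.
            static boost::shared_ptr<Widget> create()
            {
                return boost::shared_ptr<Widget>(new Widget());
            }

        private:
            Widget() {}                       // blocks stack allocation and raw new outside the class
            Widget(const Widget&);            // declared but not defined: non-copyable
            Widget& operator=(const Widget&);
        };

        // Usage:
        //   boost::shared_ptr<Widget> w = Widget::create();
        //   Widget onStack;                  // compile error: constructor is private

    This is a compile-time guarantee rather than a runtime check, which addresses the second half of the question: the misuse never needs to be detected because it can no longer be expressed.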

    Read the article

  • Why isn't the copy constructor elided here?

    - by Jesse Beder
    (I'm using gcc with -O2.) This seems like a straightforward opportunity to elide the copy constructor, since there are no side effects to accessing the value of a field in a bar's copy of a foo; but the copy constructor is called, since I get the output meep meep!.

        #include <iostream>

        struct foo {
            foo(): a(5) { }
            foo(const foo& f): a(f.a) { std::cout << "meep meep!\n"; }
            int a;
        };

        struct bar {
            foo F() const { return f; }
            foo f;
        };

        int main() {
            bar b;
            int a = b.F().a;
            return 0;
        }

    Read the article

  • How can I store large amounts of data from a database to XML (speed problem, part three)?

    - by Andrija
    After getting some responses, the current situation is that I'm using this tip: http://www.ibm.com/developerworks/xml/library/x-tipbigdoc5.html (Listing 1. Turning ResultSets into XML), and XMLWriter for Java from http://www.megginson.com/downloads/ . Basically, it reads date from the database and writes them to a file as characters, using column names to create opening and closing tags. While doing so, I need to make two changes to the input stream, namely to the dates and numbers. // Iterate over the set while (rs.next()) { w.startElement("row"); for (int i = 0; i < count; i++) { Object ob = rs.getObject(i + 1); if (rs.wasNull()) { ob = null; } String colName = meta.getColumnLabel(i + 1); if (ob != null ) { if (ob instanceof Timestamp) { w.dataElement(colName, Util.formatDate((Timestamp)ob, dateFormat)); } else if (ob instanceof BigDecimal){ w.dataElement(colName, Util.transformToHTML(new Integer(((BigDecimal)ob).intValue()))); } else { w.dataElement(colName, ob.toString()); } } else { w.emptyElement(colName); } } w.endElement("row"); } The SQL that gets the results has the to_number command (e.g. to_number(sif.ID) ID ) and the to_date command (e.g. TO_DATE (sif.datum_do, 'DD.MM.RRRR') datum_do). The problems are that the returning date is a timestamp, meaning I don't get 14.02.2010 but rather 14.02.2010 00:00:000 so I have to format it to the dd.mm.yyyy format. The second problem are the numbers; for some reason, they are in database as varchar2 and can have leading zeroes that need to be stripped; I'm guessing I could do that in my SQL with the trim function so the Util.transformToHTML is unnecessary (for clarification, here's the method): public static String transformToHTML(Integer number) { String result = ""; try { result = number.toString(); } catch (Exception e) {} return result; } What I'd like to know is a) Can I get the date in the format I want and skip additional processing thus shortening the processing time? b) Is there a better way to do this? We're talking about XML files that are in the 50 MB - 250 MB filesize category.
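    On the first question: if the source database is Oracle (the TO_DATE and TO_NUMBER calls suggest it is, though that is an assumption), the SELECT itself can return ready-to-print strings and the Timestamp/BigDecimal branches disappear. A sketch:

        SELECT TO_CHAR(sif.datum_do, 'DD.MM.YYYY')  AS datum_do,   -- already dd.mm.yyyy, no Java-side formatting
               TRIM(LEADING '0' FROM sif.id)        AS id          -- leading zeroes stripped in SQL
        FROM   ...

    Everything then arrives as VARCHAR2, so rs.getString() is enough inside the loop; whether pushing the formatting into SQL is faster overall for 50 MB - 250 MB exports is worth measuring rather than assuming.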

    Read the article
