Search Results

Search found 10417 results on 417 pages for 'large'.

  • Setting up SVN (Subversion) to manage our company's files: how to exclude large files from being versioned

    - by Roeland
    Two other guys and I recently started our own web development company. We each work from home and have decided we want one central location for all of our files: Word documents, spreadsheets, client files, designs, etc. - anything pertaining to our company. I have a pretty solid internet connection and a Windows 2008 server box sitting at home, so I set up a Subversion repository. Our file repository will look something like this:

        Clients
            Company A
                Design (Photoshop files, wireframes, concepts)
                Documents (logins, quotes, proposals, etc.)
                Site Backups
            Company B
                Design
                Documents
                Site Backups
        Prospects
            Company C
            Company D
        Our Company
            Our Website
            Documents (contract, operating procedures)

    My question is in regards to design files. The Photoshop files that my designer works with range in size from 10 MB to 100 MB. I don't think we need to keep these files versioned, as that would eat up space incredibly fast. How do I go about controlling which files get versioned and which files are just stored? What I am thinking is that all documents need to be versioned, and any files other than that should not be. Any help would be appreciated, thanks!

    Edit: I am also curious whether this is the way to go at all. I like this system since it keeps versions of all my documents, and essentially I will have 3 backups in 3 different locations (3 local copies), so no separate backup is needed. I am just unsure how SVN would perform as purely a huge file repository.

    Read the article

  • Is there a performance gain from defining routes in app.yaml versus one large mapping in a WSGIApplication?

    - by jgeewax
    Scenario 1: This involves using one "gateway" route in app.yaml and then choosing the RequestHandler in the WSGIApplication.

    app.yaml:

        - url: /.*
          script: main.py

    main.py:

        import wsgiref.handlers

        from google.appengine.ext import webapp

        class Page1(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 1")

        class Page2(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 2")

        application = webapp.WSGIApplication([
            ('/page1/', Page1),
            ('/page2/', Page2),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    Scenario 2: This involves defining two routes in app.yaml and then two separate scripts for each (page1.py and page2.py).

    app.yaml:

        - url: /page1/
          script: page1.py
        - url: /page2/
          script: page2.py

    page1.py:

        import wsgiref.handlers

        from google.appengine.ext import webapp

        class Page1(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 1")

        application = webapp.WSGIApplication([
            ('/page1/', Page1),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    page2.py:

        import wsgiref.handlers

        from google.appengine.ext import webapp

        class Page2(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 2")

        application = webapp.WSGIApplication([
            ('/page2/', Page2),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    Question: What are the benefits and drawbacks of each pattern? Is one much faster than the other?

    Read the article

  • How do you specify a really large character in UIButton?

    - by Epsilon Prime
    I have a series of buttons that have suit symbols on them. Currently I provide these suit symbols as bitmaps. In preparation for iPhone 4 I'd like to use text instead. However Interface Builder rescales the button to account for whitespace underneath the symbol so I can't get the image to fill the button completely. Any hints on getting Interface Builder to behave?

    Read the article

  • Exhaustive (or even just large) list of languages/stacks used for popular sites?

    - by jacko
    As a result of a conversation with a colleague today, I've been searching (unsuccessfully) for a large-ish list of which technology stacks are being used by popular websites and standalone applications today. We're aware of the big ones like Facebook (php/ ), Twitter (scala/cassandra), Youtube (python/?), Digg (php/cassandra), stackoverflow (.net mvc/sqlserver), but we're looking for a more complete list. It would also be interesting to hear about any stats for desktop apps. Can anyone help?

    Read the article

  • Does anybody have any suggestions on which of these two approaches is better for a large delete?

    - by RPS
    Approach #1:

        DECLARE @count int
        SET @count = 2000

        DECLARE @rowcount int
        SET @rowcount = @count

        WHILE @rowcount = @count
        BEGIN
            DELETE TOP (@count) FROM ProductOrderInfo
            WHERE ProductId = @product_id
              AND bCopied = 1
              AND FileNameCRC = @localNameCrc

            SELECT @rowcount = @@ROWCOUNT

            WAITFOR DELAY '000:00:00.400'
        END

    Approach #2:

        DECLARE @count int
        SET @count = 2000

        DECLARE @rowcount int
        SET @rowcount = @count

        WHILE @rowcount = @count
        BEGIN
            DELETE FROM ProductOrderInfo
            WHERE ProductId = @product_id
              AND FileNameCRC IN (
                  SELECT TOP(@count) FileNameCRC
                  FROM ProductOrderInfo WITH (NOLOCK)
                  WHERE bCopied = 1 AND FileNameCRC = @localNameCrc
              )

            SELECT @rowcount = @@ROWCOUNT

            WAITFOR DELAY '000:00:00.400'
        END
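
    For what it's worth, the same batching idea can also be driven from client code. Below is a minimal sketch using Python and pyodbc (the connection string and parameter values are placeholders, not from the original post); it keeps deleting in small chunks until a batch comes back short, which is the same termination condition as the WHILE loops above.

        import time
        import pyodbc

        BATCH = 2000

        def delete_in_batches(conn_str, product_id, local_name_crc):
            # Each pass removes at most BATCH rows and commits, so locks and the
            # transaction log stay small; stop once a pass comes back short.
            cn = pyodbc.connect(conn_str)
            try:
                cur = cn.cursor()
                while True:
                    cur.execute(
                        "DELETE TOP (?) FROM ProductOrderInfo "
                        "WHERE ProductId = ? AND bCopied = 1 AND FileNameCRC = ?",
                        BATCH, product_id, local_name_crc)
                    deleted = cur.rowcount
                    cn.commit()
                    if deleted < BATCH:
                        break
                    time.sleep(0.4)   # same breathing room as WAITFOR DELAY
            finally:
                cn.close()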

    Read the article

  • How can I get an iterable resultset from the database using PDO, instead of a large array?

    - by Tchalvak
    I'm using PDO inside a database abstraction library function query. I'm using fetchAll(), which if you have a lot of results, can get memory intensive, so I want to provide an argument to toggle between a fetchAll associative array and a pdo result set that can be iterated over with foreach and requires less memory (somehow). I remember hearing about this, and I searched the PDO docs, but I couldn't find any useful way to do that. Does anyone know how to get an iterable resultset back from PDO instead of just a flat array? And am I right that using an iterable resultset will be easier on memory? I'm using Postgresql, if it matters in this case.

    The current query function is as follows, just for clarity.

        /**
         * Running bound queries on the database.
         *
         * Use: query('select all from players limit :count', array('count'=>10));
         * Or:  query('select all from players limit :count', array('count'=>array(10, PDO::PARAM_INT)));
         **/
        function query($sql_query, $bindings=array()){
            DatabaseConnection::getInstance();
            $statement = DatabaseConnection::$pdo->prepare($sql_query);
            foreach($bindings as $binding => $value){
                if(is_array($value)){
                    $statement->bindParam($binding, $value[0], $value[1]);
                } else {
                    $statement->bindValue($binding, $value);
                }
            }
            $statement->execute();
            // TODO: Return an iterable resultset here, and allow switching between array and iterable resultset.
            return $statement->fetchAll(PDO::FETCH_ASSOC);
        }
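
    Not a PDO-specific answer, but the pattern being asked about is easy to see in another language. The sketch below (Python's DB-API with an in-memory SQLite database standing in for PostgreSQL) contrasts fetching everything into one list with yielding rows one at a time, which is what keeps memory roughly flat for large result sets.

        import sqlite3  # stands in for any DB-API driver; the real app uses PostgreSQL

        def query_all(conn, sql, params=()):
            # fetchall(): the whole result set is materialised as one list in memory.
            cur = conn.execute(sql, params)
            return cur.fetchall()

        def query_iter(conn, sql, params=()):
            # Generator: rows are pulled from the cursor one at a time as the caller
            # iterates, so memory use stays roughly constant however many rows match.
            cur = conn.execute(sql, params)
            for row in cur:
                yield row

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE players (name TEXT)")
        conn.executemany("INSERT INTO players VALUES (?)", [("p%d" % i,) for i in range(1000)])
        for row in query_iter(conn, "SELECT name FROM players LIMIT ?", (10,)):
            print(row[0])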

    Read the article

  • What's the most scalable way to handle somewhat large file uploads in a Python webapp?

    - by Jason Baker
    We have a web application that takes file uploads for some parts. The file uploads aren't terribly big (mostly word documents and such), but they're much larger than your typical web request and they tend to tie up our threaded servers (zope 2 servers running behind an Apache proxy). I'm mostly in the brainstorming phase right now and trying to figure out a general technique to use. Some ideas I have are:

        - Using a python asynchronous server like tornado or diesel or gunicorn.
        - Writing something in twisted to handle it.
        - Just using nginx to handle the actual file uploads.

    It's surprisingly difficult to find information on which approach I should be taking. I'm sure there are plenty of details that would be needed to make an actual decision, but I'm more worried about figuring out how to make this decision than anything else. Can anyone give me some advice about how to proceed with this?
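
    To make the first idea above concrete, here is a minimal sketch of a streaming upload handler using Tornado's stream_request_body API. The upload directory and URL are assumptions, and a real handler would still need to parse multipart bodies and authenticate the caller; this only shows how the body can be written to disk as it arrives instead of buffering it in a worker thread.

        import os
        import uuid

        import tornado.ioloop
        import tornado.web

        UPLOAD_DIR = "/tmp/uploads"  # assumed destination, not from the original post


        @tornado.web.stream_request_body
        class UploadHandler(tornado.web.RequestHandler):
            def prepare(self):
                # Called once per request; open the target file so chunks can be
                # appended as they arrive instead of buffering the whole upload.
                os.makedirs(UPLOAD_DIR, exist_ok=True)
                self.path = os.path.join(UPLOAD_DIR, uuid.uuid4().hex)
                self.out = open(self.path, "wb")

            def data_received(self, chunk):
                self.out.write(chunk)  # raw request body, written as it streams in

            def post(self):
                self.out.close()
                self.write({"stored_as": self.path})


        if __name__ == "__main__":
            tornado.web.Application([(r"/upload", UploadHandler)]).listen(8888)
            tornado.ioloop.IOLoop.current().start()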

    Read the article

  • Syncing a large personal school-material Git repo with things such as casual notes? Rsync, wget and Git -- or some ready-made tool?

    - by hhh
    My friend wants to store her school notes electronically and process them fast, with backups. Her repo is already over 2 GB and growing all the time (mostly appended material, i.e. more school notes in different formats: PDFs, pictures and scans, some text files, etc.). Her goal is to process the notes fast. I suggested a command like this:

        # crontab -e
        @weekly wget --random-wait -e robots=off -U mozilla --mirror http://VeryLong.com

    But I think plugging in rsync somewhere could make it work much better with Git. How would you help my friend process and store the school material under Git version control and still keep the size reasonable? Perhaps related:

        - rsync .git directory
        - rsync git big repository
        - Different scope Git/rsync mix for projects with large binaries and text files
        - What's a good way to organize a large collection of personal scripts using git?

    Read the article

  • How to select a number of lines from large text files?

    - by MiNdFrEaK
    I was wondering how to select a number of lines from a certain text file. As an example, I have a text file containing the following lines:

        branch 27 : rect id 23400 rect: -115.475609 -115.474907 31.393650 31.411301
        branch 28 : rect id 23398 rect: -115.474907 -115.472282 31.411301 31.417351
        branch 29 : rect id 23396 rect: -115.472282 -115.468033 31.417351 31.427151
        branch 30 : rect id 23394 rect: -115.468033 -115.458733 31.427151 31.438181
        Non-Leaf Node: level=1 count=31 address=53
        branch 0 : rect id 42 rect: -115.768539 -106.251556 31.425039 31.717550
        branch 1 : rect id 50 rect: -109.559479 -106.009361 31.296721 31.775299
        branch 2 : rect id 51 rect: -110.937401 -106.226143 31.285870 31.771971
        branch 3 : rect id 54 rect: -109.584412 -106.069092 31.285240 31.775230
        branch 4 : rect id 56 rect: -109.570961 -106.000954 31.296721 31.780769
        branch 5 : rect id 58 rect: -115.806213 -106.366188 31.400450 31.687519
        branch 6 : rect id 59 rect: -113.173859 -106.244057 31.297440 31.627750
        branch 7 : rect id 60 rect: -115.811478 -106.278252 31.400450 31.679470
        branch 8 : rect id 61 rect: -109.953888 -106.020111 31.325319 31.775270
        branch 9 : rect id 64 rect: -113.070969 -106.015968 31.331841 31.704750
        branch 10 : rect id 68 rect: -113.065689 -107.034576 31.326300 31.770809
        branch 11 : rect id 71 rect: -112.333344 -106.059860 31.284081 31.662920
        branch 12 : rect id 73 rect: -115.071083 -106.309677 31.267879 31.466850
        branch 13 : rect id 74 rect: -116.094414 -106.286308 31.236290 31.424770
        branch 14 : rect id 75 rect: -115.423264 -106.286308 31.229691 31.415510
        branch 15 : rect id 76 rect: -116.111656 -106.313110 31.259390 31.478300
        branch 16 : rect id 77 rect: -116.247467 -106.309677 31.240231 31.451799
        branch 17 : rect id 78 rect: -116.170792 -106.094543 31.156429 31.391781
        branch 18 : rect id 79 rect: -116.225723 -106.292709 31.239960 31.442850
        branch 19 : rect id 80 rect: -116.268013 -105.769913 31.157240 31.378111
        branch 20 : rect id 82 rect: -116.215424 -105.827202 31.198441 31.383421
        branch 21 : rect id 83 rect: -116.095734 -105.826439 31.197460 31.373819
        branch 22 : rect id 84 rect: -115.423264 -105.815018 31.182640 31.368891
        branch 23 : rect id 85 rect: -116.221527 -105.776512 31.160931 31.389830
        branch 24 : rect id 86 rect: -116.203369 -106.473831 31.168350 31.367611
        branch 25 : rect id 87 rect: -115.727631 -106.501587 31.189100 31.395941
        branch 26 : rect id 88 rect: -116.237289 -105.790756 31.164780 31.358959
        branch 27 : rect id 89 rect: -115.791344 -105.990044 31.072620 31.349529
        branch 28 : rect id 90 rect: -115.736847 -106.495079 31.187969 31.376900
        branch 29 : rect id 91 rect: -115.721710 -106.000130 31.160351 31.354601
        branch 30 : rect id 92 rect: -115.792236 -106.000793 31.166620 31.378811
        Leaf Node: level=0 count=21 address=42
        branch 0 : rect id 18312 rect: -106.412270 -106.401367 31.704750 31.717550
        branch 1 : rect id 18288 rect: -106.278252 -106.253387 31.520321 31.548361

    I just want the lines that are in between the "Non-Leaf Node: level=1" line and the "Leaf Node: level=0" line. There are a lot of segments like this in the file, and I need them all.
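
    One way to do this (a sketch in Python, with the file name as a placeholder): stream the file line by line and collect every block that starts after a "Non-Leaf Node: level=1" line and ends at the next "Leaf Node: level=0" line, so all segments are captured in a single pass without loading the whole file.

        def blocks_between(path, start_marker="Non-Leaf Node: level=1",
                           end_marker="Leaf Node: level=0"):
            """Yield each run of lines found between a start marker and the next
            end marker (markers themselves excluded; keep them by also appending
            `line` in the first two branches)."""
            block = None
            with open(path) as fh:
                for line in fh:
                    if line.startswith(start_marker):
                        block = []                      # a new segment begins
                    elif line.startswith(end_marker) and block is not None:
                        yield block                     # segment finished
                        block = None
                    elif block is not None:
                        block.append(line.rstrip("\n"))

        # Example use; "tree_dump.txt" is a placeholder file name.
        for i, segment in enumerate(blocks_between("tree_dump.txt")):
            print("segment", i, "has", len(segment), "lines")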

    Read the article

  • Why did File::Find finish short of completely traversing a large directory?

    - by Stan
    A directory exists with a total of 2,153,425 items (according to Windows folder Properties). It contains .jpg and .gif image files located within a few subdirectories. The task was to move the images into a different location while querying each file's name to retrieve some relevant info and store it elsewhere. The script that used File::Find finished at 20462 files. Out of curiosity I wrote a tiny recursive function to count the items which returned a count of 1,734,802. I suppose the difference can be accounted for by the fact that it didn't count folders, only files that passed the -f test. The problem itself can be solved differently by querying for file names first instead of traversing the directory. I'm just wondering what could've caused File::Find to finish at a small fraction of all files. The data is stored on an NTFS file system.

    Here is the meat of the script; I don't think including DBI stuff would be relevant since I reran the script with nothing but a counter in process_img() which returned the same number.

        find(\&process_img, $path_from);

        sub process_img {
            eval {
                return if ($_ eq "." or $_ eq "..");
                ## Omitted querying and composing new paths for brevity.
                make_path("$path_to\\img\\$dir_area\\$dir_address\\$type");
                copy($File::Find::name, "$path_to\\img\\$dir_area\\$dir_address\\$type\\$new_name");
            };
            if ($@) { print STDERR "eval barks: $@\n"; return }
        }

    And here is another method I used to count files:

        count_images($path_from);

        sub count_images {
            my $path = shift;
            opendir my $images, $path or die "died opening $path";
            while (my $item = readdir $images) {
                next if $item eq '.' or $item eq '..';
                $img_counter++ && next if -f "$path/$item";
                count_images("$path/$item") if -d "$path/$item";
            }
            closedir $images or die "died closing $path";
        }
        print $img_counter;
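
    Not a fix for the Perl itself, but a quick cross-check can narrow down where the discrepancy lies: the sketch below (Python, with the source path as a placeholder) counts plain files and directories separately with os.walk, which can then be compared against both the File::Find total and the Windows Properties figure.

        import os

        def count_tree(root):
            # Walk the whole tree once, tallying files and directories separately.
            files = dirs = 0
            for _dirpath, dirnames, filenames in os.walk(root):
                dirs += len(dirnames)
                files += len(filenames)
            return files, dirs

        files, dirs = count_tree(r"D:\images")  # hypothetical source directory
        print(f"{files} files, {dirs} directories, {files + dirs} items total")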

    Read the article

  • When I try to pass large amounts of information using the jQuery $.ajax (post) method, it throws a "potentially dangerous client input value" error

    - by dotnetrocks
    I am trying to create a preview window for the text editor on my blog page. I need to send the content to the server to clean up the entered text before I can show it in the preview window. I was trying to use:

        $.ajax({
            type: method,
            url: url,
            data: values,
            success: LoadPageCallback(targetID),
            error: function(msg) {
                $('#' + targetID).attr('innerHTML', 'An error has occurred. Please try again.');
            }
        });

    Whenever I click the preview button it returns an XMLHttpRequest error. The error description:

        Request Validation has detected a potentially dangerous client input value, and processing of the request has been aborted. This value may indicate an attempt to compromise the security of your application, such as a cross-site scripting attack. You can disable request validation by setting validateRequest=false in the Page directive or in the configuration section. However, it is strongly recommended that your application explicitly check all inputs in this case.

    ValidateRequest for the page is already set to false. Is there a way I can set validateRequest to false for the AJAX call? Please advise. Thank you for reading my post.

    Read the article

  • Large scale Merge Replication strategy - what can go wrong?

    - by niidto
    Hi, I'm developing a piece of software that uses Merge Replication and SQL Compact on Windows Mobile 6. At the moment it is running on 5 devices reasonably well. The issues I've come up against are as follows:

        - The schema has had to change a lot, and will continue to have to change as the application evolves.
        - There have been various errors replicating these schema changes down to the device, with uploads failing due to schema inconsistencies.
        - Subscriptions expiring (after 14 days) and being unable to reinitialize with upload - i.e. potential loss of unsynced data up to that point.

    Basically, the worst-case scenario is data loss, and when merge replication fails, there seems to be no way to get the data back off the device. My method until now has been to drop and recreate the subscription on the device. I don't hear of many people doing this, though it seems to solve everything. The long-term plan is to roll this out to 500+ devices. Any advice from people who have undertaken similar projects, on how to minimise data loss and put appropriate error-handling code in place to recover from sync failures, would be much appreciated. James

    Read the article

  • Is it OK to store large objects (a Java component, for example) in an Application variable?

    - by DustMason
    I am developing an app right now which creates and stores a connection to a local XMPP server in the Application scope. The connection methods are stored in a cfc that makes sure the Application.XMPPConnection is connected and authorized each time it is used, and makes use of the connection to send live events to users. As far as I can tell, this is working fine. BUT it hasn't been tested under any kind of stress. My question is: Will this set up cause problems later on? I only ask because I can't find evidence of other people using Application variables in this way. If I weren't using railo I would be using CF's event gateway instead to accomplish the same task.

    Read the article

  • Why does s ++ t not lead to a stack overflow for large s?

    - by martingw
    I'm wondering why

        Prelude> head $ reverse $ [1..10000000] ++ [99]
        99

    does not lead to a stack overflow error. The ++ in the prelude seems straightforward and non-tail-recursive:

        (++) :: [a] -> [a] -> [a]
        (++) []     ys = ys
        (++) (x:xs) ys = x : xs ++ ys

    So just with this, it should run into a stack overflow, right? So I figure it probably has something to do with the GHC magic that follows the definition of ++:

        {-# RULES
        "++" [~1] forall xs ys. xs ++ ys = augment (\c n -> foldr c n xs) ys
          #-}

    Is that what helps avoid the stack overflow? Could someone provide some hint as to what's going on in this piece of code?

    Read the article

  • C# slowdown while creating a bitmap - calculating distances from a large List of places for each pixel

    - by user576849
    I'm creating a graphic of the glow of lights above a geographic location based upon Walker's Law: Skyglow = 0.01 * Population * DistanceFromCenter^-2.5. I have a CSV file of places with 66,000 records using 5 fields (id, name, population, latitude, longitude), parsed on the FormLoad event and stored in List<string[]> placeDataList. Then I set up nested loops to fill in a bitmap using SetPixel. For each pixel on the bitmap, which represents a coordinate on a map (latitude and longitude), the program loops through placeDataList - calculating the distance from that coordinate (pixel) to each place record. The distance (along with population) is used in a calculation to find how much cumulative sky glow is contributed to the coordinate from each place record. So, for every pixel, 66,000 distance calculations must be made. The problem is, this is predictably EXTREMELY slow - on the order of one line of pixels per 30 seconds or so on a 320 pixel wide image. This is unrelated to SetPixel, which I know is also slow, because the speed is similarly slow when adding the distance calculation results to an array. I don't actually need to test all 66,000 records for every pixel, only the records within 150 miles (i.e. no skyglow is contributed to a coordinate from a small town 3000 miles away). But to find which records are within 150 miles of my coordinate I would still need to loop through all the records for each pixel. I can't use a smaller number of records because all 66,000 places contribute to skyglow for SOME coordinate in my map as it loops. This seems like a Catch-22, so I know there must be a better method out there. Like I mentioned, the slowdown is related to how many calculations I'm making per pixel, not anything to do with the bitmap. Any suggestions?

        private void fillPixels(int width)
        {
            Color pixelColor;
            int pixel_w = width;
            int pixel_h = (int)Math.Floor((width * 0.424088664));
            Bitmap bmp = new Bitmap(pixel_w, pixel_h);

            for (int i = 0; i < pixel_h; i++)
                for (int j = 0; j < pixel_w; j++)
                {
                    pixelColor = getPixelColor(i, j);
                    bmp.SetPixel(j, i, pixelColor);
                }

            bmp.Save("Nightfall", System.Drawing.Imaging.ImageFormat.Jpeg);
            pictureBox1.Image = bmp;
            MessageBox.Show("Done");
        }

        private Color getPixelColor(int height, int width)
        {
            int c;
            double glow, d, cityLat, cityLon, cityPop;
            double testLat, testLon;
            int size_h = (int)Math.Floor((size_w * 0.424088664));

            testLat = (height * (24.443136 / size_h)) + 24.548874;
            testLon = (width * (57.636853 / size_w)) - 124.640767;
            glow = 0;

            for (int i = 0; i < placeDataList.Count; i++)
            {
                cityPop = Convert.ToDouble(placeDataList[i][2]);
                cityLat = Convert.ToDouble(placeDataList[i][3]);
                cityLon = Convert.ToDouble(placeDataList[i][4]);

                d = distance(testLat, testLon, cityLat, cityLon, "M");

                if (d < 150)
                    glow = glow + (0.01 * cityPop * Math.Pow(d, -2.5));
            }

            if (glow >= 1)
                glow = 1;

            c = (int)Math.Ceiling(glow * 255);
            return Color.FromArgb(c, c, c);
        }
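
    The usual way out of the Catch-22 is a coarse spatial index: bucket the 66,000 places into a lat/lon grid once, then for each pixel examine only the neighbouring cells. Below is a sketch of the idea in Python rather than the original C# (the cell size, field layout and haversine distance are my assumptions), just to show the shape of the lookup.

        from collections import defaultdict
        from math import floor, radians, sin, cos, asin, sqrt

        CELL = 3.0  # degrees per cell; wide enough that one neighbouring cell
                    # always covers the 150-mile cutoff at these latitudes

        def cell_of(lat, lon):
            return (floor(lat / CELL), floor(lon / CELL))

        def build_index(places):
            # places: iterable of (population, lat, lon) tuples, built once at startup
            grid = defaultdict(list)
            for pop, lat, lon in places:
                grid[cell_of(lat, lon)].append((pop, lat, lon))
            return grid

        def miles(lat1, lon1, lat2, lon2):
            # Haversine great-circle distance in statute miles.
            lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
            a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
            return 3958.8 * 2 * asin(sqrt(a))

        def skyglow(grid, lat, lon):
            # Walker's Law summed over only the 3x3 block of cells around the
            # pixel, instead of all 66,000 records.
            r, c = cell_of(lat, lon)
            glow = 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    for pop, plat, plon in grid.get((r + dr, c + dc), ()):
                        d = miles(lat, lon, plat, plon)
                        if 0 < d < 150:
                            glow += 0.01 * pop * d ** -2.5
            return min(glow, 1.0)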

    Read the article

  • I have an 18MB MySQL table backup. How can I restore such a large SQL file?

    - by Henryz
    I use a WordPress plugin called 'Shopp'. It stores product images in the database rather than the filesystem as standard; I didn't think anything of this until now. I have to move servers, and so I made a backup, but restoring the backup is proving a horrible task. I need to restore one table called wp_shopp_assets, which is 18MB. Any advice is hugely appreciated. Thanks, Henry.
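
    A dump this size usually goes in most easily through the mysql command-line client rather than a web interface; because the rows contain images, the server's max_allowed_packet may also need to be raised. Below is a sketch of driving that client from Python (the user, database and file names are placeholders, not from the original post).

        import subprocess

        # Placeholders; substitute the real credentials, database and dump file.
        DUMP_FILE = "wp_shopp_assets.sql"
        MYSQL_CMD = ["mysql", "--max_allowed_packet=64M", "-u", "wp_user", "-p", "wordpress"]

        with open(DUMP_FILE, "rb") as dump:
            # Equivalent to: mysql -u wp_user -p wordpress < wp_shopp_assets.sql
            # (the -p flag prompts interactively for the password)
            subprocess.run(MYSQL_CMD, stdin=dump, check=True)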

    Read the article

  • How to animate a menu item into a large div (window) using jQuery's animate?

    - by ijjo
    I'm pretty sure this can be done pretty easily with jQuery's animate API, but I'm not good enough to figure it out. What I want to do is this: I have a menu item at the top of the viewport that the user will click on. When the user clicks on it, you'll see something that looks like the div "popping" out of the menu and floating to a particular location on the screen. When I say popping I don't mean anything fancy - I just mean it appears to originate from the menu item and settle somewhere on the screen that I specify. But the important part is that this animation happens really fast. Fast enough that you don't have to actually wait for the window to appear, but slow enough that the eye sees the animation start from the menu item and end up at a new location, where the window will actually appear with a specified height and width. Hope that all made sense?

    Read the article

  • If I take a larger datatype than needed, will it affect performance in SQL Server?

    - by Shantanu Gupta
    If I take a larger datatype where I know a smaller datatype would be sufficient for the possible values I will insert into a table, will it affect performance in SQL Server, in terms of speed or in any other way? E.g. IsActive can be (0, 1, 2, 3), never more than 3 in any case. I know I should use tinyint, but for some reasons (consider it a compulsion) I am making every numeric field bigint and every character field nvarchar(max). Please give statistics if possible, to help me try to overcome that compulsion. I need some solid analysis that can really make someone rethink before picking a datatype.
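
    For a rough sense of the cost, here is a back-of-the-envelope sketch (the row count is an assumption): bigint stores 8 bytes per value against 1 byte for tinyint, and that difference is paid again in every index that includes the column, plus extra pages pulled through the buffer cache.

        ROWS = 10_000_000                    # assumed table size
        TINYINT_BYTES, BIGINT_BYTES = 1, 8   # SQL Server storage per value

        extra = ROWS * (BIGINT_BYTES - TINYINT_BYTES)
        print(f"One over-sized column costs ~{extra / 1024**2:.0f} MB extra "
              f"before indexes and fill-factor overhead")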

    Read the article

  • Best practice for handling memory leaks in large Java projects?

    - by knorv
    In almost all larger Java projects I've been involved with I've noticed that the quality of service of the application degrades with the uptime of the container. This is most probably due to memory leaks in the code. The correct way to solve this problem is obviously to trace back to the root cause of the problem and fix the leaks in the code. The quick and dirty way of solving the problem is simply restarting Tomcat (or whichever servlet container you're using). These are my three questions:

        1. Assume that you choose to solve the problem by tracing the root cause of the problem (the memory leaks), how would you collect data to zoom in on the problem?
        2. Assume that you choose the quick and dirty way of speeding things up by simply restarting the container, how would you collect data to choose the optimal restart cycle?
        3. Have you been able to deploy and run projects over an extended period of time without ever restarting the servlet container to regain snappiness? Or is an occasional servlet restart something that one has to simply accept?

    Read the article

  • Fast inter-process (inter-thread) communication (IPC) on a large multi-CPU system

    - by IPC
    What would be the fastest portable bi-directional mechanism for inter-process communication where threads from one application need to communicate with multiple threads in another application on the same computer, and the communicating threads can be on different physical CPUs? I assume that it would involve shared memory, a circular buffer and shared synchronization mechanisms. But shared mutexes are very expensive for synchronization when the threads are running on different physical CPUs (and there is a limited number of them, too).
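
    To make the shared-memory idea concrete, here is a deliberately tiny sketch in Python (multiprocessing.shared_memory, Python 3.8+): one producer and one consumer exchange messages through a shared buffer, handing ownership back and forth with a single flag byte instead of a kernel mutex. A real low-latency design would use a proper lock-free ring buffer with atomic operations and back-off; this only shows the overall shape.

        import time
        from multiprocessing import Process, shared_memory

        PAYLOAD = 1024        # bytes reserved for the message
        EMPTY, FULL = 0, 1    # states of the flag byte at offset 0

        def producer(name):
            shm = shared_memory.SharedMemory(name=name)
            for i in range(5):
                while shm.buf[0] == FULL:     # wait until the consumer drained the slot
                    time.sleep(0)
                msg = ("message %d" % i).encode()
                shm.buf[1:1 + len(msg)] = msg
                shm.buf[0] = FULL             # hand the slot to the consumer
            shm.close()

        def consumer(name):
            shm = shared_memory.SharedMemory(name=name)
            for _ in range(5):
                while shm.buf[0] == EMPTY:    # wait until the producer filled the slot
                    time.sleep(0)
                print(bytes(shm.buf[1:33]).rstrip(b"\x00").decode())
                shm.buf[0] = EMPTY            # hand the slot back
            shm.close()

        if __name__ == "__main__":
            shm = shared_memory.SharedMemory(create=True, size=1 + PAYLOAD)
            shm.buf[0] = EMPTY
            p = Process(target=producer, args=(shm.name,))
            c = Process(target=consumer, args=(shm.name,))
            p.start()
            c.start()
            p.join()
            c.join()
            shm.close()
            shm.unlink()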

    Read the article

  • Scalably processing a large amount of complicated database data in PHP, many times a day

    - by Eph
    I'm soon to be working on a project that poses a problem for me. It's going to require, at regular intervals throughout the day, processing tens of thousands of records, potentially over a million. Processing is going to involve several (potentially complicated) formulas and the generation of several random factors, writing some new data to a separate table, and updating the original records with some results. This needs to occur for all records, ideally, every three hours. Each new user to the site will be adding between 50 and 500 records that need to be processed in such a fashion, so the number will not be steady. The code hasn't been written, yet, as I'm still in the design process, mostly because of this issue. I know I'm going to need to use cron jobs, but I'm concerned that processing records of this size may cause the site to freeze up, perform slowly, or just piss off my hosting company every three hours. I'd like to know if anyone has any experience or tips on similar subjects? I've never worked at this magnitude before, and for all I know, this will be trivial to the server and not pose much of an issue. As long as ALL records are processed before the next three hour period occurs, I don't care if they aren't processed simultaneously (though, ideally, all records belonging to a specific user should be processed in the same batch), so I've been wondering if I should process in batches every 5 minutes, 15 minutes, hour, whatever works, and how best to approach this (and make it scalable in a way that is fair to all users)?

    Read the article

  • How can I store a large amount of data from a database to XML (speed problem, part three)?

    - by Andrija
    After getting some responses, the current situation is that I'm using this tip: http://www.ibm.com/developerworks/xml/library/x-tipbigdoc5.html (Listing 1. Turning ResultSets into XML), and XMLWriter for Java from http://www.megginson.com/downloads/ . Basically, it reads data from the database and writes it to a file as characters, using column names to create opening and closing tags. While doing so, I need to make two changes to the input stream, namely to the dates and numbers.

        // Iterate over the set
        while (rs.next()) {
            w.startElement("row");
            for (int i = 0; i < count; i++) {
                Object ob = rs.getObject(i + 1);
                if (rs.wasNull()) {
                    ob = null;
                }
                String colName = meta.getColumnLabel(i + 1);
                if (ob != null) {
                    if (ob instanceof Timestamp) {
                        w.dataElement(colName, Util.formatDate((Timestamp) ob, dateFormat));
                    } else if (ob instanceof BigDecimal) {
                        w.dataElement(colName, Util.transformToHTML(new Integer(((BigDecimal) ob).intValue())));
                    } else {
                        w.dataElement(colName, ob.toString());
                    }
                } else {
                    w.emptyElement(colName);
                }
            }
            w.endElement("row");
        }

    The SQL that gets the results has the to_number command (e.g. to_number(sif.ID) ID) and the to_date command (e.g. TO_DATE(sif.datum_do, 'DD.MM.RRRR') datum_do). The problems are that the returned date is a timestamp, meaning I don't get 14.02.2010 but rather 14.02.2010 00:00:000, so I have to format it to the dd.mm.yyyy format. The second problem is the numbers; for some reason they are in the database as varchar2 and can have leading zeroes that need to be stripped. I'm guessing I could do that in my SQL with the trim function, so Util.transformToHTML is unnecessary (for clarification, here's the method):

        public static String transformToHTML(Integer number) {
            String result = "";
            try {
                result = number.toString();
            } catch (Exception e) {}
            return result;
        }

    What I'd like to know is: a) Can I get the date in the format I want and skip additional processing, thus shortening the processing time? b) Is there a better way to do this? We're talking about XML files in the 50 MB - 250 MB filesize range.

    Read the article
