Search Results

Search found 2417 results on 97 pages for 'mb'.


  • Jboss Seam Booking Example Extract Shared Libs From Ear

    - by michael lucas
    The example booking application that JBoss Seam ships with builds into an EAR file of about 7 MB. That's quite a lot if you consider deploying this package to a remote JBoss server and possibly redeploying it many times during regular work. Library files like RichFaces and jsf-facelets make up the lion's share of that EAR size. Why can't we just extract the library files into the jboss-web.deployer directory on a JBoss 4.2.0 GA server?

    Read the article

  • Compute jvm heap size to host web application

    - by Enrique
    Hello, I want to host a web application on a private JVM; the host offers 32, 64, 128, and 256 MB plans. My web application uses Spring, and I store some objects for every logged-in user session. My questions are: how can I profile my web app to see how much heap it needs so I can choose a plan, and how can I simulate hundreds of users logged in at the same time? I'm developing the application using NetBeans 6.7, Java 1.6, and Tomcat 6.0.18. Thank you.
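
    A rough way to estimate the per-session footprint is to snapshot used heap before and after creating a batch of simulated sessions; for realistic concurrent-login load, a tool such as Apache JMeter is the usual choice. A minimal sketch, where UserSession is a hypothetical stand-in for whatever Spring keeps per logged-in user:

        import java.util.ArrayList;
        import java.util.List;

        public class HeapEstimate {
            // Hypothetical stand-in for the objects kept per logged-in user.
            static class UserSession {
                byte[] state = new byte[16 * 1024]; // adjust to match real session data
            }

            static long usedHeap() {
                Runtime rt = Runtime.getRuntime();
                System.gc(); // a hint only, but good enough for a rough estimate
                return rt.totalMemory() - rt.freeMemory();
            }

            public static void main(String[] args) {
                long before = usedHeap();
                List<UserSession> sessions = new ArrayList<UserSession>();
                for (int i = 0; i < 500; i++) {
                    sessions.add(new UserSession()); // simulate 500 logged-in users
                }
                long after = usedHeap();
                System.out.printf("%d sessions, ~%d KB each%n",
                        sessions.size(), (after - before) / sessions.size() / 1024);
            }
        }

    Multiply the per-session figure by the expected number of concurrent users, add the base footprint of Spring and Tomcat, and pick the plan with comfortable headroom.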

    Read the article

  • NetBeans needs Javadoc, Eclipse does not?

    - by ducdeeze
    I just installed NetBeans and want to try it out. Some context tips (the popup Javadoc stuff) work, but nothing detailed. It says "Javadoc not found...". However, in Eclipse (my current IDE) there is no problem showing detailed context tips. Do I HAVE to download the 100+ MB ZIP file to get the Javadoc, or can I point NetBeans at whatever Eclipse is already aware of?

    Read the article

  • GUNZIP / Extract file "portion by portion"

    - by Dave
    Hi. I'm on a shared server with restricted disk space, and I've got a gz file that expands into a HUGE file, bigger than the space I have. How can I extract it portion by portion (let's say 10 MB at a time) and process each portion, without extracting the whole thing even temporarily? And no, this is just ONE huge compressed file, not a set of files, please...
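
    Since gzip is a stream format, it can be decompressed incrementally without the expanded file ever touching the disk. A minimal Java sketch, where processChunk is a hypothetical placeholder for the per-portion processing:

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.util.zip.GZIPInputStream;

        public class ChunkedGunzip {
            static final int CHUNK = 10 * 1024 * 1024; // 10 MB per portion

            public static void main(String[] args) throws IOException {
                GZIPInputStream in = new GZIPInputStream(new FileInputStream("huge.gz"));
                try {
                    byte[] buf = new byte[CHUNK];
                    int filled = 0;
                    int n;
                    // Fill up to CHUNK bytes, hand them off, and reuse the buffer.
                    while ((n = in.read(buf, filled, CHUNK - filled)) != -1) {
                        filled += n;
                        if (filled == CHUNK) {
                            processChunk(buf, filled);
                            filled = 0;
                        }
                    }
                    if (filled > 0) processChunk(buf, filled); // last partial portion
                } finally {
                    in.close();
                }
            }

            static void processChunk(byte[] data, int len) {
                // hypothetical placeholder: parse, filter, or upload the portion here
            }
        }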

    Read the article

  • Lightest Database to be packed with an application

    - by Yatendra Goel
    I am developing a Java desktop application and want a lightweight database that can be used with Hibernate and packed with the application. I was going to use the Derby database; its size is around 2 MB. But before that I wanted the views of the experts on SO. Will it work with Hibernate? Actually, I am new to Hibernate and have read that it requires a dialect for each database, so does Hibernate have a dialect for Derby?
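
    For reference, Hibernate does ship a Derby dialect (org.hibernate.dialect.DerbyDialect), so embedded Derby can be wired up programmatically along these lines (the connection URL and the commented-out entity class are illustrative):

        import org.hibernate.SessionFactory;
        import org.hibernate.cfg.Configuration;

        public class DerbyHibernateSetup {
            public static SessionFactory build() {
                Configuration cfg = new Configuration();
                cfg.setProperty("hibernate.dialect", "org.hibernate.dialect.DerbyDialect");
                cfg.setProperty("hibernate.connection.driver_class",
                                "org.apache.derby.jdbc.EmbeddedDriver");
                // ;create=true makes embedded Derby create the database on first run
                cfg.setProperty("hibernate.connection.url", "jdbc:derby:appdb;create=true");
                cfg.setProperty("hibernate.hbm2ddl.auto", "update");
                // cfg.addAnnotatedClass(SomeEntity.class); // map your entities here
                return cfg.buildSessionFactory();
            }
        }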

    Read the article

  • Copying MySQL backup to another server

    - by Yeti
    I'm new to SSH. How do I copy a .gz file from one server to another using SSH? I'm using cron to back up MySQL databases and want to also automate the process of copying the .gz files to a different web host. Any information on the limit of the file size that can be copied would also be great. The backup file sizes range from 100 MB to a few GB.
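
    The usual tool here is scp (or rsync, which can resume interrupted transfers of multi-GB files); neither imposes a size limit of its own beyond what the filesystems allow. A hedged sketch of a crontab entry, with the schedule, paths, and host purely illustrative, and assuming passwordless SSH keys are already set up so it can run unattended:

        # Illustrative: every day at 02:30, push last night's dump to the backup host.
        30 2 * * * scp /var/backups/mysql/db_backup.sql.gz user@backuphost.example.com:/backups/

    For very large files, replacing scp with rsync -e ssh --partial lets an interrupted copy resume instead of starting over.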

    Read the article

  • Ria Services - Delay load images (or any heavy binary data)

    - by vidalsasoon
    I have an RIA service that returns image data (byte[]) and the caption of the image (string) from SQL Server. The data part can sometimes be a few MB, so it can take quite a while to load. I would like to load the bytes independently of the caption (which loads very fast). Is there a way to do this without having to create a second image context?

    Read the article

  • Combining FileStream and MemoryStream to avoid disk accesses/paging while receiving gigabytes of data?

    - by w128
    I'm receiving a file as a stream of byte[] data packets (total size isn't known in advance) that I need to store somewhere before processing it immediately after it's been received (I can't do the processing on the fly). Total received file size can vary from as small as 10 KB to over 4 GB.

    One option for storing the received data is to use a MemoryStream, i.e. a sequence of MemoryStream.Write(bufferReceived, 0, count) calls to store the received packets. This is very simple, but obviously will result in an out-of-memory exception for large files.

    An alternative is to use a FileStream, i.e. FileStream.Write(bufferReceived, 0, count). This way, no out-of-memory exceptions will occur, but what I'm unsure about is the performance cost of disk writes (which I don't want to occur while plenty of memory is still available). I'd like to avoid disk access as much as possible, but I don't know of a way to control this.

    I did some testing, and most of the time there seems to be little performance difference between, say, 10,000 consecutive calls of MemoryStream.Write() vs FileStream.Write(), but a lot seems to depend on buffer size and the total amount of data in question (i.e. the number of writes). Obviously, MemoryStream size reallocation is also a factor.

    Does it make sense to use a combination of MemoryStream and FileStream, i.e. write to a memory stream by default, but once the total amount of data received is over e.g. 500 MB, switch to a FileStream; then read in chunks from both streams to process the received data (first process the 500 MB from the MemoryStream, dispose of it, then read from the FileStream)?

    Another solution is a custom memory stream implementation that doesn't require contiguous address space for its internal array allocation (i.e. a linked list of memory streams); this way, at least in 64-bit environments, out-of-memory exceptions should no longer be an issue. Con: extra work, more room for mistakes.

    So how do FileStream vs MemoryStream reads/writes behave in terms of disk access and memory caching, i.e. the data size/performance balance? I would expect that as long as enough RAM is available, FileStream would internally read/write from memory (the OS cache) anyway, and virtual memory would take care of the rest. But I don't know how often FileStream will explicitly access the disk when being written to. Any help would be appreciated.
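
    The hybrid "buffer in memory, spill to disk past a threshold" approach is a known pattern and is language-neutral; below is a minimal sketch in Java (matching the other code on this page; the same shape works in .NET with MemoryStream/FileStream). The 500 MB threshold and temp-file naming are illustrative. For reference, Apache Commons IO ships a ready-made version of this idea as DeferredFileOutputStream.

        import java.io.ByteArrayOutputStream;
        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.OutputStream;

        /** Buffers writes in memory, then spills everything to a temp file past a threshold. */
        public class SpillingOutputStream extends OutputStream {
            private static final int THRESHOLD = 500 * 1024 * 1024; // illustrative: 500 MB
            private ByteArrayOutputStream memory = new ByteArrayOutputStream();
            private OutputStream file; // non-null once we have spilled to disk
            private long written;

            @Override
            public void write(byte[] b, int off, int len) throws IOException {
                if (file == null && written + len > THRESHOLD) {
                    // Spill: flush the in-memory prefix to a temp file and continue there.
                    File tmp = File.createTempFile("recv", ".bin");
                    file = new FileOutputStream(tmp);
                    memory.writeTo(file);
                    memory = null; // let the big in-memory buffer be collected
                }
                if (file != null) {
                    file.write(b, off, len);
                } else {
                    memory.write(b, off, len);
                }
                written += len;
            }

            @Override
            public void write(int b) throws IOException {
                write(new byte[] { (byte) b }, 0, 1);
            }

            @Override
            public void close() throws IOException {
                if (file != null) file.close();
            }
        }

    As for the caching question: plain file writes normally land in the OS page cache first, so FileStream/FileOutputStream writes only reach the physical disk when the OS flushes them, unless write-through is explicitly requested.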

    Read the article

  • Jaxb to generate the XML directly to the OutputStream

    - by sonu
    Hi, I have a 500 MB CSV file. I need to convert it into an XML file. I am using JAXB to create the XML file. It works fine for small amounts of data, but for larger amounts, like 300 MB, it throws an out-of-memory exception. Can anyone tell me how I can create each element and write it to a file with JAXB, without building the whole tree? Thanks, Sonu
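
    JAXB can marshal one object at a time into an already-open stream if the marshaller is put into fragment mode, so the whole tree is never held in memory. A minimal sketch, where Row is a hypothetical class mapped from one CSV line:

        import java.io.FileOutputStream;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.Marshaller;
        import javax.xml.bind.annotation.XmlRootElement;
        import javax.xml.stream.XMLOutputFactory;
        import javax.xml.stream.XMLStreamWriter;

        public class StreamingJaxb {
            @XmlRootElement(name = "row")
            public static class Row { // hypothetical element built from one CSV line
                public String name;
                public String value;
            }

            public static void main(String[] args) throws Exception {
                Marshaller m = JAXBContext.newInstance(Row.class).createMarshaller();
                m.setProperty(Marshaller.JAXB_FRAGMENT, Boolean.TRUE); // no XML declaration per element

                FileOutputStream out = new FileOutputStream("big.xml");
                XMLStreamWriter xml =
                        XMLOutputFactory.newInstance().createXMLStreamWriter(out, "UTF-8");
                xml.writeStartDocument("UTF-8", "1.0");
                xml.writeStartElement("rows"); // root element written by hand
                for (int i = 0; i < 1000000; i++) { // in reality: one Row per parsed CSV line
                    Row r = new Row();
                    r.name = "n" + i;
                    r.value = "v" + i;
                    m.marshal(r, xml); // marshal one element; it can be GC'd right after
                }
                xml.writeEndElement();
                xml.writeEndDocument();
                xml.close();
                out.close();
            }
        }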

    Read the article

  • Fade unfocused GNU Emacs frame (X window)

    - by Mischa Arefiev
    Is it possible to make GNU Emacs 24 dim unfocused windows a bit? For example, I can set my rxvt-unicode clients to become darker when their windows don't have focus with this string in ~/.Xdefaults: URxvt*fading: 50 It greatly reduces discomfort when you have a lot of terminal windows on 2+ monitors. I would like a similar feature in Emacs, but couldn't google up anything. Here is how it looks with urxvt (png, 1.43 MB).

    Read the article

  • Android googlemap Out of memory

    - by Xiaofeng
    Hi, I made an Android application with the Google Maps API and I draw some 16x16 PNGs (about 200 of them) on an overlay. When I move or zoom the map view, an "out of memory" error occurs very often. I also used the Google Maps application on my HTC itself; it seems to use about 14+ MB of memory and never hits "out of memory". How can I reduce memory usage with the Google Maps API, or how can I raise the Android app's memory limit? Thanks a lot!
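
    A common culprit with the v1 Maps API is decoding a separate Bitmap for every marker. Since all ~200 icons are the same 16x16 PNG, one option is to decode it once and let a single Drawable back every item via the overlay's default marker. A hedged sketch (R.drawable.marker and the rest of the setup are illustrative):

        import android.graphics.drawable.Drawable;
        import com.google.android.maps.GeoPoint;
        import com.google.android.maps.ItemizedOverlay;
        import com.google.android.maps.OverlayItem;
        import java.util.ArrayList;
        import java.util.List;

        public class SharedMarkerOverlay extends ItemizedOverlay<OverlayItem> {
            private final List<OverlayItem> items = new ArrayList<OverlayItem>();

            public SharedMarkerOverlay(Drawable sharedMarker) {
                // One Drawable instance backs all ~200 items: one bitmap in memory.
                super(boundCenterBottom(sharedMarker));
            }

            public void add(GeoPoint point) {
                items.add(new OverlayItem(point, null, null));
                populate();
            }

            @Override
            protected OverlayItem createItem(int i) {
                return items.get(i);
            }

            @Override
            public int size() {
                return items.size();
            }
        }

    It would be created once, e.g. with new SharedMarkerOverlay(getResources().getDrawable(R.drawable.marker)), and reused across zoom levels rather than rebuilt.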

    Read the article

  • Unable to upload large files to Google Docs

    - by Preeti
    I am uploading documents to Google Docs like this:

        DocumentsService myService = new DocumentsService("");
        myService.setUserCredentials("[email protected]", password);
        DocumentEntry newEntry = myService.UploadDocument(@"C:\Sample.txt", "Sample.txt");

    But when I try to upload a file of 3 MB I get an exception:

        An unhandled exception of type 'Google.GData.Client.GDataRequestException' occurred in Google.GData.Client.dll
        Additional information: Execution of request failed: http://docs.google.com/feeds/documents/private/full

    How can I upload large files to Google Docs? I am using version 2 of the Google API.

    Read the article

  • .NET object creation and generations

    - by nimoraca
    Is there a way to tell .NET to allocate a new object directly in the generation 2 heap? I have a problem where I need to allocate approximately 200 MB of objects, do something with them, and throw them away. What happens here is that all the data gets copied twice (from gen0 to gen1 and then from gen1 to gen2).

    Read the article

  • Loading Huge Image

    - by japs
    Hi, I want to load an image of size 2550x3300 (a 1.7 MB file). I have loaded the image into a UIImageView and the application crashes due to low memory. I have now loaded it into a UIWebView, which works fine, but I also have to save this image into a PDF file in the local resources, and while I'm saving the UIImage in the background the app crashes due to low memory. Does anyone have a suggestion or help to solve this issue? Thank you.

    Read the article

  • How can I store a large amount of data from a database to XML (speed problem, part three)?

    - by Andrija
    After getting some responses, the current situation is that I'm using this tip: http://www.ibm.com/developerworks/xml/library/x-tipbigdoc5.html (Listing 1, "Turning ResultSets into XML"), and the XMLWriter for Java from http://www.megginson.com/downloads/ . Basically, it reads data from the database and writes it to a file as characters, using column names to create opening and closing tags. While doing so, I need to make two changes to the input stream, namely to the dates and the numbers.

        // Iterate over the set
        while (rs.next()) {
            w.startElement("row");
            for (int i = 0; i < count; i++) {
                Object ob = rs.getObject(i + 1);
                if (rs.wasNull()) {
                    ob = null;
                }
                String colName = meta.getColumnLabel(i + 1);
                if (ob != null) {
                    if (ob instanceof Timestamp) {
                        w.dataElement(colName, Util.formatDate((Timestamp) ob, dateFormat));
                    } else if (ob instanceof BigDecimal) {
                        w.dataElement(colName, Util.transformToHTML(new Integer(((BigDecimal) ob).intValue())));
                    } else {
                        w.dataElement(colName, ob.toString());
                    }
                } else {
                    w.emptyElement(colName);
                }
            }
            w.endElement("row");
        }

    The SQL that gets the results uses the to_number function (e.g. to_number(sif.ID) ID) and the to_date function (e.g. TO_DATE(sif.datum_do, 'DD.MM.RRRR') datum_do). The problems are that the returned date is a timestamp, meaning I don't get 14.02.2010 but rather 14.02.2010 00:00:000, so I have to format it to the dd.mm.yyyy format. The second problem is the numbers; for some reason they are stored in the database as varchar2 and can have leading zeroes that need to be stripped. I'm guessing I could do that in my SQL with the trim function, which would make Util.transformToHTML unnecessary (for clarification, here's the method):

        public static String transformToHTML(Integer number) {
            String result = "";
            try {
                result = number.toString();
            } catch (Exception e) {}
            return result;
        }

    What I'd like to know is: a) Can I get the date in the format I want and skip the additional processing, thus shortening the processing time? b) Is there a better way to do this? We're talking about XML files in the 50 MB - 250 MB size range.
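
    On question a), one option (assuming Oracle, which TO_DATE and varchar2 suggest) is to push both conversions into the query itself with TO_CHAR and TRIM, so the driver hands back ready-made strings and the Timestamp/BigDecimal branches disappear entirely. A sketch, with the table and column names taken from the question and otherwise illustrative:

        // Format the date and strip leading zeroes on the database side (Oracle SQL assumed).
        String sql =
            "SELECT TO_CHAR(TO_DATE(sif.datum_do, 'DD.MM.RRRR'), 'DD.MM.YYYY') datum_do, " +
            "       TRIM(LEADING '0' FROM sif.ID) ID " +
            "  FROM sif";

        // Every column can now be read as a plain string:
        while (rs.next()) {
            w.startElement("row");
            for (int i = 0; i < count; i++) {
                String value = rs.getString(i + 1); // already formatted by the SQL
                String colName = meta.getColumnLabel(i + 1);
                if (value != null) {
                    w.dataElement(colName, value);
                } else {
                    w.emptyElement(colName);
                }
            }
            w.endElement("row");
        }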

    Read the article
