Search Results

Search found 3324 results on 133 pages for 'gb'.


  • Increasing the JVM maximum heap size for memory intensive applications

    - by Alceu Costa
    I need to run a memory-intensive Java application that uses more than 2 GB, but I am having trouble increasing the maximum heap size. So far, I have tried the following approaches: Setting the -Xmx parameter, e.g. -Xmx3000m. This approach fails at the creation of the JVM. From what I've googled, it looks like -Xmx must be less than 2 GB. Using the -XX:+AggressiveHeap option. When I try this approach I get a 'Not enough memory' error telling me that the heap size is 1273.4 MB, even though my computer has 8 GB of memory. Is there another approach I can try to increase the maximum heap size of the JVM? Here's a summary of the computer specs: OS: Windows 7 (64-bit); Processor: Intel Core i7 (2.66 GHz); Memory: 8 GB
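    A hedged note on the likely cause: a 32-bit JVM on Windows needs the heap as one contiguous block inside the process's roughly 2 GB user address space, which matches both symptoms above. Since the OS here is 64-bit with 8 GB of RAM, a sketch of the fix is simply to install and run a 64-bit JDK, after which the same flag works (the jar name is a placeholder):

        java -Xmx4g -jar yourapp.jar

    With a 64-bit JVM the practical ceiling becomes physical memory plus swap rather than the 32-bit address space.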

    Read the article

  • [Python] How do I read binary pickle data first, then unpickle it?

    - by conradlee
    I'm unpickling a NetworkX object that's about 1 GB in size on disk. Although I saved it in the binary format (using protocol 2), it is taking a very long time to unpickle this file: at least half an hour. The system I'm running on has plenty of memory (128 GB), so that's not the bottleneck. I've read here that pickling can be sped up by first reading the entire file into memory and then unpickling it (that particular thread refers to Python 3.0, which I'm not using, but the point should still hold in Python 2.6). How do I first read the binary file, and then unpickle it? I have tried:

        import cPickle as pickle
        f = open("big_networkx_graph.pickle", "rb")
        bin_data = f.read()
        graph_data = pickle.load(bin_data)

    But this returns: TypeError: argument must have 'read' and 'readline' attributes. Any ideas?
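    A hedged sketch of the fix: pickle.load expects a file-like object, while pickle.loads (note the trailing s) takes the pickled byte string itself, so after reading the file into memory the string variant is the one to call. The filename is the one from the question:

        import cPickle as pickle

        # loads() takes the pickled byte string; load() wants an open file object
        with open("big_networkx_graph.pickle", "rb") as f:
            bin_data = f.read()        # pull the whole file into memory first
        graph_data = pickle.loads(bin_data)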

    Read the article

  • How many records can I store in a SQL Server table before it gets ugly?

    - by Michel
    Hi, I've been asked to do some performance tests for a new system. It is only just running with a few clients, but they expect to grow, so these are the numbers I work with for my test: 200 clients, 4 years of data, and the data changes every 5 minutes. So every 5 minutes there is 1 record for every client. That means 365 * 24 * 12 = 105,120 records per client per year, which comes to roughly 84 million records for my test. It has one FK to another table, one PK (uniqueidentifier) and one index on the clientID. Is this something SQL Server shrugs off because it isn't scary at all, is this getting to be too much for one quad-core 8 GB machine, is this on the edge, or...? Has anybody had any experience with these kinds of numbers?

    Read the article

  • Tuning the JVM (GC) for a highly responsive server application

    - by elgcom
    I am running an application server on 64-bit Linux with 8 CPU cores and 6 GB of memory. The server must be highly responsive. After some inspection I found that the application running on the server creates a rather huge number of short-lived objects, and has only about 200 to 400 MB of long-lived objects (as long as there is no memory leak). After reading http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html I use these JVM options: -Xms2g -Xmx2g -XX:MaxPermSize=256m -XX:NewRatio=1 -XX:+UseConcMarkSweepGC. Result: a minor GC takes 0.01 to 0.02 sec, a major GC takes 1 to 3 sec, and minor GCs happen constantly. How can I further improve or tune the JVM? A larger heap size? But will that take more time for GC? Larger NewSize and MaxNewSize (for the young generation)? Another collector? Parallel GC? Is it a good idea to let major GCs take place more often, and how?
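    One hedged direction, consistent with the tuning guide linked above: with mostly short-lived garbage, size the young generation explicitly and pair CMS with the parallel young-generation collector. The numbers below are starting points to measure against, not answers:

        java -Xms2g -Xmx2g -Xmn800m -XX:SurvivorRatio=6 \
             -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
             -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly \
             -XX:MaxPermSize=256m ...

    A larger heap usually trades less frequent collections for longer ones, so for responsiveness the usual advice is an old generation just big enough for the roughly 400 MB of live data plus headroom, with the rest given to the young generation.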

    Read the article

  • UNIX-style regexp replace running extremely slowly under Windows. Help?

    - by John Sullivan
    I'm trying to run a Unix regexp on every log file in a 1.12 GB directory, then replace the matched pattern with ''. A test run on a 4 MB file took about 10 minutes, but worked. Obviously something is murdering performance by several orders of magnitude. Find: ^(?!.*155[0-2][0-9]{4}\s.*).*$ (NOTE: match any line NOT starting 155[0-2]NNNN, where N is a digit 0-9). Replace with: ''. Is there some justifiable reason for my regexp to take this long to replace matching text, or is the program I am using (this is Windows, a program called "grepWin") most likely poorly optimized? Thanks.
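    The leading .* inside the negative lookahead is a plausible culprit: for every line the engine scans to the end of the line and backtracks on failure, and doing that as a replace over a gigabyte of logs multiplies the cost. A hedged alternative is to stream each file and keep only the lines that match the timestamp prefix, which is roughly what replacing the negated match with '' amounts to; a sketch in Python, with the glob pattern hypothetical:

        import re, glob

        keep = re.compile(r'155[0-2][0-9]{4}\s')   # match() anchors at line start
        for path in glob.glob('logs/*.log'):
            with open(path) as src, open(path + '.filtered', 'w') as dst:
                for line in src:
                    if keep.match(line):
                        dst.write(line)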

    Read the article

  • MATLAB - Delete elements of binary files without loading entire file

    - by Doresoom
    This may be a stupid question, but Google and the MATLAB documentation have failed me. I have a rather large binary file (10 GB) and I need to open it and delete the last forty million bytes or so. Is there a way to do this without reading the entire file into memory in chunks and writing it out to a new file? It took 6 hours to generate the file, so I'm cringing at the thought of re-reading the whole thing. EDIT: The file is 14,440,000,000 bytes in size. I need to chop it down to 14,400,000,000.
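    One hedged route that avoids rewriting anything: MATLAB can call Java directly, and java.io.RandomAccessFile can truncate a file in place via setLength. A sketch (the filename is a placeholder, and this is destructive, so work on a copy first):

        % open read/write, shrink to the target size, close
        f = java.io.RandomAccessFile('big_file.bin', 'rw');
        f.setLength(14400000000);   % target size in bytes, per the edit above
        f.close();

    Only the file's length metadata changes, so this should return almost instantly regardless of file size.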

    Read the article

  • Remote Development Workflow with Tomcat and Eclipse

    - by Smithers
    Currently, I have Tomcat installed on the production server to serve my Java webapps. I develop in Eclipse at my personal workstation and then use an Ant script to build the project into a war file and deploy that on the server. This setup works well when I am on the same network as the server, because deploying is almost instantaneous. However, now that I am working remotely, uploading the war file to the server is slow and in most cases very redundant (there is about 0.5 GB of static media included in the war file). Is there a better way to update my webapp on Tomcat from Eclipse, and if so, what are the best options for implementing such a solution with minimal effort?
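    One hedged low-effort option: move the 0.5 GB of static media out of the war (serve it as a separate context or from the web server) so the artifact being re-uploaded shrinks to code only, then push the small war with a delta-friendly tool; host and paths below are placeholders:

        rsync -avz --progress dist/myapp.war user@prod:/var/lib/tomcat/webapps/

    Since a war is a compressed archive, rsync's delta transfer helps less than on plain files, which is a further argument for keeping the media outside the artifact.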

    Read the article

  • Java library to partially export a database while respecting referential integrity constraints

    - by Mwanji Ezana
    My production database is several GB uncompressed, and it's getting to be a pain to download and run locally when trying to reproduce a bug or test a feature with real data. I would like to be able to select the specific records that interest me, then have the library figure out what other records are necessary to produce a dataset that respects the database's integrity constraints, and finally print it out as a list of insert statements or a dump that I can restore. For example: given Author, Blog and Comment tables, when I select comments posted after a certain date, I should get inserts for the Blog records the comments have foreign keys to, and for the Author records those Blogs have foreign keys to.

    Read the article

  • How many concurrent HTTP requests can Erlang handle?

    - by user209123
    I am developing an application for benchmarking purposes, for which I need to create a large number of HTTP connections in a short time. I wrote a program in Java to test how many threads Java can create; it turns out that on my 2 GB single-core machine the limit varies between 5000 and 6000, with 1 GB of memory given to the JVM, after which it hits an OutOfMemoryError with the heap limit reached. It is suggested that Erlang can generate many more concurrent processes, and I am willing to learn Erlang if it can solve the problem. What I want to know is whether Erlang can generate somewhere around 100,000 processes, each essentially an HTTP request waiting for a response, in a matter of a few seconds, without hitting any limit such as a memory error.
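    For the process count alone (not full HTTP), spawning 100,000 Erlang processes is routine, since each starts with a heap of a few hundred words rather than an OS thread stack. A hedged sketch, noting that the default process limit may need raising with the +P emulator flag:

        -module(spawn_many).
        -export([run/1]).

        %% spawn N processes that just park in a receive, then report the count
        run(0) -> erlang:system_info(process_count);
        run(N) ->
            spawn(fun() -> receive stop -> ok end end),
            run(N - 1).

    Whether 100,000 in-flight HTTP requests also works depends on socket and file-descriptor limits and on the HTTP client used, not just on the process count.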

    Read the article

  • Reason for monolithic data files

    - by Ali Lown
    Primarily this seems to be a technique used by games, where they have all the sounds in one file, textures in another, etc., with these files commonly reaching GB sizes. What is the reason for doing this rather than keeping it all as small files in subdirectories, one per texture? Many small games use the latter, while the monolithic approach is favoured by larger companies. Is there some file-system overhead with lots of small files? Are they trying to protect their property, even though most of these files just seem to be a compressed archive with a new extension?

    Read the article

  • Preallocate memory for a program in Linux before it gets started

    - by Fyg
    Hi, folks, I have a program that repeatedly solves large systems of linear equations using Cholesky decomposition. Characteristic of it is that I sometimes need to store the complete factorisation, which can exceed about 20 GB of memory. The factorisation happens inside a library that I call. Furthermore, the matrix and the resulting factorisation change quite frequently, and with them the memory requirements. I am not the only person using this compute node. Therefore, is there a way to start the program under Linux and preallocate free memory for the process? Something like: $: prealloc -m 25G ./program

    Read the article

  • How to set a __str__ method for all ctypes Structure classes?

    - by Reuben Thomas
    [Since asking this question, I've found http://www.cs.unc.edu/~gb/blog/2007/02/11/ctypes-tricks/ which gives a good answer.] I just wrote a __str__ method for a ctypes-generated Structure class 'foo' thus:

        def foo_to_str(self):
            s = []
            for i in foo._fields_:
                s.append('{}: {}'.format(i[0], foo.__getattribute__(self, i[0])))
            return '\n'.join(s)
        foo.__str__ = foo_to_str

    But this is a fairly natural way to produce a __str__ method for any Structure class. How can I add this method directly to the Structure class, so that all Structure classes generated by ctypes get it? (I am using the h2xml and xml2py scripts to auto-generate ctypes code, and this offers no obvious way to change the names of the classes output, so simply subclassing Structure, Union &c. and adding my __str__ method there would involve post-processing the output of xml2py.)
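    For what it's worth, a hedged sketch of the subclassing route mentioned above: interpose one base class between ctypes.Structure and the generated classes, so every subclass picks the method up through inheritance. Whether assigning ctypes.Structure.__str__ directly is permitted may vary by version, so this is the cautious variant; the class name is made up:

        import ctypes

        class PrettyStructure(ctypes.Structure):
            # _fields_ is looked up on the concrete subclass at runtime,
            # so this works for any Structure subclass that defines it
            def __str__(self):
                return '\n'.join('%s: %s' % (f[0], getattr(self, f[0]))
                                 for f in self._fields_)

    This still means touching the generated code's base class, which is the post-processing the question hoped to avoid; the linked ctypes-tricks post apparently covers the no-post-processing case.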

    Read the article

  • Restoring using SyncBack without profiles

    - by Thomas Matthews
    I backed up my internal hard drive (C:) using SyncBack onto an external (USB) hard drive with maximum compression. I then performed a clean install of Windows Vista onto the computer. I forgot to copy the SyncBack logs before the clean install, and now whenever I try to restore a directory, the RAR/ZIP files are copied to the system hard drive instead of having their contents extracted to it. Also, SyncBack is not traversing the folders during the restore process. How can I tell SyncBack to expand the compressed files? I am running the freeware version of SyncBack. I have to create new log files (unless SyncBack put them somewhere on the external drive). My alternative is to write a program that traverses the folders on the external drive and extracts files from the RAR/ZIP files. I am using Windows Vista, Service Pack 2, and the data size prior to backup was about 200 GB. (The backup process took over 72 hours due to "hiccups".)

    Read the article

  • Extract anything that looks like links from a large amount of data in Python

    - by Riz
    Hi, I have around 5 GB of HTML data which I want to process to find links to a set of websites and perform some additional filtering. Right now I use a simple regexp for each site and iterate over them, searching for matches. In my case links can be outside of "a" tags and be malformed in many ways (like "\n" in the middle of a link), so I try to grab as many "links" as I can and check them later in other scripts (so no BeautifulSoup/lxml/etc.). The problem is that my script is pretty slow, so I am thinking about ways to speed it up. I am writing a set of tests to check different approaches, but hope to get some advice :) Right now I am thinking about getting all links without filtering first (maybe using a C module or a standalone app which doesn't use regexps but a simple search to find the start and end of every link) and then using regexps to match the ones I need.
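    A hedged micro-optimisation to try before reaching for C: combine the per-site patterns into one compiled alternation, so the 5 GB is scanned once instead of once per site. The site list and the permissive URL-tail pattern here are illustrative:

        import re

        sites = ['example.com', 'example.org']          # your real site list
        link_re = re.compile(r'https?://(?:%s)[^\s"\'<>]*'
                             % '|'.join(re.escape(s) for s in sites))

        def find_links(chunk):
            # run over the data in large chunks rather than line by line
            return link_re.findall(chunk)

    Since the links can be malformed, the loose tail errs toward over-matching, which fits the grab-now-filter-later plan described above.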

    Read the article

  • What performance indicators can I use to convince management that I need my development PC upgraded?

    - by Aaron Daniels
    At work, my PC is slow. I feel that I could be way more productive if I just wasn't waiting for Visual Studio and everything else to respond. My PC isn't bad (dual-core, 3 GB of RAM), but there is a lot of corporate software and whatnot to slow everything down and sometimes lock it up. Now, some developers have begun getting Windows 7 machines with 8 GB of RAM. Of course, I start salivating at this. However, I was told that I "had to justify" why I should get a new machine. I can think of a lot of different things, but I am curious as to what everyone else on SO would have to say. NOTE: Ideally, these reasons should be specifically related to .NET development in Visual Studio on a Windows machine. This isn't a "how can I make my machine faster" question.

    Read the article

  • Control SQL Server CLR Reserved Memory

    - by Ryu
    I've recently enabled CLR on my 64-bit SQL Server 2005 machine for use by about 3 procs. When I run the following query to gather some info on memory usage...

        select single_pages_kb + multi_pages_kb + virtual_memory_committed_kb as TotalMemoryUsage,
               virtual_memory_reserved_kb
        from sys.dm_os_memory_clerks
        where type = 'MEMORYCLERK_SQLCLR'

    ...I get 129 MB of memory usage and 6.3 GB of virtual memory reserved. The total memory of the machine is 21 GB. What does reserved virtual memory mean exactly, and how can I control the size that is allocated? 6 GB is overkill for what we're doing, and the memory would be much better utilized by the sproc cache. I'm concerned this reserved memory will cause swapping to the page file. Please help me take back control of the memory! Thanks

    Read the article

  • A 110 KB .NET 4.0 app needs 10 seconds for a cold start; that's not acceptable!

    - by msfanboy
    Hello, I am using the .NET 4.0 Client Profile for my app, and I run a dual-core machine with 4 GB RAM and a fast hard disk. Nothing big is done at startup, just showing a generic List in a WPF ListView. How can I make the cold start of my assembly faster? I have just done another cold start, running windowsapplication.exe from my \obj\x86\Debug folder; my hard disk ran like hell and it took 10.5 seconds. What is wrong? The warm start after the cold one took 1 second. Java 6 apps do not have that problem, not at all, just to compare...
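    Cold start for a .NET app is usually dominated by disk I/O plus JIT compilation of the startup path, and WPF touches a lot of framework code on first load, so the 110 KB assembly size is not the relevant number. One hedged mitigation is pre-generating native images with NGen so the JIT step disappears (the framework path is the standard .NET 4 location; the exe name is yours):

        C:\Windows\Microsoft.NET\Framework\v4.0.30319\ngen.exe install WindowsApplication.exe

    Measuring a Release build instead of the \obj\x86\Debug output is also worth doing before drawing conclusions.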

    Read the article

  • Read random lines from huge CSV file in Python

    - by jbssm
    I have this quite big CSV file (15 GB) and I need to read about 1 million random lines from it. As far as I can see, and implement, the CSV utility in Python only allows iterating sequentially through the file. Reading the whole file into memory in order to choose randomly is very memory-consuming, and going through the whole file and discarding some values while choosing others is very time-consuming, so is there any way to choose a random line from the CSV file and read only that line? I tried without success:

        import csv
        with open('linear_e_LAN2A_F_0_435keV.csv') as file:
            reader = csv.reader(file)
            print reader[someRandomInteger]

    A sample of the CSV file:

        331.093,329.735
        251.188,249.994
        374.468,373.782
        295.643,295.159
        83.9058,0
        380.709,116.221
        352.238,351.891
        183.809,182.615
        257.277,201.302
        61.4598,40.7106
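    A hedged sketch of the usual workaround: seek to a random byte offset, throw away the (probably partial) line you land in, and take the next full line. Samples are then weighted by the length of the preceding line, which is often acceptable for data like this:

        import os, random

        def random_line(f, size):
            f.seek(random.randrange(size))
            f.readline()              # discard the partial line we landed in
            line = f.readline()
            if not line:              # fell off the end; wrap around
                f.seek(0)
                line = f.readline()
            return line

        path = 'linear_e_LAN2A_F_0_435keV.csv'
        size = os.path.getsize(path)
        with open(path) as f:
            sample = [random_line(f, size) for _ in xrange(1000000)]

    Each sampled line still needs CSV parsing (csv.reader([line]).next() handles a single line), and duplicates are possible unless tracked.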

    Read the article

  • Why does this bash command take up all the space on the device?

    - by chelmertz
    Hey! I'm a little new to searching via bash, so feel free to suggest other methods than this one, which I'll never use again :) I'm searching for occurrences of a string, recursively, in a directory with ~50 not-that-large PHP files in it; some in the current directory, some in directories beneath it, at most three levels of directories down. The method I'm using is:

        find . | xargs grep "module" > module.txt

    In simple (one-level) directories this works fine, but in this case the file grew to 4 GB and filled up all the space on the partition :) It wasn't even done yet. Would someone educate me so I won't embarrass myself again?
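    Almost certainly what happened: the shell creates module.txt (via the redirection) inside the very directory being searched, so find hands it to grep, and grep then reads its own output file while simultaneously appending to it; every line it writes contains "module", matches again, and the file grows until the partition is full. A hedged fix is to write the results outside the tree, and optionally restrict the file list:

        find . -name '*.php' | xargs grep "module" > ../module.txt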

    Read the article

  • Where does Subversion physically store its database?

    - by Mika Jacobi
    After reading many introductions, getting-started guides, and documentation on SVN, I still cannot figure out where my versioning data is stored. I mean physically. I have over 3 GB of code checked in, and the repo is just a few MB large. This is still voodoo for me, and as a coder I don't really believe in magic. EDIT: A contributor stated that not all the code is stored in the repo; is that true? I mean, if I delete my local working copy I can still get my source code back from the repository... If so, I still can't understand how such compression can occur on my code...

    Read the article

  • Windows fails to allocate the amount of free physical memory returned by GlobalMemoryStatusEx

    - by avi
    Hello! What I'm trying to do is get the amount of free physical memory, allocate it, and then manage it (resizing or freeing it) depending on what further calls to GlobalMemoryStatusEx return. The problem: it works on two PCs with Win 7 x64, one with 2 GB of RAM (on which I was able to allocate about 1.3 GB), the other with 1 GB of RAM (max alloc was 630 MB). It fails on the third one, with 3 GB of RAM. I can't find the problem (and yes, I tried Google). Any solution?
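    A plausible explanation, rather than a certain one: a 32-bit process has only a 2 GB user address space (up to 4 GB if linked with /LARGEADDRESSAWARE) even on a 64-bit OS, so the ullAvailPhys value from GlobalMemoryStatusEx can exceed what the process can actually allocate; that matches 1.3 GB succeeding on the 2 GB box but a larger allocation failing on the 3 GB one. A minimal query sketch for reference:

        #include <windows.h>
        #include <stdio.h>

        int main(void) {
            MEMORYSTATUSEX ms;
            ms.dwLength = sizeof(ms);   /* must be set before the call */
            if (GlobalMemoryStatusEx(&ms))
                printf("available physical: %llu MB\n",
                       ms.ullAvailPhys / (1024 * 1024));
            return 0;
        }

    Capping the request at the smaller of ullAvailPhys and ullAvailVirtual, and retrying with smaller sizes on failure, is the usual defensive pattern.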

    Read the article

  • When we run an ASPX page with client-side scripting on IIS, we get an ActiveX control error

    - by Ananya
    We have implemented code for installing a Messenger theme pack using client-side scripting in a web page. We create the Messenger object using its classid, and using this object we call the installcontent() method to install the Messenger theme pack hosted at the following path: http://www.messengerexpressions.com/assets/worldCup/cabs/en-gb.cab. Our code initially checks whether Messenger is installed on the user's machine, then checks whether the user is signed in. If the user is signed in, the theme pack is installed. Currently, when hosted on IIS, the code passes the sign-in check, but when it tries to install the theme pack an error is thrown: "An ActiveX control on this page is unsafe. Your current security settings prohibit running unsafe controls on this page. As a result, this page may not display as intended." Please let us know if any setting is required on IIS for running this piece of code, or anything that we are missing.

    Read the article

  • ExtJS Output JSON

    - by venkatesh a
    This is my JSON structure. It loads into the form perfectly. Once I alter the form and click the submit button, the output has to be saved with the same structure as the existing one. Can anyone please help me solve this ASAP?

        {
            "comment": null,
            "clientinfo": {
                "clientName": "Timex",
                "clientCode": "143",
                "startDate": "04-Oct-2012",
                "clientType": "CR",
                "clientID": "TimexGroup",
                "hasGAMLeft": null,
                "gamLocation": "GB",
                "clientName": "xxx",
                "groupSegment": "FI",
                "groupSubSegment": "SUB-FI-SUB",
                "groupDomicileCounrty": "IN",
                "groupIsicCode": "2403-Copper & zinc"
            },
            "CompanyClients": [
                { "CName": "Timex", "ID": "1424317", "cType": "CR", "gType": "SD" },
                { "CName": "Casio", "ID": "1416529", "cType": "AR", "gType": "RD" }
            ],
            "Country": [
                { "CountryID": "Singapore", "CountryText": "SG" },
                { "CountryID": "India", "CountryText": "IN" }
            ]
        }

    I used the code below to POST; it saves fine, as it's hand-coded, but I need to save it dynamically.

    Read the article

  • Loading large amounts of data into an Oracle database

    - by James
    Hey all, I was wondering if anyone has experience with what I am about to embark on. I have several CSV files which are all around a GB or so in size, and I need to load them into an Oracle database. While most of my work after loading will be read-only, I will need to load updates from time to time. Basically I just need a good tool for loading several rows of data at a time into my DB. Here is what I have found so far: I could use SQL*Loader to do a lot of the work; I could use bulk-insert commands; some sort of batch insert using prepared statements might be a good idea. I guess I was wondering what everyone thinks is the fastest way to get this insert done. Any tips?
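    For the in-application route, a hedged sketch of JDBC batching, which usually lands between single-row inserts and SQL*Loader in speed; the table, columns, and parsing are placeholders:

        import java.sql.*;
        import java.util.List;

        class CsvLoader {
            // rows: already-parsed CSV records, from your reader of choice
            static void load(Connection conn, List<String[]> rows) throws SQLException {
                conn.setAutoCommit(false);           // commit once at the end
                PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO my_table (col_a, col_b) VALUES (?, ?)");
                int n = 0;
                for (String[] r : rows) {
                    ps.setString(1, r[0]);
                    ps.setString(2, r[1]);
                    ps.addBatch();
                    if (++n % 1000 == 0) ps.executeBatch();   // flush every 1000 rows
                }
                ps.executeBatch();
                ps.close();
                conn.commit();
            }
        }

    For one-off multi-GB loads, SQL*Loader with direct path is generally the faster tool; batching shines for the periodic updates.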

    Read the article

  • Java "cannot reserve heap size" error on Windows server

    - by Prad
    Hi, I have the following configuration: Server: Windows Server 2003 (32-bit); Java version: 1.5.0_22. I get the following error when executing from the command line (my code, which is based on Eclipse, gives the same error):

        java -XX:MaxPermSize=256m -Xmx512m
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    The server has over 20 GB of physical memory, with over 19 GB free right now. It does not give an error up to -Xmx486m. I have read other articles about contiguous memory space. There is hardly anything running on this server. Can I validate this in any way? Thanks

    Read the article
