Search Results

Search found 4715 results on 189 pages for 'ram bhat'.

Page 95 of 189

  • R in a netbook - system requirements for using R

    - by Brani
    I know it's not a programming question, but I'm in a hurry to choose a netbook like this and I haven't been able to find the minimum system requirements for an R installation (e.g. minimum RAM). I am interested in a small netbook so as to be able to use it in class. Has anybody used R on a netbook and can recommend one for that use?

    Read the article

  • Is there a stack size on the iPhone?

    - by senthilmuthu
    Hi, every program's memory must have a stack and a heap (like the CS, ES, DS, SS segments on x86), but is there a set stack size on the iPhone, or is only the heap available? Some tutorials say that when the stack size is increased the heap decreases, and when the heap size is increased the stack decreases. Is that true, or are the stack size and heap size fixed? Any help please?

    Read the article

  • Can I openly speculate, based on an App rejection, that the iPad has xxx MB of memory?

    - by GamingHorror
    If I were to calculate the iPad's amount of RAM based on just the one fact that my iPad App got rejected twice due to memory warnings, and me fixing it, would this violate the developer NDA? Obviously I know how much memory my App uses and how much the iPhone OS is likely to use, and I can estimate the amount reserved for video memory, so I can deduce from that that the iPad has xxx MB of memory. I just wonder if I can say that number publicly without violating any NDA?

    Read the article

  • Java map / nio / NFS issue causing a VM fault: "a fault occurred in a recent unsafe memory access operation in compiled Java code"

    - by Matthew Bloch
    I have written a parser class for a particular binary format (nfdump, if anyone is interested) which uses java.nio's MappedByteBuffer to read through files of a few GB each. The binary format is just a series of headers and mostly fixed-size binary records, which are fed out to the caller by calling nextRecord(), which pushes on the state machine, returning null when it's done. It performs well and works on a development machine.

    On my production host, it can run for a few minutes or hours, but always seems to throw "java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code", fingering one of the map.getInt / getShort methods, i.e. a read operation on the map. The uncontroversial (?) code that sets up the map is this:

        /** Set up the map from the given filename and position */
        protected void open() throws IOException {
            // Set up buffer, is this all the flexibility we'll need?
            channel = new FileInputStream(file).getChannel();
            MappedByteBuffer map1 = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            map1.load(); // we want the whole thing, plus seems to reduce frequency of crashes?
            map = map1;
            // assumes the host writing the files is little-endian (x86), ought to be configurable
            map.order(java.nio.ByteOrder.LITTLE_ENDIAN);
            map.position(position);
        }

    and then I use the various map.get* methods to read shorts, ints, longs and other sequences of bytes, before hitting the end of the file and closing the map.

    I've never seen the exception thrown on my development host. But the significant point of difference between my production host and development is that on the former I am reading sequences of these files over NFS (probably 6-8TB eventually, still growing). On my dev machine I have a smaller selection of these files locally (60GB), but when it blows up on the production host it's usually well before it gets to 60GB of data. Both machines are running java 1.6.0_20-b02, though the production host is running Debian/lenny and the dev host Ubuntu/karmic; I'm not convinced that will make any difference. Both machines have 16GB RAM and are running with the same java heap settings.

    I take the view that if there is a bug in my code, there is enough of a bug in the JVM not to throw me a proper exception! But I think it is just a particular JVM implementation bug due to interactions between NFS and mmap, possibly a recurrence of bug 6244515, which is officially fixed. I already tried adding a load() call to force the MappedByteBuffer to load its contents into RAM; this seemed to delay the error in the one test run I've done, but not prevent it. Or it could be coincidence that that was the longest it had gone before crashing!

    If you've read this far and have done this kind of thing with java.nio before, what would your instinct be? Right now mine is to rewrite it without nio :)

    Read the article
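    Since the asker's instinct above is to drop the memory-mapped approach, here is a minimal sketch of that direction, offered as an illustration under assumptions (record size and field layout are placeholders) rather than a known fix: read each fixed-size record through an ordinary FileChannel into a little-endian heap ByteBuffer, so no page of the file is ever mapped and an NFS hiccup surfaces as an IOException instead of the SIGBUS-style InternalError.

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.channels.FileChannel;

        public class RecordReader {
            private static final int RECORD_SIZE = 64;      // placeholder record size
            private final FileChannel channel;
            private final ByteBuffer buf = ByteBuffer.allocate(RECORD_SIZE);

            public RecordReader(String file) throws IOException {
                channel = new FileInputStream(file).getChannel();
                buf.order(ByteOrder.LITTLE_ENDIAN);          // files are written on x86
            }

            /** Returns the next record's buffer, or null at end of file. */
            public ByteBuffer nextRecord() throws IOException {
                buf.clear();
                while (buf.hasRemaining()) {
                    if (channel.read(buf) < 0) {             // EOF
                        return null;
                    }
                }
                buf.flip();
                return buf;                                  // caller uses getInt()/getShort() etc.
            }

            public void close() throws IOException {
                channel.close();
            }
        }

    The trade-off is an extra copy per record compared with mmap, but it keeps record reads out of the page-fault path that the InternalError points at.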

  • Do any clouds support SSD storage?

    - by taw
    I'm using the Amazon cloud right now, and the biggest issue is horrible I/O performance. As long as something fits in RAM it's fine; once it's too big it gets ridiculously slow (in many different scenarios). There are only so many ways one can avoid hitting disk, so the question is: does Amazon or some other cloud provide an SSD option?

    Read the article

  • Reading a file N lines at a time in ruby

    - by Sam
    I have a large file (hundreds of megs) that consists of filenames, one per line. I need to loop through the list of filenames and fork off a process for each one. I want a maximum of 8 forked processes at a time, and I don't want to read the whole filename list into RAM at once. I'm not even sure where to begin; can anyone help me out?

    Read the article

  • General confusion with assembler

    - by gnrlcf
    So I took a look at the x86 assembly language. All the commands are pretty clear, but I don't see anything that can actually trigger something in the computer, like accessing RAM (and not only CPU registers), reading from the HDD, etc. How do you go beyond computations in the CPU with assembler?

    Read the article

  • Running out of memory while analyzing a Java Heap Dump

    - by Abel Morelos
    Hi, I have a curious problem: I need to analyze a Java heap dump (from an IBM JRE) which is 1.5GB in size. The problem is that while analyzing the dump (I've tried HeapAnalyzer and the IBM Memory Analyzer 0.5) the tools run out of memory and I can't really analyze the dump. I have 3GB of RAM in my machine, but it seems that's not enough to analyze the 1.5GB dump. My question is: do you know a specific tool for heap dump analysis (supporting IBM JRE dumps) that I could run with the amount of memory I have? Thanks.

    Read the article
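    One note on the question above, as a sketch rather than a definitive fix: the analyzers are themselves Java programs, so they typically need a JVM heap close to or larger than the 1.5GB dump. Assuming the tool ships as a runnable jar (the jar name below is only a placeholder), it can be launched with an explicit maximum heap and the dump then opened from its UI:

        java -Xmx2g -jar HeapAnalyzer.jar

    On a 32-bit JVM the usable -Xmx tops out well below 3GB, so analyzing a 1.5GB dump may also require a 64-bit JVM or a machine with more RAM.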

  • How can I load a file into a DataBag from within a Yahoo PigLatin UDF?

    - by Cervo
    I have a Pig program where I am trying to compute the minimum center between two bags. In order for it to work, I found I need to COGROUP the bags into a single dataset. The entire operation takes a long time. I want to either open one of the bags from disk within the UDF, or be able to pass another relation into the UDF without needing to COGROUP. Code:

        -- **** Load files for iteration ****
        register myudfs.jar;
        wordcounts = LOAD 'input/wordcounts.txt' USING PigStorage('\t')
            AS (PatentNumber:chararray, word:chararray, frequency:double);
        centerassignments = LOAD 'input/centerassignments/part-*' USING PigStorage('\t')
            AS (PatentNumber:chararray, oldCenter:chararray, newCenter:chararray);
        kcenters = LOAD 'input/kcenters/part-*' USING PigStorage('\t')
            AS (CenterID:chararray, word:chararray, frequency:double);
        kcentersa1 = CROSS centerassignments, kcenters;
        kcentersa = FOREACH kcentersa1 GENERATE
            centerassignments::PatentNumber AS PatentNumber,
            kcenters::CenterID AS CenterID,
            kcenters::word AS word,
            kcenters::frequency AS frequency;

        -- **** Assign to nearest k-mean ****
        assignpre1 = COGROUP wordcounts BY PatentNumber, kcentersa BY PatentNumber;
        assignwork2 = FOREACH assignpre1 GENERATE
            group AS PatentNumber,
            myudfs.kmeans(wordcounts, kcentersa) AS CenterID;

    Basically my issue is that for each patent I need to pass the sub-relations (wordcounts, kcenters). In order to do this, I do a CROSS and then a COGROUP by PatentNumber, which gives me the set {PatentNumber, {wordcounts}, {kcenters}}. If I could figure out a way to pass a relation, or open up the centers from within the UDF, then I could just GROUP wordcounts BY PatentNumber and run myudfs.kmeans(wordcounts), which is hopefully much faster without the CROSS/COGROUP.

    This is an expensive operation. Currently it takes about 20 minutes and appears to tax the CPU/RAM. I was thinking it might be more efficient without the CROSS. I'm not sure it will be faster, so I'd like to experiment. Anyway, it looks like calling the loading functions from within Pig needs a PigContext object, which I don't get from an EvalFunc. And to use the Hadoop file system, I need some initial objects as well, which I don't see how to get. So my question is: how can I open a file from the Hadoop file system from within a Pig UDF? I also run the UDF via main for debugging, so I need to load from the normal filesystem when in debug mode.

    Another, better idea would be if there were a way to pass a relation into a UDF without needing to CROSS/COGROUP. That would be ideal, particularly if the relation resides in memory, i.e. being able to do myudfs.kmeans(wordcounts, kcenters) without needing the CROSS/COGROUP with kcenters. But the basic idea is to trade IO for RAM/CPU cycles. Anyway, any help will be much appreciated; the Pig UDFs aren't super well documented beyond the most simple ones, even in the UDF manual.

    Read the article
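    For the question above, one route, sketched here under assumptions (the class name, file path and parsing are hypothetical, and this is not presented as the only or official Pig mechanism): a UDF is ordinary Java running inside the task, so it can open a side file through Hadoop's FileSystem API the first time exec() is called and cache it for the rest of the task.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.util.ArrayList;
        import java.util.List;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;
        import org.apache.pig.EvalFunc;
        import org.apache.pig.data.Tuple;

        // Hypothetical UDF that reads the k-centers file itself instead of
        // receiving it through CROSS/COGROUP. Path and parsing are assumptions.
        public class KMeansAssign extends EvalFunc<String> {
            private List<String> centers;                      // cached across exec() calls

            private void loadCenters() throws IOException {
                // FileSystem.get() resolves to the job's default filesystem:
                // HDFS inside a cluster task, the local filesystem when the
                // UDF is driven from main() for debugging.
                FileSystem fs = FileSystem.get(new Configuration());
                Path path = new Path("input/kcenters/part-00000"); // assumed location
                centers = new ArrayList<String>();
                BufferedReader in = new BufferedReader(new InputStreamReader(fs.open(path)));
                String line;
                while ((line = in.readLine()) != null) {
                    centers.add(line);                         // parse fields as needed
                }
                in.close();
            }

            @Override
            public String exec(Tuple input) throws IOException {
                if (centers == null) {
                    loadCenters();                             // load once per task
                }
                // ... compute the nearest center for this tuple's bag of wordcounts ...
                return centers.isEmpty() ? null : centers.get(0); // placeholder result
            }
        }

    The point of the sketch is the trade the asker describes: kcenters is loaded once per task and kept in RAM, so the CROSS/COGROUP disappears and wordcounts only needs a GROUP BY PatentNumber.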

  • Java native memory usage

    - by Adelave
    Hi all, is there any tool to see how much native memory has been used by my Java application? I've experienced an OutOfMemoryError from my application. The current setting is -Xmx900m; the computer is Windows 2003 Server 32-bit with 4GB of RAM. Also, will changing boot.ini to /3GB on Windows make any difference? If I set -Xmx900m, how much native memory can be allocated for this process at most? Is it 1100m?

    Read the article
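    A rough back-of-the-envelope for the question above, offered as an estimate rather than an exact figure: a 32-bit Windows process normally gets a 2GB user address space (about 3GB with the /3GB switch, for large-address-aware executables), and the Java heap, the permanent generation, thread stacks, the JVM's own code and any native libraries all have to fit inside it. With -Xmx900m that leaves roughly 2048 - 900 = ~1150MB, so "about 1100m" is the right ballpark, minus PermGen, stacks and DLLs; /3GB raises the ceiling by roughly another 1GB, address-space fragmentation permitting.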

  • Good Postgres graphical client for Windows

    - by alex
    The name pretty much says it all. Right now I'm using Squirrel - it crashes frequently and suffers from memory problems (I've tried increasing the heap size). I don't need anything particularly fancy or full-featured - just something that won't take up 2.4 GB of RAM to store a 1.5 million line, 8 column result set.

    Read the article

  • How to load only the part of an HTML page which is currently on display?

    - by ganapati hegde
    Hi, I have an ebook (relatively large, say 800 pages) in HTML format. I am opening that book as a webpage using WebKitGTK+. If I load the whole book at once, it takes too much memory (RAM), so I don't want to load the whole book at a time, but only the part of the book which is currently on display, and when the user scrolls down, the next part should be displayed. How can I implement that?

    Read the article

  • Importing a large delimited file to a MySQL table

    - by Tom
    I have this large (and oddly formatted) txt file from the USDA's website. It is the NUT_DATA.txt file. But the problem is that it is almost 27MB! I was successful in importing a few other, smaller files, but my method was using file_get_contents, so it makes sense why an error would be thrown if I try to pull 27+ MB into RAM. So how can I import this massive file into my MySQL DB without running into a timeout and RAM issue? I've tried just getting one line at a time from the file, but this ran into the timeout issue. Using PHP 5.2.0. Here is the old script (the fields in the DB are just numbers because I could not figure out which number represented which nutrient; I found this data very poorly documented. Sorry about the ugliness of the code):

        <?
        $file = "NUT_DATA.txt";
        $data = split("\n", file_get_contents($file)); // split each line
        $link = mysql_connect("localhost", "username", "password");
        mysql_select_db("database", $link);
        for ($i = 0, $e = sizeof($data); $i < $e; $i++) {
            $sql = "INSERT INTO `USDA` (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17) VALUES(";
            $row = split("\^", trim($data[$i])); // split each line by caret
            for ($j = 0, $k = sizeof($row); $j < $k; $j++) {
                $val = trim($row[$j], '~');
                $val = (empty($val)) ? 0 : $val;
                $sql .= ((empty($val)) ? 0 : $val) . ','; // this gets rid of those tildes and replaces empty strings with 0s
            }
            $sql = rtrim($sql, ',') . ");";
            mysql_query($sql) or die(mysql_error()); // query the db
        }
        echo "Finished inserting data into database.\n";
        mysql_close($link);
        ?>

    Read the article

  • Horrible eclipse performance on macbook pro running 10.5.8

    - by user246114
    Hi, I am using Eclipse Galileo on my MacBook Pro. After a few minutes it starts dragging really badly; it takes like 8 seconds to open a file. I don't have many files open at all. I already modified the config file to increase RAM and all that stuff. Is there something wrong with this version of Eclipse? I've never had it run so poorly on here. Thanks

    Read the article

  • read-only memory and heap memory

    - by benjamin button
    Hi, AFAIK string literals are stored in read-only memory in the case of the C language. Where is this actually present on the hardware? As per my knowledge, the heap is in RAM; correct me if I am wrong. How different is the heap from read-only memory? Is it OS dependent?

    Read the article

  • Does an XPathDocument load the whole xml document?

    - by Wires
    If I do

        XPathDocument doc = new XPathDocument("filename.xml");

    does that load the entire document into memory? I'm writing a mobile phone app and the document might store lots of data that doesn't ever need to all be loaded at the same time. Mobile phones don't usually have too much RAM!

    Read the article

  • Java heap space

    - by Gandalf StormCrow
    I get this message during the build of my project: java.lang.OutOfMemoryError: Java heap space. How do I increase the heap space? I've got 8GB of RAM; it's impossible that Maven consumed that much. I found this http://vikashazrati.wordpress.com/2007/07/26/quicktip-how-to-increase-the-java-heap-memory-for-maven-2-on-linux/ which explains how to do it on Linux, but I'm on Windows 7. How can I change the Java heap space under Windows?

    Read the article
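    A sketch of the usual route for the question above, with the exact values only as examples: Maven reads its JVM options from the MAVEN_OPTS environment variable on Windows as well, so the Linux recipe in the linked post translates to setting it in the command prompt (or permanently under System Properties > Environment Variables) before running the build:

        set MAVEN_OPTS=-Xmx1024m -XX:MaxPermSize=256m
        mvn clean install

    The -Xmx figure only needs to cover what the build itself uses; it does not have to scale with the 8GB of physical RAM.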
