Search Results

Search found 14113 results on 565 pages for 'memory stick'.

  • Increase in Virtual Memory [closed]

    - by Philly
    If we increase the virtual memory on Windows Server 2008, will it improve application performance? Up to what size can we increase the virtual memory on a 32-bit server, and what are the advantages and disadvantages of doing so? We run bulk jobs (all imports and exports) every day at 3pm; because of that we are getting some system out-of-memory exceptions and the application goes down. Any suggestions? Thanks

    Read the article

  • How to restore/change Alt+Tab behaviour/RAM usage and a few other things after Ubuntu upgrade from 11.04 to 11.10?

    - by fiktor
    I use Ubuntu for programming. I recently upgraded it from 11.04 to 11.10. There are some things I don't like in the new version of the Unity desktop interface. I don't know whether it is hard to restore the previous behaviour or not, and if it is not, where I should look to do that. I know a bit of programming, but I really don't know much about Linux settings. I used to have 3-6 terminal windows open and switch between them with Alt+Tab and Shift+Alt+Tab. I liked half-transparent terminal windows: with them I could open a web page with some instructions in Firefox, press Alt+Tab, and type commands in a console window while still being able to read the text on the web page under it. Now my usual work style is broken because of the following list of "negative" changes:
        1. Alt+Tab shows just one icon for all console windows. If I wait a moment it does show all the windows, but I don't like to wait; I prefer to remember the order of the windows and press Alt+Tab as many times as I need to reach the right one.
        2. Shift+Alt+Tab, to switch in reverse order, doesn't work any more.
        3. Console windows are not transparent any more.
        4. When I don't wait and switch to the grouped icon, it shows all console windows together, so even if they were transparent I wouldn't be able to see anything below them (I can read something only from the window directly under the current one, not a few levels under).
        5. With a few console windows running, Unity used about 740MB on Ubuntu 11.04, but uses about 1050MB now. I want to get back to 750MB or less. I really need the memory, since I work with 1512MB of data and try to save every 10MB possible (when it doesn't take too much machine time or, more importantly, my time).
        6. When I press the Super key I get a field to type the name of the program I want to run, but sometimes, although the field appears, nothing happens when I type; the focus is probably not on the right field.
    I don't mean to restore exactly the same behaviour, but I want my work in Ubuntu 11.10 to be efficient (at least as efficient as in Ubuntu 11.04). I would be happy if there are ways to accomplish that. What I have tried: I installed CompizConfig Settings Manager and read this question. However, enabling "Static Application Switcher" makes Alt+Tab crazy: after enabling it, it reports key-binding conflicts with the "Ubuntu Unity Plugin"; Alt+Tab switching doesn't change, but Shift+Alt+Tab now works and shows all windows; and memory usage increases. I tried turning off the Ubuntu Unity Plugin, but that doesn't seem to be the right thing to do, since it turns off all menus, a lot of keystrokes, and the app launcher that the Super key usually activates. I found that window transparency can be enabled by the "Opacity, Brightness and Saturation" plugin under Accessibility, but I don't know whether enabling it is the right thing to do (at least it increases memory usage). Update: everything is solved except #3; see my own answer below. I have made a separate question about issue #3 (transparency).

    Read the article

  • Why do browsers leak memory?

    - by Dane Balia
    A colleague and I were talking about browsers (we use a browser control object in a project), and it seems plain as day that all browsers (Firefox, Chrome, IE, Opera) show the same characteristic, or side effect, of use: leaking memory. Can someone explain why that is? Surely, as with any code, there should be proper garbage collection? PS. I've read about some defensive patterns on why this can happen from a developer's perspective, and I am aware of an article Crockford wrote on IE; but why is the problem symptomatic of every browser? Thanks

    Read the article

  • mp3 file streaming/download - apache server memory issue

    - by Manolis
    I have a website on which users can upload mp3 files (Uploadify), stream them using an HTML5 player (jPlayer), and download them using a PHP script (www.zubrag.com/scripts/). When a user uploads a song, the path to the audio file is saved in the database, and I use that data to play the song and show a download link for it. The problem I'm experiencing is that, according to my host, this method is using a lot of memory on the server, which is dedicated. Link to script: http://pastebin.com/Vus8SRa7 How should I handle the script properly? And what would be the best way to track down the problem? Any ideas on cleaning up the code? Any help much appreciated.

    Read the article

  • Why does one loop take longer to detect a shared memory update than another loop?

    - by Joseph Garvin
    I've written a 'server' program that writes to shared memory and a client program that reads from the memory. The server has different 'channels' it can write to, which are just different linked lists it appends items to. The client is interested in some of the linked lists and wants to read every node added to those lists as it comes in, with the minimum latency possible. I have two approaches for the client:
        1. For each linked list, the client keeps a 'bookmark' pointer to its place within the list. It round-robins the linked lists, iterating through all of them over and over (it loops forever), moving each bookmark one node forward each time if it can. Whether it can is determined by the value of the node's 'next' member: if it's non-null, jumping to the next node is safe (the server switches it from null to non-null atomically). This approach works OK, but if there are a lot of lists to iterate over and only a few of them receive updates, the latency gets bad.
        2. The server gives each list a unique ID. Each time the server appends an item to a list, it also appends the list's ID number to a master 'update list'. The client keeps only one bookmark, into the update list. It endlessly checks whether that bookmark's next pointer is non-null ( while(node->next_ == NULL) {} ); if it is, it moves ahead, reads the ID, and then processes the new node on the linked list with that ID. In theory this should handle large numbers of lists much better, because the client doesn't have to iterate over all of them each time.
    When I benchmarked the latency of both approaches (using gettimeofday), to my surprise #2 was terrible. The first approach, for a small number of linked lists, was often under 20us of latency. The second approach had short stretches of low latency but was often between 4,000-7,000us! By inserting gettimeofday calls here and there, I've determined that all of the added latency in approach #2 is spent in the loop repeatedly checking whether the next pointer is non-null. This is puzzling to me; it's as if the change in one process is taking longer to 'publish' to the second process with the second approach. I assume there is some sort of cache interaction going on that I don't understand. What's going on?
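    A minimal sketch of the pattern in question, in C++ with explicit atomics (hypothetical names, not the poster's actual code). A plain while(node->next_ == NULL) {} loop compiled without atomics can hammer the shared cache line with back-to-back loads; an acquire/release pair makes the hand-off well defined, and a pause instruction between probes gives the cache-coherency traffic room to complete:

        #include <atomic>
        #include <cstdint>
        #include <emmintrin.h> // _mm_pause (x86)

        // One node of the shared-memory update list. 'next' is atomic so the
        // compiler cannot hoist the load out of the spin loop, and so the
        // server's store is published with release semantics.
        struct Node {
            std::uint32_t list_id;            // which channel was updated
            std::atomic<Node*> next{nullptr};
        };

        // Server side: the node's payload is fully written before 'next' flips
        // from null to non-null, which is what makes the client's jump safe.
        void publish(Node* tail, Node* fresh) {
            tail->next.store(fresh, std::memory_order_release);
        }

        // Client side: spin until the next node appears. _mm_pause() backs the
        // core off between probes; a tight load loop can delay noticing the
        // remote cache-line invalidation and inflate wake-up latency.
        Node* await_next(const Node* bookmark) {
            Node* n;
            while ((n = bookmark->next.load(std::memory_order_acquire)) == nullptr)
                _mm_pause();
            return n;
        }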

    Read the article

  • How can I estimate the memory usage of std::map?

    - by Drakosha
    For example, I have a std::map<A, B> with known sizeof(A) and sizeof(B), and the map has N entries. How would you estimate its memory usage? I'd say it's something like (sizeof(A) + sizeof(B)) * N * factor. But what is the factor? Or is there a different formula? Update: Maybe it's easier to ask for an upper bound?
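    One way to put a number on the factor: typical std::map implementations are red-black trees that make one heap allocation per entry, so each node carries the pair plus three tree pointers, a colour field (usually padded), and whatever per-allocation bookkeeping the allocator adds. A minimal sketch under those assumptions (the 16-byte allocator overhead is a guess, not a measured constant):

        #include <cstddef>
        #include <cstdio>
        #include <map>
        #include <utility>

        // Rough upper-bound estimate for a node-based red-black-tree map:
        // each entry is a separately allocated node holding the key/value
        // pair plus tree bookkeeping, and the heap allocator typically adds
        // its own per-allocation overhead on top.
        template <class A, class B>
        std::size_t estimate_map_bytes(std::size_t n,
                                       std::size_t alloc_overhead = 16) {
            const std::size_t payload  = sizeof(std::pair<const A, B>);
            const std::size_t tree_ptr = 3 * sizeof(void*); // parent/left/right
            const std::size_t colour   = sizeof(void*);     // colour + padding
            return n * (payload + tree_ptr + colour + alloc_overhead);
        }

        int main() {
            // e.g. a map<int, double> with one million entries
            std::printf("~%zu bytes\n", estimate_map_bytes<int, double>(1000000));
        }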

    Read the article

  • Dell whitepaper on PowerEdge R810 R910 and M910 Memory Architecture

    - by jchang
    The Dell PowerEdge 11th Generation Servers: R810, R910 and M910 Memory Guidance whitepaper seems to have caused some confusion. I believe the source is an error in the paper. In the section on FlexMem Bridge Technology, the Dell whitepaper says this applies to the R810 and the M910. The Dell M910 is a 4-way blade server for the Xeon 7500 series processor line. First, a brief recap: the R810 is a 2-way server, by which I mean it has two sockets, regardless of the number of cores on each processor... (read more)

    Read the article

  • Oracle Database In-Memory: Launch in Frankfurt

    - by Carsten Czarski
    This time there is something old ... and something new. First the new: on June 11, Larry Ellison will present the new, groundbreaking Oracle Database In-Memory functionality in Redwood Shores. With this new technology, customers benefit from accelerated database performance for analytics, data warehousing, reporting, and online transaction processing (OLTP). Only 6 days later, on June 17, the only European launch event takes place in Frankfurt. Besides technical talks, a panel session, and demos, a presentation by Andy Mendelsohn, Head of Database Product Development, is planned. Register today. And here is the old: who still remembers the HTML DB...? In the archives of the APEX Community site we found a video showing how you could lock pages in the HTML DB against other developers. By the way, that still exists today; it just looks a little different. Enjoy watching.

    Read the article

  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

        ms = New IO.MemoryStream
        bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
        bin.Serialize(ms, largeGraphOfObjects)
        dataToSaveToDatabase = ms.ToArray()
        ' put dataToSaveToDatabase in a SQL Server BLOB

    But the memory stream allocates a large buffer from the large object heap, which is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), so avoiding keeping all the data in my process's memory. Likewise for reading the data back... Some more background: this is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we deserialize them. The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new ones.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage occurs while this serialization is being done. I think we get large object heap fragmentation when we deserialize the objects, and I expect there are also other problems with large object heap fragmentation given the size of the arrays. (This has not yet been investigated, as the person who first looked at this is a numerical processing expert, not a memory management expert.) Our customers use a mix of SQL Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines), and each model can have many saved states; hence the saved state is stored in a database BLOB rather than a file. As the speed of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time. Other related questions I have asked:
        How to Stream data from/to SQL Server BLOB fields?
        Is there a SqlFileStream-like class that works with SQL Server 2005?

    Read the article

  • In-Memory OLTP Sample for SQL Server 2014 RTM

    - by Damian
    I have just found a very good resource about Hekaton (the In-Memory OLTP feature in SQL Server 2014). On the Codeplex site you can find the newest Hekaton samples: https://msftdbprodsamples.codeplex.com/releases/view/114491. The samples we had previously were for the CTP2 version, but the newest ones work with the RTM version. There are some fixes for issues you might have hit when running the previous samples on the RTM version: Update (Apr 28, 2014): Fixed an issue where the isolation level for sample stored procedures demonstrating integrity checks was too low. The transaction isolation level for the following stored procedures was updated: Sales.uspInsertSpecialOfferProductinmem, Sales.uspDeleteSpecialOfferinmem, Production.uspInsertProductinmem, and Production.uspDeleteProductinmem.

    Read the article

  • OpenGL Shading Program Object Memory Requirement

    - by Hans Wurst
    gDEbugger states that OpenGL's program objects occupy only an insignificant amount of memory. How much is that, actually? I don't know whether the stuff I looked up in Mesa is what I was looking for, but it requires 16KB [Edit: false; I was confused by struct names. Less than 1KB immediately, with some more behind pointers] per program object. Not quite insignificant. So is it recommended to create a unique program object for each object in the scene? Or to share a single program object and set each scene object's custom variables just before its draw call?

    Read the article

  • Grails application hogging too much memory

    - by RN
    Tomcat 5.5.x and 6.0.x
    Grails 1.6.x
    Java 1.6.x
    OS: CentOS 5.x (64-bit)
    VPS server with 384M of memory
    export JAVA_OPTS='-Xms128M -Xmx512M -XX:MaxPermSize=1024m'
    I created a blank Grails application, i.e. simply by running grails create-app, and then WARed it. I am running Tomcat on a VPS server. When I simply start Tomcat, with no apps deployed, free memory is about 236M and used memory is about 156M. When I deploy my "blank" application, memory consumption spikes to 360M, and finally the Tomcat instance is killed as soon as it takes up all free memory. As you have seen, my app is as light as it can be, so I am not sure why memory consumption is as high as it is. I am actually troubleshooting a real application, but have narrowed it down to this scenario, which is easier to share and explain.

    Read the article

  • How can I give Eclipse more memory than 512M?

    - by newbie
    I have the following setup, but when I replace all the 512 values with 1024, Eclipse won't start at all. How can I get more than 512M of memory for my Eclipse JVM?

        -startup
        plugins/org.eclipse.equinox.launcher_1.0.201.R35x_v20090715.jar
        --launcher.library
        plugins/org.eclipse.equinox.launcher.win32.win32.x86_1.0.200.v20090519
        -product
        com.springsource.sts.ide
        --launcher.XXMaxPermSize
        512M
        -vm
        C:\Program Files (x86)\Java\jdk1.6.0_18\bin\javaw
        -vmargs
        -Dosgi.requiredJavaVersion=1.5
        -Xms512m
        -Xmx512m
        -XX:MaxPermSize=512m

    Read the article

  • Strategies to Defeat Memory Editors for Cheating - Desktop Games

    - by ashes999
    I'm assuming we're talking about desktop games: something the player downloads and runs on their local computer. Many are the memory editors that allow you to detect and freeze values, like your player's health. How do you prevent cheating? What strategies are effective against this kind of cheating? I'm looking for some good ones. Two mediocre ones I use are:
        1. Displaying values as a percentage instead of the number (e.g. 46/50 = 92% health).
        2. A low-level class that holds values in an array and moves them with each change.
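    In the same spirit as the second strategy, a related common tactic is to never store the plain value at all. A minimal C++ sketch (hypothetical names, not from any particular engine): XOR-mask the value and re-randomise the mask on every write, so a scan for "46" finds nothing and frozen bytes go stale. This only defeats naive scan-and-freeze tools, not a determined attacker with a debugger:

        #include <cstdint>
        #include <random>

        // Obfuscated value holder: the real value never sits in memory in
        // plain form, and each write picks a new mask, so a memory editor
        // that froze the old bytes is left holding garbage.
        class GuardedInt {
            std::uint32_t masked_;
            std::uint32_t mask_;
            static std::uint32_t fresh_mask() {
                static std::mt19937 rng{std::random_device{}()};
                return rng();
            }
        public:
            explicit GuardedInt(std::uint32_t v) { set(v); }
            void set(std::uint32_t v) {
                mask_   = fresh_mask(); // new mask on every write
                masked_ = v ^ mask_;
            }
            std::uint32_t get() const { return masked_ ^ mask_; }
        };

        // Usage: health.set(health.get() - damage);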

    Read the article

  • Best way to store motion changes to reduce memory

    - by Andrew Simpson
    I am comparing JPEG to JPEG in a constant 'video stream'. I am using EMGU/OpenCV to compare each pixel at the byte level. There are 3 channels to each image (RGB). I had heard that it is common practice to store only the pixels that have changed between frames as a way of conserving memory. But if, for instance, I say EVERY pixel has changed (please note I am using an exaggerated example to make my point, and I would normally discard such large changes), then the resulting saved data is 3 times larger than the original JPEG. How can I store such motion changes efficiently? Thanks
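    One common way to keep the per-change overhead down, sketched here in C++ (a hypothetical helper, not EMGU API): store the delta as runs of consecutive changed bytes, so each run costs one small header rather than one coordinate per pixel. In the worst case (every byte changed) this degrades to roughly the raw frame plus one header, which is still uncompressed, so very large deltas are usually compressed or, as noted above, discarded:

        #include <cstdint>
        #include <vector>

        // A run of consecutive changed bytes in the flattened RGB buffer.
        struct Run {
            std::uint32_t offset;            // byte offset into the frame
            std::vector<std::uint8_t> bytes; // new values for that span
        };

        // Compute a sparse delta between two equal-sized RGB buffers.
        std::vector<Run> delta(const std::vector<std::uint8_t>& prev,
                               const std::vector<std::uint8_t>& cur) {
            std::vector<Run> runs;
            for (std::size_t i = 0; i < cur.size();) {
                if (prev[i] == cur[i]) { ++i; continue; }
                Run r{static_cast<std::uint32_t>(i), {}};
                while (i < cur.size() && prev[i] != cur[i])
                    r.bytes.push_back(cur[i++]);
                runs.push_back(std::move(r));
            }
            return runs;
        }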

    Read the article

  • memory map huge file with boost

    - by HaveF
    I want to handle huge files (TB). After several searches I found that Boost could help: boost/interprocess/file_mapping.hpp, and I also found the demo code. Because the file I read is too large (TB), I think I should map a fixed-size region of memory (say 1GB) and remap it when the data isn't in the mapped window. But I don't know how to write this part. I have only found another web page, which uses Boost.Iostreams to handle this problem. Should I use Boost.Iostreams, or Boost.Interprocess's file_mapping? (If the latter, please show me some code.) Thanks!
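    A minimal sketch of the windowed approach with Boost.Interprocess (assuming a read-only file, window-aligned offsets, and no error handling; near EOF the window size would need clamping): keep one fixed-size mapped_region and construct a new one over the containing window whenever a requested offset falls outside it; the assignment unmaps the old window:

        #include <boost/interprocess/file_mapping.hpp>
        #include <boost/interprocess/mapped_region.hpp>
        #include <cstddef>

        namespace bip = boost::interprocess;

        // Sliding 1GB window over a huge file: map only the region around
        // the offset being read, and replace the mapping when a read falls
        // outside it.
        class FileWindow {
            bip::file_mapping file_;
            bip::mapped_region region_;
            std::size_t base_ = 0;
            static constexpr std::size_t kWindow = std::size_t(1) << 30; // 1GB
        public:
            explicit FileWindow(const char* path)
                : file_(path, bip::read_only) { remap(0); }

            void remap(std::size_t offset) {
                base_   = offset;
                region_ = bip::mapped_region(file_, bip::read_only,
                                             offset, kWindow);
            }

            const char* at(std::size_t offset) {
                if (offset < base_ || offset >= base_ + kWindow)
                    remap(offset - offset % kWindow); // page in a new window
                return static_cast<const char*>(region_.get_address())
                       + (offset - base_);
            }
        };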

    Read the article

  • Book about TCP, HTTP, named pipes, shared memory, WCF and other inter-process communication protocols

    - by Samuel
    Recently, I had to create a program to send messages between two WinForms executables. I used a tool with simple built-in functionality to avoid having to figure out all the ins and outs of the vast number of protocols that exist. But now I'm ready to learn more about the internal differences between each of these protocols. I googled a couple of them, but a good reference book that gives me a clear idea of how each protocol works, and its pros and cons in various contexts, would be greatly appreciated. Here is a list of the protocols I found interesting:
        Shared memory
        TCP
        Named pipes
        File mapping
        Mailslots
        MSMQ (Microsoft's queuing solution)
        WCF
    I know that none of these protocols are specific to a language; it would be nice if examples could be in .NET. Thank you very much.

    Read the article

  • Gradual memory leak in loop over contents of QTMovie

    - by Benji XVI
    I have a simple Foundation tool that exports every frame of a movie as a .tiff file. Here is the relevant code:

        NSString *movieLoc = [NSString stringWithCString:argv[1]];
        QTMovie *sourceMovie = [QTMovie movieWithFile:movieLoc error:nil];
        int i = 0;
        while (QTTimeCompare([sourceMovie currentTime], [sourceMovie duration]) != NSOrderedSame) {
            // save image of movie to disk
            NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
            NSString *filePath = [NSString stringWithFormat:@"/somelocation_%d.tiff", i++];
            NSData *currentImageData = [[sourceMovie currentFrameImage] TIFFRepresentation];
            [currentImageData writeToFile:filePath atomically:NO];
            NSLog(@"%@", filePath);
            [sourceMovie stepForward];
            [arp release];
        }
        [pool drain];
        return 0;

    As you can see, to prevent a very large memory build-up from the various transparently autoreleased variables in the loop, we create and flush an autorelease pool on every pass through the loop. However, over the course of stepping through a movie, the amount of memory used by the program still gradually increases. Instruments is not detecting any memory leaks per se, but the object trace shows certain General Data blocks increasing in size. [Edited out reference to slowdown as it doesn't seem to be as much of a problem as I thought.] Edit: let's knock out some parts of the code inside the loop and see what we find out...

    Test 1

        while (banana) {
            NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
            NSString *filePath = [NSString stringWithFormat:@"/somelocation_%d.tiff", i++];
            NSLog(@"%@", filePath);
            [sourceMovie stepForward];
            [arp release];
        }

    Here we simply loop over the whole movie, creating the filename and logging it. Memory characteristics: remains at 15MB usage for the duration.

    Test 2

        while (banana) {
            NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
            NSImage *image = [sourceMovie currentFrameImage];
            [sourceMovie stepForward];
            [arp release];
        }

    Here we add back the creation of the NSImage from the current frame. Memory characteristics: gradually increasing memory usage; RSIZE is at 60MB by frame 200 and 75MB by f300.

    Test 3

        while (banana) {
            NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
            NSImage *image = [sourceMovie currentFrameImage];
            NSData *imageData = [image TIFFRepresentation];
            [sourceMovie stepForward];
            [arp release];
        }

    We've added back the creation of an NSData object from the NSImage. Memory characteristics: memory usage is again increasing (62MB at f200, 75MB at f300); in other words, largely identical. It looks to me like a memory leak in the underlying system QTMovie uses for currentFrameImage.

    Read the article

  • in memory datastore in haskell

    - by Simon
    I want to implement an in-memory datastore for a web service in Haskell, and I want to run transactions in the STM monad. When I google "hash table STM Haskell" I only get this: Data.BTree.HashTable.STM. The module name and the complexities suggest that this is implemented as a tree. I would think that an array would be more efficient for mutable hash tables. Is there a reason to avoid using an array for an STM hash table? Do I gain anything with this STM hash table, or should I just use an STM ref to an IntMap?

    Read the article

  • What makes an application memory bandwidth bound?

    - by TheLQ
    This has been something that's been bothering me for a while: what makes an application memory-bandwidth bound? For example, take this monstrosity of a computer that calculated the 5 trillionth digit of pi (and later the 10 trillionth digit). I was surprised that they chose the smaller but faster 98 GB of RAM at 1066 MHz instead of the larger but slower 144 GB at 800 MHz. This is especially surprising considering they are using a 22 TB HD array to store the results of the computation; more RAM means less need for hard drives. Maybe it's because I don't write applications for HPC servers, but how would RAM be the bottleneck? Are there any other, non-HPC applications that usually run into this problem?
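    A quick way to see what "memory-bandwidth bound" means in code: a STREAM-style triad loop does almost no arithmetic per byte moved, so its speed is set by how fast memory can feed the core, not by the ALUs. A minimal C++ sketch (array sizes chosen to dwarf any cache; the printed figure is illustrative, not a calibrated benchmark):

        #include <chrono>
        #include <cstdio>
        #include <vector>

        int main() {
            // Three arrays of 32M doubles (~256MB each) far exceed the CPU
            // caches, so every iteration round-trips to main memory.
            const std::size_t n = std::size_t(1) << 25;
            std::vector<double> a(n, 1.0), b(n, 2.0), c(n, 3.0);

            auto t0 = std::chrono::steady_clock::now();
            for (std::size_t i = 0; i < n; ++i)
                a[i] = b[i] + 3.0 * c[i]; // 2 loads + 1 store, only 2 flops
            auto t1 = std::chrono::steady_clock::now();

            double secs = std::chrono::duration<double>(t1 - t0).count();
            // 3 arrays of 8-byte elements cross the memory bus per pass;
            // printing a[0] keeps the optimizer from deleting the loop.
            std::printf("a[0]=%.1f  ~%.2f GB/s\n",
                        a[0], 3.0 * n * sizeof(double) / secs / 1e9);
        }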

    Read the article
