Search Results

Search found 258446 results on 10338 pages for 'stack memory'.


  • Vagrant fails to bring up additional adapter for CentOS VM using VirtualBox provider

    - by Anadi Misra
    this is in continuation of the question asked here about host only adapter on dhcp. I upgraded to Vagrant 1.6.3 and updated the Vagrantfile to the following setting for multiple adapters:

        # add additional adapter for inter machine networking
        dev.vm.network :private_network, :type => "dhcp", :adapter => "2", :netmask => "255.255.255.0"

    It goes through creating the adapters but then fails bringing up the NIC on the VM:

        Anadis-MacBook-Pro:full-stack-env anadi$ vagrant up
        Bringing machine 'full-stack-env' up with 'virtualbox' provider...
        ==> full-stack-env: Clearing any previously set forwarded ports...
        ==> full-stack-env: Clearing any previously set network interfaces...
        ==> full-stack-env: Preparing network interfaces based on configuration...
            full-stack-env: Adapter 1: nat
            full-stack-env: Adapter 2: hostonly
        ==> full-stack-env: Forwarding ports...
            full-stack-env: 22 => 4223 (adapter 1)
            full-stack-env: 8080 => 8090 (adapter 1)
        ==> full-stack-env: Running 'pre-boot' VM customizations...
        ==> full-stack-env: Booting VM...
        ==> full-stack-env: Waiting for machine to boot. This may take a few minutes...
            full-stack-env: SSH address: 127.0.0.1:4223
            full-stack-env: SSH username: vagrant
            full-stack-env: SSH auth method: private key
            full-stack-env: Warning: Connection timeout. Retrying...
            full-stack-env: Warning: Connection timeout. Retrying...
            full-stack-env: Warning: Remote connection disconnect. Retrying...
        ==> full-stack-env: Machine booted and ready!
        ==> full-stack-env: Checking for guest additions in VM...
        ==> full-stack-env: Setting hostname...
        ==> full-stack-env: Configuring and enabling network interfaces...
        The following SSH command responded with a non-zero exit status.
        Vagrant assumes that this means the command failed!

        ARPCHECK=no /sbin/ifup eth 2> /dev/null

        Stdout from the command:

        Device eth does not seem to be present, delaying initialization.

        Stderr from the command:

    However, when I log in to the environment I see two network interfaces as expected:

        Anadis-MacBook-Pro:full-stack-env anadi$ vagrant ssh
        Last login: Wed Jun 4 12:54:47 2014 from 10.0.2.2
        [vagrant@full-stack-env ~]$ ifconfig
        eth0      Link encap:Ethernet  HWaddr 08:00:27:BD:39:57
                  inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
                  inet6 addr: fe80::a00:27ff:febd:3957/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:511 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:360 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:54574 (53.2 KiB)  TX bytes:46675 (45.5 KiB)

        eth1      Link encap:Ethernet  HWaddr 08:00:27:A3:86:C9
                  inet addr:172.28.128.3  Bcast:172.28.128.255  Mask:255.255.255.0
                  inet6 addr: fe80::a00:27ff:fea3:86c9/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:5 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1360 (1.3 KiB)  TX bytes:894 (894.0 b)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

    I am a bit confused here as to why it is trying to add another NIC (eth2). In the VM I used for creating this Vagrant box, I had added two NICs already.

    Read the article

  • Find out how much memory my server would ideally need?

    - by Daniel
    I have a pretty busy GNU/Linux server that I think needs more RAM. I know that the free command doesn't show the amount of RAM that is used, so I stumbled upon Committed_AS in /proc/meminfo. It currently shows 57972 kB, which isn't much. Is this the amount of RAM that the processes use "right now", or is it an estimate of how much additional RAM it would take to never run out of memory with this load?
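
    For reference, Committed_AS is the total amount of memory the kernel has promised to all current allocations, i.e. a worst-case estimate for the present workload rather than what is resident right now. A minimal sketch of checking it programmatically, assuming a Linux /proc filesystem (the program below is an illustration, not something from the original question):

        #include <fstream>
        #include <iostream>
        #include <string>

        // Print the /proc/meminfo fields relevant to the question: Committed_AS
        // (memory promised to all current allocations), CommitLimit (what the
        // kernel will allow with the current overcommit settings) and MemTotal.
        int main() {
            std::ifstream meminfo("/proc/meminfo");
            std::string line;
            while (std::getline(meminfo, line)) {
                if (line.compare(0, 12, "Committed_AS") == 0 ||
                    line.compare(0, 11, "CommitLimit") == 0 ||
                    line.compare(0, 8,  "MemTotal") == 0) {
                    std::cout << line << '\n';
                }
            }
            return 0;
        }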

    Read the article

  • Auto restart server if virtual memory is too low

    - by Sukhjinder Singh
    There are quite a number of services running on my server: httpd, varnish, mysql, memcache, java... Each of them uses a part of the virtual memory, and varnish was configured to be allocated 3GB of memory. Due to high traffic load, which is 100K, our server ran out of memory and the oom-killer was invoked, and we had to reboot the server. We have 8GB of virtual memory and for certain reasons we cannot extend to more. My question is: is there any automated script which will monitor how much virtual memory is left and, based upon certain criteria (let's say less than 500MB left), restart the server automatically? I know this is not the proper solution, but we have to do it; otherwise we don't know when the server will hit OOM, and by the time we notice and restart it, we have lost our visiting users.
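
    In practice this kind of watchdog is usually a cron'd shell script or a tool such as monit, but the idea is small enough to sketch. In the sketch below the 500MB threshold, the 60-second poll interval and the reboot command are assumptions, and MemFree deliberately ignores reclaimable page cache (add Buffers/Cached for a less pessimistic view):

        #include <cstdlib>
        #include <fstream>
        #include <sstream>
        #include <string>
        #include <unistd.h>

        // Return a field from /proc/meminfo in KiB, or 0 if it is not found.
        static long kib_field(const std::string& name) {
            std::ifstream meminfo("/proc/meminfo");
            std::string line;
            while (std::getline(meminfo, line)) {
                if (line.compare(0, name.size(), name) == 0) {
                    std::istringstream in(line.substr(name.size() + 1));
                    long kib = 0;
                    in >> kib;
                    return kib;
                }
            }
            return 0;
        }

        int main() {
            const long threshold_kib = 500 * 1024;   // 500 MB, an assumed threshold
            for (;;) {
                long free_kib = kib_field("MemFree") + kib_field("SwapFree");
                if (free_kib > 0 && free_kib < threshold_kib) {
                    // Last resort: reboot (restarting the worst offender would be gentler).
                    std::system("/sbin/shutdown -r now");
                    return 0;
                }
                sleep(60);
            }
        }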

    Read the article

  • non-mapped virtual memory & total number of connections

    - by tszming
    We have two MongoDB data nodes (replica set) - Primary and Secondary. I noticed that the non-mapped virtual memory is relatively high and I am wondering if it is hurting our MongoDB performance (the server usually peaks at around 6-7K queries per second). In MMS, it is stated: "The most common case of usage of a high amount of memory for non-mapped is that there are very many connections to the database." So we checked the memory usage with db.serverStatus().mem on our Secondary:

        {
            "bits" : 64,
            "resident" : 6846,
            "virtual" : 416797,
            "supported" : true,
            "mapped" : 205549,
            "mappedWithJournal" : 411098,
            "note" : "virtual minus mapped is large. could indicate a memory leak"
        }

    Note: we are using 2.0.4, and the default stack size should now be 1MB per connection. The current number of connections is around 1.1K, but the non-mapped virtual memory (virtual - mappedWithJournal) is around 5699 MB. The trend is quite stable, so I can't say there is a leak here, but where has the memory gone? Any idea?

    Read the article

  • Why does one loop take longer to detect a shared memory update than another loop?

    - by Joseph Garvin
    I've written a 'server' program that writes to shared memory, and a client program that reads from the memory. The server has different 'channels' that it can be writing to, which are just different linked lists that it's appending items to. The client is interested in some of the linked lists, and wants to read every node that's added to those lists as it comes in, with the minimum latency possible. I have two approaches for the client:

    1. For each linked list, the client keeps a 'bookmark' pointer to keep its place within the linked list. It round-robins the linked lists, iterating through all of them over and over (it loops forever), moving each bookmark one node forward each time if it can. Whether it can is determined by the value of a 'next' member of the node. If it's non-null, then jumping to the next node is safe (the server switches it from null to non-null atomically). This approach works OK, but if there are a lot of lists to iterate over, and only a few of them are receiving updates, the latency gets bad.

    2. The server gives each list a unique ID. Each time the server appends an item to a list, it also appends the ID number of the list to a master 'update list'. The client only keeps one bookmark, a bookmark into the update list. It endlessly checks whether the bookmark's next pointer is non-null ( while(node->next_ == NULL) {} ); if so, it moves ahead, reads the ID given, and then processes the new node on the linked list that has that ID. This, in theory, should handle large numbers of lists much better, because the client doesn't have to iterate over all of them each time.

    When I benchmarked the latency of both approaches (using gettimeofday), to my surprise #2 was terrible. The first approach, for a small number of linked lists, would often be under 20us of latency. The second approach would have short stretches of low latency but would often be between 4,000-7,000us! By inserting gettimeofday calls here and there, I've determined that all of the added latency in approach #2 is spent in the loop repeatedly checking whether the next pointer is non-null. This is puzzling to me; it's as if the change in one process is taking longer to 'publish' to the second process with the second approach. I assume there's some sort of cache interaction going on that I don't understand. What's going on?
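
    For illustration, here is a sketch of how the approach #2 polling loop is often written with C++11 atomics; the struct layout and names are assumptions rather than the poster's code. The explicit acquire load keeps the compiler from hoisting the read out of the loop, the release store on the producer side makes the node's payload visible before the pointer flips to non-null, and the pause hint reduces pressure on the memory system while spinning:

        #include <atomic>
        #include <cstdint>
        #include <emmintrin.h>   // _mm_pause (x86)

        // Hypothetical layout of one entry in the master 'update list'. alignas(64)
        // keeps each node on its own cache line so the consumer's polling does not
        // contend with unrelated producer writes that happen to share the line.
        struct alignas(64) UpdateNode {
            std::atomic<UpdateNode*> next{nullptr};
            std::uint32_t            list_id{0};
        };

        // Consumer: spin until the producer publishes the next node.
        inline UpdateNode* wait_for_next(UpdateNode* bookmark) {
            UpdateNode* next;
            while ((next = bookmark->next.load(std::memory_order_acquire)) == nullptr) {
                _mm_pause();   // spin-wait hint
            }
            return next;
        }

        // Producer: fill in the payload first, then publish with a release store.
        inline void publish(UpdateNode* tail, UpdateNode* fresh, std::uint32_t id) {
            fresh->list_id = id;
            tail->next.store(fresh, std::memory_order_release);
        }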

    Read the article

  • How can I estimate the memory usage of std::map?

    - by Drakosha
    For example, I have a std::map<A, B> with known sizeof(A) and sizeof(B), and the map has N entries. How would you estimate its memory usage? I'd say it's something like (sizeof(A) + sizeof(B)) * N * factor, but what is the factor? Or is there a different formula? Update: maybe it's easier to ask for an upper bound?
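
    A back-of-the-envelope sketch, assuming a node-based red-black tree implementation (the per-node pointer/colour overhead and any allocator padding are implementation details, so treat the result as an estimate rather than an exact answer):

        #include <cstddef>
        #include <map>
        #include <utility>

        // Rough per-entry estimate for std::map<A, B>: the stored pair plus a
        // typical red-black tree node header (parent/left/right pointers and a
        // colour field). Real implementations add allocator rounding on top.
        template <typename A, typename B>
        std::size_t estimate_map_bytes(std::size_t n) {
            const std::size_t payload  = sizeof(std::pair<const A, B>); // may exceed sizeof(A)+sizeof(B) due to padding
            const std::size_t overhead = 3 * sizeof(void*)              // parent, left, right
                                       + sizeof(int);                   // colour, usually padded to a word
            return n * (payload + overhead) + sizeof(std::map<A, B>);   // plus the map object itself
        }

    On a typical 64-bit libstdc++ build the node header alone works out to about 32 bytes per entry, which is usually the dominant part of the "factor" when the keys and values are small.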

    Read the article

  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

        ms = New IO.MemoryStream
        bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
        bin.Serialize(ms, largeGraphOfObjects)
        dataToSaveToDatabase = ms.ToArray()
        // put dataToSaveToDatabase in a Sql server BLOB

    But the memory stream allocates a large buffer from the large memory heap, and that is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), thus avoiding keeping all the data in my process's memory. Likewise for reading the data back.

    Some more background: this is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we de-serialize them.

    The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new ones.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage occurs while this serialization is being done. I think we get large memory pool fragmentation when we de-serialize the objects, and I expect there are also other problems with large memory pool fragmentation given the size of the arrays. (This has not yet been investigated, as the person who first looked at this is a numerical processing expert, not a memory management expert.)

    Our customers use a mix of SQL Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines), and each model can have many saved states; hence the saved state is stored in a database BLOB rather than a file. As the speed of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time.

    Other related questions I have asked:

        How to Stream data from/to SQL Server BLOB fields?
        Is there a SqlFileStream like class that works with Sql Server 2005?

    Read the article

  • Grails application hogging too much memory

    - by RN
    Environment:

        Tomcat 5.5.x and 6.0.x
        Grails 1.6.x
        Java 1.6.x
        OS: CentOS 5.x (64bit)
        VPS server with 384M of memory
        export JAVA_OPTS='-Xms128M -Xmx512M -XX:MaxPermSize=1024m'

    I have created a blank Grails application, i.e. simply by giving the command grails create-app, and then WARed it. I am running Tomcat on a VPS server. When I simply start the Tomcat server, with no apps deployed, the free memory is about 236M and used memory is about 156M. When I deploy my "blank" application, the memory consumption spikes to 360M and finally the Tomcat instance is killed as soon as it takes up all free memory. As you can see, my app is as light as it can be, so I am not sure why the memory consumption is as high as it is. I am actually troubleshooting a real application, but have narrowed it down to this scenario, which is easier to share and explain.

    Read the article

  • How can I give Eclipse more memory than 512M?

    - by newbie
    I have the following setup, but when I put in 1024 and replace all the 512 values with 1024, Eclipse won't start at all. How can I give my Eclipse JVM more than 512M of memory?

        -startup
        plugins/org.eclipse.equinox.launcher_1.0.201.R35x_v20090715.jar
        --launcher.library
        plugins/org.eclipse.equinox.launcher.win32.win32.x86_1.0.200.v20090519
        -product
        com.springsource.sts.ide
        --launcher.XXMaxPermSize
        512M
        -vm
        C:\Program Files (x86)\Java\jdk1.6.0_18\bin\javaw
        -vmargs
        -Dosgi.requiredJavaVersion=1.5
        -Xms512m
        -Xmx512m
        -XX:MaxPermSize=512m

    Read the article

  • Gradual memory leak in loop over contents of QTMovie

    - by Benji XVI
    I have a simple foundation tool that exports every frame of a movie as a .tiff file. Here is the relevant code:

        NSString* movieLoc = [NSString stringWithCString:argv[1]];
        QTMovie *sourceMovie = [QTMovie movieWithFile:movieLoc error:nil];
        int i = 0;
        while (QTTimeCompare([sourceMovie currentTime], [sourceMovie duration]) != NSOrderedSame) {
            // save image of movie to disk
            NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
            NSString *filePath = [NSString stringWithFormat:@"/somelocation_%d.tiff", i++];
            NSData *currentImageData = [[sourceMovie currentFrameImage] TIFFRepresentation];
            [currentImageData writeToFile:filePath atomically:NO];
            NSLog(@"%@", filePath);
            [sourceMovie stepForward];
            [arp release];
        }
        [pool drain];
        return 0;

    As you can see, in order to prevent a very large memory build-up from the various transparently-autoreleased variables in the loop, we create, and flush, an autorelease pool with every run through the loop. However, over the course of stepping through a movie, the amount of memory used by the program still gradually increases. Instruments is not detecting any memory leaks per se, but the object trace shows certain General Data blocks to be increasing in size. [Edited out reference to slowdown as it doesn't seem to be as much of a problem as I thought.]

    Edit: let's knock out some parts of the code inside the loop and see what we find out...

    Test 1

        while (banana) {
            NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
            NSString *filePath = [NSString stringWithFormat:@"/somelocation_%d.tiff", i++];
            NSLog(@"%@", filePath);
            [sourceMovie stepForward];
            [arp release];
        }

    Here we simply loop over the whole movie, creating the filename and logging it. Memory characteristics: remains at 15MB usage for the duration.

    Test 2

        while (banana) {
            NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
            NSImage *image = [sourceMovie currentFrameImage];
            [sourceMovie stepForward];
            [arp release];
        }

    Here we add back in the creation of the NSImage from the current frame. Memory characteristics: gradually increasing memory usage; RSIZE is at 60MB by frame 200, 75MB by frame 300.

    Test 3

        while (banana) {
            NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
            NSImage *image = [sourceMovie currentFrameImage];
            NSData *imageData = [image TIFFRepresentation];
            [sourceMovie stepForward];
            [arp release];
        }

    We've added back in the creation of an NSData object from the NSImage. Memory characteristics: memory usage is again increasing: 62MB at frame 200, 75MB at frame 300. In other words, largely identical. It looks to me like a memory leak in the underlying system QTMovie uses for currentFrameImage.

    Read the article

  • How can I find out how much memory is physically installed in Windows?

    - by Randall
    I need to log information about how much RAM the user has. My first approach was to use GlobalMemoryStatusEx, but that only gives me how much memory is available to Windows, not how much is installed. I found the function GetPhysicallyInstalledSystemMemory, but it's only available on Vista and later, and I need this to work on XP. Is there a fairly simple way of querying the SMBIOS information that GetPhysicallyInstalledSystemMemory uses, or is there a registry value somewhere from which I can find this out?
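
    One commonly used fallback on XP is to ask WMI for the installed memory modules and sum their sizes. The sketch below does that from C++; error handling is trimmed, and how faithfully Win32_PhysicalMemory reflects the SMBIOS tables depends on the BIOS, so treat it as an illustration rather than a drop-in replacement for GetPhysicallyInstalledSystemMemory:

        #define _WIN32_DCOM
        #include <comdef.h>
        #include <Wbemidl.h>
        #include <cstdlib>
        #pragma comment(lib, "wbemuuid.lib")

        // Sum the Capacity of every Win32_PhysicalMemory instance, in bytes.
        unsigned long long InstalledMemoryBytes() {
            unsigned long long total = 0;

            CoInitializeEx(0, COINIT_MULTITHREADED);
            CoInitializeSecurity(NULL, -1, NULL, NULL, RPC_C_AUTHN_LEVEL_DEFAULT,
                                 RPC_C_IMP_LEVEL_IMPERSONATE, NULL, EOAC_NONE, NULL);

            IWbemLocator *loc = NULL;
            CoCreateInstance(CLSID_WbemLocator, 0, CLSCTX_INPROC_SERVER,
                             IID_IWbemLocator, (LPVOID *)&loc);

            IWbemServices *svc = NULL;
            loc->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), NULL, NULL, 0, 0, 0, 0, &svc);
            CoSetProxyBlanket(svc, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, NULL,
                              RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE,
                              NULL, EOAC_NONE);

            IEnumWbemClassObject *rows = NULL;
            svc->ExecQuery(_bstr_t(L"WQL"),
                           _bstr_t(L"SELECT Capacity FROM Win32_PhysicalMemory"),
                           WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
                           NULL, &rows);

            IWbemClassObject *row = NULL;
            ULONG returned = 0;
            while (rows && SUCCEEDED(rows->Next(WBEM_INFINITE, 1, &row, &returned)) && returned) {
                VARIANT v;
                VariantInit(&v);
                if (SUCCEEDED(row->Get(L"Capacity", 0, &v, 0, 0)) && v.vt == VT_BSTR) {
                    total += _wtoi64(v.bstrVal);   // Capacity is a uint64 marshalled as a string
                }
                VariantClear(&v);
                row->Release();
            }

            if (rows) rows->Release();
            if (svc)  svc->Release();
            if (loc)  loc->Release();
            CoUninitialize();
            return total;
        }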

    Read the article

  • Reusing an NSString variable - does it cause a memory leak?

    - by Chris S
    Coming from a .NET background I'm used to reusing string variables for storage, so is the code below likely to cause a memory leak? The code is targeting OS X on the iPhone/iPod touch, so there is no automatic GC.

        -(NSString*) stringExample {
            NSString *result = @"example";
            result = [result stringByAppendingString:@" test"]; // where does "example" go?
            return result;
        }

    What confuses me is that NSStrings are immutable, yet you can reuse an 'immutable' variable with no problem.

    Read the article

  • Combining FileStream and MemoryStream to avoid disk accesses/paging while receiving gigabytes of data?

    - by w128
    I'm receiving a file as a stream of byte[] data packets (total size isn't known in advance) that I need to store somewhere before processing it immediately after it's been received (I can't do the processing on the fly). The total received file size can vary from as little as 10 KB to over 4 GB.

    One option for storing the received data is to use a MemoryStream, i.e. a sequence of MemoryStream.Write(bufferReceived, 0, count) calls to store the received packets. This is very simple, but it will obviously result in an out-of-memory exception for large files.

    An alternative option is to use a FileStream, i.e. FileStream.Write(bufferReceived, 0, count). This way, no out-of-memory exceptions will occur, but what I'm unsure about is bad performance due to disk writes (which I don't want to occur as long as plenty of memory is still available) - I'd like to avoid disk access as much as possible, but I don't know of a way to control this.

    I did some testing and most of the time there seems to be little performance difference between, say, 10,000 consecutive calls of MemoryStream.Write() vs FileStream.Write(), but a lot seems to depend on buffer size and the total amount of data in question (i.e. the number of writes). Obviously, MemoryStream size reallocation is also a factor.

    Does it make sense to use a combination of MemoryStream and FileStream, i.e. write to the memory stream by default, but once the total amount of data received is over e.g. 500 MB, write it to a FileStream; then read in chunks from both streams for processing the received data (first process the 500 MB from the MemoryStream, dispose of it, then read from the FileStream)?

    Another solution is to use a custom memory stream implementation that doesn't require continuous address space for its internal array allocation (i.e. a linked list of memory streams); this way, at least on 64-bit environments, out-of-memory exceptions should no longer be an issue. Con: extra work, more room for mistakes.

    So how do FileStream vs MemoryStream reads/writes behave in terms of disk access and memory caching, i.e. the data size/performance balance? I would expect that as long as enough RAM is available, FileStream would internally read/write from memory (the cache) anyway, and virtual memory would take care of the rest. But I don't know how often FileStream will explicitly access the disk when being written to. Any help would be appreciated.
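
    The hybrid idea is easy to sketch in outline, shown here in C++ for illustration only (the class name, temp-file path and 500 MB threshold are assumptions; the .NET-specific question about how FileStream interacts with the OS cache is not answered by it):

        #include <cstddef>
        #include <fstream>
        #include <vector>

        // Keep incoming packets in memory until a threshold is crossed, then
        // spill everything received so far (and all later packets) to a file.
        class SpillBuffer {
        public:
            explicit SpillBuffer(std::size_t threshold_bytes = 500u * 1024 * 1024)
                : threshold_(threshold_bytes) {}

            void write(const char* data, std::size_t count) {
                if (!spilled_ && mem_.size() + count > threshold_) {
                    file_.open("spill.tmp", std::ios::binary);       // assumed temp path
                    file_.write(mem_.data(), static_cast<std::streamsize>(mem_.size()));
                    mem_.clear();
                    mem_.shrink_to_fit();                             // release the in-memory copy
                    spilled_ = true;
                }
                if (spilled_) {
                    file_.write(data, static_cast<std::streamsize>(count));
                } else {
                    mem_.insert(mem_.end(), data, data + count);
                }
            }

        private:
            std::size_t       threshold_;
            std::vector<char> mem_;
            std::ofstream     file_;
            bool              spilled_ = false;
        };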

    Read the article

  • How can I run ARM code from external memory?

    - by samoz
    I am using an LPC2132 ARM chip to develop a program. However, my program has grown larger than the space on the chip. How can I connect my chip to some sort of external memory chip to hold additional executable code? Is this possible? If not, what do people normally do when they run out of chip space?
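
    When the part has no external bus it can execute code from directly, a common workaround is to keep the extra code in an external serial flash, copy it into RAM at run time, and jump to it through a function pointer. A heavily simplified sketch of that overlay idea follows; spi_flash_read(), the addresses and the sizes are placeholders, and the copied code has to be built position-independent (or linked for the RAM address), with the usual ARM/Thumb entry-point care:

        #include <stdint.h>

        /* Hypothetical driver that reads 'len' bytes from external flash. */
        extern void spi_flash_read(uint32_t src, void *dst, uint32_t len);

        #define OVERLAY_FLASH_ADDR  0x00020000u   /* placeholder offset in external flash */
        #define OVERLAY_SIZE        0x1000u       /* placeholder overlay size in bytes    */

        static uint32_t overlay_ram[OVERLAY_SIZE / 4];   /* word-aligned buffer in executable RAM */

        typedef int (*overlay_fn_t)(int);

        int run_overlay(int arg)
        {
            spi_flash_read(OVERLAY_FLASH_ADDR, overlay_ram, OVERLAY_SIZE);
            /* Bit 0 of the target address selects Thumb mode on ARM7TDMI;
               keep it clear when jumping to ARM code. */
            overlay_fn_t entry = (overlay_fn_t)((uintptr_t)overlay_ram & ~(uintptr_t)1);
            return entry(arg);
        }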

    Read the article
