Search Results

Search found 258446 results on 10338 pages for 'stack memory'.


  • Confusion about the "stack" in assembly-level programming

    - by Bigyellow Bastion
    What is the "stack", exactly? I've read articles and tried to work it out from my own programming experience and educated guesses, but I'm still perplexed. Is the stack a region of RAM, or some other kind of storage I'm not aware of? Does the processor push values from its registers onto the stack in RAM, or do I have that wrong? I also understand that the processor moves values from RAM into registers to "process" them, such as for a compare or arithmetic. What would really help is a visual or verbal description (or both) of how the idea of a "stack" is implemented here. Is this the same thing as the "machine stack", meaning it lives in RAM? I'm sorry, I don't want to start a debate, but I could really use some help if anyone can straighten this out. TO ADD: I know what a software stack is. I know about LIFO, FIFO, etc. I just want a better understanding of the assembly-level stack: what it is, where it is, and how exactly it works. Thanks for reading!
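
    A minimal sketch of the idea in C, assuming a typical desktop platform (the exact register is architecture-specific, e.g. RSP on x86-64): the call stack is an ordinary region of RAM that the processor tracks through its stack-pointer register, and each function call pushes a return address and a new frame of locals onto it.

        #include <stdio.h>

        /* Each call to frame() gets its own stack frame holding 'local'.
         * On most platforms the printed addresses decrease with depth,
         * because the stack grows toward lower addresses in RAM. */
        static void frame(int depth) {
            int local = depth;                 /* stored in this call's frame */
            printf("depth %d: &local = %p\n", depth, (void *)&local);
            if (depth < 3)
                frame(depth + 1);              /* 'call' pushes a return address */
        }                                      /* 'ret' pops it; the frame is gone */

        int main(void) {
            frame(0);
            return 0;
        }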

    Read the article

  • The Ideal Platform for Oracle Database 12c In-Memory and In-Memory Applications

    - by Michael Palmeter (Engineered Systems Product Management)
    Oracle SuperCluster, Oracle's SPARC M6 and T5 servers, Oracle Solaris, Oracle VM Server for SPARC, and Oracle Enterprise Manager have been co-engineered with Oracle Database and Oracle applications to provide maximum In-Memory performance, scalability, efficiency and reliability for the most critical and demanding enterprise deployments. The In-Memory option for Oracle Database 12c, which has just been released, has been specifically optimized for SPARC servers running Oracle Solaris. The unique combination of Oracle's 32-terabyte M6 Big Memory Machine and Oracle Database 12c In-Memory demonstrates a 2X increase in OLTP performance and a 100X improvement in analytics response times, allowing complex analysis of incredibly large data sets at the speed of thought. Numerous unique enhancements, including the large cache on the SPARC M6 processor, a massive 32 TB of memory, the uniform memory access architecture, the Oracle Solaris high-performance kernel, and Oracle Database SGA optimization, result in orders of magnitude better transaction processing speeds across a range of in-memory workloads.
    Resources:
    Oracle Database 12c In-Memory
    The Power of Oracle SuperCluster and In-Memory Applications (Video, 3:13)
    Oracle's In-Memory applications
    Oracle E-Business Suite In-Memory Cost Management on the Oracle SuperCluster M6-32 (PDF)
    Oracle JD Edwards Enterprise One In-Memory Applications on Oracle SuperCluster M6-32 (PDF)
    Oracle JD Edwards Enterprise One In-Memory Sales Advisor on the SuperCluster M6-32 (PDF)
    Oracle JD Edwards Enterprise One Project Portfolio Management on the SuperCluster M6-32 (PDF)

    Read the article

  • Disable Memory Modules In BIOS for Testing Purposes (Optimize Nehalem/Gulftown Memory Performance)

    - by Bob
    I recently acquired an HP Z800 with two Intel Xeon X5650 (Gulftown) 6-core processors. The person who configured the system chose 16GB (8 x 2GB DDR3-1333). I'm assuming this person was unaware that these processors have three memory channels and that, to optimize memory performance, one should install modules in multiples of three. Based on this, I have a question: by entering the BIOS, can I disable the bank on each processor that has the single memory module? If so, will this have any adverse effects or behave differently than physically removing the modules? I ask because I would prefer to keep the extra memory in the system, provided it truly behaves as if the memory were not there. I also see this as an opportunity to test 12GB vs. 16GB to see if there is a noticeable difference. Note: According to http://www.delltechcenter.com/page/04-08-2009+-+Nehalem+and+Memory+Configurations?t=anon, the current configuration reduces the overall data transfer speed to 1066 MT/s and, in addition, the memory bandwidth goes down by about 23%.

    Read the article

  • How can I configure Firefox to assume I have less memory?

    - by WoLpH
    Firefox has a few different settings that automatically get tuned based on the system RAM. This is all great if you're running nothing besides Firefox, but when you're running half a dozen apps at the same time and they all assume they can take a decent chunk of memory, it just kills the box. Example settings: http://kb.mozillazine.org/Browser.sessionhistory.max_total_viewers http://kb.mozillazine.org/Browser.cache.memory.capacity How can I make Firefox automatically configure all these settings with the assumption that I only have 512MB of memory instead of 4GB (or whatever number, but you get the idea)? I am running Ubuntu 12.04 with Firefox 14. Current workarounds: running a Windows XP virtual machine with 512MB of RAM, which actually runs smoothly and uses less memory (including Windows) than running Firefox (or Chrome, for that matter) standalone; and installing the 32-bit version of Firefox (apt-get install firefox:i386), which brings the base memory usage down to only about 50% of the 64-bit build.

    Read the article

  • Yet another Memory Leak Issue (memory is still gone when program terminates) - C program on SLES

    - by user1426181
    I run a C program on SUSE Linux Enterprise that compresses several thousand large files (between 10MB and 100MB in size), and the program gets slower and slower as it runs (it is multi-threaded with 32 threads on an Intel Sandy Bridge board). When the program completes and is run again, it is still very slow. When I watch the program running, I see that free memory is being depleted while it runs, which you would think is just a classic memory leak problem. But with a normal malloc()/free() mismatch, I would expect all the memory to be returned when the program terminates. Instead, most of the memory is not reclaimed when the program completes. The free or top command shows Mem: 63996M total, 63724M used, 272M free when the program has slowed to a halt, but after termination the free memory only grows back to about 3660M. When the program is rerun, the free memory is quickly used up. top shows that the program, while running, uses at most 4% or so of the memory. I thought it might be a memory fragmentation problem, but I built a small test program that simulates all the memory allocation activity in the program (many randomized aspects were built in - size/quantity), and it always returns all the memory upon completion. So I don't think that's it. Questions:
    Can there be a malloc()/free() mismatch that loses memory permanently, i.e. even after the process completes?
    What else in a C program (not C++) can cause permanent memory loss, i.e. memory that stays gone after the program completes and even the terminal window closes, so that only a reboot brings it back? I've read other posts about files not being closed causing problems, but I don't think I have that problem.
    Is it valid to be looking at top and free for the memory statistics, i.e. do they accurately describe the memory situation? They do seem to correspond to the slowness of the program.
    If the program only shows 4% memory usage, will something like valgrind find this problem?
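
    On the first question, a sketch of what I would expect from a plain C program (this is an illustration, not the actual program): memory obtained with malloc() belongs to the process's address space, and the kernel tears that whole address space down at exit, so an unmatched free() cannot keep memory lost after the process ends. The "used" memory seen afterwards is more likely the kernel's page cache from all the file I/O, which free and top count as used even though it is reclaimable.

        #include <stdlib.h>
        #include <string.h>

        /* Deliberately "leaks" 100 MB and exits without calling free().
         * After the process terminates, the kernel reclaims the entire
         * address space, so this memory does not stay lost; run free(1)
         * before and after to confirm. */
        int main(void) {
            char *p = malloc(100 * 1024 * 1024);
            if (!p) return 1;
            memset(p, 0xAB, 100 * 1024 * 1024);   /* touch it so pages are real  */
            return 0;                              /* no free(): still reclaimed */
        }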

    Read the article

  • How can UIWindow#addSubview cause a memory leak?

    - by Jakub
    Hello, I have started learning to use Instruments, but I cannot figure this out. After I start my application the UI shows up, I do nothing, and after a few seconds a memory leak is detected. When I look at the second leak and double-click the stack frame related to my code, it points to the following line:

        [window addSubview:newPostUIViewController.view];

    from the method:

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            // creating view controller
            newPostUIViewController = [[NewPostUIViewController alloc] initWithNibName:@"NewPostView" bundle:nil];
            newPostUIViewController.title = @"Post it!";
            [window addSubview:newPostUIViewController.view];
            // Override point for customization after application launch
            [window makeKeyAndVisible];
        }

    I wonder how this can be the cause of a leak? I release newPostUIViewController in the dealloc method of the PostItAppDelegate class. Any ideas how this could be explained?

    Read the article

  • stack and heap issue for iPhone memory management

    - by Forrest
    From this post I learned that the Objective-C runtime does not allow objects to be instantiated on the stack, only on the heap; this means you don't have "automatic objects", nor things like auto_ptr to help you manage memory. Someone gave an example in the post "Objective C: Memory Allocation on stack vs. heap": NSString* str = @"hello"; but this NSString is not allocated on the stack either. It feels odd to me that this str is static (who can explain this?). My question is: why is there no heap allocation here, even when mixing C++ together with Objective-C? /////////////////////////////// To clarify my question /////////////////////////////// I am confused, so my questions were not clear. Let me put it this way: 1) All Objective-C objects are allocated on the heap? (I think yes.) 2) In C++ there is a stack for memory, so does an iOS app also have a stack? (I think yes.) 3) If an iOS app uses only Objective-C, what is the stack used for? What kinds of things go on the stack then?
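
    A small sketch in plain C, the layer underneath Objective-C (used here only as an illustration): every thread in an iOS app has a stack that holds local variables, arguments, and return addresses, while anything created dynamically (malloc() in C, +alloc in Objective-C) lives on the heap.

        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            int on_the_stack = 42;                           /* automatic storage: stack frame */
            int *on_the_heap = malloc(sizeof *on_the_heap);  /* dynamic storage: heap          */
            if (!on_the_heap) return 1;
            *on_the_heap = 42;

            printf("stack: %p  heap: %p\n", (void *)&on_the_stack, (void *)on_the_heap);

            free(on_the_heap);                  /* heap memory must be released explicitly */
            return 0;                           /* the stack frame vanishes on return      */
        }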

    Read the article

  • Write magic bytes to the stack to monitor its usage

    - by tkarls
    I have a problem on an embedded device that I think might be related to a stack overflow. To test this, I was planning to fill the stack with magic bytes and then periodically check how many of the magic bytes are still intact to see whether the stack has overflowed. But I can't get the routine that marks the stack to work; the application crashes instantly. This is what I do right at the entry point of the program:

        // fill most of the stack with magic bytes
        int stackvar = 0;
        int stackAddr = int(&stackvar);
        int stackAddrEnd = stackAddr - 25000;
        BYTE* stackEnd = (BYTE*) stackAddrEnd;
        for(int i = 0; i < 25000; ++i) {
            *(stackEnd + i) = 0xFA;
        }

    Please note that the allocated stack is larger than 25k, so I'm counting on some stack space already being in use at this point. Also note that the stack grows from higher to lower addresses, which is why I'm filling from the bottom up. But as I said, this crashes. I must be missing something here.
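
    One hedged workaround, assuming the target has POSIX threads (the worker function and the 64 KB size are placeholders): instead of guessing how far below the current frame the stack really extends, give a worker thread a stack buffer you allocated yourself, paint that buffer before the thread starts, and scan it afterwards for the high-water mark.

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define STACK_SIZE (64 * 1024)
        #define MAGIC      0xFA

        static void *work(void *arg) {
            char scratch[2000];                 /* some stack usage to measure */
            memset(scratch, 1, sizeof scratch);
            return arg;
        }

        int main(void) {
            unsigned char *stack_buf;
            if (posix_memalign((void **)&stack_buf, 4096, STACK_SIZE) != 0) return 1;
            memset(stack_buf, MAGIC, STACK_SIZE);          /* paint before any use */

            pthread_attr_t attr;
            pthread_attr_init(&attr);
            pthread_attr_setstack(&attr, stack_buf, STACK_SIZE);

            pthread_t tid;
            if (pthread_create(&tid, &attr, work, NULL) != 0) return 1;
            pthread_join(tid, NULL);

            /* On a downward-growing stack the frames start at the top of the
             * buffer, so magic bytes still intact at the bottom are unused headroom. */
            size_t untouched = 0;
            while (untouched < STACK_SIZE && stack_buf[untouched] == MAGIC)
                untouched++;
            printf("stack used: about %zu of %d bytes\n",
                   (size_t)STACK_SIZE - untouched, STACK_SIZE);

            free(stack_buf);
            return 0;
        }

    The same idea works without threads if the toolchain's linker script exposes the stack bounds, but those symbol names are vendor-specific.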

    Read the article

  • What kind of memory can be categorized as Modified memory in Resource Monitor?

    - by Kavin
    In Windows 7 and Windows Server 2008 R2 there is a new Resource Monitor that is very useful and powerful for monitoring the system. In the Memory section I see a category called Modified (orange). The official description is: "Memory whose contents must be written to disk before it can be used for another purpose." But I am still confused. What kinds of memory count as Modified? In which cases can we say that memory is Modified? Can anyone give me a specific example? Is the following guess correct? When a program wants to write something to disk, it actually writes the content to an I/O buffer, which is in memory. After the OS flushes this area of memory to disk, does it move from Modified to Standby?
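
    A hedged illustration of that guess using the Win32 file-mapping API (file name and size here are arbitrary): pages written through a mapped view become dirty, and when they are removed from the process working set they sit on the Modified list until the memory manager writes them to the backing file, after which the clean copies can move to the Standby list.

        #include <windows.h>
        #include <string.h>

        /* Dirty a memory-mapped file, then ask for it to be flushed. The written
         * pages are "modified" until written back to example.dat. */
        int main(void) {
            const DWORD size = 16 * 1024 * 1024;
            HANDLE file = CreateFileA("example.dat", GENERIC_READ | GENERIC_WRITE,
                                      0, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
            if (file == INVALID_HANDLE_VALUE) return 1;

            HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READWRITE, 0, size, NULL);
            if (!mapping) return 1;

            char *view = (char *)MapViewOfFile(mapping, FILE_MAP_WRITE, 0, 0, size);
            if (!view) return 1;

            memset(view, 0x5A, size);        /* dirties 16 MB of mapped pages   */
            FlushViewOfFile(view, 0);        /* asks the OS to write them back  */

            UnmapViewOfFile(view);
            CloseHandle(mapping);
            CloseHandle(file);
            return 0;
        }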

    Read the article

  • Stack & heap understanding question

    - by Petr
    Hi, I would really appreciate it if someone could tell me whether I understand this correctly:

        class X
        {
            void Test()
            {
                A a1 = new A();  // reference on the stack, object value on the heap
                a1.VarA = 5;     // on the stack - value type
                A a2 = new A();  // reference on the stack, object value on the heap
                a2.VarA = 10;    // on the stack - value type
                a1 = a2;         // on the stack, the target of the a1 reference is updated
                                 // to a2's value on the heap
                // also, both a1 and a2 references are on the stack, while their "object"
                // values are on the heap. But what about the VarA variable - is it still
                // a pure value type?
            }
        }

        class A
        {
            int VarA;
        }
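
    Roughly the same situation sketched in C, where the reference/heap split is explicit (this is only an analogy for the managed case, not C# itself): the pointer variables live in the stack frame, the objects they point to live on the heap, and the int field travels with the heap object, not with the pointer.

        #include <stdio.h>
        #include <stdlib.h>

        struct A { int VarA; };                /* analogue of class A */

        int main(void) {
            struct A *a1 = malloc(sizeof *a1); /* a1 itself: stack; *a1: heap          */
            a1->VarA = 5;                      /* field stored inside the heap object  */
            struct A *a2 = malloc(sizeof *a2);
            a2->VarA = 10;

            struct A *old = a1;                /* keep the first object reachable       */
            a1 = a2;                           /* only the pointer on the stack changes */

            printf("a1->VarA = %d, old->VarA = %d\n", a1->VarA, old->VarA); /* 10, 5 */

            free(old);
            free(a2);
            return 0;
        }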

    Read the article

  • Problem with memory leaks

    - by user191723
    Sorry, I'm having difficulty formatting the code to appear correctly here. I am trying to understand the readings I get from running Instruments on my app, which tell me I am leaking memory. Quite a few leaks get reported from inside Foundation, AVFoundation, CoreGraphics, etc. that I assume I have no control over and so should ignore, such as "Malloc 32 bytes: 96 bytes, AVFoundation, prepareToRecordQueue" or "Malloc 128 bytes: 128 bytes, CoreGraphics, open_handle_to_dylib_path". Am I correct in assuming these are something the system will resolve? But then there are leaks reported that I believe I am responsible for. This call is reported as leaking 2.31KB:

        [self createAVAudioRecorder:frameAudioFile];

    immediately followed by this:

        -(NSError*) createAVAudioRecorder: (NSString *)fileName
        {
            // flush recorder to start afresh
            [audioRecorder release];
            audioRecorder = nil;

            // delete existing file to ensure we have clean start
            [self deleteFile: fileName];

            VariableStore *singleton = [VariableStore sharedInstance];

            // get full path to target file to create
            NSString *destinationString = [singleton.docsPath stringByAppendingPathComponent: fileName];
            NSURL *destinationURL = [NSURL fileURLWithPath: destinationString];

            // configure the recording settings
            NSMutableDictionary *recordSettings = [[NSMutableDictionary alloc] initWithCapacity:6]; //****** LEAKING 384 BYTES
            [recordSettings setObject:[NSNumber numberWithInt:kAudioFormatLinearPCM] forKey: AVFormatIDKey]; //***** LEAKING 32 BYTES
            float sampleRate = 44100.0;
            [recordSettings setObject:[NSNumber numberWithFloat: sampleRate] forKey: AVSampleRateKey]; //***** LEAKING 48 BYTES
            [recordSettings setObject:[NSNumber numberWithInt:2] forKey:AVNumberOfChannelsKey];
            int bitDepth = 16;
            [recordSettings setObject: [NSNumber numberWithInt:bitDepth] forKey:AVLinearPCMBitDepthKey]; //***** LEAKING 48 BYTES
            [recordSettings setObject:[NSNumber numberWithBool:YES] forKey:AVLinearPCMIsBigEndianKey];
            [recordSettings setObject:[NSNumber numberWithBool: NO] forKey:AVLinearPCMIsFloatKey];

            NSError *recorderSetupError = nil;

            // create the new recorder with target file
            audioRecorder = [[AVAudioRecorder alloc] initWithURL: destinationURL settings: recordSettings error: &recorderSetupError]; //***** LEAKING 1.31KB
            [recordSettings release];
            recordSettings = nil;

            // check for errors
            if (recorderSetupError)
            {
                UIAlertView *alert = [[UIAlertView alloc] initWithTitle: @"Can't record"
                                                                message: [recorderSetupError localizedDescription]
                                                               delegate: nil
                                                      cancelButtonTitle: @"OK"
                                                      otherButtonTitles: nil];
                [alert show];
                [alert release];
                alert = nil;
                return recorderSetupError;
            }

            [audioRecorder prepareToRecord]; //***** LEAKING 512 BYTES
            audioRecorder.delegate = self;
            return recorderSetupError;
        }

    I do not understand why there are leaks here, as I release audioRecorder at the start and set it to nil, and I release recordSettings and set it to nil. Can anyone enlighten me please? Thanks

    Read the article

  • Understanding the memory consumption on iPhone

    - by zoul
    Hello! I am working on a 2D iPhone game using OpenGL ES and I keep hitting the 24 MB memory limit – my application keeps crashing with the error code 101. I tried really hard to find where the memory goes, but the numbers in Instruments are still much bigger than what I would expect. I ran the application with the Memory Monitor, Object Alloc, Leaks and OpenGL ES instruments. When the application gets loaded, free physical memory drops from 37 MB to 23 MB, the Object Alloc settles around 7 MB, Leaks show two or three leaks a few bytes in size, the Gart Object Size is about 5 MB and Memory Monitor says the application takes up about 14 MB of real memory. I am perplexed as to where the memory went – when I dig into the Object Allocations, most of the memory is in the textures, exactly as I would expect. But both my own texture allocation counter and the Gart Object Size agree that the textures should take up somewhere around 5 MB. I am not aware of allocating anything else that would be worth mentioning, and the Object Alloc agrees. Where does the memory go? (I would be glad to supply more details if this is not enough.)

    Update: I really tried to find where I could be allocating so much memory, but with no results. What drives me wild is the difference between the Object Allocations (~7 MB) and real memory usage as shown by Memory Monitor (~14 MB). Even if there were huge leaks or huge chunks of memory I forget about, they should still show up in the Object Allocations, shouldn't they? I've already tried the usual suspects, i.e. UIImage with its caching, but that did not help. Is there a way to track memory usage "debugger-style", line by line, watching each statement's impact on memory usage?

    What I have found so far:
    I really am using that much memory. It is not easy to measure the real memory consumption, but after a lot of counting I think the memory consumption is really that high. My fault.
    I found no easy way to measure the memory used. The Memory Monitor numbers are accurate (these are the numbers that really matter), but the Memory Monitor can't tell you where exactly the memory goes. The Object Alloc tool is almost useless for tracking the real memory usage. When I create a texture, the allocated memory counter goes up for a while (reading the texture into memory), then drops (passing the texture data to OpenGL, freeing). This is OK, but does not always happen – sometimes the memory usage stays high even after the texture has been passed on to OpenGL and freed from "my" memory. This means that the total amount of memory allocated as shown by the Object Alloc tool is smaller than the real total memory consumption, but bigger than the real consumption minus textures (real – textures < object alloc < real). Go figure.
    I misread the Programming Guide. The memory limit of 24 MB applies to textures and surfaces, not the whole application. The actual red line lies a bit further out, but I could not find any hard numbers. The consensus is that 25–30 MB is the ceiling.
    When the system gets short on memory, it starts sending the memory warning. I have almost nothing to free, but other applications do release some memory back to the system, especially Safari (which seems to be caching websites). When the free memory as shown in the Memory Monitor goes to zero, the system starts killing.
    I had to bite the bullet and rewrite some parts of the code to be more efficient on memory, but I am probably still pushing it.

    Read the article

  • Decreasing cached memory and increasing Free memory in RAM

    - by Greenhorn
    Hi, I'm using a Windows 2007 server with a 64-bit OS. I've uploaded a snapshot of my Task Manager with a minimum of processes running. It shows: Total memory 8190 MB, Cached memory 4315 MB, Free 3402 MB. So effectively I get only 3402 MB of the total RAM to use. My question: more than half is used as cached memory, so is there any way I can decrease this cached memory and in turn increase my free memory? I need to do this because my application requires at least 5 GB of RAM and it crashes when run on this system. Please give me a solution for this. Thanks in advance.

    Read the article

  • Why is Available Physical Memory (dwAvailPhys) > Available Virtual Memory (dwAvailVirtual) in call GlobalMemoryStatus?

    - by Andrew
    I am playing with an MSDN sample to do memory stress testing (see: http://msdn.microsoft.com/en-us/magazine/cc163613.aspx) and an extension of that tool that specifically eats physical memory (see http://www.donationcoder.com/Forums/bb/index.php?topic=14895.0;prev_next=next). I am obviously confused, though, about the differences between virtual and physical memory. I thought each process has 2 GB of virtual memory (although I have also read 1.5 GB because of "overhead"). My understanding was that some, all, or none of this virtual memory could be backed by physical memory, and that the amount of physical memory used by a process can change over time (memory can be swapped out to disk, etc.). I further thought that, in general, when you allocate memory, the operating system could use physical memory or virtual memory. From this, I concluded that dwAvailVirtual should always be equal to or greater than dwAvailPhys in the call GlobalMemoryStatus. However, I often (always?) see the opposite. What am I missing? I apologize in advance if my question is not well formed; I'm still trying to get my head around the whole memory management system in Windows. Tutorials/explanations/book recommendations are most welcome! Andrew
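
    A small sketch of what the two counters actually measure, using GlobalMemoryStatusEx (the newer form of the same call, whose 64-bit fields don't saturate above 4 GB): ullAvailVirtual is the unused address space of the calling process, capped at roughly 2 GB for a 32-bit process, while ullAvailPhys is free RAM for the whole machine, so on a box with plenty of free RAM the physical figure can legitimately be the larger one.

        #include <windows.h>
        #include <stdio.h>

        /* Prints the two counters side by side: per-process address space vs.
         * machine-wide free RAM. */
        int main(void) {
            MEMORYSTATUSEX ms;
            ms.dwLength = sizeof ms;            /* must be set before the call */
            if (!GlobalMemoryStatusEx(&ms)) return 1;

            printf("Available physical memory  : %llu MB\n",
                   (unsigned long long)(ms.ullAvailPhys / (1024 * 1024)));
            printf("Available virtual (process): %llu MB\n",
                   (unsigned long long)(ms.ullAvailVirtual / (1024 * 1024)));
            return 0;
        }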

    Read the article

  • Can you force a crash if a write occurs to a given memory location with finer than page granularity?

    - by Joseph Garvin
    I'm writing a program that for performance reasons uses shared memory (alternatives have been evaluated, and they are not fast enough for my task, so suggestions to not use it will be downvoted). In the shared memory region I am writing many structs of a fixed size. There is one program responsible for writing the structs into shared memory, and many clients that read from it. However, there is one member of each struct that clients need to write to (a reference count, which they will update atomically). All of the other members should be read only to the clients. Because clients need to change that one member, they can't map the shared memory region as read only. But they shouldn't be tinkering with the other members either, and since these programs are written in C++, memory corruption is possible. Ideally, it should be as difficult as possible for one client to crash another. I'm only worried about buggy clients, not malicious ones, so imperfect solutions are allowed. I can try to stop clients from overwriting by declaring the members in the header they use as const, but that won't prevent memory corruption (buffer overflows, bad casts, etc.) from overwriting. I can insert canaries, but then I have to constantly pay the cost of checking them. Instead of storing the reference count member directly, I could store a pointer to the actual data in a separate mapped write only page, while keeping the structs in read only mapped pages. This will work, the OS will force my application to crash if I try to write to the pointed to data, but indirect storage can be undesirable when trying to write lock free algorithms, because needing to follow another level of indirection can change whether something can be done atomically. Is there any way to mark smaller areas of memory such that writing them will cause your app to blow up? Some platforms have hardware watchpoints, and maybe I could activate one of those with inline assembly, but I'd be limited to only 4 at a time on 32-bit x86 and each one could only cover part of the struct because they're limited to 4 bytes. It'd also make my program painful to debug ;)
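
    A minimal single-process sketch of the "separate writable page" idea described above, assuming a POSIX system (the struct layout and names are illustrative; a real version would place these pages in the shared mapping created with shm_open/mmap): protection really is per page, so the read-only structs and the writable reference counts have to live on different pages for the OS to enforce the split.

        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        /* Page 0 holds the struct data and is made read-only; page 1 holds the
         * reference counts and stays writable. Writing the refcount is fine;
         * uncommenting the last write makes the kernel deliver SIGSEGV, i.e.
         * the buggy client crashes instead of corrupting shared data. */
        struct record { int id; char payload[60]; };

        int main(void) {
            long page = sysconf(_SC_PAGESIZE);
            unsigned char *mem = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
                                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (mem == MAP_FAILED) return 1;

            struct record *rec = (struct record *)mem;        /* page 0: data */
            int *refcount = (int *)(mem + page);              /* page 1: refs */
            rec->id = 1;
            strcpy(rec->payload, "immutable to clients");

            mprotect(mem, page, PROT_READ);   /* from now on the data page is read-only */

            refcount[0] += 1;                 /* allowed: separate writable page */
            printf("record %d, refcount %d\n", rec->id, refcount[0]);

            /* rec->id = 2; */                /* would fault: write to RO page   */

            munmap(mem, 2 * page);
            return 0;
        }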

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014 Microsoft has introduced a new database engine component called In-Memory OLTP, aka project "Hekaton", which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP helps us create memory-optimized tables, which in turn offer significant performance improvements for a typical OLTP workload. The main objective of a memory-optimized table is to ensure that highly transactional tables can live in memory and remain in memory forever without losing a single record. The most significant part is that it still supports the majority of our Transact-SQL statements. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking. In-Memory OLTP alleviates the issue of locking by using a new type of multi-version optimistic concurrency control. It also substantially reduces waiting for log writes by generating far less log data and needing fewer log writes.

    Points to remember
    Memory-optimized tables refer to tables using the new data structures and keywords added as part of In-Memory OLTP.
    Disk-based tables refer to the normal tables we have always created in SQL Server since its inception. These tables use fixed-size 8 KB pages that need to be read from and written to disk as a unit.
    Natively compiled stored procedures refer to a new object type supported by the In-Memory OLTP engine, which converts them into machine code; this can further improve data access performance for memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they can't be used to reference any disk-based table.
    Interpreted Transact-SQL stored procedures are what SQL Server has always used.
    Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables.
    Interop refers to interpreted Transact-SQL that references memory-optimized tables.

    Using In-Memory OLTP
    The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014, hence they are not available with 32-bit editions.

    Creating Databases
    Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA.
    Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables:

        CREATE DATABASE InMemoryDB
        ON PRIMARY(NAME = [InMemoryDB_data],
            FILENAME = 'D:\data\InMemoryDB_data.mdf', size=500MB),
        FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA
            (NAME = [InMemoryDB_mod_dir], FILENAME = 'S:\data\InMemoryDB_mod_dir'),
            (NAME = [InMemoryDB_mod_dir], FILENAME = 'R:\data\InMemoryDB_mod_dir')
        LOG ON (name = [SampleDB_log], Filename='L:\log\InMemoryDB_log.ldf', size=500MB)
        COLLATE Latin1_General_100_BIN2;

    The example code above creates files on three different drives (D:, S: and R:) for the data files and in-memory storage, so if you would like to run this code kindly change the drive and folder locations as per your convenience. Also notice that a binary Windows (non-SQL) collation was specified; BIN2 collation is the only collation supported at this point for any indexes on memory-optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the command below to achieve the same:

        ALTER DATABASE AdventureWorks2012
        ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA;
        GO
        ALTER DATABASE AdventureWorks2012
        ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod')
        TO FILEGROUP hekaton_mod;
        GO

    Creating Tables
    There is no major syntactical difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially, any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE query example.

    DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY)
    A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY=SCHEMA_ONLY will not persist its data to disk, which means data durability is compromised, whereas DURABILITY=SCHEMA_AND_DATA ensures that data is persisted along with the schema.

    Indexing Memory-Optimized Tables
    A memory-optimized table created with DURABILITY=SCHEMA_AND_DATA must always have an index, and this can be achieved by declaring a PRIMARY KEY constraint at the time of creating the table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified:

        CREATE TABLE Mem_Table
        (
            [Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
            [City] VARCHAR(32) NULL,
            [State_Province] VARCHAR(32) NULL,
            [LastModified] DATETIME NOT NULL,
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    Now, as you can see in the query example above, we have used the clause MEMORY_OPTIMIZED = ON to make sure it is treated as a memory-optimized table and not just a normal table, and also used the DURABILITY clause SCHEMA_AND_DATA, which means it will persist data along with metadata. You can also see that this table has a PRIMARY KEY declared up front, which is a mandatory clause for memory-optimized tables. We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus more on row and index storage for memory-optimized tables, so stay tuned for that as well. Now that we have covered the basics of memory-optimized tables and understood the key things to remember while using them, let's explore some examples to understand the performance gains from memory-optimized tables.
    I will be using the database which I created earlier in this article, i.e. InMemoryDB, in the demo exercise below.

        USE InMemoryDB
        GO
        -- Creating a disk based table
        CREATE TABLE dbo.Disktable
        (
            Id INT IDENTITY,
            Name CHAR(40)
        )
        GO
        CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id)
        GO
        -- Creating a memory optimized table with similar structure and DURABILITY = SCHEMA_AND_DATA
        CREATE TABLE dbo.Memorytable_durable
        (
            Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000),
            Name CHAR(40)
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
        GO
        -- Creating an another memory optimized table with similar structure but DURABILITY = SCHEMA_Only
        CREATE TABLE dbo.Memorytable_nondurable
        (
            Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000),
            Name CHAR(40)
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_only)
        GO
        -- Now insert 100000 records in dbo.Disktable and observe the Time Taken
        DECLARE @i_t bigint
        SET @i_t =1
        WHILE @i_t<= 100000
        BEGIN
            INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR,@i_t))
            SET @i_t+=1
        END
        -- Do the same inserts for Memory table dbo.Memorytable_durable and observe the Time Taken
        DECLARE @i_t bigint
        SET @i_t =1
        WHILE @i_t<= 100000
        BEGIN
            INSERT INTO dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t))
            SET @i_t+=1
        END
        -- Now finally do the same inserts for Memory table dbo.Memorytable_nondurable and observe the Time Taken
        DECLARE @i_t bigint
        SET @i_t =1
        WHILE @i_t<= 100000
        BEGIN
            INSERT INTO dbo.Memorytable_nondurable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t))
            SET @i_t+=1
        END

    The three inserts above took 1.20 minutes, 54 seconds, and 2 seconds respectively to insert 100,000 records on my machine with 8 GB of RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and a memory-optimized table with durability SCHEMA_ONLY is even faster, as it does not bother persisting its data to disk, which makes it supremely fast. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now I leave the decision on using memory-optimized tables to you; I hope you like this article and that it helped you understand the fundamentals of In-Memory OLTP. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Koenig

    Read the article

  • sizes of RAM, of virtual memory and of swap for 32-bit OS

    - by Tim
    If I understand correctly, a 32-bit OS (Ubuntu) can only address 4 GiB of memory, so if the RAM is larger than 4 GiB, only 4 GiB of it will be used and the rest is wasted. I am now confused about how this situation for RAM compares with virtual memory and with swap. With virtual memory being swap + RAM, if the size of the virtual memory exceeds 4 GiB, will the excess be wasted on a 32-bit OS? If I now have to choose the size of my swap partition, is the fact that a 32-bit OS can only address 4 GiB a factor to consider? Does the size of the swap have to be chosen with respect to the 4 GiB addressing limitation? Will any swap beyond 4 GiB always be wasted? Is virtual memory equal to RAM plus swap, or can virtual memory use space on the hard drive outside the swap partition? Thanks and regards!

    Read the article

  • Does scheduled app pool recycling in IIS7 help the server conserve memory?

    - by user29266
    Hello, I have a VPS (IIS7 with Win 2008). It's got 40 websites and a SQL Server 2008 instance powering them, with only 2 GB of RAM. None of the sites are mission critical; they are all just demos. I often have RAM issues on the server because each site does caching and generally uses a lot of memory. Would it make sense to set the application pools to recycle every 3 hours? I'm sure this would free up any memory leaks or processes left "hanging". Are there any other tips on this? Thank you very much! Aron

    Read the article

  • Is there any limit on stack memory?

    - by Vikas
    I was going through one of the threads here. A program crashed because it had declared an array of 10^6 elements locally inside a function. The reason given was that a memory allocation failure on the stack leads to a crash; when the same array was declared globally, it worked fine (being off the stack saved it). Now, for the moment, let us suppose that the stack grows downward and the heap upward, so we have: ---STACK--- ---HEAP----. I believed that if an allocation fails on the stack, it must fail on the heap too. So my question is: is there any limit on the stack size (such that crossing the limit causes the program to crash)? Or am I missing something?
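
    A hedged illustration for POSIX/Linux systems: the stack has its own per-process size limit, independent of how much memory the heap can obtain, which is why a huge local array can crash while the same array declared globally (or allocated with malloc) works. The limit can be queried like this:

        #include <stdio.h>
        #include <sys/resource.h>

        /* Prints the soft stack limit (what `ulimit -s` shows, often 8 MB on
         * Linux). A local array larger than this overflows the stack, while a
         * global array or a malloc'd block is not bound by it. */
        int main(void) {
            struct rlimit rl;
            if (getrlimit(RLIMIT_STACK, &rl) != 0) {
                perror("getrlimit");
                return 1;
            }
            if (rl.rlim_cur == RLIM_INFINITY)
                printf("stack size: unlimited\n");
            else
                printf("stack size limit: %llu KB\n",
                       (unsigned long long)rl.rlim_cur / 1024);
            return 0;
        }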

    Read the article

  • Loads of memory in "standby" on Windows Server 2008 R2

    - by Jaap
    In our SharePoint farm, our web front-end servers all have loads of memory in "standby" mode, meaning very little is available for our IIS worker process. We have 32 GB of RAM in each of the boxes, and standby memory will creep up to about 28 GB, whereas the IIS worker process only seems to be using about 2 GB. Also, we've seen the machine use the swap file extensively while this memory was in standby, so I am starting to think that this memory in standby mode is stopping IIS from using it, forcing it to swap to disk and causing more performance problems. I used Sysinternals RamMap to identify what is being kept in memory, and it was able to tell me that almost everything in standby memory is of type "Mapped File". When I sort the files listed under the file summary tab in RamMap by file size, the largest files (around a few hundred MB each) are IIS log files and SharePoint log files. I would like to understand which process is loading these files into standby memory and why they are not being released. When I do an iisreset, it does not release the memory. Any ideas? Thanks!

    Read the article

  • Memory Usage of SQL Server

    - by Ashish
    The SQL Server instance on my server is using almost all of the memory available on the physical server. Say I have 8 GB of RAM; then SQL Server is using 7.8 GB of it. I have read articles and many similar questions on this forum, and I understand that memory is reserved and that SQL Server is using it. But I have two identical servers with two SQL Servers, so why is this happening on one SQL instance and not the other? Also, when I run DBCC MEMORYSTATUS it shows: VM Reserved 8282008, VM Committed 537936. From this we know that SQL Server reserved the whole 8 GB of memory, but why does VM Committed keep increasing? What I understand about VM Committed is: "This value shows the overall amount of VAS that SQL Server has committed. VAS that is committed has been associated with physical memory." So this is the memory SQL Server has committed (and, as I understand it, the physical memory the instance is actually using). I would like to know the reason behind this ever-increasing VM Committed memory on one server and not the other. Thanks in advance.

    Read the article

  • OS memory allocation addresses

    - by user1777914
    A quick, curious question: are memory allocation addresses chosen by the language compiler, or is it the OS that chooses the addresses for the memory requested? This comes from a doubt about virtual memory, which can be quickly summarized as "let the process think it owns all the memory"; but what happens on 64-bit architectures, where only 48 bits are used for memory addresses, if the process wants a higher address? Let's say you do int *a = malloc(sizeof(int)); and the allocator has no memory left over from a previous system call, so it needs to ask the OS for more memory. Is the compiler the one that determines the memory address at which to allocate this variable, or does it just ask the OS for memory and place the allocation at the address the OS returns?
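
    A small sketch of how the roles divide, assuming a typical Linux process (the variable names are just illustrative): the compiler and linker only decide offsets within segments and stack frames, while the kernel chooses the actual virtual addresses when it maps the segments and when malloc's allocator asks it for more pages via brk()/mmap(); that is also why the printed addresses change between runs under ASLR.

        #include <stdio.h>
        #include <stdlib.h>

        int global_var;                        /* placed by compiler/linker in the data segment */

        int main(void) {
            int local_var;                     /* offset in the stack frame chosen by compiler  */
            int *heap_var = malloc(sizeof *heap_var);  /* address comes from the allocator,     */
                                                       /* which gets pages from the kernel      */
                                                       /* via brk()/mmap() when it runs out     */
            printf("global: %p\n", (void *)&global_var);
            printf("stack : %p\n", (void *)&local_var);
            printf("heap  : %p\n", (void *)heap_var);  /* run twice: ASLR shifts these around   */

            free(heap_var);
            return 0;
        }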

    Read the article

  • Virtual memory on Linux doesn't add up?

    - by Brendan Long
    I was looking at System Monitor on Linux and noticed that Firefox is using 441 MB of memory, and several other applications are using 274, 257, 232, etc. (adding up to over 3 GB of virtual memory). So I switch over to the Resources tab, and it says I'm using 462 MB of memory and not touching swap. I'm confused. What does the virtual memory amount mean, then, if the programs aren't actually using it? I was thinking it might be memory they've requested but aren't using, but how would the OS know that? I can't think of any "I might need this much memory in the future" function.
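
    One hedged illustration of where such numbers come from (Linux, with an arbitrary 1 GB figure): a process can map a large range of address space without ever touching it, and that range counts toward its virtual size even though it adds almost nothing to the resident memory the Resources tab reports.

        #include <stdio.h>
        #include <sys/mman.h>
        #include <unistd.h>

        /* Reserves 1 GB of address space without touching it. Check the process
         * with `ps -o vsz,rss -p <pid>`: VSZ jumps by ~1 GB, RSS barely moves,
         * because untouched anonymous pages consume no physical memory. */
        int main(void) {
            size_t one_gb = 1024UL * 1024 * 1024;
            void *reservation = mmap(NULL, one_gb, PROT_READ | PROT_WRITE,
                                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (reservation == MAP_FAILED) return 1;

            printf("pid %d reserved 1 GB at %p; press Enter to exit\n",
                   (int)getpid(), reservation);
            getchar();                      /* pause so the numbers can be inspected */

            munmap(reservation, one_gb);
            return 0;
        }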

    Read the article

  • Free / Cached / Available memory on Linux

    - by pkoraca
    I have read that Linux uses free memory for caching to make the system faster. However, both Nagios and the Paessler PRTG monitoring system show me that my memory usage is critical. I could change the Nagios mem_usage script to sum free and cached memory, but would that be correct? I doubt that they simply misunderstood Linux memory usage. Let's say I have 8 GB of RAM: 5 GB are used, 2 GB are cached, and I have 1 GB of free memory. Should the real available memory be free + cached (3 GB)? If some new application needed an additional 3 GB of RAM, could it take everything from cache and free without using swap, or is there a minimum that should stay in cache? Real example:

        $ cat /proc/meminfo
        MemTotal:        5984256 kB
        MemFree:          137052 kB
        Buffers:          140484 kB
        Cached:          3439616 kB
        SwapCached:          244 kB
        Active:          3148824 kB
        Inactive:        2341768 kB
        ...

    My monitoring tools show that I have 137 MB of free RAM; however, I have ~3.5 GB in cache. Thanks!
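
    A rough sketch of the adjustment such a check could make, in C only for illustration (field names are from /proc/meminfo; on older kernels without a MemAvailable field the usual approximation is MemFree + Buffers + Cached, keeping in mind the kernel will not hand back absolutely all of the cache):

        #include <stdio.h>

        /* Sums MemFree + Buffers + Cached from /proc/meminfo as a rough
         * "available" figure: the page cache counts because the kernel can
         * shrink it on demand when applications ask for memory. */
        int main(void) {
            FILE *f = fopen("/proc/meminfo", "r");
            if (!f) { perror("/proc/meminfo"); return 1; }

            char line[256];
            long long kb, available = 0;
            while (fgets(line, sizeof line, f)) {
                if (sscanf(line, "MemFree: %lld", &kb) == 1 ||
                    sscanf(line, "Buffers: %lld", &kb) == 1 ||
                    sscanf(line, "Cached: %lld", &kb) == 1)
                    available += kb;
            }
            fclose(f);

            printf("roughly available: %lld MB\n", available / 1024);
            return 0;
        }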

    Read the article
