Search Results

Search found 17317 results on 693 pages for 'memory upgrade'.

Page 26/693 | < Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >

  • Plan Caching and Query Memory Part I – When not to use stored procedure or other plan caching mechanisms like sp_executesql or prepared statement

    - by sqlworkshops
      The most common performance mistake SQL Server developers make concerns query memory: SQL Server estimates the memory requirement for a query at compilation time. This mechanism is fine for dynamic queries that need memory, but not for queries that cache the plan. With dynamic queries the plan is not reused for different sets of parameter values / predicates, so a different amount of memory can be estimated for each set of parameter values / predicates. Common memory-consuming queries are those that perform Sort and Hash Match operations such as Hash Join, Hash Aggregation or Hash Union. This article covers Sort with examples. It is recommended to read Plan Caching and Query Memory Part II after this article, which covers Hash Match operations.

      When the plan is cached by using a stored procedure or another plan caching mechanism such as sp_executesql or a prepared statement, SQL Server estimates the memory requirement based on the first set of execution parameters. Later, when the same stored procedure is called with a different set of parameter values, the same amount of memory is used to execute it. This can lead to underestimation or overestimation of memory on plan reuse. Overestimation of memory might not be a noticeable issue for Sort operations, but underestimation of memory will lead to a spill to tempdb, resulting in poor performance.

      This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to a spill to tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can itself lead to poor performance.

      To read additional articles I wrote click here.

      In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of a spill to tempdb, unless the memory requirement of the stored procedure does not change significantly with the predicates.

      The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

      Enough theory, let's see an example where we initially sort 1 month of data and then use the same stored procedure to sort 6 months of data.

      Let's create a stored procedure that sorts customers by name within a certain date range.

        --Example provided by www.sqlworkshops.com
        create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
        begin
              declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
              select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                    where c.CreationDate between @CreationDateFrom and @CreationDateTo
                    order by c.CustomerName
              option (maxdop 1)
        end
        go

      Let's execute the stored procedure initially with a 1 month date range.

        set statistics time on
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-31'
        go

      The stored procedure took 48 ms to complete. The stored procedure was granted 6656 KB based on 43199.9 rows being estimated.
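      (Not part of the original walkthrough: the grant figures above come from the actual execution plan / Profiler. As a minimal sketch, assuming SQL Server 2005 or later, the grant can also be watched from another session with the memory grants DMV while the procedure runs.)

        --Illustrative sketch only: observe memory grants of currently executing requests
        select session_id, requested_memory_kb, granted_memory_kb, ideal_memory_kb, used_memory_kb
        from sys.dm_exec_query_memory_grants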
      The estimated number of rows, 43199.9, is similar to the actual number of rows, 43200, so the memory estimate should be fine. There were no Sort Warnings in SQL Profiler.

      Now let's execute the stored procedure with a 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

      The stored procedure took 679 ms to complete. The stored procedure was granted 6656 KB based on 43199.9 rows being estimated. The estimated number of rows, 43199.9, is far from the actual number of rows, 259200, because the estimation is based on the first set of parameter values supplied to the stored procedure, which covered 1 month in our case. This underestimation leads to a sort spill to tempdb, resulting in poor performance. There were Sort Warnings in SQL Profiler.

      To monitor the amount of data written to and read from tempdb, one can execute select num_of_bytes_written, num_of_bytes_read from sys.dm_io_virtual_file_stats(2, NULL) before and after the stored procedure execution; for additional information refer to the webcast: www.sqlworkshops.com/webcasts.

      Let's recompile the stored procedure and then execute it first with the 6 month date range. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle), because of the locking issues involved with sp_recompile; refer to our webcasts for further details.

        exec sp_recompile CustomersByCreationDate
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

      Now the stored procedure took only 294 ms instead of 679 ms. The stored procedure was granted 26832 KB of memory. The estimated number of rows, 259200, matches the actual number of rows, 259200. The better performance of this execution is due to the better memory estimate, which avoids a sort spill to tempdb. There were no Sort Warnings in SQL Profiler.

      Now let's execute the stored procedure with the 1 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-31'
        go

      The stored procedure took 49 ms to complete, similar to our very first execution. This time the stored procedure was granted more memory (26832 KB) than necessary (6656 KB), based on the 6 month estimate (259200 rows) instead of the 1 month estimate (43199.9 rows), because the estimation is based on the first set of parameter values supplied to the stored procedure, which covered 6 months in this case. This overestimation did not affect performance here, but it might affect the performance of other concurrent queries requiring memory, and hence overestimation is not recommended. Overestimation can also hurt the performance of Hash Match operations; refer to Plan Caching and Query Memory Part II for further details.

      Let's recompile the stored procedure and then execute it first with a 2 day date range.

        exec sp_recompile CustomersByCreationDate
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-02'
        go

      The stored procedure took 1 ms. The stored procedure was granted 1024 KB based on 1440 rows being estimated. There were no Sort Warnings in SQL Profiler.

      Now let's execute the stored procedure with the 6 month date range.
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

      The stored procedure took 955 ms to complete, far higher than the 679 ms or 294 ms we noticed before. The stored procedure was granted 1024 KB based on 1440 rows being estimated, but we noticed earlier that this stored procedure needs 26832 KB of memory to execute the 6 month date range optimally without spilling to tempdb. This is a clear underestimation of memory and the reason for the very poor performance. There were Sort Warnings in SQL Profiler; unlike before, this was a Multiple pass sort instead of a Single pass sort, which occurs when the granted memory is far too low.

      Intermediate summary: this issue can be avoided by not caching the plan for memory-consuming queries. Another possibility is to use the recompile hint, or the optimize for hint to allocate memory for a predefined date range.

      Let's recreate the stored procedure with the recompile hint.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByCreationDate
        go
        create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
        begin
              declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
              select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                    where c.CreationDate between @CreationDateFrom and @CreationDateTo
                    order by c.CustomerName
              option (maxdop 1, recompile)
        end
        go

      Let's execute the stored procedure initially with the 1 month date range and then with the 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-30'
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

      The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The execution with the 1 month date range has a good estimate like before. The execution with the 6 month date range also has a good estimate and memory grant like before, because the query was recompiled with the current set of parameter values. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit.

      Let's recreate the stored procedure with an optimize for hint for the 6 month date range.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByCreationDate
        go
        create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
        begin
              declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
              select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                    where c.CreationDate between @CreationDateFrom and @CreationDateTo
                    order by c.CustomerName
              option (maxdop 1, optimize for (@CreationDateFrom = '2001-01-01', @CreationDateTo = '2001-06-30'))
        end
        go

      Let's execute the stored procedure initially with the 1 month date range and then with the 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-30'
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

      The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The execution with the 1 month date range overestimates rows and memory, because we provided a hint to optimize for 6 months of data. The execution with the 6 month date range has a good estimate and memory grant, again because the hint optimizes for 6 months of data.
      Let's execute the stored procedure with a 12 month date range using the currently cached plan for the 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-12-31'
        go

      The stored procedure took 1138 ms to complete. 259200 rows were estimated based on the optimize for hint value for the 6 month date range, while the actual number of rows is 524160 due to the 12 month date range. The stored procedure was granted only enough memory to sort the 6 month date range and not the 12 month date range, so there will be a spill to tempdb. There were Sort Warnings in SQL Profiler.

      As we see above, the optimize for hint cannot guarantee enough memory and optimal performance the way the recompile hint can.

      This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to a spill to tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can itself lead to poor performance.

      Summary: a cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possible values. One can mitigate this by using the recompile hint, but that leads to compilation overhead; however, in most cases it is better to pay for compilation than to spill the sort to tempdb, which can be very expensive compared to the compilation cost. The other possibility is to use the optimize for hint, but if one sorts more data than the hint assumes, this will still lead to a spill. On the other side there is also the possibility of overestimation leading to unnecessary memory pressure on other concurrently executing queries; in the case of Hash Match operations, this overestimation of memory might itself lead to poor performance. When the values used in the optimize for hint are later archived from the database, the estimation will be wrong, leading to worse performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in that case.

      I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

      Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

      Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work.
This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Max RAM for computer: 16GB or 8GB?

    - by Laptop memory question
    The manufacturer's specifications for my notebook say memory can be extended from 4GB to 8GB, whereas running sudo dmidecode suggests the computer can use 16GB, as below:

        Handle 0x0037, DMI type 16, 15 bytes
        Physical Memory Array
            Location: System Board Or Motherboard
            Use: System Memory
            Error Correction Type: None
            Maximum Capacity: 16 GB
            Error Information Handle: Not Provided
            Number Of Devices: 4

    Which one is correct?

    Read the article

  • Why do we talk about addresses and memory of variables in C?

    - by user2720323
    Why do we talk about the addresses and memory of variables in C, when in other languages (like Java, .NET, etc.) we do not talk about variable addresses and memory in a program and simply use the variables directly? In C we keep hearing the words "address" and "memory". How can this be explained? My understanding is that C is a high-level language designed over assembly language, so C is a thin layer over assembly (in assembly we use memory locations to store and track a variable), while in other languages these address- and memory-related details are wrapped by the language itself, so we never hear these words.
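    (Not from the original question – a minimal C sketch of what "the address of a variable" means:)

        #include <stdio.h>

        int main(void)
        {
            int value = 42;         /* the variable's contents                     */
            int *address = &value;  /* &value is where the variable lives in memory */

            printf("value = %d, stored at %p\n", value, (void *)address);

            *address = 7;           /* writing through the address changes the variable */
            printf("value is now %d\n", value);
            return 0;
        }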

    Read the article

  • Upgrade Intel Xeon Prestonia to a 64-bit processor

    - by IDisposable
    In theory, could I upgrade an mPGA604-socket motherboard with a Prestonia processor to some Intel Xeon processor that supports 64-bit? I've got a Dell PowerEdge 1750 with dual 2.8GHz Xeon processors running my Windows Home Server machine. I want to upgrade to the upcoming Vail release, but it is 64-bit only. The processors are Prestonia-core, which is pre-64-bit, but I was wondering if it is possible to swap in some pin-compatible later generation processor. According to Wikipedia, the mPGA604 socket continues to be used for several later generations that have the same pinout. So, IN THEORY, could I swap in a 64-bit part, like a Nocona-core?

    Read the article

  • Upgrade OpenSSL 0.9.8k to OpenSSL 1.0.1c on Ubuntu 10.04

    - by Nina
    We're currently using Ubuntu 10.04 and, based on the PCI compliance results, we were told to upgrade our OpenSSL. I attempted to do this using these references: http://sandilands.info/sgordon/upgrade-latest-version-openssl-on-ubuntu and http://www.lunarforums.com/dedicated_web_hosting_at_lunarpages/upgrading_openssl-t35015.0.html Unfortunately, they didn't work for me, and when I attempted to remove the old version prior to installation, it looks like it broke a few things in the system. The article from Steven Gordon seemed like it would work for me, but when I ran the openssl version command, it still reported the old version. I was wondering if anyone has any suggestions on what I should do. Fix: after following the steps from Steven Gordon, make sure you restart Apache and/or restart your computer (I did both, but I'm sure a simple restart will fix it right up).

    Read the article

  • Register for Windows Upgrade Offer from Windows 7 to Windows 8

    - by BumbleBee
    I have to register for the Windows Upgrade Offer from here. I purchased a Dell Inspiron 5520 laptop two weeks ago and it came with Windows 7 Home Basic. Now I want to register for the Windows upgrade offer, but when I filled in the registration form and submitted it, it said I am not eligible for this kind of offer. I don't know why this message appears, since I bought the laptop within the eligible time period. I think I filled in wrong details in the form, because I am not sure what to enter for Retailer's Name, Purchase Date and PC Model. Also, how do I find the right Purchase Date from my product's Service Tag? Which date should I enter in the form, the Shipping Date or the Manufacture Date? Please point me in the right direction to register, with the correct information regarding Retailer's Name, Purchase Date and PC Model.

    Read the article

  • Queries related to Windows 8 Upgrade installation media

    - by Karan
    I don't expect all these questions to be answered right away. Gradually over time is fine. Also, I guess the answers would differ depending on the language, so I'm looking for information related to the English/en-US versions only (I believe there is an en-GB version as well?) The Windows 8 Upgrade Assistant allows one to create USB/DVD (ISO) media: Is the media customised per-PC in any way, or are copies created on different PCs with different base OSes (XP/Vista/7) exactly the same? If the ISOs are not customised, are they exactly the same content-wise as the Upgrade DVDs available for purchase?

    Read the article

  • Upgrade to Genuine Windows 8 Pro from non genuine Windows 7

    - by mark
    I have a computer with non-genuine Windows 7 (cracked with Windows Loader). I was thinking of buying / upgrading to Windows 8 Pro. I ran Windows8-UpgradeAssistant.exe and was told that I can upgrade to Windows 8 Pro. Can I perform a clean upgrade (format and install) from my current Windows 7 to Windows 8? In the future, in order to re-install Windows 8, do I need to re-install the non-genuine Windows 7 and install on top of it? If my hard disk crashes, or I want to install on a new hard disk (clean install), do I need to install Windows 7 again before upgrading to Windows 8? If I don't like Windows 8, can I downgrade to a genuine Windows 7?

    Read the article

  • F5 BigIP upgrade from 9.x to 10.x

    - by mbuk2k
    Having a few difficulties upgrading a Big IP 3400 from 9.4.8 to any version 10.x image. The following are the versions I've tried: 10.1.0.3341.0, 10.2.2.763.3, 10.2.3.112.0, 10.2.4.577.0. To upgrade I'm running the following command: image2disk --format=volumes BIGIP-10.1.0.3341.0.iso (obviously replacing the version number with the relevant image I'm trying to upgrade to each time). The F5 reboots and starts copying packages; however, after 30 seconds or so it just stops copying. The cursor in the console is still flashing, but no matter how long it's left, the package doesn't copy. It seems to be a different package with each version/image (but always the same package per version) at the point of freezing, which I'm guessing suggests a space issue? I've checked free space on the device and it has over 2GB free at root, which should surely be enough? If anyone has any advice or pointers, it would be kindly appreciated. Thank you

    Read the article

  • Windows 8 Upgrade : From Windows 7 Trial

    - by Golmaal
    This is a bit complicated, it seems. I own Windows Vista Ultimate 64-bit (Retail). It was okay, but around a couple of weeks ago I had a system crash, and at that time I decided that I would install Windows 8 as soon as it came out. However, because of some problems in Vista, at the time of the crash I installed a Windows 7 trial. I had some urgent work to do, which I accomplished, and then I switched the PC off. Now I have purchased the Windows 8 Pro Upgrade ($40 version). If I go for a clean install, will it be able to install Windows 8 on a non-activated Windows 7? During activation, if it asks for a Vista serial number, I can provide it. Or will I first have to install Vista, and only then will it allow me to install Windows 8? Also, I used the Upgrade Assistant to download Windows 8 on my laptop (Windows 7 OEM). Will it work on my above-mentioned desktop?

    Read the article

  • Input devices stopped working during system upgrade

    - by amorfis
    Hi, I was upgrading Ubuntu on my server (though Ubuntu is the desktop version) when the mouse and keyboard stopped working :( The screen went black (screensaver), and now I can't do anything. I don't know at what stage of the upgrade they stopped working; it is probably now waiting for me to answer some question. The keyboard and mouse were connected through a KVM; connecting them directly doesn't help. Both are USB. What I can do is connect to the machine by ssh. Can I somehow see and answer the update system's questions over ssh and finalize the upgrade?

    Read the article

  • Is Family Tree Maker 2011 the right upgrade?

    - by bill weaver
    My father has used Family Tree Maker for years, but hasn't upgraded since version 11. It is difficult to tell from reviews at Amazon and other places whether upgrading to FTM 2011 is a good choice. File incompatibilities and upgrade woes sound like customer service is lacking, and I've read reports of it uploading your data to their database but then trying to sell you a download of that data. Looking at the ancestry.com site makes me think it's solely about selling add-ons and upgrades. On the other hand, the feature set seems fairly rich and the software has a pretty strong following. I was able to get Gramps working on my system, but that's not going to work for my dad. Any advice on a good upgrade path? It doesn't necessarily have to be FTM. The only requirement is a way to import his existing data.

    Read the article

  • Apache Upgrade Version

    - by mcondiff
    I have a live server where I need to upgrade Apache from 2.2.10 to 2.2.15 on Windows Server 2003. I have just inherited this server and need the fastest way to upgrade to the latest stable version without much downtime and without messing up my configuration. My first thought is to copy httpd.conf, uninstall the current Apache, install the latest stable Apache version and then replace the new httpd.conf with the previous live version. Does anyone see a problem or pitfall with that? How does a server pro do something like this? (I'm a programmer and am new to server systems administration.)

    Read the article

  • Automate Windows 8.1 Enterprise upgrade [migrated]

    - by Ben M.
    I have been trying to find the necessary command line switches for automating the upgrade for my Windows 8 clients to Windows 8.1. I have the ISO extracted and I've run setup.exe /? but that doesn't tell me enough. I can't find any relevant information from search engines. Can anyone point me to some documentation or information on how to automate the upgrade so that it keeps user data, programs, etc? I know how to do it when running the installer manually, but I obviously do not wish to do that with 100+ machines.

    Read the article

  • Intel T5600 to upgrade T5250

    - by galets
    I want to upgrade my laptop, which has a T5250 CPU, to a T5600 CPU to get virtualization support. I ordered a T5600 on eBay, but it didn't fit. The T5250 spec says it supports the PPGA478 socket, so I assume that is what I have. The T5600 spec says it supports "PBGA479, PPGA478". Since the T5600 didn't fit as a replacement, I assume this means there are two models of the T5600, one supporting PBGA479 and another supporting PPGA478, and not, as I thought, one CPU that supports both. Is that a correct statement? Does anybody know if such an upgrade is even possible, or am I wasting my time?

    Read the article

  • Upgrade PHP v5.3.3 to v5.3.4

    - by Ty01
    I currently have PHP v5.3.3 installed and configured. Everything is working perfectly, but I would like to keep PHP up to date and upgrade to v5.3.4. Can someone please describe the usual upgrade process for PHP when compiling manually? For example: is it just as easy as downloading the newest source, uncompressing it, compiling it using the same (or comparable) options that were used on the last version? Is there anything that has to be removed or changed from the previous version installed? I'm clueless. Please help!

    Read the article

  • CentOS 6.3 X86_64 RAM detection

    - by Peter
    I have a machine with 8GB of RAM (the BIOS sees it, so my motherboard and CPU support it), and I installed CentOS 6.3 on it. When it starts up, it only sees 3.1GB. uname says:

        2.6.32-279.1.1.el6.x86_64 #1 SMP

    BIOS-provided physical RAM map:

        BIOS-e820: 0000000000000000 - 000000000009fc00 (usable)
        BIOS-e820: 000000000009fc00 - 00000000000a0000 (reserved)
        BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved)
        BIOS-e820: 0000000000100000 - 00000000cf65f000 (usable)
        BIOS-e820: 00000000cf65f000 - 00000000cf6e8000 (ACPI NVS)
        BIOS-e820: 00000000cf6e8000 - 00000000cf6ec000 (usable)
        BIOS-e820: 00000000cf6ec000 - 00000000cf6ff000 (ACPI data)
        BIOS-e820: 00000000cf6ff000 - 00000000cf700000 (usable)

    dmesg | grep -i memory says:

        initial memory mapped : 0 - 20000000
        init_memory_mapping: 0000000000000000-00000000cf700000
        Reserving 129MB of memory at 48MB for crashkernel (System RAM: 3319MB)
        PM: Registered nosave memory: 000000000009f000 - 00000000000a0000
        PM: Registered nosave memory: 00000000000a0000 - 00000000000e0000
        PM: Registered nosave memory: 00000000000e0000 - 0000000000100000
        PM: Registered nosave memory: 00000000cf65f000 - 00000000cf6e8000
        PM: Registered nosave memory: 00000000cf6ec000 - 00000000cf6ff000
        Memory: 3184828k/3398656k available (5152k kernel code, 1016k absent, 212812k reserved, 7166k data, 1260k init)
        please try 'cgroup_disable=memory' option if you don't want memory cgroups
        Initializing cgroup subsys memory
        Freeing initrd memory: 16136k freed
        Non-volatile memory driver v1.3
        agpgart-intel 0000:00:00.0: detected 8192K stolen memory
        crash memory driver: version 1.1
        Freeing unused kernel memory: 1260k freed
        Freeing unused kernel memory: 972k freed
        Freeing unused kernel memory: 1732k freed

    Update: Memtest sees all 8GB, and so does dmidecode -t 17 | grep Size. But free -m still sees only 3.1GB.

    Question: How can I repair/modify the system so that it sees all 8GB of RAM? Thanks in advance!

    Read the article

  • Clearing Windows file share "memory"

    - by Tom Shaw
    I'm currently upgrading a Samba file server (from 3.0.23d to 3.4.3). I have a problem on the Windows client side: if the client was accessing a UNC path or mapped drive from the Samba server before the upgrade, then after the upgrade those paths or drives are not accessible. However, I can consistently resolve the client side problem for good by rebooting the client and then re-mapping all of the mapped drives. The problem appears to be related to the client's "memory" of the pre-upgrade Samba server, which the reboot and re-map clears. I have the same issue and same fix on Windows XP SP3 and Windows Server 2003 SP2. This question is specifically: is it possible to reproduce the benefits of the Windows reboot without actually rebooting the client? I have tried restarting various Windows services, disabling and enabling the network, logging out and back in again, but nothing except a reboot appears to do the trick.

    Read the article

  • How to fix the "Unable to calculate upgrade" issue when upgrading from 12.04 to 12.10?

    - by Vagrant232
    I've been trying to upgrade to 12.10 ever since it was released today, but I keep hitting this error:

        An unresolvable problem occurred while calculating the upgrade:
        E:Unable to correct problems, you have held broken packages.
        This can be caused by:
        * Upgrading to a pre-release version of Ubuntu
        * Running the current pre-release version of Ubuntu
        * Unofficial software packages not provided by Ubuntu

    I've tried updating all the currently installed software, removing all the extra PPAs, and downgrading the files installed from the xorg-edgers PPA, but I haven't been able to solve the problem.

    Read the article

  • android memory management outside heap

    - by Daniel Benedykt
    Hi, I am working on an application for Android and, since we have lots of graphics, we use a lot of memory. I monitor the heap size and it's about 3-4 MB, with peaks of 5 MB when I do something that requires more memory (it then goes back to 3). This is not a big deal, but some other things are handled outside the heap, like the loading of drawables. For example, if I run the ddms tool outside Eclipse and go to sysinfo, I see that my app is taking 20 MB on the Droid and 12 on the G1, but the heap sizes are the same on both, because the data is the same but the images are different. So the questions are: How do I know what is taking the memory outside the heap? What other things take memory outside the heap? Complex layouts (a big view tree)? Animations? Thanks, Daniel
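    (Not from the original question – one data point worth logging: on Android versions of that era, bitmap pixel data for drawables lives on the native heap rather than the Dalvik heap, and android.os.Debug exposes counters for it. A minimal sketch; the class and tag names are mine.)

        // Sketch: log Dalvik-heap vs native-heap usage to compare with what ddms/sysinfo reports.
        import android.os.Debug;
        import android.util.Log;

        public final class MemorySnapshot {
            private static final String TAG = "MemorySnapshot";   // hypothetical log tag

            public static void log() {
                long dalvikUsed = Runtime.getRuntime().totalMemory()
                                - Runtime.getRuntime().freeMemory();      // managed heap in use
                long nativeUsed = Debug.getNativeHeapAllocatedSize();     // e.g. bitmap pixel data
                Log.d(TAG, "dalvik used: " + (dalvikUsed / 1024) + " KB, "
                         + "native used: " + (nativeUsed / 1024) + " KB");
            }
        }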

    Read the article

  • Help with strange memory behavior. Looking for leaks both in my brain and in my code.

    - by BastiBechtold
    I spent the last few days trying to find memory leaks in a program we are developing. First of all, I tried using some leak detectors. After fixing a few issues, they do not find any leaks any more. However, I am also monitoring my application using perfmon.exe. Performance Monitor reports that 'Private Bytes' and 'Working Set - Private' are steadily rising when the app is used. To me, this suggests that the program is using more and more memory the longer it runs. Internal resources seem to be stable, however, so this sounds like leaking to me. The program loads a DLL at runtime. I suspect that these leaks, or whatever they are, occur in that library and get purged when the library is unloaded, hence they won't get picked up by the leak detectors. I used both DevPartner BoundsChecker and Visual Leak Detector to look for memory leaks. Both supposedly catch leaks in DLLs. Also, the memory consumption increases in steps, and those steps roughly, but not exactly, coincide with certain GUI actions I perform in the application. If these were errors in our code, they should get triggered every single time the actions are performed, and not just most of the time. Whenever I am confronted with so much strangeness, I begin to question my basic assumptions. So I turn to you, who know everything, for suggestions. Is there a flaw in my assumptions? Do you have an idea of how to go about troubleshooting a problem like this?

    Edit: I am currently using Microsoft Visual C++ (x86) on Windows 7 64.

    Edit2: I just used IBM Purify to hunt for leaks. First of all, it lists a full 30% of the program as leaked memory. This cannot be true; I guess it is identifying the whole DLL as leaked or something like that. However, if I search for new leaks every few actions, it reports leaks that correspond with the size increase reported by Performance Monitor. This could be a lead to a leak. Sadly, I am only using the trial version of Purify, so it won't show me the actual location of those leaks. (These leaks only show up at runtime. When the program exits, there are no leaks whatsoever reported by any tool.)
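    (Not from the original question – one more low-tech data point, assuming an MSVC debug build: the debug CRT's heap-checkpoint functions can diff the CRT heap around one of the suspicious GUI actions. The helper name below is mine, and this only sees allocations made through the CRT heap the snapshot is taken in; growth that this misses, e.g. from a DLL with its own CRT or from raw HeapAlloc/VirtualAlloc, would itself be a useful clue.)

        // Sketch only: wrap a suspicious action with CRT heap checkpoints (MSVC, debug build).
        #define _CRTDBG_MAP_ALLOC   // include file/line info in dumps where available
        #include <stdlib.h>
        #include <crtdbg.h>

        void CheckActionForCrtHeapGrowth(void (*action)(void))
        {
            _CrtMemState before, after, diff;

            _CrtMemCheckpoint(&before);   /* snapshot the CRT heap            */
            action();                     /* perform the GUI action under test */
            _CrtMemCheckpoint(&after);

            if (_CrtMemDifference(&diff, &before, &after))
                _CrtMemDumpStatistics(&diff);   /* growth is written to the debugger output */
        }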

    Read the article

  • Determining if Memory Pointer is Valid - C++

    - by Jim Fell
    It has been my observation that if free( ptr ) is called where ptr is not a valid pointer to system-allocated memory, an access violation occurs. Let's say that I call free like this:

        LPVOID ptr = (LPVOID)0x12345678;
        free( ptr );

    This will most definitely cause an access violation. Is there a way to test that the memory location pointed to by ptr is valid system-allocated memory? It seems to me that the memory management part of the Windows OS kernel must know what memory has been allocated and what memory remains for allocation; otherwise, how could it know whether enough memory remains to satisfy a given request? (rhetorical) That said, it seems reasonable to conclude that there must be a function (or set of functions) that would allow a user to determine whether a pointer is valid system-allocated memory. Perhaps Microsoft has not made these functions public. If Microsoft has not provided such an API, I can only presume that it was for an intentional and specific reason. Would providing such a hook into the system pose a significant threat to system security?

    Situation report: although knowing whether a memory pointer is valid could be useful in many scenarios, this is my particular situation. I am writing a driver for a new piece of hardware that is to replace an existing piece of hardware that connects to the PC via USB. My mandate is to write the new driver such that calls to the existing API for the current driver will continue to work in the PC applications in which it is used. Thus the only required change to existing applications is to load the appropriate driver DLL(s) at startup. The problem here is that the existing driver uses a callback to send received serial messages to the application; a pointer to allocated memory containing the message is passed from the driver to the application via the callback. It is then the responsibility of the application to call another driver API to free the memory by passing back the same pointer from the application to the driver. In this scenario the second API has no way to determine whether the application has actually passed back a pointer to valid memory.
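    (Not from the original question – one documented Win32 call that gets part of the way there is VirtualQuery. It can report whether an address range is committed and readable, which rules out wild pointers like 0x12345678, but it cannot prove that a pointer came from the right heap or is safe to pass to free. A minimal sketch:)

        #include <windows.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Sketch: returns nonzero if the address lies in committed, readable memory.
           This does NOT prove the pointer is a valid heap block for free(). */
        static int PointsToCommittedMemory(const void *ptr)
        {
            MEMORY_BASIC_INFORMATION mbi;
            if (VirtualQuery(ptr, &mbi, sizeof(mbi)) == 0)
                return 0;                     /* not a queryable address  */
            if (mbi.State != MEM_COMMIT)
                return 0;                     /* free or merely reserved  */
            return (mbi.Protect & (PAGE_READONLY | PAGE_READWRITE |
                                   PAGE_EXECUTE_READ | PAGE_EXECUTE_READWRITE)) != 0;
        }

        int main(void)
        {
            void *good = malloc(16);
            void *bad  = (void *)0x12345678;
            printf("good: %d  bad: %d\n",
                   PointsToCommittedMemory(good), PointsToCommittedMemory(bad));
            free(good);
            return 0;
        }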

    Read the article

  • Toshiba Satellite L655D-S5050 Processor Upgrade

    - by C-dizzle
    I have been searching for hours to see what kind of upgrade I can do with my processor. I just ordered replacement memory so I can go to 8GB instead of 3GB, and now I want to see what is available for my CPU. Currently this is what is in my laptop (in case some of you don't know what comes with this model):

        Windows 7 Home Pro - 64 Bit
        AMD M880G Chipset
        AMD Athlon II Dual Core P320 - 2.1 GHz, 1MB L2 Cache

    The memory I ordered was Crucial 2 x 4GB DDR3 1333 PC3-10600. I'm sure someone out there can help me, because Google hasn't been too friendly with me today.

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 2 of 2 – Memory Profiler)

    - by ToStringTheory
    One of the things that people might not know about me is my obsession with making my code as efficient as possible. Many people might not realize how much of an undertaking this can be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind… In trying to make code efficient, there are many different factors that play a part – size of the project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine the best way to implement a feature to achieve the efficiency I am looking for. One program that I have recently come to learn about – Red Gate ANTS Performance (CLR) and Memory Profiler – gives me tools to accomplish that job more efficiently as well. In this review, I am going to cover some of the features of the ANTS Memory Profiler by compiling some hideous example code to test against.

    Notice

    As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products in exchange for a free license to the program. I have not let this affect my opinions of the product in any way, and neither Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way.

    Introduction – Part 2

    In my last post, I reviewed the feature-packed Red Gate ANTS Performance Profiler. Separate from the Performance Profiler is the Red Gate ANTS Memory Profiler – a simple, easy to use utility for checking how your application is handling memory management… a tool that I wish I had had many times in the past. This post will be focusing on the ANTS Memory Profiler and its tool set. The Memory Profiler has a large assortment of features just like the Performance Profiler, with the new session screen looking nearly exactly alike:

    ANTS Memory Profiler

    Memory profiling is not something that I have to do very often… In the past, in the few cases where I had to find a memory leak in an application, I usually just traced the code of the operations being performed to look for oddities… Sadly, I have come across more undisposed/non-using'ed IDisposable objects, usually from ADO.NET, than I would ever like to see. Support is not fun; however, using ANTS Memory Profiler makes this task easier. For this round of testing, I am going to use the same code from my previous example, using the WPF application.

    This time, I will choose the 'Profile Memory' option from the ANTS menu in Visual Studio, which launches the solution in its currently configured state/start-up project and then launches the ANTS Memory Profiler to help. It prepopulates all of the fields with the current project information, and all I have to do is select the 'Start Profiling' option.

    When the window comes up, it is actually quite barren, just giving ideas on how to work the profiler. You start by getting to the point in your application that you want to profile, and then taking a 'Memory Snapshot'. This performs a full garbage collection and snapshots the managed heap. Using the same WPF app as before, I will go ahead and take a snapshot now. As you can see, ANTS is already giving me lots of information regarding the snapshot; however, this is just one snapshot.
    The whole point of the profiler is to perform an action, usually one where a memory problem is being noticed, then take another snapshot and diff the two to see what has changed. I am going to go ahead and generate 5000 primes, and then take another snapshot. As you can see, ANTS is already giving me a lot of new information about this snapshot compared to the last – differences in memory usage, fragmentation, class usage, etc… If you take more snapshots, you can use the dropdown at the top to set your actual comparison snapshots.

    If you look beneath the timeline, you will see a breadcrumb trail showing how best to approach profiling memory using ANTS. When you first do the comparison, you start on the Summary screen. You can either use the charts at the bottom, or switch to the class list screen to get to the next step. Here is the class list screen: as you can see, it lists information about all of the instances between the snapshots, and at the bottom it gives you a way to filter by telling ANTS what your problem is. I am going to go ahead and select the Int16[] to look at the Instance Categorizer.

    Using the Instance Categorizer, you can travel backwards to see where all of the instances are coming from. It may be hard to see in this image, but hopefully the lightbox (click on it) will help: I can see that all of these instances are rooted to the application through the UI TextBlock control. This image will probably be even harder to see; however, using the 'Instance Retention Graph', you can trace an object's memory inheritance up the chain to see its roots as well. This is a simple example, as this is simply a known element. Usually you would be profiling an actual problem and comparing those differences. I know in the past I have spotted a problem where a new context was created per page load and was rooted into the application through an event. As the application began to grow, performance and reliability problems started to emerge. A tool like this would have been a great way to identify the problem quickly.

    Overview

    Overall, I think that the Red Gate ANTS Memory Profiler is a great utility for debugging those pesky leaks.

    3 Biggest Pros:
    - Easy to use interface with lots of options for configuring the profiling session
    - Intuitive and helpful interface for drilling down from summary, to instance, to root graphs
    - ANTS provides an API for controlling the profiler. Not many options, but still helpful.

    2 Biggest Cons:
    - Inability to automatically snapshot the memory by interval
    - Lack of complete integration with Visual Studio via an extension panel

    Ratings

    Ease of Use (9/10) – I really do believe that they have brought simplicity to the once difficult task of memory profiling. I especially liked how it stepped you further into the drilldown by directing you towards the best options.

    Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.

    Features (7/10) – A really great set of features all around in the application; however, I would like to see some ability to automatically trigger snapshots based on intervals or framework-level items such as events.

    Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good. Their people are friendly, helpful, and happy!

    UI / UX (9/10) – The interface is very easy to get around, and all of the options are easy to find. With a little bit of poking around, you'll be optimizing Hello World in no time flat!
    Overall (9/10) – Overall, I am happy with the Memory Profiler and its features, as well as with the service I received when working with the Red Gate personnel.

    Thank you for reading up to here, or skipping ahead – I told you it would be shorter! Please, if you do try the product, drop me a message and let me know what you think! I would love to hear any opinions you may have on the product.

    Code

    Feel free to download the code I used above – download via DropBox.

    Read the article
