Search Results

Search found 12715 results on 509 pages for 'memory profiling'.


  • EBS: OPP Out of memory issue...

    - by ashish.shrivastava
    The FO Processor is a little more memory-hungry than other Java processes. If the XSLT scalable option is not set and, at the same time, your RTF template is not well optimized, you are definitely going to hit an Out of Memory exception while working with large volumes of data. If the memory requirement is not too bad, you can set the OPP heap size using the following SQL queries. Check the current OPP JVM heap size with:

        SELECT DEVELOPER_PARAMETERS
          FROM FND_CP_SERVICES
         WHERE SERVICE_ID = (SELECT MANAGER_TYPE
                               FROM FND_CONCURRENT_QUEUES
                              WHERE CONCURRENT_QUEUE_NAME = 'FNDCPOPP');

        DEVELOPER_PARAMETERS
        -----------------------------------------------------
        J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx512m

    Set the JVM heap size with:

        UPDATE FND_CP_SERVICES
           SET DEVELOPER_PARAMETERS = 'J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx2048m'
         WHERE SERVICE_ID = (SELECT MANAGER_TYPE
                               FROM FND_CONCURRENT_QUEUES
                              WHERE CONCURRENT_QUEUE_NAME = 'FNDCPOPP');

        COMMIT;

    You need to restart the Concurrent Manager for the change to take effect. If this does not resolve the issue, you need to optimize the RTF template and set the XSLT scalable option to true.

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let's take a simple query: "What is the most expensive air-mail order we have received to date?"

        SELECT Max(lo_ordtotalprice) most_expensive_order
          FROM lineorder
         WHERE lo_shipmode = 5;

    The LINEORDER table has been populated into the IM column store, and since we have no alternative access paths (indexes or views), the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords, "IN MEMORY", in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked INMEMORY and we may use the IM column store in this query. What do I mean by "may use"? There are a small number of cases where we won't use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used in Exadata environments. You can confirm that the IM column store was actually used by examining the session-level statistics, but more on that later. For now, let's focus on how the data is accessed in the IM column store and why, for analytical queries, it's faster to access the data in the new column format than in the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient.

    1. Access only the column data needed. The IM column store only has to scan two columns – lo_shipmode and lo_ordtotalprice – to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns.

    2. Scan and filter data in its compressed format. When data is populated into the IM column store, it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where the data is scanned in its uncompressed form, which could be 20X larger.

    3. Prune out any unnecessary data within each column. The fastest read you can execute is the read you don't do. In the IM column store, a further reduction in the amount of data accessed is possible thanks to the In-Memory storage indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query, the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine if our specified column value 5 exists in any IMCU, by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on an IMCU. The dictionary contains a list of the unique column values within the IMCU. Since we have an equality predicate, we can easily determine whether 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs.

    4. Use SIMD to apply filter predicates. For the IMCUs that do need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction, Multiple Data). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core that can be achieved in the buffer cache.

    I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session-level statistics. You can monitor the session-level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory column store begin with IM. You can see the full list of these statistics by typing:

        column display_name format a30

        SELECT display_name
          FROM v$statname
         WHERE display_name LIKE 'IM%';

    If we check the session statistics after we execute our query, the results would be as follows:

        SELECT Max(lo_ordtotalprice) most_expensive_order
          FROM lineorder
         WHERE lo_shipmode = 5;

        SELECT display_name
          FROM v$statname
         WHERE display_name IN ('IM scan CUs columns accessed',
                                'IM scan segments minmax eligible',
                                'IM scan CUs pruned');

    As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan
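
    To make the pruning idea concrete, here is a toy sketch in Python of min/max plus dictionary pruning over hypothetical compression units. This is purely an illustration of the concept, not Oracle's actual in-memory format:

        # Toy model: each "IMCU" carries min/max values and the set of
        # distinct values for one column (both hypothetical simplifications).
        class IMCU:
            def __init__(self, values):
                self.values = values                          # column entries in this unit
                self.lo, self.hi = min(values), max(values)   # storage-index min/max
                self.dictionary = set(values)                 # distinct values (dictionary encoding)

        def scan_equals(imcus, target):
            """Count matches, scanning only the IMCUs that could contain target."""
            matches = 0
            for cu in imcus:
                if not (cu.lo <= target <= cu.hi):
                    continue                                  # pruned by min/max storage index
                if target not in cu.dictionary:
                    continue                                  # pruned by the dictionary
                matches += sum(v == target for v in cu.values)  # the real scan
            return matches

        units = [IMCU([1, 2, 3]), IMCU([4, 6, 7]), IMCU([5, 5, 9])]
        print(scan_equals(units, 5))                          # only the last unit is scanned; prints 2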

    Read the article

  • Design review for application facing memory issues

    - by Mr Moose
    I apologise in advance for the length of this post, but I want to paint an accurate picture of the problems my app is facing and then pose some questions below. I am trying to address some self-inflicted design pain that is now leading to my application crashing due to out of memory errors.

    An abridged description of the problem domain is as follows:

    - The application takes in a "dataset" that consists of numerous text files containing related data.
    - An individual text file within the dataset usually contains approx 20 "headers" that contain metadata about the data it contains. It also contains a large tab-delimited section containing data that is related to data in one of the other text files contained within the dataset. The number of columns per file is very variable, from 2 to 256+ columns.

    The original application was written to allow users to load a dataset and map certain columns of each of the files, basically indicating key information on the files to show how they are related, as well as identifying a few expected column names. Once this is done, a validation process takes place to enforce various rules and ensure that all the relationships between the files are valid. Once that is done, the data is imported into a SQL Server database. The database design is an EAV (Entity-Attribute-Value) model, used to cater for the variable columns per file. I know EAV has its detractors, but in this case I feel it was a reasonable choice given the disparate data and variable number of columns submitted in each dataset.

    The memory problem: given that the combined size of all text files was at most about 5 megs, and in an effort to reduce the database transaction time, it was decided to read ALL the data from the files into memory and then perform the following:

    - perform all the validation whilst the data was in memory
    - relate it using an object model
    - start a DB transaction and write the key columns row by row, noting the Id of the written row (all tables in the database utilise identity columns); the Id of the newly written row is then applied to all related data
    - once all related data has been updated with the key information to which it relates, write these records using SqlBulkCopy. Due to our EAV model, we essentially have x columns by y rows to write, where x can be 256+ and rows are often into the tens of thousands
    - once all the data is written without error (which can take several minutes for large datasets), commit the transaction

    The problem now comes from the fact that we are receiving individual files containing over 30 megs of data. In a dataset, we can receive any number of files. We've started seeing datasets of around 100 megs coming in, and I expect it is only going to get bigger from here on in. With files of this size, the data can't even be read into memory without the app falling over, let alone be validated and imported. I anticipate having to modify large chunks of the code to allow validation to occur by parsing files line by line (a minimal sketch of this appears below), and am not exactly decided on how to handle the import and transactions.

    Potential improvements:

    - I've wondered about using GUIDs to relate the data rather than relying on identity fields. This would allow data to be related prior to writing to the database. It would certainly increase the storage required, though, especially in an EAV design. Would you think this is a reasonable thing to try, or do I simply persist with identity fields (natural keys can't be trusted to be unique across all submitters)?
    - Use of staging tables to get data into the database, only performing the transaction to copy data from the staging area to the actual destination tables.

    Questions:

    - For systems like this that import large quantities of data, how do you go about keeping transactions small? I've kept them as small as possible in the current design, but they are still active for several minutes and write hundreds of thousands of records in one transaction. Is there a better solution?
    - The tab-delimited data section is read into a DataTable to be viewed in a grid. I don't need the full functionality of a DataTable, so I suspect it is overkill. Is there any way to turn off various features of DataTables to make them more lightweight?
    - Are there any other obvious things you would do in this situation to minimise the memory footprint of the application described above?

    Thanks for your kind attention.
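
    The app in question is .NET, but the line-by-line approach being considered is language-neutral. A minimal sketch in Python (the file name and the single rule shown are hypothetical):

        import csv

        def validate_rows(path, max_errors=100):
            """Stream a tab-delimited file one row at a time, so the whole
            file never has to sit in memory at once."""
            errors = []
            with open(path, newline="") as f:
                reader = csv.reader(f, delimiter="\t")
                header = next(reader)
                for lineno, row in enumerate(reader, start=2):
                    if len(row) != len(header):      # example rule: ragged row
                        errors.append((lineno, "column count mismatch"))
                        if len(errors) >= max_errors:
                            break                    # bail out early instead of accumulating everything
            return errors

        # errors = validate_rows("dataset_part01.txt")   # hypothetical file name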

    Read the article

  • Applications affected by memory performance

    - by robotron
    I'm writing a paper on the topic of applications affected more by memory performance than by processor performance. I've got a lot written regarding the gap between the two; however, I can't seem to find anything about the applications that might be affected more by memory performance than by processor speed. I suppose these are applications that make a large number of memory references, but I have no idea what kind of application would make such a large number of references that memory stands out as the bottleneck. Perhaps databases? Can you please give me some pointers on how to proceed, or some links to papers? I'm really stuck.
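
    One way to see the effect for yourself is a stride microbenchmark: the same element reads get slower per access as the pattern loses locality. A rough sketch in Python (numbers vary by machine, and interpreter overhead blunts the effect compared to compiled code):

        import time
        from array import array

        N = 1 << 24                          # 16M 8-byte ints, far larger than CPU caches
        data = array("q", range(N))          # contiguous buffer of machine integers

        def ns_per_access(step):
            count = N // step
            t0 = time.perf_counter()
            total = 0
            for i in range(0, N, step):
                total += data[i]             # one read per iteration
            return (time.perf_counter() - t0) / count * 1e9

        for step in (1, 16, 4096):           # sequential vs. increasingly cache-hostile
            print(f"stride {step:5d}: {ns_per_access(step):6.1f} ns per access")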

    Read the article

  • Where's my memory going?

    - by Stu2000
    My machine keeps 'freezing' before eventually logging out, with all the programs exiting. This is rather annoying, and I think it's because I keep running out of memory. I am not running any custom software, just NetBeans, Chrome etc. (stuff I usually run on other Ubuntu computers without issue). For some reason my memory usage is through the roof, as seen here, but I can't quite figure out why. Here is a screenshot which may be useful, with htop and GNOME System Monitor open as user and as root. I notice that my console-kit-daemon is taking up about a gig of 'virtual memory'. Is that normal? Any tips/advice will be helpful. In the meantime I have ordered 2 x 4 GB RAM sticks to try and just throw hardware at the issue.

    Read the article

  • Loading saved byte array to memory stream causes out of memory exception

    - by user2320861
    At some point in my program the user selects a bitmap to use as the background image of a Panel object. When the user does this, the program immediately draws the panel with the background image and everything works fine. When the user clicks "Save", the following code saves the bitmap to a DataTable object.

        // the table has a byte[] column named BackgroundImageByteArray
        MyDataSet.MyDataTableRow myDataRow = MyDataSet.MyDataTableRow.NewMyDataTableRow();

        using (MemoryStream stream = new MemoryStream())
        {
            this.Panel.BackgroundImage.Save(stream, ImageFormat.Bmp);
            myDataRow.BackgroundImageByteArray = stream.ToArray();
        }

    Everything works fine; there is no out of memory exception with this stream, even though it contains all the image bytes. However, when the application launches and loads the saved data, the following code throws an Out of Memory exception:

        using (MemoryStream stream = new MemoryStream(myDataRow.BackgroundImageByteArray))
        {
            this.Panel.BackgroundImage = Image.FromStream(stream);
        }

    The streams are the same length. I don't understand how one throws an out of memory exception and the other doesn't. How can I load this bitmap?

    P.S. I've also tried:

        using (MemoryStream stream = new MemoryStream(myDataRow.BackgroundImageByteArray.Length))
        {
            stream.Write(myDataRow.BackgroundImageByteArray, 0,
                         myDataRow.BackgroundImageByteArray.Length); // throws OoM exception here
        }

    Read the article

  • Memory allocation in case of static variables

    - by eSKay
    I am always confused about static variables, and the way memory allocation happens for them. For example:

        int a = 1;
        const int b = 2;
        static const int c = 3;

        int foo(int &arg)
        {
            arg++;
            return arg;
        }

    How is the memory allocated for a, b and c? What is the difference (in terms of memory) if I call foo(a), foo(b) and foo(c)?

    Read the article

  • Is there an NSCFTimer memory leak?

    - by mystify
    I tracked down a memory leak with Instruments. I always end up with the information that the responsible library is Foundation. When I track that down in my code, I end up here, but there's nothing wrong with my memory management:

        - (void)setupTimer
        {
            // stop timer if still there
            [self stopAnimationTimer];

            NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:0.2
                                                              target:self
                                                            selector:@selector(step:)
                                                            userInfo:nil
                                                             repeats:YES];
            self.animationTimer = timer; // retain property, -release in -dealloc method
        }

    The animationTimer property is retaining the timer. In -dealloc I -release it. Now, does that look like a framework bug? I checked with iPhone OS 3.0 and 3.1; both have that problem every time I use NSTimer like this. Any idea what else could be the problem? (My memory leak scan interval was 0.1 seconds, but it's the same with 5 seconds.)

    Read the article

  • UIWebView memory management

    - by wolfrevo
    Hello, I have a problem with memory management. I am developing an application that makes heavy use of UIWebView. This app dynamically generates lots of UIWebViews while loading content from my server. Some of these UIWebViews are quite large and have a lot of pictures. If I use Instruments to detect leaks, I do not detect any. However, lots of objects remain allocated, and I suspect that has to do with the UIWebViews. When the web views are released because they are no longer needed, it appears that not all of their memory is released. I mean, after a request to my server the app creates a UITableView and many web views (Instruments says about 8 MB). When the user taps back, all of them are released, but memory usage only decreases by about 2-3 MB, and after 5-10 minutes of using the app it crashes. Am I missing something? Does anyone know what could be happening? Thank you!

    Read the article

  • Does it take time to deallocate memory?

    - by jm1234567890
    I have a C++ program which, during execution, will allocate about 3-8 GB of memory to store a hash table (I use tr1/unordered_map) and various other data structures. However, at the end of execution, there will be a long pause before returning to the shell. For example, at the very end of my main function I have:

        std::cout << "End of execution" << std::endl;

    But the execution of my program will go something like:

        $ ./program
        do stuff...
        End of execution
        [long pause of maybe 2 min]
        $             <-- returns to shell

    Is this expected behavior, or am I doing something wrong? I'm guessing that the program is deallocating the memory at the end. But commercial applications which use large amounts of memory (such as Photoshop) do not exhibit this pause when you close the application. Please advise :)
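
    The question is about C++, but the underlying effect (tearing down millions of small allocations is real work, roughly proportional to the object count) is easy to reproduce in other runtimes. A rough Python sketch; timings are machine-dependent:

        import time

        # Build a large hash table of small objects, then time its teardown.
        t0 = time.perf_counter()
        table = {i: str(i) for i in range(5_000_000)}
        print(f"build:    {time.perf_counter() - t0:.1f}s")

        t0 = time.perf_counter()
        del table                  # frees five million keys, values, and buckets
        print(f"teardown: {time.perf_counter() - t0:.1f}s")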

    Read the article

  • How does VirtualBox's memory usage work?

    - by DrFredEdison
    I've been running several VMs with VirtualBox and watching the memory usage reported from various perspectives, and I'm having trouble figuring out how much memory my VMs actually use. Here is an example: I have a VM running Windows 7 (as the guest OS) on my Windows XP host machine.

    - The host machine has 3 GB of RAM.
    - The guest VM is set up to have a base memory of 1 GB.
    - If I run Task Manager on the guest OS, I see memory usage of 430 MB.
    - If I run Task Manager on the host OS, I see 3 processes that seem to belong to VirtualBox:
      - VirtualBox.exe (1), using 60 MB of memory (this one seems to have the most CPU usage)
      - VirtualBox.exe (2), using 20 MB of memory
      - VBoxSvc.exe, using 11.5 MB of memory
    - While running the VM, the host OS's memory usage is about 2 GB.
    - When I shut down the VM, the host OS's memory usage goes back down to about 900 MB.

    So clearly, there are some huge differences here. I really don't understand how the guest OS can use 400+ MB while the host OS only shows about 75 MB allocated to the VM. Are there other processes used by VirtualBox that aren't as obviously named? Also, I'd like to know: if I run a machine with 1 GB, is that going to take 1 GB away from my host OS, or only the amount of memory the guest machine is currently using?

    Update: someone expressed distrust over my memory usage numbers, and I'm not sure if that distrust was directed at me or at my host OS's Task Manager's reporting (which is perhaps the culprit), but for any skeptics, here is a screenshot of those processes on the host machine:

    Read the article

  • Bios Memory settings and Virtualization + Ubuntu (Unofficial Answers Welcome) [closed]

    - by TardisGuy
    Attempting to optimize my (main, windowless) Ubuntu system for my uses, I will detail questions below. I understand this might be the wrong place to ask these questions; if so, my apologies, and thank you so much for your patience. Thanks to all the volunteers that have helped me learn Ubuntu over the years (since 5.10). This is a "short" list of questions I have been trying to figure out for some time. If you feel you can answer one but not another, that's already more than I could ask for. I have written this up in a format for easy navigation to the important points. If you would be so kind, please format answers as follows: "question 1: _ _ _ _ _" or "question 1-a: _ _ _ _ _". If you want to simply link me to relevant information rather than type up something really detailed, that would be more than awesome!

    Memory-specific questions. Goal: maximizing memory bandwidth to better perform in virtualization and large-file compression. (Possible conflict?)

    1. Ganged vs unganged: "which is better?" is relative, I know. But what about ganged vs unganged, with or without bank/channel interleaving?
       a. Speculation: if I understand correctly, channel interleaving has something to do with using both channels to read or write in a kind of "striping" pattern, as opposed to a standard half-duplex operation (probably wrong). But wouldn't ganged channels make this irrelevant?

    2. Memory interleaving (bank): does it have a downside? Does it require a ratio of clocks (if I run 4 x 4 GB DDR3)?
       a. If I'm reading correctly (trying to learn), this is designed to spread operations between latency cycles to work around the higher latency of "normal" operation.
       b. However, it seems to me that it has to be divisible by fractions of a master clock. So if I run memory at 1333 MHz, then the mean between 2 (physical) banks would operate every (roughly) 600 MHz? Warning, possibly utter nonsense: (1333/2, interleaving to act like 1 memory module per 2 sticks of a total of 4 sticks, meaning 2 channels @ 4.)
       c. Which makes me wonder if there would be left-over clock cycles the system would have to "truncate/balance" or something? But I'm certain there's a feature somewhere I don't understand.

    Virtualization questions

    3. AMD-V, with the option of IOMMU: I turned it on; why do I have an extra option of "64MB"? If IOMMU is on but "64MB" is "disabled", is it on? (I have scoured Google; I still don't know.)
       a. I think I understand that it's supposed to (kind of) set aside a part of RAM to act as a faster interactive zone for "stuff" (USB, graphics, and... what?).
       b. I am using Nvidia graphics on AMD (used kernel options "iommu=pt iommu=1"; pt = "passthrough"? No idea what they do; found them on Google to solve a boot-up issue).
       c. Will this option help me use low-latency sound hardware, like my MIDI keyboard?

    4. Can you recommend any additional tweaks?
       a. sysctl settings?
       b. swap settings?

    Congrats, you've reached the end. Thanks for reading.

    Read the article

  • XNA WP7 Texture memory and ContentManager

    - by jlongstreet
    I'm trying to get my WP7 XNA game's memory under control and under the 90 MB limit for submission. One target I identified was UI textures, especially fullscreen ones, that would never get unloaded. So I moved UI texture loads to their own ContentManager so I can unload them. However, this doesn't seem to affect the value of Microsoft.Phone.Info.DeviceExtendedProperties.GetValue("ApplicationCurrentMemoryUsage"), so it doesn't look like the memory is actually being released.

    Example: splash screens. In Game.LoadContent():

        Application.Instance.SetContentManager("UI"); // set the current content manager

        // load the splash textures
        for (int i = 0; i < mSplashTextures.Length; ++i)
        {
            mSplashTextures[i] = Application.Instance.Content.Load<Texture2D>(msSplashTextureNames[i]);
        }

        // set the content manager back to the global one
        Application.Instance.SetContentManager("Global");

    When the initial load is done and the title screen starts, I do:

        Application.Instance.GetContentManager("UI").Unload();

    The three textures take about 6.5 MB of memory. Before unloading the UI ContentManager, I see that ApplicationCurrentMemoryUsage is at 34.29 MB. After unloading the ContentManager (and doing a full GC.Collect()), it's still at 34.29 MB. But after that, I load another fullscreen texture (for the title screen background) and memory usage still doesn't change. Could it be keeping the memory for these textures allocated and reusing it?

    Edit: a very simple test:

        protected override void LoadContent()
        {
            // Create a new SpriteBatch, which can be used to draw textures.
            spriteBatch = new SpriteBatch(GraphicsDevice);

            PrintMemUsage("Before texture load: ");

            // TODO: use this.Content to load your game content here
            red = this.Content.Load<Texture2D>("Untitled");

            PrintMemUsage("After texture load: ");
        }

        private void PrintMemUsage(string tag)
        {
            Debug.WriteLine(tag + Microsoft.Phone.Info.DeviceExtendedProperties.GetValue("ApplicationCurrentMemoryUsage"));
        }

        protected override void Update(GameTime gameTime)
        {
            // Allows the game to exit
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();

            // TODO: Add your update logic here
            if (count++ == 100)
            {
                GC.Collect();
                GC.WaitForPendingFinalizers();
                PrintMemUsage("Before CM unload: ");

                this.Content.Dispose();
                GC.Collect();
                GC.WaitForPendingFinalizers();
                PrintMemUsage("After CM unload: ");

                red = null;
                spriteBatch.Dispose();
                GC.Collect();
                GC.WaitForPendingFinalizers();
                PrintMemUsage("After SpriteBatch Dispose(): ");
            }

            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            // TODO: Add your drawing code here
            if (red != null)
            {
                spriteBatch.Begin();
                spriteBatch.Draw(red, new Vector2(0, 0), Color.White);
                spriteBatch.End();
            }

            base.Draw(gameTime);
        }

    This prints something like (it changes every time):

        Before texture load: 7532544
        After texture load: 10727424
        Before CM unload: 9875456
        After CM unload: 9953280
        After SpriteBatch Dispose(): 9953280

    Read the article

  • What is the maximum memory a process (MySQL) can consume on a 32-bit OS?

    - by mmattax
    I have MySQL running on a 32-bit RHEL box. The server itself has 4 GB of total memory, with 2 GB allocated to MySQL. I would like to know the maximum amount of memory I can put in the box and how much of that I can allocate to MySQL. I have heard both 2 GB and 4 GB quoted as the per-process limit on a 32-bit OS... Ultimately, I'd like to know if I can increase the memory for MySQL without upgrading to a 64-bit OS.
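
    For a rough sense of where those 2 GB / 4 GB figures come from, a back-of-envelope sketch (the exact ceiling depends on the kernel's user/kernel address split, PAE, and whether the process is large-address-aware):

        # A 32-bit pointer can address 2**32 bytes of virtual address space, total.
        print(2**32 / 2**30)   # 4.0 GiB

        # A 32-bit OS reserves part of that for the kernel, so one process sees less:
        #   classic Windows split: 2 GiB user / 2 GiB kernel (3/1 with the /3GB switch)
        #   typical 32-bit Linux split: 3 GiB user / 1 GiB kernel
        for user_gib in (2, 3):
            print(f"~{user_gib} GiB usable per process under a {user_gib}/{4 - user_gib} split")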

    Read the article

  • The Hot-Add Memory Hogs

    - by Andrew Clarke
    One of the more difficult tasks when virtualizing a server is to determine the amount of memory that the hypervisor should assign to the virtual machine. This requires accurate monitoring and, because of the consequences of setting the value too low, there is a great temptation to err on the side of over-provisioning. This results in fewer guest VMs; in fact, with more accurate memory provisioning, many virtual environments could support 30% more VMs. In order to achieve a better consolidation (aka VM density) ratio, Windows Server 2008 R2 SP1 has introduced what Microsoft calls 'Dynamic Memory'. This means that the RAM assigned to guest virtual machines, starting from the configured start-up value, can be allowed to vary according to demand, changing dynamically while the VM is running, based on the workload of applications running inside. If demand outstrips supply, then memory can be rationed according to the 'memory weight' assigned to the guest VM. By this mechanism, memory becomes a shared resource that can be reallocated automatically as demand patterns vary. Unlike VMware's Memory Overcommit technology, the sum of all the memory allocations to each virtual machine will not exceed the total memory of the host computer. This is fine for applications that are self-regulating in their demands for memory, releasing memory back into the 'pool' when not under peak load. Other applications, however, such as SQL Server Standard and Enterprise, are by nature memory hogs under high workload; they can grab hot-add memory whilst running under load and then never release it. This requires more careful setting-up, and the SQLOS team have provided some guidelines for configuring SQL Server in virtual environments. Whereas VMware's Memory Overcommit is well-proven in a number of different configurations, Hyper-V's 'Dynamic Memory' is new. So far, the indications are that it will improve the business case for virtualizing, and it is probably a far more intuitive technology for the average IT professional to grasp. It is certainly worth testing to see whether it works for you.

    Read the article

  • Data Profiling without SSIS

    Strangely enough for a predominantly SSIS blog, this post is all about how to perform data profiling without using SSIS. Whilst the Data Profiling Task is a worthy addition, there are a couple of limitations I've encountered of late. The first is that it requires SQL Server 2008, and not everyone is there yet. The second is that it can only target SQL Server 2005 and above. What about older systems, which are the ones that we probably need to investigate the most, or other vendor databases such as Oracle? With these limitations in mind I did some searching to find a quick and easy alternative to help me perform some data profiling for a project I was working on recently. I only had SQL Server 2005 available, most of my target source systems were Oracle anyway, and of course I had short timescales. I looked at several options. Some never got beyond the download stage (they failed to install or just did not run), and others provided less than I could have produced myself by spending 2 minutes writing some basic SQL queries. In the end I settled on an open source product called DataCleaner. To quote from their website:

    "DataCleaner is an Open Source application for profiling, validating and comparing data. These activities help you administer and monitor your data quality in order to ensure that your data is useful and applicable to your business situation. DataCleaner is the free alternative to software for master data management (MDM) methodologies, data warehousing (DW) projects, statistical research, preparation for extract-transform-load (ETL) activities and more. DataCleaner is developed in Java and licensed under LGPL."

    As quoted above, it claims to support profiling, validating and comparing data, but I didn't really get past the profiling functions, so I won't comment on the other two. The profiling, whilst not perfect, certainly saved some time compared to the limited alternatives. The ability to profile heterogeneous data sources is a big advantage over the SSIS option, and I found it overall quite easy to use; performance was good too. I could see it struggling at times, but actually, for what it does, I was impressed. It had some data type niggles with Oracle, and some metrics seem a little strange, although thankfully they were easy to augment with some SQL queries to ensure a consistent picture. The report export options didn't do it for me, but copy and paste with a bit of Excel magic was sufficient. One initial point for me personally is that I have had limited exposure to things of the Java persuasion, and whilst I normally get by fine, sometimes the simplest things can throw me. For example, installing a JDBC driver: why do I have to copy files to make it all work? Has nobody ever heard of an MSI? In case there are other people out there like me who have become totally indoctrinated with the Microsoft software paradigm, I've written a quick start guide that details every step required. Steps 1-5 are the key ones; the rest is really an excuse for some screenshots to show you the tool.

    Quick Start Guide

    Step 1 – Download DataCleaner. Choose the Microsoft Windows zipped exe option; I chose the latest stable build, currently DataCleaner 1.5.3 (final). Extract the files to a suitable location.

    Step 2 – Download Java. If you try and run datacleaner.exe without Java, it will warn you, then open your default browser and take you to the Java download site. Follow the installation instructions from there; normally just click Download Java a couple of times and you're done.

    Step 3 – Download the Microsoft SQL Server JDBC Driver. You may have SQL Server installed, but you won't have a JDBC driver. Version 3.0 is the latest as of April 2010. There is no real installer (we are in the Java world here), but run the exe you downloaded to extract the files. The default "Unzip to folder" is not much help, so try a fully qualified path such as C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\ to ensure you can find the files afterwards.

    Step 4 – If you wish to use Windows Authentication to connect to your SQL Server, then first we need to copy a file so that DataCleaner can find it. Browse to the JDBC extract location from Step 3 and drill down to the file sqljdbc_auth.dll. You will have to choose the correct directory for your processor architecture, e.g. C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\sqljdbc_3.0\enu\auth\x86\sqljdbc_auth.dll. Now copy this file to the DataCleaner extract folder you chose in Step 1. An alternative method is to edit datacleaner.cmd in the DataCleaner extract folder as detailed in this DataCleaner wiki topic, but I find copying the file simpler.

    Step 5 – Now let's run DataCleaner: just run datacleaner.exe from the extract folder you chose in Step 1.

    Step 6 – Complete or skip the registration screen, and ignore the task window for now. In the main window click Settings.

    Step 7 – In the Settings dialog, select the Database drivers tab, then click Register database driver and select the Local JAR file option.

    Step 8 – Browse to the JDBC driver extract location from Step 3 and drill down to select sqljdbc4.jar, e.g. C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\sqljdbc_3.0\enu\sqljdbc4.jar.

    Step 9 – Select the Database driver class as com.microsoft.sqlserver.jdbc.SQLServerDriver, and then click the Test and Save database driver button.

    Step 10 – You should be back at the Settings dialog with a list of drivers that includes SQL Server. Just click Save Settings to persist all your hard work.

    Step 11 – Now we can start to profile some data. In the main DataCleaner window click New Task, and then Profile from the task window.

    Step 12 – In the Profile window click Open Database.

    Step 13 – Now choose the SQL Server connection string option. Selecting a connection string gives us a template like

        jdbc:sqlserver://<hostname>:1433;databaseName=<database>

    but obviously it requires some details to be entered, for example

        jdbc:sqlserver://localhost:1433;databaseName=SQLBits

    This will connect to the database called SQLBits on my local machine. The port may also have to be changed, such as when you have multiple instances of SQL Server running. If using SQL Server Authentication, enter a username and password as required and then click Connect to database. You can use Windows Authentication too; just add integratedSecurity=true to the end of your connection string, e.g.

        jdbc:sqlserver://localhost:1433;databaseName=SQLBits;integratedSecurity=true

    If you didn't complete Step 4 above, you will need to do so now and restart DataCleaner before it will work. Manually setting the connection string is fine, but creating a named connection makes more sense if you will be spending any length of time profiling a specific database. As highlighted in the left-hand screenshot, at the bottom of the dialog it includes partial instructions on how to create named connections. In the folder shown, C:\Users\<Username>\.datacleaner\1.5.3, open the datacleaner-config.xml file in your editor of choice and add your own details. You'll see a sample connection in the file already; just add yours following the same pattern, e.g.

        <!-- Darren's Named Connections -->
        <bean class="dk.eobjects.datacleaner.gui.model.NamedConnection">
            <property name="name" value="SQLBits Local Connection" />
            <property name="driverClass" value="com.microsoft.sqlserver.jdbc.SQLServerDriver" />
            <property name="connectionString" value="jdbc:sqlserver://localhost:1433;databaseName=SQLBits;integratedSecurity=true" />
            <property name="tableTypes">
                <list>
                    <value>TABLE</value>
                    <value>VIEW</value>
                </list>
            </property>
        </bean>

    Step 14 – Once back at the Profile window, you should now see your schemas, tables and/or views listed down the left-hand side. Browse this tree and double-click a table to select it for profiling. You can then click Add profile, choose some profiling options, and finally click Run profiling. You can see below a sample output for three of the most common profiles; click the image for full size.

    I hope this has given you a taster for DataCleaner, and it should help you get up and running pretty quickly.

    Read the article

  • Python memory leaks

    - by Fragsworth
    I have a long-running script which, if left to run long enough, will consume all the memory on my system. Without going into details about the script, I have two questions:

    1. Are there any "best practices" to follow which will help prevent leaks from occurring?
    2. What techniques are there to debug memory leaks in Python?
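
    For the second question, one concrete technique: on modern Python (3.4+) the standard library ships tracemalloc, which can show which lines of code are responsible for memory that is still live; older interpreters needed third-party tools such as objgraph or Heapy. A minimal sketch:

        import tracemalloc

        tracemalloc.start()

        # ... run a slice of the long-running workload here ...
        suspect = [str(i) * 100 for i in range(100_000)]  # stand-in for the real work

        snapshot = tracemalloc.take_snapshot()
        for stat in snapshot.statistics("lineno")[:10]:
            print(stat)  # top 10 allocation sites still holding memory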

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #050

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane.

    2007

    Executing Remote Stored Procedure – Calling Stored Procedure on Linked Server. In this example we see two different methods of how to call stored procedures remotely.

    Connection Property of SQL Server Management Studio SSMS. A very simple example of how to build connection properties for SQL Server with the help of SSMS.

    Sample Example of RANKING Functions – ROW_NUMBER, RANK, DENSE_RANK, NTILE. SQL Server has a total of 4 ranking functions. Ranking functions return a ranking value for each row in a partition. All the ranking functions are non-deterministic.

    T-SQL Script to Add Clustered Primary Key. A Jr. DBA asked me three times in a day how to create a clustered primary key. I gave him the following sample example. That was the last time he asked "How to create Clustered Primary Key to table?"

    2008

    TRIM() Function – User Defined Function. SQL Server does not have a function which can trim leading and trailing spaces of a string at the same time. SQL does have LTRIM() and RTRIM(), which can trim leading and trailing spaces respectively. SQL Server 2008 also does not have a TRIM() function. Users can easily combine LTRIM() and RTRIM() to simulate TRIM() functionality. http://www.youtube.com/watch?v=1-hhApy6MHM

    2009

    Earlier I have written two different articles on the subject Remove Bookmark Lookup; this article is part 3 of that series. Please read the first two articles here before continuing reading this article: Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup; Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 2; Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 3.

    Interesting Observation – Query Hint – FORCE ORDER. SQL Server never stops to amaze me. As regular readers of this blog already know, besides conducting corporate training I work on large-scale query optimization and server tuning projects. In one of the recent projects, I noticed that a junior database developer used the query hint FORCE ORDER; when I asked for details, I found out that the basic concept was not properly understood by him.

    Queries Waiting for Memory Allocation to Execute. In one of the recent projects, I was asked to create a report of queries that are waiting for memory allocation. The reason was that we were doubtful regarding whether the memory was sufficient for the application. The following query can be useful in similar cases; queries that do not have to wait on a memory grant will not appear in its result set.

    2010

    Quickest Way to Identify Blocking Query and Resolution – Dirty Solution. As the title suggests, this is quite a dirty solution; it's not as elegant as you expect. However, it works totally fine.

    Simple Explanation of Data Type Precedence. While I was working on creating a question for SQL SERVER – SQL Quiz – The View, The Table and The Clustered Index Confusion, I had actually created yet another question along with it. However, I felt that the one posted on the SQL Quiz is much better than this one, because what makes that question more challenging is that it has multiple answers.

    Encrypted Stored Procedure and Activity Monitor. I recently received a question: if a stored procedure is encrypted, can we see its definition in Activity Monitor? The answer is no. Let us do a quick test: create the following stored procedure, then launch the Activity Monitor and check the text.

    Indexed View always Use Index on Table. A single table can have a maximum of 249 non-clustered indexes and 1 clustered index. In SQL Server 2008, a single table can have a maximum of 999 non-clustered indexes and 1 clustered index. It is widely believed that a table can have only 1 clustered index, and this belief is true. I have some questions for all of you. Let us assume that I am creating a view from the table itself and then create a clustered index on it. In my view, I am selecting the complete table itself.

    2011

    Detecting Database Case Sensitive Property using fn_helpcollations(). I received a question on how to determine the case sensitivity of the database. The quick answer is to identify the collation of the database and check the properties of the collation. I have previously written about how one can identify database collation. Once you have figured out the collation of the database, you can put it in the WHERE condition of the following T-SQL and then check the case sensitivity from the description.

    Server Side Paging in SQL Server CE (Compact Edition). SQL Server Denali is coming up with new T-SQL for paging, which I have written about earlier: SQL SERVER – Server Side Paging in SQL Server Denali – A Better Alternative; SQL SERVER – Server Side Paging in SQL Server Denali Performance Comparison; SQL SERVER – Server Side Paging in SQL Server Denali – Part 2. What is very interesting is that SQL Server CE 4.0 has the same feature introduced. Here is a quick example of the same. To run the script in the example, you will have to install WebMatrix 4.0 and download the sample database; once done, you can run the script.

    Why I am Going to Attend PASS Summit Unite 2011. The four-day event will be marked by a lot of learning, sharing, and networking, which will help me increase both my knowledge and contacts. Every year, PASS Summit provides me a golden opportunity to build my network as well as to identify and meet potential customers or employees.

    2012

    Manage Help Settings – CTRL + ALT + F1. This is a very interesting read, as my daughter once accidentally came across a screen in SQL Server Management Studio. It took me 2-3 minutes to figure out how she had created that screen.

    Recover the Accidentally Renamed Table. "I accidentally renamed a table in my SSMS. I was scrolling very fast and I made a mistake. It was either because I double-clicked or pressed F2 (the shortcut key for renaming). However, I have made the mistake and now I have no idea how to fix this." If you have renamed the table, I think you are pretty much out of luck. Here are a few things you can do which can give you an idea of what your table name might have been, if you are lucky.

    Identify Numbers of Non Clustered Index on Tables for Entire Database. Here is the script which will give you the number of non-clustered indexes on any table in the entire database.

    Identify Most Resource Intensive Queries – SQL in Sixty Seconds #029 – Video. Here is the complete script which I have used in the SQL in Sixty Seconds video. Thanks to Harsh for the important tip in the comments. http://www.youtube.com/watch?v=3kDHC_Tjrns

    Advanced Data Quality Services with Melissa Data – Azure Data Market. For the purposes of the review, I used a database I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others are missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as state abbreviations. But how do I know that my address is correct?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Report Direct3D memory usage

    - by Jazz
    I have a Direct3D 9 application and I would like to monitor its memory usage. Is there a tool that reports how much system and video memory is used by Direct3D? Ideally, it would also report how much is allocated for textures, vertex buffers, index buffers...

    Read the article

  • Memory layout of executable

    - by Ross
    Hi all, when loading an executable, segments like the code, data, bss and so on need to be placed in memory. I am just wondering if someone could tell me where, on a standard x86 for example, the libc library is placed. Is that at the top or the bottom of memory? My guess is at the bottom, close to the application code, i.e. it would look something like this:

        ---------   0x1000
        Stack
          |
          V

          ^
          |
        Heap
        ---------
        Data + BSS
        ---------
        App Code
        ---------
        libc
        ---------   0x0000

    Thanks a lot, Ross

    Read the article

  • Help Finding Memory Leak

    - by Neal L
    Hi all, I am writing an iPad app that downloads a rather large .csv file and parses the file into objects stored in Core Data. The program keeps crashing, and I've run it along with the Allocations performance tool and can see that it's eating up memory. Nothing is alloc'ed or init'ed in the code, so why am I gobbling up memory? Code at: http://pastie.org/955960 Thanks! -Neal

    Read the article

  • Python Profiling in Eclipse

    - by Jordan L. Walbesser
    This question is semi-based on this one here: http://stackoverflow.com/questions/582336/how-can-you-profile-a-python-script I thought that this would be a great idea to run on some of my programs. Although profiling from a batch file, as explained in the aforementioned answer, is possible, I think it would be even better to have this option in Eclipse. At the same time, making my entire program a function and profiling it would mean I have to alter the source code. How can I configure Eclipse such that I have the ability to run the profile command on my existing programs? Any tips or suggestions are welcome!
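
    For reference, the standard library can profile an existing script without restructuring it, whether launched from a shell (python -m cProfile -o program.prof myprogram.py) or wired into an Eclipse/PyDev run configuration. A minimal sketch, with the module name as a placeholder:

        import cProfile
        import pstats

        # Profile an existing entry point without turning it into a function.
        # "myprogram" is a placeholder for the real top-level module.
        cProfile.run("import myprogram", "program.prof")

        stats = pstats.Stats("program.prof")
        stats.sort_stats("cumulative").print_stats(10)  # top 10 by cumulative time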

    Read the article

  • Please help with iPhone Memory & Images, memory usage crashing app

    - by Andrew Gray
    I have an issue with memory usage relating to images, and I've searched the docs and watched the videos from CS193P and the iPhone dev site on memory management and performance, I've searched online and posted on forums, but I still can't figure it out. The app uses Core Data and simply lets the user associate text with a picture, storing the list of items in a table view that lets you add and delete items. Clicking on a row shows the image and related text. That's it. Everything runs fine on the simulator and on the device as well. I ran the analyzer and it looked good, so I then started looking at performance. I ran Leaks and everything looked good. My issue is when running Object Allocations: every time I select a row and the view with the image is shown, the live bytes jump up a few MB and never go down, and my app eventually crashes due to memory usage. Sorting the live bytes column, I see two 2.72 MB mallocs (5.45 MB total), 14 CFDatas (3.58 MB total), one 2.74 MB malloc, and everything else is really small. The problem is that all the related info in Instruments is really technical, and all the problem-solving examples I've seen are just a missing release, nothing complicated. Instruments shows Core Data as the responsible library for all but one (libsqlite3.dylib is the other), with [NSSQLCore _prepareResultsFromResultSet:usingFetchPlan:withMatchingRows:] as the caller for all but one (fetchResultSetReallocCurrentRow is the other), and I'm just not sure how to track down what the problem is. I've looked at the stack traces and opened the last instance of my code and found two culprits (below). I haven't been able to get any responses at all on this, so if anyone has any tips or pointers, I'd really appreciate it!

        // this is from the view controller that shows the title and image
        - (void)viewWillAppear:(BOOL)animated
        {
            [super viewWillAppear:animated];

            self.title = item.title;
            self.itemTitleTextField.text = item.title;

            if ([item.notes length] == 0) {
                self.itemNotesTextView.hidden = YES;
            } else {
                self.itemNotesTextView.text = item.notes;
            }

            // this is the line Instruments points to
            UIImage *image = item.photo.image;
            itemPhoto.image = image;
        }

        - (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath
        {
            if (editingStyle == UITableViewCellEditingStyleDelete) {
                // Delete the managed object for the given index path
                NSManagedObjectContext *context = [fetchedResultsController managedObjectContext];
                [context deleteObject:[fetchedResultsController objectAtIndexPath:indexPath]];

                // Save the context.
                NSError *error = nil;
                if (![context save:&error]) // this is the line Instruments points to
                {
                    NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
                    exit(-1);
                }
            }
        }

    Read the article
