Search Results

Search found 27 results on 2 pages for 'mmr'.


  • What causes Multi-Page allocations?

    - by SQLOS Team
    Writing about changes in the Denali Memory Manager, Rusi mentioned in his last post: "In previous SQL versions only the 8k allocations were limited by the ‘max server memory’ configuration option. Allocations larger than 8k weren’t constrained." In SQL Server versions before Denali, single-page allocations and multi-page allocations are handled by different components: the Single Page Allocator, which is responsible for Buffer Pool allocations and is governed by 'max server memory', and the Multi-Page Allocator (MPA), which handles allocations larger than an 8K page. If there are many multi-page allocations, this can affect how much memory needs to be reserved outside 'max server memory', which may in turn involve setting the -g memory_to_reserve startup parameter. We'll follow up with more general articles on the new Memory Manager structure, but in this post I want to clarify what might cause these larger allocations. So what kinds of queries result in MPA activity? I was asked this question the other day after delivering an MCM webcast on Memory Manager changes in Denali. After asking around our Dev team I was connected to one of our test leads, Sangeetha, who had tested the plan cache and kindly provided this example of an MPA-intensive query: a workload that has stored procedures with a large # of parameters (say > 100 or > 500), invoked via large ad hoc batches where each SP call has different parameters, will result in a plan being cached for this “exec proc” batch. This plan will result in MPA activity.

        Exec proc_name @p1, ….@p500
        Exec proc_name @p1, ….@p500
        . . .
        Exec proc_name @p1, ….@p500
        Go

    Another workload would be large ad hoc batches of the form:

        Select * from t where col1 in (1, 2, 3, ….500)
        Select * from t where col1 in (1, 2, 3, ….500)
        Select * from t where col1 in (1, 2, 3, ….500)
        …
        Go

    In Denali all page allocations are handled by an "any size page allocator" and are included in 'max server memory'. The buffer pool effectively becomes a client of the any size page allocator, which in turn relies on the memory manager. - Guy Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Improving the performance of a db import process

    - by mmr
    I have a program in Microsoft Access that processes text and also inserts data into a MySQL database. This operation takes 30 minutes or less to finish. I translated it into VB.NET and it takes 2 hours to finish. The program goes like this: a text file contains individual swipes from a corresponding person; it contains their id, the time and date of the swipe in the machine, and an indicator of whether it is a time-in or a time-out. I process this text, segregate the information, and insert the time-in and time-out per row. I also check if there are double occurrences in the database. After checking, I simply merge the time-in and time-out of the corresponding person into one row. This process takes 2 hours to finish in VB.NET, considering I have a table to compare against which contains 600,000+ rows. Now, I read on the internet that Python is best at text processing; I already have a test for that, but I have doubts about its database performance. What do you think is the best programming language for this kind of problem? How can I speed up the process? My first idea was using Python instead of VB.NET, but since people here on SO are telling me that this most probably won't help, I am searching for different solutions.
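
    A minimal C# sketch of one way to attack the slowness, assuming the bottleneck is the per-row round trips to MySQL rather than the text parsing itself. It uses the MySQL Connector/NET client (MySql.Data); the table and column names are hypothetical placeholders, not the actual schema. The idea is to load the existing keys once into a HashSet for the duplicate check and to batch the inserts inside a single transaction:

        using System;
        using System.Collections.Generic;
        using MySql.Data.MySqlClient;   // MySQL Connector/NET

        class Swipe
        {
            public string EmployeeId;
            public DateTime TimeIn;
            public DateTime TimeOut;
        }

        static class SwipeImporter
        {
            // NOTE: the table/column names (attendance, emp_id, ...) are illustrative placeholders.
            public static void Import(IEnumerable<Swipe> swipes, string connStr)
            {
                using (var conn = new MySqlConnection(connStr))
                {
                    conn.Open();

                    // Load existing keys once instead of issuing a lookup per row against 600,000+ rows.
                    var existing = new HashSet<string>();
                    using (var cmd = new MySqlCommand("SELECT CONCAT(emp_id, '|', DATE(time_in)) FROM attendance", conn))
                    using (var rdr = cmd.ExecuteReader())
                        while (rdr.Read())
                            existing.Add(rdr.GetString(0));

                    // Batch all inserts in one transaction instead of committing per row.
                    using (var tx = conn.BeginTransaction())
                    using (var insert = new MySqlCommand(
                        "INSERT INTO attendance (emp_id, time_in, time_out) VALUES (@id, @in, @out)", conn, tx))
                    {
                        insert.Parameters.Add("@id", MySqlDbType.VarChar);
                        insert.Parameters.Add("@in", MySqlDbType.DateTime);
                        insert.Parameters.Add("@out", MySqlDbType.DateTime);

                        foreach (var s in swipes)
                        {
                            string key = s.EmployeeId + "|" + s.TimeIn.ToString("yyyy-MM-dd");
                            if (!existing.Add(key))
                                continue;   // duplicate: skip without a database round trip

                            insert.Parameters["@id"].Value = s.EmployeeId;
                            insert.Parameters["@in"].Value = s.TimeIn;
                            insert.Parameters["@out"].Value = s.TimeOut;
                            insert.ExecuteNonQuery();
                        }
                        tx.Commit();
                    }
                }
            }
        }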

    Read the article

  • Clonezilla failing as soon as image copying begins

    - by mmr
    I have been trying unsuccessfully to create an image of an Ubuntu 10.04 laptop system. As soon as the copying itself starts, the entire system crashes to a black screen. I suspect that the problem is overheating, and that's why I put an ice pack under the machine. That seems to have helped a bit, but it's still not getting through the copying process. Is there any other possible explanation for dying to a black screen like this? I'm just not relishing the task of removing the hard drive, mounting it elsewhere, and then doing a backup that way.

    Read the article

  • How to use American English spelling dictionary in Firefox?

    - by mmr
    My Firefox spellchecker was complaining this morning that I spelled 'neighbor' in the American English style, not the British English style ('neighbour'). The same is true for color (colour), analyze (analyse), etc. I've checked in the edit-preferences-content-language tab, and en-us is selected. I also found this link: http://ubuntuforums.org/showthread.php?t=1013043 suggesting that there's some kind of system panel I can use to ensure that I've got the right language, but I can't see where that is (I guess that's for an older Ubuntu that let people get to system settings). So either the dictionary for Firefox for en-us is corrupted, it's just a copy of the British English dictionary, or somehow the setting isn't being propagated properly. How can I get the American dictionary back?

    Read the article

  • Why can't tuxboot and ubuntu play well together?

    - by mmr
    I'm trying to get Clonezilla to run off of a USB stick, and it seems that the right way to do that is via tuxboot. Tuxboot is not compilable on Ubuntu: I used git to get it from the repository and then ran the 'install' script (because building it is apparently not allowed, since the build script just tries to install Windows things). Qmake-linux wants my qmake executable to be in the same directory as the stuff I pulled down, and let's just say that if there's a way to do this easily, I ain't seein' it. So then I downloaded the Linux file, the most recent of which is tuxboot-linux-25. Trying to run it gives a failure that libpng12.so.0 isn't found. OK, so then I went to install that via the instructions I found on the web, but Firefox seems to have already deleted them from my history (yay!). Then I added the /usr/local/lib directory to ldconfig via emacs (had to install that too, of course): http://ubuntuforums.org/showthread.php?t=369848 I still get the error that libpng12.so.0 cannot be opened because 'No such file or directory'. ldconfig -p | grep libpng shows that the library is there, but it still doesn't seem to be findable. What should I do next? (For the record, doing this in Windows is painless-- download, click, and it's done. But I'm trying to be all linuxy and get away from Windows for this...)

    Read the article

  • nginx server over https using up all available file handles

    - by mmr
    Hi all, So I have an nginx server that's working over https with Sinatra. When I try to download a jnlp file in a configuration that works fine over Mongrel and http (no s), the nginx server fails to serve the file with a 504 error. Subsequent checking of the logs states that this error is due to overflowing the available number of file handles, ie, "24: too many open files". Running sudo lsof -p <nginx worker pid> gets me a huge list of files, all looking like:

        nginx 1771 nobody 11u IPv4 10867997 0t0 TCP localhost:44704->localhost:https (ESTABLISHED)
        nginx 1771 nobody 12u IPv4 10868113 0t0 TCP localhost:https->localhost:44704 (ESTABLISHED)
        nginx 1771 nobody 13u IPv4 10868114 0t0 TCP localhost:44705->localhost:https (ESTABLISHED)
        nginx 1771 nobody 14u IPv4 10868191 0t0 TCP localhost:https->localhost:44705 (ESTABLISHED)
        nginx 1771 nobody 15u IPv4 10868192 0t0 TCP localhost:44706->localhost:https (ESTABLISHED)
        nginx 1771 nobody 16u IPv4 10868255 0t0 TCP localhost:https->localhost:44706 (ESTABLISHED)
        nginx 1771 nobody 17u IPv4 10868256 0t0 TCP localhost:44707->localhost:https (ESTABLISHED)
        nginx 1771 nobody 18u IPv4 10868330 0t0 TCP localhost:https->localhost:44707 (ESTABLISHED)
        nginx 1771 nobody 19u IPv4 10868331 0t0 TCP localhost:44708->localhost:https (ESTABLISHED)
        nginx 1771 nobody 20u IPv4 10868434 0t0 TCP localhost:https->localhost:44708 (ESTABLISHED)

    Increasing the number of files that can be opened is no help, because then nginx just blows right past that limit. And no wonder, it looks like it's in some kind of loop to pull all available files. Any idea what's going on, and how to fix it?

    Read the article

  • How can I do a large file upload using Sinatra, haml, nginx, and passenger?

    - by mmr
    Hi all, I need to be able to allow a user to upload 30-60 mb files at a time. Right now, I'm solving the problem with a simple form post:

        %form{:action=>"/Upload",:method=>"post",:enctype=>"multipart/form-data"}
          - @theModelHash.each do |key,value|
            %br
            %input{:type=>"checkbox", :name=>"#{key}", :value=>1, :checked=>value}
            =key
          %br
          %input{:type=>"file",:name=>"file"}
          %input{:type=>"submit",:value=>"Upload"}

    This form allows the user to select processing options contained in theModelHash and upload a file for processing. Problem is, this method both freezes the user's UI and also requires that the entire form be reposted when the user presses the 'back' button. I've looked at SWFUpload, but have no idea how to integrate that into my relatively simple app. There's a page here about integrating it with Rails, but I'm using Sinatra, and am new enough to this whole web programming thing that I don't know how to modify those files to work with what I need to do. Is there a how-to to add large file uploads to my form there? Something relatively simple that just adds in a progress bar and doesn't repost? I feel like I'm having to triple the size of my application just to make this feature play nice, and that's bothering me a bit.

    Read the article

  • nginx server over https using up all available file handles (upd: infinite loop?)

    - by mmr
    Hi all, So I have an nginx server that's working over https with Sinatra. When I try to download a jnlp file in a configuration that works fine over Mongrel and http (no s), the nginx server fails to serve the file with a 504 error. Subsequent checking of the logs states that this error is due to overflowing the available number of file handles, ie, "24: too many open files". Running sudo lsof -p <nginx worker pid> gets me a huge list of files, all looking like:

        nginx 1771 nobody 11u IPv4 10867997 0t0 TCP localhost:44704->localhost:https (ESTABLISHED)
        nginx 1771 nobody 12u IPv4 10868113 0t0 TCP localhost:https->localhost:44704 (ESTABLISHED)
        nginx 1771 nobody 13u IPv4 10868114 0t0 TCP localhost:44705->localhost:https (ESTABLISHED)
        nginx 1771 nobody 14u IPv4 10868191 0t0 TCP localhost:https->localhost:44705 (ESTABLISHED)
        nginx 1771 nobody 15u IPv4 10868192 0t0 TCP localhost:44706->localhost:https (ESTABLISHED)
        nginx 1771 nobody 16u IPv4 10868255 0t0 TCP localhost:https->localhost:44706 (ESTABLISHED)
        nginx 1771 nobody 17u IPv4 10868256 0t0 TCP localhost:44707->localhost:https (ESTABLISHED)
        nginx 1771 nobody 18u IPv4 10868330 0t0 TCP localhost:https->localhost:44707 (ESTABLISHED)
        nginx 1771 nobody 19u IPv4 10868331 0t0 TCP localhost:44708->localhost:https (ESTABLISHED)
        nginx 1771 nobody 20u IPv4 10868434 0t0 TCP localhost:https->localhost:44708 (ESTABLISHED)

    Increasing the number of files that can be opened is no help, because then nginx just blows right past that limit. And no wonder, it looks like it's in some kind of loop to pull all available files. Any idea what's going on, and how to fix it?

    EDIT: nginx 0.7.63, ubuntu linux, sinatra 1.0

    EDIT 2: Here's the offending code. It's sinatra serving jnlp, which I finally figured out:

        get '/uploader' do
          #read in the launch.jnlp file
          theJNLP = ""
          File.open("/launch.jnlp", "r+") do |file|
            while theTemp = file.gets
              theJNLP = theJNLP + theTemp
            end
          end
          content_type :jnlp
          theJNLP
        end

    If I serve this with Sinatra via Mongrel and http, everything works fine. If I serve this with Sinatra and nginx via https, I get the above error. All other parts of the website appear to be equivalent.

    EDIT: I have since upgraded to passenger 2.2.14, ruby 1.9.1, nginx 0.8.40, openssl 1.0.0a, and no change.

    EDIT: The culprit appears to be infinite redirects due to using SSL. I don't know how to fix this, other than hosting the jnlp file in the root directory of the server (which I'd rather not do, since it limits me to one jnlp-based app at a time). The relevant lines from nginx.conf:

        # HTTPS server
        #
        server {
            listen 443;
            server_name MyServer.org
            root /My/Root/Dir;
            passenger_enabled on;
            expires 1d;
            proxy_set_header X-FORWARDED_PROTO https;
            proxy_set_header X_FORWARDED_PROTO https; #the almighty google is not clear on which to use
            location /upload {
                proxy_pass https://127.0.0.1:443;
            }
        }

    The funny thing about this is, first, I was putting the jnlp into a directory called 'uploader', not 'upload', but that still appeared to trigger the problem, since that proxy_pass directive appeared in the logs. Second, again, moving the jnlp into root avoided the problem, because there wasn't any of this proxying due to ssl. So, how can I avoid the infinite proxy_pass loop in nginx?

    Read the article

  • Scponly: how can I block changing directories?

    - by mmr
    Hi all, I want to block users from changing directories when they log in via scponly's shell. How can I do that? I need to be able to provide users with their own upload directory that only they can see and read/write. They should not be allowed to execute any code, ie, change directories or the like. Thanks.

    Read the article

  • How do I make a WiX 3.5 installer with a completely self-contained .NET 4.0 installer?

    - by mmr
    Continuing a previous question I asked here, I now need to move to vs2010. I've gotten the most recent weekly build of WiX 3.5, the June 5th 2010 version. Here are the relevant lines from my installer:

        <ItemGroup>
          <BootstrapperFile Include="Microsoft.Net.Framework.4.0">
            <ProductName>.NET Framework 4.0</ProductName>
          </BootstrapperFile>
          <BootstrapperFile Include="Microsoft.Windows.Installer.4.5">
            <ProductName>Windows Installer 4.5</ProductName>
          </BootstrapperFile>
        </ItemGroup>

    and

        <GenerateBootstrapper ApplicationFile="MySetup.msi"
                              ApplicationName="MyProgram"
                              BootstrapperItems="@(BootstrapperFile)"
                              Path="C:\Program Files\Microsoft SDKs\Windows\v7.0A\Bootstrapper\"
                              ComponentsLocation="Relative"
                              OutputPath="$(OutputPath)"
                              Culture="en" />

    However, it's just not working. In vs2010, there are exclamation points next to the .NET Framework 4.0 and Windows Installer 4.5 files, the properties page lists them as 'Unknown BuildAction BootstrapperFile', and the build just does not appear to install .NET 4.0 at all. The relevant warning is:

        C:\source\depot\project\vs2010\WiXSetup\WiXSetup.wixproj(68,5): warning MSB3155: Item 'Microsoft.Net.Framework.4.0' could not be located in 'C:\Program Files\Microsoft SDKs\Windows\v7.0A\Bootstrapper\'.

    Read the article

  • Incorporating the Windows 7 onscreen keyboard into a WPF app

    - by mmr
    Windows 7 has a really nice onscreen keyboard program/control for touchscreens. I have a touchscreen app that was originally written for, and will be deployed on, XP. Is it possible to incorporate this keyboard directly into my app, rather than me using a custom control? I can find no programmatic information about it, so any links would be very helpful. Specifically, I'd need: To be able to use the keyboard on an XP machine that will have .NET 3.5 sp1 installed on it. To be able to hide the native keyboard on Windows 7, because I've already incorporated the touchscreen keyboard in my UI and so I don't need another one cluttering up the UI. This native keyboard has two attractive aspects to it. First off, it's automatically localized to the customer's language (though the rest of the app will need modification), and second off, it doesn't seem to suffer from 'touch lag' as the OS tries to figure out whether or not I'm doing a gesture, because I'm clearly typing on a keyboard. The app is WPF based, which should mean easy integration with Windows 7 based controls. EDIT: I'd really like the XP thing, but it's not a requirement. The ability to use the keyboard in Win7, though, seems like it should be possible and even the right way to do it.
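
    As far as I know there is no documented managed API for embedding the Windows 7 keyboard inside a WPF window, so the sketch below only launches and dismisses the stock keyboard process alongside the app; the TabTip.exe path is the default Windows 7 install location and is an assumption, with osk.exe as the XP-era fallback:

        using System.Diagnostics;
        using System.IO;

        static class OnScreenKeyboard
        {
            // Default install location of the Windows 7 touch keyboard (Tablet PC Input Panel).
            // Assumption: adjust the path if the OS is installed elsewhere or is 64-bit with a 32-bit host app.
            private const string TabTipPath =
                @"C:\Program Files\Common Files\Microsoft Shared\ink\TabTip.exe";

            public static void Show()
            {
                // Prefer the Windows 7 touch keyboard if present; otherwise fall back to the
                // classic on-screen keyboard (osk.exe), which also exists on XP.
                if (File.Exists(TabTipPath))
                    Process.Start(TabTipPath);
                else
                    Process.Start("osk.exe");
            }

            public static void Hide()
            {
                // Crude but effective: close any running instances of the keyboard process.
                foreach (var p in Process.GetProcessesByName("TabTip"))
                    p.CloseMainWindow();
                foreach (var p in Process.GetProcessesByName("osk"))
                    p.CloseMainWindow();
            }
        }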

    Read the article

  • What is the easiest way to get an embedded upload progress bar using Ruby/Sinatra/Haml/Passenger/ngi

    - by mmr
    I have a website where people can upload 30+mb of data in a single block, and I want to be able to show them the progress of their upload without causing the web page to become unresponsive, similar to how flash uploads work in gmail. There's this question here, but I don't know if that progress bar is embedded in the page or if it's using the browser's progress bar. I'm also a bit of a web newb, so I'm not sure if it's the 'easiest'. I asked the swfupload guys how to do this here, and the answer I got is 'this tool requires some knowledge to use it' without giving me much help in figuring out where to get started. I also asked this question on ServerFault, and got no response, so maybe that was the wrong place to ask. I'm all for learning new things and so forth, but there are a lot of potential pathways to take here. Where should I start, and what do I need to know to make everything work with sinatra, haml, ruby, passenger, and nginx? Thanks!

    Read the article

  • Display ñ on a C# .NET application

    - by mmr
    I have a localization issue. One of my industrious coworkers has replaced all the strings throughout our application with constants that are contained in a dictionary. That dictionary gets various strings placed in it once the user selects a language (English by default, but target languages are German, Spanish, French, Portuguese, Mandarin, and Thai). For our test of this functionality, we wanted to change a button to include text which has a ñ character, which appears both in Spanish and in the Arial Unicode MS font (which we're using throughout the application). Problem is, the ñ is appearing as a square block, as if the program did not know how to display it. When I debug into that particular string being read from disk, the debugger reports that character as a square block as well. So where is the failure? I think it could be in a few places:

    1) Notepad may not be unicode aware, so the ñ displayed there is not the same as what vs2008 expects, and so the program interprets the character as a square (EDIT: notepad shows the same characters as vs; ie, they both show the ñ. In the same place.).
    2) vs2008 can't handle ñ. I find that very, very hard to believe.
    3) The text is read in properly, but the default font for vs2008 can't display it, which is why the debugger shows a square.
    4) The text is not read in properly, and I should use something other than a regular StreamReader to get strings.
    5) The text is read in properly, but the default String class in C# doesn't handle ñ well. I find that very, very hard to believe.
    6) The version of Arial Unicode MS I have doesn't have ñ, despite it being listed as one of the 50k characters by http://www.fileinfo.info.

    Anything else I could have left out? Thanks for any help!
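
    A hedged C# sketch for checking possibility 4 above: if the strings file was saved as ANSI/Windows-1252 but read with the default UTF-8 StreamReader behavior, ñ (0xF1) decodes to U+FFFD and renders as a square. The file name here is a hypothetical placeholder:

        using System;
        using System.IO;
        using System.Text;

        class LocalizationLoader
        {
            static void Main()
            {
                // "strings.es.txt" is a hypothetical placeholder for the localization file.
                // File.ReadAllText without an encoding assumes UTF-8 (with BOM detection); if the
                // file was actually saved as ANSI/Windows-1252, 'ñ' decodes to U+FFFD (a box).
                string path = "strings.es.txt";

                string asUtf8 = File.ReadAllText(path, Encoding.UTF8);
                string asLatin = File.ReadAllText(path, Encoding.GetEncoding(1252));

                foreach (char c in asUtf8)
                    Console.Write("U+{0:X4} ", (int)c);   // U+FFFD here points to an encoding mismatch
                Console.WriteLine();
                Console.WriteLine(asLatin);                // correct if the file really is Windows-1252
            }
        }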

    Read the article

  • How can I change the precision of printing with the stl?

    - by mmr
    This might be a repeat, but my google-fu failed to find it. I want to print numbers to a file using the stl with a fixed number of decimal places, rather than overall precision. So, if I do this:

        int precision = 16;
        std::vector<double> thePoint(3);
        thePoint[0] = 86.3671436;
        thePoint[1] = -334.8866574;
        thePoint[2] = 24.2814;
        ofstream file1(tempFileName, ios::trunc);
        file1 << std::setprecision(precision) << thePoint[0] << "\\" << thePoint[1] << "\\" << thePoint[2] << "\\";

    I'll get numbers like this:

        86.36714359999999\-334.8866574\24.28140258789063

    What I want is this:

        86.37\-334.89\24.28

    In other words, truncating at two decimal points. If I set precision to be 4, then I'll get

        86.37\-334.9\24.28

    ie, the second number is improperly truncated. I do not want to have to manipulate each number explicitly to get the truncation, especially because I seem to be getting the occasional 9 repeating or 0000000001 or something like that that's left behind. I'm sure there's something obvious, like using printf("%.2f") or something like that, but I'm unsure how to mix that with the stl << and ofstream.

    Read the article

  • How can I marshall a vector<int> from a C++ dll to a C# application?

    - by mmr
    I have a C++ function that produces a list of rectangles that are interesting. I want to be able to get that list out of the C++ library and back into the C# application that is calling it. So far, I'm encoding the rectangles like so:

        struct ImagePatch{
          int xmin, xmax, ymin, ymax;
        }

    and then encoding some vectors:

        void MyFunc(..., std::vector<int>& rectanglePoints){
          std::vector<ImagePatch> patches; //this is filled with rectangles
          for(i = 0; i < patches.size(); i++){
            rectanglePoints.push_back(patches[i].xmin);
            rectanglePoints.push_back(patches[i].xmax);
            rectanglePoints.push_back(patches[i].ymin);
            rectanglePoints.push_back(patches[i].ymax);
          }
        }

    The header for interacting with C# looks like (and works for a bunch of other functions):

        extern "C" {
          __declspec(dllexport) void __cdecl MyFunc(..., std::vector<int>& rectanglePoints);
        }

    Are there some keywords or other things I can do to get that set of rectangles out? I found this article for marshalling objects in C#, but it seems way too complicated and way too underexplained. Is a vector of integers the right way to do this, or is there some other trick or approach?
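
    For what it's worth, a std::vector<int>& can't be marshalled across a P/Invoke boundary; a common workaround is for the DLL to expose a flat int buffer plus a count. A minimal sketch of the C# side under that assumption, where GetPatches is a hypothetical flattened export (4 ints per rectangle, matching the C++ code above), not an existing function:

        using System.Runtime.InteropServices;

        static class NativePatches
        {
            // Hypothetical C-style export, e.g.:
            //   extern "C" __declspec(dllexport) int __cdecl GetPatches(int* buffer, int capacity);
            // It writes up to 'capacity' ints (xmin, xmax, ymin, ymax per patch) into the buffer
            // and returns how many ints it actually needs.
            [DllImport("MyImageLib.dll", CallingConvention = CallingConvention.Cdecl)]
            private static extern int GetPatches(int[] buffer, int capacity);

            public struct ImagePatch { public int XMin, XMax, YMin, YMax; }

            public static ImagePatch[] GetAll()
            {
                // Ask for the required size first (null buffer), then fetch for real.
                int needed = GetPatches(null, 0);
                var flat = new int[needed];
                GetPatches(flat, flat.Length);

                var patches = new ImagePatch[needed / 4];
                for (int i = 0; i < patches.Length; i++)
                {
                    patches[i].XMin = flat[4 * i + 0];
                    patches[i].XMax = flat[4 * i + 1];
                    patches[i].YMin = flat[4 * i + 2];
                    patches[i].YMax = flat[4 * i + 3];
                }
                return patches;
            }
        }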

    Read the article

  • How do I catch this WPF Bitmap loading exception?

    - by mmr
    I'm developing an application that loads bitmaps off of the web using .NET 3.5 sp1 and C#. The loading code looks like:

        try
        {
            CurrentImage = pics[unChosenPics[index]];
            bi = new BitmapImage(CurrentImage.URI);
            // BitmapImage.UriSource must be in a BeginInit/EndInit block.
            bi.DownloadCompleted += new EventHandler(bi_DownloadCompleted);
            AssessmentImage.Source = bi;
        }
        catch
        {
            System.Console.WriteLine("Something broke during the read!");
        }

    and the code to load on bi_DownloadCompleted is:

        void bi_DownloadCompleted(object sender, EventArgs e)
        {
            try
            {
                double dpi = 96;
                int width = bi.PixelWidth;
                int height = bi.PixelHeight;
                int stride = width * 4; // 4 bytes per pixel
                byte[] pixelData = new byte[stride * height];
                bi.CopyPixels(pixelData, stride, 0);
                BitmapSource bmpSource = BitmapSource.Create(width, height, dpi, dpi, PixelFormats.Bgra32, null, pixelData, stride);
                AssessmentImage.Source = bmpSource;
                Loading.Visibility = Visibility.Hidden;
                AssessmentImage.Visibility = Visibility.Visible;
            }
            catch
            {
                System.Console.WriteLine("Exception when viewing bitmap.");
            }
        }

    Every so often, an image comes along that breaks the reader. I guess that's to be expected. However, rather than being caught by either of those try/catch blocks, the exception is apparently getting thrown outside of where I can handle it. I could handle it using global WPF exceptions, like this SO question. However, that will seriously mess up the control flow of my program, and I'd like to avoid that if at all possible. I have to do the double source assignment because it appears that many images are lacking the width/height parameters in the places where the microsoft bitmap loader expects them to be. So, the first assignment appears to force the download, and the second assignment gets the dpi/image dimensions to come out properly. What can I do to catch and handle this exception?

    Stack trace:

        at MS.Internal.HRESULT.Check(Int32 hr)
        at System.Windows.Media.Imaging.BitmapFrameDecode.get_ColorContexts()
        at System.Windows.Media.Imaging.BitmapImage.FinalizeCreation()
        at System.Windows.Media.Imaging.BitmapImage.OnDownloadCompleted(Object sender, EventArgs e)
        at System.Windows.Media.UniqueEventHelper.InvokeEvents(Object sender, EventArgs args)
        at System.Windows.Media.Imaging.LateBoundBitmapDecoder.DownloadCallback(Object arg)
        at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter)
        at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
        at System.Windows.Threading.DispatcherOperation.InvokeImpl()
        at System.Threading.ExecutionContext.runTryCode(Object userData)
        at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
        at System.Windows.Threading.DispatcherOperation.Invoke()
        at System.Windows.Threading.Dispatcher.ProcessQueue()
        at System.Windows.Threading.Dispatcher.WndProcHook(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
        at MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
        at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o)
        at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter)
        at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
        at System.Windows.Threading.Dispatcher.InvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Boolean isSingleParameter)
        at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam)
        at MS.Win32.UnsafeNativeMethods.DispatchMessage(MSG& msg)
        at System.Windows.Threading.Dispatcher.TranslateAndDispatchMessage(MSG& msg)
        at System.Windows.Threading.Dispatcher.PushFrameImpl(DispatcherFrame frame)
        at System.Windows.Application.RunInternal(Window window)
        at LensComparison.App.Main() in C:\Users\Mark64\Documents\Visual Studio 2008\Projects\LensComparison\LensComparison\obj\Release\App.g.cs:line 48
        at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
        at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
        at System.Threading.ThreadHelper.ThreadStart()
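
    One possible way to keep the failure inside code you control, sketched below: BitmapSource exposes DownloadFailed and DecodeFailed events alongside DownloadCompleted, and the stack trace shows the throw happening in the decoder's DownloadCompleted path on the dispatcher thread, outside both try/catch blocks. The helper and callback shape here are assumptions rather than the original app's code, and DecodeFailed may or may not cover this particular ColorContexts failure:

        using System;
        using System.Windows.Media.Imaging;

        static class SafeBitmapLoader
        {
            // Loads a BitmapImage and reports failures through callbacks instead of letting
            // the decoder throw on the dispatcher thread. The callbacks are placeholders to
            // wire into the app's own flow control (e.g. skip to the next image).
            public static BitmapImage Load(Uri uri, Action<BitmapImage> onLoaded, Action<Exception> onFailed)
            {
                var bi = new BitmapImage();
                bi.BeginInit();
                bi.UriSource = uri;
                bi.DownloadCompleted += (s, e) => onLoaded(bi);
                bi.DownloadFailed += (s, e) => onFailed(e.ErrorException);
                bi.DecodeFailed += (s, e) => onFailed(e.ErrorException);  // decoder-side errors
                bi.EndInit();
                return bi;
            }
        }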

    Read the article

  • How do I get .NET to garbage collect aggressively?

    - by mmr
    I have an application that is used in image processing, and I find myself typically allocating arrays in the 4000x4000 ushort size, as well as the occasional float and the like. Currently, the .NET framework tends to crash in this app apparently randomly, almost always with an out of memory error. 32mb is not a huge declaration, but if .NET is fragmenting memory, then it's very possible that such large continuous allocations aren't behaving as expected. Is there a way to tell the garbage collector to be more aggressive, or to defrag memory (if that's the problem)? I realize that there's the GC.Collect and GC.WaitForPendingFinalizers calls, and I've sprinkled them pretty liberally through my code, but I'm still getting the errors. It may be because I'm calling dll routines that use native code a lot, but I'm not sure. I've gone over that C++ code, and make sure that any memory I declare I delete, but still I get these C# crashes, so I'm pretty sure it's not there. I wonder if the C++ calls could be interfering with the GC, making it leave behind memory because it once interacted with a native call-- is that possible? If so, can I turn that functionality off?

    EDIT: Here is some very specific code that will cause the crash. According to this SO question, I do not need to be disposing of the BitmapSource objects here. Here is the naive version, no GC.Collects in it. It generally crashes on iteration 4 to 10 of the undo procedure. This code replaces the constructor in a blank WPF project, since I'm using WPF. I do the wackiness with the bitmapsource because of the limitations I explained in my answer to @dthorpe below as well as the requirements listed in this SO question.

        public partial class Window1 : Window {
            public Window1() {
                InitializeComponent();
                //Attempts to create an OOM crash
                //to do so, mimic minute croppings of an 'image' (ushort array), and then undoing the crops
                int theRows = 4000, currRows;
                int theColumns = 4000, currCols;
                int theMaxChange = 30;
                int i;
                List<ushort[]> theList = new List<ushort[]>();//the list of images in the undo/redo stack
                byte[] displayBuffer = null;//the buffer used as a bitmap source
                BitmapSource theSource = null;
                for (i = 0; i < theMaxChange; i++) {
                    currRows = theRows - i;
                    currCols = theColumns - i;
                    theList.Add(new ushort[(theRows - i) * (theColumns - i)]);
                    displayBuffer = new byte[theList[i].Length];
                    theSource = BitmapSource.Create(currCols, currRows, 96, 96, PixelFormats.Gray8, null, displayBuffer,
                        (currCols * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
                    System.Console.WriteLine("Got to change " + i.ToString());
                    System.Threading.Thread.Sleep(100);
                }
                //should get here. If not, then theMaxChange is too large.
                //Now, go back up the undo stack.
                for (i = theMaxChange - 1; i >= 0; i--) {
                    displayBuffer = new byte[theList[i].Length];
                    theSource = BitmapSource.Create((theColumns - i), (theRows - i), 96, 96, PixelFormats.Gray8, null, displayBuffer,
                        ((theColumns - i) * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
                    System.Console.WriteLine("Got to undo change " + i.ToString());
                    System.Threading.Thread.Sleep(100);
                }
            }
        }

    Now, if I'm explicit in calling the garbage collector, I have to wrap the entire code in an outer loop to cause the OOM crash. For me, this tends to happen around x = 50 or so:

        public partial class Window1 : Window {
            public Window1() {
                InitializeComponent();
                //Attempts to create an OOM crash
                //to do so, mimic minute croppings of an 'image' (ushort array), and then undoing the crops
                for (int x = 0; x < 1000; x++){
                    int theRows = 4000, currRows;
                    int theColumns = 4000, currCols;
                    int theMaxChange = 30;
                    int i;
                    List<ushort[]> theList = new List<ushort[]>();//the list of images in the undo/redo stack
                    byte[] displayBuffer = null;//the buffer used as a bitmap source
                    BitmapSource theSource = null;
                    for (i = 0; i < theMaxChange; i++) {
                        currRows = theRows - i;
                        currCols = theColumns - i;
                        theList.Add(new ushort[(theRows - i) * (theColumns - i)]);
                        displayBuffer = new byte[theList[i].Length];
                        theSource = BitmapSource.Create(currCols, currRows, 96, 96, PixelFormats.Gray8, null, displayBuffer,
                            (currCols * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
                    }
                    //should get here. If not, then theMaxChange is too large.
                    //Now, go back up the undo stack.
                    for (i = theMaxChange - 1; i >= 0; i--) {
                        displayBuffer = new byte[theList[i].Length];
                        theSource = BitmapSource.Create((theColumns - i), (theRows - i), 96, 96, PixelFormats.Gray8, null, displayBuffer,
                            ((theColumns - i) * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
                        GC.WaitForPendingFinalizers();//force gc to collect, because we're in scenario 2, lots of large random changes
                        GC.Collect();
                    }
                    System.Console.WriteLine("Got to changelist " + x.ToString());
                    System.Threading.Thread.Sleep(100);
                }
            }
        }

    If I'm mishandling memory in either scenario, or if there's something I should spot with a profiler, let me know. That's a pretty simple routine there. Unfortunately, it looks like @Kevin's answer is right-- this is a bug in .NET and how .NET handles objects larger than 85k. This situation strikes me as exceedingly strange; could Powerpoint be rewritten in .NET with this kind of limitation, or any of the other Office suite applications? 85k does not seem to me to be a whole lot of space, and I'd also think that any program that uses so-called 'large' allocations frequently would become unstable within a matter of days to weeks when using .NET.

    EDIT: It looks like Kevin is right, this is a limitation of .NET's GC. For those who don't want to follow the entire thread, .NET has four GC heaps: gen0, gen1, gen2, and LOH (Large Object Heap). Everything that's 85k or smaller goes on one of the first three heaps, depending on creation time (moved from gen0 to gen1 to gen2, etc). Objects larger than 85k get placed on the LOH. The LOH is never compacted, so eventually allocations of the type I'm doing will cause an OOM error as objects get scattered about that memory space. We've found that moving to .NET 4.0 does help the problem somewhat, delaying the exception, but not preventing it. To be honest, this feels a bit like the 640k barrier-- 85k ought to be enough for any user application (to paraphrase this video of a discussion of the GC in .NET). For the record, Java does not exhibit this behavior with its GC.
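
    For completeness, later runtimes added a switch aimed at exactly this LOH fragmentation; the sketch below applies only to .NET Framework 4.5.1 and newer (and .NET Core/.NET), not the 3.5/4.0 versions discussed above:

        using System;
        using System.Runtime;

        class LohCompactionExample
        {
            static void Main()
            {
                // Allocate and drop some large (>85,000 byte) arrays so they land on the LOH.
                for (int i = 0; i < 10; i++)
                {
                    var big = new ushort[4000 * 4000];
                    big[0] = 1;
                }

                // .NET Framework 4.5.1+ only: request that the next blocking collection
                // also compacts the Large Object Heap, undoing the fragmentation.
                GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
                GC.Collect();

                Console.WriteLine("LOH compacted once; the setting resets itself after that collection.");
            }
        }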

    Read the article

  • Pinyin Character entry on a touchscreen keyboard

    - by mmr
    The app I'm developing requires that it be deployed in China, which means that it needs to have Pinyin and Chinese character handling. I'm told that the way that our customers handle character entry is like so:

    1. Enter the pinyin characters, like 'zhang'.
    2. As they enter the characters, a list of possible Chinese (Mandarin?) characters is presented to the user.
    3. The user then selects '1' to enter the family name that is roughly translated to 'zhang'.

    How can I hook such programs (I believe one is called 'mspy.exe', from Microsoft, which I'm led to believe comes with Microsoft versions of XP) into a WPF text box? Right now, the user can enter text either by using their keyboard or by using an on-screen keyboard, so I will probably need to capture the event of a keypress from either source and feed it to some OS event or to MSPY.exe or some similar program. Or is there some other way to enter pinyin and have it converted to Mandarin? Is there a program other than MSPY I should look at? EDIT: For those of you who think that this should 'just work', it does not. Chinese character entry will work just fine when entering text into notepad or the start-run menu or whatever, but it will not work in WPF. That's the key to this question: how do I enable WPF entry? There's Google Pinyin and Sogou Pinyin, but the websites are in Mandarin or Chinese or something similar and I don't read the language.
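
    I can't speak to mspy.exe specifically, but WPF exposes the system IME to text controls through the InputMethod attached properties, which is one way to get Pinyin composition working in a WPF TextBox without talking to the IME executable directly. A minimal sketch; the control name is an assumption, and this covers hardware keyboard input rather than the on-screen keyboard path:

        using System.Windows.Controls;
        using System.Windows.Input;

        static class ImeSetup
        {
            // Turn the system IME (e.g. Microsoft Pinyin) on for a given text box so that
            // typed pinyin is converted to candidate characters by the OS, as in Notepad.
            public static void EnableChineseIme(TextBox nameTextBox)
            {
                InputMethod.SetIsInputMethodEnabled(nameTextBox, true);
                InputMethod.SetPreferredImeState(nameTextBox, InputMethodState.On);
                InputMethod.SetPreferredImeConversionMode(nameTextBox, ImeConversionModeValues.Native);
            }
        }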

    Read the article

  • How can I transform this haml into a table?

    - by mmr
    I have the following haml code:

        - @theLinks.each_index do |x|
          %br
          %form{:action=>'/Download', :method=>"post",:enctype=>"multipart/form-data"}
            %input{:type=>"submit", :name=>"#{@theLinks[x].url}", :value=>"Name: #{@theLinks[x].Name} Study Time: #{@theLinks[x].studyTime} Comments: #{@theLinks[x].comments}"}

    Basically, for each person, list the time they participated in a study and the comments on the study. Right now, this renders as a set of buttons. I'd like to render it as a table, with each row clickable in the same way (ie, using the 'post' method, so that only the haml file has to be edited without touching the rest of the files). Ideally, I'd also like to be able to sort the table by name, time, or comments, but that might be getting ahead of myself. So how can I change this list of buttons into a table with clickable rows?

    Read the article

  • What should I grab as a development platform, an iPod or an iPad?

    - by mmr
    Hey all, I've recently gotten into the world of contract programming, and two of my clients have indicated that they'd like to do something 'trendy', like ipod touch/iphone/ipad development. I have a mac laptop (first gen macbook pro) that I'll have to upgrade to snow leopard to do the development for any of them, from what I've read. So that's already a bit of a commitment, given all the stuff I have on that laptop I'll have to make sure is recoverable from backup. My budget is limited, but I think I need to learn this skill. Which device should I get to learn this kind of development, an iPod touch or an iPad? I don't have the money for an iPhone. I think that the iPhone/iPad SDK has an emulator mode, but I like to have the device I'm going to roll out on available to make sure that everything works as I'd expect, ie, what's easily readable on a laptop screen is still readable on the touch, etc.

    Read the article

  • How can I set up jnlp for use with Sinatra?

    - by mmr
    I have a website currently running Sinatra and a bunch of other technologies. I wanted to get progress bars running, but that's been a no go. So, a friend of mine whipped up a quick upload in Java, and I'm running that through scponly (chrooted) to make sure they can't do anything funny with changing directories and suchlike. But how can I start the jnlp file from Sinatra? Is it as simple as making it a link and then letting it be downloaded by the user? This code suggests otherwise, but there are a lot of things in there I don't understand (probably because I'm not reading it in the context of the larger program there).

    Read the article

  • How and why do I set up a C# build machine?

    - by mmr
    Hi all, I'm working with a small (4 person) development team on a C# project. I've proposed setting up a build machine which will do nightly builds and tests of the project, because I understand that this is a Good Thing. Trouble is, we don't have a whole lot of budget here, so I have to justify the expense to the powers that be. So I want to know:

    1. What kind of tools/licenses will I need? Right now, we use Visual Studio and Smart Assembly to build, and Perforce for source control. Will I need something else, or is there an equivalent of a cron job for running automated scripts?
    2. What, exactly, will this get me, other than an indication of a broken build? Should I set up test projects in this solution (sln file) that will be run by these scripts, so I can have particular functions tested? We have, at the moment, two such tests, because we haven't had the time (or frankly, the experience) to make good unit tests.
    3. What kind of hardware will I need for this?
    4. Once a build has been finished and tested, is it a common practice to put that build up on an ftp site or have some other way for internal access? The idea is that this machine makes the build, and we all go to it, but can make debug builds if we have to.
    5. How often should we make this kind of build?
    6. How is space managed? If we make nightly builds, should we keep around all the old builds, or start to ditch them after about a week or so?
    7. Is there anything else I'm not seeing here?

    I realize that this is a very large topic, and I'm just starting out. I couldn't find a duplicate of this question here, and if there's a book out there I should just get, please let me know. EDIT: I finally got it to work! Hudson is completely fantastic, and FxCop is showing that some features we thought were implemented were actually incomplete. We also had to change the installer type from Old-And-Busted vdproj to New Hotness WiX. Basically, for those who are paying attention, if you can run your build from the command line, then you can put it into hudson. Making the build run from the command line via MSBuild is a useful exercise in itself, because it forces your tools to be current.

    Read the article

  • MessageBox.Show-- font change?

    - by mmr
    Hi all, I'm using the MessageBox class to show errors to users, and while that might not be the right behavior, it's very convenient. This is a touchscreen application, however, so I need the 'ok' button to be much larger than it is (curse my inordinately large fingers!). I think that if I increase the font size in the dialog box, I should be ok. Is there a way to do that? Or really, is there any way to increase the dialog size? Thanks
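
    The stock MessageBox doesn't expose its font or button size, so the usual workaround is a small custom dialog styled for touch. A minimal WPF sketch under that assumption (if the app is WinForms, the same idea applies with a custom Form):

        using System.Windows;
        using System.Windows.Controls;

        static class TouchMessageBox
        {
            // A big-fonted, big-buttoned stand-in for MessageBox.Show, sized for touch.
            public static void Show(string message, string title = "Error")
            {
                var ok = new Button { Content = "OK", FontSize = 28, MinWidth = 180, MinHeight = 70, Margin = new Thickness(10) };
                var text = new TextBlock { Text = message, FontSize = 24, TextWrapping = TextWrapping.Wrap, Margin = new Thickness(10) };

                var panel = new StackPanel { HorizontalAlignment = HorizontalAlignment.Center };
                panel.Children.Add(text);
                panel.Children.Add(ok);

                var dialog = new Window
                {
                    Title = title,
                    Content = panel,
                    SizeToContent = SizeToContent.WidthAndHeight,
                    WindowStartupLocation = WindowStartupLocation.CenterScreen
                };
                ok.Click += (s, e) => dialog.Close();
                dialog.ShowDialog();  // blocks, like MessageBox.Show
            }
        }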

    Read the article

  • How do I get C# to garbage collect aggressively?

    - by mmr
    I have an application that is used in image processing, and I find myself typically allocating arrays in the 4000x4000 ushort size, as well as the occasional float and the like. Currently, the .NET framework tends to crash in this app apparently randomly, almost always with an out of memory error. 32mb is not a huge declaration, but if .NET is fragmenting memory, then it's very possible that such large continuous allocations aren't behaving as expected. Is there a way to tell the garbage collector to be more aggressive, or to defrag memory (if that's the problem)? I realize that there's the GC.Collect and GC.WaitForPendingFinalizers calls, and I've sprinkled them pretty liberally through my code, but I'm still getting the errors. It may be because I'm calling dll routines that use native code a lot, but I'm not sure. I've gone over that C++ code, and make sure that any memory I declare I delete, but still I get these C# crashes, so I'm pretty sure it's not there. I wonder if the C++ calls could be interfering with the GC, making it leave behind memory because it once interacted with a native call-- is that possible? If so, can I turn that functionality off?

    Read the article
