Search Results

Search found 13869 results on 555 pages for 'memory dump'.


  • solve a classic map-reduce problem with opencl?

    - by liuliu
    I am trying to parallelize a classic map-reduce problem (which parallelizes well with MPI) with OpenCL, namely, the AMD implementation. But the result bothers me. Let me briefly describe the problem first. There are two types of data that flow into the system: the feature set (30 parameters for each) and the sample set (9000+ dimensions for each). It is a classic map-reduce problem in the sense that I need to calculate the score of every feature on every sample (Map), and then sum up the overall score for every feature (Reduce). There are around 10k features and 30k samples. I tried different ways to solve the problem. First, I tried to decompose the problem by features. The problem is that the score calculation consists of random memory access (picking some of the 9000+ dimensions and doing plus/subtraction calculations). Since I cannot coalesce memory access, it is costly. Then, I tried to decompose the problem by samples. The problem is that when summing up the overall score, all threads are competing for a few score variables. They keep overwriting the score, which turns out to be incorrect. (I cannot compute individual scores first and sum them up later because that would require 10k * 30k * 4 bytes, about 1.2 GB.) The first method I tried gives me the same performance as an i7 860 CPU with 8 threads. However, I don't think the problem is unsolvable: it is remarkably similar to ray tracing, where you calculate millions of rays against millions of triangles. Any ideas?


  • Why are my installed fonts not available in .NET?

    - by Dan Herbert
    I'm trying to render some images with text using a font I just added to my machine, and no matter what I do, I can't seem to get the font to become accessible in .NET. I tried using PrivateFontCollection.AddFontFile(filename) and PrivateFontCollection.AddMemoryFont(...) to load the font into memory. Whenever I do this, the method throws a "File Not Found" exception, which is unusual because I get this exception even when loading the font from memory, where there are no files to be "not found". Initially, I thought it might be because the font I was trying to use was in the .pfm format, so I converted the font to .otf and had the same problem. Then I tried installing the .otf font to my Windows Fonts folder so I could pull it from FontFamily.Families. Once I installed the font, it became available in Microsoft Word and Notepad2. However, when I try to load it from FontFamily.Families, it is not included in the array. I thought rebooting my machine would fix the issue, but obviously there is something more complicated involved here. Is there something basic I might have missed when installing the font on my machine (Windows Vista), or is there another way to programmatically load a font that I should be using instead? Is .otf not supported in .NET?
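
    For reference, a minimal private-font sketch (the paths are hypothetical, and this assumes the .otf contains TrueType outlines; GDI+ does not support Type 1 formats such as .pfm, and unsupported formats reportedly surface as the misleading "file not found" error):

        using System.Drawing;
        using System.Drawing.Text;

        class FontSmokeTest
        {
            static void Main()
            {
                // Keep the collection alive for as long as fonts created from it are in use.
                using (var privateFonts = new PrivateFontCollection())
                {
                    privateFonts.AddFontFile(@"C:\fonts\MyFont.otf");   // hypothetical path

                    using (var font = new Font(privateFonts.Families[0], 12f))
                    using (var bmp = new Bitmap(300, 60))
                    using (var g = Graphics.FromImage(bmp))
                    {
                        g.DrawString("sample text", font, Brushes.Black, 0f, 0f);
                        bmp.Save(@"C:\temp\out.png");   // hypothetical path
                    }
                }
            }
        }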


  • Visual Studio setup problem - 'A problem has been encountered while loading the setup components. Canceling setup.'

    - by kronoz
    Hi all, I've had a serious issue with my Visual Studio 2008 setup. I receive the ever-so-useful error 'A problem has been encountered while loading the setup components. Canceling setup.' whenever I try to uninstall, reinstall or repair Visual Studio 2008 (Team System version). If I can't resolve this issue, I have no choice but to completely wipe my computer and start again, which will take all day long! I've recently received very strange errors when trying to build projects, regarding components running out of memory (despite having ~2 GB of physical memory free at the time), which has rendered my current VS install useless. Note that I installed the VS2005 shell version using the vs_setup.msi file in the SQL Server folder after I had installed VS2008, in order to gain access to the SQL Server 2005 Reporting Services designer in Business Intelligence Development Studio (this is inexplicably unavailable in VS2008). Does anyone have any solutions to this problem? P.S.: I know this isn't directly related to programming; however, I feel this is appropriate to SO as it is directly related to my ability to program at all! Note: A colleague found a solution to this problem; hopefully this should help others with the same issue.


  • array retain question

    - by Cosizzle
    Hello, I'm fairly new to Objective-C; most of it is clear, but when it comes to memory management I fall a little short. Currently, when the NSURLConnection method -(void)connectionDidFinishLoading:(NSURLConnection *)connection is called, my application enters a method to parse some data, put it into an array, and return that array. However, I'm not sure this is the best way to do it, since I don't release the array from memory within the custom method (method1, see the attached code). Below is a small example to better show what I'm doing.

    The .h file:

        #import <UIKit/UIKit.h>

        @interface memoryRetainTestViewController : UIViewController {
            NSArray *mainArray;
        }

        @property (nonatomic, retain) NSArray *mainArray;

        @end

    The .m file:

        #import "memoryRetainTestViewController.h"

        @implementation memoryRetainTestViewController

        @synthesize mainArray;

        // this would be the parsing method
        - (NSArray*)method1 {
            // ???: is it bad not to release this, or does it get released with mainArray?
            NSArray *newArray = [[NSArray alloc] init];
            newArray = [NSArray arrayWithObjects:@"apple", @"orange", @"grapes", @"peach", nil];
            return newArray;
        }

        // this method is actually
        // -(void)connectionDidFinishLoading:(NSURLConnection *)connection
        - (void)method2 {
            mainArray = [self method1];
        }

        // Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
        - (void)viewDidLoad {
            [super viewDidLoad];
        }

        - (void)didReceiveMemoryWarning {
            // Releases the view if it doesn't have a superview.
            [super didReceiveMemoryWarning];
            // Release any cached data, images, etc that aren't in use.
        }

        - (void)viewDidUnload {
            mainArray = nil;
            // Release any retained subviews of the main view.
            // e.g. self.myOutlet = nil;
        }

        - (void)dealloc {
            [mainArray release];
            [super dealloc];
        }

        @end


  • Will HttpResponse.Filter buffer the whole data before start the sending?

    - by vtortola
    Hi, a user posted this article about how to use HttpResponse.Filter to compress large amounts of data. But what will happen if I try to transfer a 4 GB file? Will it load the whole file in memory in order to compress it, or will it compress it chunk by chunk? I mean, I'm doing this right now:

        public void GetFile(HttpResponse response)
        {
            String fileName = "example.iso";
            response.ClearHeaders();
            response.ClearContent();
            response.ContentType = "application/octet-stream";
            response.AppendHeader("Content-Disposition", "attachment; filename=" + fileName);
            // note: no Content-Length header; the compressed length isn't known in advance
            using (FileStream fs = new FileStream(Path.Combine(HttpContext.Current.Server.MapPath("~/App_Data"), fileName), FileMode.Open))
            // the compressing DeflateStream wraps the destination stream; we write into it
            using (DeflateStream ds = new DeflateStream(response.OutputStream, CompressionMode.Compress))
            {
                Byte[] buffer = new Byte[4096];
                Int32 bytesRead = 0;
                while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
                {
                    ds.Write(buffer, 0, bytesRead);
                    response.Flush();
                }
            }
        }

    So I'm reading, compressing, and sending at the same time. Now I want to know whether HttpResponse.Filter does the same thing, or whether it will load the whole file into memory in order to compress it. Cheers.
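
    For what it's worth, a minimal sketch of the filter pattern itself (an assumption about the usual approach, not taken from the article): HttpResponse.Filter is a Stream wrapping the outgoing stream, so each chunk written to the response is deflated on the way out rather than being buffered whole:

        using System.IO.Compression;
        using System.Web;

        public static class DeflateFilter
        {
            // Call before any response body has been written, e.g. in BeginRequest.
            public static void Attach(HttpResponse response)
            {
                response.AppendHeader("Content-Encoding", "deflate");
                response.Filter = new DeflateStream(response.Filter, CompressionMode.Compress);
            }
        }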


  • perl threading problem

    - by Alice Wozownik
    I'm writing a multithreaded website uptime checker in Perl, and here is the basic code so far (includes only the threading part):

        #!/usr/bin/perl
        use LWP::UserAgent;
        use Getopt::Std;
        use threads;
        use threads::shared;

        my $maxthreads :shared = 50;
        my $threads :shared = 0;

        print "Website Uptime Checker\n";

        my $infilename = $ARGV[0];
        chomp($infilename);
        open(INFILE, $infilename);

        my $outfilename = $ARGV[1];
        chomp($outfilename);
        open(OUTFILE, ">" . $outfilename);
        OUTFILE->autoflush(1);

        while ($site = <INFILE>) {
            chomp($site);
            while (1) {
                if ($threads < $maxthreads) {
                    $threads++;
                    my $thr = threads->create(\&check_site, $site);
                    $thr->detach();
                    last;
                }
                else {
                    sleep(1);
                }
            }
        }

        while ($threads > 0) {
            sleep(1);
        }

        sub check_site {
            $server = $_[0];
            print "$server\n";
            $threads--;
        }

    It gives an error after a while:

        Can't call method "detach" on an undefined value at C:\perl\webchecker.pl line 28, <INFILE> line 245.

    What causes this error? I know it is at detach, but what am I doing wrong in my code? Windows shows lots of free memory, so it should not be the computer running out of memory; this error occurs even if I set $maxthreads as low as 10, or possibly even lower.


  • Speed Problem - Animating Height Change in UITableView

    - by Travis
    So I have a huge UITableView, between 1000 and 3000 rows. Each row needs to, when selected, expand to include several buttons. I do this by rendering the buttons below the cell, enabling clipping on the cells, and then just animating a change in height when they're selected. So heightForRowAtIndexPath checks whether the row is selected and, if so, returns a different height value. To do the animation, I have the following:

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            [self.tableView beginUpdates];
            [self.tableView endUpdates];
        }

    This all works wonderfully until I get to around 1000 rows, at which point the animations still work and are still quite smooth, but they lag by a second or two. My first thought was a memory problem, but everything is being autoreleased, and upon inspection my application isn't even receiving a memory warning message. So... any ideas on what this might be and how I can erase this lag between selection and the change in cell height? Oh, and one last thing: right now I'm testing this on the iPad, so if this is a problem there, I expect it to be worse on our other target devices (iPhone and iPod).


  • Right way to dispose Image/Bitmap and PictureBox

    - by kornelijepetak
    I am trying to develop a Windows Mobile 6 application (in WinForms/C#). There is only one form, and on the form there is only a PictureBox object. On it I draw all desired controls or whatever I want. There are two things I am doing: drawing custom shapes and loading bitmaps from .png files. The following line locks the file when loading (which is an undesired scenario):

        Bitmap bmp = new Bitmap("file.png");

    So I am using another way to load the bitmap:

        public static Bitmap LoadBitmap(string path)
        {
            using (Bitmap original = new Bitmap(path))
            {
                return new Bitmap(original);
            }
        }

    This is, I guess, much slower, but I don't know any better way to load an image while quickly releasing the file lock. Now, when drawing an image, there is a method that I use:

        public void Draw()
        {
            Bitmap bmp = new Bitmap(240, 320);
            Graphics g = Graphics.FromImage(bmp);

            // draw something with Graphics here
            g.Clear(Color.Black);
            g.DrawImage(Images.CloseIcon, 16, 48);
            g.DrawImage(Images.RefreshIcon, 46, 48);
            g.FillRectangle(new SolidBrush(Color.Black), 0, 100, 240, 103);

            pictureBox.Image = bmp;
        }

    This, however, seems to cause some kind of memory leak, and if I keep doing it for too long, the application eventually crashes. Therefore, I have three questions: 1.) What is the better way of loading bitmaps from files without locking the file? 2.) Which objects need to be manually disposed in the Draw() function (and in which order) so there's no memory leak and no ObjectDisposedException is thrown? 3.) If pictureBox.Image is set to bmp, as in the last line of the code, would pictureBox.Image.Dispose() dispose only resources related to maintaining pictureBox.Image, or also the underlying Bitmap assigned to it?
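
    Regarding question 2, one commonly suggested shape for Draw() is sketched below (a sketch only, not verified on the Compact Framework; Images.CloseIcon and pictureBox are the question's own resources): dispose the Graphics and brush as soon as drawing is done, and dispose the previously displayed Bitmap when swapping in the new one, since the PictureBox will not do that for you.

        public void Draw()
        {
            Bitmap bmp = new Bitmap(240, 320);
            using (Graphics g = Graphics.FromImage(bmp))
            using (SolidBrush brush = new SolidBrush(Color.Black))
            {
                g.Clear(Color.Black);
                g.DrawImage(Images.CloseIcon, 16, 48);
                g.DrawImage(Images.RefreshIcon, 46, 48);
                g.FillRectangle(brush, 0, 100, 240, 103);
            }

            // Swap the image and dispose the old one, rather than just overwriting it.
            Image previous = pictureBox.Image;
            pictureBox.Image = bmp;
            if (previous != null)
            {
                previous.Dispose();
            }
        }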


  • Long running stats process - thoughts on language choice?

    - by Josh
    I am on a LAMP stack for a website I am managing. There is a need to roll up usage statistics (a variety of things related to our desktop product), and I initially tackled the problem with PHP (given that I had a bunch of classes to work with the data already). All worked well on my dev box, which was running PHP 5.3. Long story short, 5.1 memory management seems to be a lot worse, and I've had to do a lot of fiddling to get the long-term roll-up scripts to run in a fixed memory space. Our server guys are unwilling to upgrade PHP at this time, and I've since moved my dev server back to 5.1 so I don't run into this problem again. For mining MySQL databases to roll up statistics for different periods and resolutions, potentially with a process that runs all the time in the future (as opposed to on a cron schedule), what language do you recommend? I was looking at Python (which I know more or less), Java (which I don't know that well), or sticking it out with PHP (which I know quite well). Thanks for any suggestions. Josh


  • Unmanaged DLL in C# Web Service

    - by Telis
    Hi guys, please help me, as I am new to accessing an unmanaged DLL from C#. I have a large unmanaged C++ DLL and I am trying to access the DLL's classes and functions from a C# web service. I have seen many examples of how to use DllImport, but for some reason I am stuck on my very first wrapper method, having spent many hours with no luck. What should I do to return an object from my 'marshaled' [DllImport...] function? I would like to do something like this:

        [DllImport("unmanaged.dll")]
        public static extern MyClass MyFunction();

    Here is the definition of my C++ class and the function that I want to access:

        class __declspec(dllexport) TPDate
        {
        public:
            TPDate();
            TPDate(const TPDate& rhs);
            ...
            // today's date
            static TPDate AsOfDate(void);
            ...
        };

    In my web service I have declared the following StructLayout:

        [StructLayout(LayoutKind.Sequential)]
        public class TPDate
        {
            public TPDate(TPDate d)
            {
                _tpDate = d;
            }
            public TPDate _tpDate;
        }

    And here's where I think I'm not doing something right:

        class WrapperTPDate
        {
            [DllImport("TPTools.dll", ExactSpelling = false,
                EntryPoint = "?AsOfDate@TPDate@@SA?AV1@XZ",
                CallingConvention = CallingConvention.StdCall)]
            [return: MarshalAs(UnmanagedType.Struct)]
            public static extern TPDate AsOfDate();   // HERE THERE IS A PROBLEM
        };

    I am calling the wrapper as follows from my WebMethod:

        [WebMethod]
        public void ConstructModel()
        {
            TPDate date1 = WrapperTPDate.AsOfDate();   // here I get an exception
            TPDate date = new TPDate(date1);
        }

    The exception I am getting is:

        System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Runtime.InteropServices.MarshalDirectiveException: Cannot marshal 'return value': Invalid managed/unmanaged type combination (this type must be paired with LPStruct or Interface).

    If I change it to LPStruct, I get another exception:

        System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

    Could you please tell me where I'm going wrong here? Thanks
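
    As an aside, a C++ class returned by value cannot be marshaled through P/Invoke at all; the usual workaround (sketched here under the assumption that the DLL can be modified; the exported names below are hypothetical, not part of TPTools.dll) is to expose a flat extern "C" interface and pass an opaque handle around on the managed side:

        using System;
        using System.Runtime.InteropServices;

        // Hypothetical C exports that would have to be added to TPTools.dll:
        //   extern "C" __declspec(dllexport) void* TPDate_AsOfDate();
        //   extern "C" __declspec(dllexport) void  TPDate_Destroy(void* date);
        static class NativeTPDate
        {
            [DllImport("TPTools.dll", CallingConvention = CallingConvention.Cdecl)]
            public static extern IntPtr TPDate_AsOfDate();   // opaque handle to a native TPDate

            [DllImport("TPTools.dll", CallingConvention = CallingConvention.Cdecl)]
            public static extern void TPDate_Destroy(IntPtr date);   // frees the native object
        }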


  • AS3 Working With Arbitrarily Large Files

    - by Kekoa
    I am trying to read a very large file in AS3 and am having problems with the runtime just crashing on me. I'm currently using a FileStream to open the file asynchronously. This does not work (it crashes without an Exception) for files bigger than about 300 MB.

        _fileStream = new FileStream();
        _fileStream.addEventListener(IOErrorEvent.IO_ERROR, loadError);
        _fileStream.addEventListener(Event.COMPLETE, loadComplete);
        _fileStream.openAsync(myFile, FileMode.READ);

    Looking at the documentation, it sounds like the FileStream class still tries to read the entire file into memory (which is bad for large files). Is there a more suitable class to use for reading large files? I really would like something like a buffered FileStream class that only loads the bytes from the file that are going to be read next. I'm expecting that I may need to write a class that does this for me, but then I would need to read only a piece of the file at a time. I'm assuming that I can do this by setting the position and readAhead properties of the FileStream to read a chunk out of the file at a time. I would love to save some time if a class like this already exists. Is there a good way to process large files in AS3 without loading the entire contents into memory?


  • Terrible DotNetNuke performance

    - by Peter Bridger
    I'm involved with a project using DotNetNuke version 05.01.04 Community Edition. We are building our new intranet using it, but performance is terrible. We have five people adding pages and content to it, and every 15-30 seconds they experience a pause of 10 seconds or longer before the system continues and the next screen loads. The server is Windows 2003, 3.8 GHz with 1 GB of RAM. I'm told by our server admin that CPU and memory don't appear to be the bottleneck. We currently have 350 pages in the system and plan to add 1000, so we need to resolve this performance problem in order to enter content and go live. I just can't see where the bottleneck is. Is there a good way to determine the bottleneck when using DotNetNuke?

    Modules installed:

      • Publish:Engage (not currently in use)
      • Page Blaster (doesn't appear to provide caching when users are logged in using Integrated Authentication)
      • SimpleGallery
      • XMod
      • Content Manager

    IIS setup: application recycling completely disabled (apart from a 2am recycle).

    New findings (18th March 2010): The main bottleneck was due to version 5.1.4 having a bug which caused 1300 database round trips on an average page, due to broken database in-memory caching. We've upgraded to 5.2.4, which has resolved this bottleneck. Now the next biggest bottleneck is the navigation. We've used both DDR:Menu and DDN:Nav, but both have a major impact on performance. Is there a navigation interface out there that doesn't drain performance so badly?


  • How bad is code using std::basic_string<T> as a contiguous buffer?

    - by BillyONeal
    I know that technically the std::basic_string template is not required to have contiguous memory. However, I'm curious how many implementations exist for modern compilers that actually take advantage of this freedom. For example, if one wants code like the following, it seems silly to allocate a vector just to turn around instantly and return it as a string:

        DWORD valueLength = 0;
        DWORD type;
        LONG errorCheck = RegQueryValueExW(
            hWin32, value.c_str(), NULL, &type, NULL, &valueLength);
        if (errorCheck != ERROR_SUCCESS)
            WindowsApiException::Throw(errorCheck);
        else if (valueLength == 0)
            return std::wstring();

        std::wstring buffer;
        do
        {
            buffer.resize(valueLength/sizeof(wchar_t));
            errorCheck = RegQueryValueExW(
                hWin32, value.c_str(), NULL, &type,
                reinterpret_cast<LPBYTE>(&buffer[0]), &valueLength);
        } while (errorCheck == ERROR_MORE_DATA);
        if (errorCheck != ERROR_SUCCESS)
            WindowsApiException::Throw(errorCheck);
        return buffer;

    I know code like this might slightly reduce portability because it implies that std::wstring is contiguous, but I'm wondering just how unportable that makes this code. Put another way, how many compilers actually take advantage of the freedom that allowing noncontiguous memory gives them? Oh, and of course, given what the code's doing, this only matters for Windows compilers.


  • How to make sure Solr/Lucene won't die with java.lang.OutOfMemoryError?

    - by taw
    I'm really puzzled why it keeps dying with java.lang.OutOfMemoryError during indexing, even though it has a few GB of memory. Is there a fundamental reason why it needs manual tweaking of config files / JVM parameters, instead of just figuring out how much memory is available and limiting itself to that? No other program except Solr ever has this kind of problem. Yes, I can keep tweaking the JVM heap size every time such crashes happen, but this is all so backwards. Here's the stack trace of the latest such crash, in case it is relevant:

        SEVERE: java.lang.OutOfMemoryError: Java heap space
            at java.util.Arrays.copyOfRange(Arrays.java:3209)
            at java.lang.String.<init>(String.java:216)
            at org.apache.lucene.index.TermBuffer.toTerm(TermBuffer.java:122)
            at org.apache.lucene.index.SegmentTermEnum.term(SegmentTermEnum.java:169)
            at org.apache.lucene.search.FieldCacheImpl$StringIndexCache.createValue(FieldCacheImpl.java:701)
            at org.apache.lucene.search.FieldCacheImpl$Cache.get(FieldCacheImpl.java:208)
            at org.apache.lucene.search.FieldCacheImpl.getStringIndex(FieldCacheImpl.java:676)
            at org.apache.lucene.search.FieldComparator$StringOrdValComparator.setNextReader(FieldComparator.java:667)
            at org.apache.lucene.search.TopFieldCollector$OneComparatorNonScoringCollector.setNextReader(TopFieldCollector.java:94)
            at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:245)
            at org.apache.lucene.search.Searcher.search(Searcher.java:171)
            at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:988)
            at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:884)
            at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:341)
            at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:182)
            at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:195)
            at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
            at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
            at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
            at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
            at java.lang.Thread.run(Thread.java:619)


  • Can i use a different parser for Axis 1.4?

    - by NishM
    The current SAX parser takes a lot of time (20 minutes) and heap memory (around 400 MB) to deserialize the response coming from the SOAP server, as per the logs. Our response XMLs average 4 MB in size. Part of the log from when it runs the application out of heap is below:

        DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing handler org.apache.axis.message.SOAPHandler@16d22f1
        DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(newElem00)
        DEBUG (org.apache.axis.message.MessageElement) New MessageElement (org.apache.axis.message.MessageElement@112c22) named {}name
        DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing element name
        DEBUG (org.apache.axis.utils.NSStack) NSPush (32)
        DEBUG (org.apache.axis.encoding.DeserializationContext) Exit: DeserializationContext::startElement()
        DEBUG (org.apache.axis.encoding.DeserializationContext) Enter: DeserializationContext::endElement(, name)
        DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(popHandler00)
        DEBUG (org.apache.axis.encoding.DeserializationContext) Popping handler org.apache.axis.message.SOAPHandler@16d22f1
        DEBUG (org.apache.axis.utils.NSStack) NSPop (32)
        DEBUG (org.apache.axis.encoding.DeserializationContext) Popped element stack to org.apache.axis.message.MessageElement:property
        DEBUG (org.apache.axis.encoding.DeserializationContext) Exit: DeserializationContext::endElement()
        DEBUG (org.apache.axis.encoding.DeserializationContext) Enter: DeserializationContext::startElement(, value)
        DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(pushHandler00)
        DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing handler org.apache.axis.message.SOAPHandler@16880ba
        DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(newElem00)
        DEBUG (org.apache.axis.message.MessageElement) New MessageElement (org.apache.axis.message.MessageElement@1db74af) named {}value
        DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing element value
        DEBUG (org.apache.axis.utils.NSStack) NSPush (32)
        DEBUG (org.apache.axis.encoding.DeserializationContext) Exit: DeserializationContext::startElement()
        DEBUG (org.apache.axis.encoding.DeserializationContext) Enter: DeserializationContext::endElement(, value)
        DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(popHandler00)
        DEBUG (org.apache.axis.encoding.DeserializationContext) Popping handler org.apache.axis.message.SOAPHandler@16880ba
        DEBUG (org.apache.axis.utils.NSStack) NSPop (32)

    I cannot use Axis2 for technical reasons. I have tried using HTTP Commons client instead of HTTP client, but the response time remains the same. How can I plug a different parser (for example, Xerces 2.10.0 or XStream 1.3.1) into the Axis 1.4 framework in this context, so that memory management and response time are favorable?


  • Trouble move-capturing std::unique_ptr in a lambda using std::bind

    - by user2478832
    I'd like to capture a variable of type std::vector<std::unique_ptr<MyClass>> in a lambda expression (in other words, "capture by move"). I found a solution which uses std::bind to capture a unique_ptr (http://stackoverflow.com/a/12744730/2478832) and decided to use it as a starting point. However, the most simplified version of the proposed code I could get doesn't compile (lots of template mistakes; it seems to try to call unique_ptr's copy constructor):

        #include <functional>
        #include <memory>

        std::function<void ()> a(std::unique_ptr<int>&& param)
        {
            return std::bind(
                [] (int* p) {},
                std::move(param));
        }

        int main()
        {
            a(std::unique_ptr<int>(new int()));
        }

    Can anybody point out what is wrong with this code? EDIT: tried changing the lambda to take a reference to the unique_ptr; it still doesn't compile:

        #include <functional>
        #include <memory>

        std::function<void ()> a(std::unique_ptr<int>&& param)
        {
            return std::bind(
                [] (std::unique_ptr<int>& p) {},   // also tried as a const reference
                std::move(param));
        }

        int main()
        {
            a(std::unique_ptr<int>(new int()));
        }


  • Speed up bitstring/bit operations in Python?

    - by Xavier Ho
    I wrote a prime number generator using the Sieve of Eratosthenes and Python 3.1. The code runs correctly and gracefully at 0.32 seconds on ideone.com, generating prime numbers up to 1,000,000.

        # from bitstring import BitString

        def prime_numbers(limit=1000000):
            '''Prime number generator. Yields the series
            2, 3, 5, 7, 11, 13, 17, 19, 23, 29 ...
            using Sieve of Eratosthenes.
            '''
            yield 2
            sub_limit = int(limit**0.5)
            flags = [False, False] + [True] * (limit - 2)
            # flags = BitString(limit)
            # Step through all the odd numbers
            for i in range(3, limit, 2):
                if flags[i] is False:
                # if flags[i] is True:
                    continue
                yield i
                # Exclude further multiples of the current prime number
                if i <= sub_limit:
                    for j in range(i*3, limit, i<<1):
                        flags[j] = False
                        # flags[j] = True

    The problem is, I run out of memory when I try to generate numbers up to 1,000,000,000:

        flags = [False, False] + [True] * (limit - 2)
        MemoryError

    As you can imagine, allocating 1 billion boolean values (4 or 8 bytes each in Python, see comment) is really not feasible, so I looked into bitstring. I figured that using 1 bit for each flag would be much more memory-efficient. However, the program's performance dropped drastically: 24 seconds runtime for primes up to 1,000,000. This is probably due to the internal implementation of bitstring. You can comment/uncomment the three lines to see what I changed to use BitString, as in the code snippet above. My question is: is there a way to speed up my program, with or without bitstring?


  • Android: Determine when an app is being finalized vs destroyed for screen orientation change

    - by Matt
    Hi all, I am relatively new to the Android world and am having some difficulty understanding how the whole screen orientation cycle works. I understand that when the orientation changes from portrait to landscape or vice versa, the activity is destroyed and then re-created, so all the code in the onCreate function will run again. Here's my situation: I have an app that logs into a website, retrieves data, and displays it to the user. While this is all done in background threads, the code that starts these threads is in the onCreate function. Now, the problem is that whenever the user changes the screen orientation, the app will log in, retrieve the data, and display it to the user all over again. What I would like to do is set a boolean that tells the app whether it is logged in or not, so it knows whether it must log in when onCreate is called. As long as the app is in memory, the HttpClient will exist and contain the cookies from logging the user in, but when the app is killed by the system those will go away. So I would assume that I need to do something like setting the logged-in boolean to false when the app is killed, but since onDestroy is also called when the screen is rotated, how is this possible? I also looked into the finalize function and isFinishing(), but neither seems to be working. Shorter version: how can I distinguish between the app being killed from memory and the activity being destroyed for a screen rotation, and run different code for each event? Any help or a point in the right direction is greatly appreciated. Thank you!


  • Matlab crashes on library initialize when called from Java

    - by David Sauter
    Hello everyone. The setup I have is a Java application that calls native C code with JNI, which in turn starts up the MATLAB runtime and calls functions on it (I know there are other solutions for calling MATLAB methods from Java). The problem is that the MATLAB engine crashes at some point during initialization, and I don't know what's causing it exactly. The crash terminates my JVM; I assume it's some kind of memory corruption. The C++ code calling MATLAB functions that is actually crashing is:

        JNIEXPORT void JNICALL some_jni_vodoo_initializeLibrary(JNIEnv* env, jclass thisClass)
        {
            try
            {
                if (!mclInitializeApplication(NULL, 0))
                {
                    THROW_EXCEPTION(env, "Could not initialize the application properly.");
                    return;
                }
                if (!<library>Initialize())
                {
                    THROW_EXCEPTION(env, "Could not initialize the library.");
                    return;
                }
            }
            ...

    The function <library>Initialize() crashes here. The Java error log reads:

        Stack Trace:
        [0] jmi.dll:0x793f4175(0x7934cdca, 1, 0x7937e67c "à;.y`[email protected] in C:\BUILD_ARE..", 0x792d6a32)
        [1] jvm.dll:0x792df9a5(0xc0000005, 0x79356791, 0x4961b400 "Ð\8y", 0x6d8b29de)
        [2] jvm.dll:0x792e0431(0x8b515008, 0x70f0e8ce, 0x8b5ffffa, 0xc25d5ec6)
        ------------------------------------------------------------------------
        Fatal Java Exception detected at Fri Apr 30 11:08:08 2010
        ------------------------------------------------------------------------
        Configuration:
            MATLAB Version: 7.8.0.347 (R2009a)
            MATLAB License: unknown
            Operating System: Microsoft Windows Vista
            Window System: Version 6.0 (Build 6002: Service Pack 2)
            Processor ID: x86 Family 6 Model 10 Stepping 5, GenuineIntel
            Virtual Machine: Java is not enabled
            Default Encoding: windows-1252
        Java is not enabled

    I really have no idea what could be wrong. Is there not enough memory for the JVM? I guess the problem is somehow related to Java, since calling the JNI functions from a simple test C++ program works fine. Thanks


  • How to keep unreachable code?

    - by Gabriel
    I'd like to write a function that has some optional code which is executed or not depending on user settings. The function is CPU-intensive, and having ifs in it would be slow, since the branch predictor is not that good. My idea is to make a copy of the function in memory and replace the NOPs with jumps when I don't want to execute some code. My working example goes like this:

        int Test()
        {
            int x = 2;
            for (int i = 0; i < 10; i++)
            {
                x *= 2;
                __asm {NOP};   // to skip it, replace this
                __asm {NOP};   // by JMP 2 (after the goto)
                x *= 2;        // op to skip or not
                x *= 2;
            }
            return x;
        }

    In my test's main, I copy this function into newly allocated executable memory and replace the NOPs with a JMP 2 so that the following x *= 2 is not executed. The problem is that I would have to change the JMP operand every time I change the code to be skipped. An alternative that would fix this problem would be:

        __asm {NOP};   // to skip it, replace this
        __asm {NOP};   // by JMP 2 (after the goto)
        goto dont_do_it;
        x *= 2;        // op to skip or not
        dont_do_it:
        x *= 2;

    This way, since a goto compiles to 2 bytes of binary, I would be able to replace the NOPs with a fixed JMP of always 2 in order to skip the goto. Unfortunately, in full optimization mode, the goto and the x *= 2 are removed because they are unreachable at compile time. Hence the need to keep that dead code.


  • Checking for corrupt war file after mvn install

    - by Tauren
    Is there a Maven command that will verify that a WAR file is valid and not corrupt? Or is there some other program or technique to validate zip files? I'm on Ubuntu 9.10, so a Linux solution is preferred. On occasion, I end up with a corrupt WAR file after doing mvn clean and mvn install on my project. If I extract the WAR file to my hard drive, an error occurs and the file doesn't get extracted. I believe this happens when my system is in a low-memory condition, because it tends to happen only when lots of memory is in use. After rebooting, doing a mvn install always gives a valid WAR file. Because this happens infrequently, I don't typically test the file by uncompressing it. I transfer the 50 MB WAR file to my server and then restart Jetty with it as the root webapp. But when the file is corrupt, I get a java.util.zip.ZipException: invalid block type error. So I'm looking for a quick way to validate the file as soon as mvn install is completed. Is there a Maven command to do this? Any other ideas?


  • Workaround for GNU Make 3.80 eval bug

    - by bengineerd
    I'm trying to create a generic build template for my Makefiles, kind of like they discuss in the eval documentation. I've run into a known bug with GNU Make 3.80: when $(eval) evaluates a line that is over 193 characters, Make crashes with a "virtual memory exhausted" error. The code I have that causes the issue looks like this:

        SRC_DIR = ./src/
        PROG_NAME = test

        define PROGRAM_template
        $(1)_SRC_DIR = $$(SRC_DIR)$(1)/
        $(1)_SRC_FILES = $$(wildcard $$($(1)_SRC_DIR)*.c)
        $(1)_OBJ_FILES = $$($(1)_SRC_FILES:.c=.o)

        $$($(1)_OBJ_FILES) : $$($(1)_SRC_FILES) # This is the problem line
        endef

        $(eval $(call PROGRAM_template,$(PROG_NAME)))

    When I run this Makefile, I get:

        gmake: *** virtual memory exhausted. Stop.

    The expected output is that all .c files in ./src/test/ get compiled into .o files (via an implicit rule). The problem is that $$($(1)_SRC_FILES) and $$($(1)_OBJ_FILES) together are over 193 characters long (if there are enough source files). I have tried running the Makefile on a directory with only two .c files, and it works fine; it's only when there are many .c files in the source directory that I get the error. I know that GNU Make 3.81 fixes this bug. Unfortunately, I do not have the authority or ability to install the newer version on the system I'm working on; I'm stuck with 3.80. So, is there some workaround? Maybe split $$($(1)_SRC_FILES) up and declare each dependency individually within the eval?


  • C# Win Forms Access Violation Exception

    - by Goober
    I keep getting the following error:

        System.AccessViolationException was unhandled
        Message="Attempted to read or write protected memory. This is often an indication that other memory is corrupt."
        Source="FEDivaNET"
        StackTrace:
            at Diva.Handles.FEDivaObject.dropReference()
            at Diva.Handles.FEDivaObject.!FEDivaObject()
            at Diva.Handles.FEDivaObject.Dispose(Boolean )
            at Diva.Handles.FEDivaObject.Finalize()
        InnerException:

    Any ideas what the issue could be? I'm using a library that is written in C++ and isn't designed for multithreading, yet I'm hammering it with about 3000 requests every 6 minutes.

    Code:

        delegate void SetTextCallback(string mxID, string text);

        public void UpdateLog(string mxID, string text)
        {
            //lock (thisLock)
            //{
            if (listBoxProcessLog.InvokeRequired)
            {
                SetTextCallback d = new SetTextCallback(UpdateLog);
                this.BeginInvoke(d, new object[] { mxID, text });
            }
            else
            {
                //Get a reference to the datatable assigned as the datasource.
                //DataTable logDT = (DataTable)listBoxProcessLog.DataSource;
                //logDT.Rows.Add(DateTime.Now + " - " + mxID + ": " + text);
                if (text.Contains("COMPLETE") || (text.Contains("FAILED")))
                {
                    if (progressBar1.Value < progressBar1.Maximum)
                    {
                        progressBar1.Value += 1;
                    }
                }
            }
            //}
        }

    Bear in mind that even when the Lock and the DataTable weren't commented out, the error still occurred!
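
    One common mitigation when a native library is not thread-safe (a sketch only; it is not specific to FEDivaNET and assumes every native call can be funneled through one place) is to marshal all calls onto a single dedicated thread, so no two calls into the library ever overlap:

        using System;
        using System.Collections.Generic;
        using System.Threading;

        static class NativeGate
        {
            static readonly Queue<Action> queue = new Queue<Action>();

            static NativeGate()
            {
                var worker = new Thread(() =>
                {
                    while (true)
                    {
                        Action job;
                        lock (queue)
                        {
                            while (queue.Count == 0) Monitor.Wait(queue);
                            job = queue.Dequeue();
                        }
                        job();   // only this thread ever touches the native library
                    }
                });
                worker.IsBackground = true;
                worker.Start();
            }

            // Queue a call into the native library; it will run on the single worker thread.
            public static void Enqueue(Action job)
            {
                lock (queue)
                {
                    queue.Enqueue(job);
                    Monitor.Pulse(queue);
                }
            }
        }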


  • iPhone - dequeueReusableCellWithIdentifier usage

    - by Jukurrpa
    Hi, I'm working on an iPhone app which has a pretty large UITableView with data taken from the web, so I'm trying to optimize its creation and usage. I found out that dequeueReusableCellWithIdentifier is pretty useful, but after seeing many source codes using it, I'm wondering if the way I use this function is the right one. Here is what people usually do:

        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell"];
        if (cell == nil) {
            cell = [[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:@"Cell"];
        }
        // Add elements to the cell
        return cell;

    And here is the way I did it:

        // The cell row
        NSString *identifier = [NSString stringWithFormat:@"Cell %d", indexPath.row];

        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:identifier];
        if (cell != nil)
            return cell;

        cell = [[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:identifier];
        // Add elements to the cell
        return cell;

    The difference is that people use the same identifier for every cell, so dequeuing one only avoids allocating a new one. For me, the point of queuing was to give each cell a unique identifier, so when the app asks for a cell it has already displayed, neither allocation nor element adding has to be done. In the end I don't know which is best: the "common" method caps the table's memory usage at the exact number of cells it displays, whilst the method I use seems to favor speed, as it keeps all calculated cells, but it can cause large memory consumption (unless there's an inner limit to the queue). Am I wrong to use it this way? Or is it just up to the developer, depending on his needs?


  • Oracle performance problems with large batch of XSL operations

    - by FrustratedWithFormsDesigner
    I have a system that performs many XSL transformations on XMLType objects. The problem is that the system gradually slows down over time, and sometimes crashes when it runs out of memory. The slowdown (and possibly the memory crash) seems to be around the dbms_xslprocessor.processXSL function call, which gradually takes longer and longer to complete. The code looks like this:

        v_doc          dbms_xmldom.DOMDocument;
        v_transformer  dbms_xmldom.DOMDocument;
        v_XSLprocessor dbms_xslprocessor.Processor;
        v_stylesheet   dbms_xslprocessor.Stylesheet;
        v_clob         clob;
        ...
        transformer    := PKG_STUFF.getXSL();
        v_transformer  := dbms_xmldom.newDOMDocument(transformer);
        v_XSLprocessor := dbms_xslprocessor.newProcessor;
        v_stylesheet   := dbms_xslprocessor.newStylesheet(v_transformer, '');
        ...
        for source_data in (select id from source_tbl) loop
            begin
                v_doc := PKG_CONVERT.convert(in_id => source_data.id);

                -- start time of operation
                v_begin_op_time := dbms_utility.get_time;

                -- reset the CLOB
                v_clob := ' ';

                -- apply the XSL transform
                dbms_xslprocessor.processXSL(p => v_XSLprocessor, ss => v_stylesheet,
                                             xmldoc => v_Doc, cl => v_clob);
                v_doc := dbms_xmldom.newDOMDocument(XMLType(v_clob));

                -- end time
                v_end_op_time := dbms_utility.get_time;

                -- calculate duration
                v_time_taken := (((v_end_op_time - v_begin_op_time)));

                -- log the duration
                PKG_LOG.log_message('Time taken to transform XML: ' || v_time_taken);
                ...
                ...
                DBMS_XMLDOM.freeDocument(v_Doc);
                DBMS_LOB.freetemporary(lob_loc => v_clob);
            end;
        end loop;

    The time taken to transform the XML is slowly creeping up (I suppose it might also be the call to dbms_xmldom.newDOMDocument, but I had thought that to be fairly straightforward). I have no idea why... :( (Oracle 10g)

