Search Results

Search found 12267 results on 491 pages for 'out of memory'.

Page 245 of 491

  • Visual Studio project remains "stuck" when stopped

    - by Traveling Tech Guy
    Hi, I'm currently developing a connector DLL to HP's Quality Center. I'm using their (insert expletive) COM API to connect to the server; an interop wrapper gets created automatically by Visual Studio. My solution has 2 projects: the DLL and a tester application - essentially a form with buttons that call functions in the DLL. Everything works well - I can create defects, update them and delete them - and when I close the main form, the application stops nicely. But if I call a function that returns a list of all available projects (to fill a combo box) and then close the main form, Visual Studio still shows the solution as running and I have to stop it. I've managed to pinpoint a single function in my code: if I call it, the solution remains "hung", and if I don't, it closes fine. It's a call to the get_VisibleProjects property on the TDC object, which returns a List (not the .NET one, but a type in the COM library) - I just iterate over it and return a proper list (that I later use to fill the combo box):

        public List<string> GetAvailableProjects()
        {
            List<string> projects = new List<string>();
            foreach (string project in this.tdc.get_VisibleProjects(qcDomain))
            {
                projects.Add(project);
            }
            return projects;
        }

    My assumption is that something gets retained in memory. If I run the EXE outside of Visual Studio it closes - but who knows what gets left behind in memory? My question is: how do I get rid of whatever calling this property returns? Shouldn't the GC handle this? Do I need to delve into pointers? Things I've tried: getting the list into a variable and setting it to null at the end of the function; adding a destructor to the class and nulling the tdc object; stepping through the tester application all the way out, when the form closes and the Main function ends - it closes, but Visual Studio still shows it running. Thanks for your assistance!
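
    One common culprit in this situation is the runtime-callable wrapper (RCW) for the returned COM collection keeping the server connection, and therefore the process, alive after the form closes. The sketch below is a guess at a fix rather than a confirmed one: it releases the COM object explicitly once the managed copy has been made (Marshal lives in System.Runtime.InteropServices; the List type, tdc and qcDomain names come from the question).

        public List<string> GetAvailableProjects()
        {
            List<string> projects = new List<string>();
            var comList = this.tdc.get_VisibleProjects(qcDomain);   // interop List from the COM library
            try
            {
                foreach (string project in comList)
                {
                    projects.Add(project);
                }
            }
            finally
            {
                // drop the runtime-callable wrapper's reference so the COM object can go away
                if (comList != null && Marshal.IsComObject(comList))
                {
                    Marshal.ReleaseComObject(comList);
                }
            }
            return projects;
        }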

    Read the article

  • Using shared_ptr to implement RCU (read-copy-update)?

    - by yongsun
    I'm very interested in user-space RCU (read-copy-update) and am trying to simulate one via tr1::shared_ptr. Here is the code; I'm really a newbie in concurrent programming, so would some experts help me review it? The basic idea is: a reader calls get_reading_copy() to gain a pointer to the current protected data (let's say it's generation one, or G1). A writer calls get_updating_copy() to gain a copy of G1 (let's say it's G2), and only one writer is allowed to enter the critical section. After the updating is done, the writer calls update() to do a swap and make m_data_ptr point to the G2 data. The ongoing readers and the writer now hold the shared_ptr of G1, and either a reader or the writer will eventually deallocate the G1 data. Any new readers get the pointer to G2, and a new writer gets a copy of G2 (let's say G3). It's possible that G1 is not released yet, so multiple generations of data may co-exist.

        template <typename T>
        class rcu_protected
        {
        public:
            typedef T type;
            typedef std::tr1::shared_ptr<type> rcu_pointer;

            rcu_protected() : m_data_ptr (new type()) {}

            rcu_pointer get_reading_copy ()
            {
                spin_until_eq (m_is_swapping, 0);
                return m_data_ptr;
            }

            rcu_pointer get_updating_copy ()
            {
                spin_until_eq (m_is_swapping, 0);
                while (!CAS (m_is_writing, 0, 1))
                {
                    /* sleep for back-off when exceeding maximum retry times */
                }
                rcu_pointer new_data_ptr(new type(*m_data_ptr));
                // as spin_until_eq does not have memory barrier protection,
                // we need to place a read barrier to protect the loading of
                // new_data_ptr from being re-ordered before its construction
                _ReadBarrier();
                return new_data_ptr;
            }

            void update (rcu_pointer new_data_ptr)
            {
                while (!CAS (m_is_swapping, 0, 1)) {}
                m_data_ptr.swap (new_data_ptr);
                // as spin_until_eq does not have memory barrier protection,
                // we need to place a write barrier to protect the assignments of
                // m_is_writing/m_is_swapping from being re-ordered before the swap
                _WriteBarrier();
                m_is_writing = 0;
                m_is_swapping = 0;
            }

        private:
            volatile long m_is_writing;
            volatile long m_is_swapping;
            rcu_pointer m_data_ptr;
        };
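
    For comparison, here is a much simpler mutex-based sketch of the same copy-on-update idea written with C++11 primitives. It is not lock-free RCU (readers briefly take a lock while copying the shared_ptr), but the generation behaviour described above is the same: an old generation is destroyed when its last reader drops its shared_ptr. The class and member names are illustrative, and the caller must pair get_updating_copy() with update() on the same thread.

        #include <memory>
        #include <mutex>

        template <typename T>
        class cow_protected {
        public:
            cow_protected() : data_(std::make_shared<T>()) {}

            std::shared_ptr<const T> get_reading_copy() {
                std::lock_guard<std::mutex> lock(ptr_mutex_);
                return data_;                            // reader shares the current generation
            }

            std::shared_ptr<T> get_updating_copy() {
                write_mutex_.lock();                     // one writer at a time, held until update()
                std::lock_guard<std::mutex> lock(ptr_mutex_);
                return std::make_shared<T>(*data_);      // private copy of the latest generation
            }

            void update(std::shared_ptr<T> new_data) {
                {
                    std::lock_guard<std::mutex> lock(ptr_mutex_);
                    data_ = std::move(new_data);         // publish the new generation
                }
                write_mutex_.unlock();
            }

        private:
            std::mutex ptr_mutex_;                       // guards the pointer read/swap
            std::mutex write_mutex_;                     // serializes writers
            std::shared_ptr<T> data_;
        };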

    Read the article

  • VSC++, virtual method at bad address, curious bug

    - by antoon.groenewoud
    Hello, this guy: virtual phTreeClass* GetTreeClass() const { return (phTreeClass*)m_entity_class; } crashed the program with an access violation when called, even after a full recompile. All member functions and virtual member functions had correct memory addresses (I hovered the mouse over the methods in debug mode), but this function had a bad memory address: 0xfffffffc. Everything looked okay: the 'this' pointer, and everything works fine up until this function call. This function is also pretty old and I hadn't changed it for a long time. The problem just suddenly popped up after some work, which I commented out entirely to see what was causing it, without any success. So I removed the virtual, compiled, and it worked fine. I added virtual back, compiled, and it still worked fine! I basically changed nothing - and remember that I did do a full recompile earlier and still had the error back then - so I wasn't able to reproduce the problem. But now it is back. I didn't change anything. Removing virtual fixes the problem. Sincerely, Antoon

    Read the article

  • Delphi fast large bitmap creation (without clearing)

    - by Ritsaert Hornstra
    When using the TBitmap wrapper for a GDI bitmap from the Graphics unit, I noticed it will always clear out the bitmap (using a PatBlt call) when setting up a bitmap with SetSize(w, h). When I copy in the bits later on (see the routine below), ScanLine seems to be the fastest possibility, not SetDIBits.

        function ToBitmap: TBitmap;
        var
          i, N, x: Integer;
          S, D: PAnsiChar;
        begin
          Result := TBitmap.Create();
          Result.PixelFormat := pf32bit;
          Result.SetSize( width, height );
          S := Src;
          D := Result.ScanLine[ 0 ];
          x := Integer( Result.ScanLine[ 1 ] ) - Integer( D );
          N := width * sizeof( longword );
          for i := 0 to height - 1 do
          begin
            Move( S^, D^, N );
            Inc( S, N );
            Inc( D, x );
          end;
        end;

    The bitmaps I need to work with are quite large (150MB of RGB memory). With these images it takes 150ms just to create an empty bitmap and a further 140ms to overwrite its contents. Is there a way of initializing a TBitmap with the correct size WITHOUT initializing the pixels themselves, leaving the pixel memory uninitialized (i.e. dirty)? Or is there another way to do such a thing? I know we could work on the pixels in place, but this still leaves the 150ms of unnecessary initialization of the pixels.
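
    One possible route, sketched below under the assumption that a 32-bit top-down DIB is acceptable and that Src is already laid out top-down with contiguous rows: create the DIB section yourself and assign its handle to the TBitmap, so SetSize (and its PatBlt clear) is never called and the pixels are copied straight into the section's memory. CreateBitmapNoClear is an invented name and this is untested against the timings above.

        // requires Windows and Graphics in the uses clause
        function CreateBitmapNoClear(W, H: Integer; Src: PAnsiChar): TBitmap;
        var
          bi: TBitmapInfo;
          Bits: Pointer;
          hbm: HBITMAP;
        begin
          FillChar(bi, SizeOf(bi), 0);
          bi.bmiHeader.biSize := SizeOf(TBitmapInfoHeader);
          bi.bmiHeader.biWidth := W;
          bi.bmiHeader.biHeight := -H;        // negative height = top-down rows
          bi.bmiHeader.biPlanes := 1;
          bi.bmiHeader.biBitCount := 32;
          bi.bmiHeader.biCompression := BI_RGB;
          hbm := CreateDIBSection(0, bi, DIB_RGB_COLORS, Bits, 0, 0);
          // rows of a top-down 32-bit DIB are contiguous, so a single Move suffices
          Move(Src^, PByte(Bits)^, W * H * SizeOf(LongWord));
          Result := TBitmap.Create;
          Result.Handle := hbm;               // TBitmap takes ownership of the DIB section
        end;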

    Read the article

  • iPhone: Calling dealloc on parentViewController causes an exception

    - by arielcamus
    Hi, I'm dealing with the viewDidUnload and dealloc methods and I've found a problem when calling [super dealloc]; in a parent view controller. I have a lot of view controllers with shared custom code, which I have moved out into a parent view controller. So, when defining my view controllers I set a reference to the super class:

        @interface LoginViewController : AbstractViewController

    Then, in the dealloc method I call the AbstractViewController dealloc method:

        // (LoginViewController code)
        - (void)dealloc {
            [user release];
            [passwd release];
            [super dealloc];
        }

    [super dealloc] executes the following code:

        // (AbstractViewController code)
        - (void)dealloc {
            [dbUtils release];
            [loadingView release];
            [super dealloc];
        }

    If I simulate a memory warning on the iPhone Simulator, the following exception is thrown:

        2010-03-03 11:27:45.805 MyApp[71563:40b] Received simulated memory warning.
        2010-03-03 11:27:45.808 MyApp[71563:40b] *** -[LoginViewController isViewLoaded]: message sent to deallocated instance 0x13b51b0
        kill
        quit

    However, if I comment out the [super dealloc] line in AbstractViewController, the exception is not thrown and my app keeps running. Thank you for your help once again!

    Read the article

  • Efficient algorithm for Next button on a MySQL result set

    - by David Grayson
    I have a website that lets people view rows in a table (each row is a picture). There are more than 100,000 rows. You can view different subsets of the rows, and you can view them with different sort orders. While you are viewing one of the rows, you can click the "Next" or "Previous" buttons to go to the next/previous row in the list. How would you implement the "Next" and "Previous" features of the website? More specifically: if you have an arbitrary query that returns a list of up to 100,000+ rows, and you know some information about the current row someone is viewing, how do you determine the NEXT row efficiently? Here is the pseudo-code of the solution I came up with when the website was young. It worked well when there were only 1,000 rows, but now that there are 100,000 rows I think it is eating up too much memory.

        int nextRowId(string query, int currentRowId) {
            array allRowIds = mysql_query(query);                    // Takes up a lot of memory!
            int currentIndex = (index of currentRowId in allRowIds); // Takes time!
            return allRowIds[currentIndex + 1];
        }

    While you are thinking about this problem, remember that the website can store more information about the current row than just its ID (for example, the position of the current row in the result set), and this information can be used as a hint to help determine the ID of the next row. Edit: Sorry for not mentioning this earlier, but this isn't just a static website: rows can often be added to the list, and rows can be re-ordered in the list. (Much more rarely, rows can be removed from the list.) I think that I should worry about that kind of thing, but maybe you can convince me otherwise.
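
    One widely used approach for this shape of problem is keyset ("seek") pagination: instead of loading the whole id list, ask the database for the single row that sorts immediately after the current one. The sketch below assumes the current row's sort value and id are stored alongside the row being viewed; the table and column names are illustrative, the :placeholders stand for those saved values, and a composite index on (sort_value, id) is assumed. Older MySQL versions may prefer the expanded form sort_value > :v OR (sort_value = :v AND id > :id) for index use.

        -- next row for an ascending sort
        SELECT id
        FROM   pictures
        WHERE  (sort_value, id) > (:current_sort_value, :current_id)
        ORDER  BY sort_value, id
        LIMIT  1;

        -- previous row: flip the comparison and the sort direction
        SELECT id
        FROM   pictures
        WHERE  (sort_value, id) < (:current_sort_value, :current_id)
        ORDER  BY sort_value DESC, id DESC
        LIMIT  1;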

    Read the article

  • What about race condition in multithreaded reading?

    - by themoob
    Hi, According to an article on IBM.com, "a race condition is a situation in which two or more threads or processes are reading or writing some shared data, and the final result depends on the timing of how the threads are scheduled. Race conditions can lead to unpredictable results and subtle program bugs." . Although the article concerns Java, I have in general been taught the same definition. As far as I know, simple operation of reading from RAM is composed of setting the states of specific input lines (address, read etc.) and reading the states of output lines. This is an operation that obviously cannot be executed simultaneously by two devices and has to be serialized. Now let's suppose we have a situation when a couple of threads access an object in memory. In theory, this access should be serialized in order to prevent race conditions. But e.g. the readers/writers algorithm assumes that an arbitrary number of readers can use the shared memory at the same time. So, the question is: does one have to implement an exclusive lock for read when using multithreading (in WinAPI e.g.)? If not, why? Where is this control implemented - OS, hardware? Best regards, Kuba
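
    As a point of reference: reads of data that no thread is concurrently writing do not need a lock; the memory system serializes the individual bus transactions, and since nothing changes, every reader sees the same values. A lock becomes necessary as soon as at least one writer can run concurrently. In the Win32 API the usual tool for the "many readers, occasional writer" case is the slim reader/writer lock; a minimal C sketch (the shared variable is illustrative, Vista or later assumed):

        #include <windows.h>

        SRWLOCK g_lock = SRWLOCK_INIT;
        int g_shared_value;

        int read_value(void)
        {
            int v;
            AcquireSRWLockShared(&g_lock);      /* many readers may hold this at once */
            v = g_shared_value;
            ReleaseSRWLockShared(&g_lock);
            return v;
        }

        void write_value(int v)
        {
            AcquireSRWLockExclusive(&g_lock);   /* writers exclude readers and each other */
            g_shared_value = v;
            ReleaseSRWLockExclusive(&g_lock);
        }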

    Read the article

  • Spring Integration 1.0 RC2: Streaming file content?

    - by gdm
    I've been trying to find information on this, but due to the immaturity of the Spring Integration framework I haven't had much luck. Here is my desired work flow: New files are placed in an 'Incoming' directory Files are picked up using a file:inbound-channel-adapter The file content is streamed, N lines at a time, to a 'Stage 1' channel, which parses the line into an intermediary (shared) representation. This parsed line is routed to multiple 'Stage 2' channels. Each 'Stage 2' channel does its own processing on the N available lines to convert them to a final representation. This channel must have a queue which ensures no Stage 2 channel is overwhelmed in the event that one channel processes significantly slower than the others. The final representation of the N lines is written to a file. There will be as many output files as there were routing destinations in step 4. *'N' above stands for any reasonable number of lines to read at a time, from [1, whatever I can fit into memory reasonably], but is guaranteed to always be less than the number of lines in the full file. How can I accomplish streaming (steps 3, 4, 5) in Spring Integration? It's fairly easy to do without streaming the files, but my files are large enough that I cannot read the entire file into memory. As a side note, I have a working implementation of this work flow without Spring Integration, but since we're using Spring Integration in other places in our project, I'd like to try it here to see how it performs and how the resulting code compares for length and clarity.
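
    Spring Integration 1.0 has no out-of-the-box streaming file reader, so one pragmatic option is a custom component that reads the file N lines at a time and sends each chunk as a message to the Stage 1 channel, rather than reading the whole payload at once. The chunking step itself, stripped of any Spring Integration wiring, might look like the plain-Java sketch below (the ChunkedLineReader and ChunkHandler names are made up for illustration):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;

        public class ChunkedLineReader {

            public interface ChunkHandler {
                void handle(List<String> lines);   // e.g. wrap in a Message and send to the Stage 1 channel
            }

            public static void read(String path, int chunkSize, ChunkHandler handler) throws IOException {
                BufferedReader reader = new BufferedReader(new FileReader(path));
                try {
                    List<String> chunk = new ArrayList<String>(chunkSize);
                    String line;
                    while ((line = reader.readLine()) != null) {
                        chunk.add(line);
                        if (chunk.size() == chunkSize) {
                            handler.handle(chunk);               // only chunkSize lines are in memory at once
                            chunk = new ArrayList<String>(chunkSize);
                        }
                    }
                    if (!chunk.isEmpty()) {
                        handler.handle(chunk);                   // trailing partial chunk
                    }
                } finally {
                    reader.close();
                }
            }
        }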

    Read the article

  • JAXB, how to marshal without a namespace

    - by Alvin
    I have a fairly large, repetitive XML document to create using JAXB. Storing the whole object tree in memory and then marshalling it takes too much memory. Essentially, my XML looks like this:

        <Store>
          <item />
          <item />
          <item />
          .....
        </Store>

    Currently my solution to the problem is to "hard code" the root tag to an output stream and marshal each of the repetitive elements one by one:

        aOutputStream.write("<?xml version="1.0"?>")
        aOutputStream.write("<Store>")
        foreach items as item
            aMarshaller.marshall(item, aOutputStream)
        end
        aOutputStream.write("</Store>")
        aOutputStream.close()

    Somehow JAXB generates the XML like this:

        <Store xmlns="http://stackoverflow.com">
          <item xmlns="http://stackoverflow.com"/>
          <item xmlns="http://stackoverflow.com"/>
          <item xmlns="http://stackoverflow.com"/>
          .....
        </Store>

    Although this is valid XML, it just looks ugly, so I wonder: is there any way to tell the marshaller not to put the namespace on the item elements? Or is there a better way to use JAXB to serialize XML chunk by chunk?
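
    One commonly suggested variation is to write the root element yourself through an XMLStreamWriter and marshal each item into that same writer with the marshaller in fragment mode. The sketch below assumes an @XmlRootElement-annotated Item class (the class and method names are illustrative); whether the marshaller actually reuses the in-scope default namespace instead of re-declaring it depends on the JAXB implementation, so treat this as something to try rather than a guaranteed fix.

        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.Marshaller;
        import javax.xml.stream.XMLOutputFactory;
        import javax.xml.stream.XMLStreamWriter;
        import java.io.OutputStream;
        import java.util.List;

        public class StoreWriter {
            public void write(List<Item> items, OutputStream out) throws Exception {
                XMLStreamWriter xsw = XMLOutputFactory.newInstance().createXMLStreamWriter(out, "UTF-8");
                Marshaller m = JAXBContext.newInstance(Item.class).createMarshaller();
                m.setProperty(Marshaller.JAXB_FRAGMENT, Boolean.TRUE);  // no per-item <?xml?> declaration

                xsw.writeStartDocument("UTF-8", "1.0");
                xsw.setDefaultNamespace("http://stackoverflow.com");
                xsw.writeStartElement("http://stackoverflow.com", "Store");
                xsw.writeDefaultNamespace("http://stackoverflow.com");   // declared once on the root
                for (Item item : items) {
                    m.marshal(item, xsw);                                // one item at a time, low memory
                }
                xsw.writeEndElement();
                xsw.writeEndDocument();
                xsw.close();
            }
        }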

    Read the article

  • Qt application crashing immediately without debugging info. How do I track down the problem?

    - by jjacksonRIAB
    I run a Qt app I've built:

        ./App
        Segmentation fault

    I run it with strace:

        strace ./App
        execve("./App", ["./App"], [/* 27 vars */]) = 0
        --- SIGSEGV (Segmentation fault) @ 0 (0) ---
        +++ killed by SIGSEGV +++
        Process 868 detached

    Again, no useful info. I run it with gdb:

        (gdb) run
        Starting program: /root/test/App
        Reading symbols from shared object read from target memory...(no debugging symbols found)...done.
        Loaded system supplied DSO at 0xffffe000
        Program received signal SIGSEGV, Segmentation fault.
        0x00000001 in ?? ()

    Again, nothing. I run it with valgrind:

        ==948== Process terminating with default action of signal 11 (SIGSEGV)
        ==948== Bad permissions for mapped region at address 0x0
        ==948== at 0x1: (within /root/test/App)

    Even if I put in debugging symbols, it doesn't give any more useful info. ldd shows all libraries being linked properly. Is there any other way I can find out what's wrong? I can't even do standard printf, cout, etc. debugging. The executable doesn't even seem to start running at all. I rebuilt with symbols and tried the suggestion below:

        (gdb) break main
        Breakpoint 1 at 0x45470
        (gdb) run
        Starting program: /root/test/App
        Breakpoint 1 at 0x80045470
        Reading symbols from shared object read from target memory...done.
        Loaded system supplied DSO at 0xffffe000
        Program received signal SIGSEGV, Segmentation fault.
        0x00000001 in ?? ()

    I checked for static initializers and I don't seem to have any. Yep, I tried printf, cout, etc. It doesn't even make it into the main routine, so I'm looking for problems with static initializers in linked libraries, adding them in one by one. I'm not getting any stack traces either.

    Read the article

  • Do ORMs normally allow circular relations? If so, how would they handle it?

    - by SeanJA
    I was hacking around trying to make a basic ORM that has support for one => one and one => many relationships. I think I succeeded somewhat, but I am curious about how to handle circular relationships. Say you had something like this:

        user::hasOne('car');
        car::hasMany('wheels');
        car::property('type');
        wheel::hasOne('car');

    You could then do this (theoretically):

        $u = new user();
        echo $u->car->wheels[0]->car->wheels[1]->car->wheels[2]->car->wheels[3]->type;
        #=> "monster truck"

    Now, I am not sure why you would want to do this. It seems like it wastes a whole pile of memory and time just to get to something that could have been done in a much shorter way. In my small ORM, I now have 4 copies of the wheel class and 4 copies of the car class in memory, which causes a problem if I update one of them and save it back to the database: the rest get out of date and could overwrite the changes that were already made. How do other ORMs handle circular references? Do they even allow it? Do they go back up the tree and create a pointer to one of the parents? Do they let the coder shoot themselves in the foot if they are silly enough to go around in circles?
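
    Most mature ORMs handle this with an identity map: the same database row always materializes as the same in-memory object, so a circular walk keeps returning one shared instance instead of allocating fresh copies, and updates cannot clobber each other because there is only one copy to update. A bare-bones illustrative PHP sketch, with invented class and method names:

        class IdentityMap {
            private $objects = array();

            public function get($class, $id) {
                $key = $class . '#' . $id;
                return isset($this->objects[$key]) ? $this->objects[$key] : null;
            }

            public function register($class, $id, $object) {
                $this->objects[$class . '#' . $id] = $object;
                return $object;
            }
        }

        // inside the ORM's row loader:
        // $cached = $map->get('car', $row['id']);
        // if ($cached !== null) return $cached;            // same instance every time
        // return $map->register('car', $row['id'], $car);  // first load: remember it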

    Read the article

  • How do virtual destructors work?

    - by Prabhu
    A few hours back I was fiddling with a memory leak issue, and it turned out that I really got some basic stuff about virtual destructors wrong! Let me explain my class design.

        class Base {
            virtual void push_elements() {}
        };

        class Derived : public Base {
            vector<int> x;
        public:
            void push_elements() {
                for (int i = 0; i < 5; i++)
                    x.push_back(i);
            }
        };

        void main() {
            Base* b = new Derived();
            b->push_elements();
            delete b;
        }

    The BoundsChecker tool reported a memory leak in the derived class vector, and I figured out that the destructor is not virtual, so the derived class destructor is not called. It surprisingly got fixed when I made the destructor virtual. Isn't the vector deallocated automatically even if the derived class destructor is not called? Is that a quirk in the BoundsChecker tool, or is my understanding of virtual destructors wrong?
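
    For reference, deleting a Derived object through a Base* when Base has no virtual destructor is undefined behaviour; in practice only ~Base() runs, so ~Derived() - and with it the vector's destructor, which frees the heap block BoundsChecker flags - never executes. A minimal sketch of the corrected hierarchy:

        #include <vector>

        class Base {
        public:
            virtual void push_elements() {}
            virtual ~Base() {}            // without this, delete through Base* is undefined behaviour
        };

        class Derived : public Base {
            std::vector<int> x;
        public:
            void push_elements() {
                for (int i = 0; i < 5; ++i)
                    x.push_back(i);
            }
        };

        int main() {
            Base* b = new Derived();
            b->push_elements();
            delete b;                     // now runs ~Derived() first, then ~Base()
            return 0;
        }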

    Read the article

  • Animation management in COCOS2D iphone.

    - by shreya
    Hi all, I have about 255 image frames for the background animation, 99 frames for the enemy sprite and 125 frames for the player sprite. All animations run simultaneously on the screen: the background animation is running, 4-5 enemies are present at a time, and the player is there as well. Take a look at the code below:

        CCAnimation *_enemyAnimation = [CCAnimation animationWithName:@"Enemy" delay:0.1f];
        for (int i = 1; i < 99; i++) {
            [_enemyAnimation addFrameWithFilename:[NSString stringWithFormat:@"enemy %02d.jpg", i]];
        }
        id action1 = [CCAnimate actionWithAnimation:_enemyAnimation];
        [_enemySprite runAction:[CCRepeatForever actionWithAction:action1]];
        [self schedule:@selector(BackToGameLogic:) interval:5.0];

    This makes my game much slower and consumes about 65 MB of memory in Allocations. How should I manage my animations so that speed improves and memory consumption is reduced? Please suggest a way. Thanks.
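
    Loading 99 individual JPEGs as separate textures is usually what drives both the slowdown and the allocation figure. A common cocos2d-iphone remedy is to pack the frames into a texture atlas and pull CCSpriteFrames from the shared cache. A rough sketch, assuming enemy.plist/enemy.png have been produced by an atlas tool (such as Zwoptex) and using the 0.99-era API the question already uses; the frame names must match the keys in the plist:

        [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"enemy.plist"];

        NSMutableArray *frames = [NSMutableArray array];
        for (int i = 1; i < 99; i++) {
            NSString *name = [NSString stringWithFormat:@"enemy %02d.jpg", i];
            [frames addObject:[[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:name]];
        }

        // all frames share one texture, so memory stays close to the size of the atlas
        CCAnimation *enemyAnimation = [CCAnimation animationWithFrames:frames delay:0.1f];
        id action = [CCRepeatForever actionWithAction:[CCAnimate actionWithAnimation:enemyAnimation]];
        [_enemySprite runAction:action];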

    Read the article

  • Sprites and labels are displayed as white boxes

    - by srikanth rongali
    I am writing a game in cocos2d. I am using a restartDirector function in the AppDelegate class:

        -(void)restartDirector {
            [[CCDirector sharedDirector] end];
            [[CCDirector sharedDirector] release];
            if( ! [CCDirector setDirectorType:CCDirectorTypeDisplayLink] )
                [CCDirector setDirectorType:CCDirectorTypeDefault];
            [[CCDirector sharedDirector] setPixelFormat:kPixelFormatRGBA8888];
            [CCTexture2D setDefaultAlphaPixelFormat:kTexture2DPixelFormat_RGBA8888];
            [[CCDirector sharedDirector] setAnimationInterval:1.0/60];
            [[CCDirector sharedDirector] setDisplayFPS:YES];
            [[CCDirector sharedDirector] setDeviceOrientation:CCDeviceOrientationLandscapeLeft];
            [[CCDirector sharedDirector] attachInView:window];
        }

    I call this function from one of the game scenes:

        -(void)PracticeMethod:(id)sender {
            [MY_DELEGATE restartDirector];
            CCScene *endPageScene = [CCScene node];
            CCLayer *endPageLayer = [DummyScene node];
            [endPageScene addChild:endPageLayer];
            [[CCDirector sharedDirector] runWithScene:endPageScene];
            // [[CCDirector sharedDirector] replaceScene:endPageScene];
        }

    When I used replaceScene there was no problem in the game, but the memory used by object allocations was high (I checked with the Leaks tool), so I switched to runWithScene. But with this approach, when DummyScene is loaded its sprites and labels are displayed as white boxes - I cannot see the sprites and labels at all. If I use replaceScene everything works fine but the memory allocation is high. That is my problem. Thank you.

    Read the article

  • Loading/Displaying large amount of data on webpage.

    - by jb
    I have a webpage which contains a table for displaying a large amount of data (on average 2,000 to 10,000 rows). This page takes a long time to load/render, which is understandable. The problem is that while the page is loading, the PC's memory usage skyrockets (500 MB on my test system is in use by iexplore) and the whole PC grinds to a halt until it has finished, which can take a minute or two. IE hangs until it is complete, and switching to another running program is just as bad. I need to fix this, and ideally I want to accomplish 2 things: 1) Load individual parts of the page separately, so the page can render initially without the large data table; a loading div will be placed there until it is ready. 2) Don't use up so much memory or local resources while rendering, so the user can at least use a different tab/application at the same time. How would I go about doing both or either of these? I'm an applications programmer by trade, so I am still a little fuzzy on the things I can do in a web environment. Cheers all.

    Read the article

  • Progressively stream the output of an ASP.NET page - or render a page outside of an HTTP request

    - by Evgeny
    I have an ASP.NET 2.0 page with many repeating blocks, including a third-party server-side control (so it's not just plain HTML). Each is quite expensive to generate, in terms of both CPU and RAM. I'm currently using a standard Repeater control for this. There are two problems with this simple approach: the entire page must be rendered before any of it is returned to the client, so the user must wait a long time before they see any data (I write progress messages using Response.Write, so there is feedback, but no actual results); and the ASP.NET worker process must hold everything in memory at the same time. There is no inherent need for this: once one block is processed it won't be changed, so it could be returned to the client and the memory could be freed. I would like to somehow return these blocks to the client one at a time, as each is generated. I'm thinking of extracting the stuff inside the Repeater into a separate page and getting it repeatedly using AJAX, but there are some complications involved in that and I wonder if there is some simpler approach. Ideally I'd like to keep it as one page (from the client's point of view), but return it incrementally. Another way would be to do something similar, but on the server: still create a separate page, but have the server access it and then Response.Write() the HTML it gets to the response stream for the real client request. Is there a way to avoid an HTTP request here, though? Is there some ASP.NET method that would render a UserControl or a Page outside of an HTTP request and simply return the HTML to me as a string? I'm open to other ideas on how to do this as well.
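
    On the "render a control to a string without another HTTP request" part, one technique that stays inside the current request is to host the UserControl on a throw-away Page and capture its output with Server.Execute; each block's HTML can then be written and flushed to the client as soon as it is ready. A hedged sketch (the .ascx path is illustrative, and the third-party control may have life-cycle or form requirements this ignores):

        // inside the page serving the real request
        private string RenderBlock(string virtualPath)
        {
            Page host = new Page();                                      // throw-away host page
            System.Web.UI.Control control = host.LoadControl(virtualPath); // e.g. "~/Controls/Block.ascx"
            host.Controls.Add(control);

            using (System.IO.StringWriter writer = new System.IO.StringWriter())
            {
                // runs the host page's life cycle in-process and captures its rendered HTML
                HttpContext.Current.Server.Execute(host, writer, false);
                return writer.ToString();
            }
        }

        // usage: emit blocks incrementally as they are generated
        // Response.Write(RenderBlock("~/Controls/Block.ascx"));
        // Response.Flush();   // push this block to the client before computing the next one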

    Read the article

  • Drawing an image in Java, slow as hell on a netbook.

    - by Norswap
    In follow-up to my previous questions (especially this one: http://stackoverflow.com/questions/2684123/java-volatileimage-slower-than-bufferedimage), I have noticed that simply drawing an Image (it doesn't matter if it's buffered or volatile, since the computer has no accelerated memory*, and tests show it doesn't change anything) tends to take very long.

        (*) System.out.println(GraphicsEnvironment.getLocalGraphicsEnvironment()
                .getDefaultScreenDevice().getAvailableAcceleratedMemory());
        --> 0

    How long? For a 500x400 image, about 0.04 seconds. This is only drawing the image on the backbuffer (obtained via a buffer strategy). Now considering that World of Warcraft runs on that netbook (though it is quite laggy) and that online Java games seem to have no problem whatsoever, this is quite thought-provoking. I'm quite certain I didn't miss something obvious; I've searched the web extensively, but nothing will do. So do any of you Java whizzes have an idea of what obscure problem might be causing this (or maybe it is normal, though I doubt it)? PS: As I'm writing this I realized this might be caused by my Linux installation (Arch Linux), though I have the correct Intel driver. But my computer normally has "Integrated Intel Graphics Media Accelerator 950", which should mean it has accelerated video memory somehow. Any ideas about this side of things?
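
    One routine cause of slow drawImage() calls, independent of BufferedImage vs. VolatileImage, is that the image's pixel format does not match the screen's, so every frame pays a per-pixel conversion. A common mitigation is to convert the image once into the default GraphicsConfiguration's format; a sketch:

        import java.awt.Graphics2D;
        import java.awt.GraphicsConfiguration;
        import java.awt.GraphicsEnvironment;
        import java.awt.Transparency;
        import java.awt.image.BufferedImage;

        public final class ImageUtil {
            public static BufferedImage toCompatibleImage(BufferedImage src) {
                GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
                        .getDefaultScreenDevice().getDefaultConfiguration();
                if (src.getColorModel().equals(gc.getColorModel())) {
                    return src;                                  // already in the screen's native format
                }
                BufferedImage copy = gc.createCompatibleImage(
                        src.getWidth(), src.getHeight(), Transparency.OPAQUE);
                Graphics2D g = copy.createGraphics();
                g.drawImage(src, 0, 0, null);                    // one-off conversion
                g.dispose();
                return copy;
            }
        }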

    Read the article

  • handle large Parcelable ArrayList in Android

    - by Gal Ben-Haim
    I'm developing an Android app that is a client to a JSON webservice API. I have classes of resource objects (some are nested) and I pass results from an IntentService that accesses the webservice, using the Parcelable interface for all the resource classes. The webservice returns arrays of results that can potentially be large (because of the nesting; for example, a post object also contains a comments array, and each comment also contains a user object). Currently I'm either inserting the results into a SQLite database or displaying them in a ListView (my relevant methods accept ArrayList<resourceClass> as arguments; some data needs to be persistent and some should not be). Since I don't know what size of lists I can handle this way without reaching the memory limits: is this a good practice? Is it a better idea to save the parsed JSON to a local file immediately and pass the file path to the ResultReceiver, then either insert into the database from that file or display the data? Is there a better way to handle this? By the way, I'm parsing the JSON as a stream with Gson's Reader, so there shouldn't be memory issues at that stage.

    Read the article

  • Splitting a set of object into several subsets of 'similar' objects

    - by doublep
    Suppose I have a set of objects, S. There is an algorithm f that, given a set S, builds a certain data structure D on it: f(S) = D. If S is large and/or contains vastly different objects, D becomes large, to the point of being unusable (i.e. not fitting in the allotted memory). To overcome this, I split S into several non-intersecting subsets: S = S1 + S2 + ... + Sn, and build Di for each subset. Using n structures is less efficient than using one, but at least this way I can fit into memory constraints. Since the size of f(S) grows faster than S itself, the combined size of the Di is much less than the size of D. However, it is still desirable to reduce n, i.e. the number of subsets, or to reduce the combined size of the Di. For this, I need to split S in such a way that each Si contains "similar" objects, because then f will produce a smaller output structure if the input objects are "similar enough" to each other. The problem is that while "similarity" of objects in S and the size of f(S) do correlate, there is no way to compute the latter other than just evaluating f(S), and f is not very fast. The algorithm I currently have is to iteratively add each next object from S into one of the Si, so that this results in the least possible (at this stage) increase in combined Di size:

        for x in S:
            i = such i that size(f(Si + {x})) - size(f(Si)) is min
            Si = Si + {x}

    This gives practically useful results, but is certainly pretty far from the optimum (i.e. the minimal possible combined size). It is also slow. To speed it up somewhat, I compute size(f(Si + {x})) - size(f(Si)) only for those i where x is "similar enough" to objects already in Si. Is there any standard approach to such kinds of problems? I know of the branch and bound algorithm family, but it cannot be applied here because it would be prohibitively slow. My guess is that it is simply not possible to compute the optimal distribution of S into Si in reasonable time. But is there some common iteratively-improving algorithm?
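
    For concreteness, here is the greedy heuristic above written out in Python. The size_of and similar callables stand in for "evaluate f and measure its output" and the cheap similarity pre-filter; both are assumptions supplied by the caller, so this only illustrates the control flow, not f itself.

        def greedy_split(objects, size_of, similar):
            """Assign each object to the subset whose structure grows the least."""
            subsets = []
            for x in objects:
                best_i = None
                best_increase = None
                for i, s in enumerate(subsets):
                    if not similar(x, s):              # cheap pre-filter before evaluating f
                        continue
                    increase = size_of(s + [x]) - size_of(s)
                    if best_increase is None or increase < best_increase:
                        best_i, best_increase = i, increase
                if best_i is None:
                    subsets.append([x])                # nothing similar enough: start a new subset
                else:
                    subsets[best_i].append(x)
            return subsets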

    Read the article

  • jQuery "best practices" is an oxymoron [closed]

    - by n00b
    jQuery "best practices", oxymoron for several reasons sadly To speak candidly, the entire jQuery library has a HIGHLY inconsistent API, and virtually no coding standards. It's basically a subset of JavaScript, a new language if you will, which is probably the most confusing part for normal javascript developers. In my opinion, use normal JavaScript where possible to save on memory consumption. Or, better yet, use a professional tool like YUI. If you already come from a javascript background, you will appreciate it much more than jQuery because it doesn't have n00b wrappers around everything. In tools like YUI, you interact directly with the native DOM to do things, instead of a super-jQuery object that tries to do everything. I'll get voted down for saying the truth. Don't get me wrong, jQuery is cool if you wanna throw something flashy up quick, but if you're going to build a larger app you're going to need a more refined tool (especially since jQuery leaks memory like no other once you start chaining too much things). jQuery does have one of the lowest learning curves, and you won't outgrow it for awhile, but when you do it will be highly apparent and tedious to merge to something else. over 60 years programming experience someone clean this up and community wiki please

    Read the article

  • I need help editing this data file into SOM_PAK format

    - by Mola
    Hi experts, I am working on a Self-Organizing Map (SOM) implementation and I have a microarray dataset which I am trying to read in using the som_read_data function, but I keep getting errors when I edit it into the SOM_PAK form that the toolbox recognises, such as:

        ??? Error using ==> somtoolbox\som_read_data.m
        Only 69 vector components on input file data line 1 (dimension is 70)

        Error in ==> SomMainFunction at 3
        sD = som_read_data('B_r2.txt');

    But when I try to read the data without editing (the original file, as shown here: http://rapidshare.com/files/376239367/DLBCL.txt.html), it indicates "Data read OK", but then I get the following error:

        ??? Error using ==> unknown
        Out of memory. Type HELP MEMORY for your options.

        Error in ==> somtoolbox\som_bmus.m at 189
        Bmus = zeros(dlen,length(which_bmus));

        Error in ==> somvis\somvis_p_matrix.m at 41
        [dummy dists] = som_bmus (dat, dat, 2:datlen);

        Error in ==> SomMainFunction at 16
        [pheight rad_real perc] = somvis_p_matrix(sM,sD);

    You can get the data file from here: http://rapidshare.com/files/376239367/DLBCL.txt.html. I need someone to help me correct this data and put it in SOM_PAK format. I have tried getting it into SOM_PAK format, but it still gives me errors.
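
    For reference, the SOM_PAK layout that som_read_data expects starts with a single header line giving the input dimension, followed by one whitespace-separated data vector per line, optionally ending with a label (comment lines start with #). The "Only 69 vector components ... (dimension is 70)" error typically means the header value and the actual number of numeric columns disagree, for example because a label column was counted as a component. A tiny invented example with dimension 3:

        3
        # component values, then an optional sample label
        0.12  0.05  0.33  sample_1
        0.09  0.11  0.28  sample_2
        0.41  0.17  0.09  sample_3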

    Read the article

  • How slow are bit fields in C++

    - by Shane MacLaughlin
    I have a C++ application that includes a number of structures with manually controlled bit fields, something like:

        #define FLAG1 0x0001
        #define FLAG2 0x0002
        #define FLAG3 0x0004

        class MyClass
        {
            ' '
            ' '
            unsigned Flags;
            int IsFlag1Set() { return Flags & FLAG1; }
            void SetFlag1Set() { Flags |= FLAG1; }
            void ResetFlag1() { Flags &= 0xffffffff ^ FLAG1; }
            ' '
            ' '
        };

    For obvious reasons I'd like to change this to use bit fields, something like:

        class MyClass
        {
            ' '
            ' '
            struct Flags
            {
                unsigned Flag1:1;
                unsigned Flag2:1;
                unsigned Flag3:1;
            };
            ' '
            ' '
        };

    The one concern I have with making this switch is that I've come across a number of references on this site stating how slow bit fields are in C++. My assumption is that they are still faster than the manual code shown above, but is there any hard reference material covering the speed implications of using bit fields on various platforms, specifically 32-bit and 64-bit Windows? The application deals with huge amounts of data in memory and must be both fast and memory efficient, which could well be why it was written this way in the first place.

    Read the article

  • UIImagePickerController crashes on rapid scrolling, slower than photos app

    - by vvanhee
    Most of the time, my image picker works perfectly (iOS 4.2.1). However, if I scroll very rapidly up and down about 4-6 times through my camera roll of about 300 photos, I get a crash. This never happens with the stock Photos app on the same iPhone 3GS. Also, I'm noticing that the stock Photos app scrolls much more smoothly than my image picker. Has anyone else noticed this behavior? I'd be interested if others could attempt this in their own apps and see if they crash. I don't think it's related to other objects hogging memory on my iPhone, because it's a simple app and this happens right after I start the app. It also doesn't seem to be related to messages sent to other released objects or over-releasing of other objects in viewDidUnload, based on my crash logs and the fact that the simulator responds well to simulated memory warnings. I think it might be a bug in the internal implementation of the UIImagePickerController. This is how I start the picker. I've done this multiple ways (including setting a retain property for the UIImagePickerController in my header and releasing on dealloc); this seems to be the best way (crashes least):

        UIImagePickerController *picker = [[UIImagePickerController alloc] init];
        picker.delegate = self;
        picker.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
        picker.allowsEditing = YES;
        [self presentModalViewController:picker animated:YES];
        [picker release];

    This is the crashed thread (I get various exception types):

        Exception Type:  SIGSEGV
        Exception Codes: SEGV_ACCERR at 0xfffffffff4faafa4
        Crashed Thread:  8
        ...
        Thread 8 Crashed:
        0  CoreFoundation     0x000494ea -[__NSArrayM replaceObjectAtIndex:withObject:] + 98
        1  PhotoLibrary       0x00008e0f -[PLImageTable _segmentAtIndex:] + 527
        2  PhotoLibrary       0x00008a21 -[PLImageTable _mappedImageDataAtIndex:] + 221
        3  PhotoLibrary       0x0000893f -[PLImageTable dataForEntryAtIndex:] + 15
        4  PhotoLibrary       0x000087e7 PLThumbnailManagerImageDataAtIndex + 35
        5  PhotoLibrary       0x00008413 -[PLThumbnailManager _dataForPhoto:format:width:height:bytesPerRow:dataWidth:dataHeight:imageDataOffset:imageDataFormat:preheat:] + 299
        6  PhotoLibrary       0x000b6c13 __-[PLThumbnailManager preheatImageDataForImages:withFormat:]_block_invoke_1 + 159
        7  libSystem.B.dylib  0x000d6680 _dispatch_call_block_and_release + 20
        8  libSystem.B.dylib  0x000d6ba0 _dispatch_worker_thread2 + 128
        9  libSystem.B.dylib  0x0007b251 _pthread_wqthread + 265

    Read the article

  • Can I do this?? Trying to load an object from within itself.

    - by Smikey
    Hi all! I have an object which conforms to the NSCoding protocol. The object contains a method to save itself to a file, like so:

        - (void)saveToFile {
            NSMutableData *data = [[NSMutableData alloc] init];
            NSKeyedArchiver *archiver = [[NSKeyedArchiver alloc] initForWritingWithMutableData:data];
            [archiver encodeObject:self forKey:kDataKey];
            [archiver finishEncoding];
            [data writeToFile:[self dataFilePath] atomically:YES];
            [archiver release];
            [data release];
        }

    This works just fine. But I would also like to initialise an empty version of this object and then call its own 'load' method to load whatever data exists on file into its variables. So I've created the following:

        - (void)loadFromFile {
            NSString *filePath = [self dataFilePath];
            if ([[NSFileManager defaultManager] fileExistsAtPath:filePath]) {
                NSData *data = [[NSMutableData alloc] initWithContentsOfFile:[self dataFilePath]];
                NSKeyedUnarchiver *unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:data];
                self = [unarchiver decodeObjectForKey:kDataKey];
                [unarchiver finishDecoding];
            }
        }

    This second method doesn't manage to load any variables. Is this line perhaps not possible?

        self = [unarchiver decodeObjectForKey:kDataKey];

    Ultimately I would like to use the code like this: one view controller takes user-entered input, creates an object and saves it to a file using [anObject saveToFile]; and a second view controller creates an empty object, then initialises its values to those stored on file by calling [emptyObject loadFromFile]. Any suggestions on how to make this work would be hugely appreciated. Thanks :) Michael
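
    Assigning to self inside an ordinary instance method has no effect on the caller's reference, so the decoded object is simply discarded. One hedged sketch of an alternative (the MyObject class and property names are invented; only kDataKey and dataFilePath come from the code above) is to decode into a temporary object and copy its state across:

        - (void)loadFromFile {
            NSString *filePath = [self dataFilePath];
            if (![[NSFileManager defaultManager] fileExistsAtPath:filePath]) {
                return;
            }
            NSData *data = [[NSData alloc] initWithContentsOfFile:filePath];
            NSKeyedUnarchiver *unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:data];

            // decode a separate instance, then copy its fields into this one
            MyObject *loaded = [unarchiver decodeObjectForKey:kDataKey];
            self.name  = loaded.name;      // illustrative properties
            self.score = loaded.score;

            [unarchiver finishDecoding];
            [unarchiver release];
            [data release];
        }

    Another option is to make loading a class-level factory (e.g. + (id)objectFromFile) that returns the decoded instance directly, so no state copying is needed.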

    Read the article
