Search Results

Search found 5642 results on 226 pages for 'coding efficiency'.


  • iPhone OS: Why is my managedModelObject not complying with Key Value Coding?

    - by nickthedude
    Ok, so I'm trying to build a stat tracker for my app, and I have built a data model object called StatTracker that keeps track of all the stuff I want it to. I can set and retrieve values using the accessors, but if I try to use KVC (i.e. setValue:forKey:) it fails, saying my StatTracker class is not KVC compliant:

        valueForUndefinedKey:]: the entity StatTracker is not key value coding-compliant for the key "timesLauched".

    Here's the code that triggers it:

        NSArray *statTrackerArray = [[CoreDataSingleton sharedCoreDataSingleton] getStatTracker];
        NSNumber *number1 = [NSNumber numberWithInt:(1 + [[(StatTracker *)[statTrackerArray objectAtIndex:0] valueForKey:@"timesLauched"] intValue])];
        [(StatTracker *)[statTrackerArray objectAtIndex:0] setValue:number1 forKey:@"timesLaunched"];
        NSError *error;
        if (![[[CoreDataSingleton sharedCoreDataSingleton] managedObjectContext] save:&error]) {
            NSLog(@"error writing to db");
        }

    Not sure if this is enough code for you folks; let me know if you need more. This would be so sweet if I could use KVC, because I could then abstract all this stat tracking into a single method call with a string argument for the value in question. At least that is what I hope to accomplish here. I'm actually now understanding the power of KVC; now I'm just trying to figure out how to make it work. Thanks! Nick

    Read the article

  • How forgiving do you need to be with new employees?

    - by Arcturus
    Recently we got a new developer on our team. We are getting him up to speed, and he is picking it all up quite fast, but a new developer means new (foreign) coding styles and ways to solve things. It feels kinda petty to start whining about coding style in the first three classes he codes, but how forgiving are you when dealing with new developers? Do you let them muddle on and point it out later, or do you wield the scepter of intolerance immediately? Where do you draw the line, or if you don't, why not? P.S. New guy, if you read this: you are doing great, keep up the good work ;) Edit: I've accepted the most up-voted answer, as most answers share the same message: be nice, but tell them asap! Thanks all for the nice answers, I really appreciate it!

    Read the article

  • Migrating a product and team from the startup race to quality development

    - by thevikas
    This is year 3 and the product is selling well enough. Now we need to enforce good software development practices. The goal is to monitor incoming bug reports and reduce them, allow never-ending features, and get ready to scale 10x. The phrases "test-driven development" and "continuous integration" are not even understood by the team, because they were all caught up in the first two years' product race. The tech team size is 5. The questions are: how to sell/convince the team and management on TDD, unit testing, coding standards, and documentation, using economics; how to train the team to do more than just feature coding and start writing unit tests alongside, which looks like more work and therefore needs more time; and how to plan for writing tests for all the production code in the backlog.

    Read the article

  • If you need more than 3 levels of indentation, you're screwed?

    - by jokoon
    Per the Linux kernel coding style document: "The answer to that is that if you need more than 3 levels of indentation, you're screwed anyway, and should fix your program." What can I deduce from this quote? On top of the fact that overly long methods are hard to maintain, are they hard or impossible for the compiler to optimize? I don't really understand whether this quote encourages better coding practice or states a mathematical/algorithmic sort of truth. I also read in a C++ optimization guide that "dividing a program into more functions improves its design" is commonly taught in school, but that it should not be overdone, since it can turn into a lot of jump/call instructions (even if the compiler can inline some functions by itself).
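    A common way to stay under that limit is to replace nesting with guard clauses or extracted functions. A minimal C++ sketch with hypothetical names (this illustration is not from the kernel document):

        #include <vector>

        struct Order { bool valid, shipped, paid; };
        void ship(Order& order) { order.shipped = true; }  // stub for the sketch

        // Nested version, three levels deep inside the loop:
        //     for (auto& order : orders)
        //         if (order.valid)
        //             if (!order.shipped)
        //                 if (order.paid)
        //                     ship(order);

        // Flattened with guard clauses: one level inside the loop.
        void shipPendingOrders(std::vector<Order>& orders) {
            for (Order& order : orders) {
                if (!order.valid) continue;
                if (order.shipped) continue;
                if (!order.paid) continue;
                ship(order);
            }
        }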

    Read the article

  • Conditions with common logic: a question of style, readability, and efficiency

    - by cdonner
    I have conditional logic that requires pre-processing that is common to each of the conditions (instantiating objects, database lookups, etc.). I can think of three possible ways to do this, but each has a flaw.

    Option 1:

        if A
            prepare processing
            do A logic
        else if B
            prepare processing
            do B logic
        else if C
            prepare processing
            do C logic
        // else do nothing
        end

    The flaw with option 1 is that the expensive code is redundant.

    Option 2:

        prepare processing   // not necessary unless A, B, or C
        if A
            do A logic
        else if B
            do B logic
        else if C
            do C logic
        // else do nothing
        end

    The flaw with option 2 is that the expensive code runs even when none of A, B, or C is true.

    Option 3:

        if (A, B, or C)
            prepare processing
        end
        if A
            do A logic
        else if B
            do B logic
        else if C
            do C logic
        end

    The flaw with option 3 is that the conditions A, B, and C are evaluated twice, and the evaluation is also costly. Now that I think about it, there is a variant of option 3 that I call option 4:

    Option 4:

        if (A, B, or C)
            prepare processing
            if A
                set D
            else if B
                set E
            else if C
                set F
            end
        end
        if D
            do A logic
        else if E
            do B logic
        else if F
            do C logic
        end

    While this does address the costly evaluations of A, B, and C, it makes the whole thing uglier and I don't like it. How would you rank the options, and are there any others that I am not seeing?
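    One way to get option 4's behavior without the extra flags (a suggestion, not something from the post) is to evaluate each condition exactly once into a local boolean, then branch on the cached results. A C++ sketch with hypothetical predicates:

        #include <iostream>

        bool condA() { return false; }            // stand-ins for the costly conditions
        bool condB() { return true;  }
        bool condC() { return false; }
        void prepareProcessing() { /* expensive shared setup */ }

        void handle() {
            // Each costly condition is evaluated at most once.
            const bool a = condA();
            const bool b = !a && condB();         // short-circuits like else-if
            const bool c = !a && !b && condC();

            if (!(a || b || c)) return;           // nothing matched: skip the setup

            prepareProcessing();                  // runs once, only when needed

            if (a)      { std::cout << "A logic\n"; }
            else if (b) { std::cout << "B logic\n"; }
            else        { std::cout << "C logic\n"; }
        }

        int main() { handle(); }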

    Read the article

  • How to test Text Search accuracy and efficiency?

    - by DEN
    I have created a web application. One of its features is text search, which also supports the boolean operators NOT, AND, and OR. However, I have no idea how to measure the search's accuracy and efficiency. For example, given these three documents:

        1. Probe identification system for a measurement instrument
        2. Pulse-based impedance measurement instrument
        3. Millimeter with filtered measurement mode

    the search behaves as follows:

        Input: measurement instrument
        Result: 1, 2

        Input: measurement OR instrument NOT milimeter
        Result: 1, 2, 3

    So, I have no idea what measures or algorithms to use to evaluate the accuracy and efficiency of the text search. Does anyone have any ideas?
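    One standard pair of accuracy measures from information retrieval (a suggestion, not from the post) is precision, the fraction of returned documents that are relevant, and recall, the fraction of relevant documents that were returned, computed against hand-labeled relevance judgments. A C++ sketch:

        #include <algorithm>
        #include <cstddef>
        #include <iostream>
        #include <iterator>
        #include <set>

        // Count documents that are both retrieved and relevant.
        std::size_t hits(const std::set<int>& retrieved, const std::set<int>& relevant) {
            std::set<int> both;
            std::set_intersection(retrieved.begin(), retrieved.end(),
                                  relevant.begin(), relevant.end(),
                                  std::inserter(both, both.begin()));
            return both.size();
        }

        int main() {
            std::set<int> retrieved = {1, 2};      // what the engine returned
            std::set<int> relevant  = {1, 2, 3};   // what a human judged relevant
            double p = double(hits(retrieved, relevant)) / retrieved.size();
            double r = double(hits(retrieved, relevant)) / relevant.size();
            std::cout << "precision " << p << ", recall " << r << "\n";  // 1.0 and 0.667
        }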

    Read the article

  • Why do marketing employees, product managers, etc. get their own offices, while as many programmers as possible are jammed into one room?

    - by TheImirOfGroofunkistan
    I don't understand why many (many) companies treat software developers like assembly line workers making widgets. Joel Spolsky has a great example of the problems this creates:

        With programmers, it's especially hard. Productivity depends on being able to juggle a lot of little details in short term memory all at once. Any kind of interruption can cause these details to come crashing down. When you resume work, you can't remember any of the details (like local variable names you were using, or where you were up to in implementing that search algorithm) and you have to keep looking these things up, which slows you down a lot until you get back up to speed.

        Here's the simple algebra. Let's say (as the evidence seems to suggest) that if we interrupt a programmer, even for a minute, we're really blowing away 15 minutes of productivity. For this example, let's put two programmers, Jeff and Mutt, in open cubicles next to each other in a standard Dilbert veal-fattening farm. Mutt can't remember the name of the Unicode version of the strcpy function. He could look it up, which takes 30 seconds, or he could ask Jeff, which takes 15 seconds. Since he's sitting right next to Jeff, he asks Jeff. Jeff gets distracted and loses 15 minutes of productivity (to save Mutt 15 seconds).

        Now let's move them into separate offices with walls and doors. Now when Mutt can't remember the name of that function, he could look it up, which still takes 30 seconds, or he could ask Jeff, which now takes 45 seconds and involves standing up (not an easy task given the average physical fitness of programmers!). So he looks it up. So now Mutt loses 30 seconds of productivity, but we save 15 minutes for Jeff. Ahhh!

    Why don't managers and owners see this?

    Read the article

  • Dynamic Memory Allocation and Memory Management

    - by Bunkai.Satori
    In an average game there are hundreds or maybe thousands of objects in the scene. Is it completely correct to allocate memory for all objects, including gun shots (bullets), dynamically via the default new()? Should I create a memory pool for dynamic allocation, or is there no need to bother with this? What if the target platforms are mobile devices? Is there a need for a memory manager in a mobile game? Thank you. Language used: C++; currently developed under Windows, but planned to be ported later.
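    A minimal sketch of the object-pool technique mentioned in the question, with hypothetical names; the point is that all storage is allocated once up front, so firing a bullet never touches the heap at play time:

        #include <cstddef>
        #include <vector>

        // Fixed-capacity pool for one object type (e.g. bullets).
        template <typename T, std::size_t Capacity>
        class ObjectPool {
        public:
            ObjectPool() {
                free_.reserve(Capacity);
                for (std::size_t i = 0; i < Capacity; ++i)
                    free_.push_back(&slots_[i]);
            }
            T* acquire() {                         // O(1), no allocation
                if (free_.empty()) return nullptr; // pool exhausted
                T* obj = free_.back();
                free_.pop_back();
                return obj;
            }
            void release(T* obj) { free_.push_back(obj); }
        private:
            T slots_[Capacity];                    // storage allocated once
            std::vector<T*> free_;                 // stack of unused slots
        };

        struct Bullet { float x, y, vx, vy; };

        int main() {
            ObjectPool<Bullet, 256> bullets;
            Bullet* b = bullets.acquire();         // when the gun fires
            if (b) {
                *b = Bullet{0, 0, 1, 0};
                bullets.release(b);                // when the bullet despawns
            }
        }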

    Read the article

  • Multi-threaded game - updating, rendering, and how to split them

    - by CodeBunny
    From the StackOverflow post (it was recommended I move this): I'm working on a game engine, and I've made pretty good progress. However, my engine is single-threaded, and splitting updating and rendering into separate threads sounds like a very good idea. How should I do this? Single-threaded game engines are (conceptually) very easy to make: you have a loop where you update, render, sleep, and repeat. However, I can't think of a good way to break updating and rendering apart, especially if they run at different rates (say I go through the update loop 25 times a second and render at 60 fps). What if I begin updating halfway through a render pass, or vice versa?
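    One common pattern (a sketch, not the only answer) is to have the update thread simulate at a fixed rate and publish complete snapshots of the game state, while the render thread copies the latest snapshot under a lock and draws at its own rate, so a frame never sees a half-finished update. All names here are hypothetical:

        #include <atomic>
        #include <chrono>
        #include <mutex>
        #include <thread>

        struct GameState { /* positions, etc. */ };

        std::mutex snapshotMutex;
        GameState snapshot;                        // last fully updated state
        std::atomic<bool> running{true};

        void updateLoop() {
            using clock = std::chrono::steady_clock;
            const auto step = std::chrono::milliseconds(40);   // 25 updates/second
            GameState working;
            auto next = clock::now();
            while (running) {
                // simulate(working, step);        // mutate the private copy only
                {
                    std::lock_guard<std::mutex> lock(snapshotMutex);
                    snapshot = working;            // publish a complete state
                }
                next += step;
                std::this_thread::sleep_until(next);
            }
        }

        void renderLoop() {
            while (running) {
                GameState toDraw;
                {
                    std::lock_guard<std::mutex> lock(snapshotMutex);
                    toDraw = snapshot;             // copy out, render without the lock
                }
                // draw(toDraw);                   // e.g. 60 fps, decoupled from updates
            }
        }

        int main() {
            std::thread u(updateLoop), r(renderLoop);
            std::this_thread::sleep_for(std::chrono::seconds(1));
            running = false;
            u.join();
            r.join();
        }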

    Read the article

  • Designing a large database with multiple sources

    - by CatchingMonkey
    I have been tasked with redesigning, or at worst optimising, the structure of a database for a data warehouse. Currently the database has 4 source databases (due to expand to X number of others), all of which have their own data structures, naming conventions, etc. At the moment an overnight SSIS package pulls the data from the various sources and, for each source, converts the data into a standardised, usable format. These tables are then appended to each other, creating a 60-million-row, 40-column beast! This table is then used in a variety of ways, from an OLAP cube to a web front end. The structure has been in place for a very long time, and through my work I have been able to prove the advantages of normalisation, which is the way I would like to go. The problem for me is that the overnight process takes so long that I don't want to spend additional time normalising the last table into something usable. Can anyone offer any insight or ideas into the best way to restructure or optimise the database efficiently? Edit: All the databases are MS SQL Server 2008 R2. Thanks in advance, CM

    Read the article

  • Out of Memory when building an application

    - by Jacob Neal
    I have quite a major problem with my Multimedia Fusion 2 game. I finished it months ago; however, the only thing keeping me from finally compiling the game into an executable file is an error message that pops up every time I try, simply saying "Out of Memory". It's highly frustrating to be halted at this point by this message, and I have tried everything I could come up with to fix it, including compressing the runtime and sounds and increasing the priority of MMF2 all the way to realtime in the task manager. I'm begging someone to toss me a bone on this problem; any advice at all would be much appreciated.

    Read the article

  • Source of (programmer) inefficiency

    - by Daniel
    I am interested in gaining better insight into the possible reasons for personal inefficiency as programmers (and only in programming) due to, simply, our own errors (because we are human, well, almost all of us). I am not interested in how productive we are or in how many adjustments the customer asks for when the work is done, but in where and how each of us spends part of our time on unproductive tasks where there is no one to blame except ourselves. Excluding ego-feeding and/or self-gratification, what I am trying to get (for all of us) is: what are the common issues eating our time; insight into the reasons for those issues; and simple ways for us, personally (not delegating actions to others or our organizations), to correct our own problems. Please do not think in academic terms, but aim at the opportunity to compare our daily experiences and understand what our personal deficiencies are and how we try to fix them. If you want to respond to this post, please: add to the list if you see something important (or obvious) missing; honestly highlight or name your own first issue, telling how you try to address and solve it, acting on yourself and yourself only, in a sort of continuous quality improvement. My criterion for accepting an answer is: the best solution (in feasibility and utility) to fix one (or more) of the problems on the list. Of course, selecting an error is not a vote on our skills: maybe we are hyper-professional programmers who lose only ten minutes a year, or maybe we are terribly inefficient, losing a couple of days a week; the reasons for inefficiency could be the same, just at a different scale. A possible list:

        - Plain errors in names (variables, functions).
        - Inability to see the obvious in your code.
        - Misreading.
        - Lack of concentration.
        - Trying to use a technology you have not mastered.
        - Errors with data types.
        - Time required to understand your previous code or your documentation.
        - Trying to do more than requested because you enjoy it.
        - Using solutions more complicated than required because you enjoy it.
        - Plain logical errors.
        - Errors due to your own faults in communication.
        - Distraction.

    My first personal issue: "Trying to use a technology you do not master." I have to use several technologies daily, and I often need to spend significant time correcting code because my assumptions were plainly wrong. The reason for this: production pressure makes it difficult to find the time to learn. I try to address this by reading technical books, as many as I can, even though this consumes a lot of time.

    Read the article

  • Best memory allocation strategy for iOS?

    - by Mr.Gando
    Hey guys, I'm debating with myself about memory allocation on iOS. I write most of my code in C++ and I really like using object pools, free lists, etc. in order to pre-allocate a lot of the stuff that I would otherwise be constantly allocating and deallocating during the course of my game (like particles, game entities, etc.). Still, on iOS it's not like we are developing for a console like the PSP, where I know for a fact that I'll get a fixed amount of memory; iOS will issue "memory warnings" when the system needs memory. Does anyone have suggestions about this? Is it less serious now that the new iPod touch/iPhone 4 carry more RAM, or is it still a big concern? Thanks!

    Read the article

  • Powder games: how do they work?

    - by Marc Müller
    Hey guys, I recently found these two gems: http://powdertoy.co.uk/ and http://dan-ball.jp/en/javagame/dust/. My question is: how are the physics handled efficiently with so many elements? Am I just severely underestimating modern computing power, or is it possible to 'just' have a two-dimensional array, where each cell describes what is placed at the corresponding position, and simulate every cell in each step? Or are more complex things being done, like summarising large areas of the same kind into a single data set and splitting that set as needed? Are there any open-source games like this I could look at?
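    For scale, the brute-force version described above is more feasible than it may look: a 600x400 grid is only 240,000 cells per pass. A minimal falling-sand step in C++, purely as an illustration of the dense per-cell approach (real powder games add many materials and optimisations):

        #include <array>
        #include <cstdint>

        constexpr int W = 600, H = 400;
        enum Cell : std::uint8_t { Empty, Sand };
        using Grid = std::array<std::array<Cell, W>, H>;

        void step(Grid& g) {
            // Scan bottom-up so a grain falls at most one cell per pass.
            for (int y = H - 2; y >= 0; --y) {
                for (int x = 0; x < W; ++x) {
                    if (g[y][x] != Sand) continue;
                    if (g[y + 1][x] == Empty) {                         // fall straight down
                        g[y + 1][x] = Sand; g[y][x] = Empty;
                    } else if (x > 0 && g[y + 1][x - 1] == Empty) {     // slide down-left
                        g[y + 1][x - 1] = Sand; g[y][x] = Empty;
                    } else if (x + 1 < W && g[y + 1][x + 1] == Empty) { // slide down-right
                        g[y + 1][x + 1] = Sand; g[y][x] = Empty;
                    }
                }
            }
        }

        int main() {
            static Grid g{};                 // zero-initialised: all Empty
            g[0][W / 2] = Sand;              // drop one grain at the top
            for (int i = 0; i < H; ++i)      // let it fall to the floor
                step(g);
        }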

    Read the article

  • Animating a tile with blitting is taking up memory

    - by Kid
    I am trying to animate a specific tile in my 2D array using blitting. The animation consists of three different 16x16 sprites in a tilesheet. The code below works perfectly, BUT it is leaking memory: every second, the Flash player takes up another 140 kB. What part of the following code could be causing the leak?

        // The Rectangle finds where on the 2d array we should clear the pixels;
        // fillRect sets alpha 0 at that spot before we copy in the next sprite.
        // tileType holds what kind of tile the next tile in the animation is
        // (from the tilesheet); drawTile() copyPixels the sprite into position.
        public function animateSprite():void {
            tileGround.bitmapData.lock();
            if (anmArray[0].tileType > 42) {
                anmArray[0].tileType = 40;
                frameCount = 0;
            }
            var rect:Rectangle = new Rectangle(anmArray[0].xtile * ts, anmArray[0].ytile * ts, ts, ts);
            tileGround.bitmapData.fillRect(rect, 0);
            anmArray[0].tileType = 40 + frameCount;
            drawTile(anmArray[0].tileType, anmArray[0].xtile, anmArray[0].ytile);
            frameCount++;
            tileGround.bitmapData.unlock();
        }

        public function drawTile(spriteType:int, xt:int, yt:int):void {
            var tileSprite:Bitmap = getImageFromSheet(spriteType, ts);
            var rec:Rectangle = new Rectangle(0, 0, ts, ts);
            var pt:Point = new Point(xt * ts, yt * ts);
            tileGround.bitmapData.copyPixels(tileSprite.bitmapData, rec, pt, null, null, true);
        }

        public function getImageFromSheet(spriteType:int, size:int):Bitmap {
            var sheetColumns:int = tSheet.width / ts;
            var col:int = spriteType % sheetColumns;
            var row:int = Math.floor(spriteType / sheetColumns);
            var rec:Rectangle = new Rectangle(col * ts, row * ts, size, size);
            var pt:Point = new Point(0, 0);
            // Note: a new Bitmap and BitmapData are created on every call,
            // and the caller never dispose()s them.
            var correctTile:Bitmap = new Bitmap(new BitmapData(size, size, false, 0));
            correctTile.bitmapData.copyPixels(tSheet, rec, pt, null, null, true);
            return correctTile;
        }

    Read the article

  • Can web apps allow fast data-typists to "type-ahead"?

    - by user61852
    In some data entry contexts, I've seen data typists type really fast and know the app they use so well, with such a mechanical quality to their work, that they can "type ahead", i.e. continue typing and tabbing and entering faster than the display updates, so that on many occasions they are typing in the data for the next form before it draws itself. Then, when that next entry form appears, their buffered keystrokes fill the text boxes and they continue typing, selecting, etc. In contexts like this, this speed is desirable, since these people are really productive. I think this "typing ahead of time" is only possible in desktop apps, but I may be wrong. My question is whether this way of handling the keyboard buffer (which in desktop apps requires no extra programming) is achievable in web apps, or whether it is impossible because of the way web apps work, handle sessions, etc. (network latency and the overhead of generating new web pages)? Edit: By "type ahead" I mean keyboard type-ahead (typing faster than the next entry form can load), not suggest-as-you-type like Google. Typeahead is a feature of computers and software (and some typewriters) that enables users to continue typing regardless of program or computer operation; the user may type at whatever speed he or she desires, and if the receiving software is busy at the time, the input will be handled later. Often this means that keystrokes entered will not be displayed on the screen immediately. This programming technique for handling user input relies on what is known as a keyboard buffer.

    Read the article

  • How to deal with large open worlds?

    - by Mr. Beast
    In most games the whole world is small enough to fit into memory; however, there are games where this is not the case. How is this achieved? How can the game still run fluidly even though the world is so big and maybe even dynamic? How does the world change in memory while the player moves? Examples of this include the TES games (Skyrim, Oblivion, Morrowind), MMORPGs (World of Warcraft), Diablo, Titan Quest, Dwarf Fortress, and Far Cry.
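    The usual answer is streaming: only a window of world chunks around the player is kept resident, with chunks loaded as the player approaches and evicted as they leave. A C++ sketch of the bookkeeping, with hypothetical names and a stubbed-out loader:

        #include <cmath>
        #include <cstdlib>
        #include <map>
        #include <utility>

        struct Chunk { /* terrain, objects, ... */ };
        using ChunkKey = std::pair<int, int>;

        class ChunkManager {
        public:
            void update(float playerX, float playerZ) {
                const int cx = int(std::floor(playerX / kChunkSize));
                const int cz = int(std::floor(playerZ / kChunkSize));
                // Make sure every chunk within the radius is loaded.
                for (int x = cx - kRadius; x <= cx + kRadius; ++x)
                    for (int z = cz - kRadius; z <= cz + kRadius; ++z)
                        if (!resident_.count({x, z}))
                            resident_[{x, z}] = loadFromDisk(x, z);
                // Evict chunks that fell outside the window.
                for (auto it = resident_.begin(); it != resident_.end();) {
                    if (std::abs(it->first.first - cx) > kRadius ||
                        std::abs(it->first.second - cz) > kRadius)
                        it = resident_.erase(it);   // frees the chunk's memory
                    else
                        ++it;
                }
            }
        private:
            static constexpr float kChunkSize = 64.0f;
            static constexpr int kRadius = 2;       // 5x5 chunks resident at once
            Chunk loadFromDisk(int, int) { return Chunk{}; }  // stub for the sketch
            std::map<ChunkKey, Chunk> resident_;
        };

        int main() {
            ChunkManager world;
            world.update(0.0f, 0.0f);      // spawn: the 5x5 window loads
            world.update(500.0f, 0.0f);    // player moved: new chunks in, old ones out
        }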

    Read the article

  • Effective versus efficient code

    - by Todd Williamson
    TL;DR: Quick and dirty code, or "correct" (insert your definition of this term) code? There is often a tension between "efficient" and "effective" in software development. "Efficient" often means code that is "correct" in the sense of adhering to standards and using widely accepted patterns and approaches for structure, regardless of project size, budget, etc. "Effective" is not about being "right", but about getting things done; this often results in code that falls outside the bounds of commonly accepted "correct" standards and usage. Usually the people paying for the development effort have dictated ahead of time which they value more. An organization that lives in a technical space will tend towards the efficient end; others will tend towards the effective. Developers often refuse to compromise their favored approach for the other. In my own experience, people with a formal education in software development tend towards the efficient camp, while those who picked up software development more or less as a tool to get things done tend towards the effective camp. These camps don't get along very well, and managing a team of developers who are not all in one camp is challenging. In your own experience, which camp do you land in, and do you find yourself having to justify your approach to others? To management? To other developers?

    Read the article

  • Triangle Strips and Tangent Space Normal Mapping

    - by Koarl
    Short version: do triangle strips and tangent-space normal mapping go together? According to quite a lot of tutorials on bump mapping, it is common practice to derive tangent-space matrices in a vertex program, transform the light direction vector(s) to tangent space, and then pass them on to a fragment program. However, if one is using triangle strips or index buffers, it is a given that the vertex buffer contains vertices that sit on border edges and would thus require more than one normal to derive the tangent-space matrices that fragment programs interpolate between. Is there any reasonable way to avoid duplicate vertices in your buffer and still use tangent-space normal mapping? Which do you think is better: having normals and tangents encoded in the assets and optimizing the geometry handling to alleviate the cost of duplicate vertices, or using triangle strips and computing normals/tangents entirely at run time? Thinking about it, the first seems the more reasonable answer, but then why might my professor still be fussing about triangle strips when it seems so obvious?
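    For reference, one standard way (not taken from the question) to compute a per-triangle tangent from positions and UVs; per-vertex tangents are then averaged over the triangles sharing each vertex, which is exactly why a vertex on a UV seam has to be duplicated: its two sides want different tangents. A C++ sketch:

        #include <cstdio>

        struct Vec3 { float x, y, z; };
        struct Vec2 { float u, v; };

        // Tangent of one triangle, assuming non-degenerate UVs.
        Vec3 triangleTangent(const Vec3 p[3], const Vec2 t[3]) {
            const Vec3 e1{p[1].x - p[0].x, p[1].y - p[0].y, p[1].z - p[0].z};
            const Vec3 e2{p[2].x - p[0].x, p[2].y - p[0].y, p[2].z - p[0].z};
            const float du1 = t[1].u - t[0].u, dv1 = t[1].v - t[0].v;
            const float du2 = t[2].u - t[0].u, dv2 = t[2].v - t[0].v;
            const float r = 1.0f / (du1 * dv2 - du2 * dv1);
            return Vec3{r * (dv2 * e1.x - dv1 * e2.x),
                        r * (dv2 * e1.y - dv1 * e2.y),
                        r * (dv2 * e1.z - dv1 * e2.z)};
        }

        int main() {
            const Vec3 p[3] = {{0, 0, 0}, {1, 0, 0}, {0, 1, 0}};
            const Vec2 t[3] = {{0, 0}, {1, 0}, {0, 1}};
            const Vec3 tan = triangleTangent(p, t);
            std::printf("tangent: %g %g %g\n", tan.x, tan.y, tan.z);  // prints 1 0 0
        }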

    Read the article

  • Acceptable GC frequency for a SlimDX/Windows/.NET game?

    - by Rei Miyasaka
    I understand that the Windows GC is much better than the Xbox/WP7 GC, being generational and multithreaded, so I don't need to worry quite as much about avoiding memory allocation. SlimDX even has some unavoidable functions that generate a certain amount of garbage (specifically, MapSubresource creates DataBoxes), yet people don't seem too upset about it. I'd like to use some functional paradigms in my code too, which also means creating objects like closures and monads. I know premature optimization isn't a good thing, but are there rules of thumb or metrics I can follow to know whether I need to cut down on allocations? Is, say, one gen 0 GC per frame too much? One thing that has me stumped is object promotion. Gen 0 collections supposedly finish within a millisecond or two, but if I'm understanding correctly, it's the promotions into gen 1 and gen 2 that start to hurt. I'm not sure how I can predict/prevent these.

    Read the article
