Search Results

Search found 14074 results on 563 pages for 'programmers'.

  • Is creating a separate pool for each individual png image in the same class appropriate?

    - by Panzercrisis
    I'm still possibly a little green about object pooling, and I want to make sure something like this is a sound design pattern before really embarking upon it. Take the following code (which uses the Starling framework in ActionScript 3):

        [Embed(source = "/../assets/images/game/misc/red_door.png")]
        private const RED_DOOR:Class;
        private const RED_DOOR_TEXTURE:Texture = Texture.fromBitmap(new RED_DOOR());
        private const m_vRedDoorPool:Vector.<Image> = new Vector.<Image>(50, true);
        . . .
        public function produceRedDoor():Image {
            // get a Red Door image
        }
        public function retireRedDoor(pImage:Image):void {
            // retire a Red Door Image
        }

    Except that there are four colors: red, green, blue, and yellow. So now we have a separate pool for each color, a separate produce function for each color, and a separate retire function for each color. Additionally, there are several items in the game that follow this four-color pattern, so for each of them we have four pools, four produce functions, and four retire functions. There are more colors involved in the images themselves than just their predominant one, so throwing all the doors, for instance, into a single pool and then changing their color properties around isn't going to work. Also, I avoided the static keyword because static access is slow in AS3. Is this the right way to do things?
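    What the question is weighing is N hand-written pools versus one parameterized pool type. As a point of comparison, here is a minimal sketch of the latter in Java (class and method names are invented for illustration; the idea carries over to AS3 with a factory function per texture):

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.function.Supplier;

        // One generic pool class; each color/texture gets an *instance*,
        // not its own hand-written produce/retire pair.
        final class ImagePool<T> {
            private final Deque<T> free = new ArrayDeque<>();
            private final Supplier<T> factory; // creates a fresh image for this texture

            ImagePool(Supplier<T> factory, int preallocate) {
                this.factory = factory;
                for (int i = 0; i < preallocate; i++) {
                    free.push(factory.get());
                }
            }

            T produce() {
                return free.isEmpty() ? factory.get() : free.pop();
            }

            void retire(T image) {
                free.push(image);
            }
        }

    Four colors then become four instances of the same class (redDoors.produce(), redDoors.retire(img), and so on), and the per-color code duplication disappears.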

  • What makes signed integers behave differently?

    - by 000
    In this example of x86_64 hex/disassembled code I see:

        48B80000000000000000    mov rax, 0x0

        Signed Byte      52
        Unsigned Byte    52
        Signed Short     14388
        Unsigned Short   14388
        Signed Int       943863860
        Unsigned Int     943863860
        Signed Int64     3472328296363079732
        Unsigned Int64   3472328296363079732
        Float            4.630555e-05
        Double           1.39804332763832e-76
        String           48B80000000000000000

    which to me appears to have the same functionality as:

        48C7C000000000    mov rax, 0x0

        Signed Byte      52
        Unsigned Byte    52
        Signed Short     14388
        Unsigned Short   14388
        Signed Int       927152180
        Unsigned Int     927152180
        Signed Int64     3472328377950746676
        Unsigned Int64   3472328377950746676
        Float            1.163599e-05
        Double           1.39806836023098e-76
        String           48C7C000000000

    How is the first example treated differently from the second example?

  • What happens at control invoke function?

    - by user65909
    A question about the form control Invoke function. Control1 is created on thread1. If you want to update something in Control1 from thread2, you must do something like:

        delegate void SetTextCallback(string txt);

        void setText(string txt)
        {
            if (this.textBox1.InvokeRequired)
            {
                SetTextCallback d = new SetTextCallback(setText);
                this.Invoke(d, new object[] { txt });
            }
            else
            {
                // this will run on thread1 even when called from thread2
                this.textBox1.AppendText(txt);
            }
        }

    What happens behind the scenes here? This Invoke behaves differently from a normal method invocation. When you want to call a function on an object on a specific thread, that thread must be waiting on some queue of delegates and executing the incoming delegates. Is it correct that the Windows Forms control Invoke function is completely different from a standard object method invocation?
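    The "queue of delegates" guess is essentially the right mental model. A minimal sketch of it in Java (the Dispatcher class is hypothetical; this illustrates the general mechanism, not the actual WinForms internals):

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        // A thread that does nothing but drain a work queue: other threads post
        // Runnables, and only this thread ever executes them, so any state it
        // owns stays effectively single-threaded.
        final class Dispatcher extends Thread {
            private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

            void invokeLater(Runnable work) { // safe to call from any thread
                queue.add(work);
            }

            @Override
            public void run() {
                try {
                    while (!isInterrupted()) {
                        queue.take().run(); // execute posted work items in order
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }

    In Windows Forms the role of the queue is played by the message pump of the thread that created the control's window handle: Control.Invoke posts a message to that thread and blocks until it has been processed, which is why no explicit queue appears in user code.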

  • I don't understand why algorithms are so special

    - by Jessica
    I'm a student of computer science trying to soak up as much information on the topic as I can during my free time. I keep returning to algorithms time and again in various formats (online courses, books, web tutorials), but the concept fails to sustain my attention. I just don't understand: why are algorithms so special? I can tell you why fractals are awesome, why the golden ratio is awesome, why origami is awesome, and the scientific applications of all of the above. Heck, I even love Newton's laws and conic sections. But when it comes to algorithms, I'm just not astounded. They don't strike me as offering any new insight into human cognition. I was expecting algorithms to shatter my preconceptions and alter my thinking, but time and time again they fail miserably. What am I doing wrong in my approach? Can someone tell me why algorithms are so awesome?

  • Functional programming readability

    - by Jimmy Hoffa
    I'm curious about this because I recall that before learning any functional languages, I thought them all horribly, awfully, terribly unreadable. Now that I know Haskell and F#, I find it takes a little longer to read less code, but that little code does far more than an equivalent amount would in an imperative language, so it feels like a net gain, and I'm not even extremely practiced in functional programming. Here's my question: I constantly hear from OOP folks that functional style is terribly unreadable. Is this the case and I'm deluding myself, or, if they took the time to learn a functional language, would the whole style no longer seem more unreadable than OOP? Has anybody seen any evidence or got any anecdotes where they saw this go one way or another frequently enough to be able to say? If writing functionally really is of lower readability, then I don't want to keep using it, but I really don't know whether that's the case or not.
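    For a concrete, if tiny, version of the trade-off, here is the same computation written both ways in Java; which form reads better is exactly the judgment in dispute:

        import java.util.List;

        public class Readability {
            // Imperative: sum the squares of the even numbers.
            static int sumEvenSquaresLoop(List<Integer> xs) {
                int total = 0;
                for (int x : xs) {
                    if (x % 2 == 0) {
                        total += x * x;
                    }
                }
                return total;
            }

            // Functional style: the same computation as a pipeline.
            static int sumEvenSquaresStream(List<Integer> xs) {
                return xs.stream()
                         .filter(x -> x % 2 == 0)
                         .mapToInt(x -> x * x)
                         .sum();
            }

            public static void main(String[] args) {
                List<Integer> xs = List.of(1, 2, 3, 4, 5, 6);
                System.out.println(sumEvenSquaresLoop(xs));   // 56
                System.out.println(sumEvenSquaresStream(xs)); // 56
            }
        }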

  • Why is Adobe Air so underrated for building mobile apps?

    - by Marcelo de Assis
    I have worked with Adobe Flash related technologies for the last 5 years, although I am not a big fan of Adobe. I see some little bugs happening in some apps, but I cannot imagine why a lot of big companies do not even consider Adobe Air as a viable technology for their mobile apps. I see a lot of mobile developer positions asking for experts in Android or iOS, but far fewer positions asking for Adobe Air, even though Adobe Air apps have the advantage of being multi-platform, with the same app working on BlackBerry, iOS, and Android. It is so much easier to develop a game using Flash than using the Android SDK, for example. Does it really have flaws (which I have never seen), or is it just some kind of mass prejudgment? I would also like to hear what a project manager or an indie developer weighs when choosing a platform for building apps.

  • Do certain corporations hold more weight on a resume?

    - by Ryan
    Would a developer/tester position at Google, Apple, Microsoft, etc. (any large tech company most people have heard of) be more valuable on a resume than working as a developer/tester somewhere where tech isn't the main objective (a shipping company, restaurant chain, insurance company, etc.)? Let's say you have two offers, and you only plan to stay with either company for 5 years before trying to get a better position at a different company: one at Google with a starting salary of $60,000, and one at some insurance company with a starting salary of $80,000. I guess what I'm trying to say is: with universities, if someone graduates from MIT or Carnegie Mellon, they can pretty much get a job anywhere. Does someone seem more valuable after having worked at a company like Google, Apple, or Microsoft? In other words, would taking the lower-paying job be better in the long run since it's at Google, or would it be better to take the higher-paying job at the insurance company?

  • What is the difference between Static code analysis and code review?

    - by Xander
    I just wanted to know the difference between static code analysis and code review. How are each of these done? What tools are available today for code review and for static analysis of PHP? I would also like to know about good code review tools for any language. Thanks in advance. Xander Cage Note: I am asking this because I was not able to understand the difference. Please, I expect better answers than "I am Mr. Geek and you asked an irrelevant bla bla... this is closed." I know this sounds mean, but I am sorry.
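    To make the distinction concrete: static analysis is a tool reading the code without running it and pattern-matching for known trouble, while code review is a human reading it for intent. A small Java illustration of the kind of thing an analyzer flags mechanically (hedged: exact warnings vary by tool; SpotBugs-style null checking is one example):

        public class Example {
            // An analyzer can infer that reaching length() with a null 'name' is
            // possible (the null check proves the author expected null) and flag
            // a likely NullPointerException, with no execution needed.
            static int nameLength(String name) {
                if (name == null) {
                    System.out.println("no name given");
                }
                return name.length(); // flagged: possible null dereference
            }
        }

    A reviewer would catch this too, but a reviewer also asks questions no analyzer can, such as whether returning a length is even the right behavior when no name was given.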

  • Is there a SUPPORTED way to run .NET 4.0 applications natively on a Mac?

    - by Dan
    What, if any, are the Microsoft-supported options for running C#/.NET 4.0 code natively on the Mac? Yes, I know about Mono, but among other things, it lags Microsoft. And Silverlight only works in a web browser. A VMware-type solution won't cut it either. Here's the subjective part (which might get this closed): is there any semi-authoritative answer to why Microsoft just doesn't support .NET on the Mac itself? It would seem they could build on Silverlight and/or buy Mono and quickly be there. No need for native Visual Studio; cross-compiling and remote debugging are fine. The reason I ask is that where I work there is a growing amount of uncertainty about the future, which is causing a lot more development to be done in C++ instead of C#; brand new projects are choosing C++. Nobody wants to tell management 18–24 months from now "sorry" should the Mac (or iPad) become a requirement. C++ is seen as the safer option, even if it (arguably) means a loss in productivity today.

  • Web framework able to handle many concurrent users [closed]

    - by Jonas
    Social networking sites need to handle many concurrent users, e.g. for chat functionality. Which web frameworks scale well and are able to handle more than 10,000 concurrent users connected via Comet or WebSockets? The server is a Linux VPS with limited memory, e.g. 1GB-8GB. I have been looking at some Java frameworks, but they consume a lot of memory per connection, so I'm looking for other alternatives too. Are there any good frameworks that can handle more than 10,000 concurrent users with limited memory resources?
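    The memory ceiling usually comes from the thread-per-connection model (each thread costs a stack, often hundreds of kilobytes) rather than from the framework as such; event-driven designs keep per-connection cost down to roughly a channel and a buffer. A bare-bones sketch of that model using plain Java NIO, with an echo server standing in for the Comet/WebSocket layer:

        import java.net.InetSocketAddress;
        import java.nio.ByteBuffer;
        import java.nio.channels.*;
        import java.util.Iterator;

        // One thread, one selector, many connections: per-connection memory is a
        // channel plus buffer space, instead of a full thread stack.
        public class EchoServer {
            public static void main(String[] args) throws Exception {
                Selector selector = Selector.open();
                ServerSocketChannel server = ServerSocketChannel.open();
                server.bind(new InetSocketAddress(8080));
                server.configureBlocking(false);
                server.register(selector, SelectionKey.OP_ACCEPT);

                ByteBuffer buf = ByteBuffer.allocate(4096);
                while (true) {
                    selector.select(); // block until some channel is ready
                    Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                    while (keys.hasNext()) {
                        SelectionKey key = keys.next();
                        keys.remove();
                        if (key.isAcceptable()) {
                            SocketChannel client = server.accept();
                            client.configureBlocking(false);
                            client.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {
                            SocketChannel client = (SocketChannel) key.channel();
                            buf.clear();
                            int n = client.read(buf);
                            if (n < 0) { client.close(); continue; }
                            buf.flip();
                            client.write(buf); // echo back (partial writes ignored here)
                        }
                    }
                }
            }
        }

    At 10,000 connections the footprint of this design is dominated by socket buffers, which is why evented servers fit a small VPS better than thread-per-request ones.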

  • Social IT guy barrier [closed]

    - by sergiol
    Possible Duplicate: How do you deal with people who ask you to fix their computer? Hello. Almost every person who deserves the title of programmer has faced the problem of people who do not even remember that these professionals exist unless they have serious problems with their computer or some other IT-related issue. Maybe my post will be considered off-topic, but I think it is a very important question. As Joel Spolsky says, IT guys are not Asperger geeks; they need a social life like everybody else. But people who are always asking favors from us can deeply ruin our social and personal lives. I have experienced this myself. This has generated articles like http://www.lifereboot.com/2007/10-reasons-it-doesnt-pay-to-be-the-computer-guy/ and http://ecraazul.wordpress.com/2009/01/29/o-gajo-da-informatica-de-a-a-z/ (I received the latter in my mailbox; it is in Portuguese, but I believe it is translated from English). Basically the idea is to criticize people who are always asking us for favors. It is even more annoying if you are highly specialized in some subject and a person asks you a question completely outside that context. For example, you are a VBA programmer and somebody tells you that his or her mobile Internet pen stopped working five days ago and that they need your help to get it working again. When you go to a doctor to fix your legs, you don't go to an ophthalmologist; you go to an orthopedist. And you pay. I don't know how it works in other countries, but in Portugal being a doctor is such an overvalued job that they earn a lot of money, and almost nobody asks them for free favors. So, my question is: what kind of social barrier (or whatever else) do you use to protect yourself from this situation?

  • Why should most logic be in the monitor objects and not in the thread objects when writing concurrent software in Java?

    - by refuser
    When I took the Realtime and Concurrent Programming course, our lecturer told us that when writing concurrent programs in Java using monitors, most of the logic should be in the monitor and as little as possible in the threads that access it. I never really understood why, and I really would like to. Let me clarify. In this particular case we had several classes:

        Lift extends Thread
        Person extends Thread
        LiftView
        Monitor (all methods synchronized)

    This is nothing we came up with; our task was to implement a lift simulation with persons waiting on different floors, and these were the class skeletons that were given. Then our lecturer said to implement most of the logic in the monitor (he was talking about the class Monitor as THE monitor) and as little as possible in the threads. Why would he make a statement like that?
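    A sketch of what that advice tends to look like in practice (names invented for illustration): the monitor owns the shared state and every wait/notify guard, while the thread classes shrink to thin drivers:

        // The monitor: all shared state and all blocking logic live here.
        class LiftMonitor {
            private int currentFloor = 0;

            synchronized void setFloor(int floor) {
                currentFloor = floor;
                notifyAll(); // wake anyone waiting for this floor
            }

            synchronized void awaitFloor(int floor) throws InterruptedException {
                while (currentFloor != floor) { // guard re-checked after each wake-up
                    wait();
                }
            }
        }

        // The thread: no condition logic of its own, it just drives the monitor.
        class Person extends Thread {
            private final LiftMonitor monitor;
            private final int myFloor;

            Person(LiftMonitor monitor, int myFloor) {
                this.monitor = monitor;
                this.myFloor = myFloor;
            }

            @Override
            public void run() {
                try {
                    monitor.awaitFloor(myFloor); // block until the lift arrives
                    // ... board the lift ...
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }

    One likely reason for the rule: inside a synchronized monitor method, checking a condition and acting on it is atomic. If the threads held that logic instead, every check-then-act on shared state would be a race between threads.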

  • Class instance clustering in object reference graph for multi-entries serialization

    - by Juh_
    My question is on the best way to cluster a graph of class instances (i.e. objects, the graph nodes) linked by object references (the directed edges of the graph) around specifically marked objects. To explain my question better, let me explain my motivation. I currently use a moderately complex system to serialize the data used in my projects: "marked" objects have a specific attribute which stores a "saving entry", the path to an associated file on disk (though it could be done for any storage type providing a suitable interface). Those objects can then be serialized automatically (e.g. obj.save()). The serialization of a marked object 'a' implicitly contains all objects 'b' that 'a' has a reference to, directly (a.b = b) or indirectly (a.c.b = b for some object 'c'). This is very simple and basically assigns specific storage entries to specific objects. I then have "container"-type objects that can be serialized similarly (in fact they are, or can be, "marked"), but which do not serialize directly referenced "marked" objects into their own storage entries: if a and a.b are both marked, a.save() calls b.save() and stores a.b = storage_entry(b). So if I serialize 'a', it automatically serializes all objects reachable from 'a' through the object reference graph, possibly into multiple entries. That is what I want, and it usually provides the functionality I need. However, it is very ad hoc and there are structural limitations to this approach: the multi-entry saving only works through direct connections in "container" objects, and there are situations with undefined behavior, such as when two "marked" objects 'a' and 'b' both have a reference to an unmarked object 'c'. In this case my system stores 'c' in both 'a' and 'b', making an implicit copy which not only doubles the storage size but also changes the object reference graph after reloading. I am thinking of generalizing the process. Apart from the practical questions of implementation (I am coding in Python and use pickle to serialize my objects), there is a general question about how to attach (cluster) unmarked objects to marked ones. So, my questions are: What are the important issues that should be considered? Basically, why not just use any graph traversal algorithm with an "attach to the last marked node" behavior? Is there any work done on this problem, practical or theoretical, that I should be aware of? Note: I added the tag graph-database because I think the answer might come from that field, even if the question is not about one.
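    The "attach to the last marked node" behavior mentioned above can indeed be phrased as a plain traversal. A sketch (in Java for consistency with the rest of this page; every name is hypothetical) that walks outward from each marked root and claims every unmarked node reachable without crossing another marked node:

        import java.util.ArrayDeque;
        import java.util.Collection;
        import java.util.Deque;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        final class Clusterer {
            interface Node {
                boolean isMarked();
                List<Node> references(); // outgoing edges of the object graph
            }

            // Maps each reachable unmarked node to the marked node that "owns" it.
            static Map<Node, Node> cluster(Collection<Node> markedRoots) {
                Map<Node, Node> owner = new HashMap<>();
                for (Node root : markedRoots) {
                    Deque<Node> stack = new ArrayDeque<>(root.references());
                    while (!stack.isEmpty()) {
                        Node n = stack.pop();
                        if (n.isMarked() || owner.containsKey(n)) {
                            continue; // stop at other marks; first claim wins
                        }
                        owner.put(n, root);
                        stack.addAll(n.references());
                    }
                }
                return owner;
            }
        }

    Note that "first claim wins" makes the shared-'c' case deterministic but still arbitrary ('c' ends up with whichever root is visited first), which is exactly the ambiguity the question raises; a principled answer needs a policy for nodes reachable from several marked roots, such as promoting them to their own entry.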

  • How do I initialize a Scala map with more than 4 initial elements in Java?

    - by GlenPeterson
    For 4 or fewer elements, something like this works (or at least compiles):

        import scala.collection.immutable.Map;

        Map<String, String> HAI_MAP = new Map4<>("Hello", "World",
                                                 "Happy", "Birthday",
                                                 "Merry", "XMas",
                                                 "Bye", "For Now");

    For a 5th element I could do this:

        Map<String, String> b = HAI_MAP.$plus(new Tuple2<>("Later", "Aligator"));

    But I want to know how to initialize an immutable map with 5 or more elements, and I'm flailing in type-hell.

    Partial solution: I thought I'd figure this out quickly by compiling what I wanted in Scala, then decompiling the resultant class files. Here's the Scala:

        object JavaMapTest {
          def main(args: Array[String]) = {
            val HAI_MAP = Map(("Hello", "World"),
                              ("Happy", "Birthday"),
                              ("Merry", "XMas"),
                              ("Bye", "For Now"),
                              ("Later", "Aligator"))
            println("My map is: " + HAI_MAP)
          }
        }

    But the decompiler gave me something that has two periods in a row and thus won't compile (I don't think this is valid Java):

        scala.collection.immutable.Map HAI_MAP =
            (scala.collection.immutable.Map) scala.Predef..MODULE$.Map().apply(
                scala.Predef..MODULE$.wrapRefArray((Object[]) new Tuple2[] {
                    new Tuple2("Hello", "World"),
                    new Tuple2("Happy", "Birthday"),
                    new Tuple2("Merry", "XMas"),
                    new Tuple2("Bye", "For Now"),
                    new Tuple2("Later", "Aligator") }));

    I'm really baffled by the two periods in scala.Predef..MODULE$ and asked about it on #java on Freenode; they said the .. looked like a decompiler bug. It doesn't seem to want to compile, so I think they are probably right. I'm running into it when I try to browse interfaces in IntelliJ and am just generally lost. Based on my experimentation, the following is valid:

        Tuple2[] x = new Tuple2[] {
            new Tuple2<String, String>("Hello", "World"),
            new Tuple2<String, String>("Happy", "Birthday"),
            new Tuple2<String, String>("Merry", "XMas"),
            new Tuple2<String, String>("Bye", "For Now"),
            new Tuple2<String, String>("Later", "Aligator")
        };
        scala.collection.mutable.WrappedArray<Tuple2> y = scala.Predef.wrapRefArray(x);

    There is even a WrappedArray.toMap() method, but the types of the signature are complicated and I'm running into the double-period problem there too when I try to research the interfaces from Java.
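    For what it's worth, the double period is almost certainly the decompiler mangling a name: a Scala object compiles to a JVM class whose name ends in $, with a static MODULE$ field holding the singleton, so scala.Predef..MODULE$ is really scala.Predef$.MODULE$. Under that assumption, a Java-side construction would look roughly like the sketch below (unverified against any particular Scala version, and the cast noise is real):

        import scala.Predef$;
        import scala.Tuple2;
        import scala.collection.immutable.Map;
        import scala.collection.immutable.Map$;

        public class HaiMap {
            @SuppressWarnings("unchecked")
            static Map<String, String> build() {
                Tuple2<String, String>[] pairs = new Tuple2[] {
                    new Tuple2<String, String>("Hello", "World"),
                    new Tuple2<String, String>("Happy", "Birthday"),
                    new Tuple2<String, String>("Merry", "XMas"),
                    new Tuple2<String, String>("Bye", "For Now"),
                    new Tuple2<String, String>("Later", "Aligator")
                };
                // Map$.MODULE$ is the Map companion object; apply(Seq) is Map(...)
                return (Map<String, String>) Map$.MODULE$.apply(
                    Predef$.MODULE$.wrapRefArray(pairs));
            }
        }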

  • Efficient Trie implementation for unicode strings

    - by U Mad
    I have been looking for an efficient String trie implementation. Mostly I have found code like the reference implementation in Java (per Wikipedia). I dislike these implementations for mostly two reasons: They support only 256 ASCII characters, and I need to cover things like Cyrillic. They are also extremely memory-inefficient: each node contains an array of 256 references, which is 4096 bytes on a 64-bit machine in Java, and each of those nodes can have up to 256 subnodes with 4096 bytes of references each. So a full trie holding every two-character ASCII string would require a bit over 1MB; three-character strings, 256MB just for the arrays in the nodes; and so on. Of course I don't intend to have all 16 million three-character strings in my trie, so a lot of space is simply wasted. Most of these arrays are just null references, as their capacity far exceeds the actual number of inserted keys. And if I add Unicode, the arrays get even larger (char has 64k values instead of 256 in Java). Is there any hope of making an efficient trie for strings? I have considered a couple of improvements over these types of implementations: instead of using an array of references, I could use an array of a primitive integer type which indexes into an array of references to nodes whose size is close to the number of actual nodes; or I could break strings into 4-bit parts, which would allow for node arrays of size 16 at the cost of a deeper tree.
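    A common middle ground, sketched below (not tuned, just illustrating the shape): store children in a small per-node hash map keyed by char, so memory scales with the children that actually exist and any char value, Cyrillic included, works as a key:

        import java.util.HashMap;
        import java.util.Map;

        // Trie node with a sparse child map: memory is proportional to the number
        // of actual children, and any char (not just ASCII) can be a key.
        public class Trie {
            private static final class Node {
                Map<Character, Node> children = new HashMap<>(2); // small initial capacity
                boolean terminal;                                  // true if a key ends here
            }

            private final Node root = new Node();

            public void insert(String key) {
                Node n = root;
                for (int i = 0; i < key.length(); i++) {
                    n = n.children.computeIfAbsent(key.charAt(i), c -> new Node());
                }
                n.terminal = true;
            }

            public boolean contains(String key) {
                Node n = root;
                for (int i = 0; i < key.length() && n != null; i++) {
                    n = n.children.get(key.charAt(i));
                }
                return n != null && n.terminal;
            }
        }

    The boxed Character keys and HashMap overhead mean this is not the last word either; serious implementations tend toward sorted char[] arrays with binary search per node, ternary search trees, or DAWG-style structures.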

  • How long should one wait before going for an MS?

    - by Jungle Hunter
    Removed duplicate aspects of the question. Hi! I'm an undergraduate Master's student. I have what seems to be a good offer in hand in Singapore (if location plays a role). Is an undergraduate Master's good enough, or should one still go for an MS? How much time should one wait (at their job) before going for an MS, if that's the decision? Does one lose the progress one makes on the job before the Master's? Note: an undergraduate Master's is when the degree is called a Master's but is one's first degree; mine is a four-year degree.

  • Are there any "best practices" on cross-device development?

    - by vstrien
    Developing for smartphones in the way the industry is currently doing it is relatively new. Of course, there has been enterprise-level mobile development for several decades, but the platforms have changed. Think of the move from stylus input to touch input (different screen resolutions, different control layouts, etc.) and of new ways of handling multi-tasking on mobile platforms (e.g. WP7's "tombstoning"). The way these platforms work isn't totally new (the iPhone has been around for quite a while now, for example), but at the moment, developing a functionally equal application for both desktop and smartphone comes down to developing two applications from the ground up. Especially with the birth of Windows Phone, with the .NET platform on board and Silverlight as its UI language, it is becoming appealing to promote the reuse of (parts of) the UI. Still, it's fairly obvious that the needs of an application on a smartphone (or tablet) are very different from the needs of a desktop application. An (almost) one-to-one conversion will therefore be impossible. My question: are there "best practices", pitfalls, etc. documented about developing "cross-device" applications (for example, developing an app for both the desktop and the smartphone/tablet)? I've been looking at weblogs, scientific papers and more for a week or so, but what I've found so far is only about "migratory interfaces".

  • What should you say in an exit interview?

    - by Shengyuan Lu
    When you are leaving your company, you will usually be given an exit interview. As a best practice, you shouldn't say anything bad about your company, because it will burn your bridges; another best practice is to keep silent if the company really sucks. But I think if you decide to leave, there must be a reason making you really unhappy, and I am sure you will have something really important to tell your employer. So what is your attitude toward the exit interview?

  • How can test users access an unpublished iOS app?

    - by David
    I am considering outsourcing the development of an iOS app to various independent developers, and I will have various testers of the app. We all work for separate companies. Some of these testers will be customers, from whom I would like feedback. As there are multiple developers involved, I expect there to be a new release on a daily basis. How can this be done? Would each of the testers need to buy some sort of license to avoid having to go through the app approval process? Is there any smooth way to do this so that it will not be a hassle for our friendly customers who are willing to test our app?

  • Is "Interface inheritance" always safe?

    - by Software Engeneering Learner
    I'm reading "Effective Java" by Josh Bloch, which has Item 16 explaining how to use inheritance correctly; by inheritance he means only class inheritance, not implementing interfaces or extending interfaces with other interfaces. I didn't find any mention of interface inheritance in the entire book. Does this mean that interface inheritance is always safe? Or are there guidelines for interface inheritance as well?
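    To pin down the term: "interface inheritance" means declarations like this trivial sketch, where a subtype inherits a contract but no behavior, so the fragile-base-class problems Item 16 warns about have no implementation to latch onto:

        interface Readable {
            int read();
        }

        // Inherits only the read() *requirement*; there is no inherited code
        // whose hidden self-use a subtype could accidentally break.
        interface ReadableResource extends Readable {
            void close();
        }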

  • Avoid GPL violation by moving library out of process

    - by Andrey
    Assume there is a library that is licensed under the GPL. I want to use it in a closed-source project, so I do the following: (1) create a small wrapper application around that GPL library that listens on a socket, parses messages, calls the GPL library, and returns the results; (2) release the wrapper's sources (to comply with the GPL); (3) create a client for this wrapper in my main application and don't release its sources. I know that this adds huge overhead compared to static/dynamic linking, but I am interested in the theoretical question.
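    Mechanically, the wrapper in step 1 might look like the sketch below (hypothetical line-based protocol; gplLibraryCall stands in for the real library entry point):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.io.PrintWriter;
        import java.net.ServerSocket;
        import java.net.Socket;

        // GPL-licensed wrapper process: exposes the library over a socket so the
        // closed-source client only ever talks to a protocol, not to the library.
        public class Wrapper {
            public static void main(String[] args) throws IOException {
                try (ServerSocket server = new ServerSocket(9000)) {
                    while (true) {
                        try (Socket s = server.accept();
                             BufferedReader in = new BufferedReader(
                                 new InputStreamReader(s.getInputStream()));
                             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                            String request = in.readLine();            // parse the message
                            if (request != null) {
                                out.println(gplLibraryCall(request));  // call library, reply
                            }
                        }
                    }
                }
            }

            // Stand-in for the actual GPL library entry point.
            static String gplLibraryCall(String request) {
                return "result:" + request;
            }
        }

    Whether this arrangement actually lifts the GPL's requirements is a licensing question rather than a technical one, and opinions on it differ; the sketch only shows the process boundary itself.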

  • Best practice while marking a bug as resolved with Bugzilla (versioning of product and components)

    - by Vincent B.
    I am wondering what the best way is to handle marking a bug as resolved while providing the version of the component/product in which the fix can be found.

    Context: For a project I am working on, we use Bugzilla for issue tracking, and we have a product "A" with a version number like vA.B.C.D, built from three components, "C1", "C2", and "C3", each with its own version number like vA.B.C.D. Internally we keep track of which component versions were used to generate each version of product A. Example: product "A" v1.0.0.0 was produced from component "C1" v1.0.0.3, component "C2" v1.3.0.0, and component "C3" v2.1.3.5, while product "A" v1.0.1.0 was produced from component "C1" v1.0.0.4, component "C2" v1.3.0.0, and component "C3" v2.1.3.5. Each component is an SVN repository. The person in charge of generating product "A" has access only to the components' tags folders in SVN, not to the trunk of each component repository.

    Problem: When a bug is found in product "A" and is related to component "C1", the product version (e.g. v1.0.0.0) tells the developer which version of component "C1" has the bug (here, v1.0.0.3), and a bug report is created. Now say the developer responsible for "C1" corrects the bug; once it seems fixed, and after some testing and validation, the developer creates a new tag for "C1" with version v1.0.0.4. At this point the developer needs to update the bug report, but what is best: (1) mark the bug as resolved/fixed and add a comment saying "This bug has been fixed in tag v1.0.0.4 of component C1"; (2) keep the bug as assigned and add a comment saying "This bug has been fixed in tag v1.0.0.4 of component C1; update this bug's status to resolved once the next version of the product is generated with the newer version (v1.0.0.4 of C1)"; or (3) some other way of dealing with this? Right now the problem is that when a component CX is fixed, it is not certain which future version of product A will include it, so I cannot say in which version of the product the bug will be solved, only in which version of component CX it has been solved. So when should a bug be marked as solved: when a product A version includes the fixed version of CX, or as soon as the CX component has been fixed? Thanks for your personal feedback and ideas about this!

  • How to present a stable data model in a public API that allows internal data structures to be changed without breaking the public view of the data?

    - by Max Palmer
    I am in the process of developing an application that allows users to write C# scripts. These scripts can call selected methods and access and manipulate data in a document. This works well; however, in the development version, scripts access the document's internal data structures directly. This means that if we were to change the internal data model/structure, there is a good chance that someone's script would no longer compile. We obviously want to prevent this breaking change from happening, but still want to allow the user to write sensible C# code (while not restricting how we develop our internal data model as a result). We therefore need to decouple our scripting API and its data structures from our internal methods and data structures. We have a few ideas as to how we might allow the user to access what is effectively a stable public version of the document's internal data*, but I wanted to throw the question out to someone who might have real experience of this problem. NB: our internal document's data structure is quite complex and could be quite difficult to wrap. We know we want to expose as little as possible in our public API, especially as once it's out there, it's out there for good. Can anyone help? How do scripting languages/APIs decouple their public API and data structures from their internal data structures? Is there no real alternative to writing a complex interaction layer? If we need to do this, what is a good approach or pattern for wrapping complex data structures that include nested objects, including collections? I've looked at the API facade pattern, which looks like it's trying to address these kinds of issues, but are there alternatives? *One idea is to build a data facade that is kept stable across versions of our application. The facade exposes a set of facade data objects that are used in the script code. These maintain backwards compatibility and wrap access to our internal document's data model.
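    The footnote's data-facade idea might be sketched like this (all type names hypothetical; Java here for consistency with the rest of the page, though the question's scripts are C#): scripts compile only against the stable facade interfaces, and an adapter forwards to whatever the internal model currently looks like:

        // Stable public surface: the only types user scripts ever see.
        interface DocumentFacade {
            ItemFacade itemByName(String name);
        }

        interface ItemFacade {
            String name();
            void setLabel(String label);
        }

        // Internal model, free to change shape between releases (stubbed here).
        final class InternalDocument {
            InternalNode lookup(String name) { return new InternalNode(name); }
        }

        final class InternalNode {
            private final String name;
            private String label;
            InternalNode(String name) { this.name = name; }
            String displayName() { return name; }
            void relabel(String label) { this.label = label; }
        }

        // Adapter: the one place that must be updated when the internals change.
        final class DocumentAdapter implements DocumentFacade {
            private final InternalDocument doc;

            DocumentAdapter(InternalDocument doc) { this.doc = doc; }

            @Override
            public ItemFacade itemByName(String name) {
                InternalNode node = doc.lookup(name); // internal shape stays hidden
                return new ItemFacade() {
                    @Override public String name() { return node.displayName(); }
                    @Override public void setLabel(String label) { node.relabel(label); }
                };
            }
        }

    The cost, as the question anticipates, is the adapter layer itself: every facade object is a small forwarding wrapper, and collections typically need wrapping views of their own.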

  • Resources on learning to program in machine code?

    - by AceofSpades
    I'm a student, fresh into programming and loving it, having gone from Java to C++ and down to C. I moved backwards toward the bare bones and thought about going further down to assembly. But, to my surprise, a lot of people said it's not as fast as C and there is no use for it. They suggested learning either how to program a kernel or how to write a C compiler. My dream is to learn to program in binary (machine code), or maybe program bare metal (program a micro-controller physically), or write a BIOS or boot loaders or something of that nature. The only thing I have heard after much research is that a hex editor is the closest thing to machine language I could find in this day and age. Are there other things I'm unaware of? Are there any resources for learning to program in machine code, preferably on an 8-bit micro-controller/microprocessor? This question is similar to mine, but I'm interested in practical learning first and then understanding the theory.
