Search Results

Search found 16947 results on 678 pages for 'kernel programming'.

  • Designing a spawning system

    - by Vlad
    I played this game recently: http://www.kongregate.com/games/JuicyBeast/knightmare-tower and I am amazed by the way the different monsters are being spawned. I developed my own shooter game and added a time-based as well as a count-based spawning system. By count-based I mean: when there are 5 enemies on stage, stop spawning. But that is just one example. My question is how these spawning mechanisms are built; is there some pattern or theory behind them? Are there online materials/pages where I can improve my knowledge? To summarize, let's say we have 6 types of monsters. I start the game and kill off monsters of types 1, 2 and 3 all the time. Once I pass the first ceiling, like in the game above, monster type 4 appears. And so on. As I progress through the game, the same system of 6 monster types stays, but they become more and more resilient and dangerous, so I must also improve to be able to destroy the same monsters, now stronger. My question is simple: are there theories or written material about developing this type of intelligent system? Note: this is a general question, not tied to a specific game or to how exactly the game should work. I am capable of programming my own mechanisms, but I think I need some help. Thanks.
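
    A common, simple implementation of exactly this progression is a data-driven spawn table: each row pairs a progression threshold with the monster types it unlocks and a difficulty multiplier, and the spawner just looks up the highest row the player has reached. A minimal Java sketch (all thresholds and numbers are invented for illustration):

        import java.util.List;

        public class SpawnTable {
            // One row: from 'threshold' progress upward, these monster types
            // may spawn, with their stats scaled by 'difficultyMultiplier'.
            record Tier(int threshold, List<Integer> monsterTypes, double difficultyMultiplier) {}

            private final List<Tier> tiers = List.of(
                    new Tier(0,    List.of(1, 2, 3),          1.0),
                    new Tier(1000, List.of(1, 2, 3, 4),       1.5),
                    new Tier(2500, List.of(1, 2, 3, 4, 5),    2.0),
                    new Tier(5000, List.of(1, 2, 3, 4, 5, 6), 3.0));

            // Pick the highest tier the player's progress has unlocked.
            public Tier tierFor(int progress) {
                Tier current = tiers.get(0);
                for (Tier t : tiers) {
                    if (progress >= t.threshold()) current = t;
                }
                return current;
            }
        }

    Combined with the count-based cap from the post (stop spawning while 5 enemies are alive), this covers both mechanisms, and the table can live in a data file so it can be tuned without touching code.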

  • Maintainability of Boolean logic - Is nesting if statements needed?

    - by Vaccano
    Which of these is better for maintainability?

        if (byteArrayVariable != null)
            if (byteArrayVariable.Length != 0)
                // Do something with byteArrayVariable

    OR

        if ((byteArrayVariable != null) && (byteArrayVariable.Length != 0))
            // Do something with byteArrayVariable

    I prefer reading and writing the second, but I recall reading in Code Complete that doing things like that is bad for maintainability, because you are relying on the language not to evaluate the second part of the if when the first part is false, and not all languages do that. (The second part will throw an exception if evaluated with a null byteArrayVariable.) I don't know if that is really something to worry about or not, and I would like general feedback on the question. Thanks.
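
    For what it is worth, both C# and Java define && as short-circuiting, so the second operand is guaranteed not to be evaluated when the first is false. A minimal Java sketch demonstrating this:

        public class ShortCircuitDemo {
            public static void main(String[] args) {
                byte[] byteArrayVariable = null;
                // && short-circuits: the length check is skipped when the
                // reference is null, so no NullPointerException is thrown.
                if (byteArrayVariable != null && byteArrayVariable.length != 0) {
                    System.out.println("non-empty");
                } else {
                    System.out.println("null or empty");
                }
            }
        }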

  • Novice prototyping a massively multiplayer webpage-based gaming system

    - by Sean Hendlin
    I'm trying to build a website-based game in which various pages of the site act as different areas of the game. I am wondering what you would recommend as a design structure. Which languages would be best for building what will hopefully become a massive system able to scale to huge numbers of users? I am wondering if and how various elements from different languages could be meshed to interact with each other. For example, could I use HTML5, JavaScript, and PHP? What about ASP.NET; how might that factor in? I'm a newbie programmer, but I've been working on this idea for years and I want to build it into reality. Your comments and suggestions are appreciated. P.S.: The game is not all graphics and animation (though a Flash-like appearance and some animation would be nice). What I am thinking of is essentially a heavily gamified system of forms, and LOTS of data in many different categories cross-referencing each other. I'm not sure how to go about structuring the collection of data. Also, while I know JavaScript can be used to process some functions, I'm wondering what sort of base system I would need to handle the server-side processing of what I am expecting to be some pretty significant algorithmic processing. That is to say, I expect to have many, many functions, and I'm not sure how to manage this using JavaScript. I feel like they would be forgotten, mixed up, and disorganized, as they essentially only exist where they are coded. I guess I need to learn something about libraries? OK, thank you! That is enough from me for now.

  • Advice for how to handle company pride

    - by user17971
    We have this "amazing" little product using the latest development methodologies, components with all the bells and whistles. I took over this product maybe 6 months ago and struggled with it from day one. Even though it is supposedly is state of the art because of all its amazing structure, using dependency injections, inversion of control from the unity framework, hibernation and is domain driven in a .net mvvm xaml application to make it streamlined and modular. I knew from the moment I saw the monolith that it was going to be an uphill struggle for me. A lot of little code-bits scattered all around in neatly organized paradigms. Debugging is difficult, tracing the code is difficult, making new code is difficult, although some modifications is surprisinly easy but it doesn't out weight the problems I have with the code by a long shot. When I took over the project I was told that the new management console was ready for delivery and all I had to do was compile it and drop it. This was the beginning of a uphill struggle, our customer didn't agree at all that this was the functionality they had asked for so I had to do modifications to the program to their specifications. Since the project pretty much has been overdue since I took over it it has always been important that we didn't add or change much to the original system. I could modify the existing bits. fast forward until today where I finally completed all their comments and issues with the program but now I think that the users has opened their eyes (even though they saw this program many times) that they will be going backwards with this new system, that it will be much worse than the tool they got today (for a long time due to the fact that I'm the only resource on the project, project manager, tester, developer, integration specialist etc) My problem is that I lost faith in this system quite early due to the nature of the program. Although I made many changes and improvements to the system I wholeheartedly sympathize with the poor users who are going to start using this system. Its not nearly doing all the things it should do. I had this conversation internally with my boss where I told him what I thought about it, that if I were the customer I wouldn't have spent money developing it. So what do I do now? The system in ready, on a staging system and nobody likes it, its too slow and boring and does maybe do 50% of what they need it to do. Despite how much energy and working around the clock I've done to this project: I won't mind scrapping the system but we've spent much money (well my salaries) developing it and my company wants us to be proud of everything we do and advocate it. How will I tackle the contractor when he asks for advice? Surely I can tell him, this is what we agreed upon based on your use case scenarios, and be done with it? How will I inform my boss about this progress? He knows what I feel about it but I always get the feeling he let my criticism pass him by as just hot air, gone tomorrow,.

  • Language Niches and Niche Libraries

    - by Roman A. Taycher
    "Everyone Knows" ... ... that c is widely used for low level programs in large part because operating system/device apis are usually in c. ... that Java is widely used for enterprise applications in large part because of enterprise libraries and ide support. ... that ruby is widely used for webapps thanks in large part because of rails and its library ecosytem But lets go into to details what are the specific niches and subniches. Especially with respect to libraries. Where might you embed lua for application scripting versus python. Where would you use Java vs C#. Which languages do different scientists use? Also which languages have libraries for these subniches? Things like bioperl/scipy/Incanter. Please no flamewars about how nice each language or environment is. This is where they used. Also no complaints about marketing/PHBs. (Manually migrated) I asked this question again after it was closed on stackoverflow.com

  • How necessary is it to learn JavaScript before jQuery?

    - by benhowdle89
    In my opinion, when I first looked at JavaScript, it looked like it was not my cup of tea. When I came across jQuery, I loved it. I sat and watched Nettuts+'s "15 Days of jQuery" screencasts; one year later, I'm fairly confident I wouldn't develop a website without including jQuery's library. I have never felt this has held me back, but my question is: will this come back and bite me in the ass one day, the fact that I didn't have a solid JavaScript foundation before jumping feet-first into one of its best (if not the best) frameworks? Did anyone else take this approach?

  • Is it a good pattern that no object should know more than it needs to know?

    - by Jim Thio
    I am implementing a view controller class. The view controller gets an NSNotification when the Grabbing class starts or finishes updating. I have 2 choices: I can make the Grabbing class provide a public read-only property, so all other classes can know whether it is still updating; or I can let the view controller listen for 2 different events, a start-updating and a finish-updating event. The truth is the view controller doesn't need to know whether the Grabbing class is still updating at any other time, so I am thinking that creating the 2 events would be the better way to go. What do you think?
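
    Outside of Cocoa terms, the trade-off is between a queryable state property and two narrow notifications. A minimal Java sketch of the events-only variant (all names invented), where the grabber exposes the two transitions and nothing else:

        import java.util.ArrayList;
        import java.util.List;

        class Grabber {
            interface Listener {
                void updateStarted();
                void updateFinished();
            }

            private final List<Listener> listeners = new ArrayList<>();

            void addListener(Listener l) { listeners.add(l); }

            void update() {
                // Observers learn about the transitions, but cannot poll
                // the grabber's internal state in between.
                listeners.forEach(Listener::updateStarted);
                // ... do the actual grabbing/updating work ...
                listeners.forEach(Listener::updateFinished);
            }
        }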

  • How to execute a Ruby file in Java, capable of calling functions from the Java program and receiving primitive-type results?

    - by Omega
    I do not fully understand what I am asking (lol!), in the sense that I don't know if it is even possible. If it isn't, sorry. Suppose I have a Java program. It has a Main and a JavaCalculator class. JavaCalculator has some basic functions, like:

        public int sum(int a, int b) {
            return a + b;
        }

    Now suppose I have a Ruby file called MyProgram.rb. MyProgram.rb may contain anything you could expect from a Ruby program. Let us assume it contains the following:

        class RubyMain
          def initialize
            print "The sum of 5 with 3 is #{sum(5,3)}"
          end

          def sum(a, b)
            # <---------- Something will happen here
          end
        end

        rubyMain = RubyMain.new

    Good. Now then, you might already suspect what I want to do: I want to run my Java program and have it execute the Ruby file MyProgram.rb. When the Ruby program executes, it will create an instance of JavaCalculator, execute the sum function it has, get the value, and then print it. The Ruby file has been executed successfully; the Java program closes. Note: the "create an instance of JavaCalculator" part is not entirely necessary; I would be satisfied with just running a sum function from, say, the Main class. My question: is such a thing possible? Can I run a Java program which internally executes a Ruby file that is capable of commanding the Java program to do certain things and getting results back? In the above example, the Ruby file asks the Java program to do a sum for it and give the result. This may sound ridiculous; I am new to this kind of thing (if it is possible, that is). WHY AM I ASKING THIS? I have a Java program, which is some kind of game engine. However, my target audience is a bunch of Ruby coders, and I don't want to make them learn Java at all. So I figured that the Java program could simply offer the functionality (the capacity to create windows, display sprites, play sounds...) and my audience could simply code the logic in Ruby, which basically just asks my Java engine to do things like displaying sprites or playing sounds. That's when I thought about asking this.
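
    This is essentially what JRuby's embedding API (the ScriptingContainer) provides. A minimal sketch, assuming the JRuby jar is on the classpath and that sharing a global variable is acceptable:

        import org.jruby.embed.ScriptingContainer;

        public class Main {
            public static class JavaCalculator {
                public int sum(int a, int b) {
                    return a + b;
                }
            }

            public static void main(String[] args) {
                ScriptingContainer container = new ScriptingContainer();
                // Expose a Java object to the Ruby runtime as a global variable.
                container.put("$calculator", new JavaCalculator());
                // A .rb file could be run with runScriptlet(PathType.ABSOLUTE, path);
                // an inline scriptlet shows the round trip here.
                container.runScriptlet(
                        "puts \"The sum of 5 with 3 is #{$calculator.sum(5, 3)}\"");
            }
        }

    Ruby code running inside the container can call any public method on the shared Java object and get primitive results back, which is the round trip the question asks about.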

  • Stuff you learned in school that you have never used again?

    - by Mercfh
    Obviously we learn plenty of things at university/college/wherever that probably don't apply to everyday use, but is there anything that stands out in particular? Maybe something that was concentrated on A LOT? For me it was definitely 2 things: OO concepts and pointers. I still use OO, but not nearly to the degree people made it out to be; I can see where it'd be useful, but in my line of work we don't have huge numbers of classes, maybe a couple at most. And there certainly isn't much OO reuse (I finally figured out what that means, lol). Pointers are another thing; again, I can see where they'd be useful, however I barely ever touch them, nor do the others I work with. I guess language choice has a lot to do with that, but still. What about you guys? Edit: for those who are asking, I work for a large printer company, and most of the applications we work on are Java+XML and ActionScript for "printer apps". But we are moving towards other languages (think WebKit and the like), so the code amounts per part are quite small. I never said OO wasn't useful; I just said I personally haven't seen it used much in my workplace.

  • Dealing with curly brace soup

    - by Cyborgx37
    I've programmed in both C# and VB.NET for years, but primarily in VB. I'm making a career shift toward C# and, overall, I like C# better. One issue I'm having, though, is curly brace soup. In VB, each structure keyword has a matching close keyword, for example:

        Namespace ...
            Class ...
                Function ...
                    For ...
                        Using ...
                            If ...
                                ...
                            End If
                            If ...
                                ...
                            End If
                        End Using
                    Next
                End Function
            End Class
        End Namespace

    The same code written in C# ends up very hard to read:

        namespace ... {
            class ... {
                function ... {
                    for ... {
                        using ... {
                            if ... {
                                ...
                            }
                            if ... {
                                ...
                            }
                        }
                    } // wait... what level is this?
                }
            }
        }

    Being so used to VB, I'm wondering if there's a technique employed by C-style programmers to improve readability and to ensure that your code ends up in the correct "block". The above example is relatively easy to read, but sometimes at the end of a piece of code I'll have 8 or more levels of curly braces, requiring me to scroll up several pages to figure out which brace ends the block I'm interested in.
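
    One technique C-style programmers rely on is simply never letting nesting get that deep: extract inner blocks into named methods, and use guard clauses (early return/continue) so the happy path stays shallow. A small Java sketch of the idea (names invented):

        public class GuardClauseDemo {
            // Guard clauses: each precondition exits early, so the main logic
            // sits at one indentation level instead of accumulating braces.
            static void process(String[] items) {
                if (items == null) return;
                for (String item : items) {
                    if (item == null || item.isEmpty()) continue;
                    handle(item);
                }
            }

            // Extracting the body into a method keeps each block short enough
            // that its closing brace stays visible on the same screen.
            static void handle(String item) {
                System.out.println("processing " + item);
            }

            public static void main(String[] args) {
                process(new String[] { "a", null, "b" });
            }
        }

    Most editors' brace matching and indentation guides cover the "which brace is this?" problem for the deep cases that remain.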

  • How to shift development culture from tech fetish to focusing on simplicity and getting things done?

    - by Serge
    I'm looking for ways to switch a team's or an individual's culture from chasing the latest fads, patterns, and all kinds of best practices to focusing on finding the quickest, simplest solutions and shipping features. My definition of "tech fetish": chasing the latest fads, applying new technologies and best practices without considering product/project impact, focusing on micro-optimization, and creating platforms and frameworks instead of finding simple and quick ways to ship product features. A few examples of the culture difference:
    From "Spent a day trying to map a database query with five complex joins in NHibernate" to "Wrote a SQL query and used a DataReader to pull the data in"
    From "Wrote a super-fast JSON parser in C++" to "Used Python to parse the JSON response and call the C++ code"
    From "Let's use WCF because it supports all possible communication standards" to "REST is a simple text-based format; let's stick with it and use simple HTTP handlers"

  • Java Applet Tower Defence Game needs tweaking

    - by Ephiras
    Hello :) I have made a tower defence game for my computer science class as one of my major projects, but I have encountered some rather fatal roadblocks. Here they are:
    Creating a menu screen (class Menu) that can set the total number of enemies, the max number of towers, the starting money, and the map. I tried creating a constructor in my Main class that sets all the values to whatever the Menu class passes in. I want the menu screen to close after a difficulty has been selected and the main class to begin.
    Another problem I would really like some help with: instead of having to write entire arrays by hand, I would like to create a small segment of code that runs through an entire picture and sets up an array based on each pixel's color. This way I can have multiple levels just dragged into a level folder and have the program read through them; users could even create their own. So a 1 if the pixel is yellow, a 2 if blue, a 3 if purple, and everything else = 0.
    You can download all the classes and code if you'd like here. Sorry about having to redirect you, but I wasn't sure how to efficiently add a code spoiler. Help is greatly appreciated.
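
    For the second roadblock, Java's ImageIO and BufferedImage make the pixel scan straightforward. A minimal sketch; the exact RGB constants for "yellow", "blue", and "purple" are assumptions and must match whatever colors the level images actually use:

        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import javax.imageio.ImageIO;

        public class LevelLoader {
            // Builds a tile map from an image: yellow -> 1, blue -> 2,
            // purple -> 3, anything else -> 0.
            public static int[][] load(File imageFile) throws IOException {
                BufferedImage img = ImageIO.read(imageFile);
                int[][] map = new int[img.getHeight()][img.getWidth()];
                for (int y = 0; y < img.getHeight(); y++) {
                    for (int x = 0; x < img.getWidth(); x++) {
                        int rgb = img.getRGB(x, y) & 0xFFFFFF; // drop the alpha channel
                        if (rgb == 0xFFFF00)      map[y][x] = 1; // yellow
                        else if (rgb == 0x0000FF) map[y][x] = 2; // blue
                        else if (rgb == 0x800080) map[y][x] = 3; // purple
                        else                      map[y][x] = 0;
                    }
                }
                return map;
            }
        }

    Dropping new images into a levels folder and calling load on each file then gives one array per level, with no hand-written arrays.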

  • Should a project start with the client or the server?

    - by MadBurn
    Pretty simple question with a complex answer: should a project start with the client or the server, and why? Where should a single programmer start a client/server project? What are the best practices, and what are the reasons behind them? If you can't think of any, what reasons would you use to justify choosing to start one before the other? Personally, I'm asking this question because I'm finishing up the specs for a project I will be doing for myself on the side, for fun. But now that I'm finishing this phase, I'm wondering, "OK, now where do I begin?" Since I've never done a project like this by myself, I'm not sure where I should start. In this project, my server will be doing all the heavy lifting and the client will just be sending updates, getting information from the server, and displaying it. But I don't want that to sway the answer, as I'm looking for a more in-depth and less specific answer that would apply to any project I begin in the future.

  • Which is preferable? To know jQuery well, or to know JavaScript well? [closed]

    - by Marwan
    I'm quite familiar with using jQuery, but I've come to feel like a bit of a dummy using it, as my knowledge of JavaScript itself is rather poor. So I'm considering abandoning jQuery and spending time working in straight JS... perhaps even creating my own framework as a learning experience. Does this make sense though? Is there any real point to obtaining more than a passing knowledge of JavaScript when jQuery allows me to accomplish so much, so quickly?

  • How to create an Orthographic display in OpenGL (ES) that handles different screen sizes and orientations?

    - by Piku
    I'm trying to create an iPad/iPhone game using GLES 2.0 that contains a 3D scene with a heads-up display/GUI overlaid on top. However, this problem would also apply if I were to port my game to a computer and run the game in a resizable window, or allow the user to change screen resolutions... When trying to make the 2D GUI/HUD work, I've made the assumption that all I'm really doing is drawing a load of 2D textured quads on the screen, and I am trying to treat the orthographic projection as an old-style 2D display with 0,0 in the upper left and screenWidth,screenHeight in the lower right. This causes me all sorts of confusion when I rotate my iPad into landscape mode, since I can't work out what to put into my projection and modelview matrices to turn everything around the right way. It also gets messy if I want to support the iPad's large screen, an iPhone, or a Retina display, since I then have to draw three sets of textures for everything and work out which ones to use. Should I be trying to map the 2D OpenGL coords 1:1 with the screen? While typing out this question, it occurs to me that I could keep my origin in the centre, still running -1/+1 along the axes. This would let me scale my 2D content appropriately on the different screen sizes, but wouldn't I end up with the textures being scaled and possibly losing quality? I'm using OpenGL ES 2.0 and have a matrix library that has equivalents to the GLES 1.1 glOrthof() and glFrustumf() calls.
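
    For the 1:1 mapping, the usual trick is to rebuild the orthographic matrix from the current surface size on every resize or rotation, with top and bottom swapped so y runs downward. A sketch using Android's Matrix class purely as a stand-in for the poster's own glOrthof-equivalent:

        import android.opengl.Matrix;

        public class HudProjection {
            // Pixel-perfect 2D projection: origin at the top-left,
            // x grows right, y grows down, like a classic 2D display.
            public static float[] forScreen(float screenWidth, float screenHeight) {
                float[] projection = new float[16];
                Matrix.orthoM(projection, 0,
                        0f, screenWidth,     // left, right
                        screenHeight, 0f,    // bottom, top (flipped so y runs down)
                        -1f, 1f);            // near, far
                return projection;
            }
        }

    Because the matrix is derived from the live width and height in pixels, rotation and Retina scale factors fall out automatically, and the modelview can stay identity for HUD quads positioned in pixel coordinates.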

  • Which web site gives the most accurate indication of a programmer's capabilities?

    - by Jerry Coffin
    If you were hiring programmers, and could choose between one of (say) the top 100 coders on topcoder.com or one of the top 100 on stackoverflow.com, which would you choose? At least to me, it would appear that topcoder.com gives a more objective evaluation of pure ability to solve problems and write code. At the same time, despite obvious technical capabilities, this person may lack any hint of social skills: he may be purely a "lone coder", with little or no ability to help or work with others, and may lack the mentoring ability to transfer his technical skills to others, etc. On the other hand, stackoverflow.com would at least appear to give a much better indication of peers' opinion of the coder in question, and of the degree to which his presence is useful and helpful to others on the "team". At the same time, the scoring system is such that somebody who just throws up a lot of mediocre (or even poor) answers will almost inevitably accumulate a positive total of "reputation" points; a single up-vote (perhaps given just out of courtesy) will counteract the effects of no fewer than 5 down-votes, and others are discouraged (to some degree) from down-voting because they have to sacrifice their own reputation points to do so. At the same time, somebody who makes little or no technical contribution seems unlikely to accumulate a reputation that lands them (even close to) the top of the heap, so to speak. So, which provides a more useful indication of the degree to which a particular coder is likely to be useful to your organization? If you could choose between them, which set of coders would you rather have working on your team?

  • Are there currently any modern, standardized aptitude tests for software engineering?

    - by Matthew Patrick Cashatt
    Background: I am a working software engineer who is in the midst of seeking out a new contract for the next year or so. In my search, I am enduring several absurd technical interviews, as indicated by this popular question I asked earlier today. Even if the questions I was being asked weren't almost always absurd, I would be tired nonetheless of answering them many times over for various contract opportunities. So this got me thinking that a standardized exam that working software professionals could take would provide a common scorecard that interviewers could reference in lieu of absurd technical interview questions (i.e. nerd hazing).
    Question: Is there a standardized software engineering aptitude test (SEAT??) available for working professionals to take? If there isn't such an exam out there, what questions or topics should be covered?
    An additional thought: if suggesting a question or topic, please keep in mind to focus on questions or topics that would be relevant to contemporary development practices and realistic needs in the workforce, as that would be the point of a standardized aptitude test. In other words, no clown traversal questions.

  • How to deal with tautology in comments?

    - by Tamás Szelei
    Sometimes I find myself in situations where the part of the code I am writing is (or seems to be) so self-evident that its name would basically be repeated as a comment:

        class Example
        {
            /// <summary>
            /// The location of the update.
            /// </summary>
            public Uri UpdateLocation { get; set; }
        }

    (C# example, but please treat the question as language-agnostic.) A comment like that is useless; what am I doing wrong? Is it the choice of the name that is wrong? How could I comment parts like this better? Should I just skip the comment for things like this?
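
    One common answer is to make the comment state what the signature cannot: units, constraints, lifecycle. A hypothetical Javadoc-flavored sketch of the difference (both field names are invented):

        public class Example {
            /** The location of the update. */ // tautological: restates the name
            private java.net.URI updateLocation;

            /**
             * Absolute URI the updater polls for new releases; may be
             * reassigned at runtime when a faster mirror is chosen.
             */ // informative: carries the contract the name alone cannot
            private java.net.URI pollableUpdateLocation;
        }

    If there is genuinely nothing to add beyond the name, many style guides treat omitting the comment as the honest choice.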

  • Why do memory-managed languages retain the `new` keyword?

    - by Channel72
    The new keyword in languages like Java, Javascript, and C# creates a new instance of a class. This syntax seems to have been inherited from C++, where new is used specifically to allocate a new instance of a class on the heap, and return a pointer to the new instance. In C++, this is not the only way to construct an object. You can also construct an object on the stack, without using new - and in fact, this way of constructing objects is much more common in C++. So, coming from a C++ background, the new keyword in languages like Java, Javascript, and C# seemed natural and obvious to me. Then I started to learn Python, which doesn't have the new keyword. In Python, an instance is constructed simply by calling the constructor, like: f = Foo() At first, this seemed a bit off to me, until it occurred to me that there's no reason for Python to have new, because everything is an object so there's no need to disambiguate between various constructor syntaxes. But then I thought - what's really the point of new in Java? Why should we say Object o = new Object();? Why not just Object o = Object();? In C++ there's definitely a need for new, since we need to distinguish between allocating on the heap and allocating on the stack, but in Java all objects are constructed on the heap, so why even have the new keyword? The same question could be asked for Javascript. In C#, which I'm much less familiar with, I think new may have some purpose in terms of distinguishing between object types and value types, but I'm not sure. Regardless, it seems to me that many languages which came after C++ simply "inherited" the new keyword - without really needing it. It's almost like a vestigial keyword. We don't seem to need it for any reason, and yet it's there. Question: Am I correct about this? Or is there some compelling reason that new needs to be in C++-inspired memory-managed languages like Java, Javascript and C#?
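
    One observation in support of the "vestigial" view: Java code can already hide new behind static factory methods, which gives call sites exactly the Python look. A tiny sketch (the class and method names are invented):

        public final class Foo {
            private Foo() {}

            // Static factory: the call site reads like Python's Foo(),
            // with no visible 'new'.
            public static Foo of() {
                return new Foo();
            }

            public static void main(String[] args) {
                Foo f = Foo.of();
                System.out.println(f);
            }
        }

    So the keyword is not needed to make construction work; whether it earns its keep as an explicit allocation marker is exactly the question being asked.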

  • What is So Unique About Node.js?

    - by Adrian Shum
    Recently there has been a lot of praise for Node.js. I am not a developer who has had much exposure to network applications. From my bare understanding of Node.js, its strength is that we have only one thread handling multiple connections, providing an event-based architecture. However, in Java, for example, I can create only one thread using NIO/AIO (which are non-blocking APIs, from my bare understanding), handle multiple connections using that thread, and provide an event-based architecture to implement the data-handling logic (it shouldn't be that difficult to do by providing some callbacks, etc.). Given that the JVM is an even more mature VM than V8 (and I expect it to run faster too), and that an event-based handling architecture seems to be something not difficult to create, I am not sure why Node.js is attracting so much attention. Did I miss some important points?
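
    The premise can indeed be realized in plain Java; a minimal sketch of the single-threaded, selector-driven shape being described (an echo server here, with the port number invented):

        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.nio.ByteBuffer;
        import java.nio.channels.*;
        import java.util.Iterator;

        public class NioEventLoop {
            public static void main(String[] args) throws IOException {
                Selector selector = Selector.open();
                ServerSocketChannel server = ServerSocketChannel.open();
                server.bind(new InetSocketAddress(8080));
                server.configureBlocking(false);
                server.register(selector, SelectionKey.OP_ACCEPT);

                while (true) {
                    selector.select();                 // block until events arrive
                    Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                    while (keys.hasNext()) {
                        SelectionKey key = keys.next();
                        keys.remove();
                        if (key.isAcceptable()) {      // new-connection event
                            SocketChannel client = server.accept();
                            client.configureBlocking(false);
                            client.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) { // data-ready event
                            SocketChannel client = (SocketChannel) key.channel();
                            ByteBuffer buf = ByteBuffer.allocate(1024);
                            if (client.read(buf) == -1) { client.close(); continue; }
                            buf.flip();
                            client.write(buf);         // echo; callback logic goes here
                        }
                    }
                }
            }
        }

    Much of the debate around Node.js is therefore less about whether this is possible on the JVM and more about ergonomics: in Node every library is written for this single-threaded, callback style by default.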

  • How is intermediate data organized in MapReduce?

    - by Pedro Cattori
    From what I understand, each mapper outputs an intermediate file. The intermediate data (the data contained in each intermediate file) is then sorted by key. Then a reducer is assigned a key by the master. The reducer reads from the intermediate file containing that key and then calls reduce using the data it has read. But in detail, how is the intermediate data organized? Can the data corresponding to one key be held in multiple intermediate files? What happens when there is too much data corresponding to one key to be held by a single file? In short, how do intermediate partitions differ from intermediate files, and how are these differences dealt with in the implementation?
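
    In Hadoop's implementation, at least, reducers are assigned partitions rather than single keys: every mapper writes a sorted, partitioned output, and each reducer fetches its partition from all mappers and merges them. So one key's data does live in many intermediate files, but always under the same partition number. The routing is just a hash; this mirrors Hadoop's default HashPartitioner:

        // All records with the same key map to the same reducer index,
        // no matter which mapper emitted them.
        public class HashPartitioner {
            public static int partition(Object key, int numReducers) {
                return (key.hashCode() & Integer.MAX_VALUE) % numReducers;
            }

            public static void main(String[] args) {
                System.out.println(partition("kernel", 4)); // stable index in [0, 4)
            }
        }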

  • Maximum Length Of IP Address: 15 (IPv4) & 39 (IPv6)

    - by Gopinath
    Problem: You are designing a database table for a web application that needs to store the IP addresses of users who visit the site. The IP address has to be stored as character data in the table. To define the size of the character column, you need to know the maximum length of an IP address. So, what is the maximum length of an IP address?
    Solution: An IPv4 address is in the following format:
        255.255.255.255
    To store an IPv4 address we require 15 characters. An IPv6 address is written as groups of 4 hex digits separated by colons, like the one below:
        2001:0db8:85a3:0000:0000:8a2e:0370:7334
    To store an IPv6 address you require a column 39 characters long.
    Conclusion: As IPv4 and IPv6 are both in common use, you had better define the column with a length of 39 characters, so that addresses in either format can be saved into the table without any issues.

  • "Opportunity" to take over maintenance of a small internal website. What should I do?

    - by Dan
    I have been offered an "opportunity" to take over maintenance of a small internal website run by my group that provides information about schedules and photos of events the group has done. My manager sent me the link to the site and I checked it out. The site looked clean and neat, but loaded in ~5 seconds. I thought this was a little long considering the site really doesn't contain a lot of content, which prompted me to take a look under the hood at the page's source code. To my horror, it had been totally hacked together using nested tables! I'm new, so I really can't say no to this "opportunity", so what should I do with it? Every fiber of my being feels that the only correct thing to do is overhaul the site using CSS, divs, spans, and any other appropriate tags that a sane/good web developer would have used to begin with, instead of depending on the rendering magic of tables. But I'd like to ask programmers with more experience than me, who have been in this situation: what should I do? Is my only realistic option to leave the horror as-is and only adjust the content as requested? I'm really torn between good development and the corporate reality I'm part of. Is there some kind of middle ground where things can be made better even if they're not perfect? Thanks ahead of time.

  • How should bots be recognised in a game?

    - by Bane
    I'm interested in how bots are usually written. Here's my situation: I plan to make an online 2D mecha game in HTML5, and the server-side will be done with node. It is intended to be multiplayer, but I also want to make bots in case there aren't enough players. How does my game logic see them, as players or as bots? Is there a standard by which I should make them? Also, any general tips and hints will be OK.
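
    A common pattern, sketched below in Java for neutrality (the names are invented, and the same shape translates directly to a node backend): the game loop never distinguishes players from bots; both sit behind one controller interface, and only the source of commands differs.

        import java.util.Random;

        public class Controllers {
            // The game loop only ever sees this interface.
            interface Controller {
                String nextCommand(long tick); // e.g. "MOVE_LEFT", "FIRE"
            }

            // Human commands arrive over the network; this stub stands in for that.
            static class HumanController implements Controller {
                public String nextCommand(long tick) {
                    return "IDLE"; // real game: pop the latest message from the client
                }
            }

            // Bot commands are computed locally by simple AI rules.
            static class BotController implements Controller {
                private final Random rng = new Random();
                public String nextCommand(long tick) {
                    return rng.nextBoolean() ? "MOVE_LEFT" : "FIRE";
                }
            }
        }

    With that split, bots can be swapped in whenever the player count drops, and no other game logic has to change.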

  • Push-Based Events in a Service-Oriented Architecture

    - by Colin Morelli
    I have come to a point, in building a service-oriented architecture (on top of Thrift), where I need to expose events and allow listeners. My initial thought was to create an EventService to handle publishing and subscribing to events. That EventService can use whatever implementation it desires to actually distribute the events. My client automatically round-robins service requests to available service hosts, which are determined using Zookeeper-based service discovery. So I'd probably use JMS inside of EventService, mainly for the purpose of persisting messages (in the event that a service host for EventService goes down before it can distribute the message to all of the available listeners). When I started considering this, I began looking into the differences between queues and topics. Topics unfortunately won't work for me, because (at least for now) all listeners must receive the message, even if they were down at the time the event was pushed, or hadn't made a subscription yet because they haven't completed startup (during deployment, for example); messages should be queued until the service is available. However, I don't want EventService to be responsible for handling all of the events; I don't think it should contain the code to react to events. Each of the services should do what it needs with a given event. This would indicate that each service would need a JMS connection, which calls into question the value of having EventService at all (as the services could individually publish and subscribe to JMS directly). However, it also couples all of the services to JMS (when I'd rather there be a single service that's responsible for determining how to distribute events). What I had thought was to publish an event to EventService, which pulls a configuration of listeners from some configuration source (database, flat file, irrelevant for now). It replicates the message and pushes each copy back into a queue with information specific to that listener (so, if there are 3 listeners, 1 event would become 3 events in JMS). Then another thread in EventService (which is replicated, running on multiple hosts) would pull from the queue, attempt to make the service call to the "listener", and return the message to the queue (if the service is down) or discard the message (if the listener completed successfully). tl;dr: If I have an EventService that is responsible for receiving events and delegating service calls to "event listeners" (which are really just endpoints on other services), how should it know how to craft the service call? Should I create a generic "Event" object that is shared among all services? Then the EventService can just construct this object and pass it to the service call. Or is there a better answer to this problem entirely?
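
    On the final question, a generic envelope shared by all services is a workable starting point; a minimal Java sketch (the field choice is an assumption, not a prescription):

        import java.time.Instant;
        import java.util.Map;
        import java.util.UUID;

        // Generic event envelope shared by every service: the EventService can
        // route it without understanding the payload, and each listener
        // interprets the payload by the event's type.
        public final class Event {
            private final UUID id = UUID.randomUUID();  // for de-duplication on redelivery
            private final String type;                  // e.g. "user.created"
            private final Instant occurredAt = Instant.now();
            private final Map<String, String> payload;  // opaque to the router

            public Event(String type, Map<String, String> payload) {
                this.type = type;
                this.payload = payload;
            }

            public UUID id() { return id; }
            public String type() { return type; }
            public Instant occurredAt() { return occurredAt; }
            public Map<String, String> payload() { return payload; }
        }

    Keeping the payload opaque to EventService preserves the single-responsibility split described above: the router only needs type and listener configuration; reacting to the event stays inside each service.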
