Search Results

Search found 19841 results on 794 pages for 'programming resources'.


  • Google Programmers

    - by seth
    As a soon-to-be software engineer, I have studied some languages during my time in college: C, C++, Java, Scheme, Ruby and PHP, for example. However, one of the main principles in my college (recognized by many as the best in my country) is to teach us how to learn for ourselves and how to search the web when we have a doubt. This leads to a proactive attitude: when I need something, I go get it, and so far this has worked for me. Recently, though, I started wondering how much development I would be able to do without internet access. The answer bugged me quite a bit. I know the concepts of the languages and mostly I know what to do, but I was amazed by how "slow" things were without Google to help with the development. The problem was mostly specific syntax, but it was not without some effort that I solved some of the SPOJ problems in C++. Is this normal? Should I be worried and try to change something in my programming behaviour? UPDATE: I'll give a concrete example: reading from and writing to a file in Java. I have done this about a dozen times in my life, yet every time I need to do it, I end up googling "read file java" and refreshing my memory. I completely understand the code and what it does. But I am sure that, without Google, it would take me a few tries to read and write correctly (if I had to sit in front of the screen with a blank page and write it without consulting any source whatsoever).
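
    As a side note, the pattern the poster keeps looking up is small enough to pin down once. The following is a minimal sketch using the standard java.nio.file API, with an invented file name; it is one reasonable way to do it, not the one true way:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.StandardOpenOption;
        import java.util.List;

        public class FileRoundTrip {
            public static void main(String[] args) throws IOException {
                Path path = Paths.get("notes.txt");   // hypothetical file name

                // Write (creates the file, or truncates it if it already exists).
                Files.write(path, List.of("first line", "second line"));

                // Read the whole file back into memory, line by line.
                List<String> lines = Files.readAllLines(path);
                lines.forEach(System.out::println);

                // Append without clobbering what is already there.
                Files.write(path, List.of("appended line"), StandardOpenOption.APPEND);
            }
        }

    Files.readAllLines pulls the whole file into memory, which is fine for small files; a BufferedReader is the usual choice for very large ones.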

    Read the article

  • How to handle a player's level and its consequent privileges?

    - by Songo
    I'm building a game similar to Mafia Wars where a player can do tasks for his gang and gain experience, thus advancing his level. The game is built using PHP and a MySQL database. In the game I want to limit the resources allowed to a player based on his level. For example:

                | Max gold | Max army size | Max moves | ...
        Level 1 |     1000 |           100 |        10 | ...
        Level 2 |     1500 |           200 |        20 | ...
        Level 3 |     3000 |           300 |        25 | ...
        ...

    In addition, certain features of the game won't be allowed until a certain level is reached: players under Level 10 can't trade in the game market, players under Level 20 can't create alliances, etc. The way I have modeled it is by implementing a very long ACL (Access Control List) with about 100 entries (one entry per level). However, I think there may be a simpler approach, seeing that this feature has been implemented in many games before.
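
    One simpler approach than a 100-entry ACL is to treat the limits as plain data keyed by level and to express feature gates as minimum levels. Below is a rough Java sketch of the idea; the class, field and feature names are invented for illustration, and the same two tables could just as well live in the existing MySQL database and be cached by the PHP code:

        import java.util.Map;
        import java.util.TreeMap;

        public class LevelRules {
            // One row per defined level: maxGold, maxArmySize, maxMoves.
            record Limits(int maxGold, int maxArmySize, int maxMoves) {}

            private static final TreeMap<Integer, Limits> LIMITS = new TreeMap<>(Map.of(
                    1, new Limits(1000, 100, 10),
                    2, new Limits(1500, 200, 20),
                    3, new Limits(3000, 300, 25)));

            // Feature gates expressed as "minimum level required".
            private static final Map<String, Integer> FEATURE_MIN_LEVEL = Map.of(
                    "market.trade", 10,
                    "alliance.create", 20);

            static Limits limitsFor(int level) {
                // floorEntry picks the highest defined row at or below the player's level
                // (assumes level >= 1, i.e. at least one row applies).
                return LIMITS.floorEntry(level).getValue();
            }

            static boolean canUse(String feature, int level) {
                return level >= FEATURE_MIN_LEVEL.getOrDefault(feature, Integer.MAX_VALUE);
            }

            public static void main(String[] args) {
                System.out.println(limitsFor(2).maxGold());      // 1500
                System.out.println(canUse("market.trade", 9));   // false
            }
        }

    With this shape, adding a level or gating a new feature is a data change rather than another ACL entry.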

    Read the article

  • Is there a process-oriented IDE?

    - by Raveline
    My problem is simple: when I'm programming in an OO paradigm, I often have part of a main business process divided among many classes. Which means, if I want to examine the whole functional chain that leads to the output, for debugging or for optimization research, it can be a bit painful. So I was wondering: is there an IDE that lets you put a "process tag" on functions coming from different objects, and gives you a view of all the functions carrying the same tag? Edit: to give an example (that I'm making up completely, sorry if it doesn't sound very realistic), let's say we have the following business process for an HR application: receive a holiday request from an employee; check the validity of the request; then alert his boss ("one of those lazy programmers wants another day off"); at the same time, the boss will want a table of every employee's timetable during the period the employee wants his vacation; then handle the boss's answer and send a nice little mail to the employee ("No way, lazy bones"). Even if we get rid of everything not purely business-related (mail sending, DB handling to fetch the useful info, GUI functionality, and so on), we still have something that doesn't really fit in one class. I'd like an IDE that would let me pull together quickly, at the very least: the function handling the validation of the request by the employee; the function preparing the "timetable" for the boss; the function handling the validation of the request by the boss. I wouldn't put all those functions in the same class (but perhaps that's my mistake?). This is where my dreamed-of IDE could be helpful.
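
    Short of a dedicated IDE feature, one workaround is to make the "process tag" explicit in the code, for example as an annotation, so that "Find Usages" on the annotation (or a small reflection scan) lists every method in the chain regardless of which class it lives in. A hypothetical Java sketch, with invented class and process names:

        import java.lang.annotation.ElementType;
        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.annotation.Target;
        import java.lang.reflect.Method;

        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.METHOD)
        @interface ProcessTag {
            String value();
        }

        class HolidayRequestValidator {
            @ProcessTag("holiday-request")
            boolean validateEmployeeRequest(Object request) { return true; }
        }

        class TimetableService {
            @ProcessTag("holiday-request")
            Object buildTeamTimetable(Object period) { return null; }
        }

        class BossWorkflow {
            @ProcessTag("holiday-request")
            void handleBossAnswer(Object answer) { }
        }

        public class ProcessIndex {
            public static void main(String[] args) {
                Class<?>[] classes = { HolidayRequestValidator.class, TimetableService.class, BossWorkflow.class };
                for (Class<?> c : classes) {
                    for (Method m : c.getDeclaredMethods()) {
                        ProcessTag tag = m.getAnnotation(ProcessTag.class);
                        if (tag != null && tag.value().equals("holiday-request")) {
                            System.out.println(c.getSimpleName() + "." + m.getName());
                        }
                    }
                }
            }
        }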

    Read the article

  • Discuss: PLs are characterised by which (iso)morphisms are implemented

    - by Yttrill
    I am interested to hear discussion of the proposition summarised in the title. As we know, programming language constructions admit a vast number of isomorphisms. In some languages, at some places in the translation process, some of these isomorphisms are implemented, whilst others require code to be written to implement them. For example, in my language Felix, the isomorphism between a type T and a tuple of one element of type T is implemented, meaning the two types are indistinguishable (identical). Similarly, a tuple of N values of the same type is not merely isomorphic to an array, it is an array: the isomorphism is implemented by the compiler. Many other isomorphisms are not implemented; for example, there is an isomorphism expressed by the following client code:

        match v with | ((?x,?y),?z) => x,(y,z)    // Felix
        match v with | ((x,y),z) -> x,(y,z)       (* Ocaml *)

    As another example, a type constructor C of int in Felix may be used directly as a function, whilst in Ocaml you must write a wrapper: let c x = C x. Another isomorphism Felix implements is the elimination of unit values, including those in tuples. Felix can do this because (most) polymorphic values are monomorphised, which is possible because it is a whole-program analyser; Ocaml, for example, cannot do this easily because it supports separate compilation. For the same reason Felix performs type-class dispatch at compile time whilst Haskell passes around dictionaries. There are some quite surprising issues here. For example, an array is just a tuple, and tuples can be indexed at run time using a match, returning a value of a corresponding sum type. Indeed, to be correct, the index used is in fact a case of a unit sum with N summands, rather than an integer. Yet, in a real implementation, if the tuple is an array, the index is replaced by an integer with a range check, and the result type is replaced by the common argument type of all the constructors: two isomorphisms are involved here, but they're implemented partly in the compiler translation and partly at run time.

    Read the article

  • Broad topics needed for teaching game development

    - by livingtech
    I am going to be doing a presentation on game development to an iPhone user group in the near(ish) future. My audience are iPhone developers, but not necessarily very experienced ones, and this is meant to be an introduction. My question is: what broad topics are needed to understand game development? I acknowledge that this is fairly subjective, but I am really hoping for a comprehensive list of high-level topics that apply to a broad enough swath of games that anyone interested in the topic SHOULD know about them. I would be ecstatic with some pointers to any resources that attempt to make a list such as this. (I have looked, but my google-fu is failing me tonight.) Here's what I have so far:
    - The game loop (with a sub-note about event-driven games)
    - 2D animation (sprites/texture maps)
    - 3D animation (importance of frameworks, modeling software)
    - Particles and particle effects
    - Hit detection
    - AI
    Obviously I will not be covering all these topics in any depth; it's more like simply defining them so that after my talk the audience will (hopefully) be able to wrap their heads around how any given game might be developed. What am I missing?
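
    For the game loop topic in the list above, a minimal fixed-timestep sketch may help the audience picture it. This is a generic outline in Java with placeholder update/render methods, not tied to any particular engine or to iOS:

        public class GameLoop {
            private static final double STEP_SECONDS = 1.0 / 60.0;   // 60 logic updates per second
            private boolean running = true;

            void run() {
                long previous = System.nanoTime();
                double lag = 0.0;
                while (running) {
                    long now = System.nanoTime();
                    lag += (now - previous) / 1_000_000_000.0;
                    previous = now;

                    // Catch the simulation up in fixed steps, then draw once per frame.
                    while (lag >= STEP_SECONDS) {
                        update(STEP_SECONDS);
                        lag -= STEP_SECONDS;
                    }
                    render();
                }
            }

            void update(double dt) { /* move sprites, run AI, check hits */ }
            void render()          { /* draw the current state */ }

            public static void main(String[] args) {
                // new GameLoop().run();   // would spin forever; shown only for its shape
            }
        }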

    Read the article

  • Are these company terms good for a programmer, or should I move?

    - by o_O
    Here are some of the terms and conditions set forward by my employer. Do these make sense for a job like programming?
    - No freelancing in any way, even in your free time outside company work hours. (Maybe okay; they probably want their employees fully focused on their full-time job, and they don't want employees doing similar work for a competing client. Completely rational in that sense, so I sort of agree.)
    - Anything you develop (ideas, designs, code, etc.) while employed there becomes their property. Seriously? Don't you think that's bad for me? If I develop something in my free time (by cutting down on sleep and working hard), outside company time and resources, is that claim reasonable? I heard that Steve Wozniak had such a contract while he was working at HP, but that was hardware design, and those companies pay well compared to the peanuts I get.
    - No other kinds of work allowed, which means no open source. Fully dedicated to being a puppet for the employer, though the working environment is sort of okay.
    By my assessment this place would score 10/12 on the Joel Test. So are these terms okay, especially considering that I'm underpaid with peanuts?

    Read the article

  • Metaphor for task synchronization [closed]

    - by nkint
    I'm looking for a metaphor. A friend of mine taught me to use metaphors from nature, everyday life and math, and to use them to design my projects. They can help in creating a better design, or a better understanding of the problem, and they are cool. Now I'm working on a project with hardware and micro-controllers in C. For convenience, I have decided to use multiple micro-controllers as co-processor units for real-time work (the slaves) and one master. This has saved me a lot of headache: I can code the main logic in the master without paying too much attention to super-optimizing everything; I don't care if I need a blocking call; I don't worry about serial communication with the computer. I just send messages to the slaves and they are super fast and truly real time. I like my design and it seems to work well. So here are the important concepts that I'm trying to capture in the metaphor:
    - a hierarchy of processing
    - not one big brain, but rather several small, distributed brain units
    - using distributed power or resources
    I'm looking for a good metaphor for this concept of having one unit synchronize the work of all the others. Preferably, the metaphor would come from nature, biology, or zoology.

    Read the article

  • Which language is more suitable for heavy file tasks?

    - by All
    I need to write a script (based on basic functions) to process image/audio/video files. The process is mainly filesystem tasks and conversions. The file database is stored in MySQL. The script is simple but puts a heavy load on the system; for example, renaming/converting/copying thousands of files in a run. The script does not read the content of files into memory; it just manages the commands for sub-processes. The main weight is in the communication with the filesystem. The script will be used regularly for new files. My concern is performance. I am thinking of either a shell script or a compiled language like C. Please advise which programming language is more suitable for this purpose and why. UPDATE: An example is to scan a folder for images, convert them with ImageMagick, move the files to a destination folder, get file info, then update the database. As you can see, the process has little room for optimization, and most languages have similar APIs for popular programs like ImageMagick, MySQL, etc. Thus, it can be written in any language. I just wish to reduce resource usage by speeding up the long loop. NOTE: I know that questions comparing languages are not favoured, but I really had trouble choosing, because the problems only show up in action.
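
    Whichever language wins, the shape of the job stays the same: walk the folder, shell out to the converter, move the file, record the result. Below is a hedged Java sketch of that pipeline; the folder names are invented, the database update is left as a stub, and it assumes ImageMagick's convert command is on the PATH:

        import java.io.IOException;
        import java.nio.file.DirectoryStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.StandardCopyOption;

        public class BatchConvert {
            public static void main(String[] args) throws IOException, InterruptedException {
                Path source = Paths.get("incoming");      // hypothetical folders
                Path converted = Paths.get("converted");
                Path archive = Paths.get("archive");
                Files.createDirectories(converted);
                Files.createDirectories(archive);

                try (DirectoryStream<Path> images = Files.newDirectoryStream(source, "*.png")) {
                    for (Path image : images) {
                        Path out = converted.resolve(image.getFileName().toString().replace(".png", ".jpg"));

                        // Delegate the heavy lifting to ImageMagick running as a sub-process.
                        Process p = new ProcessBuilder("convert", image.toString(), out.toString())
                                .inheritIO()
                                .start();
                        if (p.waitFor() != 0) continue;   // skip failures, keep the batch going

                        // Move the original out of the way and collect info for the database row.
                        Files.move(image, archive.resolve(image.getFileName()), StandardCopyOption.REPLACE_EXISTING);
                        long bytes = Files.size(out);

                        // Stub: a real script would UPDATE the MySQL row here (JDBC or any client).
                        System.out.printf("converted %s (%d bytes)%n", image.getFileName(), bytes);
                    }
                }
            }
        }

    Since the real work happens in the sub-processes and the filesystem, the choice of scripting language mostly affects how the loop is written, not how fast the batch runs.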

    Read the article

  • Is it OK to use dynamic typing to reduce the number of variables in scope?

    - by missingno
    Often, when I am initializing something, I have to use a temporary variable, for example:

        file_str = "path/to/file"
        file_file = open(file_str)

    or

        regexp_parts = ['foo', 'bar']
        regexp = new RegExp( regexp_parts.join('|') )

    However, I like to reduce the scope of my variables to the smallest scope possible, so there are fewer places where they can be (mis-)used. For example, I try to use for(var i ...) in C++ so the loop variable is confined to the loop body. In these initialization cases, if I am using a dynamic language, I am often tempted to reuse the same variable in order to prevent the initial (and now useless) value from being used later in the function:

        file = "path/to/file"
        file = open(file)

        regexp = ['...', '...']
        regexp = new RegExp( regexp.join('|') )

    The idea is that by reducing the number of variables in scope I reduce the chances of misusing them. However, this sometimes makes the variable names look a little weird, as in the first example, where "file" refers to a "filename". I think perhaps this would be a non-issue if I could use non-nested scopes:

        begin scope1
            filename = ...
            begin scope2
                file = open(filename)
            end scope1
            // use file here
            // can't use filename by accident
        end scope2

    but I can't think of any programming language that supports this. What rules of thumb should I use in this situation? When is it best to reuse the variable? When is it best to create an extra variable? What other ways do we solve this scope problem?
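
    In brace-scoped languages there are two common answers: an anonymous block, or a small helper function, either of which keeps the throwaway name out of the rest of the method. A Java-flavoured sketch of both, with invented file paths (so the reads would fail at run time, but the scoping is the point):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class ScopeDemo {
            public static void main(String[] args) throws IOException {
                // Option 1: an anonymous block; fileName disappears once the block ends.
                BufferedReader reader;
                {
                    String fileName = "path/to/file";        // hypothetical path
                    reader = Files.newBufferedReader(Paths.get(fileName));
                }
                // fileName is out of scope here; only reader survives.

                // Option 2: push the temporary into a helper, so it never enters this scope at all.
                BufferedReader other = openConfig();
                reader.close();
                other.close();
            }

            private static BufferedReader openConfig() throws IOException {
                String fileName = "path/to/other/file";      // lives and dies inside the helper
                return Files.newBufferedReader(Paths.get(fileName));
            }
        }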

    Read the article

  • How programmers can afford to NOT learn new things.

    - by newbie
    Good day! I am wondering how programmers learn so many things, because as a career shifter (from engineering to IT) I find it really hard to absorb everything. Three months ago I learned HTML/CSS/JavaScript. Two months ago I learned MySQL and CCNA1. One month ago I learned C and Java. Now I am trying to learn J2EE. But it seems that I must combine everything I learned and then add more into my brain (especially because J2EE is HUGE! -- XML, servlets, JSP, JSTL, EJB, frameworks (Hibernate, Struts, Spring), JDBC... and so on!!!). So I am wondering: how can programmers learn everything, then add something new, without being confused by it all? Because right now I feel like my brain is going to explode from information overload! And the knowledge I am trying to acquire is just the BASICS of programming (icing on the cake)! I still need to learn MORE to become a good programmer! And new technology emerges now and then that requires programmers to learn more again... learn, learn, learn. Any suggestions on how you as a programmer fit all you've learned into your brain? And how do you know which is the right thing for you to learn? Aren't you afraid that what you've learned may be obsolete next year, so you'd have to start learning all over again?

    Read the article

  • First Experience with Web Services

    When I first started programming with Microsoft .NET (the 1.0 Framework) I had a strong desire to learn how search engines indexed web sites. At that time I was working as a search engine spammer, creating web pages to generate traffic for specific themes for various clients. One way I attempted to better understand .NET was to build a web spider that analyzed web pages on demand. An example of the spider is hosted at AddLinkz.com. After my spider was built I had no real idea what I could or should do with it until I found the MSN Search API. I used this web service to compare its results with my spider's. Additionally, I used the API to feed my .NET web spider new URLs based on specific search terms. MSN's search API was very easy to use: I just had to request information by calling a web URL with parameters via a GET request, and the results were returned in XML. At that time all requests were limited to XML responses and a maximum of 1,000 results per query. Since then the entire API has gone through several reconstructions, rebrandings and new search services. Microsoft's new Bing API replaced the older MSN Search API and added several new search capabilities. These new features allow search data to be returned for web searches, image searches, news searches, and related search terms, to name a few.

    Bing API Version 2.0 SourceTypes:
    - Web: searches for web content (e.g. "sushi")
    - Image: searches for images on the web (e.g. "sushi")
    - News: searches news stories (e.g. "sushi")
    - InstantAnswer: searches Encarta online (e.g. "what is sushi", "convert 5 feet to meters", "x*5=7", "2 plus 2")
    - Spell: searches the Encarta dictionary for spelling suggestions
    - Phonebook: searches phonebook entries (e.g. "sushi in Los Angeles")
    - RelatedSearch: returns the query strings most similar to yours
    - Ad: returns advertisements to incorporate with results

    I currently plan to start using the web search feature from the new Bing 2.0 API in an open source project related to exception management. Currently, it is still in the conception phase.

    Read the article

  • Should I modify an entity with many parameters or with the entity itself?

    - by Saeed Neamati
    We have a SOA-based system. The service methods look like UpdateEntity(Entity entity). For small entities it's all fine. However, when entities get bigger and bigger, to update one property the UI has to follow this pattern:
    - get the parameters from the UI (user)
    - create an instance of the Entity using those parameters
    - get the entity from the service
    - write code to fill in the unchanged properties
    - give the resulting entity back to the service
    Another option that I've seen in previous projects is to create semantic update methods for each update scenario. In other words, instead of having one global all-encompassing update method, we had many ad-hoc parametric methods. For example, for the User entity, instead of having an UpdateUser(User user) method, we had these methods: ChangeUserPassword(int userId, string newPassword), AddEmailToUserAccount(int userId, string email), ChangeProfilePicture(int userId, Image image), ... Now, I don't know which method is truly better, and with each approach we encounter problems. I'm going to design the infrastructure for a new system, and I don't have enough reasons to pick either of these approaches. I couldn't find good resources on the Internet because of the lack of keywords I could provide. Which approach is better? What pitfalls does each have? What benefits can we get from each one?
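
    The two styles can be made concrete as service contracts. Here is a rough Java sketch whose type and method names are invented to mirror the question rather than taken from any existing framework:

        public class UserServiceContracts {
            record User(int id, String password, String email, byte[] profilePicture) {}

            // Style 1: one coarse-grained method; callers must send the whole entity back,
            // so unchanged fields still travel over the wire.
            interface UserCrudService {
                User getUser(int userId);
                void updateUser(User user);
            }

            // Style 2: semantic, intention-revealing operations; each carries only what it needs.
            interface UserCommandService {
                void changePassword(int userId, String newPassword);
                void addEmail(int userId, String email);
                void changeProfilePicture(int userId, byte[] image);
            }
        }

    A common middle ground is a partial-update (patch) message that carries the entity id plus only the fields that changed.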

    Read the article

  • What languages are good selling points on a resume? [closed]

    - by Thomas Galvin
    I have a good amount of experience with C# and Java at the moment, but after education and whatnot I wish to be capable in more than just two high-level, comparatively limited languages, and from what I've seen, languages like C(++) or PHP are in demand at the moment. I've thought about learning the following:
    - C. Very standard, lightweight and available on everything. However, very old and mostly procedural.
    - C++. Standard like C, but I've read in some places that it encourages bad program design and the use of dodgy libraries - though similar things have been said about C too, so I'll take that with a grain of salt.
    - D. Quite new but looks promising; will it be relevant or applicable in the future, though?
    - PHP. With the internet becoming ever more important, I think this might be the one to go with, but the code itself isn't very intuitive.
    - CoffeeScript (or plain JavaScript). With Microsoft's new idea of HTML5+JS for everything under the sun, this doesn't look like a bad choice. However, things do change, and I wish to be primarily a software dev, not a web dev.
    So out of the above list, or any others that you could suggest, what would you say I should begin to focus on? What is your opinion on staying with C#?

    Read the article

  • Languages with a clear distinction between subroutines that are purely functional, mutating, state-changing, etc?

    - by CPX
    Lately I've become more and more frustrated that in most modern programming languages I've worked with (C/C++, C#, F#, Ruby, Python, JS and more) there is very little, if any, language support for determining what a subroutine will actually do. Consider the following simple pseudo-code: var x = DoSomethingWith(y); How do I determine what the call to DoSomethingWith(y) will actually do? Will it mutate y, or will it return a copy of y? Does it depend on global or local state, or is it dependent only on y? Will it change the global or local state? How does closure affect the outcome of the call? In all languages I've encountered, almost none of these questions can be answered merely by looking at the signature of the subroutine, and there is almost never any compile-time or run-time support either. Usually, the only way is to put your trust in the author of the API and hope that the documentation and/or naming conventions reveal what the subroutine will actually do. My question is this: do there exist any languages today that make symbolic distinctions between these types of scenarios and place compile-time constraints on what code you can actually write? (There is of course some support for this in most modern languages, such as different levels of scope and closure, the separation between static and instance code, lambda functions, et cetera. But too often these seem to come into conflict with each other. For instance, a lambda function will usually either be purely functional, and simply return a value based on input parameters, or mutate the input parameters in some way. But it is usually possible to access static variables from a lambda function, which in turn can give you access to instance variables, and then it all breaks apart.)
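
    Java cannot enforce the distinction, but the asker's point can be made concrete with two signatures that promise different things only by convention; the method names here are illustrative only:

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        public class MutationSignatures {
            // Nothing in this signature says whether y is mutated; only the docs (or source) do.
            static void doSomethingWith(List<Integer> y) {
                y.replaceAll(n -> n * 2);               // silently mutates the caller's list
            }

            // Returning a new value and never touching the argument is the "pure" convention,
            // but the compiler does not check it.
            static List<Integer> doubledCopyOf(List<Integer> y) {
                List<Integer> copy = new ArrayList<>(y);
                copy.replaceAll(n -> n * 2);
                return copy;
            }

            public static void main(String[] args) {
                List<Integer> y = new ArrayList<>(List.of(1, 2, 3));

                // The only language-level defence is handing out a view that throws on mutation.
                List<Integer> readOnly = Collections.unmodifiableList(y);
                System.out.println(doubledCopyOf(readOnly));   // fine: [2, 4, 6]
                doSomethingWith(readOnly);                     // fails at run time, not compile time
            }
        }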

    Read the article

  • What determines which JavaScript functions are blocking vs non-blocking?

    - by Sean
    I have been doing web-based JavaScript (vanilla JS, jQuery, Backbone, etc.) for a few years now, and recently I've been doing some work with Node.js. It took me a while to get the hang of "non-blocking" programming, but I've now gotten used to using callbacks for IO operations and whatnot. I understand that JavaScript is single-threaded by nature. I understand the concept of the Node "event queue". What I DON'T understand is what determines whether an individual JavaScript operation is "blocking" vs. "non-blocking". How do I know which operations I can depend on to produce an output synchronously for me to use in later code, and which ones I'll need to pass callbacks to so I can process the output after the initial operation has completed? Is there a list of JavaScript functions somewhere that are asynchronous/non-blocking, and a list of ones that are synchronous/blocking? What is preventing my JavaScript app from being one giant race condition? I know that operations that take a long time, like IO operations in Node and AJAX operations on the web, need to be asynchronous and therefore use callbacks - but who determines what qualifies as "a long time"? Is there some sort of trigger within these operations that removes them from the normal "event queue"? If not, what makes them different from simple operations like assigning values to variables or looping through arrays, which it seems we can depend on to finish in a synchronous manner? Perhaps I'm not even thinking of this correctly - hoping someone can set me straight. Thanks!
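
    The question is about JavaScript, but the split itself can be sketched in any language: a blocking call keeps the calling thread busy until the result is in hand, while a non-blocking call hands the work to the platform (in Node's case, libuv's thread pool or the OS's async facilities) and delivers the result later through a callback. A rough Java analogy follows, fully self-contained, with an artificial slow operation standing in for I/O:

        import java.util.concurrent.CompletableFuture;

        public class BlockingVsNot {
            static String slowOperation() {
                try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                return "result";
            }

            public static void main(String[] args) throws Exception {
                // Blocking: the calling thread stands still until the value is in hand.
                String sync = slowOperation();
                System.out.println("blocking call returned: " + sync);

                // Non-blocking from the caller's point of view: the work is handed to another
                // thread and the result comes back later through a callback, roughly the role
                // the platform plays for Node's fs.readFile or the browser's AJAX calls.
                CompletableFuture.supplyAsync(BlockingVsNot::slowOperation)
                        .thenAccept(result -> System.out.println("callback got: " + result));

                System.out.println("this line runs without waiting for the async result");
                Thread.sleep(300);   // crude: keep the JVM alive long enough for the callback to fire
            }
        }

    The point of the analogy: whether an operation is offloaded is a property of the API and the runtime that implements it, not something the language infers from how long the operation happens to take.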

    Read the article

  • Putting Together a Game Design Team?

    - by Kaia
    I'm attempting to put together a game design team that is willing to help me design, program, test, and to some extent produce the game we make for the public. I need anyone who knows anything about programming/coding, designing, etc. Once we get it up and running and out into the world (overdramatic, maybe? haha), I have ideas for generating a profit from it, so there is a possibility of payment. My thinking on it (so far) is this:
    - 2D (possibly; I haven't decided if I want it 2D or 3D - it really depends on what is easier)
    - 3rd person
    - Adventure (I want there to be a point to it, but a point with no real end)
    I want there to be a story to it. If you've ever played Dofus, think like that: there is a story to the game, but no real end. I want (if possible) to include mini-games; these could become a way for a player to acquire in-game money, quest items, etc. If anyone is interested in helping me create the story line/script (which we will finish first, before creating the game), please contact me. I want to get this completed as soon as possible.

    Read the article

  • Can too much abstraction be bad?

    - by m3th0dman
    As programmers, I feel our goal is to provide good abstractions over the given domain model and business logic. But where should this abstraction stop? How do you make the trade-off between abstraction with all its benefits (flexibility, ease of change, etc.) and ease of understanding the code with all its benefits? I believe I tend to write overly abstracted code, and I don't know how good that is; I often tend to write it as some kind of micro-framework, which consists of two parts: (1) micro-modules which are hooked into the micro-framework: these modules are easy to understand, develop and maintain as single units; this is the code that actually does the functional work described in the requirements; (2) connecting code: here, I believe, lies the problem. This code tends to be complicated because it is sometimes very abstracted and hard to understand at first; this arises from the fact that it is pure abstraction, with the grounding in reality and the business logic being performed in the code described in (1); for this reason, this code is not expected to change once tested. Is this a good approach to programming? That is, having the changing code very fragmented into many modules and very easy to understand, and the non-changing code very complex from the abstraction point of view? Should all the code be uniformly complex (that is, the code in (1) more complex and interlinked and the code in (2) simpler), so that anybody looking through it can understand it in a reasonable amount of time, but change is expensive - or is the solution presented above good, where the "changing code" is very easy to understand, debug and change, and the "linking code" is kind of difficult? Note: this is not about code readability! Both the code in (1) and the code in (2) are readable, but the code in (2) comes with more complex abstractions while the code in (1) comes with simple abstractions.

    Read the article

  • Breaking up classes and methods into smaller units

    - by micahhoover
    During code reviews a couple of devs have recommended I break up my methods into smaller methods. Their justification was (1) increased readability and (2) that the back trace that comes back from production shows a method name more specific to the line of code that failed. There may also have been some colorful words about functional programming. Additionally, I think I may have failed an interview a while back because I didn't give an acceptable answer about when to break things up. My inclination is that when I see a bunch of methods in a class or across a bunch of files, it isn't clear to me how they flow together and how many times each one gets called. I don't get a feel for the linearity of it as quickly just by eyeballing it. The other thing is that a lot of people seem to place a premium on organization over content (e.g. 'Look at how organized my sock drawer is!' Me: 'Overall, I think I can get to my socks faster if you count the time it took to organize them'). Our business requirements are not very stable. I'm afraid that if the classes/methods are very granular, it will take longer to refactor when requirements change. I'm not sure how much of a factor this should be. Anyway, computer science is part art, part science, but I'm not sure how much that applies to this issue.
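
    What the reviewers describe usually amounts to the "extract method" refactoring: behaviour stays identical, but each named step gets its own frame in a back trace. A before/after sketch in Java with made-up domain types:

        import java.util.List;

        public class OrderReport {
            record Order(double amount, boolean taxable) {}

            // Before: one method filters, sums and formats in a single block.
            static String reportBefore(List<Order> orders) {
                double total = 0;
                for (Order o : orders) {
                    if (o.taxable()) {
                        total += o.amount() * 1.2;
                    } else {
                        total += o.amount();
                    }
                }
                return String.format("Total: %.2f", total);
            }

            // After: each step has a name that can appear in a back trace on its own.
            static String reportAfter(List<Order> orders) {
                return format(total(orders));
            }

            static double total(List<Order> orders) {
                return orders.stream().mapToDouble(OrderReport::grossAmount).sum();
            }

            static double grossAmount(Order o) {
                return o.taxable() ? o.amount() * 1.2 : o.amount();
            }

            static String format(double total) {
                return String.format("Total: %.2f", total);
            }

            public static void main(String[] args) {
                List<Order> orders = List.of(new Order(100, true), new Order(50, false));
                System.out.println(reportBefore(orders));   // Total: 170.00
                System.out.println(reportAfter(orders));    // Total: 170.00
            }
        }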

    Read the article

  • Have there been attempts to make object containers that search for valid programs by auto wiring compatible components?

    - by Aaron Anodide
    I hope this post isn't too "fringe" - I'm sure someone will just kill it if it is :) Three things made me want to reach out about this now: (1) decoupling is at the forefront of design; (2) TDD inspires the idea that it doesn't matter how a program comes to exist as long as it works; (3) seeing how often the adapter pattern is applied to achieve (1). I'm almost sure this has been tried - I have a memory of reading about it around the year 2000 or so. If I had to guess, it was maybe about an earlier version of the Java Spring framework. At that time we were not so far from the days when people believed that computer programs could exhibit useful emergent behavior. I think the article said it didn't work, but it didn't say it was impossible. I wonder whether, since then, it has been deemed impossible, or simply an illusion due to a false assumption of similarity between a brain and a CPU. I know this illusion existed because I had an internship in 1996 where I programmed neural nets that were supposedly going to exhibit "brain damage". STILL, after all that, I'm sitting around this morning unable to shake the idea that it should be possible to have a method of programming that allows autonomous components to find each other, attempt to collaborate, and have their outputs evaluated against a set of desired results.

    Read the article

  • Is knowledge of what happens 'behind the scenes' (in the compiler, external DLLs, etc.) important?

    - by I_Question_Things_Deeply
    I have been a computer fanatic for almost a decade now. I've always loved and wondered how computers work, from the purest, lowest hardware level to the very smallest pixel on the screen, and all the software around that. That seems to be my problem, though... as I try to write code (I'm pretty fluent at C++) I always sit for enormous amounts of time in front of a text editor wondering how every line, statement, datum, function, etc. will correspond to every assembly and machine instruction performed to do absolutely everything necessary for the kernel to allocate memory to run my compiled program, and all of the other hardware being used as well. For example, I would write cout << "Before memory changed" << endl; and run the debugger to get the assembly for this, and then try to map the assembly back to machine code based on my ISA, and then research every .dll, library file, linked library, linking process, the linker's source code, the makefile, the steps my kernel takes to process this compilation, and the hardware involved aside from the processor (e.g. video card, sound card, chipset, cache latency, byte-sized registers, calling conventions, DDR3 RAM and disk drive, filesystem functioning and so many other things). Am I going about programming wrong? I mean, I feel I should know everything that goes on underneath the English syntax of a computer program. But the problem is that the more I research every little thing, the less I actually accomplish. I can never finish anything because of this mentality, yet I feel compelled to know everything... what should I do?

    Read the article

  • What's the right/standard way of achieving separation of concerns?

    - by Ghanima
    Some background: I want to start developing games, and taking some of the advice given on this site, I've started with something simple and familiar, such as Pong, Tetris, etc. I want to take as much time as needed to make sure that I have the basics right before moving on to something bigger. I have medium programming experience, but I realize making games is a different thing. I find myself wondering about many things, like: should this be in a separate class? Should this module handle this, or is it better to let other modules have that kind of functionality? For example, the bouncing of the ball in Pong is currently handled by the ball module, but maybe it would be better if some other module did it. Right now I have different modules: one for the graphics, one for the game logic, and others for the objects (depending on the kind of movement required; not all the objects are alike). I know I am asking a lot; any tips you have will be very much appreciated. Short question: what's the right or standard way of separating the modules? What have you found most effective? Is it enough to just keep the drawing (graphics) and the logic separate? Is it necessary to have a lot of classes (for example, for the objects in the game, to handle the movement, etc.)?
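
    One common baseline, matching the "keep drawing and logic separate" instinct, is to let the object hold state, a logic module own the rules, and a renderer only read the state. A minimal Java sketch of that split, with invented names and console output standing in for a real graphics API:

        public class PongSeparation {
            // Plain state: no behaviour beyond holding data.
            static class Ball {
                double x, y, vx, vy;
            }

            // Game logic: owns the rules (bouncing), knows nothing about drawing.
            static class Physics {
                void update(Ball b, double dt, double width, double height) {
                    b.x += b.vx * dt;
                    b.y += b.vy * dt;
                    if (b.y < 0 || b.y > height) b.vy = -b.vy;   // bounce off top/bottom walls
                    if (b.x < 0 || b.x > width)  b.vx = -b.vx;
                }
            }

            // Rendering: reads state, never changes it.
            static class Renderer {
                void draw(Ball b) {
                    System.out.printf("ball at (%.1f, %.1f)%n", b.x, b.y);
                }
            }

            public static void main(String[] args) {
                Ball ball = new Ball();
                ball.vx = 30; ball.vy = 20;
                Physics physics = new Physics();
                Renderer renderer = new Renderer();
                for (int frame = 0; frame < 3; frame++) {
                    physics.update(ball, 1.0 / 60.0, 640, 480);
                    renderer.draw(ball);
                }
            }
        }

    With this shape, swapping the console renderer for a real one (or the physics for a fancier rule set) touches only one class.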

    Read the article

  • Learning how to design knowledge and data flow [closed]

    - by max
    In designing software, I spend a lot of time deciding how the knowledge (algorithms/business logic) and data should be allocated between different entities; that is, which object should know what. I am asking for advice about books, articles, presentations, classes, or other resources that would help me learn how to do it better. I code primarily in Python, but my question is not really language-specific; even if some of the insights I learn don't work in Python, that's fine. I'll give a couple of examples to clarify what I mean.
    Example 1: I want to perform some computation. As a user, I will need to provide parameters for the computation. I can have all those parameters sent to the "main" object, which then uses them to create other objects as needed. Or I can create one "main" object as well as several additional objects; the additional objects would then be sent to the "main" object as parameters. What factors should I consider to make this choice?
    Example 2: Let's say I have a few objects of type A that can perform a certain computation. The main computation often involves using an object of type B that performs some interim computation. I can either "teach" A instances what exact parameters to pass to B instances (i.e., make B "dumb"), or I can "teach" B instances to figure out what needs to be done when looking at an A instance (i.e., make B "smart"). What should I think about when I'm making this choice?
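
    Example 2 can be written out both ways to see where the knowledge ends up. This is a hedged Java sketch with invented names (the poster works in Python, but the trade-off is the same):

        public class SmartVsDumb {
            // "Dumb" B: does exactly the interim computation it is told to, nothing more.
            static class DumbHelper {
                double scale(double value, double factor) { return value * factor; }
            }

            // "Smart" B: looks at A and decides for itself what needs doing.
            static class SmartHelper {
                double contribute(MainComputation a) {
                    double factor = a.calibrated() ? 1.0 : a.calibrationFactor();
                    return a.rawValue() * factor;
                }
            }

            record MainComputation(double rawValue, boolean calibrated, double calibrationFactor) {
                // With a dumb helper, A keeps the knowledge and passes exact parameters.
                double resultWithDumb(DumbHelper b) {
                    return b.scale(rawValue, calibrated ? 1.0 : calibrationFactor);
                }
                // With a smart helper, A hands itself over and the knowledge moves into B.
                double resultWithSmart(SmartHelper b) {
                    return b.contribute(this);
                }
            }

            public static void main(String[] args) {
                MainComputation a = new MainComputation(10.0, false, 1.5);
                System.out.println(a.resultWithDumb(new DumbHelper()));    // 15.0
                System.out.println(a.resultWithSmart(new SmartHelper()));  // 15.0
            }
        }

    The dumb helper keeps B reusable and testable in isolation; the smart helper keeps the calibration rule in one place but couples B to A's interface.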

    Read the article

  • What is the best objective way to measure language popularity trends? (What's better than TIOBE?)

    - by Eric Wilson
    The best way I know of to get data on computer language popularity is the TIOBE index. But everyone knows that TIOBE is hopelessly flawed. (If someone provides a link to support this, I'll add it here.) So is there any data on programming language popularity that is generally considered meaningful? The only other option I know of is to look at the trends at indeed.com, which is inherently flawed, being based on job postings. It isn't as if I would make a future language decision based solely on an index, but it might provide a useful balance to the skewed perspective one obtains by talking to one's friends and colleagues. To illustrate that bias, I'll point out that, based on the experience of those I personally know, the only languages used professionally today (in order of popularity) are Java, C#, Groovy, JavaScript, Ruby, Objective-C, and Perl. (Though it is evident that C, C++ and PHP were used in the past.) So my question is: everyone bashes TIOBE, but is there anything else? If so, can anyone explain how we know the alternative has better methodology? Thanks.

    Read the article

  • What language available on commodity web hosts would suit a C# developer? [closed]

    - by billpg
    Recognising its ubiquity on commodity web hosting services, I tried developing in PHP a few years ago. I really didn't like it, later deciding that life was too short for PHP. (In brief: having to put $ on variable names; mis-spelt variable names becoming new variables; converting non-numeric strings to integers without complaint; the need for an "and this time I mean it" comparison operator.) In my ideal world, commodity web hosts would all support C#/ASP.NET, my preferred web-development language and framework, but this is not my ideal world. Even Mono has barely made a dent on Linux-based hosts. However, last time I moaned about PHP's ubiquity, someone followed up that this was no longer the case, and that many other languages are now commonly usable on web hosts too. What programming language (a) would suit a developer who prefers C#, and (b) is available to run on many web hosts?

    Read the article
