Search Results

Search found 23473 results on 939 pages for 'game programming'.


  • How to deal with tautology in comments?

    - by Tamás Szelei
    Sometimes I find myself in situations where the part of the code I am writing is (or seems to be) so self-evident that its name would basically be repeated as a comment:

        class Example
        {
            /// <summary>
            /// The location of the update.
            /// </summary>
            public Uri UpdateLocation { get; set; }
        }

    (C# example, but please treat the question as language-agnostic.) A comment like that is useless; what am I doing wrong? Is it the choice of the name that is wrong? How could I comment parts like this better? Should I just skip the comment for things like this?
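
    For contrast, a minimal sketch (hypothetical wording, reusing the question's own UpdateLocation property) of a doc comment that records something the name alone cannot:

        using System;

        class Example
        {
            /// <summary>
            /// Absolute URI that the auto-updater polls for new release manifests.
            /// Null when automatic updates have been disabled in configuration.
            /// </summary>
            public Uri UpdateLocation { get; set; }
        }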

    Read the article

  • How much time does it take for a new language like D to become popular? [closed]

    - by Adrián Pérez
    I was reading about new languages to learn and I found very good comments about D, like "it's the new C" or "what C++ should have been". Knowing that many people say wonders about the language, I'm wondering how much time it usually takes for a language to become popular - that is, having libraries ported or written natively for it and being used in serious software development. I have read about the history of Java and Python to try to figure it out, but maybe they are of too high a complexity to say their adoption took the same time as it will take for D.

    Read the article

  • What to use C++ for?

    - by futlib
    I really love C++. However, I'm struggling to find good uses for it lately. It is still the language to use if you're building huge systems with huge performance requirements. Like backend/infrastructure code at Google and Facebook, or high-end games. But I don't get to do stuff like that. It's also a good choice for code that runs close to the hardware. I'd like to do more low-level stuff, but it isn't part of my job, and I can't think of useful private projects that would involve that. Traditionally, C++ was also a good choice for rich client applications, but those are mostly written in C# and Obj-C lately - and aren't really that important anymore, with everything being a web app. Or a mobile app, which are mostly written in Obj-C and Java. And of course, web-based desktop and mobile apps are quite prominent, too. At my job, I work mostly on web applications, using Java, JavaScript and Groovy. Java is a good/popular choice for non-Google-scale backends, Groovy (or Python, or Ruby or Node.js) is pretty good for the server-side of web apps and JavaScript is the only real choice for the client-side. Even the little games I'm writing in my spare time are lately mostly written in JavaScript, so they can run in the browser. So what would you suggest I could use C++ for? I'm aware that this question is very similar. However, I don't want to learn C++, I was a professional C++ programmer for years. I want to keep doing it and find good new use cases for it. I know that I can use C++ for web apps/games. I could even compile C++ to JavaScript with Emscripten. However, it doesn't seem like a good idea. I'm looking for something C++ is really good at to stay competent in the language. If your answer is: Just give up and forget C++, you'll probably never need it again, so be it.

    Read the article

  • How can I optimize my development machine's files/dirs?

    - by LuxuryMode
    Like any programmer, I've got a lot of stuff on my machine. Some of that stuff is projects of my own, some are projects I'm working on for my employer, others are open-source tools and projects, etc. Currently, I have my files organized as follows:

        /Code
        --/development (things I'm sort of hacking on plus maybe libraries used in other projects)
        --/scala (organized by language...why? I don't know!)
        --/android
        --/ruby
        --/employer_name
        -- /mobile
        --/android
        --/ios
        --/open-source (basically my forks that I'm pushing commits back upstream from)
        --/some-awesome-oss-project
        --/another-awesome-one
        --/tools
        random IDE settings sprinkled in here plus some other apps

    As you can see, things are kind of a mess here. How can I keep things organized in some sort of coherent fashion?

    Read the article

  • Should we consider the programming language during design?

    - by Codex73
    Summary
    This question aims to work out whether an application's intended usage should be a consideration when deciding on a development language, and which factors, if any, should be taken into account when choosing one.
    Application Type: Web
    Question
    Of the following popular languages, when should we use one or the other?
    Languages
    PHP, Ruby, Python
    My initial thought is that the language shouldn't be considered as much as the framework. Things to consider about a framework are scalability, usage, load, portability, modularity and many more. Things to consider about code writing may be cost, framework stability, community, etc.

    Read the article

  • Generalise variable usage inside code

    - by Shirish11
    I would like to know if it is a good practice to generalize variables (use a single variable to store all the values). Consider a simple example:

        Strings querycre, queryins, queryup, querydel;

        querycre = 'Create table XYZ ...';
        execute querycre;
        queryins = 'Insert into XYZ ...';
        execute queryins;
        queryup = 'Update XYZ set ...';
        execute queryup;
        querydel = 'Delete from XYZ ...';
        execute querydel;

    and

        Strings query;

        query = 'Create table XYZ ...';
        execute query;
        query = 'Insert into XYZ ...';
        execute query;
        query = 'Update XYZ set ...';
        execute query;
        query = 'Delete from XYZ ...';
        execute query;

    In the first case I use 4 strings, each storing the data to perform the action mentioned in its suffix. In the second case, just one variable stores all kinds of data. Having different variables makes the code easier for someone else to read and understand, but having too many of them makes it difficult to manage. Also, does having too many variables hamper performance?
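
    A third option worth comparing is to shrink each value's scope to nothing by passing the literals straight to a helper. A minimal C# sketch, assuming a hypothetical Execute helper and an injected connection (neither is from the question):

        using System.Data;

        class SchemaScripts
        {
            private readonly IDbConnection connection;

            public SchemaScripts(IDbConnection connection)
            {
                this.connection = connection;
            }

            public void RunAll()
            {
                // No named query variables at all: each statement is used exactly once.
                Execute("CREATE TABLE XYZ ...");
                Execute("INSERT INTO XYZ ...");
                Execute("UPDATE XYZ SET ...");
                Execute("DELETE FROM XYZ ...");
            }

            private void Execute(string sql)
            {
                using (var command = connection.CreateCommand())
                {
                    command.CommandText = sql;
                    command.ExecuteNonQuery();
                }
            }
        }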

    Read the article

  • Should I modify an entity through methods with many parameters, or by passing the entity itself?

    - by Saeed Neamati
    We have a SOA-based system. The service methods are like:

        UpdateEntity(Entity entity)

    For small entities, it's all fine. However, when entities get bigger and bigger, to update one property we have to follow this pattern in the UI:

      1. Get parameters from the UI (user)
      2. Create an instance of the Entity, using those parameters
      3. Get the entity from the service
      4. Write code to fill in the unchanged properties
      5. Give the resulting entity to the service

    Another option that I've seen in previous projects is to create semantic update methods for each update scenario. In other words, instead of having one global, all-encompassing update method, we had many ad-hoc parametric methods. For example, for the User entity, instead of having an UpdateUser(User user) method, we had these methods:

        ChangeUserPassword(int userId, string newPassword)
        AddEmailToUserAccount(int userId, string email)
        ChangeProfilePicture(int userId, Image image)
        ...

    Now, I don't know which method is truly better, and for each approach we encounter problems. I'm going to design the infrastructure for a new system, and I don't have enough reasons to pick either of these approaches. I couldn't find good resources on the Internet, because of the lack of keywords I could provide. Which approach is better? What pitfalls does each have? What benefits can we get from each one?
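
    For illustration, a minimal C# sketch of the two service shapes being weighed (the type and method names are hypothetical, echoing the question's examples; byte[] stands in for the Image type):

        // Option 1: one coarse-grained operation; the caller must round-trip the whole
        // entity, including every property it did not change.
        public interface IUserServiceCoarse
        {
            User GetUser(int userId);
            void UpdateUser(User user);
        }

        // Option 2: semantic, task-based operations; the caller sends only what changed.
        public interface IUserServiceSemantic
        {
            void ChangeUserPassword(int userId, string newPassword);
            void AddEmailToUserAccount(int userId, string email);
            void ChangeProfilePicture(int userId, byte[] image);
        }

        // Hypothetical entity used by both shapes.
        public class User
        {
            public int Id { get; set; }
            public string Password { get; set; }
            public string Email { get; set; }
            public byte[] ProfilePicture { get; set; }
        }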

    Read the article

  • Motivation and use of move constructors in C++

    - by Giorgio
    I recently have been reading about move constructors in C++ (see e.g. here) and I am trying to understand how they work and when I should use them. As far as I understand, a move constructor is used to alleviate the performance problems caused by copying large objects. The wikipedia page says: "A chronic performance problem with C++03 is the costly and unnecessary deep copies that can happen implicitly when objects are passed by value." I normally address such situations by passing the objects by reference, or by using smart pointers (e.g. boost::shared_ptr) to pass around the object (the smart pointers get copied instead of the object). What are the situations in which the above two techniques are not sufficient and using a move constructor is more convenient?

    Read the article

  • Are there currently any modern, standardized aptitude tests for software engineering?

    - by Matthew Patrick Cashatt
    Background
    I am a working software engineer who is in the midst of seeking out a new contract for the next year or so. In my search, I am enduring several absurd technical interviews, as indicated by this popular question I asked earlier today. Even if the questions I was being asked weren't almost always absurd, I would nonetheless be tired of answering them many times over for various contract opportunities. So this got me thinking that a standardized exam that working software professionals could take would provide a common scorecard that interviewers could reference in lieu of absurd technical interview questions (i.e. nerd hazing).
    Question
    Is there a standardized software engineering aptitude test (SEAT??) available for working professionals to take? If there isn't such an exam out there, what questions or topics should be covered?
    An additional thought
    Please keep in mind, if suggesting a question or topic, to focus on questions or topics that would be relevant to contemporary development practices and realistic needs in the workforce, as that would be the point of a standard aptitude test. In other words, no clown traversal questions.

    Read the article

  • Which web site gives the most accurate indication of a programmer's capabilities?

    - by Jerry Coffin
    If you were hiring programmers, and could choose between one of (say) the top 100 coders on topcoder.com, or one of the top 100 on stackoverflow.com, which would you choose? At least to me, it would appear that topcoder.com gives a more objective evaluation of pure ability to solve problems and write code. At the same time, despite obvious technical capabilities, this person may lack any hint of social skills -- he may be purely a "lone coder", with little or no ability to help/work with others, may lack the mentoring ability to help transfer his technical skills to others, etc. On the other hand, stackoverflow.com would at least appear to give a much better indication of peers' opinions of the coder in question, and the degree to which his presence is useful and helpful to others on the "team". At the same time, the scoring system is such that somebody who just throws up a lot of mediocre (or even poor) answers will almost inevitably accumulate a positive total of "reputation" points -- a single up-vote (perhaps just out of courtesy) will counteract the effects of no fewer than 5 down-votes, and others are discouraged (to some degree) from down-voting because they have to sacrifice their own reputation points to do so. At the same time, somebody who makes little or no technical contribution seems unlikely to accumulate a reputation that lands them (even close to) the top of the heap, so to speak. So, which provides a more useful indication of the degree to which this particular coder is likely to be useful to your organization? If you could choose between them, which set of coders would you rather have working on your team?

    Read the article

  • Is it true that .NET will be dumped by Microsoft in Windows 8? [closed]

    - by Dee Jay
    Possible Duplicate: What does Windows 8 mean for the future of .NET? OK, I read this question and someone pointed out that C# will be sidelined in the next version of Windows. There is a link in that question that points to another article, i.e. this one: Dumping .NET - Microsoft's Madness. Is it true that .NET will be dumped by Microsoft in Windows 8? Someone with insider information, please share your opinions. I'm deeply worried about this.

    Read the article

  • What is the difference between industrial development and open source development?

    - by Ida
    Intuitively, I think open source development should be much more "casual" than an industrial development process (like at Microsoft), because in OSS development:
      - Duty separation is not as strict as in big companies (maybe developers == testers in open source development?)
      - People come in and out of the open source community much more frequently than in big companies
    However, the above are just my guesses. I really want to know more about the major differences between open source and industrial development. Is their division of duties totally different (e.g., is there a leader/manager-like role in open source development?)? Maybe it is their communication style that differs a lot? Or their workflow? Please share your opinions. Thanks a lot!

    Read the article

  • What are some good game development programs for kids?

    - by John Giotta
    I know a very bright little boy who excels in math, but at home he's glued to his Nintendo DS. When I asked him what he wanted to do when he grew up, he said "Make video games!" I remember a few years ago there was mention of an MIT program called Scratch, and thought maybe this kid can do what he wants to do. Has anyone used any of the "game development for kids" software out there? Can you recommend any?

    Read the article

  • What is the meaning of 'high cohesion'?

    - by Max
    I am a student who recently joined a software development company as an intern. Back at the university, one of my professors used to say that we have to strive to achieve "low coupling and high cohesion". I understand the meaning of low coupling: it means keeping the code of separate components separate, so that a change in one place does not break the code in another. But what is meant by high cohesion? If it means integrating the various pieces of the same component well with each other, I don't understand how that becomes advantageous. What is meant by high cohesion? Can an example be given to explain its benefits?
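
    For illustration, a minimal C# sketch (hypothetical classes, not from the question) of low versus high cohesion:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Low cohesion: unrelated responsibilities lumped into one class; a change to
        // any one of them forces edits to the same type.
        class Utility
        {
            public decimal CalculateInvoiceTotal(IEnumerable<decimal> lineItems) => lineItems.Sum();
            public string FormatDate(DateTime d) => d.ToString("yyyy-MM-dd");
            public void LogMessage(string text) => Console.WriteLine(text);
        }

        // High cohesion: every member works on the same data toward one purpose, so the
        // class is easy to name, test and change in isolation.
        class Invoice
        {
            private readonly List<decimal> lineItems = new List<decimal>();
            public void AddLineItem(decimal amount) => lineItems.Add(amount);
            public decimal Total() => lineItems.Sum();
        }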

    Read the article

  • Why do memory-managed languages retain the `new` keyword?

    - by Channel72
    The new keyword in languages like Java, Javascript, and C# creates a new instance of a class. This syntax seems to have been inherited from C++, where new is used specifically to allocate a new instance of a class on the heap, and return a pointer to the new instance. In C++, this is not the only way to construct an object. You can also construct an object on the stack, without using new - and in fact, this way of constructing objects is much more common in C++. So, coming from a C++ background, the new keyword in languages like Java, Javascript, and C# seemed natural and obvious to me. Then I started to learn Python, which doesn't have the new keyword. In Python, an instance is constructed simply by calling the constructor, like: f = Foo() At first, this seemed a bit off to me, until it occurred to me that there's no reason for Python to have new, because everything is an object so there's no need to disambiguate between various constructor syntaxes. But then I thought - what's really the point of new in Java? Why should we say Object o = new Object();? Why not just Object o = Object();? In C++ there's definitely a need for new, since we need to distinguish between allocating on the heap and allocating on the stack, but in Java all objects are constructed on the heap, so why even have the new keyword? The same question could be asked for Javascript. In C#, which I'm much less familiar with, I think new may have some purpose in terms of distinguishing between object types and value types, but I'm not sure. Regardless, it seems to me that many languages which came after C++ simply "inherited" the new keyword - without really needing it. It's almost like a vestigial keyword. We don't seem to need it for any reason, and yet it's there. Question: Am I correct about this? Or is there some compelling reason that new needs to be in C++-inspired memory-managed languages like Java, Javascript and C#?
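
    On the C# remark in particular, a minimal sketch (hypothetical types) of the one distinction new still participates in there, value versus reference types:

        using System;

        struct PointValue { public int X; public int Y; }   // value type
        class PointRef { public int X; public int Y; }       // reference type

        static class NewDemo
        {
            static void Main()
            {
                // Value type: 'new' runs the zero-initialization but does not heap-allocate;
                // the struct lives inline in the local variable.
                PointValue a = new PointValue();

                // The same zero-initialized struct can be had without writing 'new' at all.
                PointValue b = default(PointValue);

                // Reference type: 'new' allocates on the managed heap and returns a reference.
                PointRef c = new PointRef();

                Console.WriteLine(a.X + " " + b.Y + " " + c.X);
            }
        }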

    Read the article

  • At $20/month, Windows Azure hosts my website with 99.97% uptime

    - by Gopinath
    A couple of years ago, reliable and decently performing Windows hosting was not affordable for many enthusiastic developers who wanted to try a startup idea or build a hobby site. I tried to start an ASP.NET website a few years ago to provide services like mobile tracing and vehicle tracing, but due to the high cost of Windows hosting I developed those services using PHP (not an easy task for a .NET developer) and hosted them on Linux servers. But with the recent evolution of Windows Azure, hosting ASP.NET websites on highly reliable servers is affordable. Today anyone can host a responsive and highly available ASP.NET website for just $20/month using Windows Azure. My website coziie.com is running on Windows Azure and serves close to a quarter million visitors a month with 99.97% uptime, and most page load times are less than 3 seconds. All I spend to run this website is around $20; if you translate it to Indian rupees, that's roughly Rs. 1000. The web server of coziie.com is powered by a single Extra Small Web role instance and the backend is powered by a SQL Azure instance. Azure is quite impressive to provide 99.97% uptime. Response times during peaks are around 3 seconds, and under normal load around 1.5 seconds. Here is the report of uptime provided by Royal Pingdom over the last year. For just $20/month, Windows Azure takes care of the following apart from hosting:
      - Patches the Windows OS to the latest version
      - Upgrades ASP.NET to the latest version - coziie.com is running on ASP.NET MVC 3 and soon I'll upgrade it to ASP.NET MVC 4
      - Hosts data on the latest and best version of the SQL Server database
      - SQL Azure maintains 3 copies of the database and automatically recovers in case of server failures and disasters; I never worry about database backups/restores
      - Provides a staging environment for deploying applications for testing and then moving them to production - I upgrade twice a month on average
    With Windows Azure I no longer focus on server maintenance or data backups. They are taken care of by the Microsoft team and I just focus on building my website. Wish there were a low-cost Linux version of Windows Azure so that I could stop worrying about server maintenance of this blog!! If you are looking for Windows hosting, look no further than Windows Azure. If you find $20/month a bit expensive to start with, you may explore Azure Websites (a sort of shared hosting environment), which is free to start with; as your traffic grows you can move to paid hosting.

    Read the article

  • Is it bad practice to output from within a function?

    - by Nick
    For example, should I be doing something like this:

        <?php
        function output_message($message, $type = 'success') { ?>
            <p class="<?php echo $type; ?>"><?php echo $message; ?></p>
        <?php }
        output_message('There were some errors processing your request', 'error');
        ?>

    or this:

        <?php
        function output_message($message, $type = 'success') {
            ob_start(); ?>
            <p class="<?php echo $type; ?>"><?php echo $message; ?></p>
        <?php
            return ob_get_clean();
        }
        echo output_message('There were some errors processing your request', 'error');
        ?>

    I understand they both achieve the same end result, but are there benefits to doing it one way over the other? Or does it not even matter?

    Read the article

  • "Imprinting" as a language feature?

    - by MKO
    Idea
    I had this idea for a language feature that I think would be useful; does anyone know of a language that implements something like this? The idea is that besides inheritance, a class can also use something called "imprinting" (for lack of a better term). A class can imprint one or several (non-abstract) classes. When a class imprints another class, it gets all of its properties and all of its methods. It's as if the class stored an instance of the imprinted class and redirected its methods/properties to it. A class that imprints another class therefore by definition also implements all of its interfaces and its abstract class.

    So what's the point?
    Well, inheritance and polymorphism are hard to get right. Often composition gives far more flexibility. Multiple inheritance offers a slew of different problems without much benefit (IMO). I often write adapter classes (in C#) by implementing some interface and passing along the actual methods/properties to an encapsulated object. The downside to that approach is that if the interface changes, the class breaks. You also have to put in a lot of code that does nothing but pass things along to the encapsulated object. A classic example is a class that implements IEnumerable or IList and contains an internal class it uses. With this technique things would be much easier.

    Example (C#)

        [imprint List<Person> as peopleList]
        public class People : PersonBase
        {
            public void SomeMethod()
            {
                DoSomething(this.Count); // Count is from List
            }
        }

        // Now People can be treated as a List<Person>
        People people = new People();
        foreach (Person person in people)
        {
            ...
        }

    peopleList is an alias/variable name (of your choice) used internally to alias the instance, but it can be skipped if not needed. One thing that's useful is to override an imprinted method; that could be achieved with the ordinary override syntax:

        public override void Add(Person person)
        {
            DoSomething();
            personList.Add(person);
        }

    Note that the above is functionally equivalent (and could be rewritten by the compiler) to:

        public class People : PersonBase, IList<Person>
        {
            private List<Person> personList = new List<Person>();

            public override void Add(object obj)
            {
                this.personList.Add(obj);
            }

            public override int IndexOf(object obj)
            {
                return personList.IndexOf(obj);
            }

            // etc. etc. for each signature in the interface
        }

    Only if IList changes will your class break. IList won't change, but an interface that you, someone on your team, or a third party has designed might well change. Also, this saves you writing a whole lot of code for some interfaces/abstract classes.

    Caveats
    There are a couple of gotchas. First, syntax must be added to call the imprinted classes' constructors from the imprinting class's constructor. Also, what happens if a class imprints two classes which have the same method? In that case the compiler would detect it and force the class to define an override of that method (where you could choose to call either imprinted class, or both).

    So what do you think, would it be useful, any caveats? It seems it would be pretty straightforward to implement something like this in the C# language, but I might be missing something :)

    Sidenote - Why is this different from multiple inheritance
    Ok, so some people have asked about this. Why is this different from multiple inheritance, and why not use multiple inheritance instead? In C#, methods are either virtual or not. Say that we have ClassB, which inherits from ClassA. ClassA has the methods MethodA and MethodB. ClassB overrides MethodA but not MethodB. Now say that MethodB has a call to MethodA. If MethodA is virtual, it will call the implementation that ClassB has; if not, it will use the base class ClassA's MethodA, and you'll end up wondering why your class doesn't work as it should. With the terminology so far you might already be confused. So what happens if ClassB inherits both from ClassA and another ClassC? I bet both programmers and compilers will be scratching their heads. The benefit of this approach, IMO, is that the imprinted classes are totally encapsulated and need not be designed with multiple inheritance in mind. You can basically imprint anything.

    Read the article

  • Maximum Length Of IP Address: 15 (IPv4) & 39(IPv6)

    - by Gopinath
    Problem
    You are designing a database table for a web application that needs to store the IP addresses of users who visit the site. The IP address is to be stored as character data in the table. To define the size of the character column, you need to know the maximum length of an IP address. So, what is the maximum length of an IP address?

    Solution
    An IPv4 address has the following format:

        255.255.255.255

    To store an IPv4 address we require 15 characters. An IPv6 address is grouped into sets of 4 hex digits separated by colons, like the one below:

        2001:0db8:85a3:0000:0000:8a2e:0370:7334

    To store an IPv6 address you require a column 39 characters long.

    Conclusion
    As IPv4 and IPv6 are the commonly used protocols, you are better off defining a column 39 characters long so that addresses in both formats can be saved into the table without any issues. This article, titled "Maximum Length Of IP Address: 15 (IPv4) & 39 (IPv6)", was originally published at Tech Dreams.
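
    As a quick sanity check of the two lengths quoted above, a throwaway C# sketch (not part of the original article):

        using System;

        class IpLengthCheck
        {
            static void Main()
            {
                // Longest dotted-decimal IPv4 form.
                Console.WriteLine("255.255.255.255".Length);                          // 15

                // Fully expanded IPv6 form: eight groups of four hex digits.
                Console.WriteLine("2001:0db8:85a3:0000:0000:8a2e:0370:7334".Length);  // 39
            }
        }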

    Read the article

  • BDD: Getting started

    - by thom
    I'm starting with BDD and this is my story:

        Feature: Months and days to days
          In order to see months and days as days
          As a date conversion fan
          I need a webpage where users can enter days and months and convert them to days.

    I have some doubts...
      - Should I write all my scenarios before coding anything, or should I first write a scenario, then write code, then write another scenario and more code, and so on?
      - If I should write my scenarios first, can my steps be approved while the production code is still not done?
      - When should I refactor my code? After the feature is done, or after each scenario implementation?

    Read the article

  • What is the philosophy/reasoning behind C#'s Pascal-casing method names?

    - by Nocturne
    I'm just starting to learn C#. Coming from a background in Java, C++ and Objective-C, I find C#'s Pascal-casing of its method names rather unique, and a tad difficult to get used to at first. What is the reasoning and philosophy behind this? I'm guessing it is because of C# properties. Unlike in Objective-C, where method names can be exactly the same as instance variables, this is not the case with C#. I would guess one of the goals with properties (as it is with most of the languages that support them) is to make properties truly indistinguishable from variables and methods. So, one can have an "int x" in C#, and the corresponding property becomes X. To ensure that properties and methods are indistinguishable, all method names, I'm guessing, are therefore also expected to start with an uppercase letter. (This is just my hypothesis based on what I know of C# so far; I'm still learning.) I'm very curious to know how this curious guideline came into being (given that it's not something one sees in most other languages, where method names are expected to start with a lowercase letter). (EDIT: By Pascal-casing, I mean PascalCase, which is basically camelCase but starting with a capital letter. Method names typically start with a lowercase letter in most languages.)
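
    For illustration, a minimal C# sketch (hypothetical class) of the convention the question describes: the field is camelCase, while the property and the methods share the same PascalCase style:

        class Counter
        {
            private int count;            // field: camelCase

            public int Count              // property: PascalCase
            {
                get { return count; }
            }

            public void Increment()       // method: PascalCase, same casing style as the property
            {
                count++;
            }

            public void Reset()
            {
                count = 0;
            }
        }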

    Read the article

  • Stuff you learned in school that you have never used again?

    - by Mercfh
    Obviously we learn plenty of things at university/college/whatever that probably don't apply to everyday use, but is there anything that stands out particularly? Maybe something that was concentrated on a lot? For me it was definitely two things: OO concepts and pointers. I still use OO, but not nearly to the extent people made it out to be; I can see where it'd be useful, but in my line of work we don't have huge numbers of classes, maybe a couple at most. And there certainly isn't much OO reuse (I finally figured out what that means, lol). Pointers are another thing; again, I can see where they'd be useful, however I barely ever touch them, nor do the others I work with. I guess language choice has a lot to do with that, but still. What about you guys? Edit: For those who are asking, I work for a large printer company, and most of the applications we work on are Java + XML and ActionScript for "printer apps". But we are moving towards other languages (think webkits and stuff), so the code amounts per part are quite small. I never said OO wasn't useful, I just said I personally haven't seen it used in my workplace much.

    Read the article

  • "Opportunity" to take over maintenance of a small internal website. What should I do?

    - by Dan
    I have been offered an "opportunity" to take over maintenance of a small internal website run by my group that provides information about schedules and photos of events the group has done. My manager sent me the link to the site and I checked it out. The site looked clean and neat but loaded in ~5 seconds. I thought this was a little long considering the site really didn't contain a lot of content. This prompted me to take a look under the hood at the page's source code. To my horror, it'd been totally hacked together using nested tables! I'm new, so I really can't say no to this "opportunity", so what should I do with it? Every fiber of my being feels that the only correct thing to do is overhaul the site using CSS, divs, spans and any other appropriate tags that a sane/good web developer would use to begin with, instead of depending on the rendering magic of tables. But I'd like to ask programmers with more experience than me, who have been in this situation: what should I do? Is my only realistic option to leave the horror as is and only adjust the content as requested? I'm really torn between good development and the corporate reality I'm part of. Is there some kind of middle ground where things can be made better even if they're not perfect? Thanks ahead of time.

    Read the article

  • Should I concentrate on writing code for money or my studies while in college?

    - by A-Cube
    I am a college student in Software Engineering. My worry is that while I am concentrating on my studies, my peers are getting down to coding (e.g. HTML, ASP, PHP, etc.) to earn money. Should I be worried that I am not coding like them? I was asked to be a Microsoft Student Partner, but I refused because the person who was doing it before me told me it was just arranging events, nothing like actually working with Microsoft and coding. Should I be writing code and earning money while I am still in my 4th semester? I only have C++ as a learning language in college. Will the projects I do count towards getting a job, or should I concentrate on my studies for now to get the maximum benefit?

    Read the article

  • Which is a better practice - helper methods as instance or static?

    - by Ilian Pinzon
    This question is subjective, but I was just curious how most programmers approach this. The sample below is in pseudo-C#, but this should apply to Java, C++, and other OOP languages as well. Anyway, when writing helper methods in my classes, I tend to declare them as static and just pass in the fields if the helper method needs them. For example, given the code below, I prefer to use Method Call #2.

        class Foo
        {
            Bar _bar;

            public void DoSomethingWithBar()
            {
                // Method Call #1.
                DoSomethingWithBarImpl();

                // Method Call #2.
                DoSomethingWithBarImpl(_bar);
            }

            private void DoSomethingWithBarImpl()
            {
                _bar.DoSomething();
            }

            private static void DoSomethingWithBarImpl(Bar bar)
            {
                bar.DoSomething();
            }
        }

    My reason for doing this is that it makes it clear (to my eyes at least) that the helper method has a possible side-effect on other objects, even without reading its implementation. I find that I can quickly grok methods that use this practice, and that helps me when debugging. Which do you prefer to do in your own code, and what are your reasons for doing so?

    Read the article
