Search Results

Search found 25377 results on 1016 pages for 'development 4 0'.


  • Is OpenTK Dead?

    - by ashes999
    Looking at OpenTK, I notice some disturbing signs: the last news item was posted on December 31st, 2010; the main forum gets about one post a day; on SourceForge, the last nightly build was in March, and the last release was in 2010. Does OpenTK exist anymore, or is it abandonware now? Edit: Some people have expressed concern at my use of "ambiguous" and "loaded terms" like "dead," "abandonware," and others. What I'm asking is this: software projects comprise many pieces: the actual software project (such as OpenTK); a group of people who maintain the software (project leads, core developers); some vehicle by which users can find and consume the latest versions (such as releasing daily builds); a community (can I ask questions about it? Get answers?); and updates (are there new features? New releases? Active development? A roadmap?). Some projects have all of these things. Most have a few. Some have nothing, other than maybe the actual software project itself. Is OpenTK one of these? Because it seems like: the actual software project is stable; the maintainers don't contribute to it anymore; there are no more latest versions (daily builds), not since 2010 (2+ years); the community is very low-traffic (nobody is asking/answering questions - who is actually using this anyway?); and there have been no updates since 2010.

    Read the article

  • Should we always prefer OpenGL ES version 2 over version 1.x

    - by Shivan Dragon
    OpenGL ES version 2 goes a long way toward changing the development paradigm that was established with OpenGL ES 1.x. You have shaders which you can chain together to apply various effects/transforms to your elements, the projection and transformation matrices work completely differently, etc. I've seen a lot of online tutorials and blogs that simply say "ditch version 1.x, use version 2, that's the way to go". Even Android's documentation says to "use version 2 as it may prove faster than 1.x". Now, I've also read a book on OpenGL ES (which was rather good, but I'm not gonna mention it here because I don't want to give the impression that I'm trying to make hidden publicity). The author treated only OpenGL ES 1.x for 80% of the book, and then at the end only listed the differences in version 2 and said something like "if OpenGL ES 1 does what you need, there's no need to switch to version 2, as it's only gonna overcomplicate your code. Version 2 was changed a lot to facilitate newer, fancier stuff, but if you don't need it, version 1.x is fine". My question is then: is the last statement right? Should I always use OpenGL ES version 1.x if I don't need version-2-only stuff? I'd sure like to do that, because I find coding in version 1.x A LOT simpler than version 2, but I'm afraid that my apps might become obsolete faster for using an older version.
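
    For illustration, a hedged fragment showing the paradigm difference the post describes, on Android. The buildProgram() helper, mvpMatrix, spriteX/spriteY and the vertex/draw calls are assumed and not shown; this is a sketch of the contrast, not a complete renderer.

        // OpenGL ES 1.x (gl is the GL10 passed to onDrawFrame): fixed-function pipeline.
        gl.glMatrixMode(GL10.GL_MODELVIEW);
        gl.glLoadIdentity();
        gl.glTranslatef(spriteX, spriteY, 0f);   // the driver applies the transform for you

        // OpenGL ES 2.0: the transform lives in a shader you write and feed yourself.
        String vertexShader =
            "uniform mat4 uMvp;\n" +
            "attribute vec4 aPosition;\n" +
            "void main() { gl_Position = uMvp * aPosition; }";
        String fragmentShader =
            "precision mediump float;\n" +
            "void main() { gl_FragColor = vec4(1.0); }";

        int program = buildProgram(vertexShader, fragmentShader);  // compile + link helper, not shown
        GLES20.glUseProgram(program);
        int uMvp = GLES20.glGetUniformLocation(program, "uMvp");
        GLES20.glUniformMatrix4fv(uMvp, 1, false, mvpMatrix, 0);
        // ...bind aPosition to a vertex buffer and draw as usual.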

    Read the article

  • Craftsmanship is ALL that Matters

    - by Wayne Molina
    Today, I'm going to talk about a touchy subject: the notion of working in a company that doesn't use the prescribed "best practices" in its software development endeavours.  Over the years I have, using a variety of pseudonyms, asked this question on popular programming forums.  Although I always add in some minor variation of the story to avoid suspicion that it's the same person posting, the crux of the tale remains the same:

    A Programmer's Tale

    A junior software developer has just started a new job at an average company, creating average line-of-business applications for internal use (the most typical scenario programmers find themselves in).  This hypothetical newbie has spent a lot of time reading up on the "theory" of software development, devouring books, blogs and screencasts from well-known and respected software developers in the community in order to broaden his knowledge and "do what the pros do".  He begins his new job, eager to apply what he's learned on a real-world project, only to discover that his new teammates don't use any of those concepts and techniques.  They hack their way through development, or in a best-case scenario use some homebrew, thrown-together semblance of a framework for their applications that follows not one of the best practices suggested by the "elite" in the software community - things like TDD (TDD as a "best practice" is the only subjective part of this post, but it's included here due to a very large following of respected developers who consider it one), the SOLID principles, well-known and venerable tools, even version control in a worst-case and truly nightmarish scenario.  Our protagonist is frustrated that he isn't doing things the "proper" way - a way he's spent personal time digesting and learning about and, more importantly, a way that some of the top developers in the industry advocate - and turns to a forum to ask the advice of his peers. Invariably the answer I, in the guise of the concerned newbie, will receive is that A) I don't know anything and should just shut my mouth and sling code the bad way like everybody else on the team, and B) these "best practices" are a fad or a joke, and the only thing that matters is shipping software to your customers. I am here today to say that anyone who says this, or anything like it, is not only full of crap but indicative of exactly the type of "developer" that has helped to give our industry a bad name.  Here is why:

    One Who Knows Nothing, Understands Nothing

    On one hand, you have the cognoscenti of the .NET development world.  Guys like James Avery, Jeremy Miller, Ayende Rahien and Rob Conery; all well-respected and noted programmers that are pretty much our version of celebrities.  These guys write blogs, books, and post videos outlining the "correct" way of writing software to make sure it not only works but is maintainable and extensible and a joy to work with.  They tout the virtues of the SOLID principles, or of using TDD/BDD, or using a mature ORM like NHibernate, Subsonic or even Entity Framework. On the other hand, you have Joe Everyman, Lead Software Developer at Initrode Corporation - in our hypothetical story Joe is the junior developer's new boss.  Joe's been with Initrode for 10 years, starting as the company's very first programmer and over the years building up a little fiefdom of his own until at the present he's in charge of all Initrode's software development.  Joe writes code the same way he always has, without bothering to learn much, if anything.
    He looked at NHibernate once and found it was "too hard", so he uses a primitive implementation of the TableDataGateway pattern as a wrapper around SqlClient.SqlConnection and SqlClient.SqlCommand instead of an actual ORM (or, in a better-case scenario, has created his own ORM); the thought of using LINQ or Entity Framework or really anything other than his own hastily homebrewed solution has never occurred to him.  He doesn't understand TDD and considers "testing" to be using the .NET debugger to step through code, or simply loading up an app and entering some values to see if it works.  He doesn't really understand SOLID, and he doesn't care to.  He's worked as a programmer for years, and that's all that counts.  Right?  WRONG. Who would you rather trust?  Someone with years of experience and who writes books, creates well-known software and is akin to a celebrity, or someone with no credibility outside their own minute environment who throws around their clout and company seniority as the "proof" of their ability?  Joe Everyman may have years of experience at Initrode as a programmer, and says to do things "his way", but someone like Jeremy Miller or Ayende Rahien has years of experience at companies just like Initrode, THEY know ten times more than Joe Everyman knows or could ever hope to know, and THEY say to do things "this way". Here's another way of thinking about it: If you wanted to get into politics and needed advice on the best way to do it, would you rather listen to the mayor of Hicktown, USA or Barack Obama?  One is a small-time nobody while the other is very well-known and, as such, would probably have much more accurate and beneficial advice. NOTE: The selection of Barack Obama as an example in no way, shape, or form suggests a political affiliation or political bent to this post or blog, and no political innuendo should be mistakenly read from it; the intent was merely to compare a small-time persona with a well-known persona in a non-software field.  Feel free to replace the name "Barack Obama" with any well-known Congressman, Senator or US President of your choice.

    DIY Considered Harmful

    I will say right now that the homebrew development environment is the WORST one for an aspiring programmer, because it relies on nothing outside its own little box - no useful skill outside of the small pond.  If you are forced to use some half-baked, homebrew ORM created by your Director of Software, you are not learning anything valuable you can take with you in the future; now, if you plan to stay at Initrode for 10 years like Joe Everyman, this is fine and dandy.  However if, like most of us, you want to advance your career outside a very narrow space, you will do more harm than good by sticking it out in an environment where you, to be frank, know better than everybody else because you are aware of alternative and, in almost all cases, better tools for the job.  A junior developer who understands why the SOLID principles are good to follow, or why TDD is beneficial, or who knows that it's better to use NHibernate/Subsonic/EF/LINQ/a well-known ORM versus some in-house one knows better than a senior developer with 20 years' experience who doesn't understand any of that, plain and simple.  Anyone who disagrees is either a liar, or someone who, just like Joe Everyman, Lead Developer, relies on seniority and tenure rather than adapting their knowledge as things evolve.
    In many cases, the Joe Everymans of the world act this way out of fear - they cannot possibly fathom that a "junior" could know more than them; after all, they've spent 10 or more years in the same company, doing the same job, cranking out the same shoddy software.  And here comes a newbie who hasn't spent 10+ years doing the same things, with a fresh and often radical take on the craft, and Joe Everyman is afraid he might have to put some real effort into his career again instead of just pointing to his 10 years of service at Initrode as "proof" that he's good, or that he might have to learn something new to improve; in most cases the problem is that Joe Everyman, and by extension Initrode itself, has a mentality of just being "good enough", and mediocrity is the rule of the day.

    A Thorn Bush is No Place for a Phoenix

    My advice is that if you work on a team where they don't use the best practices that some of the most famous developers in our field say are the "right" way to do things (and have legions of people who agree), and YOU are aware of these practices and can see why they work, then LEAVE the company.  Find a company where they DO care about quality and craftsmanship, otherwise you will never be happy.  There is no point in "dumbing" yourself down to the level of your co-workers and slinging code without care for craftsmanship.  In 95% of these situations there will be no point in bringing it to the attention of Joe Everyman because he won't listen; he might even get upset that someone is trying to "upstage" him, fire the newbie, and replace someone with loads of untapped potential with a drone that will just nod affirmatively and grind out the tasks assigned without question. Find a company that has people smart enough to listen to the "best and brightest", and be happy.  Do not, I repeat, DO NOT waste away in a job working for ignorant people.  At the end of the day software development IS a craft, and a level of craftsmanship is REQUIRED for any serious professional.  When you have knowledgeable people with the credibility to back it up saying one thing, and small-time people who are, to put it bluntly, nobodies in the field saying and doing something totally different because they can't comprehend it, leave the nobodies to their own devices to fade into obscurity.  Work for a company that uses REAL software engineering techniques and really cares about craftsmanship.  The biggest issue affecting our career, and the reason software development has never been the respected, white-collar career it was meant to be, is that hacks and charlatans can pass themselves off as professional programmers without following a lick of good advice from programmers much better at the craft than they are.  These modern-day snake-oil salesmen entrench themselves in companies by hoodwinking non-technical businesspeople and customers with their shoddy wares, end up in senior/lead/executive positions, and push their lack of knowledge on everybody unfortunate enough to work with/for/under them, crushing any dissent or voices of reason and change under their tyrannical heel and leaving behind a trail of dismayed and, often, unemployed junior developers who were made examples of to keep up the facade and avoid the shadow of doubt being cast upon them. To sum this up another way: If you surround yourself with learned people, you will learn.  Surround yourself with ignorant people who can't, as the saying goes, see the forest for the trees, and you'll learn nothing of any real value.
    There is more to software development than just writing code, and the end goal should not be just "shipping software"; it should be shipping software that is extensible, maintainable, and above all else software whose creation has broadened your knowledge in some capacity, even if a minor one.  An eager newbie who knows theory and thirsts for knowledge can easily be moulded and taught the advanced topics, but the same can't be said of someone who only cares about the finish line.  This industry needs more people espousing the benefits of software craftsmanship and proper software engineering techniques, and fewer Joe Everymans who are unwilling to adapt or foster new ways of thinking.

    Conclusion - I Cast "Protection from Fire"

    I am fairly certain this post will spark some controversy and might even invite the flames.  Please keep in mind these are opinions and nothing more.  A little healthy rant and subsequent flamewar can be good for the soul once in a while.  To paraphrase The Godfather: it helps to get rid of the bad blood.

    Read the article

  • Windows 8 Will be Here Tomorrow; but Should Silverlight be Gone Today?

    - by andrewbrust
    The software industry lives within an interesting paradox. IT in the enterprise moves slowly and cautiously, upgrading only when safe and necessary.  IT interests intentionally live in the past.  On the other hand, developers, and Independent Software Vendors (ISVs) not only want to use the latest and greatest technologies, but this constituency prides itself on gauging tech’s future, and basing its present-day strategy upon it.  Normally, we as an industry manage this paradox with a shrug of the shoulder and musings along the lines of “it takes all kinds.”  Different subcultures have different tendencies.  So be it. Microsoft, with its Windows operating system (OS), can’t take such a laissez-faire view of the world though.  Redmond relies on IT to deploy Windows and (at the very least) influence its procurement, but it also relies on developers to build software for Windows, especially software that has a dependency on features in new versions of the OS.  It must indulge and nourish developers’ fetish for an early birthing of the next generation of software, even as it acknowledges the IT reality that the next wave will arrive on-schedule in Redmond and will travel very slowly to end users. With the move to Windows 8, and the corresponding shift in application development models, this paradox is certainly in place. On the one hand, the next version of Windows is widely expected sometime in 2012, and its full-scale deployment will likely push into 2014 or even later.  Meanwhile, there’s a technology that runs on today’s Windows 7, will continue to run in the desktop mode of Windows 8 (the next version’s codename), and provides absolutely the best architectural bridge to the Windows 8 Metro-style application development stack.  That technology is Silverlight.  And given what we now know about Windows 8, one might think, as I do, that Microsoft ecosystem developers should be flocking to it. But because developers are trying to get a jump on the future, and since many of them believe the impending v5.0 release of Silverlight will be the technology’s last, not everyone is flocking to it; in fact some are fleeing from it.  Is this sensible?  Is it not unprecedented?  What options does it lead to?  What’s the right way to think about the situation? Is v5.0 really the last major version of the technology called Silverlight?  We don’t know.  But Scott Guthrie, the “father” and champion of the technology, left the Developer Division of Microsoft months ago to work on the Windows Azure team, and he took his people with him.  John Papa, who was a very influential Redmond-based evangelist for Silverlight (and is a Visual Studio Magazine author), left Microsoft completely.  About a year ago, when initial suspicion of Silverlight’s demise reached significant magnitude, Papa interviewed Guthrie on video and their discussion served to dispel developers’ fears; but now they’ve moved on. So read into that what you will and let’s suppose, for the sake of argument, speculation that Silverlight’s days of major revision and iteration are over now is correct.  Let’s assume the shine and glimmer has dimmed.  Let’s assume that any Silverlight application written today, and that therefore any investment of financial and human resources made in Silverlight development today, is destined for rework and extra investment in a few years, if the application’s platform needs to stay current. Is this really so different from any technology investment we make?  
Every framework, language, runtime and operating system is subject to change, to improvement, to flux and, yes, to obsolescence.  What differs from project to project, is how near-term that obsolescence is and how disruptive the change will be.  The shift from .NET 1.1. to 2.0 was incremental.  Some of the further changes were too.  But the switch from Windows Forms to WPF was major, and the change from ASP.NET Web Services (asmx) to Windows Communication Foundation (WCF) was downright fundamental. Meanwhile, the transition to the .NET development model for Windows 8 Metro-style applications is actually quite gentle.  The finer points of this subject are covered nicely in Magenic’s excellent white paper “Assessing the Windows 8 Development Platform.” As the authors of that paper (including Rocky Lhotka)  point out, Silverlight code won’t just “port” to Windows 8.  And, no, Silverlight user interfaces won’t either; Metro always supports XAML, but that relationship is not commutative.  But the concepts, the syntax, the architecture and developers’ skills map from Silverlight to Windows 8 Metro and the Windows Runtime (WinRT) very nicely.  That’s not a coincidence.  It’s not an accident.  This is a protected transition.  It’s not a slap in the face. There are few things that are unnerving about this transition, which make it seem markedly different from others: The assumed end of the road for Silverlight is something many think they can see.  Instead of being ignorant of the technology’s expiration date, we believe we know it.  If ignorance is bliss, it would seem our situation lacks it. The new technology involving WinRT and Metro involves a name change from Silverlight. .NET, which underlies both Silverlight and the XAML approach to WinRT development, has just about reached 10 years of age.  That’s equivalent to 80 in human years, or so many fear. My take is that the combination of these three factors has contributed to what for many is a psychologically compelling case that Silverlight should be abandoned today and HTML 5 (the agnostic kind, not the Windows RT variety) should be embraced in its stead.  I understand the logic behind that.  I appreciate the preemptive, proactive, vigilant conscientiousness involved in its calculus.  But for a great many scenarios, I don’t agree with it.  HTML 5 clients, no matter how impressive their interactivity and the emulation of native application interfaces they present may be, are still second-class clients.  They are getting better, especially when hardware acceleration and fast processors are involved.  But they still lag.  They still feel like they’re emulating something, like they’re prototypes, like they’re not comfortable in their own skins.  They are based on compromise, and they feel compromised too. HTML 5/JavaScript development tools are getting better, and will get better still, but they are not as productive as tools for other environments, like Flash, like Silverlight or even more primitive tooling for iOS or Android.  HTML’s roots as a document markup language, rather than an application interface, create a disconnect that impedes productivity.  I do not necessarily think that problem is insurmountable, but it’s here today. If you’re building line-of-business applications, you need a first-class client and you need productivity.  Lack of productivity increases your costs and worsens your backlog.  A second class client will erode user satisfaction, which is never good.  
Worse yet, this erosion will be inconspicuous, rather than easily identified and diagnosed, because the inferiority of an HTML 5 client over a native one is hard to identify and, notably, doing so at this juncture in the industry is unpopular.  Why would you fault a technology that everyone believes is revolutionary?  Instead, user disenchantment will remain latent and yet will add to the malaise caused by slower development. If you’re an ISV and you’re coveting the reach of running multi-platform, it’s a different story.  You’ve likely wanted to move to HTML 5 already, and the uncertainty around Silverlight may be the only remaining momentum or pretext you need to make the shift.  You’re deploying many more copies of your application than a line-of-business developer is anyway; this makes the economic hit from lower productivity less impactful, and the wider potential installed base might even make it profitable. But no matter who you are, it’s important to take stock of the situation and do it accurately.  Continued, but merely incremental changes in a development model lead to conservatism and general lack of innovation in the underlying platform.  Periods of stability and equilibrium are necessary, but permanence in that equilibrium leads to loss of platform relevance, market share and utility.  Arguably, that’s already happened to Windows.  The change Windows 8 brings is necessary and overdue.  The marked changes in using .NET if we’re to build applications for the new OS are inevitable.  We will ultimately benefit from the change, and what we can reasonably hope for in the interim is a migration path for our code and skills that is navigable, logical and conceptually comfortable. That path takes us to a place called WinRT, rather than a place called Silverlight.  But considering everything that is changing for the good, the number of disruptive changes is impressively minimal.  The name may be changing, and there may even be some significance to that in terms of Microsoft’s internal management of products and technologies.  But as the consumer, you should care about the ingredients, not the name.  Turkish coffee and Greek coffee are much the same. Although you’ll find plenty of interested parties who will find the names significant, drinkers of the beverage should enjoy either one.  It’s all coffee, it’s all sweet, and you can tell your fortune from the grounds that are left at the end.  Back on the software side, it’s all XAML, and C# or VB .NET, and you can make your fortune from the product that comes out at the end.  Coffee drinkers wouldn’t switch to tea.  Why should XAML developers switch to HTML?

    Read the article

  • Has "Killer Game Programming in Java" by Andrew Davison been rendered useless? What new tutorial should I follow? [duplicate]

    - by BDillan
    This question already has an answer here: Killer Game Programming [Java 3D] outdated? (5 answers) It's pathetic how it was created in 2005, not so long ago, when the Java 3D 1.31 API was released. Now, after downloading the source code off the book's website and bringing it into my workspace, it can't run (The 3D Shooter, Ch. 23). I downloaded the Java 3D 1.5 API, installed it, went back to my source code and javax.media etc., and EVERY imported class within the game simply couldn't be resolved... they are all outdated. Please do not say how it's meant for a positive curvature in your game development skills. No ~ I got the book to understand how to code 3D textures/particle systems/games. Without seeing results, how can I learn? If there is still hope for this book, please tell me. Otherwise, what other books match this one in game programming? This one was very well organized, documented and long. I am not looking for just a 3D game programming book in Java, rather one that has the same reputation as this one but is a bit newer.

    Read the article

  • What needs to be done to port a Windows Flash game to Android?

    - by Russell Yan
    My team has developed a game using Flash (with Lua embedded) that targets Windows, and we want to port that game to Android (and also iOS in the future). It's acceptable to rewrite the Flash part of the client, and another team in our company recommended Unity3D to replace the Flash part. Since we have no experience in Android/iOS development, we would need to learn some new tool/language anyway, so we would like that learning to still be useful after the porting and when we start a new game. Any thoughts? Edit: I think it is worth noting the background of the game: the project was started by a tester and a developer, both without knowing much about Flex and ActionScript. They built the game while learning, so most of the code is hard to maintain. Two other developers and I joined the project after a year or so, when one of them had left (and the other had become our manager). The other two developers are just graduates with little experience and little knowledge of Flex. I am good at the server part and the C# language. Given that the code is hard to maintain (and we do need to change a lot of code to make the game easier to play on a mobile device), and that I am good at C# (and learning), I still lean toward doing the porting with Unity, which could give better performance and possibly save time.

    Read the article

  • PHP Browser Game Question - Pretty General Language Suitability and Approach Question

    - by JimBadger
    I'm developing a browser game, using PHP, but I'm unsure if the way I'm going about doing it is to be encouraged anymore. It's basically one of those MMOs where you level up various buildings and what have you, but, you then commit some abstract fighting entity that the game gives you, to an automated battle with another player (producing a textual, but hopefully amusing and varied combat report). Basically, as soon as two players agree to fight, PHP functions on the "fight.php" page run queries against a huge MySQL database, looking up all sorts of complicated fight moves and outcomes. There are about three hundred thousand combinations of combat stance, attack, move and defensive stances, so obviously this is quite a resource hungry process, and, on the super cheapo hosted server I'm using for development, it rapidly runs out of memory. The PHP script for the fight logic currently has about a thousand lines of code in it, and I'd say it's about half-finished as I try to add a bit of AI into the fight script. Is there a better way to do something this massive than simply having some functions in a PHP file calling the MySQL Database? I taught myself a modicum of PHP a while ago, and most of the stuff I read online (ages ago) about similar games was all PHP-based. but a) am I right to be using PHP at all, and b) am I missing some clever way of doing things that will somehow reduce server resource requirements? I'd consider non PHP alternatives but, if PHP is suitable, I'd rather stick to that, so there's no overhead of learning something new. I think I'd bite that bullet if it's the best option for a better game, though.

    Read the article

  • How to deal with OpenGL and Fullscreen on OS X

    - by Armin Ronacher
    I do most of my development on OS X and for my current game project this is my target environment. However, when I play games I play on Windows. As a Windows gamer I am used to Alt+Tab switching from within the game to the last application that was open. On OS X I currently can't find a game that supports that, nor can I find a way to make it possible. My current project is based on SDL 1.3 and I can see that Cmd+Tab is a sequence that is sent directly to my application and not intercepted by the operating system. Now my first attempt was to hide the rendering window on Cmd+Tab, which certainly works, but has the disadvantage that a hidden OpenGL window in SDL cannot be restored when the user tabs back to the application. First of all, there is no event fired for that (or I can't find it); secondly, the core problem is that when that application window is hidden, my game is still the active application, just that the window has disappeared. That is incredibly annoying. Any ideas how to approximate the Windows/Linux behavior for Alt+Tab?

    Read the article

  • Handling various frame layouts in Android

    - by vaibhav
    I'm new to game development and am trying to create a game like Contra or the old TMNT game (but a simpler one) for Android. For the game I decided to divide my main screen into three parts - upper for stats, mid for the game and lower for controls. My main.xml is:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:orientation="vertical" >
            <FrameLayout
                android:id="@+id/upper_bar"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:layout_weight="1" >
            </FrameLayout>
            <FrameLayout
                android:id="@+id/fl"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:layout_weight="0.5" >
            </FrameLayout>
            <FrameLayout
                android:id="@+id/low_bar"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:layout_weight="0.85" >
            </FrameLayout>
        </LinearLayout>

    So I have created the GameView and GameLoopThread classes for the mid surface (which is pretty standard). My problem is: how do I draw in the upper and lower frame layouts? Should I make new view and thread classes for each layout, should I do all this in the GameView class itself, or is there a better way to implement this?
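
    For illustration, a hedged sketch of one common arrangement, assuming it runs in the Activity's onCreate() after setContentView(R.layout.main). StatsView and ControlsView are hypothetical class names: only the game area needs a SurfaceView driven by the loop thread, while the two bars can be ordinary custom Views that redraw on demand.

        FrameLayout upperBar = (FrameLayout) findViewById(R.id.upper_bar);
        FrameLayout gameArea = (FrameLayout) findViewById(R.id.fl);
        FrameLayout lowBar   = (FrameLayout) findViewById(R.id.low_bar);

        upperBar.addView(new StatsView(this));    // plain View: override onDraw(), call invalidate() when stats change
        gameArea.addView(new GameView(this));     // the SurfaceView updated by GameLoopThread
        lowBar.addView(new ControlsView(this));   // plain View: handle onTouchEvent() for the controls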

    Read the article

  • Other games that employ mechanics like the game "Diplomacy"

    - by Kevin Peno
    I'm doing a little bit of research and I'm hoping you can help me track down any games, other than Diplomacy (online version here), that employ all or some of the mechanics in Diplomacy (rules, short form). Examples I'm looking for: (1) simultaneous orders given prior to execution of orders - in Diplomacy, players "write down" their moves and execute them "at the same time"; (2) support, in terms of supporting an attacker or defender to "take" a territory - in Diplomacy, no one unit is stronger than another; you need to combine the strength of multiple units to attack other territories; (3) rules for how move conflicts are resolved - for example, two units move into a space but only one is allowed, so what happens? I may add to this list later, but these are the primary things I'm looking for. If you need clarification on anything just let me know. Note: I tried asking this on GamingSE, but it was shot down, so I am unsure where else I could post this. Since I am researching this for game development purposes, I assume this post is on topic. Please let me know if this is not the case. Please also feel free to re-categorize this. Thanks!

    Read the article

  • XNA and C# vs the 360's in-order processor

    - by Richard Fabian
    Having read this rant, I feel that they're probably right (seeing as the Xenon and the CELL BE are both highly sensitive to branchy and cache-ignorant code), but I've heard that people are making games fine with C# too. So, is it true that it's impossible to do anything professional with XNA, or was the poster just missing what makes XNA the killer API for game development? By professional, I don't mean in the sense that you can make money from it, but more in the sense that the more professional games have physics models and animation systems that seem outside the reach of a language with such definite barriers to the intrinsics. If I wanted to write a collision system or fluid dynamics engine for my game, I don't feel like C# offers me the chance to do that, as the runtime code generator and the lack of intrinsics get in the way of performance. However, many people seem to be fine working within these constraints, making their successful games but seemingly avoiding any of the problems by omission. I've not noticed any XNA games do anything complex other than what's already provided by the libraries, or handled by shaders. Is this avoidance of the more complex game dynamics because of the limitations of C#, or just people concentrating on getting it done? In all honesty, I can't believe that AI War can maintain that many units of AI without some data-oriented approach, or some dirty C# hacks to make it run better than the standard approach, and I guess that's partly my question too: how have people hacked C# so it's able to do what games coders do naturally with C++?
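
    For illustration only (the post mentions a "data-oriented approach" without spelling it out), a minimal sketch of the idea in Java: keep hot fields in flat, contiguous arrays instead of one object per unit, so the update loop streams through memory predictably. The field names and counts are made up.

        // Object-per-unit layout: each Unit lives somewhere on the heap, so a pass
        // over 10,000 of them chases pointers and tends to miss the cache.
        class Unit {
            float x, y, vx, vy;
        }

        // Structure-of-arrays layout: each field is one contiguous array, so the
        // integration loop below touches memory sequentially.
        class Units {
            final int count;
            final float[] x, y, vx, vy;

            Units(int count) {
                this.count = count;
                x = new float[count]; y = new float[count];
                vx = new float[count]; vy = new float[count];
            }

            void integrate(float dt) {
                for (int i = 0; i < count; i++) {
                    x[i] += vx[i] * dt;
                    y[i] += vy[i] * dt;
                }
            }
        }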

    Read the article

  • Getting into game/game engine programming

    - by Darkslash
    So I am interested in learning game programming, but I really have an interest in the lower-level engineering in games. I have OpenGL experience, and I am really interested in learning more about implementing AI, physics, etc. I have a computer science degree, so I really like getting into technical stuff. Many times when I ask about this sort of thing, I get a lot of "use an engine", "use Unity3D", "why waste your time writing code that already exists", etc. My idea was to use simpler libraries such as SFML or XNA so that I could learn how to implement the more complex systems. The thing is, although I do want to write games, I want to learn things that using something like Unity simply doesn't teach you. My goal is not to make a current-generation-quality 3D game to sell; I just want to make some cool smaller games and learn all I can about the programming side of game development. Is this something that people just do not do anymore? It seems like everywhere I turn people are using Unity or UDK or GameMaker. I fully understand why you would use tools like these, but I can't see how they would suit my purposes. So where does someone like myself turn? Am I trying to learn something that people just do not bother doing anymore? Is the innovation in this area gone, and is it all about gameplay now? I'm sorry if this question seems silly, but I am genuinely interested in knowing more about this and meeting more people who are interested in this sort of thing.

    Read the article

  • Change from Bullet/OgreAnim to Havok?

    - by Brett Powell
    I have been working on a game in Ogre for the last 6 months or so. It started as a learning project, and after a few rewrites it actually turned into a real game project. Physics scared me, and using Bullet with its lack of documentation was a nightmare, but I was able to at least get some basics added and learned a lot. So as of now I am using Ogre with its default animation system (fairly basic) and Bullet Physics. I had always wanted to use Havok when I started out, but due to lack of integration information on the Ogre forums, and lack of tutorials on the net, I decided against it. Now that I am actually at the point where Bullet is just too much of a headache to proceed with (staring at forum threads praying someone answers) and the Ogre animation system is so basic, I am considering switching to Havok for physics and animation. The physics system looks extremely polished and easy to use. The animation system looks incredible with the retargeting/blending/etc. The documentation is incredibly detailed as well (I guess when you come from Bullet, any documentation looks amazing). So my question is, as I am still somewhat of a 'newbie' to game development, should I just stick with what I am using now or should I make the switch over to Havok? The physics looks like I could get my project back to where it is now with minimal effort, and be able to expand much faster. The animation aspect looks extremely daunting as far as integrating it with Ogre.

    Read the article

  • How do I create a 2.5d parallax effect?

    - by Nikolay Dyankov
    I have a decent background in 3D graphics and programming, but I'm new to game development. I'm currently exploring different possibilities and I really want to make an RPG game. I was thinking about classic 2D isometric view, but I really love how Diablo 2 looks and feels to play. My question is - how can I achieve Diablo 2's parallax effect? Everything looks hand drawn with baked lights and shadows and looks awesome, but when you move around you notice some perspective. For example, let's say that I drew a big hall with columns in Photoshop with an orthographic perspective (classic pixel art style, just parallel lines). How would I give parallax effect to this scene when the character moves around? If I use camera-facing sprites for everything it would probably look OK in the distance, but it would be really fake when a character comes close to a column (cylinder) for example. Any suggestions? How did Blizzard make the parallax effect in Diablo 2? See this screenshot: http://guidesmedia.ign.com/guides/10629/images/act2tombs.jpg
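
    For illustration, a hedged sketch of the layered approach usually used for this kind of 2.5D look (the factor values are made up, and whether Diablo 2 did exactly this is not something the post establishes): each pre-rendered layer scrolls by a fraction of the camera's movement, so distant layers drift less than near ones.

        import java.awt.Graphics2D;
        import java.awt.Image;
        import java.util.List;

        class ParallaxLayer {
            Image image;     // pre-rendered art for this depth slice
            double factor;   // 0.0 = pinned to the horizon, 1.0 = moves with the ground plane
            ParallaxLayer(Image image, double factor) { this.image = image; this.factor = factor; }
        }

        class ParallaxRenderer {
            void render(Graphics2D g, double cameraX, double cameraY, List<ParallaxLayer> layers) {
                for (ParallaxLayer layer : layers) {            // draw far-to-near
                    int x = (int) Math.round(-cameraX * layer.factor);
                    int y = (int) Math.round(-cameraY * layer.factor);
                    g.drawImage(layer.image, x, y, null);
                }
            }
        }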

    Read the article

  • 'Binary XML' for game data?

    - by bluescrn
    I'm working on a level editing tool that saves its data as XML. This is ideal during development, as it's painless to make small changes to the data format, and it works nicely with tree-like data. The downside, though, is that the XML files are rather bloated, mostly due to duplication of tag and attribute names. Also due to numeric data taking significantly more space than using native datatypes. A small level could easily end up as 1Mb+. I want to get these sizes down significantly, especially if the system is to be used for a game on the iPhone or other devices with relatively limited memory. The optimal solution, for memory and performance, would be to convert the XML to a binary level format. But I don't want to do this. I want to keep the format fairly flexible. XML makes it very easy to add new attributes to objects, and give them a default value if an old version of the data is loaded. So I want to keep with the hierarchy of nodes, with attributes as name-value pairs. But I need to store this in a more compact format - to remove the massive duplication of tag/attribute names. Maybe also to give attributes native types, so, for example floating-point data is stored as 4 bytes per float, not as a text string. Google/Wikipedia reveal that 'binary XML' is hardly a new problem - it's been solved a number of times already. Has anyone here got experience with any of the existing systems/standards? - are any ideal for games use - with a free, lightweight and cross-platform parser/loader library (C/C++) available? Or should I reinvent this wheel myself? Or am I better off forgetting the ideal, and just compressing my raw .xml data (it should pack well with zip-like compression), and just taking the memory/performance hit on-load?
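
    For illustration, a hedged sketch (in Java, not any existing standard) of the usual trick behind compact "binary XML" formats: intern tag and attribute names once into a string table and write nodes as indices plus natively typed values. Attribute values are assumed to be floats here just to keep it short; a real format would also need a type tag per value and would write the string table itself as a header or footer.

        import java.io.DataOutputStream;
        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        class BinaryNodeWriter {
            private final List<String> stringTable = new ArrayList<String>();
            private final Map<String, Integer> stringIndex = new HashMap<String, Integer>();

            private int intern(String s) {
                Integer index = stringIndex.get(s);
                if (index == null) {
                    index = stringTable.size();
                    stringTable.add(s);
                    stringIndex.put(s, index);
                }
                return index;
            }

            // Illustrative node layout: nameIndex, attrCount, [attrNameIndex, float]*, childCount, children...
            void writeNode(DataOutputStream out, Node node) throws IOException {
                out.writeShort(intern(node.name));
                out.writeShort(node.attributes.size());
                for (Map.Entry<String, Float> attr : node.attributes.entrySet()) {
                    out.writeShort(intern(attr.getKey()));
                    out.writeFloat(attr.getValue());    // 4 bytes instead of a decimal string
                }
                out.writeShort(node.children.size());
                for (Node child : node.children) {
                    writeNode(out, child);
                }
            }

            static class Node {
                String name;
                Map<String, Float> attributes = new LinkedHashMap<String, Float>();
                List<Node> children = new ArrayList<Node>();
            }
        }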

    Read the article

  • 2D map/plane with nodes overlaid that supports panning, scaling and clicking on nodes

    - by garlicman
    I'm trying my hand at Android development and seem to be running into an invisible ceiling in trying to get what I want accomplished. Basically I'm trying to create an app that renders a 2D surface map that I can (pinch) zoom and pan. I'll have to place nodes on the surface of the map that will scale/zoom and pan in relation to the surface. I started out with a 2D ImageView approach and got as far as pinch zoom, pan and laying nodes as relative ImageViews, but all the methods I tried to get X,Y,W,H for the 2D surface were always off for some reason. Additionally, I was never able to scale the node ImageViews correctly, and as a result never got far enough to try and work out their X,Y scaled offset. So I decided to get back to 3D rendering. Conceptually pan/zoom is camera manipulation, so I don't have to mess with how to scale the 2D map or the nodes. But I need a starting point or sample to get me going that's close to what I'm trying to achieve. A sample on a translucent spinning cube isn't helping as much as I need it to. Any tips? Links, insults and sympathy are all welcome!
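
    For illustration, a hedged sketch of the world-to-screen bookkeeping this usually boils down to, whether the map ends up drawn with views or OpenGL: keep nodes in map coordinates and apply one pan/zoom transform when drawing and when hit-testing taps. The names and the zoom-about-focal-point math are just one way to do it.

        class MapCamera {
            float panX, panY;    // map-space coordinate shown at the top-left of the screen
            float scale = 1f;    // pixels per map unit (pinch zoom adjusts this)

            float toScreenX(float mapX) { return (mapX - panX) * scale; }
            float toScreenY(float mapY) { return (mapY - panY) * scale; }

            float toMapX(float screenX) { return screenX / scale + panX; }
            float toMapY(float screenY) { return screenY / scale + panY; }

            // Zoom about a focal point (e.g. the midpoint of a pinch) so it stays put on screen.
            void zoom(float factor, float focusScreenX, float focusScreenY) {
                float mapFx = toMapX(focusScreenX);
                float mapFy = toMapY(focusScreenY);
                scale *= factor;
                panX = mapFx - focusScreenX / scale;
                panY = mapFy - focusScreenY / scale;
            }
        }

    Nodes stored in map coordinates then scale and pan for free: draw each one at (toScreenX(node.x), toScreenY(node.y)), and convert a tap back with toMapX/toMapY to find the nearest node.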

    Read the article

  • Camera field of view: 3D projections & trigonometry

    - by Thomas O
    Okay, here goes. I have a camera at (Xc, Yc, Zc.) The Xc and Yc coordinates are latitude/longitude, and the Zc coordinate is an altitude in metres. I have a point at (Xp, Yp, Zp) and a field of view on the camera (Th1, Th2) - where Th1 is horizontal FOV and Th2 is vertical FOV. Given this information, I'd like to: test if the point is visible (i.e. in the camera's FOV) project the point as the camera would see it I've figured out already that the camera's horizontal view at any given distance is tan(Th1) * distance, but I don't know how to test if the point is visible. Accuracy is not critical. I would prefer a simple solution over a complicated solution, if it works well enough. The computations will be performed by a small microcontroller, which isn't very fast at things like trig functions. P.S. this is not homework, I'm doing this for some game development. It will be integrated with the real world, hence the latitude/longitude/altitude. It involves flying real RC planes through virtual hoops (or chasing virtual targets), so I have to project the positions of these hoops on a display.
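
    For illustration, a hedged sketch of the visibility test, assuming the camera also has a known heading (yaw) and pitch in radians, and that the lat/long positions have already been converted to metres in a local flat grid (ignoring Earth curvature over RC-plane distances). It compares against half of each field of view, i.e. Th/2 either side of centre.

        class FovTest {
            static boolean isVisible(double camX, double camY, double camZ,
                                     double camYaw, double camPitch,   // radians
                                     double hFov, double vFov,         // full angles, radians
                                     double px, double py, double pz) {
                double dx = px - camX, dy = py - camY, dz = pz - camZ;
                double yawToPoint = Math.atan2(dy, dx);
                double pitchToPoint = Math.atan2(dz, Math.hypot(dx, dy));
                double yawDiff = normalize(yawToPoint - camYaw);
                double pitchDiff = pitchToPoint - camPitch;
                // Rough projection once visible:
                //   screenX = screenWidth  * (0.5 + yawDiff   / hFov)
                //   screenY = screenHeight * (0.5 - pitchDiff / vFov)
                return Math.abs(yawDiff) <= hFov / 2 && Math.abs(pitchDiff) <= vFov / 2;
            }

            // Wrap an angle into (-pi, pi] so "just left of centre" doesn't look like a full turn away.
            static double normalize(double a) {
                while (a > Math.PI)   a -= 2 * Math.PI;
                while (a <= -Math.PI) a += 2 * Math.PI;
                return a;
            }
        }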

    Read the article

  • Drawing particles as a smooth blob

    - by Nömmik
    I'm new to game/graphics development and I'm playing around with particles (in 2D). I want to draw particles close to each other as a blob, just as liquid/water. I do not want to draw big circles overlapping as the blob won't be smooth (and too big). I don't really know physics but I assume what I want is something looking similar to surface tension. I haven't been able to find anything on stackexchange or on Google (maybe I do not know the correct keywords?). So far I have found two possible solutions, but I am unable to find any concrete information about algorithms. One of them is to calculate the concave hull of particles I consider being a blob. I can calculate the blob by creating an equivalence class (on the relation "close to each other"). Strangely enough I haven't been able to find any algorithm explaining how to calculate the concave hull. Many posts (and among stackexchange) links to libraries or commercial products that do this (I need libraries to work in C#), but never any algorithm. Also this solution might have a problem with a circle of particles, which would not detect the empty space in the middle. While researching concave hull I stumbled upon something called alpha shapes. Which seems to be exactly what I want to do, however just as with concave hull I haven't found any source explaining how they actually work. I have found some presentation materials but not enough to go on. It's like a big secret everyone knows except me :-/ After calculating the concave hull or alpha shape I want to make it a Bézier curve to make it smooth and nice. Although I do find my approach a bit too complex, maybe I am trying to solve this the wrong way? If you can either suggest any other solution to my problem, or explain the pieces I am missing I would be very happy and grateful :-) Thanks.
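
    For illustration, a hedged sketch of the first step the post describes - building the "close to each other" equivalence classes - using union-find; the concave hull / alpha shape / Bezier-fitting step on each resulting group is not shown. The O(n²) pair test is fine for small particle counts; a spatial grid would replace it for larger ones.

        class ParticleClusters {
            int[] parent;

            int find(int i) { return parent[i] == i ? i : (parent[i] = find(parent[i])); }
            void union(int a, int b) { parent[find(a)] = find(b); }

            // particles[i] = {x, y}; two particles count as "close" within radius r.
            int[] cluster(float[][] particles, float r) {
                int n = particles.length;
                parent = new int[n];
                for (int i = 0; i < n; i++) parent[i] = i;
                float r2 = r * r;
                for (int i = 0; i < n; i++) {
                    for (int j = i + 1; j < n; j++) {
                        float dx = particles[i][0] - particles[j][0];
                        float dy = particles[i][1] - particles[j][1];
                        if (dx * dx + dy * dy <= r2) union(i, j);
                    }
                }
                int[] blobId = new int[n];
                for (int i = 0; i < n; i++) blobId[i] = find(i);
                return blobId;   // particles sharing an id belong to the same blob
            }
        }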

    Read the article

  • How to check Early Z efficiency on AMD GPU with Windows 7

    - by Suma
    I have a game using DirectX 9, and a development station using Win 7 x64. I am still able to get access to another station with Vista x64 / dual booted with WinXP x86. I wanted to check early Z efficiency in the game and to my sadness all tools I have tried seem to be unable to perform this task: AMD PerfStudio AMD GPUPerfStudio 2 does not support DirectX 9 at all AMD GPUPerfStudio 1.2 does not install correctly on Windows 7. When I have tweaked the MSI package (a simple OS version check adjustment was needed), it complained the drivers I have do not provide needed instrumentation. The drivers old enough to support the GPUPerfStudio would most likely not be able to operate with my Radeon 5750 card (though this is something I am not 100 % sure, I did not attempt to try any older drivers, not knowing which I should look for) PIX PIX does not seem to contain any counters like this. It offers some ATI specific counters, but when I try to activate them, the PIX reports "PIX encountered a problem while attaching to the target program." I do not want to upgrade to DX 10/11 just to be able to profile the game, but it seems without the step I am somewhat locked with a toolset which is no longer supported. I see only one obvious options which would probably work, and that is using WinXP (or with a little bit of luck even Vista) station, perhaps with some older AMD card, to make sure GPUPerfStudio 1.2 works. Other than that, can anyone recommend other options how to check GPU HW counters (HiZ / EarlyZ in particular, but if others would be enabled as well, it would be a nice bonus) for a DirectX 9 game on Windows 7, preferably on AMD GPU? (If that is not possible, I would definitely prefer switching GPU to switching the OS, but before I do so I would like to know if I will not hit the same problem with nVidia again)

    Read the article

  • Does concurrency inherently introduce "randomness" into a game?

    - by Jeff
    When a game is implemented with concurrency (as most games are), does this necessarily, by its very nature, introduce an element of randomness into the game that is outside of the players' control? Note that when I use the word "random", I'm not meaning to launch into a philosophical debate about the deterministic nature of the system. I understand that concurrency is deterministic in the sense that the operating system decides which processes to allow time on the CPU and in what order (or the JVM controls which Thread's turn it is to execute, etc). But my understanding of this is that there is no way to control or predict whether one thread's next command will execute before or after another. The reason I'm asking is because this seems like a fundamental difficulty for game development where a game is supposedly designed around a player's skill. Consider a game like League of Legends. Assume that two players are battling it out. It's a very close contest between the two and it's coming down to the wire -- so much so that whoever gets their last attack off will be the one to kill the other and win the game for their team. If the players are implemented using concurrency and the situation really was like this, is it essentially out of the players' hands at this point? Is the outcome of this match all up to whatever system is arbitrarily deciding which player's thread/process will execute next? If not, what am I misunderstanding about concurrency? If so, is there any way around this problem so that a game of skill can always be a game of skill, especially in those most crucial moments?
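
    For illustration, a hedged sketch of one common way games sidestep this: outcomes are decided by a single authoritative simulation that processes buffered commands in a fixed, rule-defined order (by tick, then by a deterministic tie-break), so thread scheduling can affect when a command arrives but never how a tie is resolved. The names here are made up.

        class Attack {
            final long tick;       // simulation tick when the command was issued
            final int playerId;    // used only to break exact ties deterministically
            Attack(long tick, int playerId) { this.tick = tick; this.playerId = playerId; }
        }

        class CombatResolver {
            // Called once per simulation tick on the game-logic thread.
            Integer resolve(java.util.List<Attack> pending) {
                pending.sort((a, b) -> {
                    int byTick = Long.compare(a.tick, b.tick);          // earlier command wins
                    return byTick != 0 ? byTick : Integer.compare(a.playerId, b.playerId);
                });
                return pending.isEmpty() ? null : pending.get(0).playerId;  // winner of the "last hit"
            }
        }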

    Read the article

  • Regarding sprite design and resolution for tablets and phones

    - by Dimitris P.
    I am about to start working on a game for android devices, in my spare time, to get familiar with android development. I'm more interested in using the best practices possible than getting a quick result, and that is why I need some guidance regarding graphics. I think the game is going to be fully sprite based. Everything is going to be in .bmp form, or something similar, and my question is: Should I design the sprites in a small resolution (ie for phone screens) and scale them up to fit into larger screens (tablet screens), should I do it vice-versa or should I consider a completely different approach? Would designing a different set of sprites for each of the most used resolution settings be worth it or are there simpler solutions to the problem with fewer drawbacks than the ones I mentioned above? (If I follow the first approach, for example, the larger the screen the worse the graphics will get, since every pixel of the original drawing will cover several pixels on the screen). Is there a standard approach for dealing with this kind of problems? If you need me to be more detailed or more clear about something I mentioned (or forgot to) please don't hesitate to ask. Also, excuse me for any inaccurate use of the English language. Thank you in advance for your input.

    Read the article

  • Sprite Animation using cocos2dx 2.0.2

    - by Lalit Chattar
    I am new to game development and am learning the cocos2d-x framework. I am trying to implement sprite animation using cocos2d-x. I tried many demos and they are all the same, but when I tried it I got an access violation error in my code.

        CCSpriteFrameCache::sharedSpriteFrameCache()->addSpriteFramesWithFile("AnimBear.plist");
        CCSpriteBatchNode *spreetsheet = CCSpriteBatchNode::create("AnimBear.png");
        this->addChild(spreetsheet);
        CCArray *bearArray = new CCArray();
        for(int i = 1; i <= 8; i++) {
            char name[32] = {0};
            sprintf(name, "bear%d.png", i);
            bearArray->addObject(CCSpriteFrameCache::sharedSpriteFrameCache()->spriteFrameByName(name));
        }
        CCAnimation *walkAnim = CCAnimation::animationWithSpriteFrames(bearArray, 0.1f);
        CCSize size = CCDirector::sharedDirector()->getWinSize();
        CCSprite *bear = CCSprite::spriteWithSpriteFrameName("bear1.png");
        bear->setPosition(ccp(size.width/2, size.height/2));
        CCAction *walkAction = CCRepeatForever::actionWithAction(CCAnimate::actionWithAnimation(walkAnim));
        bear->runAction(walkAction);
        spreetsheet->addChild(bear);

    The error occurs on the first line, while passing the plist reference. Please help me. I am using Visual Studio 2010 and have put both files (png and plist) in the Resource folder.

    Read the article

  • Does Unity have existing support for Timelines?

    - by Raven Dreamer
    I am planning the development of a game in Unity3D, and trying to come to terms with what the engine has already provided, and what I must code myself. The game itself is going to be a rhythm game, which means synching audio and graphical events so that they always play when they're supposed to. What I'm looking to avoid is a potential scenario of lag where either the audio or the graphics starts to progress faster than the other. When we discussed this type of coordinating system in my game design class back at university, my professor called this type of design a "Timeline" class. The idea being that you can instantiate one or more of these to progress at different rates, schedule things to happen in the future, and synch up periodic events. However, calling this a "Timeline" class seems to have been limited to my professor himself, as googling for whether a certain API features "Timeline" functionality has been a fruitless endeavor. Is there some more common name for this kind of functionality? Does Unity have any pre-existing methods to coordinate the scheduling of events like this, or is this the kind of thing that needs to be built onto the engine? And if it does, I'd appreciate being pointed towards some tutorials!
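
    For illustration, a minimal sketch of the "Timeline" concept the professor described - a clock that advances at its own rate and fires scheduled events in order. It is written in plain Java purely to show the idea and is not a Unity API.

        import java.util.PriorityQueue;

        class Timeline {
            private double now;          // seconds on this timeline
            private double rate = 1.0;   // > 1 runs fast, < 1 runs slow, 0 pauses
            private final PriorityQueue<Scheduled> queue = new PriorityQueue<Scheduled>();

            void schedule(double time, Runnable action) {
                queue.add(new Scheduled(time, action));
            }

            // Call once per frame with the real elapsed seconds.
            void advance(double realDelta) {
                now += realDelta * rate;
                while (!queue.isEmpty() && queue.peek().time <= now) {
                    queue.poll().action.run();
                }
            }

            private static class Scheduled implements Comparable<Scheduled> {
                final double time;
                final Runnable action;
                Scheduled(double time, Runnable action) { this.time = time; this.action = action; }
                public int compareTo(Scheduled other) { return Double.compare(time, other.time); }
            }
        }

    Instantiating one timeline per concern (audio, graphics, gameplay) and feeding them the same frame delta is the "progress at different rates, schedule things in the future" behaviour described above; keeping them perceptually in sync for a rhythm game still depends on driving them from a reliable clock.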

    Read the article

  • Help to organize game cycle in Java

    - by ASIO22
    I'm pretty new here (as well as to game development). So here's my question: I'm trying to organize a really simple game cycle in my public static main() as follows:

        Scanner sc = new Scanner(System.in);
        //Running the game cycle
        boolean flag = true;
        while (flag) {
            int action;
            System.out.println("Type your action please:");
            System.out.println("0: Exit app");
            try {
                action = sc.nextInt();
                switch (action) {
                    case 0: flag = false; break;
                    case 1: break;
                }
            } catch (InputMismatchException ex) {
                System.out.println(ex.getClass() + "\n" + "Please type a correct input\n");
                //action = sc.nextInt();
                continue;
            }

    What's wrong with this cycle: I want to catch an exception when the user types text instead of a number, show a warning message, and then continue the game cycle and read user input again. But instead of that, when the user types wrong data, it goes into an eternal cycle without even prompting the user. What did I do wrong?
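
    For reference, a short sketch of the usual fix: Scanner.nextInt() does not consume a token it cannot parse, so after an InputMismatchException the same bad token is read again on the next pass, which produces exactly this endless cycle. Discarding it in the catch block breaks the loop (sc.nextLine() would also work if you want to throw away the whole line).

        } catch (InputMismatchException ex) {
            System.out.println("Please type a correct input");
            sc.next();   // discard the invalid token so the next nextInt() sees fresh input
        }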

    Read the article

  • Alternatives to Component Based Architecture?

    - by Ben Lakey
    Usually when I develop a game I will use an architecture like what you see below. What other architectures are popular for simple game development? I'm concerned about having a narrow view of what exists out there for architectures beyond this. Is this an example of component-based architecture? Or is this something else? What would that look like? What alternatives exist? public abstract class ComponentBase { protected final Collection<ComponentBase> subComponents = new LinkedList<ComponentBase>(); private boolean enableInput; private boolean isVisible; protected ComponentBase(boolean enableInput, boolean isVisible) { this.enableInput = enableInput; this.isVisible = isVisible; } public void render(Graphics2D graphics) { for(ComponentBase gameComponent : this.subComponents) { if(gameComponent.isVisible()) { gameComponent.render(graphics); } } } public void input(InputData input) { for(ComponentBase gameComponent : this.subComponents) { if(gameComponent.inputIsEnabled()) { gameComponent.input(input); } } } ... getters/setters ... public void update(long elapsedTimeMillis) { for(ComponentBase gameComponent : this.subComponents) { gameComponent.update(elapsedTimeMillis); } } }
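
    For contrast with the inheritance tree above, a hedged sketch of what a component-based (entity-component) layout often looks like: entities are just bags of data components, and systems iterate over whichever components they care about. The class names are illustrative only.

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        class Entity {
            private final Map<Class<?>, Object> components = new HashMap<Class<?>, Object>();
            <T> void add(T component) { components.put(component.getClass(), component); }
            <T> T get(Class<T> type) { return type.cast(components.get(type)); }
            boolean has(Class<?> type) { return components.containsKey(type); }
        }

        class Position { float x, y; }
        class Velocity { float dx, dy; }

        class MovementSystem {
            void update(List<Entity> entities, float dt) {
                for (Entity e : entities) {
                    if (e.has(Position.class) && e.has(Velocity.class)) {
                        Position p = e.get(Position.class);
                        Velocity v = e.get(Velocity.class);
                        p.x += v.dx * dt;
                        p.y += v.dy * dt;
                    }
                }
            }
        }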

    Read the article
