Search Results

Search found 25952 results on 1039 pages for 'development lifecycle'.

Page 301/1039 | < Previous Page | 297 298 299 300 301 302 303 304 305 306 307 308  | Next Page >

  • Which software to keep track of my project?

    - by Exa
    I'm about to start the first real phase of my game development, which will consist of gathering information and resources and defining where I want to go and what I will need to get there. I just want to make sure that I'm as well prepared as possible before I actually start development. I don't like the thought of using Microsoft Word or Excel for my project management... I have already worked with MS Project, but I don't think it fits my needs. I need software where I can easily maintain project steps, milestones, important issues, and information about the technologies and engines I use, as well as simple notes and thoughts I just want to write down. I usually prefer a whiteboard for stuff like that, but unfortunately it's not a persistent way of storing things. ;) Writing it down the old-school way is also something I can imagine, but only for quick notes... Which software do you use for that? Are there commonly used programs? Is there any free software at all?

    Read the article

  • Jump handling and gravity

    - by sprawl
    I'm new to game development and am looking for some help on improving my jump handling for a simple side-scrolling game I've made. I would like to make the jump last longer if the key is held down for the full length of the jump; otherwise, if the key is just tapped, the jump should not be as long. Currently, I'm handling the jumping as follows:

        Player.prototype.jump = function () {
            // Player pressed jump key
            if (this.isJumping === true) {
                // Set sprite to jump state
                this.settings.slice = 250;
                if (this.isFalling === true) {
                    // Player let go of jump key, increase rate of fall
                    this.settings.y -= this.velocity;
                    this.velocity -= this.settings.gravity * 2;
                } else {
                    // Player is holding down jump key
                    this.settings.y -= this.velocity;
                    this.velocity -= this.settings.gravity;
                }
            }
            if (this.settings.y >= 240) {
                // Player is on the ground
                this.isJumping = false;
                this.isFalling = false;
                this.velocity = this.settings.maxVelocity;
                this.settings.y = 240;
            }
        }

    I'm setting isJumping on keydown and isFalling on keyup. While it works okay for simple use, I'm looking for a better way to handle jumping and gravity. It's a bit buggy when the gravity is increased on keyup (which is why I had to add the last y assignment in that final if condition), so I'd like to know a better way to do it. What are some resources I could look at that would help me better understand how to handle jumping and gravity? What's a better approach? Like I said, I'm new to game development, so I could be doing it completely wrong. Any help would be appreciated.
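    A common alternative to doubling gravity on key release is to cut the upward velocity the moment the jump key is let go, which naturally gives short taps a short hop and held presses a full jump. Below is a minimal sketch of that idea; it is written in Java only to keep the examples on this page in one language, and every name (jumpSpeed, jumpCutFactor, the 240 ground line borrowed from the question) is illustrative rather than a drop-in fix:

        // Variable-height jump: gravity always applies; releasing the key cuts upward speed.
        public class Player {
            double y = 240;                    // ground level, as in the question
            double velocityY = 0;              // negative = moving up
            final double gravity = 0.5;
            final double jumpSpeed = 10;
            final double jumpCutFactor = 0.4;  // how sharply an early release shortens the jump

            void update(boolean jumpJustPressed, boolean jumpHeld) {
                boolean onGround = (y >= 240);
                if (jumpJustPressed && onGround) {
                    velocityY = -jumpSpeed;          // start the jump
                }
                if (!jumpHeld && velocityY < 0) {
                    velocityY *= jumpCutFactor;      // key released early: damp the ascent
                }
                velocityY += gravity;                // one gravity rule for the whole arc
                y += velocityY;
                if (y > 240) {                       // landed: clamp to the ground
                    y = 240;
                    velocityY = 0;
                }
            }
        }

    The same structure ports directly to the JavaScript above: keep a single gravity term, and on keyup scale the velocity down instead of switching to a second gravity value.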

    Read the article

  • Is there an alternative to SDL 1.3 for a C++ game that should run on iOS and Android?

    - by futlib
    I've used SDL for many desktop games, always as the cross-platform glue for: Creating a window Processing input Rendering images Rendering fonts Playing sounds/music It has never disappointed me at those tasks. But when it comes to graphics, I prefer to work with the OpenGL API directly, even though all of our games are 2D. In the project I'm currently working on, I've made sure to only use the API subset supported by both OpenGL 1.3 and OpenGL 1.0, so making the thing run on Android should be easy, I thought. Turns out there is no official Android or iOS port of SDL yet. However, there's one in SDL 1.3, which is still in development. SDL 1.3 doesn't seem very appealing to me for three reasons: It's been in development for at least 4 years, and I have no idea when it will be done, not to mention stable. It's not ported to as many platforms as SDL 1.2. From what I've seen, it uses OpenGL for drawing, so I suppose the community will move away from directly using OpenGL. So I'm wondering if I should use a different library for our current project - it doesn't matter much if I need to port my existing code from SDL 1.2 to SDL 1.3 or to some other library. We're planning to release on Windows, Mac OS X, Linux, iOS and Android, so good support for these platforms is essential. Is there anything stable that does what I want?

    Read the article

  • Clutter for game GUI

    - by tjameson
    I'm pretty new to game development, having only written a simple 3D game for a class project, but I'd like to get started on a bigger project. I'm writing an MMORPG to run both in the browser (WebGL) and natively (OpenGL ES 2). In choosing a GUI toolkit, I'm trying to find a style that would work natively and would be simple to emulate in WebGL. I am considering using D or Go for writing my game, so interfacing with C++ libraries will be difficult, if not impossible. Of course, the language isn't the end goal here, so if using C++ will save considerable time, I'll bite the bullet and use that. In order to reduce the amount of code I'll have to write for the browser, I'm considering using something simple like Clutter for basic abstractions, which I think will be pretty easy to emulate (layered canvases maybe?). Does anyone have experience using Clutter for a 3D game? Note: I haven't used any game development libraries, and I only have limited experience with GUI libraries. I do have HTML+CSS experience, so maybe libRocket is a viable solution?

    Read the article

  • OpenXDK Questions

    - by iamcreasy
    I was looking around XBox development. Apart from buying a DevKit from Microsoft, another thing that caught my attention is OpenXDK, which stands for Open XBox Development Kit. From their main site it's pretty obvious that there hasn't been any update since 2005, but digging a little deeper, I found that their project repository was being updated. The last timestamp was 2009-02-15. A quick Google search suggested it's not really in a good enough state to poke around in. Many, MANY features are absent. Being a hobby project, I perfectly understand. But those results are quite old. The question is: is there anybody who has any experience with OpenXDK? If so, is it possible to shed some light on this? On its limitations? Is this a mature project? How's the latest version, and what is it capable of doing? Or should I just stay away from it?

    Read the article

  • What helped YOU learn C++? [on hold]

    - by Tips48
    So here's my attempt to not get this question closed for being too subjective :P I'm a young programmer, specifically interested in game development. I've written my first couple of games in Java, which I would consider myself intermediate-to-advanced in. As I start to prepare myself for college and (hopefully) internships, I've noticed that learning C/C++ is essential to the industry. I've decided to start with C++, and so I read a couple of books that I saw suggested. Anyway, now I have a decent understanding of the basics, but I really want to enhance my language knowledge. Instead of just asking for things to do, I was wondering: what were some exercises that you did that really helped you understand the language? Preferably they would be near the beginner level. I understand that they obviously won't be directly related to game development, but it would be nice if there were some things that I could transfer over eventually. (Specifically, I struggle with memory (pointers, etc.) since there is no such concept in Java.) Thanks! - Tips P.S.: Here's to hoping this isn't too subjective :P

    Read the article

  • Writing an OS for Motorola 68K processor. Can I emulate it? And can I test-drive OS development?

    - by ulver
    Next term, I'll need to write a basic operating system for the Motorola 68K processor as part of a course's lab material. Is there a Linux emulator of a basic hardware setup with that processor, so my partners and I can debug more quickly on our own computers instead of physically restarting the board all the time? Also, is it possible to apply test-driven development to OS development? The code will be mostly assembly and C. What will be the main difficulties in trying to test-drive this? Any advice on how to do it?

    Read the article

  • Overwrite archetypes in Maven

    - by Random
    Hello again! I'm having some trouble using Maven for my archetypes and I will need to overwrite some of them. I launch an instruction that does an archetype:generate in a directory where the archetype already exists. Is there a parameter that lets me overwrite existing archetypes? I have searched the Maven definitive guide, but it states that the only parameters accepted are: -DgroupId, -DartifactId, -Dversion, -DpackageName, -DarchetypeGroupId, -DarchetypeArtifactId, -DarchetypeVersion, -DinteractiveMode. I could just search the directory and delete the files, but this process is going to be done automatically (so no human involved, no brains involved) and I wouldn't like the machine deleting things on its own. Thanks for all! Edit: I almost forgot, here is some of the Maven trace:

        [INFO] Scanning for projects...
        [INFO] Searching repository for plugin with prefix: 'archetype'.
        [INFO] ------------------------------------------------------------------------
        [INFO] Building Maven Default Project
        [INFO] task-segment: [archetype:generate] (aggregator-style)
        [INFO] ------------------------------------------------------------------------
        [INFO] Preparing archetype:generate
        [INFO] No goals needed for project - skipping
        [INFO] Setting property: classpath.resource.loader.class => 'org.codehaus.plexus.velocity.ContextClassLoaderResourceLoader'.
        [INFO] Setting property: velocimacro.messages.on => 'false'.
        [INFO] Setting property: resource.loader => 'classpath'.
        [INFO] Setting property: resource.manager.logwhenfound => 'false'.
        [INFO] [archetype:generate {execution: default-cli}]
        [INFO] Generating project in Batch mode
        [INFO] Archetype defined by properties
        [INFO] ----------------------------------------------------------------------------
        [INFO] Using following parameters for creating OldArchetype: archetype-foo-lib:1.0
        [INFO] ----------------------------------------------------------------------------
        [INFO] Parameter: groupId, Value: foo.tecnologia
        [INFO] Parameter: packageName, Value: foo.tecnologia
        [INFO] Parameter: basedir, Value: C:\temp\Desarrollo
        [INFO] Parameter: package, Value: foo.tecnologia
        [INFO] Parameter: version, Value: 1.0
        [INFO] Parameter: artifactId, Value: Foo-Lib-Test
        [ERROR] Directory Foo-Lib-Test already exists - please run from a clean directory
        org.apache.maven.archetype.old.ArchetypeTemplateProcessingException: Directory Foo-Lib-Test already exists - please run from a clean directory
            at org.apache.maven.archetype.old.DefaultOldArchetype.createArchetype(DefaultOldArchetype.java:242)
            at org.apache.maven.archetype.generator.DefaultArchetypeGenerator.processOldArchetype(DefaultArchetypeGenerator.java:253)
            at org.apache.maven.archetype.generator.DefaultArchetypeGenerator.generateArchetype(DefaultArchetypeGenerator.java:143)
            at org.apache.maven.archetype.generator.DefaultArchetypeGenerator.generateArchetype(DefaultArchetypeGenerator.java:286)
            at org.apache.maven.archetype.DefaultArchetype.generateProjectFromArchetype(DefaultArchetype.java:69)
            at org.apache.maven.archetype.mojos.CreateProjectFromArchetypeMojo.execute(CreateProjectFromArchetypeMojo.java:184)
            at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:284)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
            at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
            at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
            at com.foo.model.CSMavenCli.main(CSMavenCli.java:391)
            at com.foo.model.MavenAdmin.generateArchetype(MavenAdmin.java:399)
            at com.foo.model.ValidarPom.validarPom(ValidarPom.java:167)
            at com.foo.prueba.GenerarPOM.execute(GenerarPOM.java:93)
            at org.apache.struts.chain.commands.servlet.ExecuteAction.execute(ExecuteAction.java:58)
            at org.apache.struts.chain.commands.AbstractExecuteAction.execute(AbstractExecuteAction.java:67)
            at org.apache.struts.chain.commands.ActionCommandBase.execute(ActionCommandBase.java:51)
            at org.apache.commons.chain.impl.ChainBase.execute(ChainBase.java:191)
            at org.apache.commons.chain.generic.LookupCommand.execute(LookupCommand.java:305)
            at org.apache.commons.chain.impl.ChainBase.execute(ChainBase.java:191)
            at org.apache.struts.chain.ComposableRequestProcessor.process(ComposableRequestProcessor.java:283)
            at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1913)
            at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:462)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:647)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:873)
            at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665)
            at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528)
            at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81)
            at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689)
            at java.lang.Thread.run(Unknown Source)
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD FAILURE
        [INFO] ------------------------------------------------------------------------
        [INFO] : org.apache.maven.archetype.old.ArchetypeTemplateProcessingException: Directory Foo-Lib-Test already exists - please run from a clean directory
        Directory Foo-Lib-Test already exists - please run from a clean directory
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 1 second
        [INFO] Finished at: Fri Apr 09 10:01:33 CEST 2010
        [INFO] Final Memory: 15M/28M
        [INFO] ------------------------------------------------------------------------

    Read the article

  • Now Available: Visual Studio 2010 Release Candidate Virtual Machines with Sample Data and Hands-on-Labs

    - by John Alexander
    From a message from Brian Keller: “Back in December we posted a set of virtual machines pre-configured with Visual Studio 2010 Beta 2, Visual Studio Team Foundation Server 2010 Beta 2, and 7 hands-on-labs. I am pleased to announce that today we have shipped an updated virtual machine using the Visual Studio 2010 Release Candidate bits, a brand new sample application, and 9 hands-on-labs. This VM is customer-ready and includes everything you need to learn and/or deliver demonstrations of many of my favorite application lifecycle management (ALM) capabilities in Visual Studio 2010. This VM is available in the virtualization platform of your choice (Hyper-V, Virtual PC 2007 SP1, and Windows [7] Virtual PC). Hyper-V is highly recommended because of the performance benefits and snapshotting capabilities.
    Tailspin Toys: The sample application we are using in this virtual machine is a simple ASP.NET MVC 2 storefront called Tailspin Toys. Tailspin Toys sells model airplanes and relies on the application lifecycle management capabilities of Visual Studio 2010 to help them build, test, and maintain their storefront. Major kudos go to Dan Massey for building out this great application for us.
    Hands-on-Labs / Demo Scripts: The 9 hands-on-labs / demo scripts which accompany this virtual machine cover several of the core capabilities of conducting application lifecycle management with Visual Studio 2010. Each document can be used by an individual in a hands-on-lab capacity, to learn how to perform a given set of tasks, or used by a presenter to deliver a demonstration or classroom-style training. Unlike the Beta 2 release, 100% of these labs target Tailspin Toys to help ensure a consistent storytelling experience.
    Software quality: Authoring and Running Manual Tests using Microsoft Test Manager 2010; Introduction to Test Case Management with Microsoft Test Manager 2010; Introduction to Coded UI Tests with Visual Studio 2010 Ultimate; Debugging with IntelliTrace using Visual Studio 2010 Ultimate.
    Software architecture: Code Discovery using the architecture tools in Visual Studio 2010 Ultimate; Understanding Class Coupling with Visual Studio 2010 Ultimate; Using the Architecture Explorer in Visual Studio 2010 Ultimate to Analyze Your Code.
    Software Configuration Management: Planning your Projects with Team Foundation Server 2010; Branching and Merging Visualization with Team Foundation Server 2010.”
    Check out Brian’s post for more info, including download instructions…

    Read the article

  • New TFS Template Available - "Agile Dev in a Waterfall Environment"–GovDev

    - by Hosam Kamel
    Microsoft Team Foundation Server (TFS) 2010 is the collaboration platform at the core of Microsoft’s application lifecycle management solution. In addition to core features like source control, build automation and work-item tracking, TFS enables teams to align projects with industry processes such as Agile, Scrum and CMMI via the use of customizable XML process templates. Since 2005, TFS has been a welcome addition to the Microsoft developer tool line-up for government agencies of all sizes and missions. However, many government development teams consistently struggle to leverage an iterative development process while still providing the structure, visibility and status reporting that is required by many government, waterfall-centric project methodologies. GovDev is an open-source TFS process template that combines the formality of CMMI/waterfall with the flexibility of Agile/iterative development. The GovDev for TFS Accelerator also implements two new custom reports to support the customized process and provide real-time visibility across the lifecycle, with full traceability and drill-down to tasks, tests and code.
    The TFS Accelerator contains:
    A custom TFS process template that implements a requirements-centric, yet iterative, process with extreme traceability throughout the lifecycle.
    A custom “Requirements Traceability Report” that provides a single view of traceability for the project. Within the Traceability Report, you can also view live status indicators and “click through” to the individual assets (even changesets).
    A custom report that focuses on “Contributions by Team Member”, tracking things like “number of check-ins” and “net lines added”.
    Fully integrated documentation on the entire process and features.
    For a 45-minute demo of GovDev, visit: https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032508359&culture=en-us Download it from CodePlex here. Originally posted at "Hosam Kamel | Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • Java - System design with distributed Queues and Locks

    - by sunny
    Looking for input to evaluate a design for a system (Java) which would have a distributed queue serving several (but not too many) nodes. These nodes would process objects present in the distributed queue and, on occasion, require a distributed lock across the cluster on arbitrary (distributed) data structures. These (distributed) data structures could potentially live in a distributed cache. Eliminating Terracotta (DSO), Hazelcast and Akka, what could the alternative choices be? I am currently considering ZooKeeper as the distributed locking mechanism. Since the recommendation is that a znode not exceed 1MB in size, my understanding is that ZooKeeper should not be used as a distributed queue; see also Netflix Curator tech note 4. So should a distributed cache, say memcached or Redis, be used to emulate a distributed queue? I.e. the distributed queue would be stored in the cache and locked cluster-wide via ZooKeeper. Are there potential pitfalls with this high-level approach? The objects don't need to be taken off the queue. Each object will pass through a lifecycle which will determine its removal from the queue. There would be about 10k+ objects in the queue at a given time, changing states, and any node could service one stage of an object's lifecycle. (Although not strictly necessary, i.e. one node could serve the entire lifecycle if that is more efficient.) Any suggestions/alternatives? Side note: new to ZooKeeper, Redis, etc.
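    One way to make the "cache as the queue, ZooKeeper for the locking" idea concrete is to keep the queue in a Redis list and wrap state transitions in a Curator InterProcessMutex. The following is only a rough sketch of that combination, not a vetted design: the connection strings, key names and the nextState() helper are all assumptions, and real code would need timeouts, retries and error handling around the lifecycle update:

        import org.apache.curator.framework.CuratorFramework;
        import org.apache.curator.framework.CuratorFrameworkFactory;
        import org.apache.curator.framework.recipes.locks.InterProcessMutex;
        import org.apache.curator.retry.ExponentialBackoffRetry;
        import redis.clients.jedis.Jedis;

        public class QueueWorker {
            public static void main(String[] args) throws Exception {
                CuratorFramework zk = CuratorFrameworkFactory.newClient(
                        "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
                zk.start();
                InterProcessMutex lock = new InterProcessMutex(zk, "/locks/work-queue");

                try (Jedis redis = new Jedis("redis-host", 6379)) {
                    lock.acquire();                                  // cluster-wide lock held in ZooKeeper
                    try {
                        String id = redis.lindex("work-queue", 0);   // peek: objects stay on the queue
                        if (id != null) {
                            String state = redis.hget("object:" + id, "state");
                            redis.hset("object:" + id, "state", nextState(state));
                        }
                    } finally {
                        lock.release();                              // always release, even on failure
                    }
                }
                zk.close();
            }

            // Placeholder for the object's lifecycle transition logic.
            private static String nextState(String current) {
                return "PROCESSING";
            }
        }

    One pitfall worth noting: a single ZooKeeper mutex around every queue operation serializes the whole cluster, so it is usually better to lock per object (e.g. /locks/object-<id>) than to lock the queue itself.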

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices, STAGE 4: AUTOMATED DEPLOYMENT
    If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users”. There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready! If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way.
It’s also worth pointing out, that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we’ll look at the following areas for your checklist: You and Your Team Environments The Deployment Process Rollback and Recovery Development Practices You and Your Team It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as: -2008 to present: overall development costs reduced by 40% -Number of programs under development increased by 140% -Development costs per program down 78% -Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40% But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well. 
There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress. Actions: Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans, Get your boss involved too and make sure he/she is bought in to the investment. Environments Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are: What we want What we’ve got Each developer with their own dedicated database environment A single shared “development” environment, used by everyone at once An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine In fact if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question… Separate QA environment used explicitly for manual testing prior to release “We just test on the dev environments, or maybe pre-production” A proper pre-production (or “staging”) box that matches production as closely as possible Hopefully a pre-production box of some sort. But does it match production closely!? A production environment reproducible from source control A production box which has drifted significantly from anything in source control The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application. 
There’s a world if difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have: Dedicated development databases, An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action], QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing, Pre-production. The environment you use to test the production release process, Production. * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single-click from a user. Actions: Get your environments set up and ready, Set access permissions appropriately, Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development). The Deployment Process As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?” Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring in to pre-prod, Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script, User (generally, the DBA), reviews the script. This often involves manually checking updates against a spreadsheet or similar, Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped), If all working, run the script on production.* * this assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions. There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process keys in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend. 
At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. One he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommended sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes. Actions: Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” Repeat for earlier environments (QA and so on). Rollback and Recovery If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move in to a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles, things like: Immediately restore from backup, Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!) Have fallback environments – for example, using a blue-green deployment pattern. Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups, is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism. Actions: Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process. Development Practices This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application. 
So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here, by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slidedeck, from Niek Bartholomeus explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start. Actions: For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes). Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so good to introduce these ideas and the rationale behind them early.   Summary The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning our your pipeline, environments, patterns and practices for development”, though there will be more detail, depending on where you’re coming from – and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Visual Studio 2010 Service Pack 1 Released

    - by krislankford
    The VS 2010 SP1 release was simultaneous with the release of TFS 2010 SP1 and includes support for the Project Server Integration Feature Pack and updates to .NET Framework 4.0. The complete Visual Studio SP1 list, including Test and Lab Manager: http://support.microsoft.com/kb/983509
    The release addresses some of the most requested features from customers of Visual Studio 2010, such as:
        better Help support
        IntelliTrace support for 64-bit and SharePoint
        Silverlight 4 Tools in the box
        unit testing support on .NET 3.5
        a new performance wizard for Silverlight
    Another major addition is the announcement of Unlimited Load Testing for Visual Studio 2010 Ultimate with MSDN Subscribers! The benefits of the Visual Studio 2010 Load Test Feature Pack and useful links:
        Improved Overall Software Quality through Early Lifecycle Performance Testing: Lets you stress test your application early and throughout its development lifecycle with realistically modeled simulated load. By integrating performance validations early into your applications, you can ensure that your solution copes with real-world demands and behaves in a predictable manner, effectively increasing overall software quality.
        Higher Productivity and Reduced TCO with the Ability to Scale without Incremental Costs: Development teams no longer have to purchase Visual Studio Load Test Virtual User Pack 2010.
    Download the Visual Studio 2010 Load Test Feature Pack Deployment Guide. Get started with stress and performance testing with Visual Studio 2010 Ultimate:
        Quality Solutions Best Practice: Enabling Performance and Stress Testing throughout the Application Lifecycle
        Hands-On-Lab: Introduction to Load Testing with ASP.NET Profile in Visual Studio 2010
        How-Do-I videos: Use ASP.NET Profiler in Load Tests; Use Network Emulation in Load Tests
        VHD/VPC walkthrough: Getting Started with Load and Performance Testing
        Best Practice guidance: Visual Studio Performance Testing Quick Reference Guide

    Read the article

  • Apache Solr: What is a good strategy for creating a tag/attribute-based search for an image?

    - by Development 4.0
    I recently read an article about YayMicro that describes how they used Solr to search their photos. I would like to do something similar (but on a smaller scale). I have figured out how to get Solr to search text files, but I would like to learn the best way to associate images with semi-structured/unstructured text. Do I create an XML file with an image link in it? I basically want to input a search string and have it return a grid of images. Yay Micro Article Link
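    A common pattern is to index one Solr document per image: the tag and caption text go into searchable fields, and the image's URL goes into a stored field that is simply returned with each hit, so results can be rendered straight into a grid of images. Here is a rough SolrJ sketch of indexing a single image; the field names (tags, caption, image_url), the document id and the server URL are assumptions and would need matching entries in your schema.xml:

        import org.apache.solr.client.solrj.SolrServer;
        import org.apache.solr.client.solrj.impl.HttpSolrServer;
        import org.apache.solr.common.SolrInputDocument;

        public class ImageIndexer {
            public static void main(String[] args) throws Exception {
                SolrServer solr = new HttpSolrServer("http://localhost:8983/solr");

                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", "img-0001");
                doc.addField("tags", "sunset beach ocean");                           // searchable text
                doc.addField("caption", "Sunset over the beach at low tide");         // searchable text
                doc.addField("image_url", "http://example.com/images/img-0001.jpg");  // stored, returned with hits

                solr.add(doc);
                solr.commit();   // a query on tags/caption now returns image_url for the grid
            }
        }

    Querying then becomes an ordinary Solr text search over the tags and caption fields, with image_url read out of each result document to build the image grid.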

    Read the article

  • Two internet connections coming in, one Sonicwall Tz170 (enhanced os), and slow speed

    - by Development 4.0
    I work a lot from my home office and, being in general a tad paranoid, I have both cable and DSL pipes coming into my house. I have used an eBay-bought SonicWall TZ170 with the enhanced OS for a good while. I believe it does failover and has a feature for doing round-robin over which connection is used. I get the impression from using it that I might not be getting the most out of this setup. Is it possible/likely that my router could be a cause of the slowdown? Are there more appropriate choices?

    Read the article

  • LogMeIn style remote access to NAS drive

    - by Mere Development
    I've been asked to setup some remote access to a NAS drive. The NAS drive will sit on a VLAN inside a network that uses a Cisco 891 IS router as gateway. The charity have no SSL-VPN licenses for the Cisco. At present there are no open ports or services on the Cisco itself and ideally we would like to keep it that way for a while, hence the request for a LogMeIn style service that's initiated from inside. We need multiple user access, about 10 max. Using LogMeIn on a machine connected to the NAS would only provide screen sharing I believe, and no concurrent connections (could be wrong?) The end users need to be able to read and write files to the NAS from Mac's and PC's around the globe. Read-only access from Mobile devices would be a bonus but not absolutely necessary. This is for a charity, non-commercial, but they are willing to spend if necessary. Cisco config knowledge is at a minimum so if I can avoid upsetting that delicate device I'll be happy :) Anyone have any clever ideas? I can provide more information on request. Thanks, Ben

    Read the article

  • Multiple Apps - One SSL

    - by Optix App Development
    I'm trying to configure a domain and SSL to run multiple Facebook apps through the SSL. What I need advice on is routing the apps through the SSL without actually hosting them on that server. Ideally they would be hosted on the client's server. Any advice on how to do this? UPDATE Following the advice from the replies I have setup a domain which houses my Facebook apps under one SSL. So far this is working well. Thanks guys. :)

    Read the article

  • The Game vs The Game Engine?

    - by Milo
    I was wondering if somebody could tell me how the game and the game engine fit together in game development. Specifically, what I mean is that the game engine does not actually have a game. What I'm unclear about is basically: do game developers build an engine, then create a new class that inherits from the engine and becomes the game? Ex: class ShooterGame : public Engine { }; So basically I'm unclear on where the game code fits into the engine. Thanks
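    Subclassing the engine, as in the snippet above, is one workable pattern, but many engines invert the relationship: the engine owns the main loop and calls into a game object supplied by the developer (composition rather than inheritance). A minimal sketch of that shape follows; it is written in Java to keep the examples on this page in one language, all names are illustrative, and the same structure maps directly onto C++ with an abstract Game base class:

        // The engine owns the loop; the game plugs in through an interface it implements.
        interface Game {
            void update(double dt);   // advance the simulation by dt seconds
            void render();            // draw the current frame
        }

        class Engine {
            private final Game game;
            Engine(Game game) { this.game = game; }

            void run() {
                // Stand-in for a real timed loop with input handling and vsync.
                for (int frame = 0; frame < 3; frame++) {
                    game.update(1.0 / 60.0);
                    game.render();
                }
            }
        }

        class ShooterGame implements Game {
            public void update(double dt) { /* move player, enemies, bullets */ }
            public void render()          { System.out.println("draw frame"); }

            public static void main(String[] args) {
                new Engine(new ShooterGame()).run();
            }
        }

    Either way, the engine stays game-agnostic: it provides the loop, rendering, input and asset services, while the game code supplies the rules and content the engine calls into.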

    Read the article

  • Start Developing a Multiplayer Online Client to host existing video game

    - by Rami.Shareef
    GameRanger, Garena ... etc. I'm planning to start developing a small online client like those mentioned above (for use among friends), where the player that hosts the game is the server himself. I was looking through the web for something to start with, but couldn't find any resources for this. I'm planning to do it with .NET technology, and I have decent development experience. Any good resources to start with? The game I'm aiming to support first is Warcraft III: The Frozen Throne.

    Read the article

  • Popular genres in Asian (non-Japanese) markets?

    - by mummey
    Hello, From time to time I've wondered what kind of games are popular in Asia (India, China, Korea, Singapore, etc...). I hear about developers in the US and UK who outsource work there, but what goes into the games they make for themselves? Relatedly, you hear these days about how Japanese developers have been marketing their games more toward American audiences (with mixed success). In what ways could American developers aim their development toward Asian audiences?

    Read the article

  • Android landscape mode game

    - by davidv
    I am a beginner in Android game development. I want my game to run only in landscape fullscreen mode (currently I have an Optimus 2X with a resolution of 800x480 in landscape), and I don't know how to set it up. I found the fullscreen mode settings and tried some landscape mode settings (setting orientation to landscape in the AndroidManifest), but the game is now crashing and is very unstable (e.g. when I change the phone's orientation). So is there any way to do that? Thank you for your help.

    Read the article

  • Kickstarter "last minute cold feet"

    - by mm24
    Today I scheduled the publication of a video on Kickstarter requesting approximately $5,000 in order to complete the iPhone shooter game I started a year ago after quitting my job. I have invested more than $20,000 in the game so far (for artwork, music, legal and accountant expenses) and I am now getting cold feet about my decision to publish the video. The game is "nearly finished"; in other words, the game mechanics are working but I still have some bugs to fix. Once I have finished this (which I hope will take me 1 or 2 weeks), I plan to start working on the actual level balancing (e.g. deciding the order of appearance of enemies for each level and balancing the number of hit points and the strength of bullets that the enemies have).
    Reasons for not publishing the video: Fear that the concept can be copied easily. The game is a shooter game set in a different environment (it's a pretty cool one, believe me :)) and I am worried that someone might copy the idea (I know, it's the usual "I am worried" story). A shooter game is one of the easiest games to implement, and hence there will be hundreds of game developers able to copy it by just adapting their existing code and changing the graphics (not as straightforward). It took me one year to develop this because I was inexperienced, plus there are approximately 6-7 months of work from the illustrator and 8 unique music tracks composed. The soundtrack of the video is the soundtrack of the game, which is not yet published and has not been deposited with a music society. I did create legally valid timestamps for the tracks, and I am considering uploading the album to iTunes before publishing the video so I can have a certain publication date. But overall I am a bit scared and worried because I have never done this before, and even the simple act of publishing an album requires me to read a long contract from the "aggregator company", which, even though I do have contracts with the musicians, does worry me as I am not a U.S. resident and I am not familiar with the U.S. legal system.
    Reasons for publishing the video: I have almost run out of money (but this is not a real reason, as I should have enough for one more month of development time). I kind of need extra money as, even if I do have money for one month of development, I do not have money for marketing and other expenses (e.g. the accountant). It will create a fan base. I could get some useful feedback from a wider range of beta testers. It might create some pre-release buzz in case some blogger or game magazine likes the concept.
    Has anyone had similar experiences? Is there a real risk that someone will copy the concept and implement it in a couple of months? Will the Kickstarter campaign be good pre-release exposure for the game? Any references to similar projects/situations? Is it realistic that someone like Rovio will copy the idea straight away?

    Read the article

  • Graphical quality of open source vs. commercial games

    - by Toktik
    I'm new to game development. I have looked at many open source games, but I have not come across any open source game with high-quality graphics comparable to those found in commercial games. What is the reason for this? Are open source game engines not advanced enough to support such graphics, or is there just a lack of assets, textures and models? I know that this question is very general, but I would like to hear some points of view.

    Read the article

  • Distributing cross-platform .jar containing natives for LWJGL?

    - by Carter H
    I'm making a game in Java using Slick2d, which depends on LWJGL. I can get everything to work in my development environment, but when I export it to a .jar, it needs the natives placed in the same directory as the .jar. What I'm asking is if it's possible to package the natives for all operating systems in the .jar, and automatically use the right ones depending on what OS was detected. So, is this possible?
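    One widely used approach is to bundle the natives for every platform inside the .jar as resources, extract them to a temporary directory at startup, and then point LWJGL at that directory via the org.lwjgl.librarypath system property (the property LWJGL 2 reads before loading its natives). Below is a rough sketch of that idea; the /natives/<os>/ layout and the file lists are assumptions and would need to match how you actually package the LWJGL (and OpenAL) libraries:

        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.StandardCopyOption;

        public final class NativeLoader {

            // Call this once, before any LWJGL display/audio call is made.
            public static void setupNatives() throws Exception {
                String os = System.getProperty("os.name").toLowerCase();
                String folder;
                String[] libs;
                if (os.contains("win")) {
                    folder = "windows"; libs = new String[] { "lwjgl.dll", "lwjgl64.dll" };
                } else if (os.contains("mac")) {
                    folder = "macosx";  libs = new String[] { "liblwjgl.jnilib" };
                } else {
                    folder = "linux";   libs = new String[] { "liblwjgl.so", "liblwjgl64.so" };
                }

                Path dir = Files.createTempDirectory("game-natives");
                for (String lib : libs) {
                    try (InputStream in = NativeLoader.class
                            .getResourceAsStream("/natives/" + folder + "/" + lib)) {
                        if (in != null) {
                            Files.copy(in, dir.resolve(lib), StandardCopyOption.REPLACE_EXISTING);
                        }
                    }
                }
                System.setProperty("org.lwjgl.librarypath", dir.toAbsolutePath().toString());
            }
        }

    With something like this in place, a single .jar can carry the Windows, Linux and Mac natives side by side, and the right set is unpacked and used at runtime based on os.name.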

    Read the article

  • Do game studios hire people based on their math knowledge alone?

    - by Brent Horvath
    I have very little programming skill outside of very basic Java, but I have excellent math and science knowledge. I was wondering what I could offer a potential team if I were to go into video game development. Do studios hire people based on their math knowledge alone? I like to do other things such as writing or drawing, but math and science are the only skills in which I really excel.

    Read the article
