Search Results

Search found 4765 results on 191 pages for 'gh unit'.

Page 105 of 191

  • A good software development book

    - by Mahmoud Hossam
    I've searched this website, as well as SO, for a question like this, and I still haven't found what I'm looking for. I'm looking for a book similar to Head First Software Development. I want to know more about the different stages of software development; I know about coding already, but I don't know much about unit testing, version control, integration, design, etc. P.S. It'd be nice if the book wasn't a thousand pages long. Edit: I'm looking for an introductory text, not a book about the latest trends in software development.

    Read the article

  • Code Structure / Level Design: Plants vs Zombies game level dissection

    - by lalan
    Hi Friends, I am interested in learning the class structure of Plants vs Zombies, particularly level design; for those who haven't played it, this video contains a nice play-through: http://www.youtube.com/watch?v=89DfdOIJ4xw. How would I go ahead and design the code, mostly structure & classes, to allow for maximum flexibility & clean development? I am familiar with data-driven design concepts, and would use events to handle most dynamic behavior.

    Dissection at macro level:
      (Once every level) Load tilemap, props, etc. - basically build the map
      (Once every level) Camera movement - might consider it a short cut-scene
      (Once every level) Show enemies you'll face during the present level
      (Once every level) Unit selection window/panel - selection of defensive plants
      (Once every level) Camera movement - might consider it a short cut-scene
      (Once every level) HUD creation - based on unit selection
      (Level loop) Enemy creation - based on types of zombies allowed
      (Level loop) Sun/resource generation
      (Level loop) Show messages like 'huge wave of zombies coming', 'final wave'
      (Level loop) Other unique events - spawn gifts, money, tombstones, etc.
      (Once every level) Unlock new plant

    Potential game scripts:

    a) Level definitions: Level_1_1.xml, Level_1_2.xml, etc. Sample script for Level_1_1.xml:

        <map>
          <tilemap>tilemapFrontLawn</tilemap>
          <spawnPoints>tiles where particular types of zombies (land vs water) may spawn</spawnPoints>
          <props>position, entity array -- lawnmower</props>
        </map>
        <zombies>... list of zombies who will attack, by ids ...</zombies>
        <plants>... list of plants which are available for defense, by ids ...</plants>
        <progression>
          <ZombieWave name='first wave' spawnScript='zombieLightWave.lua' unlock='null'>
            <startMessages time=1.5>Ready</startMessages>
            <endMessages time=1.5>Huge wave of zombies incoming</endMessages>
          </ZombieWave>
        </progression>

    b) Entity definitions: .xmls containing descriptions of zombies, plants, sun, lawnmower, coins, etc.

    Potential classes:

        // LevelManager - based on the level under play, it will load the level
        // script. A few of the functions it may have:
        class LevelManager {
        public:
            bool load(string levelFileName);
            bool enter();
            bool update(float deltatime);
            bool exit();
        private:
            LevelData* mLevelData;
        };

        // LevelData - contains the details of the level loaded by LevelManager.
        class LevelData {
        private:
            string file;
            // arrays of cameras, dialogs, attack waves, etc. in the active level
            LevelCutSceneCamera** mArrayCutSceneCamera;
            LevelCutSceneDialog** mArrayCutSceneDialog;
            LevelAttackWave** mArrayAttackWave;
            ...
            // which camera, dialog, attack wave is active in the level
            uint mCursorCutSceneCamera;
            uint mCursorCutSceneDialog;
            uint mCursorAttackWave;
        public:
            // based on the cursor, get the next camera, dialog, attack wave, etc.
            // in the active level; return false/true based on failure/success
            bool nextCutSceneCamera(LevelCutSceneCamera**);
            bool nextCutSceneDialog(LevelCutSceneDialog**);
        };

        // LevelUnderPlay - the level currently in play, driven by LevelManager.
        class LevelUnderPlay {
        private:
            LevelCutSceneCamera* mCutSceneCamera;
            LevelCutSceneDialog* mCutSceneDialog;
            LevelAttackWave* mAttackWave;
            Entities** mSelectedPlants;
            Entities** mAllowedZombies;
            bool isCutSceneCameraActive;
        public:
            bool enter();
            bool update(float deltatime);
            bool exit();
        };

    I am totally confused.. :( Does it make sense to use class composition (a flat class hierarchy) for managing levels? Is it a good idea to just add/remove/update sprites (or any drawable stuff) in the current scene from LevelManager or LevelUnderPlay?
    If I want to make non-linear level design, how should I go ahead? Perhaps I would need a LevelProgression class, which would decide what to do based on a decision tree. Any suggestions would be appreciated very much. Thanks for your time, lalan
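
    A minimal Java sketch of the cursor-driven composition described above (illustrative only: names mirror the question, the XML parsing is stubbed out, and nothing here is the real Plants vs Zombies code):

        // Hypothetical sketch of the flat, cursor-driven level composition;
        // a real version would parse the Level_1_1.xml script instead.
        import java.util.ArrayList;
        import java.util.List;

        class AttackWave {
            final String name;
            final String spawnScript;
            AttackWave(String name, String spawnScript) {
                this.name = name;
                this.spawnScript = spawnScript;
            }
        }

        class LevelData {
            private final List<AttackWave> waves = new ArrayList<>();
            private int waveCursor = 0; // which wave is active in the level

            void addWave(AttackWave w) { waves.add(w); }

            // Based on the cursor, hand back the next scripted wave, or null
            // when the level's progression is exhausted.
            AttackWave nextAttackWave() {
                return waveCursor < waves.size() ? waves.get(waveCursor++) : null;
            }
        }

        class LevelManager {
            private LevelData data;

            boolean load(String levelFileName) {
                // Stubbed: a real version parses the level script here.
                data = new LevelData();
                data.addWave(new AttackWave("first wave", "zombieLightWave.lua"));
                return true;
            }

            boolean update(float deltaTime) {
                AttackWave wave = data.nextAttackWave();
                if (wave == null) {
                    return false; // level complete
                }
                System.out.println("Spawning '" + wave.name + "' via " + wave.spawnScript);
                return true;
            }
        }

        class LevelDemo {
            public static void main(String[] args) {
                LevelManager manager = new LevelManager();
                manager.load("Level_1_1.xml");
                while (manager.update(0.016f)) { /* one frame per wave, for the sketch */ }
            }
        }

    A flat composition like this keeps the level loop simple: LevelManager just asks LevelData for the next scripted event each update, which also leaves room for a LevelProgression/decision-tree object to choose the next event non-linearly.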

    Read the article

  • Learning django by example source code (not examples)

    - by Bryce
    I'm seeking a nice, complete open source Django application to study and learn best practices from, or even use as a template. The tutorials only go so far, and Django is super flexible, which can lead one to painting themselves into a corner. Ideally such a template / example would: ignore the Django admin and implement full CRUD outside the admin; be built like a large application in terms of best practices and patterns; have a unit test; use at least one package (e.g. twitter integration or threaded comments); and implement some AJAX or Comet. See also: Learning Django by example

    Read the article

  • Oracle’s new release of Primavera P6 Enterprise Portfolio Management

    It is estimated that projects totaling more than $6 trillion in value have been managed with Primavera products. Companies turn to Oracle's Primavera project portfolio management solutions to help them make better portfolio management decisions, evaluate the risks and rewards associated with projects, and determine whether there are sufficient resources with the right skills to accomplish the work. Tune in to this conversation with Yasser Mahmud, Director of Product Strategy for the Oracle Primavera Global Business Unit, to learn how P6 revolutionized project management, the new features in Oracle Primavera P6 version 7, and how this newest release helps project-intensive businesses manage their entire project portfolio lifecycle, including projects of all sizes.

    Read the article

  • Flood fill algorithm for Game of Go

    - by Jackson Borghi
    I'm having a hell of a time trying to figure out how to make captured stones disappear. I've read everywhere that I should use the flood fill algorithm, but I haven't had any luck with that so far. Any help would be amazing! Here is my code:

        package Go;

        import static java.lang.Math.*;
        import static stdlib.StdDraw.*;
        import java.awt.Color;

        public class Go2 {
            public static Color opposite(Color player) {
                if (player == WHITE) {
                    return BLACK;
                }
                return WHITE;
            }

            public static void drawGame(Color[][] board) {
                Color[][][] unit = new Color[400][19][19];
                for (int h = 0; h < 400; h++) {
                    for (int x = 0; x < 19; x++) {
                        for (int y = 0; y < 19; y++) {
                            unit[h][x][y] = YELLOW;
                        }
                    }
                }
                setXscale(0, 19);
                setYscale(0, 19);
                clear(YELLOW);
                setPenColor(BLACK);
                line(0, 0, 0, 19);
                line(19, 19, 19, 0);
                line(0, 19, 19, 19);
                line(0, 0, 19, 0);
                for (double i = 0; i < 19; i++) {
                    line(0.0, i, 19, i);
                    line(i, 0.0, i, 19);
                }
                for (int x = 0; x < 19; x++) {
                    for (int y = 0; y < 19; y++) {
                        if (board[x][y] != YELLOW) {
                            setPenColor(board[x][y]);
                            filledCircle(x, y, 0.47);
                            setPenColor(GRAY);
                            circle(x, y, 0.47);
                        }
                    }
                }
                int h = 0;
            }

            public static void main(String[] args) {
                int px;
                int py;
                Color[][] temp = new Color[19][19];
                Color[][] board = new Color[19][19];
                Color player = WHITE;
                for (int i = 0; i < 19; i++) {
                    for (int h = 0; h < 19; h++) {
                        board[i][h] = YELLOW;
                        temp[i][h] = YELLOW;
                    }
                }
                while (true) {
                    drawGame(board);
                    while (!mousePressed()) {
                    }
                    px = (int) round(mouseX());
                    py = (int) round(mouseY());
                    board[px][py] = player;
                    while (mousePressed()) {
                    }
                    floodFill(px, py, player, board, temp);
                    System.out.print("XXXXX = " + temp[px][py]);
                    if (checkTemp(temp, board, px, py)) {
                        for (int x = 0; x < 19; x++) {
                            for (int y = 0; y < 19; y++) {
                                if (temp[x][y] == GRAY) {
                                    board[x][y] = YELLOW;
                                }
                            }
                        }
                    }
                    player = opposite(player);
                }
            }

            private static boolean checkTemp(Color[][] temp, Color[][] board, int x, int y) {
                if (x < 19 && x > -1 && y < 19 && y > -1) {
                    if (temp[x + 1][y] == YELLOW || temp[x - 1][y] == YELLOW
                            || temp[x][y - 1] == YELLOW || temp[x][y + 1] == YELLOW) {
                        return false;
                    }
                }
                if (x == 18) {
                    if (temp[x - 1][y] == YELLOW || temp[x][y - 1] == YELLOW || temp[x][y + 1] == YELLOW) {
                        return false;
                    }
                }
                if (y == 18) {
                    if (temp[x + 1][y] == YELLOW || temp[x - 1][y] == YELLOW || temp[x][y - 1] == YELLOW) {
                        return false;
                    }
                }
                if (y == 0) {
                    if (temp[x + 1][y] == YELLOW || temp[x - 1][y] == YELLOW || temp[x][y + 1] == YELLOW) {
                        return false;
                    }
                }
                if (x == 0) {
                    if (temp[x + 1][y] == YELLOW || temp[x][y - 1] == YELLOW || temp[x][y + 1] == YELLOW) {
                        return false;
                    }
                } else {
                    if (x < 19) {
                        if (temp[x + 1][y] == GRAY) {
                            checkTemp(temp, board, x + 1, y);
                        }
                    }
                    if (x >= 0) {
                        if (temp[x - 1][y] == GRAY) {
                            checkTemp(temp, board, x - 1, y);
                        }
                    }
                    if (y < 19) {
                        if (temp[x][y + 1] == GRAY) {
                            checkTemp(temp, board, x, y + 1);
                        }
                    }
                    if (y >= 0) {
                        if (temp[x][y - 1] == GRAY) {
                            checkTemp(temp, board, x, y - 1);
                        }
                    }
                }
                return true;
            }

            private static void floodFill(int x, int y, Color player, Color[][] board, Color[][] temp) {
                if (board[x][y] != player) {
                    return;
                } else {
                    temp[x][y] = GRAY;
                    System.out.println("x = " + x + " y = " + y);
                    if (x < 19) {
                        floodFill(x + 1, y, player, board, temp);
                    }
                    if (x >= 0) {
                        floodFill(x - 1, y, player, board, temp);
                    }
                    if (y < 19) {
                        floodFill(x, y + 1, player, board, temp);
                    }
                    if (y >= 0) {
                        floodFill(x, y - 1, player, board, temp);
                    }
                }
            }
        }
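
    Two things in the code above will bite: floodFill tests the current cell (x < 19) before recursing to x + 1, so the recursion can index board[19][y] and throw, and nothing stops it from revisiting a stone it has already marked, so same-colored neighbours recurse into each other forever. A hedged corrected sketch (names are illustrative; it assumes the same 19x19 java.awt.Color board as the question):

        import java.awt.Color;
        import java.util.Arrays;

        // Capture check: flood-fill the group at (x, y) and report whether it
        // touches any liberty (an adjacent empty point). Bounds are tested
        // before every array access, and 'seen' stops revisiting stones.
        public class CaptureCheck {
            static final int SIZE = 19;
            static final Color EMPTY = Color.YELLOW;

            // True if the group containing (x, y) has no liberties. When it is
            // captured, 'group' ends up marking every stone in the group.
            static boolean isCaptured(Color[][] board, int x, int y, boolean[][] group) {
                return !hasLiberty(board, x, y, board[x][y], group);
            }

            private static boolean hasLiberty(Color[][] board, int x, int y,
                                              Color player, boolean[][] seen) {
                if (x < 0 || x >= SIZE || y < 0 || y >= SIZE) return false; // off board
                if (seen[x][y]) return false;            // already visited
                if (board[x][y] == EMPTY) return true;   // found a liberty
                if (board[x][y] != player) return false; // opponent stone blocks
                seen[x][y] = true;
                return hasLiberty(board, x + 1, y, player, seen)
                    || hasLiberty(board, x - 1, y, player, seen)
                    || hasLiberty(board, x, y + 1, player, seen)
                    || hasLiberty(board, x, y - 1, player, seen);
            }

            public static void main(String[] args) {
                Color[][] board = new Color[SIZE][SIZE];
                for (Color[] row : board) Arrays.fill(row, EMPTY);
                board[3][3] = Color.WHITE;               // lone white stone...
                board[2][3] = board[4][3] = Color.BLACK; // ...surrounded on all
                board[3][2] = board[3][4] = Color.BLACK; // four sides
                System.out.println(isCaptured(board, 3, 3, new boolean[SIZE][SIZE])); // true
            }
        }

    After each move, run isCaptured on each opposing neighbour of the placed stone; when it returns true, every cell marked in 'group' is a captured stone to clear back to YELLOW.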

    Read the article

  • Don't Use Static? [closed]

    - by Joshiatto
    Possible Duplicate: Is static universally “evil” for unit testing and if so why does resharper recommend it? Heavy use of static methods in a Java EE web application? I submitted an application I wrote to some other architects for code review. One of them almost immediately wrote me back and said "Don't use "static". You can't write automated tests with static classes and methods. "Static" is to be avoided." I checked and fully 1/4 of my classes are marked "static". I use static when I am not going to create an instance of a class because the class is a single global class used throughout the code. He went on to mention something involving mocking, IOC/DI techniques that can't be used with static code. He says it is unfortunate when 3rd party libraries are static because of their un-testability. Is this other architect correct?
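
    The testability argument is easy to demonstrate. A small illustrative Java sketch (all names invented): a static call is bound at compile time, so a test cannot substitute it, while an injected interface gives the test a seam:

        interface Clock {
            long now();
        }

        class SystemClock implements Clock {
            public long now() { return System.currentTimeMillis(); }
        }

        // Hard to test: every call hits the real system clock.
        class ReportStatic {
            static String stamp() { return "Generated at " + System.currentTimeMillis(); }
        }

        // Testable: the collaborator is injected, so a test passes a fake Clock.
        class Report {
            private final Clock clock;
            Report(Clock clock) { this.clock = clock; }
            String stamp() { return "Generated at " + clock.now(); }
        }

        class ReportDemo {
            public static void main(String[] args) {
                Report production = new Report(new SystemClock());
                Report underTest = new Report(() -> 42L); // fixed, repeatable time
                System.out.println(production.stamp());
                System.out.println(underTest.stamp()); // always "Generated at 42"
            }
        }

    Stateless helper methods are usually fine as statics; the pain starts when a static holds state or reaches out to the system, because every caller is welded to it.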

    Read the article

  • BizTalk 2009 - Installing BizTalk Server 2009 on XP for Development

    - by StuartBrierley
    At my previous employer, when developing for BizTalk Server 2004 using Visual Studio 2003, we made use of separate development and deployment environments; developing in Visual Studio on our client PCs and then deploying to a separate shared BizTalk 2004 server from there. This server was part of a multi-server Standard BizTalk environment comprising separate BizTalk Server 2004 and SQL Server 2000 servers. This environment was implemented a number of years ago by an outside consulting company, and while it worked it did occasionally cause contention issues with three developers deploying to the same server to carry out unit testing!

    Now that I am making the design and implementation decisions about the environment that BizTalk will be developed in and deployed to, I have chosen to create a single "server" installation on my development PC, installing SQL Server 2008, Visual Studio 2008 and BizTalk Server 2009 on a single system. The client PC in use is actually a MacBook Pro running Windows XP; not the most powerful of systems for high-volume processing, but it should be powerful enough to allow development and initial unit testing to take place. I did not need to, and so chose not to, install all of the components detailed in the Microsoft guide for installing BizTalk 2009 on Windows XP, but I did follow the basics of the procedures detailed within. Outlined below are the highlights of this process and details of the choices I made.

    Install IIS
    I had previously installed Windows XP, including all current service packs and critical updates. At the time of installation this included Service Pack 3, the .NET Framework 3.5 and MS Windows Installer 3.1. Having a running XP system, my first step was to install IIS - this is quite straightforward and posed no difficulties.

    Install Visual Studio 2008
    The next step for me was to install Visual Studio 2008. Making sure to select a custom installation is crucial at this point, as you need to make sure that you deselect SQL Server 2005 Express Edition, as it can cause the BizTalk installation to fail. The installation guide suggests that you only select Visual C# when selecting features to install, but I decided that due to some legacy systems I have code for, I would also select the VB and ASP options.

    Visual Studio 2008 Service Pack 1
    Following the completion of the installation of Visual Studio itself, you should then install Visual Studio 2008 Service Pack 1.

    SQL Server 2008 Standard Edition
    The next step before installing BizTalk Server 2009 itself is to install SQL Server 2008 Standard Edition. On the feature selection screen, make sure that you select the following options: Database Engine Services, SQL Server Replication, Full-Text Search, Analysis Services, Reporting Services, Business Intelligence Development Studio, Client Tools Connectivity, Integration Services, and Management Tools Basic and Complete. Use the default instance and the same accounts for all SQL Server instances - in my case I used the Network Service and Local Service accounts for the two sets of accounts. On the database engine configuration screen I selected Windows authentication and added the current user, adding the same user again on the Analysis Services configuration screen. All other screens were left on the default settings. The SQL Server 2008 installation also included the installation of the hotfix for XP KB942288-v3, the Windows Installer 4.5 redistributable.
    System Configuration
    At this stage I took a moment to disable the SQL Server shared memory protocol and enable the Named Pipes and TCP/IP protocols. These can be found in SQL Server Configuration Manager > SQL Server Network Configuration > Protocols for MSSQLServer. I also made sure that the DTC settings were configured correctly.

    BizTalk Server 2009
    The penultimate step is to install BizTalk Server 2009 Standard Edition. I had previously downloaded the redistributable prerequisites as a CAB file, so was able to make use of this when carrying out the installation. When selecting which components to install I selected: Server Runtime, BizTalk EDI/AS2 Runtime, WCF Adapter Runtime, Portal Components, Administrative Tools, WCF Administration Tools, Developer Tools and SDK, Enterprise SSO Administration Module, Enterprise SSO Master Secret Server, Business Rules Components, BAM Alert Provider, BAM Client and BAM Eventing. Once installation has completed, clear the "Launch BizTalk Server Configuration" check box and select Finish.

    Verify the Installation
    Before configuring BizTalk Server it is a good idea to check that BizTalk Server 2009 is installed and that SQL Server 2008 has started correctly. The easiest way to verify the BizTalk installation is to check Programs and Features in Control Panel. Check that SQL has started by looking in SQL Server Configuration Manager.

    Configure BizTalk Server 2009
    Finally we are ready to configure BizTalk Server 2009. To start this I opted for a custom configuration that allowed me to choose the settings to be used in more detail. For all databases I selected the local server and default database names. For all accounts I used a local account that had been created specifically for the BizTalk services. For all Windows groups I allowed the configuration wizard to create the default local groups. The configuration wizard then ran. Upon completion you will be presented with a screen detailing the success or failure of the configuration. If your configuration failed you will need to sort out the issues and try again (it is possible to save the configuration settings for later use if you want to - except passwords of course!). If you see lots of nice green ticks - congratulations, BizTalk Server 2009 on XP is now installed and configured ready for development.

    Read the article

  • How is technical debt best measured? What metric(s) are most useful?

    - by throp
    If I wanted to help a customer understand the degree of technical debt in his application, what would be the best metric to use? I've stumbled across Erik Doernenburg's code toxicity, and also Sonar's technical debt plugin, but was wondering what others exist. Ideally, I'd like to say "system A has a score of 100 whereas system B has a score of 50, so system A will most likely be more difficult to maintain than system B". Obviously, I understand that boiling down complex concepts like "technical debt" or "maintainability" into a single number might be misleading or inaccurate (in some cases); however, I need a simple way to convey to a customer (who is not hands-on in the code) roughly how much technical debt is built into their system (relative to other systems), with the goal of building a case for refactoring/unit tests/etc. Again, I'm looking for one single number/graph/visualization, and not a comprehensive list of violations (e.g. CheckStyle, PMD, etc.).

    Read the article

  • What can I do with dynamic typing that I can not do with static typing

    - by Justin984
    I've been using python for a few days now and I think I understand the difference between dynamic and static typing. What I don't understand is why it's useful. I keep hearing about its "flexibility" but it seems like it just moves a bunch of compile time checks to runtime, which means more unit tests. This seems like an awfully big tradeoff to make for small advantages like readability and "flexibility". Can someone provide me with a real world example where dynamic typing allows me to do something I can't do with static typing?

    Read the article

  • Addressing a variable in VB

    - by Jeff
    Why doesn't Visual Basic .NET have an address-of operator like C#? In C#, one can write: int i = 123; int* addr = &i; But VB has no equivalent counterpart. It seems like it should be important. UPDATE: Since there's some interest, I'm copying my response to Strilanc below. The case I ran into didn't necessitate pointers by any means, but I was trying to troubleshoot a unit test that was failing, and there was some confusion over whether or not an object being used at one point in the stack was the same object as an object several methods away.
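
    Worth noting that the underlying need here - confirming that two references denote the same object - doesn't require an address in .NET (Object.ReferenceEquals does it). The same idea expressed as a small Java sketch, since identity checks look alike across managed languages (names illustrative):

        // Value equality vs. reference identity; identityHashCode gives a
        // stable per-object token that answers "is this the same object?"
        // much as an address would.
        public class IdentityDemo {
            public static void main(String[] args) {
                String a = new String("test");
                String b = new String("test");
                System.out.println(a.equals(b)); // true: same value
                System.out.println(a == b);      // false: two distinct objects
                System.out.println(System.identityHashCode(a));
                System.out.println(System.identityHashCode(b)); // differs from a's
            }
        }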

    Read the article

  • BlueNES: A Bluetooth Connector for Classic NES Controllers

    - by Jason Fitzpatrick
    If you’re looking for a DIY way to hook up your classic Nintendo controllers for use in modern emulation programs, this hack allows you to use them without modifying the original casing or cables. Courtesy of Evan Dustin, we find this guide on hacking apart a broken NES unit (to get the basic parts like the port connectors) and then binding it all together with an Arduino board. Check out the video above to see it in action, and then hit up the link below to check out the notes on the YouTube video for additional information, including parts and code. BlueNES: Bluetooth NES Controller [via Hack A Day]

    Read the article

  • How to manage security cameras in Ubuntu?

    - by Josh
    I am setting up a server of sorts and chose Ubuntu for the OS, as my dad has it on a few computers. I am unimpressed with Windows or Mac due to all the add-ons and complexity when all I want is something simple. The system will have 3 purposes: storing my wife's photography work (she is a professional photographer), storing music for quick access on our entertainment system (the system will run through the TV in our living room, and thus through our surround sound), and serving as a DVR unit for a home security system I am going to put together. My question is: what software options are there on Ubuntu for a DVR with frame-by-frame playback? It does not need to be fancy, but of course a variety of options is a nice touch.

    Read the article

  • Customer Centricity: It's Not Easy, But Worth It

    - by tony.berk
    Defining customer centricity is relatively easy: focusing on the customer and their experiences and interactions with your company. Implementing a customer-centric strategy is not so easy. We've highlighted customers who have focused on their customers and experienced great success, including SJ, the Swedish rail operator, and Vopak, the world's largest provider of conditioned storage facilities for bulk liquids. In this interview with Stuart Lennie, President, Volvo IT, North America and VP, Volvo's Global Sales to Order Solutions Unit, we get the opportunity to learn from another company that is not just talking about the customer, but actually implementing the significant strategic shifts required to become customer centric. Volvo has developed a vision, a strategy and a methodology to keep existing customers by understanding what is important to them. To see other customer success stories, visit Siebel CRM Success. Click here to learn more about Oracle's CRM products.

    Read the article

  • Rendering skybox in first person shooter

    - by brainydexter
    I am trying to get a skybox rendered correctly in my first-person shooter game. I have the skybox cube rendering using GL_TEXTURE_CUBE_MAP, and I author the cube with extents of -1 and 1 along X, Y and Z. However, I can't wrap my head around the camera transformations that I need to apply to get it right. My render loop looks something like this:

      mp_Camera->ApplyTransform() :: takes the current player transformation, inverts it and pushes that on the modelview stack
      Draw game objects
      Draw skybox

    DrawSkybox does the following:

        glEnable(GL_TEXTURE_CUBE_MAP);
        glDepthMask(GL_FALSE);
        // draw the cube here with extents
        glDisable(GL_TEXTURE_CUBE_MAP);
        glDepthMask(GL_TRUE);

    Do I need to translate the skybox by the translation of the camera? (BTW, that didn't help either.) EDIT: I forgot to mention: it looks like a small cube with unit extents, and I can strafe in/out of the cube. Screenshot:
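
    The usual fix is to render the skybox with the camera's rotation but not its translation, so the unit cube always stays centred on the eye. A hedged sketch using LWJGL-style fixed-function calls (cameraPitch, cameraYaw and the cube-drawing helper are assumed names, not taken from the post):

        import static org.lwjgl.opengl.GL11.*;
        import static org.lwjgl.opengl.GL13.GL_TEXTURE_CUBE_MAP;

        public class SkyboxPass {
            float cameraPitch, cameraYaw; // set from the player camera each frame

            void drawSkybox() {
                glPushMatrix();
                glLoadIdentity();
                // Rotation only -- no translation, so the -1..1 cube always
                // surrounds the eye no matter where the player walks.
                glRotatef(cameraPitch, 1f, 0f, 0f);
                glRotatef(cameraYaw, 0f, 1f, 0f);

                glDepthMask(false); // the far "sky" must not write depth
                glEnable(GL_TEXTURE_CUBE_MAP);
                drawSkyboxCube();
                glDisable(GL_TEXTURE_CUBE_MAP);
                glDepthMask(true);
                glPopMatrix();
            }

            void drawSkyboxCube() { /* issue the unit cube's faces here */ }
        }

    Drawing the skybox first (or with depth writes off, as above) lets the rest of the scene render over it; strafing "out of the cube" stops once the camera translation no longer applies to it.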

    Read the article

  • Junior software developer - How to understand web applications in depth?

    - by nat_gr
    I am currently a junior developer in web applications, specifically ASP.NET MVC. My problem is that the senior C# developer in the company has no experience with this technology, and I am trying to learn without any guidance. I went through all the tutorials (e.g. the music store), Codeplex projects, and also read Pro ASP.NET MVC 4. However, most of the examples are about CRUD and e-commerce applications. What I don't understand is how dependency injection fits in web applications (I have realized it is not only used for facilitating unit testing), or when I should use a custom model binder, or how to model the business logic when there is already a database schema in place. I read the forum quite often, and it would be very helpful if some experienced developer could give me an insight into how to proceed. Do I need to read some books to understand the overall idea behind web applications? And what kind of application should I start building myself? I don't think it would be useful to create similar examples to the tutorials.
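
    On the dependency injection point specifically: in MVC web frameworks the container builds each controller and hands it its collaborators, which is what keeps controllers decoupled from data access and testable. A small illustrative sketch in Java (names invented; the shape is the same in ASP.NET MVC, where a dependency resolver constructs controllers):

        import java.util.List;

        interface ProductRepository {
            List<String> findAll();
        }

        class SqlProductRepository implements ProductRepository {
            public List<String> findAll() {
                return List.of("guitar", "amp"); // stand-in for a real query
            }
        }

        class ProductsController {
            private final ProductRepository repository;

            // The container supplies the dependency; a test supplies a fake.
            ProductsController(ProductRepository repository) {
                this.repository = repository;
            }

            String index() {
                return String.join(", ", repository.findAll());
            }
        }

        class CompositionRoot {
            public static void main(String[] args) {
                // Container wiring done by hand for the sketch.
                ProductsController controller =
                        new ProductsController(new SqlProductRepository());
                System.out.println(controller.index()); // guitar, amp
            }
        }

    The same seam that lets a unit test pass a fake repository also lets you swap the data layer (or wrap it with caching) without touching the controller.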

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices - Stage 4: Automated Deployment

    If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users” There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready! If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way.
    It’s also worth pointing out that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we’ll look at the following areas for your checklist: You and Your Team; Environments; The Deployment Process; Rollback and Recovery; Development Practices.

    You and Your Team
    It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as: 2008 to present, overall development costs reduced by 40%; number of programs under development increased by 140%; development costs per program down 78%; firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%). But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well.
    There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

    Actions: Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans. Get your boss involved too and make sure he/she is bought in to the investment.

    Environments
    Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are:

    What we want: Each developer with their own dedicated database environment.
    What we’ve got: A single shared “development” environment, used by everyone at once.

    What we want: An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine.
    What we’ve got: In fact if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question…

    What we want: A separate QA environment used explicitly for manual testing prior to release.
    What we’ve got: “We just test on the dev environments, or maybe pre-production.”

    What we want: A proper pre-production (or “staging”) box that matches production as closely as possible.
    What we’ve got: Hopefully a pre-production box of some sort. But does it match production closely!?

    What we want: A production environment reproducible from source control.
    What we’ve got: A production box which has drifted significantly from anything in source control.

    The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application.
    There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have: dedicated development databases; an Integration server used for testing continuous integration and running unit tests [NB: this is the point at which deployments are automatic, without human intervention; each deployment after this point is a one-click (but human) action]; QA, where QA engineers use a one-click deployment process to automatically* deploy chosen releases for testing; pre-production, the environment you use to test the production release process; and production.

    * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

    Actions: Get your environments set up and ready. Set access permissions appropriately. Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development).

    The Deployment Process
    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”:

    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring in to pre-prod.
    2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script.
    3. A user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar.
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped).
    5. If all working, run the script on production.*

    * This assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions.

    There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process keys in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend.
    At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

    Actions: Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” Repeat for earlier environments (QA and so on).

    Rollback and Recovery
    If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move in to a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles: immediately restore from backup; have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!); or have fallback environments – for example, using a blue-green deployment pattern. Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

    Actions: Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process.

    Development Practices
    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application.
    So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “branch by abstraction”. Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start.

    Actions: For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes). Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas and the rationale behind them early.

    Summary
    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly “getting the team ready for the changes that are coming” and “planning out your pipeline, environments, patterns and practices for development” – though there will be more detail, depending on where you’re coming from, and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Starcraft 2 - Third Person Custom Map

    - by Norla
    I would like to try my hand at creating a custom map in Starcraft 2 that has a third-person camera follow an individual unit. There are a few custom maps that exist with this feature already, so I do know this is possible. What I'm having trouble wrapping my head around are the features that are new to the SC2 map editor that didn't exist in the Warcraft 3 editor. For instance, to do a third-person map, do I need a custom mods file, or can everything be done in the map file? Regardless, is it worth using a mod file? What map settings do I NEED to edit/implement? Which are not necessary, but recommended?

    Read the article

  • Gallio and VS2010 code coverage

    - by andrewstopford
    Scott mentioned on Twitter a great post on using VS2010 code coverage with ASP.NET unit tests, with the following comment. So I figured I would work up a quick post on using Gallio with the code coverage features (and thus MbUnit, NUnit, etc.). Using Gallio with the VS2010 code coverage features is exactly the same as you would use MSTest: just enable the code coverage collector, select the assembly you want to profile (double-click the collector to do this), run your test, then right-click and select code coverage.

    Read the article

  • Need clarification concerning Windows Azure

    - by SnOrfus
    I basically need some confirmation and clarification concerning Windows Azure with respect to a Silverlight application using RIA Services. In a normal Silverlight app that uses RIA Services you have 2 projects, App and App.Web, where App is the default client-side Silverlight and App.Web is the server-side code where your RIA services go. If you create a Windows Azure app and add a WCF Web Services Role, you get App (the Azure project) and App.Services (the WCF Services project). In App.Services, you add your RIA DomainService(s). You would then add another project to this solution: the client-side Silverlight that accesses the RIA services in the App.Services project. You can then add the entity model to App.Services, or to another project referenced by App.Services (if that division is required for unit testing etc.), and connect that entity model to either a SQL Server db or a SQLAzure instance. Is this correct? If not, what is the general 'layout' for building an application with the following tiers: UI (Silverlight 4), Services (RIA Services), Entity/Domain (EF 4), Data (SQL Server)?

    Read the article

  • How should I log time spent on multiple tasks?

    - by xenoterracide
    In Joel's blog on evidence-based scheduling, he suggests making estimates based on the smallest unit of work and logging extra work back to the original task. The problem I'm now experiencing is that I'll have "create object A", with subtask "method A", which creates object B, and "test all of the above". I create tasks for each of these, which seems to result in ok-ish estimates (I need practice), but when I go to log work I find that I worked on 4 tasks at once, because I tweak method A, find a bug in the test and refactor object B, all while coding it. How should I go about logging this work? Should I say I spent, for example, 2 hours on each of the 4 tasks I worked on in the 8-hour day?

    Read the article

  • Deploying Data Mining Models using Model Export and Import, Part 2

    - by [email protected]
    In my last post, Deploying Data Mining Models using Model Export and Import, we explored using DBMS_DATA_MINING.EXPORT_MODEL and DBMS_DATA_MINING.IMPORT_MODEL to enable moving a model from one system to another. In this post, we'll look at two distributed scenarios that make use of this capability, and a tip for easily moving models from one machine to another using only Oracle Database, not an external file transport mechanism such as FTP.

    For the first scenario, consider a company with geographically distributed business units, each collecting and managing their data locally for the products they sell. Each business unit has in-house data analysts that build models to predict which products to recommend to customers in their space. A central telemarketing business unit also uses these models to score new customers locally using data collected over the phone. Since the models recommend different products, each customer is scored using each model. This is depicted in Figure 1.

    Figure 1: Target instance importing multiple remote models for local scoring

    In the second scenario, consider multiple hospitals that collect data on patients with certain types of cancer. The data collection is standardized, so each hospital collects the same patient demographic and other health / tumor data, along with the clinical diagnosis. Instead of each hospital building its own models, the data is pooled at a central data analysis lab where a predictive model is built. Once completed, the model is distributed to hospitals, clinics, and doctor offices, who can score patient data locally.

    Figure 2: Multiple target instances importing the same model from a source instance for local scoring

    Since this blog focuses on model export and import, we'll only discuss what is necessary to move a model from one database to another. Here, we use the package DBMS_FILE_TRANSFER, which can move files between Oracle databases. The script is fairly straightforward, but requires setting up a database link and directory objects. We saw how to create directory objects in the previous post. To create a database link to the source database from the target, we can use, for example:

        create database link SOURCE1_LINK connect to <schema> identified by <password> using 'SOURCE1';

    Note that 'SOURCE1' refers to the service name of the remote database entry in your tnsnames.ora file. From SQL*Plus, first connect to the remote database and export the model. Note that the model_file_name does not include the .dmp extension; this is because export_model appends "01" to this name. Next, connect to the local database, invoke DBMS_FILE_TRANSFER.GET_FILE and import the model. Note that "01" is eliminated in the target system file name.

        connect <source_schema>/<password>@SOURCE1_LINK;
        BEGIN
          DBMS_DATA_MINING.EXPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',
                                         'MY_SOURCE_DIR_OBJECT',
                                         'name =''MY_MINING_MODEL''');
        END;

        connect <target_schema>/<password>;
        BEGIN
          DBMS_FILE_TRANSFER.GET_FILE ('MY_SOURCE_DIR_OBJECT',
                                       'EXPORT_FILE_NAME' || '01.dmp',
                                       'SOURCE1_LINK',
                                       'MY_TARGET_DIR_OBJECT',
                                       'EXPORT_FILE_NAME' || '.dmp');
          DBMS_DATA_MINING.IMPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',
                                         'MY_TARGET_DIR_OBJECT');
        END;

    To clean up afterward, you may want to drop the exported .dmp file at the source and the transferred file at the target.
For example, utl_file.fremove('&directory_name', '&model_file_name' || '.dmp');

    Read the article

  • Should I reuse variables?

    - by IAdapter
    Should I reuse variables? I know that many best practices say you should not do it; however, a different developer later debugging the code, faced with 3 variables that look alike and differ only in where they are created, might be confused. Unit testing is a great example of this. However, I do know that best practices are mostly against reuse. For example, they say not to "override" method parameters. Best practices are even against nulling previous variables (in Java, Sonar raises a warning when you assign null to a variable, since you don't need to do that to help the garbage collector since Java 6; and you can't always control which warnings are turned off - most of the time the default is on).

    Read the article

  • RESIZE casper-rw

    - by Oldrifle
    Using TopoResize 0.7.1; the flash drive is a Transcend. I want to add all unused space to casper-rw. Disk = 3.72GB (2.68GB used, 1.0GB free); casper-rw = 1.95GB; casper = 681MB. I boot Ubuntu 11.10 64-bit and see that the size of HOME is about half of casper-rw. It is working from the flash drive OK, but I was not able to boot 11.10 64-bit using a USB 3.0 HDD. Ubuntu 11.10 32-bit on a USB 2.0 HDD works OK (currently multiboot with openSUSE). Fuduntu 2012 on a USB 3.0 HDD is VERY FAST. A fast USB 3.0 8GB unit is ready and partitioned; the second partition is labeled as casper-rw. I will try to install using the partitions as base and home. No swap, since I have 8GB RAM. Any suggestions? Thanks

    Read the article
