Search Results

Search found 3131 results on 126 pages for 'upper stage'.

Page 14/126 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • How can I get more low memory with the following setup?

    - by user539484
    Modules using memory below 1 MB:

        Name         Total       =   Conventional   +   Upper Memory
        --------  ---------------   ----------------   ----------------
        MSDOS      14 317   (14K)    14 317   (14K)          0    (0K)
        HIMEM       1 120    (1K)     1 120    (1K)          0    (0K)
        EMM386      3 120    (3K)     3 120    (3K)          0    (0K)
        OAKCDROM   36 064   (35K)    36 064   (35K)          0    (0K)
        POWER          80    (0K)        80    (0K)          0    (0K)
        NLSFUNC     2 784    (3K)     2 784    (3K)          0    (0K)
        COMMAND     2 928    (3K)     2 928    (3K)          0    (0K)
        MSCDEX     15 712   (15K)    15 712   (15K)          0    (0K)
        SMARTDRV   30 384   (30K)    13 984   (14K)     16 400   (16K)
        KEYB        6 752    (7K)     6 752    (7K)          0    (0K)
        MOUSE      17 296   (17K)    17 296   (17K)          0    (0K)
        DISPLAY     8 336    (8K)         0    (0K)      8 336    (8K)
        SETVER        512    (1K)         0    (0K)        512    (1K)
        DOSKEY      4 144    (4K)         0    (0K)      4 144    (4K)
        POWER       4 672    (5K)         0    (0K)      4 672    (5K)
        Free      552 944  (540K)   539 088  (526K)     13 856   (14K)

    Memory Summary:

        Type of Memory       Total   =      Used    +      Free
        ----------------  ----------    ----------    ----------
        Conventional         653 312       114 224       539 088
        Upper                 47 920        34 064        13 856
        Reserved                   0             0             0
        Extended (XMS)*   64 898 256     2 671 824    62 226 432
        ----------------  ----------    ----------    ----------
        Total memory      65 599 488     2 820 112    62 779 376

        Total under 1 MB     701 232       148 288       552 944

        Total Expanded (EMS)      33 947 648   (33 152K)
        Free Expanded (EMS)*      33 538 048   (32 752K)

        * EMM386 is using XMS memory to simulate EMS memory as needed.
          Free EMS memory may change as free XMS memory changes.

        Largest executable program size      538 976   (526K)
        Largest free upper memory block        7 488     (7K)
        MS-DOS is resident in the high memory area.

    I'm running MS-DOS 6.22 on VMware virtual hardware. This is the memory state after a MemMaker pass, so I'm looking for optimization beyond MemMaker. Note: the NLS drivers (DISPLAY, KEYB, NLSFUNC) are essential for me. Thanks to @mtone for the valuable reminder about MSCDEX /E, which gave me 16 KiB of low memory (see the diff)!
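    For reference, a sketch of the kind of CONFIG.SYS / AUTOEXEC.BAT tweaks typically tried beyond MemMaker. The driver paths, the /D: device name, and the I= include range are assumptions and depend on the (virtual) hardware; NOEMS only applies if nothing actually needs EMS:

        rem CONFIG.SYS -- load DOS high and make upper memory blocks available
        DOS=HIGH,UMB
        DEVICE=C:\DOS\HIMEM.SYS
        rem HIGHSCAN and explicit I= ranges can expose extra UMB space on virtual hardware;
        rem NOEMS frees the 64 KB EMS page frame (only safe if no program needs EMS)
        DEVICE=C:\DOS\EMM386.EXE NOEMS HIGHSCAN I=B000-B7FF
        DEVICEHIGH=C:\DOS\OAKCDROM.SYS /D:MSCD001
        DEVICEHIGH=C:\DOS\SETVER.EXE

        rem AUTOEXEC.BAT -- LH (LOADHIGH) pushes TSRs into upper memory when a block is free
        LH C:\DOS\MSCDEX.EXE /D:MSCD001 /E
        LH C:\DOS\SMARTDRV.EXE
        LH C:\DOS\DOSKEY

    Whether each DEVICEHIGH/LH actually lands in upper memory depends on the largest free UMB at that point in the boot order; in this MEM listing OAKCDROM (35 KB) is the largest conventional-memory resident, so it only moves high if EMM386 can expose a block that large.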

    Read the article

  • chkdsk, SeaTools, and "does not have enough space to replace bad clusters"

    - by Zian Choy
    When I tried to do a Windows Vista Complete PC Backup, I received an error message that blathered about bad sectors. Then, when I ran chkdsk /r on the destination drive, this is what I got:

        C:\Windows\system32>chkdsk /R E:
        The type of the file system is NTFS.
        Volume label is Desktop Backup.

        CHKDSK is verifying files (stage 1 of 5)...
        822016 file records processed.
        File verification completed.
        1 large file records processed.
        0 bad file records processed.
        0 EA records processed.
        0 reparse records processed.
        CHKDSK is verifying indexes (stage 2 of 5)...
        848938 index entries processed.
        Index verification completed.
        0 unindexed files processed.
        CHKDSK is verifying security descriptors (stage 3 of 5)...
        822016 security descriptors processed.
        Security descriptor verification completed.
        13461 data files processed.
        CHKDSK is verifying file data (stage 4 of 5)...
        The disk does not have enough space to replace bad clusters detected in file 239649 of name .
        The disk does not have enough space to replace bad clusters detected in file 239650 of name .
        The disk does not have enough space to replace bad clusters detected in file 239651 of name .
        An unspecified error occurred. (822000 files processed)

    Yet, when I ran the SeaTools short and long generic tests on the Seagate disk, I didn't receive any errors. I know that I could reformat the disk and try running chkdsk /r again, but I'd prefer to avoid waiting 4 hours in the hope that the problem was magically fixed. On the other hand, if I RMA the drive to Seagate, I have no SeaTools error number to use, and they may claim that the drive is just fine. What should I try next? Side frustration: there is plenty of free hard drive space. The E: partition has 182 GB free.

    Read the article

  • How to display a different value with ComboBoxTableCell?

    - by Philippe Jean
    I am trying to use ComboBoxTableCell, without success. The cell content displays the right value for the object's attribute, but when the combo box drops down, all items are displayed with the object's toString() method and not the attribute. I tried overriding updateItem of ComboBoxTableCell and providing a StringConverter, but nothing works. Do you have any ideas for customizing how the combo box list is displayed in a table cell? I put a short example below so you can see the problem quickly. Run the app and click in the cell, and you will see the combo box showing the toString() value of each object.

        package javafx2;

        import javafx.application.Application;
        import javafx.beans.property.adapter.JavaBeanObjectPropertyBuilder;
        import javafx.beans.value.ObservableValue;
        import javafx.collections.FXCollections;
        import javafx.collections.ObservableList;
        import javafx.scene.Scene;
        import javafx.scene.control.TableCell;
        import javafx.scene.control.TableColumn;
        import javafx.scene.control.TableColumn.CellDataFeatures;
        import javafx.scene.control.TableView;
        import javafx.scene.control.cell.ComboBoxTableCell;
        import javafx.stage.Stage;
        import javafx.util.Callback;
        import javafx.util.StringConverter;

        public class ComboBoxTableCellTest extends Application {

            public class Product {
                private String name;
                public Product(String name) { this.name = name; }
                public String getName() { return name; }
                public void setName(String name) { this.name = name; }
            }

            public class Command {
                private Integer quantite;
                private Product product;
                public Command(Product product, Integer quantite) {
                    this.product = product;
                    this.quantite = quantite;
                }
                public Integer getQuantite() { return quantite; }
                public void setQuantite(Integer quantite) { this.quantite = quantite; }
                public Product getProduct() { return product; }
                public void setProduct(Product product) { this.product = product; }
            }

            public static void main(String[] args) { launch(args); }

            @Override
            public void start(Stage stage) throws Exception {
                Product p1 = new Product("Product 1");
                Product p2 = new Product("Product 2");
                final ObservableList<Product> products = FXCollections.observableArrayList(p1, p2);
                ObservableList<Command> commands = FXCollections.observableArrayList(new Command(p1, 20));

                TableView<Command> tv = new TableView<Command>();
                tv.setItems(commands);

                TableColumn<Command, Product> tc = new TableColumn<Command, Product>("Product");
                tc.setMinWidth(140);
                tc.setCellValueFactory(new Callback<TableColumn.CellDataFeatures<Command, Product>, ObservableValue<Product>>() {
                    @Override
                    public ObservableValue<Product> call(CellDataFeatures<Command, Product> cdf) {
                        try {
                            JavaBeanObjectPropertyBuilder<Product> jbdpb = JavaBeanObjectPropertyBuilder.create();
                            jbdpb.bean(cdf.getValue());
                            jbdpb.name("product");
                            return (ObservableValue) jbdpb.build();
                        } catch (NoSuchMethodException e) {
                            System.err.println(e.getMessage());
                        }
                        return null;
                    }
                });

                final StringConverter<Product> converter = new StringConverter<ComboBoxTableCellTest.Product>() {
                    @Override
                    public String toString(Product p) { return p.getName(); }
                    @Override
                    public Product fromString(String s) {
                        // TODO Auto-generated method stub
                        return null;
                    }
                };

                tc.setCellFactory(new Callback<TableColumn<Command, Product>, TableCell<Command, Product>>() {
                    @Override
                    public TableCell<Command, Product> call(TableColumn<Command, Product> tc) {
                        return new ComboBoxTableCell<Command, Product>(converter, products) {
                            @Override
                            public void updateItem(Product product, boolean empty) {
                                super.updateItem(product, empty);
                                if (product != null) {
                                    setText(product.getName());
                                }
                            }
                        };
                    }
                });

                tv.getColumns().add(tc);
                tv.setEditable(true);

                Scene scene = new Scene(tv, 140, 200);
                stage.setScene(scene);
                stage.show();
            }
        }

    Read the article

  • Loaded AS2 SWF into AS3 SWF as AVM1Movie doesn't run any ActionScript in the AS2 SWF

    - by steve
    I appreciate that loading AS2 into AS3 is never going to be fun, but unfortunately I have to on this one. I'm using the Loader class in AS3 to load an external AS2 SWF onto the stage as an AVM1Movie object. Anything placed on the stage in the AS2 FLA displays fine, but no ActionScript runs at all. The loaded AS2 SWF has one layer, one frame, and a few images in the library, but nothing heavy. I've tried stripping everything out of the script other than a simple call to change the text of a dynamic text field on the stage, but still nothing. I have a listener in AS3 waiting for Event.INIT rather than Event.COMPLETE, but neither works. Am I missing something? Has anyone else experienced anything similar? It's like it loads but doesn't run.

    Read the article

  • Flash blocking JavaScript events

    - by jedierikb
    This is an edit of the original post, now that I better understand the problem. Now with source code! In IE, if the body (or another HTML div) has focus, and you keypress and click on Flash at the same time, then release, a keyup event is never fired. It is not fired in JavaScript or in Flash. Where is this keyup event? This is the order of event firing you get instead:

        javascriptKeyEvent:bodyDn ** currentFocuedElement: body
        javascriptKeyEvent:docDn ** currentFocuedElement: body
        actionScriptEvent::activate ** currentFocuedElement: [object]
        actionScriptEvent::mouseDown ** currentFocuedElement: [object]
        actionScriptEvent::mouseUp ** currentFocuedElement: [object]

    Subsequent keyup and keydown events are captured by Flash, but that initial keyup is never fired anywhere. And I need that keyup! Here is the HTML/JavaScript:

        <html>
        <head>
        <script type="text/javascript" src="p.js"></script>
        <script type="text/javascript" src="swfobject.js"></script>
        <script>
        function ic( evt ) {
            Event.observe( $("f1"), 'keyup', onKeyHandler.bindAsEventListener( this, "f1Up" ) );
            Event.observe( $("f2"), 'keyup', onKeyHandler.bindAsEventListener( this, "f2Up" ) );
            Event.observe( document, 'keyup', onKeyHandler.bindAsEventListener( this, "docUp" ) );
            Event.observe( $("body"), 'keyup', onKeyHandler.bindAsEventListener( this, "bodyUp" ) );
            Event.observe( window, 'keyup', onKeyHandler.bindAsEventListener( this, "windowUp" ) );
            Event.observe( $("f1"), 'keydown', onKeyHandler.bindAsEventListener( this, "f1Dn" ) );
            Event.observe( $("f2"), 'keydown', onKeyHandler.bindAsEventListener( this, "f2Dn" ) );
            Event.observe( document, 'keydown', onKeyHandler.bindAsEventListener( this, "docDn" ) );
            Event.observe( $("body"), 'keydown', onKeyHandler.bindAsEventListener( this, "bodyDn" ) );
            Event.observe( window, 'keydown', onKeyHandler.bindAsEventListener( this, "windowDn" ) );
            Event.observe( "clr", "mousedown", clearHandler.bindAsEventListener( this ) );
            swfobject.embedSWF( "tmp.swf", "f2", "100%", "20px", "9.0.0.0", null, {}, {}, {} );
        }
        function clearHandler( evt ) { clear( ); }
        function clear( ) { $("log").innerHTML = ""; }
        function onKeyHandler( evt, dn ) { logIt( "javascriptKeyEvent:"+dn ); }
        function AS2JS( wha ) { logIt( "actionScriptEvent::" + wha ); }
        function logIt( k ) {
            var id = document.activeElement;
            if (id.identify) { id = id.identify(); }
            $("log").innerHTML = k + " ** focuedElement: " + id + "<br>" + $("log").innerHTML;
        }
        Event.observe( window, 'load', ic.bindAsEventListener(this) );
        </script>
        </head>
        <body id="body">
        <div id="f1"><div id="f2" style="width:100%;height:20px; position:absolute; bottom:0px;"></div></div>
        <div id="clr" style="color:blue;">clear</div>
        <div id="log" style="overflow:auto;height:200px;width:500px;"></div>
        </body>
        </html>

    Here is the AS3 code:

        package {
            import flash.display.Sprite;
            import flash.display.StageAlign;
            import flash.display.StageScaleMode;
            import flash.events.KeyboardEvent;
            import flash.events.MouseEvent;
            import flash.events.Event;
            import flash.external.ExternalInterface;

            public class tmpa extends Sprite {
                public function tmpa( ):void {
                    extInt("flashInit");
                    stage.align = StageAlign.TOP_LEFT;
                    stage.scaleMode = StageScaleMode.NO_SCALE;
                    stage.addEventListener( KeyboardEvent.KEY_DOWN, keyDnCb, false, 0, true );
                    stage.addEventListener( KeyboardEvent.KEY_UP, keyUpCb, false, 0, true );
                    stage.addEventListener( MouseEvent.MOUSE_DOWN, mDownCb, false, 0, true );
                    stage.addEventListener( MouseEvent.MOUSE_UP, mUpCb, false, 0, true );
                    addEventListener( Event.ACTIVATE, activateCb, false, 0, true );
                    addEventListener( Event.DEACTIVATE, dectivateCb, false, 0, true );
                }
                private function activateCb( evt:Event ):void { extInt("activate"); }
                private function dectivateCb( evt:Event ):void { extInt("deactivate"); }
                private function mDownCb( evt:MouseEvent ):void { extInt("mouseDown"); }
                private function mUpCb( evt:MouseEvent ):void { extInt("mouseUp"); }
                private function keyDnCb( evt:KeyboardEvent ):void { extInt( "keyDn" ); }
                private function keyUpCb( evt:KeyboardEvent ):void { extInt( "keyUp" ); }
                private function extInt( wha:String ):void {
                    try {
                        ExternalInterface.call( "AS2JS", wha );
                    } catch (ex:Error) {
                        trace('ex: ' + ex);
                    }
                }
            }
        }

    Read the article

  • How to use custom masks in MaskedEditExtender?

    - by bastos.sergio
    Hello, I need to create a textbox which accepts 3 characters and must follow this format:

        1st char - only numbers
        2nd char - only upper case letters
        3rd char - numbers and upper case letters (except zero)

    This is what I have so far, but it's not working:

        <ajaxToolkit:MaskedEditExtender ID="MaskedEditExtender1" runat="server"
            Mask="9L9" TargetControlID="txtNUIPC3" PromptCharacter="">
        </ajaxToolkit:MaskedEditExtender>

    Read the article

  • integer Max value constants in SQL Server T-SQL?

    - by AaronLS
    Are there any constants in T-SQL, like there are in some other languages, that provide the max and min value ranges of data types such as int? I have a code table where each row has an upper and lower range column, and I need an entry that represents a range where the upper range is the maximum value an int can hold (sort of like a hackish infinity). I would prefer not to hard-code it and instead use something like SET UpperRange = int.Max
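    For what it's worth, T-SQL has no built-in equivalent of int.Max; the usual workaround is the literal bound of the 4-byte int type. A minimal sketch (the table and column names are assumptions):

        DECLARE @IntMax int;
        SET @IntMax = 2147483647;   -- largest value a T-SQL int can hold (2^31 - 1)

        UPDATE CodeTable
        SET UpperRange = @IntMax
        WHERE RangeName = 'Open-ended';

    Wrapping the literal in a scalar function or a one-row constants table keeps the "magic number" in a single place if it bothers you.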

    Read the article

  • Case-insensitive find_or_create_by_whatever

    - by Horace Loeb
    I want to be able to do Artist.case_insensitive_find_or_create_by_name(artist_name)[1] (and have it work on both SQLite and PostgreSQL). What's the best way to accomplish this? Right now I'm just adding a method directly to the Artist class (kind of ugly, especially if I want this functionality in another class, but whatever):

        def self.case_insensitive_find_or_create_by_name(name)
          first(:conditions => ['UPPER(name) = UPPER(?)', name]) || create(:name => name)
        end

    [1]: Well, ideally it would be Artist.find_or_create_by_name(artist_name, :case_sensitive => false), but this seems much harder to implement.

    Read the article

  • strange nhibernate exception? "(0xc0000005 at address 5A17BF2A): likely culprit is 'PARSE'."

    - by nRk
    Hi, I am getting a strange exception in my Windows service application. It was working fine for a long time and suddenly gave this error:

        2010-05-11 07:00:03,154 ERROR [0 ] [NHibernate.Cfg.Configuration.LogAndThrow] - Could not compile the mapping document: xxxx.hbm.xml
        NHibernate.MappingException: Could not compile the mapping document: xxxx.hbm.xml ---> System.InvalidOperationException: Unable to generate a temporary class (result=1).
        error CS0001: Internal compiler error (0x80004005)
        error CS0001: Internal compiler error (0xc0000017)
        error CS0583: Internal Compiler Error (0xc0000005 at address 5A17BF2A): likely culprit is 'PARSE'.
        error CS0586: Internal Compiler Error: stage 'PARSE'
        error CS0587: Internal Compiler Error: stage 'PARSE'
        error CS0587: Internal Compiler Error: stage 'BEGIN'

    Can anyone help me understand why this error occurred and how to solve it? Thanks, nRk

    Read the article

  • Need help adding a navigationItem

    - by RAGOpoR
    According to my picture, the upper picture is the original back button created automatically when the navigation controller pushes a view. If I try to create one like this myself, it appears like the bottom picture. How can I programmatically create a Back button like the upper picture? Here is my code to create the Done button:

        self.navigationItem.leftBarButtonItem = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemDone target:self action:@selector(backtohome)];

    Read the article

  • Is there a way to catch a MOUSE_UP Event over a static textfield in Flash?

    - by Jovica Aleksic
    When the user presses the mouse and releases it over a static textfield with selectable text, no MOUSE_UP event is fired, not on the stage and not anywhere else. I experienced this when using a scrollbar class on a movieclip with a nested static textfield. When the user drags the scroll handle and releases the mouse over the textfield, the dragging/scrolling is stuck. To test this, create a new AS3 FLA file, place a static textfield somewhere, and put in some text. Make sure the selectable property is checked in the properties panel. Add this script to the timeline:

        import flash.events.*

        function down(event:Event):void { trace('down'); }
        function up(event:Event):void { trace('up'); }

        stage.addEventListener(MouseEvent.MOUSE_DOWN, down)
        stage.addEventListener(MouseEvent.MOUSE_UP, up)

    Now test the movie and click the mouse. You will notice that trace('up') does not occur when you release the mouse over the textfield.

    Read the article

  • Kubuntu - Can't move/max/min windows [closed]

    - by GregH
    All of a sudden, it seems that whenever I open a window on my Kubuntu (9.10) system, the window docks in the upper left corner and can't be moved. There is no border on the windows and no min/max/close buttons in the upper right corner. I tried opening a terminal window, but it seems I can't type in it. Any ideas what might be causing this?

    Read the article

  • Setting time to 23:59:59

    - by Mike Wills
    I need to compare a date range and am missing rows whose date equals the upper comparison date but whose time is later than midnight. Is there a way to set the upper comparison's time to 23:59:59?
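    A common alternative, sketched here in T-SQL (the table and column names, and the SQL dialect, are assumptions): instead of forcing the upper bound's time to 23:59:59, compare against the start of the next day, which also keeps rows with fractional-second times such as 23:59:59.997.

        SELECT *
        FROM Orders
        WHERE OrderDate >= @LowerDate
          AND OrderDate <  DATEADD(day, 1, @UpperDate);   -- exclusive upper bound: anything before the next midnight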

    Read the article

  • How do I make all the finders on the model ignore case?

    - by Glex
    I have a model with several attributes, among them title and artist. The case of title and artist should be ignored in all the Active Record finders. Basically, if title or artist are present in the :conditions (or dynamically i.e. find_all_by_artist), then the WHERE artist = :artist should become WHERE UPPER(artist) = UPPER(:artist) or something along these lines. Is there a way of doing it with Rails?

    Read the article

  • Query for Joining Two Tables With Possible Multiple Mapping

    - by Dharmendra Mohapatra
    First_table

        srno  wono  Actual_Start_Date  Actual_End_Date
        1     31    2012-06-02         2012-06-05
        2     32    2012-06-05         2012-06-22
        3     33    2012-06-11         2012-06-23
        4     34    2012-06-23         2012-06-30
        5     A-2   2012-06-24         2012-06-25
        6     BU    2012-06-24         2012-06-26
        7     40    2012-06-25         2012-06-27

    second_table

        srno  wono  Base_start_date  Base_end_date  uploadhistoryid
        1     31    2012-06-05       2012-06-05     1
        2     32    2012-06-11       2012-06-12     2
        3     32    2012-06-15       2012-06-17     3
        4     32    2012-06-18       2012-06-20     4
        5     33    2012-06-22       2012-06-25     5
        5     33    2012-06-23       2012-06-25     5

    Result required. This is my current routine:

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[Reports_Subanalysis]
        (
            @WONo VARCHAR(20)
        )
        AS
        BEGIN
            SELECT 'SAT' AS stage,
                   B.Base_start_date AS start_date,
                   B.Base_end_date AS end_date,
                   f.Actual_Start_Date AS Actual_Start_Date,
                   f.Actual_end_Date AS Actual_End_Date
            FROM First_table f, second_table B
            WHERE f.wono = B.wono
              AND B.uploadhistoryid IN (SELECT MIN(uploadhistoryid) FROM second_table C WHERE B.wono = C.wono)
              AND B.wono = @WONo
        END

    When I pass '32', the result is:

        stage  start_date  end_date    Actual_Start_Date  Actual_End_Date
        SAT    2012-06-11  2012-06-12  2012-06-05         2012-06-05

    How can I get a result like this when I pass a non-matching value like 'BU'?

        stage  start_date  end_date  Actual_Start_Date  Actual_End_Date
        SAT    NULL        NULL      2012-06-24         2012-06-26

    What modification do I need in my routine?

    Read the article

  • Access WindowedApplication from package class.

    - by Senling
    Hi, I'm developing an AIR application where I need to access a WindowedApplication's function from a class in a package. This is the main application (partial code):

        import mx.events.CloseEvent;
        import messages.MessageWindow;

        public function undock():void {
            stage.nativeWindow.visible = true;
            stage.nativeWindow.orderToFront();
            // Clearing the bitmaps array also clears the application icon from the systray
            NativeApplication.nativeApplication.icon.bitmaps = [];
        }

    Package (partial code):

        package messages {
            public class MessageWindow extends NativeWindow {
                public function MessageWindow():void {
                    stage.addEventListener(MouseEvent.MOUSE_DOWN, onClick);
                }
                private function onClick(event:MouseEvent):void {
                    // ***** Need to call the undock method from here. *****
                }
            }
        }

    Is it possible to call it this way, or can you suggest another solution? Thanks in advance, Senling.
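    One generic way to wire this up, shown as a sketch rather than the poster's actual code: hand the window a reference to the function it needs, so the package class never reaches back into the WindowedApplication. The constructor parameter and the NativeWindowInitOptions call are assumptions here.

        package messages {
            import flash.display.NativeWindow;
            import flash.display.NativeWindowInitOptions;
            import flash.events.MouseEvent;

            public class MessageWindow extends NativeWindow {
                private var _undock:Function;   // callback supplied by the main application

                public function MessageWindow(undockCallback:Function) {
                    super(new NativeWindowInitOptions());
                    _undock = undockCallback;
                    stage.addEventListener(MouseEvent.MOUSE_DOWN, onClick);
                }

                private function onClick(event:MouseEvent):void {
                    if (_undock != null) {
                        _undock();   // calls back into the WindowedApplication's undock()
                    }
                }
            }
        }

    The main application would then create the window with new MessageWindow(undock). Dispatching a custom event from the window and listening for it in the application works just as well and keeps the coupling even looser.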

    Read the article

  • How to access a different movieclip within Flash in AS3

    - by Pieter888
    I've been trying to learn ActionScript 3 for the past few weeks, making tiny interactive games to learn the basics. I stumble upon a problem every now and then, but most of the time Google helps me out. This problem has me stuck, though, so please help. The main stage contains two objects (movieclips): the player and a wall. The player has its own code, so when I drag the player object in, I don't have to write any code on the main stage to be able to move the player. This all worked pretty well, and I now wanted to add the wall so the player actually has something to bounce into. Now here is the problem: I want to check if the player touches the wall. I've done this before, but that was when I used the main stage as my coding playground instead of putting the code in movieclips. How can I check if the player hits the wall from within the movement code of the player object?
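    A minimal sketch of the usual approach (the instance name "wall" and the speedX variable are assumptions): from inside the player's code, reach the sibling clip through the parent container and use hitTestObject.

        // inside the player movieclip's movement code
        var wall:MovieClip = this.parent.getChildByName("wall") as MovieClip;

        this.x += speedX;                          // apply the movement first
        if (wall != null && this.hitTestObject(wall)) {
            this.x -= speedX;                      // hit the wall, so undo the step
        }

    Passing the wall reference into the player (or having the main stage do the collision check) avoids the getChildByName lookup, but the pattern above keeps the player self-contained as described in the question.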

    Read the article

  • How do you get the pageYOffset from the bottom of the window and not the top (JavaScript)?

    - by user782860
    I need to get the pageYOffset of the bottom of the viewable area, whether the user is zoomed in on the webpage or not, on Mobile Safari (iPhone). However, pageYOffset returns the offset from the upper left corner of the window. Spec: "the pageYOffset property returns the pixels the current document has been scrolled from the upper left corner of the window". How do you get the page Y offset from the bottom of the viewable area (zoomed in or not) on the iPhone via JavaScript?
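    A small sketch of the usual calculation: the bottom of the visible area is the scroll offset of its top plus the height of the visible viewport. On Mobile Safari, window.innerHeight reflects the zoomed (visual) viewport, so the value follows the zoom level.

        function pageBottomOffset() {
            // top of the visible area + its height = document offset of its bottom edge
            return window.pageYOffset + window.innerHeight;
        }

    Comparing this value against document.body.scrollHeight tells you how far the visible area is from the bottom of the page, if that is the end goal.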

    Read the article

  • git destroyed my changes

    - by mare
    I made a commit in my repository a week ago but never actually pushed it to the remote at GitHub, which I did today. However, in the time since that commit I made many changes to the source. But only the initial commit was pushed to the remote, and while doing it, my local files were also overwritten. What can I do to get my current files back? For a better understanding, this is what I've done:

    1. Created a new VS project and created a new git repository in it,
    2. Performed an initial scan, stage and commit, but without adding a remote and performing a push,
    3. Worked on the files for a week,
    4. (Today) Forgot to perform a rescan, a new stage and commit, and just created a new GitHub repository and performed this:

        git remote add origin [email protected]:myaccount/webshop.git
        git push origin master

    Now the files in the GitHub repository are the ones from the initial commit, and those were also copied over my current files, so I'm in the initial commit stage now locally too, which is awful. Help appreciated.

    Read the article

  • Installation Won't Finish

    - by Joey G
    I installed Ubuntu 12.10 (32-bit) on my Acer Aspire One notebook and replaced the Windows 8 Consumer Preview. Everything went fine, but right before the installation finished, it got stuck. The loading bar at the bottom is full and it says "Copying installation logs," but my mouse won't move and it's been at this point for almost an hour. Also, the mouse cursor is showing the loading spinner, so I know my computer didn't freeze. Should I just restart now? I'm not sure if it's at the last stage, but it seems like it is, and this has taken longer than the rest of the installation put together. EDIT: I put my computer in sleep mode for a minute and now I can move the mouse again. When I click the "Copying..." part, it says "Activation (eth1) Stage 4 of 5 complete", but "5 of 5" (which I assume comes next) isn't starting.

    Read the article

  • LibGDX tutorial help Scene2D

    - by BluFire
    I'm having trouble understanding this tutorial. It explains the importance of the classes, but it doesn't show an outline of the project files so far. What I got from the tutorial was that there is a stage and there are actors: the stage would be the static parts of the game, while the actors are the ones moving. After that I got confused by the drawing method. I tried modifying it so I could draw a shape, but it wouldn't work. How, if possible, would I create sprites using LibGDX's Scene2D?
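    A minimal Scene2D sketch for orientation (the asset name "player.png" is an assumption): the Stage is the container that updates and draws Actors, and an Image is the simplest Actor that wraps a texture.

        public class MyGame extends ApplicationAdapter {
            private Stage stage;
            private Image player;

            @Override
            public void create() {
                stage = new Stage();
                player = new Image(new Texture("player.png"));   // an Actor backed by a texture
                player.setPosition(100, 100);
                stage.addActor(player);
            }

            @Override
            public void render() {
                Gdx.gl.glClearColor(0, 0, 0, 1);
                Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
                stage.act(Gdx.graphics.getDeltaTime());   // updates every actor (and any Actions attached to it)
                stage.draw();
            }

            @Override
            public void dispose() {
                stage.dispose();
            }
        }

    Drawing a raw Texture with a SpriteBatch works too; the point of Scene2D is that the Stage takes over updating, drawing and input routing for every Actor you add to it.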

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices
    STAGE 4: AUTOMATED DEPLOYMENT

    If you've been fortunate enough to get to the stage where you've implemented some sort of continuous integration process for your database updates, then hopefully you're seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it's going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.

    Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you've managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server.

    But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn't going to wipe our 4 TB of production data with a single DROP TABLE. But how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There's a significant gap between your latest version being tested, and it being easily releasable.

    Just a quick note on terminology – there's a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: "Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users." There's another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app).

    So, hopefully you're convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or "release management") process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can't I just install one of the many release management tools available and hey presto, I'm ready? If only it were that simple. Below I list some of the areas that it's worth spending a little time on, where a little planning and prep could go a long way.
It’s also worth pointing out, that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we’ll look at the following areas for your checklist: You and Your Team Environments The Deployment Process Rollback and Recovery Development Practices You and Your Team It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as: -2008 to present: overall development costs reduced by 40% -Number of programs under development increased by 140% -Development costs per program down 78% -Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40% But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well. 
There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress. Actions: Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans, Get your boss involved too and make sure he/she is bought in to the investment. Environments Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are: What we want What we’ve got Each developer with their own dedicated database environment A single shared “development” environment, used by everyone at once An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine In fact if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question… Separate QA environment used explicitly for manual testing prior to release “We just test on the dev environments, or maybe pre-production” A proper pre-production (or “staging”) box that matches production as closely as possible Hopefully a pre-production box of some sort. But does it match production closely!? A production environment reproducible from source control A production box which has drifted significantly from anything in source control The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application. 
There’s a world if difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have: Dedicated development databases, An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action], QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing, Pre-production. The environment you use to test the production release process, Production. * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single-click from a user. Actions: Get your environments set up and ready, Set access permissions appropriately, Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development). The Deployment Process As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?” Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring in to pre-prod, Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script, User (generally, the DBA), reviews the script. This often involves manually checking updates against a spreadsheet or similar, Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped), If all working, run the script on production.* * this assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions. There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process keys in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend. 
At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. One he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommended sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes. Actions: Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” Repeat for earlier environments (QA and so on). Rollback and Recovery If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move in to a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles, things like: Immediately restore from backup, Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!) Have fallback environments – for example, using a blue-green deployment pattern. Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups, is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism. Actions: Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process. Development Practices This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application. 
So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here, by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slidedeck, from Niek Bartholomeus explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start. Actions: For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes). Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so good to introduce these ideas and the rationale behind them early.   Summary The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning our your pipeline, environments, patterns and practices for development”, though there will be more detail, depending on where you’re coming from – and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • How to Use the Signature Editor in Outlook 2013

    - by Lori Kaufman
    The Signature Editor in Outlook 2013 allows you to create a custom signature from text, graphics, or business cards. We will show you how to use the various features of the Signature Editor to customize your signatures. To open the Signature Editor, click the File tab and select Options on the left side of the Account Information screen. Then, click Mail on the left side of the Options dialog box and click the Signatures button. For more details, refer to one of the articles mentioned above. Changing the font for your signature is pretty self-explanatory. Select the text for which you want to change the font and select the desired font from the drop-down list. You can also set the justification (left, center, right) for each line of text separately. The drop-down list that reads Automatic by default allows you to change the color of the selected text. Click OK to accept your changes and close the Signatures and Stationery dialog box. To see your signature in an email, click Mail on the Navigation Bar. Click New Email on the Home tab. The Message window displays and your default signature is inserted into the body of the email. NOTE: You shouldn’t use fonts that are not common in your signatures. In order for the recipient to see your signature as you intended, the font you choose also needs to be installed on the recipient’s computer. If the font is not installed, the recipient would see a different font, the wrong characters, or even placeholder characters, which are empty square boxes. Close the Message window using the File tab or the X button in the upper, right corner of the Message window. You can save it as a draft if you want, but it’s not necessary. If you decide to use a font that is not common, a better way to do so would be to create a signature as an image, or logo. Create your image or logo in an image editing program making it the exact size you want to use in your signature. Save the image in a file size as small as possible. The .jpg format works well for pictures, the .png format works well for detailed graphics, and the .gif format works well for simple graphics. The .gif format generally produces the smallest files. To insert an image in your signature, open the Signatures and Stationery dialog box again. Either delete the text currently in the editor, if any, or create a new signature. Then, click the image button on the editor’s toolbar. On the Insert Picture dialog box, navigate to the location of your image, select the file, and click Insert. If you want to insert an image from the web, you must enter the full URL for the image in the File name edit box (instead of the local image filename). For example, http://www.somedomain.com/images/signaturepic.gif. If you want to link to the image at the specified URL, you must also select Link to File from the Insert drop-down list to maintain the URL reference. The image is inserted into the Edit signature box. Click OK to accept your changes and close the Signatures and Stationery dialog box. Create a new email message again. You’ll notice the image you inserted into the signature displays in the body of the message. Close the Message window using the File tab or the X button in the upper, right corner of the Message window. You may want to put a link to a webpage or an email link in your signature. To do this, open the Signatures and Stationery dialog box again. Enter the text to display for the link, highlight the text, and click the Hyperlink button on the editor’s toolbar. 
On the Insert Hyperlink dialog box, select the type of link from the list on the left and enter the webpage, email, or other type of address in the Address edit box. You can change the text that will display in the signature for the link in the Text to display edit box. Click OK to accept your changes and close the dialog box. The link displays in the editor with the default blue, underlined text. Click OK to accept your changes and close the Signatures and Stationery dialog box. Here’s an example of an email message with a link in the signature. Close the Message window using the File tab or the X button in the upper, right corner of the Message window. You can also insert your contact information into your signature as a Business Card. To do so, click Business Card on the editor’s toolbar. On the Insert Business Card dialog box, select the contact you want to insert as a Business Card. Select a size for the Business Card image from the Size drop-down list. Click OK. The Business Card image displays in the Signature Editor. Click OK to accept your changes and close the Signatures and Stationery dialog box. When you insert a Business Card into your signature, the Business Card image displays in the body of the email message and a .vcf file containing your contact information is attached to the email. This .vcf file can be imported into programs like Outlook that support this format. Close the Message window using the File tab or the X button in the upper, right corner of the Message window. You can also insert your Business Card into your signature without the image or without the .vcf file attached. If you want to provide recipients your contact info in a .vcf file, but don’t want to attach it to every email, you can upload the .vcf file to a location on the internet and add a link to the file, such as “Get my vCard,” in your signature. NOTE: If you want to edit your business card, such as applying a different template to it, you must select a different View other than People for your Contacts folder so you can open the full contact editing window.     

    Read the article

  • Should devs, testers and business users have one unified test script?

    - by Carlos Jaime C. De Leon
    In development, I would normally have my own test scripts that document the data, scenarios and execution steps that I plan to test; this is my dev test plan. When the functionality has been deployed to Test, the testers test it using their own test script that they wrote. In UAT, the business user then tests using their own test plan. In retrospect, it looks like this provides better coverage, with dev tests having a mix of black and white box testing, while testers and business users focus on black box testing. But on the other hand, this brings up distinct test cases that are only executed in one stage (i.e. some cases which the testers thought of are only executed in the Test stage), and it looks like the dev missed them, which makes them a finding/bug. Is it worth consolidating the test scripts from the start, thus using one unified test script, or is it a bit difficult to do this upfront?

    Read the article

  • Flash framerate reliability

    - by Tim Cooper
    I am working in Flash and a few things have been brought to my attention. Below is some code I have some questions about:

        addEventListener(Event.ENTER_FRAME, function(e:Event):void {
            if (KEY_RIGHT) {
                // Move character right
            }
            // Etc.
        });

        stage.addEventListener(KeyboardEvent.KEY_DOWN, function(e:KeyboardEvent):void {
            // Report key which is down
        });

        stage.addEventListener(KeyboardEvent.KEY_UP, function(e:KeyboardEvent):void {
            // Report key which is up
        });

    I have the project configured for a framerate of 60 FPS. The two questions I have about this are:

    1. What happens when it is unable to call that function every 1/60 of a second?
    2. Is this a way of processing events that need to be limited by time (for example, a ball which needs to travel from the left of the screen to the right in X seconds)? Or should it be done a different way?
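    On the second question, a common pattern is frame-rate-independent movement: scale each step by the time elapsed since the previous frame, so a slow frame (below the nominal 60 FPS) still moves the object the right distance. A sketch, where "ball" and "speed" (pixels per second) are assumptions:

        import flash.events.Event;
        import flash.utils.getTimer;

        var lastTime:int = getTimer();

        addEventListener(Event.ENTER_FRAME, function(e:Event):void {
            var now:int = getTimer();
            var dt:Number = (now - lastTime) / 1000.0;   // seconds since the previous frame
            lastTime = now;

            ball.x += speed * dt;   // distance = speed * elapsed time, regardless of frame rate
        });

    With this approach a dropped or delayed frame just produces a larger dt, so the total distance covered over X seconds stays the same even when the player's machine cannot hold 60 FPS.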

    Read the article
