Search Results

Search found 290 results on 12 pages for 'pramod pp'.

  • What are the disadvantages of self-encapsulation?

    - by Dave Jarvis
    Background

    Tony Hoare's billion dollar mistake was the invention of null. Subsequently, a lot of code has become riddled with null pointer exceptions (segfaults) when software developers try to use (dereference) uninitialized variables. In 1989, Wirfs-Brock and Wilkerson wrote:

        Direct references to variables severely limit the ability of programmers to refine existing classes. The programming conventions described here structure the use of variables to promote reusable designs. We encourage users of all object-oriented languages to follow these conventions. Additionally, we strongly urge designers of object-oriented languages to consider the effects of unrestricted variable references on reusability.

    Problem

    A lot of software, especially in Java (and likely in C# and C++ as well), uses the following pattern:

        public class SomeClass {
            private String someAttribute;

            public SomeClass() {
                this.someAttribute = "Some Value";
            }

            public void someMethod() {
                if( this.someAttribute.equals( "Some Value" ) ) {
                    // do something...
                }
            }

            public void setAttribute( String s ) {
                this.someAttribute = s;
            }

            public String getAttribute() {
                return this.someAttribute;
            }
        }

    Sometimes a band-aid solution is used instead, checking for null throughout the code base:

        public void someMethod() {
            assert this.someAttribute != null;
            if( this.someAttribute.equals( "Some Value" ) ) {
                // do something...
            }
        }

        public void anotherMethod() {
            assert this.someAttribute != null;
            if( this.someAttribute.equals( "Some Default Value" ) ) {
                // do something...
            }
        }

    The band-aid does not always avoid the null pointer problem: a race condition exists. The race is mitigated by copying to a local variable first:

        public void anotherMethod() {
            String someAttribute = this.someAttribute;
            assert someAttribute != null;
            if( someAttribute.equals( "Some Default Value" ) ) {
                // do something...
            }
        }

    Yet that requires two statements (assignment to a local copy, then a null check) every time a class-scoped variable is used, just to ensure it is valid.

    Self-Encapsulation

    Ken Auer's "Reusability Through Self-Encapsulation" (Pattern Languages of Program Design, Addison-Wesley, New York, pp. 505-516, 1994) advocated self-encapsulation combined with lazy initialization. The result, in Java, would resemble:

        public class SomeClass {
            private String someAttribute;

            public SomeClass() {
                setAttribute( "Some Value" );
            }

            public void someMethod() {
                if( getAttribute().equals( "Some Value" ) ) {
                    // do something...
                }
            }

            public void setAttribute( String s ) {
                this.someAttribute = s;
            }

            public String getAttribute() {
                String someAttribute = this.someAttribute;
                if( someAttribute == null ) {
                    setAttribute( createDefaultValue() );
                    someAttribute = this.someAttribute;
                }
                return someAttribute;
            }

            protected String createDefaultValue() {
                return "Some Default Value";
            }
        }

    All duplicate null checks become superfluous: getAttribute() ensures the value is never null at a single location within the containing class. Efficiency arguments should be fairly moot -- modern compilers and virtual machines can inline the code where possible. As long as variables are never referenced directly, this also allows proper application of the Open-Closed Principle.

    Question

    What are the disadvantages of self-encapsulation, if any? (Ideally, I would like to see references to studies that contrast the robustness of similarly complex systems that do and do not use self-encapsulation, as this strikes me as a fairly straightforward testable hypothesis.)
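
    If thread safety is also a concern, the lazy accessor can be synchronized so the race described above cannot resurface. A minimal sketch (not from Auer's paper; the class and default value here are illustrative):

        // Hypothetical sketch: a synchronized lazy accessor combining
        // self-encapsulation with thread safety. Synchronized methods on the
        // same object share one reentrant lock, so getAttribute() may safely
        // call setAttribute().
        public class SafeClass {
            private String someAttribute;

            public synchronized void setAttribute( String s ) {
                this.someAttribute = s;
            }

            public synchronized String getAttribute() {
                if( this.someAttribute == null ) {
                    setAttribute( createDefaultValue() );  // lazy initialization
                }
                return this.someAttribute;                 // never null past this point
            }

            protected String createDefaultValue() {
                return "Some Default Value";
            }
        }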

  • What is a Delphi version of the C++ header for the DVP7010B video card DLL?

    - by grzegorz1
    I need help with converting c++ header file to delphi. I spent several days on this problem without success. Below is the original header file and my Delphi translation. C++ header #if _MSC_VER > 1000 #pragma once #endif // _MSC_VER > 1000 #ifdef DVP7010BDLL_EXPORTS #define DVP7010BDLL_API __declspec(dllexport) #else #define DVP7010BDLL_API __declspec(dllimport) #endif #define MAXBOARDS 4 #define MAXDEVS 4 #define ID_NEW_FRAME 37810 #define ID_MUX0_NEW_FRAME 37800 #define ID_MUX1_NEW_FRAME 37801 #define ID_MUX2_NEW_FRAME 37802 #define ID_MUX3_NEW_FRAME 37803 typedef enum { SUCCEEDED = 1, FAILED = 0, SDKINITFAILED = -1, PARAMERROR = -2, NODEVICES = -3, NOSAMPLE = -4, DEVICENUMERROR = -5, INPUTERROR = -6, // VERIFYHWERROR = -7 } Res; typedef enum tagAnalogVideoFormat { Video_None = 0x00000000, Video_NTSC_M = 0x00000001, Video_NTSC_M_J = 0x00000002, Video_PAL_B = 0x00000010, Video_PAL_M = 0x00000200, Video_PAL_N = 0x00000400, Video_SECAM_B = 0x00001000 } AnalogVideoFormat; typedef enum { SIZEFULLPAL=0, SIZED1, SIZEVGA, SIZEQVGA, SIZESUBQVGA } VideoSize; typedef enum { STOPPED = 1, RUNNING = 2, UNINITIALIZED = -1, UNKNOWNSTATE = -2 } CapState; class IDVP7010BDLL { public: int AdvDVP_CreateSDKInstence(void **pp); virtual int AdvDVP_InitSDK() PURE; virtual int AdvDVP_CloseSDK() PURE; virtual int AdvDVP_GetNoOfDevices(int *pNoOfDevs) PURE; virtual int AdvDVP_Start(int nDevNum, int SwitchingChans, HWND Main, HWND hwndPreview) PURE; virtual int AdvDVP_Stop(int nDevNum) PURE; virtual int AdvDVP_GetCapState(int nDevNum) PURE; virtual int AdvDVP_IsVideoPresent(int nDevNum, BOOL* VPresent) PURE; virtual int AdvDVP_GetCurFrameBuffer(int nDevNum, int VMux, long* bufSize, BYTE* buf) PURE; virtual int AdvDVP_SetNewFrameCallback(int nDevNum, int callback) PURE; virtual int AdvDVP_GetVideoFormat(int nDevNum, AnalogVideoFormat* vFormat) PURE; virtual int AdvDVP_SetVideoFormat(int nDevNum, AnalogVideoFormat vFormat) PURE; virtual int AdvDVP_GetFrameRate(int nDevNum, int *nFrameRate) PURE; virtual int AdvDVP_SetFrameRate(int nDevNum, int SwitchingChans, int nFrameRate) PURE; virtual int AdvDVP_GetResolution(int nDevNum, VideoSize *Size) PURE; virtual int AdvDVP_SetResolution(int nDevNum, VideoSize Size) PURE; virtual int AdvDVP_GetVideoInput(int nDevNum, int* input) PURE; virtual int AdvDVP_SetVideoInput(int nDevNum, int input) PURE; virtual int AdvDVP_GetBrightness(int nDevNum, int input, long *pnValue) PURE; virtual int AdvDVP_SetBrightness(int nDevNum, int input, long nValue) PURE; virtual int AdvDVP_GetContrast(int nDevNum, int input, long *pnValue) PURE; virtual int AdvDVP_SetContrast(int nDevNum, int input, long nValue) PURE; virtual int AdvDVP_GetHue(int nDevNum, int input, long *pnValue) PURE; virtual int AdvDVP_SetHue(int nDevNum, int input, long nValue) PURE; virtual int AdvDVP_GetSaturation(int nDevNum, int input, long *pnValue) PURE; virtual int AdvDVP_SetSaturation(int nDevNum, int input, long nValue) PURE; virtual int AdvDVP_GPIOGetData(int nDevNum, int DINum, BOOL* value) PURE; virtual int AdvDVP_GPIOSetData(int nDevNum, int DONum, BOOL value) PURE; }; Delphi unit IDVP7010BDLL_h; interface uses Windows, Messages, SysUtils, Classes; //{$if _MSC_VER > 1000} //pragma once //{$endif} // _MSC_VER > 1000 {$ifdef DVP7010BDLL_EXPORTS} //const DVP7010BDLL_API = __declspec(dllexport); {$else} //const DVP7010BDLL_API = __declspec(dllimport); {$endif} const MAXDEVS = 4; MAXMUXS = 4; ID_NEW_FRAME = 37810; ID_MUX0_NEW_FRAME = 37800; ID_MUX1_NEW_FRAME = 37801; ID_MUX2_NEW_FRAME = 37802; ID_MUX3_NEW_FRAME = 
37803; // TRec SUCCEEDED = 1; FAILED = 0; SDKINITFAILED = -1; PARAMERROR = -2; NODEVICES = -3; NOSAMPLE = -4; DEVICENUMERROR = -5; INPUTERROR = -6; // TRec // TAnalogVideoFormat Video_None = $00000000; Video_NTSC_M = $00000001; Video_NTSC_M_J = $00000002; Video_PAL_B = $00000010; Video_PAL_M = $00000200; Video_PAL_N = $00000400; Video_SECAM_B = $00001000; // TAnalogVideoFormat // TCapState STOPPED = 1; RUNNING = 2; UNINITIALIZED = -1; UNKNOWNSTATE = -2; // TCapState type TCapState = Longint; TRes = Longint; TtagAnalogVideoFormat = DWORD; TAnalogVideoFormat = TtagAnalogVideoFormat; PAnalogVideoFormat = ^TAnalogVideoFormat; TVideoSize = ( SIZEFULLPAL, SIZED1, SIZEVGA, SIZEQVGA, SIZESUBQVGA); PVideoSize = ^TVideoSize; P_Pointer = ^Pointer; TIDVP7010BDLL = class function AdvDVP_CreateSDKInstence(pp: P_Pointer): integer; virtual; stdcall; abstract; function AdvDVP_InitSDK():Integer; virtual; stdcall; abstract; function AdvDVP_CloseSDK():Integer; virtual; stdcall; abstract; function AdvDVP_GetNoOfDevices(pNoOfDevs : PInteger) :Integer; virtual; stdcall; abstract; function AdvDVP_Start(nDevNum : Integer; SwitchingChans : Integer; Main : HWND; hwndPreview: HWND ) :Integer; virtual; stdcall; abstract; function AdvDVP_Stop(nDevNum : Integer ):Integer; virtual; stdcall; abstract; function AdvDVP_GetCapState(nDevNum : Integer ):Integer; virtual; stdcall; abstract; function AdvDVP_IsVideoPresent(nDevNum : Integer; VPresent : PBool) :Integer; virtual; stdcall; abstract; function AdvDVP_GetCurFrameBuffer(nDevNum : Integer; VMux : Integer; bufSize : PLongInt; buf : PByte) :Integer; virtual; stdcall; abstract; function AdvDVP_SetNewFrameCallback(nDevNum : Integer; callback : Integer ) :Integer; virtual; stdcall; abstract; function AdvDVP_GetVideoFormat(nDevNum : Integer; vFormat : PAnalogVideoFormat) :Integer; virtual; stdcall; abstract; function AdvDVP_SetVideoFormat(nDevNum : Integer; vFormat : TAnalogVideoFormat ) :Integer; virtual; stdcall; abstract; function AdvDVP_GetFrameRate(nDevNum : Integer; nFrameRate : Integer) :Integer; virtual; stdcall; abstract; function AdvDVP_SetFrameRate(nDevNum : Integer; SwitchingChans : Integer; nFrameRate : Integer) :Integer; virtual; stdcall; abstract; function AdvDVP_GetResolution(nDevNum : Integer; Size : PVideoSize) :Integer; virtual; stdcall; abstract; function AdvDVP_SetResolution(nDevNum : Integer; Size : TVideoSize ) :Integer; virtual; stdcall; abstract; function AdvDVP_GetVideoInput(nDevNum : Integer; input : PInteger) :Integer; virtual; stdcall; abstract; function AdvDVP_SetVideoInput(nDevNum : Integer; input : Integer) :Integer; virtual; stdcall; abstract; function AdvDVP_GetBrightness(nDevNum : Integer; input: Integer; pnValue : PLongInt) :Integer; virtual; stdcall; abstract; function AdvDVP_SetBrightness(nDevNum : Integer; input: Integer; nValue : LongInt) :Integer; virtual; stdcall; abstract; function AdvDVP_GetContrast(nDevNum : Integer; input: Integer; pnValue : PLongInt) :Integer; virtual; stdcall; abstract; function AdvDVP_SetContrast(nDevNum : Integer; input: Integer; nValue : LongInt) :Integer; virtual; stdcall; abstract; function AdvDVP_GetHue(nDevNum : Integer; input: Integer; pnValue : PLongInt) :Integer; virtual; stdcall; abstract; function AdvDVP_SetHue(nDevNum : Integer; input: Integer; nValue : LongInt) :Integer; virtual; stdcall; abstract; function AdvDVP_GetSaturation(nDevNum : Integer; input: Integer; pnValue : PLongInt) :Integer; virtual; stdcall; abstract; function AdvDVP_SetSaturation(nDevNum : Integer; input: Integer; nValue : 
LongInt) :Integer; virtual; stdcall; abstract; function AdvDVP_GPIOGetData(nDevNum : Integer; DINum:Integer; value : PBool) :Integer; virtual; stdcall; abstract; function AdvDVP_GPIOSetData(nDevNum : Integer; DONum:Integer; value : Boolean) :Integer; virtual; stdcall; abstract; end; function IDVP7010BDLL : TIDVP7010BDLL ; stdcall; implementation function IDVP7010BDLL; external 'DVP7010B.dll'; end.

  • Why does ruby-debug say 'Saved frames may be incomplete'

    - by Chris McCauley
    From time-to-time I get this when a breakpoint is triggered. It looks like stack frames aren't getting saved so I can't step back through the call stack - a real pain. See below for an example --> #0 BatchProcess.add_failure_record(row_id#Fixnum, test#Struct::Test, message#String,...) at line server/processes/batch.rb:309 Warning: saved frames may be incomplete; compare with caller(0). (rdb:1) pp caller ["./server/processes/batch.rb:309:in `run_tests'", "./server/processes/common/generic_process.rb:219:in `each'", "./server/processes/common/generic_process.rb:219:in `run_tests'", "./server/processes/common/generic_process.rb:271:in `run_plan'", "./server/processes/common/corrections.rb:19:in `each_with_index'", "./server/processes/common/generic_process.rb:266:in `each'", "./server/processes/common/generic_process.rb:266:in `each_with_index'", "./server/processes/common/generic_process.rb:266:in `run_plan'", "./server/processes/batch.rb:202:in `run_engine'", "/usr/lib/ruby/1.8/benchmark.rb:293:in `measure'", "./server/processes/batch.rb:201:in `run_engine'", "./server/processes/common/generic_process.rb:88:in `run_dataset'", "./server/processes/batch.rb:210:in `run_dataset'", "/usr/lib/ruby/1.8/benchmark.rb:293:in `measure'", "./server/processes/batch.rb:209:in `run_dataset'", "./server/processes/common/generic_process.rb:159:in `run'", "./server/processes/common/generic_process.rb:158:in `each'", "./server/processes/common/generic_process.rb:158:in `run'", "./server/processes/batch.rb:350:in `run'", "/usr/lib/ruby/1.8/benchmark.rb:293:in `measure'", "./server/processes/batch.rb:349:in `run'", "server/processes/test_runs/run_tests.rb:55:in `run_one_process'", "server/processes/test_runs/run_tests.rb:81"] Any ideas on how to stop this happening?

  • Parsing a file in C

    - by sfactor
    I need to parse a file and do some processing on it. The file is a text file and the data is variable-length data of the form "PP1004181350D001002003..........". A PP introduces a timestamp, so PP1004181350 is 2010-04-18 13:50. A D introduces a data point: three separate values, each three digits long, so D001002003 holds the three coordinates 001, 002 and 003. I need to parse this data from the file, storing each timestamp into an array and the corresponding data points into three parallel arrays, one per coordinate, with one row per data point. The end arrays might look like:

        TimeStamp[1] = "135000", low[1] = "001", medium[1] = "002", high[1] = "003"
        TimeStamp[2] = "135015", low[2] = "010", medium[2] = "012", high[2] = "013"
        TimeStamp[3] = "135030", low[3] = "051", medium[3] = "052", high[3] = "043"
        ....

    The question is how do I go about doing this in C? How do I go through this string looking for these patterns? Note: the seconds value in the timestamp is added by us, since it is known that each data point arrives 15 seconds after the previous one.
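
    The question asks for C, but the scanning logic itself is the interesting part; a minimal sketch of one approach (shown here in Java for illustration, with a made-up input string) walks the record, consuming 10 digits after each "PP" and 9 digits after each "D":

        // Hypothetical sketch of the scanning logic (Java used for illustration;
        // the original question asks for C). Reads a 10-digit timestamp after
        // "PP" and three 3-digit values after each "D".
        public class RecordParser {
            public static void main(String[] args) {
                String data = "PP1004181350D001002003D010012013";
                String timestamp = null;
                int i = 0;
                while (i < data.length()) {
                    char c = data.charAt(i);
                    if (c == 'P' && data.startsWith("PP", i)) {
                        timestamp = data.substring(i + 2, i + 12);  // yymmddhhmm
                        i += 12;
                    } else if (c == 'D') {
                        String low    = data.substring(i + 1, i + 4);
                        String medium = data.substring(i + 4, i + 7);
                        String high   = data.substring(i + 7, i + 10);
                        System.out.println(timestamp + " " + low + " " + medium + " " + high);
                        i += 10;
                    } else {
                        i++;  // skip anything unexpected
                    }
                }
            }
        }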

  • In ParallelPython, a method of an object (object.func()) fails to modify a variable of the object (object.value)

    - by mehmet.ali.anil
    With Parallel Python, I am trying to convert my old serial code to parallel. The code relies heavily on objects whose methods change the objects' own variables. A stripped example, in which I omit some syntax in favor of simplicity:

        class Network:
            self.adjacency_matrix = [ ... ]
            self.state = [ ... ]
            self.equilibria = [ ... ]
            ...
            def populate_equilibria(self):
                # this method takes every possible value that self.state can be in,
                # runs the boolean dynamical system,
                # and writes an integer into self.equilibria for each self.state;
                # it doesn't return anything

    I call this method as:

        j1 = jobserver.submit(net2.populate_equilibria,(),(),("numpy as num"))

    The job is submitted, and I know that a long computation takes place, so I assume my code is run. The problem is (I am new to Parallel Python) that I was expecting the variable net2.equilibria to be written accordingly when the method is called, and that I would get back a revised object (net2). That is how my code works: independent objects with methods that act upon the objects' variables. Instead, though the computation clearly happens and takes a reasonable time, the variable net2.equilibria remains unchanged. It is as if PP only takes the function and the object, computes the result elsewhere, but never returns the object, so I am left with the old one. What am I missing? Thanks in advance.
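
    Parallel Python runs each job in a separate worker process on a serialized copy of the submitted object, so side effects on that copy never reach the caller; the method has to return its result (or the object itself), and the caller must reassign it. A language-neutral sketch of the return-and-reassign pattern (in Java, with a hypothetical Network class standing in for the one above):

        // Sketch: have the worker method return its result and reassign it in
        // the caller. With process-based job servers (like Parallel Python's),
        // only the returned value survives the process boundary.
        import java.util.concurrent.*;

        public class ReturnDontMutate {
            static class Network {
                int[] equilibria;
                Network populateEquilibria() {
                    this.equilibria = new int[] { 1, 2, 3 }; // stand-in for the real computation
                    return this;                             // key point: return the result
                }
            }

            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newSingleThreadExecutor();
                Network net2 = new Network();
                Future<Network> job = pool.submit(net2::populateEquilibria);
                Network result = job.get();                  // use what the job hands back,
                System.out.println(result.equilibria.length); // not a side effect
                pool.shutdown();
            }
        }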

  • PayPal return URL

    - by Sam
    Here's the code for my PayPal button:

        <form action="https://www.sandbox.paypal.com/cgi-bin/webscr" method="post">
          <input type="hidden" name="cmd" value="_xclick">
          <input type="hidden" name="business" value="[email protected]">
          <input type="hidden" name="lc" value="GB">
          <input type="hidden" name="button_subtype" value="products">
          <input type="hidden" name="no_note" value="1">
          <input type="hidden" name="no_shipping" value="1">
          <input type="hidden" name="rm" value="0">
          <input type="hidden" name="return" value="http://www.example.com">
          <input type="hidden" name="item_name" value="My Item">
          <input type="hidden" name="amount" value="25.00">
          <input type="hidden" name="currency_code" value="GBP">
          <input type="hidden" name="bn" value="PP-BuyNowBF:proceed_btn.gif:NonHosted">
          <input type="hidden" name="item_number" value="4BD9569402CDE">
          <input type="image" src="http://www.example.com/image.gif" border="0" name="submit" alt="PayPal - The safer, easier way to pay online.">
          <img alt="" border="0" src="https://www.paypal.com/en_GB/i/scr/pixel.gif" width="1" height="1">
        </form>

    Is it possible to add the item_number to the return URL? For example, after completing the payment within PayPal, the user would get sent back to http://www.example.com?item_number=4BD9569402CDE

  • C++ cin keeps skipping.....

    - by user69514
    I am having problems with my program. WHen I run it, it asks the user for the album, the title, but then it just exits the loop without asking for the price and the sale tax. Any ideas what's going on? This is a sample run Discounts effective for September 15, 2010 Classical 8% Country 4% International 17% Jazz 0% Rock 16% Show 12% Are there more transactions? Y/N y Enter Artist of CD: Sevendust Enter Title of CD: Self titled Enter Genre of CD: Rock enter price Are there more transactions? Y/N Thank you for shopping with us! Program code: #include <iostream> #include <string> using namespace std; int counter = 0; string discount_tiles[] = {"Classical", "Country", "International", "Jazz", "Rock", "Show"}; int discount_amounts[] = {8, 4, 17, 0, 16, 12, 14}; string date = "September 15, 2010"; // Array Declerations //Artist array char** artist = new char *[100]; //Title array char** title = new char *[100]; //Genres array char** genres = new char *[100]; //Price array double* price[100]; //Discount array double* tax[100]; // sale price array double* sale_price[100]; //sale tax array double* sale_tax[100]; //cash price array double* cash_price[100]; //Begin Prototypes char* getArtist(); char* getTitle(); char* getGenre(); double* getPrice(); double* getTax(); unsigned int* AssignDiscounts(); void ReadTransaction (char ** artist, char ** title, char ** genre, float ** cash, float & taxrate, int albumcount); void computesaleprice(); bool AreThereMore (); //End Prototypes bool areThereMore () { char answer; cout << "Are there more transactions? Y/N" << endl; cin >> answer; if (answer =='y' || answer =='Y') return true; else return false; } char* getArtist() { char * artist= new char [100]; cout << "Enter Artist of CD: " << endl; cin.getline(artist,100); cin.ignore(); return artist; } char* getTitle() { char * title= new char [100]; cout << "Enter Title of CD: " << endl; cin.getline(title,100); cin.ignore(); return title; } char* getGenre() { char * genre= new char [100]; cout << "Enter Genre of CD: " << endl; cin.getline(genre,100); cin.ignore(); return genre; } double* getPrice() { //double* price = new double(); //cout << "Enter Price of CD: " << endl; //cin >> *price; //return price; double p = 0.0; cout<< "enter price" << endl; cin >> p; cin.ignore(); double* pp = &p; return pp; } double* getTax() { double* tax= new double(); cout << "Enter local sales tax: " << endl; cin >> *tax; return tax; } int findDiscount(string str){ if(str.compare(discount_tiles[0]) == 0) return discount_amounts[0]; else if(str.compare(discount_tiles[0]) == 0) return discount_amounts[1]; else if(str.compare(discount_tiles[0]) == 0) return discount_amounts[2]; else if(str.compare(discount_tiles[0]) == 0) return discount_amounts[3]; else if(str.compare(discount_tiles[0]) == 0) return discount_amounts[4]; else if(str.compare(discount_tiles[0]) == 0) return discount_amounts[5]; else{ cout << "Error in findDiscount function" << endl; return 0; } } void computesaleprice() { /** fill in array for all purchases **/ for( int i=0; i<=counter; i++){ double temp = *price[i]; temp -= findDiscount(genres[i]); double* tmpPntr = new double(); tmpPntr = &temp; sale_price[i] = tmpPntr; delete(&temp); delete(tmpPntr); } } void printDailyDiscounts(){ cout << "Discounts effective for " << date << endl; for(int i=0; i < 6; i++){ cout << discount_tiles[i] << "\t" << discount_amounts[i] << "%" << endl; } } //Begin Main int main () { for( int i=0; i<100; i++){ artist[i]=new char [100]; title[i]=new char [100]; genres[i]=new char [100]; 
price[i] = new double(0.0); tax[i] = new double(0.0); } // End Array Decleration printDailyDiscounts(); bool flag = true; while(flag == true){ if(areThereMore() == true){ artist[counter] = getArtist(); title[counter] = getTitle(); genres[counter] = getGenre(); price[counter] = getPrice(); //tax[counter] = getTax(); //counter++; flag = true; } else { flag = false; } } //compute sale prices //computesaleprice(); cout << "Thank you for shopping with us!" << endl; return 0; } //End Main /** void ReadTransaction (char ** artist, char ** title, char ** genre, float ** cash, float & taxrate, int albumcount) { strcpy(artist[albumcount],getArtist()); strcpy(title[albumcount],getTitle()); strcpy(genre[albumcount],getGenre()); //cash[albumcount][0]=computesaleprice();??????? //taxrate=getTax;?????????????? } * * */ unsigned int * AssignDiscounts() { unsigned int * discount = new unsigned int [7]; cout << "Enter Classical Discount: " << endl; cin >> discount[0]; cout << "Enter Country Discount: " << endl; cin >> discount[1]; cout << "Enter International Discount: " << endl; cin >> discount[2]; cout << "Enter Jazz Discount: " << endl; cin >> discount[3]; cout << "Enter Pop Discount: " << endl; cin >> discount[4]; cout << "Enter Rock Discount: " << endl; cin >> discount[5]; cout << "Enter Show Discount: " << endl; cin >> discount[6]; return discount; } /** char ** AssignGenres () { char ** genres = new char * [7]; for (int x=0;x<7;x++) genres[x] = new char [20]; strcpy(genres [0], "Classical"); strcpy(genres [1], "Country"); strcpy(genres [2], "International"); strcpy(genres [3], "Jazz"); strcpy(genres [4], "Pop"); strcpy(genres [5], "Rock"); strcpy(genres [6], "Show"); return genres; } **/ float getTax(float taxrate) { cout << "Please enter store tax rate: " << endl; cin >> taxrate; return taxrate; }
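
    The symptom described is most likely the classic leftover-newline problem: cin >> answer consumes the "y" but leaves the newline in the buffer, and the mix of getline and stray ignore() calls then reads leftovers instead of real input, eventually putting cin into a fail state so the remaining prompts are skipped. In C++ the usual fix is a single cin.ignore(numeric_limits<streamsize>::max(), '\n') after each cin >> read. The same pitfall exists elsewhere; a sketch in Java's Scanner (hypothetical prompts) shows the mechanism:

        // Sketch of the same pitfall in Java: mixing token reads with
        // line reads leaves the end-of-line character in the buffer.
        import java.util.Scanner;

        public class LeftoverNewline {
            public static void main(String[] args) {
                Scanner in = new Scanner(System.in);
                System.out.print("More transactions? Y/N: ");
                String answer = in.next();     // reads "y" but NOT the newline
                in.nextLine();                 // the fix: consume the rest of the line
                System.out.print("Enter Artist of CD: ");
                String artist = in.nextLine(); // without the fix this would read ""
                System.out.println(answer + " / " + artist);
            }
        }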

  • Is there a way to make changes to toggles in my .emacs file apply without re-starting Emacs?

    - by Vivi
    I want to be able to make changes to my .emacs file take effect without having to restart Emacs. I found three questions which sort of answer what I am asking (you can find them here, here and here), but the problem is that the change I have just made is to a toggle, and as the comments to two of the answers (a1, a2) to those questions explain, the solutions given there (such as M-x reload-file or M-x eval-buffer) don't apply to toggles. I imagine there is a way of toggling the variable again with a command, but if there is a way to reload the whole .emacs and have all the toggles re-evaluated without having to specify them, I would prefer that. In any case, I would also appreciate it if someone told me how to toggle the value of a variable, so that if I have just changed one toggle I can apply it with a command rather than restart Emacs just for that (I am new to Emacs). I don't know how useful this information is, but the change I applied was the following (which I got from this answer to another question):

        (setq skeleton-pair t)
        (setq skeleton-pair-on-word t)
        (global-set-key (kbd "[") 'skeleton-pair-insert-maybe)
        (global-set-key (kbd "(") 'skeleton-pair-insert-maybe)
        (global-set-key (kbd "{") 'skeleton-pair-insert-maybe)
        (global-set-key (kbd "<") 'skeleton-pair-insert-maybe)

    Edit: I included the above in .emacs and reloaded Emacs, so that the changes took effect. Then I commented it all out and tried M-x load-file. This doesn't work. The suggestion below (C-x C-e, by PP) works if I am using it to evaluate the toggle the first time, but not when I want to undo it. I would like something that would evaluate the commenting-out, if such a thing exists... Thanks :)

  • How can I add an image in an email body?

    - by Kutbi
    final Intent i = new Intent(android.content.Intent.ACTION_SEND);
    i.putExtra(android.content.Intent.EXTRA_EMAIL, new String[]{ txt.getText().toString()});
    i.putExtra(Intent.EXTRA_SUBJECT, "Merry Christmas");
    i.setType("text/html");
    Spanned html = Html.fromHtml(
        "<html><body>h<b>ell</b>o<img src='http://www.pp.rhul.ac.uk/twiki/pub/TWiki/GnuPlotPlugin/RosenbrockFunctionSample.png'>world</body></html>",
        new ImageGetter() {
            InputStream s;
            public Drawable getDrawable(String url) {
                try {
                    s = (InputStream) (new URL(url)).getContent();
                } catch (MalformedURLException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }
                Drawable d = Drawable.createFromStream(s, null);
                LogUtil.debug(this, "Got image: " + d.getClass() + ", " + d.getIntrinsicWidth() + "x" + d.getIntrinsicHeight());
                d.setBounds(0, 0, d.getIntrinsicWidth(), d.getIntrinsicHeight());
                return d;
            }
        }, null);
    i.putExtra(Intent.EXTRA_TEXT, html);
    startActivity(Intent.createChooser(i, "Send email"));

  • Indexing and Searching Over Word Level Annotation Layers in Lucene

    - by dmcer
    I have a data set with multiple layers of annotation over the underlying text, such as part-of-speech tags, chunks from a shallow parser, named entities, and others from various natural language processing (NLP) tools. For a sentence like The man went to the store, the annotations might look like:

        Word   POS   Chunk   NER
        ====   ===   =====   ========
        The    DT    NP      Person
        man    NN    NP      Person
        went   VBD   VP      -
        to     TO    PP      -
        the    DT    NP      Location
        store  NN    NP      Location

    I'd like to index a bunch of documents with annotations like these using Lucene and then perform searches across the different layers. An example of a simple query would be to retrieve all documents where Washington is tagged as a person. While I'm not absolutely committed to the notation, syntactically end-users might enter the query as follows:

        Query: Word=Washington,NER=Person

    I'd also like to do more complex queries involving the sequential order of annotations across different layers, e.g. find all the documents where there's a word tagged person, followed by the words arrived at, followed by a word tagged location. Such a query might look like:

        Query: "NER=Person Word=arrived Word=at NER=Location"

    What's a good way to go about approaching this with Lucene? Is there any way to index and search over document fields that contain structured tokens?
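
    One common scheme (sketched here without committing to a specific Lucene version's API) is to emit several layer-prefixed tokens per word position; if the tokens for a word share one position (e.g. by indexing all but the first with a position increment of 0), phrase queries can then mix layers. A minimal illustration of the token encoding itself:

        // Hypothetical sketch: expand each word into layer-prefixed tokens
        // ("Word=went", "POS=VBD", "Chunk=VP", ...). Indexed at the same token
        // position, a phrase query like
        // "NER=Person Word=arrived Word=at NER=Location" can span layers.
        import java.util.*;

        public class LayeredTokens {
            static List<List<String>> encode(String[] words, String[] pos,
                                             String[] chunk, String[] ner) {
                List<List<String>> positions = new ArrayList<>();
                for (int i = 0; i < words.length; i++) {
                    positions.add(Arrays.asList(
                        "Word=" + words[i], "POS=" + pos[i],
                        "Chunk=" + chunk[i], "NER=" + ner[i]));
                }
                return positions;  // one token list per word position
            }

            public static void main(String[] args) {
                System.out.println(encode(
                    new String[] {"The", "man"}, new String[] {"DT", "NN"},
                    new String[] {"NP", "NP"},   new String[] {"Person", "Person"}));
            }
        }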

  • Formulae for U and V buffer offset

    - by Abhi
    Hi all! What should the buffer offset values for U and V be for the YUV444 format type? For example, if I am using the YV12 format the values are as follows:

        ppData.inputIDMAChannel.UBufOffset = iInputHeight * iInputWidth + (iInputHeight * iInputWidth)/4;
        ppData.inputIDMAChannel.VBufOffset = iInputHeight * iInputWidth;

    where iInputHeight = 160 and iInputWidth = 112. ppData is an object of the following structure:

        typedef struct ppConfigDataStruct
        {
            //---------------------------------------------------------------
            // General controls
            //---------------------------------------------------------------
            UINT8 IntType;              // FIRSTMODULE_INTERRUPT: the interrupt will be
                                        // raised once the first sub-module finished its job.
                                        // FRAME_INTERRUPT: the interrupt will be raised
                                        // after all sub-modules finished their jobs.

            //---------------------------------------------------------------
            // Format controls
            //---------------------------------------------------------------
            // For input
            idmaChannel inputIDMAChannel;
            BOOL bCombineEnable;
            idmaChannel inputcombIDMAChannel;
            UINT8 inputcombAlpha;
            UINT32 inputcombColorkey;
            icAlphaType alphaType;

            // For output
            idmaChannel outputIDMAChannel;
            CSCEQUATION CSCEquation;    // Selects R2Y or Y2R CSC Equation
            icCSCCoeffs CSCCoeffs;      // Selects R2Y or Y2R CSC Equation
            icFlipRot FlipRot;          // Flip/Rotate controls for VF

            BOOL allowNopPP;            // flag to indicate we need a NOP PP processing
        } *pPpConfigData, ppConfigData;

    and the idmaChannel structure is as follows:

        typedef struct idmaChannelStruct
        {
            icFormat FrameFormat;       // YUV or RGB
            icFrameSize FrameSize;      // frame size
            UINT32 LineStride;          // stride in bytes
            icPixelFormat PixelFormat;  // Input frame RGB format, set NULL
                                        // to use standard settings.
            icDataWidth DataWidth;      // Bits per pixel for RGB format
            UINT32 UBufOffset;          // offset of U buffer from Y buffer start address
                                        // ignored if non-planar image format
            UINT32 VBufOffset;          // offset of V buffer from Y buffer start address
                                        // ignored if non-planar image format
        } idmaChannel, *pIdmaChannel;

    I want the formulae for ppData.inputIDMAChannel.UBufOffset and ppData.inputIDMAChannel.VBufOffset for YUV444. Thanks in advance.
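
    For planar YUV 4:4:4 every plane is full resolution, so (assuming a planar layout; this is a sketch, not taken from the PP hardware documentation) the offsets are simple multiples of the Y plane size:

        // Sketch of the arithmetic: in planar YUV 4:4:4 each of the three
        // planes is width*height bytes, so with Y,U,V plane ordering the
        // offsets from the start of the Y buffer would be:
        public class Yuv444Offsets {
            public static void main(String[] args) {
                int width = 112, height = 160;   // values from the question
                int planeSize = width * height;
                int uBufOffset = planeSize;      // U plane follows Y plane
                int vBufOffset = 2 * planeSize;  // V plane follows U plane
                System.out.println("U offset: " + uBufOffset + ", V offset: " + vBufOffset);
                // If the hardware expects Y,V,U ordering (as the YV12 example
                // suggests), swap the two offsets accordingly.
            }
        }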

  • Searching Natural Language Sentence Structure

    - by Cerin
    What's the best way to store and search a database of natural language sentence structure trees? Using OpenNLP's English Treebank Parser, I can get fairly reliable sentence structure parsings for arbitrary sentences. What I'd like to do is create a tool that can extract all the doc strings from my source code, generate these trees for all sentences in the doc strings, store these trees and their associated function name in a database, and then allow a user to search the database using natural language queries. So, given the sentence "This uploads files to a remote machine." for the function upload_files(), I'd have the tree: (TOP (S (NP (DT This)) (VP (VBZ uploads) (NP (NNS files)) (PP (TO to) (NP (DT a) (JJ remote) (NN machine)))) (. .))) If someone entered the query "How can I upload files?", equating to the tree: (TOP (SBARQ (WHADVP (WRB How)) (SQ (MD can) (NP (PRP I)) (VP (VB upload) (NP (NNS files)))) (. ?))) how would I store and query these trees in a SQL database? I've written a simple proof-of-concept script that can perform this search using a mix of regular expressions and network graph parsing, but I'm not sure how I'd implement this in a scalable way. And yes, I realize my example would be trivial to retrieve using a simple keyword search. The idea I'm trying to test is how I might take advantage of grammatical structure, so I can weed-out entries with similar keywords, but a different sentence structure. For example, with the above query, I wouldn't want to retrieve the entry associated with the sentence "Checks a remote machine to find a user that uploads files." which has similar keywords, but is obviously describing a completely different behavior.
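
    One simple storage scheme to experiment with (a sketch, not a vetted design): decompose each tree into (position, label, token) rows in an ordinary SQL table keyed by tree id, so leaf-level structural queries become WHERE clauses over rows, while the full bracketed tree is kept in a text column for the finer regex/graph matching the proof-of-concept script already does. A minimal sketch of the decomposition:

        // Hypothetical sketch: decompose a bracketed parse into (pos, label,
        // token) rows that could be stored in a SQL table (tree_id, pos,
        // label, token) and matched with ordinary WHERE/ORDER BY clauses.
        import java.util.*;
        import java.util.regex.*;

        public class TreeRows {
            public static void main(String[] args) {
                String tree = "(TOP (S (NP (DT This)) (VP (VBZ uploads) (NP (NNS files)))))";
                // Leaves look like "(LABEL token)" -- capture both parts.
                Matcher m = Pattern.compile("\\((\\S+) ([^()\\s]+)\\)").matcher(tree);
                List<String> rows = new ArrayList<>();
                int pos = 0;
                while (m.find()) {
                    rows.add(pos++ + "|" + m.group(1) + "|" + m.group(2));
                }
                rows.forEach(System.out::println);  // 0|DT|This, 1|VBZ|uploads, 2|NNS|files
            }
        }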

  • How to use a string as a delimiter in a shell script

    - by Dan
    I am reading a line and am using a regex to match it in my script (/bin/bash):

        echo $line | grep -E '[Ss][Uu][Bb][Jj][Ee][Cc][Tt]: [Hh][Ee][Ll][Pp]' > /dev/null 2>&1
        if [[ $? = "0" && -z $subject ]]; then
            subject=`echo $line | cut -d: -f2` > /dev/null
            echo "Was able to grab a SUBJECT $line and the actual subject is -> $subject" >> $logfile
        fi

    Now my problem is that I use the colon as the delimiter, but sometimes the email will have multiple colons in the subject, so I am not able to grab the whole subject. I am looking for a way to grab everything after the colon following "subject", even if I have to loop through and check for each colon. Maybe cut allows you to cut with a string as the delimiter? Not sure... any ideas? Thanks!
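
    For what the question describes, cut can already return everything from the second field onward by using an open-ended field list (cut -d: -f2-), which keeps any later colons intact. The underlying "everything after the first colon" operation, sketched in Java for illustration:

        // Sketch: take everything after the first ':' so later colons survive.
        public class AfterFirstColon {
            public static void main(String[] args) {
                String line = "Subject: HELP: server: down";
                int i = line.indexOf(':');
                String subject = (i >= 0) ? line.substring(i + 1).trim() : "";
                System.out.println(subject);  // prints "HELP: server: down"
            }
        }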

  • Memory allocation patterns in C++

    - by Mahatma
    I am confused about memory allocation in C++ in terms of the memory areas, such as the const data area, the stack, the heap/free store, and the global/static area. I would like to understand the memory allocation pattern in the following snippet. Can anyone help me understand this? If anything beyond the variable types mentioned in the example would help explain the concept better, please alter the example.

        class FooBar
        {
            int n;                        // Stored in stack?
        public:
            int pubVar;                   // stored in stack?
            void foo(int param)           // param stored in stack
            {
                int *pp = new int;        // int is allocated on heap.
                n = param;
                static int nStat;         // Stored in static area of memory
                int nLoc;                 // stored in stack?
                string str = "mystring";  // stored in stack?
                ..
                if(CONDITION) {
                    static int nSIf;      // stored in static area of memory
                    int loopvar;          // stored in stack
                    ..
                }
            }
        }

        int main(int)
        {
            Foobar bar;           // bar stored in stack? or a part of it?
            Foobar *pBar;         // pBar is stored in stack
            pBar = new Foobar();  // the object is created in heap? What part of the object is stored on heap?
        }

    EDIT: What confuses me is: if pBar = new Foobar(); stores the object on the heap, how come int nLoc; and int pubVar;, which are components of the object, are stored on the stack? That sounds contradictory to me. Shouldn't the lifetimes of pubVar and pBar be the same?

  • Puppet Directory and File ownership ignored

    - by Phil Sturgeon
    Puppet seems to be lying to me, which is not very nice. I am trying to set some files and directories included in /vagrant/src to be 666 and 777, and set the ownership group to the correct Apache user (using the PuppetLabs Apache module). Output from Puppet says yes. [default] Running provisioner: Vagrant::Provisioners::Puppet... [default] Running Puppet with /tmp/vagrant-puppet/manifests/default.pp... stdin: is not a tty No LSB modules are available. warning: require is a metaparam; this value will inherit to all contained resources warning: notify is a metaparam; this value will inherit to all contained resources notice: /Stage[main]//File[/vagrant/src/addons/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/addons/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/addons/]/mode: mode changed '0755' to '0777' notice: /Stage[main]//Package[curl]/ensure: ensure changed 'purged' to 'present' notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/mode: mode changed '0755' to '0777' notice: /Stage[main]//File[/vagrant/src/system/cms/config/config.php]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/config/config.php]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/mode: mode changed '0755' to '0777' notice: /Stage[main]//File[/vagrant/src/uploads/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/uploads/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/uploads/]/mode: mode changed '0755' to '0777' notice: /Stage[main]/Apache/Service[httpd]/ensure: ensure changed 'stopped' to 'running' notice: /Stage[main]//File[/vagrant/src/assets/cache/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/assets/cache/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/assets/cache/]/mode: mode changed '0755' to '0777' notice: Finished catalog run in 2.29 seconds Output from ls -lah says no: $ ls -lah /vagrant/src/ total 36K drwxr-xr-x 1 vagrant vagrant 510 2012-07-03 00:11 . drwxr-xr-x 1 vagrant vagrant 340 2012-07-03 08:08 .. drwxr-xr-x 1 vagrant vagrant 136 2012-07-03 00:11 addons drwxr-xr-x 1 vagrant vagrant 102 2012-07-03 00:11 assets drwxr-xr-x 1 vagrant vagrant 510 2012-07-03 07:45 .git -rw-r--r-- 1 vagrant vagrant 1.3K 2012-07-03 00:11 .gitignore -rwxr-xr-x 1 vagrant vagrant 1.4K 2012-07-03 00:11 .htaccess -rwxr-xr-x 1 vagrant vagrant 8.8K 2012-07-03 00:11 index.php drwxr-xr-x 1 vagrant vagrant 442 2012-07-03 00:11 installer -rwxr-xr-x 1 vagrant vagrant 2.8K 2012-07-03 00:11 LICENSE -rw-r--r-- 1 vagrant vagrant 1.1K 2012-07-03 00:11 phpdoc.dist.xml -rw-r--r-- 1 vagrant vagrant 3.3K 2012-07-03 00:11 README.md drwxr-xr-x 1 vagrant vagrant 204 2012-07-03 00:11 system -rw-r--r-- 1 vagrant vagrant 42 2012-07-03 00:11 .travis.yml drwxr-xr-x 1 vagrant vagrant 102 2012-07-03 00:11 uploads Whats up with that? My entire config can be found here.

  • Is it safe to enable forced ASLR via EMET on Windows?

    - by D.W.
    I'd like to enable forced ASLR for all DLLs on Windows. Is this safe?

    Background: ASLR is an important security mechanism that helps defend against code-injection attacks. DLLs can opt into ASLR, and most do, but some DLLs have not opted in. If a program loads even a single non-ASLR DLL, then the program doesn't get the benefit/protection of ASLR. This is a problem, because there is a non-trivial number of DLLs that haven't opted into ASLR. For instance, it was recently revealed that Dropbox injects a DLL into a bunch of processes, and the Dropbox DLL doesn't have ASLR turned on, which negates any ASLR protection those processes would otherwise have had. Unfortunately, there are many other widely used DLLs that haven't opted into ASLR. This is bad for system security.

    Microsoft provides several ways to turn on ASLR for all DLLs, even ones that haven't opted into ASLR:

    - On Windows 7 and Windows Server 2008, you can enable "Force ASLR" in the registry.
    - On all Windows versions, you can use Microsoft's EMET tool and enable EMET's "Mandatory ASLR" option.

    These methods are possible because all DLLs are compiled as position-independent code and can be relocated to a random location even if they haven't opted into ASLR. These options will ensure that ASLR is turned on, even if the developers of the DLL forgot to opt in. Thus, forcing on ASLR system-wide may help system security.

    In principle, turning on forced ASLR could potentially break a poorly written DLL, so there is some risk of breakage. I'm interested in finding out just how significant this risk is. I have the suspicion that this kind of breakage might be extremely rare. Here's what I've been able to find:

    - Microsoft has done compatibility testing with several dozen widely used applications. The only one they found where Mandatory ASLR causes problems is Windows Media Player. All the other applications continue working fine. (See pp. 39-41 of this document.)
    - I've seen some anecdotal reports that enabling "Mandatory ASLR"/"Force ASLR" is fine and unlikely to cause problems.
    - CERT reports that AMD and ATI video drivers used to crash if you enabled forced ASLR, but their latest drivers have now fixed this problem. They don't show any other drivers with this problem.
    - A forum post from Microsoft shows no other applications with compatibility problems if ASLR is forced on, as of 2011.
    - A user reports that borderlands.exe, a video game by Gearbox Software, crashes if you turn on mandatory ASLR.

    What else should I know? Is it relatively safe to turn on Force ASLR / Mandatory ASLR system-wide to harden the security of my system, or will I be in for a world of pain and broken applications? How significant is the risk of compatibility problems and broken applications?

  • configuration required for HIVE to be installed on a node

    - by ????? ????????
    I went through the process of manually installing ambari (not through SSH, because I couldnt get keyless to work) and everything installed OK, except for HIVE and GANGLIA. I got this message: stderr: None stdout: warning: Unrecognised escape sequence ‘\;’ in file /var/lib/ambari-agent/puppet/modules/hdp-hive/manifests/hive/service_check.pp at line 32 warning: Dynamic lookup of $configuration is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes. notice: /Stage[1]/Hdp::Snappy::Package/Hdp::Snappy::Package::Ln[32]/Hdp::Exec[hdp::snappy::package::ln 32]/Exec[hdp::snappy::package::ln 32]/returns: executed successfully notice: /Stage[2]/Hdp-hive::Hive::Service_check/File[/tmp/hiveserver2Smoke.sh]/ensure: defined content as ‘{md5}7f1d24221266a2330ec55ba620c015a9' notice: /Stage[2]/Hdp-hive::Hive::Service_check/File[/tmp/hiveserver2.sql]/ensure: defined content as ‘{md5}0c429dc9ae0867b5af74ef85b5530d84' notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/File[/tmp/hcatSmoke.sh]/ensure: defined content as ‘{md5}bae7742f7083db968cb6b2bd208874cb’ notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:11:56 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore. notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException org.apache.hadoop.hive.ql.parse.SemanticException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:12:06 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore. notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException [Error 10001]: Table not found hcatsmokeida8c07401_date102513 notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:12:15 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore. notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException o When i go to the alerts and health checks i’m getting this: ive Metastore status check CRIT for 42 minutes CRITICAL: Error accessing hive-metaserver status [13/06/25 03:44:06 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. What am I doing wrong? I have already tried to do ambari-server reset on the the database without results.

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process:

    - Putting your database in to a source control system, and,
    - Running a continuous integration process, each time changes are checked in.

    However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above:

    - Putting some actual integration tests in to the CI process (otherwise, they don't really do much, do they!?),
    - Deploying the database changes with a managed, automated approach,
    - Monitoring what you've just put live, to make sure you haven't broken anything.

    This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There'll then be a third post on automated database deployment, followed by a final post dealing with the last item – monitoring changes on the live system.

    In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I'll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn't have any choice!

    Background on Continuous Delivery

    For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say, however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references:

    - Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage
    - Versioning Databases – Branching and Merging, by Scott Allen

    In particular, I don't deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler:

        Continuous Delivery is about keeping your application in a state where it is always able to deploy into production.

    I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
    Creating and Testing a Unit Test
    There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data, and I want to check whether these are all in a correct email format. Nothing clever, but it illustrates the process and hopefully shows the method by which more interesting tests could be set up.
    Adding Reference Data to our Database
    To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). First of all I need to add some data to my solitary table – this can be done a number of ways, but I’ll do it in SSMS for simplicity. Currently this reference data just exists in the database. For proper integration testing, it needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table: right-click the table, select Design, then right-click on the first “id” row and click “Set Primary Key”. NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right-click on the table (dbo.Email) and select the option to link the table’s static data. In the next screen, link the data in the Email table by selecting it from the list and clicking “save and close”. We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo.
    NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from).
    An interesting point to note here: when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”), the display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.
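    For reference, the T-SQL equivalent of those two SSMS steps might look something like the following. This is a sketch only: the post never shows the table’s CREATE statement, so the constraint name, the “Email” column and the sample addresses are all assumptions made for illustration.

        -- add a primary key to the "id" column mentioned above (constraint name invented)
        ALTER TABLE dbo.Email ADD CONSTRAINT PK_Email PRIMARY KEY (id);

        -- and some sample reference data (an "Email" column is assumed; addresses are made up)
        INSERT INTO dbo.Email (Email)
        VALUES ('john.lennon@beatles.com'),
               ('paul.mccartney@beatles.com'),
               ('george.harrison@beatles.com'),
               ('ringo.starr@beatles.com');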
    Creating and Running the Test
    As I mention, the test I’m going to use here is a very simple one: are the email addresses in my reference table valid? This isn’t, of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really those of the Fab Four) – but just a very basic check of the format used. I’ve taken the relevant SQL from this Stack Overflow article.
    In SSMS select “SQL Test” from the Tools menu, then click on + New Test. In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example. Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test that needs filling in.
    We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. This needs to be copied and pasted into the query window, to replace the default given by tSQLt:

        -- Basic email check test
        ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
        AS
        BEGIN
            SET NOCOUNT ON;

            DECLARE @Output VARCHAR(MAX);
            SET @Output = '';

            SELECT @Output = @Output + Email + CHAR(13) + CHAR(10)
            FROM dbo.Email
            WHERE Email NOT LIKE '%_@__%.__%';

            IF @Output > ''
            BEGIN
                SET @Output = CHAR(13) + CHAR(10) + @Output;
                EXEC tSQLt.Fail @Output;
            END
        END;

    Once this script is entered, hit execute to add the stored procedure to the database. Before committing the test to source control, it’s worth just checking that it works! For a positive test, click on “SQL Test” from the Tools menu, then click Run Tests. You should see a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something, by showing a failed test. Edit one of the email addresses in your table to an incorrect format, then re-run the same SQL Test as before. Great – we now know that our test is really doing something! You’ll also see a useful error message at the bottom of SSMS. (Leave the email address as invalid for now, for the next steps.) The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).
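    Incidentally, the SQL Test window is essentially a front-end for tSQLt’s own stored procedures, so the same checks can be run from any query window – handy to know when we hand things over to a build server shortly. These procedure names come straight from the tSQLt documentation:

        -- run just the MyChecks test class
        EXEC tSQLt.Run 'MyChecks';
        -- or run every tSQLt test in the database
        EXEC tSQLt.RunAll;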
    Checking that the Tests are Running as Integration Tests
    After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI database shows that the SPROC for the new test has been moved over to the database. However, this is not exactly what we were after. We didn’t want to just copy objects from one database to another, but to actually run the tests as part of the build/integration test process; i.e. we’re continuously checking any changes we make (in this case, to the reference data emails), to ensure we’re not breaking a test that we’ve set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows – sadly – a successful build!
    To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

        <!-- tSQLt tests -->
        <!-- Optional -->
        <!-- To run tSQLt tests in source control for the database, enter true. -->
        <enableTsqlt>true</enableTsqlt>

    Now, if we re-run the build in Bamboo (NB: I’ve moved to a new server here, hence the different address and build number): superb – a broken build!! The error message isn’t great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log is towards the bottom:

        21-Jun-2013 11:35:19 Build FAILED.
        21-Jun-2013 11:35:19
        21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
        21-Jun-2013 11:35:19 (sqlCI target) ->
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

    As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build. This should have fixed the build – it worked!
    Summary
    This has been a very quick run through the implementation of CI for databases, including tSQLt tests to check whether your database updates are working. The next post in this series will focus on automated deployment: we’ve tested our database changes, so how can we now deploy these to target sites?


  • Top Tweets SOA Partner Community – May 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity
    SOA Community: BPMN2.0 Oracle notations poster from eaiesb http://wp.me/p10C8u-pu
    Torsten Winterberg: Look out for new Oracle #BPM edition coming up soon: The Oracle BPM Standard edition! Great news for easy entry, small licence fees. Yes!
    Danilo Schmiedel: Had a great chat with customer yesterday about #OracleBPM. Next step will be a 5day event combining modeling and implementation @soacommunity
    Frank Nimphius: Still reading "Oracle Business Process Management Suite 11g Handbook". Excellent resource for a non-SOA but ADF guy like me ;-)
    Oracle: New webcast: Maximize #Oracle #WebLogic Server ROI with Oracle #Enterprise #Manager 12c on May 2 at 10 am PT. Register http://bit.ly/JFUrR9
    OTNArchBeat: BPM in Financial Services Industry | Sanjeev Sharma http://bit.ly/HCCxui
    JDeveloper & ADF: BPEL 11.1.1.6 Certified for Prebuilt E-Business Suite 12.1.3 SOA Integrations http://dlvr.it/1V9SxR
    Oracle UPK & Tutor: Collaborate Attendees: Visit the UPK demo pod, SIGS, and sessions: If you are attending Collaborate 2012 - Sun. http://bit.ly/J39z65
    Heidi Buelow: see #fmw track RT @demed: Are you going to #KSCOPE12 in San Antonio, June 24-28? http://kscope12.com/component/seminar/seminarslist?topicsid=6 Use promo code Fusion for discount!
    Sabine Leitner: #SIG #Middleware 15.05. Frankfurt #Oracle #DOAG Planung & Aufbau WebLogic Server #WLS http://bit.ly/HKsCWV @OracleWebLogic @soacommunity
    SOA Community: MDS explorer by Red Samurai http://wp.me/p10C8u-pp
    Biemond ®: Retrieve or set a HTTP header from Oracle BPEL: With Oracle SOA Suite 11g patch 12928372 you can finally retrieve http://bit.ly/JejTHC
    Lucas Jellema: Call for papers for UKOUG 2012 has opened: http://techandebs.ukoug.org/default.asp?p=9306 (deadline 1st of June)
    OTNArchBeat: BPM API usage: List all BPM Processes for a user | Kavitha Srinivasan http://bit.ly/IJKVfj
    demed: SOA, Cloud + Service Tech symposium (London, Sep 24-25) call for paper is open http://www.servicetechsymposium.com/call2012.php @techsymp #oraclesoa
    OracleBlogs: Lessons learned configuring OER 11g Workflows http://ow.ly/1iMsKh
    OTNArchBeat: Scripting WebLogic Admin Server Startup | Antony Reynolds http://bit.ly/IH5ciU
    orclateamsoa: A-Team Blog #ateam: BPM API usage: List all BPM Processes for a user http://ow.ly/1iJADp
    Lucas Jellema: Just blogged about our Live FMW Application Development show during OBUG 2012, next Tuesday 24th April in Maastricht
    OracleBlogs: OEG integration with OSB/OWSM - 11g http://ow.ly/1iKx7G
    SOA Community: SOA Community Newsletter April 2012 http://wp.me/p10C8u-pl
    Frank Dorst: RT @whitehorsesnl: Whiteblog: BPM Process Spaces in Oracle Webcenter (Patch Set 5) (http://bit.ly/Hxzh29) #soacommunity #bpm #oracle
    David Shaffer: The Advanced SOA suite training class next week in Redwood City is full! Learned a lot about accepting credit card payments.
    OTNArchBeat: Running Built-In Test Simulator with SOA Suite Healthcare 11g in PS4 and PS5 | Shub Lahiri http://bit.ly/IgI8GN
    SOA Community: Oracle Fusion Middleware Innovation Awards 2012, Call for Nominations #ofmaward #soa #bpm #soacommunity
    OTNArchBeat: Updated SOA Documents now available in ITSO Reference Library http://bit.ly/I3Y6Sg
    Oracle Middleware: Data Integrator & SOA - why 2 products better than one for integration? Webcast: Apr 24 10 AM PT http://bit.ly/IzmtKR
    Andrejus Baranovskis: Red Samurai MDS Cleaner V2.0 http://fb.me/FxLVz82w
    SOA Community: “@rluttikhuizen: Chapter 4 of SOA Made Simple book "Classification of Services" ready for collegial review” can #soacommunity get a preview?
    Xavier Verhaeghe: #Gartner figures are out: #Oracle top in App Server market share (43.1%) and Relational #Database, too (48.8%) in 2011
    Sabine Leitner: WLS12c, Exa*, IDM, EM12c, DB @ Private, Public, Hybrid #Cloud Event 26.04. FFM #Oracle http://bit.ly/zcRuxi @OracleCloudZone @soacommunity
    Michel Schildmeijer: @wlscommunity @MiddlewareMagic @OTNArchBeat @Oracle_Fusion Oracle WebLogic / SOA Suite 11g HACMP Cluster take-over http://lnkd.in/G78qMd
    Oracle Middleware: Hear how ODI and SOA's unified approach are key to untangling your business. April 24 10AM PT http://bit.ly/IdcsUz #Oracle
    OTNArchBeat: Using SAP Adapter with OSB 11g (PS3) | Shub Lahiri http://bit.ly/IswR9K
    SOA Community: Integrating with Oracle Fusion Applications: Discovering Integration Artifacts https://blogs.oracle.com/governance/entry/integrating_with_oracle_fusion_applications #soacommunity #oer #governance
    OracleBlogs: Tuning B2B Server Engine Threads in SOA Suite 11g http://ow.ly/1iH5bx
    OracleBlogs: Top Tweets SOA Partner Community April 2012 http://ow.ly/1iVHfA
    SOA Community: Oracle SOA Suite 11g Database Growth Management http://wp.me/p10C8u-pi
    Sabine Leitner: WLS12c, Exa*, IDM, EM12c, DB @ Private, Public, Hybrid #Cloud Event 24.04. München #Oracle http://bit.ly/zcRuxi @OracleCloudZone @soacommunity
    SOA Community: Testing Business Rules by Mark Nelson http://redstack.wordpress.com/2012/04/18/testing-business-rules/ #soacommunity #soa #rules #oracle
    SOA Community: Top Tweets SOA Partner Community - April 2012 http://wp.me/p10C8u-pn
    OTNArchBeat: Webcast: Untangle Your Business with Oracle Unified SOA and Data Integration - April 24 http://bit.ly/IQexqT
    OTNArchBeat: "Do more with SOA Integration: Best of Packt" contributors include @gschmutz, @llaszews, many others http://amzn.to/HVWwYt
    ServiceTechSymposium: Symposium agenda page coming together - page launched today with keynotes, sessions to be added shortly. http://www.servicetechsymposium.com/agenda2012.php
    SOA Community: Shipping Specialization plaques - congratulation #Fujitsu - request yours https://soacommunity.wordpress.com/2011/02/23/who-are-the-soa-experts-specialization-recognized-by-customers/ #soacommunity #OPN http://pic.twitter.com/YMRm2ion
    ServiceTechSymposium: Call for Presentations Submission Deadline Moved Up to May 21, 2012. Send your presentation submissions ASAP!
    ServiceTechSymposium: Symposium Keynote by Vicente Navarro, European Space Agency, added to agenda: "SOA & Service-Orientation at the European Space Agency"
    SOA Community: Running a large #soa project? Make sure you read - Oracle SOA Suite 11g Database Growth Management #soacommunity #opn
    SOA Community: List all BPM Processes for a user by Yogesh l #bpm #oracle #soacommunity
    For regular information on Oracle SOA Suite become a member in the SOA Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required).


  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices – STAGE 4: AUTOMATED DEPLOYMENT
    If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advance warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.
    Our Database Delivery Learning Program consists of four stages – really three topics: source controlling a database, running continuous integration processes, and setting up automated deployment (the middle topic is split in two – basic and advanced continuous integration – making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic CI, then advanced CI – you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great – we know that our updates work, that the upgrade process works, and that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable.
    Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment, which also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users”.
    There’s another really useful piece here on Simple-Talk, written by Phil Factor, about the need for continuous delivery and how it applies to the database – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app).
    So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and, hey presto, I’m ready? If only it were that simple. Below I list some of the areas where it’s worth spending a little time, where a little planning and prep could go a long way.
    It’s also worth pointing out that this should really be an evolving process. Depending on your starting point, of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912
    For now, in this post, we’ll look at the following areas for your checklist:
    You and Your Team
    Environments
    The Deployment Process
    Rollback and Recovery
    Development Practices
    You and Your Team
    It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment, with the benefits coming way down the line. But the benefits are huge. For HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:
    2008 to present: overall development costs reduced by 40%
    Number of programs under development increased by 140%
    Development costs per program down 78%
    Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)
    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing – that they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value, I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org.
    I’ve spoken to many customers who have implemented database CI who describe their deployment process as “the point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is: if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well.
    There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process, and so on. Or, of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? It’s worth talking to them, to find out.
    As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.
    Actions:
    Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do – or, if you’re the DBA yourself, get the dev team up to speed with your plans,
    Get your boss involved too and make sure he/she is bought in to the investment.
    Environments
    Where are you going to deploy to? Really this question is: what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “what we want, to do continuous delivery properly” and “what we’re currently stuck with”. Some of these differences are:
    What we want: Each developer with their own dedicated database environment. What we’ve got: A single shared “development” environment, used by everyone at once.
    What we want: An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine. What we’ve got: In fact, if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question…
    What we want: A separate QA environment used explicitly for manual testing prior to release. What we’ve got: “We just test on the dev environments, or maybe pre-production.”
    What we want: A proper pre-production (or “staging”) box that matches production as closely as possible. What we’ve got: Hopefully a pre-production box of some sort. But does it match production closely!?
    What we want: A production environment reproducible from source control. What we’ve got: A production box which has drifted significantly from anything in source control.
    The big question is: how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data?
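    If masking is needed, even a crude approach helps. A minimal T-SQL sketch of a dev refresh – where the database name, backup path, logical file names and the dbo.Email column are all placeholder assumptions, and CONCAT needs SQL Server 2012 or later – might be:

        -- refresh a dev copy from last night's production backup
        RESTORE DATABASE RedGateApp_Dev
        FROM DISK = N'D:\Backups\RedGateApp.bak'
        WITH REPLACE,
             MOVE 'RedGateApp'     TO N'D:\Data\RedGateApp_Dev.mdf',
             MOVE 'RedGateApp_log' TO N'D:\Data\RedGateApp_Dev.ldf';

        -- then overwrite anything sensitive before handing it to developers
        UPDATE dbo.Email SET Email = CONCAT('user', id, '@example.com');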
    And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application. There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control, of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:
    Dedicated development databases,
    An Integration server used for testing continuous integration and running unit tests [NB: this is the point up to which deployments are automatic, without human intervention; each deployment after this point is a one-click (but human) action],
    QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
    Pre-production – the environment you use to test the production release process,
    Production.
    * A note on the use of the word “automatic”: when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic, in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.
    Actions:
    Get your environments set up and ready,
    Set access permissions appropriately,
    Make sure everyone understands what the environments will be used for (it’s not a “free-for-all”, with all environments to be accessed, played with and changed by development).
    The Deployment Process
    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”:
    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it in to pre-prod,
    2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing) and pre-production. This generates a script,
    3. A user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar,
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped),
    5. If all is working, run the script on production.*
    * This assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com if you’re interested in testing early versions.
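    In the meantime, you can at least spot recent, unexpected hand-edits to production objects with a simple query against the system catalog. A rough sketch – the seven-day window is arbitrary, and this only catches changes to schema objects, not to permissions or data:

        -- list user objects changed on this database in the last week
        SELECT name, type_desc, create_date, modify_date
        FROM sys.objects
        WHERE is_ms_shipped = 0
          AND modify_date > DATEADD(DAY, -7, SYSDATETIME())
        ORDER BY modify_date DESC;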
    There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme is pure continuous deployment: whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint-hearted – and really not something we recommend. At the other extreme, you might be more comfortable with a semi-automated process: the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And there’s anything in between, of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?”
    NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.
    Actions:
    Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?”
    Repeat for earlier environments (QA and so on).
    Rollback and Recovery
    If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move to a more automated database deployment process, you’re far more likely to be deploying more frequently than before – no longer once every 6 months, but maybe once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles, such as:
    Immediately restore from backup,
    Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!),
    Have fallback environments – for example, using a blue-green deployment pattern.
    Different options have pros and cons – some are easier to set up, some require more investment in infrastructure, and of course some work better than others (the key issue with using backups is the loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.
    Actions:
    Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and your requirements for a completely failsafe process.
    Development Practices
    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and the linked application. So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “branch by abstraction”. Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner, so that you can always roll back without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace
    As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily).
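    To make the idea concrete, here is a minimal T-SQL sketch of an expand-then-contract change made in backward-compatible increments. The table and column names are invented purely for illustration:

        -- Release 1: add the new column alongside the old one; existing code keeps working
        ALTER TABLE dbo.Customer ADD DisplayName NVARCHAR(200) NULL;

        -- Release 1 (cont.): backfill the new column from the old one
        -- (in practice you'd also add a trigger or dual writes to keep the two in sync
        -- while old and new application versions coexist)
        UPDATE dbo.Customer SET DisplayName = Name WHERE DisplayName IS NULL;

        -- Release 2: switch application reads and writes over to DisplayName
        -- (each release so far can be rolled back without losing new data)

        -- Release 3: once nothing references the old column, contract
        -- ALTER TABLE dbo.Customer DROP COLUMN Name;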
    There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515
    But the question is: how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start.
    Actions:
    For your business, work out how far down the path you want to go, amending your database development patterns towards “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes),
    Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas and the rationale behind them early.
    Summary
    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly “getting the team ready for the changes that are coming” and “planning out your pipeline, environments, patterns and practices for development” – though there will be more detail, depending on where you’re coming from and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.


  • Why is "wsdl" namespace interjected into action name when using savon for ruby soap communication?

    - by Nick Gorbikoff
    I'm trying to access a SOAP service I don't control. One of the actions is called ProcessMessage. I follow the examples and generate a SOAP request, but I get an error back saying that the action doesn't exist. I traced the problem to the way the body of the envelope is generated:
    <env:Envelope ... >
      <env:Header>
        <wsse:Security ... >
          <wsse:UsernameToken ...>
            <wsse:Username>USER</wsse:Username>
            <wsse:Nonce>658e702d5feff1777a6c741847239eb5d6d86e48</wsse:Nonce>
            <wsu:Created>2010-02-18T02:05:25Z</wsu:Created>
            <wsse:Password ... >password</wsse:Password>
          </wsse:UsernameToken>
        </wsse:Security>
      </env:Header>
      <env:Body>
        <wsdl:ProcessMessage>
          <payload> ...... </payload>
        </wsdl:ProcessMessage>
      </env:Body>
    </env:Envelope>
    That ProcessMessage tag should be:
    <ProcessMessage xmlns="http://www.starstandards.org/webservices/2005/10/transport">
    That's what it is when generated by the sample Java app, and it works. That tag is the only difference between what my Ruby app generates and the sample Java app. Is there any way to get rid of the "wsdl:" namespace in front of that one tag and add an attribute like that? Barring that, is there a way to force the action tag not to be generated, but just passed as a string like the rest of the body? Here is my code:
    require 'rubygems'
    require 'savon'

    client = Savon::Client.new "https://gmservices.pp.gm.com/ProcessMessage?wsdl"

    response = client.process_message! do |soap, wsse|
      wsse.username = "USER"
      wsse.password = "password"
      soap.namespace = "http://www.starstandards.org/webservices/2005/10/transport" # makes no difference
      soap.action = "ProcessMessage" # makes no difference
      soap.input = "ProcessMessage"  # makes no difference
      # my body at this point is just one big xml string
      soap.body = "<payload>...</payload>"
      # putting a <ProcessMessage> tag here doesn't help, as it just creates a duplicate
      # tag in the body, since Savon keeps interjecting the <wsdl:ProcessMessage> tag
    end
    Thank you.
    P.S.: I tried handsoap, but it doesn't support HTTPS and is confusing; and I tried soap4r, but it's even more confusing than handsoap.


  • Learning Treetop

    - by cmartin
    I'm trying to teach myself Ruby's Treetop grammar generator. I am finding that not only is the documentation woefully sparse for the "best" one out there, but that it doesn't seem to work as intuitively as I'd hoped. On a high level, I'd really love a better tutorial than the on-site docs or the video, if there is one. On a lower level, here's a grammar I cannot get to work at all:
    grammar SimpleTest
      rule num
        (float / integer)
      end
      rule float
        (
          (( '+' / '-')? plain_digits '.' plain_digits) /
          (( '+' / '-')? plain_digits ('E' / 'e') plain_digits ) /
          (( '+' / '-')? plain_digits '.') /
          (( '+' / '-')? '.' plain_digits)
        ) {
          def eval
            text_value.to_f
          end
        }
      end
      rule integer
        (( '+' / '-' )? plain_digits) {
          def eval
            text_value.to_i
          end
        }
      end
      rule plain_digits
        [0-9] [0-9]*
      end
    end
    When I load it and run some assertions in a very simple test object, I find that
    assert_equal @parser.parse('3.14').eval, 3.14
    works fine, while
    assert_equal @parser.parse('3').eval, 3
    raises the error:
    NoMethodError: private method `eval' called for #<…>
    If I reverse integer and float in the grammar, both integers and floats give me this error. I think this may be related to limited lookahead, but I cannot find any information in any of the docs to even cover the idea of evaluating in the "or" context. A bit more info that may help – here's pp information for both those parse() results. The float:
    SyntaxNode+Float4+Float0 offset=0, "3.14" (eval,plain_digits):
      SyntaxNode offset=0, ""
      SyntaxNode+PlainDigits0 offset=0, "3":
        SyntaxNode offset=0, "3"
        SyntaxNode offset=1, ""
      SyntaxNode offset=1, "."
      SyntaxNode+PlainDigits0 offset=2, "14":
        SyntaxNode offset=2, "1"
        SyntaxNode offset=3, "4":
          SyntaxNode offset=3, "4"
    The integer… note that it seems to have matched via the integer rule, but not caught the eval() method:
    SyntaxNode+Integer0 offset=0, "3" (plain_digits):
      SyntaxNode offset=0, ""
      SyntaxNode+PlainDigits0 offset=0, "3":
        SyntaxNode offset=0, "3"
        SyntaxNode offset=1, ""
    Update: I got my particular problem working, but I have no clue why:
    rule integer
      ( '+' / '-' )? plain_digits {
        def eval
          text_value.to_i
        end
      }
    end
    This makes no sense with the docs that are present, but just removing the extra parentheses made the match include the Integer1 class as well as Integer0. Integer1 is apparently the class holding the eval() method. I have no idea why this is the case. I'm still looking for more info about Treetop.


  • PayPal sandbox Buy Now Problem

    - by Tushar Ahirrao
    Hi, I have a PayPal sandbox test account and I want to create a 'Buy Now' button. I am trying it with GWT, but it's not even working with a simple HTML form. It displays a 'Buy Now' button on the HTML page and, after clicking on it, redirects to the PayPal site, where it asks me to log in to buy the product. But after that it keeps displaying the message:
    The email address or password you have entered does not match our records. Please try again.
    I am using a buyer user to purchase the product, and I am pretty sure about the username and password. Here is the simple HTML form which I am trying:
    <form action="https://www.paypal.com/cgi-bin/webscr" method="post" id="payPalForm">
      <input type="hidden" name="item_number" value="1">
      <input type="hidden" name="cmd" value="_xclick">
      <input type="hidden" name="no_note" value="1">
      <input type="hidden" name="business" value="[email protected]">
      <input type="hidden" name="lc" value="US">
      <input type="hidden" name="button_subtype" value="services">
      <input type="hidden" name="cn" value="Add special instructions to the seller">
      <input type="hidden" name="no_shipping" value="2">
      <input type="hidden" name="rm" value="1">
      <input type="hidden" name="bn" value="PP-BuyNowBF:btn_paynow_SM.gif:NonHosted">
      <input type="hidden" name="variables" value="http://google.com">
      <input type="hidden" name="cancel_return" value="http://google.com">
      <input type="hidden" name="notify_url" value="http://google.com">
      <input type="hidden" name="return" value="http://freelanceswitch.com/payment-complete/">
      <input type="hidden" name="currency_code" value="USD">
      <input name="item_name" type="hidden" value="Deal Name">
      <input name="amount" type="hidden" value="500">
      <input type="submit" name="Submit" value="Submit">
    </form>
    Please advise. Thank you.


  • Re-order form fields on submit url

    - by user2521764
    I have a GET form with several visible and hidden input fields. When the form is submitted, selected fields with their values are appended to the URL in the order they are placed in the form. Is there a way to re-order the parameters in the URL using jQuery? Note that, for reasons of usability, I cannot re-order the elements on the form itself. I know it begs the question "why would I want to do it?", but the reason is that I will be hitting a static page, so the order of the parameters has to be exactly as it is in the static page's URL. For example, my form returns the URL:
    http://someurl??names=comm&search=all&type=list
    while the static page has the URL:
    http://someurl??search=all&type=list&names=comm
    A simplified form example is here:
    <form id="search_form" method="get" action="http://www.cbif.gc.ca/pls/pp/ppack.jump">
      <h2>Choose which names you want to be displayed</h2>
      <select name="names">
        <option value="comm">Common names</option>
        <option value="sci">Scientific names</option>
      </select>
      <h2>Choose how you want to view the results</h2>
      <input type="radio" name="search" value="all" id="complete" checked="checked" />
      <label for="complete" id="completeLabel">Complete list</label>
      <br/>
      <input type="radio" name="p_null" value="house" id="house" />
      <label for="house" id="houseLabel">House plants only</label>
      <br/>
      <input type="radio" name="p_null" value="illust" id="illustrat" />
      <label for="illustrat" id="illustratLabel">Plants with Illustrations</label>
      <br/>
      <input type="hidden" name="type" value="list" />
      <input type="submit" value="Submit" />
    </form>
    I can get the form fields with their values using $('#search_form').serializeArray() and massage the array like I want to, but I don't know how to set it back, i.e. modify the serialized values so that the submitted URL has my order of parameters. I'm not even sure if this is the right way to go about it, so any pointers would be greatly appreciated.

