Search Results

Search found 18092 results on 724 pages for 'matt long'.


  • Cannot create Windows Service for MySQL (Vista)

    - by Matt W
    I uninstalled MySQL 5.1 from Vista because I had lost the username/password; however, I forgot to stop the MySQL service before doing so. Now whenever I try to reinstall it I get to the final configuration screen, which then fails with the message "Cannot create Windows Service for MySQL Error:0". I've uninstalled MySQL again and removed all related files, folders and registry entries, but still no joy. Does anyone know a way to resolve this? I can't seem to find a fix online anywhere.
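
    One common cause, offered here only as a hedged suggestion, is that the orphaned service registration left over from the first install is still present, so the installer cannot create a new service with the same name. From an elevated Command Prompt you can check for it and remove it before re-running the configuration wizard (the service name MySQL is the installer's default and may differ on your machine):

        sc query MySQL
        sc delete MySQL

    After the deletion has taken effect (a reboot makes sure of it), run the MySQL configuration wizard again.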

    Read the article

  • crontab OR conditions

    - by matt
    Crontab fields seem to be combined as AND conditions. For example, 0 9-5 * * 1-5 runs when all the conditions are met: "minute is zero AND the hour is between 9 and 5 AND the day is between Monday and Friday". What I'd like is an OR, so I can say "run Monday to Friday OR on the 8th day of the month". Does such a thing exist? I realise you could add two entries, but with lots of entries that is one more thing to forget.
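
    In the widely used Vixie/ISC cron there is in fact one built-in OR: the crontab(5) man page notes that when both the day-of-month and day-of-week fields are restricted (neither is *), the entry fires when either field matches, which is exactly the combination asked for here. A short sketch (the script path is a placeholder):

        # Fires at 09:00 Monday-Friday OR at 09:00 on the 8th of the month,
        # because both day fields are restricted.
        0 9 8 * 1-5  /path/to/job.sh

        # Equivalent, more explicit workaround: two entries for the same job.
        0 9 * * 1-5  /path/to/job.sh
        0 9 8 * *    /path/to/job.sh

    For any other pair of fields the two-entry form is the only option, since cron has no general OR operator.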

    Read the article

  • What is a Delphi version of the C++ header for the DVP7010B video card DLL?

    - by grzegorz1
    I need help with converting c++ header file to delphi. I spent several days on this problem without success. Below is the original header file and my Delphi translation. C++ header #if _MSC_VER > 1000 #pragma once #endif // _MSC_VER > 1000 #ifdef DVP7010BDLL_EXPORTS #define DVP7010BDLL_API __declspec(dllexport) #else #define DVP7010BDLL_API __declspec(dllimport) #endif #define MAXBOARDS 4 #define MAXDEVS 4 #define ID_NEW_FRAME 37810 #define ID_MUX0_NEW_FRAME 37800 #define ID_MUX1_NEW_FRAME 37801 #define ID_MUX2_NEW_FRAME 37802 #define ID_MUX3_NEW_FRAME 37803 typedef enum { SUCCEEDED = 1, FAILED = 0, SDKINITFAILED = -1, PARAMERROR = -2, NODEVICES = -3, NOSAMPLE = -4, DEVICENUMERROR = -5, INPUTERROR = -6, // VERIFYHWERROR = -7 } Res; typedef enum tagAnalogVideoFormat { Video_None = 0x00000000, Video_NTSC_M = 0x00000001, Video_NTSC_M_J = 0x00000002, Video_PAL_B = 0x00000010, Video_PAL_M = 0x00000200, Video_PAL_N = 0x00000400, Video_SECAM_B = 0x00001000 } AnalogVideoFormat; typedef enum { SIZEFULLPAL=0, SIZED1, SIZEVGA, SIZEQVGA, SIZESUBQVGA } VideoSize; typedef enum { STOPPED = 1, RUNNING = 2, UNINITIALIZED = -1, UNKNOWNSTATE = -2 } CapState; class IDVP7010BDLL { public: int AdvDVP_CreateSDKInstence(void **pp); virtual int AdvDVP_InitSDK() PURE; virtual int AdvDVP_CloseSDK() PURE; virtual int AdvDVP_GetNoOfDevices(int *pNoOfDevs) PURE; virtual int AdvDVP_Start(int nDevNum, int SwitchingChans, HWND Main, HWND hwndPreview) PURE; virtual int AdvDVP_Stop(int nDevNum) PURE; virtual int AdvDVP_GetCapState(int nDevNum) PURE; virtual int AdvDVP_IsVideoPresent(int nDevNum, BOOL* VPresent) PURE; virtual int AdvDVP_GetCurFrameBuffer(int nDevNum, int VMux, long* bufSize, BYTE* buf) PURE; virtual int AdvDVP_SetNewFrameCallback(int nDevNum, int callback) PURE; virtual int AdvDVP_GetVideoFormat(int nDevNum, AnalogVideoFormat* vFormat) PURE; virtual int AdvDVP_SetVideoFormat(int nDevNum, AnalogVideoFormat vFormat) PURE; virtual int AdvDVP_GetFrameRate(int nDevNum, int *nFrameRate) PURE; virtual int AdvDVP_SetFrameRate(int nDevNum, int SwitchingChans, int nFrameRate) PURE; virtual int AdvDVP_GetResolution(int nDevNum, VideoSize *Size) PURE; virtual int AdvDVP_SetResolution(int nDevNum, VideoSize Size) PURE; virtual int AdvDVP_GetVideoInput(int nDevNum, int* input) PURE; virtual int AdvDVP_SetVideoInput(int nDevNum, int input) PURE; virtual int AdvDVP_GetBrightness(int nDevNum, int input, long *pnValue) PURE; virtual int AdvDVP_SetBrightness(int nDevNum, int input, long nValue) PURE; virtual int AdvDVP_GetContrast(int nDevNum, int input, long *pnValue) PURE; virtual int AdvDVP_SetContrast(int nDevNum, int input, long nValue) PURE; virtual int AdvDVP_GetHue(int nDevNum, int input, long *pnValue) PURE; virtual int AdvDVP_SetHue(int nDevNum, int input, long nValue) PURE; virtual int AdvDVP_GetSaturation(int nDevNum, int input, long *pnValue) PURE; virtual int AdvDVP_SetSaturation(int nDevNum, int input, long nValue) PURE; virtual int AdvDVP_GPIOGetData(int nDevNum, int DINum, BOOL* value) PURE; virtual int AdvDVP_GPIOSetData(int nDevNum, int DONum, BOOL value) PURE; }; Delphi unit IDVP7010BDLL_h; interface uses Windows, Messages, SysUtils, Classes; //{$if _MSC_VER > 1000} //pragma once //{$endif} // _MSC_VER > 1000 {$ifdef DVP7010BDLL_EXPORTS} //const DVP7010BDLL_API = __declspec(dllexport); {$else} //const DVP7010BDLL_API = __declspec(dllimport); {$endif} const MAXDEVS = 4; MAXMUXS = 4; ID_NEW_FRAME = 37810; ID_MUX0_NEW_FRAME = 37800; ID_MUX1_NEW_FRAME = 37801; ID_MUX2_NEW_FRAME = 37802; ID_MUX3_NEW_FRAME = 
37803; // TRec SUCCEEDED = 1; FAILED = 0; SDKINITFAILED = -1; PARAMERROR = -2; NODEVICES = -3; NOSAMPLE = -4; DEVICENUMERROR = -5; INPUTERROR = -6; // TRec // TAnalogVideoFormat Video_None = $00000000; Video_NTSC_M = $00000001; Video_NTSC_M_J = $00000002; Video_PAL_B = $00000010; Video_PAL_M = $00000200; Video_PAL_N = $00000400; Video_SECAM_B = $00001000; // TAnalogVideoFormat // TCapState STOPPED = 1; RUNNING = 2; UNINITIALIZED = -1; UNKNOWNSTATE = -2; // TCapState type TCapState = Longint; TRes = Longint; TtagAnalogVideoFormat = DWORD; TAnalogVideoFormat = TtagAnalogVideoFormat; PAnalogVideoFormat = ^TAnalogVideoFormat; TVideoSize = ( SIZEFULLPAL, SIZED1, SIZEVGA, SIZEQVGA, SIZESUBQVGA); PVideoSize = ^TVideoSize; P_Pointer = ^Pointer; TIDVP7010BDLL = class function AdvDVP_CreateSDKInstence(pp: P_Pointer): integer; virtual; stdcall; abstract; function AdvDVP_InitSDK():Integer; virtual; stdcall; abstract; function AdvDVP_CloseSDK():Integer; virtual; stdcall; abstract; function AdvDVP_GetNoOfDevices(pNoOfDevs : PInteger) :Integer; virtual; stdcall; abstract; function AdvDVP_Start(nDevNum : Integer; SwitchingChans : Integer; Main : HWND; hwndPreview: HWND ) :Integer; virtual; stdcall; abstract; function AdvDVP_Stop(nDevNum : Integer ):Integer; virtual; stdcall; abstract; function AdvDVP_GetCapState(nDevNum : Integer ):Integer; virtual; stdcall; abstract; function AdvDVP_IsVideoPresent(nDevNum : Integer; VPresent : PBool) :Integer; virtual; stdcall; abstract; function AdvDVP_GetCurFrameBuffer(nDevNum : Integer; VMux : Integer; bufSize : PLongInt; buf : PByte) :Integer; virtual; stdcall; abstract; function AdvDVP_SetNewFrameCallback(nDevNum : Integer; callback : Integer ) :Integer; virtual; stdcall; abstract; function AdvDVP_GetVideoFormat(nDevNum : Integer; vFormat : PAnalogVideoFormat) :Integer; virtual; stdcall; abstract; function AdvDVP_SetVideoFormat(nDevNum : Integer; vFormat : TAnalogVideoFormat ) :Integer; virtual; stdcall; abstract; function AdvDVP_GetFrameRate(nDevNum : Integer; nFrameRate : Integer) :Integer; virtual; stdcall; abstract; function AdvDVP_SetFrameRate(nDevNum : Integer; SwitchingChans : Integer; nFrameRate : Integer) :Integer; virtual; stdcall; abstract; function AdvDVP_GetResolution(nDevNum : Integer; Size : PVideoSize) :Integer; virtual; stdcall; abstract; function AdvDVP_SetResolution(nDevNum : Integer; Size : TVideoSize ) :Integer; virtual; stdcall; abstract; function AdvDVP_GetVideoInput(nDevNum : Integer; input : PInteger) :Integer; virtual; stdcall; abstract; function AdvDVP_SetVideoInput(nDevNum : Integer; input : Integer) :Integer; virtual; stdcall; abstract; function AdvDVP_GetBrightness(nDevNum : Integer; input: Integer; pnValue : PLongInt) :Integer; virtual; stdcall; abstract; function AdvDVP_SetBrightness(nDevNum : Integer; input: Integer; nValue : LongInt) :Integer; virtual; stdcall; abstract; function AdvDVP_GetContrast(nDevNum : Integer; input: Integer; pnValue : PLongInt) :Integer; virtual; stdcall; abstract; function AdvDVP_SetContrast(nDevNum : Integer; input: Integer; nValue : LongInt) :Integer; virtual; stdcall; abstract; function AdvDVP_GetHue(nDevNum : Integer; input: Integer; pnValue : PLongInt) :Integer; virtual; stdcall; abstract; function AdvDVP_SetHue(nDevNum : Integer; input: Integer; nValue : LongInt) :Integer; virtual; stdcall; abstract; function AdvDVP_GetSaturation(nDevNum : Integer; input: Integer; pnValue : PLongInt) :Integer; virtual; stdcall; abstract; function AdvDVP_SetSaturation(nDevNum : Integer; input: Integer; nValue : 
LongInt) :Integer; virtual; stdcall; abstract; function AdvDVP_GPIOGetData(nDevNum : Integer; DINum:Integer; value : PBool) :Integer; virtual; stdcall; abstract; function AdvDVP_GPIOSetData(nDevNum : Integer; DONum:Integer; value : Boolean) :Integer; virtual; stdcall; abstract; end; function IDVP7010BDLL : TIDVP7010BDLL ; stdcall; implementation function IDVP7010BDLL; external 'DVP7010B.dll'; end.

    Read the article

  • How to maintain the state of buttons in a custom ListView in Android

    - by Akshay
    I have custom ListView with three TextView three Button and three Chronometer. And the situation is I am loading the ListView properly.But while loading ListView I am disabling some button in the ListView by checking one parameter. Up to this point ListView is showing it's row properly. But when I am scrolling the ListView at that time previously enabled Button are getting disabled.What I am doing wrong I am not getting can one please point out my mistake Or any suggestion. Here is my Adapter class. public class OrderSmartKitchenAdapter extends BaseAdapter { private int flagDeliveryComplete = 0; private int flagPreparationComplete = 0; private int flagPreparationStarted = 0; private List<OrderitemdetailsBO> list = new ArrayList<OrderitemdetailsBO(); private int orderStatus; public OrderSmartKitchenAdapter() { // TODO Auto-generated constructor stub } public void setOrderList(List<OrderitemdetailsBO> orderList) { this.list = orderList; } @Override public int getCount() { // TODO Auto-generated method stub Log.i("OrderItemList Size :-", Integer.toString(list.size())); return list.size(); } @Override public Object getItem(int position) { // TODO Auto-generated method stub return null; } @Override public long getItemId(int position) { // TODO Auto-generated method stub return 0; } @Override public View getView(final int position, View convertView,ViewGroup parent) { // TODO Auto-generated method stub final ViewHolder viewHolder ; if (convertView == null) { layoutInflater = LayoutInflater.from(myContext); convertView = layoutInflater.inflate(R.layout.table_row_view,null); viewHolder = new ViewHolder(); viewHolder.txtTableNumber = (TextView) convertView.findViewById(R.id.txtTableNumber); viewHolder.txtMenuItem = (TextView) convertView.findViewById(R.id.txtMenuItem); viewHolder.txtQuantity = (TextView) convertView.findViewById(R.id.txtQuantity); viewHolder.txtOrderAcceptanceTime = (TextView) convertView.findViewById(R.id.txtOrderAcceptanceTime); viewHolder.txtElapsedTimeOfOrderAcceptance = (Chronometer) convertView.findViewById(R.id.txtElapsedTimeOfOrderAcceptance); viewHolder.btnPreparationStart = (Button) convertView.findViewById(R.id.btnPreparationStart); viewHolder.btnPreparationStart.setTag(position); viewHolder.txtElapsedTimeForPreparation = (Chronometer) convertView.findViewById(R.id.txtElapsedTimeForPrepatration); viewHolder.btnPreparationComplete = (Button) convertView.findViewById(R.id.btnPreparationCompleted); viewHolder.btnPreparationComplete.setTag(position); viewHolder.txtElapsedTimeForDeliveryComplete = (Chronometer) convertView.findViewById(R.id.txtElapsedTimeForCompleation); viewHolder.btnDeliveryComplete = (Button) convertView.findViewById(R.id.btnOrderComplete); viewHolder.btnDeliveryComplete.setTag(position); convertView.setTag(viewHolder); } else{ viewHolder = (ViewHolder)convertView.getTag(); viewHolder.btnDeliveryComplete.setTag(position); viewHolder.btnPreparationComplete.setTag(position); viewHolder.btnPreparationStart.setTag(position); } if (list.get(position) != null) { OrderitemdetailsBO orderitemdetailsBO = new OrderitemdetailsBO(); orderitemdetailsBO = list.get(position); viewHolder.txtTableNumber.setText(orderitemdetailsBO.getOrderitemid().toString()); viewHolder.txtMenuItem.setText(orderitemdetailsBO.getMenuitemname().toString()); viewHolder.txtQuantity.setText(orderitemdetailsBO.getQuantity().toString()); Log.i("Table Number :-", Long.toString(orderitemdetailsBO.getOrderitemid())); Log.i("Menu Name :-", orderitemdetailsBO.getMenuitemname().toString()); 
Log.i("Quantity", orderitemdetailsBO.getQuantity().toString()); Date acceptTime = new Date(); acceptTime = orderitemdetailsBO.getOrderdatetime(); viewHolder.txtOrderAcceptanceTime.setText(DateUtil.getDateAsString(acceptTime,"HH:mm")); Log.i("Order Accept Time :-", acceptTime.getMinutes() + ":"+ acceptTime.getSeconds()); orderStatus = orderitemdetailsBO.getOrderstatus(); Date preparationStartTime = new Date(); preparationStartTime = orderitemdetailsBO.getPreparationstarttime(); if(preparationStartTime != null) { Log.i("OrderSmartKitchenActivity", "2 Order Acceptance Time :-" + "Menu Item id "+ orderitemdetailsBO.getOrderitemid() + " Preparation Start time " + orderitemdetailsBO.getPreparationstarttime() ); viewHolder.txtElapsedTimeOfOrderAcceptance.stop(); Log.i("Preparation Start Time :-",preparationStartTime.getMinutes() + ":" + preparationStartTime.getSeconds()); viewHolder.txtElapsedTimeOfOrderAcceptance.setText(DateUtil.getDateAsString(preparationStartTime,"MM:ss")); viewHolder.txtElapsedTimeOfOrderAcceptance.stop(); viewHolder.btnPreparationStart.setEnabled(false); viewHolder.btnPreparationStart.setClickable(false); viewHolder.btnPreparationStart.setBackgroundColor(Color.LTGRAY); } else { Long n = acceptTime.getTime(); Log.i("OrderSmartKitchenActivity", "Order Acceptance Time :-" + "Menu Item id "+ orderitemdetailsBO.getOrderitemid() + " Acceptance time" + Long.toString(n) + " Preparation Start time " + orderitemdetailsBO.getPreparationstarttime() ); // Calculate Time difference viewHolder.txtElapsedTimeOfOrderAcceptance.setBase(SystemClock.elapsedRealtime() - System.currentTimeMillis() + n); viewHolder.txtElapsedTimeOfOrderAcceptance.getBase(); viewHolder.txtElapsedTimeOfOrderAcceptance.start(); viewHolder.txtElapsedTimeOfOrderAcceptance.setFormat("%s"); } viewHolder.btnPreparationStart.setOnClickListener(new OnClickListener() { @Override public void onClick(final View v) { // TODO Auto-generated method stub if (flagPreparationStarted == 0) { flagPreparationStarted++; v.startAnimation(playAnimation()); handler.postDelayed(new Runnable() { @Override public void run() { // TODO Auto-generated method stub v.clearAnimation(); Date currentTime = new Date(); // Set Preparation Start Time. viewHolder.txtElapsedTimeOfOrderAcceptance.stop(); Date setTime = new Date(currentTime.getTime() * 1000); OrderitemdetailsBO orderitemdetailsBO = list.get(position); orderitemdetailsBO.setPreparationstarttime(setTime); String orderDetails = "2"; String getPosition = Integer.toString(position); viewHolder.btnPreparationStart.setBackgroundColor(Color.LTGRAY); new sendOrderStatusToServer().execute(orderDetails,getPosition); } }, 5000); } else { handler.removeCallbacksAndMessages(null); v.clearAnimation(); flagPreparationStarted = 0; Log.i("Handler Removed. 
:-", "Here"); } } }); String preparationTime = orderitemdetailsBO.getOrderpreparationtime(); if(preparationTime != null && orderStatus == order_preparationComplete) { viewHolder.txtElapsedTimeForPreparation.setText(preparationTime); viewHolder.txtElapsedTimeForPreparation.stop(); viewHolder.btnPreparationComplete.getTag(position); viewHolder.btnPreparationComplete.setEnabled(false); viewHolder.btnPreparationComplete.setClickable(false); viewHolder.btnPreparationComplete.setBackgroundColor(Color.LTGRAY); } else if( orderStatus == order_preparationStart || orderStatus == orderReceived || orderStatus == order_delivered){ Long n = acceptTime.getTime(); Log.i("Preparation Start Time :-", Long.toString(n)); viewHolder.txtElapsedTimeForPreparation.setBase(SystemClock.elapsedRealtime() - System.currentTimeMillis() + n); viewHolder.txtElapsedTimeForPreparation.getBase(); viewHolder.txtElapsedTimeForPreparation.start(); viewHolder.txtElapsedTimeForPreparation.setFormat("%s"); } viewHolder.btnPreparationComplete.setOnClickListener(new OnClickListener() { @Override public void onClick(final View v) { // TODO Auto-generated method if (flagPreparationComplete == 0) { flagPreparationComplete++; v.startAnimation(playAnimation()); handler.postDelayed(new Runnable() { @Override public void run() { // TODO Auto-generated method stub v.clearAnimation(); OrderitemdetailsBO orderitemdetailsBO = list.get(position); Date date = orderitemdetailsBO.getPreparationstarttime(); if(date != null) { viewHolder.txtElapsedTimeForPreparation.stop(); Date currentTime = new Date(); Calendar calendar = Calendar.getInstance(); int minute = calendar.get(Calendar.MINUTE); int second = calendar.get(Calendar.SECOND); orderitemdetailsBO.setOrderpreparationtime(calendar.get(Calendar.MINUTE) +":" +calendar.get(Calendar.SECOND)); String orderDetails = "3"; String getPosition = Integer.toString(position); viewHolder.btnPreparationComplete.setBackgroundColor(Color.LTGRAY); new sendOrderStatusToServer().execute(orderDetails,getPosition); } else { Toast.makeText(myContext, "Please Enter Preparation Start Time.", Toast.LENGTH_LONG).show(); } } }, 5000); } else { handler.removeCallbacksAndMessages(null); v.clearAnimation(); flagPreparationComplete = 0; } } }); String deleveredTime = orderitemdetailsBO.getOrderdeliverytime(); if(deleveredTime != null && orderStatus == order_delivered) { Date delevered = new Date(Long.parseLong(deleveredTime)); viewHolder.txtElapsedTimeForPreparation.setText(DateUtil.getDateAsString(delevered,"MM:ss")); Log.i("Preparation Start Time :-", delevered.getMinutes()+":"+delevered.getSeconds()); viewHolder.txtElapsedTimeForPreparation.stop(); viewHolder.btnDeliveryComplete.getTag(position); viewHolder.btnDeliveryComplete.setEnabled(false); viewHolder.btnDeliveryComplete.setClickable(false); viewHolder.btnDeliveryComplete.setBackgroundColor(Color.LTGRAY); } else if(orderStatus == 3 || orderStatus == 2 || orderStatus == 1) { Long n = acceptTime.getTime(); Log.i("Preparation Start Time :-", Long.toString(n)); viewHolder.txtElapsedTimeForDeliveryComplete.setTag(list.get(position)); viewHolder.txtElapsedTimeForDeliveryComplete.setBase(SystemClock.elapsedRealtime() - System.currentTimeMillis() + n); viewHolder.txtElapsedTimeForDeliveryComplete.getBase(); viewHolder.txtElapsedTimeForDeliveryComplete.start(); viewHolder.txtElapsedTimeForDeliveryComplete.setFormat("%s"); } viewHolder.btnDeliveryComplete.setOnClickListener(new OnClickListener() { @Override public void onClick(final View v) { // TODO Auto-generated method stub 
if (flagDeliveryComplete == 0) { flagDeliveryComplete++; v.startAnimation(playAnimation()); handler.postDelayed(new Runnable() { @Override public void run() { // TODO Auto-generated method stub v.clearAnimation(); OrderitemdetailsBO orderitemdetailsBO = list.get(position); Date date = orderitemdetailsBO.getPreparationstarttime(); String preparationComplete = orderitemdetailsBO.getOrderpreparationtime(); if(date != null && preparationComplete != null ) { Date currentTime = new Date(); Calendar calendar = Calendar.getInstance(); viewHolder.txtElapsedTimeForDeliveryComplete.stop(); orderitemdetailsBO.setOrderdeliverytime(calendar.get(Calendar.MINUTE) +":"+calendar.get(Calendar.SECOND)); String orderDetails = Integer.toString(order_delivered); String getPosition = Integer.toString(position); viewHolder.btnDeliveryComplete.setBackgroundColor(Color.LTGRAY); new sendOrderStatusToServer().execute(orderDetails,getPosition); } else { Toast.makeText(myContext, "Please Enter Preparation Start Time & Preparation Complete Time.", Toast.LENGTH_LONG).show(); } } }, 5000); } else { handler.removeCallbacksAndMessages(null); v.clearAnimation(); flagDeliveryComplete = 0; } } }); } return convertView; } } private static class ViewHolder { protected TextView txtTableNumber; protected TextView txtMenuItem; protected TextView txtQuantity; protected TextView txtOrderAcceptanceTime; protected Chronometer txtElapsedTimeOfOrderAcceptance; protected Button btnPreparationStart; protected Chronometer txtElapsedTimeForPreparation; protected Button btnPreparationComplete; protected Chronometer txtElapsedTimeForDeliveryComplete; protected Button btnDeliveryComplete; }
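
    The behaviour described above is the standard ListView recycling issue: rows that scroll off-screen are reused for new positions, so a Button disabled for one row stays disabled when that view is recycled for another row unless getView() resets it. The sketch below restates getView() for just one of the three buttons; treating a null getPreparationstarttime() as "not started yet" is an assumption taken from the code above, and the other buttons and Chronometers would be re-bound the same way:

        @Override
        public View getView(final int position, View convertView, ViewGroup parent) {
            final ViewHolder viewHolder;
            if (convertView == null) {
                convertView = LayoutInflater.from(parent.getContext())
                        .inflate(R.layout.table_row_view, parent, false);
                viewHolder = new ViewHolder();
                viewHolder.btnPreparationStart = (Button) convertView.findViewById(R.id.btnPreparationStart);
                // ... look up the remaining views exactly as before ...
                convertView.setTag(viewHolder);
            } else {
                viewHolder = (ViewHolder) convertView.getTag();
            }

            OrderitemdetailsBO item = list.get(position);

            // Re-bind the enabled state on every call and in both branches, so a
            // recycled row cannot keep the state of the row it previously displayed.
            boolean started = (item.getPreparationstarttime() != null);
            viewHolder.btnPreparationStart.setEnabled(!started);
            viewHolder.btnPreparationStart.setClickable(!started);
            viewHolder.btnPreparationStart.setBackgroundColor(started ? Color.LTGRAY : Color.TRANSPARENT);

            // ... bind the other buttons and the Chronometers from 'item' here ...
            return convertView;
        }

    The same principle applies to the click listeners: have them update the OrderitemdetailsBO and call notifyDataSetChanged(), and let getView() derive all visual state from the data rather than from which row happened to be on screen.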

    Read the article

  • Problem with String.split() in Java

    - by Matt
    What I am trying to do is read a .java file, and pick out all of the identifiers and store them in a list. My problem is with the .split() method. If you run this code the way it is, you will get ArrayOutOfBounds, but if you change the delimiter from "." to anything else, the code works. But I need to lines parsed by "." so is there another way I could accomplish this? import java.io.BufferedReader; import java.io.FileNotFoundException; import java.io.FileReader; import java.io.IOException; import java.util.*; public class MyHash { private static String[] reserved = new String[100]; private static List list = new LinkedList(); private static List list2 = new LinkedList(); public static void main (String args[]){ Hashtable hashtable = new Hashtable(997); makeReserved(); readFile(); String line; ListIterator itr = list.listIterator(); int listIndex = 0; while (listIndex < list.size()) { if (itr.hasNext()){ line = itr.next().toString(); //PROBLEM IS HERE!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! String[] words = line.split("."); //CHANGE THIS AND IT WILL WORK System.out.println(words[0]); //TESTING TO SEE IF IT WORKED } listIndex++; } } public static void readFile() { String text; String[] words; BufferedReader in = null; try { in = new BufferedReader(new FileReader("MyHash.java")); //NAME OF INPUT FILE } catch (FileNotFoundException ex) { Logger.getLogger(MyHash.class.getName()).log(Level.SEVERE, null, ex); } try { while ((text = in.readLine()) != null){ text = text.trim(); words = text.split("\\s+"); for (int i = 0; i < words.length; i++){ list.add(words[i]); } for (int j = 0; j < reserved.length; j++){ if (list.contains(reserved[j])){ list.remove(reserved[j]); } } } } catch (IOException ex) { Logger.getLogger(MyHash.class.getName()).log(Level.SEVERE, null, ex); } try { in.close(); } catch (IOException ex) { Logger.getLogger(MyHash.class.getName()).log(Level.SEVERE, null, ex); } } public static int keyIt (int x) { int key = x % 997; return key; } public static int horner (String word){ int length = word.length(); char[] letters = new char[length]; for (int i = 0; i < length; i++){ letters[i]=word.charAt(i); } char[] alphabet = new char[26]; String abc = "abcdefghijklmnopqrstuvwxyz"; for (int i = 0; i < 26; i++){ alphabet[i]=abc.charAt(i); } int[] numbers = new int[length]; int place = 0; for (int i = 0; i < length; i++){ for (int j = 0; j < 26; j++){ if (alphabet[j]==letters[i]){ numbers[place]=j+1; place++; } } } int hornered = numbers[0] * 32; for (int i = 1; i < numbers.length; i++){ hornered += numbers[i]; if (i == numbers.length -1){ return hornered; } hornered = hornered % 997; hornered *= 32; } return hornered; } public static String[] makeReserved (){ reserved[0] = "abstract"; reserved[1] = "assert"; reserved[2] = "boolean"; reserved[3] = "break"; reserved[4] = "byte"; reserved[5] = "case"; reserved[6] = "catch"; reserved[7] = "char"; reserved[8] = "class"; reserved[9] = "const"; reserved[10] = "continue"; reserved[11] = "default"; reserved[12] = "do"; reserved[13] = "double"; reserved[14] = "else"; reserved[15] = "enum"; reserved[16] = "extends"; reserved[17] = "false"; reserved[18] = "final"; reserved[19] = "finally"; reserved[20] = "float"; reserved[21] = "for"; reserved[22] = "goto"; reserved[23] = "if"; reserved[24] = "implements"; reserved[25] = "import"; reserved[26] = "instanceof"; reserved[27] = "int"; reserved[28] = "interface"; reserved[29] = "long"; reserved[30] = "native"; reserved[31] = "new"; reserved[32] = "null"; reserved[33] = "package"; reserved[34] = 
"private"; reserved[35] = "protected"; reserved[36] = "public"; reserved[37] = "return"; reserved[38] = "short"; reserved[39] = "static"; reserved[40] = "strictfp"; reserved[41] = "super"; reserved[42] = "switch"; reserved[43] = "synchronize"; reserved[44] = "this"; reserved[45] = "throw"; reserved[46] = "throws"; reserved[47] = "trasient"; reserved[48] = "true"; reserved[49] = "try"; reserved[50] = "void"; reserved[51] = "volatile"; reserved[52] = "while"; reserved[53] = "="; reserved[54] = "=="; reserved[55] = "!="; reserved[56] = "+"; reserved[57] = "-"; reserved[58] = "*"; reserved[59] = "/"; reserved[60] = "{"; reserved[61] = "}"; return reserved; } }

    Read the article

  • ActionController::RoutingError (No route matches {:action=>"show", :controller=>"users", :id=>nil}):

    - by Matt Bishop
    I have been trying to fix this routing error for a long time. I would appreciate any assistance! This error is preventing me from being able to authenticate. Here is what I am getting in my Heroku logs. app/controllers/authentications_controller.rb:12:in `create' ActionController::RoutingError (No route matches {:action=>"show", :controller=>"users", :id=>nil}) Here is the routes.rb file: Company::Application.routes.draw do resources :profile_individual resources :careers match 'careers' => 'careers#index' match 'about' => 'about#index' constraints(:subdomain => /^$|www/) do devise_for :users resources :authentications, :identities #, :beta_invitations resources :users do resources :invitations, :controller => 'UserInvitation' do post :upload, :on => :collection get :email_template, :on => :collection get :plaintext_template, :on => :collection get :facebook_invitation, :on => :collection end member do get :summary get :recruits get :friends_events get :events_near_me get :recent_activity get :impact get :campaigns end end resources :password_resets do get 'password_reset' => 'password_resets#show', :as => 'password_reset' end resources :events, :only => [:new, :index, :create] resources :organizations, :only => [:index, :create] resources :orders do post :ipn, :on => :member resource :payment do member do post :relay_response get :receipt end end resource :paypal_integration do member do get :authorize get :cancel post :finalize end end end match '/users/:id/impact/money/:d' => 'users#impact_money_graph', :constraints => {:d => /\d+{4}_\d+{2}-\d+{2}/}, :as => :user_impact_money match '/users/:id/impact/money' => 'users#impact_money_graph', :as => :user_impact_money match '/users/:id/impact/recruits/:d' => 'users#impact_recruits_graph', :constraints => {:d => /\d+{4}_\d+{2}-\d+{2}/}, :as => :user_impact_recruits match '/users/:id/impact/recruits' => 'users#impact_recruits_graph', :as => :user_impact_recruits match '/auth/failure' => 'authentications#failure' match '/auth/:provider/callback' => 'authentications#create' match '/auth/:provider/callback' => 'authentications#show', :controller => 'users', :as => :login match '/logout' => 'authentications#destroy', :as => :logout match '/login' => 'authentications#new', :as => :login match "/join_team/:id" => "team_members#join", :as => :join_team match "/rsvp/:id" => "rsvps#show", :as => :rsvp match "/signup" => 'authentications#signup', :as => :signup match "/beacon/:id.gif" => "email_beacons#show", :as => :email_beacon root :to => "homes#show" match '/corporate_giving' => "homes#corporate_giving" end constraints(Subdomain) do resource :organization, :path => "/", :only => [:edit, :update] do member do get :org_photos_videos get :org_recent_activity end end resources :events, :except => [:index] do post :publish, :on => :member resource :supporter_invite resource :team_management do post :mailer, :on => :member end resource :team_member do post :invite, :on => :member end resource :rsvp do put :make_order, :on => :collection get :make_order, :on => :collection end resources :invites do post :upload, :on => :collection end resources :ticket_tiers, :team_members end match "/events" => redirect("/") root :to => "organizations#show" end namespace :admin do resources :stats resources :organizations resources :campaigns do resources :rewards resources :contents put :header, :action => 'header_update' end resources :users do member do post :grant_access post :revoke_access end end resources :nonprofits do member do put :approve put :revoke end 
end end resources :campaigns do get :find_charities, :on => :collection get :how_many_charities, :on => :collection member do post :join get :join post :header, :action => 'header_creation' put :header, :action => 'header_update' end resources :rewards resources :contents resource :donations do resource :paypal_integration, :controller => 'donations' do member do get :authorize get :cancel post :finalize end end end end match '/campaigns/:id/graph/:d' => 'campaigns#graph', :constraints => {:d => /\d+{4}_\d+ {2}-\d+{2}/}, :as => :graph_campaign match '/campaigns/:id/graph' => 'campaigns#graph', :as => :graph_campaign resources :business_campaigns, :controller => 'campaigns' resources :businesses do put :logo, :on => :collection, :action => 'upload_logo' member do get :summary get :recruits get :friends_events get :events_near_me get :recent_activity get :impact get :campaigns end end resources :nonprofit_campaigns, :controller => 'campaigns' resources :nonprofits do put :logo, :on => :collection, :action => 'upload_logo' member do get :summary get :recruits get :friends_events get :events_near_me get :recent_activity get :impact get :campaigns get :supporting_campaigns end end resources :publicities match '/campaigns/:campaign_id/rewards/:id' => 'campaigns#reward', :via => :get match "/robots.txt" => "application#robots_txt" match "/beta_invitations" => redirect('/') resource :sitemap resources :referrals end Here is my authentications_controller.rb file class AuthenticationsController < ApplicationController skip_before_filter :require_beta_access before_filter :redirect_to_profile_if_logged_in, :only => [:create, :new] layout :resolve_layout def create omniauth = request.env["omniauth.auth"] authentication = Authentication.find_by_provider_and_uid(omniauth['provider'], omniauth['uid']) if authentication && authentication.user.present? 
sign_in(:user, authentication.user) redirect_to session[:redirect_to] || user_path(current_user, :subdomain => nil) elsif current_user current_user.authentications.create!(:provider => omniauth['provider'], :uid => omniauth['uid']) redirect_to session[:redirect_to] || user_path(current_user, :subdomain => nil) else user = User.new user.apply_omniauth(omniauth) logger.debug "=======================auth=============================" logger.debug session[:referrer_token] logger.debug "========================================================" if session[:referrer_token] publicity = Publicity.find_by_token(session[:referrer_token]) user.invited_by = publicity user.recruited_by = publicity end if user.save sign_in(user) unless session[:redirect_to] session[:referrer_token] = nil end redirect_to session[:redirect_to] || user_path(current_user, :subdomain => nil) #redirect_to session[:redirect_to] || campaigns_url(:tc => request.env['omniauth.params']['tc']) #tc is for AB testing else session[:omniauth] = omniauth.except('extra') redirect_to signup_path end end end def failure flash[:error] = "Please check your email and password and try again" redirect_to login_path end def destroy reset_session redirect_to root_path end def signup # end private def redirect_to_profile_if_logged_in redirect_to user_path(current_user.permalink) if current_user end def resolve_layout case action_name when "new", "signup" "authentication" else "selfcontained" end end end I am adding my appplication_controller.rb too: class ApplicationController < ActionController::Base #Wrote by George for beta users -before_filter :require_beta_access before_filter :save_referrer_token protect_from_forgery helper_method :organization_admin?, :team_member?, :profile_url, :current_profile def set_headers # Set our headers here end def save_referrer_token #session.delete(:referrer_token) if params[:ref] publicity = Publicity.find_by_token(params[:ref]) logger.debug "========================================================" logger.debug current_profile.nil? logger.debug publicity.creator logger.debug current_profile logger.debug current_profile != publicity.creator session[:referrer_token] = params[:ref] if current_profile.nil? or publicity.creator != current_profile logger.debug session[:referrer_token] logger.debug "========================================================" end end def robots_txt robots = File.read(Rails.root + "public/robots.#{Rails.env}.txt") render :text => robots, :layout => false, :content_type => "text/plain" end def load_organization @organization = Organization.find_by_permalink(request.subdomain) raise ActiveRecord::RecordNotFound if @organization.nil? end def require_user unless current_user session[:redirect_to] = request.url redirect_to login_url(:host => request.domain) end end def require_beta_access if !current_user redirect_to root_url(:host => request.domain) elsif !current_user.beta_access? redirect_to new_beta_invitation_url(:host => request.domain) end end def require_organization_admin unless organization_admin? redirect_to root_url(:subdomain => @organization.permalink) end end def team_member? if current_user && @event.team_memberships.where(:user_id => current_user.id).count != 0 true end end def organization_admin? if current_user && current_user.beta_access? 
&& @organization && @organization.memberships.where(:user_id => current_user.id, :role => 'admin').count != 0 true end end def profile_url(profile, opt = nil) if profile == current_user user_url(profile, :host => opt[:host]) elsif profile.is_a? BusinessProfile business_url(profile) elsif profile.is_a? NonprofitProfile nonprofit_url(profile) end end def set_current_profile(profile) session[:current_profile] = profile end def current_user @current_user ||= User.find_by_auth_token!(cookies[:auth_token]) if cookies[:auth_token] end def current_profile #if session session[:current_profile] || current_user #else # nil #end end IGIVEMORE_HTML5_OPTIOINS = { :style => 'z-index: 0;',:width => '290', :height => '200', :frameborder => '0', :url_params => {:wmode=>"opaque"} } def campaign_header_body(camp, opt = IGIVEMORE_HTML5_OPTIOINS) if camp.header_type == Campaign::HEADER_YOUTUBE youtube_html5(camp.header_url, opt).html_safe elsif camp.header_type == Campaign::HEADER_IMAGE "<img src=\"#{camp.header_url}\" width=\"#{opt[:width]}\" height=\"#{opt[:height]}\"/>'".html_safe else "Unsupported Type!!" end end def youtube_html5(url, opt) begin video = YouTubeIt::Client.new.video_by(url) video.embed_html5(opt).gsub(/http:\/\//,"https://") rescue => e "<div style='color:red; width:290px; height:100px; padding-top:100px'>Given Video URL has problem.</div>" end end end

    Read the article

  • SQLAuthority News – SQLPASS Nov 8-11, 2010-Seattle – An Alternative Look at Experience

    - by pinaldave
    I recently attended most prestigious SQL Server event SQLPASS between Nov 8-11, 2010 at Seattle. I have only one expression for the event - Best Summit Ever This year the summit was at its best. Instead of writing about my usual routine or the event, I am going to write about the interesting things I did and how I felt about it! Best Summit Ever Trip to Seattle! This was my second trip to Seattle this year and the journey is always long. Here is the travel stats on how long it takes to get to Seattle: 24 hours official air time 36 hours total travel time (connection waits and airport commute) Every time I travel to USA I gain a day and when I travel back to home, I lose a day. However, the total traveling time is around 3 days. The journey is long and very exhausting. However, it is all worth it when you’re attending an event like SQLPASS. Here are few things I carry when I travel for a long journey: Dry Snack packs – I like to have some good Indian Dry Snacks along with me in my backpack so I can have my own snack when I want Amazon Kindle – Loaded with 80+ books A physical book – This is usually a very easy to read book I do not watch movies on the plane and usually spend my time reading something quick and easy. If I can go to sleep, I go for it. I prefer to not to spend time in conversation with the guy sitting next to me because usually I end up listening to their biography, which I cannot blog about. Sheraton Seattle SQLPASS In any case, I love to go to Seattle as the city is great and has everything a brilliant metropolis has to offer. The new Light Train is extremely convenient, and I can take it directly from the airport to the city center. My hotel, the Sheraton, was only few meters (in the USA people count in blocks – 3 blocks) away from the train station. This time I saved USD 40 each round trip due to the Light Train. Sessions I attended! Well, I really wanted to attend most of the sessions but there was great dilemma of which ones to choose. There were many, many sessions to be attended and at any given time there was more than one good session being presented. I had decided to attend sessions in area performance tuning and I attended quite a few sessions this year, compared to what I was able to do last year. Here are few names of the speakers whose sessions I attended (please note, following great speakers are not listed in any order. I loved them and I enjoyed their sessions): Conor Cunningham Rushabh Mehta Buck Woody Brent Ozar Jonathan Kehayias Chris Leonard Bob Ward Grant Fritchey I had great fun attending their sessions. The sessions were meaningful and enlightening. It is hard to rate any session but I have found that the insights learned in Conor Cunningham’s sessions are the highlight of the PASS Summit. Rushabh Mehta at Keynote SQLPASS   Bucky Woody and Brent Ozar I always like the sessions where the speaker is much closer to the audience and has real world experience. I think speakers who have worked in the real world deliver the best content and most useful information. Sessions I did not like! Indeed there were few sessions I did not like it and I am not going to name them here. However, there were strong reasons I did not like their sessions, and here is why: Sessions were all theory and had no real world connections. 
All technical questions ended with confusing answers (lots of “I will get back to you on it,” “it depends,” “let us take this offline” and many more…) “I am God” kind of attitude in the speakers For example, I attended a session of one very well known speaker who is a specialist for one particular area. I was bit late for the session and was surprised to see that in a room that could hold 350 people there were only 30 attendees. After sitting there for 15 minutes, I realized why lots of people left. Very soon I found I preferred to stare out the window instead of listening to that particular speaker. One on One Talk! Many times people ask me what I really like about PASS. I always say the experience of meeting SQL legends and spending time with them one on one and LEARNING! Here is the quick list of the people I met during this event and spent more than 30 minutes with each of them talking about various subjects: Pinal Dave and Brad Shulz Pinal Dave and Rushabh Mehta Michael Coles and Pinal Dave Rushabh Mehta – It is always pleasure to meet with him. He is a man with lots of energy and a passion for community. He recently told me that he really wanted to turn PASS into resource for learning for every SQL Server Developer and Administrator in the world. I had great in-depth discussion regarding how a single person can contribute to a community. Michael Coles – I consider him my best friend. It is always fun to meet him. He is funny and very knowledgeable. I think there are very few people who are as expert as he is in encryption and spatial databases. Worth meeting him every single time. Glenn Berry – A real friend of everybody. He is very a simple person and very true to his heart. I think there is not a single person in whole community who does not like him. He is a friends of all and everybody likes him very much. I once again had time to sit with him and learn so much from him. As he is known as Dr. DMV, I can be his nurse in the area of DMV. Brad Schulz – I always wanted to meet him but never got chance until today. I had great time meeting him in person and we have spent considerable amount of time together discussing various T-SQL tricks and tips. I do not know where he comes up with all the different ideas but I enjoy reading his blog and sharing his wisdom with me. Jonathan Kehayias – He is drill sergeant in US army. If you get the impression that he is a giant with very strong personality – you are wrong. He is very kind and soft spoken DBA with strong performance tuning skills. I asked him how he has kept his two jobs separate and I got very good answer – just work hard and have passion for what you do. I attended his sessions and his presentation style is very unique.  I feel like he is speaking in a language I understand. Louis Davidson – I had never had a chance to sit with him and talk about technology before. He has so much wisdom and he is very kind. During the dinner, I had talked with him for long time and without hesitation he started to draw a schema for me on the menu. It was a wonderful experience to learn from a master at the dinner table. He explained to me the real and practical differences between third normal form and forth normal form. Honestly I did not know earlier, but now I do. Erland Sommarskog – This man needs no introduction, he is very well known and very clear in conveying his ideas. I learned a lot from him during the course of year. Every time I meet him, I learn something new and this time was no exception. 
Joe Webb – Joey is all about community and people, we had interesting conversation about community, MVP and how one can be helpful to community without losing passion for long time. It is always pleasant to talk to him and of course, I had fun time. Ross Mistry – I call him my brother many times because he indeed looks like my cousin. He provided me lots of insight of how one can write book and how he keeps his books simple to appeal to all the readers. A wonderful person and great friend. Ola Hallgren - I did not know he was coming to the summit. I had great time meeting him and had a wonderful conversation with him regarding his scripts and future community activities. Blythe Morrow – She used to be integrated part of SQL Server Community and PASS HQ. It was wonderful to meet her again and re-connect. She is wonderful person and I had a great time talking to her. Solid Quality Mentors – It is difficult to decide who to mention here. Instead of writing all the names, I am going to include a photo of our meeting. I had great fun meeting various members of our global branches. This year I was sitting with my Spanish speaking friends and had great fun as Javier Loria from Solid Quality translated lots of things for me. Party, Party and Parties Every evening there were various parties. I did attend almost all of them. Every party had different theme but the goal of all the parties the same – networking. Here are the few parties where I had lots of fun: Dell Reception Party Exhibitor Party Solid Quality Fun Party Red Gate Friends Party MVP Dinner Microsoft Party MVP Dinner Quest Party Gameworks PASS Party Volunteer Party at Garage Solid Quality Mentors (10 Members out of 120) They were all great networking opportunities and lots of fun. I really had great time meeting people at the various parties. There were few people everywhere – well, I will say I am among them – who hopped parties. NDA – Not Decided Agenda During the event there were few meetings marked “NDA.” Someone asked me “why are these things NDA?”  My response was simple: because they are not sure themselves. NDA stands for Not Decided Agenda. Toys, Giveaways and Luggage I admit, I was like child in Gameworks and was playing to win soft toys. I was doing it for my daughter. I must thank all of the people who gave me their cards to try my luck. I won 4 soft-toys for my daughter and it was fun. Also, thanks to Angel who did a final toy swap with me to get the desired toy for my daughter. I also collected ducks from Idera, as my daughter really loves them. Solid Quality Booth Each of the exhibitors was giving away something and I got so much stuff that my luggage got quite a bit bigger when I returned. Best Exhibitor Idera had SQLDoctor (a real magician and fun guy) to promote their new tool SQLDoctor. I really had a great time participating in the magic myself. At one point, the magician made my watch disappear.  I have seen better magic before, but this time it caught me unexpectedly and I was taken by surprise. I won many ducks again. The Common Question I heard the following common questions: I have seen you somewhere – who are you? – I am Pinal Dave. I did not know that Pinal is your first name and Dave is your last name, how do you pronounce your last name again? – Da-way How old are you? – I am as old as I can be. Are you an Indian because you look like one? – I did not answer this one. Where are you from? This question was usually asked after looking at my badge which says India. So did you really fly from India? 
– Yes, because I have seasickness so I do not prefer the sea journey. How long was the journey? – 24/36/12 (air travel time/total travel time/time zone difference) Why do you write on SQLAuthority.com? – Because I want to. I remember your daughter looks like you. – Is this even a question? Of course, she is daddy’s little girl. There were so many other questions, I will have to write another blog post about it. SQLPASS Again, Best Summit Ever! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: SQLPASS

    Read the article

  • SQL SERVER – PAGEIOLATCH_DT, PAGEIOLATCH_EX, PAGEIOLATCH_KP, PAGEIOLATCH_SH, PAGEIOLATCH_UP – Wait Type – Day 9 of 28

    - by pinaldave
    It is very easy to say that you replace your hardware as that is not up to the mark. In reality, it is very difficult to implement. It is really hard to convince an infrastructure team to change any hardware because they are not performing at their best. I had a nightmare related to this issue in a deal with an infrastructure team as I suggested that they replace their faulty hardware. This is because they were initially not accepting the fact that it is the fault of their hardware. But it is really easy to say “Trust me, I am correct”, while it is equally important that you put some logical reasoning along with this statement. PAGEIOLATCH_XX is such a kind of those wait stats that we would directly like to blame on the underlying subsystem. Of course, most of the time, it is correct – the underlying subsystem is usually the problem. From Book On-Line: PAGEIOLATCH_DT Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Destroy mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_EX Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Exclusive mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_KP Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Keep mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_SH Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Shared mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_UP Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Update mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_XX Explanation: Simply put, this particular wait type occurs when any of the tasks is waiting for data from the disk to move to the buffer cache. ReducingPAGEIOLATCH_XX wait: Just like any other wait type, this is again a very challenging and interesting subject to resolve. Here are a few things you can experiment on: Improve your IO subsystem speed (read the first paragraph of this article, if you have not read it, I repeat that it is easy to say a step like this than to actually implement or do it). This type of wait stats can also happen due to memory pressure or any other memory issues. Putting aside the issue of a faulty IO subsystem, this wait type warrants proper analysis of the memory counters. If due to any reasons, the memory is not optimal and unable to receive the IO data. This situation can create this kind of wait type. Proper placing of files is very important. We should check file system for the proper placement of files – LDF and MDF on separate drive, TempDB on separate drive, hot spot tables on separate filegroup (and on separate disk), etc. Check the File Statistics and see if there is higher IO Read and IO Write Stall SQL SERVER – Get File Statistics Using fn_virtualfilestats. It is very possible that there are no proper indexes on the system and there are lots of table scans and heap scans. Creating proper index can reduce the IO bandwidth considerably. If SQL Server can use appropriate cover index instead of clustered index, it can significantly reduce lots of CPU, Memory and IO (considering cover index has much lesser columns than cluster table and all other it depends conditions). 
You can refer to the two article links below, previously written by me, that talk about how to optimize indexes: Create Missing Indexes, Drop Unused Indexes. Updating statistics can help the Query Optimizer render an optimal plan, either directly or indirectly. I have seen that updating statistics with a full scan (again, if your database is huge and you cannot do this – never mind!) can provide optimal information to the SQL Server optimizer, leading to an efficient plan. Checking Memory Related Perfmon Counters SQLServer: Memory Manager\Memory Grants Pending (a consistent value higher than 0-2) SQLServer: Memory Manager\Memory Grants Outstanding (a consistently high value; benchmark) SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% for a smoothly running system) SQLServer: Buffer Manager\Page Life Expectancy (a consistent value lower than 300 seconds) Memory: Available Mbytes (information only) Memory: Page Faults/sec (benchmark only) Memory: Pages/sec (benchmark only) Checking Disk Related Perfmon Counters Average Disk sec/Read (a consistent value higher than 4-8 milliseconds is not good) Average Disk sec/Write (a consistent value higher than 4-8 milliseconds is not good) Average Disk Read/Write Queue Length (a consistent value higher than the benchmark is not good) Note: The information presented here is from my experience, and I do not claim it to be perfectly accurate. I suggest reading Book OnLine for further clarification. All of the discussion of wait stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
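    To watch these waits on a live instance, one rough approach (shown here as a Java/JDBC sketch purely to keep the added code examples on this page in one language; the connection string is a placeholder) is to query the sys.dm_os_wait_stats DMV for the PAGEIOLATCH_* rows and compare snapshots taken a few minutes apart, since the counters accumulate from server start:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Sketch: read the accumulated PAGEIOLATCH_* wait counters from sys.dm_os_wait_stats.
    // The JDBC URL below is a placeholder; point it at your own instance.
    public class PageIoLatchCheck {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:sqlserver://localhost;databaseName=master;integratedSecurity=true";
            String sql = "SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms "
                       + "FROM sys.dm_os_wait_stats WHERE wait_type LIKE 'PAGEIOLATCH%' "
                       + "ORDER BY wait_time_ms DESC";
            try (Connection con = DriverManager.getConnection(url);
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.printf("%-16s tasks=%d wait_ms=%d signal_ms=%d%n",
                            rs.getString("wait_type"),
                            rs.getLong("waiting_tasks_count"),
                            rs.getLong("wait_time_ms"),
                            rs.getLong("signal_wait_time_ms"));
                }
            }
        }
    }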

    Read the article

  • No, iCloud Isn’t Backing Them All Up: How to Manage Photos on Your iPhone or iPad

    - by Chris Hoffman
    Are the photos you take with your iPhone or iPad backed up in case you lose your device? If you’re just relying on iCloud to manage your important memories, your photos may not be backed up at all. Apple’s iCloud has a photo-syncing feature in the form of “Photo Stream,” but Photo Stream doesn’t actually perform any long-term backups of your photos. iCloud’s Photo Backup Limitations Assuming you’ve set up iCloud on your iPhone or iPad, your device is using a feature called “Photo Stream” to automatically upload the photos you take to your iCloud storage and sync them across your devices. Unfortunately, there are some big limitations here. 1000 Photos: Photo Stream only backs up the latest 1000 photos. Do you have 1500 photos in your Camera Roll folder on your phone? If so, only the latest 1000 photos are stored in your iCloud account online. If you don’t have those photos backed up elsewhere, you’ll lose them when you lose your phone. If you have 1000 photos and take one more, the oldest photo will be removed from your iCloud Photo Stream. 30 Days: Apple also states that photos in your Photo Stream will be automatically deleted after 30 days “to give your devices plenty of time to connect and download them.” Some people report photos aren’t deleted after 30 days, but it’s clear you shouldn’t rely on iCloud for more than 30 days of storage. iCloud Storage Limits: Apple only gives you 5 GB of iCloud storage space for free, and this is shared between backups, documents, and all other iCloud data. This 5 GB can fill up pretty quickly. If your iCloud storage is full and you haven’t purchased any more storage from Apple, your photos aren’t being backed up. Videos Aren’t Included: Photo Stream doesn’t include videos, so any videos you take aren’t automatically backed up. It’s clear that iCloud’s Photo Stream isn’t designed as a long-term way to store your photos, just a convenient way to access recent photos on all your devices before you back them up for real. iCloud’s Photo Stream is Designed for Desktop Backups If you have a Mac, you can launch iPhoto and enable the Automatic Import option under Photo Stream in its preferences pane. Assuming your Mac is on and connected to the Internet, iPhoto will automatically download photos from your photo stream and make local backups of them on your hard drive. You’ll then have to back up your photos manually so you don’t lose them if your Mac’s hard drive ever fails. If you have a Windows PC, you can install the iCloud Control Panel, which will create a Photo Stream folder on your PC. Your photos will be automatically downloaded to this folder and stored in it. You’ll want to back up your photos so you don’t lose them if your PC’s hard drive ever fails. Photo Stream is clearly designed to be used along with a desktop application. Photo Stream temporarily backs up your photos to iCloud so iPhoto or iCloud Control Panel can download them to your Mac or PC and make a local backup before they’re deleted. You could also use iTunes to sync your photos from your device to your PC or Mac, but we don’t really recommend it — you should never have to use iTunes. How to Actually Back Up All Your Photos Online So Photo Stream is actually pretty inconvenient — or, at least, it’s just a way to temporarily sync photos between your devices without storing them long-term. But what if you actually want to automatically back up your photos online without them being deleted automatically? 
The solution here is a third-party app that does this for you, offering the automatic photo uploads with long-term storage. There are several good services with apps in the App Store: Dropbox: Dropbox’s Camera Upload feature allows you to automatically upload the photos — and videos — you take to your Dropbox account. They’ll be easily accessible anywhere there’s a Dropbox app and you can get much more free Dropbox storage than you can iCloud storage. Dropbox will never automatically delete your old photos. Google+: Google+ offers photo and video backups with its Auto Upload feature, too. Photos will be stored in your Google+ Photos — formerly Picasa Web Albums — and will be marked as private by default so no one else can view them. Full-size photos will count against your free 15 GB of Google account storage space, but you can also choose to upload an unlimited amount of photos at a smaller resolution. Flickr: The Flickr app is no longer a mess. Flickr offers an Auto Upload feature for uploading full-size photos you take and free Flickr accounts offer a massive 1 TB of storage for you to store your photos. The massive amount of free storage alone makes Flickr worth a look. Use any of these services and you’ll get an online, automatic photo backup solution you can rely on. You’ll get a good chunk of free space, your photos will never be automatically deleted, and you can easily access them from any device. You won’t have to worry about storing local copies of your photos and backing them up manually. Apple should fix this mess and offer a better solution for long-term photo backup, especially considering the limitations aren’t immediately obvious to users. Until they do, third-party apps are ready to step in and take their place. You can also automatically back up your photos to the web on Android with Google+’s Auto Upload or Dropbox’s Camera Upload. Image Credit: Simon Yeo on Flickr     

    Read the article

  • The Incremental Architect's Napkin - #2 - Balancing the forces

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/02/the-incremental-architectacutes-napkin---2---balancing-the-forces.aspx Categorizing requirements is the prerequisite for economic architectural decisions. Not all requirements are created equal. However, to truly understand and describe the requirement forces pulling on software development, I think further examination of the requirement aspects is warranted. Aspects of Functionality There are two sides to Functionality requirements. It's about what a piece of software should do. I call that the Operations it implements. Operations are defined by expressions and control structures or calls to frameworks of some sort, i.e. (business) logic statements. Operations calculate, transform, aggregate, validate, send, receive, load, store, etc. Operations are about behavior; they take input and produce output while considering state. I'm not using the term “function” here, because functions - or methods or sub-programs - are not necessary to implement Operations. Functions belong to a different sub-aspect of requirements (see below). Operations alone are not enough, though, to make a customer happy with regard to his/her Functionality requirements. Only correctly implemented Operations provide full value. This should make clear why testing is so important. And not just manual tests during development of some operational feature, but automated tests. Because only automated tests scale as the number of operations increases over time. Without automated tests there is no guarantee that formerly correct operations are still correct after more get added. Retesting all previous operations manually is infeasible. So whoever relies on manual tests alone is not really balancing the two forces of Operations and Correctness. With manual tests, more weight is put on the Operations side of the scale. That might be OK for a short period of time - but in the long run it will bite you. You need to plan for Correctness in the long run from the first day of your project. Aspects of Quality As important as Functionality is, it's not the driver for software development. No software has ever been written just to implement some operation in code. We don't need computers just to do something. Everything computers can do with software we can do without them. Well, at least given enough time and resources. We could calculate the most complex formulas without computers. We could run auctions with millions of people without computers. The only reason we want computers to help us with these and a million other Operations is… We don't want to wait very long for the results. Or we want fewer errors. Or we want easier access to complicated solutions. So the main reason for customers to buy/order software is some Quality. They want some Functionality with a higher Quality (e.g. performance, scalability, usability, security…) than they could have without the software. But Qualities come in at least two flavors: Most important are Primary Qualities. Those are the Qualities the software truly is written for. Take an online auction website for example. Its Primary Qualities are performance, scalability, and usability, I'd say. Auctions should come within reach of millions of people; setting up an auction should be very easy; finding a suitable auction and bidding on it should be as fast as possible. Only once those Qualities have been implemented does security become relevant. A secure auction website is important - but not as important as a fast auction website. 
Nobody would want to use the most secure auction website if it was unbearably slow. But there would be people willing to use the fastest auction website even if it was lacking security. That's why security - with regard to online auction software - is not a Primary Quality, but just a Secondary Quality. It's a supporting quality, so to speak. It does not deliver value by itself. With password manager software this might be different. There, security might be a Primary Quality. Please don't get me wrong: I don't want to denigrate any Quality. There's a long list of non-functional requirements at Wikipedia. They are all created equal - but that does not mean they are equally important for all software projects. When confronted with Quality requirements, check with the customer which are primary and which are secondary. That will help you make good economic decisions when in a crunch. Resources are always limited - but requirements are a bottomless ocean. Aspects of Security of Investment Functionality and Quality are traditionally the requirement aspects cared for most - by customers and developers alike. Even today, when pressure rises in a project, tunnel vision will focus on them. Any measures to create and uphold Security of Investment (SoI) will be out of the window pretty quickly. Resistance to customers and/or management is futile. As long as SoI is not placed on an equal footing with Functionality and Quality, it's bound to suffer under pressure. Looking closer at what SoI means will help us become more conscious about it and make customers and management aware of the risks of neglecting it. SoI to me has two facets: Production Efficiency (PE) is about the speed of delivering value. Customers like short response times. Short response times mean less money spent. So whatever makes software development faster supports this requirement. This must not lead to duct-tape programming and banging out features by the dozen, though. Because customers don't just want Operations and Quality, but also Correctness. So if Correctness gets compromised by focussing too much on Production Efficiency, it will fire back. Customers want PE not just today, but over the whole course of a software's lifecycle. That means it's not just about coding speed, but equally about code quality. If low code quality leads to rework, PE is at an unsatisfactory level. The same is true if code production leads to waste, because the effort which went into waste could have been used to produce value. Rework and waste cost money. Rework and waste abound, however, as long as PE is not addressed explicitly with management and customers. Thanks to the Agile and Lean movements, that's increasingly the case. Nevertheless, more could and should be done in many teams. Each and every developer should keep in mind that Production Efficiency is as important to the customer as Functionality and Quality - whether he/she states it or not. Making software development more efficient is important - but sooner or later even agile projects are going to hit a glass ceiling. At least as long as they neglect the second SoI facet: Evolvability. Delivering correct, high-quality functionality in short cycles today is good. But not just any software structure will allow this to happen for an indefinite amount of time.[1] The less explicitly software was designed, the sooner it's going to get stuck. 
Big ball of mud, monolith, brownfield, legacy code, technical debt… there are many names for software structures that have lost the ability to evolve, to be easily changed to accommodate new requirements. An evolvable code base is the opposite of a brownfield. It's code which can be easily understood (by developers with sufficient domain expertise) and then easily changed to accommodate new requirements. Ideally, the cost of adding feature X to an evolvable code base is independent of when it is requested - or at least the cost should only increase linearly, not exponentially.[2] Clean Code, Agile Architecture, and even traditional Software Engineering are concerned with Evolvability. However, it seems no systematic way of achieving it has been laid out yet. TDD + SOLID help - but still… When I look at the reality of design ability in teams, I see much room for improvement. As stated previously, SoI - or to be more precise: Evolvability - can hardly be measured. Plus, the customer rarely states an explicit expectation with regard to it. That's why I think special care must be taken not to neglect it. Postponing it to some large refactorings should not be an option. Rather, Evolvability needs to be a core concern for every single developer day. This does not mean Evolvability is more important than any of the other requirement aspects. But neither is it less important. That's why more effort needs to be invested in it, to bring it on par with the other aspects, which usually are much more in focus. In closing As you see, requirements are of quite different kinds. Not taking that into account will make it harder to understand the customer and to make economic decisions. Those sub-aspects of requirements are forces pulling in different directions. Improving performance might have an impact on Evolvability. Increasing Production Efficiency might have an impact on security, etc. No requirement aspect should go unchecked when deciding how to allocate resources. Balancing should be explicit. And it should be possible to trace back each decision to a requirement. Why is there a null-check on parameters at the start of the method? Why are there 5000 LOC in this method? Why are there interfaces on those classes? Why is this functionality running on the threadpool? Why is this function defined on that class? Why is this class depending on three other classes? These and a thousand more questions are not meant to imply that anything should be different in a code base. But it's important to know the reason behind all of these decisions. Because not knowing the reason possibly means waste and having decided suboptimally. And how do we ensure that all requirement aspects are balanced? That needs practices and transparency. Practices mean doing things a certain way and not another, even though other ways might be possible. We're dealing with dangerous tools here. Like a knife is a dangerous tool. Harm can be done if we use our tools in just any way, at the whim of the moment. Over the centuries, rules and practices have been established for how to use knives. You don't put them in people's legs just because you feel like it. You hand over a knife with the handle towards the receiver. You might not even be allowed to cut round food like potatoes or eggs with it. The same should be the case for dangerous tools like object-orientation, remote communication, threads, etc. We need practices to use them in a way that requirements are balanced almost automatically. 
In addition, to be able to work on software as a team we need transparency. We need means to share our thoughts, to work jointly on mental models. So far our tools are focused on working with code. Testing frameworks, build servers, DI containers, IntelliSense, refactoring support… That's all nice and well. I don't want to miss any of that. But I think it's not enough. We're missing mental tools, tools for making thinking and talking about software (independently of code) easier. You might think enough such tools already exist, like all those UML diagram types or flow charts. But then, isn't it strange that hardly any team is using them to design software? Or is that just due to a lack of education? I don't think so. It's a matter of value/weight ratio: the current mental tools are too heavyweight compared to the value they deliver. So my conclusion is, we need lightweight tools to really be able to balance requirements. Software development is complex. We need guidance not to forget important aspects. That's like flying an airplane. Pilots don't just jump in and take off for their destination. Yes, there are times when they are “flying by the seats of their pants”, when they are just experts doing things intuitively. But most of the time they are going through honed practices called checklists. See “The Checklist Manifesto” for very enlightening details on this. Maybe then I should say it like this: We need more checklists for the complex business of software development.[3] But that's what software development mostly is about: changing software over an unknown period of time. It needs to be corrected in order to finally provide the promised operations. It needs to be enhanced to provide ever more operations and qualities. All this without knowing when it's going to stop. Probably never - until “maintainability” hits a wall when the technical debt is too large, the brownfield too deep. Software development is not a sprint, not a marathon, not even an ultra marathon. Because for all of those there is a foreseeable end. Software development is like continuously and forever running… ? And sometimes I dare to think that costs could even decrease over time. Think of it: With each feature a software becomes richer in functionality. So with each additional feature the chance increases that there is already functionality helping its implementation. That should lead to lower costs for feature X if it's requested later rather than sooner. X requested later could stand on the shoulders of previous features. Alas, reality seems to be far from this despite 20+ years of admonishing developers to think in terms of reusability.[1] ? Please don't get me wrong: I don't want to bog down the “art” of software development with heavyweight practices and heaps of rules to follow. The framework we need should be lightweight. It should not stand in the way of delivering value to the customer. Its purpose is rather to make that easier by helping us to focus and decreasing waste and rework. ?

    Read the article

  • How to simulate inner join on very large files in java (without running out of memory)

    - by Constantin
    I am trying to simulate SQL joins using java and very large text files (INNER, RIGHT OUTER and LEFT OUTER). The files have already been sorted using an external sort routine. The issue I have is I am trying to find the most efficient way to deal with the INNER join part of the algorithm. Right now I am using two Lists to store the lines that have the same key and iterate through the set of lines in the right file once for every line in the left file (provided the keys still match). In other words, the join key is not unique in each file so would need to account for the Cartesian product situations ... left_01, 1 left_02, 1 right_01, 1 right_02, 1 right_03, 1 left_01 joins to right_01 using key 1 left_01 joins to right_02 using key 1 left_01 joins to right_03 using key 1 left_02 joins to right_01 using key 1 left_02 joins to right_02 using key 1 left_02 joins to right_03 using key 1 My concern is one of memory. I will run out of memory if i use the approach below but still want the inner join part to work fairly quickly. What is the best approach to deal with the INNER join part keeping in mind that these files may potentially be huge public class Joiner { private void join(BufferedReader left, BufferedReader right, BufferedWriter output) throws Throwable { BufferedReader _left = left; BufferedReader _right = right; BufferedWriter _output = output; Record _leftRecord; Record _rightRecord; _leftRecord = read(_left); _rightRecord = read(_right); while( _leftRecord != null && _rightRecord != null ) { if( _leftRecord.getKey() < _rightRecord.getKey() ) { write(_output, _leftRecord, null); _leftRecord = read(_left); } else if( _leftRecord.getKey() > _rightRecord.getKey() ) { write(_output, null, _rightRecord); _rightRecord = read(_right); } else { List<Record> leftList = new ArrayList<Record>(); List<Record> rightList = new ArrayList<Record>(); _leftRecord = readRecords(leftList, _leftRecord, _left); _rightRecord = readRecords(rightList, _rightRecord, _right); for( Record equalKeyLeftRecord : leftList ){ for( Record equalKeyRightRecord : rightList ){ write(_output, equalKeyLeftRecord, equalKeyRightRecord); } } } } if( _leftRecord != null ) { write(_output, _leftRecord, null); _leftRecord = read(_left); while(_leftRecord != null) { write(_output, _leftRecord, null); _leftRecord = read(_left); } } else { if( _rightRecord != null ) { write(_output, null, _rightRecord); _rightRecord = read(_right); while(_rightRecord != null) { write(_output, null, _rightRecord); _rightRecord = read(_right); } } } _left.close(); _right.close(); _output.flush(); _output.close(); } private Record read(BufferedReader reader) throws Throwable { Record record = null; String data = reader.readLine(); if( data != null ) { record = new Record(data.split("\t")); } return record; } private Record readRecords(List<Record> list, Record record, BufferedReader reader) throws Throwable { int key = record.getKey(); list.add(record); record = read(reader); while( record != null && record.getKey() == key) { list.add(record); record = read(reader); } return record; } private void write(BufferedWriter writer, Record left, Record right) throws Throwable { String leftKey = (left == null ? "null" : Integer.toString(left.getKey())); String leftData = (left == null ? "null" : left.getData()); String rightKey = (right == null ? "null" : Integer.toString(right.getKey())); String rightData = (right == null ? 
"null" : right.getData()); writer.write("[" + leftKey + "][" + leftData + "][" + rightKey + "][" + rightData + "]\n"); } public static void main(String[] args) { try { BufferedReader leftReader = new BufferedReader(new FileReader("LEFT.DAT")); BufferedReader rightReader = new BufferedReader(new FileReader("RIGHT.DAT")); BufferedWriter output = new BufferedWriter(new FileWriter("OUTPUT.DAT")); Joiner joiner = new Joiner(); joiner.join(leftReader, rightReader, output); } catch (Throwable e) { e.printStackTrace(); } } } After applying the ideas from the proposed answer, I changed the loop to this private void join(RandomAccessFile left, RandomAccessFile right, BufferedWriter output) throws Throwable { long _pointer = 0; RandomAccessFile _left = left; RandomAccessFile _right = right; BufferedWriter _output = output; Record _leftRecord; Record _rightRecord; _leftRecord = read(_left); _rightRecord = read(_right); while( _leftRecord != null && _rightRecord != null ) { if( _leftRecord.getKey() < _rightRecord.getKey() ) { write(_output, _leftRecord, null); _leftRecord = read(_left); } else if( _leftRecord.getKey() > _rightRecord.getKey() ) { write(_output, null, _rightRecord); _pointer = _right.getFilePointer(); _rightRecord = read(_right); } else { long _tempPointer = 0; int key = _leftRecord.getKey(); while( _leftRecord != null && _leftRecord.getKey() == key ) { _right.seek(_pointer); _rightRecord = read(_right); while( _rightRecord != null && _rightRecord.getKey() == key ) { write(_output, _leftRecord, _rightRecord ); _tempPointer = _right.getFilePointer(); _rightRecord = read(_right); } _leftRecord = read(_left); } _pointer = _tempPointer; } } if( _leftRecord != null ) { write(_output, _leftRecord, null); _leftRecord = read(_left); while(_leftRecord != null) { write(_output, _leftRecord, null); _leftRecord = read(_left); } } else { if( _rightRecord != null ) { write(_output, null, _rightRecord); _rightRecord = read(_right); while(_rightRecord != null) { write(_output, null, _rightRecord); _rightRecord = read(_right); } } } _left.close(); _right.close(); _output.flush(); _output.close(); } UPDATE While this approach worked, it was terribly slow and so I have modified this to create files as buffers and this works very well. Here is the update ... 
private long getMaxBufferedLines(File file) throws Throwable { long freeBytes = Runtime.getRuntime().freeMemory() / 2; return (freeBytes / (file.length() / getLineCount(file))); } private void join(File left, File right, File output, JoinType joinType) throws Throwable { BufferedReader leftFile = new BufferedReader(new FileReader(left)); BufferedReader rightFile = new BufferedReader(new FileReader(right)); BufferedWriter outputFile = new BufferedWriter(new FileWriter(output)); long maxBufferedLines = getMaxBufferedLines(right); Record leftRecord; Record rightRecord; leftRecord = read(leftFile); rightRecord = read(rightFile); while( leftRecord != null && rightRecord != null ) { if( leftRecord.getKey().compareTo(rightRecord.getKey()) < 0) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) > 0 ) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) == 0 ) { String key = leftRecord.getKey(); List<File> rightRecordFileList = new ArrayList<File>(); List<Record> rightRecordList = new ArrayList<Record>(); rightRecordList.add(rightRecord); rightRecord = consume(key, rightFile, rightRecordList, rightRecordFileList, maxBufferedLines); while( leftRecord != null && leftRecord.getKey().compareTo(key) == 0 ) { processRightRecords(outputFile, leftRecord, rightRecordFileList, rightRecordList, joinType); leftRecord = read(leftFile); } // need a dispose for deleting files in list } else { throw new Exception("DATA IS NOT SORTED"); } } if( leftRecord != null ) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); while(leftRecord != null) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); } } else { if( rightRecord != null ) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); while(rightRecord != null) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); } } } leftFile.close(); rightFile.close(); outputFile.flush(); outputFile.close(); } public void processRightRecords(BufferedWriter outputFile, Record leftRecord, List<File> rightFiles, List<Record> rightRecords, JoinType joinType) throws Throwable { for(File rightFile : rightFiles) { BufferedReader rightReader = new BufferedReader(new FileReader(rightFile)); Record rightRecord = read(rightReader); while(rightRecord != null){ if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.RightOuterJoin || joinType == JoinType.FullOuterJoin || 
joinType == JoinType.InnerJoin ) { write(outputFile, leftRecord, rightRecord); } rightRecord = read(rightReader); } rightReader.close(); } for(Record rightRecord : rightRecords) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.RightOuterJoin || joinType == JoinType.FullOuterJoin || joinType == JoinType.InnerJoin ) { write(outputFile, leftRecord, rightRecord); } } } /** * consume all records having key (either to a single list or multiple files) each file will * store a buffer full of data. The right record returned represents the outside flow (key is * already positioned to next one or null) so we can't use this record in below while loop or * within this block in general when comparing current key. The trick is to keep consuming * from a List. When it becomes empty, re-fill it from the next file until all files have * been consumed (and the last node in the list is read). The next outside iteration will be * ready to be processed (either it will be null or it points to the next biggest key * @throws Throwable * */ private Record consume(String key, BufferedReader reader, List<Record> records, List<File> files, long bufferMaxRecordLines ) throws Throwable { boolean processComplete = false; Record record = records.get(records.size() - 1); while(!processComplete){ long recordCount = records.size(); if( record.getKey().compareTo(key) == 0 ){ record = read(reader); while( record != null && record.getKey().compareTo(key) == 0 && recordCount < bufferMaxRecordLines ) { records.add(record); recordCount++; record = read(reader); } } processComplete = true; // if record is null, we are done if( record != null ) { // if the key has changed, we are done if( record.getKey().compareTo(key) == 0 ) { // Same key means we have exhausted the buffer. // Dump entire buffer into a file. The list of file // pointers will keep track of the files ... processComplete = false; dumpBufferToFile(records, files); records.clear(); records.add(record); } } } return record; } /** * Dump all records in List of Record objects to a file. Then, add that * file to List of File objects * * NEED TO PLACE A LIMIT ON NUMBER OF FILE POINTERS (check size of file list) * * @param records * @param files * @throws Throwable */ private void dumpBufferToFile(List<Record> records, List<File> files) throws Throwable { String prefix = "joiner_" + files.size() + 1; String suffix = ".dat"; File file = File.createTempFile(prefix, suffix, new File("cache")); BufferedWriter writer = new BufferedWriter(new FileWriter(file)); for( Record record : records ) { writer.write( record.dump() ); } files.add(file); writer.flush(); writer.close(); }

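    For readers skimming this question, a minimal sketch of the sort-merge idea under discussion, reduced to the inner-join case: only the right-hand rows sharing the current key are buffered, and the left side is streamed. The key-first, tab-separated record layout and the class names are illustrative assumptions, not the asker's actual format. Memory use is bounded by the largest single right-hand key-group; if even one group cannot fit, it has to be spilled to temporary files, which is essentially what the asker's final revision does.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    // Minimal sort-merge inner join; both inputs must be sorted ascending by integer key.
    public class MergeJoinSketch {

        static final class Row {
            final int key;
            final String data;
            Row(int key, String data) { this.key = key; this.data = data; }
        }

        // Each line is assumed to be "<key><TAB><data>"; returns null at end of file.
        static Row read(BufferedReader r) throws IOException {
            String line = r.readLine();
            if (line == null) return null;
            String[] parts = line.split("\t", 2);
            return new Row(Integer.parseInt(parts[0]), parts[1]);
        }

        public static void main(String[] args) throws IOException {
            try (BufferedReader left = new BufferedReader(new FileReader("LEFT.DAT"));
                 BufferedReader right = new BufferedReader(new FileReader("RIGHT.DAT"))) {
                Row l = read(left);
                Row r = read(right);
                while (l != null && r != null) {
                    if (l.key < r.key) {
                        l = read(left);                       // unmatched left row: skip (inner join)
                    } else if (l.key > r.key) {
                        r = read(right);                      // unmatched right row: skip
                    } else {
                        int key = r.key;
                        List<Row> group = new ArrayList<>();  // only one right-hand key-group in memory
                        while (r != null && r.key == key) { group.add(r); r = read(right); }
                        while (l != null && l.key == key) {
                            for (Row g : group) {
                                System.out.println(l.data + "\t" + g.data);
                            }
                            l = read(left);
                        }
                    }
                }
            }
        }
    }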
    Read the article

  • Most efficient way to implement delta time

    - by Starkers
    Here's one way to implement delta time: /// init /// var duration = 5000, currentTime = Date.now(); // and create cube, scene, camera etc ////// function animate() { /// determine delta /// var now = Date.now(), deltat = now - currentTime, currentTime = now, scalar = deltat / duration, angle = (Math.PI * 2) * scalar; ////// /// animate /// cube.rotation.y += angle; ////// /// update /// requestAnimationFrame(render); ////// } Could someone confirm I know how it works? Here's what I think is going on: Firstly, we set duration to 5000, which is how long the loop will take to complete in an ideal world. With a computer that is slow/busy, let's say the animation loop takes twice as long as it should, so 10000: When this happens, the scalar is set to 2.0: scalar = deltat / duration scalar = 10000 / 5000 scalar = 2.0 We now multiply all animation by twice as much: angle = (Math.PI * 2) * scalar; angle = (Math.PI * 2) * 2.0; angle = (Math.PI * 4) // which is 2 rotations When we do this, the cube rotation will appear to 'jump', but this is good because the animation remains real-time. With a computer that is going too quickly, let's say the animation loop takes half as long as it should, so 2500: When this happens, the scalar is set to 0.5: scalar = deltat / duration scalar = 2500 / 5000 scalar = 0.5 We now multiply all animation by a half: angle = (Math.PI * 2) * scalar; angle = (Math.PI * 2) * 0.5; angle = (Math.PI * 1) // which is half a rotation When we do this, the cube won't jump at all, the animation remains real-time, and it doesn't speed up. However, would I be right in thinking this doesn't alter how hard the computer is working? I mean, it still goes through the loop as fast as it can, and it still has to render the whole scene, just with different, smaller angles! So this is a bad way to implement delta time, right? Now let's pretend the computer is taking exactly as long as it should, so 5000: When this happens, the scalar is set to 1.0: angle = (Math.PI * 2) * scalar; angle = (Math.PI * 2) * 1; angle = (Math.PI * 2) // which is 1 rotation When we do this, everything is multiplied by 1, so nothing is changed. We'd get the same result if we weren't using delta time at all! My questions are as follows: Most importantly, have I got the right end of the stick here? How do we know to set the duration to 5000? Or can it be any number? I'm a bit vague about the "computer going too quickly". Is there a way to loop less often rather than reduce the animation steps? Seems like a better idea. Using this method, do all of our animations need to be multiplied by the scalar? Do we have to hunt down every last one and multiply it? Is this the best way to implement delta time? I think not, due to the fact that the computer can go nuts and all we do is divide each animation step, and because we need to hunt down every step and multiply it by the scalar. Not a very nice DSL, as it were. So what is the best way to implement delta time? Below is one way that I do not really get, but it may be a better way to implement delta time. Could someone explain please? // Globals INV_MAX_FPS = 1 / 60; frameDelta = 0; clock = new THREE.Clock(); // In the animation loop (the requestAnimationFrame callback)… frameDelta += clock.getDelta(); // API: "Get the seconds passed since the last call to this method." while (frameDelta >= INV_MAX_FPS) { update(INV_MAX_FPS); // calculate physics frameDelta -= INV_MAX_FPS; } How I think this works: Firstly, we set INV_MAX_FPS to 0.01666666666. How we will use this number does not jump out at me. 
We then initialize a frameDelta which stores how long the last loop took to run. Come the first loop, frameDelta is not greater than INV_MAX_FPS, so the while loop is not run (0 < 0.01666666666). So nothing happens. Now I really don't know what would cause this to happen, but let's pretend that the loop we just went through took 2 seconds to complete: We set frameDelta to 2: frameDelta += clock.getDelta(); frameDelta += 2.00 Now we run an animation thanks to update(0.01666666666). Again, what is the relevance of 0.01666666666? And then we take away 0.01666666666 from the frameDelta: frameDelta -= INV_MAX_FPS; frameDelta = frameDelta - INV_MAX_FPS; frameDelta = 2 - 0.01666666666 frameDelta = 1.98333333334 So let's go into the second loop. Let's say it took 2 (? Why not 2? Or 12? I am a bit confused): frameDelta += clock.getDelta(); frameDelta = frameDelta + clock.getDelta(); frameDelta = 1.98333333334 + 2 frameDelta = 3.98333333334 This time we enter the while loop because 3.98333333334 >= 0.01666666666. We run update. We take away 0.01666666666 from frameDelta again: frameDelta -= INV_MAX_FPS; frameDelta = frameDelta - INV_MAX_FPS; frameDelta = 3.98333333334 - 0.01666666666 frameDelta = 3.96666666668 Now let's pretend the loop is super quick and runs in just 0.1 seconds and continues to do this (because the computer isn't busy any more). Basically, the update function will be run, and every loop we take away 0.01666666666 from the frameDelta until the frameDelta is less than 0.01666666666. And then nothing happens until the computer runs slowly again? Could someone shed some light please? Does the update() update the scalar or something like that, so we still have to multiply everything by the scalar like in the first example?
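    To make the accumulator pattern in that last snippet concrete, here is a small self-contained sketch of the same idea, written in Java to keep the added examples on this page in one language; the 60 Hz step, the five-second rotation, the sleep and the frame count are assumptions for illustration only. The point of INV_MAX_FPS is exactly the fixed STEP below: the simulation always advances by the same amount per update, a slow frame simply triggers several updates in a row, and a very fast frame may trigger none.

    // Fixed-timestep loop: render every frame, but advance the simulation in
    // constant-sized steps so the physics is frame-rate independent.
    public class FixedTimestepLoop {
        static final double STEP = 1.0 / 60.0;               // simulation step in seconds (assumed 60 Hz)

        public static void main(String[] args) throws InterruptedException {
            double accumulator = 0.0;                        // plays the role of frameDelta
            double angle = 0.0;                              // toy state: rotation in radians
            long last = System.nanoTime();

            for (int frame = 0; frame < 300; frame++) {      // stand-in for the real render loop
                Thread.sleep(5);                             // pretend rendering takes ~5 ms
                long now = System.nanoTime();
                accumulator += (now - last) / 1_000_000_000.0;   // seconds since the last frame
                last = now;

                // Catch up: run as many fixed steps as the elapsed time covers.
                while (accumulator >= STEP) {
                    angle += (Math.PI * 2) * STEP / 5.0;     // one full turn every 5 seconds
                    accumulator -= STEP;
                }

                // render(angle) would go here, once per frame, however many steps ran.
            }
            System.out.println("final angle after 300 frames = " + angle);
        }
    }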

    Read the article

  • Optimizing a thread safe Java NIO / Serialization / FIFO Queue [migrated]

    - by trialcodr
    I've written a thread safe, persistent FIFO for Serializable items. The reason for reinventing the wheel is that we simply can't afford any third party dependencies in this project and want to keep this really simple. The problem is it isn't fast enough. Most of it is undoubtedly due to reading and writing directly to disk but I think we should be able to squeeze a bit more out of it anyway. Any ideas on how to improve the performance of the 'take'- and 'add'-methods? /** * <code>DiskQueue</code> Persistent, thread safe FIFO queue for * <code>Serializable</code> items. */ public class DiskQueue<ItemT extends Serializable> { public static final int EMPTY_OFFS = -1; public static final int LONG_SIZE = 8; public static final int HEADER_SIZE = LONG_SIZE * 2; private InputStream inputStream; private OutputStream outputStream; private RandomAccessFile file; private FileChannel channel; private long offs = EMPTY_OFFS; private long size = 0; public DiskQueue(String filename) { try { boolean fileExists = new File(filename).exists(); file = new RandomAccessFile(filename, "rwd"); if (fileExists) { size = file.readLong(); offs = file.readLong(); } else { file.writeLong(size); file.writeLong(offs); } } catch (FileNotFoundException e) { throw new RuntimeException(e); } catch (IOException e) { throw new RuntimeException(e); } channel = file.getChannel(); inputStream = Channels.newInputStream(channel); outputStream = Channels.newOutputStream(channel); } /** * Add item to end of queue. */ public void add(ItemT item) { try { synchronized (this) { channel.position(channel.size()); ObjectOutputStream s = new ObjectOutputStream(outputStream); s.writeObject(item); s.flush(); size++; file.seek(0); file.writeLong(size); if (offs == EMPTY_OFFS) { offs = HEADER_SIZE; file.writeLong(offs); } notify(); } } catch (IOException e) { throw new RuntimeException(e); } } /** * Clears overhead by moving the remaining items up and shortening the file. */ public synchronized void defrag() { if (offs > HEADER_SIZE && size > 0) { try { long totalBytes = channel.size() - offs; ByteBuffer buffer = ByteBuffer.allocateDirect((int) totalBytes); channel.position(offs); for (int bytes = 0; bytes < totalBytes;) { int res = channel.read(buffer); if (res == -1) { throw new IOException("Failed to read data into buffer"); } bytes += res; } channel.position(HEADER_SIZE); buffer.flip(); for (int bytes = 0; bytes < totalBytes;) { int res = channel.write(buffer); if (res == -1) { throw new IOException("Failed to write buffer to file"); } bytes += res; } offs = HEADER_SIZE; file.seek(LONG_SIZE); file.writeLong(offs); file.setLength(HEADER_SIZE + totalBytes); } catch (IOException e) { throw new RuntimeException(e); } } } /** * Returns the queue overhead in bytes. */ public synchronized long overhead() { return (offs == EMPTY_OFFS) ? 0 : offs - HEADER_SIZE; } /** * Returns the first item in the queue, blocks if queue is empty. */ public ItemT peek() throws InterruptedException { block(); synchronized (this) { if (offs != EMPTY_OFFS) { return readItem(); } } return peek(); } /** * Returns the number of remaining items in queue. */ public synchronized long size() { return size; } /** * Removes and returns the first item in the queue, blocks if queue is empty. 
*/ public ItemT take() throws InterruptedException { block(); try { synchronized (this) { if (offs != EMPTY_OFFS) { ItemT result = readItem(); size--; offs = channel.position(); file.seek(0); if (offs == channel.size()) { truncate(); } file.writeLong(size); file.writeLong(offs); return result; } } return take(); } catch (IOException e) { throw new RuntimeException(e); } } /** * Throw away all items and reset the file. */ public synchronized void truncate() { try { offs = EMPTY_OFFS; file.setLength(HEADER_SIZE); size = 0; } catch (IOException e) { throw new RuntimeException(e); } } /** * Block until an item is available. */ protected void block() throws InterruptedException { while (offs == EMPTY_OFFS) { try { synchronized (this) { wait(); file.seek(LONG_SIZE); offs = file.readLong(); } } catch (IOException e) { throw new RuntimeException(e); } } } /** * Read and return item. */ @SuppressWarnings("unchecked") protected ItemT readItem() { try { channel.position(offs); return (ItemT) new ObjectInputStream(inputStream).readObject(); } catch (ClassNotFoundException e) { throw new RuntimeException(e); } catch (IOException e) { throw new RuntimeException(e); } } }
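    One direction that may be worth measuring (a sketch only, with illustrative names, not a drop-in change to the class above): serialize each item into an in-memory buffer first and append it with a single channel write, so that the write-through "rwd" file sees one large write per add instead of the many small writes ObjectOutputStream issues internally. Opening the file with "rw" and calling force() only when durability is really needed is another knob, but it weakens the crash-safety guarantee, so it is only an option if losing the most recently added items is acceptable.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    // Sketch: build the serialized form in memory, then append it in one write.
    final class AppendHelper {
        static ByteBuffer serialize(Serializable item) throws IOException {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(item);
            }
            return ByteBuffer.wrap(bytes.toByteArray());
        }

        static void append(FileChannel channel, Serializable item) throws IOException {
            ByteBuffer buf = serialize(item);
            channel.position(channel.size());
            while (buf.hasRemaining()) {
                channel.write(buf);          // one large write against the synchronous file
            }
        }
    }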

    Read the article

  • how would you debug this javascript problem?

    - by pedalpete
    I've been travelling and developing for the past few weeks. The site I'm developing was running well. Then, the other day, I connected to a network and the page 'looked' fine, but it turns out the JavaScript wasn't running. I checked Firebug, and there were no errors, as I was suspecting that maybe a script didn't load (I'm using the Google API for jQuery and jQuery UI, as well as loading the Google Maps API and fbconnect). I would suspect that if the issue was with one of these scripts not loading I would get an error, and yet there was nothing. Thinking maybe I didn't connect properly or something, I reconnected to the network and even restarted my computer, as well as trying to run the local version. I got nothing. The local version not running also hinted to me that it was the loading of an external JavaScript file which caused the problem. I let it pass as something strange with that one network. Unfortunately, now I'm 100s of miles away. Today my brother sent me an e-mail that the network he was on at the airport wouldn't load my page. Same issue. Everything is laid out properly, and part of the layout is set in JavaScript, so clearly JavaScript is running. He too got no errors. Of course, he got on his plane, and now he is no longer at the airport. Now the site works on his computer (and I haven't changed anything). How on earth would you go about figuring out what happened in this situation? That is two of maybe 12 or so networks. But I have no idea how I would find a network that doesn't work (and living in a small town, it could be difficult for me to find a network that doesn't work). Any ideas? The site is still in dev, so I'd rather not post a link just yet (but I could in a few days). What I can see not working is the JavaScript functions which are called on load and on click. So I do think it is a JavaScript issue, but with no errors. This wouldn't be as HUGE an issue if I could find and sit on one of these networks, but I can't. So what would you do? EDIT ---------------------------------------------------------- The first function(s - they're linked) that doesn't get called is below. I've cut the code off at the .ajax call, as the call wasn't being made. 
function getResultsFromForm(){ jQuery('form#filterList input.button').hide(); var searchAddress=jQuery('form#filterList input#searchTxt').val(); if(searchAddress=='' || searchAddress=='<?php echo $searchLocation; ?>'){ mapShow(20, -40, 0, 'areaMap', 2); jQuery('form#filterList input.button').show(); return; } if (GBrowserIsCompatible()) { var geo = new GClientGeocoder(); geo.setBaseCountryCode(cl.address.country); geo.getLocations(searchAddress, function (result) { if(!result.Placemark && searchAddress!='<?php echo $searchLocation; ?>'){ jQuery('span#addressNotFound').text('<?php echo $addressNotFound; ?>').slideDown('slow'); jQuery('form#filterList input.button').show(); } else { jQuery('span#addressNotFound').slideUp('slow').empty(); jQuery('span#headerLocal').text(searchAddress); var date = new Date(); date.setTime(date.getTime() + (8 * 24 * 60 * 60 * 1000)); jQuery.cookie('address', searchAddress, { expires: date}); var accuracy= result.Placemark[0].AddressDetails.Accuracy; var lat = result.Placemark[0].Point.coordinates[1]; var long = result.Placemark[0].Point.coordinates[0]; lat=parseFloat(lat); long=parseFloat(long); var getTab=jQuery('div#tabs div#active').attr('class'); jQuery('div#tabs').show(); loadForecast(lat, long, getTab, 'true', 0); var zoom=zoomLevel(); mapShow(lat, long, accuracy, 'areaMap', zoom ); } }); } } function zoomLevel(){ var zoomarray= new Array(); zoomarray=jQuery('span.viewDist').attr('id'); zoomarray=zoomarray.split("-"); var zoom=zoomarray[1]; if(zoom==''){ zoom=5; } zoom=parseFloat(zoom); return(zoom); } function loadForecast(lat, long, type, loadForecast, page){ jQuery('div#holdForecast').empty(); var date = new Date(); var d = date.getDate(); var day = (d < 10) ? '0' + d : d; var m = date.getMonth() + 1; var month = (m < 10) ? '0' + m : m; var year='2009'; toDate=year+'-'+month+'-'+day; var genre=jQuery('span.genreblock span#updateGenre').html(); var numDays=''; var numResults=''; var range=jQuery('span.viewDist').attr('id'); var dateRange = jQuery('.updateDate').attr('id'); jQuery('div#holdShows ul.showList').html('<li class="show"><div class="showData"><center><img src="../hwImages/loading.gif"/></center></div></li>'); jQuery('div#holdShows ul.'+type+'List').livequery(function(){ jQuery.ajax({ type: "GET", url: "processes/formatShows.php", data: "output=&genre="+genre+"&numResults="+numResults+"&date="+toDate+"&dateRange="+dateRange+"&range="+range+"&lat="+lat+"&long="+long+'&page='+page, success: function(response){ EDIT 2 ----------------------------------------------------------------------------- Please keep in mind that the problem is not that I can't load the site, the site works fine on most connections, but there are times when the site doesn't work, and no errors are thrown, and nothing changes. My brother couldn't run it earlier today while I had no problems, so it was something to do with his location/network. HOWEVER, the page loads, he had a connection, it was his first time visiting the site, so nothing could have been cashed. Same with when I had the issue a few days before. I didn't change anything, and I got to a different network and everything worked fine.

    Read the article

  • Traditional IO vs memory-mapped

    - by Senne
    I'm trying to illustrate the difference in performance between traditional IO and memory mapped files in java to students. I found an example somewhere on internet but not everything is clear to me, I don't even think all steps are nececery. I read a lot about it here and there but I'm not convinced about a correct implementation of neither of them. The code I try to understand is: public class FileCopy{ public static void main(String args[]){ if (args.length < 1){ System.out.println(" Wrong usage!"); System.out.println(" Correct usage is : java FileCopy <large file with full path>"); System.exit(0); } String inFileName = args[0]; File inFile = new File(inFileName); if (inFile.exists() != true){ System.out.println(inFileName + " does not exist!"); System.exit(0); } try{ new FileCopy().memoryMappedCopy(inFileName, inFileName+".new" ); new FileCopy().customBufferedCopy(inFileName, inFileName+".new1"); }catch(FileNotFoundException fne){ fne.printStackTrace(); }catch(IOException ioe){ ioe.printStackTrace(); }catch (Exception e){ e.printStackTrace(); } } public void memoryMappedCopy(String fromFile, String toFile ) throws Exception{ long timeIn = new Date().getTime(); // read input file RandomAccessFile rafIn = new RandomAccessFile(fromFile, "rw"); FileChannel fcIn = rafIn.getChannel(); ByteBuffer byteBuffIn = fcIn.map(FileChannel.MapMode.READ_WRITE, 0,(int) fcIn.size()); fcIn.read(byteBuffIn); byteBuffIn.flip(); RandomAccessFile rafOut = new RandomAccessFile(toFile, "rw"); FileChannel fcOut = rafOut.getChannel(); ByteBuffer writeMap = fcOut.map(FileChannel.MapMode.READ_WRITE,0,(int) fcIn.size()); writeMap.put(byteBuffIn); long timeOut = new Date().getTime(); System.out.println("Memory mapped copy Time for a file of size :" + (int) fcIn.size() +" is "+(timeOut-timeIn)); fcOut.close(); fcIn.close(); } static final int CHUNK_SIZE = 100000; static final char[] inChars = new char[CHUNK_SIZE]; public static void customBufferedCopy(String fromFile, String toFile) throws IOException{ long timeIn = new Date().getTime(); Reader in = new FileReader(fromFile); Writer out = new FileWriter(toFile); while (true) { synchronized (inChars) { int amountRead = in.read(inChars); if (amountRead == -1) { break; } out.write(inChars, 0, amountRead); } } long timeOut = new Date().getTime(); System.out.println("Custom buffered copy Time for a file of size :" + (int) new File(fromFile).length() +" is "+(timeOut-timeIn)); in.close(); out.close(); } } When exactly is it nececary to use RandomAccessFile? Here it is used to read and write in the memoryMappedCopy, is it actually nececary just to copy a file at all? Or is it a part of memorry mapping? In customBufferedCopy, why is synchronized used here? 
I also found a different example that -should- test the performance between the 2: public class MappedIO { private static int numOfInts = 4000000; private static int numOfUbuffInts = 200000; private abstract static class Tester { private String name; public Tester(String name) { this.name = name; } public long runTest() { System.out.print(name + ": "); try { long startTime = System.currentTimeMillis(); test(); long endTime = System.currentTimeMillis(); return (endTime - startTime); } catch (IOException e) { throw new RuntimeException(e); } } public abstract void test() throws IOException; } private static Tester[] tests = { new Tester("Stream Write") { public void test() throws IOException { DataOutputStream dos = new DataOutputStream( new BufferedOutputStream( new FileOutputStream(new File("temp.tmp")))); for(int i = 0; i < numOfInts; i++) dos.writeInt(i); dos.close(); } }, new Tester("Mapped Write") { public void test() throws IOException { FileChannel fc = new RandomAccessFile("temp.tmp", "rw") .getChannel(); IntBuffer ib = fc.map( FileChannel.MapMode.READ_WRITE, 0, fc.size()) .asIntBuffer(); for(int i = 0; i < numOfInts; i++) ib.put(i); fc.close(); } }, new Tester("Stream Read") { public void test() throws IOException { DataInputStream dis = new DataInputStream( new BufferedInputStream( new FileInputStream("temp.tmp"))); for(int i = 0; i < numOfInts; i++) dis.readInt(); dis.close(); } }, new Tester("Mapped Read") { public void test() throws IOException { FileChannel fc = new FileInputStream( new File("temp.tmp")).getChannel(); IntBuffer ib = fc.map( FileChannel.MapMode.READ_ONLY, 0, fc.size()) .asIntBuffer(); while(ib.hasRemaining()) ib.get(); fc.close(); } }, new Tester("Stream Read/Write") { public void test() throws IOException { RandomAccessFile raf = new RandomAccessFile( new File("temp.tmp"), "rw"); raf.writeInt(1); for(int i = 0; i < numOfUbuffInts; i++) { raf.seek(raf.length() - 4); raf.writeInt(raf.readInt()); } raf.close(); } }, new Tester("Mapped Read/Write") { public void test() throws IOException { FileChannel fc = new RandomAccessFile( new File("temp.tmp"), "rw").getChannel(); IntBuffer ib = fc.map( FileChannel.MapMode.READ_WRITE, 0, fc.size()) .asIntBuffer(); ib.put(0); for(int i = 1; i < numOfUbuffInts; i++) ib.put(ib.get(i - 1)); fc.close(); } } }; public static void main(String[] args) { for(int i = 0; i < tests.length; i++) System.out.println(tests[i].runTest()); } } I more or less see whats going on, my output looks like this: Stream Write: 653 Mapped Write: 51 Stream Read: 651 Mapped Read: 40 Stream Read/Write: 14481 Mapped Read/Write: 6 What is makeing the Stream Read/Write so unbelievably long? And as a read/write test, to me it looks a bit pointless to read the same integer over and over (if I understand well what's going on in the Stream Read/Write) Wouldn't it be better to read int's from the previously written file and just read and write ints on the same place? Is there a better way to illustrate it? I've been breaking my head about a lot of these things for a while and I just can't get the whole picture..
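    As a third data point to put next to the two copy methods in the first listing (a sketch; the file names are placeholders), a channel-to-channel copy via transferTo lets the operating system move the bytes without an intermediate Java-side buffer, and it also avoids the char decoding/encoding that the FileReader/FileWriter version pays for:

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.channels.FileChannel;

    // Copy a file channel-to-channel; transferTo can hand the bulk move to the OS.
    public class ChannelCopy {
        public static void copy(String from, String to) throws IOException {
            try (FileChannel in = new FileInputStream(from).getChannel();
                 FileChannel out = new FileOutputStream(to).getChannel()) {
                long position = 0;
                long size = in.size();
                while (position < size) {
                    position += in.transferTo(position, size - position, out);
                }
            }
        }

        public static void main(String[] args) throws IOException {
            long start = System.currentTimeMillis();
            copy("temp.tmp", "temp.copy");
            System.out.println("transferTo copy took " + (System.currentTimeMillis() - start) + " ms");
        }
    }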

    Read the article

  • Calling AuditQuerySystemPolicy() (advapi32.dll) from C# returns "The parameter is incorrect"

    - by JCCyC
    The sequence is like follows: Open a policy handle with LsaOpenPolicy() (not shown) Call LsaQueryInformationPolicy() to get the number of categories; For each category: Call AuditLookupCategoryGuidFromCategoryId() to turn the enum value into a GUID; Call AuditEnumerateSubCategories() to get a list of the GUIDs of all subcategories; Call AuditQuerySystemPolicy() to get the audit policies for the subcategories. All of these work and return expected, sensible values except the last. Calling AuditQuerySystemPolicy() gets me a "The parameter is incorrect" error. I'm thinking there must be some subtle unmarshaling problem. I'm probably misinterpreting what exactly AuditEnumerateSubCategories() returns, but I'm stumped. You'll see (commented) I tried to dereference the return pointer from AuditEnumerateSubCategories() as a pointer. Doing or not doing that gives the same result. Code: #region LSA types public enum POLICY_INFORMATION_CLASS { PolicyAuditLogInformation = 1, PolicyAuditEventsInformation, PolicyPrimaryDomainInformation, PolicyPdAccountInformation, PolicyAccountDomainInformation, PolicyLsaServerRoleInformation, PolicyReplicaSourceInformation, PolicyDefaultQuotaInformation, PolicyModificationInformation, PolicyAuditFullSetInformation, PolicyAuditFullQueryInformation, PolicyDnsDomainInformation } public enum POLICY_AUDIT_EVENT_TYPE { AuditCategorySystem, AuditCategoryLogon, AuditCategoryObjectAccess, AuditCategoryPrivilegeUse, AuditCategoryDetailedTracking, AuditCategoryPolicyChange, AuditCategoryAccountManagement, AuditCategoryDirectoryServiceAccess, AuditCategoryAccountLogon } [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] public struct POLICY_AUDIT_EVENTS_INFO { public bool AuditingMode; public IntPtr EventAuditingOptions; public UInt32 MaximumAuditEventCount; } [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] public struct GUID { public UInt32 Data1; public UInt16 Data2; public UInt16 Data3; public Byte Data4a; public Byte Data4b; public Byte Data4c; public Byte Data4d; public Byte Data4e; public Byte Data4f; public Byte Data4g; public Byte Data4h; public override string ToString() { return Data1.ToString("x8") + "-" + Data2.ToString("x4") + "-" + Data3.ToString("x4") + "-" + Data4a.ToString("x2") + Data4b.ToString("x2") + "-" + Data4c.ToString("x2") + Data4d.ToString("x2") + Data4e.ToString("x2") + Data4f.ToString("x2") + Data4g.ToString("x2") + Data4h.ToString("x2"); } } #endregion #region LSA Imports [DllImport("kernel32.dll")] extern static int GetLastError(); [DllImport("advapi32.dll", CharSet = CharSet.Unicode, PreserveSig = true)] public static extern UInt32 LsaNtStatusToWinError( long Status); [DllImport("advapi32.dll", CharSet = CharSet.Unicode, PreserveSig = true)] public static extern long LsaOpenPolicy( ref LSA_UNICODE_STRING SystemName, ref LSA_OBJECT_ATTRIBUTES ObjectAttributes, Int32 DesiredAccess, out IntPtr PolicyHandle ); [DllImport("advapi32.dll", CharSet = CharSet.Unicode, PreserveSig = true)] public static extern long LsaClose(IntPtr PolicyHandle); [DllImport("advapi32.dll", CharSet = CharSet.Unicode, PreserveSig = true)] public static extern long LsaFreeMemory(IntPtr Buffer); [DllImport("advapi32.dll", CharSet = CharSet.Unicode, PreserveSig = true)] public static extern void AuditFree(IntPtr Buffer); [DllImport("advapi32.dll", SetLastError = true, PreserveSig = true)] public static extern long LsaQueryInformationPolicy( IntPtr PolicyHandle, POLICY_INFORMATION_CLASS InformationClass, out IntPtr Buffer); 
[DllImport("advapi32.dll", SetLastError = true, PreserveSig = true)] public static extern bool AuditLookupCategoryGuidFromCategoryId( POLICY_AUDIT_EVENT_TYPE AuditCategoryId, IntPtr pAuditCategoryGuid); [DllImport("advapi32.dll", SetLastError = true, PreserveSig = true)] public static extern bool AuditEnumerateSubCategories( IntPtr pAuditCategoryGuid, bool bRetrieveAllSubCategories, out IntPtr ppAuditSubCategoriesArray, out ulong pCountReturned); [DllImport("advapi32.dll", SetLastError = true, PreserveSig = true)] public static extern bool AuditQuerySystemPolicy( IntPtr pSubCategoryGuids, ulong PolicyCount, out IntPtr ppAuditPolicy); #endregion Dictionary<string, UInt32> retList = new Dictionary<string, UInt32>(); long lretVal; uint retVal; IntPtr pAuditEventsInfo; lretVal = LsaQueryInformationPolicy(policyHandle, POLICY_INFORMATION_CLASS.PolicyAuditEventsInformation, out pAuditEventsInfo); retVal = LsaNtStatusToWinError(lretVal); if (retVal != 0) { LsaClose(policyHandle); throw new System.ComponentModel.Win32Exception((int)retVal); } POLICY_AUDIT_EVENTS_INFO myAuditEventsInfo = new POLICY_AUDIT_EVENTS_INFO(); myAuditEventsInfo = (POLICY_AUDIT_EVENTS_INFO)Marshal.PtrToStructure(pAuditEventsInfo, myAuditEventsInfo.GetType()); IntPtr subCats = IntPtr.Zero; ulong nSubCats = 0; for (int audCat = 0; audCat < myAuditEventsInfo.MaximumAuditEventCount; audCat++) { GUID audCatGuid = new GUID(); if (!AuditLookupCategoryGuidFromCategoryId((POLICY_AUDIT_EVENT_TYPE)audCat, new IntPtr(&audCatGuid))) { int causingError = GetLastError(); LsaFreeMemory(pAuditEventsInfo); LsaClose(policyHandle); throw new System.ComponentModel.Win32Exception(causingError); } if (!AuditEnumerateSubCategories(new IntPtr(&audCatGuid), true, out subCats, out nSubCats)) { int causingError = GetLastError(); LsaFreeMemory(pAuditEventsInfo); LsaClose(policyHandle); throw new System.ComponentModel.Win32Exception(causingError); } // Dereference the first pointer-to-pointer to point to the first subcategory // subCats = (IntPtr)Marshal.PtrToStructure(subCats, subCats.GetType()); if (nSubCats > 0) { IntPtr audPolicies = IntPtr.Zero; if (!AuditQuerySystemPolicy(subCats, nSubCats, out audPolicies)) { int causingError = GetLastError(); if (subCats != IntPtr.Zero) AuditFree(subCats); LsaFreeMemory(pAuditEventsInfo); LsaClose(policyHandle); throw new System.ComponentModel.Win32Exception(causingError); } AUDIT_POLICY_INFORMATION myAudPol = new AUDIT_POLICY_INFORMATION(); for (ulong audSubCat = 0; audSubCat < nSubCats; audSubCat++) { // Process audPolicies[audSubCat], turn GUIDs into names, fill retList. 
// http://msdn.microsoft.com/en-us/library/aa373931%28VS.85%29.aspx // http://msdn.microsoft.com/en-us/library/bb648638%28VS.85%29.aspx IntPtr itemAddr = IntPtr.Zero; IntPtr itemAddrAddr = new IntPtr(audPolicies.ToInt64() + (long)(audSubCat * (ulong)Marshal.SizeOf(itemAddr))); itemAddr = (IntPtr)Marshal.PtrToStructure(itemAddrAddr, itemAddr.GetType()); myAudPol = (AUDIT_POLICY_INFORMATION)Marshal.PtrToStructure(itemAddr, myAudPol.GetType()); retList[myAudPol.AuditSubCategoryGuid.ToString()] = myAudPol.AuditingInformation; } if (audPolicies != IntPtr.Zero) AuditFree(audPolicies); } if (subCats != IntPtr.Zero) AuditFree(subCats); subCats = IntPtr.Zero; nSubCats = 0; } lretVal = LsaFreeMemory(pAuditEventsInfo); retVal = LsaNtStatusToWinError(lretVal); if (retVal != 0) throw new System.ComponentModel.Win32Exception((int)retVal); lretVal = LsaClose(policyHandle); retVal = LsaNtStatusToWinError(lretVal); if (retVal != 0) throw new System.ComponentModel.Win32Exception((int)retVal);
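
    A possible direction, offered as a hedged sketch rather than a confirmed diagnosis: the declarations above marshal the native ULONG parameters as C# ulong, but ULONG is 32 bits on Windows, and the Audit* functions return BOOLEAN (a single byte) rather than a 4-byte BOOL. On a 32-bit process in particular, passing an 8-byte ulong where a 4-byte ULONG is expected shifts the remaining stack arguments, which would explain ERROR_INVALID_PARAMETER ("The parameter is incorrect"). Revised signatures might look like the following, where System.Guid replaces the hand-rolled GUID struct and Marshal.GetLastWin32Error() should replace the separate GetLastError import:

        // Sketch of corrected P/Invoke declarations (assumption: the error is a marshaling issue).
        [DllImport("advapi32.dll", SetLastError = true)]
        [return: MarshalAs(UnmanagedType.I1)]                    // BOOLEAN is one byte
        public static extern bool AuditLookupCategoryGuidFromCategoryId(
            POLICY_AUDIT_EVENT_TYPE AuditCategoryId,
            out Guid pAuditCategoryGuid);

        [DllImport("advapi32.dll", SetLastError = true)]
        [return: MarshalAs(UnmanagedType.I1)]
        public static extern bool AuditEnumerateSubCategories(
            ref Guid pAuditCategoryGuid,
            [MarshalAs(UnmanagedType.I1)] bool bRetrieveAllSubCategories,
            out IntPtr ppAuditSubCategoriesArray,                // GUID array; release with AuditFree
            out uint pCountReturned);                            // PULONG maps to uint, not ulong

        [DllImport("advapi32.dll", SetLastError = true)]
        [return: MarshalAs(UnmanagedType.I1)]
        public static extern bool AuditQuerySystemPolicy(
            IntPtr pSubCategoryGuids,                            // array returned above, passed as-is
            uint PolicyCount,                                    // ULONG maps to uint, not ulong
            out IntPtr ppAuditPolicy);                           // AUDIT_POLICY_INFORMATION array; AuditFree

    If that assumption holds, the commented-out extra dereference of subCats is unnecessary: the IntPtr produced by AuditEnumerateSubCategories already points at the first GUID in the array and can be handed straight to AuditQuerySystemPolicy.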

    Read the article

  • Lazy loading of child throwing session error

    - by Thomas Buckley
    I'm the following error when calling purchaseService.updatePurchase(purchase) inside my TagController: SEVERE: Servlet.service() for servlet [PurchaseAPIServer] in context with path [/PurchaseAPIServer] threw exception [Request processing failed; nested exception is org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: com.app.model.Purchase.tags, no session or session was closed] with root cause org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: com.app.model.Purchase.tags, no session or session was closed at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:383) at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationExceptionIfNotConnected(AbstractPersistentCollection.java:375) at org.hibernate.collection.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:368) at org.hibernate.collection.PersistentSet.add(PersistentSet.java:212) at com.app.model.Purchase.addTags(Purchase.java:207) at com.app.controller.TagController.createAll(TagController.java:79) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:212) at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:126) at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:96) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:617) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:578) at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:80) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:900) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:827) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:882) at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:789) at javax.servlet.http.HttpServlet.service(HttpServlet.java:641) at javax.servlet.http.HttpServlet.service(HttpServlet.java:722) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98) at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:927) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118) at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407) at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:999) at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:565) at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:309) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) TagController: @RequestMapping(value = "purchases/{purchaseId}/tags", method = RequestMethod.POST, params = "manyTags") @ResponseStatus(HttpStatus.CREATED) public void createAll(@PathVariable("purchaseId") final Long purchaseId, @RequestBody final Tag[] entities) { Purchase purchase = purchaseService.getById(purchaseId); Set<Tag> tags = new HashSet<Tag>(Arrays.asList(entities)); purchase.addTags(tags); purchaseService.updatePurchase(purchase); } Purchase: @Entity @XmlRootElement public class Purchase implements Serializable { /** * */ private static final long serialVersionUID = 6603477834338392140L; @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; @OneToMany(mappedBy = "purchase", fetch = FetchType.LAZY, cascade={CascadeType.ALL}) private Set<Tag> tags; @JsonIgnore public Set<Tag> getTags() { if (tags == null) { tags = new LinkedHashSet<Tag>(); } return tags; } public void setTags(Set<Tag> tags) { this.tags = tags; } ... } Tag: @Entity @XmlRootElement public class Tag implements Serializable { /** * */ private static final long serialVersionUID = 5165922776051697002L; @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; @ManyToOne(fetch = FetchType.LAZY) @JoinColumns({@JoinColumn(name = "PURCHASEID", referencedColumnName = "ID")}) private Purchase purchase; @JsonIgnore public Purchase getPurchase() { return purchase; } public void setPurchase(Purchase purchase) { this.purchase = purchase; } } PurchaseService: @Service public class PurchaseService implements IPurchaseService { @Autowired private IPurchaseDAO purchaseDAO; public PurchaseService() { } @Transactional public List<Purchase> getAll() { return purchaseDAO.findAll(); } @Transactional public Purchase getById(Long id) { return purchaseDAO.findOne(id); } @Transactional public void addPurchase(Purchase purchase) { purchaseDAO.save(purchase); } @Transactional public void updatePurchase(Purchase purchase) { purchaseDAO.update(purchase); } } TagService: @Service public class TagService implements ITagService { @Autowired private ITagDAO tagDAO; public TagService() { } @Transactional public List<Tag> getAll() { return tagDAO.findAll(); } @Transactional public Tag getById(Long id) { return tagDAO.findOne(id); } @Transactional public void addTag(Tag tag) { tagDAO.save(tag); } @Transactional public void updateTag(Tag tag) { tagDAO.update(tag); } } Any ideas on how I can fix this? (I want to avoid using EAGER loading). Do I need to setup some form of session management for transactions? Thanks
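
    One way to avoid the LazyInitializationException without switching to EAGER fetching is to touch the lazy tags collection only while a transaction (and therefore an open Hibernate session) is active, i.e. inside the service layer rather than the controller. A minimal sketch, assuming a new method is added to PurchaseService and its interface (addTagsToPurchase is a hypothetical name, not part of the code posted above):

        // In PurchaseService (also declare it on IPurchaseService):
        @Transactional
        public void addTagsToPurchase(Long purchaseId, Set<Tag> newTags) {
            // Loaded and modified inside one transaction, so the session is still
            // open when the lazy 'tags' collection is initialized by addTags().
            Purchase purchase = purchaseDAO.findOne(purchaseId);
            for (Tag tag : newTags) {
                tag.setPurchase(purchase);   // keep the owning (Tag) side of the association in sync
            }
            purchase.addTags(newTags);
            purchaseDAO.update(purchase);
        }

        // The controller then delegates instead of touching the collection itself:
        // purchaseService.addTagsToPurchase(purchaseId, new HashSet<Tag>(Arrays.asList(entities)));

    Alternatives would be a "join fetch" query or Hibernate.initialize() called inside a transactional method, or an OpenSessionInView/OpenEntityManagerInView filter, but keeping collection access behind @Transactional service methods is usually the least intrusive change.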

    Read the article

  • Apache 2 Virtual Hosts not working on OS X 10.6

    - by matt_lethargic
    This is my first MacBook and I'm trying to get virtual hosts up and running so as it's going to be my dev machine. I've got apache/php/mysql running fine, the problem is that what ever address I go to I just get one of the virtual hosts I've setup. I can't even get to the root site anymore. I had phpmyadmin setup on http://localhost/pma but now that comes up with an error. If I take out the vhosts config file it seems to work again. I've put all my configs I can think you'll need below. ############## httpd config ############# ServerRoot "/usr" Listen 80 LoadModule authn_file_module libexec/apache2/mod_authn_file.so LoadModule authn_dbm_module libexec/apache2/mod_authn_dbm.so LoadModule authn_anon_module libexec/apache2/mod_authn_anon.so LoadModule authn_dbd_module libexec/apache2/mod_authn_dbd.so LoadModule authn_default_module libexec/apache2/mod_authn_default.so LoadModule authz_host_module libexec/apache2/mod_authz_host.so LoadModule authz_groupfile_module libexec/apache2/mod_authz_groupfile.so LoadModule authz_user_module libexec/apache2/mod_authz_user.so LoadModule authz_dbm_module libexec/apache2/mod_authz_dbm.so LoadModule authz_owner_module libexec/apache2/mod_authz_owner.so LoadModule authz_default_module libexec/apache2/mod_authz_default.so LoadModule auth_basic_module libexec/apache2/mod_auth_basic.so LoadModule auth_digest_module libexec/apache2/mod_auth_digest.so LoadModule cache_module libexec/apache2/mod_cache.so LoadModule disk_cache_module libexec/apache2/mod_disk_cache.so LoadModule mem_cache_module libexec/apache2/mod_mem_cache.so LoadModule dbd_module libexec/apache2/mod_dbd.so LoadModule dumpio_module libexec/apache2/mod_dumpio.so LoadModule reqtimeout_module libexec/apache2/mod_reqtimeout.so LoadModule ext_filter_module libexec/apache2/mod_ext_filter.so LoadModule include_module libexec/apache2/mod_include.so LoadModule filter_module libexec/apache2/mod_filter.so LoadModule substitute_module libexec/apache2/mod_substitute.so LoadModule deflate_module libexec/apache2/mod_deflate.so LoadModule log_config_module libexec/apache2/mod_log_config.so LoadModule log_forensic_module libexec/apache2/mod_log_forensic.so LoadModule logio_module libexec/apache2/mod_logio.so LoadModule env_module libexec/apache2/mod_env.so LoadModule mime_magic_module libexec/apache2/mod_mime_magic.so LoadModule cern_meta_module libexec/apache2/mod_cern_meta.so LoadModule expires_module libexec/apache2/mod_expires.so LoadModule headers_module libexec/apache2/mod_headers.so LoadModule ident_module libexec/apache2/mod_ident.so LoadModule usertrack_module libexec/apache2/mod_usertrack.so LoadModule setenvif_module libexec/apache2/mod_setenvif.so LoadModule version_module libexec/apache2/mod_version.so LoadModule proxy_module libexec/apache2/mod_proxy.so LoadModule proxy_connect_module libexec/apache2/mod_proxy_connect.so LoadModule proxy_ftp_module libexec/apache2/mod_proxy_ftp.so LoadModule proxy_http_module libexec/apache2/mod_proxy_http.so LoadModule proxy_scgi_module libexec/apache2/mod_proxy_scgi.so LoadModule proxy_ajp_module libexec/apache2/mod_proxy_ajp.so LoadModule proxy_balancer_module libexec/apache2/mod_proxy_balancer.so LoadModule ssl_module libexec/apache2/mod_ssl.so LoadModule mime_module libexec/apache2/mod_mime.so LoadModule dav_module libexec/apache2/mod_dav.so LoadModule status_module libexec/apache2/mod_status.so LoadModule autoindex_module libexec/apache2/mod_autoindex.so LoadModule asis_module libexec/apache2/mod_asis.so LoadModule info_module libexec/apache2/mod_info.so 
LoadModule cgi_module libexec/apache2/mod_cgi.so LoadModule dav_fs_module libexec/apache2/mod_dav_fs.so LoadModule vhost_alias_module libexec/apache2/mod_vhost_alias.so LoadModule negotiation_module libexec/apache2/mod_negotiation.so LoadModule dir_module libexec/apache2/mod_dir.so LoadModule imagemap_module libexec/apache2/mod_imagemap.so LoadModule actions_module libexec/apache2/mod_actions.so LoadModule speling_module libexec/apache2/mod_speling.so LoadModule userdir_module libexec/apache2/mod_userdir.so LoadModule alias_module libexec/apache2/mod_alias.so LoadModule rewrite_module libexec/apache2/mod_rewrite.so LoadModule bonjour_module libexec/apache2/mod_bonjour.so LoadModule php5_module libexec/apache2/libphp5.so <IfModule !mpm_netware_module> <IfModule !mpm_winnt_module> User _www Group _www </IfModule> </IfModule> ServerAdmin [email protected] ServerName localhost:80 DocumentRoot "/Library/WebServer/Documents" <Directory /> Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> <Directory "/Library/WebServer/Documents"> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny Allow from all </Directory> <IfModule dir_module> DirectoryIndex index.html </IfModule> <FilesMatch "^\.([Hh][Tt]|[Dd][Ss]_[Ss])"> Order allow,deny Deny from all Satisfy All </FilesMatch> <Files "rsrc"> Order allow,deny Deny from all Satisfy All </Files> <DirectoryMatch ".*\.\.namedfork"> Order allow,deny Deny from all Satisfy All </DirectoryMatch> ErrorLog "/private/var/log/apache2/error_log" LogLevel warn <IfModule log_config_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common <IfModule logio_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> CustomLog "/private/var/log/apache2/access_log" common </IfModule> <IfModule alias_module> ScriptAliasMatch ^/cgi-bin/((?!(?i:webobjects)).*$) "/Library/WebServer/CGI-Executables/$1" </IfModule> <Directory "/Library/WebServer/CGI-Executables"> AllowOverride None Options None Order allow,deny Allow from all </Directory> DefaultType text/plain <IfModule mime_module> TypesConfig /private/etc/apache2/mime.types AddType application/x-compress .Z AddType application/x-gzip .gz .tgz </IfModule> TraceEnable off Include /private/etc/apache2/extra/httpd-mpm.conf Include /private/etc/apache2/extra/httpd-autoindex.conf Include /private/etc/apache2/extra/httpd-languages.conf Include /private/etc/apache2/extra/httpd-userdir.conf Include /private/etc/apache2/extra/httpd-vhosts.conf Include /private/etc/apache2/extra/httpd-manual.conf <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> Include /private/etc/apache2/other/*.conf ############# httpd-vhosts ################ NameVirtualHost *:80 <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "/Users/matt/Workspace/farmers-arms/website/farmers_arms" ServerName dev.farmers ServerAlias www.dev.farmers ErrorLog "/private/var/log/apache2/localhost.farmers-error_log" CustomLog "/private/var/log/apache2/localhost.farmers-access_log" common <Directory "/Users/matt/Workspace/farmers-arms/website/farmers_arms"> Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> Hosts file 127.0.0.1 localhost 255.255.255.255 broadcasthost ::1 localhost fe80::1%lo0 localhost 127.0.0.1 dev.farmers 127.0.0.1 dev.hft Help!!!
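
    For what it's worth, this is documented Apache 2.2 behaviour rather than an OS X quirk: once NameVirtualHost *:80 is in effect, the main server's DocumentRoot is no longer consulted for requests on that port, and any Host header that matches no ServerName/ServerAlias is served by the first <VirtualHost> in the file. A hedged sketch of one fix is to make localhost an explicit virtual host and list it first, so http://localhost/ (and /pma, if it is configured as a global Alias) keeps pointing at the stock document root:

        # Put this block first in httpd-vhosts.conf so it also acts as the default vhost.
        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot "/Library/WebServer/Documents"
            <Directory "/Library/WebServer/Documents">
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>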

    Read the article

  • Agile Development

    - by James Oloo Onyango
    Alot of literature has and is being written about agile developement and its surrounding philosophies. In my quest to find the best way to express the importance of agile methodologies, i have found Robert C. Martin's "A Satire Of Two Companies" to be both the most concise and thorough! Enjoy the read! Rufus Inc Project Kick Off Your name is Bob. The date is January 3, 2001, and your head still aches from the recent millennial revelry. You are sitting in a conference room with several managers and a group of your peers. You are a project team leader. Your boss is there, and he has brought along all of his team leaders. His boss called the meeting. "We have a new project to develop," says your boss's boss. Call him BB. The points in his hair are so long that they scrape the ceiling. Your boss's points are just starting to grow, but he eagerly awaits the day when he can leave Brylcream stains on the acoustic tiles. BB describes the essence of the new market they have identified and the product they want to develop to exploit this market. "We must have this new project up and working by fourth quarter October 1," BB demands. "Nothing is of higher priority, so we are cancelling your current project." The reaction in the room is stunned silence. Months of work are simply going to be thrown away. Slowly, a murmur of objection begins to circulate around the conference table.   His points give off an evil green glow as BB meets the eyes of everyone in the room. One by one, that insidious stare reduces each attendee to quivering lumps of protoplasm. It is clear that he will brook no discussion on this matter. Once silence has been restored, BB says, "We need to begin immediately. How long will it take you to do the analysis?" You raise your hand. Your boss tries to stop you, but his spitwad misses you and you are unaware of his efforts.   "Sir, we can't tell you how long the analysis will take until we have some requirements." "The requirements document won't be ready for 3 or 4 weeks," BB says, his points vibrating with frustration. "So, pretend that you have the requirements in front of you now. How long will you require for analysis?" No one breathes. Everyone looks around to see whether anyone has some idea. "If analysis goes beyond April 1, we have a problem. Can you finish the analysis by then?" Your boss visibly gathers his courage: "We'll find a way, sir!" His points grow 3 mm, and your headache increases by two Tylenol. "Good." BB smiles. "Now, how long will it take to do the design?" "Sir," you say. Your boss visibly pales. He is clearly worried that his 3 mms are at risk. "Without an analysis, it will not be possible to tell you how long design will take." BB's expression shifts beyond austere.   "PRETEND you have the analysis already!" he says, while fixing you with his vacant, beady little eyes. "How long will it take you to do the design?" Two Tylenol are not going to cut it. Your boss, in a desperate attempt to save his new growth, babbles: "Well, sir, with only six months left to complete the project, design had better take no longer than 3 months."   "I'm glad you agree, Smithers!" BB says, beaming. Your boss relaxes. He knows his points are secure. After a while, he starts lightly humming the Brylcream jingle. BB continues, "So, analysis will be complete by April 1, design will be complete by July 1, and that gives you 3 months to implement the project. This meeting is an example of how well our new consensus and empowerment policies are working. 
Now, get out there and start working. I'll expect to see TQM plans and QIT assignments on my desk by next week. Oh, and don't forget that your crossfunctional team meetings and reports will be needed for next month's quality audit." "Forget the Tylenol," you think to yourself as you return to your cubicle. "I need bourbon."   Visibly excited, your boss comes over to you and says, "Gosh, what a great meeting. I think we're really going to do some world shaking with this project." You nod in agreement, too disgusted to do anything else. "Oh," your boss continues, "I almost forgot." He hands you a 30-page document. "Remember that the SEI is coming to do an evaluation next week. This is the evaluation guide. You need to read through it, memorize it, and then shred it. It tells you how to answer any questions that the SEI auditors ask you. It also tells you what parts of the building you are allowed to take them to and what parts to avoid. We are determined to be a CMM level 3 organization by June!"   You and your peers start working on the analysis of the new project. This is difficult because you have no requirements. But from the 10-minute introduction given by BB on that fateful morning, you have some idea of what the product is supposed to do.   Corporate process demands that you begin by creating a use case document. You and your team begin enumerating use cases and drawing oval and stick diagrams. Philosophical debates break out among the team members. There is disagreement as to whether certain use cases should be connected with <<extends>> or <<includes>> relationships. Competing models are created, but nobody knows how to evaluate them. The debate continues, effectively paralyzing progress.   After a week, somebody finds the iceberg.com Web site, which recommends disposing entirely of <<extends>> and <<includes>> and replacing them with <<precedes>> and <<uses>>. The documents on this Web site, authored by Don Sengroiux, describes a method known as stalwart-analysis, which claims to be a step-by-step method for translating use cases into design diagrams. More competing use case models are created using this new scheme, but again, people can't agree on how to evaluate them. The thrashing continues. More and more, the use case meetings are driven by emotion rather than by reason. If it weren't for the fact that you don't have requirements, you'd be pretty upset by the lack of progress you are making. The requirements document arrives on February 15. And then again on February 20, 25, and every week thereafter. Each new version contradicts the previous one. Clearly, the marketing folks who are writing the requirements, empowered though they might be, are not finding consensus.   At the same time, several new competing use case templates have been proposed by the various team members. Each template presents its own particularly creative way of delaying progress. The debates rage on. On March 1, Prudence Putrigence, the process proctor, succeeds in integrating all the competing use case forms and templates into a single, all-encompassing form. Just the blank form is 15 pages long. She has managed to include every field that appeared on all the competing templates. She also presents a 159- page document describing how to fill out the use case form. All current use cases must be rewritten according to the new standard.   You marvel to yourself that it now requires 15 pages of fill-in-the-blank and essay questions to answer the question: What should the system do when the user presses Return? 
The corporate process (authored by L. E. Ott, famed author of "Holistic Analysis: A Progressive Dialectic for Software Engineers") insists that you discover all primary use cases, 87 percent of all secondary use cases, and 36.274 percent of all tertiary use cases before you can complete analysis and enter the design phase. You have no idea what a tertiary use case is. So in an attempt to meet this requirement, you try to get your use case document reviewed by the marketing department, which you hope will know what a tertiary use case is.   Unfortunately, the marketing folks are too busy with sales support to talk to you. Indeed, since the project started, you have not been able to get a single meeting with marketing, which has provided a never-ending stream of changing and contradictory requirements documents.   While one team has been spinning endlessly on the use case document, another team has been working out the domain model. Endless variations of UML documents are pouring out of this team. Every week, the model is reworked.   The team members can't decide whether to use <<interfaces>> or <<types>> in the model. A huge disagreement has been raging on the proper syntax and application of OCL. Others on the team just got back from a 5-day class on catabolism, and have been producing incredibly detailed and arcane diagrams that nobody else can fathom.   On March 27, with one week to go before analysis is to be complete, you have produced a sea of documents and diagrams but are no closer to a cogent analysis of the problem than you were on January 3. **** And then, a miracle happens.   **** On Saturday, April 1, you check your e-mail from home. You see a memo from your boss to BB. It states unequivocally that you are done with the analysis! You phone your boss and complain. "How could you have told BB that we were done with the analysis?" "Have you looked at a calendar lately?" he responds. "It's April 1!" The irony of that date does not escape you. "But we have so much more to think about. So much more to analyze! We haven't even decided whether to use <<extends>> or <<precedes>>!" "Where is your evidence that you are not done?" inquires your boss, impatiently. "Whaaa . . . ." But he cuts you off. "Analysis can go on forever; it has to be stopped at some point. And since this is the date it was scheduled to stop, it has been stopped. Now, on Monday, I want you to gather up all existing analysis materials and put them into a public folder. Release that folder to Prudence so that she can log it in the CM system by Monday afternoon. Then get busy and start designing."   As you hang up the phone, you begin to consider the benefits of keeping a bottle of bourbon in your bottom desk drawer. They threw a party to celebrate the on-time completion of the analysis phase. BB gave a colon-stirring speech on empowerment. And your boss, another 3 mm taller, congratulated his team on the incredible show of unity and teamwork. Finally, the CIO takes the stage to tell everyone that the SEI audit went very well and to thank everyone for studying and shredding the evaluation guides that were passed out. Level 3 now seems assured and will be awarded by June. (Scuttlebutt has it that managers at the level of BB and above are to receive significant bonuses once the SEI awards level 3.)   As the weeks flow by, you and your team work on the design of the system. Of course, you find that the analysis that the design is supposedly based on is flawedno, useless; no, worse than useless. 
But when you tell your boss that you need to go back and work some more on the analysis to shore up its weaker sections, he simply states, "The analysis phase is over. The only allowable activity is design. Now get back to it."   So, you and your team hack the design as best you can, unsure of whether the requirements have been properly analyzed. Of course, it really doesn't matter much, since the requirements document is still thrashing with weekly revisions, and the marketing department still refuses to meet with you.     The design is a nightmare. Your boss recently misread a book named The Finish Line in which the author, Mark DeThomaso, blithely suggested that design documents should be taken down to code-level detail. "If we are going to be working at that level of detail," you ask, "why don't we simply write the code instead?" "Because then you wouldn't be designing, of course. And the only allowable activity in the design phase is design!" "Besides," he continues, "we have just purchased a companywide license for Dandelion! This tool enables 'Round the Horn Engineering!' You are to transfer all design diagrams into this tool. It will automatically generate our code for us! It will also keep the design diagrams in sync with the code!" Your boss hands you a brightly colored shrinkwrapped box containing the Dandelion distribution. You accept it numbly and shuffle off to your cubicle. Twelve hours, eight crashes, one disk reformatting, and eight shots of 151 later, you finally have the tool installed on your server. You consider the week your team will lose while attending Dandelion training. Then you smile and think, "Any week I'm not here is a good week." Design diagram after design diagram is created by your team. Dandelion makes it very difficult to draw these diagrams. There are dozens and dozens of deeply nested dialog boxes with funny text fields and check boxes that must all be filled in correctly. And then there's the problem of moving classes between packages. At first, these diagram are driven from the use cases. But the requirements are changing so often that the use cases rapidly become meaningless. Debates rage about whether VISITOR or DECORATOR design patterns should be used. One developer refuses to use VISITOR in any form, claiming that it's not a properly object-oriented construct. Someone refuses to use multiple inheritance, since it is the spawn of the devil. Review meetings rapidly degenerate into debates about the meaning of object orientation, the definition of analysis versus design, or when to use aggregation versus association. Midway through the design cycle, the marketing folks announce that they have rethought the focus of the system. Their new requirements document is completely restructured. They have eliminated several major feature areas and replaced them with feature areas that they anticipate customer surveys will show to be more appropriate. You tell your boss that these changes mean that you need to reanalyze and redesign much of the system. But he says, "The analysis phase is system. But he says, "The analysis phase is over. The only allowable activity is design. Now get back to it."   You suggest that it might be better to create a simple prototype to show to the marketing folks and even some potential customers. But your boss says, "The analysis phase is over. The only allowable activity is design. Now get back to it." Hack, hack, hack, hack. You try to create some kind of a design document that might reflect the new requirements documents. 
However, the revolution of the requirements has not caused them to stop thrashing. Indeed, if anything, the wild oscillations of the requirements document have only increased in frequency and amplitude.   You slog your way through them.   On June 15, the Dandelion database gets corrupted. Apparently, the corruption has been progressive. Small errors in the DB accumulated over the months into bigger and bigger errors. Eventually, the CASE tool just stopped working. Of course, the slowly encroaching corruption is present on all the backups. Calls to the Dandelion technical support line go unanswered for several days. Finally, you receive a brief e-mail from Dandelion, informing you that this is a known problem and that the solution is to purchase the new version, which they promise will be ready some time next quarter, and then reenter all the diagrams by hand.   ****   Then, on July 1 another miracle happens! You are done with the design!   Rather than go to your boss and complain, you stock your middle desk drawer with some vodka.   **** They threw a party to celebrate the on-time completion of the design phase and their graduation to CMM level 3. This time, you find BB's speech so stirring that you have to use the restroom before it begins. New banners and plaques are all over your workplace. They show pictures of eagles and mountain climbers, and they talk about teamwork and empowerment. They read better after a few scotches. That reminds you that you need to clear out your file cabinet to make room for the brandy. You and your team begin to code. But you rapidly discover that the design is lacking in some significant areas. Actually, it's lacking any significance at all. You convene a design session in one of the conference rooms to try to work through some of the nastier problems. But your boss catches you at it and disbands the meeting, saying, "The design phase is over. The only allowable activity is coding. Now get back to it."   ****   The code generated by Dandelion is really hideous. It turns out that you and your team were using association and aggregation the wrong way, after all. All the generated code has to be edited to correct these flaws. Editing this code is extremely difficult because it has been instrumented with ugly comment blocks that have special syntax that Dandelion needs in order to keep the diagrams in sync with the code. If you accidentally alter one of these comments, the diagrams will be regenerated incorrectly. It turns out that "Round the Horn Engineering" requires an awful lot of effort. The more you try to keep the code compatible with Dandelion, the more errors Dandelion generates. In the end, you give up and decide to keep the diagrams up to date manually. A second later, you decide that there's no point in keeping the diagrams up to date at all. Besides, who has time?   Your boss hires a consultant to build tools to count the number of lines of code that are being produced. He puts a big thermometer graph on the wall with the number 1,000,000 on the top. Every day, he extends the red line to show how many lines have been added. Three days after the thermometer appears on the wall, your boss stops you in the hall. "That graph isn't growing quickly enough. We need to have a million lines done by October 1." "We aren't even sh-sh-sure that the proshect will require a m-million linezh," you blather. "We have to have a million lines done by October 1," your boss reiterates. 
His points have grown again, and the Grecian formula he uses on them creates an aura of authority and competence. "Are you sure your comment blocks are big enough?" Then, in a flash of managerial insight, he says, "I have it! I want you to institute a new policy among the engineers. No line of code is to be longer than 20 characters. Any such line must be split into two or more preferably more. All existing code needs to be reworked to this standard. That'll get our line count up!"   You decide not to tell him that this will require two unscheduled work months. You decide not to tell him anything at all. You decide that intravenous injections of pure ethanol are the only solution. You make the appropriate arrangements. Hack, hack, hack, and hack. You and your team madly code away. By August 1, your boss, frowning at the thermometer on the wall, institutes a mandatory 50-hour workweek.   Hack, hack, hack, and hack. By September 1st, the thermometer is at 1.2 million lines and your boss asks you to write a report describing why you exceeded the coding budget by 20 percent. He institutes mandatory Saturdays and demands that the project be brought back down to a million lines. You start a campaign of remerging lines. Hack, hack, hack, and hack. Tempers are flaring; people are quitting; QA is raining trouble reports down on you. Customers are demanding installation and user manuals; salespeople are demanding advance demonstrations for special customers; the requirements document is still thrashing, the marketing folks are complaining that the product isn't anything like they specified, and the liquor store won't accept your credit card anymore. Something has to give.    On September 15, BB calls a meeting. As he enters the room, his points are emitting clouds of steam. When he speaks, the bass overtones of his carefully manicured voice cause the pit of your stomach to roll over. "The QA manager has told me that this project has less than 50 percent of the required features implemented. He has also informed me that the system crashes all the time, yields wrong results, and is hideously slow. He has also complained that he cannot keep up with the continuous train of daily releases, each more buggy than the last!" He stops for a few seconds, visibly trying to compose himself. "The QA manager estimates that, at this rate of development, we won't be able to ship the product until December!" Actually, you think it's more like March, but you don't say anything. "December!" BB roars with such derision that people duck their heads as though he were pointing an assault rifle at them. "December is absolutely out of the question. Team leaders, I want new estimates on my desk in the morning. I am hereby mandating 65-hour work weeks until this project is complete. And it better be complete by November 1."   As he leaves the conference room, he is heard to mutter: "Empowermentbah!" * * * Your boss is bald; his points are mounted on BB's wall. The fluorescent lights reflecting off his pate momentarily dazzle you. "Do you have anything to drink?" he asks. Having just finished your last bottle of Boone's Farm, you pull a bottle of Thunderbird from your bookshelf and pour it into his coffee mug. "What's it going to take to get this project done? " he asks. "We need to freeze the requirements, analyze them, design them, and then implement them," you say callously. "By November 1?" your boss exclaims incredulously. "No way! Just get back to coding the damned thing." He storms out, scratching his vacant head.   
A few days later, you find that your boss has been transferred to the corporate research division. Turnover has skyrocketed. Customers, informed at the last minute that their orders cannot be fulfilled on time, have begun to cancel their orders. Marketing is re-evaluating whether this product aligns with the overall goals of the company. Memos fly, heads roll, policies change, and things are, overall, pretty grim. Finally, by March, after far too many sixty-five hour weeks, a very shaky version of the software is ready. In the field, bug-discovery rates are high, and the technical support staff are at their wits' end, trying to cope with the complaints and demands of the irate customers. Nobody is happy.   In April, BB decides to buy his way out of the problem by licensing a product produced by Rupert Industries and redistributing it. The customers are mollified, the marketing folks are smug, and you are laid off.     Rupert Industries: Project Alpha   Your name is Robert. The date is January 3, 2001. The quiet hours spent with your family this holiday have left you refreshed and ready for work. You are sitting in a conference room with your team of professionals. The manager of the division called the meeting. "We have some ideas for a new project," says the division manager. Call him Russ. He is a high-strung British chap with more energy than a fusion reactor. He is ambitious and driven but understands the value of a team. Russ describes the essence of the new market opportunity the company has identified and introduces you to Jane, the marketing manager, who is responsible for defining the products that will address it. Addressing you, Jane says, "We'd like to start defining our first product offering as soon as possible. When can you and your team meet with me?" You reply, "We'll be done with the current iteration of our project this Friday. We can spare a few hours for you between now and then. After that, we'll take a few people from the team and dedicate them to you. We'll begin hiring their replacements and the new people for your team immediately." "Great," says Russ, "but I want you to understand that it is critical that we have something to exhibit at the trade show coming up this July. If we can't be there with something significant, we'll lose the opportunity."   "I understand," you reply. "I don't yet know what it is that you have in mind, but I'm sure we can have something by July. I just can't tell you what that something will be right now. In any case, you and Jane are going to have complete control over what we developers do, so you can rest assured that by July, you'll have the most important things that can be accomplished in that time ready to exhibit."   Russ nods in satisfaction. He knows how this works. Your team has always kept him advised and allowed him to steer their development. He has the utmost confidence that your team will work on the most important things first and will produce a high-quality product.   * * *   "So, Robert," says Jane at their first meeting, "How does your team feel about being split up?" "We'll miss working with each other," you answer, "but some of us were getting pretty tired of that last project and are looking forward to a change. So, what are you people cooking up?" Jane beams. "You know how much trouble our customers currently have . . ." And she spends a half hour or so describing the problem and possible solution. "OK, wait a second" you respond. "I need to be clear about this." 
And so you and Jane talk about how this system might work. Some of her ideas aren't fully formed. You suggest possible solutions. She likes some of them. You continue discussing.   During the discussion, as each new topic is addressed, Jane writes user story cards. Each card represents something that the new system has to do. The cards accumulate on the table and are spread out in front of you. Both you and Jane point at them, pick them up, and make notes on them as you discuss the stories. The cards are powerful mnemonic devices that you can use to represent complex ideas that are barely formed.   At the end of the meeting, you say, "OK, I've got a general idea of what you want. I'm going to talk to the team about it. I imagine they'll want to run some experiments with various database structures and presentation formats. Next time we meet, it'll be as a group, and we'll start identifying the most important features of the system."   A week later, your nascent team meets with Jane. They spread the existing user story cards out on the table and begin to get into some of the details of the system. The meeting is very dynamic. Jane presents the stories in the order of their importance. There is much discussion about each one. The developers are concerned about keeping the stories small enough to estimate and test. So they continually ask Jane to split one story into several smaller stories. Jane is concerned that each story have a clear business value and priority, so as she splits them, she makes sure that this stays true.   The stories accumulate on the table. Jane writes them, but the developers make notes on them as needed. Nobody tries to capture everything that is said; the cards are not meant to capture everything but are simply reminders of the conversation.   As the developers become more comfortable with the stories, they begin writing estimates on them. These estimates are crude and budgetary, but they give Jane an idea of what the story will cost.   At the end of the meeting, it is clear that many more stories could be discussed. It is also clear that the most important stories have been addressed and that they represent several months worth of work. Jane closes the meeting by taking the cards with her and promising to have a proposal for the first release in the morning.   * * *   The next morning, you reconvene the meeting. Jane chooses five cards and places them on the table. "According to your estimates, these cards represent about one perfect team-week's worth of work. The last iteration of the previous project managed to get one perfect team-week done in 3 real weeks. If we can get these five stories done in 3 weeks, we'll be able to demonstrate them to Russ. That will make him feel very comfortable about our progress." Jane is pushing it. The sheepish look on her face lets you know that she knows it too. You reply, "Jane, this is a new team, working on a new project. It's a bit presumptuous to expect that our velocity will be the same as the previous team's. However, I met with the team yesterday afternoon, and we all agreed that our initial velocity should, in fact, be set to one perfectweek for every 3 real-weeks. So you've lucked out on this one." "Just remember," you continue, "that the story estimates and the story velocity are very tentative at this point. We'll learn more when we plan the iteration and even more when we implement it."   Jane looks over her glasses at you as if to say "Who's the boss around here, anyway?" and then smiles and says, "Yeah, don't worry. 
I know the drill by now."Jane then puts 15 more cards on the table. She says, "If we can get all these cards done by the end of March, we can turn the system over to our beta test customers. And we'll get good feedback from them."   You reply, "OK, so we've got our first iteration defined, and we have the stories for the next three iterations after that. These four iterations will make our first release."   "So," says Jane, can you really do these five stories in the next 3 weeks?" "I don't know for sure, Jane," you reply. "Let's break them down into tasks and see what we get."   So Jane, you, and your team spend the next several hours taking each of the five stories that Jane chose for the first iteration and breaking them down into small tasks. The developers quickly realize that some of the tasks can be shared between stories and that other tasks have commonalities that can probably be taken advantage of. It is clear that potential designs are popping into the developers' heads. From time to time, they form little discussion knots and scribble UML diagrams on some cards.   Soon, the whiteboard is filled with the tasks that, once completed, will implement the five stories for this iteration. You start the sign-up process by saying, "OK, let's sign up for these tasks." "I'll take the initial database generation." Says Pete. "That's what I did on the last project, and this doesn't look very different. I estimate it at two of my perfect workdays." "OK, well, then, I'll take the login screen," says Joe. "Aw, darn," says Elaine, the junior member of the team, "I've never done a GUI, and kinda wanted to try that one."   "Ah, the impatience of youth," Joe says sagely, with a wink in your direction. "You can assist me with it, young Jedi." To Jane: "I think it'll take me about three of my perfect workdays."   One by one, the developers sign up for tasks and estimate them in terms of their own perfect workdays. Both you and Jane know that it is best to let the developers volunteer for tasks than to assign the tasks to them. You also know full well that you daren't challenge any of the developers' estimates. You know these people, and you trust them. You know that they are going to do the very best they can.   The developers know that they can't sign up for more perfect workdays than they finished in the last iteration they worked on. Once each developer has filled his or her schedule for the iteration, they stop signing up for tasks.   Eventually, all the developers have stopped signing up for tasks. But, of course, tasks are still left on the board.   "I was worried that that might happen," you say, "OK, there's only one thing to do, Jane. We've got too much to do in this iteration. What stories or tasks can we remove?" Jane sighs. She knows that this is the only option. Working overtime at the beginning of a project is insane, and projects where she's tried it have not fared well.   So Jane starts to remove the least-important functionality. "Well, we really don't need the login screen just yet. We can simply start the system in the logged-in state." "Rats!" cries Elaine. "I really wanted to do that." "Patience, grasshopper." says Joe. "Those who wait for the bees to leave the hive will not have lips too swollen to relish the honey." Elaine looks confused. Everyone looks confused. "So . . .," Jane continues, "I think we can also do away with . . ." And so, bit by bit, the list of tasks shrinks. Developers who lose a task sign up for one of the remaining ones.   The negotiation is not painless. 
Several times, Jane exhibits obvious frustration and impatience. Once, when tensions are especially high, Elaine volunteers, "I'll work extra hard to make up some of the missing time." You are about to correct her when, fortunately, Joe looks her in the eye and says, "When once you proceed down the dark path, forever will it dominate your destiny."   In the end, an iteration acceptable to Jane is reached. It's not what Jane wanted. Indeed, it is significantly less. But it's something the team feels that can be achieved in the next 3 weeks.   And, after all, it still addresses the most important things that Jane wanted in the iteration. "So, Jane," you say when things had quieted down a bit, "when can we expect acceptance tests from you?" Jane sighs. This is the other side of the coin. For every story the development team implements,   Jane must supply a suite of acceptance tests that prove that it works. And the team needs these long before the end of the iteration, since they will certainly point out differences in the way Jane and the developers imagine the system's behaviour.   "I'll get you some example test scripts today," Jane promises. "I'll add to them every day after that. You'll have the entire suite by the middle of the iteration."   * * *   The iteration begins on Monday morning with a flurry of Class, Responsibilities, Collaborators sessions. By midmorning, all the developers have assembled into pairs and are rapidly coding away. "And now, my young apprentice," Joe says to Elaine, "you shall learn the mysteries of test-first design!"   "Wow, that sounds pretty rad," Elaine replies. "How do you do it?" Joe beams. It's clear that he has been anticipating this moment. "OK, what does the code do right now?" "Huh?" replied Elaine, "It doesn't do anything at all; there is no code."   "So, consider our task; can you think of something the code should do?" "Sure," Elaine said with youthful assurance, "First, it should connect to the database." "And thereupon, what must needs be required to connecteth the database?" "You sure talk weird," laughed Elaine. "I think we'd have to get the database object from some registry and call the Connect() method. "Ah, astute young wizard. Thou perceives correctly that we requireth an object within which we can cacheth the database object." "Is 'cacheth' really a word?" "It is when I say it! So, what test can we write that we know the database registry should pass?" Elaine sighs. She knows she'll just have to play along. "We should be able to create a database object and pass it to the registry in a Store() method. And then we should be able to pull it out of the registry with a Get() method and make sure it's the same object." "Oh, well said, my prepubescent sprite!" "Hay!" "So, now, let's write a test function that proves your case." "But shouldn't we write the database object and registry object first?" "Ah, you've much to learn, my young impatient one. Just write the test first." "But it won't even compile!" "Are you sure? What if it did?" "Uh . . ." "Just write the test, Elaine. Trust me." And so Joe, Elaine, and all the other developers began to code their tasks, one test case at a time. The room in which they worked was abuzz with the conversations between the pairs. The murmur was punctuated by an occasional high five when a pair managed to finish a task or a difficult test case.   As development proceeded, the developers changed partners once or twice a day. 
Each developer got to see what all the others were doing, and so knowledge of the code spread generally throughout the team.   Whenever a pair finished something significant whether a whole task or simply an important part of a task they integrated what they had with the rest of the system. Thus, the code base grew daily, and integration difficulties were minimized.   The developers communicated with Jane on a daily basis. They'd go to her whenever they had a question about the functionality of the system or the interpretation of an acceptance test case.   Jane, good as her word, supplied the team with a steady stream of acceptance test scripts. The team read these carefully and thereby gained a much better understanding of what Jane expected the system to do. By the beginning of the second week, there was enough functionality to demonstrate to Jane. She watched eagerly as the demonstration passed test case after test case. "This is really cool," Jane said as the demonstration finally ended. "But this doesn't seem like one-third of the tasks. Is your velocity slower than anticipated?"   You grimace. You'd been waiting for a good time to mention this to Jane but now she was forcing the issue. "Yes, unfortunately, we are going more slowly than we had expected. The new application server we are using is turning out to be a pain to configure. Also, it takes forever to reboot, and we have to reboot it whenever we make even the slightest change to its configuration."   Jane eyes you with suspicion. The stress of last Monday's negotiations had still not entirely dissipated. She says, "And what does this mean to our schedule? We can't slip it again, we just can't. Russ will have a fit! He'll haul us all into the woodshed and ream us some new ones."   You look Jane right in the eyes. There's no pleasant way to give someone news like this. So you just blurt out, "Look, if things keep going like they're going, we're not going to be done with everything by next Friday. Now it's possible that we'll figure out a way to go faster. But, frankly, I wouldn't depend on that. You should start thinking about one or two tasks that could be removed from the iteration without ruining the demonstration for Russ. Come hell or high water, we are going to give that demonstration on Friday, and I don't think you want us to choose which tasks to omit."   "Aw forchrisakes!" Jane barely manages to stifle yelling that last word as she stalks away, shaking her head. Not for the first time, you say to yourself, "Nobody ever promised me project management would be easy." You are pretty sure it won't be the last time, either.   Actually, things went a bit better than you had hoped. The team did, in fact, have to drop one task from the iteration, but Jane had chosen wisely, and the demonstration for Russ went without a hitch. Russ was not impressed with the progress, but neither was he dismayed. He simply said, "This is pretty good. But remember, we have to be able to demonstrate this system at the trade show in July, and at this rate, it doesn't look like you'll have all that much to show." Jane, whose attitude had improved dramatically with the completion of the iteration, responded to Russ by saying, "Russ, this team is working hard, and well. When July comes around, I am confident that we'll have something significant to demonstrate. It won't be everything, and some of it may be smoke and mirrors, but we'll have something."   Painful though the last iteration was, it had calibrated your velocity numbers. 
The next iteration went much better. Not because your team got more done than in the last iteration but simply because the team didn't have to remove any tasks or stories in the middle of the iteration.   By the start of the fourth iteration, a natural rhythm has been established. Jane, you, and the team know exactly what to expect from one another. The team is running hard, but the pace is sustainable. You are confident that the team can keep up this pace for a year or more.   The number of surprises in the schedule diminishes to near zero; however, the number of surprises in the requirements does not. Jane and Russ frequently look over the growing system and make recommendations or changes to the existing functionality. But all parties realize that these changes take time and must be scheduled. So the changes do not cause anyone's expectations to be violated. In March, there is a major demonstration of the system to the board of directors. The system is very limited and is not yet in a form good enough to take to the trade show, but progress is steady, and the board is reasonably impressed.   The second release goes even more smoothly than the first. By now, the team has figured out a way to automate Jane's acceptance test scripts. The team has also refactored the design of the system to the point that it is really easy to add new features and change old ones. The second release was done by the end of June and was taken to the trade show. It had less in it than Jane and Russ would have liked, but it did demonstrate the most important features of the system. Although customers at the trade show noticed that certain features were missing, they were very impressed overall. You, Russ, and Jane all returned from the trade show with smiles on your faces. You all felt as though this project was a winner.   Indeed, many months later, you are contacted by Rufus Inc. That company had been working on a system like this for its internal operations. Rufus has canceled the development of that system after a death-march project and is negotiating to license your technology for its environment.   Indeed, things are looking up!

    Read the article

  • Using Completed User Stories to Estimate Future User Stories

    - by David Kaczynski
    In Scrum/Agile, the complexity of a user story can be estimated in story points. After completing some user stories, a programmer or team of programmers can use those experiences to better estimate how much time it might take to complete a future user story. Is there a methodology for breaking down the complexity of user stories into quantifiable attributes? For example, User Story X requires a rich, new view in the GUI, but User Story X can perform most of its functionality using existing business logic on the server. On a scale of 1 to 10, User Story X has a complexity of 7 on the client and a complexity of 2 on the server. After User Story X is completed, someone asks how long it would take to complete User Story Y, which has a complexity of 3 on the client and 6 on the server. Looking at how long it took to complete User Story X, we can make an educated estimate of how long it might take to complete User Story Y. I can imagine some other details: The complexity of one attribute (such as the complexity of the client) could have sub-attributes, such as the number of steps in a sequence, function points, etc. Several other attributes could be considered as well, such as the programmer's familiarity with the system or the number of components/interfaces involved. These attributes could be accumulated into some sort of user story checklist. To reiterate: is there an existing methodology for decomposing the complexity of a user story into the complexity of attributes/sub-attributes, or is using completed user stories as indicators when estimating future user stories more of an informal process?
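    As a rough illustration of the informal approach the question describes, here is a minimal Python sketch (not part of the original question; the story names, attribute scores, and hours below are hypothetical) that derives an "hours per complexity point" rate for each attribute from completed stories and applies those rates to a new story such as User Story Y:

      import numpy as np

      # Completed stories: (client complexity, server complexity) -> actual hours.
      # All values are made up purely for illustration.
      completed = {
          "Story X": ((7, 2), 38.0),
          "Story W": ((4, 5), 31.0),
          "Story V": ((2, 8), 34.0),
      }

      A = np.array([attrs for attrs, _ in completed.values()], dtype=float)
      hours = np.array([h for _, h in completed.values()])

      # Least-squares fit: hours ~ client_score * client_rate + server_score * server_rate
      rates, *_ = np.linalg.lstsq(A, hours, rcond=None)
      client_rate, server_rate = rates

      # Estimate User Story Y: complexity 3 on the client, 6 on the server.
      story_y = np.array([3.0, 6.0])
      estimate = story_y @ rates
      print(f"client ~{client_rate:.1f} h/point, server ~{server_rate:.1f} h/point")
      print(f"User Story Y estimate: ~{estimate:.1f} hours")

    With only one or two completed stories the fit is underdetermined, so in practice teams blend this kind of arithmetic with judgment, which is part of why the process tends to stay informal.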

    Read the article

  • XAMPP vs WAMP security and other on Windows XP

    - by typoknig
    Not long ago I found WAMP and thought it was a godsend because it had all the things I wanted/needed (Apache, PHP, MySQL, and phpMyAdmin) built into one installer. One thing about WAMP that has been making me mad is an error I get in phpMyAdmin about the advanced features not working. I have tried to fix that error for long enough. http://stackoverflow.com/questions/2688385/problem-with-phpmyadmin-advanced-features I now read that most people prefer XAMPP over WAMP, but I am a bit concerned that XAMPP might have some extra security holes with Mercury and Perl, two things that I don't really need or want right now. Are my security concerns justified or not? Are there any other reasons to go with XAMPP over WAMP or vice versa?

    Read the article

  • The Social Business Thought Leaders - John Hagel

    - by kellsey.ruppel
    While many European economies are on the brink of recession amid increasing taxation, mounting job losses, and rising bankruptcy filings, there's an understandable risk of losing sight of the deeper forces at play. Yet instead of surrendering to uncertainty and trying to survive in the short term, many organizations are feeling the urge to be better prepared to thrive in these complex times by developing a more articulated long-term understanding of both the opportunities and the challenges ahead. For example: What long-term economic, technological, and societal changes are rolling out? Which foundational dynamics will affect our companies' performance, productivity, competition, and innovative potential in the upcoming decades? How will digital infrastructure change our business landscape? What kind of capabilities will be key to competing in a market shaped by growing turbulence, unpredictability, and volatility? Breaking out from strictly cyclical thinking, studies such as the Shift Index by John Hagel, Co-Chairman of the Center for the Edge at Deloitte & Touche (see Measuring the forces of long-term change - The 2009 Shift Index), depict a worrying performance challenge that has affected every industry in the entire US economy over the last 45 years. Even as the competitive intensity of the market has more than doubled and labor productivity has improved, the actual performance of US firms has consistently fallen to 25% of what it was in 1965. Most of this reported value is shifting from institutions and organizations to individuals, whether they are customers or young creative talent. To thrive in the digital economy and reverse declining performance trends, companies will have to fundamentally rethink their management approach by moving from knowledge stocks to knowledge flows, from scalable efficiency to scalable learning, and from push organizations to pull organizations. Based on the outcomes of the Shift Index and on the book The Power of Pull, the first episode of the Social Business Thought-Leaders features John Hagel, who provides strategic insights on how companies will succeed in the 21st century.

    Read the article

  • Login loop in Snow Leopard

    - by hgpc
    I can't get out of a login loop of a particular admin user. After entering the password the login screen is shown again after about a minute. Other users work fine. It started happening after a simple reboot. Can you please help me? Thank you! Tried to no avail: Change the password Remove the password Repair disk (no errors) Boot in safe mode Reinstall Snow Leopard and updating to 10.6.6 Remove content of ~/Library/Caches Removed content of ~/Library/Preferences Replaced /etc/authorization with Install DVD copy The system.log mentions a crash report. I'm including both below. system.log Jan 8 02:43:30 loginwindow218: Login Window - Returned from Security Agent Jan 8 02:43:30 loginwindow218: USER_PROCESS: 218 console Jan 8 02:44:42 kernel[0]: Jan 8 02:44:43: --- last message repeated 1 time --- Jan 8 02:44:43 com.apple.launchd[1] (com.apple.loginwindow218): Job appears to have crashed: Bus error Jan 8 02:44:43 com.apple.UserEventAgent-LoginWindow223: ALF error: cannot find useragent 1102 Jan 8 02:44:43 com.apple.UserEventAgent-LoginWindow223: plugin.UserEventAgentFactory: called with typeID=FC86416D-6164-2070-726F-70735C216EC0 Jan 8 02:44:43 /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow233: Login Window Application Started Jan 8 02:44:43 SecurityAgent228: CGSShutdownServerConnections: Detaching application from window server Jan 8 02:44:43 com.apple.ReportCrash.Root232: 2011-01-08 02:44:43.936 ReportCrash232:2903 Saved crash report for loginwindow218 version ??? (???) to /Library/Logs/DiagnosticReports/loginwindow_2011-01-08-024443_localhost.crash Jan 8 02:44:44 SecurityAgent228: MIG: server died: CGSReleaseShmem : Cannot release shared memory Jan 8 02:44:44 SecurityAgent228: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged. Jan 8 02:44:44 SecurityAgent228: CGSDisplayServerShutdown: Detaching display subsystem from window server Jan 8 02:44:44 SecurityAgent228: HIToolbox: received notification of WindowServer event port death. Jan 8 02:44:44 SecurityAgent228: port matched the WindowServer port created in BindCGSToRunLoop Jan 8 02:44:44 loginwindow233: Login Window Started Security Agent Jan 8 02:44:44 WindowServer234: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged. Jan 8 02:44:44 com.apple.WindowServer234: Sat Jan 8 02:44:44 .local WindowServer234 <Error>: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged. Jan 8 02:44:54 SecurityAgent243: NSSecureTextFieldCell detected a field editor ((null)) that is not a NSTextView subclass designed to work with the cell. Ignoring... Crash report Process: loginwindow 218 Path: /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow Identifier: loginwindow Version: ??? (???) 
Code Type: X86-64 (Native) Parent Process: launchd [1] Date/Time: 2011-01-08 02:44:42.748 +0100 OS Version: Mac OS X 10.6.6 (10J567) Report Version: 6 Exception Type: EXC_BAD_ACCESS (SIGBUS) Exception Codes: 0x000000000000000a, 0x000000010075b000 Crashed Thread: 0 Dispatch queue: com.apple.main-thread Thread 0 Crashed: Dispatch queue: com.apple.main-thread 0 com.apple.security 0x00007fff801c6e8b Security::ReadSection::at(unsigned int) const + 25 1 com.apple.security 0x00007fff801c632f Security::DbVersion::open() + 123 2 com.apple.security 0x00007fff801c5e41 Security::DbVersion::DbVersion(Security::AppleDatabase const&, Security::RefPointer<Security::AtomicBufferedFile> const&) + 179 3 com.apple.security 0x00007fff801c594e Security::DbModifier::getDbVersion(bool) + 330 4 com.apple.security 0x00007fff801c57f5 Security::DbModifier::openDatabase() + 33 5 com.apple.security 0x00007fff801c5439 Security::Database::_dbOpen(Security::DatabaseSession&, unsigned int, Security::AccessCredentials const*, void const*) + 221 6 com.apple.security 0x00007fff801c4841 Security::DatabaseManager::dbOpen(Security::DatabaseSession&, Security::DbName const&, unsigned int, Security::AccessCredentials const*, void const*) + 77 7 com.apple.security 0x00007fff801c4723 Security::DatabaseSession::DbOpen(char const*, cssm_net_address const*, unsigned int, Security::AccessCredentials const*, void const*, long&) + 285 8 com.apple.security 0x00007fff801d8414 cssm_DbOpen(long, char const*, cssm_net_address const*, unsigned int, cssm_access_credentials const*, void const*, long*) + 108 9 com.apple.security 0x00007fff801d7fba CSSM_DL_DbOpen + 106 10 com.apple.security 0x00007fff801d62f6 Security::CssmClient::DbImpl::open() + 162 11 com.apple.security 0x00007fff801d8977 SSDatabaseImpl::open(Security::DLDbIdentifier const&) + 53 12 com.apple.security 0x00007fff801d8715 SSDLSession::DbOpen(char const*, cssm_net_address const*, unsigned int, Security::AccessCredentials const*, void const*, long&) + 263 13 com.apple.security 0x00007fff801d8414 cssm_DbOpen(long, char const*, cssm_net_address const*, unsigned int, cssm_access_credentials const*, void const*, long*) + 108 14 com.apple.security 0x00007fff801d7fba CSSM_DL_DbOpen + 106 15 com.apple.security 0x00007fff801d62f6 Security::CssmClient::DbImpl::open() + 162 16 com.apple.security 0x00007fff802fa786 Security::CssmClient::DbImpl::unlock(cssm_data const&) + 28 17 com.apple.security 0x00007fff80275b5d Security::KeychainCore::KeychainImpl::unlock(Security::CssmData const&) + 89 18 com.apple.security 0x00007fff80291a06 Security::KeychainCore::StorageManager::login(unsigned int, void const*, unsigned int, void const*) + 3336 19 com.apple.security 0x00007fff802854d3 SecKeychainLogin + 91 20 com.apple.loginwindow 0x000000010000dfc5 0x100000000 + 57285 21 com.apple.loginwindow 0x000000010000cfb4 0x100000000 + 53172 22 com.apple.Foundation 0x00007fff8721e44f __NSThreadPerformPerform + 219 23 com.apple.CoreFoundation 0x00007fff82627401 __CFRunLoopDoSources0 + 1361 24 com.apple.CoreFoundation 0x00007fff826255f9 __CFRunLoopRun + 873 25 com.apple.CoreFoundation 0x00007fff82624dbf CFRunLoopRunSpecific + 575 26 com.apple.HIToolbox 0x00007fff8444493a RunCurrentEventLoopInMode + 333 27 com.apple.HIToolbox 0x00007fff8444473f ReceiveNextEventCommon + 310 28 com.apple.HIToolbox 0x00007fff844445f8 BlockUntilNextEventMatchingListInMode + 59 29 com.apple.AppKit 0x00007fff80b01e64 _DPSNextEvent + 718 30 com.apple.AppKit 0x00007fff80b017a9 -NSApplication nextEventMatchingMask:untilDate:inMode:dequeue: + 
155 31 com.apple.AppKit 0x00007fff80ac748b -NSApplication run + 395 32 com.apple.loginwindow 0x0000000100004b16 0x100000000 + 19222 33 com.apple.loginwindow 0x0000000100004580 0x100000000 + 17792 Thread 1: Dispatch queue: com.apple.libdispatch-manager 0 libSystem.B.dylib 0x00007fff8755216a kevent + 10 1 libSystem.B.dylib 0x00007fff8755403d _dispatch_mgr_invoke + 154 2 libSystem.B.dylib 0x00007fff87553d14 _dispatch_queue_invoke + 185 3 libSystem.B.dylib 0x00007fff8755383e _dispatch_worker_thread2 + 252 4 libSystem.B.dylib 0x00007fff87553168 _pthread_wqthread + 353 5 libSystem.B.dylib 0x00007fff87553005 start_wqthread + 13 Thread 0 crashed with X86 Thread State (64-bit): rax: 0x000000010075b000 rbx: 0x00007fff5fbfd990 rcx: 0x00007fff875439da rdx: 0x0000000000000000 rdi: 0x00007fff5fbfd990 rsi: 0x0000000000000000 rbp: 0x00007fff5fbfd5d0 rsp: 0x00007fff5fbfd5d0 r8: 0x0000000000000007 r9: 0x0000000000000000 r10: 0x00007fff8753beda r11: 0x0000000000000202 r12: 0x0000000100133e78 r13: 0x00007fff5fbfda50 r14: 0x00007fff5fbfda50 r15: 0x00007fff5fbfdaa0 rip: 0x00007fff801c6e8b rfl: 0x0000000000010287 cr2: 0x000000010075b000

    Read the article
