Search Results

Search found 48797 results on 1952 pages for 'read write'.


  • How to get values of attributes in an XML file using C++?

    - by Reversed
    I need to write some C++ code that reads an XML string so that a call like valueOfElement("ACTION_ON_CARD") returns 3, and valueOfElement("ACTION_ON_ENVELOPE") returns YES. XML string: <ACTION_ON_CARD>3</ACTION_ON_CARD> <ACTION_ON_ENVELOPE>YES</ACTION_ON_ENVELOPE> Any code example would be helpful. Thanks
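
    A minimal sketch of such a helper, using only the standard library and plain string searching (this assumes a flat document with no attributes, namespaces, or nesting; for anything beyond that, an XML library such as TinyXML-2 or libxml2 would be more robust):

        #include <iostream>
        #include <string>

        // Returns the text between <tag> and </tag>, or "" if the tag is absent.
        // Not a real XML parser; adequate only for simple, flat documents.
        std::string valueOfElement(const std::string& xml, const std::string& tag) {
            const std::string open  = "<"  + tag + ">";
            const std::string close = "</" + tag + ">";
            std::size_t start = xml.find(open);
            if (start == std::string::npos) return "";
            start += open.size();
            const std::size_t end = xml.find(close, start);
            if (end == std::string::npos) return "";
            return xml.substr(start, end - start);
        }

        int main() {
            const std::string xml = "<ACTION_ON_CARD>3</ACTION_ON_CARD>"
                                    "<ACTION_ON_ENVELOPE>YES</ACTION_ON_ENVELOPE>";
            std::cout << valueOfElement(xml, "ACTION_ON_CARD") << "\n";     // prints 3
            std::cout << valueOfElement(xml, "ACTION_ON_ENVELOPE") << "\n"; // prints YES
        }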

    Read the article

  • C compiler producing lightweight executables

    - by samuel
    I'm currently using MSVC for C++, but as I'm switching to C to write a very performance-intensive program (an interpreter), I have to find a fitting C compiler. I've looked at some binaries produced by Turbo C and, even though it's old, they seem pretty straightforward and optimized. I don't know what the best compiler for building an interpreter is, but maybe you can help me. I've considered GCC, but as I don't know much about it, I can't really be sure.

    Read the article

  • error LNK2005: xxx already defined in MSVCRT.lib(MSVCR100.dll) C:\something\LIBCMT.lib(setlocal.obj)

    - by volpack
    Hello, I'm using the DCMTK library for reading DICOM files (an image format used in medical image processing), and I'm having a problem compiling the DCMTK source code. DCMTK uses some additional external libraries (zlib, tiff, libpng, libxml2, libiconv). I know that all libraries should be generated with the same code-generation options, so I downloaded compiled versions of these support libraries built with the "Multithreaded DLL" runtime option (/MD), and in each project of the DCMTK source code I ensured that the runtime option is "Multithreaded DLL" (/MD). But I still get these errors (the same block of LNK2005 errors, each ending in an LNK1169, repeats for the dcmp2pgm, dcmprscp, dcmprscu, dcmpsprt, dsr2html and dsr2xml targets; a representative sample from dcmp2pgm follows):
    Error 238 error LNK2005: ___iob_func already defined in MSVCRT.lib(MSVCR100.dll) C:\dcmtk-3.5.4-src\CMakeBinaries\dcmpstat\apps\LIBCMT.lib(_file.obj) dcmp2pgm
    Error 241 error LNK2005: __initterm_e already defined in MSVCRT.lib(MSVCR100.dll) C:\dcmtk-3.5.4-src\CMakeBinaries\dcmpstat\apps\LIBCMT.lib(crt0dat.obj) dcmp2pgm
    Error 249 error LNK2005: "void __cdecl terminate(void)" (?terminate@@YAXXZ) already defined in MSVCRT.lib(MSVCR100.dll) C:\dcmtk-3.5.4-src\CMakeBinaries\dcmpstat\apps\LIBCMT.lib(hooks.obj) dcmp2pgm
    Error 250 error LNK2005: ___xi_a already defined in MSVCRT.lib(cinitexe.obj) C:\dcmtk-3.5.4-src\CMakeBinaries\dcmpstat\apps\LIBCMT.lib(crt0init.obj) dcmp2pgm
    Error 257 error LNK2005: _mainCRTStartup already defined in MSVCRT.lib(crtexe.obj) C:\dcmtk-3.5.4-src\CMakeBinaries\dcmpstat\apps\LIBCMT.lib(crt0.obj) dcmp2pgm
    Error 263 error LNK2005: __close already defined in LIBCMT.lib(close.obj) C:\dcmtk-3.5.4-src\CMakeBinaries\dcmpstat\apps\MSVCRT.lib(MSVCR100.dll) dcmp2pgm
    Error 271 error LNK2005: __read already defined in LIBCMT.lib(read.obj) C:\dcmtk-3.5.4-src\CMakeBinaries\dcmpstat\apps\MSVCRT.lib(MSVCR100.dll) dcmp2pgm
    Error 272 error LNK2005: __write already defined in LIBCMT.lib(write.obj) C:\dcmtk-3.5.4-src\CMakeBinaries\dcmpstat\apps\MSVCRT.lib(MSVCR100.dll) dcmp2pgm
    [further LNK2005 lines for _exit, __cexit, _fflush, __errno, _getenv, _calloc, _atoi, __lseek, __open, __get_osfhandle and similar CRT symbols omitted]
    Error 278 error LNK1169: one or more multiply defined symbols found C:\dcmtk-3.5.4-src\CMakeBinaries\dcmpstat\apps\Release\dcmp2pgm.exe 1 1 dcmp2pgm
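
    As background (not part of the original question): LNK2005/LNK1169 conflicts between MSVCRT.lib and LIBCMT.lib generally mean that at least one object file or library in the link was built against the static CRT (/MT) while the rest use the DLL CRT (/MD). If the offending dependency cannot be rebuilt with /MD, a common workaround (a suppression rather than a clean fix) is to tell the linker to ignore the static CRT:

        /NODEFAULTLIB:LIBCMT.lib

    In Visual Studio 2010 this corresponds to Configuration Properties > Linker > Input > Ignore Specific Default Libraries. The clean fix remains building every library in the link with the same runtime option.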

    Read the article

  • How to rewrite a URL with %23 in it?

    - by Jan P.
    I have a (WordPress) blog where, after commenting, users are redirected back to the page with an anchor to their comment. It should look like this: http://example.org/foo-bar/#comment-570630 But somehow I get a lot of 404s in my logfiles for URLs like this: http://example.org/foo-bar/%23comment-570630 Is there a way to write a .htaccess rewrite rule to fix this? Bonus question: Any idea why this happens and what I can do about it?
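
    A hedged, untested sketch of such a rule: mod_rewrite matches against the already-decoded path, so the encoded %23 should appear as a literal "#" by the time the pattern runs, and the NE (noescape) flag keeps Apache from re-encoding the "#" in the redirect target:

        RewriteEngine On
        # Redirect /foo-bar/%23comment-570630 to /foo-bar/#comment-570630
        RewriteRule ^(.*)\#(.*)$ /$1#$2 [NE,R=301,L]

    As for the bonus question: most likely some client, bot, or badly generated link is percent-encoding the "#", so the fragment travels to the server as part of the path instead of staying in the browser.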

    Read the article

  • SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008

    - by pinaldave
    Note: Please read the complete post before taking any action. This blog post discusses SHRINKFILE and TRUNCATE Log File. The email received from a reader contains the following questionable code: “Hi Pinal, If you remember, my manager and I met you at TechEd in Bangalore. We just upgraded to SQL Server 2008. One of our jobs failed because it was using the following code. The error was: Msg 155, Level 15, State 1, Line 1 ‘TRUNCATE_ONLY’ is not a recognized BACKUP option. The code was: DBCC SHRINKFILE(TestDBLog, 1) BACKUP LOG TestDB WITH TRUNCATE_ONLY DBCC SHRINKFILE(TestDBLog, 1) GO I have modified that code to the following, and it works fine. But do you have any other suggestions at the moment? USE [master] GO ALTER DATABASE [TestDb] SET RECOVERY SIMPLE WITH NO_WAIT DBCC SHRINKFILE(TestDbLog, 1) ALTER DATABASE [TestDb] SET RECOVERY FULL WITH NO_WAIT GO Configuration of our server and system is as follows: [Removed not relevant data]“ An email like this popping up early in the morning is an alarming email. I was dead busy, so I had only one minute to reply, and I quickly wrote down the following note. (As I said, it was a single-minute email, so it is not completely accurate.) Here is that quick email, shared with all of you. “Hi Mr. DBA [removed the name], Thanks for your email. I suggest you stop this practice. There are many issues here, but I would list two major ones: 1) By setting the database to simple recovery, shrinking the file, and once again setting it to full recovery, you are in fact losing your valuable log data and will not be able to restore to a point in time. Not only that, you will also not be able to use subsequent log backups. 2) Shrinking a file or database adds fragmentation. There are a lot of things you can do. First, start taking proper log backups using the following command instead of truncating the log and losing it frequently: BACKUP LOG [TestDb] TO DISK = N'C:\Backup\TestDb.bak' GO Remove the code that SHRINKs the file. If you are taking proper log backups, your log file usually (again, usually; special cases are excluded) does not grow very big. There are so many things to add here, but you can call me on my [phone number]. Before you call me, I suggest for accuracy you read Paul Randal‘s two posts here and here and Brent Ozar‘s post here. Kind Regards, Pinal Dave” I guess this post is very clear to you. Please leave your comments here. As mentioned, this is a huge subject; I have just touched the tip of the iceberg and have tried to point to authoritative knowledge. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
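
    Alongside that advice, a small diagnostic sketch (mine, not from the original post; [TestDb] and the backup path reuse the example names above): check how full the log actually is, and why it cannot be reused, before shrinking anything:

        -- Percent of each database's log file currently in use.
        DBCC SQLPERF(LOGSPACE);
        GO
        -- Why the log cannot be reused (e.g. LOG_BACKUP means a log backup is due).
        SELECT name, recovery_model_desc, log_reuse_wait_desc
        FROM sys.databases;
        GO
        -- Routine log backup, after which the log space can be reused.
        BACKUP LOG [TestDb] TO DISK = N'C:\Backup\TestDb.trn';
        GO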

    Read the article

  • CRM 2011 - Workflows vs JavaScript

    - by Kanini
    In the Contact entity, I have the following attributes:
    Preferred email - a read-only field of type Email
    Personal email 1 - an email field
    Personal email 2 - an email field
    Work email 1 - an email field
    Work email 2 - an email field
    School email - an email field
    Other email - an email field
    Preferred email option - an option set with the values {Personal email 1, Personal email 2, Work email 1, Work email 2, School email, Other email}
    None of the above-mentioned fields are required.
    Requirement: When the user picks a value from Preferred email option, we copy the email address from the corresponding field into the Preferred email field.
    Implementation: The Solution Architect suggested that we implement the above requirement as a Workflow. The reason he provided was that most of the time these values are populated by an external website and the data is then fed into the CRM 2011 system. So, when they update Preferred email option via a Web Service call to CRM, the WF will run and update the Preferred email field.
    My argument / solution:
    1) What will happen if I do not pick a value from the Preferred email option set? Do I set it to any of the email addresses that has a value in it? If so, what if more than one of the email address fields is populated, i.e., what if Personal email 1 and Work email 1 are populated but no value is picked in the option set?
    2) What if a value existed in the Preferred email option set and I then change it to NULL?
    3) Should the field Preferred email (where the text value of the email address is stored) be set to read-only? If not, what if I have picked Personal email 1 in the option set and then edit the Preferred email text field with a completely new email address? If yes, then we are enforcing that the preferred email must be one among Personal email 1, Personal email 2, Work email 1, Work email 2, School email or Other email [my preference would be this].
    4) What if I had a value of [email protected] in the Personal email 1 field while Personal email 2 is empty, I choose Personal email 1 in the drop-down for Preferred email (this sets the Preferred email field to [email protected]), and later I change the value to Personal email 2? It overwrites a valid email address with nothing. I agree it is highly unlikely that a user will pick Personal email 2 as the preferred email without a value in it, but it is nevertheless a possible scenario, isn’t it?
    5) What if the user typed a value into Personal email 1 but by mistake picked Personal email 2 in the option set, and the Personal email 2 field had no value in it?
    Solution:
    1) The field Preferred email option should be a required field.
    2) A JS function should run whenever Preferred email option is changed. It should set the relevant email field as required (based on the option chosen) and call another JS function (see step 3).
    3) A JS function should update the value of Preferred email with the value in the email field picked in the option set. This function should also run every time someone updates the actual email field chosen in the option set.
    4) The people managing the external website should update the Preferred email field - surely, if they can update Preferred email option via a Web Service call, it is easy enough to update Preferred email too, right?
    Question: Which is the better method - should this be written as JS or as a Workflow? Also, whose responsibility is it to update the Preferred email field when the data flows from an external website? I am new to CRM 2011 but have around 6 years of experience as a CRM consultant (with other products). I do not come from a development background, as I started off as an Application Support Engineer, but I have picked up development in the last couple of years.
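
    A hedged sketch of what steps 2 and 3 of the proposed solution could look like as a CRM 2011 form script; the attribute schema names and option set values below are hypothetical placeholders, not taken from the question:

        // Hypothetical schema names and option values; substitute the real ones.
        var PREFERRED_EMAIL_MAP = {
            100000000: "new_personalemail1",
            100000001: "new_personalemail2",
            100000002: "new_workemail1",
            100000003: "new_workemail2",
            100000004: "new_schoolemail",
            100000005: "new_otheremail"
        };

        // Attach to the OnChange event of the option set and of each email field.
        function syncPreferredEmail() {
            var option = Xrm.Page.getAttribute("new_preferredemailoption").getValue();
            var sourceName = PREFERRED_EMAIL_MAP[option];
            if (!sourceName) { return; }
            var source = Xrm.Page.getAttribute(sourceName);
            // Make the chosen email field required, then copy its value across.
            source.setRequiredLevel("required");
            Xrm.Page.getAttribute("new_preferredemail").setValue(source.getValue());
        }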

    Read the article

  • How LINQ to Objects statements work

    - by rajbk
    This post goes into detail as to how LINQ statements work when querying a collection of objects. This topic assumes you have an understanding of how generics, delegates, implicitly typed variables, lambda expressions, object/collection initializers, extension methods and the yield statement work. I would also recommend you read my previous two posts: Using Delegates in C# Part 1 Using Delegates in C# Part 2 We will start by writing some methods to filter a collection of data. Assume we have an Employee class like so: 1: public class Employee { 2: public int ID { get; set;} 3: public string FirstName { get; set;} 4: public string LastName {get; set;} 5: public string Country { get; set; } 6: } and a collection of employees like so: 1: var employees = new List<Employee> { 2: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 3: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 4: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 5: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" }, 6: }; Filtering We wish to find all employees that have an even ID. We could start off by writing a method that takes in a list of employees and returns a filtered list of employees with an even ID. 1: static List<Employee> GetEmployeesWithEvenID(List<Employee> employees) { 2: var filteredEmployees = new List<Employee>(); 3: foreach (Employee emp in employees) { 4: if (emp.ID % 2 == 0) { 5: filteredEmployees.Add(emp); 6: } 7: } 8: return filteredEmployees; 9: } The method can be rewritten to return an IEnumerable<Employee> using the yield return keyword. 1: static IEnumerable<Employee> GetEmployeesWithEvenID(IEnumerable<Employee> employees) { 2: foreach (Employee emp in employees) { 3: if (emp.ID % 2 == 0) { 4: yield return emp; 5: } 6: } 7: } We put these together in a console application. 1: using System; 2: using System.Collections.Generic; 3: //No System.Linq 4:  5: public class Program 6: { 7: [STAThread] 8: static void Main(string[] args) 9: { 10: var employees = new List<Employee> { 11: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 12: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 13: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 14: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" }, 15: }; 16: var filteredEmployees = GetEmployeesWithEvenID(employees); 17:  18: foreach (Employee emp in filteredEmployees) { 19: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 20: emp.ID, emp.FirstName, emp.LastName, emp.Country); 21: } 22:  23: Console.ReadLine(); 24: } 25: 26: static IEnumerable<Employee> GetEmployeesWithEvenID(IEnumerable<Employee> employees) { 27: foreach (Employee emp in employees) { 28: if (emp.ID % 2 == 0) { 29: yield return emp; 30: } 31: } 32: } 33: } 34:  35: public class Employee { 36: public int ID { get; set;} 37: public string FirstName { get; set;} 38: public string LastName {get; set;} 39: public string Country { get; set; } 40: } Output: ID 2 First_Name Jim Last_Name Ashlock Country UK ID 4 First_Name Jill Last_Name Anderson Country AUS Our filtering method is too specific. Let us change it so that it is capable of doing different types of filtering, and let's give our method the name Where ;-) We will add another parameter to our Where method.
This additional parameter will be a delegate with the following declaration. public delegate bool Filter(Employee emp); The idea is that the delegate parameter in our Where method will point to a method that contains the logic to do our filtering thereby freeing our Where method from any dependency. The method is shown below: 1: static IEnumerable<Employee> Where(IEnumerable<Employee> employees, Filter filter) { 2: foreach (Employee emp in employees) { 3: if (filter(emp)) { 4: yield return emp; 5: } 6: } 7: } Making the change to our app, we create a new instance of the Filter delegate on line 14 with a target set to the method EmployeeHasEvenId. Running the code will produce the same output. 1: public delegate bool Filter(Employee emp); 2:  3: public class Program 4: { 5: [STAThread] 6: static void Main(string[] args) 7: { 8: var employees = new List<Employee> { 9: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 10: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 11: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 12: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 13: }; 14: var filterDelegate = new Filter(EmployeeHasEvenId); 15: var filteredEmployees = Where(employees, filterDelegate); 16:  17: foreach (Employee emp in filteredEmployees) { 18: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 19: emp.ID, emp.FirstName, emp.LastName, emp.Country); 20: } 21: Console.ReadLine(); 22: } 23: 24: static bool EmployeeHasEvenId(Employee emp) { 25: return emp.ID % 2 == 0; 26: } 27: 28: static IEnumerable<Employee> Where(IEnumerable<Employee> employees, Filter filter) { 29: foreach (Employee emp in employees) { 30: if (filter(emp)) { 31: yield return emp; 32: } 33: } 34: } 35: } 36:  37: public class Employee { 38: public int ID { get; set;} 39: public string FirstName { get; set;} 40: public string LastName {get; set;} 41: public string Country { get; set; } 42: } Lets use lambda expressions to inline the contents of the EmployeeHasEvenId method in place of the method. The next code snippet shows this change (see line 15).  For brevity, the Employee class declaration has been skipped. 1: public delegate bool Filter(Employee emp); 2:  3: public class Program 4: { 5: [STAThread] 6: static void Main(string[] args) 7: { 8: var employees = new List<Employee> { 9: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 10: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 11: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 12: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 13: }; 14: var filterDelegate = new Filter(EmployeeHasEvenId); 15: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 16:  17: foreach (Employee emp in filteredEmployees) { 18: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 19: emp.ID, emp.FirstName, emp.LastName, emp.Country); 20: } 21: Console.ReadLine(); 22: } 23: 24: static bool EmployeeHasEvenId(Employee emp) { 25: return emp.ID % 2 == 0; 26: } 27: 28: static IEnumerable<Employee> Where(IEnumerable<Employee> employees, Filter filter) { 29: foreach (Employee emp in employees) { 30: if (filter(emp)) { 31: yield return emp; 32: } 33: } 34: } 35: } 36:  The output displays the same two employees.  
Our Where method is too restricted since it works with a collection of Employees only. Lets change it so that it works with any IEnumerable<T>. In addition, you may recall from my previous post,  that .NET 3.5 comes with a lot of predefined delegates including public delegate TResult Func<T, TResult>(T arg); We will get rid of our Filter delegate and use the one above instead. We apply these two changes to our code. 1: public class Program 2: { 3: [STAThread] 4: static void Main(string[] args) 5: { 6: var employees = new List<Employee> { 7: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 8: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 9: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 10: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 11: }; 12:  13: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 14:  15: foreach (Employee emp in filteredEmployees) { 16: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 17: emp.ID, emp.FirstName, emp.LastName, emp.Country); 18: } 19: Console.ReadLine(); 20: } 21: 22: static IEnumerable<T> Where<T>(IEnumerable<T> source, Func<T, bool> filter) { 23: foreach (var x in source) { 24: if (filter(x)) { 25: yield return x; 26: } 27: } 28: } 29: } We have successfully implemented a way to filter any IEnumerable<T> based on a  filter criteria. Projection Now lets enumerate on the items in the IEnumerable<Employee> we got from the Where method and copy them into a new IEnumerable<EmployeeFormatted>. The EmployeeFormatted class will only have a FullName and ID property. 1: public class EmployeeFormatted { 2: public int ID { get; set; } 3: public string FullName {get; set;} 4: } We could “project” our existing IEnumerable<Employee> into a new collection of IEnumerable<EmployeeFormatted> with the help of a new method. We will call this method Select ;-) 1: static IEnumerable<EmployeeFormatted> Select(IEnumerable<Employee> employees) { 2: foreach (var emp in employees) { 3: yield return new EmployeeFormatted { 4: ID = emp.ID, 5: FullName = emp.LastName + ", " + emp.FirstName 6: }; 7: } 8: } The changes are applied to our app. 
1: public class Program 2: { 3: [STAThread] 4: static void Main(string[] args) 5: { 6: var employees = new List<Employee> { 7: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 8: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 9: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 10: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 11: }; 12:  13: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 14: var formattedEmployees = Select(filteredEmployees); 15:  16: foreach (EmployeeFormatted emp in formattedEmployees) { 17: Console.WriteLine("ID {0} Full_Name {1}", 18: emp.ID, emp.FullName); 19: } 20: Console.ReadLine(); 21: } 22:  23: static IEnumerable<T> Where<T>(IEnumerable<T> source, Func<T, bool> filter) { 24: foreach (var x in source) { 25: if (filter(x)) { 26: yield return x; 27: } 28: } 29: } 30: 31: static IEnumerable<EmployeeFormatted> Select(IEnumerable<Employee> employees) { 32: foreach (var emp in employees) { 33: yield return new EmployeeFormatted { 34: ID = emp.ID, 35: FullName = emp.LastName + ", " + emp.FirstName 36: }; 37: } 38: } 39: } 40:  41: public class Employee { 42: public int ID { get; set;} 43: public string FirstName { get; set;} 44: public string LastName {get; set;} 45: public string Country { get; set; } 46: } 47:  48: public class EmployeeFormatted { 49: public int ID { get; set; } 50: public string FullName {get; set;} 51: } Output: ID 2 Full_Name Ashlock, Jim ID 4 Full_Name Anderson, Jill We have successfully selected employees who have an even ID and then shaped our data with the help of the Select method so that the final result is an IEnumerable<EmployeeFormatted>.  Lets make our Select method more generic so that the user is given the freedom to shape what the output would look like. We can do this, like before, with lambda expressions. Our Select method is changed to accept a delegate as shown below. TSource will be the type of data that comes in and TResult will be the type the user chooses (shape of data) as returned from the selector delegate. 1:  2: static IEnumerable<TResult> Select<TSource, TResult>(IEnumerable<TSource> source, Func<TSource, TResult> selector) { 3: foreach (var x in source) { 4: yield return selector(x); 5: } 6: } We see the new changes to our app. On line 15, we use lambda expression to specify the shape of the data. In this case the shape will be of type EmployeeFormatted. 
1:  2: public class Program 3: { 4: [STAThread] 5: static void Main(string[] args) 6: { 7: var employees = new List<Employee> { 8: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 9: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 10: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 11: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 12: }; 13:  14: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 15: var formattedEmployees = Select(filteredEmployees, (emp) => 16: new EmployeeFormatted { 17: ID = emp.ID, 18: FullName = emp.LastName + ", " + emp.FirstName 19: }); 20:  21: foreach (EmployeeFormatted emp in formattedEmployees) { 22: Console.WriteLine("ID {0} Full_Name {1}", 23: emp.ID, emp.FullName); 24: } 25: Console.ReadLine(); 26: } 27: 28: static IEnumerable<T> Where<T>(IEnumerable<T> source, Func<T, bool> filter) { 29: foreach (var x in source) { 30: if (filter(x)) { 31: yield return x; 32: } 33: } 34: } 35: 36: static IEnumerable<TResult> Select<TSource, TResult>(IEnumerable<TSource> source, Func<TSource, TResult> selector) { 37: foreach (var x in source) { 38: yield return selector(x); 39: } 40: } 41: } The code outputs the same result as before. On line 14 we filter our data and on line 15 we project our data. What if we wanted to be more expressive and concise? We could combine both line 14 and 15 into one line as shown below. Assuming you had to perform several operations like this on our collection, you would end up with some very unreadable code! 1: var formattedEmployees = Select(Where(employees, emp => emp.ID % 2 == 0), (emp) => 2: new EmployeeFormatted { 3: ID = emp.ID, 4: FullName = emp.LastName + ", " + emp.FirstName 5: }); A cleaner way to write this would be to give the appearance that the Select and Where methods were part of the IEnumerable<T>. This is exactly what extension methods give us. Extension methods have to be defined in a static class. Let us make the Select and Where extension methods on IEnumerable<T> 1: public static class MyExtensionMethods { 2: static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> filter) { 3: foreach (var x in source) { 4: if (filter(x)) { 5: yield return x; 6: } 7: } 8: } 9: 10: static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector) { 11: foreach (var x in source) { 12: yield return selector(x); 13: } 14: } 15: } The creation of the extension method makes the syntax much cleaner as shown below. We can write as many extension methods as we want and keep on chaining them using this technique. 1: var formattedEmployees = employees 2: .Where(emp => emp.ID % 2 == 0) 3: .Select (emp => new EmployeeFormatted { ID = emp.ID, FullName = emp.LastName + ", " + emp.FirstName }); Making these changes and running our code produces the same result. 
1: using System; 2: using System.Collections.Generic; 3:  4: public class Program 5: { 6: [STAThread] 7: static void Main(string[] args) 8: { 9: var employees = new List<Employee> { 10: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 11: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 12: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 13: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 14: }; 15:  16: var formattedEmployees = employees 17: .Where(emp => emp.ID % 2 == 0) 18: .Select (emp => 19: new EmployeeFormatted { 20: ID = emp.ID, 21: FullName = emp.LastName + ", " + emp.FirstName 22: } 23: ); 24:  25: foreach (EmployeeFormatted emp in formattedEmployees) { 26: Console.WriteLine("ID {0} Full_Name {1}", 27: emp.ID, emp.FullName); 28: } 29: Console.ReadLine(); 30: } 31: } 32:  33: public static class MyExtensionMethods { 34: static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> filter) { 35: foreach (var x in source) { 36: if (filter(x)) { 37: yield return x; 38: } 39: } 40: } 41: 42: static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector) { 43: foreach (var x in source) { 44: yield return selector(x); 45: } 46: } 47: } 48:  49: public class Employee { 50: public int ID { get; set;} 51: public string FirstName { get; set;} 52: public string LastName {get; set;} 53: public string Country { get; set; } 54: } 55:  56: public class EmployeeFormatted { 57: public int ID { get; set; } 58: public string FullName {get; set;} 59: } Let’s change our code to return a collection of anonymous types and get rid of the EmployeeFormatted type. We see that the code produces the same output. 1: using System; 2: using System.Collections.Generic; 3:  4: public class Program 5: { 6: [STAThread] 7: static void Main(string[] args) 8: { 9: var employees = new List<Employee> { 10: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 11: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 12: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 13: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 14: }; 15:  16: var formattedEmployees = employees 17: .Where(emp => emp.ID % 2 == 0) 18: .Select (emp => 19: new { 20: ID = emp.ID, 21: FullName = emp.LastName + ", " + emp.FirstName 22: } 23: ); 24:  25: foreach (var emp in formattedEmployees) { 26: Console.WriteLine("ID {0} Full_Name {1}", 27: emp.ID, emp.FullName); 28: } 29: Console.ReadLine(); 30: } 31: } 32:  33: public static class MyExtensionMethods { 34: public static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> filter) { 35: foreach (var x in source) { 36: if (filter(x)) { 37: yield return x; 38: } 39: } 40: } 41: 42: public static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector) { 43: foreach (var x in source) { 44: yield return selector(x); 45: } 46: } 47: } 48:  49: public class Employee { 50: public int ID { get; set;} 51: public string FirstName { get; set;} 52: public string LastName {get; set;} 53: public string Country { get; set; } 54: } To be more expressive, C# allows us to write our extension method calls as a query expression. 
    Line 16 can be rewritten as a query expression like so: 1: var formattedEmployees = from emp in employees 2: where emp.ID % 2 == 0 3: select new { 4: ID = emp.ID, 5: FullName = emp.LastName + ", " + emp.FirstName 6: }; When the compiler encounters an expression like the above, it simply rewrites it as calls to our extension methods. So far we have been using our own extension methods. The System.Linq namespace contains several extension methods for objects that implement IEnumerable<T>. You can see a listing of these methods in the Enumerable class in the System.Linq namespace. Let’s get rid of our extension methods (which I purposefully wrote to have the same signatures as the ones in the Enumerable class) and use the ones provided in the Enumerable class. Our final code is shown below: 1: using System; 2: using System.Collections.Generic; 3: using System.Linq; //Added 4:  5: public class Program 6: { 7: [STAThread] 8: static void Main(string[] args) 9: { 10: var employees = new List<Employee> { 11: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 12: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 13: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 14: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 15: }; 16:  17: var formattedEmployees = from emp in employees 18: where emp.ID % 2 == 0 19: select new { 20: ID = emp.ID, 21: FullName = emp.LastName + ", " + emp.FirstName 22: }; 23:  24: foreach (var emp in formattedEmployees) { 25: Console.WriteLine("ID {0} Full_Name {1}", 26: emp.ID, emp.FullName); 27: } 28: Console.ReadLine(); 29: } 30: } 31:  32: public class Employee { 33: public int ID { get; set;} 34: public string FirstName { get; set;} 35: public string LastName {get; set;} 36: public string Country { get; set; } 37: } 38:  39: public class EmployeeFormatted { 40: public int ID { get; set; } 41: public string FullName {get; set;} 42: } This post has shown you a basic overview of how LINQ to Objects works by showing you how an expression is converted to a sequence of calls to extension methods when working directly with objects. It gets more interesting when working with LINQ to SQL, where an expression tree is constructed – an in-memory data representation of the expression. The C# compiler compiles these expressions into code that builds an expression tree at runtime. The provider can then traverse the expression tree and generate the appropriate SQL query. You can read more about expression trees in this MSDN article.
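
    As a small illustration of that last point (this example is mine, not the author's): assigning the same lambda to an Expression<Func<int, bool>> makes the compiler emit code that builds a data structure describing the lambda, instead of a delegate you can only invoke:

        using System;
        using System.Linq.Expressions;

        class ExpressionTreeDemo {
            static void Main() {
                // Compiled to IL: a delegate you can only invoke.
                Func<int, bool> asDelegate = id => id % 2 == 0;
                // Compiled to code that builds a tree describing the lambda.
                Expression<Func<int, bool>> asTree = id => id % 2 == 0;
                Console.WriteLine(asTree.Body);         // ((id % 2) == 0)
                Console.WriteLine(asDelegate(4));       // True
                Console.WriteLine(asTree.Compile()(4)); // True
            }
        }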

    Read the article

  • Implementing Release Notes in TFS Team Build 2010

    - by Jakob Ehn
    In TFS Team Build (all versions), each build is associated with changesets and work items. To determine which changesets should be associated with the current build, Team Build finds the label of the “Last Good Build” and then aggregates all changesets up until the label for the current build. Basically this means that if your build is failing, every changeset that is checked in will be accumulated in this list until the build is successful. All well, but there is a dimension missing here regarding releases. Often you can run several release builds until you actually deploy the result of the build to a test or production system. When you do this, wouldn’t it be nice to be able to send the customer a nice release note that contains all work items and changesets since the previously deployed version? At our company, we have developed a Release Repository, which basically is a simple web site with a SQL database as storage. Every time we run a Release Build, the resulting installers, zip-files, sql scripts etc. get pushed into the release repository together with the relevant build information. This information contains things such as start time, who triggered the build etc. Also, it contains the associated changesets and work items. When deploying the MSIs for a new version, we mark the build as Deployed in the release repository. The deployed status is stored in the release repository database, but it could also have been implemented by setting the Build Quality for that build to Deployed. When generating the release notes, the web site simply runs through each release build back to the previous build that was marked as Deployed, and aggregates the work items and changesets: Here is a sample screenshot of how this looks for a sample build/application The web site is available both for us and also for the customers and testers, which means that they can easily get the latest version of a particular application and at the same time see what changes are included in this version. There is a lot going on in the Release Build Process that drives this in our TFS 2010 server, but in this post I will show how you can access and read the changeset and work item information in a custom activity. Since Team Build associates changesets and work items for each build, this information is (partially) available inside the build process template. The Associate Changesets and Work Items for non-Shelveset Builds activity (located inside the Try Compile, Test, and Associate Changesets and Work Items activity) defines and populates a variable called associatedChangesets   You can see that this variable is an IList containing instances of the Changeset class (from the Microsoft.TeamFoundation.VersionControl.Client namespace). Now, if you want to access this variable later on in the build process template, you need to declare a new variable in the corresponding scope and then assign the value to this variable. In this sample, I declared a variable called assocChangesets in the RunAgent sequence, which basically covers the whole compile, test and drop part of the build process:   Now, you need to assign the value from the associatedChangesets variable to this variable. This is done using the Assign workflow activity:   Now you can add a custom activity anywhere inside the RunAgent sequence and use this variable. NB: Of course your activity must be placed somewhere after the variable has been populated.
To finish off, here is a code snippet that shows how you can read the changeset and work item information from the variable.   First you add an InArgument to your activity through which you can pass in the variable that we defined. [RequiredArgument] public InArgument<IList<Changeset>> AssociatedChangesets { get; set; } Then you can traverse all the changesets in the list, and for each changeset use the WorkItems property to get the work items that were associated in that changeset: foreach (Changeset ch in associatedChangesets) { // Add change theChangesets.Add( new AssociatedChangeset(ch.ChangesetId, ch.ArtifactUri, ch.Committer, ch.Comment, ch.ChangesetId)); foreach (var wi in ch.WorkItems) { theWorkItems.Add( new AssociatedWorkItem(wi["System.AssignedTo"].ToString(), wi.Id, wi["System.State"].ToString(), wi.Title, wi.Type.Name, wi.Id, wi.Uri)); } } NB: AssociatedChangeset and AssociatedWorkItem are custom classes that we use internally for storing this information that is eventually pushed to the release repository.
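For completeness, here is a minimal sketch of what such internal classes could look like — this is an assumption on my part (the post does not show them); the constructor parameters simply mirror the calls above, and the property names are invented:

using System;

// Hypothetical DTO matching the AssociatedChangeset constructor call above.
public class AssociatedChangeset
{
    public AssociatedChangeset(int id, string artifactUri, string committer,
                               string comment, int changesetId)
    {
        // Plain data holder: just capture the information for the repository.
        Id = id;
        ArtifactUri = artifactUri;
        Committer = committer;
        Comment = comment;
        ChangesetId = changesetId;
    }

    public int Id { get; private set; }
    public string ArtifactUri { get; private set; }
    public string Committer { get; private set; }
    public string Comment { get; private set; }
    public int ChangesetId { get; private set; }
}

// Hypothetical DTO matching the AssociatedWorkItem constructor call above.
public class AssociatedWorkItem
{
    public AssociatedWorkItem(string assignedTo, int id, string state,
                              string title, string type, int workItemId, Uri uri)
    {
        AssignedTo = assignedTo;
        Id = id;
        State = state;
        Title = title;
        Type = type;
        WorkItemId = workItemId;
        Uri = uri;
    }

    public string AssignedTo { get; private set; }
    public int Id { get; private set; }
    public string State { get; private set; }
    public string Title { get; private set; }
    public string Type { get; private set; }
    public int WorkItemId { get; private set; }
    public Uri Uri { get; private set; }
}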

    Read the article

  • The Incremental Architect&rsquo;s Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx The functionality of programs is entered via Entry Points. So what we´re talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases: Analyzing the requirements to find the Entry Points and their signatures Designing the functionality to be executed when those Entry Points get triggered Implementing the functionality according to the design aka coding I presume you´re familiar with phase 1 in some way. And I guess you´re proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. “Designing functionality? What´s that supposed to mean?” you might already have thought. Here´s my definition: To design functionality (or functional design for short) means thinking about… well, functions. You find a solution for what´s supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution that is, because those functions only exist in your head (or on paper) during this phase. But you may have guessed that, because it´s “design” not “coding”. And here is what functional design is not: It´s not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). Also I consider calling external APIs as logic. It´s equally basic. It´s what code needs to do in order to deliver some functionality or quality. Logic is what does what needs to be done by software. Transformations are either done through expressions or API-calls. And then there is alternative control flow depending on the result of some expression. Basically it´s just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do). But calling your own function is not logic. It´s not necessary in order to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions. No performance gain, no higher scalability etc. through functions. Functions are not relevant to functionality. Strange, isn´t it? What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher understandability, easier to keep code consistent). That´s no small feat, however. The value of evolvable code can hardly be overestimated. That´s why to me functional design is so important. It´s at the core of software development. To sum this up: Functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design and logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need to back out through refactoring. Functional design on the other hand is bloodless without actual code. It´s just a theory with no experiments to prove it. But how to do functional design? An example of functional design: Let´s assume a program to de-duplicate strings.
The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a. And the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line C:\>deduplicate "a, b, a, c, d, b, e, c, a" a, b, c, d, e …or by clicking on a GUI button. This leads to the Entry Point function getting called. It´s the program´s main function in case of the batch version or a button click event handler in the GUI version. That´s the physical Entry Point so to speak. It´s inevitable. What then happens is a three step process: Transform the input data from the user into a request. Call the request handler. Transform the output of the request handler into a tangible result for the user. Or to phrase it a bit more generally: Accept input. Transform input into output. Present output. This does not mean any of these steps requires a lot of effort. Maybe it´s just one line of code to accomplish it. Nevertheless it´s a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP). Interestingly the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself. But it´s still on a meta-level. The application to the domain at hand is easy, though: Accept string list from command line De-duplicate Present de-duplicated strings on standard output And this concrete list of processing steps can easily be transformed into code:static void Main(string[] args) { var input = Accept_string_list(args); var output = Deduplicate(input); Present_deduplicated_string_list(output); } Instead of a big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you´re not so sure how to go about the de-duplication, for example. Then just implement what´s easy right now, e.g.private static string Accept_string_list(string[] args) { return args[0]; } private static void Present_deduplicated_string_list( string[] output) { var line = string.Join(", ", output); Console.WriteLine(line); } Accept_string_list() contains logic in the form of an API-call. Present_deduplicated_string_list() contains logic in the form of an expression and an API-call. And then repeat the functional design for the remaining processing step. What´s left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design you´re left with just functions. So which functions could make up the de-duplication? Here´s a suggestion: De-duplicate Parse the input string into a true list of strings. Register each string in a dictionary/map/set. That way duplicates get cast away. Transform the data structure into a list of unique strings. Processing step 2 obviously was the core of the solution. That´s where real creativity was needed. That´s the core of the domain.
But now after this refinement the implementation of each step is easy again:private static string[] Parse_string_list(string input) { return input.Split(',') .Select(s => s.Trim()) .ToArray(); } private static Dictionary<string,object> Compile_unique_strings(string[] strings) { return strings.Aggregate( new Dictionary<string, object>(), (agg, s) => { agg[s] = null; return agg; }); } private static string[] Serialize_unique_strings( Dictionary<string,object> dict) { return dict.Keys.ToArray(); } With these three additional functions Main() now looks like this:static void Main(string[] args) { var input = Accept_string_list(args); var strings = Parse_string_list(input); var dict = Compile_unique_strings(strings); var output = Serialize_unique_strings(dict); Present_deduplicated_string_list(output); } I think that´s very understandable code: just read it from top to bottom and you know how the solution to the problem works. It´s a mirror image of the initial design: Accept string list from command line Parse the input string into a true list of strings. Register each string in a dictionary/map/set. That way duplicates get cast away. Transform the data structure into a list of unique strings. Present de-duplicated strings on standard output You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But more about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too. So much for an initial concrete example. Now it´s time for some theory. Because there is method to this madness ;-) The above has only scratched the surface. Introducing Flow Design Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic, a process on the other hand consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1] In its simplest form a process can be written as a bullet point list of steps, e.g. Get data from user Output result to user Transform data Parse data Map result for output Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next etc.? Get data from user Parse data Transform data Map result for output Output result to user That´s great for a start into functional design. It´s better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a function is too large. But how do you know beforehand without investing some thinking? And how to do this thinking in a systematic fashion? My recommendation: For any given function you´re supposed to implement, first do a functional design. Then, once you´re confident you know the processing steps - which are pretty small - refine and code them using TDD. You´ll see that´s much, much easier - and leads to cleaner code right away.
For more information on this approach, which I call “Informed TDD”, read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I´d say. It´s more according to the KISS (Keep It Simple, Stupid) principle than returning constants or other trivial stuff TDD development often is started with. So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example such steps can easily be translated into functions. Moving from design to coding thus is simple. However, such a list does not scale. Processing is not always simple enough to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their “dimensionality”: text just has one dimension, it´s sequential. Alternatives and parallelism are hard to encode in text. In addition the functional design using numbered lists lacks data. It´s not visible what´s the input, output, and state of the processing steps. That´s why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient. Visualizing processes The building block of the functional design notation is a functional unit. I mostly draw it like this: Something is done, it´s clear what goes in, it´s clear what comes out, and it´s clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That´s why I call this approach to functional design Flow Design. It´s about data flow instead of control flow. Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That´s what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give an overview. But what about all the details? As Robert C. Martin rightly said: “Programming is about detail”. Detail is a matter of code. Once you start coding the processing steps you designed you can worry about all the detail you want. Functional design does not eliminate all the nitty gritty. It just postpones tackling it. To me that´s also an example of the SRP. Functional design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later coding has the responsibility to implement the solution down to the last detail (i.e. statement, API-call). TDD unfortunately mixes both responsibilities. It´s just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that´s one reason why TDD has failed to deliver on its promise for many developers. Using functional units as building blocks of functional design processes can be depicted very easily. Here´s the initial process for the example problem: For each processing step draw a functional unit and label it. Choose a verb or an “action phrase” as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step.
Finally think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean “no data is flowing”, but nevertheless a signal is sent. A name like “list” or “strings” in brackets describes the data content. Use lower case labels for that purpose. A name starting with an upper case letter like “String” or “Customer” on the other hand signifies a data type. If you like, you also can combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent. Flows wired-up in this manner I call one-dimensional (1D). Each functional unit just has one input and/or one output. A functional unit without an output is possible. It´s like a black hole sucking up input without producing any output. Instead it produces side effects. A functional unit without an input, though, does not make much sense. When should it start to work? What´s the trigger? That´s why in the above process even the first processing step has an input. If you like, view such 1D-flows as pipelines. Data is flowing through them from left to right. But as you can see, it´s not always the same data. It gets transformed along its passage: (args) becomes a (list) which is turned into (strings). The Principle of Mutual Oblivion A very characteristic trait of flows put together from functional units is: no functional unit knows another one. They are all completely independent of each other. Functional units don´t know where their input is coming from (or even when it´s gonna arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving. Also they don´t know where their output is going. They just produce it in their own time independent of other functional units. That means at least conceptually all functional units work in parallel. Functional units don´t know their “deployment context”. They know nothing about the overall flow they are placed in. They are just consuming input from some upstream, and producing output for some downstream. That makes functional units very easy to test. At least as long as they don´t depend on state or resources. I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as an overall context/purpose. They are just parts of a whole focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units. By building software in such a manner, functional design interestingly follows nature. Nature´s building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell “controlling” a muscle cell for example:[2] The nerve cell does not know anything about muscle cells, let alone the specific muscle cell it is “attached to”. Likewise the muscle cell does not know anything about nerve cells, let alone a specific nerve cell “attached to” it. Saying “the nerve cell is controlling the muscle cell” thus only makes sense when viewing both from the outside. “Control” is a concept of the whole, not of its parts. Control is created by wiring-up parts in a certain way. Both cells are mutually oblivious.
Both just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going, where it´s coming from, neither cell cares about. Millions of years of evolution have led to this kind of division of labor. And millions of years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism. How and why this happened in nature is a mystery. For our software, though, it´s clear: functional and quality requirements need to be fulfilled. So we as developers have to become “intelligent designers” of “software cells” which we put together to form a “software organism” which responds in satisfying ways to triggers from its environment. My bet is: If nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler “machines”? So my rule is: Wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units. That´s what Flow Design is about. In that way it´s even universal, I´d say. Its notation can also be applied to biology: Never mind labeling the functional units with nouns. That´s ok in Flow Design. You´ll do that occasionally for functional units on a higher level of abstraction or when their purpose is close to hardware. Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 “software cells” (functional units). Both, though, follow the same basic principle. Translating functional units into code Moving from functional design to code is no rocket science. In fact it´s straightforward. There are two simple rules: Translate an input port to a function. Translate an output port either to a return statement in that function or to a function pointer visible to that function. The simplest translation of a functional unit is a function. That´s what you saw in the above example. Functions are mutually oblivious. That´s why Functional Programming likes them so much. It makes them composable. Which is the reason nature works according to the PoMO. Let´s be clear about one thing: There is no dependency injection in nature. For all of an organism´s complexity no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks. Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case where a processing step should not always produce an output. Maybe the purpose is to filter input. Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called “stop words”. In the above picture the optionality of the output is signified by the asterisk outside the brackets. It means: Any number of (word) data items can flow from the functional unit for each input data item. It might be none or one or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return. Because a function always needs to return a value.
So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]void filter_stop_words( string word, Action<string> onNoStopWord) { if (...check if not a stop word...) onNoStopWord(word); } If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you´re right. Conceptually, though, it´s not an injection. Because the subroutine is not functionally dependent on the continuation. Firstly continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly the name of the formal parameter is chosen in such a way as to not assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only. Translating output ports into function pointers helps keep functional units mutually oblivious in cases where output is optional or produced asynchronously. Either pass the function pointer to the function upon call. Or make it global by putting it on the encompassing class. Then it´s called an event. In C# that´s even an explicit feature.class Filter { public void filter_stop_words( string word) { if (...check if not a stop word...) onNoStopWord(word); } public event Action<string> onNoStopWord; } When to use a continuation and when to use an event depends on how a functional unit is used in flows and how it´s packed together with others into classes. You´ll see examples further down the Flow Design road.
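As a small, runnable illustration of the continuation translation (my own sketch — the concrete stop-word rule is an assumption here: words of two letters or fewer and all-uppercase acronyms are treated as stop words):

using System;

class FilterDemo
{
    // Input port translated to a function; output port to a continuation.
    static void Filter_stop_words(string word, Action<string> onNoStopWord)
    {
        bool isStopWord = word.Length <= 2 || word == word.ToUpper();
        if (!isStopWord)
            onNoStopWord(word); // output flows downstream only sometimes
    }

    static void Main()
    {
        string[] words = { "a", "NASA", "rocket", "to", "launch" };
        foreach (var w in words)
            Filter_stop_words(w, word => Console.WriteLine(word));
        // Prints: rocket, launch
    }
}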
Another example of 1D functional design: Let´s see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much cited solution including an extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:string WordWrap(string text, int maxLineLength) {...} That´s not an Entry Point since we don´t see an application with an environment and users. Nevertheless it´s a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than the maximum line length it can be split in multiple parts each fitting in a line. Flow Design Let´s start by brainstorming the process to accomplish the feat of reformatting the text. What´s needed? Words need to be assembled into lines Words need to be extracted from the input text The resulting lines need to be assembled into the output text Words too long to fit in a line need to be split Does that sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let´s tackle “average words” (words not longer than a line). Here´s the Flow Design for this increment: The first three bullet points turned into functional units with explicit data added. As the signature requires, a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That´s good to see because thereby the domain is clearly represented in the design. The requirements are talking about words and lines and here they are. But note the asterisk! It´s not outside the brackets but inside. That means it´s not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It´s an implementation detail. Does any processing step require further refinement? I don´t think so. They all look pretty “atomic” to me. And if not… I can always backtrack and refine a process step using functional design later once I´ve gained more insight into a sub-problem. Implementation The implementation is straightforward as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls: Easy to understand. It clearly states how word wrapping works - on a high level of abstraction. And it´s easy to evolve as you´ll see. Flow Design - Increment 2 So far only texts consisting of “average words” are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality. Whether it´s there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it´s time to add it to deliver the full scope. Fortunately Flow Design automatically leads to code following the Open Closed Principle (OCP). It´s easy to extend it - instead of changing well tested code. How´s that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead where extensions might need to be made in the future. I just “phase in” the remaining processing step: Since neither Extract words nor Reformat know of their environment neither needs to be touched due to the “detour”. The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step. Implementation - Increment 2 A trivial implementation - to check that this approach works - does not yet do anything to split long words. The input is just passed on: Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with. Compare this to Robert C. Martin´s solution:[4] How does this solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach. Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin´s. At least it seems so. Because his solution does not cover all the “word wrap situations” the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then… Is a difference in LOC that important as long as it´s in the same ball park? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it´s also clarity in design. But don´t take my word for it. Try Flow Design on larger problems and compare for yourself.
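Since the post shows the resulting code only as screenshots, here is a rough sketch of what the increment-1 pipeline could look like — the exact function bodies and the simple line-building strategy are my assumptions, not the author´s code; only the unit names Extract words and Reformat come from the text:

using System;
using System.Collections.Generic;

static class WordWrapper
{
    public static string WordWrap(string text, int maxLineLength)
    {
        // The process flow: just a sequence of function calls.
        var words = Extract_words(text);
        var lines = Reformat(words, maxLineLength);
        return Assemble_text(lines);
    }

    static string[] Extract_words(string text)
    {
        return text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
    }

    // Increment 1: handles only "average words" (no word longer than a line).
    static IEnumerable<string> Reformat(string[] words, int maxLineLength)
    {
        var line = "";
        foreach (var word in words)
        {
            if (line.Length == 0)
                line = word;
            else if (line.Length + 1 + word.Length <= maxLineLength)
                line += " " + word;
            else
            {
                yield return line; // line is full, emit it downstream
                line = word;
            }
        }
        if (line.Length > 0) yield return line;
    }

    static string Assemble_text(IEnumerable<string> lines)
    {
        return string.Join("\n", lines);
    }
}

Splitting over-long words would then be phased in as an additional step between Extract_words and Reformat, leaving both existing steps untouched.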
What´s the easier, more straightforward way to clean code? And keep in mind: You ain´t seen all yet ;-) There´s more to Flow Design than described in this chapter. In closing I hope I was able to give you an impression of functional design that makes you hungry for more. To me it´s an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that´s necessary read my blog article here. For now let me point out to you - if you haven´t already noticed - that Flow Design is a general purpose declarative language. It´s “programming by intention” (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps you become more productive. All team members can participate in functional design. This goes beyond collective code ownership. We´re talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.   PS: If you like what you read, consider getting my ebook “The Incremental Architekt´s Napkin”. It´s where I compile all the articles in this series for easier reading. I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also, to me it seems the real world is full of state and side effects. So why give them such a bad image? That´s why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don´t put too much of it into a single processing step. ? Image taken from www.physioweb.org ? My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type matching functions with signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you´re using a Functional Programming language it´s of course a no brainer. ? Taken from his blog post “The Craftsman 62, The Dark Path”. ?

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like 'Doing real-world TDD in .NET, with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make: It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and I myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spend my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider doing professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not.
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how the product is developed, he is only interested in the fact that it is developed as cost-effectively as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces highly componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation as about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture.
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason than that it is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of (part of) what the interface definition states. Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here is to add the IoC-specific wiring declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result:
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like this: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precisely. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code.
So code quality/readability really makes a HUGE difference for me – sometimes it can even be the difference between project success and failure…

Conclusions

The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them timeless principles, personal beliefs, or anything in between):

Test-driven development is the normal, natural way of writing software; code-first is the exception. So 'doing TDD or not' is not a question. And good, stable code can only reliably be produced by doing TDD. (Yes, I know: many will strongly disagree here again, but I've never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…)

It's the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to be going in the right direction…) The test code serves 'only' to make the production code work. But at the end of the day it's solely the number of delivered features that counts – no matter how much test code you wrote or how good it is.

With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity – without sacrificing the principles of TDD (more than I'd do either way…). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code. (This might sound like a lot, but that is mainly because software development standards are only beginning to evolve. Historically speaking, the software development profession is still very young, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary…)

Although the above might look like very much unnecessary work at first sight, it's not. With the aid of the mentioned add-ins, doing all of the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool – to 'save' a few hundred bucks – is just not acceptable and a very bad decision in business terms (though I have seen and heard of exactly that quite a few times…). Producing high-quality products requires using high-quality tools. This is a platitude that every craftsman knows…

The round-trip described here takes me about five to ten minutes in my real-world development practice. I guess that's about 30% more time compared to developing the 'traditional' (code-first) way. But the product manufactured this way is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of developing software… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it's not – e.g.
if time-to-market is crucial for a software project. So this is a business decision in the end; it's just that you have to know what you're doing and what consequences it might have…

Some last words

First, I'd like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn't have done that without this inspiration – I really enjoy that kind of discussion… I agree with him in all respects, but I don't know (yet?) how to bring his insights into the described production process without slowing things down. The method described above has proved to be "good enough" in my practical experience; but of course, I'm open to suggestions here… My rationale for now is: if the test is initially red during the red-green-refactor cycle, the 'right reason' is that it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, 'red' certainly must occur for the 'right reason': in this phase, 'red' MUST mean nothing but an unfulfilled assertion – Fail By Assertion, Not By Anything Else!
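To make that last principle concrete, here is a minimal sketch in the style of the tests above (the test name, values and comments are mine, not from the original post): the only way this test should ever turn red during CI is through its final assertion.

[Test]
public void TestAdd_FailsByAssertionOnly()
{
    // If this line throws (e.g. a container wiring problem), the test is red
    // for the wrong reason - it tells us nothing about Add() itself.
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Add(1, 1);

    // During CI, red should only ever mean that this assertion failed.
    Assert.AreEqual(2.0, result);
}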

    Read the article

  • GPGPU

What

GPU obviously stands for Graphics Processing Unit (the silicon powering the display you are using to read this blog post). The extra GP in front of that stands for General Purpose computing. So, altogether GPGPU refers to computing we can perform on a GPU for purposes beyond just drawing on the screen. In effect, we can use a GPGPU a bit like we already use a CPU: to perform some calculation (that doesn't have to have any visual element to it). The attraction is that a GPGPU can be orders of magnitude faster than a CPU.

Why

When I was at the SuperComputing conference in Portland last November, GPGPUs were all the rage. A quick online search reveals many articles introducing the GPGPU topic. I'll just share 3 here: pcper (ignoring all pages except the first, it is a good consumer perspective), gizmodo (nice take using mostly layman terms) and vizworld (answering the question on "what's the big deal").

The GPGPU programming paradigm (from a high level) is simple: in your CPU program you define functions (aka kernels) that take some input, can perform the costly operation and return the output. The kernels are the things that execute on the GPGPU, leveraging its power (and hence executing faster than they could on the CPU), while the host CPU program waits for the results or asynchronously performs other tasks.

However, GPGPUs have different characteristics to CPUs, which means they are suitable only for certain classes of problem (i.e. data parallel algorithms) and not for others (e.g. algorithms with branching or recursion or other complex flow control). You also pay a high cost for transferring the input data from the CPU to the GPU (and, vice versa, the results back to the CPU), so the computation itself has to be long enough to justify the transfer overhead. If your problem space fits the criteria then you probably want to check out this technology.

How

So where can you get a graphics card to start playing with all this? At the time of writing, the two main vendors ATI (owned by AMD) and NVIDIA are the obvious players in this industry. You can read about GPGPU on this AMD page and also on this NVIDIA page. NVIDIA's website also has a free chapter on the topic from the "GPU Gems" book: A Toolkit for Computation on GPUs.

If you followed the links above, then you've already come across some of the choices of programming models that are available today. Essentially, AMD is offering their ATI Stream technology accessible via a language they call Brook+; NVIDIA offers their CUDA platform which is accessible from CUDA C. Choosing either of those locks you into the GPU vendor, and hence your code cannot run on systems with cards from the other vendor (e.g. imagine if your CPU code would run on Intel chips but not AMD chips). Having said that, both vendors plan to support a new emerging standard called OpenCL, which theoretically means your kernels can execute on any GPU that supports it. To learn more about all of these there is a website: gpgpu.org. The caveat about that site is that (currently) it completely ignores the Microsoft approach, which I touch on next.

On Windows, there is already a cross-GPU-vendor way of programming GPUs and that is the DirectX API. Specifically, on Windows Vista and Windows 7, the DirectX 11 API offers a dedicated subset of the API for GPGPU programming: DirectCompute. You use this API on the CPU side, to set up and execute the kernels that run on the GPU. The kernels are written in a language called HLSL (High Level Shader Language).
You can use DirectCompute with HLSL to write a "compute shader", which is the term DirectX uses for what I've been referring to in this post as a "kernel". For a comprehensive collection of links about this (including tutorials, videos and samples) please see my blog post: DirectCompute.Note that there are many efforts to build even higher level languages on top of DirectX that aim to expose GPGPU programming to a wider audience by making it as easy as today's mainstream programming models. I'll mention here just two of those efforts: Accelerator from MSR and Brahma by Ananth. Comments about this post welcome at the original blog.
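To make the host/kernel paradigm described above a little more tangible, here is a deliberately API-free sketch in C# (all names are mine; a real implementation would express the kernel in Brook+, CUDA C, OpenCL or HLSL, and the "transfers" would cross the PCIe bus rather than a Task boundary):

using System;
using System.Threading.Tasks;

class GpgpuParadigmSketch
{
    // The "kernel": a pure, data-parallel function applied independently to
    // every element - exactly the shape of work a GPGPU is good at.
    static float Kernel(float x)
    {
        return x * x + 1.0f;
    }

    static void Main()
    {
        var input = new float[1000000];

        // Host side: hand over the input and launch the kernel asynchronously
        // (on real hardware this is where the costly CPU-to-GPU copy happens).
        Task<float[]> gpuWork = Task.Run(() =>
        {
            var output = new float[input.Length];
            for (int i = 0; i < input.Length; i++) // conceptually: one GPU thread per i
            {
                output[i] = Kernel(input[i]);
            }
            return output;
        });

        // ...the host CPU can perform other tasks here...

        float[] results = gpuWork.Result; // wait and copy the results back
        Console.WriteLine(results[0]);
    }
}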

    Read the article

  • Stream Media from Windows 7 to XP with VLC Media Player

    - by DigitalGeekery
So you’ve got yourself a new computer with Windows 7 and you’re itching to take advantage of its ability to stream media across your home network. But the rest of the family is still on Windows XP and you’re not quite ready to shell out the cash for the upgrades. Well, today we’ll show you how to easily stream media from Windows 7 to Windows XP with VLC Media Player.

On the host computer running Windows 7, you’ll need to have an account set up with both a username and password; a blank password will not work. The media files will need to be located in a shared folder. Note: If the media files are located within the Public directory, or within the profile of the user account you use to log into the Windows 7 computer, they will be shared automatically.

Sharing your Media Folders

On your Windows 7 computer, right-click on the folder containing the files you’d like to stream and choose Properties. On the Sharing tab of the folder properties, click the Share button. Type or select from the drop-down the user account you’ll use to log in, or select “Everyone” to share with all users, then click Add. You may change the permission level, but only Read permission is required to play the media. Click OK. Repeat this process for any additional folders you wish to share.

The Windows XP Client Computer

Now that we’ve shared our media folders from the Windows 7 computer, we’re ready to play our files on the Windows XP computer. Download and install the VLC Media Player (see link below), then open VLC. Click on Media from the menu and select Open File… Browse your network for the shared folder that contains your media. You’ll be prompted to log in to the host computer. Provide the credentials for a user on the Windows 7 computer. Click OK. Select your media file and click Open. Your media playback will begin momentarily.

This is a nice and easy way to stream media across your home network without upgrading multiple computers to Windows 7. Plus, VLC is certainly no slouch as a media player: it’ll play virtually any video or audio file you can throw at it. Have you already upgraded all your home PCs to Windows 7? Check out our previous article on streaming media between Windows 7 computers on your home network.

Download VLC Media Player

    Read the article

  • My ASP.NET news sources

    - by Jon Galloway
I just posted about the ASP.NET Daily Community Spotlight. I was going to list a bunch of my news sources at the end, but figured this deserves a separate post. I've been following a lot of development blogs for a long time - for a while I subscribed to over 1500 feeds and read them all. That doesn't scale very well, though, and it's really time consuming. Since the community spotlight requires an interesting ASP.NET post every day of the year, I've come up with a few sources of ASP.NET news.

Top Link Blogs

Chris Alcock's The Morning Brew is a must-read blog which highlights each day's best blog posts across the .NET community. He covers the entire Microsoft development space, but generally any of the top ASP.NET posts I see either have already been listed on The Morning Brew or will be there soon. Elijah Manor posts a lot of great content, which is available in his Twitter feed at @elijahmanor, on his Delicious feed, and on a dedicated website - Web Dev Tweets. While not 100% ASP.NET focused, I've been appreciating Joe Stagner's Weekly Links series, partly since he includes a lot of links that don't show up on my other lists.

Twitter

Over the past few years, I've been getting more and more of my information from my Twitter network (as opposed to RSS or other means). Twitter is as good as your network, so if getting good information off Twitter sounds crazy, you're probably not following the right people. I already mentioned Elijah Manor (@elijahmanor). I follow over a thousand people on Twitter, so I'm not going to try to pick and choose a list, but one good way to get started building out a Twitter network is to follow active Twitter users on the ASP.NET team at Microsoft: @scottgu (well, not on the ASP.NET team, but their great grand boss, and always a great source of ASP.NET info) @shanselman @haacked @bradwilson @davidfowl @InfinitiesLoop @davidebbo @marcind @DamianEdwards @stevensanderson @bleroy @humancompiler @osbornm @anurse I'm sure I'm missing a few, and I'll update the list. Building a Twitter network that follows topics you're interested in allows you to use other tools like Cadmus to automatically summarize top content by leveraging the collective input of many users.

Twitter Search with Topsy

You can search Twitter for hashtags (like #aspnet, #aspnetmvc, and #webmatrix) to get a raw view of what people are talking about on Twitter. Twitter's search is pretty poor; I prefer Topsy. Here's an example search for the #aspnetmvc hashtag: http://topsy.com/s?q=%23aspnetmvc You can also do combined queries for several tags: http://topsy.com/s?q=%23aspnetmvc+OR+%23aspnet+OR+%23webmatrix

Paper.li

Paper.li is a handy service that builds a custom daily newspaper based on your social network. They've turned a lot of people off by automatically tweeting "The SuperDevFoo Daily is out!!!" messages (which can be turned off), but if you're ignoring them because of those messages, you're missing out on a handy, free service. My paper.li page includes content across a lot of interests, including ASP.NET: http://paper.li/jongalloway When I want to drill into a specific tag, though, I'll just look at the Paper.li post for that hashtag. For example, here's the #aspnetmvc paper.li page: http://paper.li/tag/aspnetmvc

Delicious

I mentioned previously that I use Delicious for managing site links. I also use their network and search features. The tag based search is pretty good. Even better, though, is that I can see who's bookmarked these links, and add them to my Delicious network.
After having built out a network, I can optimize by doing less searching and more leveraging of collective intelligence.

Community Sites

I scan DotNetKicks, the weblogs.asp.net combined feed, the ASP.NET Community page, CodeBetter, Los Techies, CodeProject, and DotNetSlackers from time to time. They're hit and miss, but they do offer more of an opportunity for finding original content which others may have missed.

Terms of Enrampagement

When someone's on a tear, I just manually check their sites more often. I could use RSS for that, but it changes pretty often. I just keep a mental note of people who are cranking out a lot of good content and check their sites more often. What works for you?

    Read the article

  • SQL SERVER – Guest Post – Jacob Sebastian – Filestream – Wait Types – Wait Queues – Day 22 of 28

    - by pinaldave
Jacob Sebastian is a SQL Server MVP, Author, Speaker and Trainer. Jacob is one of the top rated experts in the community. Jacob wrote the book The Art of XSD – SQL Server XML Schema Collections and wrote the XML Chapter in SQL Server 2008 Bible. See his Blog | Profile. He is currently researching the subject of Filestream and has submitted this interesting article on the very subject.

What is FILESTREAM?

FILESTREAM is a new feature introduced in SQL Server 2008 which provides an efficient storage and management option for BLOB data. Many applications that deal with BLOB data today store it in the file system and store the path to the file in the relational tables. Storing BLOB data in the file system is more efficient than storing it in the database. However, this brings up a few disadvantages as well. When the BLOB data is stored in the file system, it is hard to ensure transactional consistency between the file system data and relational data. Some applications store the BLOB data within the database to overcome the limitations mentioned earlier. This approach ensures transactional consistency between the relational data and BLOB data, but is very bad in terms of performance. FILESTREAM combines the benefits of both approaches mentioned above without the disadvantages we examined. FILESTREAM stores the BLOB data in the file system (thus taking advantage of the IO streaming capabilities of NTFS) and ensures transactional consistency between the BLOB data in the file system and the relational data in the database. For more information on the FILESTREAM feature, visit: http://beyondrelational.com/filestream/default.aspx

FILESTREAM Wait Types

Since this series is on the different SQL Server wait types, let us take a look at the various wait types that are related to the FILESTREAM feature.

FS_FC_RWLOCK: This wait type is generated by the FILESTREAM garbage collector. It occurs when garbage collection is disabled prior to a backup/restore operation, or when a garbage collection cycle is being executed.

FS_GARBAGE_COLLECTOR_SHUTDOWN: This wait type occurs during the cleanup process of a garbage collection cycle. It indicates that the garbage collector is waiting for the cleanup tasks to be completed.

FS_HEADER_RWLOCK: This wait type indicates that the process is waiting to obtain access to the FILESTREAM header file for a read or write operation. The FILESTREAM header is a disk file located in the FILESTREAM data container and is named “filestream.hdr”.

FS_LOGTRUNC_RWLOCK: This wait type indicates that the process is trying to perform a FILESTREAM log truncation related operation. It can be either a log truncate operation or disabling log truncation prior to a backup or restore operation.

FSA_FORCE_OWN_XACT: This wait type occurs when a FILESTREAM file I/O operation needs to bind to the associated transaction, but the transaction is currently owned by another session.

FSAGENT: This wait type occurs when a FILESTREAM file I/O operation is waiting for a FILESTREAM agent resource that is being used by another file I/O operation.

FSTR_CONFIG_MUTEX: This wait type occurs when there is a wait for another FILESTREAM feature reconfiguration to be completed.

FSTR_CONFIG_RWLOCK: This wait type occurs when there is a wait to serialize access to the FILESTREAM configuration parameters.

Waits and Performance

System waits have a direct relationship with overall performance. In most cases, when waits increase, performance degrades.
SQL Server documentation does not say much about how we can reduce these waits. However, following the FILESTREAM best practices will help you improve overall performance and reduce these wait types to a good extent. Read all the posts in the Wait Types and Queues series. Reference: Pinal Dave (http://blog.SQLAuthority.com)
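If you want to check whether any of these FILESTREAM waits actually show up on your own server, sys.dm_os_wait_stats can be queried directly. A minimal C# sketch (the connection string and the idea of filtering on the FS/FSA/FSTR prefixes are my assumptions, not from the article):

using System;
using System.Data.SqlClient;

class FilestreamWaitStats
{
    static void Main()
    {
        // Adjust the connection string for your own server.
        const string connectionString = "Server=.;Database=master;Integrated Security=true";

        // FS_*, FSA* and FSTR* cover the FILESTREAM wait types listed above.
        const string sql =
            "SELECT wait_type, waiting_tasks_count, wait_time_ms " +
            "FROM sys.dm_os_wait_stats " +
            "WHERE wait_type LIKE 'FS!_%' ESCAPE '!' " +
            "   OR wait_type LIKE 'FSA%' " +
            "   OR wait_type LIKE 'FSTR%' " +
            "ORDER BY wait_time_ms DESC;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1} waits, {2} ms total",
                        reader.GetString(0), reader.GetInt64(1), reader.GetInt64(2));
                }
            }
        }
    }
}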

    Read the article

  • Ubuntu 10.04 & IBM DS3524 with FC multipath, inactive path is [failed][faulty] instead of [active][ghost]

    - by Graeme Donaldson
OK, this is my setup:

FC Switches: IBM/Brocade, Switch1 and Switch2, independent fabrics.
Server: IBM x3650 M2, 2x QLogic QLE2460, 1 connected to each FC switch.
Storage: IBM DS3524, 2x controllers with 4x FC ports each, but only 2x connected on each.

+-----------------------------------------------------------------------+
| HBA1                            Server                           HBA2 |
+-----------------------------------------------------------------------+
     |                                                             |
+-----------------------------+             +------------------------------+
|           Switch1           |             |           Switch2            |
+-----------------------------+             +------------------------------+
     |               |                           |                 |
+-----------------------------------+-----------------------------------+
| Contr A, port 3 | Contr A, port 4 | Contr B, port 3 | Contr B, port 4 |
+-----------------------------------+-----------------------------------+
|                                Storage                                |
+-----------------------------------------------------------------------+

My /etc/multipath.conf is from the IBM redbook for the DS3500, except I use a different setting for prio_callout: IBM uses /sbin/mpath_prio_tpc, but according to http://changelogs.ubuntu.com/changelogs/pool/main/m/multipath-tools/multipath-tools_0.4.8-7ubuntu2/changelog, this was renamed to /sbin/mpath_prio_rdac, which I'm using.

devices {
    device {
        #ds3500
        vendor "IBM"
        product "1746 FAStT"
        hardware_handler "1 rdac"
        path_checker rdac
        failback 0
        path_grouping_policy multibus
        prio_callout "/sbin/mpath_prio_rdac /dev/%n"
    }
}
multipaths {
    multipath {
        wwid xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
        alias array07
        path_grouping_policy multibus
        path_checker readsector0
        path_selector "round-robin 0"
        failback "5"
        rr_weight priorities
        no_path_retry "5"
    }
}

The output of multipath -ll with controller A as the preferred path:

root@db06:~# multipath -ll
sdg: checker msg is "directio checker reports path is down"
sdh: checker msg is "directio checker reports path is down"
array07 (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) dm-2 IBM ,1746 FASt
[size=4.9T][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=2][active]
 \_ 5:0:1:0 sdd 8:48  [active][ready]
 \_ 5:0:2:0 sde 8:64  [active][ready]
 \_ 6:0:1:0 sdg 8:96  [failed][faulty]
 \_ 6:0:2:0 sdh 8:112 [failed][faulty]

If I change the preferred path using IBM DS Storage Manager to Controller B, the output swaps accordingly:

root@db06:~# multipath -ll
sdd: checker msg is "directio checker reports path is down"
sde: checker msg is "directio checker reports path is down"
array07 (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) dm-2 IBM ,1746 FASt
[size=4.9T][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=2][active]
 \_ 5:0:1:0 sdd 8:48  [failed][faulty]
 \_ 5:0:2:0 sde 8:64  [failed][faulty]
 \_ 6:0:1:0 sdg 8:96  [active][ready]
 \_ 6:0:2:0 sdh 8:112 [active][ready]

According to IBM, the inactive path should be "[active][ghost]", not "[failed][faulty]".
Despite this, I don't seem to have any I/O issues, but my syslog is being spammed with this every 5 seconds:

Jun  1 15:30:09 db06 multipathd: sdg: directio checker reports path is down
Jun  1 15:30:09 db06 kernel: [ 2350.282065] sd 6:0:2:0: [sdh] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Jun  1 15:30:09 db06 kernel: [ 2350.282071] sd 6:0:2:0: [sdh] Sense Key : Illegal Request [current]
Jun  1 15:30:09 db06 kernel: [ 2350.282076] sd 6:0:2:0: [sdh] <<vendor>> ASC=0x94 ASCQ=0x1ASC=0x94 ASCQ=0x1
Jun  1 15:30:09 db06 kernel: [ 2350.282083] sd 6:0:2:0: [sdh] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
Jun  1 15:30:09 db06 kernel: [ 2350.282092] end_request: I/O error, dev sdh, sector 0
Jun  1 15:30:10 db06 multipathd: sdh: directio checker reports path is down
Jun  1 15:30:14 db06 kernel: [ 2355.312270] sd 6:0:1:0: [sdg] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Jun  1 15:30:14 db06 kernel: [ 2355.312277] sd 6:0:1:0: [sdg] Sense Key : Illegal Request [current]
Jun  1 15:30:14 db06 kernel: [ 2355.312282] sd 6:0:1:0: [sdg] <<vendor>> ASC=0x94 ASCQ=0x1ASC=0x94 ASCQ=0x1
Jun  1 15:30:14 db06 kernel: [ 2355.312290] sd 6:0:1:0: [sdg] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
Jun  1 15:30:14 db06 kernel: [ 2355.312299] end_request: I/O error, dev sdg, sector 0

Does anyone know how I can get the inactive path to show "[active][ghost]" instead of "[failed][faulty]"? I assume that once I get that right, the spam in my syslog will end as well. One final thing worth mentioning: the IBM redbook doc targets SLES 11, so I'm assuming there's something a little different under Ubuntu that I just haven't figured out yet.

Update: As suggested by Mitch, I've tried removing /etc/multipath.conf, and now the output of multipath -ll looks like this:

root@db06:~# multipath -ll
sdg: checker msg is "directio checker reports path is down"
sdh: checker msg is "directio checker reports path is down"
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx dm-1 IBM ,1746 FASt
[size=4.9T][features=0][hwhandler=0]
\_ round-robin 0 [prio=1][active]
 \_ 5:0:2:0 sde 8:64  [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 5:0:1:0 sdd 8:48  [active][ready]
\_ round-robin 0 [prio=0][enabled]
 \_ 6:0:1:0 sdg 8:96  [failed][faulty]
\_ round-robin 0 [prio=0][enabled]
 \_ 6:0:2:0 sdh 8:112 [failed][faulty]

So it's more or less the same, with the same messages in the syslog every 5 seconds as before, but the grouping has changed.

    Read the article

  • Understanding MotionEvent to implement a virtual DPad and Buttons on Android (Multitouch)

    - by Fabio Gomes
I once implemented a DPad in XNA and now I'm trying to port it to Android, but I still don't get how the touch events work in Android; the more I read, the more confused I get. Here is the code I wrote so far. It works, but I guess it will only handle one touch point.

public boolean onTouchEvent(MotionEvent event) {
    if (event.getPointerCount() == 0)
        return true;

    int touchX = -1;
    int touchY = -1;
    pressedDirection = DPadDirection.None;

    int actionCode = event.getAction() & MotionEvent.ACTION_MASK;
    if (actionCode == MotionEvent.ACTION_UP) {
        if (event.getPointerId(0) == idDPad) {
            pressedDirection = DPadDirection.None;
            idDPad = -1;
        }
    } else if (actionCode == MotionEvent.ACTION_DOWN || actionCode == MotionEvent.ACTION_MOVE) {
        touchX = (int)event.getX();
        touchY = (int)event.getY();

        if (rightRect.contains(touchX, touchY))
            pressedDirection = DPadDirection.Right;
        else if (leftRect.contains(touchX, touchY))
            pressedDirection = DPadDirection.Left;
        else if (upRect.contains(touchX, touchY))
            pressedDirection = DPadDirection.Up;
        else if (downRect.contains(touchX, touchY))
            pressedDirection = DPadDirection.Down;

        if (pressedDirection != DPadDirection.None)
            idDPad = event.getPointerId(0);
    }
    return true;
}

The logic is: test if there is a "DOWN" or "MOVED" event; then, if one of these events collides with one of the 4 rectangles of my DPad, I set the pressedDirection variable to the side of the touch event. I then read the DPad's actual pressed direction in my Update() event in another class. The thing I'm not sure about is how to keep track of the touch points. I store the ID of the touch point which generated the direction that is being stored (the last one), so when this ID is released I set the direction to None. But I'm really confused about how to handle this in Android. Here is the code I had in XNA:

public override void Update(GameTime gameTime)
{
    PressedDirection = DpadDirection.None;
    foreach (TouchLocation _touchLocation in TouchPanel.GetState())
    {
        if (_touchLocation.State == TouchLocationState.Released)
        {
            if (_touchLocation.Id == _idDPad)
            {
                PressedDirection = DpadDirection.None;
                _idDPad = -1;
            }
        }
        else if (_touchLocation.State == TouchLocationState.Pressed || _touchLocation.State == TouchLocationState.Moved)
        {
            _intersectRect.X = (int)_touchLocation.Position.X;
            _intersectRect.Y = (int)_touchLocation.Position.Y;
            _intersectRect.Width = 1;
            _intersectRect.Height = 1;

            if (_intersectRect.Intersects(_rightRect))
                PressedDirection = DpadDirection.Right;
            else if (_intersectRect.Intersects(_leftRect))
                PressedDirection = DpadDirection.Left;
            else if (_intersectRect.Intersects(_upRect))
                PressedDirection = DpadDirection.Up;
            else if (_intersectRect.Intersects(_downRect))
                PressedDirection = DpadDirection.Down;

            if (PressedDirection != DpadDirection.None)
            {
                _idDPad = _touchLocation.Id;
                continue;
            }
        }
    }
    base.Update(gameTime);
}

So, first of all: am I doing this correctly? If not, why not? I don't want my DPad to handle multiple directions, but I still didn't get how to handle multiple touch points: is the event called once for every touch point, or do all touch points come in a single call? I still don't get it.

    Read the article

  • Outlook 2010 – My Top 9 features

    - by Daniel Moth
Office 2010 has reached RTM. Here are my favorite Outlook features.

Speed. It is faster than previous versions and hangs much less…

Ignore Conversation (Ctrl+Del). Not interested in a conversation? Click this button on the new ribbon and you'll never receive another message on that thread (they all go to your Deleted folder).

Calendar Preview. When receiving a Meeting Request, before deciding to accept or not you get to see a preview of your calendar for that day and where the new meeting would fit in. See full description on outlook team blog post.

Quick Steps. See full description on outlook team blog post. I have created my own quick steps for filing conversations to folders, various pre-populated reply templates, creating calendar invites and creating TODOs from received emails.

Search Interface. Many of us knew the magic keywords for making smart searches (e.g. from:Name), but it is great to learn many more through the search tools contextual ribbon tab.

Next 7 days. Out of the many enhancements to the Calendar view, my favorite is being able, with a single click, to view the next 7 days – that is now my default view.

MailTips. See full description on outlook team blog post. The ones I particularly like are: when composing a mail to someone that has their Out Of Office reply set, you get to read it before sending the mail (and hence can decide to postpone sending); and when composing a mail to a distribution list, a message informs you of the number of recipients. Hopefully, senders will use that as a clue for narrowing down the recipient list or at least verifying that their mail should indeed be sent to so many people.

"You are not responding to the latest message in this conversation. Click here to open it.". When composing a reply to a conversation and you have not picked the last message to reply to (don't you hate it when people split threads like that?), this is the inline message you see (under the MailTips area), and if you click on the message it opens the last mail in the conversation so you can reply to that.

Rich "Conversation Settings", and in particular "Show Messages from Other Folders". For example, you can see in your inbox not only the message you received but also the reply you sent (it gets pulled in from the Sent folder). Another example: a conversation has been taking place on a distribution list (so your rules filed it to a folder) and they add you on the TO or CC line, so it appears in a different folder; regardless of which folder you open, you are able to see the entire conversation. Note that messages from folders other than the one you are browsing appear in grey text so you can easily spot them. Reading them in one folder obviously marks them as read in the other folder…

If you haven't yet, when are you making the move to Outlook 2010? Comments about this post welcome at the original blog.

    Read the article

  • Silverlight Cream for March 06, 2011 -- #1054

    - by Dave Campbell
    In this Back from the Summit Issue, I am overloaded with posts to choose from. Submittals go first, but I'll eventually catch up... hopefully by MIX :) : Ollie Riches(-2-), Colin Eberhardt, John Papa, Jeremy Likness, Martin Krüger, Joost van Schaik, Karl Shifflett, Michael Crump, Georgi Stoyanov, Yochay Kiriaty, Page Brooks, and Deborah Kurata. Above the Fold: Silverlight: "ClassifiedCabinet: A Quick Start" Georgi Stoyanov WP7: "Easy access to WMAppManifest.xml App properties like version and title" Joost van Schaik Multiple: "Flashcards.Show Version 2 for the Desktop, Browser, and Windows Phone" Yochay Kiriaty Shoutouts: Mohamed Mosallem delivered an online session at the Second Riyadh Online Community Summit: Silverlight 4.0 with SharePoint 2010 John-Daniel Trask posted about a release of a new set of tools released for WP7 development... there's a free trial, so definitely worth a look: Mindscape Phone Elements released! From SilverlightCream.com: WP7Contrib: Trickling data to a bound collection Ollie Riches submitted a couple links... first up is this on a way they found to decrease the load on a data template in WP7 to get under the 90 mb limit and then added their solution to the WP7Contrib lib. WP7Contrib: Why we use SilverlightSerializer instead of DataContractSerializer Ollie Riches' next submittal compares the performance of the SilverlightSerializer & DataContractSerializer on the WP7 platform. MVVM Charting – Binding Multiple Series to a Visiblox Chart Colin Eberhardt sent me this post where he describes binding multiple series to a chart with no code-behind... great long multi-phase tutorial all with source. Silverlight TV 64: Dive into 64bit Support, App Model and Security John Papa has Nick Kramer of the Silverlight team up for his latest Silverlight TV episode, discussing some cool new Silverlight stuff: 64-bit support, multiple windows, etc. Building a Windows Phone 7 Application with UltraLight.mvvm Jeremy Likness has a pre-summit tutorial up on his UltraLight.mvvm project, and how he would use it to build a WP7 app... great to meet you, Jeremy! How to: Storyboard only start with the conspicuousness of the application in the browser window Martin Krüger continues his Storyboard startup solutions with this one about what to do if the Silverlight app is small or simply an island on an html page. Easy access to WMAppManifest.xml App properties like version and title Joost van Schaik posted about the WP7 manifest file and how you can get access to that information at runtime... why you ask? How about version number or title? Be sure to read the helpful hints in the last paragraph too! Mole 2010 Released Karl Shifflett, Josh Smith, and others have released the latest version of Mole... well worth the money in my opinion, if only it worked for Silverlight! (not their fault) Changing the Default Windows Phone 7 Deployment Target In Visual Studio 2010 Michael Crump points out an annoyance with the 2011 WP7 tools update... VS2010 defaults to the device rather than the emulator... and he shows us how to get it pointed back to the emulator! ClassifiedCabinet: A Quick Start Georgi Stoyanov posted a QuickStart to a 'ClassifiedCabinet' control posted on CodePlex... check out the demo first, you'll want to read the article after that. He builds a simple project from scratch using the control. 
Flashcards.Show Version 2 for the Desktop, Browser, and Windows Phone Yochay Kiriaty has a post up about FlashCards.Show version 2 that he worked on with Arik Poznanski and has it now running on the desktop, browser, and WP7, plus you get the source... I've been wanting to write just such an app for WP7, so hey... this saves me some time! A Simple Focus Manager for Jounce Applications Page Brooks has a post up about Jeremy Likness' Jounce... how to set focus to a particular control when a view loads. Silverlight Charting: Formatting the Axis Deborah Kurata is continuing her charting series with this one on setting axis font color and putting the text at an angle... really dresses up the chart! Stay in the 'Light!

    Read the article

  • Silverlight Cream for March 21, 2010 -- #816

    - by Dave Campbell
In this Issue: Michael Washington, John Papa(-2-, -3-, -4-), Jonas Follesø, David Anson, Scott Guthrie, Andrej Tozon, Bill Reiss(-2-), Pete Blois, and Lee. Shoutouts: Frank LaVigne has a Mix10 Session Downloader for us all to use... thanks Frank! Read what Ward Bell has to say about MVVM, Josh Smith’s Way ... it's all good. Robby Ingebretsen posts on his 10 Favorite Open Source Fonts You Can Embed in WPF or Silverlight Mike Harsh posted Slides and Demos from my MIX10 Session. The download link at Drop.io is down for maintenance until Sunday evening, March 21. From SilverlightCream.com: Blend 4: TreeView SelectedItemChanged using MVVM Michael Washington has a post up about doing SelectedItemChanged on a TreeView with MVVM, oh and he's starting out in Blend 4... Silverlight TV 14: Developing for Windows Phone 7 with Silverlight John Papa hit Silverlight TV pretty hard at the beginning of MIX10. This first one is with Mike Harsh talking about WP7. (Hi Mike ... wondered where you'd run off to!), and you can go to the shoutout section to get Mike's session material from MIX as well. Silverlight TV 15: Announcing Silverlight 4 RC at MIX 10 In this next Silverlight TV(15), John Papa and Adam Kinney discuss Silverlight 4RC ... thank goodness it's out, we can all let go of the breath we've been holding in :) Silverlight TV 16: Tim Heuer and Jesse Liberty Talk about Silverlight 4 RC at MIX 10 Silverlight TV 16 has John Papa sharing the spotlight with Jesse Liberty and Tim Heuer ... geez... can you find 3 more knowledgeable Silverlight folks to listen to? No? then go listen to this :) Silverlight TV 17: Build a Twitter Client for Windows Phone 7 with Silverlight The latest Silverlight TV has John Papa bringing Mike Harsh back to produce a Twitter Client for WP7. Simulating multitouch on the Windows Phone 7 Emulator Jonas Follesø has a great post up about simulating multi-touch on WP7 using multiple mice ... yeah, you read that right :) Using IValueConverter to create a grouped list of items simply and flexibly David Anson demonstrates grouping items in a ListBox using IValueConverter. I think I can pretty well guarantee I would NOT have thought of doing this.. :) Building a Windows Phone 7 Twitter Application using Silverlight In the MIX10 first-day keynote, Scott Guthrie did File->New Project and built a WP7 Twitter app. He has that up as a tutorial with all sorts of external links including one to the keynote itself. Named and optional parameters in Silverlight 4 Andrej Tozon delves into the optional parameters that are now available to Silverlight developers... pretty cool stuff. Space Rocks game step 4: Inheriting from Sprite Bill Reiss continues with his game development series with this one on inheriting from the Sprite class and centering objects Space Rocks game step 5: Rotating the ship Bill Reiss's episode 5 is on rotating the ship you setup in episode 4. Don't worry about the transforms, Bill gives it all to us :) Labyrinth Sample for Windows Phone Wow... check out the sample Pete Blois did for the Phone... Silverlight coolness :) PathListBox in SL4 – firstlook Lee has a post up on the PathListBox. I think this is going to catch on quick... it's just too cool not to! Stay in the 'Light!

    Read the article

  • Beware when using .NET's named pipes in a windows forms application

    - by FransBouma
Yesterday a user of our .NET ORM Profiler tool reported that he couldn't get the snapshot-recording-from-code feature working in a Windows Forms application. Snapshot recording in code means you start recording profile data from within the profiled application, and after you're done you save the snapshot as a file which you can open in the profiler UI. When using a console application it worked, but when a Windows Forms application was used, the snapshot was always empty: nothing was recorded. Obviously, I wondered why that was, and debugged a little. Here's an example piece of code to record the snapshot. This piece of code works fine in a console application, but results in an empty snapshot in a Windows Forms application:

var snapshot = new Snapshot();
snapshot.Record();
using(var ctx = new ORMProfilerTestDataContext())
{
    var customers = ctx.Customers.Where(c => c.Country == "USA").ToList();
}
InterceptorCore.Flush();
snapshot.Stop();
string error = string.Empty;
if(!snapshot.IsEmpty)
{
    snapshot.SaveToFile(@"c:\temp\generatortest\test2\blaat.opsnapshot", out error);
}
if(!string.IsNullOrEmpty(error))
{
    Console.WriteLine("Save error: {0}", error);
}

(The Console.WriteLine doesn't do anything in a Windows Forms application, but you get the idea.)

ORM Profiler uses named pipes: the interceptor (referenced and initialized in your application, the application to profile) sends data over the named pipe to a listener, which, when receiving a piece of data, begins reading it asynchronously, and when it is properly read, signals observers that new data has arrived so they can store it in a repository. In this case, the snapshot will be the observer and will store the data in its own repository.

The reason the above code doesn't work in Windows Forms is that Windows Forms is a wrapper around Win32 and its WM_*-message-based system. Named pipes in .NET are wrappers around Windows named pipes, which also work with WM_* messages. Even though we use BeginRead() on the named pipe (which spawns a thread to read the data from the named pipe), nothing is received by the named pipe in the Windows Forms application, because the WM_* messages in the message queue aren't handled until after the method is over: the message pump of a Windows Forms application is serviced by the application's single thread, so it will handle WM_* messages only when the application idles.

The fix is easy, though: add Application.DoEvents(); right before snapshot.Stop(). Application.DoEvents() forces the Windows Forms application to process all WM_* messages in its message queue at that moment: all messages for the named pipe are then handled, the .NET code of the named pipe wrapper will react to that, and the whole process will complete as if nothing happened.

It's not that simple to just say 'why didn't you use a worker thread to create the snapshot here?', because a thread doesn't get its own message pump: the messages would still be posted to the window's message pump. A hidden form would create its own message pump, so the additional thread would also have to create a window to get the WM_* messages of the named pipe posted to a different message pump than the one of the main window.

This WM_* message pain is not something you want to be confronted with when using .NET and its libraries. Unfortunately, the way they're implemented, a lot of APIs are leaky abstractions; they bleed the characteristics of the OS objects they hide through to the .NET code. Be aware of that fact when using them :)
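For clarity, here is the recording snippet again with the described one-line fix applied (the surrounding types are the ORM Profiler API from the example above):

var snapshot = new Snapshot();
snapshot.Record();
using(var ctx = new ORMProfilerTestDataContext())
{
    var customers = ctx.Customers.Where(c => c.Country == "USA").ToList();
}
InterceptorCore.Flush();

// Pump the queued WM_* messages so the named pipe's BeginRead callbacks run
// and the recorded data actually reaches the snapshot.
Application.DoEvents();

snapshot.Stop();
string error = string.Empty;
if(!snapshot.IsEmpty)
{
    snapshot.SaveToFile(@"c:\temp\generatortest\test2\blaat.opsnapshot", out error);
}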

    Read the article

  • Silverlight Cream for May 15, 2010 -- #862

    - by Dave Campbell
    In this Issue: Victor Gaudioso, Antoni Dol(-2-), Brian Genisio, Shawn Wildermuth, Mike Snow, Phil Middlemiss, Pete Brown, Kirupa, Dan Wahlin, Glenn Block, Jeff Prosise, Anoop Madhusudanan, and Adam Kinney. Shoutouts: Victor Gaudioso would like you to Checkout my Interview with Microsoft’s Murray Gordon at MIX 10 Pete Brown announced: Connected Show Podcast #29 With … Me! From SilverlightCream.com: New Silverlight Video Tutorial: How to Create Fast Forward for the MediaElement Victor Gaudioso's latest video tutorial is on creating the ability to fast-forward a MediaElement... check it out in the tutorial player itself! Overlapping TabItems with the Silverlight Toolkit TabControl Antoni Dol has a very cool tutorial up on the Toolkit TabItems control... not only is he overlapping them quite nicely but this is a very cool tutorial... QuoteFloat: Animating TextBlock PlaneProjections for a spiraling effect in Silverlight Antoni Dol also has a Blend tutorial up on animating TextBlock items... run the demo and you'll want to read the rest :) Adventures in MVVM – My ViewModel Base – Silverlight Support! Brian Genisio continues his MVVM tutorials with this update on his ViewModel base using some new C# 4.0 features, and fully supports Silverlight and WPF My Thoughts on the Windows Phone 7 Shawn Wildermuth gives his take on WP7. He included a port of his XBoxGames app to WP7 ... thanks Shawn! Silverlight Tip of the Day #20 – Using Tooltips in Silverlight I figured Mike Snow was going to overrun me with tips since I have missed a couple days, but there's only one! ... and it's on Tooltips. Animating the Silverlight opacity mask Phil Middlemiss has an article at SilverZine describing a Behavior he wrote (and is sharing) that turns a FrameworkElement into an opacity mask for it's parent container... cool demo on the page too. Breaking Apart the Margin Property in Xaml for better Binding Pete Brown dug in on a Twitter message and put some thoughts down about breaking a Margin apart to see about binding to the individual elements. Building a Simple Windows Phone App Kirupa has a 6-part tutorial up on building not-your-typical first WP7 application... all good stuff! Integrating HTML into Silverlight Applications Dan Wahlin has a post up discussing three ways to display HTML inside a Silverlight app. Hello MEF in Silverlight 4 and VB! (with an MVVM Light cameo) Glenn Block has a post up discussing MEF, MVVM, and it's in VB this time... and it's actually a great tutorial top to bottom... all source included of course :) Understanding Input Scope in Silverlight for Windows Phone Jeff Prosise has a good post up on the WP7 SIP and how to set the proper InputScope to get the SIP you want. Thinking about Silverlight ‘desktop’ apps – Creating a Standalone Installer for offline installation (no browser) Anoop Madhusudanan is discussing something that's been floating around for a while... installing Silverlight from, say, a CD or DVD when someone installs your app. He's got some good code, but be sure to read Tim Heuer and Scott Guthrie's comments, and consider digging deeper into that part. Using FluidMoveBehavior to animate grid coordinates in Silverlight Adam Kinney has a cool post up on animating an object using the FluidMotionBehavior of Blend 4... looks great moving across a checkerboard... check out the demo, then grab the code. Stay in the 'Light! 

    Read the article

  • iPad Impressions

    - by Aaron Lazenby
So, I spent some quality time with my new iPad on Saturday. Here are things I like/don't like:

-- Don't like that it has to sync with iTunes before you use it: I was traveling and left my laptop at home thinking I'd use this iPad thing instead. But the first thing it asked me to do is connect it to a laptop. Ugh. Had to borrow my mother-in-law's MacBook Pro just to get the iPad rolling.

-- Like that magazines and newspapers are forever changed: And I think for the better...it's why I bought this thing in the first place. I spent significant time with The New York Times, The Wall Street Journal, Time Magazine and Popular Science on the iPad. Sliding stories around, jumping from section to section, enlarging images = all excellent experiences. Actually prefer the iPad magazine to print, which will require a major shift in editorial strategy, summed up by Popular Science's Mark Jannot in his editor's note "What defines a magazine? Curated expertise--not paper."

-- Don't like the screwy human factors: I actually enjoy the virtual keyboard (although I think I'm in the minority), but you have to hunch over to look down at what you're typing. Bad technology ergonomics have already jacked my body in various ways. The iPad just introduced a new one.

-- Like the multitouch: In fact, it's awesome. Hands down. Probably will have the most lasting impact on the personal computing industry as a whole.

-- Don't like that it's heavy: If you plan to read in bed, you'd better double up on the creatine and curls. Holding this thing up on your own gets pretty uncomfortable.

-- Like the Netflix app: I wanted to watch "The Big Lebowski," so I did. That is all.

-- Don't like that people feel 3G is necessary: For $30 a month? Please. I'm already accustomed to limiting my laptop internet use to readily available free wi-fi. Why do I expect anything different with the iPad? Most anyplace I have time to sit and read/use a computer (cafe, airport, your house, library, etc.) has free wi-fi. I can live without web surfing in your car. That's what the iPhone is for.

-- Don't like that not everyone was ready on day one: I'm looking at you, Facebook. No iPad app for launch? Lame. iPhone apps scaled up to work on the iPad look grainy and cheap. Not a quality befitting this beautiful $700 piece of glass.

Verdict: I'm bringing it to COLLABORATE 08 and seeing if I can go the whole week using only the iPad. If I can trade this thing for my laptop, I know it's a winner. For now, I'm enjoying Popular Science.

    Read the article

  • Multiplayer / Networking options for a 2D game with physics

    - by lahmas
Summary: My 50%-finished 2D sidescroller with Box2D as the physics engine should have multiplayer support in the final version. However, the current code is just a singleplayer game. What should I do now? And more importantly, how should I implement multiplayer and combine it with singleplayer? Is it a bad idea to code the singleplayer mode separately from the multiplayer mode (like Notch did with Minecraft)? The performance in singleplayer should be as good as possible (simulating physics while using a loopback server to implement singleplayer mode would be a problem there).

Full background / questions: I'm working on a relatively large 2D game project in C++, with physics as a core element of it. (I use Box2D for that.) The finished game should have full multiplayer support. However, I made the mistake of not planning the networking part properly and have basically worked on a singleplayer game until now. I thought that multiplayer support could be added to the almost-finished singleplayer game in a relatively easy and clear way, but from what I have read, this is wrong. I even read that a multiplayer game should be programmed as one from the beginning, with the singleplayer mode actually just consisting of hosting an invisible local server and connecting to it via loopback. (I found out that most FPS game engines do it that way; an example would be Source.) So here I am, with my half-finished 2D sidescroller game, and I don't really know how to go on. Simply continuing to work on the singleplayer / client seems useless to me now, as I'd have to recode and refactor even more later.

First, a general question to anybody who has possibly found himself in a situation like this: how should I proceed? Then, the more specific one – I have been trying to find out how I can approach the networking part for my game. Possible solutions:

Invisible / loopback server for singleplayer. This would have the advantage that there basically is no difference between singleplayer and multiplayer mode; not much additional code would be needed. A big disadvantage: performance and other limitations in singleplayer. There would be two physics simulations running, one for the client and one for the loopback server. Even if you work around this by providing a direct path for the data from the loopback server, for example through direct communication between the threads, the singleplayer would be limited. This is a problem because people should be allowed to play around with masses of objects at once. (A sketch of how this option can still keep one code path for both modes follows below.)

Separated singleplayer / multiplayer mode. There would be no server involved in singleplayer mode. I'm not really sure how this would work. But at least I think that there would be a lot of additional work, because all of the singleplayer features would have to be re-implemented or glued onto multiplayer mode.

Multiplayer mode as a module for singleplayer. This is merely a quick thought I had. Multiplayer could consist of a singleplayer game with an additional networking module loaded and connected to a server, which sends and receives data and updates the singleplayer world.

In retrospect, I regret not having planned the multiplayer mode earlier. I'm really stuck at this point and I hope that somebody here is able to help me!
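To illustrate the first option, here is a minimal sketch of the loopback idea (written in C# for brevity and with hypothetical names; the actual project would implement this in C++ against Box2D):

// The gameplay code always talks to an IGameServer; whether that server sits
// behind a socket or in the same process is invisible to it.
public struct PlayerInput { public float MoveX; public bool Jump; }
public struct WorldSnapshot { public float PlayerX; public float PlayerY; }

public interface IGameServer
{
    void SendInput(PlayerInput input);
    WorldSnapshot ReadSnapshot();
}

// Singleplayer: the "loopback server" steps the physics world in-process.
// No serialization and no second simulation, which addresses the performance
// concern raised above.
public sealed class LoopbackServer : IGameServer
{
    private WorldSnapshot state;

    public void SendInput(PlayerInput input)
    {
        // Stand-in for a Box2D world step driven by the input.
        state.PlayerX += input.MoveX;
        if (input.Jump) { state.PlayerY += 1.0f; }
    }

    public WorldSnapshot ReadSnapshot()
    {
        return state;
    }
}

// Multiplayer would supply a second IGameServer implementation that sends the
// same calls over the network; the game loop never needs to know which one it has.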

    Read the article

  • Silverlight Cream for June 08, 2010 -- #877

    - by Dave Campbell
In this Issue: Miroslav Miroslavov, Chris Klug, Beau, Christian Schormann(-2-), Dan Wahlin, Pete Brown, Michael S. Scherotter, Philipp Sumi, Andy Wigley, and Phil Middlemiss. Shoutouts: Mark Tucker set about learning Caliburn, and in the process is writing a Caliburn Book: Chapters 1-3 Jesse Liberty has a great link-laden post up about why we should all be learning/using Blend: Why Developers Should, Must, Do Care About The New Expression Blend be sure to read what he says about WP7 development, however! Charlie Kindel announced an Install problem with the Developer Tools CTP Refresh and the WP7 tools... check this out if you're having problems. John Papa has a good post up on the happenings yesterday: Expression Studio 4 Launch of Blend, SketchFlow, Encoder and More! Erik Mork & Company's latest "This Week in Silverlight" is titled First Drop: Prism v4 – First Drop is Available From SilverlightCream.com: Animated navigation between Pages Miroslav Miroslavov has Part 8 of his "Silverlight in Action" series up, detailing cool things from the CompleteIT site... this one is on Animated navigation between pages. Subtitling videos Chris Klug got a gig adding subtitles to videos for Microsoft (sweet) ... and no, not *that* kind of subtitles... read how he approached the final solution. Silverlight Watermark TextBox I'm not sure we can have too many Watermark TextBoxes, and neither does Beau, who sent me a link to this one... give it a dance and decide. Blend 4: Collaborative SketchFlow Feedback with SharePoint With the new Blend release, Christian Schormann has a post up describing the lashup to Sharepoint for sharing Sketchflow and getting feedback. New Utility, Links, and Tutorials for Path-Based Layout Christian Schormann also has a collection of resources for Path-Based Layouts, including a utility "that lets you apply a whole bunch of position-specific effects without having to write any code"... lots of links to resources here. Tales from the Trenches – Building a Real-World Silverlight Line of Business Application Dan Wahlin draws on his recent experience and lays out some of the fun and pitfalls of building LOB apps in Silverlight... WCF, MVVM, slides, and code included WPF (and Silverlight): Choose your Fonts and Text Rendering Options Wisely Pete Brown has a great post up on using fonts wisely across multiple platforms... lots of info and good discussion in the comments as well. Ball Watch USA Remember the awesome watch Michael S. Scherotter did in Silverlight 1 and then converted to Updated Ball Trainmaster Cannonball Watch to Silverlight 2? Well... there's now a contest underfoot and 8 videos to help you get started... all good stuff, and good luck! ... Michael has a post up about the contest: Enter to Win a Ball Watch by Creating One in Silverlight Announcing Sketchables – Rapid Mockup Creation with SketchFlow By way of Jesse Liberty (http://jesseliberty.com/2010/06/08/why-developers-should-must-do-care-about-the-new-expression-blend/), this is a cool production by Philipp Sumi about a simple mockup framework he's created. Perst - a database for Windows Phone 7 Silverlight I think one of my first comments to Michael Washington back at the MVP Summit 2010 was that we'd need a database engine, and too cool, but we've got one, Andy Wigley discusses Perst in this post... to save you some time, here's the Perst site A Chrome and Glass Theme - Part 7 Phil Middlemiss has part 7 of his great theme-building series up... this time he's giving the accordion control a once-over.
Stay in the 'Light!

    Read the article

< Previous Page | 726 727 728 729 730 731 732 733 734 735 736 737  | Next Page >