Search Results

Search found 25534 results on 1022 pages for 'write powershell'.


  • How to get values of attributes from an XML file using C++?

    - by Reversed
    Need to write some C++ code that reads an XML string, so that if I do something like getValueOfElement("ACTION_ON_CARD") it returns 3, and getValueOfElement("ACTION_ON_ENVELOPE") returns YES. XML string: <ACTION_ON_CARD>3</ACTION_ON_CARD> <ACTION_ON_ENVELOPE>YES</ACTION_ON_ENVELOPE> Any code example would be helpful. Thanks
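    A minimal sketch of one way to do this (an illustrative assumption: plain std::string scanning is enough for fixed, well-formed tags like these; anything more complex deserves a real XML parser, and the getValueOfElement name is taken from the question):

        #include <iostream>
        #include <string>

        // Returns the text between <tag> and </tag>, or "" if the element is absent.
        std::string getValueOfElement(const std::string& xml, const std::string& tag) {
            const std::string open = "<" + tag + ">";
            const std::string close = "</" + tag + ">";
            const std::size_t start = xml.find(open);
            if (start == std::string::npos) return "";
            const std::size_t begin = start + open.size();
            const std::size_t end = xml.find(close, begin);
            if (end == std::string::npos) return "";
            return xml.substr(begin, end - begin);
        }

        int main() {
            const std::string xml = "<ACTION_ON_CARD>3</ACTION_ON_CARD>"
                                    "<ACTION_ON_ENVELOPE>YES</ACTION_ON_ENVELOPE>";
            std::cout << getValueOfElement(xml, "ACTION_ON_CARD") << "\n";      // prints 3
            std::cout << getValueOfElement(xml, "ACTION_ON_ENVELOPE") << "\n";  // prints YES
        }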

    Read the article

  • error LNK2005: xxx already defined in MSVCRT.lib(MSVCR100.dll) C:\something\LIBCMT.lib(setlocal.obj)

    - by volpack
    Hello, I'm using the DCMTK library for reading DICOM files (an image format used in medical image processing). I'm having a problem compiling the DCMTK source code. DCMTK uses some additional external libraries (zlib, tiff, libpng, libxml2, libiconv). I know that all libraries should be built with the same code generation options, and I downloaded compiled versions of these support libraries built with the "Multithreaded DLL" runtime option (/MD). In each project of the DCMTK source code I ensured that the runtime option is "Multithreaded DLL" (/MD). But I'm still getting these errors:

        Error 238 error LNK2005: ___iob_func already defined in MSVCRT.lib(MSVCR100.dll) C:\dcmtk-3.5.4-src\CMakeBinaries\dcmpstat\apps\LIBCMT.lib(_file.obj) dcmp2pgm
        Error 239 error LNK2005: __lock_file already defined in MSVCRT.lib(MSVCR100.dll) C:\dcmtk-3.5.4-src\CMakeBinaries\dcmpstat\apps\LIBCMT.lib(_file.obj) dcmp2pgm
        Error 240 error LNK2005: __unlock_file already defined in MSVCRT.lib(MSVCR100.dll) C:\dcmtk-3.5.4-src\CMakeBinaries\dcmpstat\apps\LIBCMT.lib(_file.obj) dcmp2pgm
        [... roughly thirty further LNK2005 errors for dcmp2pgm, covering CRT symbols such as _exit, _fflush, __errno, _mainCRTStartup, _getenv, _calloc, __stricmp, __open, __read, and __write, each defined in both MSVCRT.lib and LIBCMT.lib ...]
        Error 278 error LNK1169: one or more multiply defined symbols found C:\dcmtk-3.5.4-src\CMakeBinaries\dcmpstat\apps\Release\dcmp2pgm.exe dcmp2pgm
        [... the same block of LNK2005 errors, ending in LNK1169, repeats for the dcmprscp, dcmprscu, dcmpsprt, dsr2html, and dsr2xml targets; the excerpt is truncated mid-log ...]
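    Collisions between MSVCRT.lib and LIBCMT.lib like these are the classic symptom of mixed CRT flavors: at least one input library was built against the static CRT (LIBCMT, /MT) while the rest of the link uses the DLL CRT (MSVCRT, /MD). A hedged sketch of the usual remedies (an assumption about this particular build, not a verified fix):

        // Preferred: find and rebuild the module that is still compiled with /MT,
        // so that every input shares the DLL CRT (/MD) and only one CRT is linked.
        // Stopgap: suppress the static CRT for the affected projects via
        // Project Properties -> Linker -> Input -> Ignore Specific Default Libraries:
        /NODEFAULTLIB:LIBCMT.lib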

    Read the article

  • C compiler producing lightweight executables

    - by samuel
    I'm currently using MSVC for C++, but as I'm switching to C to write a very performance-intensive program (an interpreter), I have to find a fitting C compiler. I've looked at some binaries produced by Turbo C, and even though it's old, they seem pretty straightforward and optimized. Now, I don't know what the best compiler for building an interpreter is, but maybe you can help me. I've considered GCC, but since I don't know much about it, I can't really be sure.

    Read the article

  • How to rewrite a URL with %23 in it?

    - by Jan P.
    I have a (WordPress) blog where, after commenting, users are redirected back to the page with an anchor pointing to their comment. It should look like this: http://example.org/foo-bar/#comment-570630 But somehow I get a lot of 404s in my log files for URLs like this: http://example.org/foo-bar/%23comment-570630 Is there a way to write a .htaccess rewrite rule to fix this? Bonus question: any idea why this happens and what I can do about it?
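    One commonly suggested rule (a sketch under assumptions about the setup, untested here): mod_rewrite matches against the percent-decoded path, so the %23 arrives as a literal #, and the NE (noescape) flag keeps Apache from re-encoding the # in the redirect target.

        RewriteEngine On
        # Redirect the mangled /foo-bar/%23comment-NNN form back to the real anchor URL.
        RewriteRule ^(.+)/\#(comment-\d+)$ /$1/#$2 [NE,R=301,L]

    As for the bonus question: browsers never send the fragment to the server at all, so a request containing %23 almost always means some client, crawler, or link generator percent-encoded the # before following the link.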

    Read the article

  • How LINQ to Objects statements work

    - by rajbk
    This post goes into detail as to how LINQ statements work when querying a collection of objects. This topic assumes you have an understanding of how generics, delegates, implicitly typed variables, lambda expressions, object/collection initializers, extension methods and the yield statement work. I would also recommend you read my previous two posts: Using Delegates in C# Part 1 and Using Delegates in C# Part 2. We will start by writing some methods to filter a collection of data. Assume we have an Employee class like so: 1: public class Employee { 2: public int ID { get; set;} 3: public string FirstName { get; set;} 4: public string LastName {get; set;} 5: public string Country { get; set; } 6: } and a collection of employees like so: 1: var employees = new List<Employee> { 2: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 3: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 4: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 5: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" }, 6: };
    Filtering
    We wish to find all employees that have an even ID. We could start off by writing a method that takes in a list of employees and returns a filtered list of employees with an even ID. 1: static List<Employee> GetEmployeesWithEvenID(List<Employee> employees) { 2: var filteredEmployees = new List<Employee>(); 3: foreach (Employee emp in employees) { 4: if (emp.ID % 2 == 0) { 5: filteredEmployees.Add(emp); 6: } 7: } 8: return filteredEmployees; 9: } The method can be rewritten to return an IEnumerable<Employee> using the yield return keyword. 1: static IEnumerable<Employee> GetEmployeesWithEvenID(IEnumerable<Employee> employees) { 2: foreach (Employee emp in employees) { 3: if (emp.ID % 2 == 0) { 4: yield return emp; 5: } 6: } 7: } We put these together in a console application. 1: using System; 2: using System.Collections.Generic; 3: //No System.Linq 4: 5: public class Program 6: { 7: [STAThread] 8: static void Main(string[] args) 9: { 10: var employees = new List<Employee> { 11: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 12: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 13: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 14: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" }, 15: }; 16: var filteredEmployees = GetEmployeesWithEvenID(employees); 17: 18: foreach (Employee emp in filteredEmployees) { 19: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 20: emp.ID, emp.FirstName, emp.LastName, emp.Country); 21: } 22: 23: Console.ReadLine(); 24: } 25: 26: static IEnumerable<Employee> GetEmployeesWithEvenID(IEnumerable<Employee> employees) { 27: foreach (Employee emp in employees) { 28: if (emp.ID % 2 == 0) { 29: yield return emp; 30: } 31: } 32: } 33: } 34: 35: public class Employee { 36: public int ID { get; set;} 37: public string FirstName { get; set;} 38: public string LastName {get; set;} 39: public string Country { get; set; } 40: } Output: ID 2 First_Name Jim Last_Name Ashlock Country UK ID 4 First_Name Jill Last_Name Anderson Country AUS Our filtering method is too specific. Let us change it so that it is capable of doing different types of filtering, and let's give our method the name Where ;-) We will add another parameter to our Where method.
    This additional parameter will be a delegate with the following declaration: public delegate bool Filter(Employee emp); The idea is that the delegate parameter in our Where method will point to a method that contains the logic to do our filtering, thereby freeing our Where method from any dependency. The method is shown below: 1: static IEnumerable<Employee> Where(IEnumerable<Employee> employees, Filter filter) { 2: foreach (Employee emp in employees) { 3: if (filter(emp)) { 4: yield return emp; 5: } 6: } 7: } Making the change to our app, we create a new instance of the Filter delegate on line 14 with a target set to the method EmployeeHasEvenId. Running the code will produce the same output. 1: public delegate bool Filter(Employee emp); 2: 3: public class Program 4: { 5: [STAThread] 6: static void Main(string[] args) 7: { 8: var employees = new List<Employee> { 9: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 10: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 11: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 12: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 13: }; 14: var filterDelegate = new Filter(EmployeeHasEvenId); 15: var filteredEmployees = Where(employees, filterDelegate); 16: 17: foreach (Employee emp in filteredEmployees) { 18: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 19: emp.ID, emp.FirstName, emp.LastName, emp.Country); 20: } 21: Console.ReadLine(); 22: } 23: 24: static bool EmployeeHasEvenId(Employee emp) { 25: return emp.ID % 2 == 0; 26: } 27: 28: static IEnumerable<Employee> Where(IEnumerable<Employee> employees, Filter filter) { 29: foreach (Employee emp in employees) { 30: if (filter(emp)) { 31: yield return emp; 32: } 33: } 34: } 35: } 36: 37: public class Employee { 38: public int ID { get; set;} 39: public string FirstName { get; set;} 40: public string LastName {get; set;} 41: public string Country { get; set; } 42: } Let's use lambda expressions to inline the contents of the EmployeeHasEvenId method in place of the method reference. The next code snippet shows this change (see line 15). For brevity, the Employee class declaration has been skipped. 1: public delegate bool Filter(Employee emp); 2: 3: public class Program 4: { 5: [STAThread] 6: static void Main(string[] args) 7: { 8: var employees = new List<Employee> { 9: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 10: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 11: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 12: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 13: }; 14: var filterDelegate = new Filter(EmployeeHasEvenId); 15: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 16: 17: foreach (Employee emp in filteredEmployees) { 18: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 19: emp.ID, emp.FirstName, emp.LastName, emp.Country); 20: } 21: Console.ReadLine(); 22: } 23: 24: static bool EmployeeHasEvenId(Employee emp) { 25: return emp.ID % 2 == 0; 26: } 27: 28: static IEnumerable<Employee> Where(IEnumerable<Employee> employees, Filter filter) { 29: foreach (Employee emp in employees) { 30: if (filter(emp)) { 31: yield return emp; 32: } 33: } 34: } 35: } 36: The output displays the same two employees.
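    As an aside (an addition for readers new to the syntax, reusing the Filter delegate from above): the lambda on line 15 is just shorthand for the C# 2.0 anonymous-method syntax, and both forms produce an equivalent delegate.

        // Both assignments yield a Filter delegate with identical behavior;
        // the lambda form is simply terser.
        Filter viaAnonymousMethod = delegate(Employee emp) { return emp.ID % 2 == 0; };
        Filter viaLambda = emp => emp.ID % 2 == 0;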
    Our Where method is too restricted since it works with a collection of Employees only. Let's change it so that it works with any IEnumerable<T>. In addition, you may recall from my previous post that .NET 3.5 comes with a lot of predefined delegates, including public delegate TResult Func<T, TResult>(T arg); We will get rid of our Filter delegate and use the one above instead. We apply these two changes to our code. 1: public class Program 2: { 3: [STAThread] 4: static void Main(string[] args) 5: { 6: var employees = new List<Employee> { 7: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 8: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 9: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 10: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 11: }; 12: 13: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 14: 15: foreach (Employee emp in filteredEmployees) { 16: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 17: emp.ID, emp.FirstName, emp.LastName, emp.Country); 18: } 19: Console.ReadLine(); 20: } 21: 22: static IEnumerable<T> Where<T>(IEnumerable<T> source, Func<T, bool> filter) { 23: foreach (var x in source) { 24: if (filter(x)) { 25: yield return x; 26: } 27: } 28: } 29: } We have successfully implemented a way to filter any IEnumerable<T> based on a filter criterion.
    Projection
    Now let's enumerate over the items in the IEnumerable<Employee> we got from the Where method and copy them into a new IEnumerable<EmployeeFormatted>. The EmployeeFormatted class will only have a FullName and an ID property. 1: public class EmployeeFormatted { 2: public int ID { get; set; } 3: public string FullName {get; set;} 4: } We could "project" our existing IEnumerable<Employee> into a new IEnumerable<EmployeeFormatted> with the help of a new method. We will call this method Select ;-) 1: static IEnumerable<EmployeeFormatted> Select(IEnumerable<Employee> employees) { 2: foreach (var emp in employees) { 3: yield return new EmployeeFormatted { 4: ID = emp.ID, 5: FullName = emp.LastName + ", " + emp.FirstName 6: }; 7: } 8: } The changes are applied to our app.
    1: public class Program 2: { 3: [STAThread] 4: static void Main(string[] args) 5: { 6: var employees = new List<Employee> { 7: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 8: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 9: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 10: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 11: }; 12: 13: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 14: var formattedEmployees = Select(filteredEmployees); 15: 16: foreach (EmployeeFormatted emp in formattedEmployees) { 17: Console.WriteLine("ID {0} Full_Name {1}", 18: emp.ID, emp.FullName); 19: } 20: Console.ReadLine(); 21: } 22: 23: static IEnumerable<T> Where<T>(IEnumerable<T> source, Func<T, bool> filter) { 24: foreach (var x in source) { 25: if (filter(x)) { 26: yield return x; 27: } 28: } 29: } 30: 31: static IEnumerable<EmployeeFormatted> Select(IEnumerable<Employee> employees) { 32: foreach (var emp in employees) { 33: yield return new EmployeeFormatted { 34: ID = emp.ID, 35: FullName = emp.LastName + ", " + emp.FirstName 36: }; 37: } 38: } 39: } 40: 41: public class Employee { 42: public int ID { get; set;} 43: public string FirstName { get; set;} 44: public string LastName {get; set;} 45: public string Country { get; set; } 46: } 47: 48: public class EmployeeFormatted { 49: public int ID { get; set; } 50: public string FullName {get; set;} 51: } Output: ID 2 Full_Name Ashlock, Jim ID 4 Full_Name Anderson, Jill We have successfully selected employees who have an even ID and then shaped our data with the help of the Select method so that the final result is an IEnumerable<EmployeeFormatted>. Let's make our Select method more generic so that the user is given the freedom to shape what the output would look like. We can do this, like before, with lambda expressions. Our Select method is changed to accept a delegate as shown below. TSource will be the type of data that comes in and TResult will be the type the user chooses (the shape of the data) as returned from the selector delegate. 1: 2: static IEnumerable<TResult> Select<TSource, TResult>(IEnumerable<TSource> source, Func<TSource, TResult> selector) { 3: foreach (var x in source) { 4: yield return selector(x); 5: } 6: } We see the new changes to our app. On line 15, we use a lambda expression to specify the shape of the data. In this case the shape will be of type EmployeeFormatted.
    1: 2: public class Program 3: { 4: [STAThread] 5: static void Main(string[] args) 6: { 7: var employees = new List<Employee> { 8: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 9: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 10: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 11: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 12: }; 13: 14: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 15: var formattedEmployees = Select(filteredEmployees, (emp) => 16: new EmployeeFormatted { 17: ID = emp.ID, 18: FullName = emp.LastName + ", " + emp.FirstName 19: }); 20: 21: foreach (EmployeeFormatted emp in formattedEmployees) { 22: Console.WriteLine("ID {0} Full_Name {1}", 23: emp.ID, emp.FullName); 24: } 25: Console.ReadLine(); 26: } 27: 28: static IEnumerable<T> Where<T>(IEnumerable<T> source, Func<T, bool> filter) { 29: foreach (var x in source) { 30: if (filter(x)) { 31: yield return x; 32: } 33: } 34: } 35: 36: static IEnumerable<TResult> Select<TSource, TResult>(IEnumerable<TSource> source, Func<TSource, TResult> selector) { 37: foreach (var x in source) { 38: yield return selector(x); 39: } 40: } 41: } The code outputs the same result as before. On line 14 we filter our data and on line 15 we project our data. What if we wanted to be more expressive and concise? We could combine both lines 14 and 15 into one line as shown below. Assuming you had to perform several operations like this on our collection, you would end up with some very unreadable code! 1: var formattedEmployees = Select(Where(employees, emp => emp.ID % 2 == 0), (emp) => 2: new EmployeeFormatted { 3: ID = emp.ID, 4: FullName = emp.LastName + ", " + emp.FirstName 5: }); A cleaner way to write this would be to give the appearance that the Select and Where methods were part of the IEnumerable<T>. This is exactly what extension methods give us. Extension methods have to be defined in a static class and must be visible at the call site, so let us make public Select and Where extension methods on IEnumerable<T>: 1: public static class MyExtensionMethods { 2: public static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> filter) { 3: foreach (var x in source) { 4: if (filter(x)) { 5: yield return x; 6: } 7: } 8: } 9: 10: public static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector) { 11: foreach (var x in source) { 12: yield return selector(x); 13: } 14: } 15: } The creation of the extension methods makes the syntax much cleaner, as shown below. We can write as many extension methods as we want and keep on chaining them using this technique. 1: var formattedEmployees = employees 2: .Where(emp => emp.ID % 2 == 0) 3: .Select (emp => new EmployeeFormatted { ID = emp.ID, FullName = emp.LastName + ", " + emp.FirstName }); Making these changes and running our code produces the same result.
1: using System; 2: using System.Collections.Generic; 3:  4: public class Program 5: { 6: [STAThread] 7: static void Main(string[] args) 8: { 9: var employees = new List<Employee> { 10: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 11: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 12: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 13: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 14: }; 15:  16: var formattedEmployees = employees 17: .Where(emp => emp.ID % 2 == 0) 18: .Select (emp => 19: new EmployeeFormatted { 20: ID = emp.ID, 21: FullName = emp.LastName + ", " + emp.FirstName 22: } 23: ); 24:  25: foreach (EmployeeFormatted emp in formattedEmployees) { 26: Console.WriteLine("ID {0} Full_Name {1}", 27: emp.ID, emp.FullName); 28: } 29: Console.ReadLine(); 30: } 31: } 32:  33: public static class MyExtensionMethods { 34: static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> filter) { 35: foreach (var x in source) { 36: if (filter(x)) { 37: yield return x; 38: } 39: } 40: } 41: 42: static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector) { 43: foreach (var x in source) { 44: yield return selector(x); 45: } 46: } 47: } 48:  49: public class Employee { 50: public int ID { get; set;} 51: public string FirstName { get; set;} 52: public string LastName {get; set;} 53: public string Country { get; set; } 54: } 55:  56: public class EmployeeFormatted { 57: public int ID { get; set; } 58: public string FullName {get; set;} 59: } Let’s change our code to return a collection of anonymous types and get rid of the EmployeeFormatted type. We see that the code produces the same output. 1: using System; 2: using System.Collections.Generic; 3:  4: public class Program 5: { 6: [STAThread] 7: static void Main(string[] args) 8: { 9: var employees = new List<Employee> { 10: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 11: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 12: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 13: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 14: }; 15:  16: var formattedEmployees = employees 17: .Where(emp => emp.ID % 2 == 0) 18: .Select (emp => 19: new { 20: ID = emp.ID, 21: FullName = emp.LastName + ", " + emp.FirstName 22: } 23: ); 24:  25: foreach (var emp in formattedEmployees) { 26: Console.WriteLine("ID {0} Full_Name {1}", 27: emp.ID, emp.FullName); 28: } 29: Console.ReadLine(); 30: } 31: } 32:  33: public static class MyExtensionMethods { 34: public static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> filter) { 35: foreach (var x in source) { 36: if (filter(x)) { 37: yield return x; 38: } 39: } 40: } 41: 42: public static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector) { 43: foreach (var x in source) { 44: yield return selector(x); 45: } 46: } 47: } 48:  49: public class Employee { 50: public int ID { get; set;} 51: public string FirstName { get; set;} 52: public string LastName {get; set;} 53: public string Country { get; set; } 54: } To be more expressive, C# allows us to write our extension method calls as a query expression. 
    Line 16 can be rewritten as a query expression like so: 1: var formattedEmployees = from emp in employees 2: where emp.ID % 2 == 0 3: select new { 4: ID = emp.ID, 5: FullName = emp.LastName + ", " + emp.FirstName 6: }; When the compiler encounters an expression like the above, it simply rewrites it as calls to our extension methods. So far we have been using our own extension methods. The System.Linq namespace contains several extension methods for objects that implement IEnumerable<T>; you can see a listing of these methods in the Enumerable class in the System.Linq namespace. Let's get rid of our extension methods (which I purposefully wrote to have the same signatures as the ones in the Enumerable class) and use the ones provided in the Enumerable class. Our final code is shown below: 1: using System; 2: using System.Collections.Generic; 3: using System.Linq; //Added 4: 5: public class Program 6: { 7: [STAThread] 8: static void Main(string[] args) 9: { 10: var employees = new List<Employee> { 11: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 12: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 13: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 14: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 15: }; 16: 17: var formattedEmployees = from emp in employees 18: where emp.ID % 2 == 0 19: select new { 20: ID = emp.ID, 21: FullName = emp.LastName + ", " + emp.FirstName 22: }; 23: 24: foreach (var emp in formattedEmployees) { 25: Console.WriteLine("ID {0} Full_Name {1}", 26: emp.ID, emp.FullName); 27: } 28: Console.ReadLine(); 29: } 30: } 31: 32: public class Employee { 33: public int ID { get; set;} 34: public string FirstName { get; set;} 35: public string LastName {get; set;} 36: public string Country { get; set; } 37: } (The now-unused EmployeeFormatted class can simply be deleted.) This post has shown you a basic overview of how LINQ to Objects works by showing how an expression is converted into a sequence of calls to extension methods when working directly with objects. It gets more interesting when working with LINQ to SQL, where an expression tree is constructed - an in-memory data representation of the expression. The C# compiler compiles these expressions into code that builds an expression tree at runtime. The provider can then traverse the expression tree and generate the appropriate SQL query. You can read more about expression trees in this MSDN article.
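    To make that last point concrete, here is a small illustrative sketch (an addition, not from the original post): assigning the very same lambda to an Expression<Func<...>> instead of a Func<...> makes the compiler emit code that builds a traversable expression tree rather than directly executable IL.

        using System;
        using System.Linq.Expressions;

        public class ExpressionTreeDemo
        {
            public static void Main()
            {
                // A delegate: compiled code you can only invoke.
                Func<int, bool> asDelegate = id => id % 2 == 0;

                // An expression tree: an in-memory data structure that a LINQ
                // provider can traverse (and, say, translate into SQL).
                Expression<Func<int, bool>> asTree = id => id % 2 == 0;

                Console.WriteLine(asTree.Body);         // ((id % 2) == 0)
                Console.WriteLine(asDelegate(4));       // True
                Console.WriteLine(asTree.Compile()(4)); // True
            }
        }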

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the test-driven development process. In short, the issue revolved around the fact that it's not enough to have a test red or green; it's also important to have it red or green for the right reasons. While for me it's sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he's right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense; see the rest of this article). This made me think deeply for some days. In the end I found out that the 'right reason' changes in my understanding depending on what development phase I'm in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: the scope of the article slightly shifted from focusing 'only' on the 'right reason' issue to something more general, which you might describe as 'Doing real-world TDD in .NET, with massive use of third-party add-ins'. This is because I feel that there is a more general statement about test-driven development to make: it's high time to speak about the 'How' of TDD, not always only the 'Why'. Much has been said about this, and I myself have also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I'm somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don't want to spend my time exclusively on stating the obvious… So, again, let's say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it's not a description of the real world but more of a mission statement. But nevertheless I'm absolutely sure that in some years this statement will be nothing but a platitude. Side note: some parts of this post read as if I were paid by JetBrains (the manufacturer of the ReSharper add-in – R#), but I swear I'm not. Rather, I think that Visual Studio is just not production-complete without it, and I wouldn't even consider doing professional work without having this add-in installed...
    The three parts of a software component
    Before I go into some details, I should first describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts described below. First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer's brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not.
    At the C# micro-level, the best way that I have found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form; this is specific to the respective programming language. For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part, finally, is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the two other parts are 'only' there to make its production possible, to give it decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how the product is developed; he is only interested in the fact that it is developed as cost-effectively as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer's craftsmanship, and this is what I want to talk about during the remainder of this article…
    An example
    To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here…
    The requirement
    As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question "interface or not" doesn't even come to mind. I need them for my usual workflow, and using them automatically produces highly componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with:

        namespace Calculator
        {
            /// <summary>
            /// Defines a very simple calculator component for demo purposes.
            /// </summary>
            public interface ICalculator
            {
                /// <summary>
                /// Gets the result of the last successful operation.
                /// </summary>
                /// <value>The last result.</value>
                /// <remarks>
                /// Will be <see langword="null" /> before the first successful operation.
                /// </remarks>
                double? LastResult { get; }

            } // interface ICalculator

        } // namespace Calculator

    So, I'm not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here. First, starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. Second, in my understanding the interface definition as a whole is more of a readable requirement document and technical documentation than anything else, so this is at least as much about documentation as about coding; the documentation must completely describe the behavior of the documented element. Third, I normally use an IoC container or some sort of self-written provider-like model in my architecture.
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason than that it is very simple to use. The ‘Red’ (pt. 1) First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test This is basically the executable formulation of (part of) what the interface definition states. Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that; it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more. Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator-derived type: Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long; the only thing left to do manually here is to add the IoC-specific wiring declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have an R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the addition.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this: Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location. This way, a team always has first-class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding things up and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…) Back to our Calculator again: Two more R# clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I will only have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review exactly what values were tested - the complete Gallio HTML report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is covered here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row-based test methods for these kinds of unit tests. The pattern shown above has proved extremely helpful in my development work; I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits of that way of testing: In the course of refining a method, it’s very likely that you will come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need to test against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirement has to be proven.) 
So your test method might look something like this in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio): Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here; it’s trivial enough and brings nothing new (though see the short sketch at the end of this article)… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like this: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if you're done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for your class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precisely. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic about the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can even be the difference between project success and failure… Conclusions The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software; code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go in the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features that solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly due to the fact that software development standards are only beginning to evolve. Historically speaking, the entire software development profession is very young – only at its very beginning – and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks - is just not acceptable and a very bad decision in business terms (though I have seen and heard that quite a few times…). Production of high-quality products requires the use of high-quality tools. This is a platitude that every craftsman knows… The round-trip described here takes me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the ‘product’ manufactured this way is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussion… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The method described above has proved “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!
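    Addendum: for completeness, the ‘trivial’ LastResult test for Add() that I skipped above would, following the same Defined-Input/Expected-Output idiom, look roughly like this (a minimal sketch, assuming the same CalculatorTest fixture and IoC container field as in the other tests):

    [Test, Description("Arguments: operand1, operand2, expectedResult")]
    [Row(1, 1, 2)]
    [Row(0, 0, 0)]
    public void TestAddGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
    {
        // Resolve the component via the IoC container, as in all the other tests.
        ICalculator calculator = container.GetService<ICalculator>();

        calculator.Add(operand1, operand2);

        // LastResult must hold the result of the last successful operation.
        Assert.AreEqual(expectedResult, calculator.LastResult);
    }

    It mirrors the TestSubtractGivesExpectedLastResult method shown earlier; adding further edge cases is again just a matter of another Row attribute.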

    Read the article

  • How to install SharePoint Server 2013 Preview

    - by ybbest
    The Office 2013 and SharePoint Server 2013 Previews were announced yesterday, and as a SharePoint developer I am really excited to learn all the new features and capabilities. Today I will show you how to install the preview. 1. Create a service account called SP2013Install and give this account the Dbcreator and SecurityAdmin roles in SQL Server 2012. 2. You need to run the following script to set the ‘max degree of parallelism’ setting to the required value of 1 in SQL Server 2012 (using sysadmin privileges) before configuring the SharePoint farm. Otherwise, you might get the error ‘This SQL Server instance does not have the required max degree of parallelism setting of 1’. sp_configure 'show advanced options', 1; GO RECONFIGURE WITH OVERRIDE; GO sp_configure 'max degree of parallelism', 1; GO RECONFIGURE WITH OVERRIDE; GO 3. Download the SharePoint preview from here; I am going to install it on Windows Server 2008 R2 with SQL 2012. 4. Click Install software prerequisites; this works fine with an internet connection. (However, if you do not have an internet connection, it is a bit tricky to install Windows Azure AppFabric, as it has to be installed using the prerequisite installer. Your computer might reboot a few times in the process.) 5. After the prerequisites are installed completely, you can then install the Preview. Click Install SharePoint Server and enter the product key you get from the Preview download page. 6. Accept the license terms and click Next. 7. Leave the default path for the file location. 8. You can now start the installation process. 9. After the binary files are installed, you can then configure your farm using the farm configuration wizard. 10. Specify the database server and the install account. 11. Specify the SharePoint farm passphrase. 12. Specify the port number; you should choose your own favorite port number. 13. Choose Create a New Server Farm and click Next. 14. Double-check the settings and click Next to configure the farm install. 15. Finally, your farm is configured successfully and you are now able to go to your Central Admin site http://sp2010:6666/ 16. You should configure the services manually or automate this using PowerShell (if you would like to understand why, you can read the blog post here); however, I will use the wizard to configure them automatically here, as this is a test machine. After the configuration is complete, you will now be able to see your SharePoint site. 17. To start evaluating the Preview, you need to install Visual Studio 2012 RC, Microsoft Office Developer Tools for Visual Studio 2012, SharePoint 2013 Designer Preview, and Office 2013 Preview. References: Download SharePoint Server 2013 Download Microsoft Visio Professional 2013 Preview Install SharePoint 2013 Preview Hardware and software requirements for SharePoint 2013 Preview SharePoint 2013 IT Pro and Developer training materials released Plan for SharePoint 2013 Preview Microsoft Office Developer Tools for Visual Studio 2012 SharePoint 2013 Preview Office365 for the SharePoint 2013 preview SharePoint Designer 2013 Download: Microsoft Office 2013 Preview Language Pack Try Office

    Read the article

  • Customize Team Build 2010 – Part 12: How to debug my custom activities

    In the series the following parts have been published: Part 1: Introduction Part 2: Add arguments and variables Part 3: Use more complex arguments Part 4: Create your own activity Part 5: Increase AssemblyVersion Part 6: Use custom type for an argument Part 7: How is the custom assembly found Part 8: Send information to the build log Part 9: Impersonate activities (run under other credentials) Part 10: Include Version Number in the Build Number Part 11: Speed up opening my build process template Part 12: How to debug my custom activities Part 13: Get control over the Build Output Part 14: Execute a PowerShell script Part 15: Fail a build based on the exit code of a console application Developers are “spoilt” people who expect easy debugging experiences for every technique they work with. So they also expect it when developing custom activities for the build process template. This post describes how you can debug your custom activities without having to develop on the build server itself. Remote debugging prerequisites The prerequisite for these steps is to install the Microsoft Visual Studio Remote Debugging Monitor. You can find information on how to install it at http://msdn.microsoft.com/en-us/library/bt727f1t.aspx. I chose the option to run the remote debugger on the build server from a file share. Debugging symbols prerequisites To be able to start debugging, you need to have the pdb files on the build server together with the assembly. The pdb must have been built with Full Debug Info. Steps In my setup I have a development machine and a build server. To set up the remote debugging, I performed the following steps: Locate on your development machine the folder C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Remote Debugger Create a share for the Remote Debugger folder. Make sure that the share (and the folder) has the correct permissions so the user on the build server has access to the share. On the build server go to the shared “Remote Debugger” folder Start msvsmon.exe, which is located in the folder that represents the platform of the build server. This will open a WinForms application like the one shown below. Go back to your development machine and open the BuildProcess solution. Start the Attach to process command (Ctrl+Alt+P) Type in the Qualifier the name of the build server. In my case the user account that started msvsmon is a different user than the user on my development machine. In that case you have to type the qualifier in the format that is shown in the Remote Debugging Monitor (in my case LOCAL\Administrator@TFSLAB) and confirm it by pressing <Enter> Since the build service is running with other credentials, check the option “Show processes from all users”. Now the Attach to process dialog shows the TFSBuildServiceHost process Set the breakpoint in the activity you want to debug and kick off a build. Be aware that when you attach to the TFSBuildServiceHost, you debug every single build that is run by this Windows service, so make sure you don’t debug the build server that is in production! You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.
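    For orientation, a custom activity you would attach to typically looks something like the following minimal sketch. This is a hypothetical VersionLogger example for illustration, not code from the downloadable solution; the breakpoint from the steps above would go inside its Execute() method:

    using System.Activities;
    using Microsoft.TeamFoundation.Build.Client;
    using Microsoft.TeamFoundation.Build.Workflow.Activities;

    namespace BuildProcess.Activities
    {
        // The BuildActivity attribute tells Team Build where the activity may run.
        [BuildActivity(HostEnvironmentOption.All)]
        public sealed class VersionLogger : CodeActivity
        {
            // Input argument, wired up in the build process template (xaml).
            public InArgument<string> Version { get; set; }

            protected override void Execute(CodeActivityContext context)
            {
                // Set your breakpoint here, then queue a build while attached.
                string version = context.GetValue(this.Version);

                // Writes a message to the build log (see Part 8 of the series).
                context.TrackBuildMessage("Current version: " + version);
            }
        }
    }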

    Read the article

  • Building a Mafia&hellip;TechFest Style

    - by David Hoerster
    It’s been a few months since I last blogged (not that I blog much to begin with), but things have been busy.  We all have a lot going on in our lives, but I’ve had one item that has taken up a surprising amount of time – Pittsburgh TechFest 2012.  After the event, I went through some minutes of the first meetings for TechFest, and I started to think about how it all came together.  I think what inspired me the most about TechFest was how people from various technical communities were able to come together and build and promote a common event.  As a result, I wanted to blog about this to show that people from different communities can work together to build something that benefits all communities.  (Hopefully I've got all my facts straight.)  TechFest started as an idea Eric Kepes and I had when we were planning our next Pittsburgh Code Camp, probably in the summer of 2011.  Our Spring 2011 Code Camp was a little different because we had a great infusion of some folks from the Pittsburgh Agile group (especially with a few speakers from LeanDog).  The line-up was great, but we felt our audience wasn’t as broad as it should have been.  We thought it would be great to somehow attract other user groups around town and have a big, polyglot conference. We started contacting leaders from Pittsburgh’s various user groups.  Eric and I split up the ones that we knew about, and we just started making contacts.  Most of the people we started contacting had never heard of us, nor we of them.  But we all had one thing in common – we ran user groups whose primary goal is educating our members to make them better at what they do. Amazingly, and I say this because I wasn’t sure what to expect, we started getting some interest from the various leaders.  One leader, Greg Akins, is, in my opinion, Pittsburgh’s poster boy for the polyglot programmer.  He’s helped us in the past with .NET Code Camps, is a Java developer (and leader in Pittsburgh’s Java User Group), works with Ruby and I’m sure a handful of other languages.  He helped make some e-introductions to other user group leaders, and the whole thing just started to snowball. Once we realized we had enough interest with the user group leaders, we decided to not have a Fall Code Camp and instead focus on this new entity. Flash-forward to October of 2011.  With the help of Jeremy Jarrell (Pittsburgh Agile leader), I set up a meeting with the leaders of many of Pittsburgh’s technical user groups.  We had representatives from 12 technical user groups (Python, JavaScript, Clojure, Ruby, PittAgile, jQuery, PHP, Perl, SQL, .NET, Java and PowerShell) – 14 people.  We likened it to a scene from a Godfather movie where the heads of all the families come together to make some deal.  As a result, the name “TechFest Mafia” was born and kind of stuck. Over the next 7 months or so, we had our starts and stops.  There were moments where I thought this event would not happen, either because we wouldn’t have the right mix of topics (was I off there!), or enough people would register (OK, I was wrong there, too!), or we wouldn’t find an appropriate venue (hmm…wrong there, too) or enough sponsors to help support the event (wow…not doing so well).  Overall, everything fell into place with a lot of hard work from Eric, Jen, Greg, Jeremy, Sean, Nicholas, Gina and probably a few others that I’m forgetting.  We also had a bit of luck.  
But in the end, the passion we had to put together an event that was truly about making ourselves better at what we do paid off. I’ve never been more excited about a project coming together than I have been with Pittsburgh TechFest 2012.  From the moment the first person arrived at the event to the final minutes of my closing remarks (where I almost lost my voice – I ended up being diagnosed with bronchitis the next day!), it was an awesome event.  I’m glad to have been part of bringing something like this to Pittsburgh…and I’m looking forward to Pittsburgh TechFest 2013.  See you there!

    Read the article

  • SQLAuthority News – Top 5 Latest Microsoft Certifications of 2013 – Guest Post

    - by Pinal Dave
    With the IT job market getting more and more competitive by the day, certifications are a must for anyone who wishes to get a strong foothold in the industry. The Microsoft community comes up with regular updates and enhancements to its existing products to keep up with the rapidly evolving requirements of the ICT industry. We bring you a list of the five latest Microsoft certifications that you must consider acquiring this year. MCSE: SharePoint Learn all about Windows Server 2012 and Microsoft SharePoint 2013, which brings an advanced set of features to the fore in this latest version. It introduces new capabilities for business intelligence, social media, branding, search, identity management, and mobile devices, among other features. Enjoy a great user experience with sharing and collaboration in community forums, within a pixel-perfect SharePoint website. Data connectivity and business intelligence tools allow users to process and access data, analyze reports, and share and collaborate with each other more conveniently. Microsoft Specialist: Microsoft Project 2013 The only project management system that works seamlessly with other Microsoft applications and cloud solutions, MS Project 2013 offers more than meets the eye.  It provides for easier management and monitoring of projects so that users can ensure timely delivery while improving productivity significantly. So keep all your projects on track and collaborate with your team like never before with this enhanced release! This one’s a must for all project managers. MCSE Messaging Another of Microsoft’s gems is its messaging environment, whose latest release is Microsoft Exchange Server 2013. Messaging administrators can take up this training and validate their expertise in Unified Messaging, Exchange Online, PowerShell and Virtualization strategies, through the MCSE Messaging certification in Exchange Server. If you wish to enhance the productivity and data security of your organization while being flexible and extremely efficient, this is the right certification for you. MCSE Communication An enterprise can function optimally on the strength of its information flow and communication systems. With Lync Server 2013, you can introduce a whole new world of unified communications, which consists of audio/video conferencing, dial-in, Persistent Chat, instant chat, and EDGE services, into your organization. Utilize IT to serve and support business objectives by mastering this UC technology with this latest MCSE Communication course on using Microsoft Lync Server 2013. MCSE: SQL Server 2012 BI Platform The decision-making process is largely influenced by the underlying enterprise information used by management for business intelligence. Therefore, a robust business intelligence platform that anchors enterprise IT and transforms it into operational efficiencies is the need of the hour. The SQL Server 2012 BI Platform certification helps professionals implement, manage and maintain a BI database infrastructure effectively. IT professionals with BI skills are highly sought after these days. MCSD: Windows Store Apps A Microsoft Certified Solutions Developer certification in Windows Store Apps validates your potential in designing interactive apps. Learn The Essentials of Developing Windows Store Apps using HTML5 and JavaScript and establish yourself as an ace developer capable of creating fast and fluid Metro style apps for Windows 8 that are accessible on a variety of devices. 
You can also go ahead and learn the Essentials of Developing Windows Store Apps using C# if you’re already familiar with and working in the C# programming language. Hence developers are free to choose their own favorite development stream, which opens doors for them to get ready for the exciting new application development platform of Windows Store apps. Software developers with these skills are in great demand in the industry today. In order to remain competitive in their respective fields, it is imperative that IT personnel update their knowledge on a regular basis. Certifications are a means to achieve this goal. Not considered to be an optional pre-requisite anymore, major IT certifications such as these are now essential to stay afloat in a cut-throat industry where technologies change on a daily basis. This blog is written by Aruneet Anand of Koenig Solutions. Koenig Solutions does training for all of the above courses. For more information, visit the website. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Microsoft Certifications

    Read the article

  • Microsoft hosting free Hyper-V training for VMware Pros

    - by Ryan Roussel
    Microsoft will be hosting free training for virtualization professionals focused on Hyper-V, System Center, and virtualization architecture.  Details are below:   Just one week after Microsoft Management Summit 2011 (MMS), Microsoft Learning will be hosting an exclusive three-day Jump Start class specially tailored for VMware and Microsoft virtualization technology pros.  Registration for “Microsoft Virtualization for VMware Professionals” is open now; the course will be delivered as an online class on March 29-31, 2011 from 10:00am-4:00pm PDT.    The course is COMPLETELY FREE and OPEN TO ANYONE!  Please share with your customers, blog, Tweet, etc. – help us get the word out to strengthen support for Microsoft’s virtualization offerings. What’s the high-level overview? This cutting edge course will feature expert instruction and real-world demonstrations of Hyper-V and brand new releases from System Center Virtual Machine Manager 2012 Beta (many of which will be announced just one week earlier at MMS).  Register Now!   Day 1 will focus on “Platform” (Hyper-V, virtualization architecture, high availability & clustering) 10:00am – 10:30am PDT:  Virtualization 360 Overview 10:30am – 12:00pm:  Microsoft Hyper-V Deployment Options & Architecture 1:00pm – 2:00pm:  Differentiating Microsoft and VMware (terminology, etc.) 2:00pm – 4:00pm:  High Availability & Clustering Day 2 will focus on “Management” (System Center Suite, SCVMM 2012 Beta, Opalis, Private Cloud solutions) 10:00am – 11:00am PDT:  System Center Suite Overview w/ focus on DPM 11:00am – 12:00pm:  Virtual Machine Manager 2012 | Part 1 1:00pm – 1:30pm:  Virtual Machine Manager 2012 | Part 2 1:30pm – 2:30pm:  Automation with System Center Opalis & PowerShell 2:30pm – 4:00pm:  Private Cloud Solutions, Architecture & VMM SSP 2.0 Day 3 will focus on “VDI” (VDI Infrastructure/architecture, v-Alliance, application delivery via VDI) 10:00am – 11:00am PDT:  Virtual Desktop Infrastructure (VDI) Architecture | Part 1 11:00am – 12:00pm:  Virtual Desktop Infrastructure (VDI) Architecture | Part 2 1:00pm – 2:30pm:  v-Alliance Solution Overview 2:30pm – 4:00pm:  Application Delivery for VDI     Every section will be team-taught by two of the most respected authorities on virtualization technologies: Microsoft Technical Evangelist Symon Perriman and leading Hyper-V, VMware, and XEN infrastructure consultant, Corey Hynes. Who is the target audience for this training? Suggested prerequisite skills include real-world experience with Windows Server 2008 R2, virtualization and datacenter management. The course is tailored to these types of roles: · IT Professional · IT Decision Maker · Network Administrators & Architects · Storage/Infrastructure Administrators & Architects How do I register and learn more about this great training opportunity? · Register: Visit the Registration Page and sign up for all three sessions · Blog: Learn more from the Microsoft Learning Blog · Twitter: Here are a few posts you can retweet: o Mar. 29-31 "Microsoft #Virtualization for VMware Pros" @SymonPerriman Corey Hynes http://bit.ly/JS-Hyper-V @MSLearning #Hyper-V o @SysCtrOpalis Mar. 29-31 "Microsoft #Virtualization for VMware Pros" @SymonPerriman Corey Hynes http://bit.ly/JS-Hyper-V #Hyper-V o Learn all the cool new features in Hyper-V & System Center 2012! SCVMM, Self-Service Portal 2.0, http://bit.ly/JS-Hyper-V #Hyper-V #Opalis What is a “Jump Start” course? A “Jump Start” course is “team-taught” by two expert instructors in an engaging radio talk show style format. 
The idea is to deliver readiness training on strategic and emerging technologies that drive awareness at scale before Microsoft Learning develops mainstream Microsoft Official Courses (MOC) that map to certifications.  All sessions are professionally recorded and distributed through MS Showcase, Channel 9, Zune Marketplace and iTunes for broader reach.

    Read the article

  • Old School Wizardry Tip: Batch File Comments

    - by jkauffman
    Johnny, the Endangered Keyboard-Driven Windows User Some of my proudest, obscure Windows tricks are losing their relevance. I know I’m not alone. Keyboard shortcuts are going the way of the dodo. I used to induce fearful awe by slapping Ctrl+Shift+Esc in front of the lowly, pedestrian Windows users. No windows key on the keyboard? No problem: Ctrl+Esc. No menu key on the keyboard: Shift+F10. I am also firmly planted in the habit of closing windows with the Alt+Space menu (Alt+Space, C); and I harbor a brooding, slow-growing list of programs that fail to support this correctly (that means you, Paint.NET). Every time a new version of windows comes out, the support for some of these minor time-saving habits gets pared out. Will I complain publicly? Nope, I know my old ways should be axed to conserve precious design energy. In fact, I disapprove of fierce un-intuitiveness for the sake of alleged productivity. Like vim, for example. If you approach a program after being away for 5 years, having to recall encyclopedic knowledge is a flaw. The RTFM disciples have lost. Anyway, some of the items in my arsenal of goofy time-saving tricks are still relevant today. I wanted to draw attention to one that’s stood the test of time. Remember Batch Files? Yes, it’s true, batch files are fading faster than the world of print. But they're not dead yet. I still run into some situations where I opt to use batch files. They are still relevant for build processes, or just various development workflow tools. Sure, there’s PowerShell, but there’s that stupid Set-ExecutionPolicy speed bump standing in your way; can you really spare the time to A) hunt down that setting on all machines affected and/or B) make futile efforts to convince your coworkers/boss that the hassle was worth it? When possible, I prefer the batch file wild card. And whenever I return to batch files, I end up researching some of the unintuitive aspects such as parameters, quote handling, and ERRORLEVEL. But I never have to remember to use “REM” for comment lines, because there’s a cleaner way to do them! Double Colon For Eye-Friendly Comments Here is a very simple batch file, with pretty much minimal content: @ECHO OFF SETLOCAL REM This is a comment ECHO This batch file doesn’t do much If you code on a daily basis, this may be more suitable to your eyes: @ECHO OFF SETLOCAL :: This is a comment ECHO This batch file doesn’t do much Works great! I imagine I find it preferable due to the similarity to comments in other situations: // or ; or # I often make visual pseudo-line breaks in my code, and this colon-based syntax works wonders: @ECHO OFF SETLOCAL :: Do stuff ECHO Doing Stuff :::::::::::::::::::::::::::: :: Do more stuff ECHO This batch file doesn’t do much Not only is it more readable, but there’s a slight performance benefit. The batch file engine sees this as an invalid line label and immediately reads the following line. Use that fact to your advantage if this trick leads you into heated nerd debate. Two Pitfalls to Avoid Be aware that there are a couple of situations where this hack will fail you. It most likely won’t be a problem unless you’re getting really sophisticated with your batch files. Pitfall #1: Inline comments @ECHO OFF SETLOCAL IF EXIST C:\SomeFile.txt GOTO END ::This will fail :END Unfortunately, this fails. You can only have whitespace to the left of your comments. 
Pitfall #2: Code Blocks @ECHO OFF SETLOCAL IF EXIST C:\SomeFile.txt (         :: This will fail         ECHO HELLO ) Code blocks, such as if statements and for loops, cannot contain these comments. This is ultimately due to the fact that entire code blocks are processed as a single line. I originally learned this from Rob van der Woude’s site. He goes into more depth about the behavior of the pitfalls as well, if you are interested in further details. I hope this trick earns you serious geek rep!

    Read the article

  • Windows Azure SDK 1.3 addresses early adopter feedback

    - by Eric Nelson
    At the end of November 2010 we released a new version of the Windows Azure SDK which contains many new features driven by the great feedback of early adopters, plus a shiny new portal. New Portal implemented in Silverlight: The new portal is implemented using Silverlight and replaces the (IMHO rather clunky) original HTML + JavaScript portal. It is 100% better, although it does still have a few bugs. Enjoy! P.S. You can, if you wish, still use the old portal:   New runtime functionality: The following functionality is now generally available through the Windows Azure SDK and Windows Azure Tools for Visual Studio and the new Windows Azure Management Portal: Elevated Privileges and Full IIS. You can now run a portion or all of your code in Web and Worker roles with elevated administrator privileges. The Web role now provides Full IIS functionality, which enables multiple IIS sites per Web role and the ability to install IIS modules. Remote Desktop functionality enables you to connect to a running instance of your application or service in order to monitor activity and troubleshoot common problems. Windows Server 2008 R2 Roles: Windows Azure now supports Windows Server 2008 R2 in its Web, worker and VM roles. This new support enables you to take advantage of the full range of Windows Server 2008 R2 features such as IIS 7.5, AppLocker, and enhanced command-line and automated management using PowerShell Version 2.0. New runtime functionality – in beta: Windows Azure Virtual Machine Role: Support for more types of new and existing Windows applications will soon be available with the introduction of the Virtual Machine (VM) role. You can move more existing applications to Windows Azure, reducing the need to make costly code or deployment changes. Extra Small Windows Azure Instance, which is priced at $0.05 per compute hour, provides developers with a cost-effective training and development environment. Developers can also use the Extra Small instance to prototype cloud solutions at a lower cost. Windows Azure Connect (formerly Project Sydney), which enables a simple and easy-to-manage mechanism to set up IP-based network connectivity between on-premises and Windows Azure resources, is the first Windows Azure Virtual Network feature that we’re making available as a CTP. You can sign up for any of the betas via the Windows Azure Management Portal. Improved processes and simplified operations New portal! (see above) Access to new diagnostic information, including the ability to click on a role to see role type, deployment time and last reboot time A new sign-up process that dramatically reduces the number of steps needed to sign up for Windows Azure. New scenario-based Windows Azure Platform forums to help answer questions and share knowledge more efficiently. Multiple Service Administrators: Windows Azure now allows multiple Windows Live IDs to have administrator privileges on the same Windows Azure account. The objective is to make it easy for a team to work on the same Windows Azure account while using their individual Windows Live IDs.   Related Links Please also let us know through Microsoft Platform Ready if and when you intend to build an application using the Windows Azure Platform. Or indeed if you already have (well done). You will get access to some great benefits if you do (more on that in a future post). It also really helps us better understand the demand out there, which directly impacts how we will plan the next six months of activities around the Windows Azure Platform. 
Visit Microsoft Platform Ready to tell us about your plans for your applications. UK based? Interested in the Windows Azure Platform? Join http://ukazure.ning.com Get started with the Windows Azure Platform: http://bit.ly/startazure

    Read the article

  • SQL SERVER – Last Two Days to Get FREE Book – Joes 2 Pros Certification 70-433

    - by pinaldave
    Earlier this week we announced that we will be giving away a FREE SQL Wait Stats book to everybody who gets the SQL Server Joes 2 Pros Combo Kit. We had a fantastic response to the contest – truly an overwhelming response to the offer. We knew there would be a great response, but we want to honestly say thank you to all of you for making it happen. Rick and I want to make sure that we express our special thanks to all of you who are reading our books. The offer is still on and there are two more days to avail of this offer. We want to make sure that everybody who buys our best-selling combo kit receives our other most popular book, SQL Wait Stats. Please read all the details of the offer here. The books are great resources for anyone who wants to learn SQL Server from the fundamentals and eventually go on the certification path of 70-433. Exam 70-433 covers the following important subjects, and the book covers their fundamentals. Whether you are taking the exam or not – this book helps every SQL developer learn the subject from the fundamentals.  Create and alter tables. Create and alter views. Create and alter indexes. Create and modify constraints. Implement data types. Implement partitioning solutions. Create and alter stored procedures. Create and alter user-defined functions (UDFs). Create and alter DML triggers. Create and alter DDL triggers. Create and deploy CLR-based objects. Implement error handling. Manage transactions. Query data by using SELECT statements. Modify data by using INSERT, UPDATE, and DELETE statements. Return data by using the OUTPUT clause. Modify data by using MERGE statements. Implement aggregate queries. Combine datasets (INTERSECT, EXCEPT). Implement subqueries. Implement CTE (common table expression) queries. Apply ranking functions. Control execution plans. Manage international considerations. Integrate Database Mail. Implement full-text search. Implement scripts by using Windows PowerShell and SQL Server Management Objects (SMOs). Implement Service Broker solutions. Track data changes (change data capture). Retrieve relational data as XML. Transform XML data into relational data. Manage XML data. Capture execution plans. Collect output from the Database Engine Tuning Advisor. Collect information from system metadata. Availability of Book USA - Amazon | India - Flipkart | Indiaplaza Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • What 5 things should SQL Server get rid of?

    - by BuckWoody
    I’ve been “tagged” by my friend Paul Randal. It’s a high-tech way of making someone else do what you want, but since it’s Paul, well, I guess I’m OK with that. He’s asked in his recent blog entry “What five things would you get rid of in SQL Server if you were in charge?” This is, of course, a delicate issue. After all, I work at Microsoft, so anything I say here might be taken as a criticism that would require action – but of course it really doesn’t. Interestingly, you may have more to do with what goes into SQL Server than I did even as a Program Manager where I “owned” a feature. Unlike many places I’ve worked, Microsoft really does drive its products by what its users want – not every time, and not every user request, mind you, but overall I think we hit the mark pretty well. So, with all of that said, and of course the obligatory statement of “these are my own opinions, and have nothing to do with any official Microsoft position in any way, and do not reflect the opinions of other Microsoft employees or management”, here goes. 1. Get rid of SQL Server Management Studio Does that surprise you? After all, when I was a Program Manager, I actually owned the general architecture for SSMS. But those on my team probably would have been able to guess this one for you. I think that SSMS is a fine development tool. But I think it does a less good job of managing a system. It’s based on Visual Studio, probably one of the best development IDEs around. And when I develop code, I really like it. But for a monitoring/management tool, I prefer a snap-in to the Microsoft Management Console (MMC). I know, the old one (prior to 3.0) was kludgy, difficult to use and program in. But that’s changed. Of course, when I bring this up, you’ll probably immediately say “But I don’t have that in XP.” And that’s one of the reasons we didn’t go there. (But I still don’t like SSMS for management.) 2. ShrinkDB I think this discussion has been done to death, so I’ll leave it at that. 3. SQL Server Agent Does that one surprise you as well? In my mind, since we ALWAYS ride on Windows, just use the task scheduler there, along with PowerShell. You could log the results in Windows logs, files, back into SQL Server, whatever. It’s just a complexity we don’t need in SQL Server. 4. SQL Server Error Logs We have a full logging setup in Windows. The Windows logs are well done, easy to understand and ubiquitous. We should just use them. 5. Several SKUs I won’t say which, but we have a few SKUs of SQL Server that need to go. And we need to figure out how to help you understand clearly when you need to go to Enterprise or Data Center.  Most folks are trying to push Standard edition to do things it isn’t designed to do, and then they think SQL Server won’t scale. I think we can do a better job of showing you where Standard Edition will hit the wall, and I think with fewer choices it would be pretty simple for you to pick the right one. Well, once again I’ve probably puzzled some folks and angered others. I think my work here is done. :) Back to you, Paul. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • Introducing the Metro User Interface on Windows 2012

    - by andywe
    Although I am a big fan of using PowerShell to do many of my server operations, that aspect is well covered by those far more knowledgeable than I, and there is vast information around the web on it already. The new Metro interface, though, and getting around both Windows 8 and Windows Server 2012, is relatively new, even for those who ran the previews. What is this? A blank Desktop! Where did the start button go? Well, it is still there... sort of. It is hidden, and acts like an auto-hidden component that appears only when the mouse is hovered over the lower left corner of the screen. Those familiar with Gnome or OSX can relate this to the "Hot Corners" functions. To get to the start button, hover your mouse in the very left corner of the task bar. Let it sit there a moment, and a small blue square with colored tiles in it, called Start, will appear. Click it. I clicked it and now I have all the tiles... What is this? Welcome to the Metro interface. This is a much more modern look, and although it at first seems weird and cumbersome, I have actually found that it is a bit more extensible, allowing greater organization and customization than the older Explorer desktop. If you look closely, you'll see each box represents either a program or a program group. A few basics about using the Start view: first and foremost, a right mouse click will bring up a bar on the bottom, with an icon towards the right. Notice it is titled “All Apps”. An even easier way in many places is to hover your mouse in the exact opposite corner, in the upper right. A sidebar will open and expose what used to be a widget bar (remember Vista?), with options for Search, Start, and Settings. OK, great, but where is everything? It’s all there… Click the All Apps icon. Look better? Notice the scroll bar at the bottom. Move it right… your desktop is sized to your content, so you can expose a smaller or larger number of programs. Each icon can be secondary-clicked (a right mouse click for most of us), and an options bar at the bottom, rather than the old small context menu, opens with some very familiar options. Notice that the top of the Windows Explorer window has some new features. You still have your right mouse click functions, but since the shortcuts for these items already exist... just copy them. There are many ways, but here is a long way to show you more of the interface. 1. Right mouse click a program icon, and select the Open File Location option. 2. The trusty file manager opens… but if you look closely at the top edge of the window, you’ll see a nifty enhancement: an orange-colored box titled Shortcut Tools and another lavender box titled Application Tools. Each of these adds options at the top of the file manager window to make selection easy. Of course, you can still secondary-click an item in the listing window too. 3. Click Shortcut Tools, right click your app shortcut and copy it. Then simply paste it onto the desktop outside the File Explorer window. Also note some of the newer features: the large icons up top, below the menu, cover many common operations, and the options change as you select each menu item. Well, that’s it for this installment. I hope this helps you out.

    Read the article

  • Configuring Jenkins for running with BitBucket

    - by Claus
    I'm trying to set up Jenkins on my mac mini in order to pull my iOS project source code from BitBucket and build it automatically. I've already gone through the major well-known problems: generating the ssh keys, uploading them to BitBucket, performing an ssh connection from the console to add the host to the known-hosts list (you can find all my adventures here and here). Now, there are 3 users on my system: A, B and Shared. When I installed Jenkins it automatically placed itself in Shared, but I generated the ssh keys with user A. So just to be clear: in A's home directory there is an .ssh directory with the public and private keys. When I try to run my Jenkins job I get this error message: Started by user anonymous Building in workspace /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace Checkout:workspace / /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace - hudson.remoting.LocalChannel@625cb0bb Using strategy: Default Cloning the remote Git repository Cloning repository [email protected]:myuser/myproject.git git --version git version 1.8.0 ERROR: Error cloning remote repo 'origin' : Could not clone [email protected]:myuser/myproject.git hudson.plugins.git.GitException: Could not clone [email protected]:myuser/myproject.git at hudson.plugins.git.GitAPI.clone(GitAPI.java:271) at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1036) at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:978) at hudson.FilePath.act(FilePath.java:851) at hudson.FilePath.act(FilePath.java:824) at hudson.plugins.git.GitSCM.determineRevisionToBuild(GitSCM.java:978) at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1134) at hudson.model.AbstractProject.checkout(AbstractProject.java:1325) at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:676) at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88) at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:581) at hudson.model.Run.execute(Run.java:1516) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46) at hudson.model.ResourceController.execute(ResourceController.java:88) at hudson.model.Executor.run(Executor.java:236) Caused by: hudson.plugins.git.GitException: Command "/usr/local/git/bin/git clone --progress -o origin [email protected]:myuser/myproject.git /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace" returned status code 128: stdout: Cloning into '/Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace'... stderr: Host key verification failed. fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:885) at hudson.plugins.git.GitAPI.access$000(GitAPI.java:40) at hudson.plugins.git.GitAPI$1.invoke(GitAPI.java:267) at hudson.plugins.git.GitAPI$1.invoke(GitAPI.java:246) at hudson.FilePath.act(FilePath.java:851) at hudson.FilePath.act(FilePath.java:824) at hudson.plugins.git.GitAPI.clone(GitAPI.java:246) ... 
14 more Trying next repository ERROR: Could not clone repository FATAL: Could not clone hudson.plugins.git.GitException: Could not clone at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1048) at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:978) at hudson.FilePath.act(FilePath.java:851) at hudson.FilePath.act(FilePath.java:824) at hudson.plugins.git.GitSCM.determineRevisionToBuild(GitSCM.java:978) at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1134) at hudson.model.AbstractProject.checkout(AbstractProject.java:1325) at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:676) at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88) at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:581) at hudson.model.Run.execute(Run.java:1516) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46) at hudson.model.ResourceController.execute(ResourceController.java:88) at hudson.model.Executor.run(Executor.java:236) As you can see, it fails when Hudson tries to run the GIT command. The odd thing is that if I run /usr/local/git/bin/git clone --progress -o origin [email protected]:myuser/myproject.git /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace in my console, it works fine (after fixing a small problem related to the folder write permissions with chmod). I found a post reporting a similar error which names a number of possible options, but I'm not sure how to correctly perform these operations on my console. It looks like Jenkins is trying to run a command as a user which doesn't have permission to retrieve the appropriate keys from my .ssh directory. Not really sure. Maybe this output can help: MacMini:~ myuser$ ps axu | grep "/jenkins" myuser 11660 0.0 4.6 2918124 97096 ?? S 6:59pm 1:05.63 /usr/bin/java -jar /Users/myuser/Library/Caches/org.jenkins-ci.jenkins/jenkins.war jenkins 9896 0.0 9.0 2939824 188552 ?? Ss 4:06pm 17:55.91 /usr/bin/java -jar /Applications/Jenkins/jenkins.war myuser 11930 0.0 0.0 2432768 588 s000 S+ 10:28am 0:00.00 grep /jenkins MacMini:~ myuser$ ps axu | grep tomcat myuser 11932 0.0 0.0 2432768 588 s000 S+ 10:28am 0:00.00 grep tomcat MacMini:~ myuser$ I really hope to fix this problem, because I would like to write a very detailed tutorial with all the information I found scattered around the web.
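    A hedged aside: the "Host key verification failed" line in that output usually means the account the Jenkins daemon runs as (the "jenkins" user visible in the ps listing, not user A) has no known_hosts entry or ssh keys of its own. A minimal sketch of one way to fix that, assuming the daemon account is named jenkins and its home directory is /Users/Shared/Jenkins (both of those are assumptions):

        # Test the connection as the Jenkins user, so bitbucket.org lands in
        # that user's ~/.ssh/known_hosts rather than in user A's
        sudo -u jenkins -H ssh -T git@bitbucket.org

        # If the keys only exist under user A, copy them over and fix permissions
        sudo cp /Users/A/.ssh/id_rsa /Users/Shared/Jenkins/.ssh/
        sudo chown jenkins /Users/Shared/Jenkins/.ssh/id_rsa
        sudo chmod 600 /Users/Shared/Jenkins/.ssh/id_rsa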

    Read the article

  • How to setup linux permissions for the WWW folder?

    - by Xeoncross
    Updated Summary The /var/www directory is owned by root:root, which means that no one can use it and it's entirely useless. Since we all want a web server that actually works (and no-one should be logging in as "root"), we need to fix this. Only two entities need access. First, PHP/Perl/Ruby/Python all need access to the folders and files, since they create many of them (e.g. /uploads/); these scripting languages should be running under nginx or apache (or even some other thing like FastCGI for PHP). Second, the developers. How do they get access? I know that someone, somewhere has done this before. With however-many billions of websites out there you would think that there would be more information on this topic. I know that 777 is full read/write/execute permission for owner/group/other, so that doesn't seem to be what's needed, as it leaves random users full permissions. What permissions need to be used on /var/www so that source control like git or svn, users in a group like "websites" (or even added to "www-data"), servers like apache or lighttpd, and PHP/Perl/Ruby can all read, create, and run files (and directories) there? If I'm correct, Ruby and PHP scripts are not "executed" directly - they are passed to an interpreter. So there is no need for execute permission on files in /var/www...? Therefore, it seems like the correct permission would be chmod -R 1660, which would: make all files shareable by these four entities; make all files non-executable by mistake; block everyone else from the directory entirely; and set the permission mode to "sticky" for all future files. Is this correct? Update: I just realized that files and directories might need different permissions - I was talking about files above, so I'm not sure what the directory permissions would need to be. Update 2: The folder structure of /var/www changes drastically, as one of the four entities above is always adding (and sometimes removing) folders and sub-folders many levels deep. They also create and remove files that the other 3 entities might need read/write access to. Therefore, the permissions need to do the four things above for both files and directories. Since none of them should need execute permission (see the question about ruby/php above), I would assume that rw-rw-r-- permission would be all that is needed and completely safe, since these four entities are run by trusted personnel (see #2) and all other users on the system only have read access. Update 3: This is for personal development machines and private company servers. No random "web customers" like a shared host. Update 4: This article by slicehost seems to be the best at explaining what is needed to set up permissions for your www folder. However, I'm not sure what user or group apache/nginx with PHP OR svn/git run as, and how to change them. Update 5: I have (I think) finally found a way to get this all to work (answer below). However, I don't know if this is the correct and SECURE way to do this. Therefore I have started a bounty. The person that has the best method of securing and managing the www directory wins.
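    One commonly used pattern, sketched under the assumptions in the question (a "websites" group, a Debian-style "www-data" server user, and "alice" as a placeholder developer account): put the developers and the web-server user in one shared group and mark the directories setgid so new files inherit that group.

        # Put developers and the web-server user in one shared group
        sudo groupadd websites
        sudo usermod -a -G websites alice       # placeholder developer account
        sudo usermod -a -G websites www-data    # apache/nginx user on Debian-like systems

        # root:websites owns the tree; the leading 2 (setgid) on directories
        # makes new files and subdirectories inherit the websites group
        sudo chown -R root:websites /var/www
        sudo find /var/www -type d -exec chmod 2775 {} \;
        sudo find /var/www -type f -exec chmod 0664 {} \;

    This keeps files group-writable and non-executable, which matches the rw-rw-r-- reasoning above; whether it is secure enough for a given server is exactly the open question.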

    Read the article

  • Disk operations freeze Debian

    - by Grzenio
    Hi, I have just installed Debian testing on my new desktop and I am not very happy with the performance: when I perform a disk-intensive operation, e.g. upgrading packages in the system, everything seems to freeze; for instance, changing tabs in Iceweasel takes 3 seconds. I also run Debian on my 3-year-old Thinkpad X60 ultra-portable, and I don't have these issues there (even though every single spec of the laptop is much worse than the desktop's). I am using the default packaged kernel and scripts. I ran hdparm -t /dev/sda1 and got around 96 MB/s, which is expected. What else can I try to make it work better? EDIT: grzes:/home/ga# hdparm -i /dev/sda /dev/sda: Model=WDC WD15EARS-00Z5B1, FwRev=80.00A80, SerialNo=WD-WMAVU1362357 Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq } RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=50 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=16 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=2930277168 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120} PIO modes: pio0 pio3 pio4 DMA modes: mdma0 mdma1 mdma2 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6 AdvancedPM=no WriteCache=enabled Drive conforms to: Unspecified: ATA/ATAPI-1,2,3,4,5,6,7 * signifies the current active mode EDIT2: Even my wife said "on this new computer I can't do anything when I copy the photos from the camera and it's much worse than on the old one". So it must be serious. EDIT3: Updated to 2.6.32, but still no improvement. EDIT4: I forgot to mention that the new disk is ext4; the old one was ext3. EDIT5: Still not solved. I have a P43 ASUS P5QL-E board. Lines from dmesg that seem relevant: [ 0.370850] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253) [ 0.370852] io scheduler noop registered [ 0.370853] io scheduler anticipatory registered [ 0.370854] io scheduler deadline registered [ 0.370876] io scheduler cfq registered (default) ... [ 0.908233] ata_piix 0000:00:1f.2: version 2.13 [ 0.908243] ata_piix 0000:00:1f.2: PCI INT B -> GSI 19 (level, low) -> IRQ 19 [ 0.908246] ata_piix 0000:00:1f.2: MAP [ P0 P2 P1 P3 ] [ 0.908275] ata_piix 0000:00:1f.2: setting latency timer to 64 [ 0.908316] scsi0 : ata_piix [ 0.908374] scsi1 : ata_piix [ 0.909180] ata1: SATA max UDMA/133 cmd 0xa000 ctl 0x9c00 bmdma 0x9480 irq 19 [ 0.909183] ata2: SATA max UDMA/133 cmd 0x9880 ctl 0x9800 bmdma 0x9488 irq 19 [ 0.909199] ata_piix 0000:00:1f.5: PCI INT B -> GSI 19 (level, low) -> IRQ 19 [ 0.909202] ata_piix 0000:00:1f.5: MAP [ P0 -- P1 -- ] [ 0.909228] ata_piix 0000:00:1f.5: setting latency timer to 64 [ 0.909279] scsi2 : ata_piix [ 0.909326] scsi3 : ata_piix [ 0.910021] ata3: SATA max UDMA/133 cmd 0xb000 ctl 0xac00 bmdma 0xa480 irq 19 [ 0.910024] ata4: SATA max UDMA/133 cmd 0xa880 ctl 0xa800 bmdma 0xa488 irq 19 [ 0.915575] FDC 0 is a post-1991 82077 ... [ 1.716062] ata1.00: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [ 1.716074] ata1.01: SATA link down (SStatus 0 SControl 300) [ 1.724318] ata1.00: ATA-8: WDC WD15EARS-00Z5B1, 80.00A80, max UDMA/133 [ 1.724322] ata1.00: 2930277168 sectors, multi 16: LBA48 NCQ (depth 0/32) [ 1.740339] ata1.00: configured for UDMA/133 [ 1.740428] scsi 0:0:0:0: Direct-Access ATA WDC WD15EARS-00Z 80.0 PQ: 0 ANSI: 5 [ 1.746788] scsi 6:0:0:0: CD-ROM ASUS DRW-1608P 1.17 PQ: 0 ANSI: 5 ... 
[ 1.925981] sd 0:0:0:0: [sda] 2930277168 512-byte logical blocks: (1.50 TB/1.36 TiB) [ 1.926005] sd 0:0:0:0: [sda] Write Protect is off [ 1.926007] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 [ 1.926020] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 1.926092] sda:sr0: scsi3-mmc drive: 40x/40x writer cd/rw xa/form2 cdda tray [ 1.931106] Uniform CD-ROM driver Revision: 3.20 [ 1.931191] sr 6:0:0:0: Attached scsi CD-ROM sr0 ... [ 1.941936] sda1 sda2 sda3 sda4 < sda5 sda6 > [ 1.967691] sd 0:0:0:0: [sda] Attached SCSI disk [ 1.970938] sd 0:0:0:0: Attached scsi generic sg0 type 0 [ 1.970959] sr 6:0:0:0: Attached scsi generic sg1 type 5 ... [ 2.500086] EXT4-fs (sda3): mounted filesystem with ordered data mode ... [ 7.150468] EXT4-fs (sda6): mounted filesystem with ordered data mode
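    Not an answer, but a direction worth checking: the WD15EARS is a 4K-sector "Advanced Format" drive, and partitions that do not start on an 8-sector boundary are a well-known cause of miserable write performance on these disks. A quick sketch of what to look at:

        # Print partition start sectors; on a 4K-sector drive each start value
        # should be divisible by 8 (e.g. 2048), otherwise every write pays a
        # read-modify-write penalty
        fdisk -lu /dev/sda

        # Also worth a glance: which I/O scheduler is active (cfq is the
        # default according to the dmesg output above)
        cat /sys/block/sda/queue/scheduler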

    Read the article

  • NTFS Permissions - Access Denied even though Explicit Allow and no Deny

    - by chris613
    I'm hoping someone can help me with this NTFS permissions problem. The short version is that I can't write a new file in F:\SomeDir even though I seem to be granted full permissions via both the "Domain Admins" group and a second unprivileged group. The "Effective Permissions" tab in the explorer permissions UI shows that I have full control, and there are no "Deny"s anywhere in the ACL or anything else that looks unusual. I am logged into the machine over RDP and accessing the disk directly, not through a share. F:\SomeDir>set U USERDNSDOMAIN=THEOFFICE.LOCAL USERDOMAIN=THEOFFICE USERNAME=thisisme USERPROFILE=C:\Users\thisisme F:\SomeDir>icacls . . BUILTIN\Administrators:(I)(F) CREATOR OWNER:(I)(OI)(CI)(IO)(F) THEOFFICE\Domain Admins:(I)(OI)(CI)(F) NT AUTHORITY\SYSTEM:(I)(OI)(CI)(F) BUILTIN\Administrators:(I)(OI)(CI)(IO)(F) BUILTIN\Users:(I)(OI)(CI)(RX) Successfully processed 1 files; Failed processing 0 files F:\SomeDir>net group /domain "Domain Admins" The request will be processed at a domain controller for domain THEOFFICE.local. Group name Domain Admins Comment Designated administrators of the domain Members ------------------------------------------------------------------------------- Administrator thatguy thisisme The command completed successfully. F:\SomeDir>echo "whyUNoCreateFile?" > whyUNoCreateFile.txt Access is denied. I searched for answers and came across similar problems that lead to UAC (ex. Why does removing the EVERYONE group prevent domain admins from accessing a drive? ). I can't turn off UAC at the moment, so I try a "regular" group that I'm also part of. This group has no special rights assignments and is not part of any administrative groups. Still no dice: [***** This one command executed in an elevated shell *****] F:\SomeDir>icacls . /grant THEOFFICE\iteveryone:(OI)(CI)F processed file: . Successfully processed 1 files; Failed processing 0 files F:\SomeDir>net group /domain "iteveryone" The request will be processed at a domain controller for domain THEOFFICE.local. Group name ITeveryone Comment Members ------------------------------------------------------------------------------- Administrator thatguy thisisme otherguy someitguy The command completed successfully. F:\ScanningVMsForIBM>echo y > u Access is denied. As you can see, using a "regular" group didn't help. I have logged out and back in to the server to ensure my login token is up to date, and at any rate I belonged to these groups before the server was created. If I grant explicit permission to myself, it does allow me to write files: [***** This one command executed in an elevated shell *****] F:\SomeDir>icacls . /grant THEOFFICE\thisisme:(OI)(CI)F processed file: . Successfully processed 1 files; Failed processing 0 files F:\SomeDir>echo y > u F:\SomeDir>type u y My requirement is for the "Domain Admins" group to have Full Control, or if that's not possible without disabling UAC, then a second group will do, but I can't get either to work. I'm really stumped. Can someone please point out what I could be overlooking?
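    One check worth adding (a guess rather than a confirmed diagnosis): with UAC enabled, a non-elevated process receives a filtered token in which administrative groups are marked "deny only", so membership in Domain Admins stops counting until you elevate. You can inspect the token of the current shell directly:

        REM List the groups in this shell's access token; any entry whose
        REM attributes read "Group used for deny only" grants nothing
        whoami /groups

    If "Domain Admins" shows up as deny-only in the non-elevated shell but the write works from an elevated one, that would explain why the explicit grant to your own account succeeds while the group grants appear to be ignored.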

    Read the article

  • Error installing mysql, any way to fix it?

    - by user156355
    I was installing mysql with apt-get install mysql-server (like I always did); before that I had done an apt-get update (I am using Debian 6), and when I installed it I hit this problem, which is pretty common as far as I can see, but I've followed all the steps and nothing has worked. I've tried apt-get install -f, apt-get remove mysql-server (and common, and mysql-server-5.1), and also apt-get purge on every package followed by a reinstall, but nothing... I also tried dpkg-reconfigure mysql-server-5.1 and apt-get install --reinstall mysql-server (all run as root). Still, nothing has worked. Any ideas??? 130130 10:11:48 InnoDB: Shutdown completed; log sequence number 0 44233 Starting MySQL database server: mysqld . . . . . . . . . . . . . . failed! invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing mysql-server-5.1 (--configure): subprocess installed post-installation script returned error exit status 1 configured to not write apport reports dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.1; however: Package mysql-server-5.1 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured configured to not write apport reports Errors were encountered while processing: mysql-server-5.1 mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1) When I tried dpkg-reconfigure mysql-server-5.1: /usr/sbin/dpkg-reconfigure: mysql-server-5.1 is broken or not fully installed The "start" case in /etc/init.d/mysql is: 'start') sanity_checks; # Start daemon log_daemon_msg "Starting MySQL database server" "mysqld" if mysqld_status check_alive nowarn; then log_progress_msg "already running" log_end_msg 0 else # Could be removed during boot test -e /var/run/mysqld || install -m 755 -o mysql -g root -d /var/run/mysqld # Start MySQL! /usr/bin/mysqld_safe > /dev/null 2>&1 & # 6s was reported in #352070 to be too few when using ndbcluster for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14; do sleep 1 if mysqld_status check_alive nowarn ; then break; fi log_progress_msg "." done if mysqld_status check_alive warn; then log_end_msg 0 # Now start mysqlcheck or whatever the admin wants. output=$(/etc/mysql/debian-start) [ -n "$output" ] && log_action_msg "$output" else log_end_msg 1 log_failure_msg "Please take a look at the syslog" fi fi When I run mysql force-reload: Reloading MySQL database server: mysqld/usr/bin/mysqladmin: connect to server at 'localhost' failed error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)' Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists! root@americandougnuts:/etc/init.d# root@americandougnuts:/etc/init.d# Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
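    Before purging and reinstalling yet again, it may help to find out why mysqld refuses to start; the init script only reports that it failed, not why. A small diagnostic sketch (log locations vary with configuration):

        # The real reason is usually in the daemon's own log or in syslog
        tail -n 50 /var/log/mysql/error.log
        grep mysqld /var/log/syslog | tail -n 20

        # Common culprits for a failed start: a leftover /var/lib/mysql data
        # directory with the wrong owner, or a full disk
        ls -ld /var/lib/mysql && df -h /var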

    Read the article

  • Understanding NFS4 (Linux server)

    - by drumfire
    I've been a bit bothered by NFS4 on Linux. Some information 'out there' seems to conflict with other information, and other information appears hard to find. So here are a couple of things that caught my attention; hopefully someone out there can shed some light on this. This question focuses exclusively on NFS4 without Kerberos etc. 1. Exports There is ambiguous information in the exports manpage on the structure of /etc/exports. To quote from exports(5): Also, each line may have one or more specifications for default options after the path name, in the form of a dash ("-") followed by an option list. The option list is used for all subsequent exports on that line only. What does "subsequent exports on that line only" mean? 1.2 fsid=0 not required anymore? I was searching for fsid when I found a comment on the linux-nfs list stating fsid=0 is not required anymore. Now I'm just confused: do I need it with nfs4 or not?! 2. Non-exported directory still mountable Say I have the following tree: /exp /exp/users /exp/distr /exp/distr/archlinux /exp/distr/debian And I have the following fstab entries: /dev/disk/by-label/users /mnt/users ext4 defaults 0 0 /dev/disk/by-label/distr /mnt/distr ext4 defaults 0 0 /mnt/users /exp/users none bind 0 0 /mnt/distr /exp/distr none bind 0 0 And my exports is exactly this: /exp 192.168.1.0/24(fsid=0,rw,async,no_subtree_check,no_root_squash) /exp/distr 192.168.1.0/24(rw,async,no_subtree_check,no_root_squash) And exportfs -arv shows: exporting 192.168.1.0/24:/exp/distr exporting 192.168.1.0/24:/exp Then why am I able to do this and get no error on a client: mount -t nfs4 server:/exp/users /tmp/test Even though /exp/users is not exported? I didn't export this directory, and while I don't see the contents of /dev/disk/by-label/users unless I specify crossmnt, I am still able to write to the directory. Everything I write to there goes to the underlying directory of /exp/users, which can be seen when I umount /exp/users; ls /exp/users. 3. The odd case of showmount -d server As stated by rpc.mountd(8), this command should display directories that are either currently mounted by clients, or stale entries in /var/lib/nfs/rmtab, as can be read: The rpc.mountd daemon registers every successful MNT request by adding an entry to the /var/lib/nfs/rmtab file. When receiving a UMNT request from an NFS client, rpc.mountd simply removes the matching entry from /var/lib/nfs/rmtab, as long as the access control list for that export allows that sender to access the export. (...) Note, however, that there is little to guarantee that the contents of /var/lib/nfs/rmtab are accurate. A client may continue accessing an export even after invoking UMNT. If the client reboots without sending a UMNT request, stale entries remain for that client in /var/lib/nfs/rmtab. After reading this I surely wonder: Isn't it terribly insecure to just expose this type of client information; Aren't unaware server admins bound to have an rmtab with a lot of stale clients; Is this the reason that clients that mount nfs4 directories with mount -v get to see output like "nothing was mounted" even though something was mounted? I have a lot of other questions regarding nfs4, but I'll keep it at this for the moment... :)
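    On question 1, a concrete sketch may make the manpage wording clearer (hosts and options here are illustrative examples, not a recommendation):

        # /etc/exports: the option list after the dash becomes the default for
        # every host specification that follows it on this same line
        /exp  -rw,async,no_subtree_check  192.168.1.0/24(fsid=0)  192.168.2.0/24

    Read this way, "subsequent exports on that line only" means both host entries above inherit rw,async,no_subtree_check, while exports of the same path on a different line would not.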

    Read the article

  • Debian Lenny to Debian Squeeze upgrade problems

    - by Roland Soós
    Hi! Yesterday I did a dist-upgrade on my Debian Lenny server. I thought it would be as easy as a usual upgrade, but it's not. I got a lot of problems after the update: # apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: linux-image-2.6-amd64 : Depends: linux-image-2.6.32-5-amd64 but it is not installed E: Unmet dependencies. Try using -f. Then I tried the suggestion: # apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: libio-compress-base-perl libatk1.0-0 libts-0.0-0 libmime-types-perl libc-client2007b libgtk2.0-common libxfixes3 libgsf-1-common hicolor-icon-theme libfile-remove-perl libxcomposite1 libltdl3-dev libneon27 libmd5-perl libwmf0.2-7 libilmbase6 libatk1.0-data djvulibre-desktop libdirectfb-1.0-0 fam libxinerama1 libcroco3 libopenexr6 libgsf-1-114 libmail-box-perl libdjvulibre21 openssl-blacklist librsvg2-2 libio-compress-zlib-perl libsysfs2 libbeecrypt6 libxdamage1 libobject-realize-later-perl libuser-identity-perl libgtk2.0-bin libxi6 libxcursor1 portmap libxrandr2 libgtk2.0-0 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: linux-image-2.6.32-5-amd64 Suggested packages: linux-doc-2.6.32 The following NEW packages will be installed: linux-image-2.6.32-5-amd64 0 upgraded, 1 newly installed, 0 to remove and 121 not upgraded. 98 not fully installed or removed. Need to get 0 B/28.6 MB of archives. After this operation, 103 MB of additional disk space will be used. Do you want to continue [Y/n]? y perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "hu_HU.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár Preconfiguring packages ... (Reading database ... 37915 files and directories currently installed.) Unpacking linux-image-2.6.32-5-amd64 (from .../linux-image-2.6.32-5-amd64_2.6.32-30_amd64.deb) ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár dpkg: error processing /var/cache/apt/archives/linux-image-2.6.32-5-amd64_2.6.32-30_amd64.deb (--unpack): failed in write on buffer copy for backend dpkg-deb during `./lib/modules/2.6.32-5-amd64/kernel/sound/pci/hda/snd-hda-codec-realtek.ko': No space left on device configured to not write apport reports dpkg-deb: subprocess paste killed by signal (Broken pipe) locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár Running postrm hook script /sbin/update-grub. Searching for GRUB installation directory ... found: /boot/grub Searching for default file ... found: /boot/grub/default Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst Searching for splash image ... none found, skipping ... Found kernel: /boot/vmlinuz-2.6.26-2-amd64 Updating /boot/grub/menu.lst ... done Examining /etc/kernel/postrm.d . 
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.32-5-amd64 /boot/vmlinuz-2.6.32-5-amd64 Errors were encountered while processing: /var/cache/apt/archives/linux-image-2.6.32-5-amd64_2.6.32-30_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) # dpkg-reconfigure locales perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "hu_HU.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár /usr/sbin/dpkg-reconfigure: locales is broken or not fully installed Then I got stuck. Do you have any idea how I could solve this?
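    For what it's worth, the line that actually kills the unpack is "failed in write ... No space left on device", which points at a full filesystem rather than a packaging bug. A hedged sketch of the first things to try (keep the old kernel until the new one boots):

        # See which filesystem ran out of room (the modules unpack under /lib/modules)
        df -h / /boot /var

        # Reclaim space from the package cache, then let dpkg finish the
        # interrupted install
        apt-get clean
        apt-get -f install

    The locale warnings are a separate, cosmetic issue; once dpkg is healthy again, running dpkg-reconfigure locales and generating hu_HU.UTF-8 should silence them.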

    Read the article

< Previous Page | 497 498 499 500 501 502 503 504 505 506 507 508  | Next Page >