Search Results

Search found 15377 results on 616 pages for 'socket programming'.


  • Member classes versus #includes

    - by ShallowThoughts
    I've recently discovered that it is bad form to have #includes in your header files, because anyone who uses your code gets all those extra includes they won't necessarily want. However, for classes that have member variables defined as a type of another class, what's the alternative? For example, I was doing things the following way for the longest time:

        /* Header file for class myGrades */
        #include <vector>        // bad
        #include "classResult.h" // bad

        class myGrades {
            std::vector<classResult> grades;
            int average;
            int bestScore;
        };

    (Please excuse the fact that this is a highly artificial example.) So, if I want to get rid of the #include lines, is there any way I can keep the vector, or do I have to approach programming my code in an entirely different way?
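    One common alternative, sketched below with the names from the question (the pimpl split is my illustration, not part of the original post): forward-declare classes you only touch through pointers or references, and move the heavy #includes into the .cpp file. A by-value member like std::vector<classResult> still needs the complete type wherever the vector is actually used, so hiding it behind a pointer-to-implementation keeps classResult.h out of the header entirely:

        // myGrades.h -- minimal sketch
        #include <memory>   // std::unique_ptr

        class classResult;  // forward declaration: enough for pointers/references

        class myGrades {
        public:
            myGrades();
            ~myGrades();                    // defined in the .cpp, where Impl is complete
            void add(const classResult& r); // references to classResult are fine here
        private:
            struct Impl;                    // "pimpl": the heavy members live here
            std::unique_ptr<Impl> impl;
            int average;
            int bestScore;
        };

        // myGrades.cpp
        #include "myGrades.h"
        #include "classResult.h"   // the includes move here, out of the header
        #include <vector>

        struct myGrades::Impl {
            std::vector<classResult> grades;
        };

        myGrades::myGrades() : impl(new Impl), average(0), bestScore(0) {}
        myGrades::~myGrades() = default;
        void myGrades::add(const classResult& r) { impl->grades.push_back(r); }

    Consumers of myGrades.h now compile without ever seeing classResult.h or <vector>; the cost is one extra allocation and a pointer indirection.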

    Read the article

  • Dynamic web widget

    - by user1824996
    My vendor offers a widget creation service where I can log in to their page and set initial values of a search form; after the save button is clicked, I can copy & paste the script code on my website to display a product search result widget. I am thinking of changing this static widget to a dynamic one. Since my programming knowledge is limited, can experts tell me if it's possible to log in remotely over HTTPS (using cURL) and set the search form values equal to values on my page (every time my page content changes, it will change the form values), then save the form, so the widget script I pasted on my page is always refreshed to the new search result? The issue will involve cross-domain requests, form submission & server/browser communication. I know a little jQuery, PHP, Ajax, and cURL, but so far I'm stuck with just an idea and not really sure how to implement it.
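    For what it's worth, the server-side half of this idea is feasible: the browser's cross-domain restriction does not apply to your own server, so a server-side script can log in and re-submit the vendor's form whenever your page content changes. A minimal libcurl sketch of that flow (every URL and field name below is a hypothetical placeholder; the vendor's real endpoints, field names, and any CSRF tokens would have to be inspected first):

        // widget_refresh.cpp - log in over HTTPS, then re-save a search form (sketch)
        #include <curl/curl.h>
        #include <string>

        int main() {
            curl_global_init(CURL_GLOBAL_DEFAULT);
            CURL *curl = curl_easy_init();
            if (!curl) return 1;

            // 1) Log in; the in-memory cookie engine keeps the session cookie.
            curl_easy_setopt(curl, CURLOPT_URL, "https://vendor.example.com/login");
            curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");  // enable cookie engine
            curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "user=me&pass=secret");
            if (curl_easy_perform(curl) != CURLE_OK) return 1;

            // 2) Re-submit the search form with values taken from our page.
            std::string values = "keyword=new+content&category=12";  // built dynamically
            curl_easy_setopt(curl, CURLOPT_URL, "https://vendor.example.com/widget/save");
            curl_easy_setopt(curl, CURLOPT_POSTFIELDS, values.c_str());
            CURLcode rc = curl_easy_perform(curl);

            curl_easy_cleanup(curl);
            curl_global_cleanup();
            return rc == CURLE_OK ? 0 : 1;
        }

    The same flow can be written with PHP's cURL bindings; the two essentials are keeping the session cookie between the login and the save, and triggering the script only when the page content actually changes.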

    Read the article

  • Create div tag template and reuse

    - by user1683645
    Is it possible to create a template, e.g. with lots of other elements inside it and proper attribute "tagging", and reuse it with jQuery? For instance, when you want to display user-submitted comments without refreshing the page. The reason I ask this is because the code between the div tags is rather long, so using, for instance, prepend() would be too long to rewrite. What's the best approach for larger manipulations? Create a separate HTML file? I'm pretty new to DOM manipulation, but since I have a programming background I would expect that there is an efficient way to reuse already existing HTML instead of redefining it in jQuery.

    Read the article

  • Streamed mp3 only plays for 1 second

    - by angel6
    Hi, I'm using the plaympeg.c (modified) code from smpeg as a media player. I've got ffserver running as a streaming server, and I'm streaming an mp3 file over HTTP. But when I run plaympeg.c, it plays the streamed file for only a second. When I run plaympeg again, it starts off from where it left off and plays for one second. Does anyone know why this happens and how to fix it? I've tested it out on WMP and it plays the entire file in one go, so I guess it's not a problem with the streaming or ffserver.conf. The (lightly modified) plaympeg.c follows:

        /* (Header names inferred from use; the paste had stripped them.) */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <signal.h>
        /* #ifdef unix */
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/socket.h>

        #define NET_SUPPORT   /* General network support */
        #define HTTP_SUPPORT  /* HTTP support */

        #ifdef NET_SUPPORT
        #include <netinet/in.h>
        #include <arpa/inet.h>
        #include <netdb.h>
        #endif

        #include "smpeg.h"

        #ifdef NET_SUPPORT
        int tcp_open(char *address, int port)
        {
            struct sockaddr_in stAddr;
            struct hostent *host;
            int sock;
            struct linger l;

            memset(&stAddr, 0, sizeof(stAddr));
            stAddr.sin_family = AF_INET;
            stAddr.sin_port = htons(port);

            if ((host = gethostbyname(address)) == NULL)
                return (0);
            stAddr.sin_addr = *((struct in_addr *)host->h_addr_list[0]);

            if ((sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0)
                return (0);

            l.l_onoff = 1;
            l.l_linger = 5;
            if (setsockopt(sock, SOL_SOCKET, SO_LINGER, (char *)&l, sizeof(l)) < 0)
                return (0);

            if (connect(sock, (struct sockaddr *)&stAddr, sizeof(stAddr)) < 0)
                return (0);

            return (sock);
        }

        #ifdef HTTP_SUPPORT
        int http_open(char *arg)
        {
            char *host;
            int port;
            char *request;
            int tcp_sock;
            char http_request[1024];
            char c;

            printf("\nin http_open passed parameter = %s\n", arg);

            /* Check for URL syntax */
            if (strncmp(arg, "http://", strlen("http://")))
                return (0);

            /* Parse URL */
            port = 80;
            host = arg + strlen("http://");
            if ((request = strchr(host, '/')) == NULL)
                return (0);
            *request++ = 0;

            if (strchr(host, ':') != NULL) /* port is specified */
            {
                port = atoi(strchr(host, ':') + 1);
                *strchr(host, ':') = 0;
            }

            /* Open a TCP socket */
            if (!(tcp_sock = tcp_open(host, port))) {
                perror("http_open");
                return (0);
            }

            /* Send HTTP GET request */
            sprintf(http_request, "GET /%s HTTP/1.0\r\n"
                    "User-Agent: Mozilla/2.0 (Win95; I)\r\n"
                    "Pragma: no-cache\r\n"
                    "Host: %s\r\n"
                    "Accept: */*\r\n"
                    "\r\n", request, host);
            send(tcp_sock, http_request, strlen(http_request), 0);

            /* Parse server reply */
            do
                read(tcp_sock, &c, sizeof(char));
            while (c != ' ');
            read(tcp_sock, http_request, 4 * sizeof(char));
            http_request[4] = 0;
            if (strcmp(http_request, "200 ")) {
                fprintf(stderr, "http_open: ");
                do {
                    read(tcp_sock, &c, sizeof(char));
                    fprintf(stderr, "%c", c);
                } while (c != '\r');
                fprintf(stderr, "\n");
                return (0);
            }
            return (tcp_sock);
        }
        #endif /* HTTP_SUPPORT */
        #endif /* NET_SUPPORT */

        void update(SDL_Surface *screen, Sint32 x, Sint32 y, Uint32 w, Uint32 h)
        {
            if (screen->flags & SDL_DOUBLEBUF) {
                SDL_Flip(screen);
            }
        }

        /* Flag telling the UI that the movie or song should be skipped */
        int done;

        void next_movie(int sig)
        {
            done = 1;
        }

        int main(int argc, char *argv[])
        {
            int use_audio, use_video;
            int fullscreen;
            int scalesize;
            int scale_width, scale_height;
            int loop_play;
            int i, pause;
            int volume;
            Uint32 seek;
            float skip;
            int bilinear_filtering;
            SDL_Surface *screen;
            SMPEG *mpeg;
            SMPEG_Info info;
            char *basefile;
            SDL_version sdlver;
            SMPEG_version smpegver;
            int fd;
            char buf[32];
            int status;

            printf("\nchecking command line options ");

            /* Get the command line options */
            use_audio = 1; use_video = 1; fullscreen = 0; scalesize = 1;
            scale_width = 0; scale_height = 0; loop_play = 0; volume = 100;
            seek = 0; skip = 0; bilinear_filtering = 0; fd = 0;

            for (i = 1; argv[i] && (argv[i][0] == '-') && (argv[i][1] != 0); ++i) {
                if (strcmp(argv[i], "--fullscreen") == 0) {
                    fullscreen = 1;
                } else if ((strcmp(argv[i], "--seek") == 0) || (strcmp(argv[i], "-S") == 0)) {
                    ++i;
                    if (argv[i]) seek = atol(argv[i]);
                } else if ((strcmp(argv[i], "--volume") == 0) || (strcmp(argv[i], "-v") == 0)) {
                    ++i;
                    if (i >= argc) {
                        fprintf(stderr, "Please specify volume when using --volume or -v\n");
                        return (1);
                    }
                    if (argv[i]) volume = atoi(argv[i]);
                    if ((volume < 0) || (volume > 100)) {
                        fprintf(stderr, "Volume must be between 0 and 100\n");
                        volume = 100;
                    }
                } else {
                    fprintf(stderr, "Warning: Unknown option: %s\n", argv[i]);
                }
            }

            printf("\nuse video = %d, use audio = %d\n", use_video, use_audio);
            printf("\ngoing to check input parameters\n");

        #if defined(linux) || defined(FreeBSD)
            /* Plaympeg doesn't need a mouse */
            putenv("SDL_NOMOUSE=1");
        #endif

            /* Play the mpeg files! */
            status = 0;
            for (; argv[i]; ++i) {
                /* Initialize SDL */
                if (use_video) {
                    if ((SDL_Init(SDL_INIT_VIDEO) < 0) || !SDL_VideoDriverName(buf, 1)) {
                        fprintf(stderr, "Warning: Couldn't init SDL video: %s\n", SDL_GetError());
                        fprintf(stderr, "Will ignore video stream\n");
                        use_video = 0;
                    }
                    printf("\ninitialised video\n");
                }
                if (use_audio) {
                    if ((SDL_Init(SDL_INIT_AUDIO) < 0) || !SDL_AudioDriverName(buf, 1)) {
                        fprintf(stderr, "Warning: Couldn't init SDL audio: %s\n", SDL_GetError());
                        fprintf(stderr, "Will ignore audio stream\n");
                        use_audio = 0;
                    }
                }

                /* Allow Ctrl-C when there's no video output */
                signal(SIGINT, next_movie);

                printf("\nchecking defined supports\n");
                /* Create the MPEG stream */
        #ifdef NET_SUPPORT
                printf("\ndefined NET_SUPPORT\n");
        #ifdef HTTP_SUPPORT
                printf("\ndefined HTTP_SUPPORT\n");
                /* Check if source is an http URL */
                printf("\nabout to call http_open\n");
                printf("\nhere we go\n");
                if ((fd = http_open(argv[i])) != 0)
                    mpeg = SMPEG_new_descr(fd, &info, use_audio);
                else
        #endif
        #endif
                {
                    if (strcmp(argv[i], "-") == 0) /* Use stdin for input */
                        mpeg = SMPEG_new_descr(0, &info, use_audio);
                    else
                        mpeg = SMPEG_new(argv[i], &info, use_audio);
                }
                if (SMPEG_error(mpeg)) {
                    fprintf(stderr, "%s: %s\n", argv[i], SMPEG_error(mpeg));
                    SMPEG_delete(mpeg);
                    status = -1;
                    continue;
                }
                SMPEG_enableaudio(mpeg, use_audio);
                SMPEG_enablevideo(mpeg, use_video);
                SMPEG_setvolume(mpeg, volume);

                /* Print information about the video */
                basefile = strrchr(argv[i], '/');
                if (basefile) ++basefile;
                else basefile = argv[i];
                if (info.has_audio && info.has_video)
                    printf("%s: MPEG system stream (audio/video)\n", basefile);
                else if (info.has_audio)
                    printf("%s: MPEG audio stream\n", basefile);
                else if (info.has_video)
                    printf("%s: MPEG video stream\n", basefile);
                if (info.has_video)
                    printf("\tVideo %dx%d resolution\n", info.width, info.height);
                if (info.has_audio)
                    printf("\tAudio %s\n", info.audio_string);
                if (info.total_size)
                    printf("\tSize: %d\n", info.total_size);
                if (info.total_time)
                    printf("\tTotal time: %f\n", info.total_time);

                /* Set up video display if needed */
                if (info.has_video && use_video) {
                    const SDL_VideoInfo *video_info;
                    Uint32 video_flags;
                    int video_bpp;
                    int width, height;

                    /* Get the "native" video mode */
                    video_info = SDL_GetVideoInfo();
                    switch (video_info->vfmt->BitsPerPixel) {
                        case 16:
                        case 24:
                        case 32:
                            video_bpp = video_info->vfmt->BitsPerPixel;
                            break;
                        default:
                            video_bpp = 16;
                            break;
                    }
                    if (scale_width) width = scale_width;
                    else width = info.width;
                    width *= scalesize;
                    if (scale_height) height = scale_height;
                    else height = info.height;
                    height *= scalesize;

                    video_flags = SDL_SWSURFACE;
                    if (fullscreen)
                        video_flags = SDL_FULLSCREEN | SDL_DOUBLEBUF | SDL_HWSURFACE;
                    video_flags |= SDL_ASYNCBLIT;
                    video_flags |= SDL_RESIZABLE;
                    screen = SDL_SetVideoMode(width, height, video_bpp, video_flags);
                    if (screen == NULL) {
                        fprintf(stderr, "Unable to set %dx%d video mode: %s\n",
                                width, height, SDL_GetError());
                        continue;
                    }
                    SDL_WM_SetCaption(argv[i], "plaympeg");
                    if (screen->flags & SDL_FULLSCREEN)
                        SDL_ShowCursor(0);
                    SMPEG_setdisplay(mpeg, screen, NULL, update);
                    SMPEG_scaleXY(mpeg, screen->w, screen->h);
                } else {
                    SDL_QuitSubSystem(SDL_INIT_VIDEO);
                }

                /* Set any special playback parameters */
                if (loop_play)
                    SMPEG_loop(mpeg, 1);

                /* Seek starting position */
                if (seek)
                    SMPEG_seek(mpeg, seek);

                /* Skip seconds to starting position */
                if (skip)
                    SMPEG_skip(mpeg, skip);

                /* Play it, and wait for playback to complete */
                SMPEG_play(mpeg);
                done = 0;
                pause = 0;
                while (!done && (pause || (SMPEG_status(mpeg) == SMPEG_PLAYING))) {
                    SDL_Event event;

                    while (use_video && SDL_PollEvent(&event)) {
                        switch (event.type) {
                            case SDL_VIDEORESIZE: {
                                SDL_Surface *old_screen = screen;
                                SMPEG_pause(mpeg);
                                screen = SDL_SetVideoMode(event.resize.w, event.resize.h,
                                                          screen->format->BitsPerPixel,
                                                          screen->flags);
                                if (old_screen != screen)
                                    SMPEG_setdisplay(mpeg, screen, NULL, update);
                                SMPEG_scaleXY(mpeg, screen->w, screen->h);
                                SMPEG_pause(mpeg);
                            }
                            break;
                            case SDL_KEYDOWN:
                                if ((event.key.keysym.sym == SDLK_ESCAPE) ||
                                    (event.key.keysym.sym == SDLK_q)) {
                                    /* Quit */
                                    done = 1;
                                } else if (event.key.keysym.sym == SDLK_RETURN) {
                                    /* Toggle fullscreen */
                                    if (event.key.keysym.mod & KMOD_ALT) {
                                        SDL_WM_ToggleFullScreen(screen);
                                        fullscreen = (screen->flags & SDL_FULLSCREEN);
                                        SDL_ShowCursor(!fullscreen);
                                    }
                                } else if (event.key.keysym.sym == SDLK_UP) {
                                    /* Volume up */
                                    if (volume < 100) {
                                        if (event.key.keysym.mod & KMOD_SHIFT) {       /* +10 */
                                            volume += 10;
                                        } else if (event.key.keysym.mod & KMOD_CTRL) { /* to 100 */
                                            volume = 100;
                                        } else {                                       /* +1 */
                                            volume++;
                                        }
                                        if (volume > 100)
                                            volume = 100;
                                        SMPEG_setvolume(mpeg, volume);
                                    }
                                } else if (event.key.keysym.sym == SDLK_DOWN) {
                                    /* Volume down */
                                    if (volume > 0) {
                                        if (event.key.keysym.mod & KMOD_SHIFT) {
                                            volume -= 10;
                                        } else if (event.key.keysym.mod & KMOD_CTRL) {
                                            volume = 0;
                                        } else {
                                            volume--;
                                        }
                                        if (volume < 0)
                                            volume = 0;
                                        SMPEG_setvolume(mpeg, volume);
                                    }
                                } else if (event.key.keysym.sym == SDLK_PAGEUP) {
                                    /* Full volume */
                                    volume = 100;
                                    SMPEG_setvolume(mpeg, volume);
                                } else if (event.key.keysym.sym == SDLK_PAGEDOWN) {
                                    /* Volume off */
                                    volume = 0;
                                    SMPEG_setvolume(mpeg, volume);
                                } else if (event.key.keysym.sym == SDLK_SPACE) {
                                    /* Toggle play / pause */
                                    if (SMPEG_status(mpeg) == SMPEG_PLAYING) {
                                        SMPEG_pause(mpeg);
                                        pause = 1;
                                    } else {
                                        SMPEG_play(mpeg);
                                        pause = 0;
                                    }
                                } else if (event.key.keysym.sym == SDLK_RIGHT) {
                                    /* Forward */
                                    if (event.key.keysym.mod & KMOD_SHIFT) {
                                        SMPEG_skip(mpeg, 100);
                                    } else if (event.key.keysym.mod & KMOD_CTRL) {
                                        SMPEG_skip(mpeg, 50);
                                    } else {
                                        SMPEG_skip(mpeg, 5);
                                    }
                                } else if (event.key.keysym.sym == SDLK_LEFT) {
                                    /* Reverse (empty in the original) */
                                    if (event.key.keysym.mod & KMOD_SHIFT) {
                                    } else if (event.key.keysym.mod & KMOD_CTRL) {
                                    } else {
                                    }
                                } else if (event.key.keysym.sym == SDLK_KP_MINUS) {
                                    /* Scale minus */
                                    if (scalesize > 1)
                                        scalesize--;
                                } else if (event.key.keysym.sym == SDLK_KP_PLUS) {
                                    /* Scale plus */
                                    scalesize++;
                                } else if (event.key.keysym.sym == SDLK_f) {
                                    /* Toggle filtering on/off */
                                    if (bilinear_filtering) {
                                        SMPEG_Filter *filter = SMPEGfilter_null();
                                        filter = SMPEG_filter(mpeg, filter);
                                        filter->destroy(filter);
                                        bilinear_filtering = 0;
                                    } else {
                                        SMPEG_Filter *filter = SMPEGfilter_bilinear();
                                        filter = SMPEG_filter(mpeg, filter);
                                        filter->destroy(filter);
                                        bilinear_filtering = 1;
                                    }
                                }
                                break;
                            case SDL_QUIT:
                                done = 1;
                                break;
                            default:
                                break;
                        }
                    }
                    SDL_Delay(1000 / 2);
                }
                SMPEG_delete(mpeg);
            }
            SDL_Quit();

        #if defined(HTTP_SUPPORT)
            if (fd)
                close(fd);
        #endif

            return (status);
        }
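    One way to narrow this down, given the symptoms: a tiny harness (my sketch, reusing the http_open() above) that dumps the HTTP stream straight to a file. If the dump also stops after roughly one second's worth of audio, the problem is on the socket/ffserver side rather than in SMPEG's playback:

        /* dumpstream.c -- test harness (sketch); link together with the
           tcp_open()/http_open() above.
           Usage: dumpstream http://host:8090/stream.mp3 out.mp3 */
        #include <stdio.h>
        #include <unistd.h>

        extern int http_open(char *arg);

        int main(int argc, char *argv[])
        {
            char buf[4096];
            ssize_t n;
            int fd;
            FILE *out;

            if (argc != 3) {
                fprintf(stderr, "usage: %s url file\n", argv[0]);
                return 1;
            }
            fd = http_open(argv[1]);
            if (!fd) {
                fprintf(stderr, "http_open failed\n");
                return 1;
            }
            out = fopen(argv[2], "wb");
            if (!out) {
                perror("fopen");
                return 1;
            }
            while ((n = read(fd, buf, sizeof(buf))) > 0)   /* until the server closes */
                fwrite(buf, 1, (size_t)n, out);
            fclose(out);
            close(fd);
            return 0;
        }

    Comparing the dump's size against what WMP receives for the same URL should show quickly which leg of the pipeline is cutting the stream short.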

    Read the article

  • Announcing release of ASP.NET MVC 3, IIS Express, SQL CE 4, Web Farm Framework, Orchard, WebMatrix

    - by ScottGu
    I’m excited to announce the release today of several products: ASP.NET MVC 3, NuGet, IIS Express 7.5, SQL Server Compact Edition 4, Web Deploy and Web Farm Framework 2.0, Orchard 1.0, and WebMatrix 1.0. The above products are all free. They build upon the .NET 4 and VS 2010 release, and add a ton of additional value to ASP.NET (both Web Forms and MVC) and the Microsoft Web Server stack. ASP.NET MVC 3 Today we are shipping the final release of ASP.NET MVC 3. You can download and install ASP.NET MVC 3 here. The ASP.NET MVC 3 source code (released under an OSI-compliant open source license) can also optionally be downloaded here. ASP.NET MVC 3 is a significant update that brings with it a bunch of great features. Some of the improvements include: Razor ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to continuing to support/enhance the existing .aspx view engine). Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, with Razor you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type. You can learn more about Razor from some of the blog posts I’ve done about it over the last 6 months: Introducing Razor; New @model keyword in Razor; Layouts with Razor; Server-Side Comments with Razor; Razor’s @: and <text> syntax; Implicit and Explicit code nuggets with Razor; and Layouts and Sections with Razor. Today’s release provides full code intellisense support for Razor (both VB and C#) with Visual Studio 2010 and the free Visual Web Developer 2010 Express. JavaScript Improvements ASP.NET MVC 3 enables richer JavaScript scenarios and takes advantage of emerging HTML5 capabilities. The AJAX and Validation helpers in ASP.NET MVC 3 now use an Unobtrusive JavaScript-based approach. Unobtrusive JavaScript avoids injecting inline JavaScript into HTML, and enables cleaner separation of behavior using the new HTML 5 “data-“ attribute convention (which conveniently works on older browsers as well – including IE6). This keeps your HTML tight and clean, and makes it easier to optionally swap out or customize JS libraries. ASP.NET MVC 3 now includes built-in support for posting JSON-based parameters from client-side JavaScript to action methods on the server. This makes it easier to exchange data across the client and server, and build rich JavaScript front-ends. We think this capability will be particularly useful going forward with scenarios involving client templates and data binding (including the jQuery plugins the ASP.NET team recently contributed to the jQuery project). Previous releases of ASP.NET MVC included the core jQuery library. ASP.NET MVC 3 also now ships the jQuery Validate plugin (which our validation helpers use for client-side validation scenarios). We are also now shipping and including jQuery UI by default as well (which provides a rich set of client-side JavaScript UI widgets for you to use within projects). Improved Validation ASP.NET MVC 3 includes a bunch of validation enhancements that make it even easier to work with data. Client-side validation is now enabled by default with ASP.NET MVC 3 (using an unobtrusive JavaScript implementation). 
    Today’s release also includes built-in support for Remote Validation, which enables you to annotate a model class with a validation attribute that causes ASP.NET MVC to perform a remote validation call to a server method when validating input on the client. The validation features introduced within .NET 4’s System.ComponentModel.DataAnnotations namespace are now supported by ASP.NET MVC 3. This includes support for the new IValidatableObject interface – which enables you to perform model-level validation, and allows you to provide validation error messages specific to the state of the overall model, or between two properties within the model. ASP.NET MVC 3 also supports the improvements made to the ValidationAttribute class in .NET 4. ValidationAttribute now supports a new IsValid overload that provides more information about the current validation context, such as what object is being validated. This enables richer scenarios where you can validate the current value based on another property of the model. We’ve shipped a built-in [Compare] validation attribute with ASP.NET MVC 3 that uses this support and makes it easy out of the box to compare and validate two property values. You can use any data access API or technology with ASP.NET MVC. This past year, though, we’ve worked closely with the .NET data team to ensure that the new EF Code First library works really well for ASP.NET MVC applications. These two posts of mine cover the latest EF Code First preview and demonstrate how to use it with ASP.NET MVC 3 to enable easy editing of data (with end-to-end client+server validation support). The final release of EF Code First will ship in the next few weeks. Today we are also publishing the first preview of a new MvcScaffolding project. It enables you to easily scaffold ASP.NET MVC 3 Controllers and Views, and works great with EF Code-First (and is pluggable to support other data providers). You can learn more about it – and install it via NuGet today – from Steve Sanderson’s MvcScaffolding blog post. Output Caching Previous releases of ASP.NET MVC supported output caching content at a URL or action-method level. With ASP.NET MVC 3 we are also enabling support for partial page output caching – which allows you to easily output cache regions or fragments of a response as opposed to the entire thing. This ends up being super useful in a lot of scenarios, and enables you to dramatically reduce the work your application does on the server. The new partial page output caching support in ASP.NET MVC 3 enables you to easily re-use cached sub-regions/fragments of a page across multiple URLs on a site. It supports the ability to cache the content either on the web-server, or optionally cache it within a distributed cache server like Windows Server AppFabric or memcached. I’ll post some tutorials on my blog that show how to take advantage of ASP.NET MVC 3’s new output caching support for partial page scenarios in the future. Better Dependency Injection ASP.NET MVC 3 provides better support for applying Dependency Injection (DI) and integrating with Dependency Injection/IOC containers. With ASP.NET MVC 3 you no longer need to author custom ControllerFactory classes in order to enable DI with Controllers. 
You can instead just register a Dependency Injection framework with ASP.NET MVC 3 and it will resolve dependencies not only for Controllers, but also for Views, Action Filters, Model Binders, Value Providers, Validation Providers, and Model Metadata Providers that you use within your application. This makes it much easier to cleanly integrate dependency injection within your projects. Other Goodies ASP.NET MVC 3 includes dozens of other nice improvements that help to both reduce the amount of code you write, and make the code you do write cleaner.  Here are just a few examples: Improved New Project dialog that makes it easy to start new ASP.NET MVC 3 projects from templates. Improved Add->View Scaffolding support that enables the generation of even cleaner view templates. New ViewBag property that uses .NET 4’s dynamic support to make it easy to pass late-bound data from Controllers to Views. Global Filters support that allows specifying cross-cutting filter attributes (like [HandleError]) across all Controllers within an app. New [AllowHtml] attribute that allows for more granular request validation when binding form posted data to models. Sessionless controller support that allows fine grained control over whether SessionState is enabled on a Controller. New ActionResult types like HttpNotFoundResult and RedirectPermanent for common HTTP scenarios. New Html.Raw() helper to indicate that output should not be HTML encoded. New Crypto helpers for salting and hashing passwords. And much, much more… Learn More about ASP.NET MVC 3 We will be posting lots of tutorials and samples on the http://asp.net/mvc site in the weeks ahead.  Below are two good ASP.NET MVC 3 tutorials available on the site today: Build your First ASP.NET MVC 3 Application: VB and C# Building the ASP.NET MVC 3 Music Store We’ll post additional ASP.NET MVC 3 tutorials and videos on the http://asp.net/mvc site in the future. Visit it regularly to find new tutorials as they are published. How to Upgrade Existing Projects ASP.NET MVC 3 is compatible with ASP.NET MVC 2 – which means it should be easy to update existing MVC projects to ASP.NET MVC 3.  The new features in ASP.NET MVC 3 build on top of the foundational work we’ve already done with the MVC 1 and MVC 2 releases – which means that the skills, knowledge, libraries, and books you’ve acquired are all directly applicable with the MVC 3 release.  MVC 3 adds new features and capabilities – it doesn’t obsolete existing ones. You can upgrade existing ASP.NET MVC 2 projects by following the manual upgrade steps in the release notes.  Alternatively, you can use this automated ASP.NET MVC 3 upgrade tool to easily update your  existing projects. Localized Builds Today’s ASP.NET MVC 3 release is available in English.  We will be releasing localized versions of ASP.NET MVC 3 (in 9 languages) in a few days.  I’ll blog pointers to the localized downloads once they are available. NuGet Today we are also shipping NuGet – a free, open source, package manager that makes it easy for you to find, install, and use open source libraries in your projects. It works with all .NET project types (including ASP.NET Web Forms, ASP.NET MVC, WPF, WinForms, Silverlight, and Class Libraries).  You can download and install it here. NuGet enables developers who maintain open source projects (for example, .NET projects like Moq, NHibernate, Ninject, StructureMap, NUnit, Windsor, Raven, Elmah, etc) to package up their libraries and register them with an online gallery/catalog that is searchable.  
The client-side NuGet tools – which include full Visual Studio integration – make it trivial for any .NET developer who wants to use one of these libraries to easily find and install it within the project they are working on. NuGet handles dependency management between libraries (for example: library1 depends on library2). It also makes it easy to update (and optionally remove) libraries from your projects later. It supports updating web.config files (if a package needs configuration settings). It also allows packages to add PowerShell scripts to a project (for example: scaffold commands). Importantly, NuGet is transparent and clean – and does not install anything at the system level. Instead it is focused on making it easy to manage libraries you use with your projects. Our goal with NuGet is to make it as simple as possible to integrate open source libraries within .NET projects.  NuGet Gallery This week we also launched a beta version of the http://nuget.org web-site – which allows anyone to easily search and browse an online gallery of open source packages available via NuGet.  The site also now allows developers to optionally submit new packages that they wish to share with others.  You can learn more about how to create and share a package here. There are hundreds of open-source .NET projects already within the NuGet Gallery today.  We hope to have thousands there in the future. IIS Express 7.5 Today we are also shipping IIS Express 7.5.  IIS Express is a free version of IIS 7.5 that is optimized for developer scenarios.  It works for both ASP.NET Web Forms and ASP.NET MVC project types. We think IIS Express combines the ease of use of the ASP.NET Web Server (aka Cassini) currently built-into Visual Studio today with the full power of IIS.  Specifically: It’s lightweight and easy to install (less than 5Mb download and a quick install) It does not require an administrator account to run/debug applications from Visual Studio It enables a full web-server feature set – including SSL, URL Rewrite, and other IIS 7.x modules It supports and enables the same extensibility model and web.config file settings that IIS 7.x support It can be installed side-by-side with the full IIS web server as well as the ASP.NET Development Server (they do not conflict at all) It works on Windows XP and higher operating systems – giving you a full IIS 7.x developer feature-set on all Windows OS platforms IIS Express (like the ASP.NET Development Server) can be quickly launched to run a site from a directory on disk.  It does not require any registration/configuration steps. This makes it really easy to launch and run for development scenarios.  You can also optionally redistribute IIS Express with your own applications if you want a lightweight web-server.  The standard IIS Express EULA now includes redistributable rights. Visual Studio 2010 SP1 adds support for IIS Express.  Read my VS 2010 SP1 and IIS Express blog post to learn more about what it enables.  SQL Server Compact Edition 4 Today we are also shipping SQL Server Compact Edition 4 (aka SQL CE 4).  SQL CE is a free, embedded, database engine that enables easy database storage. No Database Installation Required SQL CE does not require you to run a setup or install a database server in order to use it.  You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine.  No setup or extra security permissions are required for it to run. 
You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application and will start-up when you first access a SQL CE database, and will automatically shutdown when your application is unloaded.  SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET Applications. Works with Existing Data APIs SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax.  This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE.  This enables you to use the same data programming skills and data APIs you know today. Supports Development, Testing and Production Scenarios SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios.  With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET).  This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments.  Starting with SQL CE 4 you can use it in a web-server as well. There are no license restrictions with SQL CE.  It is also totally free. Tooling Support with VS 2010 SP1 Visual Studio 2010 SP1 adds support for SQL CE 4 and ASP.NET Projects.  Read my VS 2010 SP1 and SQL CE 4 blog post to learn more about what it enables.  Web Deploy and Web Farm Framework 2.0 Today we are also releasing Microsoft Web Deploy V2 and Microsoft Web Farm Framework V2.  These services provide a flexible and powerful way to deploy ASP.NET applications onto either a single server, or across a web farm of machines. You can learn more about these capabilities from my previous blog posts on them: Introducing the Microsoft Web Farm Framework Automating Deployment with Microsoft Web Deploy Visit the http://iis.net website to learn more and install them. Both are free. Orchard 1.0 Today we are also releasing Orchard v1.0.  Orchard is a free, open source, community based project.  It provides Content Management System (CMS) and Blogging System support out of the box, and makes it possible to easily create and manage web-sites without having to write code (site owners can customize a site through the browser-based editing tools built-into Orchard).  Read these tutorials to learn more about how you can setup and manage your own Orchard site. Orchard itself is built as an ASP.NET MVC 3 application using Razor view templates (and by default uses SQL CE 4 for data storage).  Developers wishing to extend an Orchard site with custom functionality can open and edit it as a Visual Studio project – and add new ASP.NET MVC Controllers/Views to it.  WebMatrix 1.0 WebMatrix is a new, free, web development tool from Microsoft that provides a suite of technologies that make it easier to enable website development.  It enables a developer to start a new site by browsing and downloading an app template from an online gallery of web applications (which includes popular apps like Umbraco, DotNetNuke, Orchard, WordPress, Drupal and Joomla).  Alternatively it also enables developers to create and code web sites from scratch. WebMatrix is task focused and helps guide developers as they work on sites.  
WebMatrix includes IIS Express, SQL CE 4, and ASP.NET - providing an integrated web-server, database and programming framework combination.  It also includes built-in web publishing support which makes it easy to find and deploy sites to web hosting providers. You can learn more about WebMatrix from my Introducing WebMatrix blog post this summer.  Visit http://microsoft.com/web to download and install it today. Summary I’m really excited about today’s releases – they provide a bunch of additional value that makes web development with ASP.NET, Visual Studio and the Microsoft Web Server a lot better.  A lot of folks worked hard to share this with you today. On behalf of my whole team – we hope you enjoy them! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Watch YouTube in Windows 7 Media Center

    - by Mysticgeek
    Have you been looking for a way to watch your favorite viral videos from YouTube and Dailymotion from the couch? Today we take a look at an easy-to-use plugin which allows you to watch streaming video in Windows 7 Media Center. Install Macrotube The first thing we need to do is download and install the plugin called Macrotube (link below), following the defaults through the install wizard. After it’s installed, open Windows 7 Media Center and you’ll find Macrotube in the main menu. Currently there are three services available…YouTube, Dailymotion, and MSN Soapbox. Just select the service where you want to check out some videos. You can browse through different subjects or categories… Or you can search the service by typing in what you’re looking for…with your remote or keyboard. There is the ability to drill down your search content by date, rating, views, and relevance. There are a few settings available, such as the language beta, auto updates, and appearance. Now just kick back and browse through the different services and watch what you want from the comfort of your couch or on your computer. Conclusion This neat project is still in development and the developer is continuing to add changes through updates. It only works with Windows 7 Media Center, but there is a 32 & 64-bit version. Sometimes we experienced certain videos that wouldn’t play, and it did crash a few times, but that is to be expected with a work in progress. But overall, this is a cool plugin that will allow you to watch your favorite online content from WMC. Download Macrotube and get more details and troubleshooting help from the GreenButton forum.

    Read the article

  • Toorcon14

    - by danx
    Toorcon 2012 Information Security Conference San Diego, CA, http://www.toorcon.org/ Dan Anderson, October 2012 It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed some of the talks below, which I attended and which interested me. MalAndroid—the Crux of Android Infections, Aditya K. Sood; Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro; Privacy at the Handset: New FCC Rules?, Valkyrie; Hacking Measured Boot and UEFI, Dan Griffin; You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik; What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold; Accessibility and Security, Anna Shubina; Stop Patching, for Stronger PCI Compliance, Adam Brand; McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall. MalAndroid—the Crux of Android Infections Aditya K. Sood, IOActive, Michigan State PhD candidate Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most have unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grained resource control and policy enforcement. Android static analysis also includes a static analysis app checker (bouncer) and a vulnerability checker. What security problems does Android have? User-centric security, which depends on the user to grant permission and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid). All they want is functionality, extensibility, and mobility. Android had no "proper" encryption before Android 3.0. No built-in protection against social engineering and web tricks. Alternative Android app markets are unsafe: simply visiting some markets can infect Android. Aditya classified Android malware types as: Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app, or Android Gold Dream (a game), which uploads user files in a stealthy manner to a remote location. Type K—Kernel. Exploits underlying Linux libraries or the kernel. Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors. What are the threats from Android malware? These include info leaks (contacts), banking fraud, corporate network attacks, malware advertising, and malware "hacktivism" (the promotion of social causes—for example, promoting specific leaders of the Tunisian or Iranian revolutions). Android malware is frequently "masqueraded", that is, repackaged inside a legit app along with malware. To avoid detection, the hidden malware is not unwrapped until runtime. 
    The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there aren't many around. What they do is hijack the Android init framework—altering system programs and daemons, then deleting themselves. For example, the DKF Bootkit (China). Android app problems: no code signing (all self-signed!); native code execution; an all-or-none permission sandbox; alternate marketplaces; no robust Android malware detection at the network level; a delayed patch process. Programming Weird Machines with ELF Metadata Rebecca "bx" Shapiro, Dartmouth College, NH https://github.com/bx/elf-bf-tools @bxsays on twitter Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). A "weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that return to a weird location, do SQL injection, or corrupt the heap. Bx then talked about using ELF metadata as (an unintended) "weird machine". Some ELF background: A compiler takes source code and generates an ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes the ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another one at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Tables—works with GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory). For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses by tricking the runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements: instructions (add, move, jump if not 0 (jnz)); symbol table entries as "registers", with the symbol table value as their "contents"; immediate values as constants; direct values as addresses (e.g., 0xdeadbeef). A move instruction is a relocation table entry; an add instruction is a relocation table "addend" entry; a jnz instruction takes multiple relocation table entries. The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on the stack at "end" and uses IFUNC table entries (containing function pointer addresses). The ELF weird machine language, called "Brainfu*k" (BF), has 8 instructions (pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print) and 3 registers. Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library call that drops privilege, so it remained root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example:

        $ whoami
        bx
        $ ping localhost -t backdoor.sh   # executes backdoor
        $ whoami
        root

    The modified code increased from 285948 bytes to 290209 bytes. 
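    To make the "relocation entries as instructions" idea concrete, here is a minimal reader (my sketch, not bx's tool; 64-bit ELF assumed, error handling omitted) that prints the relocation entries the loader will process, i.e. the raw material of the weird machine described above:

        // relwalk.cpp - print SHT_RELA relocation entries from an ELF file (sketch)
        #include <elf.h>
        #include <cstdio>
        #include <cstdlib>

        int main(int argc, char **argv) {
            if (argc != 2) { fprintf(stderr, "usage: %s elf-file\n", argv[0]); return 1; }
            FILE *f = fopen(argv[1], "rb");
            if (!f) { perror("fopen"); return 1; }

            Elf64_Ehdr eh;
            fread(&eh, sizeof(eh), 1, f);

            // Section headers plus the section-name string table.
            Elf64_Shdr *sh = (Elf64_Shdr *)malloc(eh.e_shnum * sizeof(*sh));
            fseek(f, (long)eh.e_shoff, SEEK_SET);
            fread(sh, sizeof(*sh), eh.e_shnum, f);
            char *names = (char *)malloc(sh[eh.e_shstrndx].sh_size);
            fseek(f, (long)sh[eh.e_shstrndx].sh_offset, SEEK_SET);
            fread(names, 1, sh[eh.e_shstrndx].sh_size, f);

            for (int i = 0; i < eh.e_shnum; i++) {
                if (sh[i].sh_type != SHT_RELA) continue;   // .rela.dyn, .rela.plt
                size_t n = sh[i].sh_size / sizeof(Elf64_Rela);
                Elf64_Rela *rel = (Elf64_Rela *)malloc(sh[i].sh_size);
                fseek(f, (long)sh[i].sh_offset, SEEK_SET);
                fread(rel, sizeof(*rel), n, f);
                printf("%s: %zu entries\n", &names[sh[i].sh_name], n);
                for (size_t j = 0; j < n; j++)             // r_offset is the "operand",
                    printf("  off=%#lx type=%lu sym=%lu addend=%ld\n",  // type the "opcode"
                           (unsigned long)rel[j].r_offset,
                           (unsigned long)ELF64_R_TYPE(rel[j].r_info),
                           (unsigned long)ELF64_R_SYM(rel[j].r_info),
                           (long)rel[j].r_addend);
                free(rel);
            }
            free(sh); free(names); fclose(f);
            return 0;
        }

    Running it on any dynamically linked binary shows the same .rela.dyn entries that the technique above repurposes as move/add/jnz operations.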
    A BF tool compiles an "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data. Privacy at the Handset: New FCC Rules? "Valkyrie" (Christie Dudley, Santa Clara Law JD candidate) Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as an aggregation workaround, "recollating" the data with other databases to identify customers indirectly. The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is an FTC responsibility (not FCC). The FTC is trying to improve mobile app privacy, but the FTC has no authority over carrier/customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones. As a result iPhones may be more regulated. Who are the consumer advocates? Everyone knows EFF, but EPIC (Electronic Privacy Information Center), although more obscure, is more relevant. What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need an incentive to grant users control for those who want it, by holding them liable and responsible for breaches on their clock. Location information should be added to current CPNI privacy protection, and should require a "pen/trap" judicial order to obtain (still a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House. There will probably be new regulation soon; enforcement will be a problem, but consumers will still see some benefit. Hacking Measured Boot and UEFI Dan Griffin, JWSecure, Inc., Seattle, @JWSdan Dan talked about hacking measured UEFI boot. First some terms: UEFI is a boot technology that is replacing BIOS (it has whitelisting and blacklisting). UEFI protects devices against rootkits. TPM - a hardware security device to store hashes and hardware-protected keys. "Secure boot" can control at the firmware level what boot images can boot. "Measured boot" is an OS feature that tracks hashes (from BIOS, boot loader, kernel, early drivers). "Remote attestation" allows remote validation and control based on policy on a remote attestation server. Microsoft is pushing TPM (required for Windows 8), but Google is not. Intel TianoCore is the only open-source UEFI implementation. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines. UEFI Weaknesses. 
    UEFI toolkits are evolving rapidly, but UEFI has weaknesses: it assumes the user is an ally; it trusts the TPM implicitly, and the TPM is attached to the computer; the hibernate file is unprotected (disk encryption protects against this); protection is migrating from hardware to firmware; there are delays in patching and whitelist updates; and will UEFI really be adopted by the mainstream (smartphone hardware support, bank support, apathetic consumer support)? You Can't Buy Security: Building the Open Source InfoSec Program Boris Sverdlik, ISDPodcast.com co-host Boris talked about problems typical of current security audits. "IT Security" is an oxymoron—IT exists to enable business, uptime, utilization, and reporting, but doesn't care about security—IT has a conflict of interest. There's no Magic Bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not the solution (security is not a checklist)—what's needed is experience, adaptability, and soft skills. Step 1: The first thing is to Google and learn the company end-to-end before you start. Get to know the management team (not the IT team), and meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g., AV*EF=SLE). Learn the different business units and legal/regulatory obligations, learn the business and where the money is made, verify the company is protected from script kiddies (easy), learn the sensitive information (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering). Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, and physical security. Step 3: Implementation: keep it simple, stupid. Open source, although useful, is not free (implementation cost). Access controls with authentication & authorization for local and remote access: MS Windows has it; otherwise use OpenLDAP, OpenIAM, etc. Application security: everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run apps of different risk levels on the same system. Assume the host/client is compromised and use app-level security controls. Network security: VLAN != segregated, because there are too many workarounds. Use explicit firewall rules, active and passive network monitoring (snort is free), disallow end-user access to the production environment, and have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth, and SSL does not mean "safe." Operational controls: have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production. For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down the logs (they have everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVAS and Seccubus are free. Security awareness: the reality is users will always click everything. Build real awareness, not a compliance-driven checkbox, and have it integrated into the culture. 
    Pen test by crowdsourcing—test with logging. COSSP, http://www.cossp.org/ - Comprehensive Open Source Security Project. What Journalists Want: The Investigative Reporters' Perspective on Hacking Dave Maas, San Diego CityBeat Jason Leopold, Truthout.org The difference between hackers and investigative journalists: For hackers, the motivation varies, but the method is the same: technological specialties. For investigative journalists, it's about one thing—The Story—and they need broad info-gathering skills. J-School in 60 Seconds: Generic formula: a person or issue of public interest, new info, or an angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence. Media awareness of hackers and trends: journalists are becoming extremely aware of hackers with congressional debates (privacy, data breaches), demand for data mining, journalists' use of coding and web development, and journalists busted for hacking (Murdoch). Info gathering by investigative journalists includes public records laws. The federal Freedom of Information Act (FOIA) is good, but slow. The California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. Often they need to sue (especially the FBI). CPRA is faster, and requests can be vague. Dumps and leaks (a la Wikileaks). Journalists want: leads, protecting themselves and their sources, and adapting tools for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web logs). They don't trust encryption; they want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it. Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem Anna Shubina, Dartmouth College Anna talked about how accessibility and security are related. Accessibility of digital content (not real-world accessibility) mostly refers to blind users and screen readers, for our purposes. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word has an executable format—it's not a document exchange format—more dangerous than PDF or HTML. Accessibility is often the first and maybe only sanity check with parsing. They have no choice, because someone may want to read what you write. Google, for example, is very particular about the web browser you use and is bad at supporting other browsers. It uses JavaScript instead of links, often requiring mouseover to display content. PDF is a security nightmare: an executable format, embedded Flash, JavaScript, etc.; 15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging, to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all. The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic Security says complicated data formats are hard to parse and cannot be solved due to the Halting Problem. 
    W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust." Not much help though, except for "Robust," but here are some gems: all information should be parsable (paraphrasing); if it is not parsable, it cannot be converted to alternate formats; maximize compatibility in new document formats. Executable web pages are bad for security and accessibility. They say it's for a better web experience. But is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input. Solutions: accessibility and security problems come from the same source; expose the "better user experience" myth; keep your corner of the Internet parsable; remember the Halting Problem—recognize false solutions (checking and verifying tools). Stop Patching, for Stronger PCI Compliance Adam Brand, Protiviti @adamrbrand, http://www.picfun.com/ Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's (an IT guy's) time, 960 hours/year, was spent patching POSs in 850 restaurants. Often applying some patches makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is overuse of vulnerability scanners—it gives a warm fuzzy feeling and it's simple (red or green results—fix the reds). Scanners give a false sense of security. In reality, breaches from missing patches are uncommon—more common problems are default passwords, cleartext authentication, and misconfiguration (open firewall ports). Patching myths: Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead). Myth 2: the vendor decides what's critical (also PCI §6.1; but §6.2 requires user ranking of vulnerabilities instead). Myth 3: scan and rescan until it passes (but PCI §11.2.1b says this applies only to high-risk vulnerabilities). Adam says good recommendations come from NIST 800-40: use sane patching and focus on what's really important. From NIST 800-40: Proactive: use a proactive vulnerability management process: change control, configuration management, file integrity monitoring. Monitor: start with the NVD and other vulnerability alerts, not scanner results. Evaluate: public-facing system? workstation? internal server? (risk rank). Decide: on action and timeline. Test: pre-test patches (stability, functionality, rollback) for change control. Install: notify, change control, tickets. McAfee Secure & Trustmarks — a Hacker's Best Friend Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada "McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if it passes their remote scanning. The problem is that removal of a trustmark acts as a flag that you're vulnerable; it's easy to spot a status change by viewing the McAfee list on their website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If a site's certification image changes to a 1x1-pixel image, it is no longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line: a change in TrustGuard status is a flag for hackers to attack your site. The entire idea of seals is silly—you're raising a flag announcing that you're vulnerable.
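    The detection the speakers describe reduces to polling the seal image and diffing over time. A compact sketch of the same idea (in C++ with libcurl rather than the speakers' Perl; the seal URL is a made-up placeholder):

        // sealwatch.cpp - flag when a trust seal shrinks to the 1x1 placeholder (sketch)
        #include <curl/curl.h>
        #include <cstdio>

        static size_t sink(char *, size_t size, size_t nmemb, void *) {
            return size * nmemb;   // discard the body; only its size matters here
        }

        int main() {
            curl_global_init(CURL_GLOBAL_DEFAULT);
            CURL *curl = curl_easy_init();
            if (!curl) return 1;
            curl_easy_setopt(curl, CURLOPT_URL,
                             "https://seals.example.com/site/example.org/badge.png");
            curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, sink);
            double bytes = 0;
            if (curl_easy_perform(curl) == CURLE_OK) {
                curl_easy_getinfo(curl, CURLINFO_SIZE_DOWNLOAD, &bytes);
                // A 1x1 placeholder PNG is under ~100 bytes; a live badge is far larger.
                if (bytes < 100)
                    printf("seal pulled: site may be failing scans\n");
                else
                    printf("seal present (%.0f bytes)\n", bytes);
            }
            curl_easy_cleanup(curl);
            curl_global_cleanup();
            return 0;
        }

    Run daily from cron against a list of sites, the deltas reproduce the speakers' observation: a seal that disappears is itself a signal.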

    Read the article

  • Ternary operator in VB.NET

    - by Jalpesh P. Vadgama
    We all know about the ternary operator in C#. I am a big fan of the ternary operator, and I like to use it instead of using If..Else. Those who don't know about the ternary operator, please go through the link below. http://msdn.microsoft.com/en-us/library/ty67wk28(v=vs.80).aspx There you can see that the ternary operator returns one of two values based on a condition. See the following example.

        bool value = false;
        string output = string.Empty;
        // using If condition
        if (value == true)
            output = "True";
        else
            output = "False";
        // using ternary operator
        output = value == true ? "True" : "False";

    In the above example you can see how we produce the same output with the ternary operator without using an If..Else statement. Recently I was working on a project in the VB.NET language, and I was eager to know whether there is a ternary operator equivalent there or not. After searching the internet I found two ways to do it: the If operator, which works in VB.NET 2008 and higher versions, and the IIf function, which has been there since VB 6.0. So let's check the same example above with both of these. Let's create a console application with the following code.

        Module Module1
            Sub Main()
                Dim value As Boolean = False
                Dim output As String = String.Empty
                ''Output using if else statement
                If value = True Then
                    output = "True"
                Else
                    output = "False"
                End If
                Console.WriteLine("Output Using If Loop")
                Console.WriteLine(output)
                output = If(value = True, "True", "False")
                Console.WriteLine("Output using If operator")
                Console.WriteLine(output)
                output = IIf(value = True, "True", "False")
                Console.WriteLine("Output using IIF Operator")
                Console.WriteLine(output)
                Console.ReadKey()
            End Sub
        End Module

    As you can see in the above code, I have written the condition check all three ways: using an If..Else statement, using the If operator, and using the IIf function. Both If and IIf take three parameters: the first parameter is the condition you need to check, the second is the value returned when the condition is true, and the third is the value returned when the condition is false. (One difference worth noting: IIf is an ordinary function, so both the true and false arguments are evaluated regardless of the condition, while the If operator short-circuits and only evaluates the branch it returns.) Now let's run the application, and the following is the output as expected. That's it. You can see all three ways produce the same output. Hope you like it. Stay tuned for more. Till then, Happy Programming.

    Read the article

  • Toorcon 15 (2013)

    - by danx
The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo Introduction to Toorcon 15 (2013) A Tale of One Software Bypass of MS Windows 8 Secure Boot Breaching SSL, One Byte at a Time Running at 99%: Surviving an Application DoS Security Response in the Age of Mass Customized Attacks x86 Rewriting: Defeating RoP and other Shinanighans Clowntown Express: interesting bugs and running a bug bounty program Active Fingerprinting of Encrypted VPNs Making Attacks Go Backwards Mask Your Checksums—The Gorry Details Adventures with weird machines thirty years after "Reflections on Trusting Trust"

Introduction to Toorcon 15 (2013) Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here starting in 2003. As always, I've only summarized the talks I attended and that interested me enough to write about. Be aware that I may have misrepresented the speakers' remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information.

A Tale of One Software Bypass of MS Windows 8 Secure Boot Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak Yuri and Alex talked about UEFI and bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference.

MS Windows 8 Secure Boot Overview UEFI (Unified Extensible Firmware Interface) is the interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi); once replaced, it can modify the kernel. It is trivial to replace the bootloader. Today there are many legacy bootkits—UEFI replaces most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed updates). You would think Secure Boot would rely on ROM (such as is used for phones), but you can't do that for PCs—PCs use writable memory with signatures. The DXE core verifies the UEFI boot loader(s). The OS Loader (winload.efi, winresume.efi) verifies the OS kernel. A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db) and a "forbidden" database (dbx)—X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (they can't be accessed once the OS starts—the OS uses Run-Time (RT) services). The root cert uses RSA-2048 public keys and PKCS#7 format signatures. SecureBoot — enable/disable image signature checks. SetupMode — update keys, self-signed keys, and secure boot variables. CustomMode — allows updating keys. Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation.

Attacking MS Windows 8 Secure Boot Secure Boot does NOT protect from physical access—it can be disabled from the console. Each BIOS vendor implements Secure Boot differently, and there are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly.
Vendor recommendations: allow only signed UEFI firmware updates; protect UEFI firmware from direct modification in flash memory; protect FW update components; program the SPI controller securely; protect secure boot policy settings in NVRAM; protect the runtime API; disable the compatibility support module, which allows unsigned legacy code. An attacker can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If the PK is not found, the FW enters setup mode with secure boot turned off. One can also exploit the TPM in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, though. But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations: allow only signed updates; protect UEFI fw in ROM; protect the EFI variable store in ROM.

Breaching SSL, One Byte at a Time Angelo Prado and Yoel Gluck, Salesforce.com CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME sends requests with every possible character and measures the ciphertext length, looking for the plaintext which compresses the most, and recovers the cookie one byte at a time. SSL compression uses LZ77 to reduce redundancy; Huffman coding replaces common byte sequences with shorter codes. US CERT thinks the SSL compression problem is fixed, but it isn't. They convinced CERT that it wasn't fixed and they issued a CVE.

BREACH, breachattack.com BREACH exploits the HTTP response body (Accept-Encoding, Content-Encoding). It takes advantage of the fact that the response body is compressed at the HTTP level even when TLS compression is off. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret. It can use compression to guess contents one byte at a time. For example, "Supersecret SupersecreX" (a wrong guess) compresses by 10 bytes, and "Supersecret Supersecret" (a correct guess) compresses by 11 bytes (the extra byte saved signals a match), so it can find each character by guessing every character. (A toy sketch of this length-measurement idea appears at the end of this summary.) To start the guessing, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include: lookahead (guess 2 or 3 characters at a time instead of 1, which is expensive); rollback to the last known conflict; checking the compression ratio; and brute-forcing the first 3 "bootstrap" characters, if needed (expensive). Block ciphers hide the exact plaintext length; the solution is to align the response in advance to the block size. Mitigations: for length, use variable padding; for secrets, use dynamic CSRF tokens per request, change secrets over time, and separate secrets into input-less servlets. Future work: better understanding of DEFLATE/GZIP, and HTTPS extensions.

Running at 99%: Surviving an Application DoS Ryan Huber, Risk I/O Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets, or download large files.
Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use Apache alternatives such as nginx. How to identify malicious hosts: short, sudden web requests; an obvious user-agent (curl, python); the same URL requested repeatedly; no web page referer (not normal); hidden links (hide a link and see if a bot follows it); restricted access when the geo IP is not yours (unless the website is global); missing common headers in the request; regular timing; IPs first seen at the beginning of the attack; counting requests per host (usually a very large number). Use of a captcha can mitigate attacks, but you'll lose a lot of genuine users.

Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer Bouncer is software written by Ryan that works from netflow data. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it is not nice about it, no proper connection close). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers to get clean data for use by Bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, consumer library, multitasking, logging destroyed connections. Takeaways: DoS mitigation is easier with a complete picture; Bouncer is designed to make it easier to detect and defend against DoS—not a complete cure.

Security Response in the Age of Mass Customized Attacks Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/ Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world. Or designing your own PC from commodity parts. Exploit kits are another example of mass customization. The kits support multiple browsers and plugins and allow new modules. Exploit kits are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits. Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now, so the drive with malware is towards mass-customized exploits, to avoid detection. There's plenty of evidence that exploit development has Project Manager bureaucracy. From the malware they infer edicts to: support all versions of Reader; support all versions of Windows; support all versions of Flash; support all browsers; write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example). Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software/OS/browsers and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs.
However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks.

x86 Rewriting: Defeating RoP and other Shinanighans Richard Wartell The attack vector addressed here: first, some malware causes a buffer overflow. The malware has no program access, only input access, and the buffer overflow puts code onto the stack. Later the stack became non-executable; the workaround malware used was to write a bogus return address to the stack, jumping to the malware. Later came ASLR (Address Space Layout Randomization) to randomize the memory layout and make addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways. "RoP" means Return-oriented Programming attacks: RoP attacks use your own code, writing return addresses on the stack that point to (existing) exploitable code fragments found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is using anti-RoP compilers that compile source code with NO return instructions. ASLR randomizes where segments are loaded, but not the "gadgets" within them. IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine. Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code. The STIR disassembler is conservative in what to disassemble. Each basic block is moved to a random location in memory. Next, STIR writes new code sections with copies of the "basic blocks" of code in randomized locations. The old code is copied and rewritten with jumps to the new code, and the original code sections in the file are marked non-executable. STIR has better entropy than ASLR in the location of code, which makes brute-force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more "gadgets" (i.e., moved the address). Overhead is usually 5-10% on MS Windows, about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is that it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies. For example: don't create a *.{exe,msi,bat} file, or don't connect to the network after reading from the disk.

Clowntown Express: interesting bugs and running a bug bounty program Collin Greene, Facebook Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there's lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring people in with different perspectives, and are paid only for success. A bug bounty is a better use of a fixed amount of time and money versus just code review or static code analysis. The bounty program started July 2011 and has paid out $1.5 million to date.
14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small % of submitters (as with everything else)—the top paid submitters are paid 6 figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired. Bug bounties also allow FB to see bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.

Active Fingerprinting of Encrypted VPNs Anna Shubina, Dartmouth Institute for Security, Technology, and Society (I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping round trips (such as what router is used). Delayed packets are not informative about a network, especially from far away. More exploration is needed of how TCP works in real life with respect to timing.

Making Attacks Go Backwards FuzzyNop, Mandiant This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group—they have diverse skill levels, and there's a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first: "hiding" malware in the temp, recycle, or root directories; creation of unnamed scheduled tasks; obvious names of files and syscalls ("ClearEventLog"); uncleared event logs. Clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system. Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" "[RightShift]") or timestamp printf strings. Look for these in files. Presence is not proof they are used; absence is not proof they are not used. Java exploits: one can parse a jar file with idxparser.py and decompile the Java file. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware. Malware also typically needs command and control.

Application of Artificial Intelligence in Ad-Hoc Static Code Analysis John Ashaman, Security Innovation Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but this fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. However, the problem is that making a call graph is really hard. For example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST), with the nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph with the goal of finding a path through the AST (a maze) from source to sink.
The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information see his posts in the Security Innovation blog and NRefactory on GitHub.

Mask Your Checksums—The Gorry Details Eric (XlogicX) Davisson Sometimes in emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data). Other parts of the packet may leak more data about the IP. The TCP and IP checksums both refer to the same data, so one can get more bits of information by using both checksums than just one. Also, one can usually determine the OS from the TTL field and ports in a packet header. If we get hundreds of possible results (16x for each masked nibble that is unknown), one can do other things to narrow the results, such as looking at packet contents for domain or geo information. With hundreds of results, one can import them in CSV format into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached. He was able to find the exact IP address, given the geo and university of the sender. The point is: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but if you do it at all, do it correctly, so as not to create a false impression of security.

Adventures with weird machines thirty years after "Reflections on Trusting Trust" Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present) "Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper: "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs", bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author—can you trust the code? Hell no! There are too many factors—it's Babylonian in nature. Why not? Input is not well-defined/recognized: the code's assumptions about "checked" input will be violated (bug/vulnerability); for example, HTML is recursive, but regex checking is not recursive. Input may be well-formed but so complex there's no telling what it does; for example, ELF file parsing is complex and has multiple ways of parsing. Input is seen differently by different pieces of a program or toolchain. Any input is a program: input executes on input handlers (it drives state changes and transitions), and only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org) or it's a "virtual machine" for inputs to drive into pwn-age. ELF ABI (UNIX/Linux executable file format) case study: problems can arise from these steps (without planting bugs): compiler, linker, loader, ld.so/rtld, relocator, DWARF (debugger info), exceptions. The problem is you can't really automatically analyze code (it's the "halting problem" and undecidable).
The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading—you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing-complete machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file. For more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. Or DWARF exception handling data: .eh_frame + glibc == a Turing machine. X86 MMU (IDT, GDT, TSS): used address translation to create a Turing machine. The page handler reads and writes memory (on page fault) using a page table, which can be used as Turing machine byte code. There is an example on GitHub using this TM that flies a glider across the screen. Next Sergey talked about "parser differentials": having one input format but two parsers creates confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF—there are several parsers in the OS toolchain, which are all different. One can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs; the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC. Conclusions: trusting computers is not only about bugs! Bugs are part of the problem, but by no means all of it. Complex data formats mean bugs. There is no "chain of trust" in Babylon! (that is, with parser differentials). We need to squeeze complexity out of data until data stops being "code equivalent". Further information: see langsec.org, and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.
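As promised in the BREACH summary above, here is a toy C# sketch of the compression-length oracle, offered as an illustration rather than a working attack: it compresses a page locally with GZipStream (a stand-in for observing ciphertext lengths, which stream ciphers leave equal to plaintext lengths), and the secret and page layout are invented. With strings this short, ties ("too many winners") are common; real attacks need padding, look-ahead, and averaging.

using System;
using System.IO;
using System.IO.Compression;
using System.Text;

class CompressionOracleDemo
{
    // Compressed length of the text -- the quantity an attacker can observe.
    static int CompressedLength(string text)
    {
        using (var buffer = new MemoryStream())
        {
            using (var gzip = new GZipStream(buffer, CompressionMode.Compress))
            {
                byte[] bytes = Encoding.ASCII.GetBytes(text);
                gzip.Write(bytes, 0, bytes.Length);
            }
            return (int)buffer.Length;
        }
    }

    static void Main()
    {
        const string secret = "token=S3CRET";  // invented secret in the page
        string known = "token=";               // the known "bootstrap" prefix

        // Guess one character at a time: the guess that duplicates the most
        // of the real secret compresses best (shortest output).
        for (int position = 0; position < 6; position++)
        {
            char best = '?';
            int bestLength = int.MaxValue;
            foreach (char c in "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")
            {
                // The page reflects the attacker-supplied guess beside the secret.
                int length = CompressedLength(secret + " " + known + c);
                if (length < bestLength) { bestLength = length; best = c; }
            }
            known += best;
            Console.WriteLine("Recovered so far: " + known);
        }
    }
}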

    Read the article

  • C#/.NET Little Wonders: The Concurrent Collections (1 of 3)

    - by James Michael Hare
Once again we consider some of the lesser known classes and keywords of C#.  In the next few weeks, we will discuss the concurrent collections and how they have changed the face of concurrent programming. This week’s post will begin with a general introduction and discuss the ConcurrentStack<T> and ConcurrentQueue<T>.  Then in the following post we’ll discuss the ConcurrentDictionary<TKey,TValue> and ConcurrentBag<T>.  Finally, we shall close on the third post with a discussion of the BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here.

A brief history of collections In the beginning was the .NET 1.0 Framework.  And out of this framework emerged the System.Collections namespace, and it was good.  It contained all the basic things a growing programming language needs, like the ArrayList and Hashtable collections.  The main problem, of course, with these original collections is that they held items of type object, which means you had to be disciplined enough to use them correctly or you could end up with runtime errors if you got an object of a type you weren't expecting.

Then came .NET 2.0 and generics and our world changed forever!  With generics the C# language finally got an equivalent of the very powerful C++ templates.  As such, System.Collections.Generic was born and we got type-safe versions of all our favorite collections.  The List<T> succeeded the ArrayList and the Dictionary<TKey,TValue> succeeded the Hashtable and so on.  The new versions of the library were not only safer because they checked types at compile-time, in many cases they were more performant as well.  So much so that it's Microsoft's recommendation that the original System.Collections collections only be used for backwards compatibility.

So we as developers came to know and love the generic collections and took them into our hearts and embraced them.  The problem is, thread safety in both the original collections and the generic collections can be problematic, for very different reasons.

Now, if you are only doing single-threaded development you may not care – after all, no locking is required.  Even if you do have multiple threads, if a collection is “load-once, read-many” you don’t need to do anything to protect that container from multi-threaded access, as illustrated below:

public static class OrderTypeTranslator
{
    // because this dictionary is loaded once before it is ever accessed, we don't need to synchronize
    // multi-threaded read access
    private static readonly Dictionary<string, char> _translator = new Dictionary<string, char>
        {
            {"New", 'N'},
            {"Update", 'U'},
            {"Cancel", 'X'}
        };

    // the only public interface into the dictionary is for reading, so inherently thread-safe
    public static char? Translate(string orderType)
    {
        char charValue;
        if (_translator.TryGetValue(orderType, out charValue))
        {
            return charValue;
        }

        return null;
    }
}

Unfortunately, most of our computer science problems cannot get by with just single-threaded applications or with multi-threading in a load-once manner.  Looking at today's trends, it's clear to see that computers are not so much getting faster because of faster processor speeds -- we've nearly reached the limits we can push through with today's technologies -- but more because we're adding more cores to the boxes.
With this new hardware paradigm, it is even more important to use multi-threaded applications to take full advantage of parallel processing to achieve higher application speeds. So let's look at how to use collections in a thread-safe manner.

Using historical collections in a concurrent fashion The early .NET collections (System.Collections) had a Synchronized() static method that could be used to wrap the early collections to make them completely thread-safe.  This paradigm was dropped in the generic collections (System.Collections.Generic) because having a synchronized wrapper resulted in atomic locks for all operations, which could prove overkill in many multithreading situations.  Thus the paradigm shifted to having the user of the collection specify their own locking, usually with an external object:

public class OrderAggregator
{
    private static readonly Dictionary<string, List<Order>> _orders = new Dictionary<string, List<Order>>();
    private static readonly object _orderLock = new object();

    public void Add(string accountNumber, Order newOrder)
    {
        List<Order> ordersForAccount;

        // a complex operation like this should all be protected
        lock (_orderLock)
        {
            if (!_orders.TryGetValue(accountNumber, out ordersForAccount))
            {
                _orders.Add(accountNumber, ordersForAccount = new List<Order>());
            }

            ordersForAccount.Add(newOrder);
        }
    }
}

Notice how we’re performing several operations on the dictionary under one lock.  With the Synchronized() static methods of the early collections, you wouldn’t be able to specify this level of locking (a more macro-level).  So in the generic collections, it was decided that if a user needed synchronization, they could implement their own locking scheme instead so that they could provide synchronization as needed.

The need for better concurrent access to collections Here’s the problem: it’s relatively easy to write a collection that locks itself down completely for access, but anything more complex than that can be difficult and error-prone to write, much less to make perform efficiently!  For example, what if you have a Dictionary that has frequent reads but infrequent updates?  Do you want to lock down the entire Dictionary for every access?  This would be overkill and would prevent concurrent reads.  In such cases you could use something like a ReaderWriterLockSlim, which allows for multiple readers in a lock, and then once a writer grabs the lock it blocks all further readers until the writer is done (in a nutshell).  This is all very complex stuff to consider.

Fortunately, this is where the Concurrent Collections come in.  The Parallel Computing Platform team at Microsoft went to great pains to determine how to make a set of concurrent collections that would have the best performance characteristics for general-case multi-threaded use.

Now, as in all things involving threading, you should always make sure you evaluate all your container options based on the particular usage scenario and the degree of parallelism you wish to achieve. This article should not be taken to suggest that these collections are always superior to the generic collections. Each fills a particular need for a particular situation. Understanding what each container is optimized for is key to the success of your application, whether it be single-threaded or multi-threaded.
General points to consider with the concurrent collections MSDN points out that the concurrent collections all support the ICollection interface. However, since the collections are already synchronized, the IsSynchronized property always returns false, and SyncRoot always returns null.  Thus you should not attempt to use these properties for synchronization purposes.

Note that the concurrent collections may also have different operations than the traditional data structures you may be used to.  You may ask why they did this; it was done out of necessity to keep operations safe and atomic.  For example, in order to do a Pop() on a stack you have to know the stack is non-empty, but between the time you check the stack’s IsEmpty property and then do the Pop(), another thread may have come in and made the stack empty!  This is why some of the traditional operations have been changed to make them safe for concurrent use.

In addition, some properties and methods in the concurrent collections achieve concurrency by creating a snapshot of the collection, which means that some operations that were traditionally O(1) may now be O(n) in the concurrent models.  I’ll try to point these out as we talk about each collection so you can be aware of any potential performance impacts.  Finally, all the concurrent containers are safe for enumeration even while being modified, but some of the containers support this in different ways (snapshot vs. dirty iteration).  Once again I’ll highlight how thread-safe enumeration works for each collection.

ConcurrentStack<T>: The thread-safe LIFO container The ConcurrentStack<T> is the thread-safe counterpart to the System.Collections.Generic.Stack<T>, which as you may remember is your standard last-in-first-out container.  If you think of algorithms that favor stack usage (for example, depth-first searches of graphs and trees) then you can see how using a thread-safe stack would be of benefit.

The ConcurrentStack<T> achieves thread-safe access by using System.Threading.Interlocked operations.  This means that multi-threaded access to the stack requires no traditional locking and is very, very fast!

For the most part, the ConcurrentStack<T> behaves like its Stack<T> counterpart with a few differences: Pop() was removed in favor of TryPop(), which returns true if an item existed and was popped and false if empty. PushRange() and TryPopRange() were added, which allow you to push multiple items and pop multiple items atomically. Count takes a snapshot of the stack and then counts the items, which means it is an O(n) operation; if you just want to check for an empty stack, call IsEmpty instead, which is O(1). ToArray() and GetEnumerator() both also take snapshots, which means that iteration over a stack will give you a static view at the time of the call and will not reflect updates.

Pushing on a ConcurrentStack<T> works just like you’d expect except for the aforementioned PushRange() method that was added to allow you to push a range of items concurrently:

var stack = new ConcurrentStack<string>();

// adding to stack is much the same as before
stack.Push("First");

// but you can also push multiple items in one atomic operation (no interleaves)
stack.PushRange(new [] { "Second", "Third", "Fourth" });

For looking at the top item of the stack (without removing it) the Peek() method has been removed in favor of a TryPeek().
This is because in order to do a peek the stack must be non-empty, but between the time you check for empty and the time you execute the peek the stack contents may have changed.  Thus the TryPeek() was created to be an atomic check for empty, and then peek if not empty:

// to look at top item of stack without removing it, can use TryPeek.
// Note that there is no Peek(), this is because you need to check for empty first. TryPeek does.
string item;
if (stack.TryPeek(out item))
{
    Console.WriteLine("Top item was " + item);
}
else
{
    Console.WriteLine("Stack was empty.");
}

Finally, to remove items from the stack, we have TryPop() for single items and TryPopRange() for multiple items.  Just like TryPeek(), these operations replace Pop() since we need to ensure atomically that the stack is non-empty before we pop from it:

// to remove items, use TryPop or TryPopRange to get multiple items atomically (no interleaves)
if (stack.TryPop(out item))
{
    Console.WriteLine("Popped " + item);
}

// TryPopRange will only pop up to the number of spaces in the array, the actual number popped is returned.
var poppedItems = new string[2];
int numPopped = stack.TryPopRange(poppedItems);

foreach (var theItem in poppedItems.Take(numPopped))
{
    Console.WriteLine("Popped " + theItem);
}

Finally, note that as stated before, GetEnumerator() and ToArray() get a snapshot of the data at the time of the call.  That means if you are enumerating the stack you will get a snapshot of the stack at the time of the call.  This is illustrated below:

var stack = new ConcurrentStack<string>();

// adding to stack is much the same as before
stack.Push("First");

var results = stack.GetEnumerator();

// but you can also push multiple items in one atomic operation (no interleaves)
stack.PushRange(new [] { "Second", "Third", "Fourth" });

while (results.MoveNext())
{
    Console.WriteLine("Stack only has: " + results.Current);
}

The only item that will be printed out in the above code is "First" because the snapshot was taken before the other items were added. This may sound like an issue, but it’s really for safety and is more correct.  You don’t want to enumerate a stack and have half a view of the stack before an update and half a view of the stack after an update, after all.  In addition, note that this is still thread-safe, whereas iterating through a non-concurrent collection while updating it in the old collections would cause an exception.

ConcurrentQueue<T>: The thread-safe FIFO container The ConcurrentQueue<T> is the thread-safe counterpart of the System.Collections.Generic.Queue<T> class.  The concurrent queue uses an underlying list of small arrays and lock-free System.Threading.Interlocked operations on the head and tail arrays.  Once again, this allows us to do thread-safe operations without the need for heavy locks!

The ConcurrentQueue<T> (like the ConcurrentStack<T>) has some departures from the non-concurrent counterpart.  Most notably: Dequeue() was removed in favor of TryDequeue(), which returns true if an item existed and was dequeued and false if empty. Count does not take a snapshot; it subtracts the head and tail index to get the count.  This results overall in an O(1) complexity, which is quite good.  It’s still recommended, however, that for empty checks you call IsEmpty instead of comparing Count to zero. ToArray() and GetEnumerator() both take snapshots.
This means that iteration over a queue will give you a static view at the time of the call and will not reflect updates.

The Enqueue() method on the ConcurrentQueue<T> works much the same as on the generic Queue<T>:

var queue = new ConcurrentQueue<string>();

// adding to queue is much the same as before
queue.Enqueue("First");
queue.Enqueue("Second");
queue.Enqueue("Third");

For front item access, the TryPeek() method must be used to attempt to see the first item of the queue.  There is no Peek() method since, as you’ll remember, we can only peek on a non-empty queue, so we must have an atomic TryPeek() that checks for empty and then returns the first item if the queue is non-empty:

// to look at first item in queue without removing it, can use TryPeek.
// Note that there is no Peek(), this is because you need to check for empty first. TryPeek does.
string item;
if (queue.TryPeek(out item))
{
    Console.WriteLine("First item was " + item);
}
else
{
    Console.WriteLine("Queue was empty.");
}

Then, to remove items you use TryDequeue().  Once again this is for the same reason we have TryPeek() and not Peek():

// to remove items, use TryDequeue. If queue is empty returns false.
if (queue.TryDequeue(out item))
{
    Console.WriteLine("Dequeued first item " + item);
}

Just like the concurrent stack, the ConcurrentQueue<T> takes a snapshot when you call ToArray() or GetEnumerator(), which means that subsequent updates to the queue will not be seen when you iterate over the results.  Thus once again the code below will only show the first item, since the other items were added after the snapshot:

var queue = new ConcurrentQueue<string>();

// adding to queue is much the same as before
queue.Enqueue("First");

var iterator = queue.GetEnumerator();

queue.Enqueue("Second");
queue.Enqueue("Third");

// only shows First
while (iterator.MoveNext())
{
    Console.WriteLine("Dequeued item " + iterator.Current);
}

Using collections concurrently You’ll notice in the examples above I stuck to using single-threaded examples so as to make them deterministic and the results obvious.  Of course, if we used these collections in a truly multi-threaded way the results would be less deterministic, but would still be thread-safe and with no locking on your part required!

For example, say you have an order processor that takes an IEnumerable<Order> and handles each order in a multi-threaded fashion, then groups the responses together in a concurrent collection for aggregation.  This can be done easily with the TPL’s Parallel.ForEach():

public static IEnumerable<OrderResult> ProcessOrders(IEnumerable<Order> orderList)
{
    var proxy = new OrderProxy();
    var results = new ConcurrentQueue<OrderResult>();

    // notice that we can process all these in parallel and put the results
    // into our concurrent collection without needing any external locking!
    Parallel.ForEach(orderList,
        order =>
        {
            var result = proxy.PlaceOrder(order);

            results.Enqueue(result);
        });

    return results;
}

Summary Obviously, if you do not need multi-threaded safety, you don’t need to use these collections, but when you do need multi-threaded collections these are just the ticket! The plethora of features (I always think of the movie The Three Amigos when I say plethora) built into these containers and the amazing way they achieve thread-safe access in an efficient manner is wonderful to behold.
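For contrast with the lock-free containers above, here is a minimal sketch of the ReaderWriterLockSlim approach mentioned earlier for the frequent-read, infrequent-update dictionary case. The cache class and its names are invented for illustration; it is the kind of hand-rolled synchronization the concurrent collections are designed to spare you:

using System;
using System.Collections.Generic;
using System.Threading;

// A read-mostly map protected by ReaderWriterLockSlim: many concurrent
// readers are allowed, while a writer temporarily excludes everyone.
public class ReadMostlyCache<TKey, TValue>
{
    private readonly Dictionary<TKey, TValue> _map = new Dictionary<TKey, TValue>();
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    public bool TryGet(TKey key, out TValue value)
    {
        _lock.EnterReadLock();          // many readers may hold this at once
        try { return _map.TryGetValue(key, out value); }
        finally { _lock.ExitReadLock(); }
    }

    public void Set(TKey key, TValue value)
    {
        _lock.EnterWriteLock();         // writer blocks readers until done
        try { _map[key] = value; }
        finally { _lock.ExitWriteLock(); }
    }
}

Getting this pattern right (and fast) everywhere it is needed is exactly the complexity the Concurrent Collections remove.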
Stay tuned next week where we’ll continue our discussion with the ConcurrentBag<T> and the ConcurrentDictionary<TKey,TValue>. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here.

    Read the article

  • Enterprise Manager Database Control Configuration - Recovering From Errors Due to CA Expiry on Oracle Database 10.2.0.4 or 10.2.0.5 from 31-Dec-2010 onwards

    - by jayatheertha.rao(at)oracle.com
Description

What is the Issue? In Enterprise Manager Database Control with Oracle Database 10.2.0.4 and 10.2.0.5, the root certificate used to secure communications via the Secure Socket Layer (SSL) protocol will expire on 31-Dec-2010 00:00:00. The certificate expiration will cause errors if you attempt to configure Database Control on or after 31-Dec-2010. Existing Database Control configurations are not affected by this issue.

Likelihood of Occurrence

What Versions Are Affected? The issue impacts configuration of Database Control with Oracle Database 10.2.0.4 and 10.2.0.5 only. It does not impact database creation or upgrade. The issue does not impact existing Database Control configurations.

What Happens During Database Control Configuration Failure?

Database Configuration Assistant (DBCA) and Database Upgrade Assistant (DBUA) Errors

Database Configuration Assistant (DBCA) and Database Upgrade Assistant (DBUA) will report the following error in the console:

Could not complete the Enterprise Manager configuration.
Enterprise manager configuration failed due to the following error -
Error starting Database Control

Enterprise Manager Configuration Assistant (EMCA) Errors

Enterprise Manager Configuration Assistant (EMCA) will write errors similar to those below to the emca.log file:

CONFIG: Securing Database Control completed successfully.
Jan 2, 2011 7:22:47 PM oracle.sysman.emcp.ParamsManager getParam
CONFIG: No value was set for the parameter ORACLE_HOSTNAME.
Jan 2, 2011 7:22:47 PM oracle.sysman.emcp.util.DBControlUtil startOMS
INFO: Starting Database Control (this may take a while) ...
Jan 2, 2011 7:22:47 PM oracle.sysman.emcp.util.PlatformInterface addEnvVarToList
CONFIG: Value for env var 'ORACLE_HOSTNAME' is '', discarding the same
CONFIG: Returning env array from cache
Jan 2, 2011 7:22:47 PM oracle.sysman.emcp.util.PlatformInterface executeCommand
CONFIG: Starting execution: /myhost/bin/emctl start dbconsole
Jan 2, 2011 7:27:26 PM oracle.sysman.emcp.util.PlatformInterface executeCommand
CONFIG: Exit value of 1
Jan 2, 2011 7:27:26 PM oracle.sysman.emcp.util.PlatformInterface executeCommand
CONFIG: Oracle Enterprise Manager 10g Database Control Release 10.2.0.4.0
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
https://myhost:5501/em/console/aboutApplication
Starting Oracle Enterprise Manager 10g Database Control .............................................................................................
failed.
------------------------------------------------------------------
Logs are generated in directory /myhost/sysman/log
Jan 2, 2011 7:27:26 PM oracle.sysman.emcp.util.PlatformInterface executeCommand
WARNING: Error executing /myhost/bin/emctl start dbconsole
Jan 2, 2011 7:27:26 PM oracle.sysman.emcp.EMConfig perform
SEVERE: Error starting Database Control
Refer to the log file at /myhost/dbua/d4/upgrade/emConfig.log for more details.
Jan 2, 2011 7:27:26 PM oracle.sysman.emcp.EMConfig perform
CONFIG: Stack Trace:
oracle.sysman.emcp.exception.EMConfigException: Error starting Database Control
at oracle.sysman.emcp.EMDBPostConfig.performUpgrade(EMDBPostConfig.java:763)
at oracle.sysman.emcp.EMDBPostConfig.invoke(EMDBPostConfig.java:232)
at oracle.sysman.emcp.EMDBPostConfig.invoke(EMDBPostConfig.java:193)
at oracle.sysman.emcp.EMConfig.perform(EMConfig.java:184)
at oracle.sysman.assistants.util.em.EMConfiguration.run(EMConfiguration.java:436)
at oracle.sysman.assistants.util.em.EMConfigStep.executeImpl(EMConfigStep.java:140)
at oracle.sysman.assistants.util.step.BasicStep.execute(BasicStep.java:210)
at oracle.sysman.assistants.util.step.BasicStep.callStep(BasicStep.java:251)
at oracle.sysman.assistants.dbma.backend.EMConfigStep.executeStepImpl(EMConfigStep.java:104)
at oracle.sysman.assistants.dbma.backend.SummarizableStep.executeImpl(SummarizableStep.java:175)
at oracle.sysman.assistants.util.step.BasicStep.execute(BasicStep.java:210)
at oracle.sysman.assistants.util.step.Step.execute(Step.java:140)
at oracle.sysman.assistants.util.step.StepContext$ModeRunner.run(StepContext.java:2488)
at java.lang.Thread.run(Thread.java:534)

The EMCA console will display output similar to the following:

[aime@myhost09 db_1]$ bin/emca -config dbcontrol db -repos recreate -cluster
STARTED EMCA at Jan 11, 2011 4:11:01 PM
EM Configuration Assistant, Version 10.2.0.1.0 Production
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Enter the following information:
Database unique name: catest
Database Control is already configured for the database catest
You have chosen to configure Database Control for managing the database catest
This will remove the existing configuration and the default settings and perform a fresh configuration
Do you wish to continue? [yes(Y)/no(N)]: Y
Listener port number: 1521
Cluster name: mycluster
Password for SYS user:
Password for DBSNMP user:
Password for SYSMAN user:
Email address for notifications (optional):
Outgoing Mail (SMTP) server for notifications (optional):
........
Jan 11, 2011 4:18:05 PM oracle.sysman.emcp.util.DBControlUtil secureDBConsole
INFO: Securing Database Control (this may take a while) ...
Jan 11, 2011 4:19:31 PM oracle.sysman.emcp.util.DBControlUtil startOMS
INFO: Starting Database Control (this may take a while) ...
Jan 11, 2011 4:28:38 PM oracle.sysman.emcp.EMConfig perform
SEVERE: Error starting Database Control
Refer to the log file at /myhost/oracle/product/10.2.0/db_1/cfgtoollogs/emca/catest/emca_2011-01-11_04-11-01-PM.log for more details.
Could not complete the configuration. Refer to the log file at /myhost/oracle/product/10.2.0/db_1/cfgtoollogs/emca/catest/emca_2011-01-11_04-11-01-PM.log for more details.

At the end of the database installation on non-Windows platforms, both Database Control and the Management Agent will be up and running, even though the status of both components will be shown as not running, because EMCTL will be unable to connect to the dbconsole process. In addition, Database Control will fail to connect to the Agent.
Note for Windows Platform Only: On Windows, the dbconsole process will be stopped after the failed configuration attempt.

Note that the tool used to perform Database Control configuration (DBUA, DBCA or EMCA) will also wait for 15 minutes for Database Control to start, then time out.

The output of the "emctl status dbconsole" command incorrectly returns the status of Database Control, as shown below:

$ ./emctl status dbconsole
Oracle Enterprise Manager 10g Database Control Release 10.2.0.1.0
Copyright (c) 1996, 2005 Oracle Corporation. All rights reserved.
https://myhost:1158/em/console/aboutApplication
Oracle Enterprise Manager 10g is not running.

The output of the "emctl status agent" command incorrectly returns the status of the Agent, as shown below:

$ ./emctl status agent
Oracle Enterprise Manager 10g Database Control Release 10.2.0.1.0
Copyright (c) 1996, 2005 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
Agent is Not Running

For the solution, refer to Note: 1222603.1 and Note: 1217493.1.
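Since the failure is driven by an expired root certificate, one quick diagnostic is to pull the certificate that Database Control presents and look at its expiry date. Below is a minimal C# sketch of such a check; the host name and port are placeholders taken from the log excerpts above, and this is a generic TLS-certificate probe, not part of EMCA or the official fix (for that, see the referenced Notes):

using System;
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Cryptography.X509Certificates;

class CertExpiryCheck
{
    static void Main()
    {
        string host = "myhost"; // placeholder Database Control host
        int port = 1158;        // placeholder Database Control HTTPS port

        using (var client = new TcpClient(host, port))
        using (var ssl = new SslStream(client.GetStream(), false,
            (sender, cert, chain, errors) => true)) // accept any cert; we only inspect it
        {
            ssl.AuthenticateAsClient(host);
            var cert2 = new X509Certificate2(ssl.RemoteCertificate);
            Console.WriteLine("Subject:   " + cert2.Subject);
            Console.WriteLine("Not after: " + cert2.NotAfter);
            Console.WriteLine(cert2.NotAfter < DateTime.Now
                ? "Certificate has EXPIRED."
                : "Certificate is still valid.");
        }
    }
}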

    Read the article

  • Scrum in 5 Minutes

    - by Stephen.Walther
The goal of this blog entry is to explain the basic concepts of Scrum in less than five minutes. You learn how Scrum can help a team of developers to successfully complete a complex software project.

Product Backlog and the Product Owner Imagine that you are part of a team which needs to create a new website – for example, an e-commerce website. You have an overwhelming amount of work to do. You need to build (or possibly buy) a shopping cart, install an SSL certificate, create a product catalog, create a Facebook page, and at least a hundred other things that you have not thought of yet. According to Scrum, the first thing you should do is create a list. Place the highest priority items at the top of the list and the lower priority items lower in the list. For example, creating the shopping cart and buying the domain name might be high priority items and creating a Facebook page might be a lower priority item. In Scrum, this list is called the Product Backlog. How do you prioritize the items in the Product Backlog? Different stakeholders in the project might have different priorities. Gary, your division VP, thinks that it is crucial that the e-commerce site has a mobile app. Sally, your direct manager, thinks taking advantage of new HTML5 features is much more important. Multiple people are pulling you in different directions. According to Scrum, it is important that you always designate one person, and only one person, as the Product Owner. The Product Owner is the person who decides what items should be added to the Product Backlog and the priority of the items in the Product Backlog. The Product Owner could be the customer who is paying the bills, the project manager who is responsible for delivering the project, or a customer representative. The critical point is that the Product Owner must always be a single person and that single person has absolute authority over the Product Backlog.

Sprints and the Sprint Backlog So now the developer team has a prioritized list of items and they can start work. The team starts implementing the first item in the Backlog — the shopping cart — and the team is making good progress. Unfortunately, however, half-way through the work of implementing the shopping cart, the Product Owner changes his mind. The Product Owner decides that it is much more important to create the product catalog before the shopping cart. With some frustration, the team switches their development efforts to focus on implementing the product catalog. However, part way through completing this work, once again the Product Owner changes his mind about the highest priority item. Getting work done when priorities are constantly shifting is frustrating for the developer team and it results in lower productivity. At the same time, however, the Product Owner needs to have absolute authority over the priority of the items which need to get done. Scrum solves this conflict with the concept of Sprints. In Scrum, a developer team works in Sprints. At the beginning of a Sprint the developers and the Product Owner agree on the items from the backlog which they will complete during the Sprint. This subset of items from the Product Backlog becomes the Sprint Backlog. During the Sprint, the Product Owner is not allowed to change the items in the Sprint Backlog. In other words, the Product Owner cannot shift priorities on the developer team during the Sprint. Different teams use Sprints of different lengths, such as one-month Sprints, two-week Sprints, and one-week Sprints.
For high-stress, time-critical projects, teams typically choose shorter sprints such as one-week sprints. For more mature projects, longer one-month sprints might be more appropriate. A team can pick whatever Sprint length makes sense for them, just as long as the team is consistent. You should pick a Sprint length and stick with it.

Daily Scrum During a Sprint, the developer team needs to have meetings to coordinate their work on completing the items in the Sprint Backlog. For example, the team needs to discuss who is working on what and whether any blocking issues have been discovered. Developers hate meetings (well, sane developers hate meetings). Meetings take developers away from their work of actually implementing stuff as opposed to talking about implementing stuff. However, a developer team which never has meetings and never coordinates their work also has problems. For example, Fred might get stuck on a programming problem for days and never reach out for help even though Tom (who sits in the cubicle next to him) has already solved the very same problem. Or, both Tom and Fred might have started working on the same item from the Sprint Backlog at the same time. In Scrum, these conflicting needs – limiting meetings but enabling team coordination – are resolved with the idea of the Daily Scrum. The Daily Scrum is a meeting for coordinating the work of the developer team which happens once a day. To keep the meeting short, each developer answers only the following three questions: 1. What have you done since yesterday? 2. What do you plan to do today? 3. Any impediments in your way? During the Daily Scrum, developers are not allowed to talk about issues with their cat, do demos of their latest work, or tell heroic stories of programming problems overcome. The meeting must be kept short — typically about 15 minutes. Issues which come up during the Daily Scrum should be discussed in separate meetings which do not involve the whole developer team.

Stories and Tasks Items in the Product or Sprint Backlog – such as building a shopping cart or creating a Facebook page – are often referred to as User Stories or Stories. The Stories are created by the Product Owner and should represent some business need. Unlike the Product Owner, the developer team needs to think about how a Story should be implemented. At the beginning of a Sprint, the developer team takes the Stories from the Sprint Backlog and breaks the stories into tasks. For example, the developer team might take the Create a Shopping Cart story and break it into the following tasks: · Enable users to add and remove items from shopping cart · Persist the shopping cart to database between visits · Redirect user to checkout page when Checkout button is clicked During the Daily Scrum, members of the developer team volunteer to complete the tasks required to implement the next Story in the Sprint Backlog. When a developer talks about what he did yesterday or plans to do tomorrow, then the developer should be referring to a task. Stories are owned by the Product Owner and a story is all about business value. In contrast, the tasks are owned by the developer team and a task is all about implementation details. A story might take several days or weeks to complete. A task is something which a developer can complete in less than a day. Some teams get lazy about breaking stories into tasks.
Neglecting to break stories into tasks can lead to “Never Ending Stories.” If you don’t break a story into tasks, then you can’t know how much of a story has actually been completed, because you don’t have a clear idea about the implementation steps required to complete the story.

Scrumboard During the Daily Scrum, the developer team uses a Scrumboard to coordinate their work. A Scrumboard contains a list of the stories for the current Sprint, the tasks associated with each Story, and the state of each task. The developer team uses the Scrumboard so everyone on the team can see, at a glance, what everyone is working on. As a developer works on a task, the task moves from state to state and the state of the task is updated on the Scrumboard. Common task states are ToDo, In Progress, and Done. Some teams include additional task states such as Needs Review or Needs Testing. Some teams use a physical Scrumboard. In that case, you use index cards to represent the stories and the tasks and you tack the index cards onto a physical board. Using a physical Scrumboard has several disadvantages. A physical Scrumboard does not work well with a distributed team – for example, it is hard to share the same physical Scrumboard between Boston and Seattle. Also, generating reports from a physical Scrumboard is more difficult than generating reports from an online Scrumboard.

Estimating Stories and Tasks Stakeholders in a project, the people investing in a project, need to have an idea of how a project is progressing and when the project will be completed. For example, if you are investing in creating an e-commerce site, you need to know when the site can be launched. It is not enough to just say that “the project will be done when it is done” because the stakeholders almost certainly have a limited budget to devote to the project. The people investing in the project cannot determine the business value of the project unless they can have an estimate of how long it will take to complete the project. Developers hate to give estimates. The reason that developers hate to give estimates is that the estimates are almost always completely made up. For example, you really don’t know how long it takes to build a shopping cart until you finish building a shopping cart, and at that point, the estimate is no longer useful. The problem is that writing code is much more like Finding a Cure for Cancer than Building a Brick Wall. Building a brick wall is very straightforward. After you learn how to add one brick to a wall, you understand everything that is involved in adding a brick to a wall. There is no additional research required and no surprises. If, on the other hand, I assembled a team of scientists and asked them to find a cure for cancer, and to estimate exactly how long it will take, they would have no idea. The problem is that there are too many unknowns. I don’t know how to cure cancer, I need to do a lot of research here, so I cannot even begin to estimate how long it will take. So developers hate to provide estimates, but the Product Owner and other product stakeholders have a legitimate need for estimates. Scrum resolves this conflict by using the idea of Story Points. Different teams use different units to represent Story Points. For example, some teams use shirt sizes such as Small, Medium, Large, and X-Large. Some teams prefer to use Coffee Cup sizes such as Tall, Short, and Grande. Finally, some teams like to use numbers from the Fibonacci series.
These alternative units are converted into a Story Point value. Regardless of the type of unit which you use to represent Story Points, the goal is the same. Instead of attempting to estimate a Story in hours (which is doomed to failure), you use a much less fine-grained measure of work. A developer team is much more likely to be able to estimate that a Story is Small or X-Large than the exact number of hours required to complete the story. So you can think of Story Points as a compromise between the needs of the Product Owner and the developer team.

When a Sprint starts, the developer team devotes more time to thinking about the Stories in the Sprint, and the developer team breaks the Stories into Tasks. In Scrum, you estimate the work required to complete a Story by using Story Points and you estimate the work required to complete a task by using hours. The difference between Stories and Tasks is that you don’t create a task until you are just about ready to start working on it. A task is something that you should be able to complete within a day, so you have a much better chance of providing an accurate estimate of the work required to complete a task than a story.

Burndown Charts

In Scrum, you use Burndown charts to represent the remaining work on a project. You use Release Burndown charts to represent the overall remaining work for a project and you use Sprint Burndown charts to represent the overall remaining work for a particular Sprint. You create a Release Burndown chart by calculating the remaining number of uncompleted Story Points for the entire Product Backlog every day. The vertical axis represents Story Points and the horizontal axis represents time. A Sprint Burndown chart is similar to a Release Burndown chart, but it focuses on the remaining work for a particular Sprint. There are two different types of Sprint Burndown charts: you can represent the remaining work in a Sprint either with Story Points or with task hours (the example chart in the original post, taken from Wikipedia, uses hours).

When each Product Backlog Story is completed, the Release Burndown chart slopes down. When each Story or task is completed, the Sprint Burndown chart slopes down. Burndown charts do not always slope down over time, however. As new work is added to the Product Backlog, the Release Burndown chart slopes up. If new tasks are discovered during a Sprint, the Sprint Burndown chart will also slope up. The purpose of a Burndown chart is to give you a way to track team progress over time. If, halfway through a Sprint, the Sprint Burndown chart is still climbing, then you know that you are in trouble.

Team Velocity

Stakeholders in a project always want more work done faster. For example, the Product Owner for the e-commerce site wants the website to launch before tomorrow. Developers tend to be overly optimistic; rarely do developers acknowledge the physical limitations of reality. So project stakeholders and the developer team often collude to delude themselves about how much work can be done and how quickly. Too many software projects begin in a state of optimism and end in frustration as deadlines zoom by. In Scrum, this problem is overcome by calculating a number called the Team Velocity. The Team Velocity is a measure of the average number of Story Points which a team has completed in previous Sprints.
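To make the arithmetic concrete, here is a minimal C# sketch (my own illustration, not from the original post; the class and method names are hypothetical) that computes a Team Velocity from completed Sprints and flags an overcommitted plan:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class VelocityCalculator
    {
        // Team Velocity: the average number of Story Points completed per finished Sprint.
        public static double TeamVelocity(IReadOnlyCollection<int> completedPointsPerSprint)
            => completedPointsPerSprint.Average();

        // A plan is overcommitted when it exceeds what the team has averaged so far.
        public static bool IsOvercommitted(IReadOnlyCollection<int> history, int plannedPoints)
            => plannedPoints > TeamVelocity(history);

        static void Main()
        {
            var history = new List<int> { 21, 18, 24 }; // Story Points from the last three Sprints
            Console.WriteLine($"Velocity: {TeamVelocity(history):F1}");               // Velocity: 21.0
            Console.WriteLine(IsOvercommitted(history, 30) ? "Overcommitted" : "OK"); // Overcommitted
        }
    }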
Knowing the Team Velocity is important during the Sprint Planning meeting, when the Product Owner and the developer team work together to determine the number of stories which can be completed in the next Sprint. If you know the Team Velocity, then you can avoid committing to more work than the team has been able to accomplish in the past, and your team is much more likely to complete all of the work required for the next Sprint.

Scrum Master

There are three roles in Scrum: the Product Owner, the developer team, and the Scrum Master. I’ve already discussed the Product Owner. The Product Owner is the one and only person who maintains the Product Backlog and prioritizes the stories. I’ve also described the role of the developer team. The members of the developer team do the work of implementing the stories by breaking the stories into tasks. The final role, which I have not discussed, is the role of the Scrum Master.

The Scrum Master is responsible for ensuring that the team is following the Scrum process. For example, the Scrum Master is responsible for making sure that there is a Daily Scrum meeting and that everyone answers the standard three questions. The Scrum Master is also responsible for removing (non-technical) impediments which the team might encounter. For example, if the team cannot start work until everyone installs the latest version of Microsoft Visual Studio, then the Scrum Master has the responsibility of working with management to get the latest version of Visual Studio as quickly as possible. The Scrum Master can be a member of the developer team. Furthermore, different people can take on the role of the Scrum Master over time. The Scrum Master, however, cannot be the same person as the Product Owner.

Using SonicAgile

SonicAgile (SonicAgile.com) is an online tool which you can use to manage your projects using Scrum. You can use the SonicAgile Product Backlog to create a prioritized list of stories. You can estimate the size of the Stories using different Story Point units such as Shirt Sizes and Coffee Cup sizes. You can use SonicAgile during the Sprint Planning meeting to select the Stories that you want to complete during a particular Sprint. You can configure Sprints to be any length of time. SonicAgile calculates Team Velocity automatically and displays a warning when you add too many stories to a Sprint. In other words, it warns you when it thinks you are overcommitting in a Sprint.

SonicAgile also includes a Scrumboard which displays the list of Stories selected for a Sprint and the tasks associated with each story. You can drag tasks from one task state to another. Finally, SonicAgile enables you to generate Release Burndown and Sprint Burndown charts. You can use these charts to view the progress of your team. To learn more about SonicAgile, visit SonicAgile.com.

Summary

In this post, I described many of the basic concepts of Scrum. You learned how a Product Owner uses a Product Backlog to create a prioritized list of stories. I explained why work is completed in Sprints so the developer team can be more productive. I also explained how a developer team uses the Daily Scrum to coordinate their work. You learned how the developer team uses a Scrumboard to see, at a glance, who is working on what and the state of each task. I also discussed Burndown charts. You learned how you can use both Release and Sprint Burndown charts to track team progress in completing a project.
Finally, I described the crucial role of the Scrum Master – the person who is responsible for ensuring that the rules of Scrum are being followed. My goal was not to describe all of the concepts of Scrum. This post was intended to be an introductory overview. For a comprehensive explanation of Scrum, I recommend reading Ken Schwaber’s book Agile Project Management with Scrum: http://www.amazon.com/Agile-Project-Management-Microsoft-Professional/dp/073561993X/ref=la_B001H6ODMC_1_1?ie=UTF8&qid=1345224000&sr=1-1

    Read the article

  • Suggestions on switching from LAMP-based web design/development to game design/development

    - by Sandeepan Nath
    I have around 2.5 years of experience as a web developer cum designer, working mainly on the LAMP platform. Now, I want to try out game development (of the likes of First Person Shooter games like Call of Duty (COD)). It is one of my dreams to some day succeed in making a profitable, popular, commercial game of this type. However, I have never done any kind of business nor even freelancing yet, even in the web domain. Okay, first things first: I am just starting, and I don't yet have any idea about the technologies, languages, engines (game engines) etc. involved. I would like this question to be a complete guide for people with similar interests.

    Best resources for getting a hold really fast
    What would be the best approach to get a basic hold of the domain really fast? Any resource(s) for programmers coming from other domains/experienced in other domains would be the ideal ones for me. E.g., if anybody asked me for a good resource for quickly learning PHP/MySQL, I would suggest books like "How to do everything with PHP & MySQL" - because - it introduces all the basics of the domain (not the advanced things, which can later be learnt by practice and also a lot by searching Stack Overflow questions); it contains some very nice working projects at the end, which help in applying the skills learnt in the chapters of the book. This is the best way for self-learners, I feel. I would appreciate some similar resource which connects all concepts together to give the bigger picture. I have read about C, C++, C#, and Java being used in game programming, but I am not sure which language to go for (I have previously learnt a little of C and Java). I have also read about game engines, but there would be various other concepts.

    Commonly accepted ways of learning
    Should 3D games like these be tried after 2D games? Are there some commonly accepted ways of learning such kinds of games? Like in web development, we should go for frameworks after practising well with the basic language, and AJAX after getting properly done with simple page-reload processing, etc. Apart from these, any useful tips (like language choices etc.) would be much appreciated. Like it is highly recommended to contribute to open source web projects for getting recognition, are there similar open source game projects? Thanks, Sandeepan

    Read the article

  • Integrating Oracle Forms Applications 11g Into SOA (4-6/May/10)

    - by Claudia Costa
    Workshop Description
    This free, three-day workshop is targeted at Oracle Forms professionals interested in integrating Oracle Forms into a Service Oriented Architecture. The workshop highlights how Forms can be part of a Service Oriented Architecture, and how the Oracle Forms functionalities make it possible to integrate existing (or new) Forms applications with new or existing development utilizing Service Oriented Architecture concepts. The goal is to understand the incremental approach that Forms provides to developers who need to extend their business platform to JEE, allowing Oracle Forms customers to retain their investment in Oracle Forms while leveraging the opportunities offered by complementing technologies. During the event the attendees will implement the Oracle Forms functionalities that make it possible to integrate with SOA. Register Now!

    Prerequisites
    · Knowledge of the Oracle Forms development environment (mandatory)
    · Basic knowledge of the Oracle database
    · Basic knowledge of the Java Programming Language
    · Basic knowledge of Oracle JDeveloper or another Java IDE

    System Requirements
    This workshop requires attendees to provide their own laptops for this class. Attendee laptops must meet the following minimum hardware/software requirements:
    · Laptop/PC with minimum 4 GB RAM
    · Oracle Database
    · Oracle Forms 11g R1 PS1 (WebLogic Server 10.1.3.2 + Portal, Forms, Reports and Discoverer)
    · Oracle JDeveloper 11g R1 PS1 http://download.oracle.com/otn/java/jdeveloper/1112/jdevstudio11112install.exe
    · TCP-IP Loopback Adapter Installation (before the SOA Suite installation)
    · Oracle SOA Suite 11g R1 PS1 (without BAM component). When asked for an admin password, please use 'welcome1'. http://download.oracle.com/otn/nt/middleware/11g/ofm_rcu_win_11.1.1.2.0_disk1_1of1.zip http://download.oracle.com/otn/nt/middleware/11g/ofm_soa_generic_11.1.1.2.0_disk1_1of1.zip
    · Oracle BI Publisher 10.1.3.4.1 http://download.oracle.com/otn/nt/ias/101341/bipublisher_windows_x86_101341.zip
    · Oracle BI Publisher Desktop 10.1.3.4 http://download.oracle.com/otn/nt/ias/101341/bipublisher_desktop_windows_x86_101341.zip
    · At least 1 Oracle Forms solution already upgraded to the Oracle FMW 11g platform.

    ------------------------------------------------------------------------------------------

    Time and Location:
    4-6 May / 9:30-18:00
    Oracle, Porto Salvo
    Register Here
    For more information please contact: [email protected]

    Read the article

  • CodePlex Daily Summary for Sunday, March 07, 2010

    CodePlex Daily Summary for Sunday, March 07, 2010

    New Projects
    Algorithminator: Universal .NET algorithm visualizer, which helps you to illustrate any algorithm, written in any .NET language. Still in development.
    ALToolkit: Contains a set of handy .NET components/classes. Currently it contains: * A Numeric Text Box (an Extended NumericUpDown) * A Splash Screen base fo...
    Automaton Home: Automaton is a home automation software built with a n-Tier, MVVM pattern utilizing WCF, EF, WPF, Silverlight and XBAP.
    Developer Controls: Developer Controls contains various controls to help build applications that can script/write code.
    Dynamic Reference Manager: Dynamic Reference Manager is a set (more like a small group) of classes and attributes written in C# that allows any .NET program to reference othe...
    indiologic: Utilities of an Indio
    Neural Cryptography in F#: This project is my magistracy resulting work. It is intended to be an example of using neural networks in cryptography. Hashing functions are chose...
    Particle Filter Visualization: Particle Filter Visualization Program for the Intel Science and Engineering Fair
    Pólya: Efficient, immutable, polymorphic collections. .Net lacks them, we provide them*. * By we, we mean I; and by efficient, I mean hopefully so.
    project euler solutions from mhinze: mhinze project euler solutions
    Silverlight 4 and WCF multi layer: Silverlight 4 and WCF multi layers
    sqwarea: Project for a browser-based, minimalistic, massively multiplayer strategy game. Part of the "Génie logiciel et Cloud Computing" course of the ENS (...
    SuperSocket: SuperSocket, a socket application framework, can build FTP/SMTP/POP servers easily.
    Toast (for ASP.NET MVC): Dynamic, developer & designer friendly content injection, compression and optimization for ASP.NET MVC

    New Releases
    ALToolkit: ALToolkit 1.0: Binary release of the libraries containing: NumericTextBox, SplashScreen. Based on the VB.NET code, but that doesn't really matter.
    Blacklist of Providers: 1.0-Milestone 1: Blacklist of Providers, Milestone 1. In this development release implemented - Main interface (Work Item #5453) - Database (Work Item #5523)
    C# Linear Hash Table: Linear Hash Table b2: Now includes a default constructor, and will throw an exception if capacity is not set to a power of 2 or loadToMaintain is below 1.
    Composure: CassiniDev-Trunk-40745-VS2010.rc1.NET4: A simple port of the CassiniDev portable web server project for Visual Studio 2010 RC1 built against .NET 4.0. The WCF tests currently fail unless...
    Developer Controls: DevControls: These are the version 1.0 releases of these controls. Download them individually or all together (in a .zip file). More releases coming soon!
    Dynamic Reference Manager: DRM Alpha1: This is the first release. I'm calling it Alpha because I intend implementing other functions, but I do not intend changing the way current functio...
    ESB Toolkit Extensions: Tellago SOA ESB Extenstions v0.3: Windows Installer file that installs Library on a BizTalk ESB 2.0 system. This install automatically configures the esb.config to use the new compo...
    GKO Libraries: GKO Libraries 0.1 Alpha: 0.1 Alpha
    Home Access Plus+: v3.0.3.0: Version 3.0.3.0 Release Change Log: Added Announcement Box; Removed script files that aren't needed; Fixed & issue in directory path; Stylesheet...
    Icarus Scene Engine: Icarus Scene Engine 1.10.306.840: Icarus Professional, Icarus Player, the supporting software for Icarus Scene Engine, with some included samples, and the start of a tutorial (with ...
    mavjuz WndLpt: wndlpt-0.2.5: New: Response to 5 LPT inputs "test i 1" New: Reaction to 12 LPT outputs "test q 8" New: Reaction to all LPT pins "test pin 15" New: Syntax: ...
    Neural Cryptography in F#: Neural Cryptography 0.0.1: The most simple version of this project. It has a neural network that works just like logical AND and a possibility to recreate neural network from...
    Password Provider: 1.0.3: This release fixes a bug which caused the program to crash when double clicking on a generic item.
    RoTwee: RoTwee 6.2.0.0: New feature is as next. 16649 Add hashtag for tweet of tune. Now you can tweet your playing tune with hashtag.
    Visual Studio DSite: Picture Viewer (Visual C++ 2008): This example source code allows you to view any picture you want at the click of a button. All you have to do is click the button and browse via th...
    WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.8.00: What's New: File Browser: Folders & Files View reworked; File Browser: Folders are displayed as TreeVi...
    WSDLGenerator: WSDLGenerator 0.0.0.4: - replaced CommonLibrary.dll by CommandLineParser.dll - added better support for custom complex types

    Most Popular Projects
    MetaSharp, Silverlight Toolkit, ASP.NET Ajax Library, All-In-One Code Framework, Windows 7 USB/DVD Download Tool, ニコ生アラート, Windows Double Explorer, Virtual Router - Wifi Hot Spot for Windows 7 / 2008 R2, Caliburn: An Application Framework for WPF and Silverlight, ArkSwitch

    Most Active Projects
    Umbraco CMS, Rawr, SDS: Scientific DataSet library and tools, BlogEngine.NET, jQuery Library for SharePoint Web Services, patterns & practices – Enterprise Library, Ionics Isapi Rewrite Filter, Farseer Physics Engine, Fasterflect - A Fast and Simple Reflection API, Fluent Assertions

    Read the article

  • Agile isn’t always Agile

    - by BuckWoody
    I want to make a disclaimer before I dive into this topic – at Microsoft we use all kinds of development methodologies, and I’ve worked in lots of other shops using lots of methodologies. This is one of those “religious” topics, like which programming language or database is best, and it is bound to generate some heat. But this isn’t pointed towards one particular event or company.

    But I really don’t like Agile. In particular, I really don’t like Scrum. Let me explain.

    Agile is a methodology for developing software that emphasizes adapting to change more so than the traditional “waterfall” method of developing software. Within Agile is a process called a “scrum” meeting. The pitch goes that in this quick, stand-up meeting the people involved in the development project (which should include the DBA, but very often doesn’t) go around the room stating what they are working on, when that will be finished and what is keeping them from getting finished (“blockers”, these are called). Sounds all very non-threatening – we’re just “enabling” the developers to work more efficiently. And that’s what we all want, isn’t it? Except it doesn’t work.

    In my experience (and yours might be VERY different) this just turns into a micro-management environment, where devs have to defend their daily work. Of all the work environments I hate the most, micro-management environments are THE worst. I don’t like working in them, and I don’t like creating them.

    The other issue I have with Scrum is that it makes your whole team task-focused. Everyone wants to make sure that they are not the “long pole” in the meeting (meaning that they aren’t the one that gets all the attention) so they only focus on safe, quick tasks. And although you have all of the boxes checked, the project does not go well at all – even when it does finish.

    Before you comment (and please do comment) I fully realize that Agile <> Scrum. But in my experience, it sometimes turns into that.

    Read the article

  • Silverlight Cream for April 03, 2010 -- #829

    - by Dave Campbell
    In this Issue: Scott Marlowe, Nokola, SilverLaw, Brad Abrams, Jeff Wilcox, Jesse Liberty, Alexey Zakharov, ondrejsv, Ward Bell, and David Anson.

    Shoutouts:
    Bart Czernicki has a post up about the latest with HTML5: HTML 5 is Born Old - Quake in HTML 5
    I was sent a link to shoebox360 a while back and had to sign up to see the Silverlight use, but it does work very nicely. I like the panoramic carousel in the viewer: shoebox360
    Jeff Handley has a post up on RIA Services - Documentation Guidance and Community Samples... the team is looking for feedback from all of us
    Shawn Wildermuth posted his My MIX Talks' Source Code
    Laurent Bugnion posted his Sample code and slides for my TechDays10 (Belgium) talks

    From SilverlightCream.com:
    Silverlight to WCF Cross Domain SecurityException
    Scott Marlowe wrote an article about an often-encountered security exception having to do with cross-domain policies. He details the problem, the response, the solution, and yet another problem/solution associated... good stuff, Scott!
    Simple Functions for HTML Interop
    You've seen Nokola's graphic work... how about some HTML Interop from him? He's exposing the code he uses in his work.
    New Video: ChildWindow Styling - Silverlight 3
    SilverLaw has a new video tutorial on Silverlight 3 ChildWindow Styling up - in German - but the video is language-agnostic :)
    Silverlight 4 + RIA Services - Ready for Business: Exposing WCF (SOAP\WSDL) Services
    Brad Abrams' continuation in his RIA series is this one demonstrating exposing RIA Services as a SOAP\WSDL service.
    Silverlight 4: New parser implementation. New parser features.
    Jeff Wilcox has a post up highlighting some of the new features in Silverlight 4, such as a new parser implementation with new XAML features.
    New Video Series – Getting Started With Silverlight
    Jesse Liberty is starting a new video tutorial series that's going to build out to be a "complete survey of Silverlight programming". The first two are in this post and are Getting Started and Adding Controls to a Silverlight App... looks like good material, Jesse, and all the source is there for the taking as well.
    Silverlight layout hack: Centered content with fixed maxwidth
    Alexey Zakharov has a quick tip up on creating centered content with fixed maxwidth. He calls it a dirty trick... looks like code to me :)
    Silverlight DataForm’s autogenerated fields send empty strings to database
    ondrejsv points up a problem he had with the Toolkit's DataForm, and his solution to it... with code for all of us following along behind :)
    DevForce Extensibility With MEF InheritedExport
    Ward Bell has a post up describing how they got DevForce MEF'd up; it looks like a good post to get you all excited about MEF as well... lots of external links and good info.
    Tip: Read-only custom DependencyProperties don't exist in Silverlight, but can be closely approximated
    David Anson's latest Tip is about read-only custom DependencyProperties in Silverlight -- which strictly is not possible, but he has a code example up that gets close.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • CRMIT Solutions' CRM++ Asterisk Telephony Connector Achieves Oracle Validated Integration with Oracle Sales Cloud

    - by Richard Lefebvre
    To achieve Oracle Validated Integration, Oracle partners are required to meet a stringent set of requirements that are based on the needs and priorities of the customers. Based on a Telephony Application Programming Interface (TAPI) framework, the CRM++ Asterisk Telephony Connector integrates the Asterisk telephony solutions with Oracle® Sales Cloud.

    "The CRM++ Asterisk Telephony Connector for Oracle® Sales Cloud showcases CRMIT Solutions' focus and commitment to extend the Customer Experience (CX) expertise to our existing and potential customers," said Vinod Reddy, Founder & CEO, CRMIT Solutions.

    "Oracle® Validated Integration applies a rigorous technical review and test process," said Kevin O’Brien, senior director, ISV and SaaS Strategy, Oracle®. "Achieving Oracle® Validated Integration through Oracle® PartnerNetwork gives our customers confidence that the CRM++ Asterisk Telephony Connector for Oracle® Sales Cloud has been validated and that the products work together as designed. This helps reduce deployment risk and improves the user experience for our joint customers."

    CRM++ is a suite of native Customer Experience solutions for Oracle® CRM On Demand, Oracle® Sales Cloud and Oracle® RightNow Cloud Service. With over 3,000 users, the CRM++ framework helps extend the Customer Experience (CX) and the power of Customer Relationship Management features, including Email WorkBench, Self Service Portal, Mobile CRM, Social CRM and Computer Telephony Integration.

    About CRMIT Solutions
    CRMIT Solutions is a pioneer in delivering SaaS-based customer experience (CX) consulting and solutions. With more than 200 certified customer relationship management (CRM) consultants and more than 175 successful CRM deployments globally, CRMIT Solutions offers a range of CRM++ applications for accelerated deployments including various rapid implementation and migration utilities for Oracle® Sales Cloud, Oracle® CRM On Demand, Oracle® Eloqua, Oracle® Social Relationship Management and Oracle® RightNow Cloud Service.

    About Oracle Validated Integration
    Oracle Validated Integration, available through the Oracle PartnerNetwork (OPN), gives customers confidence that the integration of complementary partner software products with Oracle Applications and specific Oracle Fusion Middleware solutions have been validated, and the products work together as designed. This can help customers reduce risk, improve system implementation cycles, and provide for smoother upgrades and simpler maintenance. Oracle Validated Integration applies a rigorous technical process to review partner integrations. Partners who have successfully completed the program are authorized to use the "Oracle Validated Integration" logo. For more information, please visit Oracle.com at http://www.oracle.com/us/partnerships/solutions/index.html.

    Read the article

  • SQL SERVER – Automated Type Conversion using Expressor Studio

    - by pinaldave
    Recently I had an interesting situation during my consultation project. Let me share with you how I solved the problem using Expressor Studio. Consider a situation in which you need to read a field, such as customer_identifier, from a text file and pass that field into a database table. In the source file’s metadata structure, customer_identifier is described as a string; however, in the target database table, customer_identifier is described as an integer. Legitimately, all the source values for customer_identifier are valid numbers, such as “109380”. To implement this in an ETL application, you probably would have hard-coded a type conversion function call, such as:

    output.customer_identifier=stringToInteger(input.customer_identifier)

    That wasn’t so bad, was it? For this instance, programming this hard-coded type conversion function call was relatively easy. However, hard-coding, whether type conversion code or other business rule code, almost always means that the application containing hard-coded fields, function calls, and values is: a) specific to an instance of use; b) difficult to adapt to new situations; and c) lacking in reusable sub-parts. Therefore, in the long run, applications with hard-coded type conversion function calls don’t scale well. In addition, they increase the overall level of effort and degree of difficulty to write and maintain ETL applications.

    To get around the trappings of hard-coded type conversion function calls, developers need access to smarter typing systems. The Expressor Studio product offers exactly this feature, by providing developers with a type conversion automation engine based on type abstraction. The theory behind the engine is quite simple. A user specifies abstract data fields in the engine, and then writes applications against the abstractions (whereas in most ETL software, developers develop applications against the physical model). When a Studio-built application is run, Studio’s engine automatically converts the source type to the abstracted data field’s type and converts the abstracted data field’s type to the target type. The engine can do this because it has a couple of built-in rules for type conversions (a sketch contrasting the two styles appears at the end of this excerpt).

    So, using the example above, a developer could specify customer_identifier as an abstract data field with a type of integer when using Expressor Studio. Upon reading the string value from the text file, Studio’s type conversion engine automatically converts the source field from the type specified in the source’s metadata structure to the abstract field’s type. At the time of writing the data value to the target database, the engine doesn’t have any work to do because the abstract data type and the target data type are the same. Had they been different, the engine would have automatically provided the conversion.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Database, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: SSIS
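    To make the contrast concrete, here is a small C# sketch of the two styles. This is an illustration under stated assumptions, not Expressor's actual engine (which is configured inside Studio, not hand-written); the method names and the generic helper below are hypothetical:

        using System;
        using System.Globalization;

        static class TypeConversionSketch
        {
            // The hard-coded style the article warns about: tied to one field
            // and one specific source/target type pair.
            static int HardCodedCustomerIdentifier(string customerIdentifier) =>
                int.Parse(customerIdentifier, CultureInfo.InvariantCulture);

            // An abstracted style: one reusable rule that converts any source
            // value to whatever type the abstract field declares.
            static T ConvertTo<T>(object sourceValue) =>
                (T)Convert.ChangeType(sourceValue, typeof(T), CultureInfo.InvariantCulture);

            static void Main()
            {
                Console.WriteLine(HardCodedCustomerIdentifier("109380")); // 109380
                Console.WriteLine(ConvertTo<int>("109380"));              // 109380, via the generic rule
            }
        }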

    Read the article

  • What is Inversion of Control and why do we need it?

    - by Jalpesh P. Vadgama
    Most programmers need the Inversion of Control pattern in today’s complex real-time application world, so I have decided to write a blog post about it. This blog post will explain what Inversion of Control is and why we need it. We are going to take a real-world example so it is easier to understand.

    The problem – Why do we need Inversion of Control?
    Before giving a definition of Inversion of Control, let’s take a simple real-world example to see why we need it. Please have a look at the following code.

    public class class1
    {
        private class2 _class2;

        public class1()
        {
            _class2 = new class2();
        }
    }

    public class class2
    {
        // Some implementation of class2
    }

    I have two classes, “class1” and “class2”. If you look at the code, I have created an instance of the class2 class in the class1 constructor. So the “class1” class is dependent on “class2”. I think that is the biggest issue in a real-world scenario, because if we change the “class2” class then we might need to change the “class1” class also. Here there is one type of dependency between these two classes; it is called Tight Coupling. Tight coupling causes lots of problems in real-world applications, as things tend to change in the future, and then we have to change all the tightly coupled classes that depend on each other. To avoid this kind of issue we need Inversion of Control.

    What is Inversion of Control?
    According to Wikipedia, the following is a definition of Inversion of Control:

    “In software engineering, Inversion of Control (IoC) is an object-oriented programming practice where the object coupling is bound at run time by an assembler object and is typically not known at compile time using static analysis.”

    If you read it carefully, it says that we should have object coupling at run time, not at compile time, when it is already known what object will be created, what method will be called, and what features will be used. We need to use these classes in such a way that they are not tightly coupled to each other. There are multiple ways to implement Inversion of Control. You can refer to the Wikipedia link for the multiple ways of implementing Inversion of Control. In future posts we are going to see all the different ways of implementing Inversion of Control.
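    As a taste of where the series is headed, here is a minimal sketch of constructor injection, one common way to implement Inversion of Control. The IService interface and the method names below are my own additions for illustration, not from the original post:

        // class1 now depends on an abstraction. The concrete class2 is chosen by
        // the caller (or an IoC container) at run time, so the tight coupling is gone.
        public interface IService
        {
            void DoWork();
        }

        public class class2 : IService
        {
            public void DoWork()
            {
                // Some implementation of class2
            }
        }

        public class class1
        {
            private readonly IService _service;

            // The dependency is supplied from outside instead of being new'ed up here.
            public class1(IService service)
            {
                _service = service;
            }

            public void Run() => _service.DoWork();
        }

        // Usage: class1 no longer needs to change when class2 does.
        // var c = new class1(new class2());
        // c.Run();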

    Read the article

  • SQLAuthority News – Book Review – Beginning T-SQL 2008 by Kathi Kellenberger

    - by pinaldave
    Beginning T-SQL 2008 by Kathi Kellenberger
    Amazon Link

    Detailed Review:
    Beginning T-SQL 2008 is one of the best books on the market if you are just beginning to work with Microsoft SQL, or have a little bit of experience and need to learn more quickly. Each chapter of the book introduces a new subject and builds upon topics covered in previous chapters. The author of the book, Kathi Kellenberger, understands that you need to form a solid foundation of knowledge before moving on to new topics, and sets up each subject nicely. Because the chapters move in an orderly progression, you continue to use skills you learned earlier.

    One of the best features of Beginning T-SQL 2008 is that each chapter has multiple examples and exercises. Many books introduce a topic and then never go back to it. This book gives enough examples that you will be familiar with the subject when you come across it in real life. The exercises at the end of the chapter mean that you will be using the skills you learned – and there is no better way to cement a subject in your brain.

    The book also includes discussions of the common errors that programmers will come across, how to avoid them, and how to fix them if they happen. Ms. Kellenberger understands that not only do mistakes happen, but they are bound to happen if you aren't trained properly. Mistakes are part of the learning process!

    The book begins by discussing relational theory, so that programmers will understand the way T-SQL works from the ground up. It also walks readers through writing accurate queries, combining set-based and procedural processing, embedding logic in stored functions, and so much more.

    Overall, the main goal of Beginning T-SQL 2008 is to introduce novices to SQL programming and quickly familiarize them with the basics of running the program. The book is written with the idea that readers will not know any of the technical terms or vocabulary. However, if you are a little more familiar with SQL and looking to become better, you will still find this book very helpful.

    Rating: 4.5+ Stars

    Summary: I cannot recommend Beginning T-SQL 2008 highly enough. If you are going to buy any beginner's guide to Transact-SQL, this is the one you should spend your money on. You can save yourself a lot of time and effort later by using this very affordable manual to learn the basics, which will allow you to become an expert much faster.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, T SQL, Technology

    Read the article

  • Welcome to my geeks blog

    - by bconlon
    Hi and welcome! I'm Bazza and this is my geeks blog. I have 20 years of Visual Studio experience, mainly C++, MFC, ATL and now, thankfully, C#, and I am embarking on the new world (well, new to me) of WPF. So I thought I would try and capture my successful...and not so successful...WPF experiences with the geek world.

    So where to start? WPF? What I know so far...

    From wiki..."Windows Presentation Foundation (or WPF) is a graphical subsystem for rendering user interfaces in Windows-based applications." Hmm, great, but didn't MFC, ATL (my head hurt with that one), and .Net all have APIs to allow me to code against the Windows Graphical Device Interface (GDI)?

    "Rather than relying on the older GDI subsystem, WPF utilizes DirectX. WPF attempts to provide a consistent programming model for building applications and provides a separation between the user interface and the business logic." OK, different drawing code, same Windows, and weren't we always taught to separate our UI, Business Layer and Data Access Layer?

    "WPF employs XAML, a derivative of XML, to define and link various UI elements. WPF applications can be deployed as standalone desktop programs, or hosted as an embedded object in a website." Cool, now we're getting somewhere. So when they say separation they really mean separation. The crux of this appears to be that you can have creative people writing the UI and making it attractive and intuitive to use, whilst the geeks concentrate on writing the Business and Data Access stuff. XAML (eXtensible Application Markup Language) maps XML elements and attributes directly to Common Language Runtime (CLR) object instances, properties and events (see the sketch at the end of this post). True separation of the View and Model.

    WPF also provides logical separation of a control from its appearance. In a traditional Windows system, all Controls have a base class containing a Windows handle and each Control knows how to render itself. In WPF, the controls are more like those in a Web Browser using Cascading Style Sheets; they are not wrappers for standard Windows Controls. Instead, they have a default 'template' that defines a visual theme which can easily be replaced by a custom template.

    But it gets better. WPF concentrates heavily on Data Binding, where the client can bind directly to data on the server. I think this concept was first introduced in 'Classic' Visual Basic, where you could bind a list directly to data from an Access database, and you could do similar in ASP.NET. However, the WPF implementation is far superior to its predecessors.

    There are also other technologies that I want to look at, like LINQ and the Entity Framework, but that's all for now.
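    To illustrate that XAML-to-CLR mapping, here is a quick sketch of my own (not from the original post, and assuming a WPF project referencing PresentationFramework): a XAML element such as <Button Content="Go" Width="80"/> corresponds directly to constructing and configuring the same CLR object in C#:

        using System.Windows.Controls;

        public static class XamlMappingSketch
        {
            // What the XAML parser effectively does for <Button Content="Go" Width="80"/>:
            // element name -> CLR type, attributes -> property assignments.
            public static Button BuildButton()
            {
                return new Button { Content = "Go", Width = 80 };
            }
        }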

    Read the article

  • The right way to start out in game development/design [closed]

    - by Marco Sacristão
    Greetings everyone. I'm a 19 year old student looking for some help in the field of game development. This question may or may not seem a bit overused, but the fact is that game development has been my lifelong dream, and after several hours of searching I've realized that I've been going in circles for the past three or four months whilst doing research on how to really get down and dirty with game development, therefore I decided to ask you guys if you could help me out at all. Let me start off with some information about me and things I've already learned about GameDev which might help you out on helping me out (wordplay!):

    I'm not an expert programmer, but I do have knowledge of how to program in several languages including C and Java (currently learning Java in my degree in Computer Engineering), but my methodology might not be the most correct in terms of syntax (hence my difficulty in starting out; I'm afraid that the starting point might not be the most correct, and it would deploy a wrongful development methodology that would have to be corrected later on, in terms of game development or other projects).

    I have yet to work on a project as large as a game; never in my programming experience have I done a project on the scale of a video game, only very small software (PHP front-ends and back-ends, with some basic jQuery and CSS knowledge).

    I'm not the biggest mathematician or physicist, but I already know that is not a problem, because there are several game engines already available for use and integration with home-made projects (Box2D, etc).

    I've also learned about some libraries that could be included in said projects, to ease some processes in game development, like SDL for example.

    I do not know how sprites, states, particles or any specific game-related techniques work.

    With that being said, you can see that I have some ideas on game development, but I have absolutely no clue on how to design and produce a game, or even how game-like mechanics work. It does not have to be a complex game just to start out. I'd rather learn the basics of game design (like 2D drawing, tiling, object collision) and test that out in a language that I feel comfortable in, which could later be migrated to other platforms, as long as what I've learned is the correct way to do things, and not just something that I've learned from some guy on YouTube by replicating the code in the video.

    I'm sorry if my question is not in the best format possible, but I've got so many questions on my mind that are still unanswered that I don't know where to start! Thank you for reading.

    Read the article

  • Craftsmanship Tour: Day 3 & 4: 8th Light

    - by Liam McLennan
    Thursday morning the Illinois public transport system came through for me again. I took the Metra train north from Union Station (which was seething with inbound commuters) to Prairie Crossing (Libertyville). At Prairie Crossing I met Paul and Justin from 8th Light, and then Justin drove us to the office. The 8th Light office is in a small business park, in a semi-rural area, surrounded by ponds. Upstairs there are two spacious, open areas for developers. At one end of the floor is Doug Bradbury’s walk-and-code station: a treadmill with a desk and computer so that a developer can get exercise at work. At the other end of the floor is a hammock. This irregular office furniture is indicative of the 8th Light philosophy: to pursue excellence without being limited by conventional wisdom.

    8th Light have a wall covered in posters, each illustrating one person’s software craftsmanship journey. The posters are a fascinating visualisation of the similarities and differences between each of our progressions. The first thing I did Thursday morning was to create my own poster and add it to the wall. Over two days at 8th Light I did some pairing with the 8th Lighters and we shared thoughts on software development. I am not accustomed to such a progressive and enlightened environment and I found the experience inspirational. At 8th Light, TDD, clean code, pairing and kaizen are deeply ingrained in the culture.

    Friday, during lunch, 8th Light hosted a ‘lunch and learn’ event. Paul Pagel led us through a coding exercise using micro-pomodori. We worked in pairs, focusing on the pedagogy of pair programming and TDD. After lunch I recorded this interview with Paul Pagel and Justin Martin. We discussed 8th Light, craftsmanship, apprenticeships and the Limelight framework.

    Interview with Paul Pagel and Justin Martin

    My time at Didit, Obtiva and 8th Light has convinced me that I need to give up some of my independence and go back to working in a team. Craftsmen advance their skills by learning from each other, and I can’t do that working at home by myself. The challenge is finding the right team, and becoming a part of it.

    Read the article

  • Friday Fun: Factory Balls – Christmas Edition

    - by Asian Angel
    Your weekend is almost here, but until the work day is over we have another fun holiday game for you. This week your job is to correctly decorate/paint the ornaments that go on the Christmas tree. Simple you say? Maybe, but maybe not!

    Factory Balls – Christmas Edition

    The object of the game is to correctly decorate/paint each Christmas ornament exactly as shown in the “sample image” provided for each level. What starts off as simple will quickly have you working to figure out the correct combination or sequence to complete each ornament. Are you ready?

    The first level serves as a tutorial to help you become comfortable with how to decorate/paint the ornaments. To move an ornament to a paint bucket or cover part of it with one of the helper items simply drag the ornament towards that area. The ornament will automatically move back to its starting position when the action is complete. First, a nice coat of red paint followed by covering the middle area with a horizontal belt. Once the belt is on, move the ornament to the bucket of yellow paint. Next, you will need to remove the belt, so move the ornament back to the belt’s original position. One ornament finished!

    As soon as you complete decorating/painting an ornament, you move on to the next level and will be shown the next “sample image” in the upper right corner. Starting with a coat of orange paint sounds good… Pop the little serrated edge cap on top… Add some blue paint… Almost have it… Place the large serrated edge cap on top… Another dip in the orange paint… And the second ornament is finished. Level three looks a little bit tougher…just work out your pattern of helper items & colors and you will definitely get it! Have fun decorating/painting those ornaments!

    Note: Starting with level four you will need to start using a combination of two helper items combined at times to properly complete the ornaments.

    Play Factory Balls – Christmas Edition

    Read the article

< Previous Page | 525 526 527 528 529 530 531 532 533 534 535 536  | Next Page >