Search Results

Search found 22216 results on 889 pages for 'volume control'.


  • Matlab 3d volume visualization and 3d overlay

    - by inf.ig.sh
    Hi, the question is pretty much the title. I have a 3D volume loaded as raw data, with [256, 256, 256] = size(A). It contains only zeros and ones, where the 1's represent the structure and the 0's the "air". I want to visualize the structure in MATLAB, then run an algorithm on it and draw an overlay on it, let's say in red. To be more precise: how do I visualize the 3D volume, with 0's transparent and 1's semi-transparent? And how do I plot a line into the 3D visualization as an overlay? I have already read the MathWorks tutorials and they didn't help. I tried using the set command, but it fails completely, saying "invalid root property" for every property I try. I hope someone can help.

    Read the article

  • Why can’t GridView extract child control’s values directly?

    - by SourceC
    Hello. Using Bind in a GridView control template enables the control to extract values from child controls in the template and pass them to the data source control. The data source control in turn performs the appropriate command against the database. For this reason, the Bind function is used inside the EditItemTemplate or InsertItemTemplate of a data-bound control. Why is Bind() needed to extract values and pass them to the GridView? Why isn't the GridView able to extract the child controls' values directly? Thanks
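
    For reference, a minimal sketch of where Bind() sits in a template (the control and field names here are invented for illustration):

        <asp:GridView runat="server" ID="gvPeople" DataSourceID="dsPeople"
                      DataKeyNames="Id" AutoGenerateColumns="false">
            <Columns>
                <asp:TemplateField HeaderText="Name">
                    <!-- Eval is one-way: display only -->
                    <ItemTemplate>
                        <%# Eval("Name") %>
                    </ItemTemplate>
                    <!-- Bind is two-way: on update, the GridView reads txtName
                         back out of the template and hands "Name" to dsPeople -->
                    <EditItemTemplate>
                        <asp:TextBox runat="server" ID="txtName" Text='<%# Bind("Name") %>' />
                    </EditItemTemplate>
                </asp:TemplateField>
            </Columns>
        </asp:GridView>

    Roughly speaking, the template is arbitrary markup, so the GridView has no way of knowing on its own which control property maps back to which data field; the Bind() expression is what registers that mapping with the data-binding plumbing when the page is compiled, and Eval() deliberately registers nothing.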

    Read the article

  • Streamed mp3 only plays for 1 second

    - by angel6
    Hi, I'm using the (modified) plaympeg.c code from smpeg as a media player, with ffserver running as a streaming server. I'm streaming an mp3 file over HTTP, but when I run plaympeg, it plays the streamed file for only a second. When I run plaympeg again, it picks up from where it left off and again plays for one second. Does anyone know why this happens and how to fix it? I've tested the stream in WMP and it plays the entire file in one go, so I guess it's not a problem with the streaming or with ffserver.conf.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <signal.h>
    /* #ifdef unix */
    #include <unistd.h>
    #include <sys/types.h>

    #define NET_SUPPORT   /* General network support */
    #define HTTP_SUPPORT  /* HTTP support */

    #ifdef NET_SUPPORT
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <netdb.h>
    #endif

    #include "smpeg.h"

    #ifdef NET_SUPPORT
    int tcp_open(char * address, int port)
    {
        struct sockaddr_in stAddr;
        struct hostent * host;
        int sock;
        struct linger l;

        memset(&stAddr, 0, sizeof(stAddr));
        stAddr.sin_family = AF_INET;
        stAddr.sin_port = htons(port);

        if((host = gethostbyname(address)) == NULL)
            return(0);
        stAddr.sin_addr = *((struct in_addr *) host->h_addr_list[0]);

        if((sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0)
            return(0);

        l.l_onoff = 1;
        l.l_linger = 5;
        if(setsockopt(sock, SOL_SOCKET, SO_LINGER, (char*) &l, sizeof(l)) < 0)
            return(0);

        if(connect(sock, (struct sockaddr *) &stAddr, sizeof(stAddr)) < 0)
            return(0);

        return(sock);
    }

    #ifdef HTTP_SUPPORT
    int http_open(char * arg)
    {
        char * host;
        int port;
        char * request;
        int tcp_sock;
        char http_request[1024];
        char c;

        printf("\nin http_open passed parameter = %s\n", arg);

        /* Check for URL syntax */
        if(strncmp(arg, "http://", strlen("http://")))
            return(0);

        /* Parse URL */
        port = 80;
        host = arg + strlen("http://");
        if((request = strchr(host, '/')) == NULL)
            return(0);
        *request++ = 0;

        if(strchr(host, ':') != NULL) /* port is specified */
        {
            port = atoi(strchr(host, ':') + 1);
            *strchr(host, ':') = 0;
        }

        /* Open a TCP socket */
        if(!(tcp_sock = tcp_open(host, port))) {
            perror("http_open");
            return(0);
        }

        /* Send HTTP GET request */
        sprintf(http_request, "GET /%s HTTP/1.0\r\n"
                "User-Agent: Mozilla/2.0 (Win95; I)\r\n"
                "Pragma: no-cache\r\n"
                "Host: %s\r\n"
                "Accept: */*\r\n"
                "\r\n", request, host);
        send(tcp_sock, http_request, strlen(http_request), 0);

        /* Parse server reply */
        do
            read(tcp_sock, &c, sizeof(char));
        while(c != ' ');
        read(tcp_sock, http_request, 4*sizeof(char));
        http_request[4] = 0;
        if(strcmp(http_request, "200 ")) {
            fprintf(stderr, "http_open: ");
            do {
                read(tcp_sock, &c, sizeof(char));
                fprintf(stderr, "%c", c);
            } while(c != '\r');
            fprintf(stderr, "\n");
            return(0);
        }

        return(tcp_sock);
    }
    #endif /* HTTP_SUPPORT */
    #endif /* NET_SUPPORT */

    void update(SDL_Surface *screen, Sint32 x, Sint32 y, Uint32 w, Uint32 h)
    {
        if ( screen->flags & SDL_DOUBLEBUF ) {
            SDL_Flip(screen);
        }
    }

    /* Flag telling the UI that the movie or song should be skipped */
    int done;

    void next_movie(int sig)
    {
        done = 1;
    }

    int main(int argc, char *argv[])
    {
        int use_audio, use_video;
        int fullscreen;
        int scalesize;
        int scale_width, scale_height;
        int loop_play;
        int i, pause;
        int volume;
        Uint32 seek;
        float skip;
        int bilinear_filtering;
        SDL_Surface *screen;
        SMPEG *mpeg;
        SMPEG_Info info;
        char *basefile;
        SDL_version sdlver;
        SMPEG_version smpegver;
        int fd;
        char buf[32];
        int status;

        printf("\nchecking command line options ");

        /* Get the command line options */
        use_audio = 1;
        use_video = 1;
        fullscreen = 0;
        scalesize = 1;
        scale_width = 0;
        scale_height = 0;
        loop_play = 0;
        volume = 100;
        seek = 0;
        skip = 0;
        bilinear_filtering = 0;
        fd = 0;
        for ( i=1; argv[i] && (argv[i][0] == '-') && (argv[i][1] != 0); ++i ) {
            if ( strcmp(argv[i], "--fullscreen") == 0 ) {
                fullscreen = 1;
            } else if ((strcmp(argv[i], "--seek") == 0)||(strcmp(argv[i], "-S") == 0)) {
                ++i;
                if ( argv[i] ) {
                    seek = atol(argv[i]);
                }
            } else if ((strcmp(argv[i], "--volume") == 0)||(strcmp(argv[i], "-v") == 0)) {
                ++i;
                if (i >= argc) {
                    fprintf(stderr, "Please specify volume when using --volume or -v\n");
                    return(1);
                }
                if ( argv[i] ) {
                    volume = atoi(argv[i]);
                }
                if ( ( volume < 0 ) || ( volume > 100 ) ) {
                    fprintf(stderr, "Volume must be between 0 and 100\n");
                    volume = 100;
                }
            } else {
                fprintf(stderr, "Warning: Unknown option: %s\n", argv[i]);
            }
        }

        printf("\nuse video = %d, use audio = %d\n", use_video, use_audio);
        printf("\ngoing to check input parameters\n");

    #if defined(linux) || defined(FreeBSD)
        /* Plaympeg doesn't need a mouse */
        putenv("SDL_NOMOUSE=1");
    #endif

        /* Play the mpeg files! */
        status = 0;
        for ( ; argv[i]; ++i ) {
            /* Initialize SDL */
            if ( use_video ) {
                if ((SDL_Init(SDL_INIT_VIDEO) < 0) || !SDL_VideoDriverName(buf, 1)) {
                    fprintf(stderr, "Warning: Couldn't init SDL video: %s\n", SDL_GetError());
                    fprintf(stderr, "Will ignore video stream\n");
                    use_video = 0;
                }
                printf("\ninitialised video\n");
            }
            if ( use_audio ) {
                if ((SDL_Init(SDL_INIT_AUDIO) < 0) || !SDL_AudioDriverName(buf, 1)) {
                    fprintf(stderr, "Warning: Couldn't init SDL audio: %s\n", SDL_GetError());
                    fprintf(stderr, "Will ignore audio stream\n");
                    use_audio = 0;
                }
            }

            /* Allow Ctrl-C when there's no video output */
            signal(SIGINT, next_movie);

            printf("\nchecking defined supports\n");

            /* Create the MPEG stream */
    #ifdef NET_SUPPORT
            printf("\ndefined NET_SUPPORT\n");
    #ifdef HTTP_SUPPORT
            printf("\ndefined HTTP_SUPPORT\n");
            /* Check if source is an http URL */
            printf("\nabout to call http_open\n");
            printf("\nhere we go\n");
            if((fd = http_open(argv[i])) != 0)
                mpeg = SMPEG_new_descr(fd, &info, use_audio);
            else
    #endif
    #endif
            {
                if(strcmp(argv[i], "-") == 0) /* Use stdin for input */
                    mpeg = SMPEG_new_descr(0, &info, use_audio);
                else
                    mpeg = SMPEG_new(argv[i], &info, use_audio);
            }
            if ( SMPEG_error(mpeg) ) {
                fprintf(stderr, "%s: %s\n", argv[i], SMPEG_error(mpeg));
                SMPEG_delete(mpeg);
                status = -1;
                continue;
            }
            SMPEG_enableaudio(mpeg, use_audio);
            SMPEG_enablevideo(mpeg, use_video);
            SMPEG_setvolume(mpeg, volume);

            /* Print information about the video */
            basefile = strrchr(argv[i], '/');
            if ( basefile ) {
                ++basefile;
            } else {
                basefile = argv[i];
            }
            if ( info.has_audio && info.has_video ) {
                printf("%s: MPEG system stream (audio/video)\n", basefile);
            } else if ( info.has_audio ) {
                printf("%s: MPEG audio stream\n", basefile);
            } else if ( info.has_video ) {
                printf("%s: MPEG video stream\n", basefile);
            }
            if ( info.has_video ) {
                printf("\tVideo %dx%d resolution\n", info.width, info.height);
            }
            if ( info.has_audio ) {
                printf("\tAudio %s\n", info.audio_string);
            }
            if ( info.total_size ) {
                printf("\tSize: %d\n", info.total_size);
            }
            if ( info.total_time ) {
                printf("\tTotal time: %f\n", info.total_time);
            }

            /* Set up video display if needed */
            if ( info.has_video && use_video ) {
                const SDL_VideoInfo *video_info;
                Uint32 video_flags;
                int video_bpp;
                int width, height;

                /* Get the "native" video mode */
                video_info = SDL_GetVideoInfo();
                switch (video_info->vfmt->BitsPerPixel) {
                    case 16:
                    case 24:
                    case 32:
                        video_bpp = video_info->vfmt->BitsPerPixel;
                        break;
                    default:
                        video_bpp = 16;
                        break;
                }
                if ( scale_width ) {
                    width = scale_width;
                } else {
                    width = info.width;
                }
                width *= scalesize;
                if ( scale_height ) {
                    height = scale_height;
                } else {
                    height = info.height;
                }
                height *= scalesize;
                video_flags = SDL_SWSURFACE;
                if ( fullscreen ) {
                    video_flags = SDL_FULLSCREEN|SDL_DOUBLEBUF|SDL_HWSURFACE;
                }
                video_flags |= SDL_ASYNCBLIT;
                video_flags |= SDL_RESIZABLE;
                screen = SDL_SetVideoMode(width, height, video_bpp, video_flags);
                if ( screen == NULL ) {
                    fprintf(stderr, "Unable to set %dx%d video mode: %s\n",
                            width, height, SDL_GetError());
                    continue;
                }
                SDL_WM_SetCaption(argv[i], "plaympeg");
                if ( screen->flags & SDL_FULLSCREEN ) {
                    SDL_ShowCursor(0);
                }
                SMPEG_setdisplay(mpeg, screen, NULL, update);
                SMPEG_scaleXY(mpeg, screen->w, screen->h);
            } else {
                SDL_QuitSubSystem(SDL_INIT_VIDEO);
            }

            /* Set any special playback parameters */
            if ( loop_play ) {
                SMPEG_loop(mpeg, 1);
            }

            /* Seek starting position */
            if(seek) SMPEG_seek(mpeg, seek);

            /* Skip seconds to starting position */
            if(skip) SMPEG_skip(mpeg, skip);

            /* Play it, and wait for playback to complete */
            SMPEG_play(mpeg);
            done = 0;
            pause = 0;
            while ( ! done && ( pause || (SMPEG_status(mpeg) == SMPEG_PLAYING) ) ) {
                SDL_Event event;

                while ( use_video && SDL_PollEvent(&event) ) {
                    switch (event.type) {
                        case SDL_VIDEORESIZE: {
                            SDL_Surface *old_screen = screen;
                            SMPEG_pause(mpeg);
                            screen = SDL_SetVideoMode(event.resize.w, event.resize.h,
                                                      screen->format->BitsPerPixel, screen->flags);
                            if ( old_screen != screen ) {
                                SMPEG_setdisplay(mpeg, screen, NULL, update);
                            }
                            SMPEG_scaleXY(mpeg, screen->w, screen->h);
                            SMPEG_pause(mpeg);
                        }
                        break;
                        case SDL_KEYDOWN:
                            if ( (event.key.keysym.sym == SDLK_ESCAPE) ||
                                 (event.key.keysym.sym == SDLK_q) ) {
                                // Quit
                                done = 1;
                            } else if ( event.key.keysym.sym == SDLK_RETURN ) {
                                // toggle fullscreen
                                if ( event.key.keysym.mod & KMOD_ALT ) {
                                    SDL_WM_ToggleFullScreen(screen);
                                    fullscreen = (screen->flags & SDL_FULLSCREEN);
                                    SDL_ShowCursor(!fullscreen);
                                }
                            } else if ( event.key.keysym.sym == SDLK_UP ) {
                                // Volume up
                                if ( volume < 100 ) {
                                    if ( event.key.keysym.mod & KMOD_SHIFT ) {
                                        // 10+
                                        volume += 10;
                                    } else if ( event.key.keysym.mod & KMOD_CTRL ) {
                                        // 100+
                                        volume = 100;
                                    } else {
                                        // 1+
                                        volume++;
                                    }
                                    if ( volume > 100 )
                                        volume = 100;
                                    SMPEG_setvolume(mpeg, volume);
                                }
                            } else if ( event.key.keysym.sym == SDLK_DOWN ) {
                                // Volume down
                                if ( volume > 0 ) {
                                    if ( event.key.keysym.mod & KMOD_SHIFT ) {
                                        volume -= 10;
                                    } else if ( event.key.keysym.mod & KMOD_CTRL ) {
                                        volume = 0;
                                    } else {
                                        volume--;
                                    }
                                    if ( volume < 0 )
                                        volume = 0;
                                    SMPEG_setvolume(mpeg, volume);
                                }
                            } else if ( event.key.keysym.sym == SDLK_PAGEUP ) {
                                // Full volume
                                volume = 100;
                                SMPEG_setvolume(mpeg, volume);
                            } else if ( event.key.keysym.sym == SDLK_PAGEDOWN ) {
                                // Volume off
                                volume = 0;
                                SMPEG_setvolume(mpeg, volume);
                            } else if ( event.key.keysym.sym == SDLK_SPACE ) {
                                // Toggle play / pause
                                if ( SMPEG_status(mpeg) == SMPEG_PLAYING ) {
                                    SMPEG_pause(mpeg);
                                    pause = 1;
                                } else {
                                    SMPEG_play(mpeg);
                                    pause = 0;
                                }
                            } else if ( event.key.keysym.sym == SDLK_RIGHT ) {
                                // Forward
                                if ( event.key.keysym.mod & KMOD_SHIFT ) {
                                    SMPEG_skip(mpeg, 100);
                                } else if ( event.key.keysym.mod & KMOD_CTRL ) {
                                    SMPEG_skip(mpeg, 50);
                                } else {
                                    SMPEG_skip(mpeg, 5);
                                }
                            } else if ( event.key.keysym.sym == SDLK_LEFT ) {
                                // Reverse
                                if ( event.key.keysym.mod & KMOD_SHIFT ) {
                                } else if ( event.key.keysym.mod & KMOD_CTRL ) {
                                } else {
                                }
                            } else if ( event.key.keysym.sym == SDLK_KP_MINUS ) {
                                // Scale minus
                                if ( scalesize > 1 ) {
                                    scalesize--;
                                }
                            } else if ( event.key.keysym.sym == SDLK_KP_PLUS ) {
                                // Scale plus
                                scalesize++;
                            } else if ( event.key.keysym.sym == SDLK_f ) {
                                // Toggle filtering on/off
                                if ( bilinear_filtering ) {
                                    SMPEG_Filter *filter = SMPEGfilter_null();
                                    filter = SMPEG_filter( mpeg, filter );
                                    filter->destroy(filter);
                                    bilinear_filtering = 0;
                                } else {
                                    SMPEG_Filter *filter = SMPEGfilter_bilinear();
                                    filter = SMPEG_filter( mpeg, filter );
                                    filter->destroy(filter);
                                    bilinear_filtering = 1;
                                }
                            }
                            break;
                        case SDL_QUIT:
                            done = 1;
                            break;
                        default:
                            break;
                    }
                }
                SDL_Delay(1000/2);
            }
            SMPEG_delete(mpeg);
        }
        SDL_Quit();

    #if defined(HTTP_SUPPORT)
        if(fd) close(fd);
    #endif

        return(status);
    }

    Read the article

  • calling a method on the parent page from a user control

    - by Kyle
    I am using a user control that I created (just a .cs file, not an .ascx file) that does some magic, and depending on a value generated by the control, I need it to do something on the parent page that is 'hosting' the control. It needs to call a method under certain circumstances (the method is on the parent page). The control is placed on the parent page like so: <customtag:MyControl ID="something" runat="server" /> I'm dynamically creating buttons etc. on the control itself. When a button is clicked, say there's a text box on the control and its value is "bob", the control needs to call a method on the page that's housing it. How can I accomplish this?
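
    Not from the question itself, but the usual way to decouple this is to have the control raise an event that the hosting page subscribes to; a minimal sketch with invented names:

        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public class MyControl : CompositeControl
        {
            private TextBox _textBox;

            // The page subscribes to this; the control never needs to know the page type.
            public event EventHandler NameMatched;

            protected override void CreateChildControls()
            {
                _textBox = new TextBox { ID = "txtValue" };
                Button button = new Button { ID = "btnGo", Text = "Go" };
                button.Click += delegate
                {
                    // Raise the event instead of calling into the page directly.
                    if (_textBox.Text == "bob" && NameMatched != null)
                        NameMatched(this, EventArgs.Empty);
                };
                Controls.Add(_textBox);
                Controls.Add(button);
            }
        }

    The page then wires its own method to the event, either in code-behind (something.NameMatched += (s, e) => MyPageMethod();) or declaratively via an OnNameMatched attribute on the <customtag:MyControl> tag.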

    Read the article

  • .NET (C#) passing messages from a custom control to main application

    - by zer0c00l
    A custom Windows Forms control named 'tweet' is in a DLL. The custom control has a couple of basic controls to display a tweet. I add this custom control to my main application. This custom control has a button named "retweet"; when a user clicks this "retweet" button, I need to send some message to the main application. Unfortunately, the tweet control has no idea about the main application (both are in their own namespaces). How can I send messages from this custom control to the main application?
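
    As with the ASP.NET question above, the standard decoupling is an event published by the control; a sketch for the WinForms case, with hypothetical names, carrying the message in custom EventArgs:

        using System;
        using System.Windows.Forms;

        // Lives in the control's DLL: the control knows nothing about the host app.
        public class RetweetEventArgs : EventArgs
        {
            public RetweetEventArgs(string message) { Message = message; }
            public string Message { get; private set; }
        }

        public class TweetControl : UserControl
        {
            private readonly Button _retweetButton = new Button { Text = "retweet" };

            public event EventHandler<RetweetEventArgs> Retweeted;

            public TweetControl()
            {
                _retweetButton.Click += OnRetweetClicked;
                Controls.Add(_retweetButton);
            }

            private void OnRetweetClicked(object sender, EventArgs e)
            {
                // Copy to a local to avoid a race with unsubscription.
                EventHandler<RetweetEventArgs> handler = Retweeted;
                if (handler != null)
                    handler(this, new RetweetEventArgs("retweet requested")); // payload is up to the control
            }
        }

    The main application just subscribes: tweetControl.Retweeted += (s, e) => HandleRetweet(e.Message); neither assembly needs a reference to the other's namespace beyond the control DLL itself.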

    Read the article

  • change the sound volume of a live stream

    - by Omu
    I have something like this:

    private var myVideo:Video;
    public var videoDisplay:UIComponent;
    ...
    videoDisplay.addChild(myVideo);
    ...
    nsPlay = new NetStream(nc);
    nsPlay.addEventListener(NetStatusEvent.NET_STATUS, nsPlayOnStatus);
    nsPlay.bufferTime = 0;
    nsPlay.play(pro);
    myVideo.attachNetStream(nsPlay);

    Does anybody know how I can change the volume of this stream? I would like to bind the volume to a slider.

    Read the article

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I’d thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event. If it weren’t for a deranged laptop, and my distraction, the code wouldn’t have been lost this time. As always, I sighed, had a soothing cup of tea, and typed it all in again. The new code I hastily tapped in was much better: I’d held in my head the essence of how the code should work rather than the details: I now knew for certain the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better. Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch rather than tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas. What a shame that technology is now so good that developers rarely experience the cleansing shock of losing one’s code and having to rewrite it from scratch. If you’ve never accidentally lost your code, then it is worth doing it deliberately once for the experience. Creative people have, until technology mistakenly prevented it, torn up their drafts or sketches, thrown them in the bin, and started again from scratch. Leonardo’s obsessive reworking of the Mona Lisa was renowned because it was so unusual: most artists have been utterly ruthless in destroying work that didn’t quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000-word manuscript of ‘The Seven Pillars of Wisdom’ by accidentally leaving it on a train at Reading station, before rewriting a much better version. Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically into the bin or setting a light to it on a bonfire, and rewriting it from the blank page. It is easy to pick away at a flawed work, but the real creative process is far more brutal.

    Once, many years ago, whilst running a software house that supplied commercial software to local businesses, I’d been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code. For us, it represented a breakthrough, as it was for a government organisation, and success would guarantee more contracts. As you’ve probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive. There were some fragments left on individual machines, but they were all of different versions. The developers were in despair. Strangely, I managed to re-write the bulk of a three-month project in a manic and caffeine-soaked weekend. Sure, that elegant universally-applicable input-form routine wasn’t quite so elegant, but it didn’t really need to be, as we knew what forms it needed to support. Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests. The developers rose to the occasion after I’d collapsed, and tidied up what I’d done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they’d seen, and the user interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines. (We switched to a Helvetica font to look more ‘Bauhaus’.) The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic.

    In IT, we have had mixed experiences with complete rewrites. Lotus 1-2-3 never really recovered from a complete rewrite from assembler into C, Borland made the mistake with Arago and Quattro Pro, and Netscape’s complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible. The rewrite didn’t come out of the blue. I prefer to remember the rewrite of Minix by young Linus Torvalds, or the rewrite of BitKeeper by a slightly older Linus. The rewrite of CP/M didn’t do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision. I’ll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the idea among some motorcyclists that they are operating on infinite lives, or among the occasional squaddie that if they charge the machine-guns determinedly enough, all will be well. Grim experience brings out the humility in any experienced programmer. I’m referring to quite different circumstances here. Where a team knows the requirements perfectly, is of one mind on methodology and coding standards, and already has a solution, then what is wrong with considering a complete rewrite? Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge. The trouble is that source-control systems, and disaster recovery systems, are just too good nowadays. If I were to lose this draft of this very blog post, I know I’d rewrite it much better. However, if you read this, you’ll know I didn’t have the nerve to delete it and start again. There was a time that one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God. An old friend of mine with long experience in the software industry has long had the idea of the ‘source-control wet-work’, where one hires a malicious hacker in some wild eastern country to hack into one’s own source control system and destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipedream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source control system that, on doing all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test?

    Alas, I can’t see many managers buying into the idea. Reading the full story of the near-loss of Toy Story 2 set me thinking. It turned out that the lucky restoration of the code wasn’t the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway. Was this an early case of the ‘source-control wet-job’? It is very hard nowadays to do a rapid U-turn in a development project because we are far too prone to cling to our existing source code.

    Read the article

  • Can MKS Integrity integrate with other source control tools? (SVN, Git...)

    - by bnsmith
    My boss is interested in using MKS Integrity for bug tracking, feature requests, Wiki documentation and so on. However, we currently use Subversion, and he doesn't want to force us devs to use a version control system that we don't like. Is it possible to integrate a different version control program into MKS Integrity? I'm particularly interested in SVN, Git, Mercurial and Bazaar. If you've tried mixing tools like this before, I'd love to hear about your experiences.

    Read the article

  • Differences between 'Add web site/solution to source control...'

    - by Andy Rose
    I have opened a website hosted on my workstation in Visual Studio 2008 and saved it as a solution. I now want to add this to source control, and I am being given the option to either 'Add solution to source control...' or 'Add web site to source control...'. This solution needs to be accessed, worked on, and run locally by several other developers, so I was wondering what the key differences are between the two options and which would be the best to choose.

    Read the article

  • Source Control System. API. Get metrics

    - by w1z
    Hello all, I have the following situation: I need to choose a source control system for my project. This SCS must provide an API that my .NET application can use to get information about the check-ins for a specified user and date period, and about the changes made in those check-ins (the number of added and updated lines). What source control system provides this functionality? P.S. I can't use TFS; that's a hard limitation.

    Read the article

  • When inheriting a control in Silverlight, how to find out if its template has been applied?

    - by herzmeister der welten
    When inheriting a control in Silverlight, how do I find out if its template has already been applied? I.e., can I reliably get rid of my cumbersome _hasTemplateBeenApplied field?

    public class AwesomeControl : Control
    {
        private bool _hasTemplateBeenApplied = false;

        public override void OnApplyTemplate()
        {
            base.OnApplyTemplate();
            this._hasTemplateBeenApplied = true;
            // Stuff
        }

        private bool DoStuff()
        {
            if (this._hasTemplateBeenApplied)
            {
                // Do Stuff
                return true;
            }
            return false;
        }
    }
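
    Not a confirmed built-in answer, but one common alternative (assuming the control's template defines a named part, which is an assumption here) is to treat a successfully resolved template part as the "template applied" signal, so the separate flag disappears:

        using System.Windows;
        using System.Windows.Controls;

        public class AwesomeControl : Control
        {
            // A named part expected in the control's template; "RootElement" is hypothetical.
            private FrameworkElement _rootPart;

            public override void OnApplyTemplate()
            {
                base.OnApplyTemplate();
                _rootPart = GetTemplateChild("RootElement") as FrameworkElement;
            }

            // A non-null template part means the template has been applied.
            private bool HasTemplate
            {
                get { return _rootPart != null; }
            }
        }

    This only trades the bool for a reference you probably need anyway; if the control has no named parts at all, the explicit flag remains the straightforward choice.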

    Read the article

  • .net compliant version control system that can be installed on a shared hosting (with no SSH/root Access)

    - by Farshid
    I have searched a lot on SO and other websites for a version control system that can be installed on shared Windows hosting, one that lets me create repositories for my project files and gives me version control facilities, but I did not find one. I looked into whether I could install Git, Mercurial or TFS on shared hosting and did not find an answer. Do you know of any system that can be installed on shared Windows hosting? Please share your recommendations if you have experience with this.

    Read the article

  • Are there any good version control programs for a Windows box that do not have to be installed?

    - by CaffeineZombie
    On my work machine I do not have permission to install anything, and, astoundingly, no version control software is set up. I am using VS2008 and was hoping to avoid depending on SourceSafe. I've talked to the network admin, and all I could get was "We don't have any version control set up." Are there any good ways of going about this, or do I just have to bite the bullet?

    Read the article

  • Any advice for dynamic music control?

    - by Assembler
    I would like to be able to dynamically progress the score and affect the volume levels of separate channels within the music. How could I do this? From my experience with mod music (olden-days Amiga music: Mod Tracker, Scream Tracker, Fast Tracker II, Impulse Tracker, etc.), I believe this is the best way to tackle the problem: it allows the music to move from one loop to another without anything being mixed down. I want to do this in AS3, and am considering pulling apart Flod to make this happen.

    Read the article

  • What is auto-mounting my media volume?

    - by user285277
    Something is repeatedly mounting my "media" share, doing something with it, then quietly un-mounting it with no notifications at the user level. From the little I can glean from the console messages below, I thought I'd managed to stop it, if not understand it, last night when I followed instructions for deleting all traces of the Google Update daemon. I've not been using any Google apps whatsoever, so I was surprised to see that in Console. What's more surprising, and perhaps a little distressing, is that the same thing occurred this evening, when the Google daemon is long gone. I don't have that log because I can't recall precisely what time it occurred; I'll do a search and provide it if you wish, though. In the meantime, any help with this would be extremely appreciated. I've asked over at Apple Discussions, but I think it might be a little deeper than those manning the boards this weekend are comfortable with. It's certainly beyond my meager skills. With apologies in advance if this is more lines than you need. Please note that I've transformed the Google links a little because the forum here requires more reputation points before one can post more than two links.

    12/27/13 10:47:31.000 PM kernel[0]: memorystatus_thread: idle exiting pid 53629 [distnoted]
    12/27/13 10:48:10.433 PM com.apple.Preview.TrustedBookmarksService[53640]: Failed to resolve bookmark data at index: 0; not stale; error: The file doesn’t exist.
    12/27/13 10:48:10.434 PM com.apple.Preview.TrustedBookmarksService[53640]: Failed to resolve bookmark data at index: 1; not stale; error: The file doesn’t exist.
    12/27/13 10:48:10.950 PM com.apple.SecurityServer[17]: Session 103257 created
    12/27/13 10:48:34.328 PM com.apple.Preview.TrustedBookmarksService[53640]: Failed to resolve bookmark data at index: 2; not stale; error: The file doesn’t exist.
    12/27/13 10:48:34.000 PM kernel[0]: AFP_VFS afpfs_mount: /Volumes/Media Archive-1, pid 53641
    12/27/13 10:48:34.000 PM kernel[0]: AFP_VFS afpfs_mount : succeeded on volume 0xffffff80d6355008 /Volumes/Media Archive-1 (error = 0, retval = 0)
    12/27/13 10:49:32.000 PM kernel[0]: wlEvent: en0 en0 Link DOWN virtIf = 0
    12/27/13 10:49:32.000 PM kernel[0]: AirPort: Link Down on en0. Reason 8 (Disassociated because station leaving).
    12/27/13 10:49:32.000 PM kernel[0]: en0::IO80211Interface::postMessage bssid changed
    12/27/13 10:49:33.681 PM configd[16]: network changed: v4(en0-:10.0.1.12) DNS- Proxy- SMB
    12/27/13 10:49:33.697 PM configd[16]: network changed: DNS* Proxy
    12/27/13 10:49:35.475 PM KernelEventAgent[57]: tid 00000000 received event(s) VQ_NOTRESP (1)
    12/27/13 10:49:35.000 PM kernel[0]: ASP_TCP Disconnect: triggering reconnect by bumping reconnTrigger from curr value 0 on so 0xffffff802176b4a0
    12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect started /Volumes/Media Archive-1 prevTrigger 0 currTrigger 1
    12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: doing reconnect on /Volumes/Media Archive-1
    12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: posting to KEA EINPROGRESS for /Volumes/Media Archive-1
    12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: Max reconnect time: 600 secs, Connect timeout: 15 secs for /Volumes/Media Archive-1
    12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1
    12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65.
    12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 1 seconds and then try again
    12/27/13 10:49:35.479 PM KernelEventAgent[57]: tid 00000000 type 'afpfs', mounted on '/Volumes/Media Archive-1', from '//Me@Capsule._afpovertcp._tcp.local/Media%20Archive', not responding
    12/27/13 10:49:35.487 PM KernelEventAgent[57]: tid 00000000 found 1 filesystem(s) with problem(s)
    12/27/13 10:49:36.692 PM com.bourgeoisbits.cloak.agent[14503]: NetworkProfile: (null), (null), (null) (Connected: NO, Airport: NO, Open: NO) [trusted]
    12/27/13 10:49:36.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1
    12/27/13 10:49:36.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65.
    12/27/13 10:49:36.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 2 seconds and then try again
    12/27/13 10:49:38.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1
    12/27/13 10:49:38.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65.
    12/27/13 10:49:38.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 4 seconds and then try again
    12/27/13 10:49:41.000 PM kernel[0]: CODE SIGNING: cs_invalid_page(0x1000): p=53662[GoogleSoftwareUp] clearing CS_VALID
    12/27/13 10:49:42.102 PM GoogleSoftwareUpdateDaemon[53663]: -[KeystoneDaemon logServiceState] GoogleSoftwareUpdate daemon (1.1.0.3659) vending: com.google.Keystone.Daemon.UpdateEngine: 2 connection(s) com.google.Keystone.Daemon.Administration: 0 connection(s)
    12/27/13 10:49:42.113 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateEngine updateProductID:] KSUpdateEngine updating product ID: "com.google.Keystone"
    12/27/13 10:49:42.116 PM GoogleSoftwareUpdateDaemon[53663]: -[KSCheckAction performAction] KSCheckAction checking 1 ticket(s).
    12/27/13 10:49:42.121 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction performAction] KSUpdateCheckAction starting update check for ticket(s): {( <KSTicket:0x531870 productID=com.google.Keystone version=1.1.0.3659 xc=<KSPathExistenceChecker:0x5302d0 path=/Library/Google/GoogleSoftwareUpdate/GoogleSoftwareUpdate.bundle/> serverType=Omaha url=htt[PeeEs]://tools.google.com/service/update2 creationDate=2012-08-12 14:47:10 > )} Using server: <KSOmahaServer:0x534340 engine=<KSDaemonUpdateEngine:0x52e530> params={ EngineVersion = "1.1.0.3659"; ActivesInfo = { "com.google.talkplugin" = { LastRollCallPingDate = 2013-10-06 07:00:00 +0000; }; "com.google.Keystone" = { LastRollCallPingDate = 2013-10-06 07:00:00 +0000; LastActivePingDate = 2013-10-06 07:00:00 +0000; LastActiveDate = 2013-12-28 03:49:42 +0000; }; "com.google.picasa" = { LastActiveDate = 2012-08-29 10:15:42 +0000; }; }; UserInitiated = 0; IsSystem = 1; OmahaOSVersion = "10.8.5_i486"; Identity = KeystoneDaemon; AllowedSubdomains = ( ".omaha.sandbox.google.com", ".tools.google.com", ".www.google.com", ".corp.google.com" ); } >
    12/27/13 10:49:42.130 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction performAction] KSUpdateCheckAction running KSServerUpdateRequest: <KSOmahaServerUpdateRequest:0x1a31a90 server=<KSOmahaServer:0x534340> url="htt[PeeEs]://tools.google.com/service/update2" runningFetchers=0 tickets=1 activeTickets=1 rollCallTickets=1 body= <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <o:gupdate xmlns:o="htt[Pee]://www.google.com/update2/request" protocol="2.0" version="KeystoneDaemon-1.1.0.3659" ismachine="1"> <o:os platform="mac" version="MacOSX" sp="10.8.5_i486"></o:os> <o:app appid="com.google.Keystone" version="1.1.0.3659" lang="en-us" installage="502" brand="GGLG"> <o:ping r="82" a="82"></o:ping> <o:updatecheck></o:updatecheck> </o:app> </o:gupdate> >
    12/27/13 10:49:42.291 PM GoogleSoftwareUpdateDaemon[53663]: -[KSOutOfProcessFetcher(PrivateMethods) helperDidTerminate:] The Internet connection appears to be offline. [NSURLErrorDomain:-1009]
    12/27/13 10:49:42.291 PM GoogleSoftwareUpdateDaemon[53663]: -[KSServerUpdateRequest(PrivateMethods) fetcher:failedWithError:] KSServerUpdateRequest fetch failed. (productIDs: com.google.Keystone) [com.google.UpdateEngine.CoreErrorDomain:702 - 'htt[PeeEs]://tools.google.com/service/update2'] (The Internet connection appears to be offline. [NSURLErrorDomain:-1009])
    12/27/13 10:49:42.292 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction(PrivateMethods) finishAction] KSUpdateCheckAction found updates: {( )}
    12/27/13 10:49:42.295 PM GoogleSoftwareUpdateDaemon[53663]: -[KSPrefetchAction performAction] KSPrefetchAction no updates to prefetch.
    12/27/13 10:49:42.295 PM GoogleSoftwareUpdateDaemon[53663]: -[KSMultiUpdateAction performAction] KSSilentUpdateAction had no updates to apply.
    12/27/13 10:49:42.296 PM GoogleSoftwareUpdateDaemon[53663]: -[KSMultiUpdateAction performAction] KSPromptAction had no updates to apply.
    12/27/13 10:49:42.299 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateEngine(PrivateMethods) updateFinish] KSUpdateEngine update processing complete.
    12/27/13 10:49:42.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1
    12/27/13 10:49:42.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65.
    12/27/13 10:49:42.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 8 seconds and then try again
    12/27/13 10:49:43.152 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateEngine updateAllProducts] KSUpdateEngine updating all installed products.
    12/27/13 10:49:43.153 PM GoogleSoftwareUpdateDaemon[53663]: -[KSCheckAction performAction] KSCheckAction checking 2 ticket(s).
    12/27/13 10:49:43.158 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction performAction] KSUpdateCheckAction starting update check for ticket(s): {( <KSTicket:0x18367a0 productID=com.google.Keystone version=1.1.0.3659 xc=<KSPathExistenceChecker:0x1837e10 path=/Library/Google/GoogleSoftwareUpdate/GoogleSoftwareUpdate.bundle/> serverType=Omaha url=htt[PeeEs]://tools.google.com/service/update2 creationDate=2012-08-12 14:47:10 >, <KSTicket:0x1834750 productID=com.google.talkplugin version=4.7.0.15362 xc=<KSPathExistenceChecker:0x1833890 path=/Library/Application Support/Google/GoogleTalkPlugin.app> serverType=Omaha url=htt[PeeEs]://tools.google.com/service/update2 creationDate=2012-08-12 14:47:10 > )} Using server: <KSOmahaServer:0x52e930 engine=<KSDaemonUpdateEngine:0x52e530> params={ EngineVersion = "1.1.0.3659"; ActivesInfo = { "com.google.talkplugin" = { LastRollCallPingDate = 2013-10-06 07:00:00 +0000; }; "com.google.Keystone" = { LastRollCallPingDate = 2013-10-06 07:00:00 +0000; LastActivePingDate = 2013-10-06 07:00:00 +0000; LastActiveDate = 2013-12-28 03:49:42 +0000; }; "com.google.picasa" = { LastActiveDate = 2012-08-29 10:15:42 +0000; }; }; UserInitiated = 0; IsSystem = 1; OmahaOSVersion = "10.8.5_i486"; Identity = KeystoneDaemon; AllowedSubdomains = ( ".omaha.sandbox.google.com", ".tools.google.com", ".www.google.com", ".corp.google.com" ); } >
    12/27/13 10:49:43.159 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction performAction] KSUpdateCheckAction running KSServerUpdateRequest: <KSOmahaServerUpdateRequest:0x53a210 server=<KSOmahaServer:0x52e930> url="htt[PeeEs]://tools.google.com/service/update2" runningFetchers=0 tickets=2 activeTickets=1 rollCallTickets=2 body= <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <o:gupdate xmlns:o="htt[Pee]://www.google.com/update2/request" protocol="2.0" version="KeystoneDaemon-1.1.0.3659" ismachine="1"> <o:os platform="mac" version="MacOSX" sp="10.8.5_i486"></o:os> <o:app appid="com.google.Keystone" version="1.1.0.3659" lang="en-us" installage="502" brand="GGLG"> <o:ping r="82" a="82"></o:ping> <o:updatecheck></o:updatecheck> </o:app> <o:app appid="com.google.talkplugin" version="4.7.0.15362" lang="en-us" installage="502" brand="GGLG"> <o:ping r="82"></o:ping> <o:updatecheck></o:updatecheck> </o:app> </o:gupdate> >
    12/27/13 10:49:43.243 PM GoogleSoftwareUpdateDaemon[53663]: -[KSOutOfProcessFetcher(PrivateMethods) helperDidTerminate:] The Internet connection appears to be offline. [NSURLErrorDomain:-1009]
    12/27/13 10:49:43.243 PM GoogleSoftwareUpdateDaemon[53663]: -[KSServerUpdateRequest(PrivateMethods) fetcher:failedWithError:] KSServerUpdateRequest fetch failed. (productIDs: com.google.Keystone, ... (2)) [com.google.UpdateEngine.CoreErrorDomain:702 - 'htt[PeeEs]://tools.google.com/service/update2'] (The Internet connection appears to be offline. [NSURLErrorDomain:-1009])
    12/27/13 10:49:43.244 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction(PrivateMethods) finishAction] KSUpdateCheckAction found updates: {( )}
    12/27/13 10:49:43.247 PM GoogleSoftwareUpdateDaemon[53663]: -[KSPrefetchAction performAction] KSPrefetchAction no updates to prefetch.
    12/27/13 10:49:43.248 PM GoogleSoftwareUpdateDaemon[53663]: -[KSMultiUpdateAction performAction] KSSilentUpdateAction had no updates to apply.
    12/27/13 10:49:43.248 PM GoogleSoftwareUpdateDaemon[53663]: -[KSMultiUpdateAction performAction] KSPromptAction had no updates to apply.
    12/27/13 10:49:43.250 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateEngine(PrivateMethods) updateFinish] KSUpdateEngine update processing complete.
    12/27/13 10:49:45.570 PM GoogleSoftwareUpdateDaemon[53663]: -[KeystoneDaemon logServiceState] GoogleSoftwareUpdate daemon (1.1.0.3659) vending: com.google.Keystone.Daemon.UpdateEngine: 1 connection(s) com.google.Keystone.Daemon.Administration: 0 connection(s)
    12/27/13 10:49:50.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1
    12/27/13 10:49:50.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65.
    12/27/13 10:49:50.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 10 seconds and then try again
    12/27/13 10:49:53.828 PM KernelEventAgent[57]: tid 00000000 unmounting 1 filesystems
    12/27/13 10:49:53.000 PM kernel[0]: AFP_VFS afpfs_unmount: /Volumes/Media Archive-1, flags 524288, pid 57
    12/27/13 10:49:54.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: get the reconnect token
    12/27/13 10:49:54.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: GetReconnectToken failed 32 /Volumes/Media Archive-1
    12/27/13 10:49:54.000 PM kernel[0]: AFP_VFS afpfs_unmount : afpfs_DoReconnect sent signal for unmount to proceed
    12/27/13 10:50:12.104 PM GoogleSoftwareUpdateDaemon[53663]: -[KeystoneDaemon main] GoogleSoftwareUpdateDaemon inactive, shutdown.
    12/27/13 10:50:29.396 PM Dock[93157]: no information back from LS about running process

    Read the article

  • How to bind a XPO Data Source to an ASPxGridView control

    - by nikolaosk
    I have been involved with an ASP.NET project recently, and I have implemented it using the awesome DevExpress ASP.NET controls. In this post I will show you how to bind an XPO data source control to an ASPxGridView control. If you want to implement this example, you need to download the trial version of these controls, unless you are a licensed holder of DevExpress products. We will need a database to work with. I will use AdventureWorks2008R2. You can download it here. We will need an instance of SQL...(read more)

    Read the article

  • What's New in ASP.NET 4.0 Part Two: WebForms and Visual Studio Enhancements

    - by Rick Strahl
    In the last installment I talked about the core changes in the ASP.NET runtime that I’ve been taking advantage of. In this column, I’ll cover the changes to the Web Forms engine and some of the cool improvements in Visual Studio that make Web and general development easier.

    WebForms

    The WebForms engine is the area that has received the most significant changes in ASP.NET 4.0. Probably the most widely anticipated features are related to managing page client ids and ViewState on WebForm pages.

    Take Control of Your ClientIDs

    Unique ClientID generation in ASP.NET has been one of the most complained about “features” in ASP.NET. Although there’s a very good technical reason for these unique generated ids - they guarantee unique ids for each and every server control on a page - these unique and generated ids often get in the way of client-side JavaScript development and CSS styling, as it’s often inconvenient and fragile to work with the long, generated ClientIDs. In ASP.NET 4.0 you can now specify an explicit client id mode on each control, or on each naming container parent control, to control how client ids are generated. By default, ASP.NET generates mangled client ids for any control contained in a naming container (like a Master Page, or a User Control, for example). The keys to ClientID management in ASP.NET 4.0 are the new ClientIDMode and ClientIDRowSuffix properties. ClientIDMode supports the four different ClientID generation settings shown below. For the following examples, imagine that you have a Textbox control named txtName inside of a master page control container on a WebForms page.

    <%@Page Language="C#" MasterPageFile="~/Site.Master" CodeBehind="WebForm2.aspx.cs" Inherits="WebApplication1.WebForm2" %>
    <asp:Content ID="content" ContentPlaceHolderID="content"
                 runat="server"
                 ClientIDMode="Static" >
        <asp:TextBox runat="server" ID="txtName" />
    </asp:Content>

    The four available ClientIDMode values are:

    AutoID

    This is the existing behavior in ASP.NET 1.x-3.x where full naming container munging takes place.

    <input name="ctl00$content$txtName" type="text" id="ctl00_content_txtName" />

    This should be familiar to any ASP.NET developer, and it results in fairly unpredictable client ids that can easily change if the containership hierarchy changes. For example, removing the master page changes the name in this case, so if you were to move a block of script code that works against the control to a non-Master page, the script code immediately breaks.

    Static

    This option is the most deterministic setting; it forces the control’s ClientID to use its ID value directly. No naming container naming at all is applied and you end up with clean client ids:

    <input name="ctl00$content$txtName" type="text" id="txtName" />

    Note that the name property, which is used for postback variables to the server, still is munged, but the ClientID property is displayed simply as the ID value that you have assigned to the control. This option is what most of us want to use, but you have to be clear on that, because it can potentially cause conflicts with other controls on the page. If there are several instances of the same naming container (several instances of the same user control, for example) there can easily be a client id naming conflict.
    Note that if you assign Static to a data-bound control, like a list child control in templates, you do not get unique ids either, so for list controls where you rely on unique ids for child controls, you’ll probably want to use Predictable rather than Static. I’ll write more on this a little later when I discuss ClientIDRowSuffix.

    Predictable

    The previous two values are pretty self-explanatory. Predictable, however, requires some explanation. To me at least, it’s not in the least bit predictable. MSDN defines this value as follows: This algorithm is used for controls that are in data-bound controls. The ClientID value is generated by concatenating the ClientID value of the parent naming container with the ID value of the control. If the control is a data-bound control that generates multiple rows, the value of the data field specified in the ClientIDRowSuffix property is added at the end. For the GridView control, multiple data fields can be specified. If the ClientIDRowSuffix property is blank, a sequential number is added at the end instead of a data-field value. Each segment is separated by an underscore character (_).

    The key that makes this value a bit confusing is that it relies on the parent NamingContainer’s ClientID to build its own ClientID value. This effectively means that the value is not predictable at all, but rather very tightly coupled to the parent naming container’s ClientIDMode setting. For my simple textbox example, if the ClientIDMode property of the parent naming container (Page in this case) is set to “Predictable” you’ll get this:

    <input name="ctl00$content$txtName" type="text" id="content_txtName" />

    which gives an id that is based on walking up to the currently active naming container (the MasterPage content container) and starting the id formatting from there downward. Think of this as a semi-unique name that’s guaranteed unique only for the naming container. If, on the other hand, the Page is set to “AutoID” you get the following with Predictable on txtName:

    <input name="ctl00$content$txtName" type="text" id="ctl00_content_txtName" />

    The latter is effectively the same as if you had specified AutoID, because it inherits the AutoID naming from the Page and Content Master Page control of the page. But again - predictable behavior always depends on the parent naming container and how it generates its id, so the id may not always be exactly the same as the AutoID-generated value, because somewhere in the NamingContainer chain the ClientIDMode setting may be set to a different value. For example, if you had another naming container in the middle that was set to Static, you’d end up effectively with an id that starts with that NamingContainer’s id rather than the whole ctl00_content munging. The most common use for Predictable is likely to be for data-bound controls, which results in each data-bound item getting a unique ClientID. Unfortunately, even here the behavior can be very unpredictable depending on which data-bound control you use - I found significant differences in how template controls in a GridView behave from those that are used in a ListView control. For example, GridView creates clean child ClientIDs, while ListView still has a naming container in the ClientID, presumably because of the template container on which you can’t set ClientIDMode. Predictable is useful, but only if all naming containers down the chain use this setting. Otherwise you’re right back to the munged ids that are pretty unpredictable.
    Another property, ClientIDRowSuffix, can be used in combination with a ClientIDMode of Predictable to force a suffix onto list client controls. For example:

    <asp:GridView runat="server" ID="gvItems"
                  AutoGenerateColumns="false"
                  ClientIDMode="Static"
                  ClientIDRowSuffix="Id">
        <Columns>
        <asp:TemplateField>
            <ItemTemplate>
                <asp:Label runat="server" id="txtName"
                           Text='<%# Eval("Name") %>'
                           ClientIDMode="Predictable"/>
            </ItemTemplate>
        </asp:TemplateField>
        <asp:TemplateField>
            <ItemTemplate>
                <asp:Label runat="server" id="txtId"
                           Text='<%# Eval("Id") %>'
                           ClientIDMode="Predictable" />
            </ItemTemplate>
        </asp:TemplateField>
        </Columns>
    </asp:GridView>

    generates client ids inside of a column in the master page described earlier:

    <td>
        <span id="txtName_0">Rick</span>
    </td>

    where the value after the underscore is the ClientIDRowSuffix field - in this case the "Id" of the item data-bound to the control. Note that all of the child controls require ClientIDMode="Predictable" in order for the ClientIDRowSuffix to be applied, and the parent GridView control needs to be set to Static, either explicitly or via naming container inheritance, to give these simple names. It’s a bummer that ClientIDRowSuffix doesn’t work with Static to produce this automatically. Another real problem is that other controls process the ClientIDMode differently. For example, a ListView control processes the Predictable ClientIDMode differently and produces the following with a Static ListView and Predictable child controls:

    <span id="ctrl0_txtName_0">Rick</span>

    I couldn’t even figure out a way, using ClientIDMode, to get a simple ID that also uses a suffix, short of falling back to manually generated ids using <%= %> expressions instead. Given the inconsistencies inside of list controls, using <%= %> ids for the ListView might not be a bad idea anyway.

    Inherit

    The final setting is Inherit, which is the default for all controls except Page. This means that controls by default inherit the parent naming container’s ClientIDMode setting. For more detailed information on ClientID behavior and different scenarios you can check out a blog post of mine on this subject: http://www.west-wind.com/weblog/posts/54760.aspx.

    ClientID Enhancements Summary

    The ClientIDMode property is a welcome addition to ASP.NET 4.0. To me this is probably the most useful WebForms feature, as it allows me to generate clean IDs simply by setting ClientIDMode="Static" on either the page or inside of web.config (in the Pages section), which applies the setting down to the entire page; that is my 95% scenario. For the few cases when it matters - for list controls and inside of multi-use user controls or custom server controls - I can use Predictable or even AutoID to force controls to unique names. For application-level page development, this is easy to accomplish and provides maximum usability for working with client script code against page controls.

    ViewStateMode

    Another area of large criticism for WebForms is ViewState. ViewState is used internally by ASP.NET to persist page-level changes to non-postback properties on controls as pages post back to the server.
ViewStateMode

Another area of major criticism for WebForms is ViewState. ViewState is used internally by ASP.NET to persist page-level changes to non-postback properties on controls as pages post back to the server. It's a useful mechanism that works great for the overall mechanics of WebForms, but it can also cause all sorts of overhead for page operation, as ViewState can very quickly get out of control and consume huge amounts of bandwidth in your page content. ViewState can also wreak havoc with client-side scripting applications that modify control properties that are tracked by ViewState, which can produce very unpredictable results on a postback after client-side updates.

Over the years in my own development, I've often turned off ViewState on pages to reduce overhead. Yes, you lose some functionality, but you can easily implement most of the common functionality in non-ViewState workarounds. Relying less on ViewState-heavy controls and sticking with simpler controls or raw HTML constructs gets around most ViewState problems.

In ASP.NET 3.x and prior, it wasn't easy to control ViewState - you could turn it on or off, and if you turned it off at the page or web.config level, you couldn't turn it back on for specific controls. In short, it was an all or nothing approach. With ASP.NET 4.0, the new ViewStateMode property gives you more control. It allows you to disable ViewState globally at either the page or web.config level and then turn it back on for specific controls that might need it.

ViewStateMode only works when EnableViewState="true" at the page or web.config level (which is the default). You can then use ViewStateMode of Disabled, Enabled or Inherit to control the ViewState settings on the page. If you're shooting for minimal ViewState usage, the ideal setup is to set ViewStateMode to Disabled at the Page or web.config level and only turn it back on for particular controls:

<%@ Page Language="C#"
    CodeBehind="WebForm2.aspx.cs"
    Inherits="Westwind.WebStore.WebForm2"
    ClientIDMode="Static"
    ViewStateMode="Disabled"
    EnableViewState="true" %>

<!-- this control has ViewState -->
<asp:TextBox runat="server" ID="txtName" ViewStateMode="Enabled" />

<!-- this control has no ViewState - it inherits from its parent container -->
<asp:TextBox runat="server" ID="txtAddress" />

Note that EnableViewState="true" at the Page level isn't required since it's the default, but it's important that the value is true - ViewStateMode has no effect if EnableViewState="false" at the page level. The main benefit of ViewStateMode is that it allows you to more easily turn off ViewState for most of the page and enable only a few key controls that might need it. For me personally, this is a perfect combination, as most of my WebForm apps can get away without any ViewState at all. But some controls - especially third-party controls - often don't work well without ViewState enabled, and now it's much easier to selectively enable individual controls rather than the old way, which required you to pretty much turn off ViewState on every control that you didn't want ViewState on.
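The same settings are available from code-behind if you need to decide at runtime which controls keep their state. A minimal sketch, assuming the txtName control from the markup above - note that ViewState settings should be applied early in the page lifecycle, which is why this runs in Page_Init:

protected void Page_Init(object sender, EventArgs e)
{
    // Disable ViewState for the page as a whole
    this.ViewStateMode = System.Web.UI.ViewStateMode.Disabled;

    // Re-enable it only for the one control that actually needs it
    txtName.ViewStateMode = System.Web.UI.ViewStateMode.Enabled;
}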
Inline HTML Encoding

HTML encoding is an important feature to prevent cross-site scripting attacks in data entered by users on your site. In order to make it easier to create HTML-encoded content, ASP.NET 4.0 introduces a new expression syntax using <%: %> to encode string values. The encoding expression syntax looks like this:

<%: "<script type='text/javascript'>" +
    "alert('Really?');</script>" %>

which produces properly encoded HTML:

&lt;script type=&#39;text/javascript&#39;&gt;alert(&#39;Really?&#39;);&lt;/script&gt;

Effectively this is a shortcut to:

<%= HttpUtility.HtmlEncode(
    "<script type='text/javascript'>" +
    "alert('Really?');</script>") %>

Of course the <%: %> syntax can also evaluate expressions just like <%= %>, so the more common scenario applies this expression syntax to data your application is displaying. Here's an example displaying some data model values:

<%: Model.Address.Street %>

This snippet shows displaying data from your application's data store or, more importantly, from data entered by users. Anything that makes it easier and less verbose to HtmlEncode text is a welcome addition that helps avoid potential cross-site scripting attacks. Although I listed inline HTML encoding here under WebForms, anything that uses the WebForms rendering engine, including ASP.NET MVC, benefits from this feature.
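One related detail: the <%: %> syntax won't double-encode values that are already marked as safe HTML. Anything that implements IHtmlString - such as an HtmlString instance from the System.Web namespace - passes through unencoded. A quick sketch:

<%-- Encoded output: &lt;b&gt;bold&lt;/b&gt; --%>
<%: "<b>bold</b>" %>

<%-- Rendered as-is: HtmlString implements IHtmlString --%>
<%: new HtmlString("<b>bold</b>") %>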
ScriptManager Enhancements

The ASP.NET ScriptManager control has in the past provided some nice ways to take programmatic and markup control over script loading, but there were a number of shortcomings in this control. The ASP.NET 4.0 ScriptManager has a number of improvements that make it easier to control script loading and that address a few of the shortcomings that have often kept me from using the control in favor of manual script loading.

The first is the AjaxFrameworkMode property, which finally lets you suppress loading the ASP.NET AJAX runtime. Disabled doesn't load any ASP.NET AJAX libraries, but there's also an Explicit mode that lets you pick and choose the library pieces individually and reduce the footprint of the ASP.NET AJAX script included if you are using the library.

There's also a new EnableCdn property. If a script's WebResource attribute has its CdnPath property set to a non-null/empty CDN-supplied URL, and EnableCdn is enabled on the ScriptManager, that script will be served from the specified CdnPath:

[assembly: WebResource(
   "Westwind.Web.Resources.ww.jquery.js",
   "application/x-javascript",
   CdnPath = "http://mysite.com/scripts/ww.jquery.min.js")]

Cool, but a little too static for my taste, since this value can't be changed at runtime to point at a debug script as needed, for example.

Assembly names for loading scripts from resources can now be simple names rather than fully qualified assembly names, which makes it less verbose to reference scripts from assemblies loaded from your bin folder or the assembly reference area in web.config:

<asp:ScriptManager runat="server" ID="Id"
        EnableCdn="true"
        AjaxFrameworkMode="Disabled">
    <Scripts>
        <asp:ScriptReference
            Name="Westwind.Web.Resources.ww.jquery.js"
            Assembly="Westwind.Web" />
    </Scripts>
</asp:ScriptManager>

The ScriptManager in 4.0 also supports script combining via the CompositeScript tag, which allows you to very easily combine scripts into a single script resource served via ASP.NET. Even nicer: you can specify the URL that the combined script is served with. Check out the following ScriptManager markup that combines several static file scripts and a script resource into a single ASP.NET-served resource from a static URL (allscripts.js):

<asp:ScriptManager runat="server" ID="Id"
        EnableCdn="true"
        AjaxFrameworkMode="Disabled">
    <CompositeScript
        Path="~/scripts/allscripts.js">
        <Scripts>
            <asp:ScriptReference
                Path="~/scripts/jquery.js" />
            <asp:ScriptReference
                Path="~/scripts/ww.jquery.js" />
            <asp:ScriptReference
                Name="Westwind.Web.Resources.editors.js"
                Assembly="Westwind.Web" />
        </Scripts>
    </CompositeScript>
</asp:ScriptManager>

When you render this into HTML, you'll see a single script reference in the page:

<script src="scripts/allscripts.debug.js"
        type="text/javascript"></script>

All you need to do to make this work is ensure that allscripts.js and allscripts.debug.js exist in the scripts folder of your application - they can be empty, but the files have to be there. This is pretty cool, but you want to be real careful to use a unique URL for each combination of scripts you combine, or else browser and server caching will easily screw you up royally.

The ScriptManager also now allows you to override native ASP.NET AJAX scripts, as any script references defined in the Scripts section of the ScriptManager trump internal references. So if you want custom behavior, or you want to fix a possible bug in the core libraries that are normally loaded from resources, you can now do this simply by referencing the script resource name in the Name property and pointing at System.Web for the assembly. Not a common scenario, but when you need it, it can come in real handy.

Still, there are a number of shortcomings in this control. For one, the ScriptManager and ClientScript APIs still have no common entry point, so control developers are still faced with having to check and support both APIs to load scripts so that controls can work on pages that do or don't have a ScriptManager. The CdnPath is static and compiled in, which is very restrictive. And finally, there's still no control over where scripts get loaded on the page - ScriptManager still injects scripts into the middle of the HTML markup rather than into the header or optionally the footer. This, in turn, means there is little control over script loading order, which can be problematic for control developers.
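As an illustration of the override capability described above, a sketch along these lines swaps in a local copy of one of the native AJAX scripts - the local path and file name here are assumptions for illustration:

<asp:ScriptManager runat="server" ID="Id">
    <Scripts>
        <!-- Name matches the embedded resource name; Path points
             at the local, patched copy to serve in its place -->
        <asp:ScriptReference
            Name="MicrosoftAjax.js"
            Path="~/scripts/MicrosoftAjax.custom.js" />
    </Scripts>
</asp:ScriptManager>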
MetaDescription, MetaKeywords Page Properties

There are also a number of additional Page properties that correspond to some of the other features discussed in this column: ClientIDMode, ClientTarget and ViewStateMode. Another minor but useful feature is that you can now directly access the MetaDescription and MetaKeywords properties on the Page object to set the corresponding meta tags programmatically. Updating these values previously required either <%= %> expressions in the page markup or dynamic insertion of literal controls into the page. You can now just set these properties programmatically on the Page object in any Control-derived class on the page or on the Page itself:

Page.MetaKeywords = "ASP.NET,4.0,New Features";
Page.MetaDescription = "This article discusses the new features in ASP.NET 4.0";

Note that there's no corresponding ASP.NET tag for the HTML meta element, so the only way to specify these values in markup is via the @Page tag:

<%@ Page Language="C#"
    CodeBehind="WebForm2.aspx.cs"
    Inherits="Westwind.WebStore.WebForm2"
    ClientIDMode="Static"
    MetaDescription="Article that discusses what's new in ASP.NET 4.0"
    MetaKeywords="ASP.NET,4.0,New Features" %>

Nothing earth-shattering, but quite convenient.

Visual Studio 2010 Enhancements for Web Development

For Web development there is also a host of editor enhancements in Visual Studio 2010. Some of these are not Web specific, but they are useful for Web developers in general.

Text Editors

Throughout Visual Studio 2010, the text editors have all been updated to a new core engine based on WPF, which provides some interesting new features for the various code editors, including the nice ability to zoom in and out with Ctrl-MouseWheel to quickly change the size of text. There are many more API options to control the editor, and although Visual Studio 2010 doesn't yet use many of these features, we can look forward to enhancements in add-ins and future editor updates from the various language teams that take advantage of the visual richness that WPF provides to editing.

On the negative side, I've noticed that occasionally the code editor - and especially the HTML and JavaScript editors - will lose the ability to use various navigation keys like the arrow, backspace and delete keys, which requires closing and reopening the documents. This issue seems to be well documented, so I suspect it will be addressed soon with a hotfix or in the first service pack. Overall, though, the code editors work very well, especially given that they were rewritten completely using WPF, which was one of my big worries when I first heard about the complete redesign of the editors.

Multi-Targeting

Visual Studio now targets all versions of the .NET framework from 2.0 forward. You can use Visual Studio 2010 to work on your ASP.NET 2.0, 3.0 and 3.5 applications, which is a nice way to get your feet wet with the new development environment without having to make changes to existing applications. It's nice to have one tool to work in for all the different versions.

Multi-Monitor Support

One cool feature of Visual Studio 2010 is the ability to drag windows out of the Visual Studio environment and onto the desktop, including onto another monitor. Since Web development often involves working with a host of designers at the same time - visual designer, HTML markup window, code-behind and JavaScript editor - it's really nice to have a little more screen real estate to work with for each of these editors. Microsoft made a welcome change in the environment.

IntelliSense Snippets for HTML and JavaScript Editors

The HTML and JavaScript editors now finally support IntelliSense snippets to create macro-based template expansions like those that have been in the core C# and Visual Basic code editors since Visual Studio 2005. Snippets allow you to create short XML-based template definitions that can act as static macros or as real templates with replaceable values that are embedded into the expanded text. The XML syntax for these snippets is straightforward, and it's pretty easy to create custom snippets manually. You can create snippets using XML and store them in your custom snippets folder (C:\Users\rstrahl\Documents\Visual Studio 2010\Code Snippets\Visual Web Developer\My HTML Snippets and My JScript Snippets), but it helps to use one of the third-party tools that exist to simplify the process for you. I use SnippetEditor, by Bill McCarthy, which makes short work of creating snippets interactively (http://snippeteditor.codeplex.com/). Note: You may have to manually add the Visual Studio 2010 user-specific snippet folders to this tool to see existing snippets you've created.

Code snippets are some of the biggest time savers, and HTML editing more than anything involves lots of repetitive tasks that lend themselves to text expansion. Visual Studio 2010 includes a slew of built-in snippets (that you can also customize!) and you can create your own very easily. If you haven't done so already, I encourage you to spend a little time examining your coding patterns, find the repetitive code that you write, and convert it into snippets. I've been using CodeRush for this for years, but now you can do much of the basic expansion natively for HTML and JavaScript snippets.
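For reference, here's roughly what a hand-rolled snippet file looks like - a sketch using the standard Visual Studio snippet schema; the shortcut name and expansion are made up for illustration:

<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>div with id</Title>
      <Shortcut>divid</Shortcut>
    </Header>
    <Snippet>
      <Declarations>
        <!-- $id$ becomes a replaceable field when the snippet expands -->
        <Literal>
          <ID>id</ID>
          <Default>myDiv</Default>
        </Literal>
      </Declarations>
      <Code Language="html">
        <![CDATA[<div id="$id$">$end$</div>]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>

Save a file like this into the My HTML Snippets folder mentioned above and the shortcut becomes available in the HTML editor.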
jQuery Integration Is Now Native

jQuery is a popular JavaScript library, and Microsoft has recently stated that it will become the primary client-side scripting technology used to drive higher-level script functionality in the various ASP.NET Web projects that Microsoft provides. In Visual Studio 2010, the default full project template includes jQuery as part of a new project, including the support files that provide IntelliSense (the -vsdoc.js files). IntelliSense support for jQuery is now also baked into Visual Studio 2010, so unlike Visual Studio 2008, which required a separate download, no further installs are required for a rich IntelliSense experience with jQuery.

Summary

ASP.NET 4.0 brings many useful improvements to the platform, but thankfully most of the changes are incremental changes that don't compromise backwards compatibility, and they allow developers to ease into the new features one feature at a time. None of the changes in ASP.NET 4.0 or Visual Studio 2010 are monumental or game changers. The bigger features are language and .NET Framework changes, and those are also optional. This ASP.NET and tools release feels more like fine-tuning and getting some long-standing kinks worked out of the platform. It shows that the ASP.NET team is dedicated to paying attention to community feedback and responding with changes to the platform and development environment based on that feedback.

If you haven't gotten your feet wet with ASP.NET 4.0 and Visual Studio 2010, there's no reason not to give it a shot now - the ASP.NET 4.0 platform is solid, and Visual Studio 2010 works very well for a brand-new release. Check it out.

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET

