Search Results

Search found 16473 results on 659 pages for 'game logic'.


  • Store scores for players and produce a high score list

    - by zrvan
    This question is derived from an interview question that I got for a job I was declined. I have asked for code review of my solution at the dedicated Stack Exchange site, but I hope this question is sufficiently rephrased and asked with a different motivation not to be a duplicate of the other question. Consider the following scenario: you should store player scores in the server back end of a game, and the server is written in Java.
    - Every score should be registered; that is, one player may have any number of scores for any number of levels.
    - A high score list should be produced with the fifteen top scores for a given level, but only one score per user (to the effect that even if player X has the two highest scores for level Y, only the first position is counted and player Z has the second place).
    - No information should be persisted, and only Java 1.7+ standard libraries should be used; no third-party libraries or frameworks are acceptable.
    With the number of players as the primary factor: what would be the best data structure in terms of scalability and concurrency? How would you access the structure to register a single score given a level and a player id? How would you access the structure to compile the high score list?
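    One possible shape of an answer, sketched purely for illustration (it is not taken from the question or any of its answers): a ConcurrentHashMap keyed by level, each holding the best score per player, compiled into a top-fifteen list on demand. Only JDK 1.7 classes are used, all names are made up, and for brevity the sketch keeps just each player's best score per level rather than the full score history the question also asks to register.

    ```java
    import java.util.*;
    import java.util.concurrent.*;

    // Illustrative sketch: one scoreboard per level, best score per player,
    // built only on JDK 1.7 concurrency primitives (no third-party libraries).
    public class ScoreBoard {
        // level id -> (player id -> best score for that level)
        private final ConcurrentMap<Integer, ConcurrentMap<Integer, Integer>> levels =
                new ConcurrentHashMap<>();

        // Register a score; only the player's best score per level is retained here.
        public void registerScore(int levelId, int playerId, int score) {
            ConcurrentMap<Integer, Integer> board = levels.get(levelId);
            if (board == null) {
                board = new ConcurrentHashMap<>();
                ConcurrentMap<Integer, Integer> existing = levels.putIfAbsent(levelId, board);
                if (existing != null) {
                    board = existing;
                }
            }
            // Lock-free retry loop: keep the maximum score seen for this player.
            while (true) {
                Integer current = board.putIfAbsent(playerId, score);
                if (current == null || current >= score) {
                    return;
                }
                if (board.replace(playerId, current, score)) {
                    return;
                }
            }
        }

        // Compile the top-N list (one entry per player) for a level.
        public List<Map.Entry<Integer, Integer>> highScores(int levelId, int limit) {
            ConcurrentMap<Integer, Integer> board = levels.get(levelId);
            if (board == null) {
                return Collections.emptyList();
            }
            List<Map.Entry<Integer, Integer>> entries = new ArrayList<>(board.entrySet());
            Collections.sort(entries, new Comparator<Map.Entry<Integer, Integer>>() {
                public int compare(Map.Entry<Integer, Integer> a, Map.Entry<Integer, Integer> b) {
                    return b.getValue().compareTo(a.getValue()); // descending by score
                }
            });
            return new ArrayList<>(entries.subList(0, Math.min(limit, entries.size())));
        }
    }
    ```

    Registering a score is a lock-free put/replace, so it scales with the number of players; compiling the list sorts only the players who actually have a score on that level, which is usually acceptable when the list is requested far less often than scores are registered.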

    Read the article

  • What differences should I know? I just upgraded to 13.10 from 10.10 [on hold]

    - by test
    I ran Ubuntu 10.10 for a long time because I liked the menu style. The change in GUI with the upgrades drove me nuts but I finally gave in and downloaded Saucy Salamander 13.10 x64. It's a fresh install running as a virtual machine guest in VMWare Workstation 9 on a Windows 7 x64 host. Well it looks like all those icons are still there on the side which I would be OK with if there were some way to bring back my menus. I have no organized way of accessing things now, or do I? That is the purpose for this question, maybe there is some functionality I just can't find but is there. Also all my fine tuning was gone. I used to be able to change DPI but that's gone. I went ahead and installed Unity Tweak Tool via sudo apt-get install unity-tweak-tool but I couldn't find an icon for it after I installed it.. because again no menu. So I did a search for it and found it there. I've changed the font and which side the window buttons appear on which is good enough for now. Anyway... any suggestions you may have for me I'm game. I'm a Windows 7 user primarily but I use Ubuntu every once in a while. I really liked the old style where everything was categorized like for Applications there was Accessories, Games, Graphics, Internet, Office, Sound & Video, Wine.

    Read the article

  • iPhone Serialization problem

    - by Jenicek
    Hi, I need to save my own created class to file, I found on the internet, that good approach is to use NSKeyedArchiver and NSKeyedUnarchiver My class definition looks like this: @interface Game : NSObject <NSCoding> { NSMutableString *strCompleteWord; NSMutableString *strWordToGuess; NSMutableArray *arGuessedLetters; //This array stores characters NSMutableArray *arGuessedLettersPos; //This array stores CGRects NSInteger iScore; NSInteger iLives; NSInteger iRocksFallen; BOOL bGameCompleted; BOOL bGameOver; } I've implemented methods initWithCoder: and encodeWithCoder: this way: - (id)initWithCoder:(NSCoder *)coder { if([coder allowsKeyedCoding]) { strCompleteWord = [[coder decodeObjectForKey:@"CompletedWord"] copy]; strWordToGuess = [[coder decodeObjectForKey:@"WordToGuess"] copy]; arGuessedLetters = [[coder decodeObjectForKey:@"GuessedLetters"] retain]; // arGuessedLettersPos = [[coder decodeObjectForKey:@"GuessedLettersPos"] retain]; iScore = [coder decodeIntegerForKey:@"Score"]; iLives = [coder decodeIntegerForKey:@"Lives"]; iRocksFallen = [coder decodeIntegerForKey:@"RocksFallen"]; bGameCompleted = [coder decodeBoolForKey:@"GameCompleted"]; bGameOver = [coder decodeBoolForKey:@"GameOver"]; } else { strCompleteWord = [[coder decodeObject] retain]; strWordToGuess = [[coder decodeObject] retain]; arGuessedLetters = [[coder decodeObject] retain]; // arGuessedLettersPos = [[coder decodeObject] retain]; [coder decodeValueOfObjCType:@encode(NSInteger) at:&iScore]; [coder decodeValueOfObjCType:@encode(NSInteger) at:&iLives]; [coder decodeValueOfObjCType:@encode(NSInteger) at:&iRocksFallen]; [coder decodeValueOfObjCType:@encode(BOOL) at:&bGameCompleted]; [coder decodeValueOfObjCType:@encode(BOOL) at:&bGameOver]; } return self; } - (void)encodeWithCoder:(NSCoder *)coder { if([coder allowsKeyedCoding]) { [coder encodeObject:strCompleteWord forKey:@"CompleteWord"]; [coder encodeObject:strWordToGuess forKey:@"WordToGuess"]; [coder encodeObject:arGuessedLetters forKey:@"GuessedLetters"]; //[coder encodeObject:arGuessedLettersPos forKey:@"GuessedLettersPos"]; [coder encodeInteger:iScore forKey:@"Score"]; [coder encodeInteger:iLives forKey:@"Lives"]; [coder encodeInteger:iRocksFallen forKey:@"RocksFallen"]; [coder encodeBool:bGameCompleted forKey:@"GameCompleted"]; [coder encodeBool:bGameOver forKey:@"GameOver"]; } else { [coder encodeObject:strCompleteWord]; [coder encodeObject:strWordToGuess]; [coder encodeObject:arGuessedLetters]; //[coder encodeObject:arGuessedLettersPos]; [coder encodeValueOfObjCType:@encode(NSInteger) at:&iScore]; [coder encodeValueOfObjCType:@encode(NSInteger) at:&iLives]; [coder encodeValueOfObjCType:@encode(NSInteger) at:&iRocksFallen]; [coder encodeValueOfObjCType:@encode(BOOL) at:&bGameCompleted]; [coder encodeValueOfObjCType:@encode(BOOL) at:&bGameOver]; } } And I use these methods to archive and unarchive data: [NSKeyedArchiver archiveRootObject:currentGame toFile:strPath]; Game *currentGame = [NSKeyedUnarchiver unarchiveObjectWithFile:strPath]; I have two problems. 1) As you can see, lines with arGuessedLettersPos is commented, it's because every time I try to encode this array, error comes up(this archiver cannot encode structs), and this array is used for storing CGRect structs. I've seen solution on the internet. The thing is, that every CGRect in the array is converted to an NSString (using NSStringFromCGRect()) and then saved. Is it a good approach? 2)This is bigger problem for me. 
    Even if I comment out this line and then run the code successfully, save (archive) the data and then try to load (unarchive) it, no data is loaded. There are no errors, but the currentGame object does not contain the data that should have been loaded. Could you please give me some advice? This is the first time I'm using archivers and unarchivers. Thanks a lot for every reply.

    Read the article

  • Using SDL to replace colors using SDL Color Keys

    - by Shawn B
    Hey, I am working an a simple Roguelike game, and using SDL as the display. The Graphics for the game is an image of Codepage 437, with the background being black, and the font white. Instead of using many seperate image files that are already colored, I want to use one image file, and replace the colors when it is being loaded into memory. The code to split the codepage into a sprite sheet works properly, but when attempting to print in color, everything comes out in white. I had it working the past, but somehow I broke the code when changing it from change the color at print, to change the color on load. Here is the code to load the image: SDL_Surface *Screen,*Font[2]; SDL_Rect Character[256]; Uint8 ScreenY,ScreenX; Uint16 PrintX,PrintY,ScreenSizeY,ScreenSizeX; Uint32 Color[2]; void InitDisplay() { if(SDL_Init(SDL_INIT_VIDEO) == -1) { printf("SDL Init failed\n"); return; } ScreenSizeY = 600; ScreenSizeX = 800; Screen = SDL_SetVideoMode(ScreenSizeX,ScreenSizeY,32,SDL_HWSURFACE | SDL_RESIZABLE); SDL_WM_SetCaption("Alpha",NULL); SDL_Surface *Load; Load = IMG_Load("resource/font.png"); Font[0] = SDL_DisplayFormat(Load); SDL_FreeSurface(Load); Color[0] = SDL_MapRGB(Font[0]->format,255,255,255); Color[1] = SDL_MapRGB(Font[0]->format,255,0,0); Uint8 i,j,k = 0; PrintX = 0; PrintY = 0; for(i = 0; i < 16; i++) { for(j = 0; j < 16; j++) { Character[k].x = PrintX; Character[k].y = PrintY; Character[k].w = 8; Character[k].h = 12; k++; PrintX += 8; } PrintX = 0; PrintY += 12; } PrintX = 0; PrintY = 0; for(i = 1; i < 2; i++) { Font[i] = SDL_DisplayFormat(Font[0]); SDL_SetColorKey(Font[i],SDL_SRCCOLORKEY,Color[i]); SDL_FillRect(Font[i],&Font[i]->clip_rect,Color[i]); SDL_BlitSurface(Font[0],NULL,Font[i],NULL); SDL_SetColorKey(Font[0],0,Color[0]); } } The problem is with the last for loop above. I can't figure out why it isn't working. Any help is greatly appreciated!

    Read the article

  • Core Data Model Design Question - Changing "Live" Objects also Changes Saved Objects

    - by mwt
    I'm working on my first Core Data project (on iPhone) and am really liking it. Core Data is cool stuff. I am, however, running into a design difficulty that I'm not sure how to solve, although I imagine it's a fairly common situation. It concerns the data model. For the sake of clarity, I'll use an imaginary football game app as an example to illustrate my question. Say that there are NSMO's called Downs and Plays. Plays function like templates to be used by Downs. The user creates Plays (for example, Bootleg, Button Hook, Slant Route, Sweep, etc.) and fills in the various properties. Plays have a to-many relationship with Downs. For each Down, the user decides which Play to use. When the Down is executed, it uses the Play as its template. After each down is run, it is stored in history. The program remembers all the Downs ever played. So far, so good. This is all working fine. The question I have concerns what happens when the user wants to change the details of a Play. Let's say it originally involved a pass to the left, but the user now wants it to be a pass to the right. Making that change, however, not only affects all the future executions of that Play, but also changes the details of the Plays stored in history. The record of Downs gets "polluted," in effect, because the Play template has been changed. I have been rolling around several possible fixes to this situation, but I imagine the geniuses of SO know much more about how to handle this than I do. Still, the potential fixes I've come up with are: 1) "Versioning" of Plays. Each change to a Play template actually creates a new, separate Play object with the same name (as far as the user can tell). Underneath the hood, however, it is actually a different Play. This would work, AFAICT, but seems like it could potentially lead to a wild proliferation of Play objects, esp. if the user keeps switching back and forth between several versions of the same Play (creating object after object each time the user switches). Yes, the app could check for pre-existing, identical Plays, but... it just seems like a mess. 2) Have Downs, upon saving, record the details of the Play they used, but not as a Play object. This just seems ridiculous, given that the Play object is there to hold those just those details. 3) Recognize that Play objects are actually fulfilling 2 functions: one to be a template for a Down, and the other to record what template was used. These 2 functions have a different relationship with a Down. The first (template) has a to-many relationship. But the second (record) has a one-to-one relationship. This would mean creating a second object, something like "Play-Template" which would retain the to-many relationship with Downs. Play objects would get reconfigured to have a one-to-one relationship with Downs. A Down would use a Play-Template object for execution, but use the new kind of Play object to store what template was used. It is this change from a to-many relationship to a one-to-one relationship that represents the crux of the problem. Even writing this question out has helped me get clearer. I think something like solution 3 is the answer. However if anyone has a better idea or even just a confirmation that I'm on the right track, that would be helpful. (Remember, I'm not really making a football game, it's just faster/easier to use a metaphor everyone understands.) Thanks.

    Read the article

  • Placing Variables into an external Sheet

    - by Leslie Peer
    I'm trying to build an online D&D program which stores the character info in tables. My problem is that the game works just fine while you're playing, but as soon as you exit the game all variables are lost, which means you have to restart from scratch the next time you log on... So this is a two-fold question: one, what is the best type of external sheet to save the data on, and two, how do I access that sheet for saving and loading? Below are the variables: <SCRIPT> Name1="Tabor Bloomfield"; Name2="Sam Wrightfield"; Name3="Gavin Hartfild"; Name4="Gail Quickfoot"; Name5="Robert Gragorian"; Name6="Peter Shain"; Class1="MagicUser"; Class2="Fighter"; Class3="Fighter"; Class4="Thief"; Class5="Cleric"; Class6="Fighter"; Level1=23; Level2=1; Level3=1; Level4=2; Level5=2; Level6=1; Hpts1=145; Hpts2=14; Hpts3=13; Hpts4=8; Hpts5=12; Hpts6=15; Armor1="Robe of Protection +5"; Armor2="Splinted Armor"; Armor3="Chain Armor"; Armor4="Leather Armor"; Armor5="Chain Armor"; Armor6="Splinted Armor"; Ac1a=5; Ac2a=3; Ac3a=3; Ac4a=4; Ac5a=2; Ac6a=3; Armor1b="Ring of Protection +5"; Armor2b="Small Shield"; Armor3b="Small Shield"; Armor4b="Wooden Shield"; Armor5b="Large Shield"; Armor6b="Small Shield"; Ac1b=5; Ac2b=1; Ac3b=1; Ac4b=1; Ac5b=1; Ac6b=1; Str1=21; Str2=16; Str3=14; Str4=13; Str5=14; Str6=13; Int1=19; Int2=11; Int3=12; Int4=13; Int5=14; Int6=13; Wis1=18; Wis2=12; Wis3=14; Wis4=13; Wis5=14; Wis6=12; Dex1=19; Dex2=14; Dex3=13; Dex4=15; Dex5=14; Dex6=12; Con1=19; Con2=15; Con3=16; Con4=13; Con5=12; Con6=10; Chr1=21; Chr2=14; Chr3=13; Chr4=12; Chr5=14; Chr6=13; </SCRIPT> File name ="gamestats" Path="trellian Webpage/droves E and F/gamestats. I have tried an HTML page, JavaScript, and creating a separate table page and putting the variables into cells... but I'm at a loss on how to arrive at a solution.

    Read the article

  • Need help looping through a text file in Objective-C and looping through a multidimensional array of data

    - by Fulvio
    I have a question regarding an iPhone game I'm developing. At the moment, below is the code I'm using to currently I loop through my multidimensional array and position bricks accordingly on my scene. Instead of having multiple two dimensional arrays within my code as per the following (gameLevel1). Ideally, I'd like to read from a text file within my project and loop through the values in that instead. Please take into account that I'd like to have more than one level within my game (possibly 20) so my text file would have to have some sort of separator line item to determine what level I want to render. I was then thinking of having some sort of method that I call and that method would take the level number I'm interested in rendering. e.g. Method to call level based on separator? -(void)renderLevel:(NSString) levelNumber; e.g. Text file example? #LEVEL_ONE# 0,0,0,0,0,0,0,0,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,0,0,0,0,0,0,0,0 #LEVEL_TWO# 1,0,0,0,0,0,0,0,1 1,1,1,1,1,1,1,1,1 1,1,1,1,1,1,1,1,1 1,1,1,1,1,1,1,1,1 1,1,1,1,1,1,1,1,1 0,1,1,1,1,1,1,1,0 1,1,1,1,1,1,1,1,1 0,1,1,1,1,1,1,1,0 1,1,1,1,1,1,1,1,1 0,1,1,1,1,1,1,1,0 1,1,1,1,1,1,1,1,1 0,1,1,1,1,1,1,1,0 1,1,1,1,1,1,1,1,1 1,1,1,1,1,1,1,1,1 1,1,1,1,1,1,1,1,1 1,1,1,1,1,1,1,1,1 1,0,0,0,0,0,0,0,1 Code that I'm currently using: int gameLevel[17][9] = { { 0,0,0,0,0,0,0,0,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,0,0,0,0,0,0,0,0 } }; for (int row=0; row < 17; row++) { for (int col=0; col < 9; col++) { thisBrickValue = gameLevel[row][col]; xOffset = 35 * floor(col); yOffset = 22 * floor(row); switch (thisBrickValue) { case 0: brick = [[CCSprite spriteWithFile:@"block0.png"] autorelease]; break; case 1: brick = [[CCSprite spriteWithFile:@"block1.png"] autorelease]; break; } brick.position = ccp(xOffset, yOffset); [self addChild:brick]; } }

    Read the article

  • Fatal error on a non-object

    - by Chris Leah
    Hey, so I have created a function to check the DB for unique entries, but when I call the function it doesn't seem to work and gives me a fatal error any ideas ? Thanks :) //Check for unique entries function checkUnique($table, $field, $compared) { $query = $mysqli->query('SELECT '.$mysqli->real_escape_string($field).' FROM '.$mysqli->real_escape_string($table).' WHERE "'.$mysqli->real_escape_string($field).'" = "'.$mysqli->real_escape_string($compared).'"'); if(!$query){ return TRUE; } else { return FALSE; } } The page calling it..... //Start session session_start(); //Check if the session is already set, if so re-direct to the game if(isset($_SESSION['id'], $_SESSION['logged_in'])){ Header('Location: ../main/index.php'); }; //Require database connection require_once('../global/includes/db.php'); require_once('../global/functions/functions.php'); //Check if the form has been submitted if (isset($_POST['signup'])){ //Validate input if (!empty($_POST['username']) && !empty($_POST['password']) && $_POST['password']==$_POST['password_confirm'] && !empty($_POST['email']) && validateEmail($_POST['email']) == TRUE && checkUnique('users', 'email', $_POST['email']) == TRUE && checkUnique('users', 'username', $_POST['username']) == TRUE) { //Insert user to the database $insert_user = $mysqli->query('INSERT INTO (`username, `password`, `email`, `verification_key`) VALUES ("'.$mysqli->real_escape_string($_POST['username']).'", "'.$mysqli-real_escape_string(md5($_POST['password'])).'", "'.$mysqli->real_escape_string($_POST['email']).'", "'.randomString('alnum', 32). '"') or die($mysqli->error()); //Get user information $getUser = $mysqli->query('SELECT id, username, email, verification_key FROM users WHERE username = "'.$mysqli->real_escape_string($_POST['username']).'"' or die($mysqli->error())); //Check if the $getUser returns true if ($getUser->num_rows == 1) { //Fetch associated fields to this user $row = $getUser->fetch_assoc(); //Set mail() variables $headers = 'From: [email protected]'."\r\n". 'Reply-To: [email protected]'."\r\n". 'X-Mailer: PHP/'.phpversion(); $subject = 'Activate your account (Music Battles.net)'; //Set verification email message $message = 'Dear '.$row['username'].', I would like to welcome you to Music Battles. Although in order to enjoy the gmae you must first activate your account. \n\n Click the following link: http://www.musicbattles.net/home/confirm.php?id='.$row['id'].'key='.$row['verification_key'].'\n Thanks for signing up, enjoy the game! \n Music Battles Team'; //Attempts to send the email if (mail($row['email'], $subject, $message, $headers)) { $msg = '<p class="success">Accound has been created, please go activate it from your email.</p>'; } else { $error = '<p class="error">The account was created but your email was not sent.</p>'; } } else { $error = '<p class="error">Your account was not created.</p>'; } } else { $error = '<p class="error">One or more fields contain non or invalid data.</p>'; } } Erorr.... Fatal error: Call to a member function query() on a non-object in /home/mbattles/public_html/global/functions/functions.php on line 5

    Read the article

  • How to get dropdown menu to open/close on click rather than hover?

    - by TankDriver
    Hello! I am very new to java and ajax/jquery and have been working on trying to get a script to open and close the drop menu on click rather that hover. The menu in question is found on http://www.gamefriction.com/Coded/ and is the dark menu on the right side under the header. I would like it to open and close like the other menu that is further below it (it is light gray and is in the "Select Division" module). The gray menu is part of a menu and the language menu is not. I have a jquery import as well which can be found in the view source of the above link. My Java: <script type="text/javascript"> /* Language Selector */ $(function() { $("#lang-selector li").hover(function() { $('ul:first',this).css('display', 'block'); }, function() { $('ul:first',this).css('display', 'none'); }); }); $(document).ready(function(){ /* Navigation */ $('.subnav-game').hide(); $('.subnav-game:eq(0)').show(); $('.preorder-type').hide(); $('.preorder-type:eq(3)').show(); }); </script> My CSS: #lang-selector { font-size: 11px; height: 21px; margin: 7px auto 17px auto; width: 186px; } #lang-selector span { color: #999; float: left; margin: 4px 0 0 87px; padding-right: 4px; text-align: right; } #lang-selector ul { float: left; list-style: none; margin: 0; padding: 0; } #lang-selector ul li a { padding: 3px 10px 1px 10px; } #lang-selector ul, #lang-selector a { width: 186px; } #lang-selector ul ul { display: none; position: absolute; } #lang-selector ul ul li { border-top: 1px solid #666; float: left; position: relative; } #lang-selector a { background: url("http://www.gamefriction.com/Coded/images/language_bg.png") no-repeat; color: #666; display: block; font-size: 10px; height: 17px; padding: 4px 10px 0 10px; text-align: left; text-decoration: none; width: 166px; } #lang-selector ul ul li a { background: #333; color: #999; } #lang-selector ul ul li a:hover { background: #c4262c; color: #fff; } My HTML: <div id="lang-selector"> <ul> <li> <a href="#">Choose a Language</a> <ul> <li><a href="?iw_lang=en">English</a></li> <li><a href="?iw_lang=de">Deutsch</a></li> <li><a href="?iw_lang=es">Espa&ntilde;ol</a></li> <li><a href="?iw_lang=fr">Fran&ccedil;ais</a></li> <li><a href="?iw_lang=it">Italiano</a></li> </ul> </li> </ul> </div> Thanks!

    Read the article

  • Need help with a possible memory management problem (leak) regarding NSMutableArray

    - by user309030
    Hi, I'm a beginner level programmer trying to make a game app for the iphone and I've encountered a possible issue with the memory management (exc_bad_access) of my program so far. I've searched and read dozens of articles regarding memory management (including apple's docs) but I still can't figure out what exactly is wrong with my codes. So I would really appreciate it if someone can help clear up the mess I made for myself. - (void)viewDidLoad { [super viewDidLoad]; self.gameState = gameStatePaused; fencePoleArray = [[NSMutableArray alloc] init]; fencePoleImageArray = [[NSMutableArray alloc] init]; fenceImageArray = [[NSMutableArray alloc] init]; mainField = CGRectMake(10, 35, 310, 340); .......... [NSTimer scheduledTimerWithTimeInterval:0.05 target:self selector:@selector(gameLoop) userInfo:nil repeats:YES]; } So basically, the player touches the screen to set up the fences/poles -(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { if(.......) { ....... } else { UITouch *touch = [[event allTouches] anyObject]; currentTapLoc = [touch locationInView:touch.view]; NSLog(@"%i, %i", (int)currentTapLoc.x, (int)currentTapLoc.y); if(CGRectContainsPoint(mainField, currentTapLoc)) { if([self checkFence]) { onFencePole++; //this 3 set functions adds their respective objects into the 3 NSMutableArrays using addObject: [self setFencePole]; [self setFenceImage]; [self setFencePoleImage]; ....... } } else { ....... } } } } The setFence function (setFenceImage and setFencePoleImage is similar to this) -(void)setFencePole { Fence *fencePole; if (!elecFence) { fencePole = [[Fence alloc] initFence:onFencePole fenceType:1 fencePos:currentTapLoc]; } else { fencePole = [[Fence alloc] initFence:onFencePole fenceType:2 fencePos:currentTapLoc]; } [fencePoleArray addObject:fencePole]; [fencePole release]; and whenever I press a button in the game, endOpenState is called to clear away all the extra images(fence/poles) on the screen and also to remove all existing objects in the 3 NSMutableArray -(void)endOpenState { ........ int xMax = [fencePoleArray count]; int yMax = [fenceImageArray count]; for (int x = 0; x < xMax; x++) { [[fencePoleImageArray objectAtIndex:x] removeFromSuperview]; } for (int y = 0; y < yMax; y++) { [[fenceImageArray objectAtIndex:y] removeFromSuperview]; } [fencePoleArray removeAllObjects]; [fencePoleImageArray removeAllObjects]; [fenceImageArray removeAllObjects]; ........ } The crash happens here at the checkFence function. -(BOOL)checkFence { if (onFencePole == 0) { return YES; } else if (onFencePole >= 1 && onFencePole < currentMaxFencePole - 1) { CGPoint tempPoint1 = currentTapLoc; CGPoint tempPoint2 = [[fencePoleArray objectAtIndex:onFencePole-1] returnPos]; // the crash happens at this line if ([self checkDistance:tempPoint1 point2:tempPoint2]) { return YES; } else { return NO; } } else if (onFencePole == currentMaxFencePole - 1) { ...... } else { return NO; } } What I'm thinking of is that fencePoleArray got messed up when I used [fencePoleArray removeAllObjects] because it doesn't crash when I comment it out. It would really be great if someone can explain to me what went wrong. And thanks in advance.

    Read the article

  • Dialog created after first becomes unresponsive unless created first?

    - by Justin Sterling
    After creating the initial dialog box that works perfectly fine, I create another dialog box when the Join Game button is pressed. The dialog box is created and show successfully, however I am unable to type in the edit box or even press or exit the dialog. Does anyone understand how to fix this or why it happens? I made sure the dialog box itself was not the problem by creating and displaying it from the main loop in the application. It worked fine when I created it that way. So why does it error when being created from another dialog? My code is below. This code is for the DLGPROC function that each dialog uses. #define WIN32_LEAN_AND_MEAN #include "Windows.h" #include ".\Controllers\Menu\MenuSystem.h" #include ".\Controllers\Game Controller\GameManager.h" #include ".\Controllers\Network\Network.h" #include "resource.h" #include "main.h" using namespace std; extern GameManager g; extern bool men; NET_Socket server; extern HWND d; HWND joinDlg; char ip[64]; void JoinMenu(){ joinDlg = CreateDialog(g_hInstance, MAKEINTRESOURCE(IDD_GETADDRESSINFO), NULL, (DLGPROC)GameJoinDialogPrompt); SetFocus(joinDlg); // ShowWindow(joinDlg, SW_SHOW); ShowWindow(d, SW_HIDE); } LRESULT CALLBACK GameJoinDialogPrompt(HWND Dialogwindow, UINT Message, WPARAM wParam, LPARAM lParam){ switch(Message){ case WM_COMMAND:{ switch(LOWORD(wParam)){ case IDCONNECT:{ GetDlgItemText(joinDlg, IDC_IP, ip, 63); if(server.ConnectToServer(ip, 7890, NET_UDP) == NET_INVALID_SOCKET){ LogString("Failed to connect to server! IP: %s", ip); MessageBox(NULL, "Failed to connect!", "Error", MB_OK); ShowWindow(joinDlg, SW_SHOW); break; }   } LogString("Connected!"); break; case IDCANCEL: ShowWindow(d, SW_SHOW); ShowWindow(joinDlg, SW_HIDE); break; } break; } case WM_CLOSE: PostQuitMessage(0); break; } return 0; } LRESULT CALLBACK GameMainDialogPrompt(HWND Dialogwindow, UINT Message, WPARAM wParam, LPARAM lParam){ switch(Message){ case WM_PAINT:{ PAINTSTRUCT ps; RECT rect; HDC hdc = GetDC(Dialogwindow);    hdc = BeginPaint(Dialogwindow, &ps); GetClientRect (Dialogwindow, &rect); FillRect(hdc, &rect, CreateSolidBrush(RGB(0, 0, 0)));    EndPaint(Dialogwindow, &ps);    break;  } case WM_COMMAND:{ switch(LOWORD(wParam)){ case IDC_HOST: if(!NET_Initialize()){ break; } if(server.CreateServer(7890, NET_UDP) != 0){ MessageBox(NULL, "Failed to create server.", "Error!", MB_OK); PostQuitMessage(0); return -1; } ShowWindow(d, SW_HIDE); break; case IDC_JOIN:{ JoinMenu(); } break; case IDC_EXIT: PostQuitMessage(0); break; default: break; } break; } return 0; } } I call the first dialog using the below code void EnterMenu(){ // joinDlg = CreateDialog(g_hInstance, MAKEINTRESOURCE(IDD_GETADDRESSINFO), g_hWnd, (DLGPROC)GameJoinDialogPrompt);// d = CreateDialog(g_hInstance, MAKEINTRESOURCE(IDD_SELECTMENU), g_hWnd, (DLGPROC)GameMainDialogPrompt); } The dialog boxes are not DISABLED by default, and they are visible by default. Everything is set to be active on creation and no code deactivates the items on the dialog or the dialog itself.

    Read the article

  • Android - determine specific locations (X,Y coordinates) on a Bitmap on different resolutions?

    - by Mike
    My app that I am trying to create is a board game. It will have one bitmap as the board and pieces that will move to different locations on the board. The general design of the board is square, has a certain number of columns and rows and has a border for looks. Think of a chess board or scrabble board. Before using bitmaps, I first created the board and boarder by manually drawing it - drawLine & drawRect. I decided how many pixels in width the border would be based on the screen width and height passed in on "onSizeChanged". The remaining screen I divided by the number of columns or rows I needed. For examples sake, let's say the screen dimensions are 102 x 102. I may have chosen to set the border at 1 and set the number of rows & columns at 10. That would leave 100 x 100 left (reduced by two to account for the top & bottom border, as well as left/right border). Then with columns and rows set to 10, that would leave 10 pixels left for both height and width. No matter what screen size is passed in, I store exactly how many pixels in width the boarder is and the height & width of each square on the board. I know exactly what location on the screen to move the pieces to based on a simple formula and I know exactly what cell a user touched to make a move. Now how does that work with bitmaps? Meaning, if I create 3 different background bitmaps, once for each density, won't they still be resized to fit each devices screen resolution, because from what I read there were not just 3 screen resolutions, but 5 and now with tablets - even more. If I or Android scales the bitmaps up or down to fit the current devices screen size, how will I know how wide the border is scaled to and the dimensions of each square in order to figure out where to move a piece or calculate where a player touched. So far the examples I have looked at just show how to scale the overall bitmap and get the overall bitmaps width and height. But, I don't see how to tell how many pixels wide or tall each part of the board would be after it was scaled. When I draw each line and rectangle myself based in the screen dimensions from onSizeChanged, I always know these dimensions. If anyone has any sample code or a URL to point me to that I can a read about this with bitmaps, I would appreciate it. Thanks, --Mike BTW, here is some sample code (very simplified) on how I know the dimensions of my game board (border and squares) no matter the screen size. Now I just need to know how to do this with the board as a bitmap that gets scaled to any screen size. @Override protected void onSizeChanged(int w, int h, int oldw, int oldh) { intScreenWidth = w; intScreenHeight = h; // Set Border width - my real code changes this value based on the dimensions of w // and h that are passed in. In other words bigger screens get a slightly larger // border. intOuterBorder = 1; /** Reserve part of the board for the boardgame and part for player controls & score My real code forces this to be square, but this is good enough to get the point across. **/ floatBoardHeight = intScreenHeight / 4 * 3; // My real code actually causes floatCellWidth and floatCellHeight to // be equal (Square). floatCellWidth = (intScreenWidth - intOuterBorder * 2 ) / intNumColumns; floatCellHeight = (floatBoardHeight - intOuterBorder * 2) / intNumRows; super.onSizeChanged(w, h, oldw, oldh); }
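    A minimal sketch of the scale-factor idea the question is circling around, assuming the board is drawn from a single source Bitmap stretched to fill the view: measure the border and grid once against the unscaled bitmap, then multiply those measurements by the ratio between the view size and Bitmap.getWidth()/getHeight(). All class, field and constant names here are illustrative, not taken from the question.

    ```java
    import android.graphics.Bitmap;
    import android.graphics.RectF;

    // Illustrative helper: converts board measurements taken against the source
    // bitmap into on-screen measurements after the bitmap has been scaled.
    public class BoardMetrics {
        // Values measured once against the unscaled source bitmap (assumed).
        private static final float SOURCE_BORDER = 16f; // border width in source pixels
        private static final int COLUMNS = 10, ROWS = 10;

        private float scaleX, scaleY, borderX, borderY, cellWidth, cellHeight;

        // Call this from your View's onSizeChanged (or once the drawn size is known).
        public void update(Bitmap board, int viewWidth, int viewHeight) {
            // How much the bitmap is stretched to fill the view in each direction.
            scaleX = (float) viewWidth  / board.getWidth();
            scaleY = (float) viewHeight / board.getHeight();

            // Every source-space measurement scales by the same factors.
            borderX = SOURCE_BORDER * scaleX;
            borderY = SOURCE_BORDER * scaleY;
            cellWidth  = (viewWidth  - 2 * borderX) / COLUMNS;
            cellHeight = (viewHeight - 2 * borderY) / ROWS;
        }

        // Screen rectangle of a given cell, e.g. for positioning a piece.
        public RectF cellRect(int row, int col) {
            float left = borderX + col * cellWidth;
            float top  = borderY + row * cellHeight;
            return new RectF(left, top, left + cellWidth, top + cellHeight);
        }

        // Convert a touch x-coordinate back to a column index (-1 if on the border).
        public int columnAt(float touchX) {
            float x = touchX - borderX;
            return (x < 0 || x >= COLUMNS * cellWidth) ? -1 : (int) (x / cellWidth);
        }
    }
    ```

    The same two factors convert in both directions, so piece placement and touch hit-testing stay consistent no matter which density bucket the board bitmap was loaded from or how the system scaled it.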

    Read the article

  • Are all <canvas> tag dimensions in pixels?

    - by Simon Omega
    Are all <canvas> tag dimensions in pixels? I am asking because I understood them to be, but either my math is broken or I am just not grasping something here. I have been doing Python mostly and just jumped back into JavaScript, so if I am just doing something stupid let me know. For a game I am writing, I wanted to have a blocky gradient. I have the following.
    HTML: <canvas id="heir"></canvas>
    CSS: @media screen { body { font-size: 12pt } /* Game Rendering Space */ canvas { width: 640px; height: 480px; border-style: solid; border-width: 1px; } }
    JavaScript (shortened): function testDraw ( thecontext ) { var myblue = 255; thecontext.save(); // Save All Settings (Before this Function was called) for (var i = 0; i < 480; i = i + 10 ) { if (myblue.toString(16).length == 1) { thecontext.fillStyle = "#00000" + myblue.toString(16); } else { thecontext.fillStyle = "#0000" + myblue.toString(16); } thecontext.fillRect(0, i, 640, 10); myblue = myblue - 2; }; thecontext.restore(); // Restore Settings to Save Point (Removing Styles, etc...) } function main () { var targetcontext = document.getElementById("main").getContext("2d"); testDraw(targetcontext); }
    To me this should produce a series of 640w by 10h pixel bars. In Google Chrome and Firefox I get 15 bars. To me that means ( 480 / 15 ) is 32 pixel high bars. So I change the code to: function testDraw ( thecontext ) { var myblue = 255; thecontext.save(); // Save All Settings (Before this Function was called) for (var i = 0; i < 16; i++ ) { if (myblue.toString(16).length == 1) { thecontext.fillStyle = "#00000" + myblue.toString(16); } else { thecontext.fillStyle = "#0000" + myblue.toString(16); } thecontext.fillRect(0, (i * 10), 640, 10); myblue = myblue - 10; }; thecontext.restore(); // Restore Settings to Save Point (Removing Styles, etc...) } And I get a true 32 pixel height result for comparison. Other than the fact that the first code snippet has shades of blue rendering in non-visible portions of the <canvas>, the bars measure 32 pixels. Now back to the original JavaScript code... If I inspect the <canvas> tag in Chrome it reports 640 x 480. If I inspect it in Firefox it reports 640 x 480. BUT! Firefox exports the original code to PNG at 300 x 150 (which is 15 rows of 10). Is it somehow being resized to 640 x 480 by the CSS instead of being set to a true 640 x 480? Why, how, what? O_o I'm confused...

    Read the article

  • iOS: Interpreted code - where do they draw the line?

    - by d7samurai
    Apple's iOS developer guidelines state: 3.3.2 — An Application may not itself install or launch other executable code by any means, including without limitation through the use of a plug-in architecture, calling other frameworks, other APIs or otherwise. No interpreted code may be downloaded or used in an Application except for code that is interpreted and run by Apple’s Documented APIs and built-in interpreter(s). Assuming that downloading data - like XML and images (or a game level description), for example - at run-time is allowed (as is my impression), I am wondering where they draw the line between "data" and "code". Picture the scenario of an app that delivers interactive "presentations" to users (like a survey, for instance). Presentations are added continuously to the server and different presentations are made available to different users, so they cannot be part of the initial app download (this is the whole point). They are described in XML format, but being interactive, they might contain conditional branching of this sort (shown in pseudo form to exemplify): <options id="Gender"> <option value="1">Male</option> <option value="2">Female</option> </options> <branches id="Gender"> <branch value="1"> <image src="Man" /> </branch> <branch value="2"> <image src="Woman" /> </branch> </branches> When the presentation is "played" within the app, the above would be presented in two steps. First a selection screen where the user can click on either of the two choices presented ("Male" or "Female"). Next, an image will be [downloaded dynamically] and displayed based on the choice made in the previous step. Now, it's easy to imagine additional tags describing further logic still. For example, a containing tag could be added: <loop count="3"> <options... /> <branches... /> </loop> The result here being that the selection screen / image screen pair would be sequentially presented three times over, of course. Or imagine some description of a level in a game. It's easy to view that as passive "data", but if it includes, say, several doorways that the user can go through and with various triggers, traps and points attached to them etc - isn't that the same as using a script - or, indeed, interpreted code - to describe options and their conditional responses? Assuming that the interpretation engine for this XML data is already present in the app and that such presentations can only be consumed (not created or edited) in the app, how would this fare against Apple's iOS guidelines? Doesn't XML basically constitute a scripting language (couldn't any interpreted programming language simply be described by XML) in this sense? Would it be OK if the proprietary scripting language (ref the XML used above) was strictly sandboxed (how can they tell?) and not given access to the operating system in any way (but able to download content dynamically - and upload results to the authoring server)? Where does the line go?

    Read the article

  • How to maintain GridPane's fixed size after adding elements dynamically

    - by Eviatar G.
    I need to create a board game that can change dynamically; its size can be 5x5, 6x6, 7x7 or 8x8. I am using JavaFX with NetBeans and Scene Builder for the GUI. When the user chooses a board size greater than 5x5, the cells end up shrinking (the original question includes a screenshot of this, plus one of the Scene Builder template before the cells are added dynamically). To every cell in the GridPane I am adding a StackPane + a Label with the cell number: @FXML GridPane boardGame; public void CreateBoard() { int boardSize = m_Engine.GetBoard().GetBoardSize(); int num = boardSize * boardSize; int maxColumns = m_Engine.GetNumOfCols(); int maxRows = m_Engine.GetNumOfRows(); for(int row = 0; row < maxRows ; row++) { for(int col = maxColumns - 1; col >= 0 ; col--) { StackPane stackPane = new StackPane(); stackPane.setPrefSize(150.0, 200.0); stackPane.getChildren().add(new Label(String.valueOf(num))); boardGame.add(stackPane, col, row); num--; } } boardGame.setGridLinesVisible(true); boardGame.autosize(); } The problem is that the StackPanes' sizes in the GridPane are getting smaller. I tried setting their minimum and maximum sizes to be equal, but it didn't help; they still get smaller. I searched the web but didn't really find the same problem as mine. The only similar problem I found was here: Dynamically add elements to a fixed-size GridPane in JavaFX. But the suggestion there is to use a TilePane, and I need to use a GridPane because this is a board game and it is much easier with a GridPane when I need to do tasks such as getting the cell at row = 1 and column = 2, for example. EDIT: I removed the GridPane from the FXML and created it manually in the Controller, but now it prints a blank board: @FXML GridPane boardGame; public void CreateBoard() { int boardSize = m_Engine.GetBoard().GetBoardSize(); int num = boardSize * boardSize; int maxColumns = m_Engine.GetNumOfCols(); int maxRows = m_Engine.GetNumOfRows(); boardGame = new GridPane(); boardGame.setAlignment(Pos.CENTER); Collection<StackPane> stackPanes = new ArrayList<StackPane>(); for(int row = 0; row < maxRows ; row++) { for(int col = maxColumns - 1; col >= 0 ; col--) { StackPane stackPane = new StackPane(); stackPane.setPrefSize(150.0, 200.0); stackPane.getChildren().add(new Label(String.valueOf(num))); boardGame.add(stackPane, col, row); stackPanes.add(stackPane); num--; } } this.buildGridPane(boardSize); boardGame.setGridLinesVisible(true); boardGame.autosize(); boardGamePane.getChildren().addAll(stackPanes); } public void buildGridPane(int i_NumOfRowsAndColumns) { RowConstraints rowConstraint; ColumnConstraints columnConstraint; for(int index = 0 ; index < i_NumOfRowsAndColumns; index++) { rowConstraint = new RowConstraints(3, Control.USE_COMPUTED_SIZE, Double.POSITIVE_INFINITY, Priority.ALWAYS, VPos.CENTER, true); boardGame.getRowConstraints().add(rowConstraint); columnConstraint = new ColumnConstraints(3, Control.USE_COMPUTED_SIZE, Double.POSITIVE_INFINITY, Priority.ALWAYS, HPos.CENTER, true); boardGame.getColumnConstraints().add(columnConstraint); } }
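    For reference, a common way to get uniform cells in a resizable GridPane (sketched here as an illustration under assumed FX 8 APIs, not taken from the question or its answers) is to give every row and column a percentage constraint, so the pane divides its space evenly instead of honouring each StackPane's preferred size:

    ```java
    import javafx.geometry.Pos;
    import javafx.scene.control.Label;
    import javafx.scene.layout.*;

    // Illustrative sketch: an n-by-n board whose cells stay equal-sized
    // because each row/column owns an equal percentage of the grid.
    public class BoardBuilder {
        public static GridPane buildBoard(int size) {
            GridPane board = new GridPane();

            double percent = 100.0 / size;
            for (int i = 0; i < size; i++) {
                ColumnConstraints col = new ColumnConstraints();
                col.setPercentWidth(percent);   // each column gets an equal share
                col.setHgrow(Priority.ALWAYS);
                board.getColumnConstraints().add(col);

                RowConstraints row = new RowConstraints();
                row.setPercentHeight(percent);  // each row gets an equal share
                row.setVgrow(Priority.ALWAYS);
                board.getRowConstraints().add(row);
            }

            int num = size * size;
            for (int r = 0; r < size; r++) {
                for (int c = size - 1; c >= 0; c--) {
                    StackPane cell = new StackPane(new Label(String.valueOf(num--)));
                    // Let the cell track its grid slot instead of forcing a preferred size.
                    cell.setMinSize(0, 0);
                    cell.setMaxSize(Double.MAX_VALUE, Double.MAX_VALUE);
                    GridPane.setHgrow(cell, Priority.ALWAYS);
                    GridPane.setVgrow(cell, Priority.ALWAYS);
                    board.add(cell, c, r);
                }
            }
            board.setGridLinesVisible(true);
            board.setAlignment(Pos.CENTER);
            return board;
        }
    }
    ```

    With percentage constraints in place the individual cells no longer need setPrefSize at all; they simply fill whatever slot the grid gives them, whether the board is 5x5 or 8x8.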

    Read the article

  • SQLAuthority News – SafePeak’s SQL Server Performance Contest – Winners

    - by pinaldave
    SafePeak, the unique automated SQL performance acceleration and performance tuning software vendor, announced the winners of their SQL Performance Contest 2011. The contest quite unique: the writer of the best / most interesting and most community liked “performance story” would win an expensive gadget. The judges were the community DBAs that could participating and Like’ing stories and could also win expensive prizes. Robert Pearl SQL MVP, was the contest supervisor. I liked most of the stories and decided then to contact SafePeak and suggested to participate in the give-away and they have gladly accepted the same. The winner of best story is: Jason Brimhall (USA) with a story about a proc with a fair amount of business logic. Congratulations Jason! The 3 participants won the second prize of $100 gift card on amazon.com are: Michael Corey (USA), Hakim Ali (USA) and Alex Bernal (USA). And 5 participants won a printed copy of a book of mine (Book Reviews of SQL Wait Stats Joes 2 Pros: SQL Performance Tuning Techniques Using Wait Statistics, Types & Queues) are: Patrick Kansa (USA), Wagner Bianchi (USA), Riyas.V.K (India), Farzana Patwa (USA) and Wagner Crivelini (Brazil). The winners are welcome to send safepeak their mail address to receive the prizes (to “info ‘at’ safepeak.com”). Also SafePeak team asked me to welcome you all to continue sending stories, simply because they (and we all) like to read interesting stuff) as well as to send them ideas for future contests. You can do it from here: www.safepeak.com/SQL-Performance-Contest-2011/Submit-Story Congratulations to everybody! I found this very funny video about SafePeak: It looks like someone (maybe the vendor) played with video’s once and created this non-commercial like video: SafePeak dynamic caching is an immediate plug-n-play performance acceleration and scalability solution for cloud, hosted and business SQL server applications. By caching in memory result sets of queries and stored procedures, while keeping all those cache correct and up to date using unique patent pending technology, SafePeak can fix SQL performance problems and bottlenecks of most applications – most importantly: without actual code changes. By the way, I checked their website prior this contest announcement and noticed that they are running these days a special end year promotion giving between 30% to 45% discounts. Since the installation is quick and full testing can be done within couple of days – those have the need (performance problems) and have budget leftovers: I suggest you hurry. A free fully functional trial is here: www.safepeak.com/download, while those that want to start with a quote should ping here www.safepeak.com/quote. Good luck! Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Rendering ASP.NET Script References into the Html Header

    - by Rick Strahl
    One thing that I’ve come to appreciate in control development in ASP.NET that use JavaScript is the ability to have more control over script and script include placement than ASP.NET provides natively. Specifically in ASP.NET you can use either the ClientScriptManager or ScriptManager to embed scripts and script references into pages via code. This works reasonably well, but the script references that get generated are generated into the HTML body and there’s very little operational control for placement of scripts. If you have multiple controls or several of the same control that need to place the same scripts onto the page it’s not difficult to end up with scripts that render in the wrong order and stop working correctly. This is especially critical if you load script libraries with dependencies either via resources or even if you are rendering referenced to CDN resources. Natively ASP.NET provides a host of methods that help embedding scripts into the page via either Page.ClientScript or the ASP.NET ScriptManager control (both with slightly different syntax): RegisterClientScriptBlock Renders a script block at the top of the HTML body and should be used for embedding callable functions/classes. RegisterStartupScript Renders a script block just prior to the </form> tag and should be used to for embedding code that should execute when the page is first loaded. Not recommended – use jQuery.ready() or equivalent load time routines. RegisterClientScriptInclude Embeds a reference to a script from a url into the page. RegisterClientScriptResource Embeds a reference to a Script from a resource file generating a long resource file string All 4 of these methods render their <script> tags into the HTML body. The script blocks give you a little bit of control by having a ‘top’ and ‘bottom’ of the document location which gives you some flexibility over script placement and precedence. Script includes and resource url unfortunately do not even get that much control – references are simply rendered into the page in the order of declaration. The ASP.NET ScriptManager control facilitates this task a little bit with the abililty to specify scripts in code and the ability to programmatically check what scripts have already been registered, but it doesn’t provide any more control over the script rendering process itself. Further the ScriptManager is a bear to deal with generically because generic code has to always check and see if it is actually present. Some time ago I posted a ClientScriptProxy class that helps with managing the latter process of sending script references either to ClientScript or ScriptManager if it’s available. Since I last posted about this there have been a number of improvements in this API, one of which is the ability to control placement of scripts and script includes in the page which I think is rather important and a missing feature in the ASP.NET native functionality. 
Handling ScriptRenderModes One of the big enhancements that I’ve come to rely on is the ability of the various script rendering functions described above to support rendering in multiple locations: /// <summary> /// Determines how scripts are included into the page /// </summary> public enum ScriptRenderModes { /// <summary> /// Inherits the setting from the control or from the ClientScript.DefaultScriptRenderMode /// </summary> Inherit, /// Renders the script include at the location of the control /// </summary> Inline, /// <summary> /// Renders the script include into the bottom of the header of the page /// </summary> Header, /// <summary> /// Renders the script include into the top of the header of the page /// </summary> HeaderTop, /// <summary> /// Uses ClientScript or ScriptManager to embed the script include to /// provide standard ASP.NET style rendering in the HTML body. /// </summary> Script, /// <summary> /// Renders script at the bottom of the page before the last Page.Controls /// literal control. Note this may result in unexpected behavior /// if /body and /html are not the last thing in the markup page. /// </summary> BottomOfPage } This enum is then applied to the various Register functions to allow more control over where scripts actually show up. Why is this useful? For me I often render scripts out of control resources and these scripts often include things like a JavaScript Library (jquery) and a few plug-ins. The order in which these can be loaded is critical so that jQuery.js always loads before any plug-in for example. Typically I end up with a general script layout like this: Core Libraries- HeaderTop Plug-ins: Header ScriptBlocks: Header or Script depending on other dependencies There’s also an option to render scripts and CSS at the very bottom of the page before the last Page control on the page which can be useful for speeding up page load when lots of scripts are loaded. The API syntax of the ClientScriptProxy methods is closely compatible with ScriptManager’s using static methods and control references to gain access to the page and embedding scripts. 
For example, to render some script into the current page in the header: // Create script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "function helloWorld() { alert('hello'); }", true, ScriptRenderModes.Header); // Same again - shouldn't be rendered because it's the same id ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "function helloWorld() { alert('hello'); }", true, ScriptRenderModes.Header); // Create a second script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function2", "function helloWorld2() { alert('hello2'); }", true, ScriptRenderModes.Header); // This just calls ClientScript and renders into bottom of document ClientScriptProxy.Current.RegisterStartupScript(this,typeof(ControlResources), "call_hello", "helloWorld();helloWorld2();", true); which generates: <html xmlns="http://www.w3.org/1999/xhtml" > <head><title> </title> <script type="text/javascript"> function helloWorld() { alert('hello'); } </script> <script type="text/javascript"> function helloWorld2() { alert('hello2'); } </script> </head> <body> … <script type="text/javascript"> //<![CDATA[ helloWorld();helloWorld2();//]]> </script> </form> </body> </html> Note that the scripts are generated into the header rather than the body except for the last script block which is the call to RegisterStartupScript. In general I wouldn’t recommend using RegisterStartupScript – ever. It’s a much better practice to use a script base load event to handle ‘startup’ code that should fire when the page first loads. So instead of the code above I’d actually recommend doing: ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "call_hello", "$().ready( function() { alert('hello2'); });", true, ScriptRenderModes.Header); assuming you’re using jQuery on the page. For script includes from a Url the following demonstrates how to embed scripts into the header. 
This example injects a jQuery and jQuery.UI script reference from the Google CDN then checks each with a script block to ensure that it has loaded and if not loads it from a server local location: // load jquery from CDN ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources), "http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js", ScriptRenderModes.HeaderTop); // check if jquery loaded - if it didn't we're not online string scriptCheck = @"if (typeof jQuery != 'object') document.write(unescape(""%3Cscript src='{0}' type='text/javascript'%3E%3C/script%3E""));"; string jQueryUrl = ClientScriptProxy.Current.GetWebResourceUrl(this, typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE); ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "jquery_register", string.Format(scriptCheck,jQueryUrl),true, ScriptRenderModes.HeaderTop); // Load jquery-ui from cdn ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources), "http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js", ScriptRenderModes.Header); // check if we need to load from local string jQueryUiUrl = ResolveUrl("~/scripts/jquery-ui-custom.min.js"); ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "jqueryui_register", string.Format(scriptCheck, jQueryUiUrl), true, ScriptRenderModes.Header); // Create script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "$().ready( function() { alert('hello'); });", true, ScriptRenderModes.Header); which in turn generates this HTML: <html xmlns="http://www.w3.org/1999/xhtml" > <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof jQuery != 'object') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/WebResource.axd?d=DIykvYhJ_oXCr-TA_dr35i4AayJoV1mgnQAQGPaZsoPM2LCdvoD3cIsRRitHKlKJfV5K_jQvylK7tsqO3lQIFw2&t=633979863959332352' type='text/javascript'%3E%3C/script%3E")); </script> <title> </title> <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof jQuery != 'object') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/scripts/jquery-ui-custom.min.js' type='text/javascript'%3E%3C/script%3E")); </script> <script type="text/javascript"> $().ready(function() { alert('hello'); }); </script> </head> <body> …</body> </html> As you can see there’s a bit more control in this process as you can inject both script includes and script blocks into the document at the top or bottom of the header, plus if necessary at the usual body locations. This is quite useful especially if you create custom server controls that interoperate with script and have certain dependencies. The above is a good example of a useful switchable routine where you can switch where scripts load from by default – the above pulls from Google CDN but a configuration switch may automatically switch to pull from the local development copies if your doing development for example. How does it work? As mentioned the ClientScriptProxy object mimicks many of the ScriptManager script related methods and so provides close API compatibility with it although it contains many additional overloads that enhance functionality. 
It does however work against ScriptManager if it’s available on the page, or Page.ClientScript if it’s not so it provides a single unified frontend to script access. There are however many overloads of the original SM methods like the above to provide additional functionality. The implementation of script header rendering is pretty straight forward – as long as a server header (ie. it has to have runat=”server” set) is available. Otherwise these routines fall back to using the default document level insertions of ScriptManager/ClientScript. Given that there is a server header it’s relatively easy to generate the script tags and code and append them to the header either at the top or bottom. I suspect Microsoft didn’t provide header rendering functionality precisely because a runat=”server” header is not required by ASP.NET so behavior would be slightly unpredictable. That’s not really a problem for a custom implementation however. Here’s the RegisterClientScriptBlock implementation that takes a ScriptRenderModes parameter to allow header rendering: /// <summary> /// Renders client script block with the option of rendering the script block in /// the Html header /// /// For this to work Header must be defined as runat="server" /// </summary> /// <param name="control">any control that instance typically page</param> /// <param name="type">Type that identifies this rendering</param> /// <param name="key">unique script block id</param> /// <param name="script">The script code to render</param> /// <param name="addScriptTags">Ignored for header rendering used for all other insertions</param> /// <param name="renderMode">Where the block is rendered</param> public void RegisterClientScriptBlock(Control control, Type type, string key, string script, bool addScriptTags, ScriptRenderModes renderMode) { if (renderMode == ScriptRenderModes.Inherit) renderMode = DefaultScriptRenderMode; if (control.Page.Header == null || renderMode != ScriptRenderModes.HeaderTop && renderMode != ScriptRenderModes.Header && renderMode != ScriptRenderModes.BottomOfPage) { RegisterClientScriptBlock(control, type, key, script, addScriptTags); return; } // No dupes - ref script include only once const string identifier = "scriptblock_"; if (HttpContext.Current.Items.Contains(identifier + key)) return; HttpContext.Current.Items.Add(identifier + key, string.Empty); StringBuilder sb = new StringBuilder(); // Embed in header sb.AppendLine("\r\n<script type=\"text/javascript\">"); sb.AppendLine(script); sb.AppendLine("</script>"); int? index = HttpContext.Current.Items["__ScriptResourceIndex"] as int?; if (index == null) index = 0; if (renderMode == ScriptRenderModes.HeaderTop) { control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString())); index++; } else if(renderMode == ScriptRenderModes.Header) control.Page.Header.Controls.Add(new LiteralControl(sb.ToString())); else if (renderMode == ScriptRenderModes.BottomOfPage) control.Page.Controls.AddAt(control.Page.Controls.Count-1,new LiteralControl(sb.ToString())); HttpContext.Current.Items["__ScriptResourceIndex"] = index; } Note that the routine has to keep track of items inserted by id so that if the same item is added again with the same key it won’t generate two script entries. Additionally the code has to keep track of how many insertions have been made at the top of the document so that entries are added in the proper order. 
The RegisterClientScriptInclude method is similar, but there's some additional logic in here to deal with script file references and ClientScriptProxy's (optional) custom resource handler that provides script compression:

/// <summary>
/// Registers a client script reference into the page with the option to specify
/// the script location in the page
/// </summary>
/// <param name="control">Any control instance - typically page</param>
/// <param name="type">Type that acts as qualifier (uniqueness)</param>
/// <param name="url">the Url to the script resource</param>
/// <param name="ScriptRenderModes">Determines where the script is rendered</param>
public void RegisterClientScriptInclude(Control control, Type type, string url, ScriptRenderModes renderMode)
{
    const string STR_ScriptResourceIndex = "__ScriptResourceIndex";

    if (string.IsNullOrEmpty(url))
        return;

    if (renderMode == ScriptRenderModes.Inherit)
        renderMode = DefaultScriptRenderMode;

    // Extract just the script filename
    string fileId = null;

    // Check resource IDs and try to match to mapped file resources
    // Used to allow scripts not to be loaded more than once whether
    // embedded manually (script tag) or via resources with ClientScriptProxy
    if (url.Contains(".axd?r="))
    {
        string res = HttpUtility.UrlDecode(StringUtils.ExtractString(url, "?r=", "&", false, true));
        foreach (ScriptResourceAlias item in ScriptResourceAliases)
        {
            if (item.Resource == res)
            {
                fileId = item.Alias + ".js";
                break;
            }
        }
        if (fileId == null)
            fileId = url.ToLower();
    }
    else
        fileId = Path.GetFileName(url).ToLower();

    // No dupes - ref script include only once
    const string identifier = "script_";
    if (HttpContext.Current.Items.Contains(identifier + fileId))
        return;

    HttpContext.Current.Items.Add(identifier + fileId, string.Empty);

    // just use script manager or ClientScriptManager
    if (control.Page.Header == null ||
        renderMode == ScriptRenderModes.Script ||
        renderMode == ScriptRenderModes.Inline)
    {
        RegisterClientScriptInclude(control, type, url, url);
        return;
    }

    // Retrieve script index in header
    int? index = HttpContext.Current.Items[STR_ScriptResourceIndex] as int?;
    if (index == null)
        index = 0;

    StringBuilder sb = new StringBuilder(256);

    url = WebUtils.ResolveUrl(url);

    // Embed in header
    sb.AppendLine("\r\n<script src=\"" + url + "\" type=\"text/javascript\"></script>");

    if (renderMode == ScriptRenderModes.HeaderTop)
    {
        control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString()));
        index++;
    }
    else if (renderMode == ScriptRenderModes.Header)
        control.Page.Header.Controls.Add(new LiteralControl(sb.ToString()));
    else if (renderMode == ScriptRenderModes.BottomOfPage)
        control.Page.Controls.AddAt(control.Page.Controls.Count - 1, new LiteralControl(sb.ToString()));

    HttpContext.Current.Items[STR_ScriptResourceIndex] = index;
}

There's a little more code here that deals with cleaning up the passed-in URL, plus some custom handling of script resources that run through the ScriptCompressionModule – any script resources loaded in this fashion are automatically cached based on the resource id. Raw URLs extract just the filename from the URL and cache based on that. All of this avoids doubling up scripts if the method is called multiple times by multiple instances of the same control, or by several controls that all load the same resources/includes.
Finally RegisterClientScriptResource utilizes the previous method to wrap the WebResourceUrl as well as some custom functionality for the resource compression module:

/// <summary>
/// Returns a WebResource or ScriptResource URL for script resources that are to be
/// embedded as script includes.
/// </summary>
/// <param name="control">Any control</param>
/// <param name="type">A type in assembly where resources are located</param>
/// <param name="resourceName">Name of the resource to load</param>
/// <param name="renderMode">Determines where in the document the link is rendered</param>
public void RegisterClientScriptResource(Control control, Type type, string resourceName, ScriptRenderModes renderMode)
{
    string resourceUrl = GetClientScriptResourceUrl(control, type, resourceName);
    RegisterClientScriptInclude(control, type, resourceUrl, renderMode);
}

/// <summary>
/// Works like GetWebResourceUrl but can be used with javascript resources
/// to allow using of resource compression (if the module is loaded).
/// </summary>
/// <param name="control"></param>
/// <param name="type"></param>
/// <param name="resourceName"></param>
/// <returns></returns>
public string GetClientScriptResourceUrl(Control control, Type type, string resourceName)
{
#if IncludeScriptCompressionModuleSupport
    // If wwScriptCompression Module through Web.config is loaded use it to compress
    // script resources by using wcSC.axd Url the module intercepts
    if (ScriptCompressionModule.ScriptCompressionModuleActive)
    {
        string url = "~/wwSC.axd?r=" + HttpUtility.UrlEncode(resourceName);
        if (type.Assembly != GetType().Assembly)
            url += "&t=" + HttpUtility.UrlEncode(type.FullName);
        return WebUtils.ResolveUrl(url);
    }
#endif
    return control.Page.ClientScript.GetWebResourceUrl(type, resourceName);
}

This code merely retrieves the resource URL and then simply calls back to RegisterClientScriptInclude with the URL to be embedded, which means there's nothing specific to deal with other than the custom compression module logic – nice and easy. What else is there in ClientScriptProxy? ClientScriptProxy also provides a few other useful services beyond what I've already covered here: Transparent ScriptManager and ClientScript calls ClientScriptProxy includes a host of routines that help figure out whether a script manager is available or not, and all functions in this class call the appropriate object – ScriptManager or ClientScript – that is available in the current page to ensure that scripts get embedded into pages properly. This is especially useful for control development where controls have no control over the scripting environment in place on the page. RegisterCssLink and RegisterCssResource Much like the script embedding functions, these two methods allow embedding of CSS links. CSS links are appended to the header or to a form declared with runat="server". LoadControlScript Is a high level resource loading routine that can be used to easily switch between different script linking modes. It supports loading from a WebResource, a URL, or not loading anything at all. This is very useful if you build controls that deal with specification of resource URLs/IDs in a standard way. Check out the full Code You can check out the full code to the ClientScriptProxy class here: ClientScriptProxy.cs ClientScriptProxy Documentation (class reference) Note that ClientScriptProxy has a few dependencies in the West Wind Web Toolkit, of which it is a part.
ControlResources holds a few standard constants and script resource links, and the ScriptCompressionModule is referenced in a few of the script inclusion methods. There's also a useful ScriptContainer companion control to the ClientScriptProxy that allows scripts to be placed into the page's markup, including the ability to specify the script location and script minification options. You can find all the dependencies in the West Wind Web Toolkit repository: West Wind Web Toolkit Repository West Wind Web Toolkit Home Page © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET  JavaScript
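As a usage postscript (my own sketch, not code from the article): a custom server control might pull its script dependencies through the proxy in OnPreRender. The control class and the init script below are hypothetical, but the method signatures and the ControlResources constant are the ones shown above, and the key-based duplicate check means the include is only emitted once no matter how many instances of the control sit on the page.

using System;
using System.Web.UI;
// plus the West Wind Web Toolkit namespaces that contain ClientScriptProxy and ControlResources

public class MyScriptedControl : Control
{
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);

        // Embedded jQuery resource, linked once into the header for the whole page
        ClientScriptProxy.Current.RegisterClientScriptResource(this, typeof(ControlResources),
            ControlResources.JQUERY_SCRIPT_RESOURCE, ScriptRenderModes.Header);

        // Startup script for the control (also registered once, thanks to the key check)
        ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
            "myscriptedcontrol_init", "$().ready(function() { /* init logic */ });",
            true, ScriptRenderModes.Header);
    }
}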

    Read the article

  • Dynamic connection for LINQ to SQL DataContext

    - by Steve Clements
If for some reason you need to specify a specific connection string for a DataContext, you can of course pass the connection string when you initialise your DataContext object.  A common scenario could be a dev/test/stage/live connection string, but in my case it's for either a live or archive database.   I however want the connection string to be handled by the DataContext itself. There are probably lots of different reasons someone would want to do this…but here are mine:
I want the same connection string for all instances of DataContext, but I don't know what it is yet!
I prefer the clean code and ease of not using a constructor parameter.
The refactoring of using a constructor parameter could be a nightmare.
So my approach is to create a new partial class for the DataContext and handle the empty constructor in there. First, from within the LINQ to SQL designer I changed the connection property to None.  This will remove the empty constructor code from the auto generated designer.cs file. Right click on the .dbml file, click View Code and a file and class is created for you! You'll see the new class created in Solution Explorer and the file will open. We are going to be playing with constructors so you need to add the inheritance from System.Data.Linq.DataContext:

public partial class DataClasses1DataContext : System.Data.Linq.DataContext
{
}

Add the empty constructor; I have also added a property that will get my connection string. You will have whatever logic you need to decide and get the connection string you require – in my case I will be hitting a database, but I have omitted that code.

public partial class DataClasses1DataContext : System.Data.Linq.DataContext
{
    // Connection String Keys - stored in web.config
    static string LiveConnectionStringKey = "LiveConnectionString";
    static string ArchiveConnectionStringKey = "ArchiveConnectionString";

    protected static string ConnectionString
    {
        get
        {
            if (DoIWantToUseTheLiveConnection)
            {
                return global::System.Configuration.ConfigurationManager.ConnectionStrings[LiveConnectionStringKey].ConnectionString;
            }
            else
            {
                return global::System.Configuration.ConfigurationManager.ConnectionStrings[ArchiveConnectionStringKey].ConnectionString;
            }
        }
    }

    public DataClasses1DataContext() :
        base(ConnectionString, mappingSource)
    {
        OnCreated();
    }
}

Now when I new up my DataContext, I can just leave the constructor empty and my partial class will decide which one I need to use. Nice, clean code that can be easily refactored and tested.
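For completeness, here is a quick usage sketch. The Customers table and its fields are hypothetical; the point is simply that no connection string appears at the call site, because the partial class above picks the live or archive database on its own.

using System;
using System.Linq;

public class CustomerReport
{
    public void PrintRecentCustomers()
    {
        // No connection string passed here - the empty constructor resolves it
        using (var db = new DataClasses1DataContext())
        {
            var recent = from c in db.Customers
                         where c.CreatedOn > DateTime.Today.AddDays(-7)
                         select c;

            foreach (var customer in recent)
                Console.WriteLine(customer.Name);
        }
    }
}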

    Read the article

  • Using linked servers, OPENROWSET and OPENQUERY

    - by BuckWoody
SQL Server has a few mechanisms to reach out to another server (even another server type) and query data from within a Transact-SQL statement. Among them are a set of stored credentials and information (called a Linked Server), a statement that uses a linked server called OPENQUERY, another called OPENROWSET, and one called OPENDATASOURCE. This post isn't about those particular functions or statements – hit the links for more if you're new to those topics. I'm actually more concerned about where I see these used than the particular method. In many cases, a Linked Server isn't another Relational Database Management System (RDBMS) like Oracle or DB2 (which is possible with a linked server), but another SQL Server. My concern is that linked servers are the new Data Transformation Services (DTS) from SQL Server 2000 – something that was designed for one purpose but which is being morphed into something much more. In the case of DTS, most of us turned that feature into a full-fledged job system. What was designed as a simple data import and export system has been pressed into service doing logic, routing and timing. And of course we all know how painful it was to move off of a complex DTS system onto SQL Server Integration Services. In the case of linked servers, what should be used as a method of running a simple query or two on another server – where you have an occasional connection or need a quick import of a small data set – is morphing into a full federation strategy. In some cases I've seen a complex web of linked servers, and when credentials, names or anything else changes there are huge problems. Now don't get me wrong – linked servers and other forms of distributing queries are a fantastic set of tools that we have to move data around. I'm just saying that when you start having lots of workarounds and when things get really complicated, you might want to step back a little and ask if there's a better way. Are you able to tolerate some latency? Perhaps you're able to use Service Broker. Would you like to be platform-independent on the data source? Perhaps a middle tier might make more sense, abstracting the queries there and sending them to the proper server – a rough sketch of that idea follows below. Designed properly, I've seen these systems scale further and be more resilient than loading up on linked servers.
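To make the middle-tier suggestion a little more concrete, here is a minimal sketch of the idea in application code (this is my illustration, not BuckWoody's; the class and data source names are made up): the application resolves the right connection per data source and runs an ordinary parameterized query against it, instead of chaining servers together with linked servers.

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public class QueryRouter
{
    private readonly IDictionary<string, string> _connectionStrings;

    public QueryRouter(IDictionary<string, string> connectionStrings)
    {
        // e.g. { "Sales" -> connection string A, "Claims" -> connection string B }
        _connectionStrings = connectionStrings;
    }

    public DataTable Execute(string dataSourceName, string sql, params SqlParameter[] parameters)
    {
        var table = new DataTable();

        // Route the query to whichever server owns this data source
        using (var connection = new SqlConnection(_connectionStrings[dataSourceName]))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddRange(parameters);
            using (var adapter = new SqlDataAdapter(command))
                adapter.Fill(table);
        }
        return table;
    }
}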

    Read the article

  • Creating STA COM compatible ASP.NET Applications

    - by Rick Strahl
When building ASP.NET applications that interface with old school COM objects like those created with VB6 or Visual FoxPro (MTDLL), it's extremely important that the threads that are serving requests use Single Threaded Apartment Threading. STA is a COM built-in technology that allows essentially single threaded components to operate reliably in a multi-threaded environment. STAs guarantee that COM objects instantiated on a specific thread stay on that specific thread and any access to a COM object from another thread automatically marshals that thread to the STA thread. The end effect is that you can have multiple threads, but a COM object instance lives on a fixed, never-changing thread. ASP.NET by default uses MTA (multi-threaded apartment) threads which are truly free spinning threads that pay no heed to COM object marshaling. This is vastly more efficient than STA threading, which has a bit of overhead in determining whether it's OK to run code on a given thread or whether some sort of thread/COM marshaling needs to occur. MTA COM components can be very efficient, but STA COM components in a multi-threaded environment always tend to have a fair amount of overhead. It's amazing how much COM Interop I still see today, so while it seems really old school to be talking about this topic, it's actually quite apropos for me, as I have many customers using legacy COM systems that need to interface with other .NET applications. In this post I'm consolidating some of the hacks I've used to integrate with various ASP.NET technologies when using STA COM Components. STA in ASP.NET Support for STA threading in the ASP.NET framework is fairly limited. Specifically only the original ASP.NET WebForms technology supports STA threading directly via its STA Page Handler implementation or what you might know as ASPCOMPAT mode. For WebForms running STA components is as easy as specifying the ASPCOMPAT attribute in the @Page tag:

<%@ Page Language="C#" AspCompat="true" %>

which runs the page in STA mode. Removing it runs in MTA mode. Simple. Unfortunately all other ASP.NET technologies built on top of the core ASP.NET engine do not support STA natively. So if you want to use STA COM components in MVC or with classic ASMX Web Services, there's no automatic way like the ASPCOMPAT keyword available. So what happens when you run an STA COM component in an MTA application? In low volume environments - nothing much will happen. The COM objects will appear to work just fine as there are no simultaneous thread interactions and the COM component will happily run on a single thread or multiple single threads one at a time. So for testing, running components in MTA environments may appear to work just fine. However, as load increases and threads get re-used by ASP.NET, COM objects will end up getting created on multiple different threads. This can result in crashes or hangs, or data corruption in the STA components which store their state in thread local storage on the STA thread. If threads overlap this global store can easily get corrupted which in turn causes problems. STA ensures that any COM object instance loaded always stays on the same thread it was instantiated on. What about COM+? COM+ is supposed to address the problem of STA in MTA applications by providing an abstraction with its own thread pool manager for COM objects. It steps into the COM instantiation pipeline and hands out COM instances from its own internally maintained STA Thread pool.
This guarantees that the COM instantiation threads are STA threads if using STA components. COM+ works, but in my experience the technology is very, very slow for STA components. It adds a ton of overhead and reduces COM performance noticeably in load tests in IIS. COM+ can make sense in some situations but for Web apps with STA components it falls short. In addition there's also the need to ensure that COM+ is set up and configured on the target machine and the fact that components have to be registered in COM+. COM+ also keeps components up at all times, so if a component needs to be replaced the COM+ package needs to be unloaded (the same is true for IIS hosted components but it's more common to manage that). COM+ is an option for well established components, but native STA support tends to provide better performance and more consistent usability, IMHO. STA for non supporting ASP.NET Technologies As mentioned above only WebForms supports STA natively. However, by utilizing the WebForms ASP.NET Page handler internally it's actually possible to trick various other ASP.NET technologies and let them work with STA components. This is ugly, but I've used each of these in various applications and I've had minimal problems making them work with FoxPro STA COM components, which is about as difficult as it gets for COM Interop in .NET. In this post I summarize several STA workarounds that enable you to use STA threading with these ASP.NET Technologies: ASMX Web Services ASP.NET MVC WCF Web Services ASP.NET Web API ASMX Web Services I start with classic ASP.NET ASMX Web Services because it's the easiest mechanism that allows for STA modification. It also clearly demonstrates how the WebForms STA Page Handler is the key technology that enables the various other solutions to create STA components. Essentially the way this works is to override the WebForms Page class and hijack its init functionality for processing requests. Here's what this looks like for Web Services:

namespace FoxProAspNet
{
    public class WebServiceStaHandler : System.Web.UI.Page, IHttpAsyncHandler
    {
        protected override void OnInit(EventArgs e)
        {
            IHttpHandler handler = new WebServiceHandlerFactory().GetHandler(
                this.Context,
                this.Context.Request.HttpMethod,
                this.Context.Request.FilePath,
                this.Context.Request.PhysicalPath);

            handler.ProcessRequest(this.Context);
            this.Context.ApplicationInstance.CompleteRequest();
        }

        public IAsyncResult BeginProcessRequest(
            HttpContext context, AsyncCallback cb, object extraData)
        {
            return this.AspCompatBeginProcessRequest(context, cb, extraData);
        }

        public void EndProcessRequest(IAsyncResult result)
        {
            this.AspCompatEndProcessRequest(result);
        }
    }

    public class AspCompatWebServiceStaHandlerWithSessionState :
        WebServiceStaHandler, IRequiresSessionState
    {
    }
}

This class overrides the ASP.NET WebForms Page class, which has little known AspCompatBeginProcessRequest() and AspCompatEndProcessRequest() methods that are responsible for providing the WebForms ASPCOMPAT functionality. These methods handle routing requests to STA threads. Note there are two classes - one that includes session state and one that does not. If you plan on using ASP.NET Session state use the latter class, otherwise stick to the former. This maps to the EnableSessionState page setting in WebForms. This class simply hooks into this functionality by overriding the BeginProcessRequest and EndProcessRequest methods and always forcing it into the AspCompat methods.
The way this works is that BeginProcessRequest() fires first to set up the threads and starts initializing the handler. As part of that process the OnInit() method is fired, which is now already running on an STA thread. The code then creates an instance of the actual WebService handler factory and calls its ProcessRequest method to start executing, which generates the Web Service result. Immediately after ProcessRequest the request is stopped with Application.CompleteRequest(), which ensures that the rest of the Page handler logic doesn't fire. This means that even though the fairly heavy Page class is overridden here, it doesn't end up executing any of its internal processing, which makes this code fairly efficient. In a nutshell, we're hijacking the Page HttpHandler and forcing it to process the WebService process handler in the context of the AspCompat handler behavior. Hooking up the Handler Because the above is an HttpHandler implementation you need to hook up the custom handler and replace the standard ASMX handler. To do this you need to modify the web.config file (here for IIS 7 and IIS Express):

<configuration>
  <system.webServer>
    <handlers>
      <remove name="WebServiceHandlerFactory-Integrated-4.0" />
      <add name="Asmx STA Web Service Handler" path="*.asmx" verb="*"
           type="FoxProAspNet.WebServiceStaHandler" precondition="integrated" />
    </handlers>
  </system.webServer>
</configuration>

(Note: The name for the WebServiceHandlerFactory-Integrated-4.0 might be slightly different depending on your server version. Check the IIS Handler configuration in the IIS Management Console for the exact name, or simply remove the handler from the list there, which will propagate to your web.config). For IIS 5 & 6 (Windows XP/2003) or the Visual Studio Web Server use:

<configuration>
  <system.web>
    <httpHandlers>
      <remove path="*.asmx" verb="*" />
      <add path="*.asmx" verb="*" type="FoxProAspNet.WebServiceStaHandler" />
    </httpHandlers>
  </system.web>
</configuration>

To test, create a new ASMX Web Service and create a method like this:

[WebService(Namespace = "http://foxaspnet.org/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
public class FoxWebService : System.Web.Services.WebService
{
    [WebMethod]
    public string HelloWorld()
    {
        return "Hello World. Threading mode is: " +
               System.Threading.Thread.CurrentThread.GetApartmentState();
    }
}

Run this before you put in the web.config configuration changes and you should get: Hello World. Threading mode is: MTA Then put the handler mapping into Web.config and you should see: Hello World. Threading mode is: STA And you're on your way to using STA COM components. It's a hack but it works well! I've used this with several high volume Web Service installations with various customers and it's been fast and reliable. ASP.NET MVC ASP.NET MVC has quickly become the most popular ASP.NET technology, replacing WebForms for creating HTML output. MVC is more complex to get started with, but once you understand the basic structure of how requests flow through the MVC pipeline it's easy to use and amazingly flexible in manipulating HTML requests. In addition, MVC has great support for non-HTML output sources like JSON and XML, making it an excellent choice for AJAX requests without any additional tools. Unlike WebForms, ASP.NET MVC doesn't support STA threads natively and so some trickery is needed to make it work with STA threads as well. MVC gets its handler implementation through custom route handlers using ASP.NET's built in routing semantics.
Making MVC work with STA therefore means hooking the Page handler into the route handler implementation. As with the Web Service handler, the first step is to create a custom HttpHandler that can instantiate an MVC request pipeline properly:

public class MvcStaThreadHttpAsyncHandler : Page, IHttpAsyncHandler, IRequiresSessionState
{
    private RequestContext _requestContext;

    public MvcStaThreadHttpAsyncHandler(RequestContext requestContext)
    {
        if (requestContext == null)
            throw new ArgumentNullException("requestContext");
        _requestContext = requestContext;
    }

    public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
    {
        return this.AspCompatBeginProcessRequest(context, cb, extraData);
    }

    protected override void OnInit(EventArgs e)
    {
        var controllerName = _requestContext.RouteData.GetRequiredString("controller");
        var controllerFactory = ControllerBuilder.Current.GetControllerFactory();
        var controller = controllerFactory.CreateController(_requestContext, controllerName);
        if (controller == null)
            throw new InvalidOperationException("Could not find controller: " + controllerName);

        try
        {
            controller.Execute(_requestContext);
        }
        finally
        {
            controllerFactory.ReleaseController(controller);
        }

        this.Context.ApplicationInstance.CompleteRequest();
    }

    public void EndProcessRequest(IAsyncResult result)
    {
        this.AspCompatEndProcessRequest(result);
    }

    public override void ProcessRequest(HttpContext httpContext)
    {
        throw new NotSupportedException(
            "STAThreadRouteHandler does not support ProcessRequest called (only BeginProcessRequest)");
    }
}

This handler code figures out which controller to load and then executes the controller. MVC internally provides the information needed to route to the appropriate method and pass the right parameters. Like the Web Service handler, the logic occurs in OnInit() and performs all the processing in that part of the request. Next, we need a RouteHandler that can actually pick up this handler. Unlike the Web Service handler where we simply registered the handler, MVC requires a RouteHandler to pick up the handler. RouteHandlers look at the URL's path and based on that decide what handler to invoke. The route handler is pretty simple - all it does is load our custom handler:

public class MvcStaThreadRouteHandler : IRouteHandler
{
    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        if (requestContext == null)
            throw new ArgumentNullException("requestContext");

        return new MvcStaThreadHttpAsyncHandler(requestContext);
    }
}

At this point you can instantiate this route handler and force STA requests to MVC by specifying a route.
The following sets up the ASP.NET Default Route:

Route mvcRoute = new Route("{controller}/{action}/{id}",
    new RouteValueDictionary(
        new { controller = "Home", action = "Index", id = UrlParameter.Optional }),
    new MvcStaThreadRouteHandler());
RouteTable.Routes.Add(mvcRoute);

To make this code a little easier to work with and mimic the behavior of the routes.MapRoute() extension method that MVC provides, here is an extension method for MapMvcStaRoute():

public static class RouteCollectionExtensions
{
    public static void MapMvcStaRoute(this RouteCollection routeTable,
                                      string name, string url, object defaults = null)
    {
        Route mvcRoute = new Route(url,
            new RouteValueDictionary(defaults),
            new MvcStaThreadRouteHandler());
        RouteTable.Routes.Add(mvcRoute);
    }
}

With this the syntax to add a route becomes a little easier and matches the MapRoute() method:

RouteTable.Routes.MapMvcStaRoute(
    name: "Default",
    url: "{controller}/{action}/{id}",
    defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
);

The nice thing about this route handler, STA Handler and extension method is that it's fully self contained. You can put all three into a single class file and stick it into your Web app, and then simply call MapMvcStaRoute() and it just works. Easy! To see whether this works create an MVC controller like this:

public class ThreadTestController : Controller
{
    public string ThreadingMode()
    {
        return Thread.CurrentThread.GetApartmentState().ToString();
    }
}

Try this test first with only the MapRoute() hookup in the route configuration, in which case you should get MTA as the value. Then change the MapRoute() call to MapMvcStaRoute(), leaving all the parameters the same, and re-run the request. You now should see STA as the result. You're on your way to using STA COM components reliably in ASP.NET MVC. WCF Web Services running through IIS WCF Web Services provide a more robust and wider range of services for Web Services. You can use WCF over HTTP, TCP, and Pipes, and WCF services support WS* secure services. There are many features in WCF that go way beyond what ASMX can do. But it's also a bit more complex than ASMX. As a basic rule, if you need to serve straight SOAP Services over HTTP I'd recommend sticking with the simpler ASMX services, especially if COM is involved. If you need WS* support or want to serve data over non-HTTP protocols then WCF makes more sense. WCF is not my forte, but I found a solution from Scott Seely on his blog that describes the process and seems to work well. I'm copying his code below so this STA information is all in one place, and I'll quickly explain it. Scott's code basically works by creating a custom OperationBehavior which can be specified via an [STAOperation] attribute on every method. Using his attribute you end up with a class (or Interface if you separate the contract and class) that looks like this:

[ServiceContract]
public class WcfService
{
    [OperationContract]
    public string HelloWorldMta()
    {
        return Thread.CurrentThread.GetApartmentState().ToString();
    }

    // Make sure you use this custom STAOperationBehavior
    // attribute to force STA operation of service methods
    [STAOperationBehavior]
    [OperationContract]
    public string HelloWorldSta()
    {
        return Thread.CurrentThread.GetApartmentState().ToString();
    }
}

Pretty straightforward. The latter method returns STA while the former returns MTA. To make STA work, every method needs to be marked up. The implementation consists of the attribute and the OperationInvoker implementation.
Here are the two classes required to make this work from Scott's post:

public class STAOperationBehaviorAttribute : Attribute, IOperationBehavior
{
    public void AddBindingParameters(OperationDescription operationDescription,
        System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
    {
    }

    public void ApplyClientBehavior(OperationDescription operationDescription,
        System.ServiceModel.Dispatcher.ClientOperation clientOperation)
    {
        // If this is applied on the client, well, it just doesn't make sense.
        // Don't throw in case this attribute was applied on the contract
        // instead of the implementation.
    }

    public void ApplyDispatchBehavior(OperationDescription operationDescription,
        System.ServiceModel.Dispatcher.DispatchOperation dispatchOperation)
    {
        // Change the IOperationInvoker for this operation.
        dispatchOperation.Invoker = new STAOperationInvoker(dispatchOperation.Invoker);
    }

    public void Validate(OperationDescription operationDescription)
    {
        if (operationDescription.SyncMethod == null)
        {
            throw new InvalidOperationException("The STAOperationBehaviorAttribute " +
                "only works for synchronous method invocations.");
        }
    }
}

public class STAOperationInvoker : IOperationInvoker
{
    IOperationInvoker _innerInvoker;

    public STAOperationInvoker(IOperationInvoker invoker)
    {
        _innerInvoker = invoker;
    }

    public object[] AllocateInputs()
    {
        return _innerInvoker.AllocateInputs();
    }

    public object Invoke(object instance, object[] inputs, out object[] outputs)
    {
        // Create a new, STA thread
        object[] staOutputs = null;
        object retval = null;
        Thread thread = new Thread(
            delegate()
            {
                retval = _innerInvoker.Invoke(instance, inputs, out staOutputs);
            });
        thread.SetApartmentState(ApartmentState.STA);
        thread.Start();
        thread.Join();
        outputs = staOutputs;
        return retval;
    }

    public IAsyncResult InvokeBegin(object instance, object[] inputs, AsyncCallback callback, object state)
    {
        // We don't handle async…
        throw new NotImplementedException();
    }

    public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
    {
        // We don't handle async…
        throw new NotImplementedException();
    }

    public bool IsSynchronous
    {
        get { return true; }
    }
}

The key in this setup is the Invoker and the Invoke method, which creates a new thread and then fires the request on this new thread. Because this approach creates a new thread for every request it's not super efficient. There's a bunch of overhead involved in creating the thread and throwing it away after each request, but it'll work for low volume requests and ensure each request runs in STA mode. If better performance is required it would be useful to create a custom thread manager that can pool a number of STA threads and hand off threads as needed rather than creating new threads on every request. If your Web Service needs are simple and you need only to serve standard SOAP 1.x requests, I would recommend sticking with ASMX services. It's easier to set up and work with, and for STA component use it'll be significantly better performing since ASP.NET manages the STA thread pool for you rather than firing new threads for each request. One nice thing about Scott's code, though, is that it works in any WCF environment including self hosting. It has no dependency on ASP.NET or WebForms for that matter. STA - If you must STA components are a pain in the ass, and thankfully there isn't too much stuff out there anymore that requires them. But when you need it and you need to access STA functionality from .NET, at least there are a few options available to make it happen.
Each of these solutions is a bit hacky, but they work - I've used all of them in production with good results with FoxPro components. I hope compiling all of these in one place here makes STA consumption a little bit easier. I feel your pain :-) Resources Download STA Handler Code Examples Scott Seely's original STA WCF OperationBehavior Article © Rick Strahl, West Wind Technologies, 2005-2012. Posted in FoxPro   ASP.NET  .NET  COM
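As a footnote to the WCF invoker above, and purely as an illustrative sketch of the pooling idea mentioned there (this is my own addition, not part of the downloadable samples): the per-call thread creation could be replaced with a long-lived STA thread that drains a work queue.

using System;
using System.Collections.Concurrent;
using System.Threading;

// A single long-lived STA worker thread that executes queued delegates. A fuller
// implementation would round-robin across several such threads; error handling
// and shutdown are kept to a minimum here.
public static class StaWorkQueue
{
    private static readonly BlockingCollection<Action> _work = new BlockingCollection<Action>();

    static StaWorkQueue()
    {
        var thread = new Thread(() =>
        {
            // Blocks until work arrives, then runs it on this STA thread
            foreach (var action in _work.GetConsumingEnumerable())
                action();
        });
        thread.SetApartmentState(ApartmentState.STA);
        thread.IsBackground = true;
        thread.Start();
    }

    // Queues the delegate onto the STA thread and blocks the caller until it completes
    public static T Run<T>(Func<T> func)
    {
        T result = default(T);
        Exception error = null;
        using (var done = new ManualResetEventSlim())
        {
            _work.Add(() =>
            {
                try { result = func(); }
                catch (Exception ex) { error = ex; }
                finally { done.Set(); }
            });
            done.Wait();
        }
        if (error != null)
            throw error;
        return result;
    }
}

The Invoke() method in the invoker could then call StaWorkQueue.Run(...) instead of newing up a thread per request. Note that a single queue serializes all COM calls through one thread, which may or may not be acceptable depending on the component.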

    Read the article

  • Replacing “if”s with your own number system

    - by Michael Williamson
During our second code retreat at Red Gate, the restriction for one of the sessions was disallowing the use of if statements. That includes other constructs that have the same effect, such as switch statements or loops that will only be executed zero or one times. The idea is to encourage use of polymorphism instead, and see just how far it can be used to get rid of “if”s. The main place where people struggled to get rid of “if”s in their implementation of Conway’s Game of Life was the piece of code that decides whether a cell is live or dead in the next generation. For instance, for a cell that’s currently live, the code might look something like this:

if (numberOfNeighbours == 2 || numberOfNeighbours == 3) {
    return CellState.LIVE;
} else {
    return CellState.DEAD;
}

The problem is that we need to change behaviour depending on the number of neighbours each cell has, but polymorphism only allows us to switch behaviour based on the type of a value. It follows that the solution is to make different numbers have different types:

public interface IConwayNumber {
    IConwayNumber Increment();
    CellState LiveCellNextGeneration();
}

public class Zero : IConwayNumber {
    public IConwayNumber Increment() { return new One(); }
    public CellState LiveCellNextGeneration() { return CellState.DEAD; }
}

public class One : IConwayNumber {
    public IConwayNumber Increment() { return new Two(); }
    public CellState LiveCellNextGeneration() { return CellState.DEAD; }
}

public class Two : IConwayNumber {
    public IConwayNumber Increment() { return new Three(); }
    public CellState LiveCellNextGeneration() { return CellState.LIVE; }
}

public class Three : IConwayNumber {
    public IConwayNumber Increment() { return new FourOrMore(); }
    public CellState LiveCellNextGeneration() { return CellState.LIVE; }
}

public class FourOrMore : IConwayNumber {
    public IConwayNumber Increment() { return this; }
    public CellState LiveCellNextGeneration() { return CellState.DEAD; }
}

In the code that counts the number of neighbours, we use our new number system by starting with Zero and incrementing when we find a neighbour. To choose the next state of the cell, rather than inspecting the number of neighbours, we ask the number of neighbours for the next state directly: return numberOfNeighbours.LiveCellNextGeneration(); And now we have no “if”s! If C# had double-dispatch, or if we used the visitor pattern, we could move the logic for choosing the next cell out of the number classes, which might feel a bit more natural. I suspect that reimplementing the natural numbers is still going to feel about the same amount of crazy though.
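To make that closing remark concrete, here is a rough sketch of the visitor variant (my own illustration, not code from the retreat): the number types only know how to accept a visitor, and the next-generation rule for a live cell lives in one place again, still without any "if"s.

public interface IConwayNumberVisitor<T>
{
    T VisitZero();
    T VisitOne();
    T VisitTwo();
    T VisitThree();
    T VisitFourOrMore();
}

public interface IConwayNumber
{
    IConwayNumber Increment();
    T Accept<T>(IConwayNumberVisitor<T> visitor);
}

// One of the number classes; the others follow the same pattern
public class Two : IConwayNumber
{
    public IConwayNumber Increment() { return new Three(); }
    public T Accept<T>(IConwayNumberVisitor<T> visitor) { return visitor.VisitTwo(); }
}

// The rule for a live cell, pulled out of the number classes
public class LiveCellNextGenerationVisitor : IConwayNumberVisitor<CellState>
{
    public CellState VisitZero()       { return CellState.DEAD; }
    public CellState VisitOne()        { return CellState.DEAD; }
    public CellState VisitTwo()        { return CellState.LIVE; }
    public CellState VisitThree()      { return CellState.LIVE; }
    public CellState VisitFourOrMore() { return CellState.DEAD; }
}

// Usage: numberOfNeighbours.Accept(new LiveCellNextGenerationVisitor())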

    Read the article

  • SQL SERVER – Introduction to Adaptive ETL Tool – How adaptive is your ETL?

    - by pinaldave
I am often reminded by the fact that BI/data warehousing infrastructure is very brittle and not very adaptive to change. There are lots of basic use cases where data needs to be frequently loaded into SQL Server or another database. What I have found is that as long as the sources and targets stay the same, SSIS or any other ETL tool for that matter does a pretty good job handling these types of scenarios. But what happens when you are faced with more challenging scenarios, where the data formats and possibly the data types of the source data are changing from customer to customer?  Let’s examine a real-life situation where a health management company receives claims data from their customers in various source formats. Even though this company supplied all their customers with the same claims forms, they ended up building one-off ETL applications to process the claims for each customer. Why, you ask? Well, it turned out that the claims data from various regional hospitals they needed to process had slightly different data formats, e.g. “integer” versus “string” data field definitions.  Moreover the data itself was represented with slight nuances, e.g. “0001124” or “1124” or “0000001124” to represent a particular account number, which forced them, as I alluded to above, to build new ETL processes for each customer in order to overcome the inconsistencies in the various claims forms.  As a result, they experienced a lot of redundancy in these ETL processes and recognized quickly that their system would become more difficult to maintain over time. So imagine for a moment that you could use an ETL tool that helps you abstract the data formats so that your ETL transformation process becomes more reusable. Imagine that one claims form represents a data item as a string – acc_no(varchar) – while a second claims form represents the same data item as an integer – account_no(integer). This would break your traditional ETL process as the data mappings are hard-wired.  But in a world of abstracted definitions, all you need to do is create parallel data mappings to a common data representation used within your ETL application; that is, map both external data fields to a common attribute whose name and type remain unchanged within the application. acc_no(varchar) is mapped to account_number(integer) expressor Studio first claim form schema mapping account_no(integer) is also mapped to account_number(integer) expressor Studio second claim form schema mapping All the data processing logic that follows manipulates the data as an integer value named account_number. Well, these are the kind of problems that the expressor data integration solution automates for you.  I’ve been following them since last year and encourage you to check them out by downloading their free expressor Studio ETL software. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: ETL, SSIS
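The abstraction idea can be illustrated in plain C# as well (this is my own illustration of the concept, not expressor's API; the type and field names are made up): each source format maps its own representation of the account number onto one common attribute, and everything downstream only ever sees account_number as an integer.

// The shared representation used by all downstream processing
class CommonClaim
{
    public int AccountNumber { get; set; }   // the common "account_number(integer)" attribute
}

static class ClaimMappings
{
    // First claim form: "acc_no(varchar)", e.g. "0001124"
    public static CommonClaim FromFirstClaimForm(string accNo)
    {
        return new CommonClaim { AccountNumber = int.Parse(accNo) };
    }

    // Second claim form: "account_no(integer)", e.g. 1124
    public static CommonClaim FromSecondClaimForm(int accountNo)
    {
        return new CommonClaim { AccountNumber = accountNo };
    }
}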

    Read the article

  • ActAs and OnBehalfOf support in WIF

    - by cibrax
I discussed some time ago how WIF supported a new WS-Trust 1.4 element, “ActAs”, and how that element could be used for authentication delegation.  The thing is that there is another feature in WS-Trust 1.4 that also comes in handy for this kind of scenario, and that I did not mention in that last post: “OnBehalfOf”. Shiung Yong wrote an excellent summary about the difference between these two new features in this forum thread. He basically commented the following: “An ActAs RST element indicates that the requestor wants a token that contains claims about two distinct entities: the requestor, and an external entity represented by the token in the ActAs element. An OnBehalfOf RST element indicates that the requestor wants a token that contains claims only about one entity: the external entity represented by the token in the OnBehalfOf element. In short, ActAs feature is typically used in scenarios that require composite delegation, where the final recipient of the issued token can inspect the entire delegation chain and see not just the client, but all intermediaries to perform access control, auditing and other related activities based on the whole identity delegation chain. The ActAs feature is commonly used in multi-tiered systems to authenticate and pass information about identities between the tiers without having to pass this information at the application/business logic layer. OnBehalfOf feature is used in scenarios where only the identity of the original client is important and is effectively the same as identity impersonation feature available in the Windows OS today. When the OnBehalfOf is used the final recipient of the issued token can only see claims about the original client, and the information about intermediaries is not preserved. One common pattern where OnBehalfOf feature is used is the proxy pattern where the client cannot access the STS directly but is instead communicating through a proxy gateway. The proxy gateway authenticates the caller and puts information about him into the OnBehalfOf element of the RST message that it then sends to the real STS for processing. The resulting token is going to contain only claims related to the client of the proxy, making the proxy completely transparent and not visible to the receiver of the issued token.” Going back to WIF, “ActAs” and “OnBehalfOf” are both supported as extension methods in the WCF client channel.

public static class ChannelFactoryOperations
{
    public static T CreateChannelActingAs<T>(this ChannelFactory<T> factory,
        SecurityToken actAs);

    public static T CreateChannelOnBehalfOf<T>(this ChannelFactory<T> factory,
        SecurityToken onBehalfOf);
}

Both methods receive the security token with the identity of the original caller.
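A minimal usage sketch of the ActAs variant follows. The service contract, endpoint name and surrounding configuration are assumptions on my part, and the code assumes the middle tier keeps the caller's bootstrap token around (saveBootstrapTokens="true" in the microsoft.identityModel configuration): the middle tier grabs the token of the authenticated caller and calls the back end acting as that caller.

using System.IdentityModel.Tokens;
using System.ServiceModel;
using System.Threading;
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Protocols.WSTrust;

[ServiceContract]
public interface IBackendService   // hypothetical back-end contract
{
    [OperationContract]
    string DoWork();
}

public class BackendProxy
{
    public string CallBackendAsCaller()
    {
        // The original caller's token, preserved by WIF when bootstrap tokens are saved
        var identity = (IClaimsIdentity)Thread.CurrentPrincipal.Identity;
        SecurityToken callerToken = identity.BootstrapToken;

        // "BackendEndpoint" is a hypothetical client endpoint in config
        var factory = new ChannelFactory<IBackendService>("BackendEndpoint");
        factory.ConfigureChannelFactory();   // applies WIF's federation behavior to the factory

        // The issued token sent to the back end will carry the whole delegation chain
        IBackendService proxy = factory.CreateChannelActingAs(callerToken);
        return proxy.DoWork();
    }
}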

    Read the article

  • SQL SERVER – Index Created on View not Used Often – Observation of the View – Part 2

    - by pinaldave
Earlier, I have written an article about SQL SERVER – Index Created on View not Used Often – Observation of the View. I received an email from one of the readers, asking if there would be no problems when we create the Index on the base table. Well, we need to discuss this situation in two different cases. Before proceeding to the discussion, I strongly suggest you read my earlier articles. To avoid the duplication, I am not going to repeat the code and explanation over here. In all the earlier cases, I have explained in detail how Index created on the View is not utilized. SQL SERVER – Index Created on View not Used Often – Limitation of the View 12 SQL SERVER – Index Created on View not Used Often – Observation of the View SQL SERVER – Indexed View always Use Index on Table As per earlier blog posts, so far we have done the following:
Create a Table
Create a View
Create Index On View
Write SELECT with ORDER BY on View
However, the blog reader who emailed me suggests the extension of the said logic, which is as follows:
Create a Table
Create a View
Create Index On View
Write SELECT with ORDER BY on View
Create Index on the Base Table
Write SELECT with ORDER BY on View
After doing the last two steps, the question is “Will the query on the View utilize the Index on the View, or will it still use the Index of the base table?“ Let us first run the Create example.

USE tempdb
GO
IF EXISTS (SELECT * FROM sys.views WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[SampleView]'))
DROP VIEW [dbo].[SampleView]
GO
IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[mySampleTable]') AND TYPE IN (N'U'))
DROP TABLE [dbo].[mySampleTable]
GO
-- Create SampleTable
CREATE TABLE mySampleTable (ID1 INT, ID2 INT, SomeData VARCHAR(100))
INSERT INTO mySampleTable (ID1,ID2,SomeData)
SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY o1.name),
ROW_NUMBER() OVER (ORDER BY o2.name), o2.name
FROM sys.all_objects o1
CROSS JOIN sys.all_objects o2
GO
-- Create View
CREATE VIEW SampleView WITH SCHEMABINDING
AS
SELECT ID1,ID2,SomeData
FROM dbo.mySampleTable
GO
-- Create Index on View
CREATE UNIQUE CLUSTERED INDEX [IX_ViewSample] ON [dbo].[SampleView]
(
ID2 ASC
)
GO
-- Select from view
SELECT ID1,ID2,SomeData
FROM SampleView
ORDER BY ID2
GO
-- Create Index on Original Table
-- On Column ID1
CREATE UNIQUE CLUSTERED INDEX [IX_OriginalTable] ON mySampleTable
(
ID1 ASC
)
GO
-- On Column ID2
CREATE UNIQUE NONCLUSTERED INDEX [IX_OriginalTable_ID2] ON mySampleTable
(
ID2
)
GO
-- Select from view
SELECT ID1,ID2,SomeData
FROM SampleView
ORDER BY ID2
GO

Now let us see the execution plans for both of the SELECT statements. Before Index on Base Table (with Index on View): After Index on Base Table (with Index on View): Looking at both executions, it is very clear that with or without the index on the base table, the View is using Indexes. Alright, I have written 11 disadvantages of the Views. Now I have written one case where the View is using Indexes. Anybody who says that I am being harsh on Views can say now that I found one place where Index on View can be helpful. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL View, SQLServer, T SQL, Technology

    Read the article

  • How to add event receiver to SharePoint2010 content type programmatically

    - by ybbest
Today, I'd like to show how to add an event receiver to a SharePoint 2010 content type programmatically. 1. Create an empty SharePoint project, add a class called ItemContentTypeEventReceiver, make it inherit from SPItemEventReceiver and implement your logic as below:

public class ItemContentTypeEventReceiver : SPItemEventReceiver
{
    private bool eventFiringEnabledStatus;

    public override void ItemAdded(SPItemEventProperties properties)
    {
        base.ItemAdded(properties);
        UpdateTitle(properties);
    }

    private void UpdateTitle(SPItemEventProperties properties)
    {
        SPListItem addedItem = properties.ListItem;
        string enteredTitle = addedItem["Title"] as string;
        addedItem["Title"] = enteredTitle + " Updated";

        DisableItemEventsScope();
        addedItem.Update();
        EnableItemEventsScope();
    }

    public override void ItemUpdated(SPItemEventProperties properties)
    {
        base.ItemUpdated(properties);
        UpdateTitle(properties);
    }

    private void DisableItemEventsScope()
    {
        eventFiringEnabledStatus = EventFiringEnabled;
        EventFiringEnabled = false;
    }

    private void EnableItemEventsScope()
    {
        eventFiringEnabledStatus = EventFiringEnabled;
        EventFiringEnabled = true;
    }
}

2. Create a Site or Web (depending on your requirements) scoped feature and implement your feature event handler as below:

public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
    SPWeb web = GetFeatureWeb(properties);

    //http://karinebosch.wordpress.com/walkthroughs/event-receivers-theory/
    string assemblyName = System.Reflection.Assembly.GetExecutingAssembly().FullName;
    const string className = "YBBEST.AddEventReceiverToContentType.ItemContentTypeEventReceiver";

    SPContentType contentType = web.ContentTypes["Item"];
    AddEventReceiverToContentType(className, contentType, assemblyName,
        SPEventReceiverType.ItemAdded, SPEventReceiverSynchronization.Asynchronous);
    AddEventReceiverToContentType(className, contentType, assemblyName,
        SPEventReceiverType.ItemUpdated, SPEventReceiverSynchronization.Asynchronous);
    contentType.Update();
}

protected static void AddEventReceiverToContentType(string className, SPContentType contentType,
    string assemblyName, SPEventReceiverType eventReceiverType,
    SPEventReceiverSynchronization eventReceiverSynchronization)
{
    if (className == null) throw new ArgumentNullException("className");
    if (contentType == null) throw new ArgumentNullException("contentType");
    if (assemblyName == null) throw new ArgumentNullException("assemblyName");

    SPEventReceiverDefinition eventReceiver = contentType.EventReceivers.Add();
    eventReceiver.Synchronization = eventReceiverSynchronization;
    eventReceiver.Type = eventReceiverType;
    eventReceiver.Assembly = assemblyName;
    eventReceiver.Class = className;
    eventReceiver.Update();
}

3. Deploy your solution and now you have an event receiver attached to the Item content type. You can download the complete source code here. You can also check how to add an event receiver to a list using the SharePoint event receiver item in Visual Studio 2010 in my previous blog.
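A possible counterpart, as my own addition rather than something from the original post: removing the receivers again when the feature is deactivated, so that repeated activations do not leave duplicate registrations on the content type. It assumes the same GetFeatureWeb helper used in the activation code above.

public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
{
    SPWeb web = GetFeatureWeb(properties);
    const string className = "YBBEST.AddEventReceiverToContentType.ItemContentTypeEventReceiver";

    SPContentType contentType = web.ContentTypes["Item"];

    // Walk the collection backwards so deleting entries doesn't skip any
    for (int i = contentType.EventReceivers.Count - 1; i >= 0; i--)
    {
        if (contentType.EventReceivers[i].Class == className)
            contentType.EventReceivers[i].Delete();
    }
    contentType.Update();
}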

    Read the article

< Previous Page | 536 537 538 539 540 541 542 543 544 545 546 547  | Next Page >