Search Results

Search found 13969 results on 559 pages for 'word count'.


  • Fetching value from collection

    - by user334119
    public string GetProductVariantImageUrl(ShoppingCartItem shoppingCartItem) {
        string pictureUrl = String.Empty;
        ProductVariant productVariant = shoppingCartItem.ProductVariant;
        ProductVariantAttributeValueCollection pvaValues = shoppingCartItem.Attributes; // [case 1: here the count comes out 0]
    }

    public string GetAttributeDescription(ShoppingCartItem shoppingCartItem) {
        string result = string.Empty;
        ProductVariant productVariant = shoppingCartItem.ProductVariant;
        if (productVariant != null) {
            ProductVariantAttributeValueCollection pvaValues = shoppingCartItem.Attributes; // [here the count is 1]
        }
    }

    Why am I not able to get a count of 1 in case 1?

    /// <summary>Represents a shopping cart item</summary>
    public class ShoppingCartItem : BaseEntity {
        #region Fields
        private ProductVariant _cachedProductVariant;
        private ProductVariantAttributeValueCollection _cachedPvaValues;
        #endregion

        #region Ctor
        /// <summary>Creates a new instance of the shopping cart class</summary>
        public ShoppingCartItem() { }
        #endregion

        #region Properties
        /// <summary>Gets or sets the shopping cart item identifier</summary>
        public int ShoppingCartItemID { get; set; }
        /// <summary>Gets or sets the shopping cart type identifier</summary>
        public int ShoppingCartTypeID { get; set; }
        /// <summary>Gets or sets the customer session identifier</summary>
        public Guid CustomerSessionGUID { get; set; }
        /// <summary>Gets or sets the product variant identifier</summary>
        public int ProductVariantID { get; set; }
        /// <summary>Gets or sets the product variant attribute identifiers</summary>
        public List<int> AttributeIDs { get; set; }
        /// <summary>Gets or sets the text option</summary>
        public string TextOption { get; set; }
        /// <summary>Gets or sets the quantity</summary>
        public int Quantity { get; set; }
        /// <summary>Gets or sets the date and time of instance creation</summary>
        public DateTime CreatedOn { get; set; }
        /// <summary>Gets or sets the date and time of instance update</summary>
        public DateTime UpdatedOn { get; set; }
        #endregion

        #region Custom Properties
        /// <summary>Gets the log type</summary>
        public ShoppingCartTypeEnum ShoppingCartType {
            get { return (ShoppingCartTypeEnum)ShoppingCartTypeID; }
        }

        /// <summary>Gets the product variant</summary>
        public ProductVariant ProductVariant {
            get {
                if (_cachedProductVariant == null) {
                    _cachedProductVariant = ProductManager.GetProductVariantByID(ProductVariantID);
                }
                return _cachedProductVariant;
            }
        }

        /// <summary>Gets the product variant attribute values</summary>
        public ProductVariantAttributeValueCollection Attributes {
            get {
                if (_cachedPvaValues == null) {
                    ProductVariantAttributeValueCollection pvaValues = new ProductVariantAttributeValueCollection();
                    foreach (int attributeID in this.AttributeIDs) {
                        ProductVariantAttributeValue pvaValue = ProductAttributeManager.GetProductVariantAttributeValueByID(attributeID);
                        if (pvaValue != null)
                            pvaValues.Add(pvaValue);
                    }
                    _cachedPvaValues = pvaValues;
                }
                return _cachedPvaValues;
            }
        }

        /// <summary>Gets the total weight</summary>
        public decimal TotalWeigth {
            get {
                decimal totalWeigth = decimal.Zero;
                ProductVariant productVariant = ProductVariant;
                if (productVariant != null) {
                    decimal attributesTotalWeight = decimal.Zero;
                    foreach (ProductVariantAttributeValue pvaValue in this.Attributes) {
                        attributesTotalWeight += pvaValue.WeightAdjustment;
                    }
                    decimal unitWeight = productVariant.Weight + attributesTotalWeight;
                    totalWeigth = unitWeight * Quantity;
                }
                return totalWeigth;
            }
        }

        /// <summary>Gets a value indicating whether the shopping cart item is free shipping</summary>
        public bool IsFreeShipping {
            get {
                ProductVariant productVariant = this.ProductVariant;
                if (productVariant != null)
                    return productVariant.IsFreeShipping;
                return true;
            }
        }
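
    A plausible explanation, given the class above: the Attributes getter caches whatever collection it builds on first access, so if AttributeIDs is empty (or gets populated after the first call) the empty result is cached forever for that instance. A minimal sketch of a fix under that assumption - invalidate the cache whenever the IDs are reassigned:

    // Sketch only: a backing field plus a setter that clears the stale cache.
    private List<int> _attributeIDs;
    public List<int> AttributeIDs {
        get { return _attributeIDs; }
        set { _attributeIDs = value; _cachedPvaValues = null; } // force Attributes to rebuild
    }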

    Read the article

  • Application is crashing

    - by user338322
    Below is my crash report:

    0  0x326712f8 in prepareForMethodLookup ()
    1  0x3266cf5c in lookUpMethod ()
    2  0x32668f28 in objc_msgSend_uncached ()
    3  0x33f70996 in NSPopAutoreleasePool ()
    4  0x33f82a6c in -[NSAutoreleasePool drain] ()
    5  0x00003d3e in -[CameraViewcontroller save:] (self=0x811400, _cmd=0x319c00d4, number=0x11e210) at /Users/hardikrathore/Desktop/LiveVideoRecording/Classes/CameraViewcontroller.m:266
    6  0x33f36f8a in __NSFireDelayedPerform ()
    7  0x32da44c2 in CFRunLoopRunSpecific ()
    8  0x32da3c1e in CFRunLoopRunInMode ()
    9  0x31bb9374 in GSEventRunModal ()
    10 0x30bf3c30 in -[UIApplication _run] ()
    11 0x30bf2230 in UIApplicationMain ()
    12 0x00002650 in main (argc=1, argv=0x2ffff474) at /Users/hardikrathore/Desktop/LiveVideoRecording/main.m:14

    And this is the code where I am getting the error:

    -(void)save:(id)number {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        j = [number intValue];
        while (screens[j] != NULL) {
            NSLog(@" image made : %d", j);
            UIImage *image = [UIImage imageWithCGImage:screens[j]];
            image = [self imageByCropping:image toRect:CGRectMake(0, 0, 320, 240)];
            NSData *imgdata = UIImageJPEGRepresentation(image, 0.3);
            [image release];
            CGImageRelease(screens[j]);
            screens[j] = NULL;
            UIImage *image1 = [UIImage imageWithCGImage:screens[j+1]];
            image1 = [self imageByCropping:image1 toRect:CGRectMake(0, 0, 320, 240)];
            NSData *imgdata1 = UIImageJPEGRepresentation(image1, 0.3);
            [image1 release];
            CGImageRelease(screens[j+1]);
            screens[j+1] = NULL;

            NSString *urlString = @"http://www.test.itmate4.com/iPhoneToServerTwice.php";
            // setting up the request object now
            NSMutableURLRequest *request = [[NSMutableURLRequest alloc] init];
            [request setURL:[NSURL URLWithString:urlString]];
            [request setHTTPMethod:@"POST"];
            NSString *fileName = [VideoID stringByAppendingString:@"_"];
            fileName = [fileName stringByAppendingString:[NSString stringWithFormat:@"%d", k]];
            NSString *fileName2 = [VideoID stringByAppendingString:@"_"];
            fileName2 = [fileName2 stringByAppendingString:[NSString stringWithFormat:@"%d", k+1]];

            /* add some header info now
               we always need a boundary when we post a file
               also we need to set the content type
               You might want to generate a random boundary..
               this is just the same as my output from wireshark on a valid html post */
            NSString *boundary = [NSString stringWithString:@"---------------------------14737809831466499882746641449"];
            NSString *contentType = [NSString stringWithFormat:@"multipart/form-data; boundary=%@", boundary];
            [request addValue:contentType forHTTPHeaderField:@"Content-Type"];

            /* now lets create the body of the post */
            //NSString *count=[NSString stringWithFormat:@"%d",front];;
            NSMutableData *body = [NSMutableData data];
            [body appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]];
            //[body appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"userfile\"; count=\"@\"";filename=\"%@.jpg\"\r\n",count,fileName] dataUsingEncoding:NSUTF8StringEncoding]];
            [body appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"userfile\"; filename=\"%@.jpg\"\r\n", fileName] dataUsingEncoding:NSUTF8StringEncoding]];
            [body appendData:[[NSString stringWithString:@"Content-Type: application/octet-stream\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];
            [body appendData:[NSData dataWithData:imgdata]];
            [body appendData:[[NSString stringWithFormat:@"\r\n--%@--\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]];

            // second boundary
            NSString *string1 = [[NSString alloc] initWithFormat:@"\r\n--%@\r\n", boundary];
            NSString *string2 = [[NSString alloc] initWithFormat:@"Content-Disposition: form-data; name=\"userfile2\"; filename=\"%@.jpg\"\r\n", fileName2];
            NSString *string3 = [[NSString alloc] initWithFormat:@"\r\n--%@--\r\n", boundary];
            [body appendData:[string1 dataUsingEncoding:NSUTF8StringEncoding]];
            [body appendData:[string2 dataUsingEncoding:NSUTF8StringEncoding]];
            //experiment
            //[body appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"userfile2\"; filename=\"%@.jpg\"\r\n",fileName2] dataUsingEncoding:NSUTF8StringEncoding]];
            [body appendData:[[NSString stringWithString:@"Content-Type: application/octet-stream\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];
            [body appendData:[NSData dataWithData:imgdata1]];
            //[body appendData:[[NSString stringWithFormat:@"\r\n--%@--\r\n",boundary] dataUsingEncoding:NSUTF8StringEncoding]];
            [body appendData:[string3 dataUsingEncoding:NSUTF8StringEncoding]];

            // setting the body of the post to the request
            [request setHTTPBody:body];
            // now lets make the connection to the web
            NSData *returnData = [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil];
            NSString *returnString = [[NSString alloc] initWithData:returnData encoding:NSUTF8StringEncoding];
            if ([returnString isEqualToString:@"SUCCESS"]) {
                NSLog(returnString);
                k = k + 2;
                j = j + 2;
                [self performSelectorInBackground:@selector(save:) withObject:(id)[NSNumber numberWithInt:j]];
            }
            //k=k+2;
            [imgdata release];
            [imgdata1 release];
            [NSThread sleepForTimeInterval:.01];
        }
        [pool drain]; // <------------- Line 266
    }

    As you can see in the crash report, the error points at line 266 - some autorelease problem. Any help? I'm not getting why it's happening.
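
    A likely cause, judging from the code alone: under manual reference counting you only release objects you own. imageWithCGImage: and UIImageJPEGRepresentation() both return autoreleased objects, so the explicit release calls over-release them, and the pool crashes later when it drains the already-freed objects - exactly the NSPopAutoreleasePool frame in the backtrace. A sketch of the ownership-correct first block, using the same names as the question:

    UIImage *image = [UIImage imageWithCGImage:screens[j]];   // autoreleased - not owned here
    image = [self imageByCropping:image toRect:CGRectMake(0, 0, 320, 240)];
    NSData *imgdata = UIImageJPEGRepresentation(image, 0.3);  // also autoreleased
    // no [image release] and no [imgdata release] - the pool reclaims both
    CGImageRelease(screens[j]);                               // the CGImage was owned, so this stays
    screens[j] = NULL;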

    Read the article

  • changing output in objective-c app

    - by Zack
    //
    //  RC4.m
    //  Play5
    //
    //  Created by svp on 24.05.10.
    //  Copyright 2010 __MyCompanyName__. All rights reserved.
    //
    #import "RC4.h"

    @implementation RC4
    @synthesize txtLyrics;
    @synthesize sbox;
    @synthesize mykey;

    - (IBAction)clicked:(id)sender {
        NSData *asciidata1 = [@"4875" dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:YES];
        NSString *asciistr1 = [[NSString alloc] initWithData:asciidata1 encoding:NSASCIIStringEncoding];
        //[txtLyrics setText:@"go"];
        NSData *asciidata = [@"sdf883jsdf22" dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:YES];
        NSString *asciistr = [[NSString alloc] initWithData:asciidata encoding:NSASCIIStringEncoding];
        //RC4 * x = [RC4 alloc];
        [txtLyrics setText:[self decrypt:asciistr1 andKey:asciistr]];
    }

    - (NSMutableArray *)hexToChars:(NSString *)hex {
        NSMutableArray *arr = [[NSMutableArray alloc] init];
        NSRange range;
        range.length = 2;
        for (int i = 0; i < [hex length]; i = i + 2) {
            range.location = 0;
            NSString *str = [[hex substringWithRange:range] uppercaseString];
            unsigned int value;
            [[NSScanner scannerWithString:str] scanHexInt:&value];
            [arr addObject:[[NSNumber alloc] initWithInt:(int)value]];
        }
        return arr;
    }

    - (NSString *)charsToStr:(NSMutableArray *)chars {
        NSString *str = @"";
        for (int i = 0; i < [chars count]; i++) {
            str = [NSString stringWithFormat:@"%@%@", [NSString stringWithFormat:@"%c", [chars objectAtIndex:i]], str];
        }
        return str;
    }

    //perfect except memory leaks
    - (NSMutableArray *)strToChars:(NSString *)str {
        NSData *asciidata = [str dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:YES];
        NSString *asciistr = [[NSString alloc] initWithData:asciidata encoding:NSASCIIStringEncoding];
        NSMutableArray *arr = [[NSMutableArray alloc] init];
        for (int i = 0; i < [str length]; i++) {
            [arr addObject:[[NSNumber alloc] initWithInt:(int)[asciistr characterAtIndex:i]]];
        }
        return arr;
    }

    - (void)initialize:(NSMutableArray *)pwd {
        sbox = [[NSMutableArray alloc] init];
        mykey = [[NSMutableArray alloc] init];
        int a = 0;
        int b;
        int c = [pwd count];
        int d = 0;
        while (d < 256) {
            [mykey addObject:[pwd objectAtIndex:(d % c)]];
            [sbox addObject:[[NSNumber alloc] initWithInt:d]];
            d++;
        }
        d = 0;
        while (d < 256) {
            a = (a + [[sbox objectAtIndex:d] intValue] + [[mykey objectAtIndex:d] intValue]) % 256;
            b = [[sbox objectAtIndex:d] intValue];
            [sbox replaceObjectAtIndex:d withObject:[sbox objectAtIndex:a]];
            [sbox replaceObjectAtIndex:a withObject:[[NSNumber alloc] initWithInt:b]];
            d++;
        }
    }

    - (NSMutableArray *)calculate:(NSMutableArray *)plaintxt andPsw:(NSMutableArray *)psw {
        [self initialize:psw];
        int a = 0;
        int b = 0;
        NSMutableArray *c = [[NSMutableArray alloc] init];
        int d;
        int e;
        int f;
        int g = 0;
        while (g < [plaintxt count]) {
            a = (a + 1) % 256;
            b = (b + [[sbox objectAtIndex:a] intValue]) % 256;
            e = [[sbox objectAtIndex:a] intValue];
            [sbox replaceObjectAtIndex:a withObject:[sbox objectAtIndex:b]];
            [sbox replaceObjectAtIndex:b withObject:[[NSNumber alloc] initWithInt:e]];
            int h = ([[sbox objectAtIndex:a] intValue] + [[sbox objectAtIndex:b] intValue]) % 256;
            d = [[sbox objectAtIndex:h] intValue];
            f = [[plaintxt objectAtIndex:g] intValue] ^ d;
            [c addObject:[[NSNumber alloc] initWithInt:f]];
            g++;
        }
        return c;
    }

    - (NSString *)decrypt:(NSString *)src andKey:(NSString *)key {
        NSMutableArray *plaintxt = [self hexToChars:src];
        NSMutableArray *psw = [self strToChars:key];
        NSMutableArray *chars = [self calculate:plaintxt andPsw:psw];
        NSData *asciidata = [[self charsToStr:chars] dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:YES];
        NSString *asciistr = [[NSString alloc] initWithData:asciidata encoding:NSUTF8StringEncoding];
        return asciistr;
    }
    @end

    This is supposed to decrypt a hex string with an ASCII string key, using RC4 decryption. I'm converting my Java application to Objective-C. The output changes every time I run it.
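
    The run-to-run variation is most likely the %c format in charsToStr: [chars objectAtIndex:i] is an NSNumber object, so %c prints the low byte of the object's pointer, which changes with every launch. (The loop also prepends each character, so the output comes out reversed.) A sketch of a fix under that reading:

    - (NSString *)charsToStr:(NSMutableArray *)chars {
        NSMutableString *str = [NSMutableString string];
        for (int i = 0; i < [chars count]; i++) {
            // unwrap the NSNumber before formatting it as a character
            [str appendFormat:@"%c", (char)[[chars objectAtIndex:i] intValue]];
        }
        return str;
    }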

    Read the article

  • libcurl - unable to download a file

    - by marmistrz
    I'm working on a program which will download lyrics from sites like AZLyrics. I'm using libcurl. Here is my code:

    lyricsDownloader.cpp

    #include "lyricsDownloader.h"
    #include <curl/curl.h>
    #include <cstring>
    #include <iostream>

    #define DEBUG 1

    /////////////////////////////////////////////////////////////////////////////

    // this function is a static member function
    size_t lyricsDownloader::write_data_to_var(char *ptr, size_t size, size_t nmemb, void *userdata) {
        ostringstream *stream = (ostringstream *) userdata;
        size_t count = size * nmemb;
        stream->write(ptr, count);
        return count;
    }

    string AZLyricsDownloader::toProviderCode() const {
        /* this creates an url */
    }

    CURLcode AZLyricsDownloader::download() {
        CURL *handle;
        CURLcode err;
        ostringstream buff;
        handle = curl_easy_init();
        if (!handle) return static_cast<CURLcode>(-1);
        // set verbose if debug on
        curl_easy_setopt(handle, CURLOPT_VERBOSE, DEBUG);
        // set the download url to the generated one
        curl_easy_setopt(handle, CURLOPT_URL, toProviderCode().c_str());
        curl_easy_setopt(handle, CURLOPT_WRITEDATA, &buff);
        curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, &AZLyricsDownloader::write_data_to_var);
        err = curl_easy_perform(handle);
        // The segfault should be somewhere here - after calling the function but before it ends
        cerr << "cleanup\n";
        curl_easy_cleanup(handle);
        // copy the contents to text variable
        lyrics = buff.str();
        return err;
    }

    main.cpp

    #include <QString>
    #include <QTextEdit>
    #include <iostream>
    #include "lyricsDownloader.h"

    int main(int argc, char *argv[]) {
        AZLyricsDownloader dl(argv[1], argv[2]);
        dl.perform();
        QTextEdit qtexted(QString::fromStdString(dl.lyrics));
        cout << qPrintable(qtexted.toPlainText());
        return 0;
    }

    When running ./maelyrica Anthrax Madhouse I'm getting this logged from curl:

    * About to connect() to azlyrics.com port 80 (#0)
    *   Trying 174.142.163.250...
    * connected
    * Connected to azlyrics.com (174.142.163.250) port 80 (#0)
    > GET /lyrics/anthrax/madhouse.html HTTP/1.1
    Host: azlyrics.com
    Accept: */*

    < HTTP/1.1 301 Moved Permanently
    < Server: nginx/1.0.12
    < Date: Thu, 05 Jul 2012 16:59:21 GMT
    < Content-Type: text/html
    < Content-Length: 185
    < Connection: keep-alive
    < Location: http://www.azlyrics.com/lyrics/anthrax/madhouse.html
    <
    Segmentation fault

    Strangely, the file is there. The same error is displayed when there's no such page (redirect to the azlyrics.com main page). What am I doing wrong? Thanks in advance.

    EDIT: I made the function for writing data static, but this changes nothing. Even wget seems to have problems:

    $ wget http://www.azlyrics.com/lyrics/anthrax/madhouse.html
    --2012-07-06 10:36:05--  http://www.azlyrics.com/lyrics/anthrax/madhouse.html
    Resolving www.azlyrics.com... 174.142.163.250
    Connecting to www.azlyrics.com|174.142.163.250|:80... connected.
    HTTP request sent, awaiting response... No data received.
    Retrying.

    Why does opening the page in a browser work while wget/curl do not?

    EDIT2: After adding this:

    curl_easy_setopt(handle, CURLOPT_FOLLOWLOCATION, 1);

    the log is:

    * About to connect() to azlyrics.com port 80 (#0)
    *   Trying 174.142.163.250...
    * connected
    * Connected to azlyrics.com (174.142.163.250) port 80 (#0)
    > GET /lyrics/anthrax/madhouse.html HTTP/1.1
    Host: azlyrics.com
    Accept: */*

    < HTTP/1.1 301 Moved Permanently
    < Server: nginx/1.0.12
    < Date: Fri, 06 Jul 2012 09:09:47 GMT
    < Content-Type: text/html
    < Content-Length: 185
    < Connection: keep-alive
    < Location: http://www.azlyrics.com/lyrics/anthrax/madhouse.html
    <
    * Ignoring the response-body
    * Connection #0 to host azlyrics.com left intact
    * Issue another request to this URL: 'http://www.azlyrics.com/lyrics/anthrax/madhouse.html'
    * About to connect() to www.azlyrics.com port 80 (#1)
    *   Trying 174.142.163.250...
    * connected
    * Connected to www.azlyrics.com (174.142.163.250) port 80 (#1)
    > GET /lyrics/anthrax/madhouse.html HTTP/1.1
    Host: www.azlyrics.com
    Accept: */*

    < HTTP/1.1 200 OK
    < Server: nginx/1.0.12
    < Date: Fri, 06 Jul 2012 09:09:47 GMT
    < Content-Type: text/html
    < Transfer-Encoding: chunked
    < Connection: keep-alive
    <
    Segmentation fault
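
    One thing worth checking, since the second log shows the transfer itself completing (200 OK) before the segfault: main constructs a QTextEdit without a QApplication, and Qt widgets require one to exist first; it also dereferences argv[1]/argv[2] unchecked. A hedged sketch of a safer main, keeping the names from the post:

    #include <QApplication>
    #include <QString>
    #include <QTextEdit>
    #include <iostream>
    #include "lyricsDownloader.h"

    int main(int argc, char *argv[]) {
        QApplication app(argc, argv);  // widgets need this before they are constructed
        if (argc < 3) { std::cerr << "usage: maelyrica artist title\n"; return 1; }
        AZLyricsDownloader dl(argv[1], argv[2]);
        dl.perform();
        QTextEdit qtexted(QString::fromStdString(dl.lyrics));
        std::cout << qPrintable(qtexted.toPlainText());
        return 0;
    }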

    Read the article

  • Self-relation messes up contents in fetching

    - by holographix
    Hi folks, I'm dealing with an annoying problem in Core Data. I've got an entity named Character, and I'm filling it in several steps: 1) fill the attributes of the entity, 2) fill the Character relation (charRel). I'm feeding the contents by pulling the data from an XML file. The feeding code is this:

    curStr = [[NSMutableString stringWithString:[curStr stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]] retain];
    NSLog(@"Parsing relation within these keys %@, in order to get'em associated", curStr);
    NSArray *chunks = [curStr componentsSeparatedByString:@","];
    for (NSString *relId in chunks) {
        NSLog(@"Associating %@ with id %@", [currentCharacter valueForKey:@"character_id"], relId);
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"character_id == %@", relId];
        [request setEntity:[NSEntityDescription entityForName:@"Character" inManagedObjectContext:[self managedObjectContext]]];
        [request setPredicate:predicate];
        NSError *error = nil;
        NSArray *results = [[self managedObjectContext] executeFetchRequest:request error:&error];
        // error handling code
        if (error != nil) {
            NSLog(@"[SYMBOL CORRELATION]: retrieving correlated symbol error: %@", [error localizedDescription]);
        } else if ([results count] > 0) {
            Character *relatedChar = [results objectAtIndex:0]; // grab the first result in the stack, could be done better!
            [currentCharacter addCharRelObject:relatedChar];
            // VICE VERSA RELATIONS
            NSArray *charRels = [relatedChar valueForKey:@"charRel"];
            BOOL alreadyRelated = NO;
            for (Character *charRel in charRels) {
                if ([[charRel valueForKey:@"character_id"] isEqual:[currentCharacter valueForKey:@"character_id"]]) {
                    alreadyRelated = YES;
                    break;
                }
            }
            if (!alreadyRelated) {
                NSLog(@"\n\t\trelating %@ with %@", [relatedChar valueForKey:@"character_id"], [currentCharacter valueForKey:@"character_id"]);
                [relatedChar addCharRelObject:currentCharacter];
            }
        } else {
            NSLog(@"[SYMBOL CORRELATION]: related symbol was not found! ##SKIPPING-->");
        }
        [request release];
    }
    NSLog(@"\t\t### TOTAL OF RELATIONS FOR ID %@: %d\n%@", [currentCharacter valueForKey:@"character_id"], [[currentCharacter valueForKey:@"charRel"] count], currentCharacter);
    error = nil;
    /* SAVE THE CONTEXT */
    if (![managedObjectContext save:&error]) {
        NSLog(@"Whoops, couldn't save the symbol record: %@", [error localizedDescription]);
        NSArray *detailedErrors = [[error userInfo] objectForKey:NSDetailedErrorsKey];
        if (detailedErrors != nil && [detailedErrors count] > 0) {
            for (NSError *detailedError in detailedErrors) {
                NSLog(@"\n################\t\tDetailedError: %@\n################", [detailedError userInfo]);
            }
        } else {
            NSLog(@" %@", [error userInfo]);
        }
    }

    At this point, when I print out the values of currentCharacter, everything looks perfect; every relation is in its place. For example, in this log we can clearly see that this element has got 3 items in charRel:

    <Character: 0x5593af0> (entity: Character; id: 0x55938c0 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p86>; data: {
        catRel = "<relationship fault: 0x9a29db0 'catRel'>";
        charRel = (
            "0x9a1f870 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p74>",
            "0x9a14bd0 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p109>",
            "0x558ba00 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p5>"
        );
        "character_id" = 254;
        examplesRel = "<relationship fault: 0x9a29df0 'examplesRel'>";
        meaning = "\n Left";
        pinyin = "\n zu\U01d2";
        "pronunciation_it" = "\n zu\U01d2";
        strokenumber = 5;
        text = "\n \n <p>The most ancient form of this symbol";
        unicodevalue = "\n \U5de6";
    })

    Then, when I need to retrieve this item, I perform a fetch like this:

    // at first I get the single Character record
    NSFetchRequest *request = [[NSFetchRequest alloc] init];
    NSError *error;
    NSPredicate *predicate = [NSPredicate predicateWithFormat:@"character_id == %@", self.char_id];
    [request setEntity:[NSEntityDescription entityForName:@"Character" inManagedObjectContext:_context]];
    [request setPredicate:predicate];
    NSArray *fetchedObjs = [_context executeFetchRequest:request error:&error];

    But when, for instance, I print out the contents of charRel:

    NSArray *correlations = [singleCharacter valueForKey:@"charRel"];
    NSLog(@"CHARACTER OBJECT \n%@", correlations);

    I get this:

    Relationship fault for (<NSRelationshipDescription: 0x5568520>), name charRel, isOptional 1, isTransient 0, entity Character, renamingIdentifier charRel, validation predicates (), warnings (), versionHashModifier (null), destination entity Character, inverseRelationship (null), minCount 1, maxCount 99 on 0x6937f00

    I hope I've made myself clear. This thing is driving me insane; I've googled all over, but I couldn't find a solution (which makes me suspect it's an issue related to bad coding somehow :P). Thank you in advance, guys.
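
    A note on that last log line: "Relationship fault" is not an error. Core Data returns to-many relationships as lazy faults and only loads the related objects when you actually touch the collection; logging the fault just prints its placeholder description. A minimal sketch - iterating the set fires the fault (a to-many relationship also comes back as an NSSet, not an NSArray):

    NSSet *correlations = [singleCharacter valueForKey:@"charRel"];
    for (Character *related in correlations) {
        // accessing a property of a related object fires the fault and loads it
        NSLog(@"related character_id: %@", [related valueForKey:@"character_id"]);
    }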

    Read the article

  • C# SQL Parameter Errors in Loops

    - by jakesankey
    Please help me out with this. I have this small application to load txt files into a SQL db, and it works fine with SQLite. When I ported to SQL Server I started getting 'parameter already declared' errors. If anyone can help me reorganize this code, it would be great! I need to get the parameter definitions outside of the loops or something.

    using System;
    using System.Data;
    using System.Data.SQLite;
    using System.IO;
    using System.Text.RegularExpressions;
    using System.Threading;
    using System.Collections.Generic;
    using System.Linq;
    using System.Data.SqlClient;

    namespace JohnDeereCMMDataParser {
        internal class Program {
            public static List<string> GetImportedFileList() {
                List<string> ImportedFiles = new List<string>();
                using (SqlConnection connect = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES")) {
                    connect.Open();
                    using (SqlCommand fmd = connect.CreateCommand()) {
                        fmd.CommandText = @"SELECT FileName FROM Import;";
                        fmd.CommandType = CommandType.Text;
                        SqlDataReader r = fmd.ExecuteReader();
                        while (r.Read()) {
                            ImportedFiles.Add(Convert.ToString(r["FileName"]));
                        }
                    }
                }
                return ImportedFiles;
            }

            private static void Main(string[] args) {
                using (SqlConnection con = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES")) {
                    con.Open();
                    using (SqlCommand insertCommand = con.CreateCommand()) {
                        Console.WriteLine("Connecting to SQL server...");
                        SqlCommand cmdd = con.CreateCommand();
                        string[] files = Directory.GetFiles(@"C:\Documents and Settings\js91162\Desktop\", "R.txt*", SearchOption.AllDirectories);
                        insertCommand.Parameters.Add(new SqlParameter("@FeatType", DbType.String));
                        insertCommand.Parameters.Add(new SqlParameter("@FeatName", DbType.String));
                        insertCommand.Parameters.Add(new SqlParameter("@Value", DbType.String));
                        insertCommand.Parameters.Add(new SqlParameter("@Actual", DbType.Decimal));
                        insertCommand.Parameters.Add(new SqlParameter("@Nominal", DbType.Decimal));
                        insertCommand.Parameters.Add(new SqlParameter("@Dev", DbType.Decimal));
                        insertCommand.Parameters.Add(new SqlParameter("@TolMin", DbType.Decimal));
                        insertCommand.Parameters.Add(new SqlParameter("@TolPlus", DbType.Decimal));
                        insertCommand.Parameters.Add(new SqlParameter("@OutOfTol", DbType.Decimal));
                        List<string> ImportedFiles = GetImportedFileList();
                        foreach (string file in files.Except(ImportedFiles)) {
                            var FileNameExt1 = Path.GetFileName(file);
                            cmdd.Parameters.Add(new SqlParameter("@FileExt", FileNameExt1));
                            cmdd.CommandText = @"
                                IF (EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES
                                            WHERE TABLE_SCHEMA = 'RX_CMMData' AND TABLE_NAME = 'Import'))
                                BEGIN
                                    SELECT COUNT(*) FROM Import WHERE FileName = @FileExt;
                                END";
                            int count = Convert.ToInt32(cmdd.ExecuteScalar());
                            con.Close();
                            con.Open();
                            if (count == 0) {
                                Console.WriteLine("Parsing CMM data for SQL database... Please wait.");
                                insertCommand.CommandText = @"
                                    INSERT INTO Import (FeatType, FeatName, Value, Actual, Nominal, Dev, TolMin, TolPlus, OutOfTol, PartNumber, CMMNumber, Date, FileName)
                                    VALUES (@FeatType, @FeatName, @Value, @Actual, @Nominal, @Dev, @TolMin, @TolPlus, @OutOfTol, @PartNumber, @CMMNumber, @Date, @FileName);";
                                string FileNameExt = Path.GetFullPath(file);
                                string RNumber = Path.GetFileNameWithoutExtension(file);
                                string RNumberE = RNumber.Split('_')[0];
                                string RNumberD = RNumber.Split('_')[1];
                                string RNumberDate = RNumber.Split('_')[2];
                                DateTime dateTime = DateTime.ParseExact(RNumberDate, "yyyyMMdd", Thread.CurrentThread.CurrentCulture);
                                string cmmDate = dateTime.ToString("dd-MMM-yyyy");
                                string[] lines = File.ReadAllLines(file);
                                bool parse = false;
                                foreach (string tmpLine in lines) {
                                    string line = tmpLine.Trim();
                                    if (!parse && line.StartsWith("Feat. Type,")) {
                                        parse = true;
                                        continue;
                                    }
                                    if (!parse || string.IsNullOrEmpty(line)) {
                                        continue;
                                    }
                                    Console.WriteLine(tmpLine);
                                    foreach (SqlParameter parameter in insertCommand.Parameters) {
                                        parameter.Value = null;
                                    }
                                    string[] values = line.Split(new[] { ',' });
                                    for (int i = 0; i < values.Length - 1; i++) {
                                        SqlParameter param = insertCommand.Parameters[i];
                                        if (param.DbType == DbType.Decimal) {
                                            decimal value;
                                            param.Value = decimal.TryParse(values[i], out value) ? value : 0;
                                        } else {
                                            param.Value = values[i];
                                        }
                                    }
                                }
                                insertCommand.Parameters.Add(new SqlParameter("@PartNumber", RNumberE));
                                insertCommand.Parameters.Add(new SqlParameter("@CMMNumber", RNumberD));
                                insertCommand.Parameters.Add(new SqlParameter("@Date", cmmDate));
                                insertCommand.Parameters.Add(new SqlParameter("@FileName", FileNameExt));
                                // insertCommand.ExecuteNonQuery();
                            }
                        }
                        Console.WriteLine("CMM data successfully imported to SQL database...");
                    }
                    con.Close();
                }
            }
        }
    }

    FYI - the PartNumber, CMMNumber, Date, etc. at the bottom are pulled from the file name, and I need them in the table next to each respective record.

    Read the article

  • iPhone: Crash in Custom Autorelease Pool

    - by user338322
    My app is crashing when I try to post images in an HTTP request. I am trying to upload images to a server. The crash appears related to my autorelease pool, because the crash is trapped at the [pool release] message. Here is the crash report:

    #0  0x326712f8 in prepareForMethodLookup ()
    #1  0x3266cf5c in lookUpMethod ()
    #2  0x32668f28 in objc_msgSend_uncached ()
    #3  0x33f70996 in NSPopAutoreleasePool ()
    #4  0x33f82a6c in -[NSAutoreleasePool drain] ()
    #5  0x00003d3e in -[CameraViewcontroller save:] (self=0x811400, _cmd=0x319c00d4, number=0x11e210) at /Users/hardikrathore/Desktop/LiveVideoRecording/Classes/CameraViewcontroller.m:266
    #6  0x33f36f8a in __NSFireDelayedPerform ()
    #7  0x32da44c2 in CFRunLoopRunSpecific ()
    #8  0x32da3c1e in CFRunLoopRunInMode ()
    #9  0x31bb9374 in GSEventRunModal ()
    #10 0x30bf3c30 in -[UIApplication _run] ()
    #11 0x30bf2230 in UIApplicationMain ()
    #12 0x00002650 in main (argc=1, argv=0x2ffff474) at /Users/hardikrathore/Desktop/LiveVideoRecording/main.m:14

    The crash report says that the final line of the following code is the point of the crash (line 266):

    -(void)save:(id)number {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        j = [number intValue];
        while (screens[j] != NULL) {
            NSLog(@" image made : %d", j);
            UIImage *image = [UIImage imageWithCGImage:screens[j]];
            image = [self imageByCropping:image toRect:CGRectMake(0, 0, 320, 240)];
            NSData *imgdata = UIImageJPEGRepresentation(image, 0.3);
            [image release];
            CGImageRelease(screens[j]);
            screens[j] = NULL;
            UIImage *image1 = [UIImage imageWithCGImage:screens[j+1]];
            image1 = [self imageByCropping:image1 toRect:CGRectMake(0, 0, 320, 240)];
            NSData *imgdata1 = UIImageJPEGRepresentation(image1, 0.3);
            [image1 release];
            CGImageRelease(screens[j+1]);
            screens[j+1] = NULL;

            NSString *urlString = @"http://www.test.itmate4.com/iPhoneToServerTwice.php";
            // setting up the request object now
            NSMutableURLRequest *request = [[NSMutableURLRequest alloc] init];
            [request setURL:[NSURL URLWithString:urlString]];
            [request setHTTPMethod:@"POST"];
            NSString *fileName = [VideoID stringByAppendingString:@"_"];
            fileName = [fileName stringByAppendingString:[NSString stringWithFormat:@"%d", k]];
            NSString *fileName2 = [VideoID stringByAppendingString:@"_"];
            fileName2 = [fileName2 stringByAppendingString:[NSString stringWithFormat:@"%d", k+1]];

            /* add some header info now
               we always need a boundary when we post a file
               also we need to set the content type
               You might want to generate a random boundary..
               this is just the same as my output from wireshark on a valid html post */
            NSString *boundary = [NSString stringWithString:@"---------------------------14737809831466499882746641449"];
            NSString *contentType = [NSString stringWithFormat:@"multipart/form-data; boundary=%@", boundary];
            [request addValue:contentType forHTTPHeaderField:@"Content-Type"];

            /* now lets create the body of the post */
            //NSString *count=[NSString stringWithFormat:@"%d",front];;
            NSMutableData *body = [NSMutableData data];
            [body appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]];
            //[body appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"userfile\"; count=\"@\"";filename=\"%@.jpg\"\r\n",count,fileName] dataUsingEncoding:NSUTF8StringEncoding]];
            [body appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"userfile\"; filename=\"%@.jpg\"\r\n", fileName] dataUsingEncoding:NSUTF8StringEncoding]];
            [body appendData:[[NSString stringWithString:@"Content-Type: application/octet-stream\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];
            [body appendData:[NSData dataWithData:imgdata]];
            [body appendData:[[NSString stringWithFormat:@"\r\n--%@--\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]];

            // second boundary
            NSString *string1 = [[NSString alloc] initWithFormat:@"\r\n--%@\r\n", boundary];
            NSString *string2 = [[NSString alloc] initWithFormat:@"Content-Disposition: form-data; name=\"userfile2\"; filename=\"%@.jpg\"\r\n", fileName2];
            NSString *string3 = [[NSString alloc] initWithFormat:@"\r\n--%@--\r\n", boundary];
            [body appendData:[string1 dataUsingEncoding:NSUTF8StringEncoding]];
            [body appendData:[string2 dataUsingEncoding:NSUTF8StringEncoding]];
            //experiment
            //[body appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"userfile2\"; filename=\"%@.jpg\"\r\n",fileName2] dataUsingEncoding:NSUTF8StringEncoding]];
            [body appendData:[[NSString stringWithString:@"Content-Type: application/octet-stream\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];
            [body appendData:[NSData dataWithData:imgdata1]];
            //[body appendData:[[NSString stringWithFormat:@"\r\n--%@--\r\n",boundary] dataUsingEncoding:NSUTF8StringEncoding]];
            [body appendData:[string3 dataUsingEncoding:NSUTF8StringEncoding]];

            // setting the body of the post to the request
            [request setHTTPBody:body];
            // now lets make the connection to the web
            NSData *returnData = [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil];
            NSString *returnString = [[NSString alloc] initWithData:returnData encoding:NSUTF8StringEncoding];
            if ([returnString isEqualToString:@"SUCCESS"]) {
                NSLog(returnString);
                k = k + 2;
                j = j + 2;
                [self performSelectorInBackground:@selector(save:) withObject:(id)[NSNumber numberWithInt:j]];
            }
            [imgdata release];
            [imgdata1 release];
            [NSThread sleepForTimeInterval:.01];
        }
        [pool drain]; // <------------- Line 266
    }

    I don't understand what is causing the crash.
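
    The [pool drain] line is where the damage surfaces, not where it happens: draining delivers the deferred releases, so objects that were over-released earlier blow up here. imageWithCGImage: and UIImageJPEGRepresentation() both return autoreleased objects, yet this code releases image, image1, imgdata, and imgdata1 again - dropping those four release calls is the likely fix. Two smaller hazards worth fixing at the same time, sketched below: returnString is used as a raw format string and is never released.

    if ([returnString isEqualToString:@"SUCCESS"]) {
        NSLog(@"%@", returnString);   // never pass server data as the format string
        k = k + 2;
        j = j + 2;
        [self performSelectorInBackground:@selector(save:) withObject:[NSNumber numberWithInt:j]];
    }
    [returnString release];           // alloc/init'd above, so it is owned and must be released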

    Read the article

  • SQL error C# - Parameter already defined

    - by jakesankey
    Hey there. I have a C# application that parses txt files and imports the data from them into a SQL db. I was using SQLite and am now working on porting it to SQL Server. It was working fine with SQLite, but now with SQL Server I am getting an error when it is processing the files. It adds the first row of data to the db and then says "parameter @PartNumber has already been declared. Variable names must be unique within a batch or stored procedure". Here is my whole code and SQL table layout; the error comes at the last insertCommand.ExecuteNonQuery() instance at the end of the code.

    SQL TABLE:

    CREATE TABLE Import (
        RowId int PRIMARY KEY IDENTITY,
        PartNumber text,
        CMMNumber text,
        Date text,
        FeatType text,
        FeatName text,
        Value text,
        Actual text,
        Nominal text,
        Dev text,
        TolMin text,
        TolPlus text,
        OutOfTol text,
        FileName text
    );

    CODE:

    using System;
    using System.Data;
    using System.Data.SQLite;
    using System.IO;
    using System.Text.RegularExpressions;
    using System.Threading;
    using System.Collections.Generic;
    using System.Linq;
    using System.Data.SqlClient;

    namespace JohnDeereCMMDataParser {
        internal class Program {
            public static List<string> GetImportedFileList() {
                List<string> ImportedFiles = new List<string>();
                using (SqlConnection connect = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES")) {
                    connect.Open();
                    using (SqlCommand fmd = connect.CreateCommand()) {
                        fmd.CommandText = @"IF (EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES
                                                        WHERE TABLE_SCHEMA = 'RX_CMMData' AND TABLE_NAME = 'Import'))
                                            BEGIN
                                                SELECT DISTINCT FileName FROM Import;
                                            END";
                        fmd.CommandType = CommandType.Text;
                        SqlDataReader r = fmd.ExecuteReader();
                        while (r.Read()) {
                            ImportedFiles.Add(Convert.ToString(r["FileName"]));
                        }
                    }
                }
                return ImportedFiles;
            }

            private static void Main(string[] args) {
                Console.Title = "John Deere CMM Data Parser";
                Console.WriteLine("Preparing CMM Data Parser... done");
                Console.WriteLine("Scanning for new CMM data... done");
                Console.ForegroundColor = ConsoleColor.Gray;
                using (SqlConnection con = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES")) {
                    con.Open();
                    using (SqlCommand insertCommand = con.CreateCommand()) {
                        SqlCommand cmdd = con.CreateCommand();
                        string[] files = Directory.GetFiles(@"C:\Documents and Settings\js91162\Desktop\", "R303717*.txt*", SearchOption.AllDirectories);
                        List<string> ImportedFiles = GetImportedFileList();
                        foreach (string file in files.Except(ImportedFiles)) {
                            string FileNameExt1 = Path.GetFileName(file);
                            cmdd.CommandText = @"
                                IF (EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES
                                            WHERE TABLE_SCHEMA = 'RX_CMMData' AND TABLE_NAME = 'Import'))
                                BEGIN
                                    SELECT COUNT(*) FROM Import WHERE FileName = @FileExt;
                                END";
                            cmdd.Parameters.Add(new SqlParameter("@FileExt", FileNameExt1));
                            int count = Convert.ToInt32(cmdd.ExecuteScalar());
                            con.Close();
                            con.Open();
                            if (count == 0) {
                                Console.WriteLine("Parsing CMM data for SQL database... Please wait.");
                                insertCommand.CommandText = @"
                                    INSERT INTO Import (FeatType, FeatName, Value, Actual, Nominal, Dev, TolMin, TolPlus, OutOfTol, PartNumber, CMMNumber, Date, FileName)
                                    VALUES (@FeatType, @FeatName, @Value, @Actual, @Nominal, @Dev, @TolMin, @TolPlus, @OutOfTol, @PartNumber, @CMMNumber, @Date, @FileName);";
                                insertCommand.Parameters.Add(new SqlParameter("@FeatType", DbType.Decimal));
                                insertCommand.Parameters.Add(new SqlParameter("@FeatName", DbType.Decimal));
                                insertCommand.Parameters.Add(new SqlParameter("@Value", DbType.Decimal));
                                insertCommand.Parameters.Add(new SqlParameter("@Actual", DbType.Decimal));
                                insertCommand.Parameters.Add(new SqlParameter("@Nominal", DbType.Decimal));
                                insertCommand.Parameters.Add(new SqlParameter("@Dev", DbType.Decimal));
                                insertCommand.Parameters.Add(new SqlParameter("@TolMin", DbType.Decimal));
                                insertCommand.Parameters.Add(new SqlParameter("@TolPlus", DbType.Decimal));
                                insertCommand.Parameters.Add(new SqlParameter("@OutOfTol", DbType.Decimal));
                                string FileNameExt = Path.GetFullPath(file);
                                string RNumber = Path.GetFileNameWithoutExtension(file);
                                string RNumberE = RNumber.Split('_')[0];
                                string RNumberD = RNumber.Split('_')[1];
                                string RNumberDate = RNumber.Split('_')[2];
                                DateTime dateTime = DateTime.ParseExact(RNumberDate, "yyyyMMdd", Thread.CurrentThread.CurrentCulture);
                                string cmmDate = dateTime.ToString("dd-MMM-yyyy");
                                string[] lines = File.ReadAllLines(file);
                                bool parse = false;
                                foreach (string tmpLine in lines) {
                                    string line = tmpLine.Trim();
                                    if (!parse && line.StartsWith("Feat. Type,")) {
                                        parse = true;
                                        continue;
                                    }
                                    if (!parse || string.IsNullOrEmpty(line)) {
                                        continue;
                                    }
                                    Console.WriteLine(tmpLine);
                                    foreach (SqlParameter parameter in insertCommand.Parameters) {
                                        parameter.Value = null;
                                    }
                                    string[] values = line.Split(new[] { ',' });
                                    for (int i = 0; i < values.Length - 1; i++) {
                                        SqlParameter param = insertCommand.Parameters[i];
                                        if (param.DbType == DbType.Decimal) {
                                            decimal value;
                                            param.Value = decimal.TryParse(values[i], out value) ? value : 0;
                                        } else {
                                            param.Value = values[i];
                                        }
                                    }
                                    insertCommand.Parameters.Add(new SqlParameter("@PartNumber", RNumberE));
                                    insertCommand.Parameters.Add(new SqlParameter("@CMMNumber", RNumberD));
                                    insertCommand.Parameters.Add(new SqlParameter("@Date", cmmDate));
                                    insertCommand.Parameters.Add(new SqlParameter("@FileName", FileNameExt));
                                    // insertCommand.ExecuteNonQuery();
                                }
                            }
                        }
                        Console.WriteLine("CMM data successfully imported to SQL database...");
                    }
                    con.Close();
                }
            }
        }
    }

    Read the article

  • how to click on a button in python

    - by Ciobanu Alexandru
    Trying to build a bot for clicking the "skip-ad" button on a page. So far, I have managed to use Mechanize to load a web-driver browser and to connect to a page, but the Mechanize module does not support JS directly, so now I need something like Selenium, if I understand correctly. I am also a beginner in programming, so please be specific. How can I use Selenium, or if there is any other solution, please explain the details.

    This is the inner HTML code for the button:

    <a id="skip-ad" class="btn btn-inverse" onclick="open_url('http://imgur.com/gallery/tDK9V68', 'go'); return false;" style="font-weight: bold; " target="_blank" href="http://imgur.com/gallery/tDK9V68"> … </a>

    And this is my source so far:

    #!/usr/bin/python
    # FILENAME: test.py
    import mechanize
    import os, time
    from random import choice, randrange

    prox_list = []

    # list of common UAS to apply to each connection attempt to impersonate browsers
    user_agent_strings = [
        'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1468.0 Safari/537.36',
        'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1',
        'Opera/9.80 (Windows NT 6.0) Presto/2.12.388 Version/12.14',
        'Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52',
        'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:23.0) Gecko/20131011 Firefox/23.0',
        'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; Media Center PC 6.0; InfoPath.3; MS-RTC LM 8; Zune 4.7',
        'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; Zune 4.0; Tablet PC 2.0; InfoPath.3; .NET4.0C; .NET4.0E)',
        'Mozilla/5.0 (compatible; MSIE 10.6; Windows NT 6.1; Trident/5.0; InfoPath.2; SLCC1; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET CLR 2.0.50727) 3gpp-gba UNTRUSTED/1.0',
        'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0; chromeframe/11.0.696.57)',
        'Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; InfoPath.1; SV1; .NET CLR 3.8.36217; WOW64; en-US)',
        'Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; GTB7.4; InfoPath.2; SV1; .NET CLR 3.3.69573; WOW64; en-US)'
    ]

    def load_proxy_list(target):
        # loads and parses the proxy list
        file = open(target, 'r')
        count = 0
        for line in file:
            prox_list.append(line)
            count += 1
        print "Loaded " + str(count) + " proxies!"

    load_proxy_list('proxies.txt')

    # for i in range(1,(len(prox_list) - 1)): # deprecated for overloading
    for i in range(1, 30):
        br = mechanize.Browser()
        # pick a random UAS to add some extra cover to the bot
        br.addheaders = [('User-agent', choice(user_agent_strings))]
        print "----------------------------------------------------"
        # This is bad internet ethics
        br.set_handle_robots(False)
        # choose a proxy
        proxy = choice(prox_list)
        br.set_proxies({"http": proxy})
        br.set_debug_http(True)
        try:
            print "Trying connection with: " + str(proxy)
            # currently using: BTC CoinURL - Grooveshark Broadcast
            br.open("http://cur.lv/4czwj")
            print "Opened successfully!"
            # act like a nice little drone and view the ads
            sleep_time_on_link = randrange(17.0, 34.0)
            time.sleep(sleep_time_on_link)
        except mechanize.HTTPError, e:
            print "Oops Request threw " + str(e.code)
            # future versions will handle codes properly, 404 most likely means
            # the ad-linker has noticed bot-traffic and removed the link
            # or the used proxy is terrible. We will either geo-locate
            # proxies beforehand and pick good hosts, or ditch the link
            # which is worse case scenario, account is closed because of botting
        except mechanize.URLError, e:
            print "Oops! Request was refused, blacklisting proxy!" + str(e)
            prox_list.remove(proxy)
        del br  # close browser entirely
        # wait between 5-30 seconds like a good little human
        sleep_time = randrange(5.0, 30.0)
        print "Waiting for %.1f seconds like a good bot." % (sleep_time)
        time.sleep(sleep_time)
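
    For the JS-driven click itself, a minimal Selenium sketch (hedged: this assumes the Selenium 2-era Python bindings that match this script's vintage, that Firefox is installed, and that the button keeps the id "skip-ad"; the URL is the one from the script above):

    from selenium import webdriver
    import time

    driver = webdriver.Firefox()                   # opens a real, JS-capable browser
    driver.get("http://cur.lv/4czwj")
    time.sleep(10)                                 # crude wait for the ad countdown to finish
    driver.find_element_by_id("skip-ad").click()   # clicks the anchor shown above
    driver.quit()

    A fixed sleep is fragile; WebDriverWait with an expected condition would be the more robust choice once the basic click works.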

    Read the article

  • LWJGL Circle program creates an oval-like shape

    - by N1ghtk1n9
    I'm trying to draw a circle in LWJGL, but when I try to draw it, it makes a shape that's more like an oval than a circle. Also, when I raise circleVertexCount past 350, the shape flips out. I'm really not sure how the vertex-creation code works (I have taken geometry and I know the basic trig ratios). I haven't really found good tutorials on creating circles. Here's my code:

    public class Circles {
        // Setup variables
        private int WIDTH = 800;
        private int HEIGHT = 600;
        private String title = "Circle";
        private float fXOffset;
        private int vbo = 0;
        private int vao = 0;
        int circleVertexCount = 300;
        float[] vertexData = new float[(circleVertexCount + 1) * 4];

        public Circles() {
            setupOpenGL();
            setupQuad();
            while (!Display.isCloseRequested()) {
                loop();
                adjustVertexData();
                Display.update();
                Display.sync(60);
            }
            Display.destroy();
        }

        public void setupOpenGL() {
            try {
                Display.setDisplayMode(new DisplayMode(WIDTH, HEIGHT));
                Display.setTitle(title);
                Display.create();
            } catch (LWJGLException e) {
                e.printStackTrace();
                System.exit(-1);
            }
            glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        }

        public void setupQuad() {
            float r = 0.1f;
            float x;
            float y;
            float offSetX = 0f;
            float offSetY = 0f;
            double theta = 2.0 * Math.PI;
            vertexData[0] = (float) Math.sin(theta / circleVertexCount) * r + offSetX;
            vertexData[1] = (float) Math.cos(theta / circleVertexCount) * r + offSetY;
            for (int i = 2; i < 400; i += 2) {
                double angle = theta * i / circleVertexCount;
                x = (float) Math.cos(angle) * r;
                vertexData[i] = x + offSetX;
            }
            for (int i = 3; i < 404; i += 2) {
                double angle = Math.PI * 2 * i / circleVertexCount;
                y = (float) Math.sin(angle) * r;
                vertexData[i] = y + offSetY;
            }
            FloatBuffer vertexBuffer = BufferUtils.createFloatBuffer(vertexData.length);
            vertexBuffer.put(vertexData);
            vertexBuffer.flip();
            vao = glGenVertexArrays();
            glBindVertexArray(vao);
            vbo = glGenBuffers();
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, vertexBuffer, GL_STATIC_DRAW);
            glVertexAttribPointer(0, 2, GL_FLOAT, false, 0, 0);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
            glBindVertexArray(0);
        }

        public void loop() {
            glClear(GL_COLOR_BUFFER_BIT);
            glBindVertexArray(vao);
            glEnableVertexAttribArray(0);
            glDrawArrays(GL_TRIANGLE_FAN, 0, vertexData.length / 2);
            glDisableVertexAttribArray(0);
            glBindVertexArray(0);
        }

        public static void main(String[] args) {
            new Circles();
        }

        private void adjustVertexData() {
            float newData[] = new float[vertexData.length];
            System.arraycopy(vertexData, 0, newData, 0, vertexData.length);
            if (Keyboard.isKeyDown(Keyboard.KEY_W)) {
                fXOffset += 0.05f;
            } else if (Keyboard.isKeyDown(Keyboard.KEY_S)) {
                fXOffset -= 0.05f;
            }
            for (int i = 0; i < vertexData.length; i += 2) {
                newData[i] += fXOffset;
            }
            FloatBuffer newDataBuffer = BufferUtils.createFloatBuffer(newData.length);
            newDataBuffer.put(newData);
            newDataBuffer.flip();
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferSubData(GL_ARRAY_BUFFER, 0, newDataBuffer);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }
    }

    [Screenshots removed: 300 vertex count (this is my main problem); 400 vertex count (bugged out - should be a tiny sliver cut out from the right, like a secant); 500 vertex count. Each additional 100 removes more and more of the circle, and so on.]
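
    Two things in setupQuad plausibly explain the symptoms. The oval: the vertices are generated in normalized device coordinates, and an 800x600 window stretches NDC x over more pixels than y, so an equal-radius circle renders as an ellipse unless x is divided by the aspect ratio. The flip-out above ~350 vertices: the loops use hard-coded bounds (400 and 404) instead of bounds derived from circleVertexCount, so they fall out of step with the array size. A sketch of one loop that addresses both, reusing the field names above and writing 2 floats per vertex as the attribute pointer expects:

    float aspect = (float) WIDTH / (float) HEIGHT;  // 800/600
    for (int i = 0; i <= circleVertexCount; i++) {
        double angle = 2.0 * Math.PI * i / circleVertexCount;
        vertexData[2 * i]     = (float) (Math.cos(angle) * r / aspect) + offSetX; // x, aspect-corrected
        vertexData[2 * i + 1] = (float) (Math.sin(angle) * r) + offSetY;          // y
    }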

    Read the article

  • Java Array Index Out of Bounds Exception

    - by user1302023
    I need help debugging the following program. I'm getting a runtime error that reads:

    Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: -1
        at SearchEngine.main(SearchEngine.java:126)

    import java.util.*;
    import java.io.*;

    public class SearchEngine {
        public static int getNumberOfWords(File f) throws FileNotFoundException {
            int numWords = 0;
            Scanner scan = new Scanner(f);
            while (scan.hasNext()) {
                numWords++;
                scan.next();
            }
            scan.close();
            return numWords;
        }

        public static void readInWords(File input, String[] x) throws FileNotFoundException {
            Scanner scan = new Scanner(input);
            int i = 0;
            while (scan.hasNext() && i < x.length) {
                x[i] = scan.next();
                i++;
            }
            scan.close();
        }

        public static int getNumOfDistinctWords(File input, String[] x) throws FileNotFoundException {
            Scanner scan = new Scanner(input);
            int count = 0;
            int i = 1;
            while (scan.hasNext() && i < x.length) {
                if (!x[i].equals(x[i-1])) {
                    count++;
                }
                i++;
            }
            scan.close();
            return count;
        }

        public static void readInDistinctWords(String[] x, String[] y) {
            int i = 1;
            int k = 0;
            while (i < x.length) {
                if (!x[i].equals(x[i-1])) {
                    y[k] = x[i];
                    k++;
                }
                i++;
            }
        }

        public static int getNumberOfLines(File input) throws FileNotFoundException {
            int numLines = 0;
            Scanner scan = new Scanner(input);
            while (scan.hasNextLine()) {
                numLines++;
                scan.nextLine();
            }
            scan.close();
            return numLines;
        }

        public static void readInLines(File input, String[] x) throws FileNotFoundException {
            Scanner scan = new Scanner(input);
            int i = 0;
            while (scan.hasNextLine() && i < x.length) {
                x[i] = scan.nextLine();
                i++;
            }
            scan.close();
        }

        public static void main(String[] args) {
            try {
                // gets file name
                System.out.println("Enter the name of the text file you wish to search");
                Scanner kb = new Scanner(System.in);
                String fileName = kb.nextLine();
                String TXT = ".txt";
                if (!fileName.endsWith(TXT)) {
                    fileName = fileName.concat(TXT);
                }
                File input = new File(fileName);

                // First part of creating index
                System.out.println("Creating vocabArray");
                int NUM_WORDS = getNumberOfWords(input);
                // System.out.println(NUM_WORDS);
                String[] wordArray = new String[NUM_WORDS];
                readInWords(input, wordArray);
                Arrays.sort(wordArray);
                int NUM_DISTINCT_WORDS = getNumOfDistinctWords(input, wordArray);
                String[] vocabArray = new String[NUM_DISTINCT_WORDS];
                readInDistinctWords(wordArray, vocabArray);
                System.out.println("Finished creating vocabArray");

                System.out.println("Creating concordanceArray");
                int NUM_LINES = getNumberOfLines(input);
                String[] concordanceArray = new String[NUM_LINES];
                readInLines(input, concordanceArray);
                System.out.println("Finished creating concordanceArray");

                System.out.println("Creating invertedIndex");
                int[][] invertedIndex = new int[NUM_DISTINCT_WORDS][10];
                int[] wordCountArray = new int[NUM_DISTINCT_WORDS];
                int lineNum = 0;
                while (lineNum < concordanceArray.length) {
                    Scanner scan = new Scanner(concordanceArray[lineNum]);
                    while (scan.hasNext()) {
                        int wordPos = Arrays.binarySearch(vocabArray, scan.next());
                        wordCountArray[wordPos] += 1;
                        for (int i = 0; i < invertedIndex.length; i++) {
                            for (int j = 0; j < invertedIndex[i].length; j++) {
                                if (invertedIndex[i][j] == 0) {
                                    invertedIndex[i][j] = lineNum;
                                    break;
                                }
                            }
                        }
                    }
                    lineNum++;
                }
                System.out.println("Finished creating invertedIndex");
            } catch (FileNotFoundException exception) {
                System.out.println("File Not Found");
            }
        } // main
    } // class
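
    The -1 index points at wordCountArray[wordPos]: Arrays.binarySearch returns a negative insertion point when the word isn't in vocabArray, and vocabArray is guaranteed to miss at least one word, because readInDistinctWords starts at i = 1 and never copies wordArray[0] (the alphabetically first word). A sketch of both fixes (note the distinct-word count would need to grow by one to make room for the extra element):

    // In readInDistinctWords: copy the first sorted word too.
    y[0] = x[0];
    int k = 1;
    for (int i = 1; i < x.length; i++) {
        if (!x[i].equals(x[i - 1])) {
            y[k++] = x[i];
        }
    }

    // In main: binarySearch returns -(insertionPoint) - 1 on a miss, so guard it.
    int wordPos = Arrays.binarySearch(vocabArray, scan.next());
    if (wordPos >= 0) {
        wordCountArray[wordPos] += 1;
    }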

    Read the article

  • MEF + Plug-In not updating

    - by mybrokengnome
    I asked this on the MEF Codeplex forum already, but I haven't gotten a response yet, so I figured I'd try Stack Overflow. Here's the original post if anyone's interested (this is just a copy of it): MEF Codeplex

    "Let me first say that I'm completely new to MEF (just discovered it today) and am very happy with it so far. However, I've run into a problem that is very frustrating. I'm creating an app that will have a plugin architecture, and the plugins will only be stored in a single DLL file (or coded into the main app). The DLL file needs to be able to be recompiled during run-time, and the app should recognize this and re-load the plugins (I know this is difficult, but it's a requirement). To accomplish this I took the approach covered at http://blog.maartenballiauw.be/category/MEF.aspx (look for WebServerDirectoryCatalog). Basically the idea is to "monitor the plugins folder, copy the new/modified assemblies to the web application's /bin folder and instruct MEF to load its exports from there."

    This is my code, which is probably not the correct way to do it, but it's what I found in some samples around the net:

    main()...

    string myExecName = Assembly.GetExecutingAssembly().Location;
    string myPath = System.IO.Path.GetDirectoryName(myExecName);
    catalog = new AggregateCatalog();
    pluginCatalog = new MyDirectoryCatalog(myPath + @"/Plugins");
    catalog.Catalogs.Add(pluginCatalog);
    exportContainer = new CompositionContainer(catalog);
    CompositionBatch compBatch = new CompositionBatch();
    compBatch.AddPart(this);
    compBatch.AddPart(catalog);
    exportContainer.Compose(compBatch);

    and

    private FileSystemWatcher fileSystemWatcher;
    public DirectoryCatalog directoryCatalog;
    private string path;
    private string extension;

    public MyDirectoryCatalog(string path) {
        Initialize(path, "*.dll", "*.dll");
    }

    private void Initialize(string path, string extension, string modulePattern) {
        this.path = path;
        this.extension = extension;
        fileSystemWatcher = new FileSystemWatcher(path, modulePattern);
        fileSystemWatcher.Changed += new FileSystemEventHandler(fileSystemWatcher_Changed);
        fileSystemWatcher.Created += new FileSystemEventHandler(fileSystemWatcher_Created);
        fileSystemWatcher.Deleted += new FileSystemEventHandler(fileSystemWatcher_Deleted);
        fileSystemWatcher.Renamed += new RenamedEventHandler(fileSystemWatcher_Renamed);
        fileSystemWatcher.IncludeSubdirectories = false;
        fileSystemWatcher.EnableRaisingEvents = true;
        Refresh();
    }

    void fileSystemWatcher_Renamed(object sender, RenamedEventArgs e) {
        RemoveFromBin(e.OldName);
        Refresh();
    }

    void fileSystemWatcher_Deleted(object sender, FileSystemEventArgs e) {
        RemoveFromBin(e.Name);
        Refresh();
    }

    void fileSystemWatcher_Created(object sender, FileSystemEventArgs e) {
        Refresh();
    }

    void fileSystemWatcher_Changed(object sender, FileSystemEventArgs e) {
        Refresh();
    }

    private void Refresh() {
        // Determine /bin path
        string binPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Plugins");
        string newPath = "";
        // Copy files to /bin
        foreach (string file in Directory.GetFiles(path, extension, SearchOption.TopDirectoryOnly)) {
            try {
                DirectoryInfo dInfo = new DirectoryInfo(binPath);
                DirectoryInfo[] dirs = dInfo.GetDirectories();
                int count = dirs.Count() + 1;
                newPath = binPath + "/" + count;
                DirectoryInfo dInfo2 = new DirectoryInfo(newPath);
                if (!dInfo2.Exists)
                    dInfo2.Create();
                File.Copy(file, System.IO.Path.Combine(newPath, System.IO.Path.GetFileName(file)), true);
            } catch {
                // Not that big a deal... Blog readers will probably kill me for this bit of code :-)
            }
        }
        // Create new directory catalog
        directoryCatalog = new DirectoryCatalog(newPath, extension);
        directoryCatalog.Refresh();
    }

    public override IQueryable<ComposablePartDefinition> Parts {
        get { return directoryCatalog.Parts; }
    }

    private void RemoveFromBin(string name) {
        string binPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "");
        File.Delete(Path.Combine(binPath, name));
    }

    So all this actually works, and after the end of the code in main my IEnumerable variable is filled with all the plugins in the DLL (which, if you follow the code, is located in Plugins/1, so that I can modify the DLL in the Plugins folder). So at this point I should be able to re-compile the plugins DLL, drop it into the Plugins folder, have my FileSystemWatcher detect that it's changed and copy it into folder "2", and directoryCatalog should point to the new folder. All this actually works! The problem is, even though it seems like everything is pointed to the right place, my IEnumerable variable is never updated with the new plugins. So close, but yet so far! Any suggestions? I know the downsides of doing it this way - no DLL is actually getting unloaded, which causes a memory leak - but it's a Windows app and will probably be restarted at least once a day, and the plugins are unlikely to change that often. Still, it's a requirement from the client that it does this without re-loading the app. Thanks!"

    Thanks for any help you all can provide; it's driving me crazy not being able to figure this out.
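
    One piece MEF needs that this catalog never supplies: recomposition is driven by catalog change notifications plus imports that opt in to recomposition. DirectoryCatalog raises its own Changed event from Refresh() on the same instance; Refresh() here builds a brand-new DirectoryCatalog each time, so no event is ever raised and the container has no reason to recompose. A hedged sketch of the two missing pieces (IPlugin is a hypothetical import type standing in for whatever the real contract is):

    // 1) The import must allow recomposition.
    [ImportMany(AllowRecomposition = true)]
    public IEnumerable<IPlugin> Plugins { get; set; }

    // 2) The custom catalog must announce changes, e.g. by implementing
    //    INotifyComposablePartCatalogChanged and raising Changed from Refresh().
    public class MyDirectoryCatalog : ComposablePartCatalog, INotifyComposablePartCatalogChanged {
        public event EventHandler<ComposablePartCatalogChangeEventArgs> Changed;
        public event EventHandler<ComposablePartCatalogChangeEventArgs> Changing;

        private void Refresh() {
            // ...copy DLLs and rebuild directoryCatalog as before, then raise
            // Changed with the added/removed part definitions so the container
            // knows to recompose its imports.
        }
    }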

    Read the article

  • OCR anything with OneNote 2007 and 2010

    - by Matthew Guay
    Quality OCR software can often be very expensive, but you may have one already installed on your computer that you didn’t know about.  Here’s how you can use OneNote to OCR anything on your computer. OneNote is one of the overlooked gems in recent versions of Microsoft Office.  OneNote makes it simple to take notes and keep track of everything with integrated search, and offers more features than its popular competitor Evernote.  One way it is better is its high quality optical character recognition (OCR) engine.  One of Evernote’s most popular features is that you can search for anything, including text in an image, and you can easily find it.  OneNote takes this further, and instantly OCRs any text in images you add.  Then, you can use this text easily and copy it from the image.  Let’s see how this works and how you can use OneNote as the ultimate OCR. Please Note: This feature is available in OneNote 2007 and 2010.  OneNote 2007 is included with Office 2007 Home and Student, Enterprise, and Ultimate, while OneNote 2010 is included with all edition of Office 2010 except for Starter edition. OCR anything First, let’s add something to OCR into OneNote.  There are many different ways you can add items to OCR into OneNote.  Open a blank page or one you want to insert something into, and then follow these steps to add what you want into OneNote. Picture Simply drag-and-drop a picture with text into a notebook… You can insert a picture directly from OneNote as well.  In OneNote 2010, select the Insert tab, and then choose Picture. In OneNote 2007, select the Insert menu, select Picture, and then choose From File.   Screen Clipping There are many times we’d like to copy text from something we see onscreen, but there is no direct way to copy text from that thing.  For instance, you cannot copy text from the title-bar of a window, or from a flash-based online presentation.  For these cases, the Screen Clipping option is very useful.  To add a clip of anything onscreen in OneNote 2010, select the Insert tab in the ribbon and click Screen Clipping. In OneNote 2007, either click the Clip button on the toolbar or select the Insert menu and choose Screen Clipping.   Alternately, you can take a screen clipping by pressing the windows key + S. When you click Screen Clipping, OneNote will minimize, your desktop will fade lighter, and your mouse pointer will change to a plus sign.  Now, click and drag over anything you want to add to OneNote.  Here we’re selecting the title of this article. The section you selected will now show up in your OneNote notebook, complete with the date and time the clip was made. Insert a file You’re not limited to pictures; OneNote can even OCR anything in most files on your computer.  You can add files directly in OneNote 2010 by selecting File Printout in the Insert tab. In OneNote 2007, select the Insert menu and choose Files as Printout. Choose the file you want to add to OneNote in the dialog. Select Insert, and OneNote will pause momentarily as it processes the file. Now your file will show up in OneNote as a printout with a link to the original file above it. You can also send any file directly to OneNote via the OneNote virtual printer.  If you have a file open, such as a PDF, that you’d like to OCR, simply open the print dialog in that program and select the “Send to OneNote” printer. Or, if you have a scanner, you can scan documents directly into OneNote by clicking Scanner Printout in the Insert tab in OneNote 2010. 
    In OneNote 2007, to add a scanned document select the Insert menu, select Picture, and then choose From Scanner or Camera.

    OCR the image, file, or screenshot you put in OneNote Now that you’ve got your stuff into OneNote, let’s put it to work.  OneNote automatically did an OCR scan on anything you inserted into OneNote.  You can check to make sure by right-clicking on any picture, screenshot, or file you inserted.  Select “Make Text in Image Searchable” and then make sure the correct language is selected. Now, you can copy text from the picture.  Simply right-click on the picture, and select “Copy Text from Picture”. And here’s the text that OneNote found in this picture: OCR anything with OneNote 2007 and 2010 - Windows Live Writer Not bad, huh?  Now you can paste the text from the picture into a document or anywhere you need to use the text. If you are instead copying text from a printout, it may give you the option to copy text from this page or all pages of the printout.  This works exactly the same in OneNote 2007. In OneNote 2010, you can also edit the text OneNote has saved in the image from the OCR.  This way, if OneNote read something incorrectly you can change it so you can still find it when you use search in OneNote.  Additionally, you can copy only a specific portion of the text from the edit box, so it can be useful just for general copying as well.  To do this, right-click on the item and select “Edit Alt Text”. Here is the window to edit alternate text.  If you want to copy only a portion of the text, simply select it and press Ctrl+C to copy that portion.

    Searching OneNote’s OCR engine is very useful for finding specific pictures you have saved in OneNote.  Simply enter your search query in the search box at the top right, and OneNote will automatically find all instances of that term in all of your notebooks.  Notice how it highlights the search term even in the image! This works the same in OneNote 2007.  Notice how it highlighted “How-to” in a shot of the header image in our favorite website. In Windows Vista and 7, you can even search for things OneNote OCRed from the Start Menu search.  Here the Start Menu search found the words “Windows Live Writer” in our OCR Test notebook in OneNote where we inserted the screen clip above.

    Conclusion OneNote is a very useful OCR tool, and can help you capture text from just about anything.  Plus, since you can easily search everything you have stored in OneNote, you can quickly find anything you insert anytime.  OneNote is one of the least-used Office tools, but we have found it very useful and hope you do too.

    Read the article

  • How to fix Monogame WP8 Touch Position bug?

    - by Moses Aprico
    Normally the code below will result in X:Infinity, Y:Infinity: TouchCollection touchState = TouchPanel.GetState(); foreach (TouchLocation t in touchState) { if (t.State == TouchLocationState.Pressed) { vb.ButtonTouched((int)t.Position.X, (int)t.Position.Y); } } Then I followed this https://github.com/mono/MonoGame/issues/1046 and added the code below as the first lines of my Update method. (I still don't know how it works, but it fixed the problem.) if (_firstUpdate) { typeof(Microsoft.Xna.Framework.Input.Touch.TouchPanel).GetField("_touchScale", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Static).SetValue(null, Vector2.One); _firstUpdate = false; } And then, while randomly testing, I found several areas that won't read the user's touch. The tile with the purple dude is the area which won't receive user input (it doesn't even detect "Pressed"; the TouchCollection.Count is 0). Any idea how to fix this? UPDATE 1: The second attempt at recompiling. The difference is weird. Dunno why the consistently clickable area is just the 2/3 of the screen to the left. UPDATE 2: After rotating to landscape and back to portrait while randomly testing, the outcome became:
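    For reference, here is a tidied-up version of that reflection workaround as a sketch. It pokes the same private _touchScale field the GitHub issue mentions, so it can break whenever MonoGame renames its internals – treat it as a stopgap, not an API:

    using System.Reflection;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Input.Touch;

    public static class TouchScaleWorkaround
    {
        private static bool _applied;

        // Call once from the first Update() of the Game class.
        public static void Apply()
        {
            if (_applied) return;

            // "_touchScale" is a private static field inside MonoGame's TouchPanel;
            // resetting it to Vector2.One stops the Infinity touch coordinates.
            FieldInfo field = typeof(TouchPanel).GetField(
                "_touchScale", BindingFlags.NonPublic | BindingFlags.Static);

            if (field != null)
                field.SetValue(null, Vector2.One);

            _applied = true;
        }
    }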

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sorts of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It’s been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or were extravagantly expensive. In the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is to just put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it’s running on, I shouldn’t have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that testing can be set up quickly and done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool.

    If you’re in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page Walk through Video Download link (zip) Install from Chocolatey Source on GitHub For a more detailed discussion of the whys and hows and some background, continue reading.

    How did I get here? When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I can do better than what’s available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around at what’s available for Web load testing solutions that would work for our whole team, which consisted of a few developers and a couple of IT guys, both of whom needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey as well – presumably because the bandwidth required to test over the open Web can be enormous.
    When I asked around on Twitter what people were using – I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher end SKUs of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio SKUs, and those are mucho Dinero ($$$) – just for load testing that’s rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target servers, which is useful, but in most cases just plain overkill that only distracts from what I tend to be ultimately interested in: reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes it’s useful to have Web app instrumentation, but often that’s not what you’re interested in. I still fondly remember the early days of Web testing when Microsoft had the WAST (Web Application Stress Tool) tool, which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL), but the idea behind it was excellent: create tests quickly and easily and provide a decent engine to run them locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death, like so many of Microsoft’s tools that were probably built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tool was no longer downloadable and now it simply doesn’t work anymore on higher end hardware.

    West Wind WebSurge – Making Load Testing Quick and Easy So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It’s super easy to capture sessions either using the built-in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore, which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content and test that URL and its results, along with the ability to easily save one or more of those URLs. A few weeks back I made a walk-through video that goes over most of the features of WebSurge in some detail: Note that the UI has slightly changed since then, so there are some UI improvements.
    Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site have a lot of info on basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works.

    Session Capturing As you would expect, WebSurge works with Sessions of URLs that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web browsers, Windows desktop applications that call services, or your own applications using the built-in Capture tool. With this tool you can capture anything HTTP – SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again, anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore, so basically anything you can capture with Fiddler you can also capture with the WebSurge Session capture tool. Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture a session without having to actually install WebSurge, or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem. Note that not all applications work with Fiddler’s proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links, you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide URL filters in the configuration file – filters let you provide strings that, if contained in a URL, will cause requests to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, CSS and script files. Often you’re not interested in the load characteristics of these static and usually cached resources, as they just add noise to tests and often skew the overall URL performance results. In my testing I tend to care only about my dynamic requests.

    SSL Captures require Fiddler Note that in order to capture SSL requests you’ll have to install Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge.

    Session Storage A group of URLs entered or captured makes up a Session. Sessions can be saved and restored easily as they use a very simple text format that is simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line.
    The headers are standard HTTP headers except that the full URL, instead of just the domain relative path, is stored as part of the first HTTP header line for easier parsing. Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand manually or under program control by writing out a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from. The only ‘custom’ part of these headers is that the first line contains the full URL instead of the domain relative path and Host: header. The rest of each entry is just plain standard HTTP headers, with each individual URL isolated by a separator line. The format used here is also what Fiddler produces for exports, so it’s easy to exchange or view data either in Fiddler or WebSurge. URLs can also be edited interactively so you can modify the headers easily as well: Again – it’s just plain HTTP headers, so anything you can do with HTTP can be added here.

    Use it for single URL Testing Incidentally I’ve also found this form an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It’s also an easy way to do REST presentations, and I find the simple UI flow easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks.

    Overriding Cookies and Domains Speaking of HTTP headers – you can also overwrite cookies used as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point, which would invalidate a test. Using the Options dialog you can actually override the cookie, which replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste it into the cookie field, to replace the existing cookie with a new one that is now valid. Likewise you can easily replace the domain, so if you captured URLs on west-wind.com and now you want to test on localhost you can do that easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store, which would also work.

    Running Load Tests Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start, as all threads run the same requests simultaneously, which can sometimes skew the results of the first few minutes of a test. While sessions run, some progress information is displayed: By default there’s a live view of requests displayed in a console-like window. On the bottom of the window there’s a running total summary that displays where you’re at in the test, how many requests have been processed and what the requests per second count currently is for all requests.
    Note that for tests that run over a thousand requests a second it’s a good idea to turn off the console display. While the console display is nice to see that something is happening and also gives you a slight idea of what’s happening with actual requests, once a lot of requests are processed this UI updating actually adds a lot of CPU overhead to the application, which may cause the actual load generated to be reduced. If you are running 1000 requests a second there’s not much to see anyway, as requests roll by way too fast to see individual lines. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second so you can always tell that the test is still running.

    Test Results When the test is done you get a simple Results display: On the right you get an overall summary as well as a breakdown by each URL in the session. Both successes and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed, or you can also open the HTML document in your default Web browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: This is particularly useful for errors, so you can quickly see and copy what request data was used, and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to test the URL as configured, including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Requests per Second chart, which can be accessed from the Charts menu or shortcut. Here’s what it looks like: Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete.

    Command Line Interface WebSurge runs with a small core load engine, and this engine is plugged into the front end application I’ve shown so far. There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to ab.exe for example) or a full Session file. By default when it runs, WebSurgeCli shows progress every second, showing total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration, which allows checking for failures or verifying that a specific requests per second count is hit. It’s also nice to use this as a quick and dirty URL test facility, similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI.
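    Since a session is just the plain-text header trace described earlier, a build script could even generate one programmatically before handing it to WebSurgeCli. Here’s a minimal C# sketch of that idea; the header set and the exact separator line are my assumptions, so compare the output against a session file WebSurge itself has saved before relying on it:

    using System.IO;
    using System.Text;

    public static class SessionFileWriter
    {
        // Assumed separator - verify against a file saved by WebSurge itself
        private const string Separator =
            "------------------------------------------------------------------";

        public static void Write(string path, params string[] urls)
        {
            var sb = new StringBuilder();
            foreach (string url in urls)
            {
                // Full URL on the request line instead of a relative path + Host header
                sb.AppendLine("GET " + url + " HTTP/1.1");
                sb.AppendLine("Accept: text/html,application/xhtml+xml");
                sb.AppendLine("User-Agent: WebSurgeSessionBuilder");
                sb.AppendLine();
                sb.AppendLine(Separator);
            }
            File.WriteAllText(path, sb.ToString());
        }
    }

    // Usage: SessionFileWriter.Write("smoketest.txt",
    //     "http://localhost/store", "http://localhost/store/cart");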
    Current Status Currently West Wind WebSurge is still in Beta status. I’m still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won’t be free. There’s a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license removes the thread and request limitations. Pricing info can be found on the Web site – there’s an introductory price which is $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison… The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there’s a lot of interest I don’t foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I’m very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it has already provided a ton of value for me and the work I do, which made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section.

    Microsoft MVPs and Insiders get a free License If you’re a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I’ll send you a not-for-resale license. Send any messages to [email protected].

    Resources For more info on WebSurge and to download it to try it out, use the following links: West Wind WebSurge Home, Download West Wind WebSurge, Getting Started with West Wind WebSurge Video. © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET

    Read the article

  • Problem in working with async and await?

    - by Vicky
    I am trying to upload files to Azure Blob Storage, and after each successful upload I add the filename to a list for further operations. When I do this synchronously it works fine, but when I do it asynchronously this error occurs: "Collection was modified; enumeration operation may not execute." foreach(var file in files) { // ..... await blockBlob.UploadFromStreamAsync(fs); listOfMovedLabelFiles.Add(fileName); } if (listOfMovedLabelFiles.Count > 0) // error point { // my code for further operation } Is there any way to wait until all the async operations have completed?
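    One common way out – sketched below with the question’s variable names plus a hypothetical MyFile type and blob container, so adapt as needed – is to snapshot the source collection before enumerating, start every upload, and only touch the shared list after Task.WhenAll reports that all uploads are done:

    using System.Collections.Generic;
    using System.Linq;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.Storage.Blob;

    public async Task UploadAllAsync(List<MyFile> files, CloudBlobContainer container)
    {
        var uploads = new List<Task>();
        var listOfMovedLabelFiles = new List<string>();

        // ToList() snapshots the collection, so nothing can modify it mid-enumeration
        foreach (var file in files.ToList())
        {
            CloudBlockBlob blockBlob = container.GetBlockBlobReference(file.Name);
            // file.OpenStream() is a hypothetical accessor - use however you get your stream
            uploads.Add(blockBlob.UploadFromStreamAsync(file.OpenStream()));
            listOfMovedLabelFiles.Add(file.Name);
        }

        // Wait until every upload has finished before doing further work
        await Task.WhenAll(uploads);

        if (listOfMovedLabelFiles.Count > 0)
        {
            // further operations are safe here: no enumeration is still in flight
        }
    }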

    Read the article

  • The entity type String is not part of the model for the current context error [migrated]

    - by Michael V
    I am getting the following error in my controller after the view submits the collection: The entity type String is not part of the model for the current context. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.InvalidOperationException: The entity type String is not part of the model for the current context. Source Error: Line 51: foreach (var survey in mysurveys) Line 52: { Line 53: db.Entry(survey).State = EntityState.Modified; Line 54: Line 55: // db.Entry(survey).State = EntityState.Modified; Here is the code: [HttpPost] public ActionResult UpdateTest(FormCollection mysurveys) { System.Diagnostics.Debug.WriteLine("iam in test post" + mysurveys.Count); foreach (var survey in mysurveys) { db.Entry(survey).State = EntityState.Modified; } db.SaveChanges(); return View(mysurveys); } Similar code with one record only (no foreach) works fine.
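    The root of the error: enumerating a FormCollection yields its string keys, so db.Entry() is handed a String rather than an entity type. A sketch of the usual fix is to bind the post to a typed collection instead – here assuming a Survey entity and a view that names its fields mysurveys[0].SomeProperty, mysurveys[1].SomeProperty and so on:

    // Bind to the entity type so each item really is a Survey the context knows about.
    // "Survey" and the indexed field-naming convention in the view are assumptions.
    [HttpPost]
    public ActionResult UpdateTest(List<Survey> mysurveys)
    {
        foreach (var survey in mysurveys)
        {
            // Now survey is a tracked entity type, not a form-field key string
            db.Entry(survey).State = EntityState.Modified;
        }
        db.SaveChanges();
        return View(mysurveys);
    }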

    Read the article

  • Top 10 Reasons SQL Developer is Perfect for Oracle Beginners

    - by thatjeffsmith
    Learning new technologies can be daunting. If you’ve never used a Mac before, you’ll probably be a bit baffled at first. But you’re probably at least coming from a desktop computing background (Windows), so you have a common frame of reference. But what if you’re just now learning to use a relational database? Yes, you’ve played with Access a bit, but now your employer or college instructor has charged you with becoming proficient with Oracle database. Here are 10 reasons why I think Oracle SQL Developer is the perfect vehicle to help get you started. 1. It’s free No need to break into one of these… No start-up costs, no need to wrangle budget dollars from your company. Students don’t have any money after books and lab fees anyway. And most employees don’t like having to ask for ‘special’ software anyway. So avoid all of that and make sure the free stuff doesn’t suit your needs first. Upgrades are available on a regular basis, also at no cost, and support is freely available via our public forums. 2. It will run pretty much anywhere Windows – check. OSX (Apple) – check. Unix – check. Linux – check. No need to start up a Windows VM to run your Windows-only software on your lab machine. 3. Anyone can install it There’s no installer, no registry to be updated, no admin privs to be obtained. If you can download and extract files to your machine or USB storage device, you can run it. You can be up and running with SQL Developer in under 5 minutes. Here’s a video tutorial to see how to get started. 4. It’s ubiquitous I admit it, I learned a new word yesterday and I wanted an excuse to use it. SQL Developer’s everywhere. It’s had over 2,500,000 downloads in the past year, and is one of the most downloaded items from OTN. This means if you need help, there’s someone sitting near you who can assist, and since they’re in the same tool as you, they’ll be speaking the same language. 5. Simple User Interface Up-up-down-down-left-right-left-right-A-B-A-B-START will get you 30 lives, but you already knew that, right? You connect, you see your objects, you click on your objects. Or, you can use the worksheet to write your queries and programs in. There’s only one toolbar, and just a few buttons. If you’re like me, video games became less fun when each button had 6 action items mapped to it. I just want the good ole ‘A’, ‘B’, ‘SELECT’, and ‘START’ controls. If you’re new to Oracle, you shouldn’t have the double workload of learning a new complicated tool as well. 6. It’s not a ‘black box’ Click through your objects, but also get the SQL that drives the GUI. As you use the wizards to accomplish tasks for you, you can view the SQL statement being generated on your behalf. Just because you have a GUI doesn’t mean you’re ceding your responsibility to learn the underlying code that makes the database work. 7. It’s four tools in one It’s not just a query tool. Maybe you need to design a data model first? Or maybe you need to migrate your Sybase ASE database to Oracle for a new project? Or maybe you need to create some reports? SQL Developer does all of that. So once you get comfortable with one part of the tool, the others will be much easier to pick up as your needs change. 8. Great learning resources available Videos, blogs, hands-on learning labs – you name it, we got it. Why wait for someone to train you, when you can train yourself at your own pace? 9.
    You can use it to teach yourself SQL Instead of being faced with the white-screen-of-panic, you can visually build your queries by dragging and dropping tables and views into the Query Builder. Yes, ‘just like Access’ – only better. And as you build your query, toggle to the Worksheet panel and see the SQL statement. Again, SQL Developer is not a black box. If you prefer to learn by trial and error, the worksheet will attempt to suggest the next bit of your SQL statement with its completion insight feature. And if you have syntax errors, those will be highlighted – just like your misspelled words in your favorite word processor. 10. It scales to match your experience level You won’t be a n00b forever. In 6-8 months, when you’re ready to tackle something a bit more complicated, like XML DB or Oracle Spatial, the tool is already there waiting on you. No need to go out and find the ‘advanced’ tool. 11. Wait, you said this was a ‘Top 10’ list? Yes. Yes, I did. I’m using this ‘trick’ to get you to continue reading because I’m going to say something you might not want to hear. Are you ready? Tools won’t replace experience, failure, hard work, and training. Just because you have the keys to the car doesn’t mean you’re ready to head out on the race track. While SQL Developer reduces the barriers to entry, it does not completely remove them. Many experienced folks simply do not like tools. Rather, they don’t like the people that pick up tools without the know-how to properly use them. If you don’t understand what ‘TRUNCATE’ means, don’t try it out. Try picking up a book first. Of course, it’s very nice to have your own sandbox to play in, so you don’t upset the other children. That’s why I really like our Dev Days Database Virtual Box image. It’s your own database to learn and experiment with.

    Read the article

  • Should I set up standard email accounts? What are they?

    - by artlung
    A long time ago one used to be able to count on domains having addresses like [email protected], [email protected], or [email protected] ... is this convention dead? Note: I always try to make sure to make a contact available on the websites I put up, so people can contact us if necessary. But are there reasons to handle these or other "standard" email addresses I might not be thinking of? I set up fewer email addresses than I used to since spam got so awful, and a "predictable" email address just seems to be an invitation to the lousy spammers.

    Read the article

  • What Every Developer Should Know About MSI Components

    - by Alois Kraus
    Hopefully nothing. But if you have to do more than simple XCopy deployment and you need to support updates, upgrades and perhaps side-by-side scenarios, there is no way around MSI. You can create MSI files with a Visual Studio Setup project, which is severely limited, or you can use the Windows Installer Toolset. I cannot talk about WIX with my German colleagues because WIX has a very special meaning. It is funny to always use the long name when I talk about deployment possibilities. Alternatively you can buy commercial tools which help you to author MSI files, but I am not sure how good they are. Given enough pain with existing solutions you can also learn the MSI APIs and create your own packaging solution. If I were you I would use either a commercial visual tool when you do easy deployments or use the free Windows Installer Toolset. Once you know the WIX schema you can create well-formed WIX XML files easily with any editor. Then you can “compile” your MSI package from the wxs files. Recently I had the “pleasure” to get my hands dirty with C++ (again) and the MSI technology. Installation is a complex topic, but after several months of digging into arcane MSI issues I can safely say that there should exist an easier way to install and update files than what we have today. I am not alone with this statement, as John Robbins (creator of the cool tool Paraffin) states: “.. It's a brittle and scary API in Windows …”. To help other people struggling with installation issues I present you the advice I (and others) found useful, and what will happen if you ignore this advice.

    What is an MSI file? An MSI file is basically a database with tables which reference each other to control how your un/installation should work. The basic idea is that you declare via these tables what you want to install, and MSI controls how to get your stuff onto or off your machine. Your “stuff” usually consists of files, registry keys, shortcuts and environment variables. Therefore the most important tables are the File, Registry, Environment and Shortcut tables, which define what will be un/installed. The key to mastering MSI is that every resource (file, registry key, …) is associated with an MSI component. The actual payload consists of compressed files in the CAB format, which can either be embedded into the MSI file or reside beside the MSI file or in a subdirectory below it. To examine MSI files you need Orca, a free MSI editor provided by MS. There is also another free editor called Super Orca which supports diffs between MSIs and does not lock the MSI files. But since Orca comes with a shell extension I tend to use only Orca, because it is so easy to right-click on an MSI file and open it with this tool.

    How Do I Install It? Double-click it. This does work for fresh installations as well as major upgrades. Updates need to be installed via the command line with msiexec /i <msi> REINSTALL=ALL REINSTALLMODE=vomus This tells the installer to reinstall all already installed features (new features will NOT be installed). The reinstallmode letters force an overwrite of the old cached package in the %WINDIR%\Installer folder. All files, shortcuts and registry keys are redeployed if they are missing or need to be replaced with a newer version. When things go really wrong and you want to overwrite everything unconditionally, use REINSTALLMODE=vamus.

    How To Enable MSI Logs? You can download an MSI from Microsoft which installs some registry keys to enable full MSI logging.
    The log files can be found in your %TEMP% folder and are called MSIxxxx.log. Alternatively you can add to your msiexec command line the option msiexec …. /l*vx <LogFileName> Personally I find it rather strange that * does not mean full logging. To really get all logs I need to add v and x, which is documented in the msiexec help, but I still find this behavior unintuitive.

    What are MSI components? The whole MSI logic is bound to the concept of MSI components. Nearly every MSI table has a Component column which binds an installable resource to a component. Below are the screenshots of the FeatureComponents and Component tables of an example MSI. The Feature table basically defines the feature hierarchy. To find out what belongs to a feature you need to look at the FeatureComponents table, where for each feature the components are listed which will be installed when that feature is installed. The MSI components are defined in the Component table. This table has as its first column the component name and as its second column the component id, which is a GUID. All resources you want to install belong to an MSI component. Therefore nearly all MSI tables have a Component_ column which contains the component name. If you look e.g. at the File table you see that every file belongs to a component, which is true for all other tables which install resources. The Component table is the glue between all other tables which contain the resources you want to install. So far so easy. Why is MSI then so complex? Most MSI problems arise from the fact that you violated an MSI component rule in one way or another. When you install a feature the reference count for all components belonging to this feature will increase by one. If your component is installed by more than one feature it will get a higher refcount. When you uninstall a feature its refcount will drop by one. Interesting things happen if the component reference count reaches zero: then all associated resources will be deleted. That looks like a reasonable thing, and it is. What makes it complex are the strange component rules you have to follow. Below are some important component rules from the Tao of the Windows Installer: Rule 16: Follow Component Rules Components are a very important part of the Installer technology. They are the means whereby the Installer manages the resources that make up your application. The SDK provides the following guidelines for creating components in your package: Never create two components that install a resource under the same name and target location. If a resource must be duplicated in multiple components, change its name or target location in each component. This rule should be applied across applications, products, product versions, and companies. Two components must not have the same key path file. This is a consequence of the previous rule. The key path value points to a particular file or folder belonging to the component that the installer uses to detect the component. If two components had the same key path file, the installer would be unable to distinguish which component is installed. Two components however may share a key path folder. Do not create a version of a component that is incompatible with all previous versions of the component. This rule should be applied across applications, products, product versions, and companies. Do not create components containing resources that will need to be installed into more than one directory on the user’s system.
    The installer installs all of the resources in a component into the same directory. It is not possible to install some resources into subdirectories. Do not include more than one COM server per component. If a component contains a COM server, this must be the key path for the component. Do not specify more than one file per component as a target for the Start menu or a Desktop shortcut. … And these rules do not even talk about component ids, update packages and upgrades, which you need to understand as well. Let’s suppose you install two MSIs (MSI1 and MSI2) which have the same ComponentId but different component names. Both install the same file. What will happen when you uninstall MSI2? Hm, the file should stay there. But the component names are different. Yes and yes. But MSI does not use the component name as the key for the refcount. Instead the ComponentId column of the Component table, which contains a GUID, is used as the identifier under which the refcount is stored. The components Comp1 and Comp2 are identical from the MSI perspective. After the installation of both MSIs the component with the Id {100000….} has a refcount of two. After uninstallation of one MSI there is still a refcount of one, which drops to zero just as expected when we uninstall the last MSI. Then the file which was the same for both MSIs is deleted. You should remember that MSI keeps a refcount across MSIs for components with the same component id. MSI manages components, not the resources you installed. The resources associated with a component are deleted then, and only then, when the refcount of the component reaches zero. The dependencies between features, components and resources can be described as relations (m and k are numbers >= 1; n can be 0). Inside an MSI the following relations are valid:

    Feature   1 –> n Components
    Component 1 –> m Features
    Component 1 –> k Resources

    These relations express that one feature can install several components, and features can share components between them. Every (meaningful) component will install at least one resource, which means that its name (its primary key, to stay in database speak) occurs as a value in the Component column of some other table which installs a resource. Let’s make it clear with an example. We want to install, with the feature MainFeature, some files, a registry key and a shortcut. We can then create components Comp1..3 which are referenced by the resources defined in the corresponding tables:

    Feature     | Component | Registry     | File      | Shortcuts
    MainFeature | Comp1     | RegistryKey1 |           |
    MainFeature | Comp2     |              | File.txt  |
    MainFeature | Comp3     |              | File2.txt | Shortcut to File2.txt

    It is illegal for the same resource to be part of more than one component, since this would break the refcount mechanism. Let’s illustrate this:

    Feature  | ComponentId | Resource  | Reference Count
    Feature1 | {1000-…}    | File1.txt | 1
    Feature2 | {2000-….}   | File1.txt | 1

    The installation part works well, but what happens when you uninstall Feature2? Component {20000…} gets a refcount of zero, whereupon MSI deletes all resources belonging to this component. In this case File1.txt will be deleted. But Feature1 still has another component {10000…} with a refcount of one, which means that the file was deleted too early. You have just ruined your installation. To fix it you then need to click on the Repair button under Add/Remove Programs to let MSI reinstall any missing registry keys, files or shortcuts. The vigilant reader might have noticed that there is more in the Component table.
    Beside its name and GUID it also has an installation directory, attributes and a KeyPath. The KeyPath is a reference to a file or registry key which is used to detect if the component is already installed. This becomes important when you repair or uninstall a component. To find out if the component is already installed, MSI checks if the registry key or file referenced by the KeyPath property exists. When it does not exist it assumes that it was either already uninstalled (which can lead to problems during uninstall) or that it is already installed and all is fine. Why is this detail so important? Let’s put all files into one component. The KeyPath should then be one of the files of your component, to check if it was installed or not. When your installation becomes corrupt because a file was deleted, you cannot repair it with the Repair button under Add/Remove Programs, because MSI checks the component integrity via the resource referenced by its KeyPath. As long as you did not delete the KeyPath file, MSI thinks all resources of your component are installed and never executes any repair action. You get even more trouble when you try to remove files during an upgrade (you cannot remove files during an update) from your super component which contains all files. The only way out, and therefore best practice, is to give every resource you want to install its own component. This ensures painless updatability and repairs, and makes it much less effort to remove specific files during an upgrade. In effect you get this best practice relation:

    Feature   1 –> n Components
    Component 1 –> 1 Resource

    MSI Component Rules Rule 1 – One component per resource Every resource you want to install (file, registry key, value, environment value, shortcut, directory, …) must get its own component, which never changes between versions as long as the install location is the same.

    Penalty If you add more than one resource to a component you will break the repair capability of MSI, because the KeyPath is used to check if the component needs repair.

    MSI     | ComponentId | Files
    MSI 1.0 | {1000}      | File1-5
    MSI 2.0 | {2000}      | File2-5

    You want to remove File1 in version 2.0 of your MSI. Since you want to keep the other files you create a new component and add them there. MSI will delete all files if the component refcount of {1000} drops to zero. The files you want to keep are added to the new component {2000}. OK, that does work if your upgrade uninstalls the old MSI first. This will cause the refcount of all previously installed components to reach zero, which means that all files present in version 1.0 are deleted. But there is a faster way to perform your upgrade: first installing your new MSI and then removing the old one. If you choose this upgrade path then you will lose File1-5 after your upgrade, and not only File1 as intended by your new component design.

    Rule 2 – Only add, never remove resources from a component If you followed Rule 1 you will not need Rule 2. You can add more resources to one component in a patch. That is OK. But you can never remove anything from it. There are tricky ways around that, but I do not want to encourage bad component design.

    Penalty Let’s assume you have 2 MSI files which install one file under the same component:

    MSI1                 | MSI2
    {1000} – ComponentId | {1000} – ComponentId
    File1.txt            | File2.txt

    When you install and uninstall both MSIs you will end up with an installation where either File1 or File2 will be left. Why?
    It seems that MSI does not store the resources associated with each component in its internal database. Instead Windows will simply query the MSI that is currently being uninstalled for all resources belonging to this component. Since it will find only one file and not two, it will only uninstall one file. That is the main reason why you can never remove resources from a component!

    Rule 3 – Never remove a component from an update MSI This is the same as if you changed the GUID of a component by accident in your new update package. The resulting update package will not contain all components from the previously installed package.

    Penalty When you remove a component from a feature, MSI will set the feature state during the update to Advertised and log a warning message into its log file if you enabled MSI logging: SELMGR: ComponentId '{2DCEA1BA-3E27-E222-484C-D0D66AEA4F62}' is registered to feature 'xxxxxxx, but is not present in the Component table.  Removal of components from a feature is not supported! MSI (c) (24:44) [07:53:13:436]: SELMGR: Removal of a component from a feature is not supported Advertised means that MSI treats all components of this feature as not installed. As a consequence, during uninstall nothing will be removed, since it is not installed! This is not only bad because uninstall no longer works, but this feature will also not get the required patches. All other features which have followed the component versioning rules for update packages will be updated, but the one faulty feature will not. This results in very hard-to-find bugs where an update was only partially successful. Things got better with Windows Installer 4.5, but you cannot rely on nobody using an older installer. It is a good idea to add MSIENFORCEUPGRADECOMPONENTRULES=1 to your update msiexec call, which will abort the installation if you violated this rule.
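    If you script your updates, the whole invocation above can be wrapped in a few lines of C#. This sketch uses only the msiexec switches quoted in this article (REINSTALL=ALL REINSTALLMODE=vomus, /l*vx logging and MSIENFORCEUPGRADECOMPONENTRULES=1); the paths are placeholders:

    using System.Diagnostics;

    public static class MsiUpdater
    {
        public static int RunUpdate(string msiPath, string logPath)
        {
            var psi = new ProcessStartInfo
            {
                FileName = "msiexec.exe",
                Arguments = "/i \"" + msiPath + "\" REINSTALL=ALL REINSTALLMODE=vomus " +
                            "MSIENFORCEUPGRADECOMPONENTRULES=1 " +
                            "/l*vx \"" + logPath + "\"",
                UseShellExecute = false
            };

            using (var proc = Process.Start(psi))
            {
                proc.WaitForExit();
                // 0 = success, 3010 = success but a reboot is required
                return proc.ExitCode;
            }
        }
    }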

    Read the article

  • SharePoint – The Most Important Feature

    - by Bil Simser
    Watch Twitter and do a search for SharePoint and you’ll see a lot (almost one every few minutes) of tweets about the top 10 new features in SharePoint. What answer do you get when you ask the question, “What’s the most important feature in SharePoint?” Chances are the answer will vary. Some will say it’s the collaboration aspect, others might say it’s the new ribbon interface, multi-item editing, external content types, faceted search, large list support, document versioning, Silverlight, etc. The list goes on. However I think most people might be missing the most important feature that’s been sitting right under their noses all this time. The most important feature of SharePoint? It’s called User Empowerment. Huh? What? Is that something I find in the Site Actions menu? Nope. It’s something that’s always been there in SharePoint; you just need to get the word out and support it. How many times have you had a team ask you for a team site (assuming you had SharePoint up and running)? Or to create them a contact list? Or how long have you employed that guy in the corner who’s been copying and pasting content from Corporate Communications into the web from a Word document? Let’s stop the insanity. It doesn’t have to be this way. SharePoint’s strongest feature isn’t anything you can find in the Site Settings screen or Central Admin. It’s all about empowering your users and letting them take control of their content. After all, SharePoint really is a bunch of tools to allow users to collaborate on content, isn’t it? So why are you stepping in as IT and helping the user every moment along the way? It’s like asking users to fill out a help desk ticket or call up the Windows team to create a folder on their desktop or rearrange their Start menu. This isn’t something IT should be spending its time doing, nor should users be burdened with waiting until their friendly neighborhood tech-guy (or gal) shows up to help them sort the icons on their desktop. SharePoint IS all about empowerment. Site owners can create whatever lists and libraries they need for their team, and if the template isn’t there they can always turn to my friend and yours, the Custom List. From that can spew forth approval tracking systems, new hire checklists, and server inventory. You’re only limited by your imagination and needs. Users should be able to create new sites as they need them. Want a blog to let everyone know what your team is up to? Go create one; here’s how. What’s a blog, you ask? Here’s what it is and why you would use one. SharePoint is the shift in the balance of power, and you, as an IT group, need to let go of certain responsibilities and let your users run with the tools. A power user who knows how to create sites and what features are available to them can help a team go from the forming stage to the storming stage overnight. Again, this all hinges on you as an IT organization and what features you can and will empower your users with. Running with tools is great if you know how to use them; running with scissors is not recommended unless you enjoy trips to the hospital. With Great Power comes Great Responsibility, so don’t go out on Monday and send out a memo to the organization saying “This Bil guy says you peeps can do anything so here it is, knock yourself out” (for one, they’ll have *no* idea who this Bil guy is). This advice comes with the task of getting your users ready for empowerment.
Whether it’s through some kind of internal training sessions, in-house documentation; videos; blog posts; on how to accomplish things in SharePoint, or full blown one-on-one sit downs with teams or individuals to help them through their problems. The work is up to you. Helping them along also should be part of your governance (you do have one don’t you?). Just because you have InfoPath client deployed with your Office suite, doesn’t mean users should just start publishing forms all over your SharePoint farm. There should be some governance behind that in what you’ll support and what is possible. The other caveat to all this is that SharePoint is not everything for everyone. It can’t cook you breakfast and impregnate your cat or solve world hunger. It also isn’t suited for every IT solution out there. It’s a horrible source control system (even though some people try to use it as such) and really can’t do financials worth a darn. Again, governance is key here and part of that governance and your responsibility in setting up and unleashing SharePoint into your organization is to provide users guidance on what should be in SharePoint and (more importantly) what should not be in SharePoint. There are boundaries you have to set where you don’t want your end users going as they might be treading into trouble. Again, this is up to you to set these constraints and help users understand why these pylons are there. If someone understands why they can’t do something they might have a better understanding and respect for those that put them there in the first place. Of course you’ll always have the power-users who want to go skiing down dead mans curve so this doesn’t work for everyone, but you can catch the majority of the newbs who don’t wander aimlessly off the beaten path. At the end of the day when all things are going swimmingly your end users should be empowered to solve the needs they have on a day to day basis and not having to keep bugging the IT department to help them create a view to show only approved documents. I wouldn’t go as far as business users building out full blown solutions and handing the keys to SharePoint Designer or (worse) Visual Studio to power-users might not be a path you want to go down but you also don’t have to lock up the SharePoint system in a tight box where users can’t use what’s there. So stop focusing on the shiny things in SharePoint and maybe consider making a shift to what’s really important. Making your day job easier and letting users get the most our of your technology investment.

    Read the article

  • Basic Spatial Data with SQL Server and Entity Framework 5.0

    - by Rick Strahl
    In my most recent project we needed to do a bit of geo-spatial referencing. While spatial features have been in SQL Server for a while, using those features inside of .NET applications hasn't been as straightforward as it could be, because .NET natively doesn't support spatial types. There are workarounds for this with a few custom projects like SharpMap, or a hack using the SQL Server specific Geo types found in the Microsoft.SqlServer.Types assembly that ships with SQL Server. While these approaches work for manipulating spatial data from .NET code, they didn't work with database access if you're using Entity Framework. Other ORM vendors have been rolling their own versions of spatial integration. In Entity Framework 5.0 running on .NET 4.5 the Microsoft ORM finally adds support for spatial types as well. In this post I'll describe basic geography features that deal with single locations and distance calculations, which is probably the most common usage scenario.

    SQL Server Transact-SQL Syntax for Spatial Data Before we look at how things work with Entity Framework, let's take a look at how SQL Server allows you to use spatial data, to get an understanding of the underlying semantics. The following SQL examples should work with SQL 2008 and forward. Let's start by creating a test table that includes a Geography field and also a pair of Long/Lat fields that demonstrate how you can work with the geography functions even if you don't have geography/geometry fields in the database. Here's the CREATE command: CREATE TABLE [dbo].[Geo]( [id] [int] IDENTITY(1,1) NOT NULL, [Location] [geography] NULL, [Long] [float] NOT NULL, [Lat] [float] NOT NULL ) Now using plain SQL you can insert data into the table using the geography::STGeomFromText SQL CLR function: insert into Geo( Location, long, lat ) values ( geography::STGeomFromText('POINT(-121.527200 45.712113)', 4326), -121.527200, 45.712113 ) insert into Geo( Location, long, lat ) values ( geography::STGeomFromText('POINT(-121.517265 45.714240)', 4326), -121.517265, 45.714240 ) insert into Geo( Location, long, lat ) values ( geography::STGeomFromText('POINT(-121.511536 45.714825)', 4326), -121.511536, 45.714825 ) The STGeomFromText function accepts a string that points to a geometric item (a point here, but it can also be a line, path, polygon and many others). You also need to provide an SRID (Spatial Reference System Identifier), which is an integer value that determines the rules for how geography/geometry values are calculated and returned. For mapping/distance functionality you typically want to use 4326, as this is the format used by most mapping software and geo-location libraries like Google and Bing. The spatial data in the Location field is stored in binary format which looks something like this: Once the location data is in the database you can query the data and do simple distance computations very easily. For example, calculating the distance of each of the values in the database to another spatial point is very easy. Distance calculations compare two points in space using a direct line calculation. For our example I'll compare a new point to all the points in the database.
Using the Location field, the SQL looks like this:

-- create a source point
DECLARE @s geography
SET @s = geography::STGeomFromText('POINT(-121.527200 45.712113)', 4326);

-- return the ids
select ID,
       Location as Geo,
       Location.ToString() as Point,
       @s.STDistance(Location) as distance
from Geo
order by distance

The code defines a new point, which is the base point to compare each of the values to. You can also compare values from the database directly, but typically you'll want to match a location to another location and determine the difference, for which you can use the geography::STDistance function. The query returns each row with its Geo value, its point string, and its distance from @s. STDistance returns the straight-line distance between the passed-in point and the point in the database field. The result for SRID 4326 is always in meters. Notice that the first value passed was the same point, so the distance is 0. The other two points are two points here in town in Hood River, a little ways away - 808 and 1256 meters respectively. Notice also that you can order the result by the resulting distance, which effectively gives you results ordered radially outward, from closer to further away. This is great for searches of points of interest near a central location (you, typically!).

These geolocation functions are also available to you if you don't use the Geography/Geometry types but plain float values. It's a little more work, as each point has to be created in the query using the string syntax, but the following code doesn't use a geography field and produces the same result as the previous query:

-- using float fields
select ID,
       geography::STGeomFromText('POINT(' + STR(long, 15, 7) + ' ' + STR(lat, 15, 7) + ')', 4326),
       geography::STGeomFromText('POINT(' + STR(long, 15, 7) + ' ' + STR(lat, 15, 7) + ')', 4326).ToString(),
       @s.STDistance(geography::STGeomFromText('POINT(' + STR(long, 15, 7) + ' ' + STR(lat, 15, 7) + ')', 4326)) as distance
from Geo
order by distance

Spatial Data in the Entity Framework

Prior to Entity Framework 5.0 on .NET 4.5, consuming the data above required using stored procedures or raw SQL commands to access the spatial data. In Entity Framework 5, however, Microsoft introduced the new DbGeometry and DbGeography types. These immutable location types provide a bunch of functionality for manipulating spatial points using geometry functions, which in turn can be used to do common spatial queries like the ones I described in the SQL syntax above. Because the DbGeography/DbGeometry types are immutable, you can't write to them once they've been created. They are a bit odd in that you need to use factory methods in order to instantiate them - they have no constructor and you can't assign to properties like Latitude and Longitude.

Creating a Model with Spatial Data

Let's start by creating a simple Entity Framework model that includes a Location property of type DbGeography:

public class GeoLocationContext : DbContext
{
    public DbSet<GeoLocation> Locations { get; set; }
}

public class GeoLocation
{
    public int Id { get; set; }
    public DbGeography Location { get; set; }
    public string Address { get; set; }
}

That's all there is to it. When you run this against SQL Server, you get a Geography field for the Location property, which looks the same as the Location field in the SQL examples earlier.

Adding Spatial Data to the Database

Next let's add some data to the table that includes some latitude and longitude data.
An easy way to find lat/long locations is to use Google Maps to pinpoint your location, then right-click and choose What's Here. Click on the green marker to get the GPS coordinates. To add the actual geolocation data, create an instance of the GeoLocation type and use the DbGeography.PointFromText() factory method to create a new point to assign to the Location property:

[TestMethod]
public void AddLocationsToDataBase()
{
    var context = new GeoLocationContext();

    // remove all existing rows
    context.Locations.ToList().ForEach(loc => context.Locations.Remove(loc));
    context.SaveChanges();

    var location = new GeoLocation()
    {
        // Create a point using the native DbGeography factory method
        Location = DbGeography.PointFromText(
            string.Format("POINT({0} {1})", -121.527200, 45.712113), 4326),
        Address = "301 15th Street, Hood River"
    };
    context.Locations.Add(location);

    location = new GeoLocation()
    {
        Location = CreatePoint(45.714240, -121.517265),
        Address = "The Hatchery, Bingen"
    };
    context.Locations.Add(location);

    location = new GeoLocation()
    {
        // Create a point using a helper function (lat/long)
        Location = CreatePoint(45.708457, -121.514432),
        Address = "Kaze Sushi, Hood River"
    };
    context.Locations.Add(location);

    location = new GeoLocation()
    {
        Location = CreatePoint(45.722780, -120.209227),
        Address = "Arlington, OR"
    };
    context.Locations.Add(location);

    context.SaveChanges();
}

As promised, a DbGeography object has to be created with one of the static factory methods provided on the type, as the Location.Longitude and Location.Latitude properties are read-only. Here I'm using PointFromText(), which uses a "Well Known Text" format to specify spatial data. In the first example I'm creating a Point from a longitude and latitude value, using an SRID of 4326 (just like earlier in the SQL examples). You'll probably want to create a helper method to make the creation of points easier, to avoid that string format and instead just pass in a couple of double values. Here's my helper called CreatePoint that's used for all but the first point creation in the sample above:

public static DbGeography CreatePoint(double latitude, double longitude)
{
    var text = string.Format(CultureInfo.InvariantCulture.NumberFormat,
                             "POINT({0} {1})", longitude, latitude);

    // 4326 is the most common coordinate system used by GPS/Maps
    return DbGeography.PointFromText(text, 4326);
}

Using the helper, the syntax becomes a bit cleaner, requiring only a latitude and longitude, respectively. Note that my method intentionally swaps the parameters around, because latitude/longitude is the common order I've seen with mapping libraries (especially the Google mapping/geolocation APIs with their LatLng type). When the context's changes are saved, the data is written into the database using the SQL Geography type, which looks the same as in the earlier SQL examples.

Querying

Once you have some location data in the database, it's now super easy to query the data and find out the distance between locations. A common query is to ask for a number of locations that are near a fixed point - typically your current location - ordered by distance.
Using LINQ to Entities, a query like this is easy to construct:

[TestMethod]
public void QueryLocationsTest()
{
    var sourcePoint = CreatePoint(45.712113, -121.527200);
    var context = new GeoLocationContext();

    // find any locations within 5 kilometers, ordered by distance
    var matches = context.Locations
        .Where(loc => loc.Location.Distance(sourcePoint) < 5000)
        .OrderBy(loc => loc.Location.Distance(sourcePoint))
        .Select(loc => new
        {
            Address = loc.Address,
            Distance = loc.Location.Distance(sourcePoint)
        });

    Assert.IsTrue(matches.Count() > 0);

    foreach (var location in matches)
    {
        Console.WriteLine("{0} ({1:n0} meters)",
                          location.Address, location.Distance);
    }
}

This example produces:

301 15th Street, Hood River (0 meters)
The Hatchery, Bingen (809 meters)
Kaze Sushi, Hood River (1,074 meters)

The first point in the database is the same as my source point, so the distance is 0. The other two are within the 5 kilometer radius, while the Arlington location, which is 65 miles or so out, is not returned. The result is ordered by distance, from closest to furthest away. In the code, I first create a source point that is the basis for comparison. The LINQ query then selects all locations that are within 5 km of the source point using the Location.Distance() function, which takes a comparison point as a parameter. You can either use a predefined value, as I'm doing here, or compare against another DbGeography property from the database (say, when you have two points in the same database for things like routes). What's nice about this query syntax is that it's very clean and easy to read and understand. You can calculate the distance and also easily order by it to produce a result that shows locations from closest to furthest away, which is a common scenario for any application that places a user in the context of several locations. It's now super easy to accomplish this.

Meters vs. Miles

As with the SQL Server functions, the Distance() method returns data in meters, so if you need to work with miles or feet you need to do some conversion. Here are a couple of helpers that might be useful (they can be found in GeoUtils.cs of the sample project):

/// <summary>
/// Convert meters to miles
/// </summary>
public static double MetersToMiles(double? meters)
{
    if (meters == null)
        return 0F;

    return meters.Value * 0.000621371192;
}

/// <summary>
/// Convert miles to meters
/// </summary>
public static double MilesToMeters(double? miles)
{
    if (miles == null)
        return 0;

    return miles.Value * 1609.344;
}

Using these two helpers you can query on miles like this:

[TestMethod]
public void QueryLocationsMilesTest()
{
    var sourcePoint = CreatePoint(45.712113, -121.527200);
    var context = new GeoLocationContext();

    // find any locations within 5 miles, ordered by distance
    var fiveMiles = GeoUtils.MilesToMeters(5);

    var matches = context.Locations
        .Where(loc => loc.Location.Distance(sourcePoint) <= fiveMiles)
        .OrderBy(loc => loc.Location.Distance(sourcePoint))
        .Select(loc => new
        {
            Address = loc.Address,
            Distance = loc.Location.Distance(sourcePoint)
        });

    Assert.IsTrue(matches.Count() > 0);

    foreach (var location in matches)
    {
        Console.WriteLine("{0} ({1:n1} miles)",
                          location.Address,
                          GeoUtils.MetersToMiles(location.Distance));
    }
}

which produces:

301 15th Street, Hood River (0.0 miles)
The Hatchery, Bingen (0.5 miles)
Kaze Sushi, Hood River (0.7 miles)

Nice 'n simple.
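The same pattern collapses further when all you need is the single closest match rather than a radius search. Here's a minimal sketch - my own addition, not from the original post - that reuses the GeoLocationContext and CreatePoint helper shown above:

[TestMethod]
public void QueryNearestLocationTest()
{
    var sourcePoint = CreatePoint(45.712113, -121.527200);
    var context = new GeoLocationContext();

    // order all locations by distance and take the first one;
    // EF should compose this into a single TOP-style query using STDistance
    var nearest = context.Locations
        .OrderBy(loc => loc.Location.Distance(sourcePoint))
        .FirstOrDefault();

    Assert.IsNotNull(nearest);
    Console.WriteLine("Nearest: {0} ({1:n0} meters)",
                      nearest.Address,
                      nearest.Location.Distance(sourcePoint));
}

Because FirstOrDefault() executes immediately, this round-trips to the database once rather than pulling back rows you don't need.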
.NET 4.5 Only

Note that DbGeography and DbGeometry are exclusive to Entity Framework 5.0 (not 4.4, which ships in the same NuGet package or installer) and require .NET 4.5. That's because the new DbGeometry and DbGeography (and related) types are defined in the 4.5 version of System.Data.Entity, which is a CLR assembly and is only updated with major versions of .NET. Why the decision was made to add these types to System.Data.Entity, rather than to the frequently updated EntityFramework assembly, which might have made this work on .NET 4.0, is beyond me - especially given that there are no native .NET Framework spatial types to begin with. I also find it odd that there is no native CLR spatial type. The DbGeography and DbGeometry types are specific to Entity Framework and live in its assemblies. They will also work for general-purpose, non-database spatial data manipulation, but then you are forced into taking a dependency on System.Data.Entity, which seems a bit silly. There's also a System.Spatial assembly that's apparently part of WCF Data Services, which in turn doesn't work with Entity Framework. Another example of multiple teams at Microsoft not communicating and implementing the same functionality (differently) in several different places. Perplexed as I may be, for EF-specific code the Entity Framework types are easy to use and work well.

Working with pre-.NET 4.5 Entity Framework and Spatial Data

If you can't go to .NET 4.5 just yet, you can still use spatial features in Entity Framework, but it's a lot more work, as you can't use the DbContext directly to manipulate the location data. You can still run raw SQL statements to write data into the database and retrieve results using the same T-SQL syntax I showed earlier, via Context.Database.ExecuteSqlCommand(). Here's code that you can use to add location data into the database:

[TestMethod]
public void RawSqlEfAddTest()
{
    string sqlFormat =
@"insert into GeoLocations( Location, Address )
values ( geography::STGeomFromText('POINT({0} {1})', 4326), @p0 )";

    var sql = string.Format(sqlFormat, -121.527200, 45.712113);
    Console.WriteLine(sql);

    var context = new GeoLocationContext();
    Assert.IsTrue(context.Database.ExecuteSqlCommand(sql, "301 N. 15th Street") > 0);
}

Here I'm using the STGeomFromText() function to add the location data. Note that I'm using string.Format here, which would usually be bad practice, but it's required in this case. I was unable to use ExecuteSqlCommand() with its parameter syntax, because the longitude and latitude values are embedded inside a string literal. Rest assured it's required, as the following does not work:

string sqlFormat =
@"insert into GeoLocations( Location, Address )
values ( geography::STGeomFromText('POINT(@p0 @p1)', 4326), @p2 )";

context.Database.ExecuteSqlCommand(sql, -121.527200, 45.712113, "301 N. 15th Street");

Explicitly embedding the point value with string.Format works, however. There are a number of ways to query location data. You can't get the location data back directly, but you can retrieve the point string (which can then be parsed to get latitude and longitude) and you can return calculated values like distance.
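Parsing that point string back into numbers is straightforward. Here's a minimal, hypothetical helper - my own sketch, not part of the original sample (it assumes using System and using System.Globalization) - that pulls the longitude and latitude out of the WKT text returned by Location.ToString():

// Parse a WKT point such as "POINT (-121.5272 45.712113)" into doubles.
// Assumes the simple point format shown in the queries above.
public static void ParsePoint(string wkt, out double longitude, out double latitude)
{
    int start = wkt.IndexOf('(') + 1;
    int end = wkt.IndexOf(')');

    var parts = wkt.Substring(start, end - start)
                   .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);

    // WKT order is longitude first, then latitude
    longitude = double.Parse(parts[0], CultureInfo.InvariantCulture);
    latitude = double.Parse(parts[1], CultureInfo.InvariantCulture);
}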
Here's an example of how to retrieve some geo data into a result set using EF's SqlQuery method:

[TestMethod]
public void RawSqlEfQueryTest()
{
    var sqlFormat =
@"DECLARE @s geography
SET @s = geography::STGeomFromText('POINT({0} {1})', 4326);

SELECT Address,
       Location.ToString() as GeoString,
       @s.STDistance(Location) as Distance
FROM GeoLocations
ORDER BY Distance";

    var sql = string.Format(sqlFormat, -121.527200, 45.712113);

    var context = new GeoLocationContext();
    var locations = context.Database.SqlQuery<ResultData>(sql);

    Assert.IsTrue(locations.Count() > 0);

    foreach (var location in locations)
    {
        Console.WriteLine(location.Address + " " +
                          location.GeoString + " " +
                          location.Distance);
    }
}

public class ResultData
{
    public string GeoString { get; set; }
    public double Distance { get; set; }
    public string Address { get; set; }
}

Hopefully you don't have to resort to this approach, as it's fairly limited. Using the new DbGeography/DbGeometry types makes this sort of thing so much easier. When I had to use code like this before, I typically ended up retrieving only the primary keys and then running a second query with just those keys to retrieve the actual underlying DbContext entities. That was inefficient and tedious, but it did work.

Summary

For the current project I'm working on, we actually made the switch to .NET 4.5 purely for the spatial features in EF 5.0. This app relies heavily on spatial queries, and it was worth taking a chance on pre-release code to get this ease of integration, as opposed to manually falling back to stored procedures or raw SQL string queries for the spatial-specific queries. Using native Entity Framework code makes life a lot easier than the alternatives. It might be a late addition to Entity Framework, but it sure makes location calculations and storage easy. Where do you want to go today? ;-)

Resources: Download Sample Project

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in ADO.NET, SQL Server, .NET

    Read the article

  • I Hereby Resolve… (T-SQL Tuesday #14)

    - by smisner
    It’s time for another T-SQL Tuesday, hosted this month by Jen McCown (blog|twitter), on the topic of resolutions. Specifically, “what techie resolutions have you been pondering, and why?” I like that word – pondering – because I ponder a lot. And while there are many things that I do already because of my job, there are many more things that I ponder about doing…if only I had the time. Then I ponder about making time, but then it’s back to work! In 2010, I was moderately more successful in making time for things that I ponder about than I had been in years past, and I hope to continue that trend in 2011. If Jen hadn’t settled on this topic, I could keep my ponderings to myself and no one would ever know the outcome, but she’s egged me on (and everyone else that chooses to participate)! So here goes… For me, having resolve to do something means that I wouldn’t be doing that something as part of my ordinary routine. It takes extra effort to make time for it. It’s not something that I do once and check off a list, but something that I need to commit to over a period of time. So with that in mind, I hereby resolve… To Learn Something New… One of the things I love about my job is that I get to do a lot of things outside of my ordinary routine. It’s a veritable smorgasbord of opportunity! So what more could I possibly add to that list of things to do? Well, the more I learn, the more I realize I have so much more to learn. It would be much easier to remain in ignorant bliss, but I was born to learn. Constantly. (And apparently to teach, too – my father will tell you that as a small child, I had the neighborhood kids gathered together to play school – in the summer. I’m sure they loved that – but they did it!) These are some of the things that I want to dedicate some time to learning this year:

Spatial data. I have a good understanding of how maps in Reporting Services work, and I can cobble together a simple T-SQL spatial query, but I know I’m only scratching the surface here. Rob Farley (blog|twitter) posted interesting examples of combining maps and PivotViewer, and I think there are so many more creative possibilities. I’ve always felt that pictures (including charts and maps) really help people get their minds wrapped around data better, and because a lot of data has a geographic aspect to it, I believe developing some expertise here will be beneficial to my work.

PivotViewer. Not only is PivotViewer combined with maps a useful way to visualize data, but it’s an interesting way to work with data. If you haven’t seen it yet, check out this interactive demonstration using the Netflix OData feed. According to Rob Farley, learning how to work with PivotViewer isn’t trivial. Just the type of challenge I like!

Security. You’ve heard of the accidental DBA? Well, I am the accidental security person – is there a word for that role? My eyes used to glaze over when having to study security, or when reading anything about it. Then I had a problem long ago that no one could figure out – not even the vendor’s tech support – until I rolled up my sleeves and painstakingly worked through the myriad of potential problems to resolve a very thorny security issue. I learned a lot in the process, and have been able to share what I’ve learned with a lot of people. But I’m not convinced their eyes weren’t glazing over, too. I don’t take it personally – it’s just a very dry topic!
So in addition to deepening my understanding about security, I want to find a way to make the subject as it relates to SQL Server and business intelligence more accessible and less boring. Well, there’s actually a lot more that I could put on this list, and a lot more things I have plans to do this coming year, but I run the risk of overcommitting myself. And then I wouldn’t have time… To Have Fun! My name is Stacia and I’m a workaholic. When I love what I do, it’s difficult to separate out the work time from the fun time. But there are some things that I’ve been meaning to do that aren’t related to business intelligence for which I really need to develop some resolve. And they are techie resolutions, too, in a roundabout sort of way! Photography. When my husband and I went on an extended camping trip in 2009 to Yellowstone and the Grand Tetons, I had a nice little digital camera that took decent pictures. But then I saw the gorgeous cameras that other tourists were toting around and decided I needed one too. So I bought a Nikon D90 and have started to learn to use it, but I’m definitely still in the beginning stages. I traveled so much in 2010 and worked on two book projects that I didn’t have a lot of free time to devote to it. I was very inspired by Kimberly Tripp’s (blog|twitter) and Paul Randal’s (blog|twitter) photo-adventure in Alaska, though, and plan to spend some dedicated time with my camera this year. (And hopefully before I move to Alaska – nothing set in stone yet, but we hope to move to a remote location – with Internet access – later this year!) Astronomy. I have this cool telescope, but it suffers the same fate as my camera. I have been gone too much and busy with other things that I haven’t had time to work with it. I’ll figure out how it works, and then so much time passes by that I forget how to use it. I have this crazy idea that I can actually put the camera and the telescope together for astrophotography, but I think I need to start simple by learning how to use each component individually. As long as I’m living in Las Vegas, I know I’ll have clear skies for nighttime viewing, but when we move to Alaska, we’ll be living in a rain forest. I have no idea what my opportunities will be like there – except I know that when the sky is clear, it will be far more amazing than anything I can see in Vegas – even out in the desert - because I’ll be so far away from city light pollution. I’ve been contemplating putting together a blog on these topics as I learn. As many of my fellow bloggers in the SQL Server community know, sometimes the best way to learn something is to sit down and write about it. I’m just stumped by coming up with a clever name for the new blog, which I was thinking about inaugurating with my move to Alaska. Except that I don’t know when that will be exactly, so we’ll just have to wait and see which comes first!

    Read the article

  • You are probably NOT a SharePoint Development Expert if…

    - by Mark Rackley
    So, all you aspiring SharePoint experts out there (especially those of you who put “expert” in your resumes), it’s time for a cold splash of reality. More than likely you are NOT an expert (I know I’m not). Yes, you may have some expertise in certain aspects of SharePoint (it’s questionable if I have THAT some days), but make sure you’ve got the basics down before you start throwing that word “expert” around. I know that it becomes frustrating for those looking to hire SharePoint people, having to sift through all the resumes of those who think very highly of themselves and their skills, only to find those gaping holes in common best practices. I’m much more willing to hire a decent dev who KNOWS they are not an expert than to hire a decent+ dev who THINKS they are an expert. So… I’ve compiled a small reality check for you SharePoint devs, and a “red flag” check for those of you wishing to hire a SharePoint developer. If any of these apply to you, you are probably not a SharePoint Development Expert.

You are not a SharePoint Development Expert if you manually copy your DLLs

Seriously, I don’t care if you write the best code in the world. If you are manually copying files to each web front end, you are NOT a SharePoint Development expert. Yes, I realize the admins are generally the ones who do the actual deployments, but if you don’t know how to create solution packages for your admins, you are going to end up doing more damage than good some day. There are TONS of tools out there to help generate deployable solutions for you. You have ZERO excuse.

You are not a SharePoint Development expert if you can’t tell me the main artifacts of a solution package

Directly related to the first one. If you don’t know what the manifest, DDF, WSP, and feature files are and how they are used in a solution package, you are NOT a SharePoint development expert. I’m not asking you to be able to write them all from scratch (heck, I can’t even do that), but you MUST know what they are and how to tweak them if necessary.

You are not a SharePoint Development expert if you don’t know what a Content Type or a Site Column is

You would be absolutely amazed at how many “expert” SharePoint developers have NEVER EVER created a Content Type or Site Column, or even know what they are. I mean, why would you ever want to create those when you can just do everything as a custom list or custom field, right???? (That’s sarcasm.) You also need to know how to package a Content Type and a Site Column into a deployable package, by the way.

You are not a SharePoint Development expert if you have not created at least one Web Part, Workflow, Timer Job, and Event Handler

If you haven’t written at least one of each, you don’t fully understand what they do or their limitations. Again, I expect NO ONE to be able to write these things blind. I think the last time I wrote an application from scratch, without copying and pasting from another project I had done before, was back in 1994? Seriously, coding is like a sourdough starter: you get it from someone else and keep adding to it.

You are not a SharePoint Development expert if you don’t know how to properly dispose of objects

Another biggie with zero excuse for getting it wrong. It is so well known that you must dispose of your SPWeb and SPSite objects that if you aren’t doing it, then you are not an expert.
Heck, if you utilize “using” when handling SPWeb and SPSite objects and don’t realize that it disposes of those objects for you, then you are not a SharePoint Development expert. (A minimal sketch of the disposal and elevation patterns follows at the end of this excerpt.)

You are not a SharePoint Development expert if you do not know how to properly elevate privileges

Just one of those development basics that any decent SharePoint developer has got to have down, along with an understanding of how and why it’s used.

You are not a SharePoint Development expert if you don’t know all of the development options available to SharePoint and when they should be used

Okay… so all you hard-core .NET SharePoint dev geeks, take a moment to listen. You may be the most top-notch SharePoint .NET developer in the world, but if you are opening Visual Studio to solve every problem in SharePoint, then you are NOT a SharePoint development expert. The SharePoint developer’s tool kit is growing every day, with tools like Visual Studio, Data View Web Parts, XSL, jQuery, SPServices, etc. etc. If you don’t have the ability to at least recognize that “hey, you can basically do the same thing here by just dropping in Easy Tabs instead of writing some weird web part”, then you are NOT a SharePoint Development expert AND you are doing a huge disservice to your clients and customers.

You are probably NOT a SharePoint Development expert if you call yourself an Expert

So, truth-telling time: I’m not an expert. There, I said it. I feel so much better. Now, I realize the word “expert” has been used with my name before, but I am quick to point out that I KNOW the experts and know that they will help me if I need it; I’m not an expert in all things SharePoint. The minute you take on that moniker you are setting yourself up for a fall. It’s too big, there’s too much to know, and there’s WAY too much you can do wrong.

You are not a SharePoint Development expert if you are not involved in the community

I expect to get the most flack for this one, but it’s always a huge red flag for me when someone says they are an expert and has ZERO knowledge of the SharePoint community. The SharePoint community is ABSOLUTELY CRITICAL to being an effective SharePoint developer, admin, architect, power user or whatever the heck you are!! The community keeps you sane, tells you when you are NOT using a best practice, recommends the best practice, and even knows when Microsoft is giving you the wrong information (*gasp* it does happen). If you can’t tell me who you are following on Twitter, whose blogs you read, what conferences you attend, or name the experts you monitor to make sure you are not doing something stupid, then you are probably doing something stupid. Again, I’m not asking you to be a speaker, blogger, or the least bit extroverted, but you should be at LEAST stalking the experts.

So… what’s the point?

So… yeah… what’s my point in all this? Well, first of all, let me point out that this is by far not a finished list, and I could come up with a LOT more specific “deep dive” questions, but these should be high-level enough that even non-experts can recognize and ask them. If you have some common ones you run into, let me know and add them in the comments below. Also, keep in mind I’m not saying you as a developer HAVE to know EVERYTHING, but you DO need to know what you don’t know and proudly and honestly state “I don’t know, but I’ll learn and find out”. Those of us who hire SharePoint developers, and know and have a passion for SharePoint, are not looking for that elusive “expert” who knows everything.
We are looking for someone who “gets it”, has a similar passion, a great attitude, an understanding that they DON’T know everything, and a desire to do it right. I would bet money that most SharePoint development disasters happen because of “experts” who think they know everything, rather than the developer who is cautious and knows he doesn’t. Lastly, I know there’s a raging debate over what a “SharePoint Developer” is (I should know, as I keep bringing it up). So, obviously this blog post is more closely tied to the .NET side of SharePoint development and less toward the client side, middle tier, or whatever you want to call it. So, let’s please not get that argument going here as well… Thanks
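For readers who want the disposal and elevation basics mentioned above in one place, here’s a minimal sketch (my own addition, not Mark’s). It assumes the SharePoint server object model (using Microsoft.SharePoint), and the site URL is a placeholder:

// Proper disposal: "using" disposes SPSite/SPWeb automatically,
// so there's no need to call Dispose() yourself.
using (SPSite site = new SPSite("http://server/sites/demo"))
using (SPWeb web = site.OpenWeb())
{
    Console.WriteLine(web.Title);
}

// Proper elevation: code inside the delegate runs under the
// application pool account. New SPSite/SPWeb objects must be created
// INSIDE the delegate to pick up the elevated context - and they
// still need to be disposed.
SPSecurity.RunWithElevatedPrivileges(delegate()
{
    using (SPSite elevatedSite = new SPSite("http://server/sites/demo"))
    using (SPWeb elevatedWeb = elevatedSite.OpenWeb())
    {
        // do privileged work here
    }
});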

    Read the article

  • Blend for Visual Studio 2013 Prototyping Applications with SketchFlow

    - by T
    Originally posted on: http://geekswithblogs.net/tburger/archive/2014/08/10/blend-for-visual-studio-2013-prototyping-applications-with-sketchflow.aspx

SketchFlow enables you to create dynamic interface mockups very quickly. The SketchFlow workspace is the same as the standard Blend workspace, with the inclusion of three panels: the SketchFlow Feedback panel, the SketchFlow Animation panel and the SketchFlow Map panel. By using SketchFlow to prototype, you can get feedback early in the process. It helps to surface possible issues, lower the number of development iterations, and increase stakeholder buy-in. SketchFlow prototypes not only provide an initial look but also provide a way to add additional ideas and input and make sure the team is on track prior to investing in complete development. When you have completed the prototyping, you can discard the prototype and just use the lessons learned to design the application, or extract individual elements from your prototype and include them in the application. I don’t recommend trying to transition the entire project into a development project.

Objects that you add with the SketchFlow style have a hand-sketched look. The sketch style is used to remind stakeholders that this is a prototype, which encourages them to focus on the flow and functionality without getting distracted by design details. The SketchFlow assets are under SketchFlow in the Assets panel and are identifiable by the suffix “-Sketch” - for example, “Button-Sketch”. You can mix sketch and standard controls in your interface if required. Be creative: if there is a missing control, or your interface has a different look and feel than the out-of-the-box one, reuse other sketch controls to mimic the functionality or look and feel. Only use standard controls if it doesn’t distract from the idea that this is a prototype and not a standard application.

The SketchFlow Map panel provides information about the structure of your application. To create a new screen in your prototype, right-click the map surface and choose “Create a Connected Screen”. Name the screens with names that are meaningful to the stakeholders. The start screen is the one that has the green arrow. To change the start screen, right-click on any other screen and set it as the start screen; only one screen can be the start screen at a time. Rounded screens are component screens that mimic reusable custom objects that will be built into the final application. You can change the colors of all of the boxes, and you should use colors to create functional groupings. The groupings can be identified in the SketchFlow Project Settings.

To add connections between screens in the SketchFlow Map panel, move the mouse over a screen and a menu will appear at the bottom of the screen node. In the menu, click Connect to an existing screen, then drag the arrow to another screen on the map. You add navigation to your prototype by adding connections on the SketchFlow Map or by adding navigation directly to items on your interface. To add navigation from objects on the artboard, right-click the item and, from the menu, choose “Navigate to”. This exposes a sub-menu with the available screens, plus backward and forward. When the map has connected screens, the SketchFlow Player displays the connected screens on the Navigate sidebar. All screens show in the SketchFlow Player map. To see the SketchFlow Player, run your SketchFlow prototype. The Navigation sidebar is meant to show the desired user workflow.
The map can be used to view the different screens regardless of the suggested navigation in the navigation bar, and it can be hidden and shown. As mentioned, a component screen is a shared screen that is used in more than one screen and generally represents what will be a custom object in the application. To create a component screen, create a screen, right-click on it in the SketchFlow Map and choose “Make into component screen”. Alternatively, you can mouse over a screen and, from the menu that appears underneath, choose “Create and insert component screen”. To use an existing component screen, select it from the Assets panel under SketchFlow, Components.

You can use Storyboards and Visual State animations in your SketchFlow project. However, SketchFlow also offers its own animation technique that is simpler and better suited to prototyping. The SketchFlow Animation panel is above your artboard by default. In a SketchFlow animation, you create frames and then position the elements on your interface for each frame. You then specify the elapsed time and any effects you want to apply to the transition. The + at the top is what creates new frames. Once you have a new frame, select it and change the property you want to animate. You can adjust the time between frames in the lower area between the frames; the easing and effects functions are changed in the center between each frame. You edit the hold time for frames by clicking the clock icon in the lower left, after which the hold time appears on each frame and can be edited. The FluidLayout icon (also located in the lower left) will create smooth transitions. Next to the FluidLayout icon is the name of the animation; you can rename it by clicking on it and editing the name. The down-arrow chevrons next to the name let you view the list of all animations in this prototype and select one for editing.

To add the animation to an interface object (such as a button that starts the animation), select the PlaySketchFlowAnimationAction from the SketchFlow behaviors in the Assets menu and drag it to an object on your interface. With the PlaySketchFlowAnimationAction that you just added selected in the Objects and Timeline panel, edit the properties to change the EventName to the event you want, and choose the SketchFlowAnimation you want from the drop-down list.

You may want to add additional information to your screens that isn’t really part of the prototype but is relevant information, or a request for clarification or feedback from the reviewer. You do this with annotations or notes. Both appear on the user interface; however, annotations can be switched on or off at design and review time, while notes cannot be switched off. To add an annotation, choose Create Annotation from the Tools menu; the annotation appears on the UI, where you can add your notes. To display or hide annotations, click the annotation toggle at the bottom right of the artboard. After you toggle annotations on, the identifier of the person who created them appears on the artboard, and you must click it to expand the notes. To add a note to the artboard, simply select the Note-Sketch from Assets -> SketchFlow -> Styles -> Sketch Styles, then drag and drop it onto the artboard and place it where you want it.

When you are ready for users to review the prototype, you have a few options available. Click File -> Export and choose one of the options from the list: Publish to SharePoint, Package SketchFlow Project, Export to Microsoft Word, or Export as Images.
I suggest you play with as many of the options as you can to see what they do. Both the SharePoint and packaged SketchFlow project options allow you to collect feedback from one or more users that you can import into the project. The user can make notes on the UI and in the Feedback area in the bottom left corner of the player. When the user is done adding feedback, it is exported from the right-most folder icon in the My Feedback panel. Feedback is imported into a panel named SketchFlow Feedback. To get that panel to show up, select Window -> SketchFlow Feedback. Once you have the panel showing, click the + in the upper right of the panel and find the notes you exported. When imported, they will show up in a list and on the artboard. To document your prototype, use the Export to Microsoft Word option from the File menu. That should get you started with prototyping.

    Read the article
