Search Results

Search found 54188 results on 2168 pages for 'sqlite net'.


  • Parallelism in .NET – Part 17, Think Continuations, not Callbacks

    - by Reed
    In traditional asynchronous programming, we’d often use a callback to handle notification of a background task’s completion.  The Task class in the Task Parallel Library introduces a cleaner alternative to the traditional callback: continuation tasks. Asynchronous programming methods typically required callback functions.  For example, MSDN’s Asynchronous Delegates Programming Sample shows a class that factorizes a number.  The original method in the example has the following signature: public static bool Factorize(int number, ref int primefactor1, ref int primefactor2) { //... However, calling this is quite “tricky”, even if we modernize the sample to use lambda expressions via C# 3.0.  Normally, we could call this method like so: int primeFactor1 = 0; int primeFactor2 = 0; bool answer = Factorize(10298312, ref primeFactor1, ref primeFactor2); Console.WriteLine("{0}/{1} [Succeeded {2}]", primeFactor1, primeFactor2, answer); If we want to make this operation run in the background, and report to the console via a callback, things get trickier.  First, we need a delegate definition: public delegate bool AsyncFactorCaller( int number, ref int primefactor1, ref int primefactor2); Then we need to use BeginInvoke to run this method asynchronously: int primeFactor1 = 0; int primeFactor2 = 0; AsyncFactorCaller caller = new AsyncFactorCaller(Factorize); caller.BeginInvoke(10298312, ref primeFactor1, ref primeFactor2, result => { int factor1 = 0; int factor2 = 0; bool answer = caller.EndInvoke(ref factor1, ref factor2, result); Console.WriteLine("{0}/{1} [Succeeded {2}]", factor1, factor2, answer); }, null); This works, but is quite difficult to understand from a conceptual standpoint.  To combat this, the framework added the Event-based Asynchronous Pattern, but it isn’t much easier to understand or author. Using .NET 4’s new Task<T> class and a continuation, we can dramatically simplify the implementation of the above code, as well as make it much more understandable.  We do this via the Task.ContinueWith method.  This method will schedule a new Task upon completion of the original task, and provide the original Task (including its Result if it’s a Task<T>) as an argument.  Using Task, we can eliminate the delegate, and rewrite this code like so: var background = Task.Factory.StartNew( () => { int primeFactor1 = 0; int primeFactor2 = 0; bool result = Factorize(10298312, ref primeFactor1, ref primeFactor2); return new { Result = result, Factor1 = primeFactor1, Factor2 = primeFactor2 }; }); background.ContinueWith(task => Console.WriteLine("{0}/{1} [Succeeded {2}]", task.Result.Factor1, task.Result.Factor2, task.Result.Result)); This is much simpler to understand, in my opinion.  Here, we’re explicitly asking to start a new task, then continue the task with a resulting task.
In our case, our method used ref parameters (this was from the MSDN Sample), so there is a little bit of extra boilerplate involved, but the code is at least easy to understand. That being said, this isn’t dramatically shorter when compared with our C# 3 port of the MSDN code above.  However, if we were to extend our requirements a bit, we can start to see more advantages to the Task based approach.  For example, suppose we need to report the results in a user interface control instead of reporting it to the Console.  This would be a common operation, but now, we have to think about marshaling our calls back to the user interface.  This is probably going to require calling Control.Invoke or Dispatcher.Invoke within our callback, forcing us to specify a delegate within the delegate.  The maintainability and ease of understanding drops.  However, just as a standard Task can be created with a TaskScheduler that uses the UI synchronization context, so too can we continue a task with a specific context.  There are Task.ContinueWith method overloads which allow you to provide a TaskScheduler.  This means you can schedule the continuation to run on the UI thread, by simply doing: Task.Factory.StartNew( () => { int primeFactor1 = 0; int primeFactor2 = 0; bool result = Factorize(10298312, ref primeFactor1, ref primeFactor2); return new { Result = result, Factor1 = primeFactor1, Factor2 = primeFactor2 }; }).ContinueWith(task => textBox1.Text = string.Format("{0}/{1} [Succeeded {2}]", task.Result.Factor1, task.Result.Factor2, task.Result.Result), TaskScheduler.FromCurrentSynchronizationContext()); This is far more understandable than the alternative.  By using Task.ContinueWith in conjunction with TaskScheduler.FromCurrentSynchronizationContext(), we get a simple way to push any work onto a background thread, and update the user interface on the proper UI thread.  This technique works with Windows Presentation Foundation as well as Windows Forms, with no change in methodology.
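    The ref parameters are only there because the MSDN sample defines Factorize that way. As a minimal sketch (not from the article – TryFactorize and FactorResult are illustrative names, assumed to live in the same class as Factorize), wrapping the call in a method that returns a small result type removes that boilerplate entirely and lets Task<T> carry the result straight to the continuation:

      using System;
      using System.Threading.Tasks;

      // Illustrative wrapper types; assumes the original Factorize(int, ref int, ref int) is in scope.
      public class FactorResult
      {
          public bool Succeeded;
          public int Factor1;
          public int Factor2;
      }

      public static FactorResult TryFactorize(int number)
      {
          int factor1 = 0;
          int factor2 = 0;
          bool succeeded = Factorize(number, ref factor1, ref factor2);
          return new FactorResult { Succeeded = succeeded, Factor1 = factor1, Factor2 = factor2 };
      }

      public static void FactorizeInBackground()
      {
          // Task<FactorResult> carries the result; the continuation simply consumes it.
          Task.Factory.StartNew(() => TryFactorize(10298312))
              .ContinueWith(task =>
                  Console.WriteLine("{0}/{1} [Succeeded {2}]",
                      task.Result.Factor1, task.Result.Factor2, task.Result.Succeeded));
      }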

    Read the article

  • MVC : Does Code to save data in cache or session belongs in controller?

    - by newbie
    I'm a bit confused whether the session-saving code below belongs in the controller action as shown, or whether it should be part of my Model. I would add that I have other controller methods that will read this session value later. public ActionResult AddFriend(FriendsContext viewModel) { if (!ModelState.IsValid) { return View(viewModel); } // Start - Confused if the code block below belongs in Controller? Friend friend = new Friend(); friend.FirstName = viewModel.FirstName; friend.LastName = viewModel.LastName; friend.Email = viewModel.UserEmail; HttpContext.Session["latest-friend"] = friend; // End Confusion return RedirectToAction("Home"); } I thought about adding a static utility class in my Model which does something like the code below, but it just seems stupid to add 2 lines of code in another file. public static void SaveLatestFriend(Friend friend, HttpContextBase httpContext) { httpContext.Session["latest-friend"] = friend; } public static Friend GetLatestFriend(HttpContextBase httpContext) { return httpContext.Session["latest-friend"] as Friend; }
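    One common answer, sketched below with made-up names (ILatestFriendStore and SessionLatestFriendStore are not from the question), is to hide the session access behind a small wrapper that the controller calls, so the action stays thin and the storage mechanism can be swapped or mocked in tests:

      using System.Web;

      // Hypothetical wrapper; the interface and class names are illustrative.
      public interface ILatestFriendStore
      {
          void Save(Friend friend);
          Friend GetLatest();
      }

      public class SessionLatestFriendStore : ILatestFriendStore
      {
          private const string Key = "latest-friend";
          private readonly HttpContextBase httpContext;

          public SessionLatestFriendStore(HttpContextBase httpContext)
          {
              this.httpContext = httpContext;
          }

          public void Save(Friend friend)
          {
              // Session stays an implementation detail of this class.
              httpContext.Session[Key] = friend;
          }

          public Friend GetLatest()
          {
              return httpContext.Session[Key] as Friend;
          }
      }

    The action then maps the view model to a Friend and calls store.Save(friend), and the later actions call store.GetLatest(); whether the wrapper lives in the model layer or an infrastructure folder is mostly taste, the point being that the controller no longer knows the value is kept in Session.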

    Read the article

  • add c# user control to existing asp.net vb.net project

    - by Fidel
    Hello, I've got an existing asp.net project written in vb.net. Another person has written a user control in c#. Could you please let me know the steps for adding that C# user control to the vb.net app? I've tried copying them to the folder and using "Add existing item", however it doesn't compile the code behind at all. Thanks, Fidel

    Read the article

  • An Unusual UpdatePanel

    - by João Angelo
    The code you are about to see was mostly to prove a point, to myself, and probably has limited applicability. Nonetheless, in the remote possibility this is useful to someone here it goes… So this is a control that acts like a normal UpdatePanel where all child controls are registered as postback triggers except for a single control specified by the TriggerControlID property. You could basically achieve the same thing by registering all controls as postback triggers in the regular UpdatePanel. However with this, that process is performed automatically. Finally, here is the code: public sealed class SingleAsyncTriggerUpdatePanel : WebControl, INamingContainer { public string TriggerControlID { get; set; } [TemplateInstance(TemplateInstance.Single)] [PersistenceMode(PersistenceMode.InnerProperty)] public ITemplate ContentTemplate { get; set; } public override ControlCollection Controls { get { this.EnsureChildControls(); return base.Controls; } } protected override void CreateChildControls() { if (string.IsNullOrWhiteSpace(this.TriggerControlID)) throw new InvalidOperationException( "The TriggerControlId property must be set."); this.Controls.Clear(); var updatePanel = new UpdatePanel() { ID = string.Concat(this.ID, "InnerUpdatePanel"), ChildrenAsTriggers = false, UpdateMode = UpdatePanelUpdateMode.Conditional, ContentTemplate = this.ContentTemplate }; updatePanel.Triggers.Add(new SingleControlAsyncUpdatePanelTrigger { ControlID = this.TriggerControlID }); this.Controls.Add(updatePanel); } } internal sealed class SingleControlAsyncUpdatePanelTrigger : UpdatePanelControlTrigger { private Control target; private ScriptManager scriptManager; public Control Target { get { if (this.target == null) { this.target = this.FindTargetControl(true); } return this.target; } } public ScriptManager ScriptManager { get { if (this.scriptManager == null) { var page = base.Owner.Page; if (page != null) { this.scriptManager = ScriptManager.GetCurrent(page); } } return this.scriptManager; } } protected override bool HasTriggered() { string asyncPostBackSourceElementID = this.ScriptManager.AsyncPostBackSourceElementID; if (asyncPostBackSourceElementID == this.Target.UniqueID) return true; return asyncPostBackSourceElementID.StartsWith( string.Concat(this.target.UniqueID, "$"), StringComparison.Ordinal); } protected override void Initialize() { base.Initialize(); foreach (Control control in FlattenControlHierarchy(this.Owner.Controls)) { if (control == this.Target) continue; bool isApplicableControl = false; isApplicableControl |= control is INamingContainer; isApplicableControl |= control is IPostBackDataHandler; isApplicableControl |= control is IPostBackEventHandler; if (isApplicableControl) { this.ScriptManager.RegisterPostBackControl(control); } } } private static IEnumerable<Control> FlattenControlHierarchy( ControlCollection collection) { foreach (Control control in collection) { yield return control; if (control.Controls.Count > 0) { foreach (Control child in FlattenControlHierarchy(control.Controls)) { yield return child; } } } } } You can use it like this, meaning that only the B2 button will trigger an async postback: <cc:SingleAsyncTriggerUpdatePanel ID="Test" runat="server" TriggerControlID="B2"> <ContentTemplate> <asp:Button ID="B1" Text="B1" runat="server" OnClick="Button_Click" /> <asp:Button ID="B2" Text="B2" runat="server" OnClick="Button_Click" /> <asp:Button ID="B3" Text="B3" runat="server" OnClick="Button_Click" /> <asp:Label ID="LInner" Text="LInner" runat="server" /> </ContentTemplate> 
</cc:SingleAsyncTriggerUpdatePanel>
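    The markup wires all three buttons to Button_Click, but the post does not show the handler. A hypothetical code-behind like the sketch below (my addition, not part of the original post) makes the behaviour easy to observe: clicking B2 refreshes the panel asynchronously, while B1 and B3 cause full postbacks because the control registered them as postback triggers.

      // Hypothetical page code-behind for the markup above. Because the ContentTemplate is
      // declared with TemplateInstance.Single, LInner is directly accessible from the page.
      protected void Button_Click(object sender, EventArgs e)
      {
          var button = (Button)sender;
          LInner.Text = string.Format("{0} clicked at {1:HH:mm:ss}", button.ID, DateTime.Now);
      }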

    Read the article

  • Unrecognized Token : "#" C# to SQLITE

    - by user2788405
    Background + Problem: (Beginner here) I want to populate sqlite db with values from a list I have in my c# code. Here's the sqlite code taken from finisarsqlite website: I modified it a bit by creating my own column names like "seq#" etc. But I'm getting the following error: "unrecognized token #" Maybe my syntax is off? Code: // [snip] - As C# is purely object-oriented the following lines must be put into a class: // We use these three SQLite objects: SQLiteConnection sqlite_conn; SQLiteCommand sqlite_cmd; SQLiteDataReader sqlite_datareader; // create a new database connection: sqlite_conn = new SQLiteConnection("Data Source=database.db;Version=3;New=True;Compress=True;"); // open the connection: sqlite_conn.Open(); // create a new SQL command: sqlite_cmd = sqlite_conn.CreateCommand(); // Let the SQLiteCommand object know our SQL-Query: sqlite_cmd.CommandText = "CREATE TABLE table1 (Seq# integer primary key, Field integer primary key, Description integer primary key);"; // Now lets execute the SQL ;D sqlite_cmd.ExecuteNonQuery(); <<< ---- This is where Error Occurs ! // Lets insert something into our new table: sqlite_cmd.CommandText = "INSERT INTO table1 (Seq#, Field, Description) VALUES (list[0], list[1], list[2]);"; // And execute this again ;D sqlite_cmd.ExecuteNonQuery(); // We are ready, now lets cleanup and close our connection: sqlite_conn.Close(); }
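    A few things stand out, and the sketch below shows one possible fix (same System.Data.SQLite provider, and "list" is assumed to be the asker's in-memory list): SQLite stops at the "#" because Seq# is not a plain identifier (it would need quoting, e.g. [Seq#]), a table can have only one PRIMARY KEY, and list[0] inside the SQL string is sent as literal text rather than the C# values. Renaming the column and binding parameters avoids all three problems:

      using System.Data.SQLite;

      // Sketch: plain column names, a single primary key, and parameterised inserts.
      using (var sqlite_conn = new SQLiteConnection("Data Source=database.db;Version=3;New=True;Compress=True;"))
      {
          sqlite_conn.Open();

          using (var sqlite_cmd = sqlite_conn.CreateCommand())
          {
              sqlite_cmd.CommandText =
                  "CREATE TABLE table1 (Seq INTEGER PRIMARY KEY, Field TEXT, Description TEXT);";
              sqlite_cmd.ExecuteNonQuery();

              sqlite_cmd.CommandText =
                  "INSERT INTO table1 (Seq, Field, Description) VALUES (@seq, @field, @desc);";
              sqlite_cmd.Parameters.AddWithValue("@seq", list[0]);
              sqlite_cmd.Parameters.AddWithValue("@field", list[1]);
              sqlite_cmd.Parameters.AddWithValue("@desc", list[2]);
              sqlite_cmd.ExecuteNonQuery();
          }
      }

    Binding parameters also means quotes and other special characters in the list values cannot break the SQL text.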

    Read the article

  • iPhone SQLite Database with German umlauts results in NULL value

    - by Daniel
    Hi guys, I'm quite new to iPhone development and after searching for an answer for 3 hours now, I hope that you guys can give me a hand. My problem is that I have a SQLite Database with German umlauts. Looking at it with a SQLite browser tool shows me that the data is stored with German umlauts correctly. But selecting fields with German umlauts in them results in a NULL value. I'm already using "stringWithUTF8String", so I don't get the point where the problem is placed. Here is my code: -(void) readSearchFromDatabase { searchFlag = YES; // Setup the database object sqlite3 *database; // Init the SCode Array searchSCodes = [[NSMutableArray alloc] init]; // Open the database from the user's filesystem if(sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) { NSString *wildcard =@"%"; // Setup the SQL Statement and compile it NSString *sql = [NSString stringWithFormat:@"SELECT * FROM scode WHERE ta LIKE '%@%@%@' OR descriptionde LIKE '%@%@%@' OR descriptionen LIKE '%@%@%@'", wildcard, searchBar.text, wildcard, wildcard, searchBar.text, wildcard, wildcard, searchBar.text, wildcard, wildcard, searchBar.text, wildcard]; //Creating a const char SQL Statement especially for SQLite const char *sqlStatement = [sql UTF8String]; sqlite3_stmt *compiledStatement; if(sqlite3_prepare_v2(database, sqlStatement, -1, &compiledStatement, NULL) == SQLITE_OK) { // Loop through the results and add them to the feeds array while(sqlite3_step(compiledStatement) == SQLITE_ROW) { // Read the data from the result row NSString *aTa = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 0)]; NSString *aReport = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 1)]; NSString *aDescriptionDE = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 2)]; NSString *aDescriptionEN = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 3)]; // Results in a NULL value NSLog(@"Output: %@", aDescriptionDE); // Create a new SCode object with the data from the database SCode *searchSCode = [[SCode alloc] initWithTa:aTa report:aReport descriptionDE:aDescriptionDE descriptionEN:aDescriptionEN]; // Add the SCode object to the SCodes Array [searchSCodes addObject:searchSCode]; [searchSCode release]; } } // Release the compiled statement from memory sqlite3_finalize(compiledStatement); } sqlite3_close(database); }

    Read the article

  • python / sqlite - database locked despite large timeouts

    - by Chris Phillips
    Hi, I'm sure I'm missing something pretty obvious, but I can't for the life of me stop my pysqlite scripts crashing out with a database is locked error. I have two scripts, one to load data into the database, and one to read data out, but both will frequently, and instantly, crash depending on what the other is doing with the database at any given time. I've got the timeout on both scripts set to 30 seconds: cx = sqlite.connect("database.sql", timeout=30.0) and think I can see some evidence of the timeouts in that I get what appears to be a timing stamp (e.g. 0.12343827e-06 0.1 - and how do I stop that being printed?) dumped occasionally in the middle of my curses formatted output screen, but no delay that ever gets remotely near the 30 second timeout, but still one or the other keeps crashing again and again from this. I'm running RHEL5.4 on a 64 bit 4 cpu HS21 IBM blade, and have heard some mention of issues with multi-threading and am not sure if this might be relevant. Packages in use are sqlite-3.3.6-5 and python-sqlite-1.1.7-1.2.1, and upgrading to newer versions outside of RedHat's official provisions is not a great option for me. Possible, but not desirable due to the environment in general. I previously had autocommit=1 on in both scripts, but have since disabled it on both, and am now cx.commit()ing on the inserting script and not committing on the select script. Ultimately as I only ever have one script actually making any modifications, I don't really see why this locking should ever happen. I have noticed that this is significantly worse over time when the database has gotten larger. It was recently at 13mb with 3 equal sized tables, which was about 1 day's worth of data. Creating a new file has significantly improved this, which seems understandable, but the timeout ultimately just doesn't seem to be being obeyed. Any pointers very much appreciated. Thanks Chris

    Read the article

  • SQLite self-join performance

    - by Derk
    What I essentially want is to retrieve all features and values of products which have a particular feature and value. For example: I want to know all available hard drive sizes of products that have an Intel processor. I have three tables: product_to_value (product_id, feature_id, value_id) features (id, value) // for example Processor family, Storage size, etc. values (id, value) // for example Intel, 60GB, etc The simplified query I have now: SELECT features.name, featurevalues.name, featurevalues.value FROM products, products as prod2, features, features as feat2, values, values as val2 WHERE products.feature = features.id AND products.value = values.id AND products.product = prod2.product AND prod2.feature_id = feat2.id AND prod2.value_id = val2.id AND features.id = ? AND feat2.id = ? All columns have an index. I am using SQLite. The problem is that it's very slow (70ms per query, without the self-join it's <1ms). Is there a smarter way to fetch data like this? Or is this too much to ask from SQLite? I personally think I am simply overlooking something, as I am quite new to SQLite.

    Read the article

  • Distance between Long Lat coord using SQLITE

    - by munchine
    I've got an sqlite db with long and lat of shops and I want to find out the closest 5 shops. So the following code works fine. if(sqlite3_prepare_v2(db, sqlStatement, -1, &compiledStatement, NULL) == SQLITE_OK) { while (sqlite3_step(compiledStatement) == SQLITE_ROW) { NSString *branchStr = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 0)]; NSNumber *fLat = [NSNumber numberWithFloat:(float)sqlite3_column_double(compiledStatement, 1)]; NSNumber *fLong = [NSNumber numberWithFloat:(float)sqlite3_column_double(compiledStatement, 2)]; NSLog(@"Address %@, Lat = %@, Long = %@", branchStr, fLat, fLong); CLLocation *location1 = [[CLLocation alloc] initWithLatitude:currentLocation.coordinate.latitude longitude:currentLocation.coordinate.longitude]; CLLocation *location2 = [[CLLocation alloc] initWithLatitude:[fLat floatValue] longitude:[fLong floatValue]]; NSLog(@"Distance i meters: %f", [location1 getDistanceFrom:location2]); [location1 release]; [location2 release]; } } I know the distance from where I am to each shop. My question is. Is it better to put the distance back into the sqlite row, I have the row when I step thru the database. How do I do that? Do I use the UPDATE statement? Does someone have a piece of code to help me. I can read the sqlite into an array and then sort the array. Do you recommend this over the above approach? Is this more efficient? Finally, if someone has a better way to get the closest 5 shops, love to hear it.

    Read the article

  • Application passwords and SQLite security

    - by Bryan
    I have been searching on google for information regarding application passwords and SQLite security for some time, and nothing that I have found has really answered my questions. Here is what I am trying to figure out: 1) My application is going to have an optional password activity that will be called when the application is first opened. My questions for this are a) If I store the password via android preference or SQLite database, how can I ensure security and privacy for the password, and b) how should password recovery be handled? Regarding b) from above, I have thought about requiring an email address when the password feature is enabled, and also a password hint question for use when requesting password recovery. Upon successfully answering the hint question, the password is then emailed to the email address that was submitted. I am not completely confident in the security and privacy of the email method, especially if the email is sent when the user is connected to an open, public wireless network. 2) My application will be using an SQLite database, which will be stored on the SD card if the user has one. Regardless of whether it is stored on the phone or the SD card, what options do I have for data encryption, and how does that affect the application performance? Thanks in advance for time taken to answer these questions. I think that there may be other developers struggling with the same concerns.

    Read the article

  • SQLite for personal use

    - by ALife
    What are the applications for your personal use that need a small database like SQLite? I am thinking of trying a few popular databases and SQLite is surely the first one I am planning to try since I know barely anything about databases except some simple programming years ago. I learned that SQLite is good for personal use. But embarrassingly I do not see any application except maybe managing my list of phone numbers/contact info, which has probably a few hundred items. What's your experience? FYI, I use EndNote for my reference and softcopy of books, and I feel iTunes' music/media management is ok since I am not a frequent user anyway. And others? I do lots of coding, but I just use some simple etags tools for that. And I pretty much use .txt files (sometimes in the asciidoc style) for my notes. I have quite a bunch of notes, but not that many either. So, really, what are your personal applications that need a small database instead of existing tools and plain text files?

    Read the article

  • Pre-populate iPhone Safari SQLite DB

    - by Matt Rogish
    I'm working with a PhoneGap app that uses Safari local storage (SQLite DB) via JavaScript: http://developer.apple.com/safari/library/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/UsingtheJavascriptDatabase/UsingtheJavascriptDatabase.html On first load, the app creates the database, tables, and populates the data via a series of INSERT statements. If the user closes the app while this processing is happening, then my app database is left in an inconsistent state. What I prefer to do is deploy the SQLite DB as part of my iTunes App packaging so nothing must be populated at app cold start. However, I'm not sure if that is possible -- all of the Google hits for this topic that I can find are referring to the core-data provided SQLite which is not what we're using... If it's not possible, could I wrap the entire thing in a transaction and keep re-trying it when the app is restarted? Failing that, I guess I can create a simple table with one boolean column "is_app_db_loaded?" and set it to true after I've processed all my inserts. But that's really gross... Ideas? Thanks!!

    Read the article

  • How SQLite on Android handles long strings?

    - by Levara
    I'm wondering how Android's implementation of SQLite handles long Strings. Reading from online documentation on sqlite, it said that strings in sqlite are limited to 1 million characters. My strings are definitely smaller. I'm creating a simple RSS application, and after parsing an HTML document, and extracting text, I'm having problems saving it to a database. I have 2 tables in database, feeds and articles. RSS feeds are correctly saved and retrieved from feeds table, but when saving to the articles table, logcat is saying that it cannot save extracted text to its column. I don't know if other columns are making problems too, no mention of them in logcat. I'm wondering, since the text is from an article on the web, are signs like (",',;) creating problems? Is Android automatically escaping them, or do I have to do that? I'm using a technique for inserting similar to one in notepad tutorial: public long insertArticle(long feedid, String title, String link, String description, String h1, String h2, String h3, String p, String image, long date) { ContentValues initialValues = new ContentValues(); initialValues.put(KEY_FEEDID, feedid); initialValues.put(KEY_TITLE, title); initialValues.put(KEY_LINK, link); initialValues.put(KEY_DESCRIPTION, description ); initialValues.put(KEY_H1, h1 ); initialValues.put(KEY_H2, h2); initialValues.put(KEY_H3, h3); initialValues.put(KEY_P, p); initialValues.put(KEY_IMAGE, image); initialValues.put(KEY_DATE, date); return mDb.insert(DATABASE_TABLE_ARTICLES,null, initialValues); } Column P is for extracted text, h1, h2 and h3 are for headers from a page. Logcat reports only column p to be the problem. The table is created with the following statement: private static final String DATABASE_CREATE_ARTICLES = "create table articles( _id integer primary key autoincrement, feedid integer, title text, link text not null, description text," + "h1 text, h2 text, h3 text, p text, image text, date integer);";

    Read the article

  • Efficient update of SQLite table with many records

    - by blackrim
    I am trying to use sqlite (sqlite3) for a project to store hundreds of thousands of records (would like sqlite so users of the program don't have to run a [my]sql server). I have to update hundreds of thousands of records sometimes to enter left right values (they are hierarchical), but have found the standard update table set left_value = 4, right_value = 5 where id = 12340; to be very slow. I have tried surrounding every thousand or so with begin; .... update... update table set left_value = 4, right_value = 5 where id = 12340; update... .... commit; but again, very slow. Odd, because when I populate it with a few hundred thousand (with inserts), it finishes in seconds. I am currently trying to test the speed in python (the slowness is at the command line and in Python) before I move it to the C++ implementation, but right now this is way too slow and I need to find a new solution unless I am doing something wrong. Thoughts? (I would take an open source alternative to SQLite that is portable as well)

    Read the article

  • Android and fairly large SQLite datafiles

    - by SK9
    I'm starting an Android project, a port from an existing iPhone project I've completed. I have a fairly large read-only SQLite database, about 100MB in all. It's called "mydata.sqlite". Where do I place this in my Eclipse workspace? It's too big for "assets". Next, how do I best get at the file? I would think to try (handling exceptions later) something like: SQLiteDatabase myDatabase = null; myDatabase = SQLiteDatabase.openDatabase(myPath, null, SQLiteDatabase.OPEN_READONLY); But I would then need the path string myPath and since I don't know where to put the resource I don't know what this needs to be. Can I put "mydata.sqlite" into "res/raw" (once I create "raw" in Eclipse?) and then reference it as a resource with "R.raw.mydata"? I would very much appreciate some direct help here, rather than a reference to a tutorial. I have checked tons of these, including those that are already cited here on stackoverflow. I've also gone through the "Notepad" project in the Android developer documents. However these and the documentation typically consider only new, empty or small databases. This should be a simple thing and given the time I've spent already it is perhaps easier to ask. Thanking you kindly in advance for your assistance.

    Read the article

  • How to update a string property of an sqlite database item

    - by Thomas Joos
    Hi all, I'm trying to write an application that checks if there is an active internet connection. If so it reads an XML file and checks every 'lastupdated' item (PHP-generated string). It compares it to the database items and if there is a new value, this particular item needs to be updated. My code seems to work (compiles, no error messages, no failures) but I notice that the particular property does not change, it becomes (null). When I output the bound string value it returns the correct string value I want to update into the db. Any idea what I'm doing wrong? const char *sql = "update myTable Set last_updated=? Where node_id =?"; sqlite3_stmt *statement; // Preparing a statement compiles the SQL query into a byte-code program in the SQLite library. // The third parameter is either the length of the SQL string or -1 to read up to the first null terminator. if (sqlite3_prepare_v2(database, sql, -1, &statement, NULL) == SQLITE_OK){ NSLog(@"last updated item: %@", [d lastupdated]); sqlite3_bind_text(statement, 1, [d lastupdated],-1,SQLITE_TRANSIENT); sqlite3_bind_int (statement, 2, [d node_id]); }else { NSLog(@"SQLite statement error!"); } if(SQLITE_DONE != sqlite3_step(statement)){ NSAssert1(0, @"Error while updating. '%s'", sqlite3_errmsg(database)); }else { NSLog(@"SQLite Update done!"); }

    Read the article

  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
    With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built in DataContractJsonSerializer or the older JavaScript serializer. The DataContractSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization - like values of type object, or anonymous types which are quite common these days. The JavaScript Serializer that came before it actually does support non-typed objects for serialization but it can't do anything with untyped data coming in from JavaScript and its overall model of extensibility was pretty limited (JavaScript Serializer is what MVC uses for JSON responses). JSON.NET provides a robust JSON serializer that has both high level and low level components, supports binary JSON, JSON contracts, Xml to JSON conversion, LINQ to JSON and many, many more features than either of the built in serializers. ASP.NET Web API now uses JSON.NET as its default serializer and is now pulled in as a NuGet dependency into Web API projects, which is great. Dynamic JSON Parsing One of the features that I think is getting ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically - that is without mapping the JSON captured directly into a .NET type as DataContractSerializer or the JavaScript Serializers do. Sometimes it isn't possible to map types due to the differences in languages (think collections, dictionaries etc), and other times you simply don't have the structures in place or don't want to create them to actually import the data. If this topic sounds familiar - you're right! I wrote about dynamic JSON parsing a few months back before JSON.NET was added to Web API and when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API these classes are now obsolete and didn't ship with Web API or the client libraries. I re-linked my original post to this one. In this post I'll discuss JToken, JObject and JArray which are the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types. Why Dynamic JSON? So, why Dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third party services it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes a lot of sense to pull just a small amount of data out of a large JSON document received from a service, because the third party service isn't directly related to your application's logic most of the time - and it makes little sense to map the entire service structure in your application. For example, recently I worked with the Google Maps Places API to return information about businesses close to my (or rather the app's) location. The Google API returns a ton of information that my application had no interest in - all I needed was a few values out of the data. Dynamic JSON parsing makes it possible to map this data, without having to map the entire API to a C# data structure. Instead I could pull out the three or four values I needed from the API and directly store it on my business entities that needed to receive the data - no need to map the entire Maps API structure.
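    As a minimal sketch of that idea (the payload shape below is illustrative, not the actual Google Places response contract), JObject.Parse lets you pull a couple of values out of a larger response without declaring a single .NET type:

      using System;
      using Newtonsoft.Json.Linq;

      // Illustrative payload; only the two values we care about are read out.
      string json = @"{ ""results"": [ { ""name"": ""Kona Coffee Shack"",
                                         ""vicinity"": ""75-123 Alii Dr"",
                                         ""rating"": 4.5 } ] }";

      JObject parsed = JObject.Parse(json);

      // Navigate by string key or with SelectToken, casting only what you need.
      string name = (string)parsed["results"][0]["name"];
      string vicinity = (string)parsed.SelectToken("results[0].vicinity");

      Console.WriteLine("{0} - {1}", name, vicinity);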
Getting JSON.NET The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project with: PM> Install-Package Newtonsoft.Json From the Package Manager Console or by using Manage NuGet Packages in your project References. As mentioned if you're using ASP.NET Web API or MVC 4 JSON.NET will be automatically added to your project. Alternately you can also go to the CodePlex site and download the latest version including source code: http://json.codeplex.com/ Creating JSON on the fly with JObject and JArray Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken derived JSON.NET objects. The most common JToken derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaProvider and so uses the dynamic  keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs using JObject for the base object and songs and JArray for the actual collection of songs:[TestMethod] public void JObjectOutputTest() { // strong typed instance var jsonObject = new JObject(); // you can explicitly add values here using class interface jsonObject.Add("Entered", DateTime.Now); // or cast to dynamic to dynamically add/read properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; album.Artist = "AC/DC"; album.YearReleased = 1976; album.Songs = new JArray() as dynamic; dynamic song = new JObject(); song.SongName = "Dirty Deeds Done Dirt Cheap"; song.SongLength = "4:11"; album.Songs.Add(song); song = new JObject(); song.SongName = "Love at First Feel"; song.SongLength = "3:10"; album.Songs.Add(song); Console.WriteLine(album.ToString()); } This produces a complete JSON structure: { "Entered": "2012-08-18T13:26:37.7137482-10:00", "AlbumName": "Dirty Deeds Done Dirt Cheap", "Artist": "AC/DC", "YearReleased": 1976, "Songs": [ { "SongName": "Dirty Deeds Done Dirt Cheap", "SongLength": "4:11" }, { "SongName": "Love at First Feel", "SongLength": "3:10" } ] } Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override with the JsonSettings object for most operations. The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means this code can be entirely driven at runtime without compile time restraints of structure for the JSON output. Here I use JObject to create a album 'object' and immediately cast it to dynamic. JObject() is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JObject values are stored in pseudo collections of key value pairs that are exposed as properties through the IDynamicMetaObject interface exposed in JSON.NET's JToken base class. For objects the syntax is very clean - you add simple typed values as properties. For objects and arrays you have to explicitly create new JObject or JArray, cast them to dynamic and then add properties and items to them. 
Always remember though these values are dynamic - which means no IntelliSense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data driven JSON generation from other structures. Below you can see both styles of access next to each other:// strong type instance var jsonObject = new JObject(); // you can explicitly add values here jsonObject.Add("Entered", DateTime.Now); // expando style instance you can just 'use' properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; JContainer (the base class for JObject and JArray) is a collection so you can also iterate over the properties at runtime easily:foreach (var item in jsonObject) { Console.WriteLine(item.Key + " " + item.Value.ToString()); } The functionality of the JSON objects is very similar to .NET's ExpandoObject, and if you've used it before, you're already familiar with how the dynamic interfaces to the JSON objects work. Importing JSON with JObject.Parse() and JArray.Parse() The JValue structure supports importing JSON via the Parse() and Load() methods which can read JSON data from a string or various streams respectively. Essentially JValue includes the core JSON parsing to turn a JSON string into a collection of JsonValue objects that can be then referenced using familiar dynamic object syntax. Here's a simple example:public void JValueParsingTest() { var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"", ""Entered"":""2012-03-16T00:03:33.245-10:00""}"; dynamic json = JValue.Parse(jsonString); // values require casting string name = json.Name; string company = json.Company; DateTime entered = json.Entered; Assert.AreEqual(name, "Rick"); Assert.AreEqual(company, "West Wind"); } The JSON string represents an object with three properties which is parsed into a JObject class and cast to dynamic. Once cast to dynamic I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JToken and I have to cast them to their appropriate types first before I can do type comparisons as in the Asserts at the end of the test method. This is required because of the way that dynamic types work which can't determine the type based on the method signature of the Assert.AreEqual(object,object) method. I have to either assign the dynamic value to a variable as I did above, or explicitly cast ( (string) json.Name) in the actual method call. The JSON structure can be much more complex than this simple example.
Here's another example of an array of albums serialized to JSON and then parsed through with JsonValue():[TestMethod] public void JsonArrayParsingTest() { var jsonString = @"[ { ""Id"": ""b3ec4e5c"", ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"", ""Artist"": ""AC/DC"", ""YearReleased"": 1976, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"", ""Songs"": [ { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" } ] }, { ""Id"": ""7b919432"", ""AlbumName"": ""End of the Silence"", ""Artist"": ""Henry Rollins Band"", ""YearReleased"": 1992, ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"", ""Songs"": [ { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" }, { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" } ] } ]"; JArray jsonVal = JArray.Parse(jsonString) as JArray; dynamic albums = jsonVal; foreach (dynamic album in albums) { Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")"); foreach (dynamic song in album.Songs) { Console.WriteLine("\t" + song.SongName); } } Console.WriteLine(albums[0].AlbumName); Console.WriteLine(albums[0].Songs[1].SongName); } JObject and JArray in ASP.NET Web API Of course these types also work in ASP.NET Web API controller methods. If you want you can accept parameters using these object or return them back to the server. The following contrived example receives dynamic JSON input, and then creates a new dynamic JSON object and returns it based on data from the first:[HttpPost] public JObject PostAlbumJObject(JObject jAlbum) { // dynamic input from inbound JSON dynamic album = jAlbum; // create a new JSON object to write out dynamic newAlbum = new JObject(); // Create properties on the new instance // with values from the first newAlbum.AlbumName = album.AlbumName + " New"; newAlbum.NewProperty = "something new"; newAlbum.Songs = new JArray(); foreach (dynamic song in album.Songs) { song.SongName = song.SongName + " New"; newAlbum.Songs.Add(song); } return newAlbum; } The raw POST request to the server looks something like this: POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1User-Agent: FiddlerContent-type: application/jsonHost: localhostContent-Length: 88 {AlbumName: "Dirty Deeds",Songs:[ { SongName: "Problem Child"},{ SongName: "Squealer"}]} and the output that comes back looks like this: {  "AlbumName": "Dirty Deeds New",  "NewProperty": "something new",  "Songs": [    {      "SongName": "Problem Child New"    },    {      "SongName": "Squealer New"    }  ]} The original values are echoed back with something extra appended to demonstrate that we're working with a new object. 
When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API ignores normal content negotiation and assumes your content is going to be received and returned as JSON, so effectively the parameter and result type explicitly determines the input and output format which is nice. Dynamic to Strong Type Mapping You can also map JObject and JArray instances to a strongly typed object, so you can mix dynamic and static typing in the same piece of code. Using the 2 Album jsonString shown earlier, the code below takes an array of albums and picks out only a single album and casts that album to a static Album instance.[TestMethod] public void JsonParseToStrongTypeTest() { JArray albums = JArray.Parse(jsonString) as JArray; // pick out one album JObject jalbum = albums[0] as JObject; // Copy to a static Album instance Album album = jalbum.ToObject<Album>(); Assert.IsNotNull(album); Assert.AreEqual(album.AlbumName,jalbum.Value<string>("AlbumName")); Assert.IsTrue(album.Songs.Count > 0); } This is pretty damn useful for the scenario I mentioned earlier - you can read a large chunk of JSON and dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it - it leaves you in total control to decide what's dynamic and what's static. Strongly typed JSON Parsing With all this talk of dynamic let's not forget that JSON.NET of course also does strongly typed serialization which is drop dead easy. Here's a simple example on how to serialize and deserialize an object with JSON.NET:[TestMethod] public void StronglyTypedSerializationTest() { // Demonstrate deserialization from a raw string var album = new Album() { AlbumName = "Dirty Deeds Done Dirt Cheap", Artist = "AC/DC", Entered = DateTime.Now, YearReleased = 1976, Songs = new List<Song>() { new Song() { SongName = "Dirty Deeds Done Dirt Cheap", SongLength = "4:11" }, new Song() { SongName = "Love at First Feel", SongLength = "3:10" } } }; // serialize to string string json2 = JsonConvert.SerializeObject(album,Formatting.Indented); Console.WriteLine(json2); // make sure we can serialize back var album2 = JsonConvert.DeserializeObject<Album>(json2); Assert.IsNotNull(album2); Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap"); Assert.IsTrue(album2.Songs.Count == 2); } JsonConvert is a high level static class that wraps lower level functionality, but you can also use the JsonSerializer class, which allows you to serialize/parse to and from streams. It's a little more work, but gives you a bit more control. The functionality available is easy to discover with Intellisense, and that's good because there's not a lot in the way of documentation that's actually useful. Summary JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing from dynamic parsing to static serialization, to complex querying of JSON objects using LINQ. It's good to see this open source library getting integrated into .NET, and pushing out the old and tired stock .NET parsers so that we finally have a bit more flexibility - and extensibility - in our JSON parsing. Good to go! 
Resources: Sample Test Project, http://json.codeplex.com/ © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, Web API, AJAX.

    Read the article

  • Parallelism in .NET – Part 2, Simple Imperative Data Parallelism

    - by Reed
    In my discussion of Decomposition of the problem space, I mentioned that Data Decomposition is often the simplest abstraction to use when trying to parallelize a routine.  If a problem can be decomposed based off the data, we will often want to use what MSDN refers to as Data Parallelism as our strategy for implementing our routine.  The Task Parallel Library in .NET 4 makes implementing Data Parallelism, for most cases, very simple. Data Parallelism is the main technique we use to parallelize a routine which can be decomposed based off data.  Data Parallelism refers to taking a single collection of data, and having a single operation be performed concurrently on elements in the collection.  One side note here: Data Parallelism is also sometimes referred to as the Loop Parallelism Pattern or Loop-level Parallelism.  In general, for this series, I will try to use the terminology used in the MSDN Documentation for the Task Parallel Library.  This should make it easier to investigate these topics in more detail. Once we’ve determined we have a problem that, potentially, can be decomposed based on data, implementation using Data Parallelism in the TPL is quite simple.  Let’s take our example from the Data Decomposition discussion – a simple contrast stretching filter.  Here, we have a collection of data (pixels), and we need to run a simple operation on each element of the pixel data.  Once we know the minimum and maximum values, we most likely would have some simple code like the following: for (int row=0; row < pixelData.GetUpperBound(0); ++row) { for (int col=0; col < pixelData.GetUpperBound(1); ++col) { pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel); } } This simple routine loops through a two dimensional array of pixelData, and calls the AdjustContrast routine on each pixel. As I mentioned, when you’re decomposing a problem space, most iteration statements are potentially candidates for data decomposition.  Here, we’re using two for loops – one looping through rows in the image, and a second nested loop iterating through the columns.  We then perform one, independent operation on each element based on those loop positions. This is a prime candidate – we have no shared data, no dependencies on anything but the pixel which we want to change.  Since we’re using a for loop, we can easily parallelize this using the Parallel.For method in the TPL: Parallel.For(0, pixelData.GetUpperBound(0), row => { for (int col=0; col < pixelData.GetUpperBound(1); ++col) { pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel); } }); Here, by simply changing our first for loop to a call to Parallel.For, we can parallelize this portion of our routine.  Parallel.For works, as do many methods in the TPL, by creating a delegate and using it as an argument to a method.
In this case, our for loop iteration block becomes a delegate created via a lambda expression.  This lets you write code that, superficially, looks similar to the familiar for loop, but functions quite differently at runtime. We could easily do this to our second for loop as well, but that may not be a good idea.  There is a balance to be struck when writing parallel code.  We want to have enough work items to keep all of our processors busy, but the more we partition our data, the more overhead we introduce.  In this case, we have an image of data – most likely hundreds of pixels in both dimensions.  By just parallelizing our first loop, each row of pixels can be run as a single task.  With hundreds of rows of data, we are providing fine enough granularity to keep all of our processors busy. If we parallelize both loops, we’re potentially creating millions of independent tasks.  This introduces extra overhead with no extra gain, and will actually reduce our overall performance.  This leads to my first guideline when writing parallel code: Partition your problem into enough tasks to keep each processor busy throughout the operation, but not more than necessary to keep each processor busy. Also note that I parallelized the outer loop.  I could have just as easily partitioned the inner loop.  However, partitioning the inner loop would have led to many more discrete work items, each with a smaller amount of work (operate on one pixel instead of one row of pixels).  My second guideline when writing parallel code reflects this: Partition your problem in a way to place the most work possible into each task. This typically means, in practice, that you will want to parallelize the routine at the “highest” point possible in the routine, typically the outermost loop.  If you’re looking at parallelizing methods which call other methods, you’ll want to try to partition your work high up in the stack – as you get into lower level methods, the performance impact of parallelizing your routines may not overcome the overhead introduced. Parallel.For works great for situations where we know the number of elements we’re going to process in advance.  If we’re iterating through an IList<T> or an array, this is a typical approach.  However, there are other iteration statements common in C#.  In many situations, we’ll use foreach instead of a for loop.  This can be more understandable and easier to read, but also has the advantage of working with collections which only implement IEnumerable<T>, where we do not know the number of elements involved in advance. As an example, let’s take the following situation.  Say we have a collection of Customers, and we want to iterate through each customer, check some information about the customer, and if a certain case is met, send an email to the customer and update our instance to reflect this change.  Normally, this might look something like: foreach(var customer in customers) { // Run some process that takes some time... DateTime lastContact = theStore.GetLastContact(customer); TimeSpan timeSinceContact = DateTime.Now - lastContact; // If it's been more than two weeks, send an email, and update... if (timeSinceContact.Days > 14) { theStore.EmailCustomer(customer); customer.LastEmailContact = DateTime.Now; } } Here, we’re doing a fair amount of work for each customer in our collection, but we don’t know how many customers exist.
If we assume that theStore.GetLastContact(customer) and theStore.EmailCustomer(customer) are both side-effect free, thread safe operations, we could parallelize this using Parallel.ForEach: Parallel.ForEach(customers, customer => { // Run some process that takes some time... DateTime lastContact = theStore.GetLastContact(customer); TimeSpan timeSinceContact = DateTime.Now - lastContact; // If it's been more than two weeks, send an email, and update... if (timeSinceContact.Days > 14) { theStore.EmailCustomer(customer); customer.LastEmailContact = DateTime.Now; } }); Just like Parallel.For, we rework our loop into a method call accepting a delegate created via a lambda expression.  This keeps our new code very similar to our original iteration statement, however, this will now execute in parallel.  The same guidelines apply with Parallel.ForEach as with Parallel.For. The other iteration statements, do and while, do not have direct equivalents in the Task Parallel Library.  These, however, are very easy to implement using Parallel.ForEach and the yield keyword. Most applications can benefit from implementing some form of Data Parallelism.  Iterating through collections and performing “work” is a very common pattern in nearly every application.  When the problem can be decomposed by data, we often can parallelize the workload by merely changing foreach statements to Parallel.ForEach method calls, and for loops to Parallel.For method calls.  Any time your program operates on a collection, and does a set of work on each item in the collection where that work is not dependent on other information, you very likely have an opportunity to parallelize your routine.
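    As a sketch of that last point (the names here are illustrative, not from the article): a while-style producer can be wrapped in an iterator with yield, and Parallel.ForEach then consumes the resulting sequence, so the loop condition stays sequential while the per-item work runs in parallel.

      using System;
      using System.Collections.Generic;
      using System.IO;
      using System.Threading;
      using System.Threading.Tasks;

      // The while loop lives inside the iterator; Parallel.ForEach pulls items from it.
      static IEnumerable<string> ReadLinesUntilEmpty(TextReader reader)
      {
          while (true)
          {
              string line = reader.ReadLine();
              if (string.IsNullOrEmpty(line))
                  yield break;
              yield return line;
          }
      }

      static void ProcessAllLines(TextReader reader)
      {
          Parallel.ForEach(ReadLinesUntilEmpty(reader), line =>
          {
              // Per-item work executes in parallel; the enumerator itself is consumed
              // safely, one item at a time, by the TPL's partitioner.
              Console.WriteLine("{0} chars processed on thread {1}",
                  line.Length, Thread.CurrentThread.ManagedThreadId);
          });
      }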

    Read the article

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
    Many algorithms are easily written to work via recursion.  For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root, and recursively “walking” the tree.  Some algorithms work this way on flat data structures, such as arrays, as well.  This is a form of divide and conquer: an algorithm design which is based around breaking up a set of work recursively, “dividing” the total work in each recursive step, and “conquering” the work when the remaining work is small enough to be solved easily. Recursive algorithms, especially ones based on a form of divide and conquer, are often very good candidates for parallelization. This is apparent from a common sense standpoint.  Since we’re dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme.  Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data. Implementing this type of algorithm is fairly simple.  The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke.  This method works by taking any number of delegates defined as an Action, and operating them all in parallel.  The method returns when every delegate has completed: Parallel.Invoke( () => { Console.WriteLine("Action 1 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); }, () => { Console.WriteLine("Action 2 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); }, () => { Console.WriteLine("Action 3 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); } ); Running this simple example demonstrates the ease of using this method.  For example, on my system, I get three separate thread IDs when running the above code.  By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer.  We can divide our work in each step, and execute each task in parallel, recursively. For example, suppose we wanted to implement our own quicksort routine.  The quicksort algorithm can be designed based on divide and conquer.  In each iteration, we pick a pivot point, and use that to partition the total array.  We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.
For example, let’s look at this simple, sequential implementation of quicksort:

public static void QuickSort<T>(T[] array) where T : IComparable<T>
{
    QuickSortInternal(array, 0, array.Length - 1);
}

private static void QuickSortInternal<T>(T[] array, int left, int right)
    where T : IComparable<T>
{
    if (left >= right)
    {
        return;
    }

    SwapElements(array, left, (left + right) / 2);

    int last = left;
    for (int current = left + 1; current <= right; ++current)
    {
        if (array[current].CompareTo(array[left]) < 0)
        {
            ++last;
            SwapElements(array, last, current);
        }
    }

    SwapElements(array, left, last);

    QuickSortInternal(array, left, last - 1);
    QuickSortInternal(array, last + 1, right);
}

static void SwapElements<T>(T[] array, int i, int j)
{
    T temp = array[i];
    array[i] = array[j];
    array[j] = temp;
}

Here, we implement the quicksort algorithm in a very common, divide and conquer approach.  Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework’s sort routine is slightly faster).  On my system, for example, I can use the framework’s sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average.

Looking at this routine, though, there is a clear opportunity to parallelize.  At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen.  This can be rewritten to use Parallel.Invoke by simply changing it to:

// Code above is unchanged...
    SwapElements(array, left, last);

    Parallel.Invoke(
        () => QuickSortInternal(array, left, last - 1),
        () => QuickSortInternal(array, last + 1, right)
    );
}

This routine will now run in parallel.  When executing, we now see the CPU usage across all cores spike while it executes.  However, there is a significant problem here – by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds!  We’re using more resources, as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time.

This occurs because parallelization adds overhead.  Each time we split this array, we spawn two new tasks to parallelize this algorithm!  This is far, far too many tasks for our cores to operate upon at any one time.  In effect, we’re “over-parallelizing” this routine.  This is a common problem when working with divide and conquer algorithms, and leads to an important observation: when parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system.

This can be done with a few different approaches, in this case.  Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach.  Since the first few recursions will all still be parallelized, our “deeper” recursive tasks will be running in parallel, and can take full advantage of the machine.  This also dramatically reduces the overhead added by parallelizing, since we’re only adding overhead for the first few recursive calls.  There are two basic approaches we can take here.  The first approach would be to look at the total work size, and if it’s smaller than a specific threshold, revert to our serial implementation.  In this case, we could just check right - left, and if it’s under a threshold, call the methods directly instead of using Parallel.Invoke (a sketch of this follows below).
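As a rough sketch of that first, size-threshold approach (this drops into the end of the QuickSortInternal routine shown above; it is not from the original post, and the cutoff of 1024 elements is an arbitrary assumption that would need tuning):

// Code above is unchanged...
    SwapElements(array, left, last);

    const int SequentialThreshold = 1024;   // assumed cutoff; tune for your workload

    if (right - left < SequentialThreshold)
    {
        // Partition is small: the cost of spawning tasks would outweigh the work,
        // so fall back to plain recursive calls.
        QuickSortInternal(array, left, last - 1);
        QuickSortInternal(array, last + 1, right);
    }
    else
    {
        // Partition is still large: sort both halves in parallel.
        Parallel.Invoke(
            () => QuickSortInternal(array, left, last - 1),
            () => QuickSortInternal(array, last + 1, right));
    }
}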
The second approach is to track how “deep” in the “tree” we currently are, and if we are below some number of levels, stop parallelizing.  This is a more general-purpose approach, since it works on routines which parse trees as well as routines working off of a single array, but it may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly.

This can be written very easily.  If we pass a maxDepth parameter into our internal routine, we can restrict the number of times we parallelize by changing the recursive call to:

// Code above is unchanged...
    SwapElements(array, left, last);

    if (maxDepth < 1)
    {
        QuickSortInternal(array, left, last - 1, maxDepth);
        QuickSortInternal(array, last + 1, right, maxDepth);
    }
    else
    {
        --maxDepth;
        Parallel.Invoke(
            () => QuickSortInternal(array, left, last - 1, maxDepth),
            () => QuickSortInternal(array, last + 1, right, maxDepth));
    }

We no longer allow this to parallelize indefinitely – only to a specific depth, at which time we revert to a serial implementation.  By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total number of parallel operations significantly, but still provide adequate work for each processing core (the public entry point for this is sketched below).

With this final change, my timings are much better.  On average, I get the following timings:

Framework via Array.Sort: 7.3 seconds
Serial Quicksort Implementation: 9.3 seconds
Naive Parallel Implementation: 14 seconds
Parallel Implementation Restricting Depth: 4.7 seconds

Finally, we are now faster than the framework’s Array.Sort implementation.
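For completeness, a minimal sketch of how the depth-limited version might be kicked off (not from the original post; it assumes QuickSortInternal has been given the extra maxDepth parameter exactly as in the snippet above):

public static void QuickSort<T>(T[] array) where T : IComparable<T>
{
    // Seed the recursion with a depth budget roughly equal to the core count,
    // so only the first few levels of recursion spawn parallel tasks.
    QuickSortInternal(array, 0, array.Length - 1, Environment.ProcessorCount);
}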

    Read the article

  • What I saw at TechEd North America 2014

    - by Brian Schroer
Originally posted on: http://geekswithblogs.net/brians/archive/2014/05/19/teched-north-america-2014.aspx

I was thrilled to be able to attend TechEd North America 2014 in Houston last week. I got to go to Orlando in 2008, and since then I’ve had to settle for watching the sessions online (which ain’t bad – They’re all available on Channel 9 for streaming or downloading. Here are links to the Developer Track sessions and to the sessions from all tracks.)

The sessions I attended (with my favorites bolded) were:

Shiny new stuff
The Microsoft Application Platform for Developers: Create Applications That Span Devices and Services
INTRODUCING: The Future of .NET on the Server
DEEP DIVE: The Future of .NET on the Server
ASP.NET: Building Web Application Using ASP.NET and Visual Studio
The Next Generation of .NET for Building Applications
The Future of Visual Basic and C#

Stuff you can use now
Building Rich Apps with AngularJS on ASP.NET
Get the Most Out of Your Code Maps
SignalR: Building Real-Time Applications with ASP.NET SignalR
Performance Optimize Your ASP.NET Web App
Modern Web and Visual Studio
Visual Studio Power User: Tips and Tricks
Debugging Tips and Tricks in Visual Studio 2013

In a world where the whole company uses TFS…
Using Functional, Exploratory and Acceptance Testing to Release with Confidence
A Practical View of Release Management for Visual Studio 2013
From Vanity to Value, Metrics That Matter: Improving Lean and Agile, Kanban, and Scrum

Ain’t Nobody Got Time for That
As usual, there were some time slots with nothing of interest and others with 5 things I wanted to see at the same time. Here are the sessions I’m still planning to watch…

Getting Started with TypeScript
Building a Large Scale JavaScript Application in TypeScript
Modern Application Lifecycle Management
Why a Hacker Can Own Your Web Servers in a Day!
Async Best Practices for C# and Visual Basic
Building Multi-Device Apps with the New Visual Studio Tooling for Apache Cordova
Applying S.O.L.I.D. Principles in .NET/C#
Native Mobile Application Development for iOS, Android, and Windows in C# and Visual Studio Using Xamarin
Latest Innovations in Developing ASP.NET MVC Web Applications
Zero to Hero: Untested to Tested with Microsoft Fakes Using Visual Studio
Cool and Elegant ASP.NET Web Forms with HTML 5 for the Modern Web
The Present and Future of .NET in a World of Devices and Services

    Read the article

  • How do you handle EF Data Contexts combined with asp.net custom membership/role providers

    - by KallDrexx
I can't seem to get my head around how to implement a custom membership provider with Entity Framework data contexts in my ASP.NET MVC application. I understand how to create a custom membership/role provider by itself (using this as a reference).

Here's my current setup: As of now I have a repository factory interface that allows different repository factories to be created (right now I only have a factory for EF repositories and a factory for in-memory repositories). The repository factory looks like this:

public class EFRepositoryFactory : IRepositoryFactory
{
    private EntitiesContainer _entitiesContext;

    /// <summary>
    /// Constructor that generates the necessary object contexts
    /// </summary>
    public EFRepositoryFactory()
    {
        _entitiesContext = new EntitiesContainer();
    }

    /// <summary>
    /// Generates a new entity framework repository for the specified entity type
    /// </summary>
    /// <typeparam name="T">Type of entity to generate a repository for</typeparam>
    /// <returns>Returns an EFRepository</returns>
    public IRepository<T> GenerateRepository<T>() where T : class
    {
        return new EFRepository<T>(_entitiesContext);
    }
}

Controllers are passed an EF repository factory via Castle Windsor. The controller then creates all the service/business layer objects it requires and passes the repository factory into them. This means that all service objects are using the same EF data context, and I do not have to worry about objects being used in more than one data context (which of course is not allowed and causes an exception).

As of right now I am trying to decide how to generate my user and authorization service layers, and have run against a design roadblock. The user/authorization service will be a central class that handles the logic for logging in, changing user details, managing roles and determining what users have access to what. The problem is, using the current methodology the ASP.NET MVC controllers will initialize their own EF repository factory via Windsor, and the ASP.NET membership/role provider will have to initialize its own EF repository factory. This means that each part of the site will then have its own data context. This seems to mean that if ASP.NET authenticates a user, that user's object will be in the membership provider's data context, and thus if I try to retrieve that user object in the service layer (say, to change the user's name) I will get a duplication exception.

I thought of making the repository factory class a singleton, but I don't see a way for that to work with Castle Windsor. How do other people handle ASP.NET custom providers in an MVC (or any n-tier) architecture without having object duplication issues?
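One way to avoid the two-context split described above (offered only as a hedged sketch, not as the poster's solution) is to register the repository factory in Castle Windsor with a per-web-request lifestyle and have both the controllers and the membership/role provider resolve it from the same container. The interface and class names below come from the question; the registration itself assumes Windsor 3.x's fluent lifestyle API:

using Castle.MicroKernel.Registration;
using Castle.Windsor;

public static class ContainerConfig
{
    public static IWindsorContainer Configure()
    {
        var container = new WindsorContainer();

        // One EFRepositoryFactory (and therefore one EF data context) per HTTP request,
        // shared by the controllers and by the membership/role provider.
        // Windsor's per-web-request lifestyle also requires its HTTP module
        // to be registered in web.config.
        container.Register(
            Component.For<IRepositoryFactory>()
                     .ImplementedBy<EFRepositoryFactory>()
                     .LifestylePerWebRequest());

        return container;
    }
}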

    Read the article

  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
My favorite feature of HTML5, hands-down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently.

Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent. When you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the cookie storage limitation of 4KB, you can store up to 10 megabytes in HTML5 local storage. Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now.

Why use HTML5 Local Storage?

I use HTML5 local storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference

The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript (you can read more about the application and download the source code for the application here). When you open the application for the first time, all of the entries are transferred from the server to the browser (all 300+ entries). All of the entries are stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser.

The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly because all of the entries are stored on the client machine. When you perform key-up searches, by typing in the filter textbox, matching entries are displayed very quickly because the entries are being filtered on the local machine. This approach can have a dramatic effect on the performance of any interactive data-driven web application. Interacting with data on the client is almost always faster than interacting with the same data on the server.

Retrieving Data from the Server

In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps:

1. Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entities.
2. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entities.
3. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:

using System.Data.Services;
using System.Data.Services.Common;
using System.Web;
using JavaScriptReference.Models;

namespace JavaScriptReference.Services
{
    [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)]
    public class EntryService : DataService<ReferenceDBEntities>
    {
        // This method is called only once to initialize service-wide policies.
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.UseVerboseErrors = true;
            config.SetEntitySetAccessRule("*", EntitySetRights.All);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }

        // Define a change interceptor for the Entries entity set.
        [ChangeInterceptor("Entries")]
        public void OnChangeEntries(Entry entry, UpdateOperations operations)
        {
            if (!HttpContext.Current.Request.IsAuthenticated)
            {
                throw new DataServiceException("Cannot update reference unless authenticated.");
            }
        }
    }
}

The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceDBEntities>; because of this, the data service exposes the contents of the ReferenceDB database. In the code above, I defined a ChangeInterceptor to prevent un-authenticated users from making changes to the database. Anyone can retrieve data through the service, but only authenticated users are allowed to make changes.

After you expose data through a WCF Data Service, you can use jQuery to retrieve the data by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service:

$.ajax({
    dataType: "json",
    url: "/Services/EntryService.svc/Entries",
    success: function (result) {
        var data = callback(result["d"]);
    }
});

Notice that you must unwrap the data using result["d"]. After you unwrap the data, you have a JavaScript array of the entries. I’m transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred using JSON. Here is a fragment:

{ "d" : [
  {
    "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)", "type": "ReferenceDBModel.Entry" },
    "Id": 1,
    "Name": "Global",
    "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5",
    "Syntax": "object",
    "ShortDescription": "Contains global variables and functions",
    "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the windows object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n",
    "LastUpdated": "634298578273756641",
    "IsDeleted": false,
    "OwnerId": null
  },
  {
    "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)", "type": "ReferenceDBModel.Entry" },
    "Id": 2,
    "Name": "eval(string)",
    "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5",
    "Syntax": "function",
    "ShortDescription": "Evaluates and executes JavaScript code dynamically",
    "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>",
    "LastUpdated": "634298580913817644",
    "IsDeleted": false,
    "OwnerId": 1
  }
  …
]}

I worried about the amount of time that it would take to transfer the records.
According to Google Chrome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. 5 seconds is a small price to pay to avoid performing any server fetches of the data in the future. And here are the estimated times using different types of connections using Fiddler: notice that using a modem, it takes 33 seconds to download the database. 33 seconds is a significant chunk of time. So, I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem.

Adding Data to HTML5 Local Storage

After the JavaScript entries are retrieved from the server, the entries are stored in HTML5 local storage. Here’s the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx

You access local storage through the window.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage:

<script type="text/javascript">
    window.localStorage.setItem("message", "Hello World!");
</script>

You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL-SHIFT I in Chrome) to view items added to local storage. After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method:

<script type="text/javascript">
    var message = window.localStorage.getItem("message");
</script>

You can only add strings to local storage, and not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this:

function Storage() {
    this.get = function (name) {
        return JSON.parse(window.localStorage.getItem(name));
    };

    this.set = function (name, value) {
        window.localStorage.setItem(name, JSON.stringify(value));
    };

    this.clear = function () {
        window.localStorage.clear();
    };
}

If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this:

var store = new Storage();

// Add array to storage
var products = [
    { name: "Fish", price: 2.33 },
    { name: "Bacon", price: 1.33 }
];
store.set("products", products);

// Retrieve items from storage
var products = store.get("products");

Modern browsers support the JSON object natively. If you need the script above to work with older browsers then you should download the JSON2.js library from https://github.com/douglascrockford/JSON-js. The JSON2 library will use the native JSON object if a browser already supports JSON.

Merging Server Changes with Browser Local Storage

When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser. Two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries. Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server. Only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser.

The OData query to get the latest updates looks like this:

http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L)

If you remove URL encoding, the query looks like this:

http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L)

This query returns only those entries where the value of LastUpdated > 634301199890494792 (the version timestamp). The changes – new JavaScript entries, deleted entries, and updated entries – are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this:

merge: function (oldEntries, newEntries) {
    // concat (this performs the add)
    oldEntries = oldEntries || [];
    var mergedEntries = oldEntries.concat(newEntries);

    // sort
    this.sortByIdThenLastUpdated(mergedEntries);

    // prune duplicates (this performs the update)
    mergedEntries = this.pruneDuplicates(mergedEntries);

    // delete
    mergedEntries = this.removeIsDeleted(mergedEntries);

    // Sort
    this.sortByName(mergedEntries);

    return mergedEntries;
},

The contents of local storage are then updated with the merged entries. I spent several hours writing the merge() method (much longer than I expected). I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method. I wrote the unit tests using server-side JavaScript. I describe this approach to writing unit tests in this blog entry. The unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx

One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item. I quickly discovered that trying to work with dates in JSON turned out to be a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column. However, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and manually creating the value when a record is updated. I overrode the SaveChanges() method to look something like this:

public override int SaveChanges(SaveOptions options)
{
    var changes = this.ObjectStateManager.GetObjectStateEntries(
        EntityState.Modified | EntityState.Added | EntityState.Deleted);

    foreach (var change in changes)
    {
        var entity = change.Entity as IEntityTracking;
        if (entity != null)
        {
            entity.LastUpdated = DateTime.Now.Ticks;
        }
    }

    return base.SaveChanges(options);
}

Notice that I assign DateTime.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted.

Summary

After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data then I recommend that you take advantage of this new feature included in the HTML5 standard.

    Read the article

  • ASP.NET: Custom MembershipProvider with a custom user table

    - by blahblah
I've recently started tinkering with ASP.NET MVC, but this question should apply to classic ASP.NET as well. For what it's worth, I don't know very much about forms authentication and membership providers either.

I'm trying to write my own MembershipProvider which will be connected to my own custom user table in my database. My user table contains all of the basic user information such as usernames, passwords, password salts, e-mail addresses and so on, but also information such as first name, last name and country of residence. As far as I understand, the standard way of doing this in ASP.NET is to create a user table without the extra information and then a "profile" table with the extra information. However, this doesn't sound very good to me, because whenever I need to access that extra information I would have to make one extra database query to get it. I read in the book "Pro ASP.NET 3.5 in C# 2008" that having a separate table for the profiles is not a very good idea if you need to access the profile table a lot and have many different pages in your website.

Now for the problem at hand... As I said, I'm writing my own custom MembershipProvider subclass and it's going pretty well so far, but now I've come to realize that the CreateUser method doesn't allow me to create users in the way I'd like. The method only takes a fixed number of arguments, and first name, last name and country of residence are not among them. So how would I create an entry for the new user in my custom table without this information at hand in CreateUser of my MembershipProvider?
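One commonly used workaround (offered only as a hedged sketch, not as the questioner's or an official answer) is to add a provider-specific CreateUser overload that accepts the extra columns and writes them in the same insert, and have the standard override delegate to it. The extra field names are assumptions taken from the question, and the data-access code and the provider's other required overrides are omitted:

// Inside the custom MembershipProvider subclass (other required overrides not shown):

// Provider-specific overload that carries the extra profile columns.
public MembershipUser CreateUser(
    string username, string password, string email,
    string firstName, string lastName, string countryOfResidence,
    out MembershipCreateStatus status)
{
    // Insert a single row into the custom user table here, including
    // firstName, lastName and countryOfResidence alongside the credentials.
    // ...
    status = MembershipCreateStatus.Success;
    return GetUser(username, false);
}

// The standard override delegates to the overload with empty extras,
// so code that only knows about MembershipProvider still works.
public override MembershipUser CreateUser(
    string username, string password, string email,
    string passwordQuestion, string passwordAnswer, bool isApproved,
    object providerUserKey, out MembershipCreateStatus status)
{
    return CreateUser(username, password, email,
        firstName: "", lastName: "", countryOfResidence: "", out status);
}

Application code would then call the richer overload directly on the concrete provider type instead of going through Membership.CreateUser.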

    Read the article

< Previous Page | 25 26 27 28 29 30 31 32 33 34 35 36  | Next Page >