Search Results

Search found 4879 results on 196 pages for 'geeks'.


  • Custom Session Management using HashTable

    - by kaleidoscope
    ASP.NET session state lets you associate a server-side string or object dictionary containing state data with a particular HTTP client session. A session is defined as a series of requests issued by the same client within a certain period of time, and is managed by associating a session ID with each unique client. The ID is supplied by the client on each request, either in a cookie or as a special fragment of the request URL. The session data is stored on the server side in one of the supported session state stores, which include in-process memory, a SQL Server database, and the ASP.NET State Server service. The latter two modes enable session state to be shared among multiple Web servers on a Web farm and do not require server affinity.
    To implement a custom session handler you need to follow this process:
    1. Create a class library with a class that inherits from the SessionStateStoreProviderBase abstract class.
    2. Implement all abstract methods of the base class.
    3. Change the session mode to "Custom" in the web.config file and supply your namespace and class name as the provider:

       <sessionState mode="Custom" customProvider="Namespace.ClassName">
         <providers>
           <add name="Name" type="Namespace.ClassName" />
         </providers>
       </sessionState>

    For more details please refer to the following links:
    http://msdn.microsoft.com/en-us/magazine/cc163730.aspx
    http://msdn.microsoft.com/en-us/library/system.web.sessionstate.sessionstatestoreproviderbase.aspx
    - Chandraprakash, S
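    P.S. To make step 2 concrete, below is a minimal sketch of such a provider backed by an in-memory Hashtable (matching the title of this post). The class name is hypothetical, locking and expiration are deliberately omitted, and an in-process Hashtable defeats the web-farm scenario described above - treat it as a starting template only:

    using System;
    using System.Collections;
    using System.Web;
    using System.Web.SessionState;

    // Hypothetical minimal provider; a real one must handle locking, expiration
    // and out-of-process storage.
    public class HashtableSessionStateProvider : SessionStateStoreProviderBase
    {
        private static readonly Hashtable _store = Hashtable.Synchronized(new Hashtable());

        public override SessionStateStoreData CreateNewStoreData(HttpContext context, int timeout)
        {
            return new SessionStateStoreData(
                new SessionStateItemCollection(),
                SessionStateUtility.GetSessionStaticObjects(context),
                timeout);
        }

        public override SessionStateStoreData GetItem(HttpContext context, string id,
            out bool locked, out TimeSpan lockAge, out object lockId,
            out SessionStateActions actions)
        {
            locked = false; lockAge = TimeSpan.Zero; lockId = null;
            actions = SessionStateActions.None;
            return (SessionStateStoreData)_store[id];
        }

        public override SessionStateStoreData GetItemExclusive(HttpContext context, string id,
            out bool locked, out TimeSpan lockAge, out object lockId,
            out SessionStateActions actions)
        {
            // No real locking in this sketch.
            return GetItem(context, id, out locked, out lockAge, out lockId, out actions);
        }

        public override void SetAndReleaseItemExclusive(HttpContext context, string id,
            SessionStateStoreData item, object lockId, bool newItem)
        {
            _store[id] = item;
        }

        // Remaining members are no-ops for brevity.
        public override void CreateUninitializedItem(HttpContext context, string id, int timeout) { }
        public override void ReleaseItemExclusive(HttpContext context, string id, object lockId) { }
        public override void RemoveItem(HttpContext context, string id, object lockId, SessionStateStoreData item) { _store.Remove(id); }
        public override void ResetItemTimeout(HttpContext context, string id) { }
        public override bool SetItemExpireCallback(SessionStateItemExpireCallback expireCallback) { return false; }
        public override void InitializeRequest(HttpContext context) { }
        public override void EndRequest(HttpContext context) { }
        public override void Dispose() { }
    }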

    Read the article

  • Override an IOCTL Handler in PQOAL

    - by Kate Moss' Big Fan
    When porting or creating a BSP for a new platform, we often need to make changes to OEMIoControl - or, to be more specific, a HAL IOCTL handler. Since Microsoft introduced PQOAL in CE 5.0, more and more BSPs leverage PQOAL to simplify the OAL, and we no longer define OEMIoControl directly. It is somewhat analogous to migrating from the pure Windows SDK to MFC; people start defining MFC handlers and forget about WinMain and the big message loop. If you ever take a look at the interface between the OAL and the kernel, PUBLIC\COMMON\OAK\INC\oemglobal.h, pfnOEMIoctl is still there, just as WinMain has been the entry point of a Windows program since day one. (For those who may argue that pfnOEMIoctl is not OEMIoControl, I encourage you to dig into PRIVATE\WINCEOS\COREOS\NK\OEMMAIN\oemglobal.c, which initializes pfnOEMIoctl to OEMIoControl. The interface just splits the OAL and the kernel, which are no longer linked into one executable file in CE 6; all of the function signatures are still identical.)
    So let's trace into PQOAL to see how it implements OEMIoControl and how we can override an IOCTL handler we are interested in.
    The first thing to know is the entry point (just like finding WinMain in MFC): OEMIoControl is defined in PLATFORM\COMMON\SRC\COMMON\IOCTL\ioctl.c. Basically, it does nothing special but scan a pre-defined IOCTL table, g_oalIoCtlTable, and then execute the handler (the highlighted part). The rest is just error handling and the use of a critical section to serialize the function.

    BOOL OEMIoControl(
        DWORD code, VOID *pInBuffer, DWORD inSize, VOID *pOutBuffer, DWORD outSize,
        DWORD *pOutSize
    ) {
        BOOL rc = FALSE;
        UINT32 i;
    ...
        // Search the IOCTL table for the requested code.
        for (i = 0; g_oalIoCtlTable[i].pfnHandler != NULL; i++) {
            if (g_oalIoCtlTable[i].code == code) break;
        }

        // Indicate unsupported code
        if (g_oalIoCtlTable[i].pfnHandler == NULL) {
            NKSetLastError(ERROR_NOT_SUPPORTED);
            OALMSG(OAL_IOCTL, (
                L"OEMIoControl: Unsupported Code 0x%x - device 0x%04x func %d\r\n",
                code, code >> 16, (code >> 2)&0x0FFF
            ));
            goto cleanUp;
        }

        // Take critical section if required (after postinit & no flag)
        if (
            g_ioctlState.postInit &&
            (g_oalIoCtlTable[i].flags & OAL_IOCTL_FLAG_NOCS) == 0
        ) {
            // Take critical section
            EnterCriticalSection(&g_ioctlState.cs);
        }

        // Execute the handler
        rc = g_oalIoCtlTable[i].pfnHandler(
            code, pInBuffer, inSize, pOutBuffer, outSize, pOutSize
        );

        // Release critical section if it was taken above
        if (
            g_ioctlState.postInit &&
            (g_oalIoCtlTable[i].flags & OAL_IOCTL_FLAG_NOCS) == 0
        ) {
            // Release critical section
            LeaveCriticalSection(&g_ioctlState.cs);
        }

    cleanUp:
        OALMSG(OAL_IOCTL&&OAL_FUNC, (L"-OEMIoControl(rc = %d)\r\n", rc));
        return rc;
    }

    Where is g_oalIoCtlTable? It is defined in your BSP. Let's use the DeviceEmulator BSP as an example. PLATFORM\DEVICEEMULATOR\SRC\OAL\OALLIB\ioctl.c defines the table as

    const OAL_IOCTL_HANDLER g_oalIoCtlTable[] = {
    #include "ioctl_tab.h"
    };

    And that leads to PLATFORM\DEVICEEMULATOR\SRC\INC\ioctl_tab.h, which defines some of the IOCTL handlers; others are defined in oal_ioctl_tab.h under PLATFORM\COMMON\SRC\INC\. Finally, we have the full table body! (Just like tracing MFC - always jumping back and forth.)
    The format of the table is very straightforward: IOCTL code, flags and handler function.

    // IOCTL CODE,                          Flags   Handler Function
    //------------------------------------------------------------------------------
    { IOCTL_HAL_INITREGISTRY,                   0,  OALIoCtlHalInitRegistry     },
    { IOCTL_HAL_INIT_RTC,                       0,  OALIoCtlHalInitRTC          },
    { IOCTL_HAL_REBOOT,                         0,  OALIoCtlHalReboot           },

    PQOAL scans through the table until it finds a matching IOCTL code, then invokes the handler function. Since it scans the table from the top, if we define TWO handlers with the same IOCTL code, the first one is always invoked, with no exception. Now back to PLATFORM\DEVICEEMULATOR\SRC\INC\ioctl_tab.h, with the following table:

    { IOCTL_HAL_INITREGISTRY,                   0,  OALIoCtlDeviceEmulatorHalInitRegistry     },
    ...
    #include <oal_ioctl_tab.h>

    Note the IOCTL_HAL_INITREGISTRY handler is defined in both the BSP's local ioctl_tab.h and the common oal_ioctl_tab.h, but because the BSP's local handler comes before "#include <oal_ioctl_tab.h>", we know OALIoCtlDeviceEmulatorHalInitRegistry always gets called. In this example, the DeviceEmulator BSP overrides the IOCTL_HAL_INITREGISTRY handler from OALIoCtlHalInitRegistry to OALIoCtlDeviceEmulatorHalInitRegistry by manipulating the g_oalIoCtlTable table. (From some point of view, it is similar to the message map in MFC.)
    Be aware that when you override an IOCTL handler in PQOAL, you may want to clone the original implementation into your BSP and change it to meet your needs. This is recommended and saves you redundant work, but remember to rename the handler function (just as the DeviceEmulator changes the name OALIoCtlHalInitRegistry to OALIoCtlDeviceEmulatorHalInitRegistry). If you don't change the name, the linker will not be happy (due to the name conflict); more importantly, by using a different handler name you can always redirect the handler back to the original one. (It is like the OOP concept of calling a function in the base class; still not so clear? I am going to show you now!) OALIoCtlDeviceEmulatorHalInitRegistry sets up DeviceEmulator-specific registry settings and in the end, if everything goes well, calls OALIoCtlHalInitRegistry (PLATFORM\COMMON\SRC\COMMON\IOCTL\reginit.c) to do the rest.

    if(fOk) {
        fOk = OALIoCtlHalInitRegistry(code, pInpBuffer, inpSize, pOutBuffer,
            outSize, pOutSize);
    }

    Now you've got the picture. Whenever you want to override an IOCTL handler that is implemented in PQOAL:
    1. Clone the handler function into your BSP as a template.
    2. Give the handler function a new name, and change the name in the IOCTL table header file that maps the IOCTL to the function.
    3. Implement your IOCTL handler; whenever you need to redirect back, just call the original handler function.
    This is the standard way of implementing a custom IOCTL, and the one most Microsoft developers prefer. The mapping of IOCTL routine to IOCTL code is platform specific - you control the header file that does that mapping.

    Read the article

  • Geekswithblogs.net | Congrats to the new and renewed MVPs

    - by Geekswithblogs Administrator
    We just wanted to send a shout out to all those who have entered or have been renewed into the MVP program. I always wondered why they wouldn't move the April date off of April Fool's Day, cause that would be an interesting email to get on April 1. If you are a GWB blogger and an MVP but your name does not have an MVP logo next to it on the homepage, let us know via support and we will get you added.

    Read the article

  • Backup Azure Tables with the Enzo Backup API

    - by Herve Roggero
    In case you missed it, you can now backup (and restore) Azure Tables and SQL Databases using an API directly. The features available through the API can be found here: http://www.bluesyntax.net/backup20api.aspx and the online help for the API is here: http://www.bluesyntax.net/EnzoCloudBackup20/APIIntro.aspx. Backing up Azure Tables can't be any easier than with the Enzo Backup API. Here is a sample code that does the trick:

    // Create the backup helper class. The constructor automatically sets the SourceStorageAccount property
    StorageBackupHelper backup = new StorageBackupHelper(
        "storageaccountname", "storageaccountkey",
        "sourceStorageaccountname", "sourceStorageaccountkey",
        true, "apilicensekey");

    // Now set some properties…
    backup.UseCloudAgent = false;                       // backup locally
    backup.DeviceURI = @"c:\TMP\azuretablebackup.bkp";  // to this file
    backup.Override = true;
    backup.Location = DeviceLocation.LocalFile;

    // Set optional performance options
    backup.PKTableStrategy.Mode = BSC.Backup.API.TableStrategyMode.GUID; // Set GUID strategy
    backup.MaxRESTPerSec = 200; // Attempt to stay below 200 REST calls per second

    // Start the backup now…
    string taskId = backup.Backup();

    // Use the Environment class to get the final status of the operation
    EnvironmentHelper env = new EnvironmentHelper("storageaccountname", "storageaccountkey", "apilicensekey");
    string status = env.GetOperationStatus(taskId);

    As you can see above, the code is straightforward. You provide connection settings in the constructor, set a few options indicating where the backup device will be located, set optional performance parameters and start the backup. The performance options are designed to help you back up your Azure Tables quickly, while attempting to stay under a specific threshold to prevent Storage Account throttling. For example, the MaxRESTPerSec property will attempt to keep the overall backup operation under 200 REST calls per second. Another performance option is the backup strategy for Azure Tables. By default, all tables are simply scanned. While this works best for smaller Azure Tables, larger tables can use the GUID strategy, which will issue requests against an Azure Table in parallel assuming the PartitionKey stores GUID values. Your PartitionKey does not have to contain GUIDs for this strategy to work, but the backup algorithm is tuned for that condition. Other options are available as well, such as filtering which columns, entities or tables are being backed up. Check out more on the Blue Syntax website at http://www.bluesyntax.net.

    Read the article

  • Using T4 to generate Configuration classes

    - by Justin Hoffman
    I wanted to try to use T4 to read a web.config and generate all of the appSettings and connectionStrings as properties of a class.  In this template I elected to output only appSettings and connectionStrings, but you can see it would be easily adapted for app-specific settings, bindings etc.  This allows for quick access to config values as well as removing the potential for typos when accessing values from the ConfigurationManager. One caveat: a developer would need to remember to run the .tt file after adding an entry to the web.config.  However, one would quickly notice when trying to access the property from the generated class (it wouldn't be there).  Additionally, there are other options as noted here. The first step was to create the .tt file.  Note that this is a basic example; it could be extended even further I'm sure.  In this example I just manually input the path to the web.config file.

    <#@ template debug="false" hostspecific="true" language="C#" #>
    <#@ output extension=".cs" #>
    <#@ assembly name="System.Configuration" #>
    <#@ assembly name="System.Xml" #>
    <#@ assembly name="System.Xml.Linq" #>
    <#@ assembly name="System.Net" #>
    <#@ assembly name="System" #>
    <#@ import namespace="System.Configuration" #>
    <#@ import namespace="System.Xml" #>
    <#@ import namespace="System.Net" #>
    <#@ import namespace="Microsoft.VisualStudio.TextTemplating" #>
    <#@ import namespace="System.Xml.Linq" #>
    using System;
    using System.Configuration;
    using System.Xml;
    using System.Xml.Linq;
    using System.Linq;

    namespace MyProject.Web
    {
        public partial class Configurator
        {
    <#
        var xDocument = XDocument.Load(@"G:\MySolution\MyProject\Web.config");
        var results = xDocument.Descendants("appSettings");
        const string key = "key";
        const string name = "name";
        foreach (var xElement in results.Descendants())
        {
    #>
            public string <#= xElement.Attribute(key).Value #> { get { return ConfigurationManager.AppSettings[<#= string.Format("{0}{1}{2}", "\"", xElement.Attribute(key).Value, "\"") #>]; } }
    <#  } #>
    <#
        var connectionStrings = xDocument.Descendants("connectionStrings");
        foreach (var connString in connectionStrings.Descendants())
        {
    #>
            public string <#= connString.Attribute(name).Value #> { get { return ConfigurationManager.ConnectionStrings[<#= string.Format("{0}{1}{2}", "\"", connString.Attribute(name).Value, "\"") #>].ConnectionString; } }
    <#  } #>
        }
    }

    The resulting .cs file:

    using System;
    using System.Configuration;
    using System.Xml;
    using System.Xml.Linq;
    using System.Linq;

    namespace MyProject.Web
    {
        public partial class Configurator
        {
            public string ClientValidationEnabled { get { return ConfigurationManager.AppSettings["ClientValidationEnabled"]; } }
            public string UnobtrusiveJavaScriptEnabled { get { return ConfigurationManager.AppSettings["UnobtrusiveJavaScriptEnabled"]; } }
            public string ServiceUri { get { return ConfigurationManager.AppSettings["ServiceUri"]; } }
            public string TestConnection { get { return ConfigurationManager.ConnectionStrings["TestConnection"].ConnectionString; } }
            public string SecondTestConnection { get { return ConfigurationManager.ConnectionStrings["SecondTestConnection"].ConnectionString; } }
        }
    }

    Next, I extended the partial class for easy access to the Configuration. However, you could just use the generated class file itself.

    using System;
    using System.Linq;
    using System.Xml.Linq;

    namespace MyProject.Web
    {
        public partial class Configurator
        {
            private static readonly Configurator Instance = new Configurator();

            public static Configurator For
            {
                get { return Instance; }
            }
        }
    }

    Finally, in my example, I used the Configurator class like so:

    [TestMethod]
    public void Test_Web_Config()
    {
        var result = Configurator.For.ServiceUri;
        Assert.AreEqual(result, "http://localhost:30237/Service1/");
    }

    Read the article

  • SCSF for Visual Studio 2010

    - by Anthony Trudeau
    The Smart Client Software Factory (SCSF) for Visual Studio 2010 was uploaded tonight.  You can get it, the source code, and the documentation on the patterns & practices page. Note: Do not forget to "unblock" the documentation (CHM) file after you download it.  To unblock it, right-click the file, choose Properties, and click the Unblock button.

    Read the article

  • Login screen appears even if logged in

    - by Prasenjit
    HEHE, it's a stupid problem. For those who care, place the following on the load event of the master page or default page:

    Response.AppendHeader("Cache-Control", "no-cache");                // HTTP 1.1
    Response.AppendHeader("Cache-Control", "private");                 // HTTP 1.1
    Response.AppendHeader("Cache-Control", "no-store");                // HTTP 1.1
    Response.AppendHeader("Cache-Control", "must-revalidate");         // HTTP 1.1
    Response.AppendHeader("Cache-Control", "max-stale=0");             // HTTP 1.1
    Response.AppendHeader("Cache-Control", "post-check=0");            // IE extension
    Response.AppendHeader("Cache-Control", "pre-check=0");             // IE extension
    Response.AppendHeader("Pragma", "no-cache");                       // HTTP 1.0
    Response.AppendHeader("Keep-Alive", "timeout=3, max=993");         // connection header, not caching
    Response.AppendHeader("Expires", "Mon, 26 Jul 1997 05:00:00 GMT"); // HTTP 1.0/1.1

    It finally worked for me :)
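    If you prefer ASP.NET's typed cache-policy API over raw header strings, a roughly equivalent sketch (untested shorthand, not from the original post) would be:

    protected void Page_Load(object sender, EventArgs e)
    {
        // Typed cache policy instead of hand-written header strings.
        Response.Cache.SetCacheability(HttpCacheability.NoCache);        // Cache-Control: no-cache
        Response.Cache.SetNoStore();                                     // Cache-Control: no-store
        Response.Cache.SetRevalidation(HttpCacheRevalidation.AllCaches); // Cache-Control: must-revalidate
        Response.Cache.SetExpires(DateTime.UtcNow.AddYears(-1));         // Expires header in the past
    }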

    Read the article

  • XenApp 6.5 – How to create and set a Policy using PowerShell

    - by Waclaw Chrabaszcz
    Originally posted on: http://geekswithblogs.net/Wchrabaszcz/archive/2013/06/20/xenapp-6.5--how-to-create-and-set-a-policy.aspx
    Here is my homework:

    Add-PSSnapin -Name Citrix.Common.* -ErrorAction SilentlyContinue
    New-Item LocalFarmGpo:\User\MyPolicy
    cd LocalFarmGpo:\User\MyPolicy\Settings\ICA\Security
    Set-ItemProperty .\MinimumEncryptionLevel State Enabled
    Set-ItemProperty .\MinimumEncryptionLevel Value Bits128
    cd LocalFarmGpo:\User\MyPolicy\Filters\WorkerGroup
    New-Item -Name "All Servers" -Value "All Servers"
    Set-ItemProperty LocalFarmGpo:\User\MyPolicy -Name Priority -Value 2

    So cute …

    Read the article

  • Multiple instances of Intellitrace.exe process

    - by Vincent Grondin
    Not so long ago I was confronted with a very bizarre problem… I was using Visual Studio 2010 and whenever I opened up the Test Impact view I would suddenly see my PC's performance go down drastically…  Investigating this problem, I found out that hundreds of "Intellitrace.exe" processes had been started on my system and I could not close them, as they would re-start as soon as I closed one.  That was very weird.  So I knew it had something to do with Test Impact, but how can this feature and Intellitrace.exe going crazy be related?  After a bit of thinking I remembered that a teammate (Etienne Tremblay, ALM MVP) had told me once that he had seen this issue before just after installing a MOCKING FRAMEWORK that uses the .NET Profiler API…  Apparently there's a conflict between the Test Impact features of Visual Studio and some mocking products using the .NET Profiler API…  Maybe because VS 2010 also uses this feature for Test Impact purposes, I don't know… Anyways, here's the fix…  Go to VS 2010 and click the "Test" menu.  Then go to "Edit Test Settings" and for EACH test settings file (normally 2 files, "Local" and "TraceAndTestImpact") apply the following actions:
    - Select the Data and Diagnostics option on the left
    - Make sure that the ASP.NET Client Proxy for IntelliTrace and Test Impact option is NOT SELECTED
    - Make sure that the Test Impact option is NOT SELECTED
    - Save and close
    Problem solved…  For me, having to choose between the Test Impact features and the mocking framework was a no-brainer: bye bye Test Impact…  I did not investigate much on this subject but I feel there might be a way to have them both working by enabling one after the other in a precise sequence…  Feel free to leave a comment if you know how to make them both work at the same time!   Hope this helps someone out there!

    Read the article

  • Telesharp: An application metadata repository that enables true agility in enterprise .NET applications.

    - by Vishal
    Tellago Studios proudly announces its newest product - the third within a year: TELESHARP. .NET configuration management has always been a nightmare for any enterprise. TeleSharp is an innovative product that addresses the most common challenges of .NET applications in the enterprise. After years of struggling to develop and manage large .NET applications, we decided to create a tool that makes .NET applications truly agile. You can read more about TeleSharp and the difference it can make in your enterprise. Also, if you want to see TeleSharp in action, check out the videos about it. Click here to get more information about the TeleSharp trial version! Click here to register for the TeleSharp webinar on July 6th from 2PM - 3PM EST.   -Vishal

    Read the article

  • Taking a screenshot from within a Silverlight #WP7 application

    - by Laurent Bugnion
    Often times, you want to take a screenshot of an application's page. There can be multiple reasons. For instance, you can use this to provide an easy feedback method to beta testers. I find this super invaluable when working on integration of design in an app: the user can take quick screenshots, attach them to an email and send them to me directly from the Windows Phone device. However, the same mechanism can also be used to provide screenshots as a feature of the app, for example if the user wants to save the current status of his application, etc.
    Caveats
    Note the following:
    - The code requires an XNA library to save the picture to the media library. To have this, follow these steps: In your application (or class library), add a reference to Microsoft.Xna.Framework. In your code, add a "using" statement for Microsoft.Xna.Framework.Media. In the Properties folder, open WMAppManifest.xml and add the following capability: ID_CAP_MEDIALIB.
    - The method call will fail with an exception if the device is connected to the Zune application on the PC. To avoid this, either disconnect the device when testing, or end the Zune application on the PC.
    - While the method call will not fail on the emulator, there is no way to access the media library, so it is pretty much useless on this platform.
    - This method only prints Silverlight elements to the output image. Other elements (such as a WebBrowser control's content, for instance) will output a black rectangle.
    The code

    public static void SaveToMediaLibrary(
        FrameworkElement element, string title)
    {
        try
        {
            var bmp = new WriteableBitmap(element, null);

            var ms = new MemoryStream();
            bmp.SaveJpeg(
                ms,
                (int)element.ActualWidth,
                (int)element.ActualHeight,
                0,
                100);
            ms.Seek(0, SeekOrigin.Begin);

            var lib = new MediaLibrary();
            var filePath = string.Format(title + ".jpg");
            lib.SavePicture(filePath, ms);

            MessageBox.Show(
                "Saved in your media library!",
                "Done",
                MessageBoxButton.OK);
        }
        catch
        {
            MessageBox.Show(
                "There was an error. Please disconnect your phone from the computer before saving.",
                "Cannot save",
                MessageBoxButton.OK);
        }
    }

    This method can save any FrameworkElement. Typically I use it to save a whole page, but you can pass any other element to it. First, we create a new WriteableBitmap. This excellent class can render a visual tree into a bitmap. Note that for even more features, you can use the great WriteableBitmapEx class library (which is open source). Next, we save the WriteableBitmap to a MemoryStream. The only format supported by default is JPEG; however, it is possible to convert to other formats with the ImageTools library (also open source). Finally, we save the picture to the Windows Phone device's media library.
    Using the image
    To retrieve the image, simply launch the Pictures library on the phone. The image will be in Saved Pictures. From here, you can share the image (by email, for instance), or synchronize it with the PC using the Zune software.
    Saving to other platforms
    It is of course possible to save to other locations than the media library. For example, you can send the image to a web service, or save it to the isolated storage on the device. To do this, instead of using a MemoryStream, you can use any other stream (such as a web request stream, or a file stream) and save to that instead. Hopefully this code will be helpful to you! Happy coding, Laurent
    Laurent Bugnion (GalaSoft)
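    P.S. As a quick usage sketch (the handler name is hypothetical, not from the original post), calling the method from a button handler captures the whole page:

    // Hypothetical click handler wired to a Button or ApplicationBar icon on the page.
    private void SaveScreenshotButton_Click(object sender, RoutedEventArgs e)
    {
        // "this" is the current PhoneApplicationPage; any FrameworkElement works.
        SaveToMediaLibrary(this, "MyAppScreenshot");
    }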

    Read the article

  • SharePoint 2010 PowerShell Script to Find All SPShellAdmins with Database Name

    - by Brian Jackett
    Problem
    Yesterday on Twitter my friend @cacallahan asked for some help on how she could get all SharePoint 2010 SPShellAdmin users and the associated database name.  I spent a few minutes and wrote up a script that gets this information, and decided I'd post it here for others to enjoy.
    Background
    The Get-SPShellAdmin cmdlet returns a listing of SPShellAdmins for the given database Id you pass in, or the farm configuration database by default.  For those unfamiliar, SPShellAdmin access is necessary for non-admin users to run PowerShell commands against a SharePoint 2010 farm (content and configuration databases specifically).  Click here to read an excellent guest post article my friend John Ferringer (twitter) wrote on the Hey Scripting Guy! blog regarding granting SPShellAdmin access.
    Solution
    Below is the script I wrote (formatted for space and to include comments) to provide the information needed. Click here to download the script.

    # declare a hashtable to store results
    $results = @{}

    # fetch databases (only configuration and content DBs are needed)
    $databasesToQuery = Get-SPDatabase | Where {$_.Type -eq 'Configuration Database' -or $_.Type -eq 'Content Database'}

    # for each database get spshelladmins and add db name and username to result
    $databasesToQuery | ForEach-Object {$dbName = $_.Name; Get-SPShellAdmin -database $_.id | ForEach-Object {$results.Add($dbName, $_.username)}}

    # sort results by db name and pipe to table with auto sizing of col width
    $results.GetEnumerator() | Sort-Object -Property Name | ft -AutoSize

    Conclusion
    In this post I provided a script that outputs all of the SPShellAdmin users and the associated database names in a SharePoint 2010 farm.  Funny enough, it actually took me longer to boot up my dev VM and PowerShell (~3 mins) than it did to write the first working draft of the script (~2 mins).  Feel free to use this script and modify as needed; just be sure to give credit back to the original author.  Let me know if you have any questions or comments.  Enjoy!
    -Frog Out
    Links
    PowerShell Hashtables: http://technet.microsoft.com/en-us/library/ee692803.aspx
    SPShellAdmin Access Explained: http://blogs.technet.com/b/heyscriptingguy/archive/2010/07/06/hey-scripting-guy-tell-me-about-permissions-for-using-windows-powershell-2-0-cmdlets-with-sharepoint-2010.aspx

    Read the article

  • Lessons Building KeyRef (a .NET developer learning Rails)

    - by Liam McLennan
    Just because I like to build things, and I like to learn, I have been working on a keyboard shortcut reference site. I am using this as an opportunity to improve my Ruby and Rails skills. The first few days were frustrating. Perhaps the learning curve of all the fun new toys was a bit excessive. Finally tonight things have really started to come together. I still don't understand the Rails built-in testing support but I will get there.
    Interesting Things I Learned Tonight
    RubyMine IDE
    Tonight I switched to RubyMine instead of my usual Notepad++. I suspect RubyMine is a powerful tool if you know how to use it - but I don't. At the moment it gives me errors about some gems not being activated. This is another one of those things that I will get to. I have also noticed that the editor functions significantly differently to the editors I am used to. For example, in Visual Studio and Notepad++ if you place the cursor at the start of a line and press left arrow the cursor is sent to the end of the previous line. In RubyMine nothing happens.
    Haml
    Haml is my favourite view engine. For my .NET work I have been using its non-union Mexican CLR equivalent - nHaml.
    Multiple CSS Classes
    To define a div with more than one CSS class, haml lets you chain them together with a '.', such as:

    .span-6.search_result
      contents of the div go here

    Indent Consistency
    I also learnt tonight that both haml and nhaml complain if you are not consistent about indenting. As a consequence of the move from Notepad++ to RubyMine, my haml views ended up with some tab indenting and some space indenting. For the view to render, all of the indents within a view must be consistent.
    Sorting Arrays
    I guessed that Ruby would be able to sort an array alphabetically by a property of the elements, so my first attempt was:

    Application.all.sort {|app| app.name}

    which does not work. You have to supply a comparer (much like .NET). The correct sort is:

    Application.all.sort {|a,b| a.name.downcase <=> b.name.downcase}

    MongoMapper Find by Id
    Since document databases are just fancy key-value stores, it is essential to be able to easily search for a document by its id. This functionality is so intrinsic that it seems the MongoMapper author did not bother to document it. To search by id simply pass the id to the find method:

    Application.find('4c19e8facfbfb01794000002')

    Rails and CoffeeScript
    I am a big fan of CoffeeScript, so integrating it into this application is high on my priorities. My first thought was to copy Dr Nic's strategy. Unfortunately, I did not get past step 1: install Node.js. I am doing my development on Windows and Node is Unix only. I looked around for a solution but eventually had to concede defeat… for now.
    Quicksearch
    The front page of the application I am building displays a list of applications. When the user types in the search box I want to reduce the list of applications to match their search. A quick googlebing turned up quicksearch, a jQuery plugin. You simply tell quicksearch where to get its input (the search textbox) and the list of items to filter (the divs containing the names of applications) and it just works. Here is the code:

    $('#app_search').quicksearch('.search_result');

    Summary
    I have had a productive evening. The app now displays a list of applications, allows them to be sorted and links through to an application page when an application is selected. Next on the list is to display the set of keyboard shortcuts for an application.

    Read the article

  • What label of tests are BizUnit tests?

    - by charlie.mott
    BizUnit is defined as a "Framework for Automated Testing of Distributed Systems."  However, I've never seen a catchy label to describe what sort of tests we create using this framework. They are not really "Unit Tests", that's for sure. "Integration Tests" might be a good definition, but I want a label that clearly separates it from the manual "System Integration Testing" phase of a project, where real instances of the integrated systems are used. Among some colleagues, we brainstormed some suggestions:
    - Automated Integration Tests
    - Stubbed Integration Tests
    - Sandbox Integration Tests
    - Localised Integration Tests
    All give a good view of the sorts of tests that are being done. I think "Stubbed Integration Tests" is the most catchy and descriptive, so I will use that until someone comes up with a better idea.

    Read the article

  • Fight for your rights as a video gamer.

    - by Chris Williams
    Soon, the U.S. Supreme Court may decide whether to hear a case that could have a lasting impact on computer and video games. The case before the Court involves a law passed by the state of California attempting to criminalize the sale of certain computer and video games. Two previous courts rejected the California law as unconstitutional, but soon the Supreme Court could have the final say. Whatever the Court's ruling, we must be prepared to continue defending our rights now and in the future. To do so, we need a large, powerful movement of gamers to speak with one voice and show that we won't sit back while lawmakers try to score political points by scapegoating video games and treating them differently than books, movies, and music. If the Court decides to hear the case, we're going to need thousands of activists like you who can help defend computer and video games by writing letters to editors, calling into talk radio stations, and educating Americans about our passion for and appreciation of computer and video games. You can help build this movement right now by inviting all your friends and fellow gamers to join the Video Game Voters Network. Use our simple tool to send an email to everyone you know asking them to stand up for gaming rights: http://videogamevoters.org/movement You can also help spread the word through Facebook and Twitter, or you can simply forward this email to everyone you know and ask them to sign up at videogamevoters.org. Time after time, courts continue to reject politicians' efforts to restrict the sale of computer and video games. But that doesn't mean the politicians will stop trying anytime soon -- in fact, it means they're likely to ramp up their efforts even more. To stop them, we must make it clear that gamers will continue to stand up for free speech -- and that the numbers are on our side. Help make sure we're ready and able to keep fighting for our gaming rights. Spread the word about the Video Game Voters Network right now: http://videogamevoters.org/movement Thank you. -- Video Game Voters Network

    Read the article

  • BizTalk 2009 - SQL Server Job Configuration

    - by StuartBrierley
    Following the installation of BizTalk Server 2009 on my development laptop I used the BizTalk Server Best Practice Analyser, which highlighted the fact that two of the SQL Server Agent jobs that BizTalk relies on were not running successfully.  Upon investigation it turned out that these jobs need to be configured before they will run successfully. To configure these jobs, open SQL Server Management Studio, expand SQL Server Agent > Jobs and double-click on the appropriate job.  Select Steps and then edit the appropriate entries.
    Backup BizTalk Server (BizTalkMgmtDb)
    This job is comprised of three steps: BackupFull, MarkAndBackupLog and ClearBackupHistory.
    BackupFull

    exec [dbo].[sp_BackupAllFull_Schedule]
        'd' /* Frequency */,
        'BTS' /* Name */,
        '<destination path>' /* location of backup files */

    The frequency here is set/left as daily. The name is left as BTS. You must provide a full destination path for the backup files to be stored. There are also two optional parameters: a flag that controls whether the job forces a full backup if a partial backup fails, and a parameter to control the time of day to run the full backup (the default is midnight UTC time). For example:

    exec [dbo].[sp_BackupAllFull_Schedule]
        'd' /* Frequency */,
        'BTS' /* Name */,
        '<destination path>' /* location of backup files */,
        0, 22

    MarkAndBackupLog

    exec [dbo].[sp_MarkAll]
        'BTS' /* Log mark name */,
        '<destination path>' /* location of backup files */

    You must provide a destination path for the log backups. Optionally you can also add an extra parameter that tells the procedure to use local time:

    exec [dbo].[sp_MarkAll]
        'BTS' /* Log mark name */,
        '<destination path>' /* location of backup files */,
        1

    ClearBackupHistory

    exec [dbo].[sp_DeleteBackupHistory] @DaysToKeep=7

    This will clear out the instances in the MarkLog table older than 7 days.
    DTA Purge and Archive (BizTalkDTADb)
    This job is comprised of a single step.
    Archive and Purge

    exec dtasp_BackupAndPurgeTrackingDatabase
        0,    --@nLiveHours tinyint,
        1,    --@nLiveDays tinyint = 0,
        30,   --@nHardDeleteDays tinyint = 0,
        null, --@nvcFolder nvarchar(1024) = null,
        null, --@nvcValidatingServer sysname = null,
        0     --@fForceBackup int = 0

    Any completed instance that is older than the live days plus live hours will be deleted, as will any associated data. Any data older than the HardDeleteDays will be deleted - this means that those long-running orchestration instances that would otherwise never be purged will at some point have their data cleared down while allowing the instance to continue, thus preventing the DTA database from growing indefinitely.  This should always be greater than the soft purge window. The @nvcFolder parameter is the path for the backup files; if this is null the job will not run, failing with the error:

    DTA Purge and Archive (BizTalkDTADb) Job failed
    SQL Server Management Studio, job activity monitor, view history
    The @nvcFolder parameter cannot be null. Archive and Purge step

    How long you choose to keep instances in the Tracking Database is really up to you. For development I have set this up as:

    exec dtasp_BackupAndPurgeTrackingDatabase 0, 1, 30, '<destination path>', null, 0

    On a live server you may want to adjust these figures:

    exec dtasp_BackupAndPurgeTrackingDatabase 0, 15, 20, '<destination path>', null, 0

    Read the article

  • Basic WCF Unit Testing

    - by Brian
    Coming from someone who loves the KISS method, I was surprised to find that I was making something entirely too complicated. I know, shocker right? Now I'm no unit testing ninja, and not really a WCF ninja either, but I had a desire to test service calls without a) going to a database, or b) making sure that the entire WCF infrastructure was tip top. Who does? It's not the environment I want to test, just the logic I've written to ensure there aren't any side effects. So, for the K.I.S.S. method: assuming that you're using a WCF service library (you are using service libraries, correct?), it's really as easy as referencing the service library, then building out some stubs for bunking up data.
    The service contract
    We'll use a very basic service contract, just for getting and updating an entity. I've used the default "CompositeType" that is in the template, handy only for examples like this. I've added an Id property and overridden ToString and Equals.

    [ServiceContract]
    public interface IMyService
    {
        [OperationContract]
        CompositeType GetCompositeType(int id);

        [OperationContract]
        CompositeType SaveCompositeType(CompositeType item);

        [OperationContract]
        CompositeTypeCollection GetAllCompositeTypes();
    }

    The implementation
    When I implement the service, I want to be able to send known data into it so I don't have to fuss around with database access or the like. To do this, I first have to create an interface for my data access:

    public interface IMyServiceDataManager
    {
        CompositeType GetCompositeType(int id);
        CompositeType SaveCompositeType(CompositeType item);
        CompositeTypeCollection GetAllCompositeTypes();
    }

    For the purposes of this we can ignore our implementation of the IMyServiceDataManager interface inside of the service. Pretend it uses LINQ to Entities to map its data, or maybe it goes old school and uses EntLib to talk to SQL. Maybe it talks to a tape spool on a mainframe on the third floor. It really doesn't matter. That's the point. So here's what our service looks like in its most basic form:

    public CompositeType GetCompositeType(int id)
    {
        // sanity checks
        if (id == 0) throw new ArgumentException("id cannot be zero.");

        return _dataManager.GetCompositeType(id);
    }

    public CompositeType SaveCompositeType(CompositeType item)
    {
        return _dataManager.SaveCompositeType(item);
    }

    public CompositeTypeCollection GetAllCompositeTypes()
    {
        return _dataManager.GetAllCompositeTypes();
    }

    But what about the data manager? The constructor takes care of that. I don't want to expose any testing ability in release (or the ability for someone to swap out my data manager) so this is what we get:

    IMyServiceDataManager _dataManager;

    public MyService()
    {
        _dataManager = new MyServiceDataManager();
    }

    #if DEBUG
    public MyService(IMyServiceDataManager dataManager)
    {
        _dataManager = dataManager;
    }
    #endif

    The Stub
    Now it's time for the rubber to meet the road… Like most guys that ever talk about unit testing, here's a sample that is painting in *very* broad strokes. The important part however is that within the test project, I've created a bunk (unit testing purists would say stub, I believe) object that implements my IMyServiceDataManager so that I can deal with known data. Here it is:

    internal class FakeMyServiceDataManager : IMyServiceDataManager
    {
        internal FakeMyServiceDataManager()
        {
            Collection = new CompositeTypeCollection();
            Collection.AddRange(new CompositeTypeCollection
            {
                new CompositeType { Id = 1, BoolValue = true,  StringValue = "foo 1", },
                new CompositeType { Id = 2, BoolValue = false, StringValue = "foo 2", },
                new CompositeType { Id = 3, BoolValue = true,  StringValue = "foo 3", },
            });
        }

        CompositeTypeCollection Collection { get; set; }

        #region IMyServiceDataManager Members

        public CompositeType GetCompositeType(int id)
        {
            if (id <= 0) return null;
            return Collection.SingleOrDefault(m => m.Id == id);
        }

        public CompositeType SaveCompositeType(CompositeType item)
        {
            var existing = Collection.SingleOrDefault(m => m.Id == item.Id);
            if (null != existing)
            {
                Collection.Remove(existing);
            }

            if (item.Id == 0)
            {
                item.Id = Collection.Count > 0 ? Collection.Max(m => m.Id) + 1 : 1;
            }

            Collection.Add(item);
            return item;
        }

        public CompositeTypeCollection GetAllCompositeTypes()
        {
            return Collection;
        }

        #endregion
    }

    So it's tough to see in this example why any of this is necessary, but in a real world application you would/should/could be applying much more logic within your service implementation. This all serves to ensure that between refactorings etc. it doesn't send sparking cogs all about or let the blue smoke out. Here's a simple test that brings it all home; remember, broad strokes:

    [TestMethod]
    public void MyService_GetCompositeType_ExpectedValues()
    {
        FakeMyServiceDataManager fake = new FakeMyServiceDataManager();
        MyService service = new MyService(fake);

        CompositeType expected = fake.GetCompositeType(1);
        CompositeType actual = service.GetCompositeType(1);

        Assert.AreEqual<CompositeType>(expected, actual, "Objects are not equal. Expected: {0}; Actual: {1};", expected, actual);
    }

    Summary
    That's really all there is to it. You could use software x or framework y to do the exact same thing, but in my case I just didn't really feel like it. This speaks volumes to my not-yet-ninja unit testing prowess.
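    Postscript: in the same broad strokes, one more test (a sketch along the same lines, not from the original sample) pins down the service's own sanity check - the ArgumentException for a zero id - without ever touching the fake's data:

    [TestMethod]
    public void MyService_GetCompositeType_ZeroId_Throws()
    {
        MyService service = new MyService(new FakeMyServiceDataManager());

        try
        {
            service.GetCompositeType(0);
            Assert.Fail("Expected ArgumentException for id == 0.");
        }
        catch (ArgumentException)
        {
            // Expected: the service rejects a zero id before hitting the data manager.
        }
    }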

    Read the article

  • jQuery "Auto Post-back" Select/Drop-Down List

    - by Doug Lampe
    I have one common piece of jQuery code which I use to submit a form any time the selection changes on a drop-down list (HTML select tag).  This is similar to setting AutoPostBack = true in ASP.NET.  I use a single CSS class (autoSubmit) to annotate that I want the drop-down to force the form to submit on change, so the HTML looks something like this:

    <select id="myAutoSubmitDropDown" name="myAutoSubmitDropDown" class="autoSubmit">
        <option value="1">Option 1</option>
        <option value="2">Option 2</option>
    </select>

    Then the following jQuery will look for any element with this CSS class and submit the parent form when the value is changed:

    function wireUpAutoSubmit() {
      $(".autoSubmit").each(function (index) {
        $(this).change(function () {
          $(this).closest('form').submit();
        })
      });
    }

    I put this in a separate function since I might need to wire this up explicitly after an ajax call.  Therefore I use the following code to set this method to fire when the DOM is loaded:

    $(document).ready(function () {
      wireUpAutoSubmit();
    });

    Read the article

  • BizTalk: Suspend shape and Convoy

    - by Leonid Ganeline
    Part 1: BizTalk: Instance Subscription and Convoys: Details
    This is Part 2. I discuss the Suspend shape together with convoys, and I am going to show that using them together is undesirable. In the previous article we investigated instance subscriptions and how they can create dangerous zones in processing.  Let's start with the Suspend shape. [See the BizTalk Help] "You can use the Suspend shape to make an orchestration instance stop running until an administrator explicitly intervenes, perhaps to reflect an error condition that requires attention beyond the scope of the orchestration. All of the state information for the orchestration instance is saved, and will be reinstated when the administrator resumes the orchestration instance. When an orchestration instance is suspended, an error is raised. You can specify a message string to accompany the error to help the administrator diagnose the situation."
    On the Suspend shape the orchestration is stopped in the Suspended (Resumable) state. Next we have two choices: one is to resume the orchestration, and the second is to terminate it. Is the orchestration stopped or unenlisted? You don't find a note about it anywhere. The fact is, the orchestration is stopped and still enlisted. That is very important. So again, the suspended orchestration can be resumed or terminated. The moment when the operator or an operations script resumes or terminates it can be far away. That is also important.
    Let's go back to the case from the previous article. Make sure you notice the convoy and the dangerous zone after the last Receive shape.
    Now we have a Suspend shape inside the orchestration. The first orchestration instance is suspended. The next messages start a new orchestration instance and are consumed by it, right? Wrong! The orchestration is stopped on the Suspend shape but still enlisted. Now the dangerous zone, the "zombie zone", is expanded to the interval between the last receive and the moment of termination or the end of the orchestration. The new orchestration instance for this convoy will not start until that moment. How fast will an operator find this suspended orchestration? Maybe hours or days. All this time the orchestration is still enlisted and gathering the convoy messages. We can resume the orchestration, but we cannot resume these messages together with the orchestration.
    It seems the name "Suspended" is misleading. The orchestration can be in the Started (and Enlisted) / Stopped (and Enlisted) / Unenlisted state. The Suspend shape switches the orchestration exactly to the Stopped state. The name "Stop" would describe the shape clearly and unambiguously, and the Stopped state would describe the orchestration.
    Imagine we could change BizTalk. The Orchestration Editor could search for these situations and return a compile error. (In a similar case, the Orchestration Editor forces us to use only an ordered-delivery port with convoys.) The run-time core could force an orchestration with a convoy to be suspended in the Unresumable state, meaning the run-time unenlists the orchestration instance subscriptions. The Suspend shape name should be changed: the "Suspend" name is misleading, while the "Stop" name is clear and unambiguous. The same goes for the orchestration state: it should be "Stopped", not "Suspended (Resumable)".
    Conclusion: Using a Suspend shape together with convoy orchestrations is not recommended.

    Read the article

  • Save the dates – Tech.Days 2011 23rd to 25th of May in London

    - by Eric Nelson
    In May Microsoft UK (and specifically my group) will be delivering Tech.Days – a week of day long technical events plus evening activities. We will be covering Windows Phone 7, Silverlight, IE 9, Windows Azure Platform and more. I’m working right now on the details of what we will be covering around the Windows Azure Platform – and it is shaping up very nicely. There is a little more detail over on TechNet – but for the moment, keep the dates clear if you can. P.S. I think the above is called a “teaser” in marketing speak.

    Read the article

  • Making the most of next week's SharePoint 2010 developer training

    - by Eric Nelson
    [you can still register if you are free on the afternoons of 9th to 11th – UK time]
    We have 50+ registrations with more coming in – which is fantastic. Please read on to make the most of the training.
    Background
    We have structured the training to make sure that you can still learn lots during the three days even if you do not have SharePoint 2010 installed. Additionally, the course is based around a subset of the Channel 9 training to allow you to easily dig deeper or look again at specific areas. Which means if you have zero time between now and next Wednesday then you are still good to go. But if you can do some pre-work you will likely get even more out of the three days.
    Step 1: Check out the topics and resources available on-demand
    Take a lap around the SharePoint 2010 Training Course on Channel 9. Download the SharePoint Developer Training Kit.
    Step 2: Use a pre-configured Virtual Machine which you can download (best start today – it is large!)
    Consider using the VM we created if you don't have access to SharePoint 2010. You will need a 64bit host OS and a bare minimum of 4GB of RAM; 8GB recommended. Virtual PC cannot be used with this VM – Virtual PC only supports 32bit guests. The 2010-7a Information Worker VM gives you everything you need to develop for SharePoint 2010. Watch the video on how to use this VM. Download the VM. Remember you only need to download the "parts" for the 2010-7a VM.
    There are 3 subtly different ways of using this VM:
    - Easiest is to follow the advice of the video and get yourself a host OS of Windows Server 2008 R2 with Hyper-V and simply use the VM.
    - Alternatively you can take the VHD and create a "Boot to VHD" if you have Windows 7 Ultimate or Enterprise Edition. This works really well – especially if you are already familiar with "Boot to VHD" (this post I did will help you get started).
    - Or you can take the VHD and use an alternative VM tool such as VirtualBox if you have a different host OS. NB: This tends to involve some work to get everything running fine. Check out parts 1 to 3 from Rolly, and if you go with VirtualBox use an IDE controller, not SATA; SATA will blue screen. Note in the screenshot below I also converted the vhd to a vmdk. I used the FREE StarWind Converter to do this whilst I was fighting blue screens – not sure it's necessary, as VirtualBox does now work with VHDs.
    Step 3: Install SharePoint 2010 on a 64bit Windows 7 or Vista host
    I haven't tried this but it is now supported. Check out MSDN.
    Final notes:
    I am in the process of securing a number of hosted VMs for ISVs directly managed by my team. Your Architect Evangelist will have details once I have them! Else we can sort it out on the Wednesday. Regrettably I am unable to give folks 1:1 support on any issues around Boot to VHD, 3rd party VM products etc.
    Related Links: Check you are fully plugged into the work of my team – have you done these simple steps including joining our new LinkedIn group?

    Read the article

  • Send Multiple InMemory Attachments Using FileUpload Controls

    - by bullpit
    I wanted to give users an ability to send multiple attachments from the web application. I did not want anything fancy, just a few FileUpload controls on the page and then send the email. So I dropped five FileUpload controls on the web page and created a function to send email with multiple attachments. Here's the code:

    public static void SendMail(string fromAddress, string toAddress, string subject, string body, HttpFileCollection fileCollection)
    {
        // CREATE THE MailMessage OBJECT
        MailMessage mail = new MailMessage();

        // SET ADDRESSES
        mail.From = new MailAddress(fromAddress);
        mail.To.Add(toAddress);

        // SET CONTENT
        mail.Subject = subject;
        mail.Body = body;
        mail.IsBodyHtml = false;

        // ATTACH FILES FROM HttpFileCollection
        for (int i = 0; i < fileCollection.Count; i++)
        {
            HttpPostedFile file = fileCollection[i];
            if (file.ContentLength > 0)
            {
                Attachment attachment = new Attachment(file.InputStream, Path.GetFileName(file.FileName));
                mail.Attachments.Add(attachment);
            }
        }

        // SEND MESSAGE
        SmtpClient smtp = new SmtpClient("127.0.0.1");
        smtp.Send(mail);
    }

    And here's how you call the method:

    protected void uxSendMail_Click(object sender, EventArgs e)
    {
        HttpFileCollection fileCollection = Request.Files;
        string fromAddress = "[email protected]";
        string toAddress = "[email protected]";
        string subject = "Multiple Mail Attachment Test";
        string body = "Mail Attachments Included";
        HelperClass.SendMail(fromAddress, toAddress, subject, body, fileCollection);
    }
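    For reference, the page markup this assumes would look something like the sketch below (control IDs other than uxSendMail are illustrative, not from the original post):

    <%-- Five upload controls; Request.Files picks them all up on post-back. --%>
    <asp:FileUpload ID="uxFile1" runat="server" />
    <asp:FileUpload ID="uxFile2" runat="server" />
    <asp:FileUpload ID="uxFile3" runat="server" />
    <asp:FileUpload ID="uxFile4" runat="server" />
    <asp:FileUpload ID="uxFile5" runat="server" />
    <asp:Button ID="uxSendMail" runat="server" Text="Send Mail" OnClick="uxSendMail_Click" />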

    Read the article

  • BizTalk Server 2013 beta on Windows 8 (with Visual Studio 2012, SQL Server 2012 & ESB Toolkit 2.2)

    - by Vishal
    Hello BizTalkers, Finally, Microsoft released the beta version of BizTalk Server 2010 R2, and now it's called BizTalk Server 2013. I had tried the BTS 2010 R2 CTP version on a Windows Azure VM and I was particularly excited about the RESTful services support and ESB fully integrated into BizTalk. Well, I didn't get a chance to test it much - there's a running cost associated with Azure VMs. Anyways, I was waiting for this announcement and I was very glad that Microsoft finally released the on-premise one.  Check what's new in BizTalk Server 2013.  Officially Microsoft says that BizTalk Server 2013 "beta" is not supported on Windows 8, but I was curious to try it out. Below is my installation and configuration experience.
    Virtual Machine configuration:
    - VM Ware Workstation 9.0
    - Windows 8 Enterprise x64
    - SQL Server 2012
    - Visual Studio 2012 Ultimate
    - BizTalk Server 2013 beta
    Windows 8 Machine name: WIN8
    Local Administrator account name: Admin
    First I installed Windows 8 Enterprise on VM Ware Workstation 9.0 and updated the OS. Since Windows 8 is a new release, luckily I didn't have many updates to perform. Next I installed Visual Studio 2012 Ultimate, which was a straightforward installation. Next I installed SQL Server 2012: select "New SQL Server stand-alone installation" and follow the steps as shown in the screenshot below.
    Once the installation is finished, fire up SQL Server Management Studio and try connecting. When the management studio first opened up, I wondered for a moment why Visual Studio 2010 had opened instead - well, they made the interface look like VS 2010. Cool, I like it.
    Next is the real deal: download BizTalk Server 2013 and unzip it to a particular folder. Double-click the Setup.exe and follow the steps in the screenshots. Install Microsoft BizTalk Server 2013 beta. I selected all the normal artifacts and also all the artifacts under Additional Software. So far so good.
    Next, launch BizTalk Server Configuration; I used the Basic configuration as shown in the screenshot below. Didn't expect to see this, but voila - successful on the first try. Still, I wasn't sure something hadn't gone wrong, so I fired up the BizTalk Server Administration Console and that too came up just fine. Still not convinced, I created a simple messaging application - message in -> message out - and that too worked just fine. Finally I was convinced that BizTalk Server 2013 did work on Windows 8.
    The next step was to install the ESB Toolkit 2.2, which is now integrated with BizTalk Server and does not come as a separate standalone installation file. Again, run the BizTalk Setup.exe from the unzipped folder and install Microsoft ESB Toolkit. Next, the ESB Configuration does not open up by itself, so go to the "Windows 8 so-called Start" (I could not resist writing this) and open the ESB Toolkit Configuration wizard. The screenshots below show the configurations I used; you can also find them on MSDN here. Finally, after the ESB configuration, I opened the Admin Console and checked the 2 ESB applications deployed. Cool.
    This concludes my experience of installing and configuring BizTalk Server 2013 Beta & ESB Toolkit 2.2 on Windows 8. I will try and keep writing about BizTalk Server 2013 and its use with RESTful services etc.
    Thanks,
    Vishal Mody

    Read the article

  • Associating your MentionNotifier subscriptions with OAuth

    - by Tim Hibbard
    We recently added OAuth to MentionNotifier so that users can quickly view and edit their subscriptions without needing an additional login.  This is enabled by default for new users, but existing users will need to do the following steps to associate their subscriptions with OAuth:
    1)  Go to http://software.engraph.com/ManageMentionNotifier
    2)  Click "Sign in with Twitter"
    3)  Verify that your twittername and email are correct
    4)  Click "Associate with OAuth"
    This will also allow you to reply to notification emails, and MentionNotifier will tweet on your behalf.  This is made possible by @sidePop, written by @ferventcoder. Note that the reply-by-email feature is new and buggy, so make sure that what was tweeted is correct and as expected. If you run into any issues, send me a reply to @timhibbard. You can also join the MentionNotifier fan page on Facebook, or follow @MentionNotifier on Twitter.

    Read the article

  • Azure Table Storage Creation using Nov 2009 CTP

    - by kaleidoscope
    The new SDK introduces a new class, CloudTableClient. This class enables us to create tables and test for the existence of tables. We need not use this class for querying table storage; it's more of an administrative class for dealing with table storage itself.
    Once we have got the account key and the account name from ConfigurationSetting, we can create an instance of the storage credentials and table client classes:

    StorageCredentialsAccountAndKey creds = new StorageCredentialsAccountAndKey(accountName, accountKey);

    CloudTableClient tableStorage = new CloudTableClient(tableBaseUri, creds);
    CustomerContext ctx = new CustomerContext(tableBaseUri, creds);
    // where tableBaseUri is the TableStorageEndpoint obtained from ConfigurationSetting

    Using the table client class, we can now create a new table (if it doesn't already exist):

    if (tableStorage.CreateTableIfNotExist("Customers"))
    {
        CustomerRow cust = new CustomerRow("AccountsReceivable", "kevin");
        cust.FirstName = "Kevin";
        cust.LastName = "Hoffman";
        ctx.AddObject("Customers", cust);
        ctx.SaveChanges();
    }

    For a complete article on this topic please follow this link:
    http://dotnetaddict.dotnetdevelopersjournal.com/azure_nov09_tablestorage.htm
    Tinu, O
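    P.S. The CustomerRow and CustomerContext types used above are not defined in the snippet; a plausible sketch based on the StorageClient library's TableServiceEntity and TableServiceContext base classes (names and shape assumed, untested) would be:

    using System.Linq;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    // Hypothetical entity: PartitionKey/RowKey come from TableServiceEntity.
    public class CustomerRow : TableServiceEntity
    {
        public CustomerRow() { }
        public CustomerRow(string partitionKey, string rowKey) : base(partitionKey, rowKey) { }

        public string FirstName { get; set; }
        public string LastName { get; set; }
    }

    // Hypothetical context exposing the Customers table as a queryable set.
    public class CustomerContext : TableServiceContext
    {
        public CustomerContext(string baseAddress, StorageCredentials credentials)
            : base(baseAddress, credentials) { }

        public IQueryable<CustomerRow> Customers
        {
            get { return CreateQuery<CustomerRow>("Customers"); }
        }
    }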

    Read the article
