Search Results

Search found 4879 results on 196 pages for 'geeks'.


  • Azure Table Storage Creation using Nov 2009 CTP

    - by kaleidoscope
    The new SDK introduces a new class, CloudTableClient. This new class enables us to create tables and test for the existence of tables. We do not need this class for querying table storage; it is more of an administrative class for dealing with table storage itself. Once we have got the account key and the account name from ConfigurationSetting, we can create an instance of the storage credentials and table client classes:

        StorageCredentialsAccountAndKey creds = new StorageCredentialsAccountAndKey(accountName, accountKey);
        CloudTableClient tableStorage = new CloudTableClient(tableBaseUri, creds);
        CustomerContext ctx = new CustomerContext(tableBaseUri, creds);
        // where tableBaseUri is the TableStorageEndpoint obtained from ConfigurationSetting

    Using the table storage class, we can now create a new table (if it doesn't already exist):

        if (tableStorage.CreateTableIfNotExist("Customers"))
        {
            CustomerRow cust = new CustomerRow("AccountsReceivable", "kevin");
            cust.FirstName = "Kevin";
            cust.LastName = "Hoffman";
            ctx.AddObject("Customers", cust);
            ctx.SaveChanges();
        }

    For a complete article on this topic please follow this link: http://dotnetaddict.dotnetdevelopersjournal.com/azure_nov09_tablestorage.htm Tinu, O
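    The snippet above uses a CustomerRow entity and a CustomerContext data-service context without showing them. Below is a minimal sketch of what those two classes might look like with the StorageClient library of that era; the class shapes and the "Customers" table name are assumptions inferred from the code above, not part of the original post.

        using System.Linq;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        // Assumed entity shape: TableServiceEntity supplies PartitionKey, RowKey and Timestamp.
        public class CustomerRow : TableServiceEntity
        {
            public CustomerRow() { }

            public CustomerRow(string partitionKey, string rowKey)
                : base(partitionKey, rowKey) { }

            public string FirstName { get; set; }
            public string LastName { get; set; }
        }

        // Assumed context shape: TableServiceContext wraps DataServiceContext with the storage credentials.
        public class CustomerContext : TableServiceContext
        {
            public CustomerContext(string baseAddress, StorageCredentials credentials)
                : base(baseAddress, credentials) { }

            // Exposes the "Customers" table used in the snippet above as a queryable set.
            public IQueryable<CustomerRow> Customers
            {
                get { return CreateQuery<CustomerRow>("Customers"); }
            }
        }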

    Read the article

  • Master Page: Dynamically Adding Rows in ASP Table on Button Click event

    - by Vincent Maverick Durano
    In my previous post here, I wrote an example that demonstrates how to generate table rows dynamically in an ASP Table on click of a Button control. Based on some comments on that example and in the forums, readers wanted to implement it within a master page. Unfortunately the code in my previous example doesn't work in a master page, for the following main reasons: (1) the Table is dynamically added within the Form tag, so the TextBox controls will not be generated correctly on the page; (2) the data will not be retained on each postback, because the SetPreviousData() method looks for the Table element within the Page and not within the MasterPage; (3) the Request.Form key must be set correctly, since all controls within the master page are prefixed with the naming container ID to prevent duplicate ids in the final rendered HTML. For example, a TextBox control with the ID TextBoxRow will end up with an ID such as ctl00$MainBody$TextBoxRow. In order for the previous example to work within a master page we have to correct those three main issues, and this post will guide you through correcting them. Suppose we have this content page declaration:

        <asp:Content ID="Content1" ContentPlaceHolderID="MainHead" Runat="Server">
        </asp:Content>
        <asp:Content ID="Content2" ContentPlaceHolderID="MainBody" Runat="Server">
            <asp:PlaceHolder ID="PlaceHolder1" runat="server">
                <asp:Button ID="BTNAdd" runat="server" Text="Add New Row" OnClick="BTNAdd_Click" />
            </asp:PlaceHolder>
        </asp:Content>

    As you notice, I've added a PlaceHolder control within the MainBody ContentPlaceHolder. This is because we are going to generate the Table in the PlaceHolder instead of generating it within the Form element. Now that issue #1 is corrected, let's proceed to the code-behind part.
    Here are the full code blocks below:

        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public partial class DynamicControlDemo : System.Web.UI.Page
        {
            private int numOfRows = 1;

            protected void Page_Load(object sender, EventArgs e)
            {
                // Generate the rows on initial load
                if (!Page.IsPostBack)
                {
                    GenerateTable(numOfRows);
                }
            }

            protected void BTNAdd_Click(object sender, EventArgs e)
            {
                if (ViewState["RowsCount"] != null)
                {
                    numOfRows = Convert.ToInt32(ViewState["RowsCount"].ToString());
                    GenerateTable(numOfRows);
                }
            }

            private void SetPreviousData(int rowsCount, int colsCount)
            {
                Table table = (Table)this.Page.Master.FindControl("MainBody").FindControl("Table1"); // ****
                if (table != null)
                {
                    for (int i = 0; i < rowsCount; i++)
                    {
                        for (int j = 0; j < colsCount; j++)
                        {
                            // Extract the dynamic controls from the Table
                            TextBox tb = (TextBox)table.Rows[i].Cells[j].FindControl("TextBoxRow_" + i + "Col_" + j);
                            // Use the Request object to get the previous data of the dynamic TextBox
                            tb.Text = Request.Form["ctl00$MainBody$TextBoxRow_" + i + "Col_" + j]; // *****
                        }
                    }
                }
            }

            private void GenerateTable(int rowsCount)
            {
                // Create the Table and add it to the PlaceHolder
                Table table = new Table();
                table.ID = "Table1";
                PlaceHolder1.Controls.Add(table); // ******

                // The number of columns to be generated
                const int colsCount = 3; // You can change the value of 3 based on your requirements

                // Now iterate through the table and add your controls
                for (int i = 0; i < rowsCount; i++)
                {
                    TableRow row = new TableRow();
                    for (int j = 0; j < colsCount; j++)
                    {
                        TableCell cell = new TableCell();
                        TextBox tb = new TextBox();

                        // Set a unique ID for each TextBox added
                        tb.ID = "TextBoxRow_" + i + "Col_" + j;

                        // Add the control to the TableCell
                        cell.Controls.Add(tb);

                        // Add the TableCell to the TableRow
                        row.Cells.Add(cell);
                    }
                    // And finally, add the TableRow to the Table
                    table.Rows.Add(row);
                }

                // Set previous data on postbacks
                SetPreviousData(rowsCount, colsCount);

                // Store the current rows count in ViewState
                rowsCount++;
                ViewState["RowsCount"] = rowsCount;
            }
        }

    As you can see, the code is pretty much the same as in the previous example except for the highlighted lines (marked with asterisks) above. That's it! I hope someone finds this post useful! Technorati Tags: Dynamic Controls,ASP.NET,C#,Master Page
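    One possible refinement, not from the original post: instead of hard-coding the ctl00$MainBody$ naming-container prefix in SetPreviousData, the fully qualified form key can be taken from the control's own UniqueID property, which keeps the lookup working even if the master page or ContentPlaceHolder IDs change. A sketch of that variation of the same method:

        // Variation on SetPreviousData: let ASP.NET supply the qualified form key via
        // Control.UniqueID instead of hard-coding "ctl00$MainBody$...".
        private void SetPreviousData(int rowsCount, int colsCount)
        {
            Table table = (Table)this.Page.Master.FindControl("MainBody").FindControl("Table1");
            if (table == null) return;

            for (int i = 0; i < rowsCount; i++)
            {
                for (int j = 0; j < colsCount; j++)
                {
                    TextBox tb = (TextBox)table.Rows[i].Cells[j].FindControl("TextBoxRow_" + i + "Col_" + j);
                    if (tb != null)
                    {
                        // UniqueID already includes the naming-container prefix (e.g. ctl00$MainBody$TextBoxRow_0Col_0)
                        tb.Text = Request.Form[tb.UniqueID];
                    }
                }
            }
        }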

    Read the article

  • 2010 is gone and Welcome 2011

    - by anirudha
    Last week I spent my days at Firozabad. The town is quite small and near Agra, so I did not miss the chance to see the Taj Mahal and the Red Fort there; it was my first chance to see them. I made a plan to go to Agra last Saturday. First I went to the Red Fort, where I talked with many foreigners. They enjoy talking with visitors, because usually the only person with them is their guide, someone who is like a book: he tells them about everything at the location because they have paid him, but they cannot really talk with him. People come from various countries such as Germany, Japan, Russia, Italy and many others. There was no problem talking with them; they seemed happy to talk with me. After I had seen the whole Red Fort, I saw a girl who looked like a foreigner. I asked where she had come from and she told me France. As I went on, I thought about proposing that we be friends. I had never proposed friendship to any girl, even in school and college, but I asked her to be a friend of mine. She accepted, and I put my email ID in her hand before she left, but I still have not got her mail. Second, I went to the Taj Mahal. The Taj experience was not so good; I spent 3 or 4 hours in the rush. I found there is no real security even though there are many army personnel. They all work too slowly: they spend 10 minutes checking a single person for security, and their hands work as slowly as a low-end computer. I talked to many people there too. I talked to a person who said his name was Jacob and that he was from Chicago. He spoke very fast and I could not follow what he was saying. I had another problem with some Chinese visitors: when I tried to talk with them, I found they spoke only Chinese. Wish you a very, very happy new year.

    Read the article

  • jQuery "Auto Post-back" Select/Drop-Down List

    - by Doug Lampe
    I have one common piece of jQuery code which I use to submit a form any time the selection changes on a drop-down list (HTML select tag). This is similar to setting AutoPostBack = true in ASP.Net. I use a single CSS class (autoSubmit) to annotate that I want the drop-down to force the form to submit on change, so the HTML looks something like this:

        <select id="myAutoSubmitDropDown" name="myAutoSubmitDropDown" class="autoSubmit">
            <option value="1">Option 1</option>
            <option value="2">Option 2</option>
        </select>

    Then the following jQuery will look for any element with this CSS class and submit the parent form when the value is changed:

        function wireUpAutoSubmit() {
          $(".autoSubmit").each(function (index) {
            $(this).change(function () {
              $(this).closest('form').submit();
            })
          });
        }

    I put this in a separate function since I might need to wire this up explicitly after an ajax call. Therefore I use the following code to set this method to fire when the DOM is loaded:

        $(document).ready(function () {
          wireUpAutoSubmit();
        });

    Read the article

  • BizTalk: Suspend shape and Convoy

    - by Leonid Ganeline
    Part 1: BizTalk: Instance Subscription and Convoys: Details. This is Part 2. I am discussing the Suspend shape together with Convoys and am going to show that using them together is undesirable. In the previous article we investigated Instance Subscriptions and how they can create dangerous zones in processing. Let's start with the Suspend shape. [See the BizTalk Help] "You can use the Suspend shape to make an orchestration instance stop running until an administrator explicitly intervenes, perhaps to reflect an error condition that requires attention beyond the scope of the orchestration. All of the state information for the orchestration instance is saved, and will be reinstated when the administrator resumes the orchestration instance. When an orchestration instance is suspended, an error is raised. You can specify a message string to accompany the error to help the administrator diagnose the situation." On the Suspend shape the orchestration is stopped in the Suspended (Resumable) state. Next we have two choices: one is to resume, and the second is to terminate the orchestration. Is the orchestration stopped or unenlisted? You will not find a note about it anywhere. The fact is the orchestration is stopped and still enlisted. That is very important. So again, the suspended orchestration can be resumed or terminated. The moment when the operator or an operations script resumes or terminates it can be far away. That is also important. Let's go back to the case from the previous article. Make sure you notice the convoy and the dangerous zone after the last Receive shape. Now we have a Suspend shape inside the orchestration. The first orchestration instance is suspended. The next messages start a new orchestration instance and are consumed by that orchestration, right? Wrong! The orchestration is stopped on the Suspend shape but still enlisted. Now the dangerous zone, the "zombie zone", is expanded to the interval between the last Receive and the moment of termination or the end of the orchestration. A new orchestration instance for this convoy will not start until that moment. How fast will the operator find this suspended orchestration? Maybe hours or days. All this time the orchestration is still enlisted and gathering the convoy messages. We can resume the orchestration, but we cannot resume these messages together with the orchestration. It seems the name Suspended for the orchestration is misleading. The orchestration can be in the Started (and Enlisted) / Stopped (and Enlisted) / Unenlisted state. The Suspend shape switches the orchestration exactly to the Stopped state. The Stop name would describe the shape clearly and unambiguously, and the Stopped state would describe the orchestration. Imagine we could change BizTalk. The Orchestration Editor could search for these situations and return a compile error. In a similar case the Orchestration Editor forces us to use only an ordered-delivery port with convoys. The run-time core could force an orchestration with a convoy to be suspended in an Unresumable state, which means the run-time unenlists the orchestration instance subscriptions. The Suspend shape name should be changed. The "Suspend" name is misleading. The "Stop" name is clear and unambiguous. The same goes for the orchestration state: it should be "Stopped", not "Suspended (Resumable)". Conclusion: It is not recommended to use a Suspend shape together with convoy orchestrations.

    Read the article

  • Send Multiple InMemory Attachments Using FileUpload Controls

    - by bullpit
    I wanted to give users the ability to send multiple attachments from the web application. I did not want anything fancy, just a few FileUpload controls on the page and then send the email. So I dropped five FileUpload controls on the web page and created a function to send email with multiple attachments. Here's the code (it needs using directives for System.IO, System.Net.Mail and System.Web):

        public static void SendMail(string fromAddress, string toAddress, string subject, string body, HttpFileCollection fileCollection)
        {
            // CREATE THE MailMessage OBJECT
            MailMessage mail = new MailMessage();

            // SET ADDRESSES
            mail.From = new MailAddress(fromAddress);
            mail.To.Add(toAddress);

            // SET CONTENT
            mail.Subject = subject;
            mail.Body = body;
            mail.IsBodyHtml = false;

            // ATTACH FILES FROM HttpFileCollection
            for (int i = 0; i < fileCollection.Count; i++)
            {
                HttpPostedFile file = fileCollection[i];
                if (file.ContentLength > 0)
                {
                    Attachment attachment = new Attachment(file.InputStream, Path.GetFileName(file.FileName));
                    mail.Attachments.Add(attachment);
                }
            }

            // SEND MESSAGE
            SmtpClient smtp = new SmtpClient("127.0.0.1");
            smtp.Send(mail);
        }

    And here's how you call the method:

        protected void uxSendMail_Click(object sender, EventArgs e)
        {
            HttpFileCollection fileCollection = Request.Files;
            string fromAddress = "[email protected]";
            string toAddress = "[email protected]";
            string subject = "Multiple Mail Attachment Test";
            string body = "Mail Attachments Included";
            HelperClass.SendMail(fromAddress, toAddress, subject, body, fileCollection);
        }
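    The method above skips empty uploads but does not guard against very large requests. A small, hedged addition (not from the original post; the 4 MB cap is an arbitrary example, and the maxRequestLength setting in web.config still applies independently) could look like this:

        // Optional guard before calling SendMail: reject the post if the combined
        // attachment size exceeds an example limit. Uses only System.Web types.
        private static bool AttachmentsWithinLimit(HttpFileCollection fileCollection, int maxTotalBytes)
        {
            int total = 0;
            for (int i = 0; i < fileCollection.Count; i++)
            {
                total += fileCollection[i].ContentLength;
            }
            return total <= maxTotalBytes;
        }

        // Usage in the click handler, before HelperClass.SendMail(...):
        // if (!AttachmentsWithinLimit(Request.Files, 4 * 1024 * 1024)) { /* show an error */ return; }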

    Read the article

  • Associating your MentionNotifier subscriptions with OAuth

    - by Tim Hibbard
    We recently added OAuth to MentionNotifier so that users can quickly view and edit their subscriptions without needing an additional login. This is enabled by default for new users, but existing users will need to do the following steps to associate their subscriptions with OAuth:
    1) Go to http://software.engraph.com/ManageMentionNotifier
    2) Click "Sign in with Twitter"
    3) Verify that your Twitter name and email are correct
    4) Click "Associate with OAuth"
    This will also allow you to reply to notification emails, and MentionNotifier will tweet on your behalf. This is made possible by @sidePop, written by @ferventcoder. Note that reply by email is new and buggy, so make sure that what was tweeted is correct and as expected. If you run into any issues, send me a reply to @timhibbard. You can also join the MentionNotifier fan page on Facebook, or follow @MentionNotifier on Twitter.

    Read the article

  • Redgate ANTS Performance Profiler

    - by Jon Canning
    Seemingly forever I've been working on a business idea: a REST API delivering content to mobiles, and I've never really had much idea about its performance. Yes, I have a suite of unit tests and integration tests, but these only tell me that it works, not how well it works. I was also about to embark on a major refactor, swapping the database from MongoDB to RavenDB, and was curious to see if that impacted performance at all, so I needed a profiler that supported IIS Express that I could run my integration tests against, and Google gave me: http://www.red-gate.com/supportcenter/content/ANTS_Performance_Profiler/help/7.4/app_iise   Excellent. Following the above guide, an instance of IIS Express is launched, as is Internet Explorer. The latter eventually becomes annoying; I would like to decide whether I want a browser opened, but thankfully the guide is wrong in that it can be closed and profiling will continue. So I ran my tests, stopped profiling, and was presented with a call tree listing the endpoints called and allowing me to drill down to the source code beneath. Although useful and fascinating, this wasn't what I was expecting to see; I was after the method timings from the entire test suite. Switching Show to Methods Grid presented me with a list of my methods, with the slowest lit up in red at the top. Marvellous. I did find that if you switch to Methods Grid before the call tree has loaded, you do not get the red warnings. StructureMap was very busy, and next on the list was a request filter that I didn't expect to be so overworked. Highlighting it, the source code was presented to me in the bottom window with timings and a nice red indicator to show me where to look. Oh horror, that reflection hack I put in months ago; I'd forgotten all about it. It was calling Validate<T>(), which in turn was resolving a validator from StructureMap. Note to self: use //TODO: when leaving smelly code lying around. Before refactoring, remember to Save Profile Results from the File menu. Annoyingly you are not prompted to save your results when exiting, and using Save Project will only leave you thankful that you have version control and can go back in time to run your tests again. Having implemented StructureMap's ForGenericType, I ran my tests again and: Win - thank you, ANTS (what does ANTS stand for, BTW?). There's definitely room in my toolbox for a profiler; what started out as idle curiosity actually solved a potential problem. When presented with a new codebase I can see enormous benefit from getting an overview of the pipeline from the call tree before drilling into the code, and as a sanity check before release it gives a little more reassurance that you've done your best, and shows you exactly where to look if you haven't. Next I'm going to profile a load test.

    Read the article

  • Visual Studio 2010 Keyboard Shortcut Posters Available

    - by Jim Duffy
    I'm a firm believer in the productivity gains you experience when using keyboard shortcuts in Visual Studio. If you're not using keyboard shortcuts while coding then your productivity is suffering. Some of my favorites (omitting the obvious ones like F5 to start debugging) are:
    Ctrl+K, C – Comment section of code
    Ctrl+K, U – Uncomment section of code
    Ctrl+K, D – Format the current document (indentation, etc.)
    Shift+Alt+C – Add new class to a project
    Shift+Alt+A – Add existing item to a project
    Ctrl+Shift+A – Add new item to a project
    The good news is that all of these and a TON of others are documented in the Visual Studio Keyboard Shortcut Posters (available as PDFs). The only problem is there are so many that you need a printer capable of printing on larger paper: while you can read them all on 8 1/2 x 11 paper in landscape mode, for them to be a valuable quick reference on your cubicle wall you're going to need to print them on large paper. If you don't have a printer capable of producing large-sized printouts, head down to Office Depot, Staples, FedEx Office, or your favorite print shop and have them print one for you. Oh, and one last thing: I'd really like Microsoft to take those people's pictures off them. Really? Do we need to look at these people when trying to improve our productivity? Have a day. :-|

    Read the article

  • Login screen appears even if logged in

    - by Prasenjit
    HEHE, it's a stupid problem; for those who care, place the following in the Load event of the master page or default page:

        Response.AppendHeader("Cache-Control", "no-cache");        // HTTP 1.1
        Response.AppendHeader("Cache-Control", "private");         // HTTP 1.1
        Response.AppendHeader("Cache-Control", "no-store");        // HTTP 1.1
        Response.AppendHeader("Cache-Control", "must-revalidate"); // HTTP 1.1
        Response.AppendHeader("Cache-Control", "max-stale=0");     // HTTP 1.1
        Response.AppendHeader("Cache-Control", "post-check=0");    // HTTP 1.1
        Response.AppendHeader("Cache-Control", "pre-check=0");     // HTTP 1.1
        Response.AppendHeader("Pragma", "no-cache");               // HTTP 1.0
        Response.AppendHeader("Keep-Alive", "timeout=3, max=993"); // HTTP 1.1
        Response.AppendHeader("Expires", "Mon, 26 Jul 1997 05:00:00 GMT"); // HTTP 1.0

    It finally worked for me :)
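    A hedged alternative to appending raw headers is ASP.NET's strongly typed HttpCachePolicy API, which produces a similar no-cache result for the cached-login-page scenario described above (a sketch, not a drop-in replacement for every header in the list):

        // In Page_Load (or the master page's Load event): disable response caching through
        // the typed cache policy instead of hand-written Cache-Control headers.
        protected void Page_Load(object sender, EventArgs e)
        {
            Response.Cache.SetCacheability(HttpCacheability.NoCache);         // Cache-Control: no-cache
            Response.Cache.SetNoStore();                                      // Cache-Control: no-store
            Response.Cache.SetRevalidation(HttpCacheRevalidation.AllCaches);  // must-revalidate
            Response.Cache.SetExpires(DateTime.UtcNow.AddDays(-1));           // an already-expired Expires header
            Response.AppendHeader("Pragma", "no-cache");                      // for HTTP 1.0 proxies
        }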

    Read the article

  • SharePoint 2010 Hosting :: Sending SMS Alerts in SharePoint 2010 Over Office Mobile Service Protocol (OMS)

    - by mbridge
    In this post, I want to share exciting news about a new SharePoint 2010 feature. Finally it is possible to send SMS directly from SharePoint to mobile phones. The advantages of sending SMS instead of email messages are obvious: SMS alerts or reminders received on a mobile phone are preferable to email messages, which can be lost in the mass of spam. The interface is standard, as it is very similar to previous versions of the product. Adjustments are easy to make: simply enter the address of the Office Mobile Service (OMS) web service which you want to use for sending messages, then specify the connection parameters. Further details on Office Mobile Service are available below. The Test Service button checks whether the OMS web service is accessible using the provided URL (user name and password are not verified). This check is needed because the OMS web-service URL depends on the mobile operator and country. It is now possible to select the method of sending alerts in the alert settings; the Email option is selected by default. The alert delivery method is displayed in the list of existing alerts. Office Mobile Service (OMS): SharePoint 2010 uses external servers, similar to SMTP servers, for sending SMS alerts. However, Microsoft started development and promotion of its own protocol instead of using existing ones. That is how Office Mobile Service (OMS) appeared. This open protocol enables clients to send text and multimedia messages (mobile messages) remotely to a server, which processes these messages and delivers them to mobile phones. A typical scenario for this protocol is data transfer between a computer application and a mobile phone. The recipient can answer the messages, and the server in return will deliver the answer by SMTP, i.e. by email. A key quality of this protocol is that it is built on top of HTTP(S) and SOAP. This means that in effect the SMS gateway must support a typed web service. What do you get from the web service? The ability to send SMS from any platform you want. The protocol is still being developed; version 0.2 from 08/28/2009 was current when the article was published. To promote the protocol and simplify finding a server, Microsoft provides the web page http://messaging.office.microsoft.com/HostingProviders.aspx, which lists the providers that support the OMS protocol and message delivery to your operator. All you need to do is decide which provider to use, complete the agreement, then adjust the SharePoint connection parameters and start working. Some providers advertise themselves not only to clients but to mobile operators as well; they offer automatic addition to the list of Office Mobile Service Providers. To view the full specification of OMS, please go to http://msdn.microsoft.com/en-us/library/dd774103.aspx.

    Read the article

  • Managed Service Architectures Part I

    - by barryoreilly
    Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done. 'Pure' SOA involves the modeling of an organisation's processes, the so-called 'Top Down' approach, followed by the implementation of these processes as services. Another approach, more commonly seen in the wild, is the bottom-up approach. This usually involves services that simply start popping up in the organization, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, clearly have little relation to process-driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA. These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on 'Succeeding with SOA'. Organisations who focus too much on bottom-up development, or who waste too much time and money on top-down approaches that don't produce results, are often recommended to attempt an 'agile' (Erl) or 'middle-out' (Microsoft) approach in order to succeed with SOA. The problem with recommending this approach is that, in most cases, succeeding with SOA isn't the aim of the project. If a project is started with the simple aim of 'Succeeding with SOA' then the reasons for the project's existence probably need to be questioned. There are a number of things we can be sure of:
    · An organisation will have a number of disparate IT systems
    · Some of these systems will have redundant data and functionality
    · Integration will give considerable ROI
    · Integration will already be under way
    · Services will already exist in the organisation
    · These services will be inconsistent in their implementation and in their governance
    So there are three goals here:
    1. Alignment between the business and IT
    2. Integration of disparate systems
    3. Management of services
    2 and 3 are going to happen; in fact they must happen if any degree of return is expected from the IT department. Ignoring 1 is considered a typical mistake in SOA implementations, as it ignores the business implications. However, the business implication of this approach is the money saved in more efficient IT processes. 2 and 3 are ongoing, and they will continue happening, even if a large project to produce a SOA metamodel is started. The result will then be an unstructured cackle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, with all the far-reaching consequences that this will have on our current LOB systems. Let's imagine that this actually works (how often do we rip and replace working software because it doesn't fit a certain pattern? Never; that's the point of integration): we will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large. Accepting that an object can have more than one model over time, with perhaps more than one model in use at any given time, will help us realise the limitations of the top-down model. It is entirely normal, and perhaps necessary, for an organisation to be able to view an entity from different perspectives.
So, instead of trying to constantly force these goals in a straight line, why not let them happen in parallel, and manage the changes in each layer.     If  company A has chosen to model their business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and able to cope with change, through alignment between the business and the IT department.   If company B’s IT department recognizes the problem of wild services springing up everywhere, and decides to do something about it, by designing a platform and processes for the introduction of services, is this not a valid approach?   With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible. Based on models that are clearly incomplete, and which will therefore change rapidly and often in the near future. Natural business evolution will also mean that the models can be guaranteed to change in the not so near future. To ‘Succeed with SOA’ Company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling business analysts to start developing code based on a model they are unsure of, and telling programmers to ignore the obvious and growing problems in their IT department and start drawing lines and boxes.     Could the problem be that there are two different problem domains? And the whole concept of SOA as it being described by clever salespeople today creates an example of oft dreaded ‘tight coupling’ between these two domains?   Could it be that we have taken two large problem areas, and bundled the solution together in order to create a magic bullet? And then convinced ourselves that the bullet actually exists?   Company A wants to have a closer relationship between the business and its IT department, in order to become a more flexible organization. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren’t focusing on their actual goals.   If Company A starts building services from incomplete models, without a gameplan, they will end up in the same situation as company B, with wild services. If company B focuses on modeling, they could easily end up with the same problems as company A.   Now we have two companies, who a short while ago had one problem each, that now have two problems each. This has happened because of a focus on ‘Succeeding with SOA’, rather than solving the problem at hand.   This is not to suggest that the two problem domains are unrelated, a strategy that encompasses both will obviously be good for the organization. But only if the organization realizes this and can develop such a strategy. This strategy cannot be bought in a box.       Anyone who has worked with SOA for a while will be used to analyzing the solutions to a problem and judging the solution’s level of coupling. If we have two applications that each perform separate functions, but need to communicate with each other, we create a integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) and the service hosting (technical architecture).     The business architecture describes the processes and business objects in the business domain.   The technical architecture describes the hosting and management and implementation of services.   
    The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation, and the messages map business concepts to software objects in the implementation. If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without the changes rippling through from one side to the other. This would allow company A to carry on modeling, and company B to develop a service platform, each achieving their intended goal, without necessarily creating the problems seen in pure top-down or bottom-up approaches. Company B could then at a later date map their service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling. How do we do this? The concept of service virtualization has been around for a while, and is instantly realizable in Microsoft's Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst's view, presenting uniform contracts to the outside world. These services can then transform and route messages to the actual service implementations. I like to think of the virtual services with their beautifully modeled interfaces as 'SOA services', and the implementations as simple integration 'adapter' services providing an interface to a technical implementation. The Managed Services Engine also provides policy-based control over services, regardless of where they are deployed, simplifying the handling of security, logging, exception handling etc. This solves a big problem. The pressure to deliver services quickly is always there in projects. It is very important to quickly show value when implementing service architectures. There is also pressure to deliver quality, and you can't easily do both at the same time. This approach allows quick delivery with quality increasing over time, allowing modeling and service development to occur in parallel and independently of each other. The link between business modeling and service implementation is not one that is obvious to many organizations, and it requires a certain maturity to realize and drive forward. It is also completely possible that a company can benefit from one without the other; even if this approach is frowned upon today, there are many companies doing so and seeing ROI. Of course there are disadvantages to this, the biggest one being the transformations necessary between the virtual interfaces and the service implementations. Bad choices in developing the services in the service implementation could mean that it is impossible to map the modeled processes to the implementation without redevelopment of the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with predeveloped services. The alternative is to wait until the model is finished and then build the service according to the model. However, if that approach worked we wouldn't be having this discussion! And even when it does work, natural business evolution will mean that the two concepts (model and implementation) will immediately start to drift away from each other, so coupling them tightly together, so that they are forever bound to the model that only applies at the time of the modeling work, will not really achieve a great deal. Architecture is all about trade-offs, and here a choice has to be made.
    The choice is between something that will initially be of low quality but will work, and something that may well be impossible to achieve in most situations. In conclusion, top-down is a natural approach for business analysts, and bottom-up is a natural approach for developers. Instead of trying to force something on both that neither wants, and which has not shown itself to be successful, why not let them get on with their jobs, and let an enterprise architect coordinate the processes?

    Read the article

  • Override an IOCTL Handler in PQOAL

    - by Kate Moss' Big Fan
    When porting or creating a BSP to a new platform, we often need to make change to OEMIoControl or HAL IOCTL handler for more specific. Since Microsoft introduced PQOAL in CE 5.0 and more and more BSP today leverages PQOAL to simplify the OAL, we no longer define the OEMIoControl directly. It is somehow analogous to migrate from pure Windows SDK to MFC; people starts to define those MFC handlers and forgot the WinMain and the big message loop. If you ever take a look at the interface between OAL and Kernel, PUBLIC\COMMON\OAK\INC\oemglobal.h, the pfnOEMIoctl is still there just as the entry point of Windows Program is WinMain since day one. (For those may argue about pfnOEMIoctl is not OEMIoControl, I will encourage you to dig into PRIVATE\WINCEOS\COREOS\NK\OEMMAIN\oemglobal.c which initialized pfnOEMIoctl to OEMIoControl. The interface is just to split OAL and Kernel which no longer linked to one executable file in CE 6, all of the function signature is still identical) So let's trace into PQOAL to realize how it implements OEMIoControl and how can we override an IOCTL handler we interest. First thing to know is the entry point (just as finding the WinMain in MFC), OEMIoControl is defined in PLATFORM\COMMON\SRC\COMMON\IOCTL\ioctl.c. Basically, it does nothing special but scan a pre-defined IOCTL table, g_oalIoCtlTable, and then execute the handler. (The highlight part) Other than that is just for error handling and the use of critical section to serialize the function. BOOL OEMIoControl(     DWORD code, VOID *pInBuffer, DWORD inSize, VOID *pOutBuffer, DWORD outSize,     DWORD *pOutSize ) {     BOOL rc = FALSE;     UINT32 i; ...     // Search the IOCTL table for the requested code.     for (i = 0; g_oalIoCtlTable[i].pfnHandler != NULL; i++) {         if (g_oalIoCtlTable[i].code == code) break;     }     // Indicate unsupported code     if (g_oalIoCtlTable[i].pfnHandler == NULL) {         NKSetLastError(ERROR_NOT_SUPPORTED);         OALMSG(OAL_IOCTL, (             L"OEMIoControl: Unsupported Code 0x%x - device 0x%04x func %d\r\n",             code, code >> 16, (code >> 2)&0x0FFF         ));         goto cleanUp;     }            // Take critical section if required (after postinit & no flag)     if (         g_ioctlState.postInit &&         (g_oalIoCtlTable[i].flags & OAL_IOCTL_FLAG_NOCS) == 0     ) {         // Take critical section                    EnterCriticalSection(&g_ioctlState.cs);     }     // Execute the handler     rc = g_oalIoCtlTable[i].pfnHandler(         code, pInBuffer, inSize, pOutBuffer, outSize, pOutSize     );     // Release critical section if it was taken above     if (         g_ioctlState.postInit &&         (g_oalIoCtlTable[i].flags & OAL_IOCTL_FLAG_NOCS) == 0     ) {         // Release critical section                    LeaveCriticalSection(&g_ioctlState.cs);     } cleanUp:     OALMSG(OAL_IOCTL&&OAL_FUNC, (L"-OEMIoControl(rc = %d)\r\n", rc ));     return rc; }   Where is the g_oalIoCtlTable? It is defined in your BSP. Let's use DeviceEmulator BSP as an example. The PLATFORM\DEVICEEMULATOR\SRC\OAL\OALLIB\ioctl.c defines the table as const OAL_IOCTL_HANDLER g_oalIoCtlTable[] = { #include "ioctl_tab.h" }; And that leads to PLATFORM\DEVICEEMULATOR\SRC\INC\ioctl_tab.h which defined some of IOCTL handler but others are defined in oal_ioctl_tab.h which is under PLATFORM\COMMON\SRC\INC\. Finally, we got the full table body! (Just like tracing MFC, always jumping back and forth). 
    The format of the table is very straightforward: IOCTL code, flags and handler function.

        // IOCTL CODE,                          Flags   Handler Function
        //------------------------------------------------------------------------------
        { IOCTL_HAL_INITREGISTRY,                   0,  OALIoCtlHalInitRegistry     },
        { IOCTL_HAL_INIT_RTC,                       0,  OALIoCtlHalInitRTC          },
        { IOCTL_HAL_REBOOT,                         0,  OALIoCtlHalReboot           },

    PQOAL scans through the table until it finds a matching IOCTL code, then invokes the handler function. Since it scans the table from the top, if we define TWO handlers with the same IOCTL code, the first one is always invoked, with no exception. Now back to PLATFORM\DEVICEEMULATOR\SRC\INC\ioctl_tab.h, with the following table:

        { IOCTL_HAL_INITREGISTRY,                   0,  OALIoCtlDeviceEmulatorHalInitRegistry     },
        ...
        #include <oal_ioctl_tab.h>

    Note the IOCTL_HAL_INITREGISTRY handler is defined in both the BSP's local ioctl_tab.h and the common oal_ioctl_tab.h, but because the BSP's local handler comes before "#include <oal_ioctl_tab.h>" we know OALIoCtlDeviceEmulatorHalInitRegistry always gets called. In this example, the DeviceEmulator BSP overrides the IOCTL_HAL_INITREGISTRY handler from OALIoCtlHalInitRegistry to OALIoCtlDeviceEmulatorHalInitRegistry by manipulating the g_oalIoCtlTable table. (From some points of view, it is similar to a message map in MFC.) Please be aware that when you override an IOCTL handler in PQOAL, you may want to clone the original implementation to your BSP and change it to meet your needs. This is recommended and saves you redundant work, but remember to rename the handler function (just as the DeviceEmulator changes the name of OALIoCtlHalInitRegistry to OALIoCtlDeviceEmulatorHalInitRegistry). If you don't change the name, the linker may not be happy (due to the name conflict), and more importantly, by using a different handler name you can always redirect the handler back to the original one. (It is like the OOP concept of calling a function in the base class; still not so clear? I am going to show you soon!) OALIoCtlDeviceEmulatorHalInitRegistry sets up DeviceEmulator-specific registry settings and in the end, if everything goes well, it calls OALIoCtlHalInitRegistry (PLATFORM\COMMON\SRC\COMMON\IOCTL\reginit.c) to do the rest.

        if(fOk) {
            fOk = OALIoCtlHalInitRegistry(code, pInpBuffer, inpSize, pOutBuffer,
                outSize, pOutSize);
        }

    Now you have the picture. Whenever you want to override an IOCTL handler that is implemented in PQOAL, just:
    Clone the handler function to your BSP as a template.
    Make a simple name change for the handler function, and a matching name change in the IOCTL table header file that maps the IOCTL to the function.
    Implement your IOCTL handler, and whenever you need to redirect it back, just call the original handler function.
    It is the standard way of implementing a custom IOCTL and the one most Microsoft developers prefer. The mapping of IOCTL routine to IOCTL code is platform specific - you control the header file that does that mapping.

    Read the article

  • Find related field in Dynamics AX

    - by DAXShekhar
    static void findRelatedFieldId(Args _args)
    {
        SysDictTable    dictTable = new SysDictTable(tablenum(InventTrans));
        int             i, j;
        SysDictRelation dictRelation;
        TableId         externId = tablenum(SalesLine);
        IndexId         indexId;
        ;
        // Search the explicit relations
        for (i = 1; i <= dictTable.relationCnt(); ++i)
        {
            dictRelation = new SysDictRelation(dictTable.id());
            dictRelation.loadNameRelation(dictTable.relation(i));
            // If it is a 'relation' then you use externTable(), but for extended data types you use table() (see next block).
            if (SysDictRelation::externId(dictRelation) == externId)
            {
                for (j = 1; j <= dictRelation.lines(); j++)
                {
                    info(strfmt("%1", dictRelation.lineExternTableValue(j)));
                    info(fieldid2name(dictRelation.externTable(), dictRelation.lineExternTableValue(j)));
                }
            }
        }
        info(strfmt("%1", dictRelation.loadFieldRelation(fieldnum(InventTrans, InventTransId))));
    }

    Read the article

  • Geekswithblogs.net | Congrats to the new and renewed MVPs

    - by Geekswithblogs Administrator
    We just wanted to send a shout out to all those who have entered or been renewed in the MVP program. I always wondered why they wouldn't move the April date off of April Fool's Day, because that would be an interesting email to get on April 1. If you are a GWB blogger and an MVP but your name does not have an MVP logo next to it on the homepage, let us know via support and we will get you added. Related Tags: Geekswithblogs.net, MVP, Microsoft

    Read the article

  • Poor Customer Service Example

    - by MightyZot
    Lately I have been frustrated by examples of poor customer service. At least one is worth writing about because I don’t think companies realize the effects of their service policies on loyal customers. Bad Customer Service Example #1 Recently, I received an offer in the mail from my cable company, suddenLink. The offer was for an updated TiVo for $12/mo. Normally I ignore offers like this one because I already have the service they’re offering and many times advertisers are offering alternatives to what is already an excellent product offering. I tend to exhibit a high level of loyalty to the products and brands that I use. In this case, we were looking to upgrade our TiVo and this deal is attractive for several reasons: I don’t want to pay a huge amount up-front for the device, so paying a monthly amount for the device is attractive to me. My entertainment is almost all on a single invoice. I’m no longer going to be billed by suddenLink and TiVo. TiVo is still involved, so I am still loyal to the brand I love. I have resisted moving to other DVRs and services for over a decade. I called suddenLink to order the new TiVo and was rewarded with great customer service. In fact, I can’t remember ever getting poor customer service from suddenLink. They are always there to answer my technical support questions and they are very responsive to outages. Then I called TiVo. First of all, I chose the option on the phone system to change or cancel my service, which was consequently met by an inordinate hold time. (I’m calling this time inordinate because I get through very quickly if I want to purchase something.) This is a trend that I’ve noticed with companies – if you want me to be loyal to you, it should be just as easy to cancel your service as it is to purchase it. Because, I should never be cancelling because I am unhappy. And, if you ever want my business again, or more importantly a reference, then you’d better make the exit door open just as easy as the enter door. After quite some time on hold, I talked to “Victor” who was very courteous. Victor canceled my service and then told me that I could keep my current TiVo and transfer recorded programs to it from the new TiVo.  Cool I said, but what about the cost?  He said there was no extra cost.  This was also attractive to me because I paid for my TiVo and it would be good to use it for something at least.  That was four months ago. This month I noticed that TiVo was still charging me for my original service. I was a little upset, but I decided to give them the benefit of the doubt. After all, I am a loyal TiVo customer and I have resisted moving to other solutions for over a decade. I’m sure they will do whatever it takes to keep my business, through TiVo or through suddenLink. After quite some time on hold, I was able to talk to a customer service representative, “Les”. I explained that I am a loyal TiVo customer, but I purchased this deal through my cable provider. I’m still with TiVo, I just wanted a single bill and to take advantage of the pay-over-time option. “Les” told me that he was very sorry to hear that I’m leaving TiVo, to which I responded again that I wasn’t leaving TiVo, I just want one invoice, and to take advantage of the pay-over-time. So, after explaining that I requested a termination of the non-suddenLink account (TiVo can see both of course), I was put on hold again for quite some time while my refund was “approved”.  “Les” said that he could see my cancellation request back in July. 
Note that it is now November, so they have billed me inappropriately four times. After quite some time, he came back on the line and told me that he was able to “get me most of my money back.” He got approval to refund 90 days. Even though I requested cancellation of one of my accounts, TiVo has that cancellation request on file and they admit overbilling me, I am going to get “most” of my money back. To top this experience off, when we were ready to hang up, “Les” told me that he was sorry to see me go and that he hoped I would come back to TiVo again. Again, I explained to “Les” that I have not left TiVo. I am just paying them through suddenLink. At that point, he went into a small dissertation about how this is a special arrangement they have with suddenLink and very few others. He made me feel like I was doing something wrong. Why should I feel that way? TiVo made the deal with suddenLink, not me, and the deal seemed like a good compromise for me to be able to get what I need. Here is what TiVo Customer Service accomplished on those two calls – I no longer feel like I need to be loyal to the TiVo brand or service. If I had been treated better on these two calls, I would still be recommending TiVo to my friends. They would still be getting revenue from a loyal customer, who paid the same rate for over a decade, and this article wouldn’t be here for you to read. Interesting… In my opinion, if you want brand loyalty, be loyal to your customers!

    Read the article

  • How do I programatically determine which port a SQL Server is running on?

    - by Ralph Willgoss
    How do I programmatically determine which port a SQL Server is running on?

        /*===== Param ref for xp_readerrorlog =====
        1. Value of error log file you want to read: 0 = current, 1 = Archive #1, 2 = Archive #2, etc...
        2. Log file type: 1 or NULL = error log, 2 = SQL Agent log
        3. Search string 1: String one you want to search for
        4. Search string 2: String two you want to search for to further refine the results
        5. Search from start time
        6. Search to end time
        7. Sort order for results: N'asc' = ascending, N'desc' = descending

        How many error logs do I have?
        SSMS -> Management -> SQL Server Logs -> (right click) -> Configure = see values
        */
        USE Master
        GO

        -- get log count
        DECLARE @logcount int
        DROP TABLE #Result
        CREATE TABLE #Result (ArchiveNo int, Date datetime, Size int)
        INSERT INTO #Result
        EXEC xp_enumerrorlogs
        SET @logcount = (SELECT COUNT(*) FROM #Result)

        -- search logs
        DECLARE @counter int
        SET @counter = 0
        WHILE @counter <= @logcount
        BEGIN
            EXEC xp_readerrorlog @counter, 1, N'Server is listening on', 'any', NULL, NULL, N'asc'
            SET @counter = @counter + 1
        END
        GO
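    Since the question is how to do this programmatically, here is a hedged C# sketch that runs the same xp_readerrorlog search from ADO.NET and prints the matching "Server is listening on" lines. The connection string is a placeholder, and the code simply prints the last column of each result row rather than assuming a particular column name for the message text.

        using System;
        using System.Data.SqlClient;

        class SqlPortFinder
        {
            static void Main()
            {
                // Placeholder connection string - point it at the instance you are inspecting.
                const string connectionString = "Server=.;Database=master;Integrated Security=true";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "EXEC xp_readerrorlog 0, 1, N'Server is listening on', N'any', NULL, NULL, N'asc'",
                    connection))
                {
                    connection.Open();
                    using (SqlDataReader reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // The message text is the last column of the error-log row,
                            // e.g. "Server is listening on [ 'any' <ipv4> 1433]."
                            Console.WriteLine(reader[reader.FieldCount - 1]);
                        }
                    }
                }
            }
        }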

    Read the article

  • BizTalk: Internals: the Partner Direct Ports and the Orchestration Chains

    - by Leonid Ganeline
    Partner Direct Port is one of the BizTalk hidden gems. It opens simple ways to the several messaging patterns. This article based on the Kevin Lam’s blog article. The article is pretty detailed but it still leaves several unclear pieces. So I have created a sample and will show how it works from different perspectives. Requirements We should create an orchestration chain where the messages should be routed from the first stage to the second stage. The messages should not be modified. All messages has the same message type. Common artifacts Source code can be downloaded here. It is interesting but all orchestrations use only one port type. It is possible because all ports are one-way ports and use only one operation. I have added a B orchestration. It helps to test the sample, showing all test messages in channel. The Receive shape Filter is empty. A Receive Port (R_Shema1Direct) is a plain Direct Port. As you can see, a subscription expression of this direct port has only one part, the MessageType for our test schema: A Filer is empty but, as you know, a link from the Receive shape to the Port creates this MessageType expression. I use only one Physical Receive File port to send a message to all processes. Each orchestration outputs a Trace.WriteLine(“<Orchestration Name>”). Forward Binding This sample has three orchestrations: A_1, A_21 and A_22. A_1 is a sender, A_21 and A_22 are receivers. Here is a subscription of the A_1 orchestration: It has two parts A MessageType. The same was for the B orchestration. A ReceivePortID. There was no such parameter for the B orchestration. It was created because I have bound the orchestration port with Physical Receive File port. This binding means the PortID parameter is added to the subscription. How to set up the ports? All ports involved in the message exchange should be the same port type. It forces us to use the same operation and the same message type for the bound ports. This step as absolutely contra-intuitive. We have to choose a Partner Orchestration parameter for the sending orchestration, A_1. The first strange thing is it is not a partner orchestration we have to choose but an orchestration port. But the most strange thing is we have to choose exactly this orchestration and exactly this port.It is not a port from the partner, receive orchestrations, A_21 or A_22, but it is A_1 orchestration and S_SentFromA_1 port. Now we have to choose a Partner Orchestration parameter for the received orchestrations, A_21 and A_22. Nothing strange is here except a parameter name. We choose the port of the sender, A_1 orchestration and S_SentFromA_1 port. As you can see the Partner Orchestration parameter for the sender and receiver orchestrations is the same. Testing I dropped a test file in a file folder. There we go: A dropped file was received by B and by A_1 A_1 sent a message forward. A message was received by B, A_21, A_22 Let’s look at a context of a message sent by A_1 on the second step: A MessageType part. It is quite expected. A PartnerService, a ParnerPort, an Operation. All those parameters were set up in the Partner Orchestration parameter on both bound ports.     Now let’s see a subscription of the A_21 and A_22 orchestrations. Now it makes sense. That’s why we have chosen such a strange value for the Partner Orchestration parameter of the sending orchestration. Inverse Binding This sample has three orchestrations: A_11, A_12 and A_2. A_11 and A_12 are senders, A_2 is receiver. How to set up the ports? 
All ports involved in the message exchange should be the same port type. It forces us to use the same operation and the same message type for the bound ports. This step as absolutely contra-intuitive. We have to choose a Partner Orchestration parameter for a receiving orchestration, A_2. The first strange thing is it is not a partner orchestration we have to choose but an orchestration port. But the most strange thing is we have to choose exactly this orchestration and exactly this port.It is not a port from the partner, sent orchestrations, A_11 or A_12, but it is A_2 orchestration and R_SentToA_2 port. Now we have to choose a Partner Orchestration parameter for the sending orchestrations, A_11 and A_12. Nothing strange is here except a parameter name. We choose the port of the sender, A_2 orchestration and R_SentToA_2 port. Testing I dropped a test file in a file folder. There we go: A dropped file was received by B, A_11 and by A_12 A_11 and A_12 sent two messages forward. The messages were received by B, A_2 Let’s see what was a context of a message sent by A_1 on the second step: A MessageType part. It is quite expected. A PartnerService, a ParnerPort, an Operation. All those parameters were set up in the Partner Orchestration parameter on both bound ports. Here is a subscription of the A_2 orchestration. Models I had a hard time trying to explain the Partner Direct Ports in simple terms. I have finished with this model: Forward Binding Receivers know a Sender. Sender doesn’t know Receivers. Publishers know a Subscriber. Subscriber doesn’t know Publishers. 1 –> 1 1 –> M Inverse Binding Senders know a Receiver. Receiver doesn’t know Senders. Subscribers know a Publisher. Publisher doesn’t know Subscribers. 1 –> 1 M –> 1 Notes   Orchestration chain It’s worth to note, the Partner Direct Port Binding creates a chain opened from one side and closed from another. The Forward Binding: A new Receiver can be added at run-time. The Sender can not be changed without design-time changes in Receivers. The Inverse Binding: A new Sender can be added at run-time. The Receiver can not be changed without design-time changes in Senders.

    Read the article

  • Thoughts about MVC

    - by ayyash
    So I figured this one out. As a newcomer to the web development scene from the telecom business, where we dealt with low-level hardware APIs, and as a C++ fanboy, I tend to be bothered by automagical code. Yes, I appreciate the effort that went into it, and I certainly appreciate the luxury it provides to get more things done, but I just don't get it. So I decided to change that and start investigating the new MVC-based web apps, and at first it was like hitting a brick wall. I knew MVC from MFC days, so I'm familiar with the pattern, but I just couldn't get my head around the web version of it, until I came to realize that routing is actually a separate feature to be inspected, much like understanding how LINQ works by better understanding anonymous objects. So this article serves as an introduction to the following blogs, where I share my views on how ASP.NET routing works and then leverage that at the MVC level and play around in that field for a bit. As with most of my shared knowledge, this may seem trivial to some, but I guess a newcomer's point of view can be useful for some folks out there.
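    As a concrete starting point for the routing investigation mentioned above, this is roughly what the stock ASP.NET MVC route registration in Global.asax looks like; it is a minimal sketch using the standard template values, not anything specific to this post.

        using System.Web.Mvc;
        using System.Web.Routing;

        public class MvcApplication : System.Web.HttpApplication
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

                // "{controller}/{action}/{id}" is what maps an incoming URL such as
                // /Products/Details/5 onto ProductsController.Details(5).
                routes.MapRoute(
                    "Default",                                    // route name
                    "{controller}/{action}/{id}",                 // URL pattern
                    new { controller = "Home", action = "Index", id = UrlParameter.Optional });
            }

            protected void Application_Start()
            {
                RegisterRoutes(RouteTable.Routes);
            }
        }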

    Read the article

  • Telesharp: An application metadata repository that enables true agility in enterprise .NET applications.

    - by Vishal
    Tellago Studios proudly announces its newest product, the third one within a year: TELESHARP. .NET configuration management has always been a nightmare for any enterprise. TeleSharp is an innovative product that addresses the most common challenges of .NET applications in the enterprise. After years of struggling to develop and manage large .NET applications, we decided to create a tool that makes .NET applications truly agile. You can read more about TeleSharp and the difference it can make in your enterprise. Also, if you want to see TeleSharp in action, check out the videos about it. Click here to get more information about the TeleSharp trial version! Click here to register for the TeleSharp webinar on July 6th from 2PM - 3PM EST. -Vishal

    Read the article

  • SQL Server-Determine which query is taking a long time to complete

    - by Neil Smith
    A cool little trick to determine which SQL query is taking a long time to execute. First, while the offending query is running, from another machine do:

        EXEC sp_who2

    Locate the SPID responsible via the Login, DBName and ProgramName columns, then do:

        DBCC INPUTBUFFER (<SPID>)

    The offending query will be in the EventInfo column. This is a great little time saver for me; before I found out about this I used to split my concatenated query script into multiple SQL files until I located the problem query.
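    If you want to script the second step rather than run it by hand, a hedged C# sketch might look like the following. The SPID is hard-coded purely as an example (in practice you would take it from the sp_who2 output), the connection string is a placeholder, and the EventInfo column name is the one mentioned in the post above.

        using System;
        using System.Data.SqlClient;

        class InputBufferDump
        {
            static void Main()
            {
                const int spid = 53; // example SPID found via EXEC sp_who2
                const string connectionString = "Server=.;Database=master;Integrated Security=true";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("DBCC INPUTBUFFER(" + spid + ")", connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // EventInfo holds the text of the offending query.
                            Console.WriteLine(reader["EventInfo"]);
                        }
                    }
                }
            }
        }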

    Read the article

  • Marshalling the value of a char* ANSI string DLL API parameter into a C# string

    - by Brian Biales
    For those who do not mix .NET C# code with legacy DLLs that use char* pointers on a regular basis, the process to convert the strings one way or the other is non-obvious. This is not a comprehensive article on the topic at all, but rather an example of something that took me some time to find; maybe it will save someone else the time. I am utilizing a third party tool that uses a callback function to inform my application of its progress. This callback includes a pointer that under some circumstances is a pointer to an ANSI character string. I just need to marshal it into a C# string variable. Seems pretty simple, yes? Well, it is (as are most things, once you know how to do them). The parameter of my callback function is of type IntPtr, which implies it is an integer representation of a pointer. If I know the pointer is pointing to a simple ANSI string, here is a simple static method to copy it to a C# string:

        private static string GetStringFromCharStar(IntPtr ptr)
        {
            return System.Runtime.InteropServices.Marshal.PtrToStringAnsi(ptr);
        }

    The System.Runtime.InteropServices namespace is where to look any time you are mixing legacy unmanaged code with your .NET application.
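    For context, here is a hedged sketch of how such a progress callback might be declared on the managed side. The delegate shape, the Cdecl calling convention, and the RegisterProgressCallback import are assumptions for illustration only - the post does not name the actual third-party DLL or its exports.

        using System;
        using System.Runtime.InteropServices;

        internal static class LegacyInterop
        {
            // Assumed shape of the native callback: the second argument is sometimes a char*.
            [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
            public delegate void ProgressCallback(int stage, IntPtr info);

            // Hypothetical import - the real DLL name and export are not given in the post.
            [DllImport("ThirdPartyTool.dll", CallingConvention = CallingConvention.Cdecl)]
            public static extern void RegisterProgressCallback(ProgressCallback callback);

            public static void OnProgress(int stage, IntPtr info)
            {
                // When the pointer refers to an ANSI string, marshal it as the post shows.
                string message = Marshal.PtrToStringAnsi(info);
                Console.WriteLine("Stage {0}: {1}", stage, message);
            }
        }

    If you wire something like this up, keep a reference to the delegate instance alive for as long as the native code can call it; otherwise the garbage collector may collect it out from under the unmanaged caller.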

    Read the article

  • Windows Azure Tools for Microsoft Visual Studio 1.2 (June 2010)

    - by Jim Duffy
    I have good news! Microsoft has released the June 2010 Windows Azure Tools + SDK. These new tools extend Visual Studio (both VS 2010 & VS 2008) for Windows Azure development. With these tools you can create, configure, debug, build, run, and deploy scalable web apps on Windows Azure. At first glance, the most interesting points to me are that Visual Studio 2010 RTM is fully supported and that .NET 4 is supported: you can choose to build your apps with the .NET 3.5 or .NET 4 framework. Another area of interest that I'll be digging into is the cloud storage explorer. It provides a read-only view of your Windows Azure tables and blob containers from within Visual Studio via the Server Explorer. I'm sure I'll have more to say about the Windows Azure Tools as I dig deeper… Have a day. :-|

    Read the article

  • Windows Azure Tools for Microsoft Visual Studio 1.2 (June 2010)

    - by Eric Nelson
    Yey – we have a public release of the Windows Azure Tools which fully supports Visual Studio 2010 RTM and the .NET 4 Framework. And the biggy I have been waiting for – IntelliTrace support to debug your cloud-deployed services (requires VS2010 Ultimate). Download today http://bit.ly/azuretoolsjune
    New for version 1.2:
    Visual Studio 2010 RTM Support: Full support for Visual Studio 2010 RTM.
    .NET 4 support: Choose to build services targeting either the .NET 3.5 or .NET 4 framework.
    Cloud storage explorer: Displays a read-only view of Windows Azure tables and blob containers through Server Explorer.
    Integrated deployment: Deploy services directly from Visual Studio by selecting 'Publish' from Solution Explorer.
    Service monitoring: Keep track of the state of your services through the 'compute' node in Server Explorer.
    IntelliTrace support for services running in the cloud: Adds support for debugging services in the cloud by using the Visual Studio 2010 IntelliTrace feature. This is enabled by using the deployment feature, and logs are retrieved through Server Explorer.
    Related Links: http://ukazure.ning.com for UK fans of Windows Azure IntelliTrace explained

    Read the article

  • AJI Report #20 | Devin Rader On Usability and REST

    - by Jeff Julian
    Devin is one of our great friends from days of ole'. Devin was a great community leader in the St. Louis .NET space. He then moved to New Jersey to work at Infragistics, where he was a huge asset for the .NET and usability communities. He is now at Twilio as an evangelist, and you will see him at pretty much every cool conference promoting Twilio and educating the masses. In this show, we talk about what usability is, how developers can understand how to solve problems with usability, and some of the patterns we can use. Devin really wants to bring the focus back to the beginning - knowing who your users are - and we talk about how to produce personas of the users of our products. We dive into REST for the second piece of this podcast. Devin helps us understand more about REST and what goes into a RESTful application or service. Listen to the Show Twilio Site: http://www.twilio.com Twitter: @DevinRader LinkedIn: Profile Link

    Read the article
