Search Results

Search found 15515 results on 621 pages for 'datetime format'.


  • Silverlight 4 Tools for VS 2010 and WCF RIA Services Released

    - by ScottGu
    The final release of the Silverlight 4 Tools for Visual Studio 2010 and WCF RIA Services is now available for download.  Download and Install If you already have Visual Studio 2010 installed (or the free Visual Web Developer 2010 Express), then you can install both the Silverlight 4 Tooling Support as well as WCF RIA Services support by downloading and running this setup package (note: please make sure to uninstall the preview release of the Silverlight 4 Tools for VS 2010 if you have previously installed that).  The Silverlight 4 Tools for VS 2010 package extends the Silverlight support built into Visual Studio 2010 and enables support for Silverlight 4 applications as well.  It also installs WCF RIA Services application templates and libraries: Today’s release includes the English edition of the Silverlight 4 Tooling – localized versions will be available next month for other Visual Studio languages as well. Silverlight Tooling Support Visual Studio 2010 includes rich tooling support for building Silverlight and WPF applications. It includes a WYSIWYG designer surface that enables you to easily use controls to construct UI – including the ability to take advantage of layout containers, and apply styles and resources: The VS 2010 designer enables you to leverage the rich data binding support within Silverlight and WPF, and easily wire-up bindings on controls.  The Data Sources window within Silverlight projects can be used to reference POCO objects (plain old CLR objects), WCF Services, WCF RIA Services client proxies or SharePoint Lists.  For example, let’s assume we add a “Person” class like below to our project: We could then add it to the Data Source window which will cause it to show up like below in the IDE: We can optionally customize the default UI control types that are associated for each property on the object.  For example, below we’ll default the BirthDate property to be represented by a “DatePicker” control: And then when we drag/drop the Person type from the Data Sources onto the design-surface it will automatically create UI controls that are bound to the properties of our Person class: VS 2010 allows you to optionally customize each UI binding further by selecting a control, and then right-click on any of its properties within the property-grid and pull up the “Apply Bindings” dialog: This will bring up a floating data-binding dialog that enables you to easily configure things like the binding path on the data source object, specify a format convertor, specify string-format settings, specify how validation errors should be handled, etc: In addition to providing WYSIWYG designer support for WPF and Silverlight applications, VS 2010 also provides rich XAML intellisense and code editing support – enabling a rich source editing environment. Silverlight 4 Tool Enhancements Today’s Silverlight 4 Tooling Release for VS 2010 includes a bunch of nice new features.  These include: Support for Silverlight Out of Browser Applications and Elevated Trust Applications You can open up a Silverlight application’s project properties window and click the “Enable Running Application Out of Browser” checkbox to enable you to install an offline, out of browser, version of your Silverlight 4 application.  
You can then customize a number of “out of browser” settings of your application within Visual Studio: Notice above how you can now indicate that you want to run with elevated trust, with hardware graphics acceleration, as well as customize things like the Window style of the application (allowing you to build a nice polished window style for consumer applications). Support for Implicit Styles and “Go to Value Definition” Support: Silverlight 4 now allows you to define “implicit styles” for your applications.  This allows you to style controls by type (for example: have a default look for all buttons) and avoid having to explicitly reference styles from each control.  In addition to honoring implicit styles on the designer-surface, VS 2010 also now allows you to right-click on any control (or on one of its properties) and choose the “Go to Value Definition…” context menu to jump to the XAML where the style is defined, and from there you can easily navigate onward to any referenced resources.  This makes it much easier to figure out questions like “why is my button red?”: Style Intellisense VS 2010 enables you to easily modify styles you already have in XAML, and now you get intellisense for properties and their values within a style based on the TargetType of the specified control.  For example, below we have a style being set for controls of type “Button” (this is indicated by the “TargetType” property).  Notice how intellisense now automatically shows us properties for the Button control (even within the <Setter> element): Great Video - Watch the Silverlight Designer Features in Action You can see all of the above Silverlight 4 Tools for Visual Studio 2010 features (and some more cool ones I haven’t mentioned) demonstrated in action within this 20-minute Silverlight.TV video on Channel 9: WCF RIA Services Today we also shipped the V1 release of WCF RIA Services.  It is included and automatically installed as part of the Silverlight 4 Tools for Visual Studio 2010 setup. WCF RIA Services makes it much easier to build business applications with Silverlight.  It simplifies the traditional n-tier application pattern by bringing together the ASP.NET and Silverlight platforms using the power of WCF for communication.  WCF RIA Services provides a pattern to write application logic that runs on the mid-tier and controls access to data for queries, changes and custom operations. It also provides end-to-end support for common tasks such as data validation, authentication and authorization based on roles by integrating with Silverlight components on the client and ASP.NET on the mid-tier. Put simply – it makes it much easier to query data stored on a server from a client machine, optionally manipulate/modify the data on the client, and then save it back to the server.  It supports a validation architecture that helps ensure that your data is kept secure and business rules are applied consistently on both the client and middle-tiers. WCF RIA Services uses WCF for communication between the client and the server.  It supports both an optimized .NET to .NET binary serialization format, as well as a set of open extensions to the ATOM format known as OData and an optional JavaScript Object Notation (JSON) format that can be used by any client. You can hear Nikhil and Dinesh talk a little about WCF RIA Services in this 13-minute Channel 9 video.
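To make that mid-tier pattern a little more concrete, here is a hedged C# sketch of the kind of domain service class WCF RIA Services encourages; the NorthwindEntities context and Product entity are placeholder names, and the code the VS 2010 item template actually generates may differ in detail.

using System.Linq;
using System.ServiceModel.DomainServices.EntityFramework;
using System.ServiceModel.DomainServices.Hosting;

// A domain service holds the query/update logic on the mid-tier; the Silverlight
// client calls it through a generated proxy. NorthwindEntities and Product are
// placeholder names for an Entity Framework model and entity.
[EnableClientAccess]
public class ProductDomainService : LinqToEntitiesDomainService<NorthwindEntities>
{
    // Query method exposed to the client; results can be filtered and sorted server-side.
    public IQueryable<Product> GetProducts()
    {
        return this.ObjectContext.Products.OrderBy(p => p.ProductName);
    }

    // Update method; validation and authorization rules can be applied here
    // so they run consistently on the mid-tier.
    public void UpdateProduct(Product currentProduct)
    {
        this.ObjectContext.Products.AttachAsModified(
            currentProduct, this.ChangeSet.GetOriginal(currentProduct));
    }
}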
Putting it all Together – the Silverlight 4 Training Kit Check out the Silverlight 4 Training Kit to learn more about how to build business applications with Silverlight 4, Visual Studio 2010 and WCF RIA Services. The training kit includes 8 modules, 25 videos, and several hands-on labs that explain Silverlight 4 and WCF RIA Services concepts and walk you through building an end-to-end application with them.  The training kit is available for free and is a great way to get started. Summary I’m really excited about today’s releases – they really complete the Silverlight development story and deliver a great end-to-end runtime + tooling story for building applications.  All of the above features are available for use both in VS 2010 and in the free Visual Web Developer 2010 Express Edition – making it really easy to get started building great solutions. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Exploring the Excel Services REST API

    - by jamiet
    Over the last few years Analysis Services guru Chris Webb and I have been on something of a crusade to enable better access to data that is locked up in countless Excel workbooks that litter the hard drives of enterprise PCs. The most prominent manifestation of that crusade up to now has been a forum thread that Chris began on Microsoft Answers entitled Excel Web App API? Chris began that thread with: I was wondering whether there was an API for the Excel Web App? Specifically, I was wondering if it was possible (or if it will be possible in the future) to expose data in a spreadsheet in the Excel Web App as an OData feed, in the way that it is possible with Excel Services? Up to recently the last 10 words of that paragraph "in the way that it is possible with Excel Services" had completely washed over me however a comment on my recent blog post Thoughts on ExcelMashup.com (and a rant) by Josh Booker in which Josh said: Excel Services is a service application built for sharepoint 2010 which exposes a REST API for excel documents. We're looking forward to pros like you giving it a try now that Office365 makes sharepoint more easily accessible.  Can't wait for your future blog about using REST API to load data from Excel on Offce 365 in SSIS. made me think that perhaps the Excel Services REST API is something I should be looking into and indeed that is what I have been doing over the past few days. And you know what? I'm rather impressed with some of what Excel Services' REST API has to offer. Unfortunately Excel Services' REST API also has one debilitating aspect that renders this blog post much less useful than it otherwise would be; namely that it is not publicly available from the Excel Web App on SkyDrive. Therefore all I can do in this blog post is show you screenshots of what the REST API provides in Sharepoint rather than linking you directly to those REST resources; that's a great shame because one of the benefits of a REST API is that it is easily and ubiquitously demonstrable from a web browser. Instead I am hosting a workbook on Sharepoint in Office 365 because that does include Excel Services' REST API but, again, all I can do is show you screenshots. N.B. If anyone out there knows how to make Office-365-hosted spreadsheets publicly-accessible (i.e. without requiring a username/password) please do let me know (because knowing which forum on which to ask the question is an exercise in futility). In order to demonstrate Excel Services' REST API I needed some decent data and for that I used the World Tourism Organization Statistics Database and Yearbook - United Nations World Tourism Organization dataset hosted on Azure Datamarket (its free, by the way); this dataset "provides comprehensive information on international tourism worldwide and offers a selection of the latest available statistics on international tourist arrivals, tourism receipts and expenditure" and you can explore the data for yourself here. If you want to play along at home by viewing the data as it exists in Excel then it can be viewed here. Let's dive in.   The root of Excel Services' REST API is the model resource which resides at: http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model Note that this is true for every workbook hosted in a Sharepoint document library - each Excel workbook is a RESTful resource. (Update: Mark Stacey on Twitter tells me that "It's turned off by default in onpremise Sharepoint (1 tickbox to turn on though)". Thanks Mark!) 
The data is provided as an ATOM feed but I have Firefox's feed reading ability turned on so you don't see the underlying XML goo. As you can see there are four top level resources, Ranges, Charts, Tables and PivotTables; exploring one of those resources is where things get interesting. Let's take a look at the Tables Resource: http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables Our workbook contains only one table, called ‘Table1’ (to reiterate, you can explore this table yourself here). Viewing that table via the REST API is pretty easy, we simply append the name of the table onto our previous URI: http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables('Table1') As you can see, that quite simply gives us a representation of the data in that table. What you cannot see from this screenshot is that this is pure HTML that is being served up; that is all well and good but actually we can do more interesting things. If we specify that the data should be returned not as HTML but as: http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables('Table1')?$format=image then that data comes back as a pure image and can be used in any web page where you would ordinarily use images. This is the thing that I really like about Excel Services’ REST API – we can embed an image in any web page but instead of being a copy of the data, that image is actually live – if the underlying data in the workbook were to change then hitting refresh will show a new image. Pretty cool, no? The same is true of any Charts or Pivot Tables in your workbook - those can be embedded as images too and if the underlying data changes, boom, the image in your web page changes too. There is a lot of data in the workbook so the image returned by that previous URI is too large to show here so instead let’s take a look at a different resource, this time a range: http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Ranges('Data!A1|C15') That URI returns cells A1 to C15 from a worksheet called “Data”: And if we ask for that as an image again: http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Ranges('Data!A1|C15')?$format=image Were this image resource not behind a username/password then this would be a live image of the data in the workbook as opposed to one that I had to copy and upload elsewhere. Nonetheless I hope this little wrinkle doesn't detract from the inate value of what I am trying to articulate here; that an existing image in a web page can be changed on-the-fly simply by inserting some data into an Excel workbook. I for one think that that is very cool indeed! I think that's enough in the way of demo for now as this shows what is possible using Excel Services' REST API. Of course, not all features work quite how I would like and here is a bulleted list of some of my more negative feedback: The URIs are pig-ugly. Are "_vti_bin" & "ExcelRest.aspx" really necessary as part of the URI? Would this not be better: http://server/Documents/TourismExpenditureInMillionsOfUSD.xlsx/Model/Tables(‘Table1’) That URI provides the necessary addressability and is a lot easier to remember. Discoverability of these resources is not easy, we essentially have to handcrank a URI ourselves. 
Take the example of embedding a chart into a blog post - would it not be better if I could browse first through the document library to an Excel workbook and THEN through the workbook to the chart/range/table that I am interested in? Call it a wizard if you like. That would be really cool and would, I am sure, promote this feature and cut down on the copy-and-paste disease that the REST API is meant to alleviate. The resources that I demonstrated can be returned as feeds as well as images or HTML simply by changing the format parameter to ?$format=atom, however for some inexplicable reason they don't return OData and no-one on the Excel Services team can tell me why (believe me, I have asked). $format is an OData parameter, however other useful parameters such as $top and $filter are not supported. It would be nice if they were. Although I haven't demonstrated it here, Excel Services' REST API does provide a makeshift way of altering the data by changing the value of specific cells, however what it does not allow you to do is add new data into the workbook. Google Docs allows this and was one of the motivating factors for Chris Webb's forum post that I linked to above. None of this works for Excel workbooks hosted on SkyDrive. This blog post is as long as it needs to be for a short introduction so I'll stop now. If you want to know more then I recommend checking out a few links: Excel Services REST API documentation on MSDN; So what does REST on Excel Services look like??? by Shahar Prish; Excel Services in SharePoint 2010 REST API Syntax by Christian Stich. Any thoughts? Let's hear them in the comments section below! @Jamiet
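For anyone wanting to consume these resources from .NET code rather than a browser, here is a hedged C# sketch; the server name and credentials are placeholders, and as noted above the resources are not served anonymously from SharePoint/Office 365.

using System;
using System.IO;
using System.Net;

class ExcelRestSample
{
    static void Main()
    {
        // Placeholder URI following the pattern shown in the post:
        // a worksheet range rendered as an image.
        string rangeAsImageUri =
            "http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx" +
            "/model/Ranges('Data!A1|C15')?$format=image";

        using (var client = new WebClient())
        {
            // Hypothetical credentials; SharePoint/Office 365 requires authentication.
            client.Credentials = new NetworkCredential("user", "password", "domain");

            byte[] imageBytes = client.DownloadData(rangeAsImageUri);
            File.WriteAllBytes("range.png", imageBytes);

            Console.WriteLine("Saved {0} bytes", imageBytes.Length);
        }
    }
}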

    Read the article

  • Parallel LINQ - PLINQ

    - by nmarun
    Turns out now with .net 4.0 we can run a query like a multi-threaded application. Say you want to query a collection of objects and return only those that meet certain conditions. Until now, we basically had one ‘control’ that iterated over all the objects in the collection, checked the condition on each object and returned if it passed. We obviously agree that if we can ‘break’ this task into smaller ones, assign each task to a different ‘control’ and ask all the controls to do their job - in-parallel, the time taken the finish the entire task will be much lower. Welcome to PLINQ. Let’s take some examples. I have the following method that uses our good ol’ LINQ. 1: private static void Linq(int lowerLimit, int upperLimit) 2: { 3: // populate an array with int values from lowerLimit to the upperLimit 4: var source = Enumerable.Range(lowerLimit, upperLimit); 5:  6: // Start a timer 7: Stopwatch stopwatch = new Stopwatch(); 8: stopwatch.Start(); 9:  10: // set the expectation => build the expression tree 11: var evenNumbers =   from num in source 12: where IsDivisibleBy(num, 2) 13: select num; 14: 15: // iterate over and print the returned items 16: foreach (var number in evenNumbers) 17: { 18: Console.WriteLine(string.Format("** {0}", number)); 19: } 20:  21: stopwatch.Stop(); 22:  23: // check the metrics 24: Console.WriteLine(String.Format("Elapsed {0}ms", stopwatch.ElapsedMilliseconds)); 25: } I’ve added comments for the major steps, but the only thing I want to talk about here is the IsDivisibleBy() method. I know I could have just included the logic directly in the where clause. I called a method to add ‘delay’ to the execution of the query - to simulate a loooooooooong operation (will be easier to compare the results). 1: private static bool IsDivisibleBy(int number, int divisor) 2: { 3: // iterate over some database query 4: // to add time to the execution of this method; 5: // the TableB has around 10 records 6: for (int i = 0; i < 10; i++) 7: { 8: DataClasses1DataContext dataContext = new DataClasses1DataContext(); 9: var query = from b in dataContext.TableBs select b; 10: 11: foreach (var row in query) 12: { 13: // Do NOTHING (wish my job was like this) 14: } 15: } 16:  17: return number % divisor == 0; 18: } Now, let’s look at how to modify this to PLINQ. 1: private static void Plinq(int lowerLimit, int upperLimit) 2: { 3: // populate an array with int values from lowerLimit to the upperLimit 4: var source = Enumerable.Range(lowerLimit, upperLimit); 5:  6: // Start a timer 7: Stopwatch stopwatch = new Stopwatch(); 8: stopwatch.Start(); 9:  10: // set the expectation => build the expression tree 11: var evenNumbers = from num in source.AsParallel() 12: where IsDivisibleBy(num, 2) 13: select num; 14:  15: // iterate over and print the returned items 16: foreach (var number in evenNumbers) 17: { 18: Console.WriteLine(string.Format("** {0}", number)); 19: } 20:  21: stopwatch.Stop(); 22:  23: // check the metrics 24: Console.WriteLine(String.Format("Elapsed {0}ms", stopwatch.ElapsedMilliseconds)); 25: } That’s it, this is now in PLINQ format. Oh and if you haven’t found the difference, look line 11 a little more closely. You’ll see an extension method ‘AsParallel()’ added to the ‘source’ variable. Couldn’t be more simpler right? So this is going to improve the performance for us. Let’s test it. So in my Main method of the Console application that I’m working on, I make a call to both. 
1: static void Main(string[] args) 2: { 3: // set lower and upper limits 4: int lowerLimit = 1; 5: int upperLimit = 20; 6: // call the methods 7: Console.WriteLine("Calling Linq() method"); 8: Linq(lowerLimit, upperLimit); 9: 10: Console.WriteLine(); 11: Console.WriteLine("Calling Plinq() method"); 12: Plinq(lowerLimit, upperLimit); 13:  14: Console.ReadLine(); // just so I get enough time to read the output 15: } YMMV, but here are the results that I got:    It’s quite obvious from the above results that the Plinq() method is taking considerably less time than the Linq() version. I’m sure you’ve already noticed that the output of the Plinq() method is not in order. That’s because, each of the ‘control’s we sent to fetch the results, reported with values as and when they obtained them. This is something about parallel LINQ that one needs to remember – the collection cannot be guaranteed to be undisturbed. This could be counted as a negative about PLINQ (emphasize ‘could’). Nevertheless, if we want the collection to be sorted, we can use a SortedSet (.net 4.0) or build our own custom ‘sorter’. Either way we go, there’s a good chance we’ll end up with a better performance using PLINQ. And there’s another negative of PLINQ (depending on how you see it). This is regarding the CPU cycles. See the usage for Linq() method (used ResourceMonitor): I have dual CPU’s and see the height of the peak in the bottom two blocks and now compare to what happens when I run the Plinq() method. The difference is obvious. Higher usage, but for a shorter duration (width of the peak). Both these points make sense in both cases. Linq() runs for a longer time, but uses less resources whereas Plinq() runs for a shorter time and consumes more resources. Even after knowing all these, I’m still inclined towards PLINQ. PLINQ rocks! (no hard feelings LINQ)
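If the out-of-order results matter, PLINQ can be asked to preserve the source order with AsOrdered(). Here is a minimal sketch (assuming the IsDivisibleBy() helper shown earlier is in scope in the same console application); expect it to give back some of the parallel speed-up in exchange for deterministic ordering.

private static void PlinqOrdered(int lowerLimit, int upperLimit)
{
    // populate an array with int values from lowerLimit to the upperLimit
    var source = Enumerable.Range(lowerLimit, upperLimit);

    // Start a timer
    Stopwatch stopwatch = new Stopwatch();
    stopwatch.Start();

    // AsOrdered() tells PLINQ to preserve the source order in the results,
    // trading some throughput for a deterministic output order
    var evenNumbers = from num in source.AsParallel().AsOrdered()
                      where IsDivisibleBy(num, 2)
                      select num;

    // iterate over and print the returned items
    foreach (var number in evenNumbers)
    {
        Console.WriteLine(string.Format("** {0}", number));
    }

    stopwatch.Stop();

    // check the metrics
    Console.WriteLine(String.Format("Elapsed {0}ms", stopwatch.ElapsedMilliseconds));
}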

    Read the article

  • How to Use the Signature Editor in Outlook 2013

    - by Lori Kaufman
    The Signature Editor in Outlook 2013 allows you to create a custom signature from text, graphics, or business cards. We will show you how to use the various features of the Signature Editor to customize your signatures. To open the Signature Editor, click the File tab and select Options on the left side of the Account Information screen. Then, click Mail on the left side of the Options dialog box and click the Signatures button. For more details, refer to one of the articles mentioned above. Changing the font for your signature is pretty self-explanatory. Select the text for which you want to change the font and select the desired font from the drop-down list. You can also set the justification (left, center, right) for each line of text separately. The drop-down list that reads Automatic by default allows you to change the color of the selected text. Click OK to accept your changes and close the Signatures and Stationery dialog box. To see your signature in an email, click Mail on the Navigation Bar. Click New Email on the Home tab. The Message window displays and your default signature is inserted into the body of the email. NOTE: You shouldn’t use fonts that are not common in your signatures. In order for the recipient to see your signature as you intended, the font you choose also needs to be installed on the recipient’s computer. If the font is not installed, the recipient would see a different font, the wrong characters, or even placeholder characters, which are empty square boxes. Close the Message window using the File tab or the X button in the upper, right corner of the Message window. You can save it as a draft if you want, but it’s not necessary. If you decide to use a font that is not common, a better way to do so would be to create a signature as an image, or logo. Create your image or logo in an image editing program making it the exact size you want to use in your signature. Save the image in a file size as small as possible. The .jpg format works well for pictures, the .png format works well for detailed graphics, and the .gif format works well for simple graphics. The .gif format generally produces the smallest files. To insert an image in your signature, open the Signatures and Stationery dialog box again. Either delete the text currently in the editor, if any, or create a new signature. Then, click the image button on the editor’s toolbar. On the Insert Picture dialog box, navigate to the location of your image, select the file, and click Insert. If you want to insert an image from the web, you must enter the full URL for the image in the File name edit box (instead of the local image filename). For example, http://www.somedomain.com/images/signaturepic.gif. If you want to link to the image at the specified URL, you must also select Link to File from the Insert drop-down list to maintain the URL reference. The image is inserted into the Edit signature box. Click OK to accept your changes and close the Signatures and Stationery dialog box. Create a new email message again. You’ll notice the image you inserted into the signature displays in the body of the message. Close the Message window using the File tab or the X button in the upper, right corner of the Message window. You may want to put a link to a webpage or an email link in your signature. To do this, open the Signatures and Stationery dialog box again. Enter the text to display for the link, highlight the text, and click the Hyperlink button on the editor’s toolbar. 
On the Insert Hyperlink dialog box, select the type of link from the list on the left and enter the webpage, email, or other type of address in the Address edit box. You can change the text that will display in the signature for the link in the Text to display edit box. Click OK to accept your changes and close the dialog box. The link displays in the editor with the default blue, underlined text. Click OK to accept your changes and close the Signatures and Stationery dialog box. Here’s an example of an email message with a link in the signature. Close the Message window using the File tab or the X button in the upper, right corner of the Message window. You can also insert your contact information into your signature as a Business Card. To do so, click Business Card on the editor’s toolbar. On the Insert Business Card dialog box, select the contact you want to insert as a Business Card. Select a size for the Business Card image from the Size drop-down list. Click OK. The Business Card image displays in the Signature Editor. Click OK to accept your changes and close the Signatures and Stationery dialog box. When you insert a Business Card into your signature, the Business Card image displays in the body of the email message and a .vcf file containing your contact information is attached to the email. This .vcf file can be imported into programs like Outlook that support this format. Close the Message window using the File tab or the X button in the upper, right corner of the Message window. You can also insert your Business Card into your signature without the image or without the .vcf file attached. If you want to provide recipients your contact info in a .vcf file, but don’t want to attach it to every email, you can upload the .vcf file to a location on the internet and add a link to the file, such as “Get my vCard,” in your signature. NOTE: If you want to edit your business card, such as applying a different template to it, you must select a different View other than People for your Contacts folder so you can open the full contact editing window.     

    Read the article

  • Using the ASP.NET Cache to cache data in a Model or Business Object layer, without a dependency on System.Web in the layer - Part One.

    - by Rhames
    ASP.NET applications can make use of the System.Web.Caching.Cache object to cache data and prevent repeated expensive calls to a database or other store. However, ideally an application should make use of caching at the point where data is retrieved from the database, which typically is inside a Business Objects or Model layer. One of the key features of using a UI pattern such as Model-View-Presenter (MVP) or Model-View-Controller (MVC) is that the Model and Presenter (or Controller) layers are developed without any knowledge of the UI layer. Introducing a dependency on System.Web into the Model layer would break this independence of the Model from the View. This article gives a solution to this problem, using dependency injection to inject the caching implementation into the Model layer at runtime. This allows caching to be used within the Model layer, without any knowledge of the actual caching mechanism that will be used. Create a sample application to use the caching solution Create a test SQL Server database This solution uses a SQL Server database with the same Sales data used in my previous post on calculating running totals. The advantage of using this data is that it gives nice slow queries that will exaggerate the effect of using caching! To create the data, first create a new SQL database called CacheSample. Next run the following script to create the Sale table and populate it: USE CacheSample GO   CREATE TABLE Sale(DayCount smallint, Sales money) CREATE CLUSTERED INDEX ndx_DayCount ON Sale(DayCount) go INSERT Sale VALUES (1,120) INSERT Sale VALUES (2,60) INSERT Sale VALUES (3,125) INSERT Sale VALUES (4,40)   DECLARE @DayCount smallint, @Sales money SET @DayCount = 5 SET @Sales = 10   WHILE @DayCount < 5000  BEGIN  INSERT Sale VALUES (@DayCount,@Sales)  SET @DayCount = @DayCount + 1  SET @Sales = @Sales + 15  END Next create a stored procedure to calculate the running total, and return a specified number of rows from the Sale table, using the following script: USE [CacheSample] GO   SET ANSI_NULLS ON GO   SET QUOTED_IDENTIFIER ON GO   -- ============================================= -- Author:        Robin -- Create date: -- Description:   -- ============================================= CREATE PROCEDURE [dbo].[spGetRunningTotals]       -- Add the parameters for the stored procedure here       @HighestDayCount smallint = null AS BEGIN       -- SET NOCOUNT ON added to prevent extra result sets from       -- interfering with SELECT statements.       
SET NOCOUNT ON;         IF @HighestDayCount IS NULL             SELECT @HighestDayCount = MAX(DayCount) FROM dbo.Sale                   DECLARE @SaleTbl TABLE (DayCount smallint, Sales money, RunningTotal money)         DECLARE @DayCount smallint,                   @Sales money,                   @RunningTotal money         SET @RunningTotal = 0       SET @DayCount = 0         DECLARE rt_cursor CURSOR       FOR       SELECT DayCount, Sales       FROM Sale       ORDER BY DayCount         OPEN rt_cursor         FETCH NEXT FROM rt_cursor INTO @DayCount,@Sales         WHILE @@FETCH_STATUS = 0 AND @DayCount <= @HighestDayCount        BEGIN        SET @RunningTotal = @RunningTotal + @Sales        INSERT @SaleTbl VALUES (@DayCount,@Sales,@RunningTotal)        FETCH NEXT FROM rt_cursor INTO @DayCount,@Sales        END         CLOSE rt_cursor       DEALLOCATE rt_cursor         SELECT DayCount, Sales, RunningTotal       FROM @SaleTbl   END   GO   Create the Sample ASP.NET application In Visual Studio create a new solution and add a class library project called CacheSample.BusinessObjects and an ASP.NET web application called CacheSample.UI. The CacheSample.BusinessObjects project will contain a single class to represent a Sale data item, with all the code to retrieve the sales from the database included in it for simplicity (normally I would at least have a separate Repository or other object that is responsible for retrieving data, and probably a data access layer as well, but for this sample I want to keep it simple). The C# code for the Sale class is shown below: using System; using System.Collections.Generic; using System.Data; using System.Data.SqlClient;   namespace CacheSample.BusinessObjects {     public class Sale     {         public Int16 DayCount { get; set; }         public decimal Sales { get; set; }         public decimal RunningTotal { get; set; }           public static IEnumerable<Sale> GetSales(int? 
highestDayCount)         {             List<Sale> sales = new List<Sale>();               SqlParameter highestDayCountParameter = new SqlParameter("@HighestDayCount", SqlDbType.SmallInt);             if (highestDayCount.HasValue)                 highestDayCountParameter.Value = highestDayCount;             else                 highestDayCountParameter.Value = DBNull.Value;               string connectionStr = System.Configuration.ConfigurationManager .ConnectionStrings["CacheSample"].ConnectionString;               using(SqlConnection sqlConn = new SqlConnection(connectionStr))             using (SqlCommand sqlCmd = sqlConn.CreateCommand())             {                 sqlCmd.CommandText = "spGetRunningTotals";                 sqlCmd.CommandType = CommandType.StoredProcedure;                 sqlCmd.Parameters.Add(highestDayCountParameter);                   sqlConn.Open();                   using (SqlDataReader dr = sqlCmd.ExecuteReader())                 {                     while (dr.Read())                     {                         Sale newSale = new Sale();                         newSale.DayCount = dr.GetInt16(0);                         newSale.Sales = dr.GetDecimal(1);                         newSale.RunningTotal = dr.GetDecimal(2);                           sales.Add(newSale);                     }                 }             }               return sales;         }     } }   The static GetSale() method makes a call to the spGetRunningTotals stored procedure and then reads each row from the returned SqlDataReader into an instance of the Sale class, it then returns a List of the Sale objects, as IEnnumerable<Sale>. A reference to System.Configuration needs to be added to the CacheSample.BusinessObjects project so that the connection string can be read from the web.config file. In the CacheSample.UI ASP.NET project, create a single web page called ShowSales.aspx, and make this the default start up page. This page will contain a single button to call the GetSales() method and a label to display the results. 
The html mark up and the C# code behind are shown below: ShowSales.aspx <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="ShowSales.aspx.cs" Inherits="CacheSample.UI.ShowSales" %>   <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">   <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server">     <title>Cache Sample - Show All Sales</title> </head> <body>     <form id="form1" runat="server">     <div>         <asp:Button ID="btnTest1" runat="server" onclick="btnTest1_Click"             Text="Get All Sales" />         &nbsp;&nbsp;&nbsp;         <asp:Label ID="lblResults" runat="server"></asp:Label>         </div>     </form> </body> </html>   ShowSales.aspx.cs using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls;   using CacheSample.BusinessObjects;   namespace CacheSample.UI {     public partial class ShowSales : System.Web.UI.Page     {         protected void Page_Load(object sender, EventArgs e)         {         }           protected void btnTest1_Click(object sender, EventArgs e)         {             System.Diagnostics.Stopwatch stopWatch = new System.Diagnostics.Stopwatch();             stopWatch.Start();               var sales = Sale.GetSales(null);               var lastSales = sales.Last();               stopWatch.Stop();               lblResults.Text = string.Format( "Count of Sales: {0}, Last DayCount: {1}, Total Sales: {2}. Query took {3} ms", sales.Count(), lastSales.DayCount, lastSales.RunningTotal, stopWatch.ElapsedMilliseconds);         }       } }   Finally we need to add a connection string to the CacheSample SQL Server database, called CacheSample, to the web.config file: <?xmlversion="1.0"?>   <configuration>    <connectionStrings>     <addname="CacheSample"          connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;Initial Catalog=CacheSample"          providerName="System.Data.SqlClient" />  </connectionStrings>    <system.web>     <compilationdebug="true"targetFramework="4.0" />  </system.web>   </configuration>   Run the application and click the button a few times to see how long each call to the database takes. On my system, each query takes about 450ms. Next I shall look at a solution to use the ASP.NET caching to cache the data returned by the query, so that subsequent requests to the GetSales() method are much faster. Adding Data Caching Support I am going to create my caching support in a separate project called CacheSample.Caching, so the next step is to add a class library to the solution. We shall be using the application configuration to define the implementation of our caching system, so we need a reference to System.Configuration adding to the project. ICacheProvider<T> Interface The first step in adding caching to our application is to define an interface, called ICacheProvider, in the CacheSample.Caching project, with methods to retrieve any data from the cache or to retrieve the data from the data source if it is not present in the cache. Dependency Injection will then be used to inject an implementation of this interface at runtime, allowing the users of the interface (i.e. the CacheSample.BusinessObjects project) to be completely unaware of how the caching is actually implemented. 
As data of any type maybe retrieved from the data source, it makes sense to use generics in the interface, with a generic type parameter defining the data type associated with a particular instance of the cache interface implementation. The C# code for the ICacheProvider interface is shown below: using System; using System.Collections.Generic;   namespace CacheSample.Caching {     public interface ICacheProvider     {     }       public interface ICacheProvider<T> : ICacheProvider     {         T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);           IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);     } }   The empty non-generic interface will be used as a type in a Dictionary generic collection later to store instances of the ICacheProvider<T> implementation for reuse, I prefer to use a base interface when doing this, as I think the alternative of using object makes for less clear code. The ICacheProvider<T> interface defines two overloaded Fetch methods, the difference between these is that one will return a single instance of the type T and the other will return an IEnumerable<T>, providing support for easy caching of collections of data items. Both methods will take a key parameter, which will uniquely identify the cached data, a delegate of type Func<T> or Func<IEnumerable<T>> which will provide the code to retrieve the data from the store if it is not present in the cache, and absolute or relative expiry policies to define when a cached item should expire. Note that at present there is no support for cache dependencies, but I shall be showing a method of adding this in part two of this article. CacheProviderFactory Class We need a mechanism of creating instances of our ICacheProvider<T> interface, using Dependency Injection to get the implementation of the interface. To do this we shall create a CacheProviderFactory static class in the CacheSample.Caching project. This factory will provide a generic static method called GetCacheProvider<T>(), which shall return instances of ICacheProvider<T>. We can then call this factory method with the relevant data type (for example the Sale class in the CacheSample.BusinessObject project) to get a instance of ICacheProvider for that type (e.g. call CacheProviderFactory.GetCacheProvider<Sale>() to get the ICacheProvider<Sale> implementation). The C# code for the CacheProviderFactory is shown below: using System; using System.Collections.Generic;   using CacheSample.Caching.Configuration;   namespace CacheSample.Caching {     public static class CacheProviderFactory     {         private static Dictionary<Type, ICacheProvider> cacheProviders = new Dictionary<Type, ICacheProvider>();         private static object syncRoot = new object();           ///<summary>         /// Factory method to create or retrieve an implementation of the  /// ICacheProvider interface for type <typeparamref name="T"/>.         
///</summary>         ///<typeparam name="T">  /// The type that this cache provider instance will work with  ///</typeparam>         ///<returns>An instance of the implementation of ICacheProvider for type  ///<typeparamref name="T"/>, as specified by the application  /// configuration</returns>         public static ICacheProvider<T> GetCacheProvider<T>()         {             ICacheProvider<T> cacheProvider = null;             // Get the Type reference for the type parameter T             Type typeOfT = typeof(T);               // Lock the access to the cacheProviders dictionary             // so multiple threads can work with it             lock (syncRoot)             {                 // First check if an instance of the ICacheProvider implementation  // already exists in the cacheProviders dictionary for the type T                 if (cacheProviders.ContainsKey(typeOfT))                     cacheProvider = (ICacheProvider<T>)cacheProviders[typeOfT];                 else                 {                     // There is not already an instance of the ICacheProvider in       // cacheProviders for the type T                     // so we need to create one                       // Get the Type reference for the application's implementation of       // ICacheProvider from the configuration                     Type cacheProviderType = Type.GetType(CacheProviderConfigurationSection.Current. CacheProviderType);                     if (cacheProviderType != null)                     {                         // Now get a Type reference for the Cache Provider with the                         // type T generic parameter                         Type typeOfCacheProviderTypeForT = cacheProviderType.MakeGenericType(new Type[] { typeOfT });                         if (typeOfCacheProviderTypeForT != null)                         {                             // Create the instance of the Cache Provider and add it to // the cacheProviders dictionary for future use                             cacheProvider = (ICacheProvider<T>)Activator. CreateInstance(typeOfCacheProviderTypeForT);                             cacheProviders.Add(typeOfT, cacheProvider);                         }                     }                 }             }               return cacheProvider;                 }     } }   As this code uses Activator.CreateInstance() to create instances of the ICacheProvider<T> implementation, which is a slow process, the factory class maintains a Dictionary of the previously created instances so that a cache provider needs to be created only once for each type. The type of the implementation of ICacheProvider<T> is read from a custom configuration section in the application configuration file, via the CacheProviderConfigurationSection class, which is described below. CacheProviderConfigurationSection Class The implementation of ICacheProvider<T> will be specified in a custom configuration section in the application’s configuration. To handle this create a folder in the CacheSample.Caching project called Configuration, and add a class called CacheProviderConfigurationSection to this folder. This class will extend the System.Configuration.ConfigurationSection class, and will contain a single string property called CacheProviderType. 
The C# code for this class is shown below: using System; using System.Configuration;   namespace CacheSample.Caching.Configuration {     internal class CacheProviderConfigurationSection : ConfigurationSection     {         public static CacheProviderConfigurationSection Current         {             get             {                 return (CacheProviderConfigurationSection) ConfigurationManager.GetSection("cacheProvider");             }         }           [ConfigurationProperty("type", IsRequired=true)]         public string CacheProviderType         {             get             {                 return (string)this["type"];             }         }     } }   Adding Data Caching to the Sales Class We now have enough code in place to add caching to the GetSales() method in the CacheSample.BusinessObjects.Sale class, even though we do not yet have an implementation of the ICacheProvider<T> interface. We need to add a reference to the CacheSample.Caching project to CacheSample.BusinessObjects so that we can use the ICacheProvider<T> interface within the GetSales() method. Once the reference is added, we can first create a unique string key based on the method name and the parameter value, so that the same cache key is used for repeated calls to the method with the same parameter values. Then we get an instance of the cache provider for the Sales type, using the CacheProviderFactory, and pass the existing code to retrieve the data from the database as the retrievalMethod delegate in a call to the Cache Provider Fetch() method. The C# code for the modified GetSales() method is shown below: public static IEnumerable<Sale> GetSales(int? highestDayCount) {     string cacheKey = string.Format("CacheSample.BusinessObjects.GetSalesWithCache({0})", highestDayCount);       return CacheSample.Caching.CacheProviderFactory. GetCacheProvider<Sale>().Fetch(cacheKey,         delegate()         {             List<Sale> sales = new List<Sale>();               SqlParameter highestDayCountParameter = new SqlParameter("@HighestDayCount", SqlDbType.SmallInt);             if (highestDayCount.HasValue)                 highestDayCountParameter.Value = highestDayCount;             else                 highestDayCountParameter.Value = DBNull.Value;               string connectionStr = System.Configuration.ConfigurationManager. ConnectionStrings["CacheSample"].ConnectionString;               using (SqlConnection sqlConn = new SqlConnection(connectionStr))             using (SqlCommand sqlCmd = sqlConn.CreateCommand())             {                 sqlCmd.CommandText = "spGetRunningTotals";                 sqlCmd.CommandType = CommandType.StoredProcedure;                 sqlCmd.Parameters.Add(highestDayCountParameter);                   sqlConn.Open();                   using (SqlDataReader dr = sqlCmd.ExecuteReader())                 {                     while (dr.Read())                     {                         Sale newSale = new Sale();                         newSale.DayCount = dr.GetInt16(0);                         newSale.Sales = dr.GetDecimal(1);                         newSale.RunningTotal = dr.GetDecimal(2);                           sales.Add(newSale);                     }                 }             }               return sales;         },         null,         new TimeSpan(0, 10, 0)); }     This example passes the code to retrieve the Sales data from the database to the Cache Provider as an anonymous method, however it could also be written as a lambda. 
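For completeness, here is a minimal sketch of the lambda form; LoadSalesFromDatabase() is a hypothetical helper standing in for the ADO.NET code shown above and is not part of the sample project.

public static IEnumerable<Sale> GetSales(int? highestDayCount)
{
    string cacheKey = string.Format("CacheSample.BusinessObjects.GetSalesWithCache({0})", highestDayCount);

    // Same Fetch call as above, but with the retrieval code expressed as a lambda.
    // LoadSalesFromDatabase is a hypothetical helper wrapping the ADO.NET code.
    return CacheSample.Caching.CacheProviderFactory.GetCacheProvider<Sale>().Fetch(
        cacheKey,
        () => LoadSalesFromDatabase(highestDayCount),
        null,
        new TimeSpan(0, 10, 0));
}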
The main advantage of using an anonymous function (method or lambda) is that the code inside the anonymous function can access the parameters passed to the GetSales() method. Finally, the absolute expiry is set to null and the relative expiry to 10 minutes, to indicate that the cache entry should be removed 10 minutes after the last request for the data. As ICacheProvider<T> has a Fetch() method that returns IEnumerable<T>, we can simply return the results of the Fetch() method to the caller of the GetSales() method. This should be all that is needed for the GetSales() method to retrieve data from the cache after the first time the data has been retrieved from the database.

Implementing an ASP.NET Cache Provider

The final step is to actually implement the ICacheProvider<T> interface, and add the implementation details to the web.config file for the dependency injection. The cache provider implementation needs to have access to System.Web, so it could be placed in the CacheSample.UI project, or in its own project that has a reference to System.Web. Implementing the cache provider in a separate project is my favoured approach. Create a new project inside the solution called CacheSample.CacheProvider, and add references to System.Web and CacheSample.Caching to this project. Add a class to the project called AspNetCacheProvider. Make the class generic by adding the generic parameter <T> and indicate that the class implements ICacheProvider<T>. The C# code for the AspNetCacheProvider class is shown below:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Caching;

using CacheSample.Caching;

namespace CacheSample.CacheProvider
{
    public class AspNetCacheProvider<T> : ICacheProvider<T>
    {
        #region ICacheProvider<T> Members

        public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry);
        }

        public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry);
        }

        #endregion

        #region Helper Methods

        private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            U value;
            if (!TryGetValue<U>(key, out value))
            {
                value = retrieveData();

                if (!absoluteExpiry.HasValue)
                    absoluteExpiry = Cache.NoAbsoluteExpiration;

                if (!relativeExpiry.HasValue)
                    relativeExpiry = Cache.NoSlidingExpiration;

                HttpContext.Current.Cache.Insert(key, value, null, absoluteExpiry.Value, relativeExpiry.Value);
            }
            return value;
        }

        private bool TryGetValue<U>(string key, out U value)
        {
            object cachedValue = HttpContext.Current.Cache.Get(key);
            if (cachedValue == null)
            {
                value = default(U);
                return false;
            }
            else
            {
                try
                {
                    value = (U)cachedValue;
                    return true;
                }
                catch
                {
                    value = default(U);
                    return false;
                }
            }
        }

        #endregion
    }
}

The two interface Fetch() methods call a private method called FetchAndCache(). This method first checks for an element in HttpContext.Current.Cache with the specified cache key and, if one is found, tries to cast it to the specified type (either T or IEnumerable<T>). If the cached element is found (and can be cast), FetchAndCache() simply returns it. If it is not found in the cache, the method calls the retrievalMethod delegate to get the data from the data source, and then adds the result to HttpContext.Current.Cache.

The final step is to add the AspNetCacheProvider class to the relevant custom configuration section in the CacheSample.UI web.config file. To do this, a <configSections> element needs to be added as the first element in <configuration>. This maps a custom section called <cacheProvider> to the CacheProviderConfigurationSection. Then we add a <cacheProvider> element, with a type attribute set to the fully qualified assembly name of the AspNetCacheProvider class, as shown below:

<?xml version="1.0"?>

<configuration>
  <configSections>
    <section name="cacheProvider"
             type="CacheSample.Caching.Configuration.CacheProviderConfigurationSection, CacheSample.Caching" />
  </configSections>

  <connectionStrings>
    <add name="CacheSample"
         connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;Initial Catalog=CacheSample"
         providerName="System.Data.SqlClient" />
  </connectionStrings>

  <cacheProvider type="CacheSample.CacheProvider.AspNetCacheProvider`1, CacheSample.CacheProvider, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">
  </cacheProvider>

  <system.web>
    <compilation debug="true" targetFramework="4.0" />
  </system.web>

</configuration>

One point to note is that the fully qualified assembly name of the AspNetCacheProvider class includes the notation `1 after the class name, which indicates that it is a generic class with a single generic type parameter. The CacheSample.UI project needs to have references added to CacheSample.Caching and CacheSample.CacheProvider so that the actual application is aware of the relevant cache provider implementation.

Conclusion

After implementing this solution, you should have a working cache provider mechanism that allows the middle and data access layers to add caching support when retrieving data, without any knowledge of the actual caching implementation.
If the UI is not ASP.NET based (for example, WinForms or WPF), the implementation of ICacheProvider<T> would be written around whatever caching technology is available. It could even be a standalone caching system that takes full responsibility for adding and removing items from a global store. The next part of this article will show how this caching mechanism may be extended to provide support for cache dependencies, such as System.Web.Caching.SqlCacheDependency. Another possible extension would be to cache the cache provider implementations themselves instead of storing them in a static Dictionary in the CacheProviderFactory. This would prevent a build-up of seldom-used cache providers in application memory, as they could be removed from the cache when not used often enough. In reality, though, there are unlikely to be vast numbers of cache provider instances, as most applications do not have a huge number of business object or model types.
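The CacheProviderFactory referred to above is not shown in this excerpt, so the following is only a rough sketch of how such a factory might combine the CacheProviderConfigurationSection with a static dictionary of providers; the member names and locking strategy are assumptions rather than the article's actual implementation:

using System;
using System.Collections.Generic;

namespace CacheSample.Caching
{
    public static class CacheProviderFactory
    {
        // Hypothetical cache of constructed providers, keyed by the item type.
        private static readonly Dictionary<Type, object> providers = new Dictionary<Type, object>();
        private static readonly object syncRoot = new object();

        public static ICacheProvider<T> GetCacheProvider<T>()
        {
            lock (syncRoot)
            {
                object provider;
                if (!providers.TryGetValue(typeof(T), out provider))
                {
                    // Read the provider type name from the <cacheProvider> config section,
                    // close its generic parameter over T, and instantiate it.
                    string typeName = Configuration.CacheProviderConfigurationSection.Current.CacheProviderType;
                    Type openType = Type.GetType(typeName, true);
                    Type closedType = openType.MakeGenericType(typeof(T));
                    provider = Activator.CreateInstance(closedType);
                    providers[typeof(T)] = provider;
                }
                return (ICacheProvider<T>)provider;
            }
        }
    }
}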

    Read the article

  • SOA Suite Integration: Part 3: Loading files

    - by Anthony Shorten
    One of the most common scenarios in SOA Integration is the loading of a file into the product from an external source. In Oracle SOA Suite there is a File Adapter that can process many file types into your BPEL process. For this example I will use the File Adapter to load a file of user and emails to update the user object within the Oracle Utilities Application Framework. Remember you can repeat this process with other objects and other file types. Again I am illustrating the ease of integration. The first thing is to create an empty BPEL process that will hold our flow. In Oracle JDeveloper this can be achieved by specifying the Define Service Later template (as other templates have predefined inputs and outputs and in this case we want to specify those). So I will create simpleFileLoad process to house our process. You will start with an empty canvas so you need to first specify the load part of the process using the File Adapter. Select the File Adapter from the Component Palette under BPEL Services and drag and drop it to the left side Partner Links (left is input). You name the Service. In this case I chose LoadFile. Press Next. We will define the interface as part of the wizard so select Define from operation and schema (specified later). Press Next. We are going to choose Read File to denote that we will read the file and specify the default Operation Name as Read. Press Next. The next step is to tell the Adapter the location of the files, how to process them and what to do with them after they have been processed. I am using hardcoded locations in this example but you can have logical locations as well. Press Next. I am now going to tell the adapter how to recognize the files I want to load. In my case I am using CSV files and more importantly I am tell the adapter to run the process for each record in the file it encounters. Press Next. Now, I tell the adapter how often I want to poll for the files. I have taken the defaults. Press Next. At this stage I have no explanation of the format of the input. So I am going to invoke the Native Format Wizard which will guide me through the process of creating the file input format. Clicking the purple cog icon will start the wizard. After an introduction screen (not shown), you specify the format of the input file. The File Adapter supports multiple format types. For this example, I will use Delimited as I am going to load a CSV file. Press Next. The best way for the wizard to work is with a sample. I have a sample file and the wizard will ask how much of the file to use as a template. I will use the defaults. Note: If you are using a language that has other languages other than US-ASCII, it is at this point you specify the character set to use.  Press Next. The sample contains multiple instances of a single record type. The wizard supports complex types as well. We will use the appropriate setting for our file. Press Next. You have to specify the file element and the record element. This will be used by the input wizard to translate the CSV data into an XML structure (this will make sense later). I am using LoadUsers as my file delimiter (root element) and User Record as my record root element. Press Next. As the file is CSV the delimiter is "," so I will also specify that the End Of Line (EOL) indicator indicates the end of a record. Press Next. Up until this point your have not given the columns their names. In my case my sample includes the column names in the first record. 
This is not always the case but you can specify the names and formats of columns in this dialog (not shown). Press Next. The wizard now generates the schema for the input file. You can specify a name for the schema. I have used userupdate.xsd. We want to verify the schema so press Test. You can test the schema by specifying an input sample. and pressing the green play button. You will see the delimiters you specified earlier for the file and the records. Press Ok to continue. A confirmation screen will be displayed showing you the location of the schema in your project. Press Finish to return to the File Adapter configuration. You will now see the schema and elements prepopulated from the wizard. Press Next. The File Adapter configuration is now complete. Press Finish. Now you need to receive the input from the LoadFile component so we need to place a Receive node in the BPEL process by drag and dropping the Receive component from the Component Palette under BPEL Constructs onto the BPEL process. We link the receive process with the LoadFile component by dragging the left most connect node of the Receive node to the LoadFile component. Once the link is established you need to name the Receive node appropriately and as in the post of the last part of this series you need to generate input variables for the BPEL process to hold the input records in. You need to now add the product Web Service. The process is the same as described in the post of the last part of this series. You drop the Web Service BPEL Service onto the right side of the process and fill in the details of the WSDL URL . You also have to add an Invoke node to call the service and generate the input and outputs variables for the call in the Invoke node. Now, to get the inputs from File to the service. You have to use a Transform (you can use an Assign action but a Transform action is more flexible). You drag and drop the Transform component from the Component Palette under Oracle Extensions and place it between the Receive and Invoke nodes. We name the Transform Node, Mapper File and associate the source of the mapping the schema from the Receive node and the output will be the input variable from the Invoke node. We now build the transform. We first map the user and email attributes by drag and drop the elements from the left to the right. The reason we needed to use the transform is that we will be telling the AS-User service that we want to issue an update action. Remember when we registered the service we actually used Read as the default. If we do not otherwise inform the service to use the Update action it will use the Read action instead (which is not desired). To specify the update action you need to click on the transactionType node on the right and select Set Text to set the action. You need to specify the transactionType of UPD (for update). The mapping is now complete. The final BPEL process is ready for deployment. You then deploy the BPEL process to the server and to test the service by simply dropping a file, in the same pattern/name as you specified, in the directory you specified in the File Adapter. You will see each record as a separate instance entry in the Fusion Middleware Control console. You can now load files into the product. You can repeat this process for each type of file to process. While this was a simple example it illustrates the method of loading data can be achieved using SOA Suite in conjunction with our products.
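As a rough illustration of what the native format translation described above produces (the element names follow the ones chosen in the wizard, but the exact XML generated by the File Adapter, including its namespaces, may differ), a CSV file such as:

user,email
FRED01,fred@example.com
MARY02,mary@example.com

would be presented to the BPEL process as XML along the lines of:

<LoadUsers>
  <UserRecord>
    <user>FRED01</user>
    <email>fred@example.com</email>
  </UserRecord>
  <UserRecord>
    <user>MARY02</user>
    <email>mary@example.com</email>
  </UserRecord>
</LoadUsers>

Because the adapter was configured to publish each record separately, each UserRecord would typically arrive as its own BPEL instance.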

    Read the article

  • Working with PivotTables in Excel

    - by Mark Virtue
    PivotTables are one of the most powerful features of Microsoft Excel.  They allow large amounts of data to be analyzed and summarized in just a few mouse clicks. In this article, we explore PivotTables, understand what they are, and learn how to create and customize them. Note:  This article is written using Excel 2010 (Beta).  The concept of a PivotTable has changed little over the years, but the method of creating one has changed in nearly every iteration of Excel.  If you are using a version of Excel that is not 2010, expect different screens from the ones you see in this article. A Little History In the early days of spreadsheet programs, Lotus 1-2-3 ruled the roost.  Its dominance was so complete that people thought it was a waste of time for Microsoft to bother developing their own spreadsheet software (Excel) to compete with Lotus.  Flash-forward to 2010, and Excel’s dominance of the spreadsheet market is greater than Lotus’s ever was, while the number of users still running Lotus 1-2-3 is approaching zero.  How did this happen?  What caused such a dramatic reversal of fortunes? Industry analysts put it down to two factors:  Firstly, Lotus decided that this fancy new GUI platform called “Windows” was a passing fad that would never take off.  They declined to create a Windows version of Lotus 1-2-3 (for a few years, anyway), predicting that their DOS version of the software was all anyone would ever need.  Microsoft, naturally, developed Excel exclusively for Windows.  Secondly, Microsoft developed a feature for Excel that Lotus didn’t provide in 1-2-3, namely PivotTables.  The PivotTables feature, exclusive to Excel, was deemed so staggeringly useful that people were willing to learn an entire new software package (Excel) rather than stick with a program (1-2-3) that didn’t have it.  This one feature, along with the misjudgment of the success of Windows, was the death-knell for Lotus 1-2-3, and the beginning of the success of Microsoft Excel. Understanding PivotTables So what is a PivotTable, exactly? Put simply, a PivotTable is a summary of some data, created to allow easy analysis of said data.  But unlike a manually created summary, Excel PivotTables are interactive.  Once you have created one, you can easily change it if it doesn’t offer the exact insights into your data that you were hoping for.  In a couple of clicks the summary can be “pivoted” – rotated in such a way that the column headings become row headings, and vice versa.  There’s a lot more that can be done, too.  Rather than try to describe all the features of PivotTables, we’ll simply demonstrate them… The data that you analyze using a PivotTable can’t be just any data – it has to be raw data, previously unprocessed (unsummarized) – typically a list of some sort.  An example of this might be the list of sales transactions in a company for the past six months. Examine the data shown below: Notice that this is not raw data.  In fact, it is already a summary of some sort.  In cell B3 we can see $30,000, which apparently is the total of James Cook’s sales for the month of January.  So where is the raw data?  How did we arrive at the figure of $30,000?  Where is the original list of sales transactions that this figure was generated from?  It’s clear that somewhere, someone must have gone to the trouble of collating all of the sales transactions for the past six months into the summary we see above.  How long do you suppose this took?  An hour?  Ten?  Probably. 
If we were to track down the original list of sales transactions, it might look something like this: You may be surprised to learn that, using the PivotTable feature of Excel, we can create a monthly sales summary similar to the one above in a few seconds, with only a few mouse clicks.  We can do this – and a lot more too! How to Create a PivotTable First, ensure that you have some raw data in a worksheet in Excel.  A list of financial transactions is typical, but it can be a list of just about anything:  Employee contact details, your CD collection, or fuel consumption figures for your company’s fleet of cars. So we start Excel… …and we load such a list… Once we have the list open in Excel, we’re ready to start creating the PivotTable. Click on any one single cell within the list: Then, from the Insert tab, click the PivotTable icon: The Create PivotTable box appears, asking you two questions:  What data should your new PivotTable be based on, and where should it be created?  Because we already clicked on a cell within the list (in the step above), the entire list surrounding that cell is already selected for us ($A$1:$G$88 on the Payments sheet, in this example).  Note that we could select a list in any other region of any other worksheet, or even some external data source, such as an Access database table, or even a MS-SQL Server database table.  We also need to select whether we want our new PivotTable to be created on a new worksheet, or on an existing one.  In this example we will select a new one: The new worksheet is created for us, and a blank PivotTable is created on that worksheet: Another box also appears:  The PivotTable Field List.  This field list will be shown whenever we click on any cell within the PivotTable (above): The list of fields in the top part of the box is actually the collection of column headings from the original raw data worksheet.  The four blank boxes in the lower part of the screen allow us to choose the way we would like our PivotTable to summarize the raw data.  So far, there is nothing in those boxes, so the PivotTable is blank.  All we need to do is drag fields down from the list above and drop them in the lower boxes.  A PivotTable is then automatically created to match our instructions.  If we get it wrong, we only need to drag the fields back to where they came from and/or drag new fields down to replace them. The Values box is arguably the most important of the four.  The field that is dragged into this box represents the data that needs to be summarized in some way (by summing, averaging, finding the maximum, minimum, etc).  It is almost always numerical data.  A perfect candidate for this box in our sample data is the “Amount” field/column.  Let’s drag that field into the Values box: Notice that (a) the “Amount” field in the list of fields is now ticked, and “Sum of Amount” has been added to the Values box, indicating that the amount column has been summed. If we examine the PivotTable itself, we indeed find the sum of all the “Amount” values from the raw data worksheet: We’ve created our first PivotTable!  Handy, but not particularly impressive.  It’s likely that we need a little more insight into our data than that. Referring to our sample data, we need to identify one or more column headings that we could conceivably use to split this total.  For example, we may decide that we would like to see a summary of our data where we have a row heading for each of the different salespersons in our company, and a total for each.  
To achieve this, all we need to do is to drag the “Salesperson” field into the Row Labels box: Now, finally, things start to get interesting!  Our PivotTable starts to take shape….   With a couple of clicks we have created a table that would have taken a long time to do manually. So what else can we do?  Well, in one sense our PivotTable is complete.  We’ve created a useful summary of our source data.  The important stuff is already learned!  For the rest of the article, we will examine some ways that more complex PivotTables can be created, and ways that those PivotTables can be customized. First, we can create a two-dimensional table.  Let’s do that by using “Payment Method” as a column heading.  Simply drag the “Payment Method” heading to the Column Labels box: Which looks like this: Starting to get very cool! Let’s make it a three-dimensional table.  What could such a table possibly look like?  Well, let’s see… Drag the “Package” column/heading to the Report Filter box: Notice where it ends up…. This allows us to filter our report based on which “holiday package” was being purchased.  For example, we can see the breakdown of salesperson vs payment method for all packages, or, with a couple of clicks, change it to show the same breakdown for the “Sunseekers” package: And so, if you think about it the right way, our PivotTable is now three-dimensional.  Let’s keep customizing… If it turns out, say, that we only want to see cheque and credit card transactions (i.e. no cash transactions), then we can deselect the “Cash” item from the column headings.  Click the drop-down arrow next to Column Labels, and untick “Cash”: Let’s see what that looks like…As you can see, “Cash” is gone. Formatting This is obviously a very powerful system, but so far the results look very plain and boring.  For a start, the numbers that we’re summing do not look like dollar amounts – just plain old numbers.  Let’s rectify that. A temptation might be to do what we’re used to doing in such circumstances and simply select the whole table (or the whole worksheet) and use the standard number formatting buttons on the toolbar to complete the formatting.  The problem with that approach is that if you ever change the structure of the PivotTable in the future (which is 99% likely), then those number formats will be lost.  We need a way that will make them (semi-)permanent. First, we locate the “Sum of Amount” entry in the Values box, and click on it.  A menu appears.  We select Value Field Settings… from the menu: The Value Field Settings box appears. Click the Number Format button, and the standard Format Cells box appears: From the Category list, select (say) Accounting, and drop the number of decimal places to 0.  Click OK a few times to get back to the PivotTable… As you can see, the numbers have been correctly formatted as dollar amounts. While we’re on the subject of formatting, let’s format the entire PivotTable.  There are a few ways to do this.  Let’s use a simple one… Click the PivotTable Tools/Design tab: Then drop down the arrow in the bottom-right of the PivotTable Styles list to see a vast collection of built-in styles: Choose any one that appeals, and look at the result in your PivotTable:   Other Options We can work with dates as well.  Now usually, there are many, many dates in a transaction list such as the one we started with.  But Excel provides the option to group data items together by day, week, month, year, etc.  Let’s see how this is done. 
First, let’s remove the “Payment Method” column from the Column Labels box (simply drag it back up to the field list), and replace it with the “Date Booked” column: As you can see, this makes our PivotTable instantly useless, giving us one column for each date that a transaction occurred on – a very wide table! To fix this, right-click on any date and select Group… from the context-menu: The grouping box appears.  We select Months and click OK: Voila!  A much more useful table: (Incidentally, this table is virtually identical to the one shown at the beginning of this article – the original sales summary that was created manually.) Another cool thing to be aware of is that you can have more than one set of row headings (or column headings): …which looks like this…. You can do a similar thing with column headings (or even report filters). Keeping things simple again, let’s see how to plot averaged values, rather than summed values. First, click on “Sum of Amount”, and select Value Field Settings… from the context-menu that appears: In the Summarize value field by list in the Value Field Settings box, select Average: While we’re here, let’s change the Custom Name, from “Average of Amount” to something a little more concise.  Type in something like “Avg”: Click OK, and see what it looks like.  Notice that all the values change from summed totals to averages, and the table title (top-left cell) has changed to “Avg”: If we like, we can even have sums, averages and counts (counts = how many sales there were) all on the same PivotTable! Here are the steps to get something like that in place (starting from a blank PivotTable): Drag “Salesperson” into the Column Labels Drag “Amount” field down into the Values box three times For the first “Amount” field, change its custom name to “Total” and it’s number format to Accounting (0 decimal places) For the second “Amount” field, change its custom name to “Average”, its function to Average and it’s number format to Accounting (0 decimal places) For the third “Amount” field, change its name to “Count” and its function to Count Drag the automatically created field from Column Labels to Row Labels Here’s what we end up with: Total, average and count on the same PivotTable! Conclusion There are many, many more features and options for PivotTables created by Microsoft Excel – far too many to list in an article like this.  To fully cover the potential of PivotTables, a small book (or a large website) would be required.  Brave and/or geeky readers can explore PivotTables further quite easily:  Simply right-click on just about everything, and see what options become available to you.  There are also the two ribbon-tabs: PivotTable Tools/Options and Design.  It doesn’t matter if you make a mistake – it’s easy to delete the PivotTable and start again – a possibility old DOS users of Lotus 1-2-3 never had. We’ve included an Excel that should work with most versions of Excel, so you can download to practice your PivotTable skills. 
Download Our Practice Excel File

    Read the article

  • Log message Request and Response in ASP.NET WebAPI

    - by Fredrik N
Logging both incoming and outgoing messages for services can be useful in many scenarios, such as debugging, tracing, inspection, and helping customers with request problems. I have a customer that needs both incoming and outgoing messages to be logged. They use the information to spot strange behaviour and also to help customers when they call in for help (by looking in the log they can see if a customer is sending in data in a wrong or strange way).

Concerns

Logging in an application is a cross-cutting concern and should not be a core concern for developers. Logging messages like this:

// GET api/values/5
public string Get(int id)
{
    //Cross-cutting concerns
    Log(string.Format("Request: GET api/values/{0}", id));

    //Core-concern
    var response = DoSomething();

    //Cross-cutting concerns
    Log(string.Format("Response: GET api/values/{0}\r\n{1}", id, response));

    return response;
}

will only result in duplicated code and unnecessary concerns for developers to be aware of; if they forget to add the logging code, no logging will take place. Developers should focus on the core concern, not the cross-cutting concerns. By focusing only on the core concern, the above code will look like this:

// GET api/values/5
public string Get(int id)
{
    return DoSomething();
}

The logging should then be placed somewhere else, so that developers don't need to care about the cross-cutting concern.

Using a Message Handler for logging

There are different places we could put the cross-cutting concern of logging messages when using WebAPI. We can, for example, create a custom ApiController and override the ApiController's ExecuteAsync method, or add an ActionFilter, or use a Message Handler. The disadvantage of a custom ApiController is that we need to make sure every controller inherits from it; the disadvantage of an ActionFilter is that we need to add the filter to the controllers. Both approaches modify our ApiControllers. By using a Message Handler we don't need to make any changes to our ApiControllers, so the most suitable place to add our logging is a custom Message Handler. A Message Handler runs before the HttpControllerDispatcher (the part of the WebAPI pipeline that makes sure the right controller is selected and called). Note: you can read more about message handlers here; it will give you a good understanding of the WebAPI pipeline.

To create a Message Handler we can inherit from the DelegatingHandler class and override the SendAsync method:

public class MessageHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        return await base.SendAsync(request, cancellationToken);
    }
}

If we skip the call to base.SendAsync, our ApiController's methods will never be invoked, nor will any subsequent Message Handlers. Everything placed before base.SendAsync is executed before the HttpControllerDispatcher (before WebAPI works out which controller and method to invoke for the request), and everything after base.SendAsync is executed after our ApiController method has returned a response. So a Message Handler is a perfect place to add cross-cutting concerns such as logging.

To get at the content of the incoming message within a Message Handler we can use the request argument of the SendAsync method. The request argument is of type HttpRequestMessage and has a Content property of type HttpContent. HttpContent has several methods that can be used to read the incoming message, such as ReadAsStreamAsync, ReadAsByteArrayAsync and ReadAsStringAsync.

Something to be aware of is what happens when we read from the HttpContent. When we read from the HttpContent we read from a stream, and once it has been read it cannot be read again. So if we read from the stream before base.SendAsync, the following Message Handlers and the HttpControllerDispatcher can't read from the stream because it has already been read, and our ApiController methods will never be invoked. The only way to make sure we can do repeatable reads from the HttpContent is to copy the content into a buffer and then read from that buffer. This can be done by using the HttpContent's LoadIntoBufferAsync method. If we call the LoadIntoBufferAsync method before base.SendAsync, the incoming stream is read into a byte array, and subsequent HttpContent read operations read from that buffer, if it exists, instead of directly from the stream. There is one method on HttpContent that internally calls LoadIntoBufferAsync for us: ReadAsByteArrayAsync. This is the method we will use to read the incoming and outgoing messages.

public abstract class MessageHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var requestMessage = await request.Content.ReadAsByteArrayAsync();

        var response = await base.SendAsync(request, cancellationToken);

        var responseMessage = await response.Content.ReadAsByteArrayAsync();

        return response;
    }
}

The above code reads the content of the incoming message, then calls SendAsync, and after that reads the content of the response message. The following code adds more logic, such as creating a correlation id to tie the request to the response, and creating the log entries:

public abstract class MessageHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var corrId = string.Format("{0}{1}", DateTime.Now.Ticks, Thread.CurrentThread.ManagedThreadId);
        var requestInfo = string.Format("{0} {1}", request.Method, request.RequestUri);

        var requestMessage = await request.Content.ReadAsByteArrayAsync();

        await IncommingMessageAsync(corrId, requestInfo, requestMessage);

        var response = await base.SendAsync(request, cancellationToken);

        var responseMessage = await response.Content.ReadAsByteArrayAsync();

        await OutgoingMessageAsync(corrId, requestInfo, responseMessage);

        return response;
    }

    protected abstract Task IncommingMessageAsync(string correlationId, string requestInfo, byte[] message);
    protected abstract Task OutgoingMessageAsync(string correlationId, string requestInfo, byte[] message);
}

public class MessageLoggingHandler : MessageHandler
{
    protected override async Task IncommingMessageAsync(string correlationId, string requestInfo, byte[] message)
    {
        await Task.Run(() =>
            Debug.WriteLine(string.Format("{0} - Request: {1}\r\n{2}", correlationId, requestInfo, Encoding.UTF8.GetString(message))));
    }

    protected override async Task OutgoingMessageAsync(string correlationId, string requestInfo, byte[] message)
    {
        await Task.Run(() =>
            Debug.WriteLine(string.Format("{0} - Response: {1}\r\n{2}", correlationId, requestInfo, Encoding.UTF8.GetString(message))));
    }
}

The code above shows the following in the Visual Studio output window when the "api/values" service (a standard controller added by the default WebAPI template) is requested with an HTTP GET:

6347483479959544375 - Request: GET http://localhost:3208/api/values

6347483479959544375 - Response: GET http://localhost:3208/api/values
["value1","value2"]

Register a Message Handler

To register a Message Handler we can use the Add method of GlobalConfiguration.Configuration.MessageHandlers, for example in Global.asax:

public class WebApiApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        GlobalConfiguration.Configuration.MessageHandlers.Add(new MessageLoggingHandler());
        ...
    }
}

Summary

By using a Message Handler we can easily remove cross-cutting concerns like logging from our controllers. You can also find the source code used in this blog post on ForkCan.com; feel free to make a fork or add comments, for example to improve the code. Feel free to follow me on Twitter @fredrikn if you want to know when I write other blog posts.
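As a usage example of the abstract MessageHandler shown above, a variant that appends to a log file instead of writing to the debug output might look like the sketch below. It assumes the same using directives as the samples above plus System.IO; the log file path is an assumption, and a real implementation would need proper error handling, synchronisation and log rotation:

public class FileMessageLoggingHandler : MessageHandler
{
    // Hypothetical log file location; adjust to a path the worker process can write to.
    private static readonly string logPath = @"C:\Logs\webapi-messages.log";

    protected override async Task IncommingMessageAsync(string correlationId, string requestInfo, byte[] message)
    {
        await Task.Run(() =>
            File.AppendAllText(logPath,
                string.Format("{0} - Request: {1}\r\n{2}\r\n", correlationId, requestInfo, Encoding.UTF8.GetString(message))));
    }

    protected override async Task OutgoingMessageAsync(string correlationId, string requestInfo, byte[] message)
    {
        await Task.Run(() =>
            File.AppendAllText(logPath,
                string.Format("{0} - Response: {1}\r\n{2}\r\n", correlationId, requestInfo, Encoding.UTF8.GetString(message))));
    }
}

It would be registered in exactly the same way as MessageLoggingHandler, via GlobalConfiguration.Configuration.MessageHandlers.Add.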

    Read the article

  • Using Deployment Manager

    - by Jess Nickson
    One of the teams at Red Gate has been working very hard on a new product: Deployment Manager. Deployment Manager is a free tool that lets you deploy updates to .NET apps, services and databases through a central dashboard. Deployment Manager has been out for a while, but I must admit that even though I work in the same building, until now I hadn’t even looked at it. My job at Red Gate is to develop and maintain some of our community sites, which involves carrying out regular deployments. One of the projects I have to deploy on a fairly regular basis requires me to send my changes to our build server, TeamCity. The output is a Zip file of the build. I then have to go and find this file, copy it across to the staging machine, extract it, and copy some of the sub-folders to other places. In order to keep track of what builds are running, I need to rename the folders accordingly. However, even after all that, I still need to go and update the site and its applications in IIS to point at these new builds. Oh, and then, I have to repeat the process when I deploy on production. Did I mention the multiple configuration files that then need updating as well? Manually? The whole process can take well over half an hour. I’m ready to try out a new process. Deployment Manager is designed to massively simplify the deployment processes from what could be lots of manual copying of files, managing of configuration files, and database upgrades down to a few clicks. It’s a big promise, but I decided to try out this new tool on one of the smaller ASP.NET sites at Red Gate, Format SQL (the result of a Red Gate Down Tools week). I wanted to add some new functionality, but given it was a new site with no set way of doing things, I was reluctant to have to manually copy files around servers. I decided to use this opportunity as a chance to set the site up on Deployment Manager and check out its functionality. What follows is a guide on how to get set up with Deployment Manager, a brief overview of its features, and what I thought of the experience. To follow along with the instructions that follow, you’ll first need to download Deployment Manager from Red Gate. It has a free ‘Starter Edition’ which allows you to create up to 5 projects and agents (machines you deploy to), so it’s really easy to get up and running with a fully-featured version. The Initial Set Up After installing the product and setting it up using the administration tool it provides, I launched Deployment Manager by going to the URL and port I had set it to run on. This loads up the main dashboard. The dashboard does a good job of guiding me through the process of getting started, beginning with a prompt to create some environments. 1. Setting up Environments The dashboard informed me that I needed to add new ‘Environments’, which are essentially ways of grouping the machines you want to deploy to. The environments that get added will show up on the main dashboard. I set up two such environments for this project: ‘staging’ and ‘live’.   2. Add Target Machines Once I had created the environments, I was ready to add ‘target machine’s to them, which are the actual machines that the deployment will occur on.   To enable me to deploy to a new machine, I needed to download and install an Agent on it. The ‘Add target machine’ form on the ‘Environments’ page helpfully provides a link for downloading an Agent.   
Once the agent has been installed, it is just a case of copying the server key to the agent, and the agent key to the server, to link them up.   3. Run Health Check If, after adding your new target machine, the ‘Status’ flags an error, it is possible that the Agent and Server keys have not been entered correctly on both Deployment Manager and the Agent service.     You can ‘Check Health’, which will give you more information on any issues. It is probably worth running this regardless of what status the ‘Environments’ dashboard is claiming, just to be on the safe side.     4. Add Projects Going back to the main Dashboard tab at this point, I found that it was telling me that I needed to set up a new project.   I clicked the ‘project’ link to get started, gave my new project a name and clicked ‘Create’. I was then redirected to the ‘Steps’ page for the project under the Projects tab.   5. Package Steps The ‘Steps’ page was fairly empty when it first loaded.   Adding a ‘step’ allowed me to specify what packages I wanted to grab for the deployment. This part requires a NuGet package feed to be set up, which is where Deployment Manager will look for the packages. At Red Gate, we already have one set up, so I just needed to tell Deployment Manager about it. Don’t worry; there is a nice guide included on how to go about doing all of this on the ‘Package Feeds’ page in ‘Settings’, if you need any help with setting these bits up.    At Red Gate we use a build server, TeamCity, which is capable of publishing built projects to the NuGet feed we use. This makes the workflow for Format SQL relatively simple: when I commit a change to the project, the build server is configured to grab those changes, build the project, and spit out a new NuGet package to the Red Gate NuGet package feed. My ‘package step’, therefore, is set up to look for this package on our feed. The final part of package step was simply specifying which machines from what environments I wanted to be able to deploy the project to.     Format SQL Now the main Dashboard showed my new project and environment in a rather empty looking grid. Clicking on my project presented me with a nice little message telling me that I am now ready to create my first release!   Create a release Next I clicked on the ‘Create release’ button in the Projects tab. If your feeds and package step(s) were set up correctly, then Deployment Manager will automatically grab the latest version of the NuGet package that you want to deploy. As you can see here, it was able to pick up the latest build for Format SQL and all I needed to do was enter a version number and description of the release.   As you can see underneath ‘Version number’, it keeps track of what version the previous release was given. Clicking ‘Create’ created the release and redirected me to a summary of it where I could check the details before deploying.   I clicked ‘Deploy this release’ and chose the environment I wanted to deploy to and…that’s it. Deployment Manager went off and deployed it for me.   Once I clicked ‘Deploy release’, Deployment Manager started to automatically update and provide continuing feedback about the process. If any errors do arise, then I can expand the results to see where it went wrong. That’s it, I’m done! Keep in mind, if you hit errors with the deployment itself then it is possible to view the log output to try and determine where these occurred. You can keep expanding the logs to narrow down the problem. 
The screenshot below is not from my Format SQL deployment, but I thought I’d post one to demonstrate the logging output available. Features One of the best bits of Deployment Manager for me is the ability to very, very easily deploy the same release to multiple machines. Deploying this same release to production was just a case of selecting the deployment and choosing the ‘live’ environment as the place to deploy to. Following on from this is the fact that, as Deployment Manager keeps track of all of your releases, it is extremely easy to roll back to a previous release if anything goes pear-shaped! You can view all your previous releases and select one to re-deploy. I needed this feature more than once when differences in my production and staging machines lead to some odd behavior.     Another option is to use the TeamCity integration available. This enables you to set Deployment Manager up so that it will automatically create releases and deploy these to an environment directly from TeamCity, meaning that you can always see the latest version up and running without having to do anything. Machine Specific Deployments ‘What about custom configuration files?’ I hear you shout. Certainly, it was one of my concerns. Our setup on the staging machine is not in line with that on production. What this means is that, should we deploy the same configuration to both, one of them is going to break. Thankfully, it turns out that Deployment Manager can deal with this. Given I had environments ‘staging’ and ‘live’, and that staging used the project’s web.config file, while production (‘live’) required the config file to undergo some transformations, I simply added a web.live.config file in the project, so that it would be included as part of the NuGet package. In this file, I wrote the XML document transformations I needed and Deployment Manager took care of the rest. Another option is to set up ‘variables’ for your project, which allow you to specify key-value pairs for your configuration file, and which environment to apply them to. You’ll find Variables as a full left-hand submenu within the ‘Projects’ tab. These features will definitely be of interest if you have a large number of environments! There are still many other features that I didn’t get a chance to play around with like running PowerShell scripts for more personalised deployments. Maybe next time! Also, let’s not forget that my use case in this article is a very simple one – deploying a single package. I don’t believe that all projects will be equally as simple, but I already appreciate how much easier Deployment Manager could make my life. I look forward to the possibility of moving our other sites over to Deployment Manager in the near future.   Conclusion In this article I have described the steps involved in setting up and configuring an instance of Deployment Manager, creating a new automated deployment process, and using this to actually carry out a deployment. I’ve tried to mention some of the features I found particularly useful, such as error logging, easy release management allowing you to deploy the same release multiple times, and configuration file transformations. If I had to point out one issue, then it would be that the releases are immutable, which from a development point of view makes sense. However, this causes confusion where I have to create a new release to deploy to a newly set up environment – I cannot simply deploy an old release onto a new environment, the whole release needs to be recreated. 
I really liked how easy it was to get going with the product. Setting up Format SQL and making a first deployment took very little time. Especially when you compare it to how long it takes me to manually deploy the other site, as I described earlier. I liked how it let me know what I needed to do next, with little messages flagging up that I needed to ‘create environments’ or ‘add some deployment steps’ before I could continue. I found the dashboard incredibly convenient. As the number of projects and environments increase, it might become awkward to try and search them and find out what state they are in. Instead, the dashboard handily keeps track of the latest deployments of each project and lets you know what version is running on each of the environments, and when that deployment occurred. Finally, do you remember my complaint about having to rename folders so that I could keep track of what build they came from? This is yet another thing that Deployment Manager takes care of for you. Each release is put into its own directory, which takes the name of whatever version number that release has, though these can be customised if necessary. If you’d like to take a look at Deployment Manager for yourself, then you can download it here.
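For reference, the machine-specific configuration transforms mentioned in the 'Machine Specific Deployments' section above presumably use the same XML Document Transform (XDT) syntax that Visual Studio uses for web.config transforms. A web.live.config that overrides an app setting and removes the debug flag for the live environment might look like the sketch below; the setting name and value are invented for illustration:

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="ApiBaseUrl" value="https://live.example.com/api"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
  <system.web>
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>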

    Read the article

  • ffmpeg: Could not find codec parameters for stream 0 (Video: h264) unspecified size

    - by dempap
I am trying to convert a video from .raw to .mp4. For this I downloaded, built and installed both x264 and ffmpeg. However, the command:

ffmpeg -f h264 -i output.raw -vcodec copy output.mp4

fails with the error shown in the picture below. Is there any way to fix this? Other commands I also ran:

1. root@beagleboard:/# v4l2-ctl --list-formats
   ioctl: VIDIOC_ENUM_FMT
   Index : 0
   Type : Video Capture
   Pixel Format: 'YUYV'
   Name : YUV 4:2:2 (YUYV)
   Index : 1
   Type : Video Capture
   Pixel Format: 'MJPG' (compressed)
   Name : MJPEG

2. root@beagleboard:/dev# v4l2-ctl --set-fmt-video=pixelformat=0

    Read the article

  • How to export skype history

    - by Peter Štibraný
Is it possible to export Skype chat history into some readable plain-text format (txt, xml, html)? Alternatively, is it possible to back up and restore Skype chat history? (I wouldn't mind a backup to Gmail, or to a readable plain-text format.) I have found numerous tools on the internet and even tried some of them, but they don't seem to work as advertised :-( Please answer only if you have good experience with some tool. I'm using Skype v4.

    Read the article

  • avi to mpeg4 command line convertor

    - by Samvel Siradeghyan
Hi all. I am writing a program for recording video from IP cameras. I use the AForge framework and can save video in AVI format, but the file size is too big. I need a command-line program to convert videos from AVI to MPEG-4 format. Is there a free program for this, and if so, where can I download it and how do I use it? Thanks.
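A hedged example of the kind of command-line conversion being asked about, using ffmpeg (assuming ffmpeg is installed, the input file is input.avi, and leaving audio handling to the defaults; the quality value is only an illustration, with lower values meaning better quality):

ffmpeg -i input.avi -vcodec mpeg4 -qscale 5 output.mp4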

    Read the article

  • I am looking for a good site and I don’t mind paying?. [closed]

    - by Vijeta Negi
Hi, I am a programmer. While writing the code I had prepared a manual and taken a printout of it as well. However, for some reason I am now left with nothing but this printed material, which I also need to modify. I have scanned it and it is in TIFF format now. Does anyone know a good site where I can convert the scanned manual into text format? I am looking for a good site and I don't mind paying.

    Read the article

  • Video Editing Software Recommendation

    - by Lee
I want to get some recommendations for video editing software. I need the software to do the following: encode to multiple formats (.avi, .wma, DVD format, etc.), and most of all encode files to .flv format; burn the file to DVD; perform video editing on the file; and be easy to use, especially for a beginning video editor.

    Read the article

  • Rip DVD on Linux

    - by becomingGuru
I want to rip a DVD and store it in a good, compressed format. What software should I use, and what is the best format? I want the best quality with a low file size. +1 if I don't have to install any new software (at least nothing that isn't in the repos). +1 if it can be done on the command line.

    Read the article

  • EMF to EPS Converter

    - by Uwe Honekamp
I'm looking for (free) tools for converting images stored in EMF (Enhanced Metafile Format) to EPS (Encapsulated PostScript). What features make your recommendation stand out? Edit: can your recommendation be used for batch processing?

    Read the article

  • How to convert pcm to mp3?

    - by avirk
I have some .pcm files and I want to convert them to high-quality .mp3 format. I tried to find tools via a Google search but did not find the right one for me. I would prefer freeware, but if there is no good freeware then I can also consider shareware. The PCM files are quite large (200-500 MB each), so the tool should be able to handle large files. Please help me with this problem.
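If a command-line tool is acceptable, ffmpeg can usually handle this kind of conversion; the sketch below assumes 16-bit little-endian stereo PCM at 44.1 kHz, which is only a guess, so the format, sample-rate and channel options would need to match the actual files:

ffmpeg -f s16le -ar 44100 -ac 2 -i input.pcm -codec:a libmp3lame -b:a 320k output.mp3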

    Read the article

  • Ubuntu: Convert OpenOffice Calc to Excel workbook using CLI

    - by Adam Matan
I need to create an automated report in a spreadsheet format. There seems to be an easy way to create these reports using OpenOffice Calc, but unfortunately upper management wants them in MS Excel format. As these reports are to be created and emailed automatically, is there a nice, command-line way to convert between these file formats?
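One common approach, hedged because the exact tooling depends on what is installed, is to script the conversion through OpenOffice/LibreOffice itself, for example with the unoconv wrapper:

unoconv -f xls report.ods

More recent LibreOffice builds can also do this directly, with something like libreoffice --headless --convert-to xls report.ods.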

    Read the article

  • OCZ Vertex 2 not recognized by Ubuntu installer

    - by Zsub
    As I boot into the Ubuntu 10.10 (or 11.04, doesn't matter) live environment or installer, it just refuses to recognise my Vertex 2. It reports the disk as ATA and not supporting smart, shows no serial number, and doesn't list the size correctly. All fdisk tells me is Unable to read /dev/sda (it's the only storage in the PC). I'm now running a temporary install of Windows 7 off of it, which worked like a charm, so where am I going wrong with Ubuntu... Specs: Asus M4N68T-M LE V2 (BIOS 0702, most recent) OCZ Vertex 2 SSD 60 GB Amd Athlon II X4 640 Patriot PSD34G13332 4GB DDR3 ram (two banks) EDIT I installed a second drive, installed Ubuntu on that and booted, it recognised the SSD just fine. I'm now trying to apt-get upgrade the live-environment. I wonder if there is any way to sort of install Ubuntu from Ubuntu (I boot into the working install on the other drive, install it on the SSD and then boot from the SSD). EDIT2 Ok, so that doesn't work. The install detects the SSD, however, it cannot format it. EDIT3 After a fresh boot I can read out SMART-data and even perform a read-benchmark, but if I try to format it, or do a write-bench, it'll crap out and after that it says SMART is not supported. So basically it seems I can't write to the disk, as it will stop working when I do, I will try to run repeated read-benchmarks to see if that has any effect. EDIT4 I'm running several read benchmarks on the drive right now, they give results that are to be expected from an SSD. If the read-benches don't fail, I can use fdisk on the disk, but it is now stuck trying to re-read the partition table after issueing the 'w' command. EDIT5 Parted Magic did recognize the drive and with hdparm -I even could tell me the drive was in a frozen state. I powercycled it (just pull out the plug from the SSD and plug it back in) and it wasn't frozen anymore. After that I could upgrade the firmware on the drive (still using Parted Magic) and format it to Ext4. After I rebooted into the Ubuntu installer, it wouldn't get recognized and hdparm didn't want to talk to it saying HDIO_DRIVE_CMD(identify) failed: Invalid exchange. EDIT6 For some reason if I enable one of the RAID controllers (the one the SSD is connected to, obviously) Ubuntu will let me format it, mount it and write to it. The installer also recognizes it. However if the raid controller is enabled but no array is defined the motherboard can't boot from it :(

    Read the article

  • List full timestamps of files in a tarball

    - by Mechanical snail
    I have a large tar archive and want to see the exact (nanosecond) timestamps that are stored for each file in the archive. In case it's relevant, the tarball is in POSIX-2001 format (tar --format=posix). tar --list --verbose displays the timestamps rounded off to the minute. For comparison, ls --full-time does what I want, but I'd rather not have to extract everything first because it's huge. For my purposes, command-line and GUI tools are both fine.
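
    A hedged partial workaround, assuming a reasonably recent GNU tar that supports the --full-time listing option; it avoids the minute rounding, although whether sub-second digits appear depends on the tar build:

        # List the archive with full timestamps instead of minute-rounded ones,
        # without extracting anything.
        tar --full-time -tvf archive.tar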

    Read the article

  • Pages '09 reverts to a specific printer after choosing "Any Printer"

    - by Matthew Rankin
    When I select Any Printer in the Format For: drop-down box for a Pages '09 document, the next time I reopen the document it reverts to formatting for the specific printer that was originally selected. Why isn't the Format For: selection persistent? I know for certain that this problem exists on documents that I originally created in an earlier version of Pages. I have not yet confirmed whether it also happens in documents originally created in Pages '09.

    Read the article

  • Unable to log into Ubuntu

    - by Rodnower
    I have Ubuntu 12.04.1. I didn't do anything special recently, but suddenly a problem appeared: at the login screen (using lightdm), when I attempt to log in, I get a console session and am returned to the login screen. I see that this is a known issue, so I tried all of the following steps:

    - Removed .XAuthority
    - Configured the system to use gdm
    - Reinstalled lightdm
    - Added my user to the nopasswdlogin group

    But nothing helped... These are the errors from /var/log/auth.log:

    Oct 3 01:11:48 alphabet-2 lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0)
    Oct 3 01:11:48 alphabet-2 lightdm: pam_ck_connector(lightdm:session): nox11 mode, ignoring PAM_TTY :0
    Oct 3 01:11:48 alphabet-2 lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "andrey"
    Oct 3 01:11:48 alphabet-2 dbus[704]: [system] Rejected send message, 2 matched rules; type="method_call", sender=":1.35" (uid=104 pid=1709 comm="/usr/lib/indicator-datetime/indicator-datetime-ser") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination=":1.14" (uid=0 pid=1169 comm="/usr/sbin/console-kit-daemon --no-daemon ")

    Any ideas?
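
    A hedged diagnostic sketch; the username andrey is taken from the log above, and these checks target common causes of this greeter loop rather than a confirmed fix (the pam_succeed_if line about nopasswdlogin is usually harmless on its own):

        # A root-owned ~/.Xauthority or a 100%-full filesystem commonly causes the
        # "log in, bounce back to the greeter" loop.
        ls -l /home/andrey/.Xauthority      # should be owned by andrey, not root
        df -h / /home                       # check for a full root or home partition
        sudo chown andrey:andrey /home/andrey/.Xauthority   # only if it was root-owned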

    Read the article

  • Unable to open an ra_local session to URL

    - by AntonAL
    I have a repository on my Windows machine, created with VisualSVN + TortoiseSVN, and I want to use this repository on my Mac with Versions.app. I create a new Repository bookmark, choose a repository directory and, after clicking "Create", get the following error:

    Unable to open an ra_local session to URL
    Unable to open repository 'file://localhost/Path/to/my/repositories/MyRepo'
    Expected FS format '2'; found format '4'

    I can browse the repository from the command line using svn ls file://localhost/Path/to/my/repositories/MyRepo. Also, when I create a local repository using Versions.app, the bookmark is created fine and I can work with it. Help!
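
    A hedged sketch of two common workarounds, on the assumption that the error means the Subversion library bundled with Versions.app is older than the one that created the repository (FS format 2 roughly corresponds to a 1.4-era repository, format 4 to 1.6); paths are reused from the question for illustration:

        # Option 1: access the repository over a served protocol instead of file://, so the
        # client library never has to read the on-disk format itself. VisualSVN Server
        # already publishes repositories over http(s); a plain svnserve also works:
        svnserve -d -r /Path/to/my/repositories
        # then bookmark it as svn://<host>/MyRepo

        # Option 2: recreate the repository in an older on-disk format and load a dump into it:
        svnadmin create --pre-1.5-compatible /path/to/MyRepo-compat
        svnadmin dump /Path/to/my/repositories/MyRepo | svnadmin load /path/to/MyRepo-compat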

    Read the article

  • Suggestions for sharing and using data between Ubuntu and Windows 7 dual boot

    - by wizjany
    Note: TL;DR, scroll to the bottom for a summary. I recently set up my computer for a dual boot between Ubuntu 9.10 and Windows 7. My current drive setup is as follows:

    Drive A: | A1 | A2 |
    Drive B: | B1 | B2 | B3 |

    A1: 100 MB, Windows 7 "System Reserved" boot partition
    A2: 230 GB, data section; this one needs to be shared between the operating systems
    B1: 125 GB, Windows 7 OS
    B2: 123 GB, Ubuntu OS
    B3: 2 GB, Linux swap space

    Basically, I want my documents, music, pictures, videos, etc. accessible from both operating systems. My first attempt involved making the data (A2) partition NTFS and moving my home folder from Ubuntu to the data partition. However, as I've read, NTFS does not play nicely with permissions, and it messed up my home folder. My next idea is one of the following:

    1) Format the data partition as ext2/3/4, move my home folder from Linux there, and get a driver to read ext partitions in Windows 7. The problem is that most of the ext drivers/software are not compatible with Windows 7 or do not integrate with Windows Explorer (I really don't want to open a separate application window just to access my data, plus it's probably not compatible with other software). http://www.fs-driver.org/ looks promising, but I'm not sure how it works with ext4 and Windows 7 (not officially supported; when trying it in Vista compatibility mode, it tells me I need to format the ext drive to use it).

    2) Keep the home folder in Ubuntu where it is, but create symlinks for the Documents, Music, etc. folders to an NTFS-formatted data (A2) partition, and add those locations to the Windows 7 libraries (see the sketch after this question). I'm not totally sure how the permissions would work out, but it should be fine since it's only the documents, music, etc. and not the important config files in the rest of /home/user/. Correct me if I'm wrong.

    Currently, symlinks are my best idea, although I'm not sure how it will work out. Any suggestions, additions to my ideas, links, pointers, whatever, would be greatly appreciated. Even if it means I should reformat both my drives and repartition (two 250 GB drives, if you want to suggest a setup for that), I won't be too opposed if that's the best suggestion (I've gone through the format/install/format/reinstall process 5 times over the past 3 days; once more won't hurt me).

    TL;DR summary: I have two hard drives. One is partitioned for Ubuntu and Windows 7; the second one I want accessible to both operating systems to store documents, music, pictures, videos, etc. Suggestions on how to set up the data drive, please =)

    P.S. Bonus if I can get an Apache server document root folder working between the two OSes as well (permissions could become very complicated, so don't worry too much about that).

    P.P.S. Related question, but data viewing is one-way: http://superuser.com/questions/84586/partition-scheme-and-size-for-dual-boot-windows-7-and-ubuntu-9-10-with-separate-p
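
    A hedged sketch of idea 2, assuming the NTFS data partition shows up as /dev/sda2 on the Ubuntu side; the device name, mount point, and uid/gid values are illustrative and would need adjusting:

        # 1) Add a line like this to /etc/fstab so the NTFS data partition mounts at boot,
        #    owned by the desktop user (uid/gid 1000 here; adjust device and options):
        #
        #      /dev/sda2  /media/data  ntfs-3g  defaults,uid=1000,gid=1000,umask=022  0  0
        #
        sudo mkdir -p /media/data
        sudo mount /media/data            # picks up the fstab entry above

        # 2) Replace a home-folder media directory with a symlink into the shared partition
        #    (move any existing contents across first).
        mkdir -p /media/data/Music
        mv ~/Music/* /media/data/Music/
        rmdir ~/Music
        ln -s /media/data/Music ~/Music

    On the Windows side, the same folders on the data partition can then be added to the corresponding Windows 7 libraries.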

    Read the article
