Search Results



  • How to handle failure to release a resource which is contained in a smart pointer?

    - by cj
    How should an error during resource deallocation be handled, when the object representing the resource is contained in a shared pointer?

    Smart pointers are a useful tool to manage resources safely. Examples of such resources are memory, disk files, database connections, or network connections.

        // open a connection to the local HTTP port
        boost::shared_ptr<Socket> socket = Socket::connect("localhost:80");

    In a typical scenario, the class encapsulating the resource should be noncopyable and polymorphic. A good way to support this is to provide a factory method returning a shared pointer, and declare all constructors non-public. The shared pointers can now be copied from and assigned to freely. The object is automatically destroyed when no reference to it remains, and the destructor then releases the resource.

        /** A TCP/IP connection. */
        class Socket {
        public:
            static boost::shared_ptr<Socket> connect(const std::string& address);
            virtual ~Socket();

        protected:
            Socket(const std::string& address);

        private:
            // not implemented
            Socket(const Socket&);
            Socket& operator=(const Socket&);
        };

    But there is a problem with this approach. The destructor must not throw, so a failure to release the resource will remain undetected. A common way out of this problem is to add a public method to release the resource.

        class Socket {
        public:
            virtual void close(); // may throw
            // ...
        };

    Unfortunately, this approach introduces another problem: our objects may now contain resources which have already been released. This complicates the implementation of the resource class. Even worse, it makes it possible for clients of the class to use it incorrectly. The following example may seem far-fetched, but it is a common pitfall in multi-threaded code.

        socket->close();
        // ...
        size_t nread = socket->read(&buffer[0], buffer.size()); // wrong use!

    Either we ensure that the resource is not released before the object is destroyed, thereby losing any way to deal with a failed resource deallocation. Or we provide a way to release the resource explicitly during the object's lifetime, thereby making it possible to use the resource class incorrectly.

    There is a way out of this dilemma. But the solution involves using a modified shared pointer class, and these modifications are likely to be controversial. Typical shared pointer implementations, such as boost::shared_ptr, require that no exception be thrown when their object's destructor is called. Generally, no destructor should ever throw, so this is a reasonable requirement. These implementations also allow a custom deleter function to be specified, which is called in lieu of the destructor when no reference to the object remains. The no-throw requirement is extended to this custom deleter function.

    The rationale for this requirement is clear: the shared pointer's destructor must not throw. If the deleter function does not throw, nor will the shared pointer's destructor. However, the same holds for other member functions of the shared pointer which lead to resource deallocation, e.g. reset(): if resource deallocation fails, no exception can be thrown.

    The solution proposed here is to allow custom deleter functions to throw. This means that the modified shared pointer's destructor must catch exceptions thrown by the deleter function. On the other hand, member functions other than the destructor, e.g. reset(), shall not catch exceptions of the deleter function (and their implementation becomes somewhat more complicated).
    Here is the original example, using a throwing deleter function:

        /** A TCP/IP connection. */
        class Socket {
        public:
            static SharedPtr<Socket> connect(const std::string& address);

        protected:
            Socket(const std::string& address);
            virtual ~Socket() { }

        private:
            struct Deleter;

            // not implemented
            Socket(const Socket&);
            Socket& operator=(const Socket&);
        };

        struct Socket::Deleter {
            void operator()(Socket* socket) {
                // Close the connection. If an error occurs, delete the socket
                // and throw an exception.
                delete socket;
            }
        };

        SharedPtr<Socket> Socket::connect(const std::string& address) {
            return SharedPtr<Socket>(new Socket(address), Deleter());
        }

    We can now use reset() to free the resource explicitly. If there is still a reference to the resource in another thread or another part of the program, calling reset() will only decrement the reference count. If this is the last reference to the resource, the resource is released. If resource deallocation fails, an exception is thrown.

        SharedPtr<Socket> socket = Socket::connect("localhost:80");
        // ...
        socket.reset();


  • How to extract data from Excel (Apache POI) into a MySQL table using JSP? [SOLVED]

    - by Nihad KH
    I want to extract data from an Excel sheet and insert it into a MySQL table using JSP. So far I've done this, and it prints the data to the output (using Apache POI). What should I add to this code?

    Output:

        Name       Age    Adress
        Mark       35     New york,AA
        Elise      22     India,bb
        Charlotte  45     France,cc

    Readexcel.jsp:

        <%@page import="java.sql.Statement"%>
        <%@page import="java.util.ArrayList"%>
        <%@page import="java.sql.PreparedStatement"%>
        <%@page import="java.sql.Connection"%>
        <%@page import="java.util.Date"%>
        <%@page import="org.apache.poi.ss.usermodel.Cell"%>
        <%@page import="org.apache.poi.ss.usermodel.Row"%>
        <%@page import="org.apache.poi.xssf.usermodel.XSSFSheet"%>
        <%@page import="org.apache.poi.xssf.usermodel.XSSFWorkbook"%>
        <%@page import="java.io.File"%>
        <%@page import="org.apache.commons.io.FilenameUtils"%>
        <%@page import="org.apache.commons.fileupload.FileItem"%>
        <%@page import="java.util.Iterator"%>
        <%@page import="java.util.List"%>
        <%@page import="org.apache.commons.fileupload.servlet.ServletFileUpload"%>
        <%@page import="org.apache.commons.fileupload.disk.DiskFileItemFactory"%>
        <%@page import="org.apache.commons.fileupload.FileItemFactory"%>
        <%@page contentType="text/html" pageEncoding="UTF-8"%>
        <!DOCTYPE html>
        <html>
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
        <title>PRINT DATA FROM EXCEL FILE</title>
        </head>
        <body>
        <%
        try {
            boolean ismultipart = ServletFileUpload.isMultipartContent(request);
            if (!ismultipart) {
                // not a multipart request - nothing to do
            } else {
                FileItemFactory factory = new DiskFileItemFactory();
                ServletFileUpload upload = new ServletFileUpload(factory);
                List items = null;
                try {
                    items = upload.parseRequest(request);
                } catch (Exception e) {
                    // note: errors are swallowed here
                }
                Iterator itr = items.iterator();
                while (itr.hasNext()) {
                    FileItem item = (FileItem) itr.next();
                    if (item.isFormField()) {
                        // skip regular form fields
                    } else {
                        String itemname = item.getName();
                        if ((itemname == null || itemname.equals(""))) {
                            continue;
                        }
                        String filename = FilenameUtils.getName(itemname);
                        File f = checkExist(filename);
                        item.write(f);
                        try {
                            XSSFWorkbook workbook = new XSSFWorkbook(item.getInputStream());
                            XSSFSheet sheet = workbook.getSheetAt(0);
                            Iterator<Row> rowIterator = sheet.iterator();
                            while (rowIterator.hasNext()) {
                                Row row = rowIterator.next();
                                Iterator<Cell> cellIterator = row.cellIterator();
                                while (cellIterator.hasNext()) {
                                    Cell cell = cellIterator.next();
                                    switch (cell.getCellType()) {
                                        case Cell.CELL_TYPE_NUMERIC:
                                            out.print(cell.getNumericCellValue() + "\t");
                                            break;
                                        case Cell.CELL_TYPE_STRING:
                                            out.print(cell.getStringCellValue() + "\t");
                                            break;
                                    }
                                }
                                out.println("");
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                }
            }
        } catch (Exception e) {
        } finally {
            out.close();
        }
        %>
        <%!
        private File checkExist(String fileName) {
            String saveFile = "D:/upload/";
            File f = new File(saveFile + "/" + fileName);
            if (f.exists()) {
                // avoid overwriting: append a timestamp to the file name
                StringBuffer sb = new StringBuffer(fileName);
                sb.insert(sb.lastIndexOf("."), "-" + new Date().getTime());
                f = new File(saveFile + "/" + sb.toString());
            }
            return f;
        }
        %>
        </body>
        </html>

    I've created a table in my database named EXCELDATA with the header of the Excel sheet:

        ExcelData (Name varchar(50), age int, adress varchar(50));

    What should I add to this code to get the data from the Excel sheet into the MySQL table?


  • Announcing release of ASP.NET MVC 3, IIS Express, SQL CE 4, Web Farm Framework, Orchard, WebMatrix

    - by ScottGu
    I'm excited to announce the release today of several products:

    - ASP.NET MVC 3
    - NuGet
    - IIS Express 7.5
    - SQL Server Compact Edition 4
    - Web Deploy and Web Farm Framework 2.0
    - Orchard 1.0
    - WebMatrix 1.0

    The above products are all free. They build upon the .NET 4 and VS 2010 release, and add a ton of additional value to ASP.NET (both Web Forms and MVC) and the Microsoft Web Server stack.

    ASP.NET MVC 3

    Today we are shipping the final release of ASP.NET MVC 3. You can download and install ASP.NET MVC 3 here. The ASP.NET MVC 3 source code (released under an OSI-compliant open source license) can also optionally be downloaded here.

    ASP.NET MVC 3 is a significant update that brings with it a bunch of great features. Some of the improvements include:

    Razor

    ASP.NET MVC 3 ships with a new view-engine option called "Razor" (in addition to continuing to support/enhance the existing .aspx view engine). Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow.

    Unlike most template syntaxes, with Razor you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type. You can learn more about Razor from some of the blog posts I've done about it over the last 6 months:

    - Introducing Razor
    - New @model keyword in Razor
    - Layouts with Razor
    - Server-Side Comments with Razor
    - Razor's @: and <text> syntax
    - Implicit and Explicit code nuggets with Razor
    - Layouts and Sections with Razor

    Today's release supports full code intellisense for Razor (both VB and C#) with Visual Studio 2010 and the free Visual Web Developer 2010 Express.

    JavaScript Improvements

    ASP.NET MVC 3 enables richer JavaScript scenarios and takes advantage of emerging HTML5 capabilities.

    The AJAX and Validation helpers in ASP.NET MVC 3 now use an unobtrusive JavaScript based approach. Unobtrusive JavaScript avoids injecting inline JavaScript into HTML, and enables cleaner separation of behavior using the new HTML 5 "data-" attribute convention (which conveniently works on older browsers as well – including IE6). This keeps your HTML tight and clean, and makes it easier to optionally swap out or customize JS libraries.

    ASP.NET MVC 3 now includes built-in support for posting JSON-based parameters from client-side JavaScript to action methods on the server. This makes it easier to exchange data across the client and server, and build rich JavaScript front-ends. We think this capability will be particularly useful going forward with scenarios involving client templates and data binding (including the jQuery plugins the ASP.NET team recently contributed to the jQuery project).

    Previous releases of ASP.NET MVC included the core jQuery library. ASP.NET MVC 3 also now ships the jQuery Validate plugin (which our validation helpers use for client-side validation scenarios). We are now shipping jQuery UI by default as well (which provides a rich set of client-side JavaScript UI widgets for you to use within projects).

    Improved Validation

    ASP.NET MVC 3 includes a bunch of validation enhancements that make it even easier to work with data. Client-side validation is now enabled by default with ASP.NET MVC 3 (using an unobtrusive JavaScript implementation).
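    To give a feel for what the validation helpers work against, here is a minimal, hedged sketch of a model class using the .NET 4 DataAnnotations attributes that MVC 3's server-side and unobtrusive client-side validation both pick up (the class and property names are hypothetical, not from the release itself):

        using System.ComponentModel.DataAnnotations;

        // A hypothetical view model: the attributes below drive both
        // server-side validation and the default client-side validation.
        public class RegistrationModel
        {
            [Required(ErrorMessage = "A user name is required.")]
            [StringLength(20, MinimumLength = 3)]
            public string UserName { get; set; }

            [Range(18, 120, ErrorMessage = "Age must be between 18 and 120.")]
            public int Age { get; set; }
        }

    With client-side validation on by default, helpers like Html.TextBoxFor and Html.ValidationMessageFor emit the "data-val-*" attributes for these rules without any extra wiring.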
    Today's release also includes built-in support for Remote Validation – which enables you to annotate a model class with a validation attribute that causes ASP.NET MVC to perform a remote validation call to a server method when validating input on the client.

    The validation features introduced within .NET 4's System.ComponentModel.DataAnnotations namespace are now supported by ASP.NET MVC 3. This includes support for the new IValidatableObject interface – which enables you to perform model-level validation, and allows you to provide validation error messages specific to the state of the overall model, or between two properties within the model.

    ASP.NET MVC 3 also supports the improvements made to the ValidationAttribute class in .NET 4. ValidationAttribute now supports a new IsValid overload that provides more information about the current validation context, such as what object is being validated. This enables richer scenarios where you can validate the current value based on another property of the model. We've shipped a built-in [Compare] validation attribute with ASP.NET MVC 3 that uses this support and makes it easy out of the box to compare and validate two property values.

    You can use any data access API or technology with ASP.NET MVC. This past year, though, we've worked closely with the .NET data team to ensure that the new EF Code First library works really well for ASP.NET MVC applications. These two posts of mine cover the latest EF Code First preview and demonstrate how to use it with ASP.NET MVC 3 to enable easy editing of data (with end to end client+server validation support). The final release of EF Code First will ship in the next few weeks.

    Today we are also publishing the first preview of a new MvcScaffolding project. It enables you to easily scaffold ASP.NET MVC 3 Controllers and Views, and works great with EF Code-First (and is pluggable to support other data providers). You can learn more about it – and install it via NuGet today – from Steve Sanderson's MvcScaffolding blog post.

    Output Caching

    Previous releases of ASP.NET MVC supported output caching content at a URL or action-method level. With ASP.NET MVC 3 we are also enabling support for partial page output caching – which allows you to easily output cache regions or fragments of a response as opposed to the entire thing. This ends up being super useful in a lot of scenarios, and enables you to dramatically reduce the work your application does on the server.

    The new partial page output caching support in ASP.NET MVC 3 enables you to easily re-use cached sub-regions/fragments of a page across multiple URLs on a site. It supports the ability to cache the content either on the web-server, or optionally cache it within a distributed cache server like Windows Server AppFabric or memcached. I'll post some tutorials on my blog that show how to take advantage of ASP.NET MVC 3's new output caching support for partial page scenarios in the future.
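    In the meantime, here is a rough, hedged sketch of the shape of partial page caching: a child action marked [OutputCache], rendered into any view via Html.RenderAction (the controller name and duration are made-up examples):

        using System.Web.Mvc;

        public class NewsController : Controller
        {
            // The rendered fragment is cached for 60 seconds, independently
            // of the pages that embed it via Html.RenderAction.
            [ChildActionOnly]
            [OutputCache(Duration = 60)]
            public ActionResult LatestHeadlines()
            {
                return PartialView();
            }
        }

    Inside a view the fragment is then embedded with Html.RenderAction("LatestHeadlines", "News"), and only that region is served from the cache.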
    Better Dependency Injection

    ASP.NET MVC 3 provides better support for applying Dependency Injection (DI) and integrating with Dependency Injection/IOC containers. With ASP.NET MVC 3 you no longer need to author custom ControllerFactory classes in order to enable DI with Controllers. You can instead just register a Dependency Injection framework with ASP.NET MVC 3 and it will resolve dependencies not only for Controllers, but also for Views, Action Filters, Model Binders, Value Providers, Validation Providers, and Model Metadata Providers that you use within your application. This makes it much easier to cleanly integrate dependency injection within your projects.

    Other Goodies

    ASP.NET MVC 3 includes dozens of other nice improvements that help to both reduce the amount of code you write, and make the code you do write cleaner. Here are just a few examples (a quick sketch of two of these follows the list):

    - Improved New Project dialog that makes it easy to start new ASP.NET MVC 3 projects from templates.
    - Improved Add->View Scaffolding support that enables the generation of even cleaner view templates.
    - New ViewBag property that uses .NET 4's dynamic support to make it easy to pass late-bound data from Controllers to Views.
    - Global Filters support that allows specifying cross-cutting filter attributes (like [HandleError]) across all Controllers within an app.
    - New [AllowHtml] attribute that allows for more granular request validation when binding form posted data to models.
    - Sessionless controller support that allows fine grained control over whether SessionState is enabled on a Controller.
    - New ActionResult types like HttpNotFoundResult and RedirectPermanent for common HTTP scenarios.
    - New Html.Raw() helper to indicate that output should not be HTML encoded.
    - New Crypto helpers for salting and hashing passwords.
    - And much, much more…
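    As a quick taste of two of the items above, here is a hedged sketch of registering a global filter at startup and passing late-bound data with ViewBag (the controller and message are made-up examples):

        using System.Web;
        using System.Web.Mvc;

        public class MvcApplication : HttpApplication
        {
            protected void Application_Start()
            {
                // Applies [HandleError] to every controller in the app.
                GlobalFilters.Filters.Add(new HandleErrorAttribute());
            }
        }

        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                // ViewBag is a dynamic property bag, so no strongly-typed
                // view model is needed for ad-hoc values like this.
                ViewBag.Message = "Welcome to ASP.NET MVC 3!";
                return View();
            }
        }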
    Learn More about ASP.NET MVC 3

    We will be posting lots of tutorials and samples on the http://asp.net/mvc site in the weeks ahead. Below are two good ASP.NET MVC 3 tutorials available on the site today:

    - Build your First ASP.NET MVC 3 Application: VB and C#
    - Building the ASP.NET MVC 3 Music Store

    We'll post additional ASP.NET MVC 3 tutorials and videos on the http://asp.net/mvc site in the future. Visit it regularly to find new tutorials as they are published.

    How to Upgrade Existing Projects

    ASP.NET MVC 3 is compatible with ASP.NET MVC 2 – which means it should be easy to update existing MVC projects to ASP.NET MVC 3. The new features in ASP.NET MVC 3 build on top of the foundational work we've already done with the MVC 1 and MVC 2 releases – which means that the skills, knowledge, libraries, and books you've acquired are all directly applicable with the MVC 3 release. MVC 3 adds new features and capabilities – it doesn't obsolete existing ones.

    You can upgrade existing ASP.NET MVC 2 projects by following the manual upgrade steps in the release notes. Alternatively, you can use this automated ASP.NET MVC 3 upgrade tool to easily update your existing projects.

    Localized Builds

    Today's ASP.NET MVC 3 release is available in English. We will be releasing localized versions of ASP.NET MVC 3 (in 9 languages) in a few days. I'll blog pointers to the localized downloads once they are available.

    NuGet

    Today we are also shipping NuGet – a free, open source package manager that makes it easy for you to find, install, and use open source libraries in your projects. It works with all .NET project types (including ASP.NET Web Forms, ASP.NET MVC, WPF, WinForms, Silverlight, and Class Libraries). You can download and install it here.

    NuGet enables developers who maintain open source projects (for example, .NET projects like Moq, NHibernate, Ninject, StructureMap, NUnit, Windsor, Raven, Elmah, etc) to package up their libraries and register them with an online gallery/catalog that is searchable. The client-side NuGet tools – which include full Visual Studio integration – make it trivial for any .NET developer who wants to use one of these libraries to easily find and install it within the project they are working on.

    NuGet handles dependency management between libraries (for example: library1 depends on library2). It also makes it easy to update (and optionally remove) libraries from your projects later. It supports updating web.config files (if a package needs configuration settings). It also allows packages to add PowerShell scripts to a project (for example: scaffold commands). Importantly, NuGet is transparent and clean – it does not install anything at the system level. Instead it is focused on making it easy to manage libraries you use with your projects. Our goal with NuGet is to make it as simple as possible to integrate open source libraries within .NET projects.

    NuGet Gallery

    This week we also launched a beta version of the http://nuget.org web-site – which allows anyone to easily search and browse an online gallery of open source packages available via NuGet. The site also now allows developers to optionally submit new packages that they wish to share with others. You can learn more about how to create and share a package here. There are hundreds of open-source .NET projects already within the NuGet Gallery today. We hope to have thousands there in the future.

    IIS Express 7.5

    Today we are also shipping IIS Express 7.5. IIS Express is a free version of IIS 7.5 that is optimized for developer scenarios. It works for both ASP.NET Web Forms and ASP.NET MVC project types.

    We think IIS Express combines the ease of use of the ASP.NET Web Server (aka Cassini) currently built into Visual Studio today with the full power of IIS. Specifically:

    - It's lightweight and easy to install (less than 5MB download and a quick install)
    - It does not require an administrator account to run/debug applications from Visual Studio
    - It enables a full web-server feature set – including SSL, URL Rewrite, and other IIS 7.x modules
    - It supports and enables the same extensibility model and web.config file settings that IIS 7.x supports
    - It can be installed side-by-side with the full IIS web server as well as the ASP.NET Development Server (they do not conflict at all)
    - It works on Windows XP and higher operating systems – giving you a full IIS 7.x developer feature-set on all Windows OS platforms

    IIS Express (like the ASP.NET Development Server) can be quickly launched to run a site from a directory on disk. It does not require any registration/configuration steps. This makes it really easy to launch and run for development scenarios. You can also optionally redistribute IIS Express with your own applications if you want a lightweight web-server. The standard IIS Express EULA now includes redistributable rights.

    Visual Studio 2010 SP1 adds support for IIS Express. Read my VS 2010 SP1 and IIS Express blog post to learn more about what it enables.

    SQL Server Compact Edition 4

    Today we are also shipping SQL Server Compact Edition 4 (aka SQL CE 4). SQL CE is a free, embedded database engine that enables easy database storage.

    No Database Installation Required

    SQL CE does not require you to run a setup or install a database server in order to use it. You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine. No setup or extra security permissions are required for it to run. You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment.

    SQL CE runs in-memory within your ASP.NET application and will start up when you first access a SQL CE database, and will automatically shut down when your application is unloaded. SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET applications.

    Works with Existing Data APIs

    SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax. This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE. This enables you to use the same data programming skills and data APIs you know today.
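    As a concrete illustration, here is a minimal, hedged ADO.NET sketch against a SQL CE database (the .sdf file name and table are hypothetical; it assumes a reference to the System.Data.SqlServerCe assembly that ships with SQL CE 4):

        using System;
        using System.Data.SqlServerCe;

        class SqlCeDemo
        {
            static void Main()
            {
                // |DataDirectory| resolves to App_Data in an ASP.NET application.
                string connStr = @"Data Source=|DataDirectory|\Products.sdf";

                using (var conn = new SqlCeConnection(connStr))
                using (var cmd = new SqlCeCommand("SELECT Name FROM Products", conn))
                {
                    conn.Open();
                    using (SqlCeDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine(reader.GetString(0));
                    }
                }
            }
        }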
    Supports Development, Testing and Production Scenarios

    SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios. With the SQL CE 4 release we've done the engineering work to ensure that SQL CE won't crash or deadlock when used in a multi-threaded server scenario (like ASP.NET). This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments. Starting with SQL CE 4 you can use it in a web-server as well.

    There are no license restrictions with SQL CE. It is also totally free.

    Tooling Support with VS 2010 SP1

    Visual Studio 2010 SP1 adds support for SQL CE 4 and ASP.NET projects. Read my VS 2010 SP1 and SQL CE 4 blog post to learn more about what it enables.

    Web Deploy and Web Farm Framework 2.0

    Today we are also releasing Microsoft Web Deploy V2 and Microsoft Web Farm Framework V2. These services provide a flexible and powerful way to deploy ASP.NET applications onto either a single server, or across a web farm of machines. You can learn more about these capabilities from my previous blog posts on them:

    - Introducing the Microsoft Web Farm Framework
    - Automating Deployment with Microsoft Web Deploy

    Visit the http://iis.net website to learn more and install them. Both are free.

    Orchard 1.0

    Today we are also releasing Orchard v1.0. Orchard is a free, open source, community based project. It provides Content Management System (CMS) and Blogging System support out of the box, and makes it possible to easily create and manage web-sites without having to write code (site owners can customize a site through the browser-based editing tools built into Orchard). Read these tutorials to learn more about how you can set up and manage your own Orchard site.

    Orchard itself is built as an ASP.NET MVC 3 application using Razor view templates (and by default uses SQL CE 4 for data storage). Developers wishing to extend an Orchard site with custom functionality can open and edit it as a Visual Studio project – and add new ASP.NET MVC Controllers/Views to it.

    WebMatrix 1.0

    WebMatrix is a new, free web development tool from Microsoft that provides a suite of technologies that make website development easier. It enables a developer to start a new site by browsing and downloading an app template from an online gallery of web applications (which includes popular apps like Umbraco, DotNetNuke, Orchard, WordPress, Drupal and Joomla). Alternatively it also enables developers to create and code web sites from scratch. WebMatrix is task focused and helps guide developers as they work on sites.
    WebMatrix includes IIS Express, SQL CE 4, and ASP.NET – providing an integrated web-server, database and programming framework combination. It also includes built-in web publishing support which makes it easy to find and deploy sites to web hosting providers.

    You can learn more about WebMatrix from my Introducing WebMatrix blog post this summer. Visit http://microsoft.com/web to download and install it today.

    Summary

    I'm really excited about today's releases – they provide a bunch of additional value that makes web development with ASP.NET, Visual Studio and the Microsoft Web Server a lot better. A lot of folks worked hard to share this with you today. On behalf of my whole team – we hope you enjoy them!

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • What’s new in ASP.NET 4.0: Core Features

    - by Rick Strahl
    Microsoft released the .NET Runtime 4.0 and with it comes a brand spanking new version of ASP.NET – version 4.0 – which provides an incremental set of improvements to an already powerful platform. .NET 4.0 is a full release of the .NET Framework, unlike version 3.5, which was merely a set of library updates on top of the .NET Framework version 2.0. Because of this full framework revision, there has been a welcome bit of consolidation of assemblies and configuration settings. The full runtime version change to 4.0 also means that you have to explicitly pick version 4.0 of the runtime when you create a new Application Pool in IIS, unlike .NET 3.5, which actually requires version 2.0 of the runtime.

    In this first of two parts I'll take a look at some of the changes in the core ASP.NET runtime. In the next edition I'll go over improvements in Web Forms and Visual Studio.

    Core Engine Features

    Most of the high profile improvements in ASP.NET have to do with Web Forms, but there are a few gems in the core runtime that should make life easier for ASP.NET developers. The following list describes some of the things I've found useful among the new features.

    Clean web.config Files Are Back!

    If you've been using ASP.NET 3.5, you probably have noticed that the web.config file has turned into quite a mess of configuration settings between all the custom handler and module mappings for the various web server versions. Part of the reason for this mess is that .NET 3.5 is a collection of add-on components running on top of the .NET Runtime 2.0, and so almost all of the new features of .NET 3.5 were essentially introduced as custom modules and handlers that had to be explicitly configured in the config file. Because the core runtime didn't rev with 3.5, all those configuration options couldn't be moved up to other configuration files in the system chain. With version 4.0 a consolidation was possible, and the result is a much simpler web.config file by default. A default empty ASP.NET 4.0 Web Forms project looks like this:

        <?xml version="1.0"?>
        <configuration>
            <system.web>
                <compilation debug="true" targetFramework="4.0" />
            </system.web>
        </configuration>

    Need I say more?

    Configuration Transformation Files to Manage Configurations and Application Packaging

    ASP.NET 4.0 introduces the ability to create multi-target configuration files. This means it's possible to create a single configuration file that can be transformed based on relatively simple replacement rules using a Visual Studio and WebDeploy provided XSLT syntax. The idea is that you can create a 'master' configuration file and then create customized versions of this master configuration file by applying some relatively simplistic search and replace, add or remove logic to specific elements and attributes in the original file.
    To give you an idea, here's the example code that Visual Studio creates for a default web.Release.config file, which replaces a connection string, removes the debug attribute and replaces the CustomErrors section:

        <?xml version="1.0"?>
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
            <connectionStrings>
                <add name="MyDB"
                     connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True"
                     xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
            </connectionStrings>
            <system.web>
                <compilation xdt:Transform="RemoveAttributes(debug)" />
                <customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly" xdt:Transform="Replace">
                    <error statusCode="500" redirect="InternalError.htm"/>
                </customErrors>
            </system.web>
        </configuration>

    You can see the XSL transform syntax that drives this functionality. Basically, only the elements listed in the override file are matched and updated – all the rest of the original web.config file stays intact.

    Visual Studio 2010 supports this functionality directly in the project system so it's easy to create and maintain these customized configurations in the project tree. Once you're ready to publish your application, you can then use the Publish <yourWebApplication> option on the Build menu which allows publishing to disk, via FTP or to a Web Server using Web Deploy. You can also create a deployment package as a .zip file which can be used by the WebDeploy tool to configure and install the application. You can manually run the Web Deploy tool or use the IIS Manager to install the package on the server or other machine. You can find out more about WebDeploy and Packaging here: http://tinyurl.com/2anxcje.

    Improved Routing

    Routing provides a relatively simple way to create clean URLs with ASP.NET by associating a template URL path and routing it to a specific ASP.NET HttpHandler. Microsoft first introduced routing with ASP.NET MVC and then integrated routing with a basic implementation in the core ASP.NET engine via a separate ASP.NET routing assembly. In ASP.NET 4.0, the process of using routing functionality gets a bit easier.

    First, routing is now rolled directly into System.Web, so no extra assembly reference is required in your projects to use routing. The RouteCollection class now includes a MapPageRoute() method that makes it easy to route to any ASP.NET Page request without first having to implement an IRouteHandler implementation. It would have been nice if this could have been extended to serve *any* handler implementation, but unfortunately for anything but Page-derived handlers you still have to implement a custom IRouteHandler implementation.

    ASP.NET Pages now include a RouteData collection that contains route information. Retrieving route data is now a lot easier by simply using this.RouteData.Values["routeKey"], where the routeKey is the value specified in the route template (i.e., "users/{userId}" would use Values["userId"]).
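    Here is a minimal sketch of what that looks like end to end: registering a page route at startup and reading the route value in the target page (the route name, URL and page are hypothetical examples):

        using System;
        using System.Web;
        using System.Web.Routing;

        public class Global : HttpApplication
        {
            protected void Application_Start(object sender, EventArgs e)
            {
                // "users/{userId}" now routes to the physical UserDetail.aspx page.
                RouteTable.Routes.MapPageRoute("users", "users/{userId}", "~/UserDetail.aspx");
            }
        }

    Inside UserDetail.aspx the value is then available as this.RouteData.Values["userId"], exactly as described above.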
    The Page class also has a GetRouteUrl() method that you can use to create URLs with route data values rather than hardcoding the URL:

        <%= this.GetRouteUrl("users", new { userId = "ricks" }) %>

    You can also use the new expression syntax using <%$ RouteUrl %> to accomplish something similar, which can be easier to embed into Page or MVC View code:

        <a runat="server" href='<%$ RouteUrl:RouteName=user, id=ricks %>'>Visit User</a>

    Finally, the Response object also includes a new RedirectToRoute() method to build a route URL for redirection without hardcoding the URL:

        Response.RedirectToRoute("users", new { userId = "ricks" });

    All of these routines are helpers that have been integrated into the core ASP.NET engine to make it easier to create routes and retrieve route data, which hopefully will result in more people taking advantage of routing in ASP.NET. To find out more about the routing improvements you can check out Dan Maharry's blog, which has a couple of nice entries on this subject: http://tinyurl.com/37trutj and http://tinyurl.com/39tt5w5.

    Session State Improvements

    Session state is an often used and abused feature in ASP.NET, and version 4.0 introduces a few enhancements geared towards making session state more efficient and minimizing at least some of the ill effects of overuse.

    The first improvement affects out-of-process session state, which is typically used in web farm environments or for sites that store application sensitive data that must survive AppDomain restarts (which in my opinion is just about any application). When using OutOfProc session state, ASP.NET serializes all the data in the session state bag into a blob that gets carried over the network and stored either in the State server or SQL Server via the Session provider. Version 4.0 provides some improvement in this serialization of the session data by offering a compressionEnabled option on the web.config <sessionState> section, which forces the serialized session state to be compressed. Depending on the type of data that is being serialized, this compression can reduce the size of the data travelling over the wire by as much as a third. It works best on string data, but can also reduce the size of binary data.

    In addition, ASP.NET 4.0 now offers a way to programmatically turn session state on or off as part of the request processing queue. In prior versions, the only way to specify whether session state is available was by implementing a marker interface on the HTTP handler implementation. In ASP.NET 4.0, you can now turn session state on and off programmatically via HttpContext.Current.SetSessionStateBehavior() as part of the ASP.NET module pipeline processing, as long as it occurs before the AcquireRequestState pipeline event.
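    Here is a hedged sketch of what the programmatic switch can look like in an IHttpModule; the URL filter is just an invented example:

        using System;
        using System.Web;
        using System.Web.SessionState;

        public class SessionControlModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                // BeginRequest fires well before AcquireRequestState, so it
                // is still legal to change the session behavior here.
                app.BeginRequest += (sender, e) =>
                {
                    HttpContext context = ((HttpApplication)sender).Context;
                    if (context.Request.Path.StartsWith("/static/",
                            StringComparison.OrdinalIgnoreCase))
                        context.SetSessionStateBehavior(SessionStateBehavior.Disabled);
                };
            }

            public void Dispose() { }
        }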
    Output Cache Provider

    Output caching in ASP.NET has been a very useful but potentially memory intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime related parameters. While this works well enough for many intended scenarios, it also can quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site. ASP.NET 4.0 introduces a provider model for the OutputCache module so it becomes possible to plug in custom storage strategies for cached pages.

    One of the goals also appears to be to consolidate some of the different cache storage mechanisms used in .NET in general into a generic Windows AppFabric framework in the future, so various mechanisms like OutputCache, the non-Page specific ASP.NET cache and possibly even session state can eventually use the same caching engine for storage of persisted data, both in-memory and in out-of-process scenarios.

    For developers, the OutputCache provider feature means that you can now extend caching on your own by implementing a custom cache provider based on the System.Web.Caching.OutputCacheProvider class. You can find more info on creating an Output Cache provider in Gunnar Peipman's blog at: http://tinyurl.com/2vt6g7l.
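    To make the shape of that extensibility point concrete, here is a hedged skeleton of a custom provider. The naive in-memory dictionary store is purely illustrative (a real provider would target an out-of-process store), but the four overrides are the actual contract of System.Web.Caching.OutputCacheProvider:

        using System;
        using System.Collections.Concurrent;
        using System.Web.Caching;

        public class DictionaryOutputCacheProvider : OutputCacheProvider
        {
            private class Entry { public object Value; public DateTime UtcExpiry; }

            private readonly ConcurrentDictionary<string, Entry> store =
                new ConcurrentDictionary<string, Entry>();

            public override object Get(string key)
            {
                Entry entry;
                if (store.TryGetValue(key, out entry) && entry.UtcExpiry > DateTime.UtcNow)
                    return entry.Value;
                store.TryRemove(key, out entry);  // expired or missing
                return null;
            }

            public override object Add(string key, object entry, DateTime utcExpiry)
            {
                // Per the provider contract, Add returns any existing entry.
                object existing = Get(key);
                if (existing != null)
                    return existing;
                Set(key, entry, utcExpiry);
                return entry;
            }

            public override void Set(string key, object entry, DateTime utcExpiry)
            {
                store[key] = new Entry { Value = entry, UtcExpiry = utcExpiry };
            }

            public override void Remove(string key)
            {
                Entry removed;
                store.TryRemove(key, out removed);
            }
        }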
    Response.RedirectPermanent

    ASP.NET 4.0 includes a feature to issue a permanent redirect as an HTTP 301 Moved Permanently response rather than the standard 302 Redirect response. In pre-4.0 versions you had to manually create your permanent redirect by setting the Status and StatusCode properties – Response.RedirectPermanent() makes this operation more obvious and discoverable. There's also a Response.RedirectToRoutePermanent() which provides permanent redirection of route URLs.

    Preloading of Applications

    ASP.NET 4.0 provides a new feature to preload ASP.NET applications on startup, which is meant to provide a more consistent startup experience. If your application has a lengthy startup cycle it can appear very slow to serve data to clients while the application is warming up and loading initial resources. So rather than serve these startup requests slowly, in ASP.NET 4.0 you can force the application to initialize itself first, before it even accepts requests for processing.

    This feature works only on IIS 7.5 (Windows 7 and Windows Server 2008 R2) and works in combination with IIS. You can set up a worker process in IIS 7.5 to always be running, which starts the Application Pool worker process immediately. ASP.NET 4.0 then allows you to specify site-specific settings by setting the serviceAutoStartEnabled attribute on a particular site, along with an optional serviceAutoStartProvider class that can be used to receive "startup events" when the application starts up. This event in turn can be used to configure the application and optionally pre-load cache data and other information required by the app on startup.

    The configuration settings need to be made in applicationhost.config:

        <sites>
            <site name="WebApplication2" id="1">
                <application path="/"
                             serviceAutoStartEnabled="true"
                             serviceAutoStartProvider="PreWarmup" />
            </site>
        </sites>

        <serviceAutoStartProviders>
            <add name="PreWarmup" type="PreWarmupProvider,MyAssembly" />
        </serviceAutoStartProviders>

    Hooking up a warm up provider is optional, so you can omit the provider definition and reference. If you do define it, here's what it looks like:

        public class PreWarmupProvider : System.Web.Hosting.IProcessHostPreloadClient
        {
            public void Preload(string[] parameters)
            {
                // initialization for app
            }
        }

    While this code is running, ASP.NET/IIS will hold requests from hitting the pipeline. So until this code completes, the application will not start taking requests. The idea is that you can perform any pre-loading of resources and cache values so that the first request will be ready to perform at an optimal performance level without lag.

    Runtime Performance Improvements

    According to Microsoft, there have also been a number of invisible performance improvements in the internals of the ASP.NET runtime that should make ASP.NET 4.0 applications run more efficiently and use fewer resources. These features come without any change requirements in applications and are virtually transparent, except that you get the benefits by updating to ASP.NET 4.0.

    Summary

    The core feature set changes are minimal, which continues a tradition of small incremental changes to the ASP.NET runtime. ASP.NET has been proven as a solid platform and I'm actually rather happy to see that most of the effort in this release went into stability, performance and usability improvements rather than a massive amount of new features. The new functionality added in 4.0 is minimal but very useful.

    A lot of people are still running pure .NET 2.0 applications these days and have stayed off of .NET 3.5 for some time now. I think that version 4.0, with its full .NET runtime rev and assembly and configuration consolidation, will make an attractive platform for developers to update to. If you're a Web Forms developer in particular, ASP.NET 4.0 includes a host of new features in the Web Forms engine that are significant enough to warrant a quick move to .NET 4.0. I'll cover those changes in my next column. Until then, I suggest you give ASP.NET 4.0 a spin and see for yourself how the new features can help you out.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET


  • IIS SSL Certificate Renewal Pain

    - by Rick Strahl
    I'm in the middle of my annual certificate renewal for the West Wind site and I can honestly say that I hate IIS's certificate system. When it works it's fine, but when it doesn't, man, can it be a pain. Because I deal with public certificates on my site merely once a year, and you have to perform the certificate dance just the right way, I seem to run into some sort of trouble every year, thinking that Microsoft surely must have addressed the issues I ran into previously – HA! Not so.

    Don't ever use the Renew Certificate Feature in IIS!

    The first rule that I should have never forgotten is that certificate renewals in IIS (7 is what I'm using, but I think it's no different in 7.5 and 8) simply don't work if you're submitting to get a public certificate from a certificate authority. I use DNSimple for my DNS domain management and SSL certificates because they provide ridiculously easy domain management and good prices for SSL certs – especially wildcard certificates, which is what I use on west-wind.com.

    Certificates in IIS can be found pegged to the machine root. If you go into the IIS Manager, go to the machine root in the tree and then click on Certificates, you get various certificate options. Both of the certificate-request options create a new certificate request (CSR), which is just a text file. But if you're silly enough like me to click on the Renew button on your old certificate, you'll find that you end up generating a very long certificate request that looks nothing like the original certificate request, and the format that's used for this is not accepted by most certificate authorities. While I'm not sure exactly what the problem is, it simply looks like IIS is respecting none of your original certificate bit size choices and is generating a huge certificate request that is three times the size of a 'normal' certificate request. The end result (and I've done this at least twice now) is that the certificate processor is likely to fail processing those renewals.

    Always create a new Certificate

    While it's a little more work and you have to remember how to fill out the certificate request properly, this is the safe way to make sure your certificate generates properly. First comes the Distinguished Name Properties dialog. Ah yes, you have to love the nomenclature of this stuff. Distinguished name, common name – WTF is a common name? It doesn't look common to me! Make sure this form gets filled out correctly.

    Common Name: This is the domain name of the Web site. In my case I'm creating a wildcard certificate, so I'm using the * prefix. If you're purchasing a certificate for a specific domain, use www.west-wind.com or store.west-wind.com for example. Make sure this matches the EXACT domain you're trying to use secure access on, because that's all the certificate is going to work on unless you get a wildcard certificate.

    Organization: The name of your company or organization. Depending on the kind of certificate you purchase, this name will show up on your certificate. Most low end SSL certificates (ie. those that cost under $100 for single domains) don't list the organization; the higher signature certificates that also require extensive validation by the cert authority do. Regardless, you should make sure this matches the right company/organization.

    Organizational Unit: This can be anything. Not really sure what this is for, but traditionally I've always set this to Web because – well, this is a Web thing after all, right?
    I've never seen this used anywhere that I can tell, other than to internally reference the cert.

    State and Country: Pretty obvious. Should reflect the location of the business/organization/person or site.

    Next you have to configure the bit size used for the certificate. The default on this dialog is 1024, but I've found that most providers these days request a minimum bit length of 2048, as did my DNSimple provider. Again, check with the provider when you submit to make sure. Bit length mismatches can cause problems if you use a size that isn't supported by the provider. I had that happen last year when I submitted my CSR and it got rejected quite a bit later, when the certs usually are issued within an hour or less.

    When you're done here, the certificate request is saved to disk as a .txt file and it should look something like this (this is a 2048 bit length CSR):

        -----BEGIN NEW CERTIFICATE REQUEST-----
        MIIEVGCCAz0CAQAwdjELMAkGA1UEBhMCVVMxDzANBgNVBAgMBkhhd2FpaTENMAsG
        A1UEBwwEUGFpYTEfMB0GA1UECgwWV2VzdCBXaW5kIFRlY2hub2xvZ2llczEMMAoG
        B1UECwwDV2ViMRgwFgYDVQQDDA8qLndlc3Qtd2luZC5jb20wggEiMA0GCSqGSIb3
        DQEBAQUAA4IBDwAwggEKAoIBAQDIPWOFMkMVRp2Ftj9w/cCVV4OYYhoZYtl+8lTk
        oqDwKca0xWHLgioX/9v0rZLS6a82MHqKEBxVXu+cuCmSE4AQtB/1YH9lS4tpc/be
        OZDvnTotP6l4MCEzzAfROcw4CiIg6X0RMSnl8IATAvv2V5LQM9TDdt9oDdMpX2IY
        +vVC9RZ7PMHBmR9kwI2i/lrKitzhQKaHgpmKcRlM6iqpALUiX28w5HJaDKK1MDHN
        607tyFJLHijuJKx7PdTqZYf50KkC3NupfZ2avVycf18Q13jHWj59tvwEOczoVzRL
        l4LQivAqbhyiqMpWnrZunIOUZta5aGm+jo7O1knGWJjxuraTAgMBAAGgggGYMBoG
        CisGAQQBgjcNAgMxDBYKNi4yLjkyMDAuMjA0BgkrBgEEAYI3FRQxJzAlAgEFDAZS
        QVNYUFMMC1JBU1hQU1xSaWNrDAtJbmV0TWdyLmV4ZTByBgorBgEEAYI3DQICMWQw
        YgIBAR5aAE0AaQBjAHIAbwBzAG8AZgB0ACAAUgBTAEEAIABTAEMAaABhAG4AbgBl
        AGwAIABDAHIAeQBwAHQAbwBnAHIAYQBwAGgAaQBjACAAUAByAG8AdgBpAGQAZQBy
        AwEAMIHPBgkqhkiG9w0BCQ4xgcEwgb4wDgYDVR0PAQH/BAQDAgTwMBMGA1UdJQQM
        MAoGCCsGAQUFBwMBMHgGCSqGSIb3DQEJDwRrMGkwDgYIKoZIhvcNAwICAgCAMA4G
        CCqGSIb3DQMEAgIAgDALBglghkgBZQMEASowCwYJYIZIAWUDBAEtMAsGCWCGSAFl
        AwQBAjALBglghkgBZQMEAQUwBwYFKw4DAgcwCgYIKoZIhvcNAwcwHQYDVR0OBBYE
        FD/yOsTbXE+GVFCFMmldzQvyloz9MA0GCSqGSIb3DQEBBQUAA4IBAQCK6LlsCuIM
        1AU0niB6QZ9v0FTsGFxP1dYvVUnJyY6VEKNiGFiQjZac7UCs0p58yScdXWEFOE8V
        OsjAYD3xYNc05+ckyD67UHRGEUAVB9RBvbKW23KeR/8kBmEzc8PemD52YOgExxAJ
        57xWmAwEHAvbgYzQvhO8AOzH3TGvvHbg5UKM1pYgNmuwZq5DkL/IDoeIJwfk/wrI
        wghNTuxxIFgbH4YrgLgv4PRvrS/LaTCRBdboaCgzATMczaOb1nd/DVNR+3fCtMhM
        W0psTAjzRbmXF3nJyAQa7jF/52gkY0RfFX2lG5tJnG+XDsVNvKNvh9Qa5Tlmkm06
        ILKCm9ciWCKk
        -----END NEW CERTIFICATE REQUEST-----

    You can take that certificate request and submit it to your certificate provider. Since this is base64 encoded, you can typically just paste it into a text box on the submission page, or some providers will ask you to upload the CSR as a file.

    What does a Renewal look like?
    Note that the length of the CSR will vary somewhat with key strength, but compare this to a renewal request that IIS generated from my existing site:

        -----BEGIN NEW CERTIFICATE REQUEST-----
        MIIPpwYFKoZIhvcNAQcCoIIPmDCCD5QCAQExCzAJBgUrDgMCGgUAMIIIqAYJKoZI
        hvcNAQcBoIIImQSCCJUwggiRMIIH+gIBADBdMSEwHwYDVQQLDBhEb21haW4gQ29u
        dHJvbCBWYWxpFGF0ZWQxHjAcBgNVBAsMFUVzc2VudGlhbFNTTCBXaWxkY2FyZDEY
        MBYGA1UEAwwPKi53ZXN0LXdpbmQuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCB
        iQKBgQCK4OuIOR18Wb8tNMGRZiD1c9X57b332Lj7DhbckFqLs0ys8kVDHrTXSj+T
        Ye9nmAvfPpZmBtE5p9qRNN79rUYugAdl+qEtE4IJe1bRfxXzcKa1SXa8+TEs3zQa
        zYSmcR2dDuC8om1eAdeCtt0NnkvANgm1VLwGOor/UHMASaEhCQIDAQABoIIG8jAa
        BgorBgEEAYI3DQIDMQwWCjYuMi45MjAwLjIwNAYJKwYBBAGCNxUUMScwJQIBBQwG
        UkFTWFBTDAtSQVNYUFNcUmljawwLSW5ldE1nci5leGUwZgYKKwYBBAGCNw0CAjFY
        MFYCAQIeTgBNAGkAYwByAG8AcwBvAGYAdAAgAFMAdAByAG8AbgBnACAAQwByAHkA
        cAB0AG8AZwByAGEAcABoAGkAYwAgAFAAcgBvAHYAaQBkAGUAcgMBADCCAQAGCSqG
        SIb3DQEJDjGB8jCB7zAOBgNVHQ8BAf8EBAMCBaAwDAYDVR0TAQH/BAIwADA0BgNV
        HSUELTArBggrBgEFBQcDAQYIKwYBBQUHAwIGCisGAQQBgjcKAwMGCWCGSAGG+EIE
        ATBPBgNVHSAESDBGMDoGCysGAQQBsjEBAgIHMCswKQYIKwYBBQUHAgEWHWh0dHBz
        Oi8vc2VjdXJlLmNvbW9kby5jb20vQ1BTMAgGBmeBDAECATApBgNVHREEIjAggg8q
        Lndlc3Qtd2luZC5jb22CDXdlc3Qtd2luZC5jb20wHQYDVR0OBBYEFEVLAyO8gDiv
        lsfovKrx9mHPyrsiMIIFMAYJKwYBBAGCNw0BMYIFITCCBR0wggQFoAMCAQICEQDu
        1E1T5Jvtkm5LOfSHabWlMA0GCSqGSIb3DQEBBQUAMHIxCzAJBgNVBAYTAkdCMRsw
        GQYDVQQIExJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAOBgNVBAcTB1NhbGZvcmQxGjAY
        BgNVBAoTEUNPTU9ETyBDQSBMaW1pdGVkMRgwFgYDVQQDEw9Fc3NlbnRpYWxTU0wg
        Q0EwHhcNMTQwNTA3MDAwMDAwWhcNMTUwNjA2MjM1OTU5WjBdMSEwHwYDVQQLExhE
        b21haW4gQ29udHJvbCBWYWxpZGF0ZWQxHjAcBgNVBAsTFUVzc2VudGlhbFNTTCBX
        aWxkY2FyZDEYMBYGA1UEAxQPKi53ZXN0LXdpbmQuY29tMIIBIjANBgkqhkiG9w0B
        AQEFAAOCAQ8AMIIBCgKCAQEAiyKfL66XB51DlUfm6xXqJBcvMU2qorRHxC+WjEpB
        amvg8XoqNfCKzDAvLMbY4BLhbYCTagqtslnP3Gj4AKhXqRKU0n6iSbmS1gcWzCJM
        CHufZ5RDtuTuxhTdJxzP9YqZUfKV5abWQp/TK6V1ryaBJvdqM73q4tRjrQODtkiR
        PfZjxpybnBHFJS8jYAf8jcOjSDZcgN1d9Evc5MrEJCp/90cAkozyF/NMcFtD6Yj8
        UM97z3MzDT2JPDoH3kAr3cCgpUNyQ2+wDNCnL9eWYFkOQi8FZMsZol7KlZ5NgNfO
        a7iZMVGbqDg6rkS//2uGe6tSQJTTs+mAZB+na+M8XT2UqwIDAQABo4IBwTCCAb0w
        HwYDVR0jBBgwFoAU2svqrVsIXcz//CZUzknlVcY49PgwHQYDVR0OBBYEFH0AmLiL
        RSEL9+sQD/n5O4N7/nnqMA4GA1UdDwEB/wQEAwIFoDAMBgNVHRMBAf8EAjAAMDQG
        A1UdJQQtMCsGCCsGAQUFBwMBBggrBgEFBQcDAgYKKwYBBAGCNwoDAwYJYIZIAYb4
        QgQBME8GA1UdIARIMEYwOgYLKwYBBAGyMQECAgcwKzApBggrBgEFBQcCARYdaHR0
        cHM6Ly9zZWN1cmUuY29tb2RvLmNvbS9DUFMwCAYGZ4EMAQIBMDsGA1UdHwQ0MDIw
        MKAuoCyGKmh0dHA6Ly9jcmwuY29tb2RvY2EuY29tL0Vzc2VudGlhbFNTTENBLmNy
        bDBuBggrBgEFBQcBAQRiMGAwOAYIKwYBBQUHMAKGLGh0dHA6Ly9jcnQuY29tb2Rv
        Y2EuY29tL0Vzc2VudGlhbFNTTENBXzIuY3J0MCQGCCsGAQUFBzABhhhodHRwOi8v
        b2NzcC5jb21vZG9jYS5jb20wKQYDVR0RBCIwIIIPKi53ZXN0LXdpbmQuY29tgg13
        ZXN0LXdpbmQuY29tMA0GCSqGSIb3DQEBBQUAA4IBAQBqBfd6QHrxXsfgfKARG6np
        8yszIPhHGPPmaE7xq7RpcZjY9H+8l6fe4jQbGFjbA5uHBklYI4m2snhPaW2p8iF8
        YOkm2V2hEsSTnkf5/flw9mZtlCFEDFXSsBxBdNz8RYTthPMu1h09C0XuDB30sztg
        nR692FrxJN5/bXsk+MC9nEweTFW/t2HW+XZ8bhM7vsAS+pZionR4MyuQ0mYIt/lD
        csZVZ91KxTsIm8rNMkkYGFoSIXjQ0+0tCbxMF0i2qnpmNRpA6PU8l7lxxvPkplsk
        9KB8QIPFrR5p/i/SUAd9vECWh5+/ktlcrfFP2PK7XcEwWizsvMrNqLyvQVNXSUPT
        MA0GCSqGSIb3DQEBBQUAA4GBABt/NitwMzc5t22p5+zy4HXbVYzLEjesLH8/v0ot
        uLQ3kkG8tIWNh5RplxIxtilXt09H4Oxpo3fKUN0yw+E6WsBfg0sAF8pHNBdOJi48
        azrQbt4HvKktQkGpgYFjLsormjF44SRtToLHlYycDHBNvjaBClUwMCq8HnwY6vDq
        xikRoIIFITCCBR0wggQFoAMCAQICEQDu1E1T5Jvtkm5LOfSHabWlMA0GCSqGSIb3
        DQEBBQUAMHIxCzAJBgNVBAYTAkdCMRswGQYDVQQIExJHcmVhdGVyIE1hbmNoZXN0
        ZXIxEDAOBgNVBAcTB1NhbGZvcmQxGjAYBgNVBAoTEUNPTU9ETyBDQSBMaW1pdGVk
        MRgwFgYDVQQDEw9Fc3NlbnRpYWxTU0wgQ0EwHhcNMTQwNTA3MDAwMDAwWhcNMTUw
        NjA2MjM1OTU5WjBdMSEwHwYDVQQLExhEb21haW4gQ29udHJvbCBWYWxpZGF0ZWQx
        HjAcBgNVBAsTFUVzc2VudGlhbFNTTCBXaWxkY2FyZDEYMBYGA1UEAxQPKi53ZXN0
        LXdpbmQuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAiyKfL66X
        B51DlUfm6xXqJBcvMU2qorRHxC+WjEpBamvg8XoqNfCKzDAvLMbY4BLhbYCTagqt
        slnP3Gj4AKhXqRKU0n6iSbmS1gcWzCJMCHufZ5RDtuTuxhTdJxzP9YqZUfKV5abW
        Qp/TK6V1ryaBJvdqM73q4tRjrQODtkiRPfZjxpybnBHFJS8jYAf8jcOjSDZcgN1d
        9Evc5MrEJCp/90cAkozyF/NMcFtD6Yj8UM97z3MzDT2JPDoH3kAr3cCgpUNyQ2+w
        DNCnL9eWYFkOQi8FZMsZol7KlZ5NgNfOa7iZMVGbqDg6rkS//2uGe6tSQJTTs+mA
        ZB+na+M8XT2UqwIDAQABo4IBwTCCAb0wHwYDVR0jBBgwFoAU2svqrVsIXcz//CZU
        zknlVcY49PgwHQYDVR0OBBYEFH0AmLiLRSEL9+sQD/n5O4N7/nnqMA4GA1UdDwEB
        /wQEAwIFoDAMBgNVHRMBAf8EAjAAMDQGA1UdJQQtMCsGCCsGAQUFBwMBBggrBgEF
        BQcDAgYKKwYBBAGCNwoDAwYJYIZIAYb4QgQBME8GA1UdIARIMEYwOgYLKwYBBAGy
        MQECAgcwKzApBggrBgEFBQcCARYdaHR0cHM6Ly9zZWN1cmUuY29tb2RvLmNvbS9D
        UFMwCAYGZ4EMAQIBMDsGA1UdHwQ0MDIwMKAuoCyGKmh0dHA6Ly9jcmwuY29tb2Rv
        Y2EuY29tL0Vzc2VudGlhbFNTTENBLmNybDBuBggrBgEFBQcBAQRiMGAwOAYIKwYB
        BQUHMAKGLGh0dHA6Ly9jcnQuY29tb2RvY2EuY29tL0Vzc2VudGlhbFNTTENBXzIu
        Y3J0MCQGCCsGAQUFBzABhhhodHRwOi8vb2NzcC5jb21vZG9jYS5jb20wKQYDVR0R
        BCIwIIIPKi53ZXN0LXdpbmQuY29tgg13ZXN0LXdpbmQuY29tMA0GCSqGSIb3DQEB
        BQUAA4IBAQBqBfd6QHrxXsfgfKARG6np8yszIPhHGPPmaE7xq7RpcZjY9H+8l6fe
        4jQbGFjbA5uHBklYI4m2snhPaW2p8iF8YOkm2V2hEsSTnkf5/flw9mZtlCFEDFXS
        sBxBdNz8RYTthPMu1h09C0XuDB30sztgnR692FrxJN5/bXsk+MC9nEweTFW/t2HW
        +XZ8bhM7vsAS+pZionR4MyuQ0mYIt/lDcsZVZ91KxTsIm8rNMkkYGFoSIXjQ0+0t
        CbxMF0i2qnpmNRpA6PU8l7lxxvPkplsk9KB8QIPFrR5p/i/SUAd9vECWh5+/ktlc
        rfFP2PK7XcEwWizsvMrNqLyvQVNXSUPTMYIBrzCCAasCAQEwgYcwcjELMAkGA1UE
        BhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2Fs
        Zm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxGDAWBgNVBAMTD0Vzc2Vu
        dGlhbFNTTCBDQQIRAO7UTVPkm+2Sbks59IdptaUwCQYFKw4DAhoFADANBgkqhkiG
        9w0BAQEFAASCAQB8PNQ6bYnQpWfkHyxnDuvNKw3wrqF2p7JMZm+SuN2qp3R2LpCR
        mW2LrGtQIm9Iob/QOYH+8houYNVdvsATGPXX2T8gzn+anof4tOG0vCTK1Bp9bwf9
        MkRP+1c8RW/vkYmUW4X5/C+y3CZpMH5dDTaXBIpXFzjX/fxNpH/rvLzGiaYYL3Cn
        OLO+aOADr9qq5yoqwpiYCSfYNNYKTUNNGfYIidQwYtbHXEYhSukB2oR89xD2sZZ4
        bOqFjUPgTa5SsERLDDeg3omMKiIXVYGxlqBEq51Kge6IQt4qQV9P9VgInW7cWmKe
        dTqNHI9ri3ttewdEnT++TKGKKfTjX9SR8Waj
        -----END NEW CERTIFICATE REQUEST-----

    Clearly there's something very different between this and my original request! And it didn't work. IIS creates a custom CSR that is encoded in a format that no certificate authority I've ever used uses. If you want the gory details of what's in there, look at this ServerFault question (thanks to Mika in the comments). In the end it doesn't matter though – no certificate authority knows what to do with this CSR. So create a new CSR and skip the renewal. Always!

    Use the same Server

    Keep in mind that on IIS, at least, you should always create your certificate on a single server, and then, when you receive the final certificate from your provider, import it on that server. IIS tracks the CSR it created and requires it in order to import the final certificate properly. So if for some reason you try to install the certificate on another server, it won't work.

    I've also run into trouble trying to install the same certificate twice – this time around I didn't give my certificate the proper friendly name, and IIS failed to allow me to assign the certificate to any of my Web sites. So I removed the certificate and tried to import again, only to find it failed the second time around. There are other ways to fix this, but in my case I had to have the certificate re-issued to work – not what you want to do.
    Regardless of what you do, though, when you import, make sure you do it right the first time by crossing all your t's and dotting your i's – it'll save you a lot of grief!

    You don't actually have to use the server that the certificate gets installed on to generate the CSR and first install it, but it is generally a good idea to do so, just so you can get the certificate installed into the right place right away. If you have access to the server where you need to install the certificate, you might as well use it. But you can use another machine to generate the CSR and install the certificate, then export the certificate and move it to another machine as needed. So you can use your dev machine to create a certificate, then export it and install it on a live server. More on installation and backup/export later.

    Installing the Certificate

    Once you've submitted a CSR, your provider will process the request and eventually issue you a new final certificate, which is another text file with the final key to import into your certificate store. IIS imports this by combining the content of that file with the original CSR it generated. If all goes well, your new certificate shows up in the certificate list and you're ready to assign the certificate to your sites.

    Make sure you use a friendly name that matches the domain name of your site. So use *.mysite.com or www.mysite.com or store.mysite.com to ensure IIS recognizes the certificate. I made the mistake of not choosing my friendly name this way and found that IIS was unable to link my sites to my wildcard certificate. It needed to have the *. as part of the friendly name; otherwise the Hostname input field was blanked out.

    Changing the Friendly Name

    If you accidentally used an invalid friendly name, you can change it later in the Windows certificate store:

    - Bring up a Run box and type MMC
    - File | Add/Remove Snap-in
    - Add Certificates | Computer Account | Local Computer
    - Drill into Certificates | Personal | Certificates
    - Find your certificate | Right Click | Properties
    - Edit the Friendly Name | Click OK

    Backing up your Certificate

    The first thing you should do once your certificate is successfully installed is to back it up! In case your server crashes or you otherwise lose your configuration, this will ensure you have an easy way to recover and reinstall your certificate, either on the same server or a different one. If you're running a server farm or using a wildcard certificate, you also need to get the certificate onto other machines, and a PFX file import is the easiest way to do this.

    To back up your certificate, select your certificate and choose Export from the context or sidebar menu. The Export Certificate option allows you to export a password protected binary file that you can import in a single step. You can copy the resulting binary PFX file to back up or copy to other machines to install on. Importing the certificate on another machine is as easy as pointing at the PFX file and specifying the password. IIS handles the rest.

    Assigning a new certificate to your Site

    Once you have the new certificate installed, all that's left to do is assign it to your site. In IIS, select your Web site and bring up the Site Bindings from the right sidebar. Add a new binding for https, bind it to port 443, specify your hostname and pick the certificate from the pick list. If you're using a root site, make sure to set up your certificate for www.yoursite.com and also for yoursite.com so that both work properly with SSL.
Note that you need to explicitly configure each hostname for a certificate if you plan to use SSL. Luckily, if you update your SSL certificate in the following year, IIS prompts you and asks whether you’d like to update all other sites that are using the existing cert to the newer cert. And you’re done.

So what’s the Pain?

So, all of this is old hat and it doesn’t look all that bad, right? So what’s the pain here? Well, if you follow the instructions and do everything right, then the process is about as straightforward as you would expect it to be. You create a cert request, you import it and assign it to your sites. Those are the basic steps, and to be perfectly fair it works well – if nothing goes wrong. However, renewing tends to be the problem. The first unintuitive issue is that you simply shouldn’t renew, but create a new CSR and generate your new certificate from that. Over the years I’ve fallen prey to the belief that Microsoft would eventually fix this so that the renewal creates the same type of CSR as the old cert, but apparently that will just never happen. Booo!

The other problem I ran into is that I accidentally misnamed my imported certificate, which in turn set off a chain of events that caused my originally issued certificate to become uninstallable. When I received my completed certificate I installed it and it installed just fine, but the friendly name was wrong. As a result, IIS refused to assign the certificate to any of my host-headered sites. That’s strike number one. Why the heck should the friendly name have any effect on the ability to attach the certificate???

Next I uninstalled the certificate because I figured that would be the easiest way to make sure I got it right. But then I found that I could not reinstall my certificate. I kept getting these stop errors: "ASN1 bad tag value met" that would prevent the installation from completing. After searching around for this error and reading countless long messages on forums, I found that this error supposedly does not actually mean the install failed – the list just wouldn’t refresh. Comodo has this to say:

Note: There is a known issue in IIS 7 giving the following error: "Cannot find the certificate request associated with this certificate file. A certificate request must be completed on the computer where it was created." You may also receive a message stating "ASN1 bad tag value met". If this is the same server that you generated the CSR on then, in most cases, the certificate is actually installed. Simply cancel the dialog and press "F5" to refresh the list of server certificates. If the new certificate is now in the list, you can continue with the next step. If it is not in the list, you will need to reissue your certificate using a new CSR (see our CSR creation instructions for IIS 7). After creating a new CSR, login to your Comodo account and click the 'replace' button for your certificate.

Not sure if this issue is fixed in IIS 8, but that’s an insane bug to have crop up. As it turns out, in my case the refresh didn’t work and the certificate didn’t show up in the IIS list after the reinstall. In fact, when looking at the certificate store I could see that my certificate was installed in the right place, but the private key was missing, which is most likely why IIS was not picking it up. It looks like IIS could not match the final cert to the original CSR generated. But again, some sort of message to that effect might be helpful, instead of "ASN1 bad tag value met".
Recovering the Private Key

So it turns out my original problem was that I had received the public key, but when I imported it the private key was missing. There’s a relatively easy way to recover from this. If your certificate doesn’t show up in IIS, check in the certificate store for the local machine (see the steps above on how to bring this up). If you look at the certificate in Certificates | Personal | Certificates, make sure you see the key icon as shown in the image below: if the key icon is missing, the certificate most likely lacks its private key. To fix a certificate you can do the following:

Double click the certificate
Go to the Details Tab
Copy down the Serial number

You can copy the serial number from the area blurred out above. The serial number will be in a format like 00 a7 9b a1 a4 9d 91 63 57 d6 9f 26 b8 ee 79 b5 cb and you’ll need to strip out the spaces in order to use it in the next step. Next open up an administrative command prompt and issue the following command:

certutil -repairstore my 00a79ba1a49d916357d69f26b8ee79b5cb

You should get a confirmation message that the repair worked. If you go back to the certificate store you should now see the key icon show up on the certificate. Your certificate is fixed. Now go back into IIS Manager and refresh the list of certificates, and if all goes well you should see all the certificates that showed in the cert store:

Remember – back up the key first, then map it to your site…

Summary

I deal with a lot of customers who run their own IIS servers, and I can’t tell you how often I hear about botched SSL installations. When I posted some of my issues on Twitter yesterday I got a hell storm of “me too” responses. I’m clearly not the only one who’s run into this, especially with renewals. I feel pretty comfortable with IIS configuration and I do a lot of it for support purposes, but the SSL configuration is one that never seems to go seamlessly. This blog post is meant as a reminder to myself to read next time I do a renewal, so I can dot my i's and cross my t’s before I get caught in the mess I’m dealing with today. Hopefully some of you find this useful as well.

© Rick Strahl, West Wind Technologies, 2005-2014
Posted in IIS7  Security

    Read the article

  • apache2 doesn't start with location

    - by Geod24
    I have a small domain, which I use only for personal purposes. I'm the main user, and have at most 3-4 users at the same time. I use apache2 with passenger to serve redmine. So I start with an empty apache2: root@xxxxx:/home/# service apache2 start [ ok ] Starting web server: apache2. root@xxxxx:/home/# a2dissite Your choices are: Which site(s) do you want to disable (wildcards ok)? Then enable my site, and restart (not reload) apache2: root@xxxxx:/home/# a2ensite 200-redmine Enabling site 200-redmine. To activate the new configuration, you need to run: service apache2 reload root@xxxxx:/home/# service apache2 restart [FAIL] Restarting web server: apache2 failed! [warn] The apache2 instance did not start within 20 seconds. Please read the log files to discover problems ... (warning). root@xxxxx:/home/# service apache2 restart [FAIL] Restarting web server: apache2 failed! [warn] There are processes named 'apache2' running which do not match your pid file which are left untouched in the name of safety, Please review the situation by hand. ... (warning). root@xxxxx:/home/# pidof apache2 20948 Here's my 200-redmine.conf: PerlLoadModule Apache::Redmine <VirtualHost *:80> ServerName redmine.xxxxx.xxx DocumentRoot /var/www/redmine/public/ ErrorLog ${APACHE_LOG_DIR}/redmine.error.log CustomLog ${APACHE_LOG_DIR}/redmine.access.log common MaxRequestLen 20971520 <Directory "/var/www/redmine/public/"> Options Indexes ExecCGI FollowSymLinks Order allow,deny Allow from all AllowOverride all </Directory> SetEnv GIT_PROJECT_ROOT /opt/git/ SetEnv GIT_HTTP_EXPORT_ALL ScriptAlias /git/ /usr/lib/git-core/git-http-backend/ <Location /git> PerlAuthenHandler Apache::Authn::Redmine::authen_handler PerlAccessHandler Apache::Authn::Redmine::access_handler AuthType Basic Require valid-user AuthName "Redmine Git Repository" RedmineDSN "DBI:mysql:database=redmine;host=localhost:3306" RedmineDbUser "redmine" RedmineDbPass "password" RedmineCacheCredsMax 50 </Location> </VirtualHost> Now if I comment out the ScriptAlias / stuff, it works ! In addition, starting the server with 200-redmine disabled, then enabling it works. But apache2 will die randomly. Plus the location doesn't work. 
The logs show nothing: root@xxxxx:/home/# ll /var/log/apache2/ total 8 drwxr-xr-x 2 root root 4096 Oct 30 07:52 coredump -rw-r--r-- 1 root root 0 Nov 4 02:39 default.access.log -rw-r--r-- 1 root root 2356 Nov 4 02:39 default.error.log -rw-r--r-- 1 root root 0 Nov 4 02:39 other_vhosts_access.log -rw-r--r-- 1 root root 0 Nov 4 02:39 redmine.access.log -rw-r--r-- 1 root root 0 Nov 4 02:39 redmine.error.log root@xxxxx:/home/# ll /var/log/apache2/coredump/ total 0 root@xxxxx:/home/# cat /var/log/apache2/default.error.log [ 2013-11-04 02:39:36.0130 21471/7fcf090f4740 agents/Watchdog/Main.cpp:452 ]: Options: { 'analytics_log_user' => 'nobody', 'default_group' => 'nogroup', 'default_python' => 'python', 'default_ruby' => '/usr/bin/ruby', 'default_user' => 'nobody', 'log_level' => '0', 'max_instances_per_app' => '0', 'max_pool_size' => '6', 'passenger_root' => '/usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini', 'pool_idle_time' => '300', 'temp_dir' => '/tmp', 'union_station_gateway_address' => 'gateway.unionstationapp.com', 'union_station_gateway_port' => '443', 'user_switching' => 'true', 'web_server_pid' => '21470', 'web_server_type' => 'apache', 'web_server_worker_gid' => '33', 'web_server_worker_uid' => '33' } [ 2013-11-04 02:39:36.0255 21474/7f9a99fda740 agents/HelperAgent/Main.cpp:597 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.21470/generation-0/request [ 2013-11-04 02:39:36.0507 21479/7f8316b0f740 agents/LoggingAgent/Main.cpp:330 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.21470/generation-0/logging [ 2013-11-04 02:39:36.0511 21471/7fcf090f4740 agents/Watchdog/Main.cpp:635 ]: All Phusion Passenger agents started! [ 2013-11-04 02:39:36.3158 21495/7fba6f686740 agents/Watchdog/Main.cpp:452 ]: Options: { 'analytics_log_user' => 'nobody', 'default_group' => 'nogroup', 'default_python' => 'python', 'default_ruby' => '/usr/bin/ruby', 'default_user' => 'nobody', 'log_level' => '0', 'max_instances_per_app' => '0', 'max_pool_size' => '6', 'passenger_root' => '/usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini', 'pool_idle_time' => '300', 'temp_dir' => '/tmp', 'union_station_gateway_address' => 'gateway.unionstationapp.com', 'union_station_gateway_port' => '443', 'user_switching' => 'true', 'web_server_pid' => '21491', 'web_server_type' => 'apache', 'web_server_worker_gid' => '33', 'web_server_worker_uid' => '33' } [ 2013-11-04 02:39:36.3304 21498/7f0106d9b740 agents/HelperAgent/Main.cpp:597 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.21491/generation-0/request [ 2013-11-04 02:39:36.3522 21503/7f92ad392740 agents/LoggingAgent/Main.cpp:330 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.21491/generation-0/logging [ 2013-11-04 02:39:36.3525 21495/7fba6f686740 agents/Watchdog/Main.cpp:635 ]: All Phusion Passenger agents started! 
And at last: root@xxxxx:/home/# apache2ctl -t -D DUMP_VHOSTS VirtualHost configuration: *:80 is a NameVirtualHost default server redmine.xxxx.xxx (/etc/apache2/sites-enabled/200-redmine.conf:5) port 80 namevhost redmine.xxxx.xxx (/etc/apache2/sites-enabled/200-redmine.conf:5) port 80 namevhost redmine.xxxxx.xxx (/etc/apache2/sites-enabled/200-redmine.conf:5) root@xxxxx:/home/# uname -a Linux xxxx.xxx 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux root@xxxxx:/home/# dpkg --list | grep apache2 ii apache2 2.4.6-3 amd64 Apache HTTP Server ii apache2-bin 2.4.6-3 amd64 Apache HTTP Server (binary files and modules) ii apache2-data 2.4.6-3 all Apache HTTP Server (common files) ii apache2-utils 2.4.6-3 amd64 Apache HTTP Server (utility programs for web servers) ii libapache2-mod-fcgid 1:2.3.9-1 amd64 FastCGI interface module for Apache 2 ii libapache2-mod-passenger 4.0.10-1 amd64 Rails and Rack support for Apache2 ii libapache2-mod-perl2 2.0.8+httpd24-r1449661-6+b1 amd64 Integration of perl with the Apache2 web server ii libapache2-mod-perl2-dev 2.0.8+httpd24-r1449661-6 all Integration of perl with the Apache2 web server - development files ii libapache2-mod-perl2-doc 2.0.8+httpd24-r1449661-6 all Integration of perl with the Apache2 web server - documentation ii libapache2-mod-proxy-html 1:2.4.6-3 amd64 Transitional package for apache2-bin ii libapache2-mod-svn 1.7.13-2 amd64 Apache Subversion server modules for Apache httpd ii libapache2-reload-perl 0.12-2 all module for reloading Perl modules when changed on disk ii libapache2-svn 1.7.13-2 all Apache Subversion server modules for Apache httpd (dummy package) root@xxxxx:/home/# a2dismod Your choices are: access_compat alias auth_basic authn_core authn_file authz_core authz_host authz_svn authz_user autoindex dav dav_svn deflate dir env fcgid filter mime mpm_event negotiation passenger perl proxy proxy_http rewrite setenvif status Which module(s) do you want to disable (wildcards ok)?

    Read the article

  • An easy way to create Side by Side registrationless COM Manifests with Visual Studio

    - by Rick Strahl
Here's something I didn't find out until today: You can use Visual Studio to easily create registrationless COM manifest files for you with just a couple of small steps. Registrationless COM lets you use COM components without them being registered in the registry. This means it's possible to deploy COM components along with another application using plain xcopy semantics. To be sure, it's rarely quite that easy – you need to watch out for dependencies – but if you know you have COM components that are lightweight and have no or known dependencies, it's easy to get everything into a single folder and off you go. Registrationless COM works via manifest files which carry the same name as the executable plus a .manifest extension (i.e. yourapp.exe.manifest). I'm going to use a Visual FoxPro COM object as an example and create a simple Windows Forms app that calls the component – without that component being registered. Let's take a walk down memory lane…

Create a COM Component

I start by creating a FoxPro COM component because that's what I know and am working with here in my legacy environment. You can use a VB classic or C++ ATL object if that's more to your liking. Here's a real simple Fox one:

DEFINE CLASS SimpleServer as Session OLEPUBLIC

FUNCTION HelloWorld(lcName)
RETURN "Hello " + lcName

ENDDEFINE

Compile it into a DLL COM component with:

BUILD MTDLL simpleserver FROM simpleserver RECOMPILE

And to make sure it works, test it quickly from Visual FoxPro:

server = CREATEOBJECT("simpleServer.simpleserver")
MESSAGEBOX( server.HelloWorld("Rick") )

Using Visual Studio to create a Manifest File for a COM Component

Next open Visual Studio and create a new executable project – a Console App, WinForms or WPF application will all do.

Go to the References Node
Select Add Reference
Use the Browse tab and find your compiled DLL to import

Next you'll see your assembly in the project.

Right click on the reference and select Properties
Click on the Isolated DropDown and select True
Compile

And that's all there's to it. Visual Studio will create an App.exe.manifest file right alongside your application's EXE. The manifest file created looks like this:
<?xml version="1.0" encoding="utf-8"?>
<assembly xsi:schemaLocation="urn:schemas-microsoft-com:asm.v1 assembly.adaptive.xsd"
          manifestVersion="1.0"
          xmlns:asmv1="urn:schemas-microsoft-com:asm.v1"
          xmlns:asmv2="urn:schemas-microsoft-com:asm.v2"
          xmlns:asmv3="urn:schemas-microsoft-com:asm.v3"
          xmlns:dsig="http://www.w3.org/2000/09/xmldsig#"
          xmlns:co.v1="urn:schemas-microsoft-com:clickonce.v1"
          xmlns:co.v2="urn:schemas-microsoft-com:clickonce.v2"
          xmlns="urn:schemas-microsoft-com:asm.v1"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <assemblyIdentity name="App.exe" version="1.0.0.0" processorArchitecture="x86" type="win32" />
  <file name="simpleserver.DLL" asmv2:size="27293">
    <hash xmlns="urn:schemas-microsoft-com:asm.v2">
      <dsig:Transforms>
        <dsig:Transform Algorithm="urn:schemas-microsoft-com:HashTransforms.Identity" />
      </dsig:Transforms>
      <dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" />
      <dsig:DigestValue>puq+ua20bbidGOWhPOxfquztBCU=</dsig:DigestValue>
    </hash>
    <typelib tlbid="{f10346e2-c9d9-47f7-81d1-74059cc15c3c}" version="1.0" helpdir="" resourceid="0" flags="HASDISKIMAGE" />
    <comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}" threadingModel="Apartment" tlbid="{f10346e2-c9d9-47f7-81d1-74059cc15c3c}" progid="simpleserver.SimpleServer" description="simpleserver.SimpleServer" />
  </file>
</assembly>

Now let's finish our super complex console app to test with:

using System;
using System.Collections.Generic;
using System.Text;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            Type type = Type.GetTypeFromProgID("simpleserver.simpleserver", true);
            dynamic server = Activator.CreateInstance(type);
            Console.WriteLine(server.HelloWorld("rick"));
            Console.ReadLine();
        }
    }
}

Now run the Console Application… As expected, that should work. And why not? The COM component is still registered, right? :-) Nothing tricky about that. Let's unregister the COM component and then re-run and see what happens.

Go to the Command Prompt
Change to the folder where the DLL is installed
Unregister with: RegSvr32 -u simpleserver.dll

To be sure that the COM component no longer works, check it out with the same test you used earlier (i.e. o = CREATEOBJECT("SimpleServer.SimpleServer") in your development environment or VBScript etc.). Make sure you run the EXE and don't re-compile the application, or else Visual Studio will complain that it can't find the COM component in the registry while compiling. In fact, now that we have our .manifest file you can remove the COM object from the project. Run the EXE from Windows Explorer or a command prompt to avoid the recompile.

Watch out for embedded Manifest Files

Now recompile your .NET project and run it… and it will most likely fail! The problem is that .NET applications by default embed a manifest file into the compiled EXE, which results in the externally created manifest file being completely ignored. Only one manifest can be applied at a time, and the compiled manifest takes precedence. Uh, thanks Visual Studio – not very helpful… Note that if you use another development tool like Visual FoxPro to create your EXE this won't be an issue, as long as the tool doesn't automatically add a manifest file. Creating a Visual FoxPro EXE, for example, will work immediately with the generated manifest file as is.
If you are using .NET and Visual Studio you have a couple of options for getting around this:

Remove the embedded manifest file
Copy the contents of the generated manifest file into a project manifest file and compile that in

To remove an embedded manifest in a Visual Studio project:

Open the Project Properties (Alt-Enter on project node)
Go down to Resources | Manifest and select Create Application without a Manifest

You can now use the external manifest file and it will actually be respected when the app runs. The other option is to let Visual Studio create the manifest file on disk and then explicitly add the manifest file into the project. Notice on the dialog above I did this for app.exe.manifest, and the manifest actually shows up in the list. If I select this file it will be compiled into the EXE and used in lieu of any external files, and that works as well. Remove the simpleserver.dll reference so you can compile your code, and run the application. Now it should work without COM registration of the component.

Personally I prefer external manifests because they can be modified after the fact – compiled manifests are evil in my mind because they are immutable: once they are there they can't be overridden or changed. So I prefer an external manifest. However, if you are absolutely sure nothing needs to change and you don't want anybody messing with your manifest, you can also embed it. Either option is there.

Watch for Manifest Caching

While trying to get this to work I ran into some problems at first. Specifically, when it wasn't working (due to the embedded manifest) I played with various different manifest layouts in different files etc. There are a number of different ways to actually represent manifest files, including offloading to a separate folder (more on that later). A few times I made deliberate errors in the manifest file and I found that regardless of what I did, once the app failed or worked, no amount of changing the manifest file would make it behave differently. It appears that Windows caches the manifest data for a given EXE or DLL. It takes a restart or a recompile of either the EXE or the DLL to clear the caching. Recompile your servers in order to see manifest changes, unless there's an outright failure from an invalid manifest file. If the app starts, the manifest is being read and cached immediately. This can be very confusing, especially if you don't know that it's happening. I found myself always recompiling the EXE after each run and before making any changes to the manifest file.

Don't forget about Runtimes of COM Objects

In the example above I used a Visual FoxPro COM component. Visual FoxPro is a runtime-based environment, so if I'm going to distribute an application that uses a FoxPro COM object, the runtimes need to be distributed as well. The same is true of classic Visual Basic applications. Assuming that you don't know whether the runtimes are installed on the target machines, make sure to install all the additional files in the EXE's directory alongside the COM DLL. In the case of Visual FoxPro the target folder should contain:

The EXE: App.exe
The Manifest file (unless it's compiled in): App.exe.manifest
The COM object DLL: simpleserver.dll
Visual FoxPro Runtimes: VFP9t.dll (or VFP9r.dll for non-multithreaded dlls), vfp9rENU.dll, msvcr71.dll

All these files should be in the same folder; a deployment sketch follows below.
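As a rough illustration of how simple the xcopy deployment can be, a script along these lines is all that's needed. The target share and folder names are hypothetical, and the runtime list assumes the multithreaded VFP9 runtime discussed above:

rem Hypothetical xcopy deployment: EXE, manifest, COM DLL and runtimes all land in one folder
xcopy App.exe \\server\apps\MyApp\ /Y
xcopy App.exe.manifest \\server\apps\MyApp\ /Y
xcopy simpleserver.dll \\server\apps\MyApp\ /Y
xcopy VFP9t.dll \\server\apps\MyApp\ /Y
xcopy vfp9rENU.dll \\server\apps\MyApp\ /Y
xcopy msvcr71.dll \\server\apps\MyApp\ /Y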
Debugging Manifest load Errors

If you for some reason get your manifest loading wrong, there are a couple of useful tools available – SxSTrace and SxSParse. These two can be a huge help in debugging manifest loading errors. Put the following into a batch file (SxS_Trace.bat for example):

sxstrace Trace -logfile:sxs.bin
sxstrace Parse -logfile:sxs.bin -outfile:sxs.txt

Then start the batch file before running your EXE. Make sure there's no caching happening as described in the previous section. For example, if I go into the manifest file and explicitly break the CLSID and/or ProgID, I get a detailed report on where the EXE is looking for the manifest and what it's reading. Eventually the trace gives me an error like this:

INFO: Parsing Manifest File C:\wwapps\Conf\SideBySide\Code\app.EXE.
INFO: Manifest Definition Identity is App.exe,processorArchitecture="x86",type="win32",version="1.0.0.0".
ERROR: Line 13: The value {AAaf2c2811-0657-4264-a1f5-06d033a969ff} of attribute clsid in element comClass is invalid.
ERROR: Activation Context generation failed.
End Activation Context Generation.

pinpointing nicely where the error lies. Pay special attention to the various attributes – they have to match exactly in the different sections of the manifest file(s).

Multiple COM Objects

The manifest file that Visual Studio creates is actually quite a bit more complex than is required for basic registrationless COM object invocation. The manifest file can actually be simplified a lot by stripping off various namespaces and removing the type library references altogether. Here's an example of a simplified manifest file that includes references to 2 COM servers:

<?xml version="1.0" encoding="utf-8"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity name="App.exe" version="1.0.0.0" processorArchitecture="x86" type="win32" />
  <file name="simpleserver.DLL">
    <comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}" threadingModel="Apartment"
              progid="simpleserver.SimpleServer" description="simpleserver.SimpleServer" />
  </file>
  <file name="sidebysidedeploy.dll">
    <comClass clsid="{EF82B819-7963-4C36-9443-3978CD94F57C}" progid="sidebysidedeploy.SidebysidedeployServer"
              description="SidebySideDeploy Server" threadingModel="apartment" />
  </file>
</assembly>

Simple enough, right?

Routing to separate Manifest Files and Folders

In the examples above all files ended up in the application's root folder – all the DLLs, support files and runtimes. Sometimes that's not so desirable, and you can actually create separate manifest files. The easiest way to do this is to create a manifest file that 'routes' to another manifest file in a separate folder. Basically you create a new 'assembly identity' via a named id. You can then create a folder and another manifest with the id plus .manifest that points at the actual file. In this example I create:

App.exe.manifest
A folder called App.deploy
A manifest file in App.deploy
All DLLs and runtimes in App.deploy

Let's start with that master manifest file. This file only holds a reference to another manifest file:

App.exe.manifest
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity name="App.exe" version="1.0.0.0" processorArchitecture="x86" type="win32" />
  <dependency>
    <dependentAssembly>
      <assemblyIdentity name="App.deploy" version="1.0.0.0" type="win32" />
    </dependentAssembly>
  </dependency>
</assembly>

Note this file only contains a dependency on App.deploy, which is another manifest id. I can then create App.deploy.manifest in the current folder or in an App.deploy folder. In this case I'll create App.deploy and copy the DLLs and support runtimes into it. I then create App.deploy.manifest:

App.deploy.manifest

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity name="App.deploy" type="win32" version="1.0.0.0" />
  <file name="simpleserver.DLL">
    <comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}" threadingModel="Apartment"
              progid="simpleserver.SimpleServer" description="simpleserver.SimpleServer" />
  </file>
  <file name="sidebysidedeploy.dll">
    <comClass clsid="{EF82B819-7963-4C36-9443-3978CD94F57C}" threadingModel="Apartment"
              progid="sidebysidedeploy.SidebysidedeployServer" description="SidebySideDeploy Server" />
  </file>
</assembly>

In this manifest file I then host my COM DLLs and any support runtimes. This is quite useful if you have lots of DLLs you are referencing, or if you need to have separate configuration and application files that are associated with the COM object. This way the operation of your main application and the COM objects it interacts with is somewhat separated. You can see the two folders here:

Routing Manifests to different Folders

In theory registrationless COM should be pretty easy and painless – you've seen the configuration manifest files, and it certainly doesn't look very complicated, right? But the devil's in the details. The ActivationContext API (SxS – side by side activation) is very intolerant of small errors in the XML or formatting of the keys, so be really careful when setting up components, especially if you are manually editing these files. If you do run into trouble, SxsTrace/SxsParse are a huge help in tracking down the problems. And remember that if you do have problems you'll need to recompile your EXEs or DLLs for the SxS APIs to refresh themselves properly. All of this gets even more fun if you want to do registrationless COM inside of IIS :-) But I'll leave that for another blog post…

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in COM  .NET  FoxPro

    Read the article

  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy, because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams. I will not be going into depth on Scrum, but you can find out more about Scrum by reading the Scrum Guide, and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW’s Rules to Better Scrum using TFS, which have been developed during our own Scrum implementations.

Acknowledgements

Bill Heys – Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010
Willy-Peter Schaub – Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance, including the Branching Guidance for TFS 2010
Chris Birmele – Chris wrote some of the early TFS Branching and Merging Guidance.
Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect – Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs its own project, backlog, budget and Team.

Scenario: A product is developed. RTM 1.0 is released and gets great sales. Extra features are demanded, but the new version will be double the price to recover costs. The work is approved by the guys with the budget, and a few sprints later RTM 2.0 is released. Sales are very low due to the pricing strategy, and there are lots of clients on RTM 1.0 calling out for patches.

As I keep getting Reverse Integration and Forward Integration mixed up and Bill keeps slapping my wrists, I thought I should have a reminder:

You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms: merge (forward integration) from parent to child (same direction as the branch), and merge (reverse integration) from child to parent (the reverse direction of the branch). - one of my many slaps on the wrist from Bill Heys.

As I mentioned previously, we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the “Main” or “Trunk” line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100%, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds, don’t you? Sadly, this is a very common scenario and I have had people argue that branching merely adds complexity.

Then again I have seen the other side of the universe ... branching structures from he... We should somehow convince everyone that there is a happy medium between no-branching and too-much-branching. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch by doing a cost-benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc.
- Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

The second biggest mistake developers make is branching anything other than the WHOLE “Main” line. If you branch parts of your code and not others, it gets out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts, deployment scripts and dependencies inside the “Main” folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well, although I would not recommend that unless you have a massive SQL cluster to house your source code.

We tried the “add environment” back in South-Africa and while it was “phenomenal”, especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

You can read SSW’s Rules to better Source Control for some additional information on what Source Control to use and how to use it. There are also a number of branching Anti-Patterns that should be avoided at all costs. You know you are on the wrong track if you experience one or more of the following symptoms in your development environment:

Merge Paranoia—avoiding merging at all cost, usually because of a fear of the consequences.
Merge Mania—spending too much time merging software assets instead of developing them.
Big Bang Merge—deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously.
Never-Ending Merge—continuous merging activity because there is always more to merge.
Wrong-Way Merge—merging a software asset version with an earlier version.
Branch Mania—creating many branches for no apparent reason.
Cascading Branches—branching but never merging back to the main line.
Mysterious Branches—branching for no apparent reason.
Temporary Branches—branching for changing reasons, so the branch becomes a permanent temporary workspace.
Volatile Branches—branching with unstable software assets shared by other branches or merged into another branch. Note: Branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state.
Development Freeze—stopping all development activities while branching, merging, and building new base lines.
Berlin Wall—using branches to divide the development team members, instead of dividing the work they are performing.

- Branching and Merging Primer by Chris Birmele - Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia

In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium. - Willy-Peter Schaub on Merge Paranoia

Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual.
Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

Figure: Main line – this is where your stable code lives, where any build has known entities, always passes, and has a happy test that passes as well.

Many development projects consist of a single “Main” line of source and artifacts. This is good; at least there is source control. There are however a couple of issues that need to be considered. What happens if:

you and your team are working on a new set of features and the customer wants a change to his current version?
you are working on two features and the customer decides to abandon one of them?
you have two teams working on different feature sets and their changes start interfering with each other?
I just use labels instead of branches?

That’s a lot of “what if’s”, but there is a simple way of preventing this. Branching…

In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don’t see how labels work here. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

Figure: Creating a single feature branch means you can isolate the development work on that branch.

It’s standard practice for large projects with lots of developers to use feature branching, and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers for other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The main branch has its permissions changed so that contributors to the project can only Branch and Merge with “Main”. This will prevent accidental check-ins or check-outs of the “Main” line that would contaminate the code. The developers continue to develop on sprint one until the completion of the sprint.

Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2, as at the start of Sprint 1 you would have artifacts in version control and no need for isolation.

Figure: Once the sprint is complete the Sprint 1 code can then be merged back into the Main line.

There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but this builds good muscle memory into your developers’ work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product.

The process of completing your sprint development (a command-line sketch of these steps follows below):

1. The Team completes their work according to their definition of done.
2. Merge from “Main” into “Sprint1” (Forward Integration).
3. Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches (we will talk about release branches later).
4. Merge from “Sprint1” into “Main” to commit your changes (Reverse Integration).
5. Check in.
6. Delete the Sprint1 branch. Note: The Sprint 1 branch is no longer required, as its useful life has been concluded.
7. Check in.
8. Done.
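For reference, here is a rough sketch of what those branch and merge steps might look like with the tf.exe command-line client. The server paths ($/Product/Main, $/Product/Sprint1) and check-in comments are made-up examples, not part of the official guidance:

rem Create the sprint branch from Main (done once, at the start of the sprint)
tf branch $/Product/Main $/Product/Sprint1 /checkin

rem Forward Integration: merge from parent (Main) into child (Sprint1), then check in
tf merge $/Product/Main $/Product/Sprint1 /recursive
tf checkin /comment:"FI: Main -> Sprint1"

rem Reverse Integration: merge from child (Sprint1) back into parent (Main), then check in
tf merge $/Product/Sprint1 $/Product/Main /recursive
tf checkin /comment:"RI: Sprint1 -> Main"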
But you are not yet done with the Sprint. The goal in Scrum is to have a “potentially shippable product” at the end of every Sprint, and we do not have that yet; we only have finished code.

Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing.

In 99% of all projects I have been involved in or watched, a “shippable product” only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80’s brain washing that we only ship when we reach the agreed quality and business feature bar. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable, unchanging code. This is where you do what at SSW we call a “Test Please”. This is first an internal test of the product to make sure it meets the needs of the customer, and you generally use a resource external to your Team. Then a “Test Please” is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?

Figure: If you find a deviation from the expected result you fix it on the Release branch.

If during your final testing or your “Test Please” you find there are issues or bugs, then you should fix them on the release branch. If you can’t fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product Owner. Make sure you leave plenty of time after your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches. Even once you have stabilized and released, you should not delete the release branch as you would with the Sprint branch. It has a usefulness for servicing that may extend well beyond the limited life you expect of it.

Note: Don’t get forced by the business into adding features to a Release branch; instead, that indicates the unspoken requirement is that they are asking for a product spin-off. In this case you can create a new Team Project and branch from the required Release branch to create a new Main branch for that product. And you create a whole new backlog to work from.

Figure: When the Team decides it is happy with the product you can create an RTM branch.

Once you have fixed all the bugs you can, and added any you can’t to the Product Backlog, and your Team is happy with the result, you can create a Release. This would consist of doing the final Build and packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you “shipped”. This is really an audit trail branch that is optional, but it is good practice. You could use a Label, but Labels are not auditable, and if a dispute was raised by the customer you can produce a verifiable version of the source code for an independent party to check.
Rare, I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch, the RTM branch should never be deleted, or only deleted according to your company’s legal policy, which in the UK is usually 7 years.

Figure: If you have made any changes in the Release you will need to merge back up to Main in order to finalise the changes.

Nothing is really ever done until it is in Main. The same rules apply when merging any fixes in the Release branch back into Main, and you should do a reverse merge before a forward merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer’s investment.

Note: In order to really achieve protection for both you and your client you would add Automated Builds, Automated Tests, Automated Acceptance tests, Acceptance test tracking, Unit Tests, Load tests, Web tests and all the other good engineering practices that help produce reliable software.

Figure: After the Sprint Planning meeting the process begins again.

Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2, you can create your new branch to develop in.

How do we handle bug(s) in production that can’t wait?

Although in Scrum the only work done should be on the backlog, there should be a little buffer added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can’t wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities:

In reality Sprint 2 starts when sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of sprint 1? It would introduce FI’s from main to sprint 2, I guess. Your “Figure: Merging Sprint 2 back into Main.” covers, what I tend to believe to be reality in most cases. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

I agree, and if you have a single Scrum team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, package and release should be included in the Sprint time box. If more are needed on the current production release during the Sprint 2 time box, then resource needs to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost):

1. Backlog: Add the bug to the backlog and fix it in the next Sprint.
2. Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly.
3. Make time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: The Team must agree that it can still meet the Sprint Goal.
4. Cancel Sprint: Cancel the sprint and concentrate all resources on fixing the bug(s). Note: This can be very costly if the current sprint has already had a lot of work completed, as it will be lost.

The choice will depend on the complexity and severity of the bug(s), and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3, as these are uncomplicated but severe bugs.

Figure: Real world issue where a bug needs to be fixed in the current release.
If the bug(s) is urgent enough, then your only option is to fix it in place. You can edit the release branch to find and fix the bug, hopefully creating a test so it can’t happen again. Follow the prior process and conduct an internal and customer “Test Please” before releasing. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?

Figure: After you have fixed the bug you need to ship again.

You then need to again create an RTM branch to hold the version of the code you released in escrow.

Figure: Main is now out of sync with your Release.

We now need to get these new changes back up into the Main branch. Do a reverse and then forward merge again to get the new code into Main. But what about the branch – are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main, and does Main now have changes that are not in Sprint 2? Well, yes… and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?

Figure: Getting the changes in Main into Sprint 2 is very important.

The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2; maybe the bug no longer exists! This needs to be identified and resolved by the developers before they continue to get further out of sync with Main. Note: Avoid the “Big bang merge” at all costs.

Figure: Merging Sprint 2 back into Main, the Forward Integration, and R0 terminates.

Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.

Figure: The logical conclusion.

This then allows the creation of the next release. By now you should be getting the big picture, and hopefully you learned something useful from this post. I know I have enjoyed writing it, as I find these exploratory posts coupled with real world experience really help harden my understanding. Branching is a tool; it is not a silver bullet. Don’t over-use it, and avoid “Anti-Patterns” where possible. Although the diagram above looks complicated, I hope showing you how it is formed simplifies it as much as possible.

Technorati Tags: Branching, Scrum, VS ALM, TFS 2010, VS2010

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
This morning we released a huge set of updates to Windows Azure. These new capabilities include:

Backup Services: General Availability of Windows Azure Backup Services
Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager
Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration
Active Directory: Securely manage hundreds of SaaS applications
Enterprise Management: Use Active Directory to Better Manage Windows Azure
Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support

All of these improvements are now available to use immediately. Below are more details about them.

Backup Service: General Availability Release of Windows Azure Backup

Today we are releasing Windows Azure Backup Service as a general availability service. This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios.

Windows Azure Backup is a cloud-based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way, from any location, with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, Windows Azure Backup Service also provides the capability to back up data from System Center Data Protection Manager and Windows Server Essentials to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored fully encrypted and can’t be decrypted by anyone but yourself).

Getting Started

To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal. Click New -> Data Services -> Recovery Services -> Backup Vault to do this:

Once the backup vault is created you’ll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it:

Once the servers you want to back up are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this:

Tutorial: Schedule Backups Using the Windows Azure Backup Agent. This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule.

Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent. This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks.
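To give a feel for the PowerShell route those tutorials mention, a scheduled backup policy might be set up along the lines below. This is a hedged sketch, not the definitive procedure – the cmdlets come from the backup agent's MSOnlineBackup module, and the path, days and times are example values only:

# Sketch: define what to back up and when, then activate the policy (run on the registered server)
Import-Module MSOnlineBackup
$policy = New-OBPolicy
$files = New-OBFileSpec -FileSpec "D:\Data"
Add-OBFileSpec -Policy $policy -FileSpec $files
$schedule = New-OBSchedule -DaysOfWeek Monday,Friday -TimesOfDay 22:00
Set-OBSchedule -Policy $policy -Schedule $schedule
Set-OBPolicy -Policy $policy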
Below are some of the key benefits the Windows Azure Backup Service provides:

Simple configuration and management. Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center, and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud.

Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions.

Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can set up throttling and configure how the Windows Azure Backup service utilizes the network bandwidth when backing up or restoring information.

Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically.

Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs.

Hyper-V Recovery Manager: Now Available in Public Preview

I’m excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business-critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime.

Application data in Hyper-V Recovery Manager scenarios always travels on your on-premises replication channel. Only metadata (such as names of logical clouds, virtual machines, networks, etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted.

You can begin using Windows Azure Hyper-V Recovery Manager today by clicking New -> Data Services -> Recovery Services -> Hyper-V Recovery Manager within the Windows Azure Management Portal. You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson’s 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide.

Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn

Today’s Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.
These improvements include:

Ability to Delete both VM Instances + Attached Disks in One Operation

Prior to today’s release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM. You had to manually delete these yourself from the storage account. With today’s update we’ve added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM:

We’ve also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines. To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the “Delete” button:

Warnings on Availability Sets with Only One Virtual Machine in Them

One of the nice features that Windows Azure Virtual Machines supports is the concept of “Availability Sets”. An “availability set” allows you to define a tier/role (e.g. webfrontends, databaseservers, etc.) that you can map Virtual Machines into – and when you do this, Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations. This enables you to deploy applications in a highly available way.

One issue we’ve seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set). With today’s release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set, to help highlight this:

You can learn more about configuring the availability of your virtual machines here.

Configuring SQL Server AlwaysOn

SQL Server AlwaysOn is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today’s Windows Azure release makes it even easier to configure SQL Server AlwaysOn by enabling “Direct Server Return” endpoints to be configured and managed within the Windows Azure Management Portal. Previously, setting this up required using PowerShell to complete the endpoint configuration. Starting today you can enable this simply by checking the “Direct Server Return” checkbox:

You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here.

Active Directory: Application Access Enhancements

This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory. This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc.) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we’ve enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We’ve also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS Applications:

Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.
In this blog post we announced our intention to release the service in general availability form by the end of the year. We also announced that the below features would be available in a free tier with it:

SSO to every SaaS app we integrate with – Users can Single Sign On to any app we are integrated with at no charge. This includes all the top SaaS apps and every app in our application gallery, whether they use federation or password vaulting.

Application access assignment and removal – IT admins can assign access privileges to web applications to the users in their active directory, assuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss.

User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems.

Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time.

Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS, and Android. Not all of these devices handle authentication in the same manner, but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device, anywhere.

You can learn more about our plans for application management with Windows Azure Active Directory here. Try out the preview and start using it today.

Enterprise Management: Use Active Directory to Better Manage Windows Azure

Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have). With today's Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering. Specifically:

1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them. You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users.

2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory. Both options are free. The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources. It also ensures that if an employee leaves an organization, his or her access control rights to the company's Windows Azure resources are immediately revoked.
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations. Prior to today's release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations. We still support this management certificate approach (don't worry – nothing will stop working). But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward (a minimal sketch of such a call appears at the end of this section).

4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials. This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure.

Below are some details on how all of this works:

Subscriptions within a Directory

As part of today's update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don't already have one). When you log in to the Windows Azure Management Portal you'll now see the directory name in the URI of the browser. For example, in the screenshot below you can see that I have a "scottgu" directory that my subscriptions are hosted within.

Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign in to Windows Azure. These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don't want to. In the scenario above I'm actually logged in using my @hotmail.com based Microsoft ID, which is now mapped to a "scottgu" active directory that was created for me. By default everything will continue to work just like it used to before.

Manage your Directory

You can manage an Active Directory (including the one we now create for you by default) by clicking the "Active Directory" tab in the left-hand side of the portal. This will list all of the directories in your account. Clicking one for the first time will display a getting started page that provides documentation and links to perform common tasks with it.

You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this). You can also set up the directory to automatically sync with an on-premises Active Directory using the "Directory Integration" tab.

Note that users within a directory by default do not have admin rights to log in or manage Windows Azure based resources. You still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure. You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it.
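As promised above, here is a minimal sketch (mine, not from the official documentation) of what a Service Management call authenticated with Windows Azure Active Directory can look like. It assumes the Active Directory Authentication Library (ADAL) NuGet package is installed; the authority, client ID, and redirect URI below are hypothetical placeholders for your own AAD application registration, and exact ADAL method names vary between library versions.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using Microsoft.IdentityModel.Clients.ActiveDirectory; // ADAL NuGet package (assumed)

class AadManagementSketch
{
    static void Main()
    {
        // Hypothetical placeholders -- substitute your own tenant, app registration,
        // and subscription ID.
        string authority = "https://login.windows.net/contoso.onmicrosoft.com";
        string clientId = "00000000-0000-0000-0000-000000000000";
        Uri redirectUri = new Uri("http://localhost/aad-sketch");
        string subscriptionId = "<subscription-id>";

        // Prompt the signed-in user (who must be a co-admin on the subscription)
        // and acquire a token for the Service Management API resource.
        var authContext = new AuthenticationContext(authority);
        var token = authContext.AcquireToken(
            "https://management.core.windows.net/", clientId, redirectUri);

        // Authenticate the management call with a Bearer token instead of a
        // client certificate; everything else about the API stays the same.
        var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", token.AccessToken);
        http.DefaultRequestHeaders.Add("x-ms-version", "2012-03-01");

        string uri = String.Format(
            "https://management.core.windows.net/{0}/services/hostedservices",
            subscriptionId);
        Console.WriteLine(http.GetStringAsync(uri).Result);
    }
}

The only difference from the certificate approach is the Authorization header; the rest of the Service Management API surface is unchanged.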
Sign-In Integration within Visual Studio

If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates. You can now just right-click on the "Windows Azure" icon within the Server Explorer and choose the "Connect to Windows Azure" context menu option to do so. Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription). You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email. The dialog will update with an appropriate login prompt depending on which type of email address you enter. Once you sign in you'll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer and be available to start using. No downloading of management certificates required. All of the authentication was handled using your Windows Azure Active Directory!

Manage Subscriptions across Multiple Directories

If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today's update. If you don't like the default subscription-to-directory mapping we have done, you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it. If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the "Edit Directory" button to choose which directory to map it to. Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working. We've made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want.

Filtering By Directory and Subscription

Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions). If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant. This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure account.

Windows Azure SDK 2.2

Today we are also releasing a major update of our Windows Azure SDK. The Windows Azure SDK 2.2 release adds some great new features including:
Visual Studio 2013 Support
Integrated Windows Azure Sign-In support within Visual Studio
Remote Debugging Cloud Services with Visual Studio
Firewall Management support within Visual Studio for SQL Databases
Visual Studio 2013 RTM VM Images for MSDN Subscribers
Windows Azure Management Libraries for .NET
Updated Windows Azure PowerShell Cmdlets and ScriptCenter

I'll post a follow-up blog shortly with more details about all of the above.
Additional Updates

In addition to the above enhancements, today's release also includes a number of additional improvements:
AutoScale: Richer time and date based scheduling support (set different rules on different dates)
AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios)
AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules
Operation Logs: Auditing support for Service Bus management operations

Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2. It has so much goodness in it that I have a whole second blog post coming shortly on it! :-)

Summary

Today's Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Toorcon14

    - by danx
Toorcon 2012 Information Security Conference
San Diego, CA, http://www.toorcon.org/
Dan Anderson, October 2012

It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed the talks below, which I attended and which interested me.

MalAndroid—the Crux of Android Infections, Aditya K. Sood
Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro
Privacy at the Handset: New FCC Rules?, Valkyrie
Hacking Measured Boot and UEFI, Dan Griffin
You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik
What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold
Accessibility and Security, Anna Shubina
Stop Patching, for Stronger PCI Compliance, Adam Brand
McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall

MalAndroid—the Crux of Android Infections
Aditya K. Sood, IOActive, Michigan State PhD candidate

Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most have unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grain resource control and policy enforcement. Android static analysis also includes a static analysis app checker (bouncer) and a vulnerability checker.

What security problems does Android have?
User-centric security, which depends on the user to grant permission and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid). All they want is functionality, extensibility, mobility.
Android had no "proper" encryption before Android 3.0.
No built-in protection against social engineering and web tricks.
Alternative Android app markets are unsafe. Simply visiting some markets can infect Android.

Aditya classified Android malware types as:
Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app. Or Android Gold Dream (a game), which uploads user files in a stealthy manner to a remote location.
Type K—Kernel. Exploits underlying Linux libraries or the kernel.
Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors.

What are the threats from Android malware? These include leaking info (contacts), banking fraud, corporate network attacks, malware advertising, and malware "hacktivism" (the promotion of social causes—for example, promoting specific leaders of the Tunisian or Iranian revolutions).

Android malware is frequently "masqueraded"—that is, repackaged inside a legit app along with the malware. To avoid detection, the hidden malware is not unwrapped until runtime.
The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there aren't many around. What they do is hijack the Android init framework—altering system programs and daemons—and then delete themselves. For example, the DKF Bootkit (China).

Android App Problems:
no code signing! all self-signed
native code execution
permission sandbox — all or none
alternate market places
no robust Android malware detection at the network level
delayed patch process

Programming Weird Machines with ELF Metadata
Rebecca "bx" Shapiro, Dartmouth College, NH
https://github.com/bx/elf-bf-tools
@bxsays on twitter

Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). A "weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that: return to a weird location, do SQL injection, corrupt the heap.

Bx then talked about using ELF metadata as an (unintended) "weird machine". Some ELF background: A compiler takes source code and generates an ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes the ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another one at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Tables—works with the GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory).

For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses, by tricking the runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements:
instructions: add, move, jump if not 0 (jnz)
think of symbol table entries as "registers"; a symbol table value is its "contents"
immediate values are constants
direct values are addresses (e.g., 0xdeadbeef)
move instruction: is a relocation table entry
add instruction: a relocation table "addend" entry
jnz instruction: takes multiple relocation table entries

The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on the stack at "end" and uses IFUNC table entries (containing function pointer addresses). The ELF weird machine, called "Brainfu*k" (BF), has 8 instructions—pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print—and 3 registers.

Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library call that drops privilege, so ping remained root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example:

$ whoami
bx
$ ping localhost -t backdoor.sh # executes backdoor
$ whoami
root
$

The modified code increased from 285948 bytes to 290209 bytes.
A BF tool compiles the "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data.

Privacy at the Handset: New FCC Rules?
"Valkyrie" (Christie Dudley, Santa Clara Law JD candidate)

Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as an aggregation workaround, "recollating" data with other databases to identify customers indirectly.

The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is an FTC responsibility (not FCC). The FTC is trying to improve mobile app privacy, but the FTC has no authority over carrier/customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones. As a result iPhones may be more regulated.

Who are the consumer advocates? Everyone knows EFF, but EPIC (Electronic Privacy Information Center), although more obscure, is more relevant.

What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need an incentive to grant users control for those who want it, by holding them liable and responsible for breaches on their clock. Location information should be added to current CPNI privacy protection, and should require a "pen/trap" judicial order to obtain (which would still be a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House. There will probably be new regulation soon; enforcement will be a problem, but consumers will still have some benefit.

Hacking Measured Boot and UEFI
Dan Griffin, JWSecure, Inc., Seattle, @JWSdan

Dan talked about hacking measured UEFI boot. First some terms:
UEFI is a boot technology that is replacing BIOS (has whitelisting and blacklisting). UEFI protects devices against rootkits.
TPM - a hardware security device to store hashes and hardware-protected keys
"secure boot" can control at the firmware level what boot images can boot
"measured boot" is an OS feature that tracks hashes (from BIOS, boot loader, kernel, early drivers)
"remote attestation" allows remote validation and control based on policy on a remote attestation server

Microsoft is pushing TPM (required for Windows 8), but Google is not. Intel TianoCore is the only open source implementation of UEFI. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines.

UEFI Weaknesses.
UEFI toolkits are evolving rapidly, but UEFI has weaknesses:
it assumes the user is an ally
it trusts the TPM implicitly, and that it's attached to the computer
the hibernate file is unprotected (disk encryption protects against this)
protection is migrating from hardware to firmware
delays in patching and whitelist updates
will UEFI really be adopted by the mainstream? (smartphone hardware support, bank support, apathetic consumer support)

You Can't Buy Security: Building the Open Source InfoSec Program
Boris Sverdlik, ISDPodcast.com co-host

Boris talked about problems typical of current security audits. "IT Security" is an oxymoron—IT exists to enable business, uptime, utilization, reporting, but doesn't care about security—IT has a conflict of interest. There's no Magic Bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not the solution (security is not a checklist)—what's needed is experience and adaptability—you need soft skills.

Step 1: The first thing is to Google and learn the company end-to-end before you start. Get to know the management team (not the IT team), and meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g., AV × EF = SLE). Learn the different Business Units and legal/regulatory obligations, learn the business and where the money is made, verify the company is protected from script kiddies (easy), learn the sensitive information (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering).

Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, physical security.

Step 3: Implementation: keep it simple, stupid. Open source, although useful, is not free (implementation cost).
Access controls: authentication & authorization for local and remote access. MS Windows has it; otherwise use OpenLDAP, OpenIAM, etc.
Application security: Everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run apps of different risk levels on the same system. Assume the host/client is compromised and use app-level security controls.
Network security: VLAN != segregated, because there are too many workarounds. Use explicit firewall rules, active and passive network monitoring (snort is free), disallow end user access to the production environment, and have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth, and SSL does not mean "safe."
Operational controls: Have change, patch, asset, & vulnerability management (OSSIM is free). For change management, always review code before pushing to production. For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down the logs (as they have everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVAS and Seccubus are free.
Security awareness: The reality is users will always click everything. Build real awareness, not a compliance-driven checkbox, and have it integrated into the culture.
Pen test by crowd sourcing—test with logging.
COSSP http://www.cossp.org/ - Comprehensive Open Source Security Project

What Journalists Want: The Investigative Reporters' Perspective on Hacking
Dave Maas, San Diego CityBeat
Jason Leopold, Truthout.org

The difference between hackers and investigative journalists: For hackers, the motivation varies, but the method is the same, with technological specialties. For investigative journalists, it's about one thing—The Story—and they need broad info-gathering skills.

J-School in 60 Seconds: Generic formula: a person or issue of public interest, new info, or an angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence.

Media awareness of hackers and trends: journalists are becoming extremely aware of hackers with congressional debates (privacy, data breaches), demand for data-mining journalists, use of coding and web development by journalists, and journalists busted for hacking (Murdoch).

Info gathering by investigative journalists includes public records laws. The federal Freedom of Information Act (FOIA) is good, but slow. The California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. Often you need to sue (especially the FBI). CPRA is faster, and requests can be vague. There are also dumps and leaks (a la Wikileaks).

Journalists want: leads, protecting themselves and their sources, and adapting tools for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web log). They don't trust encryption; they want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it.

Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem
Anna Shubina, Dartmouth College

Anna talked about how accessibility and security are related. Accessibility of digital content (not real-world accessibility) mostly refers, for our purposes, to blind users and screen readers. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word has an executable format—it's not a document exchange format—and is more dangerous than PDF or HTML. Accessibility is often the first and maybe only sanity check with parsing. They have no choice, because someone may want to read what you write. Google, for example, is very particular about which web browser you use and is bad at supporting other browsers. It uses JavaScript instead of links, often requiring mouseover to display content.

PDF is a security nightmare: an executable format, embedded Flash, JavaScript, etc.—15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging, to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all.

The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic Security says complicated data formats are hard to parse, and the problem cannot be solved in general due to the Halting Problem.
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust". Not much help though, except for "Robust", but here are some gems:
* all information should be parsable (paraphrasing)
* if not parsable, it cannot be converted to alternate formats
* maximize compatibility in new document formats

Executable web pages are bad for security and accessibility. They say it's for a better web experience. But is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input.

Solutions:
Accessibility and security problems come from the same source
Expose the "better user experience" myth
Keep your corner of the Internet parsable
Remember the "Halting Problem"—recognize false solutions (checking and verifying tools)

Stop Patching, for Stronger PCI Compliance
Adam Brand, protiviti @adamrbrand, http://www.picfun.com/

Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of the time of Brian (an IT guy), 960 hours/year, was spent patching POSs in 850 restaurants. Often applying some patches makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is the overuse of vulnerability scanners—it gives a warm and fuzzy feeling and it's simple (red or green results—fix the reds). Scanners give a false sense of security. In reality, breaches from missing patches are uncommon—more common problems are: default passwords, cleartext authentication, and misconfiguration (open firewall ports).

Patching Myths:
Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead).
Myth 2: the vendor decides what's critical (also PCI §6.1). But §6.2 requires user ranking of vulnerabilities instead.
Myth 3: scan and rescan until it passes. But PCI §11.2.1b says this applies only to high-risk vulnerabilities.

Adam says good recommendations come from NIST 800-40. Instead use sane patching and focus on what's really important. From NIST 800-40:
Proactive: Use a proactive vulnerability management process: use change control, configuration management, monitor file integrity.
Monitor: start with NVD and other vulnerability alerts, not scanner results.
Evaluate: public-facing system? workstation? internal server? (risk rank)
Decide: on action and timeline
Test: pre-test patches (stability, functionality, rollback) for change control
Install: notify, change control, tickets

McAfee Secure & Trustmarks — a Hacker's Best Friend
Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada

"McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if it passes McAfee's remote scanning. The problem is that removal of a trustmark acts as a flag that you're vulnerable. It's easy to spot status changes by viewing the McAfee list on their website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If a site's certification image changes to a 1x1 pixel image, then the site is no longer certified (a rough sketch of this check appears below). Their scripts take deltas of scans to see what changed daily. The bottom line is that a change in TrustGuard status is a flag for hackers to attack your site. The entire idea of seals is silly—you're raising a flag announcing that you're vulnerable.
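The speakers' Perl scripts weren't published; as a rough illustration of the check (my sketch, not their code), the C# fragment below downloads a site's seal image and flags it when it has been swapped for a 1x1 placeholder. The seal URL is a made-up placeholder, and a real harvester would collect seal URLs per site and diff the results daily.

using System;
using System.Drawing;   // reference System.Drawing.dll
using System.IO;
using System.Net;

class SealWatcher
{
    // Per the talk: a trustmark swapped for a 1x1 image means certification was
    // pulled, which is exactly the signal an attacker is watching for.
    static bool SealRevoked(string sealUrl)
    {
        using (var web = new WebClient())
        using (var bytes = new MemoryStream(web.DownloadData(sealUrl)))
        using (var image = Image.FromStream(bytes))
        {
            return image.Width == 1 && image.Height == 1;
        }
    }

    static void Main()
    {
        string url = "https://example.com/images/trustmark-seal.png"; // hypothetical
        Console.WriteLine(SealRevoked(url)
            ? "Seal pulled; the site may have just failed its scan."
            : "Seal still present.");
    }
}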

    Read the article

  • Toorcon 15 (2013)

    - by danx
The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo

Introduction to Toorcon 15 (2013)
A Tale of One Software Bypass of MS Windows 8 Secure Boot
Breaching SSL, One Byte at a Time
Running at 99%: Surviving an Application DoS
Security Response in the Age of Mass Customized Attacks
x86 Rewriting: Defeating RoP and other Shinanighans
Clowntown Express: interesting bugs and running a bug bounty program
Active Fingerprinting of Encrypted VPNs
Making Attacks Go Backwards
Mask Your Checksums—The Gorry Details
Adventures with weird machines thirty years after "Reflections on Trusting Trust"

Introduction to Toorcon 15 (2013)

Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here, starting in 2003. As always, I've only summarized the talks that I attended and that interested me enough to write about them. Be aware that I may have misrepresented the speakers' remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information.

A Tale of One Software Bypass of MS Windows 8 Secure Boot
Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak

Yuri and Alex talked about UEFI and bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference.

MS Windows 8 Secure Boot Overview. UEFI (Unified Extensible Firmware Interface) is the interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi); once replaced, it can modify the kernel, and replacing the bootloader is trivial. Today there are many legacy bootkits—UEFI replaces most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed updates). You would think Secure Boot would rely on ROM (such as is used for phones), but you can't do that for PCs—PCs use writable memory with signatures. The DXE core verifies the UEFI boot loader(s). The OS loader (winload.efi, winresume.efi) verifies the OS kernel. A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db) and a "forbidden" database (dbx)—X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (they can't be accessed once the OS starts—which uses Run-Time (RT)). The root cert uses RSA-2048 public keys and PKCS#7 format signatures. The relevant UEFI variables are:
SecureBoot — enable/disable image signature checks
SetupMode — update keys, self-signed keys, and secure boot variables
CustomMode — allows updating keys

Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation.

Attacking MS Windows 8 Secure Boot. Secure Boot does NOT protect from physical access; it can be disabled from the console. Each BIOS vendor implements Secure Boot differently, and there are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly.
That means vendors must:
allow only signed UEFI firmware updates
protect UEFI firmware from direct modification in flash memory
protect firmware update components
program the SPI controller securely
protect secure boot policy settings in NVRAM
protect the runtime API
disable the compatibility support module, which allows unsigned legacy boot

One can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If the PK is not found, the firmware enters setup mode with secure boot turned off. One can also exploit the TPM in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, though. But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future.

Recommendations:
allow only signed updates
protect UEFI firmware in ROM
protect the EFI variable store in ROM

Breaching SSL, One Byte at a Time
Angelo Prado and Yoel Gluck, Salesforce.com

CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME makes requests with every possible character and measures the ciphertext length. It looks for the plaintext which compresses the most, and so finds the cookie one byte at a time. SSL compression uses LZ77 to reduce redundancy; Huffman coding replaces common byte sequences with shorter codes. US-CERT thought the SSL compression problem was fixed, but it isn't. They convinced CERT that it wasn't fixed, and CERT issued a CVE.

BREACH, breachattrack.com. BREACH exploits the SSL response body (Accept-Encoding, Content-Encoding). It takes advantage of the fact that the response body is still compressed at the HTTP level (gzip), even where TLS-level compression has been disabled. BREACH needs fairly "stable" pages that are static for ~30 seconds, and it needs attacker-supplied content (say, from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key. It can use compression to guess contents one byte at a time. For example, "Supersecret SupersecreX" (a wrong guess) compresses by 10 bytes, while "Supersecret Supersecret" (a correct guess) compresses by 11 bytes—the repeated string compresses better—so each character can be found by trying every possibility. To start the guess, BREACH needs at least three known initial characters in the response sequence. The compression length then "leaks" information.

Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include:
lookahead (guess 2 or 3 characters at a time instead of 1 character), which is expensive
rollback to the last known conflict
check the compression ratio
brute-force the first 3 "bootstrap" characters, if needed (expensive)
block ciphers hide the exact plaintext length; the workaround is to align the response in advance to the block size

Mitigations:
length: use variable padding
secrets: dynamic CSRF tokens per request
secrets: change them over time
separate secrets into input-less servlets

Future work: either understanding DEFLATE/GZIP better, or HTTPS extensions.

Running at 99%: Surviving an Application DoS
Ryan Huber, Risk I/O

Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets, or to download large files.
Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use Apache alternatives such as nginx.

How to identify malicious hosts:
short, sudden web requests
user-agent is obvious (curl, python)
same URL requested repeatedly
no web page referer (not normal)
hidden links: hide a link and see if a bot follows it
restricted access if not your geo IP (unless the website is global)
missing common headers in the request
regular timing
first-seen IP at the beginning of the attack
count of requests per host (usually a very large number)

Use of a captcha can mitigate attacks, but you'll lose a lot of genuine users.

Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer. Bouncer is software written by Ryan that works from netflow. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it's not nice about it—no proper connection close). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers to get clean data for Bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, a consumer library, multitasking, logging destroyed connections.

Takeaways: DoS mitigation is easier with a complete picture. Bouncer is designed to make it easier to detect and defend against DoS—it is not a complete cure.

Security Response in the Age of Mass Customized Attacks
Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/

Peleus and Karthik talked about responding to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass-production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world. Or designing your own PC from commodity parts.

Exploit kits are another example of mass customization. The kits support multiple browsers and plugins, and allow new modules. Exploit kits are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits.

Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now. So the drive with malware is towards mass-customized exploits, to avoid detection.

There's plenty of evidence that exploit development has Project Manager bureaucracy. They infer from the malware edicts to:
support all versions of Reader
support all versions of Windows
support all versions of Flash
support all browsers
write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example)

Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software/OS/browsers and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs.
However, these complete exploits are more likely to be buggy or fragile in themselves, and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks.

x86 Rewriting: Defeating RoP and other Shinanighans
Richard Wartell

The attack vector we are addressing here is: first, some malware causes a buffer overflow. The malware has no program access, but it has input access, and the buffer overflow puts code onto the stack. Later, the stack became non-executable. The workaround malware used was to write a bogus return address to the stack, jumping to the malware. Later came ASLR (Address Space Layout Randomization) to randomize the memory layout and make addresses non-deterministic. The workaround malware used was to jump to existing code segments in the program that can be used in bad ways.

"RoP" is Return-oriented Programming attacks. RoP attacks use your own code, writing return addresses on the stack that point to (existing) exploitable code found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is using anti-RoP compilers that compile source code with NO return instructions. ASLR does not randomize within the address space, just the base of the "gadgets". IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine.

Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code. The STIR disassembler is conservative in what it disassembles. Each basic block is moved to a random location in memory. Next, STIR writes new code sections with copies of the "basic blocks" of code in randomized locations. The old code is copied and rewritten with jumps to the new code, and the original code sections in the file are marked non-executable. STIR has better entropy than ASLR in the location of code, making brute force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of "gadgets" (i.e., moved the address). Overhead is usually 5-10% on MS Windows and about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is that it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies. For example: don't create a *.{exe,msi,bat} file; or don't connect to the network after reading from the disk.

Clowntown Express: interesting bugs and running a bug bounty program
Collin Greene, Facebook

Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring in people with different perspectives, and they are paid only for success. A bug bounty is a better use of a fixed amount of time and money versus just code review or static code analysis. The bounty program started July 2011 and has paid out $1.5 million to date.
14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small % of submitters (as with everything else)—the top paid submitters are paid 6 figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired. Bug bounties also allow them to see bugs that were missed by tools or reviews, allowing improvement of the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.

Active Fingerprinting of Encrypted VPNs
Anna Shubina, Dartmouth Institute for Security, Technology, and Society

(I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.)

IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping roundtrips (such as what router is used). Delayed packets are not informative about a network, especially if far away from the network. More exploration is needed of how TCP works in real life with respect to timing.

Making Attacks Go Backwards
FuzzyNop, Mandiant

This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group—they have diverse skill levels, and there are a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first:
"hiding" malware in the temp, recycle, or root directories
creation of unnamed scheduled tasks
obvious names of files and syscalls ("ClearEventLog")
uncleared event logs. Clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system

Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" and "[RightShift]"), or timestamp printf strings. Look for these in files. Presence is not proof they are used; absence is not proof they are not used. Java exploits: one can parse a jar file with idxparser.py and decompile the Java file. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware. Also, malware typically needs command and control.

Application of Artificial Intelligence in Ad-Hoc Static Code Analysis
John Ashaman, Security Innovation

Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but this fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. However, the problem is that making a call graph is really hard. For example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST), with the nodes created from method declarations and the edges created from method use. Then the tool generated a control flow graph with the goal of finding a path through the AST (a maze) from source to sink.
The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for the search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information see his posts in the Security Innovation blog and NRefactory on GitHub.

Mask Your Checksums—The Gorry Details
Eric (XlogicX) Davisson

Sometimes in emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data). Other parts of the packet may leak more data about the IP. The TCP and IP checksums both cover the same data, so one can get more bits of information by using both checksums than by using just one. Also, one can usually determine the OS from the TTL field and the ports in a packet header. If we get hundreds of possible results (16x for each masked nibble that is unknown), one can do other things to narrow the results, such as look at packet contents for domain or geo information. With hundreds of results, one can import them in CSV format into a spreadsheet and correlate with geo data to see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached, and was able to find the exact IP address, given the geo and university of the sender. The point is: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, so as not to create a false impression of security. (A small sketch of this checksum recovery appears at the end of this writeup.)

Adventures with weird machines thirty years after "Reflections on Trusting Trust"
Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present)

"Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper. "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs", bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author—can you trust the code? Hell no! There are too many factors—it's Babylonian in nature. Why not?
Input is not well-defined/recognized (code's assumptions about "checked" input will be violated (bug/vulnerability)). For example, HTML is recursive, but regex checking is not recursive.
Input is well-formed but so complex there's no telling what it does. For example, ELF file parsing is complex and has multiple ways of parsing.
Input is seen differently by different pieces of the program or toolchain.

Any input is a program:
input executes on input handlers (it drives state changes & transitions)
only a well-defined execution model can be trusted (regex/DFA, PDA, CFG)
an input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org) or it's a "virtual machine" for inputs to drive into pwn-age

ELF ABI (the UNIX/Linux executable file format) case study. Problems can arise from these steps (without planting bugs): compiler, linker, loader, ld.so/rtld, relocator, DWARF (debugger info), exceptions. The problem is you can't really automatically analyze code (it's the "halting problem" and undecidable).
The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading—you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing-complete machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file. For more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. Or DWARF exception handling data: .eh_frame + glibc == Turing machine. Or the x86 MMU (IDT, GDT, TSS): address translation was used to create a Turing machine—the page handler reads and writes memory (on page fault) and uses a page table, which can be used as Turing machine byte code. There's an example on GitHub using this TM that will fly a glider across the screen.

Next Sergey talked about "parser differentials": having one input format but two parsers will create confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF—there are several parsers in the OS toolchain, which are all different. One can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs. The second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC.

Conclusions. Trusting computers is not only about bugs! Bugs are part of the problem, but by far not all of it:
complex data formats mean bugs
no "chain of trust" in Babylon! (that is, with parser differentials)
we need to squeeze complexity out of data until data stops being "code equivalent"

Further information: see langsec.org, and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.
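To make the checksum leak from the "Mask Your Checksums" talk concrete, here is a small C# sketch (my illustration, not Eric's tooling). It computes the standard RFC 1071 Internet checksum over a made-up IPv4 header, then brute-forces a "masked" source-IP octet: only octet values consistent with the published checksum survive, and for a single masked octet the checksum pins the value down exactly.

using System;
using System.Collections.Generic;

class ChecksumLeak
{
    // RFC 1071 Internet checksum: one's-complement sum of 16-bit big-endian words.
    static ushort Checksum(byte[] header)
    {
        uint sum = 0;
        for (int i = 0; i < header.Length; i += 2)
            sum += (uint)((header[i] << 8) | header[i + 1]);
        while ((sum >> 16) != 0)
            sum = (sum & 0xFFFF) + (sum >> 16);
        return (ushort)~sum;
    }

    static void Main()
    {
        // A made-up 20-byte IPv4 header; bytes 12-15 are the source IP.
        byte[] header = {
            0x45, 0x00, 0x00, 0x54, 0x1c, 0x46, 0x40, 0x00,
            0x40, 0x01, 0x00, 0x00,  // header checksum field zeroed for computing
            0x0a, 0x00, 0x00, 0x2a,  // source IP 10.0.0.42 (last octet to be "masked")
            0x0a, 0x00, 0x00, 0x01   // destination IP 10.0.0.1
        };

        // The analyst publishes this checksum but masks header[15] in the report.
        ushort published = Checksum(header);

        // An attacker tries every value for the masked octet against the checksum.
        var candidates = new List<int>();
        for (int octet = 0; octet <= 255; octet++)
        {
            header[15] = (byte)octet;
            if (Checksum(header) == published) candidates.Add(octet);
        }

        // For one masked octet this prints exactly one value: 42.
        Console.WriteLine("Octets consistent with checksum: " + string.Join(", ", candidates));
    }
}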

    Read the article

  • Coding With Windows Azure IaaS

    - by Hisham El-bereky
This post will focus on some advanced programming topics concerned with IaaS (Infrastructure as a Service), which is provided as a Windows Azure virtual machine (with its related resources, like virtual disks and virtual networks). You know that Windows Azure started as a PaaS cloud platform, but owing to business cases which need full control over their virtual machines, Windows Azure moved toward providing IaaS as well. Sometimes you will need to manage your cloud IaaS through code, maybe for these reasons:
Working on a hyper-cloud system by providing a bursting connector to Windows Azure virtual machines
Providing a multi-tenant system which consumes Windows Azure virtual machines
An automated process on your on-premises or cloud service which needs to utilize some virtual resources

We are going to implement the following basic operations using C# code:
List images
Create virtual machine
List virtual machines
Restart virtual machine
Delete virtual machine

Before implementing the above operations, we need to prepare the client side and the Windows Azure subscription to communicate correctly by providing a management certificate (an X.509 v3 certificate), which permits client access to resources in your Windows Azure subscription, since requests made using the Windows Azure Service Management REST API require authentication against a certificate that you provide to Windows Azure. More info about setting up the management certificate is located here. To install the .cer on another client machine you will need the .pfx file; if it doesn't exist, export the .cer as a .pfx.

Note: You will need to install .NET 4.5 on your machine to try the code.

So let's start.

This post is built on the post by Michael Washam, "Advanced Windows Azure IaaS – Demo Code", so I'm here to clarify some points and to add new operations which don't exist in Michael's demo.

The basic C# object used here as a client to the Azure REST API for the IaaS service is HttpClient (it provides a base class for sending HTTP requests and receiving HTTP responses from a resource identified by a URI). This object must be initialized with the required data, like the certificate, headers, and content if required.
Also I'd like to note here that the code is based on asynchronous programming for the calls to Azure, which enhances performance and gives us the ability to compose complex calls that depend on more than one sub-call to achieve some operation.

The following code explains how to get the certificate and initialize the HttpClient object with the required data, like headers and content:

HttpClient GetHttpClient()
{
    X509Store certificateStore = null;
    X509Certificate2 certificate = null;
    try
    {
        // Find the management certificate by thumbprint in the current user's store.
        certificateStore = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        certificateStore.Open(OpenFlags.ReadOnly);
        string thumbprint = ConfigurationManager.AppSettings["CertThumbprint"];
        var certificates = certificateStore.Certificates.Find(
            X509FindType.FindByThumbprint, thumbprint, false);
        if (certificates.Count > 0)
        {
            certificate = certificates[0];
        }
    }
    finally
    {
        if (certificateStore != null) certificateStore.Close();
    }

    WebRequestHandler handler = new WebRequestHandler();
    if (certificate != null)
    {
        // Attach the management certificate and set required headers like x-ms-version.
        handler.ClientCertificates.Add(certificate);
        HttpClient httpClient = new HttpClient(handler);
        httpClient.DefaultRequestHeaders.Add("x-ms-version", "2012-03-01");
        httpClient.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/xml"));
        return httpClient;
    }
    return null;
}

Let us keep the object httpClient as the reference object used to call the Windows Azure REST API for the IaaS service. For each request operation we need to define:
Request URI
HTTP Method
Headers
Content body (if required)

(1) List images

The List OS Images operation retrieves a list of the OS images from the image repository.

Request URI: https://management.core.windows.net/<subscription-id>/services/images (replace <subscription-id> with your Windows Azure subscription ID)
HTTP Method: GET (HTTP 1.1)
Headers: x-ms-version: 2012-03-01
Body: None.

C# Code

List<String> imageList = new List<String>();
// Replace _subscriptionid with your Windows Azure subscription ID.
String uri = String.Format("https://management.core.windows.net/{0}/services/images", _subscriptionid);

HttpClient http = GetHttpClient();
Stream responseStream = await http.GetStreamAsync(uri);

if (responseStream != null)
{
    XDocument xml = XDocument.Load(responseStream);
    // 'ns' is the XML namespace used in Service Management responses, defined elsewhere as:
    // XNamespace ns = "http://schemas.microsoft.com/windowsazure";
    var images = xml.Root.Descendants(ns + "OSImage")
                         .Where(i => i.Element(ns + "OS").Value == "Windows");
    foreach (var image in images)
    {
        string img = image.Element(ns + "Name").Value;
        imageList.Add(img);
    }
}

More information about the REST call (request/response) is located here: http://msdn.microsoft.com/en-us/library/windowsazure/jj157191.aspx

(2) Create Virtual Machine

Creating a virtual machine requires a service and a deployment to be created first, so creating a VM is done in three steps in case the hosted service and deployment do not exist yet:
Create hosted service, a container for service deployments in Windows Azure. A subscription may have zero or more hosted services.
Create deployment, a service that is running on Windows Azure. A deployment may be running in either the staging or production deployment environment. It may be managed either by referencing its deployment ID, or by referencing the deployment environment in which it's running.
Create virtual machine; the info from the previous two steps is required in this step.

I suggest using the same name for the service, deployment, and virtual machine to make it easy to manage virtual machines.

Note: A name for the hosted service must be unique within Windows Azure.
This name is the DNS prefix and can be used to access the hosted service. For example: http://ServiceName.cloudapp.net/
2.1 Create service
Request URI
https://management.core.windows.net/<subscription-id>/services/hostedservices
HTTP Method
POST (HTTP 1.1)
Headers
x-ms-version: 2012-03-01
Content-Type: application/xml
Body
More details about the request body (and other information) are located here: http://msdn.microsoft.com/en-us/library/windowsazure/gg441304.aspx
C# code
The following method shows how to create a hosted service:

async public Task<String> NewAzureCloudService(String ServiceName, String Location, String AffinityGroup, String subscriptionid)
{
    String requestID = String.Empty;

    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices", subscriptionid);
    HttpClient http = GetHttpClient();

    System.Text.ASCIIEncoding ae = new System.Text.ASCIIEncoding();
    byte[] svcNameBytes = ae.GetBytes(ServiceName);

    String locationEl = String.Empty;
    String locationVal = String.Empty;

    if (String.IsNullOrEmpty(Location) == false)
    {
        locationEl = "Location";
        locationVal = Location;
    }
    else
    {
        locationEl = "AffinityGroup";
        locationVal = AffinityGroup;
    }

    XElement srcTree = new XElement("CreateHostedService",
        new XAttribute(XNamespace.Xmlns + "i", ns1),
        new XElement("ServiceName", ServiceName),
        new XElement("Label", Convert.ToBase64String(svcNameBytes)),
        new XElement(locationEl, locationVal)
    );
    ApplyNamespace(srcTree, ns);

    XDocument CSXML = new XDocument(srcTree);
    HttpContent content = new StringContent(CSXML.ToString());
    content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");

    HttpResponseMessage responseMsg = await http.PostAsync(uri, content);
    if (responseMsg != null)
    {
        requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault();
    }
    return requestID;
}

2.2 Create Deployment
Request URI
https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deploymentslots/<deployment-slot-name>
Replace <deployment-slot-name> with staging or production, depending on where you wish to deploy your service package; <service-name> is provided as input from the previous step
HTTP Method
POST (HTTP 1.1)
Headers
x-ms-version: 2012-03-01
Content-Type: application/xml
Body
More details about the request body (and other information) are located here: http://msdn.microsoft.com/en-us/library/windowsazure/ee460813.aspx
C# code
The following method shows how to create a hosted service deployment:

async public Task<String> NewAzureVMDeployment(String ServiceName, String VMName, String VNETName, XDocument VMXML, XDocument DNSXML)
{
    String requestID = String.Empty;

    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments", _subscriptionid, ServiceName);
    HttpClient http = GetHttpClient();
    XElement srcTree = new XElement("Deployment",
        new XAttribute(XNamespace.Xmlns + "i", ns1),
        new XElement("Name", ServiceName),
        new XElement("DeploymentSlot", "Production"),
        new XElement("Label", ServiceName),
        new XElement("RoleList", null)
    );

    if (String.IsNullOrEmpty(VNETName) == false)
    {
        srcTree.Add(new XElement("VirtualNetworkName", VNETName));
    }

    if (DNSXML != null)
    {
        srcTree.Add(new XElement("DNS", new XElement("DNSServers", DNSXML)));
    }

    XDocument deploymentXML = new XDocument(srcTree);
    ApplyNamespace(srcTree, ns);

    deploymentXML.Descendants(ns + "RoleList").FirstOrDefault().Add(VMXML.Root);

    String fixedXML = deploymentXML.ToString().Replace(" xmlns=\"\"", ""); 
    HttpContent content = new StringContent(fixedXML);
    content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");

    HttpResponseMessage responseMsg = await http.PostAsync(uri, content);
    if (responseMsg != null)
    {
        requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault();
    }

    return requestID;
}

2.3 Create Virtual Machine
Request URI
https://management.core.windows.net/<subscription-id>/services/hostedservices/<cloudservice-name>/deployments/<deployment-name>/roles
<cloudservice-name> and <deployment-name> are provided as input from the previous steps
HTTP Method
POST (HTTP 1.1)
Headers
x-ms-version: 2012-03-01
Content-Type: application/xml
Body
More details about the request body (and other information) are located here: http://msdn.microsoft.com/en-us/library/windowsazure/jj157186.aspx
C# code

async public Task<String> NewAzureVM(String ServiceName, String VMName, XDocument VMXML)
{
    String requestID = String.Empty;

    String deployment = await GetAzureDeploymentName(ServiceName);

    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roles", _subscriptionid, ServiceName, deployment);

    HttpClient http = GetHttpClient();
    HttpContent content = new StringContent(VMXML.ToString());
    content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");
    HttpResponseMessage responseMsg = await http.PostAsync(uri, content);
    if (responseMsg != null)
    {
        requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault();
    }
    return requestID;
}

(3) List Virtual Machines
To list the virtual machines hosted in a Windows Azure subscription we have to loop over all hosted services and get the virtual machines hosted in each one. To do that we need to execute the following operations: listing the hosted services, then listing each hosted service's virtual machines.
3.1 Listing Hosted Services
Request URI
https://management.core.windows.net/<subscription-id>/services/hostedservices
HTTP Method
GET (HTTP 1.1)
Headers
x-ms-version: 2012-03-01
Body
None.
More info about this HTTP request is located here: http://msdn.microsoft.com/en-us/library/windowsazure/ee460781.aspx
C# Code

async private Task<List<XDocument>> GetAzureServices(String subscriptionid)
{
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices", subscriptionid);
    List<XDocument> services = new List<XDocument>();

    HttpClient http = GetHttpClient();

    Stream responseStream = await http.GetStreamAsync(uri);

    if (responseStream != null)
    {
        XDocument xml = XDocument.Load(responseStream);
        var svcs = xml.Root.Descendants(ns + "HostedService");
        foreach (XElement r in svcs)
        {
            XDocument vm = new XDocument(r);
            services.Add(vm);
        }
    }

    return services;
}

3.2 Listing Hosted Service Virtual Machines
Request URI
https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>/roles/<role-name>
HTTP Method
GET (HTTP 1.1)
Headers
x-ms-version: 2012-03-01
Body
None. 
More info about this HTTP request is here: http://msdn.microsoft.com/en-us/library/windowsazure/jj157193.aspx
C# Code

async public Task<XDocument> GetAzureVM(String ServiceName, String VMName, String subscriptionid)
{
    String deployment = await GetAzureDeploymentName(ServiceName);
    XDocument vmXML = new XDocument();

    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roles/{3}", subscriptionid, ServiceName, deployment, VMName);

    HttpClient http = GetHttpClient();
    Stream responseStream = await http.GetStreamAsync(uri);
    if (responseStream != null)
    {
        vmXML = XDocument.Load(responseStream);
    }

    return vmXML;
}

So the final method which can be used to list all virtual machines is:

async public Task<XDocument> GetAzureVMs()
{
    List<XDocument> services = await GetAzureServices(_subscriptionid);
    XDocument vms = new XDocument();
    vms.Add(new XElement("VirtualMachines"));
    ApplyNamespace(vms.Root, ns);
    foreach (var svc in services)
    {
        string ServiceName = svc.Root.Element(ns + "ServiceName").Value;

        String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deploymentslots/{2}", _subscriptionid, ServiceName, "Production");

        try
        {
            HttpClient http = GetHttpClient();
            Stream responseStream = await http.GetStreamAsync(uri);

            if (responseStream != null)
            {
                XDocument xml = XDocument.Load(responseStream);
                var roles = xml.Root.Descendants(ns + "RoleInstance");
                foreach (XElement r in roles)
                {
                    XElement svcnameel = new XElement("ServiceName", ServiceName);
                    ApplyNamespace(svcnameel, ns);
                    r.Add(svcnameel); // not part of the RoleInstance
                    vms.Root.Add(r);
                }
            }
        }
        catch (HttpRequestException)
        {
            // no VMs in this cloud service
        }
    }
    return vms;
}

(4) Restart Virtual Machine
Request URI
https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>/roleInstances/<role-instance-name>/Operations
HTTP Method
POST (HTTP 1.1)
Headers
x-ms-version: 2012-03-01
Content-Type: application/xml
Body
<RestartRoleOperation xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
<OperationType>RestartRoleOperation</OperationType>
</RestartRoleOperation>

More details about this HTTP request are here: http://msdn.microsoft.com/en-us/library/windowsazure/jj157197.aspx
C# Code

async public Task<String> RebootVM(String ServiceName, String RoleName)
{
    String requestID = String.Empty;

    String deployment = await GetAzureDeploymentName(ServiceName);
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roleInstances/{3}/Operations", _subscriptionid, ServiceName, deployment, RoleName);

    HttpClient http = GetHttpClient();

    XElement srcTree = new XElement("RestartRoleOperation",
        new XAttribute(XNamespace.Xmlns + "i", ns1),
        new XElement("OperationType", "RestartRoleOperation")
    );
    ApplyNamespace(srcTree, ns);

    XDocument CSXML = new XDocument(srcTree);
    HttpContent content = new StringContent(CSXML.ToString());
    content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");

    HttpResponseMessage responseMsg = await http.PostAsync(uri, content);
    if (responseMsg != null)
    {
        requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault();
    }
    return requestID;
}

(5) Delete Virtual Machine
You can delete a hosted virtual machine by deleting its deployment, but I prefer to delete its hosted service as well, so you can easily manage your virtual machines from code.
5.1 Delete Deployment
Request URI
https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>
HTTP Method
DELETE (HTTP 1.1)
Headers
x-ms-version: 2012-03-01
Body
None.
C# code

async public Task<HttpResponseMessage> DeleteDeployment(string deploymentName)
{
    // by the naming convention used in this post, the service name and deployment name are the same
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}", _subscriptionid, deploymentName, deploymentName);
    HttpClient http = GetHttpClient();
    HttpResponseMessage responseMessage = await http.DeleteAsync(uri);
    return responseMessage;
}

5.2 Delete Hosted Service
Request URI
https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>
HTTP Method
DELETE (HTTP 1.1)
Headers
x-ms-version: 2012-03-01
Body
None.
C# code

async public Task<HttpResponseMessage> DeleteService(string serviceName)
{
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}", _subscriptionid, serviceName);
    Log.Info("Windows Azure URI (http DELETE verb): " + uri, typeof(VMManager));
    HttpClient http = GetHttpClient();
    HttpResponseMessage responseMessage = await http.DeleteAsync(uri);
    return responseMessage;
}

And the following method can be used to delete both the deployment and the service (the PollGetOperationStatus helper it relies on is not part of Michael's demo; a sketch of it appears after the references below):

async public Task<string> DeleteVM(string vmName)
{
    string responseString = string.Empty;

    // as a convention in this post, a unified name is used for the service, deployment and VM instance to make VMs easy to manage
    HttpResponseMessage responseMessage = await DeleteDeployment(vmName);

    if (responseMessage != null)
    {
        string requestID = responseMessage.Headers.GetValues("x-ms-request-id").FirstOrDefault();
        OperationResult result = await PollGetOperationStatus(requestID, 5, 120);
        if (result.Status == OperationStatus.Succeeded)
        {
            responseString = result.Message;
            HttpResponseMessage sResponseMessage = await DeleteService(vmName);
            if (sResponseMessage != null)
            {
                // poll on the request id of the service deletion, not the earlier deployment deletion
                string sRequestID = sResponseMessage.Headers.GetValues("x-ms-request-id").FirstOrDefault();
                OperationResult sResult = await PollGetOperationStatus(sRequestID, 5, 120);
                responseString += sResult.Message;
            }
        }
        else
        {
            responseString = result.Message;
        }
    }
    return responseString;
}

Note: This article is subject to updates. Hisham

References
Advanced Windows Azure IaaS – Demo Code
Windows Azure Service Management REST API Reference
Introduction to the Azure Platform
Representational state transfer
Asynchronous Programming with Async and Await (C# and Visual Basic)
HttpClient Class
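As promised, here is a minimal sketch of the PollGetOperationStatus helper used by DeleteVM above. It is not part of the original demo, so the OperationResult/OperationStatus types and the message strings are assumptions of mine; the REST call it polls is the documented Get Operation Status operation (https://management.core.windows.net/<subscription-id>/operations/<request-id>), and it reuses the GetHttpClient method and ns namespace from earlier:

public enum OperationStatus { InProgress, Succeeded, Failed }

public class OperationResult
{
    public OperationStatus Status { get; set; }
    public string Message { get; set; }
}

async public Task<OperationResult> PollGetOperationStatus(string requestId, int pollIntervalSeconds, int timeoutSeconds)
{
    var result = new OperationResult { Status = OperationStatus.InProgress };
    String uri = String.Format("https://management.core.windows.net/{0}/operations/{1}", _subscriptionid, requestId);
    HttpClient http = GetHttpClient();
    DateTime deadline = DateTime.UtcNow.AddSeconds(timeoutSeconds);

    while (DateTime.UtcNow < deadline)
    {
        // the response body is an <Operation> element whose <Status> is InProgress, Succeeded or Failed
        Stream responseStream = await http.GetStreamAsync(uri);
        XDocument xml = XDocument.Load(responseStream);
        string status = xml.Root.Element(ns + "Status").Value;

        if (status == "Succeeded" || status == "Failed")
        {
            result.Status = (status == "Succeeded") ? OperationStatus.Succeeded : OperationStatus.Failed;
            result.Message = String.Format("Operation {0} {1}. ", requestId, status);
            return result;
        }

        await Task.Delay(TimeSpan.FromSeconds(pollIntervalSeconds));
    }

    result.Message = String.Format("Operation {0} still in progress after {1} seconds. ", requestId, timeoutSeconds);
    return result;
}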

    Read the article

  • Solaris 11.1: Changes to included FOSS packages

    - by alanc
    Besides the documentation changes I mentioned last time, another place you can see Solaris 11.1 changes before upgrading is in the online package repository, now that the 11.1 packages have been published to http://pkg.oracle.com/solaris/release/, as the “0.175.1.0.0.24.2” branch. (Oracle Solaris Package Versioning explains what each field in that version string means.) When you’re ready to upgrade to the packages from either this repo, or the support repository, you’ll want to first read How to Update to Oracle Solaris 11.1 Using the Image Packaging System by Pete Dennis, as there are a couple issues you will need to be aware of to do that upgrade, several of which are due to changes in the Free and Open Source Software (FOSS) packages included with Solaris, as I’ll explain in a bit. Solaris 11 can update more readily than Solaris 10 In the Solaris 10 and older update models, the way the updates were built constrained what changes we could make in those releases. To change an existing SVR4 package in those releases, we created a Solaris Patch, which applied to a given version of the SVR4 package and replaced, added or deleted files in it. These patches were released via the support websites (originally SunSolve, now My Oracle Support) for applying to existing Solaris 10 installations, and were also merged into the install images for the next Solaris 10 update release. (This Solaris Patches blog post from Gerry Haskins dives deeper into that subject.) Some of the restrictions of this model were that package refactoring, changes to package dependencies, and even just changing the package version number, were difficult to do in this hybrid patch/OS update model. For instance, when Solaris 10 first shipped, it had the Xorg server from X11R6.8. Over the first couple years of update releases we were able to keep it up to date by replacing, adding, & removing files as necessary, taking it all the way up to Xorg server release 1.3 (new version numbering begun after the X11R7 split of the X11 tree into separate modules gave each module its own version). But if you run pkginfo on the SUNWxorg-server package, you’ll see it still displayed a version number of 6.8, confusing users as to which version was actually included. We stopped upgrading the Xorg server releases in Solaris 10 after 1.3, as later versions added new dependencies, such as HAL, D-Bus, and libpciaccess, which were very difficult to manage in this patching model. (We later got libpciaccess to work, but HAL & D-Bus would have been much harder due to the greater dependency tree underneath those.) Similarly, every time the GNOME team looked into upgrading Solaris 10 past GNOME 2.6, they found these constraints made it so difficult it wasn’t worthwhile, and eventually GNOME’s dependencies had changed enough it was completely infeasible. Fortunately, this worked out for both the X11 & GNOME teams, with our management making the business decision to concentrate on the “Nevada” branch for desktop users - first as Solaris Express Desktop Edition, and later as OpenSolaris, so we didn’t have to fight to try to make the package updates fit into these tight constraints. Meanwhile, the team designing the new packaging system for Solaris 11 was seeing us struggle with these problems, and making this much easier to manage for both the development teams and our users was one of their big goals for the IPS design they were working on. 
Now that we’ve reached the first update release to Solaris 11, we can start to see the fruits of their labors, with more FOSS updates in 11.1 than we had in many Solaris 10 update releases, keeping software more up to date with the upstream communities. Of course, just because we can more easily update now doesn’t always mean we should or will do so; it just stops the package system's limitations from forcing the decision for us. So while we’ve upgraded the X Window System in the 11.1 release from X11R7.6 to 7.7, the Solaris GNOME team decided it was not the right time to try to make the jump from GNOME 2 to GNOME 3, though they did update some individual components of the desktop, especially those with security fixes like Firefox. In other parts of the system, decisions as to what to update were prioritized based on how they affected other projects, or what customer requests we’d gotten for them. So with all that background in place, what packages did we actually update or add between Solaris 11.0 and 11.1? Core OS Functionality One of the FOSS changes with the biggest impact in this release is the upgrade from Grub Legacy (0.97) to Grub 2 (1.99) for the x64 platform boot loader. This is the cause of one of the upgrade quirks, since to go from Solaris 11.0 to 11.1 on x64 systems, you first need to update the Boot Environment tools (such as beadm) to a new version that can handle boot environments that use the Grub2 boot loader. System administrators can find the details they need to know about the new Grub in the Administering the GRand Unified Bootloader chapter of the Booting and Shutting Down Oracle Solaris 11.1 Systems guide. This change was necessary to be able to support new hardware coming into the x64 marketplace, including systems using UEFI firmware or booting off disk drives larger than 2 terabytes. For both platforms, Solaris 11.1 adds rsyslog as an optional alternative to the traditional syslogd, and OpenSCAP for checking that security configuration settings are compliant with site policies. Note that the support repo actually has newer versions of BIND & fetchmail than the 11.1 release, as some late-breaking critical fixes came through from the community upstream releases after the Solaris 11.1 release was frozen, and made their way to the support repository. These are responsible for the other big upgrade quirk in this release: to upgrade a system which already installed those versions from the support repo, you need to either wait for those packages to make their way to the 11.1 branch of the support repo, or follow the steps in the aforementioned upgrade walkthrough to let the package system know it's okay to temporarily downgrade them. Developer Stack While Solaris 11.0 included Python 2.7, many of the bundled python modules weren’t packaged for it yet, limiting its usability. For 11.1, many more of the python modules include 2.7 versions (enough that I filtered them out of the table below, but you can always search on the package repository server for them). For other language runtimes and development tools, 11.1 expands the use of IPS mediated links to choose which version of a package is the default when the packages are designed to allow multiple versions to install side by side. For instance, in Solaris 11.0, GNU automake 1.9 and 1.10 were provided, and developers had to run them as either automake-1.9 or automake-1.10. 
When automake 1.11 was added in Solaris 11.1, a /usr/bin/automake mediated link was also added, which points to the automake-1.11 program by default but can be changed to another version by running the pkg set-mediator command. Mediated links were also used for the Java runtime & development kits in 11.1, changing the default versions to the Java 7 releases (the 1.7.0.x package versions), while allowing admins to switch links such as /usr/bin/javac back to Java 6 if they need to for their site, to deal with Java 7 compatibility or other issues, without having to update each usage to use the fully versioned /usr/jdk/jdk1.6.0_35/bin/javac paths for every invocation. Desktop Stack As I mentioned before, we upgraded from X11R7.6 to X11R7.7, since a pleasant coincidence made the X.Org release dates line up nicely with our feature & code freeze dates for this release. (Or perhaps it wasn’t so coincidental; after all, one of the benefits of being the person making the release is being able to decide what schedule is most convenient for you, and this one worked well for me.) For the table below, I’ve skipped listing the packages in which we use the X11 “katamari” version for the Solaris package version (mainly packages combining elements of multiple upstream modules with independent version numbers), since they all just changed from 7.6 to 7.7. In the graphics drivers, we worked with Intel to update the Intel Integrated Graphics Processor support to provide 3D graphics and kernel mode setting on the Ivy Bridge chipsets, and updated Nvidia’s non-FOSS graphics driver from 280.13 to 295.20. Higher up in the desktop stack, PulseAudio was added for audio support, and liblouis for Braille support, and the GNOME applications were built to use them. The Mozilla applications Firefox & Thunderbird moved to the current Extended Support Release (ESR) versions, 10.x for each, to bring up-to-date security fixes without having to be on Mozilla’s aggressive six-week feature release train. Detailed list of changes This table shows most of the changes to the FOSS packages between Solaris 11.0 and 11.1. As noted above, some were excluded for clarity, or to reduce noise and duplication. FOSS packages which didn't change the version number in their packaging info are not included, even if they had updates to fix bugs, security holes, or add support for new hardware or new features of Solaris. 
Package  11.0  11.1
archiver/unrar  3.8.5  4.1.4
audio/sox  14.3.0  14.3.2
backup/rdiff-backup  1.2.1  1.3.3
communication/im/pidgin  2.10.0  2.10.5
compress/gzip  1.3.5  1.4
compress/xz  not included  5.0.1
database/sqlite-3  3.7.6.3  3.7.11
desktop/remote-desktop/tigervnc  1.0.90  1.1.0
desktop/window-manager/xcompmgr  1.1.5  1.1.6
desktop/xscreensaver  5.12  5.15
developer/build/autoconf  2.63  2.68
developer/build/autoconf/xorg-macros  1.15.0  1.17
developer/build/automake-111  not included  1.11.2
developer/build/cmake  2.6.2  2.8.6
developer/build/gnu-make  3.81  3.82
developer/build/imake  1.0.4  1.0.5
developer/build/libtool  1.5.22  2.4.2
developer/build/makedepend  1.0.3  1.0.4
developer/documentation-tool/doxygen  1.5.7.1  1.7.6.1
developer/gnu-binutils  2.19  2.21.1
developer/java/jdepend  not included  2.9
developer/java/jdk-6  1.6.0.26  1.6.0.35
developer/java/jdk-7  1.7.0.0  1.7.0.7
developer/java/jpackage-utils  not included  1.7.5
developer/java/junit  4.5  4.10
developer/lexer/jflex  not included  1.4.1
developer/parser/byaccj  not included  1.14
developer/parser/java_cup  not included  0.10
developer/quilt  0.47  0.60
developer/versioning/git  1.7.3.2  1.7.9.2
developer/versioning/mercurial  1.8.4  2.2.1
developer/versioning/subversion  1.6.16  1.7.5
diagnostic/constype  1.0.3  1.0.4
diagnostic/nmap  5.21  5.51
diagnostic/scanpci  0.12.1  0.13.1
diagnostic/wireshark  1.4.8  1.8.2
diagnostic/xload  1.1.0  1.1.1
editor/gnu-emacs  23.1  23.4
editor/vim  7.3.254  7.3.600
file/lndir  1.0.2  1.0.3
image/editor/bitmap  1.0.5  1.0.6
image/gnuplot  4.4.0  4.6.0
image/library/libexif  0.6.19  0.6.21
image/library/libpng  1.4.8  1.4.11
image/library/librsvg  2.26.3  2.34.1
image/xcursorgen  1.0.4  1.0.5
library/audio/pulseaudio  not included  1.1
library/cacao  2.3.0.0  2.3.1.0
library/expat  2.0.1  2.1.0
library/gc  7.1  7.2
library/graphics/pixman  0.22.0  0.24.4
library/guile  1.8.4  1.8.6
library/java/javadb  10.5.3.0  10.6.2.1
library/java/subversion  1.6.16  1.7.5
library/json-c  not included  0.9
library/libedit  not included  3.0
library/libee  not included  0.3.2
library/libestr  not included  0.1.2
library/libevent  1.3.5  1.4.14.2
library/liblouis  not included  2.1.1
library/liblouisxml  not included  2.1.0
library/libtecla  1.6.0  1.6.1
library/libtool/libltdl  1.5.22  2.4.2
library/nspr  4.8.8  4.8.9
library/openldap  2.4.25  2.4.30
library/pcre  7.8  8.21
library/perl-5/subversion  1.6.16  1.7.5
library/python-2/jsonrpclib  not included  0.1.3
library/python-2/lxml  2.1.2  2.3.3
library/python-2/nose  not included  1.1.2
library/python-2/pyopenssl  not included  0.11
library/python-2/subversion  1.6.16  1.7.5
library/python-2/tkinter-26  2.6.4  2.6.8
library/python-2/tkinter-27  2.7.1  2.7.3
library/security/nss  4.12.10  4.13.1
library/security/openssl  1.0.0.5 (1.0.0e)  1.0.0.10 (1.0.0j)
mail/thunderbird  6.0  10.0.6
network/dns/bind  9.6.3.4.3  9.6.3.7.2
package/pkgbuild  not included  1.3.104
print/filter/enscript  not included  1.6.4
print/filter/gutenprint  5.2.4  5.2.7
print/lp/filter/foomatic-rip  3.0.2  4.0.15
runtime/java/jre-6  1.6.0.26  1.6.0.35
runtime/java/jre-7  1.7.0.0  1.7.0.7
runtime/perl-512  5.12.3  5.12.4
runtime/python-26  2.6.4  2.6.8
runtime/python-27  2.7.1  2.7.3
runtime/ruby-18  1.8.7.334  1.8.7.357
runtime/tcl-8/tcl-sqlite-3  3.7.6.3  3.7.11
security/compliance/openscap  not included  0.8.1
security/nss-utilities  4.12.10  4.13.1
security/sudo  1.8.1.2  1.8.4.5
service/network/dhcp/isc-dhcp  4.1  4.1.0.6
service/network/dns/bind  9.6.3.4.3  9.6.3.7.2
service/network/ftp (ProFTPD)  1.3.3.0.5  1.3.3.0.7
service/network/samba  3.5.10  3.6.6
shell/conflict  0.2004.9.1  0.2010.6.27
shell/pipe-viewer  1.1.4  1.2.0
shell/zsh  4.3.12  4.3.17
system/boot/grub  0.97  1.99
system/font/truetype/liberation  1.4  1.7.2
system/library/freetype-2  2.4.6  2.4.9
system/library/libnet  1.1.2.1  1.1.5
system/management/cim/pegasus  2.9.1  2.11.0
system/management/ipmitool  1.8.10  1.8.11
system/management/wbem/wbemcli  1.3.7  1.3.9.1
system/network/routing/quagga  0.99.8  0.99.19
system/rsyslog  not included  6.2.0
terminal/luit  1.1.0  1.1.1
text/convmv  1.14  1.15
text/gawk  3.1.5  3.1.8
text/gnu-grep  2.5.4  2.10
web/browser/firefox  6.0.2  10.0.6
web/browser/links  1.0  1.0.3
web/java-servlet/tomcat  6.0.33  6.0.35
web/php-53  not included  5.3.14
web/php-53/extension/php-apc  not included  3.1.9
web/php-53/extension/php-idn  not included  0.2.0
web/php-53/extension/php-memcache  not included  3.0.6
web/php-53/extension/php-mysql  not included  5.3.14
web/php-53/extension/php-pear  not included  5.3.14
web/php-53/extension/php-suhosin  not included  0.9.33
web/php-53/extension/php-tcpwrap  not included  1.1.3
web/php-53/extension/php-xdebug  not included  2.2.0
web/php-common  not included  11.1
web/proxy/squid  3.1.8  3.1.18
web/server/apache-22  2.2.20  2.2.22
web/server/apache-22/module/apache-sed  2.2.20  2.2.22
web/server/apache-22/module/apache-wsgi  not included  3.3
x11/diagnostic/xev  1.1.0  1.2.0
x11/diagnostic/xscope  1.3  1.3.1
x11/documentation/xorg-docs  1.6  1.7
x11/keyboard/xkbcomp  1.2.3  1.2.4
x11/library/libdmx  1.1.1  1.1.2
x11/library/libdrm  2.4.25  2.4.32
x11/library/libfontenc  1.1.0  1.1.1
x11/library/libfs  1.0.3  1.0.4
x11/library/libice  1.0.7  1.0.8
x11/library/libsm  1.2.0  1.2.1
x11/library/libx11  1.4.4  1.5.0
x11/library/libxau  1.0.6  1.0.7
x11/library/libxcb  1.7  1.8.1
x11/library/libxcursor  1.1.12  1.1.13
x11/library/libxdmcp  1.1.0  1.1.1
x11/library/libxext  1.3.0  1.3.1
x11/library/libxfixes  4.0.5  5.0
x11/library/libxfont  1.4.4  1.4.5
x11/library/libxft  2.2.0  2.3.1
x11/library/libxi  1.4.3  1.6.1
x11/library/libxinerama  1.1.1  1.1.2
x11/library/libxkbfile  1.0.7  1.0.8
x11/library/libxmu  1.1.0  1.1.1
x11/library/libxmuu  1.1.0  1.1.1
x11/library/libxpm  3.5.9  3.5.10
x11/library/libxrender  0.9.6  0.9.7
x11/library/libxres  1.0.5  1.0.6
x11/library/libxscrnsaver  1.2.1  1.2.2
x11/library/libxtst  1.2.0  1.2.1
x11/library/libxv  1.0.6  1.0.7
x11/library/libxvmc  1.0.6  1.0.7
x11/library/libxxf86vm  1.1.1  1.1.2
x11/library/mesa  7.10.2  7.11.2
x11/library/toolkit/libxaw7  1.0.9  1.0.11
x11/library/toolkit/libxt  1.0.9  1.1.3
x11/library/xtrans  1.2.6  1.2.7
x11/oclock  1.0.2  1.0.3
x11/server/xdmx  1.10.3  1.12.2
x11/server/xephyr  1.10.3  1.12.2
x11/server/xorg  1.10.3  1.12.2
x11/server/xorg/driver/xorg-input-keyboard  1.6.0  1.6.1
x11/server/xorg/driver/xorg-input-mouse  1.7.1  1.7.2
x11/server/xorg/driver/xorg-input-synaptics  1.4.1  1.6.2
x11/server/xorg/driver/xorg-input-vmmouse  12.7.0  12.8.0
x11/server/xorg/driver/xorg-video-ast  0.91.10  0.93.10
x11/server/xorg/driver/xorg-video-ati  6.14.1  6.14.4
x11/server/xorg/driver/xorg-video-cirrus  1.3.2  1.4.0
x11/server/xorg/driver/xorg-video-dummy  0.3.4  0.3.5
x11/server/xorg/driver/xorg-video-intel  2.10.0  2.18.0
x11/server/xorg/driver/xorg-video-mach64  6.9.0  6.9.1
x11/server/xorg/driver/xorg-video-mga  1.4.13  1.5.0
x11/server/xorg/driver/xorg-video-openchrome  0.2.904  0.2.905
x11/server/xorg/driver/xorg-video-r128  6.8.1  6.8.2
x11/server/xorg/driver/xorg-video-trident  1.3.4  1.3.5
x11/server/xorg/driver/xorg-video-vesa  2.3.0  2.3.1
x11/server/xorg/driver/xorg-video-vmware  11.0.3  12.0.2
x11/server/xserver-common  1.10.3  1.12.2
x11/server/xvfb  1.10.3  1.12.2
x11/server/xvnc  1.0.90  1.1.0
x11/session/sessreg  1.0.6  1.0.7
x11/session/xauth  1.0.6  1.0.7
x11/session/xinit  1.3.1  1.3.2
x11/transset  0.9.1  1.0.0
x11/trusted/trusted-xorg  1.10.3  1.12.2
x11/x11-window-dump  1.0.4  1.0.5
x11/xclipboard  1.1.1  1.1.2
x11/xclock  1.0.5  1.0.6
x11/xfd  1.1.0  1.1.1
x11/xfontsel  1.0.3  1.0.4
x11/xfs  1.1.1  1.1.2

P.S. To get the version numbers for this table, I ran a quick perl script over the output from:

% pkg contents -H -r -t depend -a type=incorporate -o fmri \
    `pkg contents -H -r -t depend -a type=incorporate -o fmri entire@0.5.11,5.11-0.175.1.0.0.24` \
    | sort > /tmp/11.1
% pkg contents -H -r -t depend -a type=incorporate -o fmri \
    `pkg contents -H -r -t depend -a type=incorporate -o fmri entire@0.5.11,5.11-0.175.0.0.0.2` \
    | sort > /tmp/11.0

    Read the article

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
    In my previous post I demonstrated how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature of the Windows Azure platform: it's low cost, and it provides the IIS and IISNode components so that we can host our Node.js application through Git, FTP and WebMatrix without any configuration or component installation. But sometimes we need to use Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js application on a worker role. Below are some benefits of using a worker role.
- WAWS leverages IIS and IISNode to host the Node.js application, which runs in x86 WOW mode. This reduces performance compared with x64 in some cases.
- A WACS worker role does not need IIS, hence there are no IIS restrictions, such as the 8000-concurrent-request limitation.
- WACS provides more flexibility and control to developers. For example, we can RDP to the virtual machines of our worker role instances.
- WACS provides service configuration features which can be changed while the role is running.
- WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site, while in WACS we can have up to 20 instances in a subscription.
- Since with a WACS worker role we start the node process ourselves, we can control its input, output and error streams. We can also control the version of Node.js.

Run Node.js in Worker Role
Node.js can be started just from its executable file. This means that in Windows Azure we can have a worker role containing "node.exe" and the Node.js source files, and start node in the Run method of the worker role entry class. Let's create a new Windows Azure project in Visual Studio and add a new worker role. Since we need our worker role to execute "node.exe" with our application code, we need to add "node.exe" to our project. Right-click on the worker role project and add an existing item. By default Node.js is installed in the "Program Files\nodejs" folder, so we can navigate there and add "node.exe". Then we need to create the entry code for Node.js. In WAWS the entry file must be named "server.js", because it's hosted by IIS and IISNode, and IISNode only accepts "server.js". But here, since we control everything, we can choose any file as the entry code. For example, I created a new JavaScript file named "index.js" in the project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the "Add new item" context menu; we have to create a text file and then rename it to the JavaScript extension. After we add these two files we should set their "Copy to Output Directory" property to "Copy Always" or "Copy if Newer"; otherwise they will not be included in the package when deployed. Let's paste a very simple piece of Node.js code into "index.js", as below. As you can see, I created a web server listening on port 12345.
1: var http = require("http");
2: var port = 12345;
3: 
4: http.createServer(function (req, res) {
5: res.writeHead(200, { "Content-Type": "text/plain" });
6: res.end("Hello World\n");
7: }).listen(port);
8: 
9: console.log("Server running at port %d", port);
Then we need to start "node.exe" with this file when our worker role starts. This can be done in its Run method: I find the Node.js executable and the entry JavaScript file name, then create a new process to run it. Our worker role will wait for the process to exit. 
If everything is OK, once our web server is up the process will stay there listening for incoming requests and should not terminate. The code in the worker role looks like this.
1: public override void Run()
2: {
3: // This is a sample worker implementation. Replace with your logic.
4: Trace.WriteLine("NodejsHost entry point called", "Information");
5: 
6: // retrieve the node.exe and entry node.js source code file name.
7: var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe");
8: var js = "index.js";
9: 
10: // prepare the process starting of node.exe
11: var info = new ProcessStartInfo(node, js)
12: {
13: CreateNoWindow = false,
14: ErrorDialog = true,
15: WindowStyle = ProcessWindowStyle.Normal,
16: UseShellExecute = false,
17: WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot")
18: };
19: Trace.WriteLine(string.Format("{0} {1}", node, js), "Information");
20: 
21: // start the node.exe with entry code and wait for exit
22: var process = Process.Start(info);
23: process.WaitForExit();
24: }
Then we can run it locally. In the compute emulator UI the worker role started, it executed Node.js, and the Node.js window appeared. Open the browser to verify the website hosted by our worker role. Next let's deploy it to Azure, which needs some additional steps. First, we need to create an input endpoint. By default there's no endpoint defined in a worker role, so we open the role property window in Visual Studio and create a new input TCP endpoint on the port we want our website to use; in this case I will use 80. Even though we created a web server, we add a TCP endpoint to the worker role, since Node.js always listens on TCP rather than HTTP. Then change "index.js" so that our web server listens on 80.
1: var http = require("http");
2: var port = 80;
3: 
4: http.createServer(function (req, res) {
5: res.writeHead(200, { "Content-Type": "text/plain" });
6: res.end("Hello World\n");
7: }).listen(port);
8: 
9: console.log("Server running at port %d", port);
Then publish it to Windows Azure, and in a browser we can see our Node.js website running on a WACS worker role. We may encounter an error if we try to run our Node.js website on port 80 in the local emulator. This is because the compute emulator registered 80 and mapped the 80 endpoint to 81, but our Node.js cannot detect this mapping, so when it tries to listen on 80 it fails, since 80 is already in use.

Use NPM Modules
When we use WAWS to host Node.js, we can simply install the modules we need and then just publish or upload all files to WAWS. But if we are using a WACS worker role, we have to take some extra steps to make the modules work. Assume we plan to use "express" in our application. First of all we download and install this module through the NPM command. But after the install finishes, the files are just on disk, not included in the worker role project; if we deploy the worker role right now the module will not be packaged and uploaded to Azure. Hence we need to add them to the project. In the Solution Explorer window click the "Show all files" button, select the "node_modules" folder and in the context menu select "Include In Project". But that is not enough. We also need to set all files in this module to "Copy always" or "Copy if newer", so that they are uploaded to Azure along with "node.exe" and "index.js". This is a painful step, since there might be many files in a module. 
So I created a small tool which updates a C# project file, setting all of its items to "Copy always". The code is very simple.
1: static void Main(string[] args)
2: {
3: if (args.Length < 1)
4: {
5: Console.WriteLine("Usage: copyallalways [project file]");
6: return;
7: }
8: 
9: var proj = args[0];
10: File.Copy(proj, string.Format("{0}.bak", proj));
11: 
12: var xml = new XmlDocument();
13: xml.Load(proj);
14: var nsManager = new XmlNamespaceManager(xml.NameTable);
15: nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003");
16: 
17: // add the output setting to copy always
18: var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager);
19: UpdateNodes(contentNodes, xml, nsManager);
20: var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager);
21: UpdateNodes(noneNodes, xml, nsManager);
22: xml.Save(proj);
23: 
24: // remove the namespace attributes
25: var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>");
26: xml.LoadXml(content);
27: xml.Save(proj);
28: }
29: 
30: static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager)
31: {
32: foreach (XmlNode node in nodes)
33: {
34: var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager);
35: if (copyToOutputDirectoryNode == null)
36: {
37: var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null);
38: n.InnerText = "Always";
39: node.AppendChild(n);
40: }
41: else
42: {
43: if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0)
44: {
45: copyToOutputDirectoryNode.InnerText = "Always";
46: }
47: }
48: }
49: }
Please be careful when using this tool; I created it only for this demo, so do not use it directly in a production environment. Unload the worker role project and execute this tool with the worker role project file name as the command line argument; it will set all items to "Copy always". Then reload the worker role project. Now let's change "index.js" to use express.
1: var express = require("express");
2: var app = express();
3: 
4: var port = 80;
5: 
6: app.configure(function () {
7: });
8: 
9: app.get("/", function (req, res) {
10: res.send("Hello Node.js!");
11: });
12: 
13: app.get("/User/:id", function (req, res) {
14: var id = req.params.id;
15: res.json({
16: "id": id,
17: "name": "user " + id,
18: "company": "IGT"
19: });
20: });
21: 
22: app.listen(port);
Finally let's publish it and have a look in the browser.

Use Windows Azure SQL Database
We can use Windows Azure SQL Database (a.k.a. WASD) from Node.js as well when hosting on a worker role. Since we can control the version of Node.js, here we can use the x64 version of "node-sqlserver"; this is better than hosting Node.js on WAWS, which only supports x86. Just install the "node-sqlserver" module from NPM, copy "sqlserver.node" from the "Build\Release" folder to the "Lib" folder, include them in the worker role project, and run my tool to set them to "Copy always". Finally update "index.js" to use WASD. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Publish to azure and now we can see our Node.js is working with WASD through x64 version “node-sqlserver”.   Summary In this post I demonstrated how to host our Node.js in Windows Azure Cloud Service worker role. By using worker role we can control the version of Node.js, as well as the entry code. And it’s possible to do some pre jobs before the Node.js application started. It also removed the IIS and IISNode limitation. I personally recommended to use worker role as our Node.js hosting. But there are some problem if you use the approach I mentioned here. The first one is, we need to set all JavaScript files and module files as “Copy always” or “Copy if newer” manually. The second one is, in this way we cannot retrieve the cloud service configuration information. 
For example, we defined the endpoint in the worker role properties, but we also hardcoded the listening port in Node.js. Ideally Node.js would retrieve the endpoint itself, but I can tell you that won't work with the approach shown here. In the next post I will describe another way to execute "node.exe" and the Node.js application, so that we can get the cloud service configuration in Node.js. (One possible interim workaround is sketched below.) I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.   Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
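As a sketch of the interim workaround mentioned above (and not the approach described in the follow-up post): since the C# worker role can read the service configuration, it can look up the input endpoint via the RoleEnvironment API and pass the port to "node.exe" as a command line argument. The endpoint name "NodeEndpoint" below is an assumption; use whatever name you gave the input endpoint in the role properties.

// Sketch: read the input endpoint defined in the role properties and pass its
// port to node.exe instead of hardcoding it in index.js.
// "NodeEndpoint" is an assumed endpoint name, not one from the original post.
var endpoint = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["NodeEndpoint"].IPEndpoint;

var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe");
var args = string.Format("index.js {0}", endpoint.Port);

var info = new ProcessStartInfo(node, args)
{
    UseShellExecute = false,
    WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot")
};

// node.exe receives the port as a command line argument
var process = Process.Start(info);
process.WaitForExit();

On the Node.js side, "index.js" would then read the port from process.argv[2] rather than from a hardcoded constant.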

    Read the article

  • CodePlex Daily Summary for Saturday, January 01, 2011

    CodePlex Daily Summary for Saturday, January 01, 2011Popular ReleasesBloodSim: BloodSim - 1.3.0.0: - Added tally for number of boss swings and swing avoids - Removed a large number of options that were carried over from Beta and are no longer relevant - Changed stat entry to use Rating format for Dodge, Parry, Haste and Mastery - Rearranged Settings interface - BloodSim will now check for updates on startup and notify the user if a new version is available - Added option to Show/Hide the Simulation Log to increase speed during large simulationsEnhSim: EnhSim 2.2.8 ALPHA: 2.2.8 ALPHAThis release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Rebuilt Feral Spir...CBM-Command: Version 2.0 Beta 2 - 2010-12-31: This version fixes three major bugs in Version 2.0 Beta 1 Changes(64, 128, Plus/4) Changed the timing code back to version 1.7 because the clock() function in cc65 does not reflect accurate timing. (VIC, PET) The time(NULL) function is not available on these targets and thus had to be removed. Help Hot-Key Fixed - Now when you press either F1 or H you get the help file (if the CBM-Command disk is in the drive you started CBM-Command from). Updated Help File - the new help file by popmi...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.6 Released: Hi, Today we are releasing final version of Visifire, v3.6.6 with the following new feature: * TextDecorations property is implemented in Title for Chart. * TitleTextDecorations property is implemented in Axis. * MinPointHeight property is now applicable for Column and Bar Charts. Also this release includes few bug fixes: * ToolTipText property of DataSeries was not getting applied from Style. * Chart threw exception if IndicatorEnabled property was set to true and Too...StyleCop Compliant Visual Studio Code Snippets: Visual Studio Code Snippets - January 2011: StyleCop Compliant Visual Studio Code Snippets Visual Studio 2010 provides C# developers with 38 code snippets, enhancing developer productivty and increasing the consistency of the code. Within this project the original code snippets have been refactored to provide StyleCop compliant versions of the original code snippets while also adding many new code snippets. Within the January 2011 release you'll find 82 code snippets to make you more productive and the code you write more consistent!...WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.2: Version: 2.0.0.2 (Milestone 2): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Remark The sample applications are using Microsoft’s IoC container MEF. However, the WPF Application Framework (WAF) doesn’t force you to use the same IoC container in your application. 
You can use ...eCompany: eCompany v0.2.0 Build 63: Version 0.2.0 Build 63: Added Splash screen & about box Added downloading of currencies when eCompany launched for the first time (must close any bug caused by no currency rate existing) Added corp creation when eCompany launched for the first time (for now, you didn't need to edit the company.xml file manually) You just need to decompress file "eCompany v0.2.0.63.zip" into your current eCompany install directory.SQL Monitor - tracking sql server activities: SQL Monitor 3.0 alpha 8: 1. added truncate table/defrag index/check db functions 2. improved alert 3. fixed problem with alert causing config file corrupted(hopefully)Temporary Data Storage Folder: TDS Folder version 0.2 Beta: In this release following bugs are fixed: 'Send to' entry bug fixed Preferences bug fixedSilverlight File Upload and Download with Interlink: HSS Interlink v.2.1.300: Latest Release 2.1.300 - December 29th 2010 Change Log DownloadFileDialog Modified to support an absolute uri for the DownloadUri property, which is required for OOB support UploadFileDialog Modified to support an absolute uri for the UploadUri property, which is required for OOB support For existing users be sure to uninstall the older version prior to installing this version Note: The demo application is NOT included with the installer but can be reviewed here Stable release and ready...RDPAddins .NET: RDPAddins Alpha 2 (0.2.0.0): Second alpha release... Breaking changes (now this project has 0.2 version): now addin should implement RDPAddins.Common.IAddin and should me exported with RDPAddins.Common.AddinMetadataAtrribute !!!most of all old Addin base class method you can fide in IChannel or IUI interfeces see FileTransferAddin Now RDPAddins.Common.dll just provide interfaces for addin, channel, ui, and export metadata Whole implementation is in RDPAddins.exe RDPAddins.Common.dll has some documentation :) why all this...DocX: DocX v1.0.0.11: Building Examples projectTo build the Examples project, download DocX.dll and add it as a reference to the project. OverviewThis version of DocX contains many bug fixes, it is a serious step towards a stable release. Added1) Unit testing project, 2) Examples project, 3) To many bug fixes to list here, see the source code change list history.Cosmos (C# Open Source Managed Operating System): 71406: This is the second release supporting the full line of Visual Studio 2010 editions. Changes since release 71246 include: Debug info is now stored in a single .cpdb file (which is a Firebird database) Keyboard input works now (using Console.ReadLine) Console colors work (using Console.ForegroundColor and .BackgroundColor)AutoLoL: AutoLoL v1.5.0: Added the all new Masteries Browser which replaces the Quick Open combobox AutoLoL will now attemt to create file associations for mastery (*.lolm) files Each Mastery Build can now contain keywords that the Masteries Browser will use for filtering Changed the way AutoLoL detects if another instance is already running Changed the format of the mastery files to allow more information stored in* Dialogs will now focus the Ok or Cancel button which allows the user to press Return to clo...Paint.NET PSD Plugin: 1.6.0: Handling of layer masks has been greatly improved. Improved reliability. Many PSD files that previously loaded in as garbage will now load in correctly. Parallelized loading. PSD files containing layer masks will load in a bit quicker thanks to the removal of the sequential bottleneck. 
Hidden layers are no longer made visible on save. Many thanks to the users who helped expose the layer masks problem: Rob Horowitz, M_Lyons10. Please keep sending in those bug reports and PSD repro files!Facebook C# SDK: 4.1.1: From 4.1.1 Release: Authentication bug fix caused by facebook change (error with redirects in Safari) Authenticator fix, always returning true From 4.1.0 Release Lots of bug fixes Removed Dynamic Runtime Language dependencies from non-dynamic platforms. Samples included in release for ASP.NET, MVC, Silverlight, Windows Phone 7, WPF, WinForms, and one Visual Basic Sample Changed internal serialization to use Json.net BREAKING CHANGE: Canvas Session is no longer supported. Use Signed...Catel - WPF and Silverlight MVVM library: 1.0.0: And there it is, the final release of Catel, and it is no longer a beta version!Euro for Windows XP: ChangeRegionalSettings 1..0: *Rocket Framework (.Net 4.0): Rocket Framework for Windows V 1.0.0: Architecture is reviewed and adjusted in a way so that I can introduce the Web version and WPF version of this framework next. - Rocket.Core is introduced - Controller button functions revisited and updated - DB is renewed to suite the implemented features - Create New button functionality is changed - Add Question Handling featuresFlickr Wallpaper Rotator (for Windows desktop): Wallpaper Flickr 1.1: Some minor bugfixes (mostly covering when network connection is flakey, so I discovered them all while at my parents' house for Christmas).New Projects7-Up: PowerShell Scripts for Upgrading SharePoint 2007 to 2010: PowerShell scripts to automate the upgrade of SharePoint 2007 to SharePoint 2010 using a content database or hybrid upgrade approach.Apple Wireless Keyboard: Helper that Allows people use the Apple Wireless (or Wired possibly) Keyboard under Windows 7 without loosing the mac functionalityckTest: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttestCrude - .Net Dependency Management: Crude is light dependency management for .net, there was no dependency management solution for .net as Maven or Ivy until now. DeepTime: Event tacking, management and plotting site.Dev/Test Cloud Platform: Dev/Test Cloud Platform is implemented by Beyondsoft and Microsoft. The platform is dedicated to software development and test scenarios. With Dev/Test Cloud you can save your cost, improve work efficiency, and improve product quality.Image Viewer (wpf version): Image viewer helps windows users to review images on their computer. It is written i Csharp and wpf.LNOne: All common code, framework code, utility code in one.LOGL::GLib: LOGL::GLib is an OpenGL game library written in C++ that allows new C++/OpenGL programmers to focus more on the game and less on the details.MAPI.GUI: Graphical program uses function from mcopyapi.codeplex.com and mdeleteapi.codeplex.com console programs. Program support longPath (above 259 chars and less 32000 chars)Movie Collection Manager: Gerenciador que ajuda a manusear coleção de filmes. Possui uma interface super atraente e simples de usar. 
Ferramentas e tecnologias usadas: - Visual C# 2010 Express - SQL Server 2008 R2 Express - Windows Forms - Entity Framework OBS: Projeto concorrente do Desafio .NETMyStudioServer.com: Source code for the DotNetNuke http://MyStudioServer.com websiteNusya Tester: A software to create and use various tests with right answers explanations to help one in self-education. It's developed in C#Project Zylaphon: Project Zylaphon is a C# GPL Open Source Computer Aided Music Composition System. Its design goals include highly interactive composition, music computation, and analysis; system control to be provided by an A.I. goal based agenda and quasi Blackboard for maximum flexibility.Remember The Task: RememberTheTask makes it easier to remember what you have been doing the last hour. It's developed in Visual Basic.Rent Payments Scheduling: Register rent schedules and keep track of them.Simple MVVM Toolkit for Silverlight: Simple MVVM Toolkit makes it easier to develop Silverlight applications using the Model-View-ViewModel design pattern.Sparrow.NET HtmlTemplate: Its a template engine for generating web pages.(?????HTML?????,??????html Tag???????html????。)SQLDiagUI: SQLDiagUI provides a GUI for Microsoft SQlDiag Utility which allows users to create a configuration file and start/stop/schedule SQLDiag against a SQL Server.Test Control: testing frameworkTwicko: Simple twitter client.Unity3D Utilities: Unity3D Utilities provides additional functionality to Unity3D (3.0+) via extensions, utilities, mini-frameworks, etc.VBA Composite Controls Object Model: VBA Composite Controls encapsulate complex event-driven interactions between ActiveX controls and other objects in Microsoft Office. It's uses are limited only by the imagination, from binding controls together for updating, to creating unique user interfaces!World of Warcraft Backit up(wow backit up): a project that helps you backup and restore your settings in wow easilyYamma: Yet Another Money Management Application. Build in .NET 4, SQLCE, LINQ and the Entity Framework.

    Read the article

  • Cold Start

    - by antony.reynolds
    Well, we had snow drifts 3ft deep on Saturday, so it must be spring time.  In preparation for spring we decided to move the lawn tractor.  Of course, after sitting in the garage all winter it refused to start.  I then came into the office and needed to start my 11g SOA Suite installation.  I thought about this and decided my tractor might be cranky, but at least I can script the startup of my SOA Suite 11g installation. So with this in mind I created 6 scripts.  I created them for Linux but they should translate to Windows without too many problems.  This is left as an exercise to the reader; note that you will have to hardcode more than I did in the Linux scripts and create separate script files for the sqlplus and WLST sections. Order to start things I believe there should be order in all things, especially starting the SOA Suite.  So here is my preferred order. Start Database This is needed by EM and the rest of SOA Suite, so best to start it before the Admin Server and managed servers. Start Node Manager on all machines This is needed if you want the scripts to work across machines. Start Admin Server Once this is done, in theory you can manually start the managed servers using the WebLogic console, but then you have to wait for the console to be available.  Scripting it all is a quicker and easier way of starting. Start Managed Servers & Clusters Best to start them one per physical machine at a time to avoid undue load on the machines.  A non-clustered install will have just soa_server1 and bam_server1 by default.  Clusters will have at least SOA and BAM clusters that can be started as a group or individually.  I have provided scripts for standalone servers, but it is easy to change them to work with clusters. Starting Database I have provided a very primitive script (available here) to start the database, the listener and the DB console.  The section highlighted in red needs to match your database name.

#!/bin/sh
echo "##############################"
echo "# Setting Oracle Environment #"
echo "##############################"
. oraenv <<-EOF
orcl
EOF

echo "#####################"
echo "# Starting Database #"
echo "#####################"
sqlplus / as sysdba <<-EOF
startup
exit
EOF

echo "#####################"
echo "# Starting Listener #"
echo "#####################"
lsnrctl start

echo "######################"
echo "# Starting dbConsole #"
echo "######################"
emctl start dbconsole

read -p "Hit <enter> to continue"

Starting SOA Suite My script for starting the SOA Suite (available here) breaks the task down into five sections. Setting the Environment First set up the environment variables.  The variables highlighted in red probably need changing for your environment.

#!/bin/sh
echo "###########################"
echo "# Setting SOA Environment #"
echo "###########################"
export MW_HOME=~oracle/Middleware11gPS1
export WL_HOME=$MW_HOME/wlserver_10.3
export ORACLE_HOME=$MW_HOME/Oracle_SOA
export DOMAIN_NAME=soa_std_domain
export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME 
I again use nohup and redirect output.

echo "#########################"
echo "# Starting Admin Server #"
echo "#########################"
nohup ./startWebLogic.sh >logs/AdminServer.out 2>&1 &

Starting the Managed Servers

I then used WLST (the WebLogic Scripting Tool) to start the managed servers.  First I waited for the Admin Server to come up by putting a connect command in a loop.  I could have put the WLST commands into a separate script file, but I wanted to reduce the number of files I was using and so used redirected input (here-document syntax).

$ORACLE_HOME/common/bin/wlst.sh <<-EOF
import time
sleep=time.sleep

print "#####################################"
print "# Waiting for Admin Server to Start #"
print "#####################################"
while True:
  try:
    connect(adminServerName="AdminServer")
    break
  except:
    sleep(10)

I then start the SOA server and tell WLST to wait until it is started before returning.  If starting a cluster, the start command would be modified accordingly to start the SOA cluster (a sketch of the cluster variant appears at the end of this post).

print "#######################"
print "# Starting SOA Server #"
print "#######################"
start(name="soa_server1", block="true")

I then start the BAM server in the same way as the SOA server.

print "#######################"
print "# Starting BAM Server #"
print "#######################"
start(name="bam_server1", block="true")
EOF

Finally I let people know the servers are up, and wait for input in case I am running in a separate window, in which case the result would be lost without the read command.

echo "#####################"
echo "# SOA Suite Started #"
echo "#####################"
read -p "Hit <enter> to continue"

Stopping the SOA Suite

My script for shutting down the SOA Suite (available here) is basically the reverse of my startup script.  After setting the environment I connect to the Admin Server using WLST and shut down the managed servers and the Admin Server.  Again, the script would need modifying for a cluster.

Stopping the Servers

If I cannot connect to the Admin Server, I try to connect to the Node Manager instead, in case the Admin Server is down but the managed servers are up.
#!/bin/sh
echo "###########################"
echo "# Setting SOA Environment #"
echo "###########################"
export MW_HOME=~oracle/Middleware11gPS1
export WL_HOME=$MW_HOME/wlserver_10.3
export ORACLE_HOME=$MW_HOME/Oracle_SOA
export DOMAIN_NAME=soa_std_domain
export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME

cd $DOMAIN_HOME

$MW_HOME/Oracle_SOA/common/bin/wlst.sh <<-EOF
import os  # needed for the os.getenv() call below
try:
  print("#############################")
  print("# Connecting to AdminServer #")
  print("#############################")
  connect(username='weblogic',password='welcome1',url='t3://localhost:7001')
except:
  print "#########################################"
  print "#   Unable to connect to Admin Server   #"
  print "# Attempting to connect to Node Manager #"
  print "#########################################"
  nmConnect(domainName=os.getenv("DOMAIN_NAME"))

print "#######################"
print "# Stopping BAM Server #"
print "#######################"
shutdown('bam_server1')

print "#######################"
print "# Stopping SOA Server #"
print "#######################"
shutdown('soa_server1')

print "#########################"
print "# Stopping Admin Server #"
print "#########################"
shutdown('AdminServer')

disconnect()
nmDisconnect()
EOF

Stopping the Node Manager

I stopped the Node Manager by searching for the Java Node Manager process using the ps command and then killing that process.

echo "#########################"
echo "# Stopping Node Manager #"
echo "#########################"
kill -9 `ps -ef | grep java | grep NodeManager | awk '{print $2;}'`

echo "#####################"
echo "# SOA Suite Stopped #"
echo "#####################"
read -p "Hit <enter> to continue"

Stopping the Database

Again, my script for shutting down the database is the reverse of my start script.  It is available here.  The only change needed might be to the database name.

#!/bin/sh
echo "##############################"
echo "# Setting Oracle Environment #"
echo "##############################"
. oraenv <<-EOF
orcl
EOF

echo "######################"
echo "# Stopping dbConsole #"
echo "######################"
emctl stop dbconsole

echo "#####################"
echo "# Stopping Listener #"
echo "#####################"
lsnrctl stop

echo "#####################"
echo "# Stopping Database #"
echo "#####################"
sqlplus / as sysdba <<-EOF
shutdown immediate
exit
EOF

read -p "Hit <enter> to continue"

Cleaning Up

Cleaning SOA Suite

I often run tests and want to clean up all the log files.  The following script (available here) does this for the WebLogic servers in a given domain on a machine.  After setting the domain I just remove all files under the servers' logs directories.  It also cleans up the log files I created with my startup scripts.  These scripts could be enhanced to copy off the log files if you needed them, but in my test environments I don't need them and would prefer to reclaim the disk space.
#!/bin/sh
echo "###########################"
echo "# Setting SOA Environment #"
echo "###########################"
export MW_HOME=~oracle/Middleware11gPS1
export WL_HOME=$MW_HOME/wlserver_10.3
export ORACLE_HOME=$MW_HOME/Oracle_SOA
export DOMAIN_NAME=soa_std_domain
export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME

echo "##########################"
echo "# Cleaning SOA Log Files #"
echo "##########################"
cd $DOMAIN_HOME
rm -Rf logs/* servers/*/logs/*

read -p "Hit <enter> to continue"

Cleaning Database

I also created a script to clean up the dump files of an Oracle database instance and also the EM log files (available here).  This relies on the machine name being correct, as the EM log files are stored in a directory that is based on the hostname and the Oracle SID.

#!/bin/sh
echo "##############################"
echo "# Setting Oracle Environment #"
echo "##############################"
. oraenv <<-EOF
orcl
EOF

echo "#############################"
echo "# Cleaning Oracle Log Files #"
echo "#############################"
rm -Rf $ORACLE_BASE/admin/$ORACLE_SID/*dump/*
rm -Rf $ORACLE_HOME/`hostname`_$ORACLE_SID/sysman/log/*

read -p "Hit <enter> to continue"

Summary

I hope you find the above scripts useful.  They certainly stop me hanging around waiting for things to happen on my test machine, and they make it easy to run a test, change parameters, bounce the SOA Suite and clean the logs between runs so I can see exactly what is happening.  Now I need to get that mower started…
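Footnote for clustered installs (mentioned under Starting the Managed Servers above): the start commands change slightly.  Here is a possible variant; the cluster names SOA_Cluster and BAM_Cluster are assumptions and must be changed to match your domain.  WLST's start() takes an optional type argument, and "Cluster" starts every member of the named cluster.

$ORACLE_HOME/common/bin/wlst.sh <<-EOF
connect(username='weblogic', password='welcome1', url='t3://localhost:7001')
print "########################"
print "# Starting SOA Cluster #"
print "########################"
# type="Cluster" starts all members; block="true" waits for completion.
# The cluster names below are assumed, not taken from a real domain.
start(name="SOA_Cluster", type="Cluster", block="true")
print "########################"
print "# Starting BAM Cluster #"
print "########################"
start(name="BAM_Cluster", type="Cluster", block="true")
disconnect()
EOF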

    Read the article

  • sudo apt-get install mysql-server fails

    - by danwoods
Hi all, I'm coming from a fresh install of Ubuntu server 9.10 and trying to install mysql-server. When I run 'sudo apt-get install mysql-server' I get the following errors:

dan@dev:~$ sudo apt-get install mysql-server
[sudo] password for dan:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  libdbd-mysql-perl libdbi-perl libhtml-template-perl libnet-daemon-perl libplrpc-perl mysql-client-5.1 mysql-server-5.1
Suggested packages:
  dbishell libipc-sharedcache-perl tinyca
The following NEW packages will be installed:
  libdbd-mysql-perl libdbi-perl libhtml-template-perl libnet-daemon-perl libplrpc-perl mysql-client-5.1 mysql-server mysql-server-5.1
0 upgraded, 8 newly installed, 0 to remove and 0 not upgraded.
Need to get 16.5MB of archives.
After this operation, 39.0MB of additional disk space will be used.
Do you want to continue [Y/n]? y
Get:1 http://us.archive.ubuntu.com karmic/main libnet-daemon-perl 0.43-1 [46.9kB]
Get:2 http://us.archive.ubuntu.com karmic/main libplrpc-perl 0.2020-2 [36.0kB]
Get:3 http://us.archive.ubuntu.com karmic/main libdbi-perl 1.609-1 [800kB]
Get:4 http://us.archive.ubuntu.com karmic/main libdbd-mysql-perl 4.011-1ubuntu1 [136kB]
Get:5 http://us.archive.ubuntu.com karmic-updates/main mysql-client-5.1 5.1.37-1ubuntu5.1 [8,202kB]
Get:6 http://us.archive.ubuntu.com karmic-updates/main mysql-server-5.1 5.1.37-1ubuntu5.1 [7,186kB]
Get:7 http://us.archive.ubuntu.com karmic/main libhtml-template-perl 2.9-1 [65.8kB]
Get:8 http://us.archive.ubuntu.com karmic-updates/main mysql-server 5.1.37-1ubuntu5.1 [64.3kB]
Fetched 16.5MB in 1min 34s (175kB/s)
Preconfiguring packages ...
Selecting previously deselected package libnet-daemon-perl.
(Reading database ... 123083 files and directories currently installed.)
Unpacking libnet-daemon-perl (from .../libnet-daemon-perl_0.43-1_all.deb) ...
Selecting previously deselected package libplrpc-perl.
Unpacking libplrpc-perl (from .../libplrpc-perl_0.2020-2_all.deb) ...
Selecting previously deselected package libdbi-perl.
Unpacking libdbi-perl (from .../libdbi-perl_1.609-1_i386.deb) ...
Selecting previously deselected package libdbd-mysql-perl.
Unpacking libdbd-mysql-perl (from .../libdbd-mysql-perl_4.011-1ubuntu1_i386.deb) ...
Selecting previously deselected package mysql-client-5.1.
Unpacking mysql-client-5.1 (from .../mysql-client-5.1_5.1.37-1ubuntu5.1_i386.deb) ...
Selecting previously deselected package mysql-server-5.1.
Unpacking mysql-server-5.1 (from .../mysql-server-5.1_5.1.37-1ubuntu5.1_i386.deb) ...
Selecting previously deselected package libhtml-template-perl.
Unpacking libhtml-template-perl (from .../libhtml-template-perl_2.9-1_all.deb) ...
Selecting previously deselected package mysql-server.
Unpacking mysql-server (from .../mysql-server_5.1.37-1ubuntu5.1_all.deb) ...
Processing triggers for man-db ...
Processing triggers for ureadahead ...
ureadahead will be reprofiled on next reboot
Setting up libnet-daemon-perl (0.43-1) ...
Setting up libplrpc-perl (0.2020-2) ...
Setting up libdbi-perl (1.609-1) ...
Setting up libdbd-mysql-perl (4.011-1ubuntu1) ...
Setting up mysql-client-5.1 (5.1.37-1ubuntu5.1) ...
Setting up mysql-server-5.1 (5.1.37-1ubuntu5.1) ...
 * Stopping MySQL database server mysqld   [ OK ]
 * Starting MySQL database server mysqld   [fail]
invoke-rc.d: initscript mysql, action "start" failed.
dpkg: error processing mysql-server-5.1 (--configure):
 subprocess installed post-installation script returned error exit status 1
Setting up libhtml-template-perl (2.9-1) ...
dpkg: dependency problems prevent configuration of mysql-server:
 mysql-server depends on mysql-server-5.1; however:
  Package mysql-server-5.1 is not configured yet.
dpkg: error processing mysql-server (--configure):
 dependency problems - leaving unconfigured
No apport report written because the error message indicates its a followup error from a previous failure.
Errors were encountered while processing:
 mysql-server-5.1
 mysql-server
E: Sub-process /usr/bin/dpkg returned an error code (1)

What am I missing?

[Update] mysqld returns:

dan@dev:~$ sudo mysqld
[sudo] password for dan:
100220 12:18:17 [Note] Plugin 'FEDERATED' is disabled.
InnoDB: Unable to lock ./ibdata1, error: 11
InnoDB: Check that you do not already have another mysqld process
InnoDB: using the same InnoDB data or log files.
100220 12:18:17 InnoDB: Retrying to lock the first data file
InnoDB: Unable to lock ./ibdata1, error: 11
InnoDB: Check that you do not already have another mysqld process

This goes on for a while...

InnoDB: Unable to lock ./ibdata1, error: 11
InnoDB: Check that you do not already have another mysqld process
InnoDB: using the same InnoDB data or log files.
100220 12:19:57 InnoDB: Unable to open the first data file
InnoDB: Error in opening ./ibdata1
100220 12:19:57 InnoDB: Operating system error number 11 in a file operation.
InnoDB: Error number 11 means 'Resource temporarily unavailable'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.1/en/operating-system-error-codes.html
InnoDB: Could not open or create data files.
InnoDB: If you tried to add new data files, and it failed here,
InnoDB: you should now edit innodb_data_file_path in my.cnf back
InnoDB: to what it was, and remove the new ibdata files InnoDB created
InnoDB: in this failed attempt. InnoDB only wrote those files full of
InnoDB: zeros, but did not yet use them in any way. But be careful: do not
InnoDB: remove old data files which contain your precious data!
100220 12:19:57 [ERROR] Plugin 'InnoDB' init function returned error.
100220 12:19:57 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
100220 12:19:57 [ERROR] Can't start server: Bind on TCP/IP port: Address already in use
100220 12:19:57 [ERROR] Do you already have another mysqld server running on port: 3306 ?
100220 12:19:57 [ERROR] Aborting
100220 12:19:57 [Warning] Forcing shutdown of 1 plugins
100220 12:19:57 [Note] mysqld: Shutdown complete

How can I check what process is using port 3306?

[Update]: sudo netstat -anp | grep LISTEN returns

dan@dev:~$ sudo netstat -anp | grep LISTEN
[sudo] password for dan:
tcp   0  0 0.0.0.0:25       0.0.0.0:*  LISTEN  1372/master
tcp   0  0 127.0.0.1:3306   0.0.0.0:*  LISTEN  4391/mysqld
tcp   0  0 127.0.0.1:631    0.0.0.0:*  LISTEN  1409/cupsd
tcp6  0  0 ::1:631          :::*       LISTEN  1409/cupsd

[More updates]: I can log into mysql, if that makes a difference.
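For what it is worth, the netstat output above already answers the final question: PID 4391 is a mysqld instance that is already bound to 127.0.0.1:3306, which is why both the package's post-installation start and the manual mysqld run fail. A possible recovery sequence looks like this (a sketch only, assuming the stock init script shipped by mysql-server-5.1):

# Confirm which process owns port 3306 (here: 4391/mysqld)
sudo netstat -ltnp | grep ':3306'

# Stop the instance that is already running so the package's own
# start attempt can succeed
sudo /etc/init.d/mysql stop

# Re-run the interrupted configure steps and let apt finish the rest
sudo dpkg --configure -a
sudo apt-get -f install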

    Read the article

  • E: Sub-process /usr/bin/dpkg returned an error code (1) seems to be choking on kde-runtime-data version issue

    - by BMT
    12.04 LTS, on a dell mini 10. Install stable until about a week ago. Updated about 1x a week, sometimes more often. Several days ago, I booted up and the system was no longer working correctly. All these symptoms occurred simultaneously: Cannot run (exit on opening, every time): Update manager, software center, ubuntuOne, libreOffice. Vinagre autostarts on boot, no explanation, not set to startup with Ubuntu. Using apt-get to fix install results in the following: maura@pandora:~$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following package was automatically installed and is no longer required: libtelepathy-farstream2 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: gwibber gwibber-service kde-runtime-data software-center Suggested packages: gwibber-service-flickr gwibber-service-digg gwibber-service-statusnet gwibber-service-foursquare gwibber-service-friendfeed gwibber-service-pingfm gwibber-service-qaiku unity-lens-gwibber The following packages will be upgraded: gwibber gwibber-service kde-runtime-data software-center 4 upgraded, 0 newly installed, 0 to remove and 39 not upgraded. 20 not fully installed or removed. Need to get 0 B/5,682 kB of archives. After this operation, 177 kB of additional disk space will be used. Do you want to continue [Y/n]? debconf: Perl may be unconfigured (Can't locate Scalar/Util.pm in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /usr/lib/perl/5.14/Hash/Util.pm line 9. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/Hash/Util.pm line 9. Compilation failed in require at /usr/share/perl/5.14/fields.pm line 122. Compilation failed in require at /usr/share/perl5/Debconf/Log.pm line 10. Compilation failed in require at (eval 1) line 4. BEGIN failed--compilation aborted at (eval 1) line 4. ) -- aborting (Reading database ... 242672 files and directories currently installed.) Preparing to replace gwibber 3.4.1-0ubuntu1 (using .../gwibber_3.4.2-0ubuntu1_i386.deb) ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/gwibber_3.4.2-0ubuntu1_i386.deb (--unpack): subprocess new pre-removal script returned error exit status 1 Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace gwibber-service 3.4.1-0ubuntu1 (using .../gwibber-service_3.4.2-0ubuntu1_all.deb) ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/gwibber-service_3.4.2-0ubuntu1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace kde-runtime-data 4:4.8.3-0ubuntu0.1 (using .../kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb) ... Unpacking replacement kde-runtime-data ... dpkg: error processing /var/cache/apt/archives/kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb (--unpack): trying to overwrite '/usr/share/sounds', which is also in package sound-theme-freedesktop 0.7.pristine-2 dpkg-deb (subprocess): subprocess data was killed by signal (Broken pipe) dpkg-deb: error: subprocess <decompress> returned error exit status 2 Preparing to replace python-crypto 2.4.1-1 (using .../python-crypto_2.4.1-1_i386.deb) ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/python-crypto_2.4.1-1_i386.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace software-center 5.2.2.2 (using .../software-center_5.2.4_all.deb) ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/software-center_5.2.4_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace xdiagnose 2.5 (using .../archives/xdiagnose_2.5_all.deb) ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/xdiagnose_2.5_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/gwibber_3.4.2-0ubuntu1_i386.deb 
/var/cache/apt/archives/gwibber-service_3.4.2-0ubuntu1_all.deb /var/cache/apt/archives/kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb /var/cache/apt/archives/python-crypto_2.4.1-1_i386.deb /var/cache/apt/archives/software-center_5.2.4_all.deb /var/cache/apt/archives/xdiagnose_2.5_all.deb E: Sub-process /usr/bin/dpkg returned an error code (1) maura@pandora:~$ ^C maura@pandora:~$
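Two distinct failures are mixed together in this log: the Python runtime is broken (every pyclean/pycompile run dies with ImportError: No module named logging), and kde-runtime-data hits a file conflict with sound-theme-freedesktop over /usr/share/sounds. A tentative recovery sketch follows; note that the reinstall step can itself fail while Python is this badly broken, so treat it as a starting point rather than a definitive fix:

# Standard workaround for the file conflict: force the overwrite for the
# one conflicting package, using the .deb already in apt's cache.
sudo dpkg -i --force-overwrite \
    /var/cache/apt/archives/kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb

# The ImportError suggests a damaged /usr/lib/python2.7 standard library;
# reinstalling the core packages may restore the missing modules
# (logging, pyexpat).
sudo apt-get install --reinstall python2.7-minimal python2.7

# Then let dpkg/apt finish configuring the half-installed packages.
sudo dpkg --configure -a
sudo apt-get -f install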

    Read the article

  • SQL Monitor’s data repository: Alerts

    - by Chris Lambrou
In my previous post, I introduced the SQL Monitor data repository, and described how the monitored objects are stored in a hierarchy in the data schema, in a series of tables with a _Keys suffix. In this post I had planned to describe how the actual data for the monitored objects is stored in corresponding tables with _StableSamples and _UnstableSamples suffixes. However, I'm going to postpone that until my next post, as I've had a request from a SQL Monitor user to explain how alerts are stored.

In the SQL Monitor data repository, alerts are stored in tables belonging to the alert schema, which contains the following five tables:

alert.Alert
alert.Alert_Cleared
alert.Alert_Comment
alert.Alert_Severity
alert.Alert_Type

In this post, I'm only going to cover the alert.Alert and alert.Alert_Type tables. I may cover the other three tables in a later post. The most important table in this schema is alert.Alert, as each row in this table corresponds to a single alert. So let's have a look at it.

SELECT TOP 100 AlertId, AlertType, TargetObject, [Read], SubType
FROM alert.Alert
ORDER BY AlertId DESC;

 AlertId | AlertType | TargetObject | Read | SubType
 65550 | 39 | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:, | 1 | 0
 65549 | 38 | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:, | 1 | 0
 65548 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
 65547 | 15 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
 65546 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
 65545 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
 65544 | 15 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
 65543 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
 65542 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
 65541 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
 …

So what are we seeing here, then? Well, AlertId is an auto-incrementing identity column, so ORDER BY AlertId DESC ensures that we see the most recent alerts first. AlertType indicates the type of each alert, such as Job failed (6), Backup overdue (14) or Long-running query (12). The TargetObject column indicates which monitored object the alert is associated with. The Read column acts as a flag to indicate whether or not the alert has been read. And finally the SubType column is used in the case of a Custom metric (40) alert, to indicate which custom metric the alert pertains to.

Okay, now let's look at some of those columns in more detail. The AlertType column is an easy one to start with, and it brings us nicely to the next table, alert.Alert_Type. Let's have a look at what's in this table:

SELECT AlertType, Event, Monitoring, Name, Description
FROM alert.Alert_Type
ORDER BY AlertType;

 AlertType | Event | Monitoring | Name | Description
 1 | 0 | 0 | Processor utilization | Processor utilization (CPU) on a host machine stays above a threshold percentage for longer than a specified duration.
 2 | 1 | 0 | SQL Server error log entry | An error is written to the SQL Server error log with a severity level above a specified value.
 3 | 1 | 0 | Cluster failover | The active cluster node fails, causing the SQL Server instance to switch nodes.
 4 | 1 | 0 | Deadlock | SQL deadlock occurs.
 5 | 0 | 0 | Processor under-utilization | Processor utilization (CPU) on a host machine remains below a threshold percentage for longer than a specified duration.
 6 | 1 | 0 | Job failed | A job does not complete successfully (the job returns an error code).
 7 | 0 | 0 | Machine unreachable | Host machine (Windows server) cannot be contacted on the network.
 8 | 0 | 0 | SQL Server instance unreachable | The SQL Server instance is not running or cannot be contacted on the network.
 9 | 0 | 0 | Disk space | Disk space used on a logical disk drive is above a defined threshold for longer than a specified duration.
 10 | 0 | 0 | Physical memory | Physical memory (RAM) used on the host machine stays above a threshold percentage for longer than a specified duration.
 11 | 0 | 0 | Blocked process | SQL process is blocked for longer than a specified duration.
 12 | 0 | 0 | Long-running query | A SQL query runs for longer than a specified duration.
 14 | 0 | 0 | Backup overdue | No full backup exists, or the last full backup is older than a specified time.
 15 | 0 | 0 | Log backup overdue | No log backup exists, or the last log backup is older than a specified time.
 16 | 0 | 0 | Database unavailable | Database changes from Online to any other state.
 17 | 0 | 0 | Page verification | Torn Page Detection or Page Checksum is not enabled for a database.
 18 | 0 | 0 | Integrity check overdue | No entry for an integrity check (DBCC DBINFO returns no date for the dbi_dbccLastKnownGood field), or the last check is older than a specified time.
 19 | 0 | 0 | Fragmented indexes | Fragmentation level of one or more indexes is above a threshold percentage.
 24 | 0 | 0 | Job duration unusual | The duration of a SQL job deviates from its baseline duration by more than a threshold percentage.
 25 | 0 | 1 | Clock skew | System clock time on the Base Monitor computer differs from the system clock time on a monitored SQL Server host machine by a specified number of seconds.
 27 | 0 | 0 | SQL Server Agent Service status | The SQL Server Agent Service status matches the status specified.
 28 | 0 | 0 | SQL Server Reporting Service status | The SQL Server Reporting Service status matches the status specified.
 29 | 0 | 0 | SQL Server Full Text Search Service status | The SQL Server Full Text Search Service status matches the status specified.
 30 | 0 | 0 | SQL Server Analysis Service status | The SQL Server Analysis Service status matches the status specified.
 31 | 0 | 0 | SQL Server Integration Service status | The SQL Server Integration Service status matches the status specified.
 33 | 0 | 0 | SQL Server Browser Service status | The SQL Server Browser Service status matches the status specified.
 34 | 0 | 0 | SQL Server VSS Writer Service status | The SQL Server VSS Writer status matches the status specified.
 35 | 0 | 1 | Deadlock trace flag disabled | The monitored SQL Server's trace flag cannot be enabled.
 36 | 0 | 0 | Monitoring stopped (host machine credentials) | SQL Monitor cannot contact the host machine because authentication failed.
 37 | 0 | 0 | Monitoring stopped (SQL Server credentials) | SQL Monitor cannot contact the SQL Server instance because authentication failed.
 38 | 0 | 0 | Monitoring error (host machine data collection) | SQL Monitor cannot collect data from the host machine.
 39 | 0 | 0 | Monitoring error (SQL Server data collection) | SQL Monitor cannot collect data from the SQL Server instance.
 40 | 0 | 0 | Custom metric | The custom metric value has passed an alert threshold.
 41 | 0 | 0 | Custom metric collection error | SQL Monitor cannot collect custom metric data from the target object.
Basically, alert.Alert_Type is just a big reference table containing information about the 34 different alert types supported by SQL Monitor (note that the largest id is 41, not 34 – some alert types have been retired since SQL Monitor was first developed). The Name and Description columns are self evident, and I'm going to skip over the Event and Monitoring columns as they're not very interesting. The AlertType column is the primary key, and is referenced by the AlertType column in the alert.Alert table. As such, we can rewrite our earlier query to join these two tables, in order to provide a more readable view of the alerts:

SELECT TOP 100 AlertId, Name, TargetObject, [Read], SubType
FROM alert.Alert a
JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
ORDER BY AlertId DESC;

 AlertId | Name | TargetObject | Read | SubType
 65550 | Monitoring error (SQL Server data collection) | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:, | 0 | 0
 65549 | Monitoring error (host machine data collection) | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:, | 0 | 0
 65548 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
 65547 | Log backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
 65546 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
 65545 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
 65544 | Log backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
 65543 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
 65542 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
 65541 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0

Okay, the next column to discuss in the alert.Alert table is TargetObject. Oh boy, this one's a bit tricky! The TargetObject of an alert is a serialized string representation of the position, in the monitored object hierarchy, of the object to which the alert pertains. The serialization format is somewhat convenient for parsing in the C# source code of SQL Monitor, and has some helpful characteristics, but it's probably very awkward to manipulate in T-SQL. I could document the serialization format here, but it would be very dry reading, so perhaps it's best to consider an example from the table above. Have a look at the alert with an AlertId of 65543. It's a Backup overdue alert for the SqlMonitorData database running on the default instance of granger, my laptop. Each different alert type is associated with a specific type of monitored object in the object hierarchy (I described the hierarchy in my previous post). The Backup overdue alert is associated with databases, whose position in the object hierarchy is root → Cluster → SqlServer → Database. The TargetObject value identifies the target object by specifying the key properties at each level in the hierarchy, thus:

Cluster: Name = "granger"
SqlServer: Name = "" (an empty string, denoting the default instance)
Database: Name = "SqlMonitorData"

Well, look at the actual TargetObject value for this alert: "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,".
It is indeed composed of three parts, one for each level in the hierarchy:

Cluster: "7:Cluster,1,4:Name,s7:granger,"
SqlServer: "9:SqlServer,1,4:Name,s0:,"
Database: "8:Database,1,4:Name,s14:SqlMonitorData,"

Each part is handled in exactly the same way, so let's concentrate on the first part, "7:Cluster,1,4:Name,s7:granger,". It comprises the following:

"7:Cluster," – This identifies the level in the hierarchy.
"1," – This indicates how many different key properties there are to uniquely identify a cluster (we saw in my last post that each cluster is identified by a single property, its Name).
"4:Name,s7:granger," – This represents the Name property and its corresponding value, granger. It's split up like this:
  "4:Name," – Indicates the name of the key property.
  "s" – Indicates the type of the key property; in this case, it's a string.
  "7:granger," – Indicates the value of the property.

At this point, you might be wondering about the format of some of these strings. Why is the string "Cluster" stored as "7:Cluster,"? Well, an encoding scheme is used, which consists of the following:

"7" – This is the length of the string "Cluster".
":" – This is a delimiter between the length of the string and the actual string's contents.
"Cluster" – This is the string itself. 7 characters.
"," – This is a final terminating character that indicates the end of the encoded string.

You can see that "4:Name,", "8:Database," and "14:SqlMonitorData," also conform to the same encoding scheme. In the example above, the "s" character is used to indicate that the value of the Name property is a string. If you explore the TargetObject property of alerts in your own SQL Monitor data repository, you might find other characters used for other non-string key property values. The different value types you might possibly encounter are as follows:

"I" – Denotes a bigint value. For example, "I65432,".
"g" – Denotes a GUID value. For example, "g32116732-63ae-4ab5-bd34-7dfdfb084c18,".
"d" – Denotes a datetime value. For example, "d634815384796832438,". The value is stored as a bigint, rather than a native SQL datetime value. I'll describe how datetime values are handled in the SQL Monitor data repository in a future post.

I suggest you have a look at the alerts in your own SQL Monitor data repository for further examples, so you can see how the TargetObject values are composed for each of the different types of alert. Let me give one further example, though, that represents a Custom metric alert, as this will help in describing the final column of interest in the alert.Alert table, SubType. Let me show you the alert I'm interested in:

SELECT AlertId, a.AlertType, Name, TargetObject, [Read], SubType
FROM alert.Alert a
JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
WHERE AlertId = 65769;

 AlertId | AlertType | Name | TargetObject | Read | SubType
 65769 | 40 | Custom metric | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0 | 2

An AlertType value of 40 corresponds to the Custom metric alert type. The Name taken from the alert.Alert_Type table is simply Custom metric, but this doesn't tell us anything about the specific custom metric that this alert pertains to. That's where the SubType value comes in. For custom metric alerts, this provides us with the Id of the specific custom alert definition that can be found in the settings.CustomAlertDefinitions table.
I don't really want to delve into custom alert definitions yet (maybe in a later post), but an extra join in the previous query shows us that this alert pertains to the CPU pressure (avg runnable task count) custom metric alert.

SELECT AlertId, a.AlertType, at.Name, cad.Name AS CustomAlertName, TargetObject, [Read], SubType
FROM alert.Alert a
JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
JOIN settings.CustomAlertDefinitions cad ON a.SubType = cad.Id
WHERE AlertId = 65769;

 AlertId | AlertType | Name | CustomAlertName | TargetObject | Read | SubType
 65769 | 40 | Custom metric | CPU pressure (avg runnable task count) | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0 | 2

The TargetObject value in this case breaks down like this:

"7:Cluster,1,4:Name,s7:granger," – Cluster named "granger".
"9:SqlServer,1,4:Name,s0:," – SqlServer named "" (the default instance).
"8:Database,1,4:Name,s6:master," – Database named "master".
"12:CustomMetric,1,8:MetricId,I2," – Custom metric with an Id of 2.

Note that the hierarchy for a custom metric is slightly different compared to the earlier Backup overdue alert. It's root → Cluster → SqlServer → Database → CustomMetric. Also notice that, unlike Cluster, SqlServer and Database, the key property for CustomMetric is called MetricId (not Name), and its value is a bigint (not a string). Finally, delving into the custom metric tables is beyond the scope of this post, but for the sake of avoiding any future confusion, I'd like to point out that whilst the SubType references a custom alert definition, the MetricId value embedded in the TargetObject value references a custom metric definition. Although in this case both the custom metric definition and the custom alert definition share the same Id value of 2, this is not generally the case.

Okay, that's enough for now, not least because as I'm typing this it's almost 2am, I have to go to work tomorrow, and my alarm is set for 6am – eek! In my next post, I'll either cover the remaining three tables in the alert schema, or I'll delve into the way SQL Monitor stores its monitoring data, as I'd originally planned to cover in this post.
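As a footnote, the encoding scheme described above is easy to parse outside T-SQL. Here is a minimal sketch in Python, purely as an illustration (it is not part of SQL Monitor, and it only handles the s, I, g and d value types described in this post, returning non-string values as raw text):

def read_string(s, pos):
    # One length-prefixed token: '<len>:<chars>,'
    colon = s.index(':', pos)
    length = int(s[pos:colon])
    start = colon + 1
    value = s[start:start + length]
    assert s[start + length] == ','           # tokens are comma-terminated
    return value, start + length + 1

def parse_target_object(s):
    """Parse a TargetObject string into [(level, {property: value}), ...]."""
    result, pos = [], 0
    while pos < len(s):
        level, pos = read_string(s, pos)      # e.g. 'Cluster'
        comma = s.index(',', pos)
        nprops = int(s[pos:comma])            # number of key properties
        pos = comma + 1
        props = {}
        for _ in range(nprops):
            name, pos = read_string(s, pos)   # e.g. 'Name'
            kind, pos = s[pos], pos + 1       # s=string, I=bigint, g=guid, d=datetime
            if kind == 's':
                value, pos = read_string(s, pos)   # strings are length-prefixed
            else:
                comma = s.index(',', pos)          # I/g/d values just run to a comma
                value, pos = s[pos:comma], comma + 1
            props[name] = value
        result.append((level, props))
    return result

print(parse_target_object(
    "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,"
    "8:Database,1,4:Name,s14:SqlMonitorData,"))
# [('Cluster', {'Name': 'granger'}), ('SqlServer', {'Name': ''}),
#  ('Database', {'Name': 'SqlMonitorData'})]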

    Read the article

  • CodePlex Daily Summary for Thursday, June 30, 2011

    CodePlex Daily Summary for Thursday, June 30, 2011Popular ReleasesASP.NET Comet Ajax Library (Reverse Ajax - Server Push): Reverse Ajax Samples v1.53: 16 Comprehensive ASP.NET Ajax / Reverse Ajax / WCF / MVC / Mono samplesReactive Extensions - Extensions (Rxx): Rxx 1.1: What's NewRelated Work Items Please read the latest release notes for details about what's new. About LabsAll "Labs" downloads include the Rxx.dll assembly, so only a single download is required. To start RxxLabs.exe, right-mouse click and select Run as Administrator; otherwise, do not run the Reactive WebClient lab because it will crash the program. RxxLabs.exe requires administrator privileges for the Reactive WebClient lab to register a local HTTP port. To launch the Silverlight labs...CommonLibrary.NET: CommonLibrary.NET - 0.9.7 Final: A collection of very reusable code and components in C# 4.0 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. Samples in <root>\src\Lib\CommonLibrary.NET\Samples CommonLibrary.NET 0.9.7Documentation 6738 6503 New 6535 Enhancements 6759 6748 6583 6737datajs - JavaScript Library for data-centric web applications: datajs version 1.0.0: datajs is a cross-browser and UI agnostic JavaScript library that enables data-centric web applications with the following features: OData client that enables CRUD operations including batching and metadata support using both ATOM and JSON payloads. Single store abstraction that provides a common API on top of HTML5 local storage technologies. Data cache component that allows reading data ranges from a collection and storing them locally to reduce the number of network requests. Changes...Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.4.4: Fix for http://coding4fun.codeplex.com/workitem/6869 was incomplete. Back button wouldn't return app bar. Corrected now. High impact bugSiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.0.528.279): Added keyboard shortcuts: - Cut (CTRL+X) - Copy (CTRL+C) - Paste (CTRL+V) - Delete (CTRL+D) - Move up (CTRL+UP ARROW) - Move down (CTRL+DOWN ARROW) Added ability to save/load SiteMap from/to a Xml file on disk Bug fix: - Connect to a server through the status bar was throwing error "Object Reference not set to an instance of an object" - Rename TreeNode.Name after changing TreeNode.TextMicrosoft - Domain Oriented N-Layered .NET 4.0 App Sample: V2.01 ALPHA N-Layered SampleApp .NET 4.0 and EF4.1: V2.0.01 - ALPHARequired Software (Microsoft Base Software needed for Development environment) Visual Studio 2010 RTM & .NET 4.0 RTM (Final Versions) Expression Blend 4 SQL Server 2008 R2 Express/Standard/Enterprise Unity Application Block 2.0 - Published May 5th 2010 http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2D24F179-E0A6-49D7-89C4-5B67D939F91B&displaylang=en http://unity.codeplex.com/releases/view/31277 PEX & MOLES 0.94.51023.0, 29/Oct/2010 - Visual Studio 2010 Power ...Mosaic Project: Mosaic Alpha build 261: - Fixed crash when pinning applications in x64 OS - Added Hub to video widget. It shows videos from Video library (only .wmv and .avi). Can work slow if there are too much files. 
- Fixed some issues with scrolling - Fixed bug with html widgets - Fixed bug in Gmail widget - Added html today widget missed in previous release - Now Mosaic saves running widgets if you restarting from optionsEnhSim: EnhSim 2.4.9 BETA: 2.4.9 BETAThis release supports WoW patch 4.2 at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in some of th....NET Reflector Add-Ins: Reflector V7 Add-Ins: All the add-ins compiled for Reflector V7TerrariViewer: TerrariViewer v4.1 [4.0 Bug Fixes]: Version 4.1 ChangelogChanged how users will Open Player files (This change makes it much easier) This allowed me to remove the "Current player file" labels that were present Changed file control icons Added submit bug button Various Bug Fixes Fixed crashes related to clicking on buffs before a character is loaded Fixed crashes related to selecting "No Buff" when choosing a new buff Fixed crashes related to clicking on a "Max" button on the buff tab before a character is loaded Cor...AcDown????? - Anime&Comic Downloader: AcDown????? v3.0 Beta8: ??AcDown???????????????,?????????????????????。????????????????????,??Acfun、Bilibili、???、???、?????,???????????、???????。 AcDown???????????????????????????,???,???????????????????。 AcDown???????C#??,?????"Acfun?????"。 ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ??v3.0 Beta8 ?? ??????????????? ???????????????(??????????) ???????...BlogEngine.NET: BlogEngine.NET 2.5: Get DotNetBlogEngine for 3 Months Free! Click Here for More Info 3 Months FREE – BlogEngine.NET Hosting – Click Here! If you want to set up and start using BlogEngine.NET right away, you should download the Web project. If you want to extend or modify BlogEngine.NET, you should download the source code. If you are upgrading from a previous version of BlogEngine.NET, please take a look at the Upgrading to BlogEngine.NET 2.5 instructions. To get started, be sure to check out our installatio...PHP Manager for IIS: PHP Manager 1.2 for IIS 7: This release contains all the functionality available in 62183 plus the following additions: Command Line Support via PowerShell - now it is possible to manage and script PHP installations on IIS by using Windows PowerShell. More information is available at Managing PHP installations with PHP Manager command line. Detection and alert when using local PHP handler - if a web site or a directory has a local copy of PHP handler mapping then the configuration changes made on upper configuration ...MiniTwitter: 1.71: MiniTwitter 1.71 ???? ?? OAuth ???????????? ????????、??????????????????? 
???????????????????????SizeOnDisk: 1.0.10.0: Fix: issue 327: size format error when save settings Fix: some UI bindings trouble (sorting, refresh) Fix: user settings file deletion when corrupted Feature: TreeView virtualization (better speed with many folders) Feature: New file type DataGrid column Feature: In KByte view, show size of file < 1024B and > 0 with 3 decimal Feature: New language: Italian Task: Cleanup for speedRawr: Rawr 4.2.0: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr AddonWe now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including bag and bank items) like Char...N2 CMS: 2.2: * Web platform installer support available ** Nuget support available What's newDinamico Templates (beta) - an MVC3 & Razor based template pack using the template-first! development paradigm Boilerplate CSS & HTML5 Advanced theming with css comipilation (concrete, dark, roadwork, terracotta) Template-first! development style Content, news, listing, slider, image sizes, search, sitemap, globalization, youtube, google map Display Tokens - replaces text tokens with rendered content (usag...KinectNUI: Jun 25 Alpha Release: Initial public version. No installer needed, just run the EXE.Terraria World Viewer: Version 1.5: Update June 24th Made compatible with the new tiles found in Terraria 1.0.5New Projects{Adjunct} functionality for the .NET framework: A project to provide Ingots for the .NET framework.3Webee.net: 3Webee.net is the First Navigator dedicated to Web.3.0 by P2P. Developped in C# for DotNet-3.5 or Mono.net, compatible with Linux ready. Website.fr : http://3webee.net/ Download Win32 : http://3webee.net/Download/3Webee.net.beta.0.0.Win32.exe AB Donor Choose Planner: The goal of this doantion planner is to allow you to coordinate the completion of one or more Donors Choose projects. The tool gives you a portfolio view of the donations you would like to invest in by targeting your investments in a location/regional focused manner.appperu1: asdasdsaddas: ColinTestingasdfFindClone: Find clone files, find duplicate files, remove duplicate filesfirstcpapp: This is my appForecast Parser: The NOAA hosts forecast data accessible over the web. These libraries download and parse seven day hourly forecast data and encapsulates the data in an easy to use class.Future Apple Osx: Future Apple Osx, Is a free operating system to use. It was built from the ground up with the help of Cosmos. It is free to use and download. So please check it out today.Glimpse: Glimpse is a web debugger and diagnostics tool for ASP.NET and ASP.NET MVC. You can find out more at getGlimpse.comiAdm: ?? iToday ??? wince 6 R2 ?????ImagineCup Worldwide Finals Tracker: An open-source Windows Phone application that is used to track the events going on at the ImagineCup Worldwide Finals.John Owl: John OwlLAM - Local Area Messaging: A VB.NET local area chat application. Finds the lan clients and communicates through the lan with old Ms-Winsock Interop.MessyBrain: Organize your tasks in an efficient way. Assign tasks to members of your team. Create workflows for your team. Written in C# and ASP.NET MVC. Why did I start this project? Making mistakes is part of the learning process. 
They can be painful, especially when they are made at work; there's a cost attached to them. Why not start a project on my own? Mistakes will be less painful (only my ego will be damaged), I learn something new, and there isn't a cost attached. I can share the code ...

Navigation Light Toolkit: Navigation Light adds support for view navigation in WPF and Silverlight

Right Click Calculator: A mini calculator with a numpad that opens in a dialog box. It can be combined with a textbox. (A small example of a calculator that you can open with a right-click on a textbox.)

Shaaps & Ladders: It's a modern software implementation of the popular classic board game Snakes & Ladders. The Bengali translation of "Snakes" is pronounced "Shaaps", and thus the name of the game. The game is being developed on top of the Shaaps & Ladders GDK. Source for both is released.

SharePoint PowerShell Scripts: Useful PowerShell scripts written for SharePoint that help organizations with governance.

Smith Web Tools: Smith Web Tools are some useful controls to help build web applications. They are written in pure JavaScript and CSS without introducing any other JavaScript framework. At present, the Smith Calendar, Smith Editor, and Smith Dialog are available.

SSAS Query Log Decoder & Analyzer: The crisp description for the project will be "CUBE FOR CUBE". This is an end-to-end BI solution for analyzing the query log created by the SSAS server. The query log data is loaded into a dimensional model and a cube is built on top of it for analysis of cube usage.

takcandmansys: takcandmansys

TestHG1: TestHG1

TESTProjHG: TESTProjHG

TestTFS1: TestTFS1

TESTTFSAAA: TESTTFSAAA

Tower: tower show

Transit Feed Generator: Library to help integrate with Google Transit. This library generates the zipped file with data for the Google Transit feed, in the GTFS format

VRE Collaborator Search Kit for SharePoint 2010: VRE Collaborator Search Kit for SharePoint 2010

VRE Content Archiving Kit for SharePoint 2010: VRE Content Archiving Kit for SharePoint 2010

VRE Document Review Workflow Kit for SharePoint 2010: VRE Document Review Workflow Kit for SharePoint 2010

VRE Literature Review Kit for SharePoint 2010: VRE Literature Review Kit for SharePoint 2010

VRE Researcher and Project Templates for SharePoint 2010: VRE Researcher and Project Templates for SharePoint 2010

VRE RSS Feeds Kit for SharePoint 2010: VRE RSS Feeds Kit for SharePoint 2010

VRE User Administration (FBA) Kit for SharePoint 2010: VRE User Administration (FBA) Kit for SharePoint 2010

WebPart Collapser: WebPart Collapser is a lightweight, customizable jQuery plugin for SharePoint 2007 that allows visitors to expand/collapse WebParts. Through the use of cookies, the collapsed state of any webpart is saved and restored each time a user visits a page.

WPF Hex Editor: A hex editor created with WPF, with a XAML-designed UI.

Xoorscript: A proprietary scripting language created by Jared Thomson for the purpose of script-defining an easy-to-use make system. The plans for this project are minimal for now, but if things go well I may expand it. This is low priority for me.

    Read the article

  • Know more about Enqueue Deadlock Detection

    - by Liu Maclean
    A question came up in the ORACLE ALLSTAR group about enqueue deadlock detection: when two sessions deadlock on enqueue locks, why does the waiting process raise ORA-00060 after roughly 3 seconds? The following test demonstrates the behavior:
SQL> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
PL/SQL Release 10.2.0.5.0 - Production
CORE 10.2.0.5.0 Production
TNS for Linux: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production
SQL> select * from global_name;
GLOBAL_NAME
--------------------------------------------------------------------------------
www.oracledatabase12g.com
PROCESS A: set timing on; update maclean1 set t1=t1+1;
PROCESS B: update maclean2 set t1=t1+1;
PROCESS A: update maclean2 set t1=t1+1;
PROCESS B: update maclean1 set t1=t1+1;
After about 3s, PROCESS A reports:
ERROR at line 1: ORA-00060: deadlock detected while waiting for resource
Elapsed: 00:00:03.02
Process A waited about 3s before being chosen as the deadlock victim. This interval is not hard-coded; it is controlled by a hidden parameter, which we can locate as follows:
SQL> col name for a30
SQL> col value for a5
SQL> col DESCRIB for a50
SQL> set linesize 140 pagesize 1400
SQL> SELECT x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc describ
2 FROM SYS.x$ksppi x, SYS.x$ksppcv y
3 WHERE x.inst_id = USERENV ('Instance')
4 AND y.inst_id = USERENV ('Instance')
5 AND x.indx = y.indx
6 AND x.ksppinm='_enqueue_deadlock_scan_secs';
NAME VALUE DESCRIB
------------------------------ ----- --------------------------------------------------
_enqueue_deadlock_scan_secs 0 deadlock scan interval
SQL> alter system set "_enqueue_deadlock_scan_secs"=18 scope=spfile;
System altered.
Elapsed: 00:00:00.01
SQL> startup force;
ORACLE instance started.
Total System Global Area 851443712 bytes
Fixed Size 2100040 bytes
Variable Size 738198712 bytes
Database Buffers 104857600 bytes
Redo Buffers 6287360 bytes
Database mounted.
Database opened.
PROCESS A:
SQL> set timing on;
SQL> update maclean1 set t1=t1+1;
1 row updated.
Elapsed: 00:00:00.06
Process B:
SQL> update maclean2 set t1=t1+1;
1 row updated.
SQL> update maclean1 set t1=t1+1;
Process A:
SQL> alter session set events '10704 trace name context forever,level 10:10046 trace name context forever,level 8';
Session altered.
SQL> update maclean2 set t1=t1+1;
update maclean2 set t1=t1+1
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
Elapsed: 00:00:18.05
ksqcmi: TX,90011,4a9 mode=6 timeout=21474836
WAIT #12: nam='enq: TX - row lock contention' ela= 2930070 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114759849120
WAIT #12: nam='enq: TX - row lock contention' ela= 2930636 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114762779801
WAIT #12: nam='enq: TX - row lock contention' ela= 2930439 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114765710430
*** 2012-06-12 09:58:43.089
WAIT #12: nam='enq: TX - row lock contention' ela= 2931698 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114768642192
WAIT #12: nam='enq: TX - row lock contention' ela= 2930428 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114771572755
WAIT #12: nam='enq: TX - row lock contention' ela= 2931408 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114774504207
DEADLOCK DETECTED ( ORA-00060 )
[Transaction Deadlock]
The following deadlock is not an ORACLE error. It is a deadlock due to user error in the design of an application or from issuing incorrect ad-hoc SQL.
The following information may aid in determining the deadlock: this time Process A waited on 'enq: TX - row lock contention' for 18s instead of 3s before ORA-00060 was raised, matching the value we gave the hidden parameter "_enqueue_deadlock_scan_secs" (whose default is 0). Let's experiment further:
SQL> alter system set "_enqueue_deadlock_scan_secs"=4 scope=spfile;
System altered.
Elapsed: 00:00:00.01
SQL> alter system set "_enqueue_deadlock_time_sec"=9 scope=spfile;
System altered.
Elapsed: 00:00:00.00
SQL> startup force;
ORACLE instance started.
Total System Global Area 851443712 bytes
Fixed Size 2100040 bytes
Variable Size 738198712 bytes
Database Buffers 104857600 bytes
Redo Buffers 6287360 bytes
Database mounted.
Database opened.
SQL> set linesize 140 pagesize 1400
SQL> show parameter dead
NAME TYPE VALUE
------------------------------------ -------------------------------- ------------------------------
_enqueue_deadlock_scan_secs integer 4
_enqueue_deadlock_time_sec integer 9
SQL> set timing on
SQL> select * from maclean1 for update wait 8;
T1
----------
11
Elapsed: 00:00:00.01
PROCESS B
SQL> select * from maclean2 for update wait 8;
T1
----------
3
SQL> select * from maclean1 for update wait 8;
select * from maclean1 for update wait 8
PROCESS A
SQL> select * from maclean2 for update wait 8;
select * from maclean2 for update wait 8
*
ERROR at line 1:
ORA-30006: resource busy; acquire with WAIT timeout expired
Elapsed: 00:00:08.00
In this test the enqueue request timeout given by select for update wait was 8s. Although "_enqueue_deadlock_scan_secs"=4 (deadlock scan interval) suggests the deadlock could have been detected after about 4s, Process A was never chosen as a deadlock victim; instead, after the full 8s it raised "ORA-30006: resource busy; acquire with WAIT timeout expired" rather than ORA-00060, and Process A's statement simply timed out.
The reason is the hidden parameter "_enqueue_deadlock_time_sec" (requests with timeout <= this will not have deadlock detection): when an enqueue request's timeout is below "_enqueue_deadlock_time_sec", the server process skips deadlock detection for that request, the rationale being that a request that will time out shortly anyway (by default _enqueue_deadlock_time_sec=5, i.e. timeout < 5s) is not worth the cost of detection; only when the timeout exceeds "_enqueue_deadlock_time_sec" does Oracle consider deadlock detection necessary.
Let's verify this:
SQL> show parameter dead
NAME TYPE VALUE
------------------------------------ -------------------------------- ------------------------------
_enqueue_deadlock_scan_secs integer 4
_enqueue_deadlock_time_sec integer 9
Process A:
SQL> set timing on;
SQL> select * from maclean1 for update wait 10;
T1
----------
11
Process B:
SQL> select * from maclean2 for update wait 10;
T1
----------
3
SQL> select * from maclean1 for update wait 10;
PROCESS A:
SQL> select * from maclean2 for update wait 10;
select * from maclean2 for update wait 10
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
Elapsed: 00:00:06.02
This time select for update wait 10 specified a 10s timeout, larger than _enqueue_deadlock_time_sec (9s), so Process A was chosen as the deadlock victim. Note that detection took 6s rather than the 4s suggested by _enqueue_deadlock_scan_secs: the actual scan fires at the next multiple of 3 seconds above _enqueue_deadlock_scan_secs.
To summarize, enqueue deadlock detection works like this:
1. Deadlock detection happens on a 3s interval by default. The interval is controlled by the hidden parameter _enqueue_deadlock_scan_secs (deadlock scan interval), default 0, and detection actually occurs at the next multiple of 3 seconds above that value: _enqueue_deadlock_scan_secs=0 means detection in 3s, _enqueue_deadlock_scan_secs=4 means detection in 6s, and so on.
2. Deadlock detection is also governed by _enqueue_deadlock_time_sec (requests with timeout <= this will not have deadlock detection): when an enqueue request's timeout is below _enqueue_deadlock_time_sec (default 5), the server process performs no deadlock detection for it; when the timeout exceeds _enqueue_deadlock_time_sec, detection follows the _enqueue_deadlock_scan_secs rule above. The request timeout in question is the one given by select for update wait [TIMEOUT].
Note: neither hidden parameter exists in 10.2.0.1; _enqueue_deadlock_time_sec was introduced in patchset 10.2.0.3, and _enqueue_deadlock_scan_secs in patchset 10.2.0.5.
In RAC the default deadlock detection interval is about 10s, controlled by _lm_dd_interval (dd time interval in seconds), a parameter that has existed since 8.0.6. Its default has changed across versions: in 10g it is 60s, and in 11g it is 10s. The shorter 11g default is practical because the LMD process was reworked in 11g; in earlier releases RAC LMD deadlock detection could consume noticeable CPU, and in 11g Oracle's development team tuned LMD to reduce that CPU cost.
To see how the 11g LMD handles this, we can enable UTS trace on 11g and watch the deadlock detection (DD) activity:
SQL> select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
SQL> select * from global_name;
GLOBAL_NAME
--------------------------------------------------------------------------------
www.oracledatabase12g.com
SQL> alter system set "_lm_dd_interval"=20 scope=spfile;
System altered.
SQL> startup force;
ORACLE instance started.
Total System Global Area 1570009088 bytes
Fixed Size 2228704 bytes
Variable Size 1325403680 bytes
Database Buffers 234881024 bytes
Redo Buffers 7495680 bytes
Database mounted.
Database opened.
SQL> set linesize 140 pagesize 1400
SQL> show parameter lm_dd
NAME TYPE VALUE
------------------------------------ -------------------------------- ------------------------------
_lm_dd_interval integer 20
SQL> select count(*) from gv$instance;
COUNT(*)
----------
2
instance 1:
SQL> oradebug setorapid 12
Oracle pid: 12, Unix process pid: 8608, image: [email protected] (LMD0)
Attach to the LMD0 process and enable UTS trace to observe RAC deadlock detection at work:
SQL> oradebug event 10046 trace name context forever,level 8:10708 trace name context forever,level 103: trace[rac.*] disk high;
Statement processed.
Elapsed: 00:00:00.00
SQL> update maclean1 set t1=t1+1;
1 row updated.
instance 2:
SQL> update maclean2 set t1=t1+1;
1 row updated.
SQL> update maclean1 set t1=t1+1;
Instance 1:
SQL> update maclean2 set t1=t1+1;
update maclean2 set t1=t1+1
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
Elapsed: 00:00:20.51
The LMD0 UTS trace shows:
2012-06-12 22:27:00.929284 : [kjmpbmsg:process][type 22][msg 0x7fa620ac85a8][from 1][seq 8148.0][len 192]
2012-06-12 22:27:00.929346 : [kjmxmpm][type 22][seq 0.0][msg 0x7fa620ac85a8][from 1]
*** 2012-06-12 22:27:00.929
* kjddind: received DDIND msg with subtype x6
* reqp->dd_master_inst_kjxmddi == 1
* kjddind: dump sgh:
2012-06-12 22:27:00.929346*: kjddind: req->timestamp [0.15], kjddt [0.13]
2012-06-12 22:27:00.929346*: >> DDmsg:KJX_DD_REMOTE,TS[0.15],Inst 1->2,ddxid[id1,id2,inst:2097153,31,1],ddlock[0x95023930,829],ddMasterInst 1
2012-06-12 22:27:00.929346*: lock [0x95023930,829], op = [mast]
2012-06-12 22:27:00.929346*: reqp->timestamp [0.15], kjddt [0.13]
2012-06-12 22:27:00.929346*: kjddind: updated local timestamp [0.15]
* kjddind: case KJX_DD_REMOTE
2012-06-12 22:27:00.929346*: ADD IO NODE WFG: 0 frame pointer
2012-06-12 22:27:00.929346*: PUSH: type=res, enqueue(0xffffffff.0xffffffff)=0xbbb9af40, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: PROCESS: type=res, enqueue(0xffffffff.0xffffffff)=0xbbb9af40, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: POP: type=res, enqueue(0xffffffff.0xffffffff)=0xbbb9af40, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: kjddopr[TX 0xe000c.0x32][ext 0x5,0x0]: blocking lock 0xbbb9a800, owner 2097154 of inst 2
2012-06-12 22:27:00.929346*: PUSH: type=txn, enqueue(0xffffffff.0xffffffff)=0xbbb9a800, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: PROCESS: type=txn, enqueue(0xffffffff.0xffffffff)=0xbbb9a800, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: ADD NODE TO WFG: type=txn, enqueue(0xffffffff.0xffffffff)=0xbbb9a800, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: POP: type=txn, enqueue(0xffffffff.0xffffffff)=0xbbb9a800, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: kjddopt: converting lock 0xbbce92f8 on 'TX' 0x80016.0x5d4,txid [2097154,34]of inst 2
2012-06-12 22:27:00.929346*: PUSH: type=res, enqueue(0xffffffff.0xffffffff)=0xbbce92f8, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: PROCESS: type=res, enqueue(0xffffffff.0xffffffff)=0xbbce92f8, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: ADD NODE TO WFG: type=res, enqueue(0xffffffff.0xffffffff)=0xbbce92f8, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929855 : GSIPC:AMBUF: rcv buff 0x7fa620aa8cd8, pool rcvbuf, rqlen 1102
2012-06-12 22:27:00.929878 : GSIPC:GPBMSG: new bmsg 0x7fa620aa8d48 mb 0x7fa620aa8cd8 msg 0x7fa620aa8d68 mlen 192 dest x100 flushsz -1
2012-06-12 22:27:00.929878*: << DDmsg:KJX_DD_REMOTE,TS[0.15],Inst 2->1,ddxid[id1,id2,inst:2097153,31,1],ddlock[0x95023930,829],ddMasterInst 1
2012-06-12 22:27:00.929878*: lock [0xbbce92f8,287], op = [mast]
2012-06-12 22:27:00.929878*: ADD IO NODE WFG: 0 frame pointer
2012-06-12 22:27:00.929923 : [kjmpbmsg:compl][msg 0x7fa620ac8588][typ p][nmsgs 1][qtime 0][ptime 0]
2012-06-12 22:27:00.929947 : GSIPC:PBAT: flush start. flag 0x79 end 0 inc 4.4
2012-06-12 22:27:00.929963 : GSIPC:PBAT: send bmsg 0x7fa620aa8d48 blen 224 dest 1.0
2012-06-12 22:27:00.929979 : GSIPC:SNDQ: enq msg 0x7fa620aa8d48, type 65521 seq 8325, inst 1, receiver 0, queued 1
2012-06-12 22:27:00.929996 : GSIPC:BSEND: flushing sndq 0xb491dd28, id 0, dcx 0xbc517770, inst 1, rcvr 0 qlen 0 1
2012-06-12 22:27:00.930014 : GSIPC:BSEND: no batch1 msg 0x7fa620aa8d48 type 65521 len 224 dest (1:0)
2012-06-12 22:27:00.930088 : kjbsentscn[0x0.3f72dc][to 1]
2012-06-12 22:27:00.930144 : GSIPC:SENDM: send msg 0x7fa620aa8d48 dest x10000 seq 8325 type 65521 tkts x1 mlen xe00110
2012-06-12 22:27:00.930531 : GSIPC:KSXPCB: msg 0x7fa620aa8d48 status 30, type 65521, dest 1, rcvr 0
WAIT #0: nam='ges remote message' ela= 1372 waittime=80 loop=0 p3=74 obj#=-1 tim=1339554420931640
2012-06-12 22:27:00.931728 : GSIPC:RCVD: ksxp msg 0x7fa620af6490 sndr 1 seq 0.8149 type 65521 tkts 1
2012-06-12 22:27:00.931746 : GSIPC:RCVD: watq msg 0x7fa620af6490 sndr 1, seq 8149, type 65521, tkts 1
2012-06-12 22:27:00.931763 : GSIPC:RCVD: seq update (0.8148)->(0.8149) tp -15 fg 0x4 from 1 pbattr 0x0
2012-06-12 22:27:00.931779 : GSIPC:TKT: collect msg 0x7fa620af6490 from 1 for rcvr 0, tickets 1
2012-06-12 22:27:00.931794 : kjbrcvdscn[0x0.3f72dc][from 1][idx
2012-06-12 22:27:00.931810 : kjbrcvdscn[no bscn
dd_master_inst_kjxmddi == 1
* kjddind: dump sgh:
NXTIN (nil) 0 wq 0 cvtops x0 0x0.0x0(ext 0x0,0x0)[0000-0000-00000000] inst 1
BLOCKER 0xbbb9a800 5 wq 1 cvtops x28 TX 0xe000c.0x32(ext 0x5,0x0)[20000-0002-00000022] inst 2
BLOCKED 0xbbce92f8 5 wq 2 cvtops x1 TX 0x80016.0x5d4(ext 0x2,0x0)[20000-0002-00000022] inst 2
NXTOUT (nil) 0 wq 0 cvtops x0 0x0.0x0(ext 0x0,0x0)[0000-0000-00000000] inst 1
2012-06-12 22:27:00.932058*: kjddind: req->timestamp [0.15], kjddt [0.15]
2012-06-12 22:27:00.932058*: >> DDmsg:KJX_DD_VALIDATE,TS[0.15],Inst 1->2,ddxid[id1,id2,inst:2097153,31,1],ddlock[0x95023930,829],ddMasterInst 1
2012-06-12 22:27:00.932058*: lock [(nil),0], op = [vald_dd]
2012-06-12 22:27:00.932058*: kjddind: updated local timestamp [0.15]
* kjddind: case KJX_DD_VALIDATE
*** 2012-06-12 22:27:00.932
* kjddvald called: kjxmddi stuff:
* cont_lockp (nil)
* dd_lockp 0x95023930
* dd_inst 1
* dd_master_inst 1
* sgh graph:
NXTIN (nil) 0 wq 0 cvtops x0 0x0.0x0(ext 0x0,0x0)[0000-0000-00000000] inst 1
BLOCKER 0xbbb9a800 5 wq 1 cvtops x28 TX 0xe000c.0x32(ext 0x5,0x0)[20000-0002-00000022] inst 2
BLOCKED 0xbbce92f8 5 wq 2 cvtops x1 TX 0x80016.0x5d4(ext 0x2,0x0)[20000-0002-00000022] inst 2
NXTOUT (nil) 0 wq 0 cvtops x0 0x0.0x0(ext 0x0,0x0)[0000-0000-00000000] inst 1
POP WFG NODE: lock=(nil)
* kjddvald: dump the PRQ:
BLOCKER 0xbbb9a800 5 wq 1 cvtops x28 TX 0xe000c.0x32(ext 0x5,0x0)[20000-0002-00000022] inst 2
BLOCKED 0xbbce92f8 5 wq 2 cvtops x1 TX 0x80016.0x5d4(ext 0x2,0x0)[20000-0002-00000022] inst 2
* kjddvald: KJDD_NXTONOD ->node_kjddsg.dinst_kjddnd =1
* kjddvald: ... which is not my node, my subgraph is validated but the cycle is not complete
Global blockers dump start:---------------------------------
DUMP LOCAL BLOCKER/HOLDER: block level 5 res [0x80016][0x5d4],[TX][ext 0x2,0x0]
A deadlock is detected. This shows that in 11.2.0.3 RAC the LMD deadlock detection interval is indeed driven by "_lm_dd_interval": with the parameter set to 20, the victim was flagged after about 20s. In 10g, where _lm_dd_interval defaults to 60s, a server process caught in a cross-instance deadlock could wait up to 60s before detection; the 11g rework of LMD (with its lower CPU overhead) is what makes the shorter 10s default for enqueue deadlock detection practical in 11g.
Besides "_lm_dd_interval", the LMD process's deadlock detection in RAC is influenced by a family of related hidden parameters:
SQL> col name for a50
SQL> col describ for a60
SQL> col value for a20
SQL> set linesize 140 pagesize 1400
SQL> SELECT x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc describ
2 FROM SYS.x$ksppi x, SYS.x$ksppcv y
3 WHERE x.inst_id = USERENV ('Instance')
4 AND y.inst_id = USERENV ('Instance')
5 AND x.indx = y.indx
6 AND x.ksppinm like '_lm_dd%';
NAME VALUE DESCRIB
-------------------------------------------------- -------------------- ------------------------------------------------------------
_lm_dd_interval 20 dd time interval in seconds
_lm_dd_scan_interval 5 dd scan interval in seconds
_lm_dd_search_cnt 3 number of dd search per token get
_lm_dd_max_search_time 180 max dd search time per token
_lm_dd_maxdump 50 max number of locks to be dumped during dd validation
_lm_dd_ignore_nodd FALSE if TRUE nodeadlockwait/nodeadlockblock options are ignored
6 rows selected.
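To make the single-instance behavior easy to replay, here is a minimal C# sketch of the same two-session deadlock. It assumes the unmanaged ODP.NET provider (Oracle.DataAccess.Client), the maclean1/maclean2 test tables from the transcript above, and a hypothetical connect string; it is an illustration of the technique, not part of the original test. ORA-00060 surfaces as OracleException.Number == 60:
    using System;
    using System.Threading.Tasks;
    using Oracle.DataAccess.Client; // unmanaged ODP.NET provider (assumed installed)

    class EnqueueDeadlockDemo
    {
        // Hypothetical connect string; point it at the schema holding the test tables.
        const string ConnStr = "User Id=maclean;Password=secret;Data Source=orcl";

        static void Update(OracleConnection con, string sql)
        {
            using (var cmd = new OracleCommand(sql, con))
                cmd.ExecuteNonQuery();
        }

        static void Main()
        {
            using (var sessA = new OracleConnection(ConnStr))
            using (var sessB = new OracleConnection(ConnStr))
            {
                sessA.Open();
                sessB.Open();
                var txA = sessA.BeginTransaction();
                var txB = sessB.BeginTransaction();

                Update(sessA, "update maclean1 set t1 = t1 + 1"); // A locks a row in maclean1
                Update(sessB, "update maclean2 set t1 = t1 + 1"); // B locks a row in maclean2

                // Cross over: A now waits for B's lock while B waits for A's lock.
                var tA = Task.Run(() => Update(sessA, "update maclean2 set t1 = t1 + 1"));
                var tB = Task.Run(() => Update(sessB, "update maclean1 set t1 = t1 + 1"));

                // The victim faults first; with the defaults that takes about 3 seconds.
                int victim = Task.WaitAny(tA, tB);
                var oraEx = (victim == 0 ? tA : tB).Exception?.Flatten().InnerException as OracleException;
                if (oraEx != null && oraEx.Number == 60) // ORA-00060
                    Console.WriteLine("Deadlock victim after the detection interval: " + oraEx.Message);

                // Roll back the victim first so the surviving session's update can finish.
                if (victim == 0) { txA.Rollback(); tB.Wait(); txB.Rollback(); }
                else             { txB.Rollback(); tA.Wait(); txA.Rollback(); }
            }
        }
    }
With the 10.2.0.5 defaults the victim's exception should arrive after roughly 3s, and after roughly 6s with _enqueue_deadlock_scan_secs=4, matching the timings observed above.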

    Read the article

  • CodePlex Daily Summary for Wednesday, June 02, 2010

    CodePlex Daily Summary for Wednesday, June 02, 2010

New Projects

BackupCleaner.Net: A C#.Net-based tool for automatically removing old backups selectively. It can be used when you already make daily backups on disk, and want to cle...

C# Dotnetnuke Module development Template for Visual Studio 2010: C# DNN Module Development template for Visual Studio 2010. Get a head-start on DNN Module development. Whether you're a pro or just starting with D...

Christoc's DotNetNuke C# Module Development Template: A quick and easy to use C# Module development template for DotNetNuke 5, Visual Studio 2008.

Client per la digitalizzazione di documenti integrato con DotNetNuke: This 32-bit Windows application lets you digitize documents with several scanners at once and run OCR in 17 languages (p...

ContainerOne - C# application server: An application server completely written in C# (.NET 4.0).

Drop7 Silverlight: It's a clone of the original Drop7 written in Silverlight (C#).

Echo.Net: Echo.Net is an embedded task manager for web and windows apps. It allows for simple management of background tasks at specific times. It's develope...

energy: Smartgrid Demand Response Implementation

Generate Twitter Message for Live Writer: This is a plug-in for Windows Live Writer that generates a twitter message with your blog post name and a TinyUrl link to the blog post. It will d...

HomingCMS.Net: A lightweight CMS.

Information Système & Shell à distance: A web service that provides information about the system and lets you launch (terminal) commands remotely.

Javascript And Jquery: gqq's javascript and jquery examples

Memory++: "Tweak the memory to speed up your computer" Memory++ is basically an application that will speed up your computer ensuring comfort in their norma...

Microformat Parsers for .NET: Microformat's Parsers for .NET helps you to collect information you run into on the web, that is stored by means of microformats. It's written in C...

MoneyManager: Trying to make a personal finances management system for my needs. Microsoft stopped supporting MS Money - it makes me so sad, so I want to make my ...

Open source software for running a financial markets trading business: The core conceptual model will support running a business in the financial markets, for example running a trading exchange business.

Ovik: Open Video Converter with a simple and intuitive interface for converting video files to the open formats.

Oxygen Smil Player: The <project name> is an open a-smil player implementation that is meant to be connected to a Digital Signage CMS like Oxygen media platform ( www....

Protect The Carrot: Protect The Carrot is a small fast-paced XNA game. You are a farmer whose single carrot is under attack by ravenous rabbits. You have to shoot the r...

Race Day Commander: The core project is designed to help coaches of "long distance" or "endurance" sporting events coach their athletes during a race. The idea bein...

Raygun Diplomacy: Raygun Diplomacy is an action shooter sandbox game set in a futuristic world.
It will use procedural generation for the world, weapons, and vehicle...

Resx-Translator-Bot: Resx-Translator-Bot uses Google Translate to automatically translate the .resx files in your .NET and ASP.NET applications.

Sistema de Expedición del Permiso Único de Siembra: System for issuing the single sowing permit.

SiteOA: An OA (office automation) system based on asp.net mvc2.

Straighce: This is a low-featured, cyclic (log files reside in appname\1.txt to at most 31.txt), thread-safe and non-blocking TraceListener with some extensio...

Touch Mice: Touch Mice turns multiple mice on a computer into individual touch devices. This allows you to create multi-touch applications using the new touch...

TStringValidator: A project helper to validate strings. Use this class to hold your regex strings for use with any project's string validation.

Ultimate Dotnetnuke Skin Object: Ultimate Skin Object is a Dotnetnuke 5.4.2+ extension that will allow you to easily change your skin's doc type, remove unneeded css files, inject e...

Ventosus: Ventosus is an upcoming partially text-based game. No further information is available at this time.

vit: vit based on asp.net mvc

W7 Auto Playlist Generator: Purpose: This application is designed to create W7MC playlists automatically whenever you want. You can select if you want the playlist sorted Alpha...

W7 Video Playlist Creator: Purpose: This program allows you to quickly create wvx video playlists for Windows Media Center. This functionality is not included in WMC and is u...

New Releases

BCryptTool: BCryptTool v0.2.1: The Microsoft .NET Framework 4.0 is needed to run this program.

BFBC2 PRoCon: PRoCon 0.5.2.0: Notes available on phogue.net

C# Dotnetnuke Module development Template for Visual Studio 2010: DNNModule 1.0: This is the initial release of DNNModule as was available for download from http://www.subodh.com/Projects/DNNModule In this release: Contains one...

Client per la digitalizzazione di documenti integrato con DotNetNuke: Versione 3.0.1: Version 3.0.1

CommonLibrary.NET: CommonLibrary.NET 0.9.4 - Final Release: A collection of very reusable code and components in C# 3.5 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars,...

Community Forums NNTP bridge: Community Forums NNTP Bridge V20: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has ad...

Community Forums NNTP bridge: Community Forums NNTP Bridge V21: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has ad...

DirectQ: Release 1.8.4: Significant bug fixes and improvements over 1.8.3c; there may be some bugs that I haven't caught here as development became a little disjointed towa...

DotNetNuke 5 Thai Language Pack: 1.0.1: Fixed installation problem. Version 1.0 -> 1.0.1 Type: Character encoding. Change: ().dnn description file to "ไทย (ไทย)".dnn

Echo.Net: Echo.Net 1.0: Initial release of Echo.Net.

Extend SmallBasic: Teaching Extensions v.018: ShapeMaker, Program Window, Timer, and many threading errors fixed

Extend SmallBasic: Teaching Extensions v.019: Added Rectangles to shapemaker, and the bubble quiz

Generate Twitter Message for Live Writer: Final Source Code Plus Binaries: Complete C# source code available below. I have also included the binary for those that just want to run it.

GoogleMap Control: GoogleMap Control 4.5: Map and map events are only functional. New state, persistence and event implementation in progress.
JavaScript classes are implemented as MS AJAX ...

Industrial Dashboard: ID 3.1: - Added new widget IndustrialSlickGrid. - Added example with IndustrialChart.

LongBar: LongBar 2.1 Build 310: - Library: Double-clicking on a tile will install it - Feedback: Now you can type your e-mail and a comment for errors

Mavention: Mavention Insert Lorem ipsum: A Sandbox Solution for SharePoint 2010 that allows you to easily insert Lorem ipsum text into RTE. More information and screenshots available @ htt...

Memory++: Memory ++: Tweak the memory to speed up your computer. Memory++ is basically an application that will speed up your computer ensuring comfort in their normal ac...

MyVocabulary: Version 2.0: Improvements over version 1.0: several bug fixes; new shortcuts added to increase usability; a new section for testing verbs was added

nopCommerce. Open Source online shop e-commerce solution.: nopCommerce 1.60: You can also install nopCommerce using the Microsoft Web Platform Installer. Simply click the button below: nopCommerce To see the full list of f...

Nuntio Content: Nuntio Content 4.2.1: Patch release that fixes a couple of minor issues with version numbers and priority settings for role content. The release one package for DNN 4 an...

Ovik: Ovik v0.0.1 (Preview Release): This is a very early preview release of Ovik. It contains only the pure processes of selecting files and launching a conversion process. Preview r...

PHPExcel: PHPExcel 1.7.3c Production: This is a patch release for 26477. Want to contribute? Please refer to the Contribute page. Donations: Donate via PayPal. If you want to, we can also a...

PowerShell Admin Modules: PAM 0.2: Version 0.2 contains the PAMShare module with Get-Share Get-ShareAccessMask Get-ShareSecurity New-Share Remove-Share Set-Share and the PAMath modu...

Professional MRDS: RDS 2008 R3 Release: This is an updated version of the code to work with RDS 2008 R3 (version 2.2.76.0). IMPORTANT NOTE: These samples are supplied as a ZIP file. Unzip...

Protect The Carrot: First release: We provide two ways to install the game. The first is PTC 1.0.0.0 Zip which contains a Click-Once installer (the DVD type since codeplex does not...

PST File Format SDK: PST File Format SDK v0.2.0: Updated version of pstsdk with several bug fixes, including: improved compiler support (several changes, including two patches from hub); fixed Do...

Race Day Commander: Race Day Commander v1: First release. The exact code that was written on the day in 6 hours.

Resx-Translator-Bot: Release 1.0: Initial release

Salient.StackApps: JavaScript API Wrapper beta 2: This is the first draft of the generated JS wrapper. Added basic test coverage for each route that can also serve as basic usage examples. More i...

SharePoint 2010 PowerShell Scripts & Utilities: PSSP2010 Utils 0.2: Added the Install-SPIFilter script cmdlet. More information can be found at http://www.ravichaganti.com/blog/?p=1439

SharePoint Tools from China Community: ECB菜单控制器 (ECB menu controller): ECB menu controller

Shopping Cart .NET: 1.5: Shopping Cart .NET 1.5 has received an upgrade to .NET 4.0 along with SQL Server 2005/8.
There is a new AJAX-based inventory system that helps you ...

sMODfix: sMODfix v1.0b: Added: provisional support for ecm_v54; Added: provisional support for gfx_v88

SNCFT Gadget: SNCFT gadget v1: This version is version 1 of my gadget

Snippet Designer: Snippet Designer 1.3: Change log - Changes for Visual Studio 2010: Fixed bug where "Export as Snippet" was failing in a website project; changed Snippet Explorer search to u...

sNPCedit: sNPCedit v0.9b: + Fixed: structure of resources + Changed: some labels in GUI

Software Is Hardwork: Sw. Is Hw. Lib. 3.0.0.x+05: Sw. Is Hw. Lib. 3.0.0.x+05

SQL Server PowerShell Extensions: 2.2.3 Beta: Release 2.2 re-implements SQLPSX as PowerShell version 2.0 modules. SQLPSX consists of 9 modules with 133 advanced functions, 2 cmdlets and 7 scri...

StackOverflow.Net: Preview Beta: Goes with the Stack Apps API version 0.8

Ultimate Dotnetnuke Skin Object: Ultimate Skin Object V1.00.00: Ultimate Skin Object is a Dotnetnuke 5.4.2+ extension that will allow you to easily change your skin's doc type, remove unneeded css files, inject e...

VCC: Latest build, v2.1.30601.0: Automatic drop of latest build

Velocity Shop: JUNE 2010: Source code aligned to .NET Framework 4.0, ASP.NET 4.0 and Windows Server AppFabric Caching RC.

ViperWorks Ignition: ViperWorks_5.0.1005.31: ViperWorks Ignition Source, version 5.0.1005.31.

Visual Studio 2010 and Team Foundation Server 2010 VM Factory: Session Recordings: This release contains the "raw" and unedited session recordings and slides delivered by the team. 2010-06-01 Create package and add two session r...

W7 Auto Playlist Generator: Source Code plus Binaries: Complete C# and WinForm source code available below. I have also included the binary for those that just want to run it.

W7 Video Playlist Creator: Source Code plus Binaries: Complete C# and WPF source code available below. I have also included the binary for those that just want to run it.

Most Popular Projects: Rawr, WBFS Manager, AJAX Control Toolkit, Microsoft SQL Server Product Samples: Database, Silverlight Toolkit, Windows Presentation Foundation (WPF), patterns & practices – Enterprise Library, PHPExcel, Microsoft SQL Server Community & Samples, ASP.NET

Most Active Projects: Community Forums NNTP bridge, patterns & practices – Enterprise Library, GMap.NET - Great Maps for Windows Forms & Presentation, BlogEngine.NET, Ionics Isapi Rewrite Filter, Mirror Testing System, Rawr, Caliburn: An Application Framework for WPF and Silverlight, PHPExcel, Customer Portal Accelerator for Microsoft Dynamics CRM

    Read the article

  • High CPU usage with Team Speak 3.0.0-rc2

    - by AlexTheBird
    The CPU usage is always around 40 percent. I use push-to-talk and I had uninstalled pulseaudio. Now I use Alsa. I don't even have to connect to a Server. By simply starting TS the cpu usage goes up 40 percent and stays there. The CPU usage of 3.0.0-rc1 [Build: 14468] is constantly 14 percent. This is the output of top, mpstat and ps aux while I am running TS3 ... of course: alexandros@alexandros-laptop:~$ top top - 18:20:07 up 2:22, 3 users, load average: 1.02, 0.85, 0.77 Tasks: 163 total, 1 running, 162 sleeping, 0 stopped, 0 zombie Cpu(s): 5.3%us, 1.9%sy, 0.1%ni, 91.8%id, 0.7%wa, 0.1%hi, 0.1%si, 0.0%st Mem: 2061344k total, 964028k used, 1097316k free, 69116k buffers Swap: 3997688k total, 0k used, 3997688k free, 449032k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 2714 alexandr 20 0 206m 31m 24m S 37 1.6 0:12.78 ts3client_linux 868 root 20 0 47564 27m 10m S 8 1.4 3:21.73 Xorg 1 root 20 0 2804 1660 1204 S 0 0.1 0:00.53 init 2 root 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd 3 root RT 0 0 0 0 S 0 0.0 0:00.01 migration/0 4 root 20 0 0 0 0 S 0 0.0 0:00.45 ksoftirqd/0 5 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0 6 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/1 7 root 20 0 0 0 0 S 0 0.0 0:00.08 ksoftirqd/1 8 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/1 9 root 20 0 0 0 0 S 0 0.0 0:01.17 events/0 10 root 20 0 0 0 0 S 0 0.0 0:00.81 events/1 11 root 20 0 0 0 0 S 0 0.0 0:00.00 cpuset 12 root 20 0 0 0 0 S 0 0.0 0:00.00 khelper 13 root 20 0 0 0 0 S 0 0.0 0:00.00 async/mgr 14 root 20 0 0 0 0 S 0 0.0 0:00.00 pm 16 root 20 0 0 0 0 S 0 0.0 0:00.00 sync_supers 17 root 20 0 0 0 0 S 0 0.0 0:00.00 bdi-default 18 root 20 0 0 0 0 S 0 0.0 0:00.00 kintegrityd/0 19 root 20 0 0 0 0 S 0 0.0 0:00.00 kintegrityd/1 20 root 20 0 0 0 0 S 0 0.0 0:00.05 kblockd/0 21 root 20 0 0 0 0 S 0 0.0 0:00.02 kblockd/1 22 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpid 23 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpi_notify 24 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpi_hotplug 25 root 20 0 0 0 0 S 0 0.0 0:00.99 ata/0 26 root 20 0 0 0 0 S 0 0.0 0:00.92 ata/1 27 root 20 0 0 0 0 S 0 0.0 0:00.00 ata_aux 28 root 20 0 0 0 0 S 0 0.0 0:00.00 ksuspend_usbd 29 root 20 0 0 0 0 S 0 0.0 0:00.00 khubd alexandros@alexandros-laptop:~$ mpstat Linux 2.6.32-32-generic (alexandros-laptop) 16.06.2011 _i686_ (2 CPU) 18:20:15 CPU %usr %nice %sys %iowait %irq %soft %steal %guest %idle 18:20:15 all 5,36 0,09 1,91 0,68 0,07 0,06 0,00 0,00 91,83 alexandros@alexandros-laptop:~$ ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 2804 1660 ? Ss 15:58 0:00 /sbin/init root 2 0.0 0.0 0 0 ? S 15:58 0:00 [kthreadd] root 3 0.0 0.0 0 0 ? S 15:58 0:00 [migration/0] root 4 0.0 0.0 0 0 ? S 15:58 0:00 [ksoftirqd/0] root 5 0.0 0.0 0 0 ? S 15:58 0:00 [watchdog/0] root 6 0.0 0.0 0 0 ? S 15:58 0:00 [migration/1] root 7 0.0 0.0 0 0 ? S 15:58 0:00 [ksoftirqd/1] root 8 0.0 0.0 0 0 ? S 15:58 0:00 [watchdog/1] root 9 0.0 0.0 0 0 ? S 15:58 0:01 [events/0] root 10 0.0 0.0 0 0 ? S 15:58 0:00 [events/1] root 11 0.0 0.0 0 0 ? S 15:58 0:00 [cpuset] root 12 0.0 0.0 0 0 ? S 15:58 0:00 [khelper] root 13 0.0 0.0 0 0 ? S 15:58 0:00 [async/mgr] root 14 0.0 0.0 0 0 ? S 15:58 0:00 [pm] root 16 0.0 0.0 0 0 ? S 15:58 0:00 [sync_supers] root 17 0.0 0.0 0 0 ? S 15:58 0:00 [bdi-default] root 18 0.0 0.0 0 0 ? S 15:58 0:00 [kintegrityd/0] root 19 0.0 0.0 0 0 ? S 15:58 0:00 [kintegrityd/1] root 20 0.0 0.0 0 0 ? S 15:58 0:00 [kblockd/0] root 21 0.0 0.0 0 0 ? S 15:58 0:00 [kblockd/1] root 22 0.0 0.0 0 0 ? S 15:58 0:00 [kacpid] root 23 0.0 0.0 0 0 ? 
S 15:58 0:00 [kacpi_notify] root 24 0.0 0.0 0 0 ? S 15:58 0:00 [kacpi_hotplug] root 25 0.0 0.0 0 0 ? S 15:58 0:00 [ata/0] root 26 0.0 0.0 0 0 ? S 15:58 0:00 [ata/1] root 27 0.0 0.0 0 0 ? S 15:58 0:00 [ata_aux] root 28 0.0 0.0 0 0 ? S 15:58 0:00 [ksuspend_usbd] root 29 0.0 0.0 0 0 ? S 15:58 0:00 [khubd] root 30 0.0 0.0 0 0 ? S 15:58 0:00 [kseriod] root 31 0.0 0.0 0 0 ? S 15:58 0:00 [kmmcd] root 34 0.0 0.0 0 0 ? S 15:58 0:00 [khungtaskd] root 35 0.0 0.0 0 0 ? S 15:58 0:00 [kswapd0] root 36 0.0 0.0 0 0 ? SN 15:58 0:00 [ksmd] root 37 0.0 0.0 0 0 ? S 15:58 0:00 [aio/0] root 38 0.0 0.0 0 0 ? S 15:58 0:00 [aio/1] root 39 0.0 0.0 0 0 ? S 15:58 0:00 [ecryptfs-kthrea] root 40 0.0 0.0 0 0 ? S 15:58 0:00 [crypto/0] root 41 0.0 0.0 0 0 ? S 15:58 0:00 [crypto/1] root 48 0.0 0.0 0 0 ? S 15:58 0:03 [scsi_eh_0] root 50 0.0 0.0 0 0 ? S 15:58 0:00 [scsi_eh_1] root 53 0.0 0.0 0 0 ? S 15:58 0:00 [kstriped] root 54 0.0 0.0 0 0 ? S 15:58 0:00 [kmpathd/0] root 55 0.0 0.0 0 0 ? S 15:58 0:00 [kmpathd/1] root 56 0.0 0.0 0 0 ? S 15:58 0:00 [kmpath_handlerd] root 57 0.0 0.0 0 0 ? S 15:58 0:00 [ksnapd] root 58 0.0 0.0 0 0 ? S 15:58 0:03 [kondemand/0] root 59 0.0 0.0 0 0 ? S 15:58 0:02 [kondemand/1] root 60 0.0 0.0 0 0 ? S 15:58 0:00 [kconservative/0] root 61 0.0 0.0 0 0 ? S 15:58 0:00 [kconservative/1] root 213 0.0 0.0 0 0 ? S 15:58 0:00 [scsi_eh_2] root 222 0.0 0.0 0 0 ? S 15:58 0:00 [scsi_eh_3] root 234 0.0 0.0 0 0 ? S 15:58 0:00 [scsi_eh_4] root 235 0.0 0.0 0 0 ? S 15:58 0:01 [usb-storage] root 255 0.0 0.0 0 0 ? S 15:58 0:00 [jbd2/sda5-8] root 256 0.0 0.0 0 0 ? S 15:58 0:00 [ext4-dio-unwrit] root 257 0.0 0.0 0 0 ? S 15:58 0:00 [ext4-dio-unwrit] root 290 0.0 0.0 0 0 ? S 15:58 0:00 [flush-8:0] root 318 0.0 0.0 2316 888 ? S 15:58 0:00 upstart-udev-bridge --daemon root 321 0.0 0.0 2616 1024 ? S<s 15:58 0:00 udevd --daemon root 526 0.0 0.0 0 0 ? S 15:58 0:00 [kpsmoused] root 528 0.0 0.0 0 0 ? S 15:58 0:00 [led_workqueue] root 650 0.0 0.0 0 0 ? S 15:58 0:00 [radeon/0] root 651 0.0 0.0 0 0 ? S 15:58 0:00 [radeon/1] root 652 0.0 0.0 0 0 ? S 15:58 0:00 [ttm_swap] root 654 0.0 0.0 2612 984 ? S< 15:58 0:00 udevd --daemon root 656 0.0 0.0 0 0 ? S 15:58 0:00 [hd-audio0] root 657 0.0 0.0 2612 916 ? S< 15:58 0:00 udevd --daemon root 674 0.6 0.0 0 0 ? S 15:58 0:57 [phy0] syslog 715 0.0 0.0 34812 1776 ? Sl 15:58 0:00 rsyslogd -c4 102 731 0.0 0.0 3236 1512 ? Ss 15:58 0:02 dbus-daemon --system --fork root 740 0.0 0.1 19088 3380 ? Ssl 15:58 0:00 gdm-binary root 744 0.0 0.1 18900 4032 ? Ssl 15:58 0:01 NetworkManager avahi 749 0.0 0.0 2928 1520 ? S 15:58 0:00 avahi-daemon: running [alexandros-laptop.local] avahi 752 0.0 0.0 2928 544 ? Ss 15:58 0:00 avahi-daemon: chroot helper root 753 0.0 0.1 4172 2300 ? S 15:58 0:00 /usr/sbin/modem-manager root 762 0.0 0.1 20584 3152 ? Sl 15:58 0:00 /usr/sbin/console-kit-daemon --no-daemon root 836 0.0 0.1 20856 3864 ? Sl 15:58 0:00 /usr/lib/gdm/gdm-simple-slave --display-id /org/gnome/DisplayManager/Display1 root 856 0.0 0.1 4836 2388 ? S 15:58 0:00 /sbin/wpa_supplicant -u -s root 868 2.3 1.3 36932 27924 tty7 Rs+ 15:58 3:22 /usr/bin/X :0 -nr -verbose -auth /var/run/gdm/auth-for-gdm-a46T4j/database -nolisten root 891 0.0 0.0 1792 564 tty4 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty4 root 901 0.0 0.0 1792 564 tty5 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty5 root 908 0.0 0.0 1792 564 tty2 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty2 root 910 0.0 0.0 1792 568 tty3 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty3 root 913 0.0 0.0 1792 564 tty6 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty6 root 917 0.0 0.0 2180 1072 ? 
Ss 15:58 0:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket daemon 924 0.0 0.0 2248 432 ? Ss 15:58 0:00 atd root 927 0.0 0.0 2376 900 ? Ss 15:58 0:00 cron root 950 0.0 0.0 11736 1372 ? Ss 15:58 0:00 /usr/sbin/winbindd root 958 0.0 0.0 11736 1184 ? S 15:58 0:00 /usr/sbin/winbindd root 974 0.0 0.1 6832 2580 ? Ss 15:58 0:00 /usr/sbin/cupsd -C /etc/cups/cupsd.conf root 1078 0.0 0.0 1792 564 tty1 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty1 gdm 1097 0.0 0.0 3392 772 ? S 15:58 0:00 /usr/bin/dbus-launch --exit-with-session root 1112 0.0 0.1 19216 3292 ? Sl 15:58 0:00 /usr/lib/gdm/gdm-session-worker root 1116 0.0 0.1 5540 2932 ? S 15:58 0:01 /usr/lib/upower/upowerd root 1131 0.0 0.1 6308 3824 ? S 15:58 0:00 /usr/lib/policykit-1/polkitd 108 1163 0.0 0.2 16788 4360 ? Ssl 15:58 0:01 /usr/sbin/hald root 1164 0.0 0.0 3536 1300 ? S 15:58 0:00 hald-runner root 1188 0.0 0.0 3612 1256 ? S 15:58 0:00 hald-addon-input: Listening on /dev/input/event6 /dev/input/event5 /dev/input/event2 root 1194 0.0 0.0 3612 1224 ? S 15:58 0:00 /usr/lib/hal/hald-addon-rfkill-killswitch root 1200 0.0 0.0 3608 1240 ? S 15:58 0:00 /usr/lib/hal/hald-addon-generic-backlight root 1202 0.0 0.0 3616 1236 ? S 15:58 0:02 hald-addon-storage: polling /dev/sr0 (every 2 sec) root 1204 0.0 0.0 3616 1236 ? S 15:58 0:00 hald-addon-storage: polling /dev/sdb (every 2 sec) root 1211 0.0 0.0 3624 1220 ? S 15:58 0:00 /usr/lib/hal/hald-addon-cpufreq 108 1212 0.0 0.0 3420 1200 ? S 15:58 0:00 hald-addon-acpi: listening on acpid socket /var/run/acpid.socket 1000 1222 0.0 0.1 24196 2816 ? Sl 15:58 0:00 /usr/bin/gnome-keyring-daemon --daemonize --login 1000 1240 0.0 0.3 28228 7312 ? Ssl 15:58 0:00 gnome-session 1000 1274 0.0 0.0 3284 356 ? Ss 15:58 0:00 /usr/bin/ssh-agent /usr/bin/dbus-launch --exit-with-session gnome-session 1000 1277 0.0 0.0 3392 772 ? S 15:58 0:00 /usr/bin/dbus-launch --exit-with-session gnome-session 1000 1278 0.0 0.0 3160 1652 ? Ss 15:58 0:00 /bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session 1000 1281 0.0 0.2 8172 4636 ? S 15:58 0:00 /usr/lib/libgconf2-4/gconfd-2 1000 1287 0.0 0.5 24228 10896 ? Ss 15:58 0:03 /usr/lib/gnome-settings-daemon/gnome-settings-daemon 1000 1290 0.0 0.1 6468 2364 ? S 15:58 0:00 /usr/lib/gvfs/gvfsd 1000 1293 0.0 0.6 38104 13004 ? S 15:58 0:03 metacity 1000 1296 0.0 0.1 30280 2628 ? Ssl 15:58 0:00 /usr/lib/gvfs//gvfs-fuse-daemon /home/alexandros/.gvfs 1000 1301 0.0 0.0 3344 988 ? S 15:58 0:03 syndaemon -i 0.5 -k 1000 1303 0.0 0.1 8060 3488 ? S 15:58 0:00 /usr/lib/gvfs/gvfs-gdu-volume-monitor root 1306 0.0 0.1 15692 3104 ? Sl 15:58 0:00 /usr/lib/udisks/udisks-daemon 1000 1307 0.4 1.0 50748 21684 ? S 15:58 0:34 python -u /usr/share/screenlets/DigiClock/DigiClockScreenlet.py 1000 1308 0.0 0.9 35608 18564 ? S 15:58 0:00 python /usr/share/screenlets-manager/screenlets-daemon.py 1000 1309 0.0 0.3 19524 6468 ? S 15:58 0:00 /usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1 1000 1311 0.0 0.5 37412 11788 ? S 15:58 0:01 gnome-power-manager 1000 1312 0.0 1.0 50772 22628 ? S 15:58 0:03 gnome-panel 1000 1313 0.1 1.5 102648 31184 ? Sl 15:58 0:10 nautilus root 1314 0.0 0.0 5188 996 ? S 15:58 0:02 udisks-daemon: polling /dev/sdb /dev/sr0 1000 1315 0.0 0.6 51948 12464 ? SL 15:58 0:01 nm-applet --sm-disable 1000 1317 0.0 0.1 16956 2364 ? Sl 15:58 0:00 /usr/lib/gvfs/gvfs-afc-volume-monitor 1000 1318 0.0 0.3 20164 7792 ? S 15:58 0:00 bluetooth-applet 1000 1321 0.0 0.1 7260 2384 ? S 15:58 0:00 /usr/lib/gvfs/gvfs-gphoto2-volume-monitor 1000 1323 0.0 0.5 37436 12124 ? 
S 15:58 0:00 /usr/lib/notify-osd/notify-osd 1000 1324 0.0 1.9 197928 40456 ? Ssl 15:58 0:06 /home/alexandros/.dropbox-dist/dropbox 1000 1329 0.0 0.3 20136 7968 ? S 15:58 0:00 /usr/bin/gnome-screensaver --no-daemon 1000 1331 0.0 0.1 7056 3112 ? S 15:58 0:00 /usr/lib/gvfs/gvfsd-trash --spawner :1.6 /org/gtk/gvfs/exec_spaw/0 root 1340 0.0 0.0 2236 1008 ? S 15:58 0:00 /sbin/dhclient -d -sf /usr/lib/NetworkManager/nm-dhcp-client.action -pf /var/run/dhcl 1000 1348 0.0 0.1 42252 3680 ? Ssl 15:58 0:00 /usr/lib/bonobo-activation/bonobo-activation-server --ac-activate --ior-output-fd=19 1000 1384 0.0 1.7 80244 35480 ? Sl 15:58 0:02 /usr/bin/python /usr/lib/deskbar-applet/deskbar-applet/deskbar-applet --oaf-activate- 1000 1388 0.0 0.5 26196 11804 ? S 15:58 0:01 /usr/lib/gnome-panel/wnck-applet --oaf-activate-iid=OAFIID:GNOME_Wncklet_Factory --oa 1000 1393 0.1 0.5 25876 11548 ? S 15:58 0:08 /usr/lib/gnome-applets/multiload-applet-2 --oaf-activate-iid=OAFIID:GNOME_MultiLoadAp 1000 1394 0.0 0.5 25600 11140 ? S 15:58 0:03 /usr/lib/gnome-applets/cpufreq-applet --oaf-activate-iid=OAFIID:GNOME_CPUFreqApplet_F 1000 1415 0.0 0.5 39192 11156 ? S 15:58 0:01 /usr/lib/gnome-power-manager/gnome-inhibit-applet --oaf-activate-iid=OAFIID:GNOME_Inh 1000 1417 0.0 0.7 53544 15488 ? Sl 15:58 0:00 /usr/lib/gnome-applets/mixer_applet2 --oaf-activate-iid=OAFIID:GNOME_MixerApplet_Fact 1000 1419 0.0 0.4 23816 9068 ? S 15:58 0:00 /usr/lib/gnome-panel/notification-area-applet --oaf-activate-iid=OAFIID:GNOME_Notific 1000 1488 0.0 0.3 20964 7548 ? S 15:58 0:00 /usr/lib/gnome-disk-utility/gdu-notification-daemon 1000 1490 0.0 0.1 6608 2484 ? S 15:58 0:00 /usr/lib/gvfs/gvfsd-burn --spawner :1.6 /org/gtk/gvfs/exec_spaw/1 1000 1510 0.0 0.1 6348 2084 ? S 15:58 0:00 /usr/lib/gvfs/gvfsd-metadata 1000 1531 0.0 0.3 19472 6616 ? S 15:58 0:00 /usr/lib/gnome-user-share/gnome-user-share 1000 1535 0.0 0.4 77128 8392 ? Sl 15:58 0:00 /usr/lib/evolution/evolution-data-server-2.28 --oaf-activate-iid=OAFIID:GNOME_Evoluti 1000 1601 0.0 0.5 69576 11800 ? Sl 15:59 0:00 /usr/lib/evolution/2.28/evolution-alarm-notify 1000 1604 0.0 0.7 33924 15888 ? S 15:59 0:00 python /usr/share/system-config-printer/applet.py 1000 1701 0.0 0.5 37116 11968 ? S 15:59 0:00 update-notifier 1000 1892 4.5 7.0 406720 145312 ? Sl 17:11 3:09 /opt/google/chrome/chrome 1000 1896 0.0 0.1 69812 3680 ? S 17:11 0:02 /opt/google/chrome/chrome 1000 1898 0.0 0.6 91420 14080 ? S 17:11 0:00 /opt/google/chrome/chrome --type=zygote 1000 1916 0.2 1.3 140780 27220 ? Sl 17:11 0:12 /opt/google/chrome/chrome --type=extension --disable-client-side-phishing-detection - 1000 1918 0.7 1.8 155720 37912 ? Sl 17:11 0:31 /opt/google/chrome/chrome --type=extension --disable-client-side-phishing-detection - 1000 1921 0.0 1.0 135904 21052 ? Sl 17:11 0:02 /opt/google/chrome/chrome --type=extension --disable-client-side-phishing-detection - 1000 1927 6.5 3.6 194604 74960 ? Sl 17:11 4:32 /opt/google/chrome/chrome --type=renderer --disable-client-side-phishing-detection -- 1000 2156 0.4 0.7 48344 14896 ? Rl 18:03 0:04 gnome-terminal 1000 2157 0.0 0.0 1988 712 ? S 18:03 0:00 gnome-pty-helper 1000 2158 0.0 0.1 6504 3860 pts/0 Ss 18:03 0:00 bash 1000 2564 0.2 0.1 6624 3984 pts/1 Ss+ 18:17 0:00 bash 1000 2711 0.0 0.0 4208 1352 ? S 18:19 0:00 /bin/bash /home/alexandros/Programme/TeamSpeak3-Client-linux_x86_back/ts3client_runsc 1000 2714 36.5 1.5 210872 31960 ? 
SLl 18:19 0:18 ./ts3client_linux_x86
1000 2743 0.0 0.0 2716 1068 pts/0 R+ 18:20 0:00 ps aux
Output of vmstat:
alexandros@alexandros-laptop:~$ vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 0 1093324 69840 449496 0 0 27 10 476 667 6 2 91 1
Output of lspci:
alexandros@alexandros-laptop:~$ lspci
00:00.0 Host bridge: Silicon Integrated Systems [SiS] 671MX
00:01.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge
00:02.0 ISA bridge: Silicon Integrated Systems [SiS] SiS968 [MuTIOL Media IO] (rev 01)
00:02.5 IDE interface: Silicon Integrated Systems [SiS] 5513 [IDE] (rev 01)
00:03.0 USB Controller: Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f)
00:03.1 USB Controller: Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f)
00:03.3 USB Controller: Silicon Integrated Systems [SiS] USB 2.0 Controller
00:05.0 IDE interface: Silicon Integrated Systems [SiS] SATA Controller / IDE mode (rev 03)
00:06.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge
00:07.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge
00:0d.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)
00:0f.0 Audio device: Silicon Integrated Systems [SiS] Azalia Audio Controller
01:00.0 VGA compatible controller: ATI Technologies Inc Mobility Radeon X2300
02:00.0 Ethernet controller: Atheros Communications Inc. AR5001 Wireless Network Adapter (rev 01)
The TeamSpeak log file:
2011-06-19 19:04:04.223522|INFO | | | Logging started, clientlib version: 3.0.0-rc2 [Build: 14642]
2011-06-19 19:04:04.761149|ERROR |SoundBckndIntf| | /home/alexandros/Programme/TeamSpeak3-Client-linux_x86_back/soundbackends/libpulseaudio_linux_x86.so error: NOT_CONNECTED
2011-06-19 19:04:05.871770|INFO |ClientUI | | Failed to init text to speech engine
2011-06-19 19:04:05.894623|INFO |ClientUI | | TeamSpeak 3 client version: 3.0.0-rc2 [Build: 14642]
2011-06-19 19:04:05.895421|INFO |ClientUI | | Qt version: 4.7.2
2011-06-19 19:04:05.895571|INFO |ClientUI | | Using configuration location: /home/alexandros/.ts3client/ts3clientui_qt.conf
2011-06-19 19:04:06.559596|INFO |ClientUI | | Last update check was: Sa. Jun 18 00:08:43 2011
2011-06-19 19:04:06.560506|INFO | | | Checking for updates...
2011-06-19 19:04:07.357869|INFO | | | Update check, my version: 14642, latest version: 14642
2011-06-19 19:05:52.978481|INFO |PreProSpeex | 1| Speex version: 1.2rc1
2011-06-19 19:05:54.055347|INFO |UIHelpers | | setClientVolumeModifier: 10 -8
2011-06-19 19:05:54.057196|INFO |UIHelpers | | setClientVolumeModifier: 11 2
Thanks for taking the time to read my message.
UPDATE: Thanks to nickguletskii's link I googled for "alsa cpu usage" (without quotes) and it brought me to a forum. A user wrote that directly selecting the hardware with "plughw:x.x" won't impact the performance of the system. I have selected it in the TS 3 configuration and it worked. But this solution is not optimal, because now no other program can access the sound output. If you need any further information or my question is unclear, then please tell me.
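For anyone landing here with the same trade-off: the usual ALSA-side workaround is to keep the application on the default device and let the dmix plugin do software mixing, instead of handing TeamSpeak exclusive access via plughw. A minimal ~/.asoundrc sketch (the card and device numbers are assumptions; check yours with aplay -l):
    # ~/.asoundrc - share the first sound card between applications via dmix
    pcm.!default {
        type plug
        slave.pcm "dmixer"
    }
    pcm.dmixer {
        type dmix
        ipc_key 1024        # any unique integer
        slave.pcm "hw:0,0"  # first card, first device
    }
    ctl.!default {
        type hw
        card 0
    }
With dmix in place, TeamSpeak and other programs can play sound at the same time, so the exclusive plughw setting is no longer needed.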

    Read the article

  • Create excel files with GemBox.Spreadsheet .NET component

    - by hajan
    Generating Excel files from .NET code is not always an easy task, especially if you need to apply formatting or do something very specific that requires extra coding. I’ve recently tried GemBox.Spreadsheet and I would like to share my experience with you. First of all, you can install the GemBox.Spreadsheet library from the VS.NET 2010 Extension Manager by searching the gallery: go to the Online Gallery tab (as in the picture below) and type GemBox in the search box at the top right of the Extension Manager, and you will get the following result. Click Download on GemBox.Spreadsheet and you will be directed to the product website. Click on the marked link and you will get to the page with the component download link. Once you download it, install the MSI file. Open the installation folder and find the Bin folder. There you have GemBox.Spreadsheet.dll in three folders, each for a different .NET Framework version. Now, let's move to Visual Studio.NET. 1. Create a sample ASP.NET Web Application and give it a name. 2. Reference the GemBox.Spreadsheet.dll file in your project. You don't need to search for the dll file on your disk; you can simply find it in the .NET tab of the ‘Add Reference’ window, where you have all three versions. I chose the version for the 4.0.30319 runtime. Next, I will retrieve data from my Pubs database. I'm using Entity Framework. Here is the code (read the comments in it):

//get data from pubs database, tables: authors, titleauthor, titles
pubsEntities context = new pubsEntities();
var authorTitles = (from a in context.authors
                    join tl in context.titleauthor on a.au_id equals tl.au_id
                    join t in context.titles on tl.title_id equals t.title_id
                    select new AuthorTitles
                    {
                        Name = a.au_fname,
                        Surname = a.au_lname,
                        Title = t.title,
                        Price = t.price,
                        PubDate = t.pubdate
                    }).ToList();

//using GemBox library now
ExcelFile myExcelFile = new ExcelFile();
ExcelWorksheet excWsheet = myExcelFile.Worksheets.Add("Hajan's worksheet");
excWsheet.Cells[0, 0].Value = "Pubs database Authors and Titles";
excWsheet.Cells[0, 0].Style.Borders.SetBorders(MultipleBorders.Bottom, System.Drawing.Color.Red, LineStyle.Thin);
excWsheet.Cells[0, 1].Style.Borders.SetBorders(MultipleBorders.Bottom, System.Drawing.Color.Red, LineStyle.Thin);

int numberOfColumns = 5; //the number of properties in the AuthorTitles class
//set the width of each column
for (int c = 0; c < numberOfColumns; c++)
{
    excWsheet.Columns[c].Width = 25 * 256;
}

//header row cells
excWsheet.Rows[2].Cells[0].Value = "Name";
excWsheet.Rows[2].Cells[1].Value = "Surname";
excWsheet.Rows[2].Cells[2].Value = "Title";
excWsheet.Rows[2].Cells[3].Value = "Price";
excWsheet.Rows[2].Cells[4].Value = "PubDate";

//bind authorTitles to the excel worksheet
int currentRow = 3;
foreach (AuthorTitles at in authorTitles)
{
    excWsheet.Rows[currentRow].Cells[0].Value = at.Name;
    excWsheet.Rows[currentRow].Cells[1].Value = at.Surname;
    excWsheet.Rows[currentRow].Cells[2].Value = at.Title;
    excWsheet.Rows[currentRow].Cells[3].Value = at.Price;
    excWsheet.Rows[currentRow].Cells[4].Value = at.PubDate;
    currentRow++;
}

//stylizing the data range
CellStyle style = new CellStyle(myExcelFile);
style.HorizontalAlignment = HorizontalAlignmentStyle.Left;
style.VerticalAlignment = VerticalAlignmentStyle.Center;
style.Font.Color = System.Drawing.Color.DarkRed;
style.WrapText = true;
style.Borders.SetBorders(MultipleBorders.Top
    | MultipleBorders.Left | MultipleBorders.Right
    | MultipleBorders.Bottom, System.Drawing.Color.Black,
    LineStyle.Thin);

//apply the style to the data range (firstRow, firstColumn, lastRow, lastColumn)
//in this example: firstRow = 3, firstColumn = 0,
//lastRow = authorTitles.Count + 2, lastColumn = numberOfColumns - 1
excWsheet.Cells.GetSubrangeAbsolute(3, 0, authorTitles.Count + 2, numberOfColumns - 1).Style = style;

//save my excel file
myExcelFile.SaveXls(Server.MapPath(".") + @"/myFile.xls");

The AuthorTitles class:

public class AuthorTitles
{
    public string Name { get; set; }
    public string Surname { get; set; }
    public string Title { get; set; }
    public decimal? Price { get; set; }
    public DateTime PubDate { get; set; }
}

The excel file will be generated in the root of your ASP.NET Web Application. There is a lot more you can do with this library. A set of good examples is available in the GemBox.Spreadsheet Samples Explorer application, which comes with the installation; you can find it by default in Start –> All Programs –> GemBox Software –> GemBox.Spreadsheet Samples Explorer. Hope this was useful for you. Best Regards, Hajan
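If you would rather have the browser download the spreadsheet than leave it on the server, the same page handler can stream the saved file back after the SaveXls call. A small sketch using the standard ASP.NET Response API (the file name and the usual .xls MIME type are my assumptions, not part of the original article):

//after myExcelFile.SaveXls(...) above: push the generated file to the client
string path = Server.MapPath(".") + @"/myFile.xls";
Response.Clear();
Response.ContentType = "application/vnd.ms-excel";
Response.AddHeader("Content-Disposition", "attachment; filename=myFile.xls");
Response.WriteFile(path);
Response.End();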

    Read the article

  • Java update/install via group policy

    - by Maximus
    I'm trying to deploy the latest Java Runtime Environment, Java 7 Update 9, via Group Policy. I want to update computers that are currently running an older version of Java, a mixture of 7 Update 6 and 7 Update 7; some computers are running versions as old as 6 Update 31, and some are running a mixture of both. I would also like this GP to install Java if it's not installed. Previously I used to push out Java updates to users' machines; as Java didn't remove the old version, when the install was done the user would restart their browser or PC to start using the latest version. Not the best way to manage it, as it leaves the old version installed, but it worked. I've created group policies before for printer deployment and log-on drive-mapping scripts, but never software deployment.
    I've extracted the Java MSI and created a transform file with Orca to suppress the reboot etc., as described on this site: http://ivan.dretvic.com/2011/06/how-to-package-and-deploy-java-jre-1-6-0_26-via-group-policy/. I have also tried saving the edited MSI directly, and that didn't work either. But it just won't deploy.
    I have tried to enable logging as suggested on this site: http://openofficetechnology.com/node/32 (GPO logging via UserEnvDebugLevel, software-deployment logging via AppmgmtDebugLevel, and MSI logging), but no log C:\Windows\Debug\UserMode\userenv.log is being created.
    The Windows Event Viewer has the following entries:
    Error, 24/10/2012 11:44:04 AM: "Failed to apply changes to software installation settings. Software changes could not be applied. A previous log entry with details should exist. The error was : %%1612"
    Information, 24/10/2012 11:44:04 AM: "The removal of the assignment of application Java 7 Update 9 - FB Java Transform from policy JavaDeploy succeeded."
    Error, 24/10/2012 11:44:04 AM: "The install of application Java 7 Update 9 - FB Java Transform from policy JavaDeploy failed. The error was : %%1612"
    A log is created by the MSI logging; it's shown below. It says the source is invalid, but the package exists on the share, the PC that I'm testing on has permissions, and I've followed the recommendation here (Group Policy installation failed error 1274) to enable "Always wait for the network at computer startup and logon":

=== Verbose logging started: 24/10/2012 11:43:59  Build type: SHIP UNICODE 5.00.7601.00  Calling process: C:\Windows\system32\svchost.exe ===
MSI (c) (9C:EC) [11:43:59:898]: Resetting cached policy values
MSI (c) (9C:EC) [11:43:59:898]: Machine policy value 'Debug' is 3
MSI (c) (9C:EC) [11:43:59:898]: ******* RunEngine: ******* Product: {26a24ae4-039d-4ca4-87b4-2f83217009ff} ******* Action: ******* CommandLine: **********
MSI (c) (9C:EC) [11:43:59:898]: Client-side and UI is none or basic: Running entire install on the server.
MSI (c) (9C:EC) [11:43:59:898]: Grabbed execution mutex.
MSI (c) (9C:EC) [11:44:03:431]: Cloaking enabled.
MSI (c) (9C:EC) [11:44:03:431]: Attempting to enable all disabled privileges before calling Install on Server
MSI (c) (9C:EC) [11:44:03:439]: Incrementing counter to disable shutdown. Counter after increment: 0
MSI (s) (2C:70) [11:44:03:574]: Running installation inside multi-package transaction {26a24ae4-039d-4ca4-87b4-2f83217009ff}
MSI (s) (2C:70) [11:44:03:574]: Grabbed execution mutex.
MSI (s) (2C:7C) [11:44:03:607]: Resetting cached policy values
MSI (s) (2C:7C) [11:44:03:607]: Machine policy value 'Debug' is 3
MSI (s) (2C:7C) [11:44:03:607]: ******* RunEngine: ******* Product: {26a24ae4-039d-4ca4-87b4-2f83217009ff} ******* Action: ******* CommandLine: **********
MSI (s) (2C:7C) [11:44:03:607]: Machine policy value 'DisableUserInstalls' is 0
MSI (s) (2C:7C) [11:44:03:623]: User policy value 'SearchOrder' is 'nmu'
MSI (s) (2C:7C) [11:44:03:624]: User policy value 'DisableMedia' is 0
MSI (s) (2C:7C) [11:44:03:624]: Machine policy value 'AllowLockdownMedia' is 0
MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Media enabled only if package is safe.
MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Looking for sourcelist for product {26a24ae4-039d-4ca4-87b4-2f83217009ff}
MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Adding {26a24ae4-039d-4ca4-87b4-2f83217009ff}; to potential sourcelist list (pcode;disk;relpath).
MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Now checking product {26a24ae4-039d-4ca4-87b4-2f83217009ff}
MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Media is enabled for product.
MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Attempting to use LastUsedSource from source list.
MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Processing net source list.
MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Trying source \\server\share\deployment\Java\stable\x32\.
MSI (s) (2C:7C) [11:44:03:650]: Note: 1: 2303 2: 5 3: \\server\share\
MSI (s) (2C:7C) [11:44:03:650]: Note: 1: 1325 2: deployment
MSI (s) (2C:7C) [11:44:03:650]: ConnectToSource: CreatePath/CreateFilePath failed with: -2147483648 1325 -2147483648
MSI (s) (2C:7C) [11:44:03:650]: ConnectToSource (con't): CreatePath/CreateFilePath failed with: -2147483648 -2147483648
MSI (s) (2C:7C) [11:44:03:650]: SOURCEMGMT: net source '\\server\share\deployment\Java\stable\x32\' is invalid.
MSI (s) (2C:7C) [11:44:03:650]: Note: 1: 1706 2: -2147483647 3: jre1.7.0_09.msi
MSI (s) (2C:7C) [11:44:03:650]: SOURCEMGMT: Processing media source list.
MSI (s) (2C:7C) [11:44:04:668]: Note: 1: 2203 2: 3: -2147287037
MSI (s) (2C:7C) [11:44:04:668]: SOURCEMGMT: Source is invalid due to missing/inaccessible package.
MSI (s) (2C:7C) [11:44:04:668]: Note: 1: 1706 2: -2147483647 3: jre1.7.0_09.msi
MSI (s) (2C:7C) [11:44:04:668]: SOURCEMGMT: Processing URL source list.
MSI (s) (2C:7C) [11:44:04:668]: Note: 1: 1402 2: UNKNOWN\URL 3: 2
MSI (s) (2C:7C) [11:44:04:668]: Note: 1: 1706 2: -2147483647 3: jre1.7.0_09.msi
MSI (s) (2C:7C) [11:44:04:668]: Note: 1: 1706 2: 3: jre1.7.0_09.msi
MSI (s) (2C:7C) [11:44:04:668]: SOURCEMGMT: Failed to resolve source
MSI (s) (2C:7C) [11:44:04:668]: MainEngineThread is returning 1612
MSI (s) (2C:70) [11:44:04:670]: User policy value 'DisableRollback' is 0
MSI (s) (2C:70) [11:44:04:670]: Machine policy value 'DisableRollback' is 0
MSI (s) (2C:70) [11:44:04:670]: Incrementing counter to disable shutdown. Counter after increment: 0
MSI (s) (2C:70) [11:44:04:670]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\Rollback\Scripts 3: 2
MSI (s) (2C:70) [11:44:04:671]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\Rollback\Scripts 3: 2
MSI (s) (2C:70) [11:44:04:671]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\InProgress 3: 2
MSI (s) (2C:70) [11:44:04:671]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\InProgress 3: 2
MSI (s) (2C:70) [11:44:04:671]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1
MSI (s) (2C:70) [11:44:04:671]: Restoring environment variables
MSI (c) (9C:EC) [11:44:04:675]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1
MSI (c) (9C:EC) [11:44:04:675]: MainEngineThread is returning 1612
=== Verbose logging stopped: 24/10/2012 11:44:04 ===

    I'm not sure what my next approach should be. Any help would be much appreciated. Thanks.
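As an aside on the MSI logging step: the "Machine policy value 'Debug' is 3" lines in the log indicate the Windows Installer debug policy is already in effect; the standard machine-wide policy values for verbose MSI logging live under HKLM\SOFTWARE\Policies\Microsoft\Windows\Installer. Here is a minimal sketch of setting them programmatically; this is an illustration, not necessarily the exact method the linked guide uses ("voicewarmup" is the commonly documented flag string, and the code must run elevated):

using Microsoft.Win32;

class EnableMsiVerboseLogging
{
    static void Main()
    {
        // Windows Installer reads its logging policy from this machine-wide key
        using (RegistryKey installer = Registry.LocalMachine.CreateSubKey(
            @"SOFTWARE\Policies\Microsoft\Windows\Installer"))
        {
            // each letter enables one logging option; verbose logs are then
            // written as MSI*.log under %TEMP% (%windir%\Temp for SYSTEM installs)
            installer.SetValue("Logging", "voicewarmup", RegistryValueKind.String);

            // matches the "Machine policy value 'Debug' is 3" entries in the log above
            installer.SetValue("Debug", 3, RegistryValueKind.DWord);
        }
    }
}

Since the 1612 source-resolution failure happens in the server-side (SYSTEM) context rather than as the logged-on user, it is also worth double-checking that the computer accounts themselves (e.g. Domain Computers) have read access, both share and NTFS, to \\server\share\deployment\Java\stable\x32\.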

