Search Results

Search found 6770 results on 271 pages for 'azure storage'.


  • CodePlex Daily Summary for Sunday, October 20, 2013

    CodePlex Daily Summary for Sunday, October 20, 2013Popular ReleasesKerbalAlarmClock: v2.6.1.0 Release: Version 2.6.1.0 Recompiled it for 0.21 Added Crew Alarms (track Kerbal rather than Vessel) Added Distance Target Alarms - distance from target vessel or altitude above planet Added Launch Rendezvous Alarm (under Ascending/Descending Node for Landed craft) - MechJeb2 code - thanks r4m0n Allow restoration of Nodes that you have passed (useful for interplanetary burns) Added missing Dres Transfer Model data - thanks Voneiden Added view only version of Alarm clock to both Space Center...MSBuild Extension Pack: October 2013: Release Blog Post The MSBuild Extension Pack October 2013 release provides a collection of over 480 MSBuild tasks. A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GUI...VG-Ripper & PG-Ripper: VG-Ripper 2.9.49: changes NEW: Added Support for "ImageTeam.org links NEW: Added Support for "ImgNext.com" links NEW: Added Support for "HostUrImage.com" links NEW: Added Support for "3XVintage.com" linksMedia Companion: Media Companion MC3.583b: As before release but fixed for no movie poster sourcesNew* Both - Added 'An' as option to ignore in title * Movie - Renaming - added %Z - Sorttitle to Legend * Movie - Renaming - added %O - Audio Channels to Legend * Movie - Remove a poster source from priority list. Reset List back to defaults. * Made Media Companion truly portable application. Fixed* Movie - browse for Poster Or Fanart, allows for jpg, tbn, png and bmp images * Movie - Alt Fanart Browser - Url or Browse window now fully...MoreTerra (Terraria World Viewer): MoreTerra 1.11.3.1: Release 1.11.3.1 ================ = New Features = ================ Added markers for Copper Cache, Silver Cache and the Enchanted Sword. ============= = Bug Fixes = ============= Use Official Colors now no longer tries to change the Draw Wires option instead. World reading was breaking for people with a stock 1.2 Terraria version. Changed world name reading so it does not crash the program if you load MoreTerra while Terraria is saving the world. =================== = Feature Removal = =...patterns & practices - Windows Azure Guidance: Cloud Design Patterns: 1st drop of Cloud Design Patterns project. It contains 14 patterns with 6 related guidance.Player Framework by Microsoft: Player Framework for Windows and WP (v1.3): Includes all changes in v1.3 beta 1 and v1.3 beta 2 Support for Windows 8.1 RTM and VS2013 RTM Xaml: New property: AutoLoadPluginTypes to help control which stock plugins are loaded by default (requires AutoLoadPlugins = true). Support for SystemMediaTransportControls on Windows 8.1 JS: Support for visual markers in the timeline. JS: Support for markers collection and markerreached event. JS: New ChaptersPlugin to automatically populate timeline with chapter tracks. JS: Audio an...Json.NET: Json.NET 5.0 Release 8: Fix - Fixed not writing string quotes when QuoteName is falsePowerShell Community Extensions: 3.1 Production: PowerShell Community Extensions 3.1 Release NotesOct 17, 2013 This version of PSCX supports Windows PowerShell 3.0 and 4.0 See the ReleaseNotes.txt download above for more information.SQL Power Doc: Version 1.0.2.1: Misc. 
bug fixes Added logic to resolve members of a Windows Group server login Added columns to Excel workbooks to show definitions for server permissions, server roles, database permissions, and database rolesSocial Network Importer for NodeXL: SocialNetImporter(v.1.9): This new version includes: - Download latest status update and use it as vertex tooltip - Limit the timelines to parse to me, my friends or both - Fixed some reported bugs about the fan page and group importer - Fixed the login bug reported latelyTerrariViewer: TerrariViewer v7.1 [Terraria Inventory Editor]: You can now backspace in number fields Items added in 1.2.0.3 no longer corrupt player files Buff durations capped at 9999999 Item stacks capped at 9999999 Version info added Prefix IDs corrected Shoe and Eye color box are now properly clickable Moved Bank and Safe into their own tab Users will now be notified of new updatesPython Tools for Visual Studio: 2.0: PTVS 2.0 We’re pleased to announce the release of Python Tools for Visual Studio 2.0 RTM. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, IPython, and cross platform and cross language debugging support. QUICK VIDEO OVERVIEW For a quick overview of the general IDE experience, please watch this v...CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.8.2: Solved scrolling problem after DocumentFormatting Implemented "format as you type" --- To avoid the DLLs getting locked by OS use MSI file for the installation.LINQ to Twitter: LINQ to Twitter v2.1.09: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.Sandcastle Help File Builder: SHFB v1.9.8.0 with Visual Studio Package: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This new release contains bug fixes and feature enhancements. There are some potential breaking changes in this release as some features of the Help File Builder have been moved into...C++ REST SDK (codename "Casablanca"): C++ REST SDK 1.3.0: This release fixes multiple customer reported issues as well as the following: Full support for Dev12 binaries and project files Full support for Windows XP New sample highlighting the Client and Server APIs : BlackJack Expose underlying native handle to set custom options on http_client Improvements to Listener Library Note: Dev10 binaries have been dropped as of this release, however the Dev10 project files are still available in the Source CodeAD ACL Scanner: 1.3.2: Minor bug fixed: Powershell 4.0 will report: Select—Object: Parameter cannot be processed because the parameter name p is ambiguous.Fast YouTube Downloader: YouTube Downloader 2.2.0: YouTube Downloader 2.2.0VidCoder: 1.5.8 Beta: Added hardware acceleration options: Bicubic OpenCL scaling algorithm, QSV decoding/encoding and DXVA decoding. Updated HandBrake core to SVN 5834. Updated VidCoder setup icon. Fixed crash when choosing the mp4v2 container on x86 and opening on x64. 
Warning: the hardware acceleration features require specific hardware or file types to work correctly: QSV: Need an Intel processor that supports Quick Sync Video encoding, with a monitor hooked up to the Intel HD Graphics output and the lat...New ProjectsAdd2Nums: Add2Nums is a VB.NET project that takes 2 numbers, computes their sum and outputs the result. Developed by Justin Mifsud as part of Assignment 1 (7COM0152)Athir: GPAO El_AthirCS-MIC - C# Math Input Control: CS-MIC is a .NET library written in C# designed to give developers easy access to expression parsing.HLS Video Player: An open source over-the-network video player for HLS (Hinkle Light Show)Project Stark: This is a secret project available only to the project owners for the time being.Run ++: Run Plus Plus is a tool that enables you to custom the commands in the "Run" dialog.SimpleAddition: This is a simple ASP.NET in VB.NET page that allows users to enter 2 numbers, and display their sum. SWE 681 Go Fish: Project

    Read the article

  • Are you ready to take a walk in the clouds?

    - by Steve Loethen
    Cloud computing is here, whether we want it or not.  When I say "a walk in the clouds" I am not talking about a pleasant romantic comedy, but about a real alternative to hosting applications on-premises.  For years we have had the power to host our web sites on remote systems.  Sure, challenges existed, and the choices were mostly limited to web sites.  I could, with a few clicks, create an account at any of a myriad of web hosting companies, put my site in the hands of a remote hosting company, and boom, I had a site on the internet.  But choice, power, and management were limited. Now we have a set of services that gives us the power and control we love, but with the scalability of the data center.  My personal web site is hosted on a laptop running Hyper-V in my basement.  I have to manage the machine, patch it, and make sure it is powered up.  This is fine for the "hello, this is my dog Skippy" site that I maintain. If the football pool I run has an issue, one of my 10 users calls or emails me and I go check it out.  All is well. But this falls well below the needs of even the simplest of enterprises.  A business needs a stronger data center and a better pipe to the world.  Do I really want to base my business on dynamic DNS and a DSL line from the local phone company? Cloud computing gives us most of what I value (control, a DB of my own, updating my site from Visual Studio). Come learn how this technology can transform your business.  If you are a Microsoft shop, or are interested in Microsoft in the cloud, a free two-day Azure training class is being conducted in Kansas City on April 8 and 9.  http://www.azurebootcamp.com/city/kansascity Hope to see you there.  If you come, make sure you look me up.

    Read the article

  • ugg slippers make you feel balmy and comfy

    - by skhtyu skhtyu
    You have to know that while cheap ugg boots are known for winter wear, they can in fact be worn in every season. These uggs for kids are an Australian import made from merino sheepskin and are rugged as well as fashionable. There are hip and comfortable jackets, vests, sweatshirts, and more. For a contemporary ensemble that makes you feel warm and comfy, try wearing your boots with some leggings or a sweater dress with a cape or scarf thrown on. If you don't own a pair of pink uggs yet, now may be a good time to buy yourself one. They come in colors from azure to orange to brown, as well as the always-reliable black. And if you want to go for a decidedly more ethnic look, pair them with a country-style design like those seen on the catwalks showing purple uggs and Badgley Mischka. With this, you can already see which website is offering the ugg outlet online that you have been wanting, at a price you can afford. The wide-throated ones are simple to slip in and out of, which makes them favored among all women. uggs for sale were originally made in Australia. There are also clear address label rolls on which label fonts, sizes and font colours can be customized.

    Read the article

  • November 2012 Chicago IT Architects Group Meeting Recap

    - by Tim Murphy
    So the year is coming to an end.  A hearty few came out two days before Thanksgiving to discuss adopting agile in the enterprise.  While Norm Murrin claimed to be nervous about talking in front of a group, you wouldn't have known it by his presentation.  He really made a topic that has always been hard to relate very personal.  This led to some great discussion.  I came away looking for ways to investigate agile further.  His presentation can be found here. This was our last meeting for the year.  We are looking forward to next year and are starting to line up some speakers and topics.  At this point we have an Azure presentation coming in February and are ironing out talks for January and March.  If you would like to join us and have topics you would like to see presented, contact me through this blog.  Either leave a comment here or use the contact page.  I would love to hear from you. Have a great holiday season and we will see you next year. del.icio.us Tags: Chicago Information Technology Architects Group,CITAG,Agile,Norman Murrin

    Read the article

  • Release notes for 9/25/2012

    Below are the release notes from today's deployment. 1. With today's deployment we've made some significant changes to the source code experience. First of all, you'll notice that we moved the Source Code tab closer to the project home tab. We believe that this will help make source code more discoverable and emphasize our focus on developer collaboration. The next thing you'll notice is that when you click on the Source Code tab, you will immediately be browsing code. We want to get you to the project source code in a minimum number of clicks, and this change helps get you there. The changeset history is still there, which brings us to the next change: we implemented an action bar in the source code section, which will make certain actions more discoverable, including forking, cloning, and downloading source code. The popups in the action bar will help you perform the tasks you need to do when contributing to projects, as well as managing your own projects. Take a look at how easy it is to find the clone/connection URL now! 2. The second exciting thing we turned on this week is the ability to enable Windows Azure Web Sites to build and deploy your project source code (for Git source code projects). You can read more about how to do this in Mark's post here. 3. We also made some improvements in other areas this week: made some improvements to screen reader accessibility, and fixed some minor UI issues in the browse source code page. We'd love to have your feedback on the new changes to the source code tab. Please let us know what you think on our suggestions page, send us a message on Twitter @codeplex, or you can reach Mark Groves directly @mgroves84

    Read the article

  • Review of TechEd 2012 - so far

    - by Stefan Barrett
    Disclaimer: probably going to next year's TechEd (but not 100% sure). As with most TechEds, this is not one of the best, but it's not bad.  Some impressions so far: The food is not bad, though perhaps with not as much choice as in previous years.  The snacks, while a bit limited, are at least available.  The alumni lounge is OK, though perhaps not as good as last year's.  Wifi is a bit worse than in previous years: not really working in the big room, and a bit sporadic in the rest of the building. The device seems to make a big difference; the iPad seems to connect the easiest, while the iPhone & Lumia 800 are really struggling.  The real problem is the content, which is not as developer-focused as in previous years.  This shows up in a number of different ways; for example, while there is a Visual Studio booth, there is not much sign of anybody from the language teams.  This is one of the few TechEds where I don't feel very surprised about anything, having seen most of the developer stuff in previews. One example where I was surprised was the pre-conf on C++: it's been years since I did any C++, but based on that session perhaps I should start again. While there are sessions, I'm not finding my schedule very challenged; for each time-slot there only seem to be 1, or rarely 2, interesting sessions.  The focus seems to be on Windows 8, Azure and the phone, which while interesting (might give Win8 a go), are not enough.

    Read the article

  • Use C2WTS to get a classic windows identity from a claims identity

    - by Sahil Malik
    SharePoint, WCF and Azure Trainings: more information. I know you're going to find this useful at some point. A lot of backend systems still demand classic Windows identities, but everything we do now has moved to claims. So sometimes (albeit rarely) we have to translate a claims identity into a classic Windows identity. This is where the "Claims to Windows Token Service" (C2WTS) comes into play. SharePoint 2010 and 2013 make use of this, but you can use it in any .NET application. There are some basic requirements for this to work. First, you will need the string value of a UPN claim. Just a string value, really! This means you can also use FBA or anything else. The "proper" way to do this, of course, is to originate it from an AD-backed claim, so a user authenticated using ADFS or similar would be perfect. Just remember that you must issue the UPN claim. Read full article ....
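    For reference, here is a minimal C# sketch of the translation the article describes. It assumes the WIF C2WTS client assembly (Microsoft.IdentityModel.WindowsTokenService) is referenced and that the Claims to Windows Token Service is running and trusts the calling process; the helper class and method names below are mine, not from the article:

        using System;
        using System.Security.Claims;
        using System.Security.Principal;
        using Microsoft.IdentityModel.WindowsTokenService;

        public static class ClaimsToWindows
        {
            // Translate a claims identity into a classic Windows identity via C2WTS.
            public static void RunAsWindowsIdentity(ClaimsPrincipal principal, Action work)
            {
                // All C2WTS needs is the string value of the UPN claim.
                Claim upnClaim = principal.FindFirst(ClaimTypes.Upn);
                if (upnClaim == null)
                    throw new InvalidOperationException("No UPN claim was issued.");

                // S4U ("protocol transition") logon: no password is required.
                WindowsIdentity identity = S4UClient.UpnLogon(upnClaim.Value);

                // Impersonate the classic Windows identity while calling the backend.
                using (WindowsImpersonationContext ctx = identity.Impersonate())
                {
                    work();
                }
            }
        }

    For the resulting token to be usable against a remote backend, the C2WTS service account also needs to be configured for protocol transition and constrained delegation to that backend.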

    Read the article

  • Microsoft WPC 12 – Predictions

    - by D'Arcy Lussier
    Let me start by saying I have absolutely no inside knowledge, through the MVP program or any other means, that is fuelling what I'm about to write. This is entirely conjecture, fuelled by speculation and too much Sapporo beer at a fantastic Japanese restaurant tonight. Still, I present to you… D'Arcy's Worldwide Partner Conference 2012 Predictions!!! So what can we expect to be announced at this year's WPC? Much more than last year, I'm hoping! Last year was sort of encouraging the troops to carry on with the Windows 7 messaging even with Windows 8 looming in the distance. It also showed Microsoft's slant towards Private Cloud in addition to Azure. This year, we're going to see a shift to a battle cry: Windows 8 is Coming, Windows 8 is Coming! I expect we're going to hear an RTM date for Windows 8 from Steve Ballmer tomorrow, in addition to dates surrounding Windows Server 2012. We'll also hear some announcement around Windows Phone 8, but I'm not really sure what; that whole piece is still quite muddy. Are we going to actually *see* Windows Phone 8 devices this week? That would be great, but I imagine those types of announcements might be left for Build. Speaking of Build, I'm expecting an announcement of a date for a Build conference this fall, probably late October. If any announcements are going to be made around Office 15, the schedule isn't hinting at it. In fact, other than Office 365 there's not much mention of Office in the conference sessions; either a red herring, or telling that Microsoft has another announcement coming later. The tagline of the conference is "A New Era. Together." It's obvious Microsoft wants to leverage WPC to rally its partners to carry the Windows 8 banner into the field of battle this fall when it ships. D

    Read the article

  • SQL Server 2008 R2 Launch Event - Montreal

    - by guybarrette
    If you're into SQL Server, you may want to attend the free SQL Server 2008 R2 launch event that will take place on May 26th, 2010 in Montreal.

    Agenda:
    8:00 - 9:00am: Registration and Breakfast
    9:00 - 9:15am: Welcome and Introductions
    9:15 - 10:00am: Keynote Presentation
    10:00 - 10:15am: Morning break
    10:15 - 11:45am: SQL Server Presentation
    11:45 - 12:45pm: Lunch
    12:45 - 1:45pm: Track Session 1
    1:45 - 2:45pm: Track Session 2
    2:45 - 3:00pm: Afternoon break
    3:00 - 4:00pm: Track Session 3

    Track Descriptions

    DBA Track:
    Session 1: Ensure Business Continuity with SQL Server 2008 R2, Windows Server 2008 & Hyper-V Live Migration
    Session 2: Simplify management of your SQL Server data platform with Multi-server Management
    Session 3: Deliver unprecedented access to business-critical data at a lower TCO with SQL Server 2008 R2 Parallel Data Warehouse

    BI Track:
    Session 1: Enable Managed Self-service BI with Power Pivot for Excel and SharePoint 2010
    Session 2: Achieve Rapid Reporting with Reporting Services and Report Builder 3.0
    Session 3: Importance of Master Data Management

    Dev - Visual Studio Track:
    Session 1: Developing SQL Applications with Visual Studio 2010
    Session 2: Managing Change for SQL Server applications using Team Foundation Server
    Session 3: Targeting SQL Azure using Visual Studio

    Register here

    Read the article

  • The Web Weekly Newsletter

    - by dwahlin
    Several months ago I created a few FlipBoard magazines that a ton of developers (over 50,000!) have used to access content on JavaScript, HTML5, AngularJS, Azure, XAML, Web API, and more. While the feedback on the magazines has been super positive, several people have asked about having the content pushed to them. I'm generally too busy to remember to go check a particular link on a regular basis, so I definitely understood and agree with the comments. I'm happy to announce a new newsletter I'm calling the Web Weekly. It'll highlight content across the different FlipBoard magazines plus other sources and go out to subscribers a few times a month (weekly when possible). The first issue is ready to go and includes a "video highlights" segment I created to show some of my favorite content. If you're interested in staying on top of all the cutting-edge Web technologies, feel free to subscribe!

    Read the article

  • Windows Phone 8, I want you to be successful

    - by Sahil Malik
    SharePoint 2010 Training: more information. I assure you, SharePointy posts are on their way – LOTS of them. Just aligning my cannons. But here is something I need to get off my chest. Just think of this as the thoughts of a fellow techie who wants the best for all of us. Consider this: there would have been no IE7 if there was no Firefox. There would be no Azure if there was no Amazon AWS. There would be no Windows Phone 7 if there was no iOS. And if there was no MacOS, you, my dear friend, would have to choose between Linux and Windows ME. See! Choice is good. Not only is it good for the consumer, it is good for us techies, us engineers. If there is no innovation, there is no new knowledge being created. No innovation devalues our minds, because there is no use for them. No innovation also means a poorer experience for users. The tech industry is very different. Read full article ....

    Read the article

  • A Web Service to collect data from local servers every hour

    - by anilerduran
    I'm trying to find a way to collect data from different servers around the world. Here are the details: There is only a single PowerShell script on the servers that encrypts data (a simple CSV file) and sends it with a preferred method (an HTTP/HTTPS POST would work). I have no further control over those servers: I can't install any service, process, etc. All I can do is configure the script to execute every hour. The script will also have an encrypted username/password/license key for every server, and it will compress the data and send it to me along with this information. So I need a service on the cloud (I'm not sure if a Web Service is the right solution) that will: receive the data sent from the servers; authenticate each request to recognize the sender using the license key/username/password; and, most importantly, redirect/send the file to my SQL Server on the cloud (Azure). It should also separate data according to the customer information in the license key, so that the data for every customer is stored in dedicated DBs/tables on my SQL Server. All the processes above should be completed automatically, with no manual steps. Question: Is a Web Service (SOAP or RESTful) the right solution for this?
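    Not an authoritative answer, but the requirements map naturally onto a RESTful endpoint. Below is a minimal ASP.NET Web API sketch of the receiving side; UploadController, ILicenseStore, and ICustomerDataWriter are hypothetical names made up for illustration, wired up through whatever IoC container you use:

        using System.Net;
        using System.Net.Http;
        using System.Threading.Tasks;
        using System.Web.Http;

        public class UploadController : ApiController
        {
            private readonly ILicenseStore _licenses;      // validates license key/username/password
            private readonly ICustomerDataWriter _writer;  // routes data to the customer's DB/tables

            public UploadController(ILicenseStore licenses, ICustomerDataWriter writer)
            {
                _licenses = licenses;
                _writer = writer;
            }

            [HttpPost]
            public async Task<HttpResponseMessage> Post(string licenseKey)
            {
                // Authenticate the request by the license key sent with the upload.
                CustomerInfo customer = _licenses.Validate(licenseKey);
                if (customer == null)
                    return Request.CreateResponse(HttpStatusCode.Unauthorized);

                // The compressed, encrypted CSV posted by the PowerShell script.
                byte[] payload = await Request.Content.ReadAsByteArrayAsync();

                // Decompress/decrypt, then bulk-insert into that customer's tables.
                await _writer.SaveAsync(customer, payload);
                return Request.CreateResponse(HttpStatusCode.OK);
            }
        }

        public interface ILicenseStore { CustomerInfo Validate(string licenseKey); }
        public interface ICustomerDataWriter { Task SaveAsync(CustomerInfo customer, byte[] payload); }
        public class CustomerInfo { public string CustomerId { get; set; } }

    Keeping the license lookup in front of the writer is what gives you the per-customer separation automatically, with no manual steps.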

    Read the article

  • Build 2013 – Keynote Thoughts

    - by D'Arcy Lussier
    Originally posted on: http://geekswithblogs.net/dlussier/archive/2013/06/26/153243.aspx Some thoughts on the Build 2013 keynote. They Listened to Feedback while Keeping to their Plans: I am one of the people in the "bring back the Start menu" camp. I want my Start menu. I *like* my Start menu. Microsoft heard that and put it back, fantastic. But they implemented it in a way that still pushes the Windows 8 UI, and I'm actually pretty happy with it. When you hit the Start menu, you get the live tiles displayed overlaying the desktop. But you can also swipe from the bottom to get the "all applications" view. This, in essence, is really what those who like the Start menu want. I believe it was mentioned that you can configure the all-applications view to be the default. They're Committed to Improving Windows 8: The commitment to rapid deployments Ballmer talked about is crucial to Windows 8's success. They need to keep it evolving quickly to maintain the interest of users and developers. I think the little improvements they showed are excellent (hands-free mode, multi-window docking, better multi-monitor support, new developer controls, etc.). Hardware Vendors are Committed to Windows 8: They showed off a number of new hardware products (Windows 8 and Windows Phone). The Surface's introduction to the market has done nothing to dissuade their hardware partners. Bing as a Platform is Huge for Developers!!! This was the biggest take-away from the keynote! What the team is doing with Bing not as a search engine but as a developer API is very impressive! I'm going to be diving into this over the rest of Build, so watch for more blog posts on it. Azure, Office 365, and other topics will be covered at tomorrow's keynote. So far, a great kick-off to Build. Now on to sessions! D

    Read the article

  • MVC 4 Authentication

    - by Aligned
    First: after searching for a while to figure out what's new/different with MVC 4 and forms authentication, this is the best article I've found on the subject: http://weblogs.asp.net/jgalloway/archive/2012/08/29/simplemembership-membership-providers-universal-providers-and-the-new-asp-net-4-5-web-forms-and-asp-net-mvc-4-templates.aspx Some quotes from the article: "The ASP.NET Web Pages team designed SimpleMembership to (wait for it) simplify the task of dealing with membership." "WSAT is built to work with ASP.NET Membership, and is not compatible with Simple Membership. There are two main options there: Use the WebSecurity and OAuthWebSecurity API to manage the users and roles; Create a web admin using the above APIs. Since SimpleMembership runs on top of your database, you can update your users as you would any other data - via EF or even in direct database edits (in development, of course)." "If you want to use an existing ASP.NET Membership Provider in ASP.NET MVC 4, you can't use the new AccountController. You can do a few things:" "Universal Providers do not work with Simple Membership." This post (look for Bob.at.SBS's answer) says Universal Providers are not needed for MVC 4 to work in Azure. I've been trying to figure out forms authentication in MVC 4. It's different from the past approach (aspnet_regsql). If you do File > New Project > MVC 4 > Internet Application, you get a really nice template with the controller and model set up for you. However, the tables are different from those created using aspnet_regsql, and the ASP.NET Configuration tool (WSAT) wasn't connecting to the data I had (it was creating an App_Data/aspnet.mdf file, which I didn't see right away). Points of note: The database tables are created in the SimpleMembershipInitializer class when you first run your app, using Entity Framework 5 migration functionality. The tables created are webpages_Membership, webpages_OAuthMembership, webpages_Roles, webpages_UsersInRoles, and UserProfile. Web.config settings don't seem to be needed. Scott Hanselman's post on Universal Providers was also useful, if somewhat outdated. Universal Providers and SimpleMembership are not compatible. http://www.asp.net/web-pages/tutorials/security/16-adding-security-and-membership – walk-through
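    For reference, a minimal sketch of that bootstrap using the WebMatrix.WebData APIs the article quotes. The "DefaultConnection" string and the UserProfile/UserId/UserName names are the MVC 4 internet template defaults; calling this from Application_Start instead of relying on the lazy SimpleMembershipInitializer is my own assumption:

        using System.Web.Security;
        using WebMatrix.WebData;

        public static class AuthConfig
        {
            public static void Init()
            {
                if (!WebSecurity.Initialized)
                {
                    // Creates the webpages_* tables on first run.
                    WebSecurity.InitializeDatabaseConnection(
                        "DefaultConnection",  // connection string name
                        "UserProfile",        // user profile table
                        "UserId",             // key column
                        "UserName",           // user name column
                        autoCreateTables: true);
                }

                // Manage users and roles in code, since WSAT can't talk to SimpleMembership.
                if (!WebSecurity.UserExists("admin"))
                    WebSecurity.CreateUserAndAccount("admin", "Pass@word1");

                if (!Roles.RoleExists("Admins"))
                    Roles.CreateRole("Admins");
            }
        }

    Since SimpleMembership runs on top of your database, anything beyond this (password resets, role assignments) can be done with the same WebSecurity/Roles calls or with direct table edits during development.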

    Read the article

  • LINQ to SQL and missing Many to Many EntityRefs

    - by Rick Strahl
    Ran into an odd behavior today with a many-to-many mapping of one of my tables in LINQ to SQL. Many-to-many mappings aren't transparent in LINQ to SQL: it maps the link table the same way the SQL schema has it when creating one. In other words, LINQ to SQL isn't smart about many-to-many mappings and just treats them like the 3 underlying tables that make up the many-to-many relationship. Iain Galloway has a nice blog entry about many-to-many relationships in LINQ to SQL. I can live with that – it's not really difficult to deal with this arrangement once mapped, especially when reading data back. Writing is a little more difficult, as you do have to insert into two entities for new records, but nothing that can't be handled in a small business object method with a few lines of code. When I created a database I've been using to experiment with various different OR/Ms recently, I found that for some reason LINQ to SQL was completely failing to map even the linking table. As it turns out there's a good reason why it fails – can you spot it below? (read on :-}) Here is the original database layout: there's an Items table, a Category table and a link table that holds only the foreign keys to the Items and Category tables for a typical M->M relationship. When these three tables are imported into the model they *look* correct – I do get the relationships added (after modifying the entity names to strip the prefix). The relationship looks perfectly fine, both in the designer as well as in the XML document:

        <Table Name="dbo.wws_Item_Categories" Member="ItemCategories">
          <Type Name="ItemCategory">
            <Column Name="ItemId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" />
            <Column Name="CategoryId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" />
            <Association Name="ItemCategory_Category" Member="Categories" ThisKey="CategoryId" OtherKey="Id" Type="Category" />
            <Association Name="Item_ItemCategory" Member="Item" ThisKey="ItemId" OtherKey="Id" Type="Item" IsForeignKey="true" />
          </Type>
        </Table>
        <Table Name="dbo.wws_Categories" Member="Categories">
          <Type Name="Category">
            <Column Name="Id" Type="System.Guid" DbType="UniqueIdentifier NOT NULL" IsPrimaryKey="true" IsDbGenerated="true" CanBeNull="false" />
            <Column Name="ParentId" Type="System.Guid" DbType="UniqueIdentifier" CanBeNull="true" />
            <Column Name="CategoryName" Type="System.String" DbType="NVarChar(150)" CanBeNull="true" />
            <Column Name="CategoryDescription" Type="System.String" DbType="NVarChar(MAX)" CanBeNull="true" />
            <Column Name="tstamp" AccessModifier="Internal" Type="System.Data.Linq.Binary" DbType="rowversion" CanBeNull="true" IsVersion="true" />
            <Association Name="ItemCategory_Category" Member="ItemCategory" ThisKey="Id" OtherKey="CategoryId" Type="ItemCategory" IsForeignKey="true" />
          </Type>
        </Table>

    However, when looking at the generated code, these navigation properties (also on Item) are completely missing:

        [global::System.Data.Linq.Mapping.TableAttribute(Name="dbo.wws_Item_Categories")]
        [global::System.Runtime.Serialization.DataContractAttribute()]
        public partial class ItemCategory : Westwind.BusinessFramework.EntityBase
        {
            private System.Guid _ItemId;
            private System.Guid _CategoryId;

            public ItemCategory()
            { }

            [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_ItemId", DbType="uniqueidentifier NOT NULL")]
            [global::System.Runtime.Serialization.DataMemberAttribute(Order=1)]
            public System.Guid ItemId
            {
                get { return this._ItemId; }
                set
                {
                    if ((this._ItemId != value))
                    {
                        this._ItemId = value;
                    }
                }
            }

            [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_CategoryId", DbType="uniqueidentifier NOT NULL")]
            [global::System.Runtime.Serialization.DataMemberAttribute(Order=2)]
            public System.Guid CategoryId
            {
                get { return this._CategoryId; }
                set
                {
                    if ((this._CategoryId != value))
                    {
                        this._CategoryId = value;
                    }
                }
            }
        }

    Notice that the Item and Category association properties, which should be EntityRef properties, are completely missing. They're there in the model, but in the generated code – not so much. So what's the problem here? The problem – it appears – is that LINQ to SQL requires primary keys on all entities it tracks. In order to support tracking – even of the link table entity – the link table requires a primary key. Real obvious, ain't it, especially since the designer happily lets you import the table and even shows the relationship and, implicitly, the related properties. Adding an Id field as a PK to the database and then importing results in a model layout which properly generates the Item and Category properties on the link entity. It's ironic that LINQ to SQL *requires* the PK in the middle – the Entity Framework requires that a link table have *only* the two foreign key fields in order to recognize a many-to-many relation. EF actually handles the M->M relation directly without the intermediate link entity, unlike LINQ to SQL. [updated from comments – 12/24/2009] Another approach is to set up a composite primary key on both ItemId and CategoryId in the database, which also shows up in LINQ to SQL. This also works in creating the Category and Item fields on the ItemCategory entity. Ultimately this is probably the best approach, as it also guarantees uniqueness of the keys and so helps database integrity. It took me a while to figure out WTF was going on here – lulled by the designer into thinking that the properties should be there when they were not. It's actually a well-documented feature of L2S that each entity in the model requires a PK, but of course that's easy to miss when the model viewer shows it to you and even the underlying XML model shows the associations properly. This is one of the issues with L2S of course – you have to play by its rules, and once you hit one of those rules there's no way around it – you're stuck with what it requires, which in this case meant changing the database. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ADO.NET, LINQ
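    For completeness, here is a minimal sketch (my code, not part of the original post) of the "small business object method" mentioned above, written against the corrected model where the link entity exposes its EntityRef properties; note the plural member name Categories comes straight from the generated model XML shown earlier:

        using System.Data.Linq;

        public static class CatalogLinker
        {
            // Link an Item and a Category by inserting a row into the link entity.
            public static void AddItemToCategory(DataContext context, Item item, Category category)
            {
                var link = new ItemCategory();

                // Setting the association properties fixes up both foreign keys.
                link.Item = item;
                link.Categories = category;

                context.GetTable<ItemCategory>().InsertOnSubmit(link);
                context.SubmitChanges();
            }
        }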

    Read the article

  • Interesting articles and blogs on SPARC T4

    - by mv
    Interesting articles and blogs on the SPARC T4 processor: I have consolidated all the interesting information I could find on the SPARC T4 processor and its hardware cryptographic capabilities. Hope it's useful.

    1. Advantages of the SPARC T4 processor. The most important points in the T4 announcement are: "The SPARC T4 processor was designed from the ground up for high speed security and has a cryptographic stream processing unit (SPU) integrated directly into each processor core. These accelerators support 16 industry standard security ciphers and enable high speed encryption at rates 3 to 5 times that of competing processors. By integrating encryption capabilities directly inside the instruction pipeline, the SPARC T4 processor eliminates the performance and cost barriers typically associated with secure computing and makes it possible to deliver high security levels without impacting the user experience." The data sheet has more details: "New on-chip Encryption Instruction Accelerators with direct non-privileged support for 16 industry-standard cryptographic algorithms plus random number generation in each of the eight cores: AES, Camellia, CRC32c, DES, 3DES, DH, DSA, ECC, Kasumi, MD5, RSA, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512". I ran the "isainfo -v" command on a Solaris 11 SPARC T4-1 system, and it shows the new instructions as expected:

        $ isainfo -v
        64-bit sparcv9 applications
        crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc
        32-bit sparc applications
        crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc v8plus div32 mul32

    2. Dan Anderson's blog has some interesting points about how these can be used: "New T4 crypto instructions include: aes_kexpand0, aes_kexpand1, aes_kexpand2, aes_eround01, aes_eround23, aes_eround01_l, aes_eround_23_l, aes_dround01, aes_dround23, aes_dround01_l, aes_dround_23_l. Having SPARC T4 hardware crypto instructions is all well and good, but how do we access it? The software is available with Solaris 11 and is used automatically if you are running Solaris on a SPARC T4. It is used internally in the kernel through kernel crypto modules. It is available in user space through the PKCS#11 library."

    3. Dan's blog on "Where's the Crypto Libraries?": although this was written in 2009, it is still very useful. "Here's a brief tour of the major crypto libraries shown in the digraph: The libpkcs11 library contains the PKCS#11 API (C_*() functions, such as C_Initialize()). That in turn calls library pkcs11_softtoken or pkcs11_kernel, for userland or kernel crypto providers. The latter is used mostly for hardware-assisted cryptography (such as n2cp for Niagara2 SPARC processors), as that is performed more efficiently in kernel space with the "kCF" module (Kernel Crypto Framework). Additionally, for Solaris 10, strong crypto algorithms were split off in separate libraries, pkcs11_softtoken_extra. libcryptoutil contains low-level utility functions to help implement cryptography. libsoftcrypto (OpenSolaris and Solaris Nevada only) implements several symmetric-key crypto algorithms in software, such as AES, RC4, and DES3, and the bignum library (used for RSA). libmd implements MD5, SHA, and SHA2 message digest algorithms."

    4. Difference between T3 and T4: the diagram in this blog is good and self-explanatory. Jeff's blog also highlights the differences: "The T4 servers have improved crypto acceleration, described at https://blogs.oracle.com/DanX/entry/sparc_t4_openssl_engine. It is "just built in" so administrators no longer have to assign crypto accelerator units to domains - it "just happens". Every physical or virtual CPU on a SPARC-T4 has full access to hardware based crypto acceleration at all times. .... For completeness sake, it's worth noting that the T4 adds more crypto algorithms, and accelerates Camelia, CRC32c, and more SHA-x."

    5. About performance counters: in this blog, performance counters are explained: "Note that unlike T3 and before, T4 crypto doesn't require kernel modules like ncp or n2cp, there is no visibility of crypto hardware with kstats or cryptoadm. T4 does provide hardware counters for crypto operations. You can see these using cpustat: cpustat -c pic0=Instr_FGU_crypto 5 You can check the general crypto support of the hardware and OS with the command "isainfo -v". Since T4 crypto's implementation now allows direct userland access, there are no "crypto units" visible to cryptoadm." For more details refer to Martin's blog as well.

    6. How to turn off SPARC T4 or Intel AES-NI crypto acceleration: I found this interesting blog from Darren about how to turn off SPARC T4 or Intel AES-NI crypto acceleration. "One of the new Solaris 11 features of the linker/loader is the ability to have a single ELF object that has multiple different implementations of the same functions that are selected at runtime based on the capabilities of the machine. The alternative to this is having the application coded to call the getisax(2) system call and make the choice itself. We use this functionality of the linker/loader when we build the userland libraries for the Solaris Cryptographic Framework (specifically libmd.so and libsoftcrypto.so). The Solaris linker/loader allows control of a lot of its functionality via environment variables; we can use that to control the version of the cryptographic functions we run. To do this we simply export the LD_HWCAP environment variable with values that tell ld.so.1 to not select the HWCAP section matching certain features even if isainfo says they are present. This will work for consumers of the Solaris Cryptographic Framework that use the Solaris PKCS#11 libraries or use libmd.so interfaces directly. For SPARC T4: export LD_HWCAP="-aes -des -md5 -sha256 -sha512 -mont -mpul" .. For Intel systems with AES-NI support: export LD_HWCAP="-aes"" Note that LD_HWCAP is explained in http://docs.oracle.com/cd/E23823_01/html/816-5165/ld.so.1-1.html: "LD_HWCAP, LD_HWCAP_32, and LD_HWCAP_64 - Identifies an alternative hardware capabilities value... A “-” prefix results in the capabilities that follow being removed from the alternative capabilities."

    7. Whitepaper on SPARC T4 Servers—Optimized for End-to-End Data Center Computing: this whitepaper explains more details. It has DTrace scripts which may come in handy: "To ensure the hardware-assisted cryptographic acceleration is configured to use and working with the security scenarios, it is recommended to use the following Solaris DTrace script."

        #!/usr/sbin/dtrace -s
        pid$1:libsoftcrypto:yf*:entry,
        pid$1:libsoftcrypto:rsa*:entry,
        pid$1:libmd:yf*:entry
        {
            @ops[probefunc] = count();
        }
        tick-1sec
        {
            printa(@ops);
            trunc(@ops);
        }

    Note that I have slightly modified the D script to include RSA ("libsoftcrypto:rsa*:entry") as per recommendations from Chi-Chang Lin.

    8. References:
    http://www.oracle.com/us/corporate/features/sparc-t4-announcement-494846.html
    http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-t4-1-ds-487858.pdf
    https://blogs.oracle.com/DanX/entry/sparc_t4_openssl_engine
    https://blogs.oracle.com/DanX/entry/where_s_the_crypto_libraries
    https://blogs.oracle.com/darren/entry/howto_turn_off_sparc_t4
    http://docs.oracle.com/cd/E23823_01/html/816-5165/ld.so.1-1.html
    https://blogs.oracle.com/hardware/entry/unleash_the_power_of_cryptography
    https://blogs.oracle.com/cmt/entry/t4_crypto_cheat_sheet
    https://blogs.oracle.com/martinm/entry/t4_performance_counters_explained
    https://blogs.oracle.com/jsavit/entry/no_mau_required_on_a
    http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-t4-business-wp-524472.pdf

    Read the article

  • Big Data's Killer App…

    - by jean-pierre.dijcks
    Recently Keith spent some time talking about the cloud on this blog, and I will spare you my thoughts on the whole thing. What I do want to write down is something about the Big Data movement and what I think is the killer app for Big Data... Where is this coming from? OK, I confess: I spent 3 days in cloud land at the Cloud Connect conference in Santa Clara and it was quite a lot of fun. One of the nice things at Cloud Connect was that there was a track dedicated to Big Data, which prompted me to some extent to write this post.

    What is Big Data anyway? The most valuable point made in the Big Data track was that Big Data in itself is not very cool. Doing something with Big Data is what makes all of this cool and interesting to a business user! The other good insight I got was that a lot of people think Big Data means a single gigantic monolithic system holding gazillions of bytes or documents or log files. Well, it turns out that most people in the Big Data track are talking about a lot of collections of smaller data sets. So rather than thinking "big = monolithic" you should be thinking "big = many data sets". This is more than just theoretical; it is actually relevant when thinking about Big Data and how to process it. It is important because it means that the platform that stores the data will most likely consist of multiple solutions. You may be storing logs on something like HDFS, you may store your customer information in Oracle, and you may store distilled clickstream information in some distilled form in MySQL. The big question you will need to solve is not what lives where, but how to get it all together and get some value out of all that data.

    NoSQL and MapReduce: Nope, sorry, this is not the killer app... and no, I'm not saying this because my business card says Oracle and I'm therefore biased. I think language is important, but as with storage I think pragmatic is better. In other words, some questions can be answered with SQL very efficiently, others can be answered with Perl or Tcl, others with MR. History should teach us that anyone trying to solve a problem will use any and all tools around. For example, most data warehouses (Big Data 1.0?) get a lot of data in flat files. Everyone then runs a bunch of shell scripts to massage or verify those files and then shoves those files into the database. We've even built shell script support into external tables to allow for this. I think the Big Data projects will do the same. Some people will use MapReduce, although I would argue that things like Cascading are more interesting; some people will use Java. Some data is stored on HDFS, making Cascading the way to go; some data is stored in Oracle, and SQL does do a good job there. As with storage and with history, be pragmatic and use what fits, and neither NoSQL nor MR will be the one and only. Also, a language, while important, does not in itself deliver business value. So while cool, it is not a killer app...

    Vertical Behavioral Analytics: This is the killer app! And you are now thinking: "what does that mean?" Let's decompose that heading. First of all, analytics. I would think you had guessed by now that this is really what I'm after, and of course you are right. But not just analytics, which has a very large scope and means many things to many people. I'm not just after Business Intelligence (analytics 1.0?) or data mining (analytics 2.0?) but I'm after something more interesting that you can only do after collecting large volumes of specific data. That all-important data is about behavior. What do my customers do? More importantly, why do they behave like that? If you can figure that out, you can tailor web sites, stores, products, etc. to that behavior and figure out how to be successful. Today's behavior that is somewhat easily tracked is web site clicks, search patterns and all of those things that a web site or web server tracks; that is where the Big Data lives and where these patterns are now emerging. Other examples, however, are emerging, and one of the examples used at the conference was about predicting churn for a telco based on the social network its members are a part of. That social network is not about LinkedIn or Facebook, but about who calls whom. I call you a lot, you switch provider, and I might/will switch too. And that just naturally brings me to the next word, vertical. Vertical in this context means per industry, e.g. communications or retail or government or any other vertical. The reason for being more specific than just behavioral analytics is that each industry has its own data sources, its own quirky logic and its own demands and priorities. Of course, the methods and some of the software will be common, and some companies will have both retail and service industry analytics in place (your corner coffee store, for example). But the gist of it all is that analytics that can predict customer behavior for a specific focused group of people in a specific industry is what makes Big Data interesting.

    Building a Vertical Behavioral Analysis System: Well, that is going to be interesting. I have not seen much going on in that space, and if I had to offer some criticism of the Cloud Connect conference it would be the lack of concrete use cases on Big Data. The telco example, while a step into the vertical behavioral part, is not really Big Data: it used a sample of data from the customers' data warehouse. One thing I do think, and this is where I think parts of the NoSQL stuff come from, is that we will be doing this analysis where the data is. Over the past 10 years we at Oracle have called this in-database analytics. I guess we were (too) early? Now the entire market is going there, including companies like SAS. In-place, btw, does not mean "no data movement at all"; what it means is that you will do this in the data's permanent home. For SAS that is kind of the current problem: most of the inputs live in a data warehouse, so why move them into SAS and back? That all worked with 1 TB data warehouses, but not when we are looking at 100 TB to 500 TB of distilled data... Comments? As it is still early days with these systems, I'm very interested in seeing reactions and thoughts on some of these thoughts...

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014 Microsoft has introduced a new database engine component called In-Memory OLTP, aka project "Hekaton", which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP helps us create memory-optimized tables, which in turn offer a significant performance improvement for a typical OLTP workload. The main objective of a memory-optimized table is to ensure that highly transactional tables can live in memory and remain in memory forever without losing even a single record. The most significant part is that it still supports the majority of Transact-SQL statements. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking. In-Memory OLTP alleviates the issue of locking, using a new type of multi-version optimistic concurrency control. It also substantially reduces waiting for log writes by generating far less log data and needing fewer log writes.

    Points to remember:
    - Memory-optimized tables refer to tables using the new data structures and keywords added as part of In-Memory OLTP.
    - Disk-based tables refer to the normal tables we have always created in SQL Server. These tables use fixed-size 8 KB pages that need to be read from and written to disk as a unit.
    - Natively compiled stored procedures are a new object type supported by the In-Memory OLTP engine, which compiles them into machine code to further improve data access performance for memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they can't be used to reference any disk-based table.
    - Interpreted Transact-SQL stored procedures are what SQL Server has always used.
    - Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables.
    - Interop refers to interpreted Transact-SQL that references memory-optimized tables.

    Using In-Memory OLTP: The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; they are not available with 32-bit editions.

    Creating Databases: Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA. Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables (note that the two memory-optimized containers need distinct logical names):

        CREATE DATABASE InMemoryDB
        ON PRIMARY (NAME = [InMemoryDB_data], FILENAME = 'D:\data\InMemoryDB_data.mdf', SIZE = 500MB),
        FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA
            (NAME = [InMemoryDB_mod_dir], FILENAME = 'S:\data\InMemoryDB_mod_dir'),
            (NAME = [InMemoryDB_mod_dir2], FILENAME = 'R:\data\InMemoryDB_mod_dir')
        LOG ON (NAME = [SampleDB_log], FILENAME = 'L:\log\InMemoryDB_log.ldf', SIZE = 500MB)
        COLLATE Latin1_General_100_BIN2;

    The example code above creates files on three different drives (D:, S: and R:), so if you would like to run this code, kindly change the drive and folder locations to suit. Also notice that a Windows (non-SQL) binary collation was specified; BIN2 collation is the only collation supported at this point for any indexes on memory-optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the commands below to achieve the same:

        ALTER DATABASE AdventureWorks2012 ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA;
        GO
        ALTER DATABASE AdventureWorks2012 ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod') TO FILEGROUP hekaton_mod;
        GO

    Creating Tables: There is no major syntactical difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially, any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE example below.

    DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY): A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY = SCHEMA_ONLY will not persist its data to disk, which means data durability is compromised, whereas DURABILITY = SCHEMA_AND_DATA ensures that data is persisted along with the schema.

    Indexing a memory-optimized table: A memory-optimized table must always have an index for all tables created with DURABILITY = SCHEMA_AND_DATA, and this can be achieved by declaring a PRIMARY KEY constraint at the time of creating the table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified:

        CREATE TABLE Mem_Table
        (
            [Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
            [City] VARCHAR(32) NULL,
            [State_Province] VARCHAR(32) NULL,
            [LastModified] DATETIME NOT NULL
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    As you can see in the query above, we used the clause MEMORY_OPTIMIZED = ON to make sure the table is treated as a memory-optimized table and not just a normal table, and we used DURABILITY = SCHEMA_AND_DATA, which means it will persist data along with the metadata; you can also notice this table has a PRIMARY KEY declared up front, which is a mandatory clause for durable memory-optimized tables. We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus on row and index storage in memory-optimized tables, so stay tuned for that as well. Now that we have covered the basics of memory-optimized tables and understood the key things to remember while using them, let's explore some examples to understand the performance gains.

    I will be using the database I created earlier in this article, i.e. InMemoryDB, in the demo exercise below.

        USE InMemoryDB
        GO
        -- Creating a disk based table
        CREATE TABLE dbo.Disktable
        (
            Id INT IDENTITY,
            Name CHAR(40)
        )
        GO
        CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id)
        GO
        -- Creating a memory optimized table with similar structure and DURABILITY = SCHEMA_AND_DATA
        CREATE TABLE dbo.Memorytable_durable
        (
            Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Name CHAR(40)
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
        GO
        -- Creating another memory optimized table with similar structure but DURABILITY = SCHEMA_ONLY
        CREATE TABLE dbo.Memorytable_nondurable
        (
            Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Name CHAR(40)
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
        GO
        -- Now insert 100000 records in dbo.Disktable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO
        -- Do the same inserts for memory table dbo.Memorytable_durable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO
        -- Now finally do the same inserts for memory table dbo.Memorytable_nondurable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Memorytable_nondurable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO

    The above 3 inserts took 1.20 minutes, 54 seconds, and 2 seconds respectively to insert 100000 records on my machine with 8 GB RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and memory-optimized tables with durability SCHEMA_ONLY are even faster, as they do not bother persisting data to disk. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now I leave the decision on using memory-optimized tables to you. I hope you like this article and that it helped you understand the fundamentals of In-Memory OLTP. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Koenig

    Read the article

  • Invitation to an Oracle event on Data Centre Transformation

    - by Eloy M. Rodríguez
    Now that the year is coming to an end and the final pushes on the topics that need closing are behind us, it is a good moment to make a short stop along the way, attend this event organised by Oracle, and reflect on the innovative approaches being proposed, since the current situation calls for different ways of acting and, at times, the trees keep us from seeing the forest. Attached is the official invitation, with the agenda and access to automatic registration. Oracle – Data Centre Transformation: Accelerating effective Cloud adoption. Join us at the Data Centre Transformation event and discover how to implement a data centre designed to promote innovation, offering higher performance and reliability, simplifying management and significantly reducing costs. Come and learn about the latest technology applicable to your business that Oracle has just announced at Oracle OpenWorld, its flagship worldwide conference, such as the SuperCluster, the new T4 processor and the Pillar storage solutions. Only Oracle designs hardware and software to work together, from applications down to disk, which reduces complexity, drives productivity across the whole company and accelerates business innovation. Join us to discover how to transform your data centre to maximise efficiency and re-establish IT as a competitive advantage for your company's business. Share ideas and experiences with top experts and executives and discover how to: accelerate data centre transformation through technology that delivers dramatic performance and greater efficiency; reduce costs and speed up and simplify the deployment and consolidation of databases and applications; optimise performance by using Oracle products with built-in virtualisation technology at no additional cost; minimise risk during enterprise cloud deployments with the support of market-leading security products; and increase productivity and respond quickly to market changes with Oracle's optimised solutions. Transform your data centre to optimise performance, increase your business agility and maximise your IT investments. Don't let this opportunity pass: register today for this event, which will take place on 14 December in Madrid.
    Register today. For more information, contact [email protected]. Register now.
    14 December 2011, 09:00 - 16:00
    CÍRCULO DE BELLAS ARTES DE MADRID, C/ Alcalá, 42, 28014 Madrid (entrance via c/ Marqués de Casa Riera)
    Programme:
    09:00 Registration
    09:30 Welcome and introduction – João Taron, Vice-President & Hardware Leader, Oracle Iberia
    09:45 Oracle strategy – Gerhard Schlabschi, Business Development Director, Oracle Systems EMEA
    10:20 How to transform your data centre effectively – Manuel Vidal, Director Systems Presales, Oracle Iberia
    10:45 Success story
    11:15 Coffee break
    11:45 Consolidation in a private cloud: extreme performance with Oracle Exalogic Elastic Cloud & Exadata – Lisa Martinez, Business Development Manager, Oracle; accelerating enterprise applications with SPARC SuperClusters and T4 enterprise servers – Carlos Soler Ibanez, Principal Sales Consultant, Oracle
    13:15 Lunch
    14:15 Data centre optimisation: how to maximise the potential of your infrastructure with Oracle virtualised systems – Javier Cerrada, Senior Sales Consultant, Oracle; optimising storage resources with data tiering – Miguel Angel Borrega, Storage Architect, Oracle
    15:00 Data centre management: Oracle Solaris 11 – Javier Cerrada, Senior Sales Consultant, Oracle; Enterprise Manager 12c – Jesus Robles, Master Principal Sales Consultant, Oracle
    15:45 Questions & answers
    16:00 Conversations with your Oracle contacts & iPad draw
    If you are an employee or official of a government organization, please click here for important ethics information regarding this event. Copyright © 2011, Oracle and/or its affiliates. All rights reserved. Contact us | Legal Notices | Privacy Policy

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 1)

    - by Sadequl Hussain
    It is a fact of life: SQL Server databases change homes. They move from one instance to another, from one version to the next, from old servers to new ones. They move around as an organisation's data grows, applications are enhanced or new versions of the database software are released. If nothing else, servers become old and unreliable and databases eventually need to find a new home. Consider the following scenarios:
    1. A new database application is rolled out in a production server from the development or test environment
    2. A copy of the production database needs to be installed in a test server for troubleshooting purposes
    3. A copy of the development database is regularly refreshed in a test server during the system development life cycle
    4. A SQL Server is upgraded to a newer version. This can be an in-place upgrade or a side-by-side migration
    5. One or more databases need to be moved between different instances as part of a consolidation strategy. The instances can be running the same or different versions of SQL Server
    6. A database has to be restored from a backup file provided by a third-party application vendor
    7. A backup of the database is restored in the same or a different instance for disaster recovery
    8. A database needs to be migrated within the same instance:
    a. Files are moved from direct attached storage to a storage area network
    b. The same database is copied under a different name for another application
    Migrating SQL Server database applications is a complex topic in itself. There are a number of components that can be involved: jobs, DTS or SSIS packages, logins or linked servers are only a few pieces of the puzzle. However, in this article we will focus only on the central part of migration: the installation of the database itself. Unless it is an in-place upgrade, typically the database is taken from a source server and installed in a destination instance. Most of the time, a full backup file is used for the rollout. The backup file is either provided to the DBA or the DBA takes the backup and restores it in the target server. Sometimes the database is detached from the source and the files are copied to, and attached in, the destination. Regardless of the method of copying, moving, refreshing, restoring or upgrading the physical database, there are a number of steps the DBA should follow before and after it has been installed in the destination. It is these post-installation steps we are going to discuss below. Some of these steps apply in almost every scenario described above, while some will depend on the type of objects contained within the database. Also, the principles hold regardless of the number of databases involved. Step 1: Make a copy of data and log files when attaching and detaching. When detaching and attaching databases, ensure you have made copies of the data and log files if the destination is running a newer version of SQL Server. This is because once attached to a newer version, the database cannot be detached and attached back to an older version. Trying to do so will give you a message like the following: Server: Msg 602, Level 21, State 50, Line 1 Could not find row in sysindexes for database ID 6, object ID 1, index ID 1. Run DBCC CHECKTABLE on sysindexes. Connection Broken. If you try to back up the attached database and restore it in the source, it will still fail.
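    As a quick sketch of the detach/copy/attach flow described above (the database name and file paths here are illustrative, not from the article):

    USE master;
    GO
    -- Detach on the source instance
    EXEC sp_detach_db @dbname = N'SalesDB';
    GO
    -- Copy SalesDB.mdf and SalesDB_log.ldf somewhere safe BEFORE attaching to a newer
    -- instance, then attach on the destination:
    CREATE DATABASE SalesDB
    ON (FILENAME = N'D:\Data\SalesDB.mdf'),
       (FILENAME = N'L:\Log\SalesDB_log.ldf')
    FOR ATTACH;
    GO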
    Similarly, if you are restoring the database in a newer version, it cannot be backed up or detached and put back in an older version of SQL Server. Unlike the detach-and-attach method, though, you do not lose the backup file or the original database here. When detaching and attaching a database, it is important that you keep all the log files available along with the data files. It is possible to attach a database without a log file, and SQL Server can be instructed to create a new log file; however, this does not work if the database was detached while the primary filegroup was read-only. You will need all the log files in such cases. Step 2: Change database compatibility level. Once the database has been restored or attached to a newer version of SQL Server, change the database compatibility level to reflect the newer version unless there is a compelling reason not to do so. When attaching or restoring from a previous version of SQL Server, the database retains the older version's compatibility level. The only time you would want to keep a database at an older compatibility level is when the code within your database is no longer supported by SQL Server. For example, outer joins with the *= or =* operators were still possible in SQL 2000 (with a warning message), but not in SQL 2005 any more. If your stored procedures or triggers are using this form of join, you would want to keep the database at an older compatibility level. For a list of compatibility issues between older and newer versions of SQL Server databases, refer to Books Online under the sp_dbcmptlevel topic. Application developers and architects can help you decide whether you should change the compatibility level or not. You can always change the compatibility level from the newest back to an older version if necessary. To change the compatibility level, you can either use the database's properties in SQL Server Management Studio or use the sp_dbcmptlevel stored procedure, as shown below. Bear in mind that you cannot run the built-in reports for databases from SQL Server Management Studio if you keep the database at an older compatibility level. The following figure shows the error message I received when trying to run the "Disk Usage by Top Tables" report against a database. This database was hosted on a SQL Server 2005 system and still had compatibility level 80 (SQL 2000). Continues…
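    For reference, both ways of changing the compatibility level look roughly like this (a sketch; the database name and target levels are illustrative, and the ALTER DATABASE syntax assumes SQL Server 2008 or later):

    -- Older syntax, as mentioned in the article:
    EXEC sp_dbcmptlevel @dbname = N'SalesDB', @new_cmptlevel = 90;  -- 90 = SQL Server 2005
    GO
    -- Newer syntax, available from SQL Server 2008 onwards:
    ALTER DATABASE SalesDB SET COMPATIBILITY_LEVEL = 100;  -- 100 = SQL Server 2008
    GO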

    Read the article

  • CodePlex Daily Summary for Sunday, May 02, 2010

    CodePlex Daily Summary for Sunday, May 02, 2010New ProjectsAdventureWorks in Access: AdventureWorks database in Access format. Data has been ported in Access starting from Adventure Works database for SQL Server 2008.amplifi: This project is still under construction. We will add more information here as soon as it is available.ASP.NET MVC Bug Tracker: Bug Track written in C# ASP.NET MVC 2BigDecimal: BigDecimal is an attempt to create a number class that can have large precision. It is developed in vb.net (.net 4).CBM-Command: Coming soon....Chuyou: ChuyouCMinus: A C Minus Compiler!Complex and advanced mathematical functions: Mathematics toolkit is a Class Library Project which help Programmers to Calculate Mathematics Functions easily.Confuser: Confuser is a obfuscator for .NET. It is developed in C# and using Mono.Cecil for assembly manipulation.easypos: Micro punto de venta que permite ventas express de ropa, que se acopla fácil y transaparente con el ERP Click OneElmech Address Book: Web based Address Book for maintaining details of your business clients. This project targets Suppliers - Traders - Manufacturers - users. Applicat...Feed Viewer: Feed Viewer is able to synchronize subscribed feed and red news among all computers you are using. It understands both RSS and Atom format. It can ...Google URL Shortener, C#: Implementation in C# of generating short URLs by Goo.gl service (Google URL Shortener)MARS - Medical Assistant Record System: MARS - Medical Assistant Record SystemRx Contrib: Rx Contrib is a library which contain extensions for the Rx frameworkSimple Service Administration Tool: A simple tool to start/stop/restart a service of a WinNT based system. The tool is placed in the task bar as a notify icon, so the specified servic...Vis3D: Visual 3D controls for Silverlight.VisContent: XML content controls for ASP.NET.Windows Phone 7 database: This project implements a Isolated Storage (IsolatedStorage) based database for Windows Phone 7. The database consists of table object, each one s...New Releases$log$ / Keyword Substitution / Expansion Check-In Policy (TFS - LogSubstPol): LogSubstPol_v1.2010.0.4 (VS2010): LogSubstPol is a TFS check-in policy which insertes the check-in comments and other keywords into your source code, so you can keep track of the ch...Bojinx: Bojinx Core V4.5.1: The following new features were added: You can now use either BojinxMXMLContext or ContextModule to configure your application or module context. ...CBM-Command: Initial Public Demonstration: Initial public demonstration version. Can browse attached drives and display directory of any attached drive. A common question is "How does it w...Confuser: Confuser v1.0: It is the Confuser v1.0 that used to confuse the reverse-engineers :)Font Family Name Retrieval: 2nd Release: Added New MKV Font Extractor application to showcase the library. MKV Font Extractor depends on MKVToolnix to be installed before it will work. R...Google URL Shortener, C#: Goo.gl-CS v1 Beta: Extract the ZIP file to any location. Two files have to be in the same folder!HouseFly controls: HouseFly controls alpha 0.9.6.1: HouseFly controls release 0.9.6.1 alphaIsWiX: IsWiX 1.0.261.0: Build 1.0.261.0 - built against Fireworks 1.0.264.0. 
Adds support for VS2010 Integration to support WiX 3.5 beta releases.Managed Extensibility Framework (MEF) Contrib: MefContrib 0.9.2.0: Added conventions based catalog (read more at http://www.thecodejunkie.com/2010/03/bringing-convention-based-registration.html) MEF + Unity integ...MARS - Medical Assistant Record System: license: licenseNSIS Autorun: NSIS Autorun 0.1.5: This release includes source code, executable binary, files and example materials.PHP.net: Release 0.0.0.1: This is the first release of PHP.Net. The features available in this release are: new File Save File Save As Open File In the rar file is th...Rx Contrib: V1: Rx Contrib is ongoing effort for community additions for Rx. Current features are: ReactiveQueue: ISubject that does not loose values if there are ...Silverlight 4.0 Popup Menu: Context Menu for Silverlight 4 v1.0: - Added a margin for icon display. - Added the PopupMenuItem class which is a derivative of the DockPanel. - Find* methods can now drill down the v...Silverlight 4.0 Popup Menu: Context Menu for Silverlight 4 v1.1 Beta: - Added a margin for icon display. - Added the PopupMenuItem class which is a derivative of the DockPanel. - Added a AddSeperator method. - The Fin...Simple Service Administration Tool: SSATool 0.1.3: New Simple Service Administration Tool Version 0.1.3 compiled with Visual Studio .NET 2010.sMAPedit: sMAPedit v0.7a + Map-Pack: Required Additional Map-Pack Added: height setting by color picker (shift+leftclick)sMAPedit: sMAPedit v0.7b: Fixed: force a gargabe collection update to prevent pictureBox's memory leaksqwarea: Sqwarea 0.0.228.0 (alpha): This release corrects a critical bug in ConnexityNotifier service. We strongly recommend you to upgrade to this version. Known bugs : if you open...StackOverflow Desktop Client in C# and WPF: StackOverflow Client 0.1: Source code for the sample.TortoiseHg: TortoiseHg 1.0.2: This is a bug fix release, we recommend all users upgrade to 1.0.2VCC: Latest build, v2.1.30501.0: Automatic drop of latest buildVidCoder: 0.4.0: Changes: Added ability to queue up multiple video files or titles at once. These queued jobs will use the currently selected encoding settings. Mul...WabbitStudio Z80 Software Tools: Wabbitemu 32-bit Test Release: Wabbitemu Visual Studio build for testing purposesWindows Phone 7 database: Initial Release v1.0: This project implements a Isolated Storage (IsolatedStorage) based database for Windows Phone 7. The usage of this software is very simple. You cre...YouTubeEmbeddedVideo WebControl for ASP.NET: VideoControls version 1: This zip file contains the VideoControls.dll, version 1.Most Popular ProjectsRawrWBFS ManagerAJAX Control Toolkitpatterns & practices – Enterprise LibraryMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)iTuner - The iTunes CompanionASP.NETDotNetNuke® Community EditionMost Active Projectspatterns & practices – Enterprise LibraryRawrIonics Isapi Rewrite FilterHydroServer - CUAHSI Hydrologic Information System Serverpatterns & practices: Azure Security GuidanceTinyProjectNB_Store - Free DotNetNuke Ecommerce Catalog ModuleBlogEngine.NETDambach Linear Algebra FrameworkFacebook Developer Toolkit

    Read the article

  • PC hangs and reboots from time to time

    - by Bevor
    Hello, I have a very strange problem: since I got my new PC, I have always had problems with it. From time to time the computer freezes for some seconds and suddenly reboots by itself. I've had this problem since Ubuntu 9.10, and the same with 10.04 and 10.10. That's why I don't think it's a software failure, because the problem has persisted too long. It doesn't have anything to do with what I'm doing at the time. Sometimes I listen to music, sometimes I only use Firefox, sometimes I'm running 2 or 3 VMs, sometimes I watch a DVD. So it's not isolatable. It can freeze once a day or once a week. I took the PC to the vendor twice(!). The first time they changed my power supply, but the problem persisted. The second time they told me that they ran some heavy performance tests for 50 hours but didn't find anything. (How can it be that I have daily freezes with normal usage?) The vendor didn't check the hard discs because they used their own disc with Windows. (So they never checked the Linux installation.) Yesterday I ran some intensive hard disc scans with "SMART" but no errors were found. I ran memtest 3 times but no errors were found. I already had this problem in my old flat, so I doubt it has something to do with power fluctuations. I already tried another electrical socket and changed the power strip, but the problem persists. At the moment I have removed 2 of the RAM modules (2x 2GB). In all I have 6GB: 2x2GB and 2x1GB. Could this difference maybe be a problem? Here is a list of my components. I hope somebody finds something I haven't thought of yet. And here is a list of my components:
    1x AMD Phenom II X4 965 Black Edition, 3,4Ghz, Quad Core, S-AM3, Boxed
    2x DDR3-RAM 2048MB, PC3-1333 Mhz, CL9, Kingston ValueRAM
    2x DDR3-RAM 1024MB, PC3-1333 Mhz, CL9, Kingston ValueRAM
    2x SATA II Seagate Barracuda 7200.12, 1TB 32MB Cache = RAID 1
    1x DVD ROM SATA LG DH16NSR, 16x/52x
    1x DVD-+R/-+RW SATA LG GH-22NS50
    1x Cardreader 18in1
    1x PCI-E 2.0 GeForce GTS 250, Retail, 1024MB
    1x Power Supply ATX 400 Watt, CHIEFTEC APS-400S, 80 Plus
    1x Network card PCI Intel PRO/1000GT 10/100/1000 MBit
    1x Mainboard Socket-AM3 ASUS M4A79XTD EVO, ATX
    lshw: description: Desktop Computer product: System Product Name vendor: System manufacturer version: System Version serial: System Serial Number width: 64 bits capabilities: smbios-2.5 dmi-2.5 vsyscall64 vsyscall32 configuration: boot=normal chassis=desktop uuid=80E4001E-8C00-002C-AA59-E0CB4EBAC29A *-core description: Motherboard product: M4A79XTD EVO vendor: ASUSTeK Computer INC. physical id: 0 version: Rev X.0X serial: MT709CK11101196 slot: To Be Filled By O.E.M. *-firmware description: BIOS vendor: American Megatrends Inc. physical id: 0 version: 0704 (11/25/2009) size: 64KiB capacity: 960KiB capabilities: isa pci pnp apm upgrade shadowing escd cdboot bootselect socketedrom edd int13floppy1200 int13floppy720 int13floppy2880 int5printscreen int9keyboard int14serial int17printer int10video acpi usb ls120boot zipboot biosbootspecification *-cpu description: CPU product: AMD Phenom(tm) II X4 965 Processor vendor: Advanced Micro Devices [AMD] physical id: 4 bus info: cpu@0 version: AMD Phenom(tm) II X4 965 Processor serial: To Be Filled By O.E.M.
slot: AM3 size: 800MHz capacity: 3400MHz width: 64 bits clock: 200MHz capabilities: fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp x86-64 3dnowext 3dnow constant_tsc rep_good nonstop_tsc extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt npt lbrv svm_lock nrip_save cpufreq *-cache:0 description: L1 cache physical id: 5 slot: L1-Cache size: 512KiB capacity: 512KiB capabilities: pipeline-burst internal varies data *-cache:1 description: L2 cache physical id: 6 slot: L2-Cache size: 2MiB capacity: 2MiB capabilities: pipeline-burst internal varies unified *-cache:2 description: L3 cache physical id: 7 slot: L3-Cache size: 6MiB capacity: 6MiB capabilities: pipeline-burst internal varies unified *-memory description: System Memory physical id: 36 slot: System board or motherboard size: 2GiB *-bank:0 description: DIMM Synchronous 1333 MHz (0.8 ns) product: ModulePartNumber00 vendor: Manufacturer00 physical id: 0 serial: SerNum00 slot: DIMM0 size: 1GiB width: 64 bits clock: 1333MHz (0.8ns) *-bank:1 description: DIMM Synchronous 1333 MHz (0.8 ns) product: ModulePartNumber01 vendor: Manufacturer01 physical id: 1 serial: SerNum01 slot: DIMM1 size: 1GiB width: 64 bits clock: 1333MHz (0.8ns) *-bank:2 description: DIMM [empty] product: ModulePartNumber02 vendor: Manufacturer02 physical id: 2 serial: SerNum02 slot: DIMM2 *-bank:3 description: DIMM [empty] product: ModulePartNumber03 vendor: Manufacturer03 physical id: 3 serial: SerNum03 slot: DIMM3 *-pci:0 description: Host bridge product: RD780 Northbridge only dual slot PCI-e_GFX and HT1 K8 part vendor: ATI Technologies Inc physical id: 100 bus info: pci@0000:00:00.0 version: 00 width: 32 bits clock: 66MHz *-pci:0 description: PCI bridge product: RD790 PCI to PCI bridge (external gfx0 port A) vendor: ATI Technologies Inc physical id: 2 bus info: pci@0000:00:02.0 version: 00 width: 32 bits clock: 33MHz capabilities: pci pm pciexpress msi ht normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:40 ioport:a000(size=4096) memory:f8000000-fbbfffff ioport:d0000000(size=268435456) *-display description: VGA compatible controller product: G92 [GeForce GTS 250] vendor: nVidia Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a2 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vga_controller bus_master cap_list rom configuration: driver=nvidia latency=0 resources: irq:18 memory:fa000000-faffffff memory:d0000000-dfffffff memory:f8000000-f9ffffff ioport:ac00(size=128) memory:fbbe0000-fbbfffff *-pci:1 description: PCI bridge product: RD790 PCI to PCI bridge (PCI express gpp port C) vendor: ATI Technologies Inc physical id: 6 bus info: pci@0000:00:06.0 version: 00 width: 32 bits clock: 33MHz capabilities: pci pm pciexpress msi ht normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:41 ioport:b000(size=4096) memory:fbc00000-fbcfffff ioport:f6f00000(size=1048576) *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. 
physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: 03 serial: e0:cb:4e:ba:c2:9a size: 10MB/s capacity: 1GB/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10MB/s resources: irq:45 ioport:b800(size=256) memory:f6fff000-f6ffffff memory:f6ff8000-f6ffbfff memory:fbcf0000-fbcfffff *-pci:2 description: PCI bridge product: RD790 PCI to PCI bridge (PCI express gpp port D) vendor: ATI Technologies Inc physical id: 7 bus info: pci@0000:00:07.0 version: 00 width: 32 bits clock: 33MHz capabilities: pci pm pciexpress msi ht normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:42 ioport:c000(size=4096) memory:fbd00000-fbdfffff *-firewire description: FireWire (IEEE 1394) product: VT6315 Series Firewire Controller vendor: VIA Technologies, Inc. physical id: 0 bus info: pci@0000:03:00.0 version: 00 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress ohci bus_master cap_list configuration: driver=firewire_ohci latency=0 resources: irq:19 memory:fbdff800-fbdfffff ioport:c800(size=256) *-pci:3 description: PCI bridge product: RD790 PCI to PCI bridge (PCI express gpp port E) vendor: ATI Technologies Inc physical id: 9 bus info: pci@0000:00:09.0 version: 00 width: 32 bits clock: 33MHz capabilities: pci pm pciexpress msi ht normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:43 ioport:d000(size=4096) memory:fbe00000-fbefffff *-ide description: IDE interface product: 88SE6121 SATA II Controller vendor: Marvell Technology Group Ltd. 
physical id: 0 bus info: pci@0000:04:00.0 version: b2 width: 32 bits clock: 33MHz capabilities: ide pm msi pciexpress bus_master cap_list configuration: driver=pata_marvell latency=0 resources: irq:17 ioport:dc00(size=8) ioport:d880(size=4) ioport:d800(size=8) ioport:d480(size=4) ioport:d400(size=16) memory:fbeffc00-fbefffff *-storage description: SATA controller product: SB700/SB800 SATA Controller [IDE mode] vendor: ATI Technologies Inc physical id: 11 bus info: pci@0000:00:11.0 logical name: scsi0 logical name: scsi2 version: 00 width: 32 bits clock: 66MHz capabilities: storage msi ahci_1.0 bus_master cap_list emulated configuration: driver=ahci latency=64 resources: irq:44 ioport:9000(size=8) ioport:8000(size=4) ioport:7000(size=8) ioport:6000(size=4) ioport:5000(size=16) memory:f7fffc00-f7ffffff *-disk:0 description: ATA Disk product: ST31000528AS vendor: Seagate physical id: 0 bus info: scsi@0:0.0.0 logical name: /dev/sda version: CC38 serial: 9VP3WD9Z size: 931GiB (1TB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 signature=000ad206 *-volume:0 UNCLAIMED description: Linux filesystem partition vendor: Linux physical id: 1 bus info: scsi@0:0.0.0,1 version: 1.0 serial: 81839235-21ea-4853-90a4-814779f49000 size: 972MiB capacity: 972MiB capabilities: primary ext2 initialized configuration: filesystem=ext2 modified=2010-12-06 18:32:58 mounted=2010-11-01 07:05:10 state=unknown *-volume:1 UNCLAIMED description: Linux swap volume physical id: 2 bus info: scsi@0:0.0.0,2 version: 1 serial: 22b881d5-6f5c-484d-94e8-e231896fa91b size: 486MiB capacity: 486MiB capabilities: primary nofs swap initialized configuration: filesystem=swap pagesize=4096 *-volume:2 UNCLAIMED description: EXT3 volume vendor: Linux physical id: 3 bus info: scsi@0:0.0.0,3 version: 1.0 serial: ad5b0daf-11e8-4f8f-8598-4e89da9c0d84 size: 47GiB capacity: 47GiB capabilities: primary journaled extended_attributes large_files recover ext3 ext2 initialized configuration: created=2010-02-16 20:42:29 filesystem=ext3 modified=2010-11-29 17:02:34 mounted=2010-12-06 18:32:50 state=clean *-volume:3 UNCLAIMED description: Extended partition physical id: 4 bus info: scsi@0:0.0.0,4 size: 882GiB capacity: 882GiB capabilities: primary extended partitioned partitioned:extended *-logicalvolume UNCLAIMED description: Linux filesystem partition physical id: 5 capacity: 882GiB *-disk:1 description: ATA Disk product: ST31000528AS vendor: Seagate physical id: 1 bus info: scsi@2:0.0.0 logical name: /dev/sdb version: CC38 serial: 9VP3SCPF size: 931GiB (1TB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 signature=000ad206 *-volume:0 UNCLAIMED description: Linux filesystem partition vendor: Linux physical id: 1 bus info: scsi@2:0.0.0,1 version: 1.0 serial: 81839235-21ea-4853-90a4-814779f49000 size: 972MiB capacity: 972MiB capabilities: primary ext2 initialized configuration: filesystem=ext2 modified=2010-12-06 18:32:58 mounted=2010-11-01 07:05:10 state=unknown *-volume:1 UNCLAIMED description: Linux swap volume physical id: 2 bus info: scsi@2:0.0.0,2 version: 1 serial: 22b881d5-6f5c-484d-94e8-e231896fa91b size: 486MiB capacity: 486MiB capabilities: primary nofs swap initialized configuration: filesystem=swap pagesize=4096 *-volume:2 UNCLAIMED description: EXT3 volume vendor: Linux physical id: 3 bus info: scsi@2:0.0.0,3 version: 1.0 serial: ad5b0daf-11e8-4f8f-8598-4e89da9c0d84 size: 47GiB capacity: 47GiB capabilities: primary journaled extended_attributes large_files recover ext3 ext2 initialized 
configuration: created=2010-02-16 20:42:29 filesystem=ext3 modified=2010-11-29 17:02:34 mounted=2010-12-06 18:32:50 state=clean *-volume:3 UNCLAIMED description: Extended partition physical id: 4 bus info: scsi@2:0.0.0,4 size: 882GiB capacity: 882GiB capabilities: primary extended partitioned partitioned:extended *-logicalvolume UNCLAIMED description: Linux filesystem partition physical id: 5 capacity: 882GiB *-usb:0 description: USB Controller product: SB700/SB800 USB OHCI0 Controller vendor: ATI Technologies Inc physical id: 12 bus info: pci@0000:00:12.0 version: 00 width: 32 bits clock: 66MHz capabilities: ohci bus_master configuration: driver=ohci_hcd latency=64 resources: irq:16 memory:f7ffd000-f7ffdfff *-usb:1 description: USB Controller product: SB700 USB OHCI1 Controller vendor: ATI Technologies Inc physical id: 12.1 bus info: pci@0000:00:12.1 version: 00 width: 32 bits clock: 66MHz capabilities: ohci bus_master configuration: driver=ohci_hcd latency=64 resources: irq:16 memory:f7ffe000-f7ffefff *-usb:2 description: USB Controller product: SB700/SB800 USB EHCI Controller vendor: ATI Technologies Inc physical id: 12.2 bus info: pci@0000:00:12.2 version: 00 width: 32 bits clock: 66MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci_hcd latency=64 resources: irq:17 memory:f7fff800-f7fff8ff *-usb:3 description: USB Controller product: SB700/SB800 USB OHCI0 Controller vendor: ATI Technologies Inc physical id: 13 bus info: pci@0000:00:13.0 version: 00 width: 32 bits clock: 66MHz capabilities: ohci bus_master configuration: driver=ohci_hcd latency=64 resources: irq:18 memory:f7ffb000-f7ffbfff *-usb:4 description: USB Controller product: SB700 USB OHCI1 Controller vendor: ATI Technologies Inc physical id: 13.1 bus info: pci@0000:00:13.1 version: 00 width: 32 bits clock: 66MHz capabilities: ohci bus_master configuration: driver=ohci_hcd latency=64 resources: irq:18 memory:f7ffc000-f7ffcfff *-usb:5 description: USB Controller product: SB700/SB800 USB EHCI Controller vendor: ATI Technologies Inc physical id: 13.2 bus info: pci@0000:00:13.2 version: 00 width: 32 bits clock: 66MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci_hcd latency=64 resources: irq:19 memory:f7fff400-f7fff4ff *-serial UNCLAIMED description: SMBus product: SBx00 SMBus Controller vendor: ATI Technologies Inc physical id: 14 bus info: pci@0000:00:14.0 version: 3c width: 32 bits clock: 66MHz capabilities: ht cap_list configuration: latency=0 *-ide description: IDE interface product: SB700/SB800 IDE Controller vendor: ATI Technologies Inc physical id: 14.1 bus info: pci@0000:00:14.1 logical name: scsi5 version: 00 width: 32 bits clock: 66MHz capabilities: ide msi bus_master cap_list emulated configuration: driver=pata_atiixp latency=64 resources: irq:16 ioport:1f0(size=8) ioport:3f6 ioport:170(size=8) ioport:376 ioport:ff00(size=16) *-cdrom:0 description: DVD reader product: DVDROM DH16NS30 vendor: HL-DT-ST physical id: 0.0.0 bus info: scsi@5:0.0.0 logical name: /dev/cdrom1 logical name: /dev/dvd1 logical name: /dev/scd0 logical name: /dev/sr0 version: 1.00 capabilities: removable audio dvd configuration: ansiversion=5 status=nodisc *-cdrom:1 description: DVD-RAM writer product: DVDRAM GH22NS50 vendor: HL-DT-ST physical id: 0.1.0 bus info: scsi@5:0.1.0 logical name: /dev/cdrom logical name: /dev/cdrw logical name: /dev/dvd logical name: /dev/dvdrw logical name: /dev/scd1 logical name: /dev/sr1 version: TN02 capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram 
configuration: ansiversion=5 status=nodisc *-multimedia description: Audio device product: SBx00 Azalia (Intel HDA) vendor: ATI Technologies Inc physical id: 14.2 bus info: pci@0000:00:14.2 version: 00 width: 64 bits clock: 33MHz capabilities: pm bus_master cap_list configuration: driver=HDA Intel latency=64 resources: irq:16 memory:f7ff4000-f7ff7fff *-isa description: ISA bridge product: SB700/SB800 LPC host controller vendor: ATI Technologies Inc physical id: 14.3 bus info: pci@0000:00:14.3 version: 00 width: 32 bits clock: 66MHz capabilities: isa bus_master configuration: latency=0 *-pci:4 description: PCI bridge product: SBx00 PCI to PCI Bridge vendor: ATI Technologies Inc physical id: 14.4 bus info: pci@0000:00:14.4 version: 00 width: 32 bits clock: 66MHz capabilities: pci subtractive_decode bus_master resources: ioport:e000(size=4096) memory:fbf00000-fbffffff *-network description: Ethernet interface product: 82541PI Gigabit Ethernet Controller vendor: Intel Corporation physical id: 5 bus info: pci@0000:05:05.0 logical name: eth1 version: 05 serial: 00:1b:21:56:f3:60 size: 100MB/s capacity: 1GB/s width: 32 bits clock: 66MHz capabilities: pm pcix bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=e1000 driverversion=7.3.21-k6-NAPI duplex=full firmware=N/A ip=192.168.1.2 latency=64 link=yes mingnt=255 multicast=yes port=twisted pair speed=100MB/s resources: irq:20 memory:fbfe0000-fbffffff memory:fbfc0000-fbfdffff ioport:ec00(size=64) memory:fbfa0000-fbfbffff *-usb:6 description: USB Controller product: SB700/SB800 USB OHCI2 Controller vendor: ATI Technologies Inc physical id: 14.5 bus info: pci@0000:00:14.5 version: 00 width: 32 bits clock: 66MHz capabilities: ohci bus_master configuration: driver=ohci_hcd latency=64 resources: irq:18 memory:f7ffa000-f7ffafff *-pci:1 description: Host bridge product: Family 10h Processor HyperTransport Configuration vendor: Advanced Micro Devices [AMD] physical id: 101 bus info: pci@0000:00:18.0 version: 00 width: 32 bits clock: 33MHz *-pci:2 description: Host bridge product: Family 10h Processor Address Map vendor: Advanced Micro Devices [AMD] physical id: 102 bus info: pci@0000:00:18.1 version: 00 width: 32 bits clock: 33MHz *-pci:3 description: Host bridge product: Family 10h Processor DRAM Controller vendor: Advanced Micro Devices [AMD] physical id: 103 bus info: pci@0000:00:18.2 version: 00 width: 32 bits clock: 33MHz *-pci:4 description: Host bridge product: Family 10h Processor Miscellaneous Control vendor: Advanced Micro Devices [AMD] physical id: 104 bus info: pci@0000:00:18.3 version: 00 width: 32 bits clock: 33MHz configuration: driver=k10temp resources: irq:0 *-pci:5 description: Host bridge product: Family 10h Processor Link Control vendor: Advanced Micro Devices [AMD] physical id: 105 bus info: pci@0000:00:18.4 version: 00 width: 32 bits clock: 33MHz *-scsi physical id: 1 bus info: usb@2:3 logical name: scsi8 capabilities: emulated scsi-host configuration: driver=usb-storage *-disk:0 description: SCSI Disk physical id: 0.0.0 bus info: scsi@8:0.0.0 logical name: /dev/sdc *-disk:1 description: SCSI Disk physical id: 0.0.1 bus info: scsi@8:0.0.1 logical name: /dev/sdd *-disk:2 description: SCSI Disk physical id: 0.0.2 bus info: scsi@8:0.0.2 logical name: /dev/sde *-disk:3 description: SCSI Disk physical id: 0.0.3 bus info: scsi@8:0.0.3 logical name: /dev/sdf *-network DISABLED description: Ethernet interface physical id: 1 logical name: 
vboxnet0 serial: 0a:00:27:00:00:00 capabilities: ethernet physical configuration: broadcast=yes multicast=yes

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same – too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don't hear about so many working on their log files. How can a log file get fragmented? I'm glad you asked. When you create a database there are at least two files created on the disk storage: an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage but that's off topic for now). It is wholly possible to have more than one log file, but in most cases there is little point in creating more than one, as the log file is written to in a 'wrap-around' method (more on that later). When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depends on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto-grow? Then you have potentially been introducing many VLFs into your log file. Let's see how many VLFs we have in a brand new database.

    USE master
    GO
    CREATE DATABASE VLF_Test
    ON
    ( NAME = VLF_Test,
      FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf',
      SIZE = 100, MAXSIZE = 500, FILEGROWTH = 50 )
    LOG ON
    ( NAME = VLF_Test_Log,
      FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
      SIZE = 5MB, MAXSIZE = 250MB, FILEGROWTH = 5MB );
    go
    USE VLF_Test
    go
    DBCC LOGINFO;

    The results of this are, firstly, that a new database is created with the specified file sizes, and then the DBCC LOGINFO results are returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them, but let's first note there are 4 rows of information; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes; you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25MB each. Let's alter the CREATE DATABASE script to create a log file that's a bit bigger and see what happens. Alter the code above so that the log file details are replaced by

    LOG ON
    ( NAME = VLF_Test_Log,
      FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
      SIZE = 1GB, MAXSIZE = 25GB, FILEGROWTH = 1GB );

    With a bigger log file specified we get more VLFs. What if we make it bigger again?

    LOG ON
    ( NAME = VLF_Test_Log,
      FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
      SIZE = 5GB, MAXSIZE = 250GB, FILEGROWTH = 5GB );

    This time we see more VLFs are created within our log file. We now have our 5GB log file comprised of 16 files of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic.
    If the file growth is lower than 64MB then 4 VLFs are created.
    If the growth is between 64MB and 1GB then 8 VLFs are created.
    If the growth is greater than 1GB then 16 VLFs are created.
    Now the potential for chaos comes if the default values and settings for log file growth are used. By default a new database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6 MB; let's add some data and see what happens.

    USE vlf_test
    go
    -- we need somewhere to put the data so, a table is in order
    IF OBJECT_ID('A_Table') IS NOT NULL
    DROP TABLE A_Table
    go
    CREATE TABLE A_Table
    (
    Col_A int IDENTITY,
    Col_B CHAR(8000)
    )
    GO
    -- Let's check the state of the log file
    -- 4 VLFs found
    EXECUTE ('DBCC LOGINFO');
    go
    -- We can go ahead and insert some data and then check the state of the log file again
    INSERT A_Table (col_b)
    SELECT TOP 500 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2
    GO
    -- insert 500 rows and we get 22 VLFs
    EXECUTE ('DBCC LOGINFO');
    go
    -- Let's insert more rows
    INSERT A_Table (col_b)
    SELECT TOP 2000 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2
    GO 10
    -- insert 2000 rows, in 10 batches and we suddenly have 107 VLFs
    EXECUTE ('DBCC LOGINFO');

    Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start-up, log backup and log restore operations, so it's well worth keeping a check on this property. How do we prevent excessive VLF creation? Creating the database with larger files and also with larger growth steps, and actively choosing to grow your databases rather than leaving it to the Auto Grow event, can make sure that the growths are made with a size that is optimal. How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don't affect system users. The steps are:
    1. BACKUP LOG YourDBName TO YourBackupDestinationOfChoice
    2. Shrink the log file to its smallest possible size: DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) *
    3. Re-size the log file to the size you want, taking into account your expected needs for the coming months or year: ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) *
    * – If you don't know the file name of your log file then run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while; a consolidated sketch of the sequence follows this paragraph. This is already detailed far better than I can explain it by Kimberly Tripp in her blog 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the bullet list above. Knowing when VLFs are being created: by complete coincidence, while I have been writing this blog (it's been quite some time from its inception to going live), Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it as this is going to catch any sneaky auto-grows that take place and let you know about them right away.
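    Putting the three remediation steps above together, a minimal sketch (the database name, logical log file name, backup path and target size are placeholders you would adjust) could look like this:

    -- 1. Back up the transaction log
    BACKUP LOG YourDBName TO DISK = N'X:\Backups\YourDBName_log.trn';
    GO
    USE YourDBName;
    GO
    -- 2. Shrink the log file to its smallest possible size
    DBCC SHRINKFILE (YourDBName_log, TRUNCATEONLY);
    GO
    -- 3. Re-grow it in one step to the size you expect to need
    ALTER DATABASE YourDBName
    MODIFY FILE (NAME = YourDBName_log, SIZE = 8192MB);
    GO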
    Hassle-free monitoring of VLFs: if you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are others there that are very useful, so take a moment or two to look around while you are there.
    Resources:
    MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx
    Kimberly Tripp from SQLSkills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx
    Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/
    Disclosure: I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.

    Read the article

  • Visiting the Emtel Data Centre

    Back in February, at the first event of the Emtel Knowledge Series (EKS), I spoke to various people at Emtel about their data centre here on the island. I was trying to see whether it would be possible to arrange a meeting over there for a selected group of our community members. Well, let's say it like this... my first approach wasn't that promising and far from successful, but during the following months there were more and more occasions to get in touch with the "right" contact persons at Emtel to make it happen...
    Setting up an appointment and pre-requisites
    The major improvement came during a Boot Camp for Windows Phone 8.1 app development organised by Microsoft Indian Ocean Islands in cooperation with Emtel at the Emtel World, Ebene. Apart from learning bits and pieces regarding Universal Apps, I took the opportunity to get in touch with Arvin Lockee, Sales Executive - Data, during our lunch break. And this really kicked off the whole procedure. Prior to getting access to the Emtel data centre, you are requested to provide the full name and National ID of anyone going to visit. Also, it should be noted that only a limited number of seats were available. Anyway, packed with this information I posted through the usual social media channels. Responses came in very quickly, and on a first-come, first-served (FCFS) basis I noted down the details and forwarded them to Emtel in order to fix a date and time for the visit. In preparation on our side, all attendees exchanged contact details and we organised transport options to go to the data centre in Arsenal. The day before and on the day of our meeting, Arvin sent me a reminder to check whether everything was still confirmed and ready to go... Of course, it was!
    Arriving at the Emtel Data Centre
    As I was coming from Flic En Flac towards the North, we agreed that I would pick up a couple of young fellows near the old post office in Port Louis. All went well, except that Sean might be living in another time zone compared to the rest of us. Anyway, after an extended stop we were complete and arrived just in time in Arsenal to meet and greet with Ish and Veer. Again, Emtel takes access procedures to their data centre very seriously, and the gate stayed closed until all our IDs had been noted and compared to the list of registered attendees. Despite having a good laugh at the mixture of old and new ID cards, it was a straightforward process. The guard was very helpful and guided us to the waiting area at the entrance section of the building. Shortly after, we were welcomed by Kamlesh Bokhoree, the Data Centre Officer. He gave us a brief introduction to the rules and regulations for our visit: no photography allowed, no touching the buttons, and following his instructions throughout the whole visit. Of course!
    Inside the data centre
    Next, he explained the multi-factor authentication system, which uses a combination of biometric data, like a fingerprint reader, and a "classic" PIN panel. The Emtel data centre provides multiple services, and next to co-location for your own hardware they also offer storage options for your backup and archive data in their massive, fire-resistant vault. It was very impressive to learn about the considerations that went into choosing the right location and how the whole premises were set up. It should also be noted that there is 24/7 CCTV surveillance inside and outside the buildings.
    Strengths of the Emtel TIER 3 Data Centre, Mauritius
    Finally, we were guided into the first server room.
    And wow, the whole setup is cleverly planned and laid out in the architecture: from the false floors and ceilings that provide optimum air flow, through the separation of cold and hot aisles between the full-size server racks, to the monitored air conditions that track changes in temperature, smoke detection and other parameters. And, not surprisingly, everything has been implemented in two independent circuits. There is a standardised classification for the construction and operation of data centres world-wide, and Emtel's has been designed to be a TIER 4 building, but due to the lack of an alternative power supplier on the island it is officially registered as a TIER 3 compliant data centre. Maybe in the long run there might be a second supplier of energy next to CEB... time will tell. Luckily, the data centre is integrated into the National Fibre Optic Gigabit Ring, and Emtel already connects internationally through diverse undersea cable routes like SAFE & LION/LION2 out of Mauritius and through several other providers for onwards connectivity. Meanwhile, Arvin managed to join our little group of geeks, and he supported Kamlesh in answering our technical questions regarding the capacities and general operation of the data centre. Visiting the NOC and its dedicated team of IT professionals was surely one of the visual highlights. Seeing their wall of screens monitoring any kind of activity on the data lines, the managed servers and the activity in and around the building was great. Even though I have been using a multi-head setup for years, I cannot keep up with that setup... ;-) But I got a couple of ideas on how to improve my work spaces here at the office.
    Clear advantages of hosting your e-commerce and mobile backends locally
    After the completely isolated NOC area, we continued our Q&A session with Kamlesh and Arvin in the second server room, which is dedicated to shared environments. It should be noted that there is lots of space for full-sized racks and therefore for co-location of your own hardware. Actually, given the feedback that there will be upcoming changes in prices, the facilities at the Emtel data centre are getting more and more competitive and interesting for local companies, especially small and medium enterprises. After seeing this world-class infrastructure available on the island, I'm already considering moving one of my root servers from abroad to be co-located here on the island. This would provide an improved user experience in terms of site performance and latency, especially for upcoming e-commerce solutions for two of my local clients. Later on, we actually started a conversation about additional services that could be a catalyst for the local market, in order to attract more small and medium companies to take the data centre into their evaluations regarding online activities. As of today, Emtel does not provide virtualised server environments, but there might be plans for the future to cover this field as well. Emtel is a mobile operator and internet connectivity provider in the first place; entering a market of managed and virtualised server infrastructure, including capacities in terms of cloud storage and computing, is rather new, and there is a continuous learning curve at Emtel, too. You cannot just jump into a new market and see how it works out...
    And I appreciate Emtel's approach of building on a solid foundation and then adding new services on top of that: Emtel as a future one-stop-shop service provider for all your internet and telecommunications needs.
    Emtel's promotional video about their TIER 3 data centre in Arsenal, Mauritius
    More details are thoroughly described in Emtel's brochure about their data centre. Check out their PDF document here.
    Thanks for this opportunity
    Visiting and walking through the Emtel data centre for more than 2 hours was a great experience. As a representative of the Mauritius Software Craftsmanship Community (MSCC) I would like to thank everyone at Emtel involved in the process of making it happen, and especially Arvin Lockee and Kamlesh Bokhoree for their time and patience in explaining the infrastructure and answering all the endless questions from our members. Thank You!

    Read the article

  • What's up with LDoms: Part 2 - Creating a first, simple guest

    - by Stefan Hinker
    Welcome back! In the first part, we discussed the basic concepts of LDoms and how to configure a simple control domain. We saw how resources were put aside for guest systems and what infrastructure we need for them. With that, we are now ready to create a first, very simple guest domain. In this first example, we'll keep things very simple. Later on, we'll have a detailed look at things like sizing, IO redundancy, other types of IO as well as security. For now, let's start with this very simple guest. It'll have one core's worth of CPU, one crypto unit, 8GB of RAM, a single boot disk and one network port. CPU and RAM are easy. The network port we'll create by attaching a virtual network port to the vswitch we created in the primary domain. This is very much like plugging a cable into a computer system on one end and a network switch on the other. For the boot disk, we'll need two things: a physical piece of storage to hold the data – this is called the backend device in LDoms speak – and then a mapping between that storage and the guest domain, giving it access to that virtual disk. For this example, we'll use a ZFS volume for the backend. We'll discuss what other options there are for this and how to choose the right one in a later article. Here we go:

    root@sun # ldm create mars
    root@sun # ldm set-vcpu 8 mars
    root@sun # ldm set-mau 1 mars
    root@sun # ldm set-memory 8g mars
    root@sun # zfs create rpool/guests
    root@sun # zfs create -V 32g rpool/guests/mars.bootdisk
    root@sun # ldm add-vdsdev /dev/zvol/dsk/rpool/guests/mars.bootdisk \
    mars.root@primary-vds
    root@sun # ldm add-vdisk root mars.root@primary-vds mars
    root@sun # ldm add-vnet net0 switch-primary mars

    That's all, mars is now ready to power on. There are just three commands between us and the OK prompt of mars: we have to "bind" the domain, start it and connect to its console. Binding is the process where the hypervisor actually puts all the pieces that we've configured together. If we made a mistake, binding is where we'll be told (starting in version 2.1, a lot of sanity checking has been put into the config commands themselves, but binding will catch everything else). Once bound, we can start (and of course later stop) the domain, which will trigger the boot process of OBP. By default, the domain will then try to boot right away. If we don't want that, we can set "auto-boot?" to false. Finally, we'll use telnet to connect to the console of our newly created guest. The output of "ldm list" shows us what port has been assigned to mars. By default, the console service only listens on the loopback interface, so using telnet is not a large security concern here.

    root@sun # ldm set-variable auto-boot\?=false mars
    root@sun # ldm bind mars
    root@sun # ldm start mars
    root@sun # ldm list
    NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
    primary active -n-cv- UART 8 7680M 0.5% 1d 4h 30m
    mars active -t---- 5000 8 8G 12% 1s
    root@sun # telnet localhost 5000
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is '^]'.
    ~Connecting to console "mars" in group "mars" ....
    Press ~? for control options ..
    {0} ok banner
    SPARC T3-4, No Keyboard
    Copyright (c) 1998, 2011, Oracle and/or its affiliates. All rights reserved.
    OpenBoot 4.33.1, 8192 MB memory available, Serial # 87203131.
    Ethernet address 0:21:28:24:1b:50, Host ID: 85241b50.
{0} ok We're done, mars is ready to install Solaris, preferably using AI, of course ;-)  But before we do that, let's have a little look at the OBP environment to see how our virtual devices show up here: {0} ok printenv auto-boot? auto-boot? = false {0} ok printenv boot-device boot-device = disk net {0} ok devalias root /virtual-devices@100/channel-devices@200/disk@0 net0 /virtual-devices@100/channel-devices@200/network@0 net /virtual-devices@100/channel-devices@200/network@0 disk /virtual-devices@100/channel-devices@200/disk@0 virtual-console /virtual-devices/console@1 name aliases We can see that setting the OBP variable "auto-boot?" to false with the ldm command worked.  Of course, we'd normally set this to "true" to allow Solaris to boot right away once the LDom guest is started.  The setting for "boot-device" is the default "disk net", which means OBP would try to boot off the devices pointed to by the aliases "disk" and "net" in that order, which usually means "disk" once Solaris is installed on the disk image.  The actual devices these aliases point to are shown with the command "devalias".  Here, we have one line for both "disk" and "net".  The device paths speak for themselves.  Note that each of these devices has a second alias: "net0" for the network device and "root" for the disk device.  These are the very same names we've given these devices in the control domain with the commands "ldm add-vnet" and "ldm add-vdisk".  Remember this, as it is very useful once you have several dozen disk devices... To wrap this up, in this part we've created a simple guest domain, complete with CPU, memory, boot disk and network connectivity.  This should be enough to get you going.  I will cover all the more advanced features and a little more theoretical background in several follow-on articles.  For some background reading, I'd recommend the following links: LDoms 2.2 Admin Guide: Setting up Guest Domains Virtual Console Server: vntsd manpage - This includes the control sequences and commands available to control the console session. OpenBoot 4.x command reference - All the things you can do at the ok prompt

    Read the article
