Search Results



  • CodePlex Daily Summary for Friday, May 21, 2010

    CodePlex Daily Summary for Friday, May 21, 2010

    New Projects

    • .Net wrapper around the Neo4j Rest Server: Neo4jRestSharp is a .Net API wrapper for the Neo4j Rest Server. Neo4j is an open-source, Java-based transactional graph database that stores data ...
    • 3D Editor Application Framework: A starting point for building 3D editing applications, such as video game editors, particle system editors, 3D modelling tools, visualization tools...
    • Bulk Actions for SharePoint: This project aims to provide some essential and generic bulk actions for SharePoint lists. The idea is to include any custom actions that can be applie...
    • CineRemote - The hometheater control board: CineRemote's purpose is to offer an alternative to expensive control systems for dedicated hometheater rooms.
    • CrmContrib: CrmContrib is a collection of useful items for developers and customizers working with the Dynamics CRM platform.
    • db2xls: OleDb, Sql Server, Sqlite, ... to Excel, from SQL.
    • HappyNet - Silverlight reference application: HappyNet is a project using best practices to build an e-commerce web site. It is a full Silverlight application based on a solid architecture (PR...
    • IP Multicast Library: IP Multicast Library makes it easier for developers to add multicast messaging to projects.
    • Linkbutton Web Part: This Link Button Web Part can be installed in any SharePoint 2007 web site. You can configure a URL with a query string that will be used by the Link...
    • Majordomus pro Windows: A tool for administrators and developers that starts the services, processes, and applications you use in Windows and shuts down the ones you don't need, in a controlled way. Using s...
    • MRDS Samples: The MRDS Samples site hosts a variety of code samples for Microsoft Robotics Developer Studio (RDS).
    • Mute4: Mute4 is a simple application that allows you to set a mute/vibration profile and it will switch back to your normal profile automatically after a ...
    • Niko Neko Pureya: Niko Neko Pureya is a media player designed for people who watch a series of videos (like anime). It is very simple and easy to use & learn. And ...
    • NVPX - VP8 Video Codec for .Net: NVPx allows you to use the now open-source VP8 codec on the .Net platform.
    • openrs: openrs is an open-source RuneScape 2 emulator designed to be used with newer engine clients.
    • Prism Evaluation: prism evaluation
    • Proj4Net: Proj4Net is a C#/.Net library to transform point coordinates from one geographic coordinate system to another, including datum transformation. The ...
    • Read it to me!: Read it to me will allow you to load txt and rtf files and then speak them using SAPI 5 voices that are installed on your computer with an option t...
    • sGSHOPedit: -
    • SilverDice: SilverDice...
    • SilverDude Toolkit for Silverlight: SilverDude Toolkit for Silverlight contains a collection of Silverlight controls making life easier for developers. You'll no longer have to worry ...
    • Silverlight Report: Open-source Silverlight reporting engine. This project allows you to create and print reports using Silverlight 4.
    • SimTrain5000: Train simulation project at University College of Northern Denmark.
    • Springshield Sample Site for EPiServer CMS: City of Springshield - the accessible sample site for EPiServer CMS 6.
    • Teach.Net: Teach.Net is a library/framework that can be used to create applications for testing and learning.
    • The Amoeba Project: The Amoeba Project is a platform to be developed to embrace most of the latest Microsoft Technologies. Still in a conceptual stage, however, it loo...
    • The Fastcopy Helper: The Fastcopy Helper is an auxiliary tool for FastCopy.
    • vow: vow
    • WCF Client Generator: This code generator avoids the shortcomings of svcutil when generating proxies for services with a large number of methods.
    • WebCycle: WebCycle is a screensaver application that cycles through web pages. This was originally created to cycle through Reporting Services reports so th...
    • XGate2D - XNA 2D Game Engine: XGate2D is a 2D game engine built using the XNA Framework. XGate2D currently has 8 features: input handler, animation, graphical user interface (GUI), ...
    • XNA Catapult Minigame for XNA 4: XNA 4 implementation of the Catapult Minigame Sample from XNA Creators Club.

    New Releases

    • ADefHelpDesk: ADefHelpDesk (Standard ASP.NET Version) 01.00.00: ADefHelpDesk, a Help Desk / Ticket Tracker module. NOTE: this version is NOT a DotNetNuke module - it is a standard ASP.NET application. SQL 2005...
    • Bulk Actions for SharePoint: First Release: First release - includes the following bulk list actions: Delete, Checkin/Checkout, Publish/Unpublish, Move, Update Metadata.
    • Check-in Wizard for ArenaChMS: v1.2.1: v1.2.0 updated to work with Arena 2009.2 (see notes below). Added support for "At Kiosk" and "At Location" printing. Added support for print l...
    • ConfigTray: 1.5: Version 1.5 will have a new UI for managing ConfigTray config. Instead of manually editing configtray.exe.config to add/delete/edit settings and fi...
    • CrmContrib: CrmContribWorkflow 1.0 ALPHA1: This is an initial release of the CrmContribWorkflow 1.0 components. At the moment there are only two activities included in this release. Add Cont...
    • DemotToolkit: DemotToolkit-0.1.0.50830: Initial release.
    • DemotToolkit: DemotToolkit-0.1.1.51107: Fixed crashing in some circumstances.
    • Dot Game: Dot Game Stable Release: This is the latest stable release, without network play mode. (Network play mode is under development.)
    • Dynamic Survey Forms - SharePoint Web Part: Fix for missing dlls and documentation: Added missing assemblies to setup.zip. Installation instructions.
    • EnhSim: V1.9.8.7: Added Sharpened Twilight Scale.
    • Event Scavenger: Viewer 3.2.2: Fixed a bug in the viewer where the previous view's 'Top x' filter was not restored after the application was reopened.
    • F# Project Extender: V0.9.2.0 (VS2008, VS2010): F# project extender for Visual Studio 2008 and Visual Studio 2010. Fixed bugs: VS2010 crash on MoveUp (MoveDown) of renamed file; adding files brea...
    • FlickrNet API Library: 3.0 Beta 2: The final beta for the 3.0 release. Fixes a major issue with Photosets.GetList as well as a number of smaller bugs, and adds the new Usage extras ...
    • Folder Bookmarks: Folder Bookmarks 1.5.7: The latest version of Folder Bookmarks (1.5.7), with the new Help feature - all the instructions needed to use the software (if you have any sugges...
    • Linkbutton Web Part: V1.1: Use WinZip to unzip. See the docs folder for installation instructions.
    • Live-Exchange Calendar Sync: Live-Exchange Calendar Sync Final: May 14, 2010 release of Live-Exchange Calendar Sync 1.0 (version 46127). Getting Started: info about installation ...
    • MEFedMVVM: MEFedMVVM: This version contains the MEFedMVVM ViewModelLocator and also some basic services such as Mediator and StateManager. You can download the code fr...
    • Mentor Text Database: May 2010 Release with instrumentation: This should function the same as the previous version. Some enhancements have been made, and additional instrumentation has been added to help anal...
    • Merthin: SSF 2010: Code and documentation presented at the Student Science Fair of the Faculty of Mathematics and Computer Science at the University of Havana. The ma...
    • NB_Store - Free DotNetNuke Ecommerce Catalog Module: NB_Store_02.01.00: NB_Store v2.1.0. THIS IS AN ALPHA RELEASE FOR TESTING ONLY... DO NOT USE IT ON A LIVE SYSTEM.
    • NerdDinner.com - Where Geeks Eat: NerdDinner - Four Database Access Samples: Chris Sells worked with Nick Muhonen from Useable Concepts, and Nick created four samples exploring how an ASP.NET MVC application can access databa...
    • openrs: Devstart: Trunk release, empty project.
    • Over Store: OverStore 1.19.0.0: Version number is increased; added methods for specifying custom callback methods to TableMappingRepositoryConfiguration; object attaching fu...
    • Rnwood.SmtpServer: Rnwood.SmtpServer 2.0: SmtpServer 2.0 is a .NET SMTP server component written in pure C#. It was written to power http://smtp4dev.codeplex.com/ but can easily be used by ...
    • Scrum Sprint Monitor: v1.0.0.48524 (.NET 4-TFS 2010): What is new in this release? #6132 - bug with open work hours; added untested support for the MSF for Agile process template; improved data reporti...
    • SharePoint Rsync List: 1.0.0.0: This initial 1.0 release includes a new feature which manages timer jobs on your sync list.
    • Should: Beta 1.1: Updated the namespaces. The extension methods are now in the root Should namespace. The other classes are not in child namespaces.
    • SilverDude Toolkit for Silverlight: SilverDude Toolkit for Silverlight: Kindly give your comments about this project and tell how you feel about it. I'm still new to creating controls; hopefully you guys can support me....
    • Silverlight Report: SilverlightReport_v0.1_alpha_bin: SilverlightReport v0.1 alpha.
    • SLARToolkit - Silverlight Augmented Reality Toolkit: SLARToolkit 1.0.2.0: Fixed a problem with long referenced DetectionResults that might have caused an IndexOutOfRangeException. Added Marker.LoadFromResource to get rid...
    • The Fastcopy Helper: My Fastcopy Helper 1.0: This source code uses a method of my own devising to run it, so the performance may be lower.
    • Thinktecture.DataObjectModel: Thinktecture.DataObjectModel v0.12: Some bugs fixed. See ChangeLog.txt for more info.
    • Umbraco CMS: Umbraco 4.0.4.1: A stability release fixing 13 issues based on feedback from 4.0.3 users. Most important is a fix for a serious date bug where day and month could ...
    • Usa*Usa Libraly: Smart.Web.Mobile ver 0.2: Smart.Web.Mobile, a pictogram conversion library for Japanese galapagos k-tai ( ゚д゚), ver 0.2. Custom encoding for HttpRequest.ContentEncoding / HttpResp...
    • VCC: Latest build, v2.1.30520.0: Automatic drop of latest build.
    • vow: dream: I have a dream.
    • vow: test: test
    • WCF Client Generator: Version 0.9.1.42927: Initial feature set complete. Detailed UI pending.
    • WebCycle: WebCycle 1.0.20: Initial CodePlex release.
    • WebCycle: WebCycle 1.0.21: Added URI validation before saving settings.
    • Whois Application: 1.5 release: Uses whois.iana.org to dynamically look up the whois server for each top-level domain; enables Enter key press for search.
    • Wing Beats: Wing Beats 0.9: This first release is focused on the core functionality and XHTML 1.0 Strict generation in ASP.NET MVC.

    Most Popular Projects

    • Web Service Software Factory
    • Plasma
    • Aquisição de Sinais Vitais em Tempo Real (Vital signs realtime data acquisition)
    • Octtree XNA-GS DrawableGameComponent
    • Rawr
    • WBFS Manager
    • AJAX Control Toolkit
    • Microsoft SQL Server Product Samples: Database
    • Silverlight Toolkit
    • Windows Presentation Foundation (WPF)

    Most Active Projects

    • Rawr
    • patterns & practices – Enterprise Library
    • GMap.NET - Great Maps for Windows Forms & Presentation
    • PHPExcel
    • BlogEngine.NET
    • SQL Server PowerShell Extensions
    • Caliburn: An Application Framework for WPF and Silverlight
    • NB_Store - Free DotNetNuke Ecommerce Catalog Module
    • patterns & practices: Windows Azure Security Guidance
    • Fluent Ribbon Control Suite

  • Pluralsight Meet the Author Podcast on Structuring JavaScript Code

    - by dwahlin
    I had the opportunity to talk with Fritz Onion from Pluralsight about one of my recent courses, titled Structuring JavaScript Code, for one of their Meet the Author podcasts. We talked about why JavaScript patterns are important for building more reusable and maintainable apps, the pros and cons of different patterns, and how to go about picking a pattern as a project is started. The course provides a solid walk-through of converting what I call "Function Spaghetti Code" into more modular code that's easier to maintain, more reusable, and less susceptible to naming conflicts. Patterns covered in the course include the Prototype Pattern, Revealing Module Pattern, and Revealing Prototype Pattern, along with several other tips and techniques that can be used.

    Meet the Author: Dan Wahlin on Structuring JavaScript Code

    The transcript from the podcast is shown below:

    [Fritz] Hello, this is Fritz Onion with another Pluralsight author interview. Today we're talking with Dan Wahlin about his new course, Structuring JavaScript Code. Hi, Dan, it's good to have you with us today.

    [Dan] Thanks for having me, Fritz.

    [Fritz] So, Dan, your new course, which came out in December of 2011, called Structuring JavaScript Code, goes into several patterns of usage in JavaScript as well as ways of organizing your code, and what struck me about it was all the different techniques you described for encapsulating your code. I was wondering if you could give us just a little insight into what your motivation was for creating this course and sort of why you decided to write it and record it.

    [Dan] Sure. So, I got started with JavaScript back in the mid 90s - in fact, back in the days when browsers that most people haven't heard of were out, and we had JavaScript but it wasn't great. I was on a project in the late 90s that was heavy, heavy JavaScript, and we pretty much did what I call in the course function spaghetti code, where you just have function after function, there's no rhyme or reason to how those functions are structured, they just kind of flow, and it's a little bit hard to do maintenance on it; you really don't get a lot of reuse from an object perspective. And so, coming from an object-oriented background in Java and C#, I wanted to put something together that highlighted kind of the new way, if you will, of writing JavaScript, because most people start out just writing functions, and there's nothing wrong with that - it works - but it's definitely not a real reusable solution. So the course is really all about how to move from just kind of function after function after function to the world of more encapsulated code that's more reusable, with hopefully better maintenance in the process.

    [Fritz] So I am sure a lot of people have had similar experiences with their JavaScript code and will be looking forward to seeing what types of patterns you've put forth. Now, a couple I noticed in your course: one is you start off with the prototype pattern. Do you want to describe sort of what problem that solves and how you go about using it within JavaScript?

    [Dan] Sure. So, the patterns that are covered, such as the prototype pattern and the revealing module pattern just as two examples, you know, show these kind of three things that I harp on throughout the course: encapsulation, better maintenance, reuse, those types of things. The prototype pattern specifically, though, has a couple kind of pros over some of the other patterns, and that is the ability to extend your code without touching source code. What I mean by that is, let's say you're writing a library that you know either other teammates or other people just out there on the Internet in general are going to be using. With the prototype pattern, you can actually write your code in such a way that we're leveraging the JavaScript prototype property, and by doing that, now you can extend my code that I wrote without touching my source code, or you can even override my code and perform some new functionality - again, without touching my code. And so you get kind of the benefit of, almost like, inheritance or overriding in object-oriented languages with this prototype pattern, and it makes it kind of attractive that way, definitely from a maintenance standpoint, because, you know, you don't want to modify a script I wrote - I might roll out version 2, and now you'd have to track where you changed things, and it gets a little tricky. So with this, you just override those pieces or extend them and get that functionality, and that's kind of some of the benefits that that pattern offers out of the box.

    [Fritz] And then the revealing module pattern - how does that differ from the prototype pattern, and what problem does that solve differently?

    [Dan] Yeah, so there's another one that's really closely aligned with the revealing module pattern, called the revealing prototype pattern - it also uses the prototype keyword - but it's very similar to the one you just asked about, the revealing module pattern.

    [Fritz] Okay.

    [Dan] This is a really popular one out there. In fact, we did a project for Microsoft that was very, very heavy JavaScript - it was an HTML5 jQuery type app - and we used this pattern for most of the structure, if you will, for the JavaScript code. What it does, in a nutshell, is allows you to get that encapsulation, so you have really a single function wrapper that wraps all your other child functions, but it gives you the ability to do public versus private members. And this is sort of a debate out there on the web: some people feel that all JavaScript code should just be directly accessible, and others kind of like to be able to truly hide their private stuff - and a lot of people do that; you just put an underscore in front of your field or your variable name or your function name, and that is kind of the de facto way to say, hey, this is private. With the revealing module pattern, you can do the equivalent of what object-oriented languages do and actually have private members that you literally can't get to as an external consumer of the JavaScript code, and then you can expose only those members that you want to be public. Now, you don't get the benefit, though, of the prototype feature, which is: I can't easily extend revealing-module-pattern-type code. If you don't like something I'm doing, chances are you're probably going to have to tweak my code to fix it, because we're not leveraging prototyping. But in situations where you're writing apps that are very specific to a given target app - you know, it's not a library, it's not going to be used in other apps all over the place - it's a pattern I actually like a lot. It's very simple to get going, and then if you do like that public/private feature, it's available to you.

    [Fritz] Yeah, that's interesting. So it's almost, you can either go private by convention, just by using a standard naming convention, or you can actually enforce it by using the revealing module pattern.

    [Dan] Yeah, that's exactly right.

    [Fritz] So one of the things that I know I run across in JavaScript, and I'm curious to get your take on, is: we do have all these different techniques of encapsulation, and each one is really quite different - when you're using closures versus simply, you know, referencing member variables and adding them to your objects - the syntax changes with each pattern, and the usage changes. So what would you recommend for people starting out on a brand new JavaScript project? Should they all sort of decide beforehand on what patterns they're going to stick to, or do you change it based on what part of the library you're working on? I know that's one of the points of confusion in this space.

    [Dan] Yeah, it's a great question. In fact, I just had a company ask me about that - so which one do I pick? And, of course, there's not one answer that fits all.

    [Fritz] Right.

    [Dan] So it really depends. What you just said is, in my opinion, absolutely correct, which is: I think, especially if you're on a team - or even if you're just an individual, a team of one - you should go through and pick out which pattern for this particular project you think is best. Now, if it were me, here's kind of the way I think of it. If I were writing, let's say, a base library that several web apps are going to use - or even one, but I know that there are going to be some pieces that I'm not really sure on right now as I'm writing it, and I know people might want to hook into that and have some better extension points - then I would look at either the prototype pattern or the revealing prototype. Now, really just a quick summation of the two: the revealing prototype also gives you that public/private stuff, like the revealing module pattern does, whereas the prototype pattern does not; but both of the prototype patterns do give you the benefit of that extension or hook capability. So, if I were writing a library where I need people to override things - or I'm not even sure what I need them to override, and I want them to have that option - I'd probably pick one of the prototype patterns. If I'm writing some code that is very unique to the app, and it's kind of a one-off for this app - which is the mode I think a lot of people are in, writing custom apps for customers - then my personal preference is the revealing module pattern. You could always go with the module pattern as well, which is very close, but I think the revealing module pattern's a little bit cleaner, and we go through that in the course and explain kind of the syntax there and the differences.

    [Fritz] Great, that makes a lot of sense. I appreciate you taking the time, Dan, and I hope everyone takes a chance to look at your course and sort of make these decisions for themselves in their next JavaScript project. Dan's course is Structuring JavaScript Code, and it's available now in the Pluralsight library. So, thank you very much, Dan.

    [Dan] Thanks for having me again.
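    To make the trade-off discussed above concrete, here is a minimal sketch of my own (the calculator example and its names are illustrative, not taken from the course) showing the revealing module pattern next to the prototype pattern:

        // Revealing module pattern: one wrapper function, truly private members,
        // and only the members returned at the bottom are public.
        var calculator = (function () {
            var history = []; // private - invisible to external consumers

            function add(x, y) {
                var result = x + y;
                history.push(result); // private state is still usable internally
                return result;
            }

            function lastResult() {
                return history[history.length - 1];
            }

            // Expose only the public members.
            return {
                add: add,
                lastResult: lastResult
            };
        }());

        calculator.add(2, 3);   // 5
        // calculator.history   // undefined - privacy is enforced,
        //                      // but consumers cannot extend the module.

        // Prototype pattern: members live on the prototype, so a consumer can
        // extend or override behavior without touching the original source file.
        function Calculator() {
            this.history = []; // public by default - privacy is by convention only
        }

        Calculator.prototype.add = function (x, y) {
            var result = x + y;
            this.history.push(result);
            return result;
        };

        // In a separate file, a consumer overrides add() without editing the library:
        var originalAdd = Calculator.prototype.add;
        Calculator.prototype.add = function (x, y) {
            console.log("adding", x, y); // new behavior layered on top
            return originalAdd.call(this, x, y);
        };

    This is the distinction the interview keeps returning to: the revealing module pattern buys enforced privacy, while the prototype patterns buy extensibility.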

  • Letter to Ballmer: Making Better Consumer Devices

    - by andrewbrust
    Last year, I wrote Steve Ballmer an email, and he was kind enough to write me back. The email contained a scan of a column I wrote praising Microsoft's BI strategy. His reply contained three simple words: "Super nice thanks." Well, now I'd like to write to Steve again, in an open letter format, and this time the love may be a bit tougher. But I'm still super earnest.

    The past two days have been eventful ones for Microsoft: the company announced the departure of company veterans Robbie Bach and J Allard, and the market announced Apple is now besting Microsoft in market capitalization. Plus, announcements were made that make it plain that Ballmer will, in effect, be running Microsoft's Entertainment & Devices division himself. With that in mind, I'd like to offer my list of a dozen things I think Microsoft's CEO should do to improve that division's offerings and, hopefully, its bottom line. So here goes:

    1. On Windows Phone 7, Stay the Course
    The press is teeming with headlines and reader comments proclaiming the death-before-arrival of Windows Phone 7. That's plain silly. You've got the makings of a great and unique smartphone platform, and you're the only company (even considering RIM) that can offer full-fidelity Exchange integration, not to mention implementing Office on the device. Let the existing team finish this puppy and ship it. And then have them pump out a few updates, over the air, quickly. Show them that Google Android's not the only product that can do good, rapid dot releases. And another thing: make sure your OEMs' devices have flawless touch screens. If they don't, then you shouldn't certify them for delivery to customers. Period. Oh, and kill the Kin, quietly. It was DOA, and you know it.

    2. Move Media Center to the Xbox Platform
    Media Center is, at its core, a good product. But delivering a media distribution and DVR platform on a sophisticated PC operating system like Windows 7 just creates too many moving parts. Xbox already functions as the best Media Center extender device – it should actually be the hub as well. Media Center is mostly based on .NET code – and XNA is a .NET environment for Xbox – so find a way to bridge that small gap and make Media Center a joy to work with instead of a frustration. Beating Apple TV out of this sub-market is the lowest hanging fruit on the tree (goofy pun, but it's true).

    3. Integrate Media Center with Mediaroom, or Kill the Latter
    You have two media products with almost identical names. One is for standalone DVRs and the other is for IPTV cable set tops with DVR capabilities. Can we merge these, please? My previous request of putting Media Center on Xbox would seem to tie into this nicely, since you've announced plans to do that with Mediaroom already.

    4. Fix the Red Ring of Death
    People love the Xbox, but they really don't love sending their consoles back every 18-24 months, when they get a bunch of red lights flashing on power up. You've handled this defect about as gracefully as possible, but it's been around for a long time now and it doesn't seem to be fixed yet. You can do better. In fact, you must do better, or you insult your customers.

    5. Add Blu-ray to Xbox
    I know, streaming movies are the future; physical media is legacy technology. So if that's true, why did you back HD DVD so hard? You know why: for now, the film studios won't allow a large selection of new-release, HD, surround-sound content to be distributed on any medium other than Blu-ray or cable pay-per-view/on-demand. Don't you want home theater buffs to see the Xbox as a fantastic device for their rigs? Don't you want to put PlayStation 3 out of its misery? And if you follow my suggestions above (move Media Center to the Xbox and fix the Red Ring problem), you'd have it all sewn up. Do I think Blu-ray functionality will move a lot of units? No. Do I think that it would move more units with desperately needed, influential home theater consumers? You bet. And you might sell more ZunePass subscriptions in the process. But while you're at it, make the fan quieter, please.

    6. Make More of Windows Home Server
    Home Server is a fantastic product. And for reasons unknown to me, it seems like you're letting it languish. Development of the add-in ecosystem seems underfunded. WHS' unparalleled ease of use and reliability for home PC backup (and emergency restores) goes unsung. Product cycles are slow. Support for your OEMs, who are doing great work, especially in the green space with Atom CPUs, seems lacking. You've married a trophy girl and you keep her cloistered at home! That's cruel, unusual and, um, incredibly ill-advised. Make use of this ace card, and while you're at it, give it real integration with Media Center. The integration thus far is proof-of-concept quality. You should go way past that – both products will benefit immeasurably.

    7. Set Up a Partner Platform for Custom Installers
    There's a whole sub-industry of companies that install, integrate and configure home theater, security and connected-home products. They have an industry group. They are influential in the high end of the consumer electronics industry, and so are their customers. They love Media Center and they love Windows Home Server. But I have talked to several of them at the Consumer Electronics Show, and they tell me you don't love them. They find it very difficult to do business with Microsoft, even though they want nothing more than to sell and evangelize your platform. This is a travesty. Please fix it. Get Allison Watson and the Microsoft Partner Network on board and have her hire someone who knows how to run a channel program for consumer electronics companies. Problem solved. Markets expanded.

    8. Make Your Own Hardware
    In other areas, I know you love your partners. I help run one, so I appreciate that. But when it came to Xbox and Zune, you built them yourself (albeit on a contract basis, which is fine). Windows Phone 7 has a chance to work as an OEM play, but it would work better if you produced the devices. At least consider building a reference device that sells alongside your OEMs' offerings. That's what Google did with the Nexus One. And while that phone was not itself a big seller, it catalyzed two wonderful things: (1) a quality bar was set and (2) partners exceeded it. Before the Nexus One, the best Android handset out there was the Motorola Droid. The Nexus One was better, and the HTC Droid Incredible and Evo 4G are now even better than Google's phone, which is why Verizon and Sprint decided not to carry it. Imagine if all Windows Phone 6.x devices were on par with the HTC HD2. I tend to believe you'd have a lot bigger market share than you do now.

    9. Continue with Your Retail Initiative
    From what I hear, it sounds like it's going well. And this goes right along with making your own hardware. When you build it, they will come. And then it makes the likes of Best Buy and Staples do better.

    10. Make an Acquisition (or Two)
    TiVo and/or Moxi look ripe for the picking. With their ability to build stuff people love and your ability to run a business, you might just have something. But do a better job than you did when you bought Danger. Buy the ideas, not just the customers, eh?

    11. Make Beautiful Stuff
    You've heard this one before, I know. But I have some head-shrinking advice on this one. You know that Apple obsesses over its industrial design. You know that appeals to consumers. But it seems you think doing so is Apple's game exclusively, and so you shouldn't even try. Bull dinky. Come to New York and visit the Museum of Modern Art's Architecture and Design gallery. You'll see that lots of companies and product categories have had very high design value well before Apple existed. You can do this, and the Zune HD was a great start. Now run with that. Find those negative voices in your head that are telling you that you can't, and shut them up. For good.

    12. Burst the Bubble
    Some of the products you've built seem like they were conceived in a bizarro world. That would appear to be the result of groupthink. You must do better. And there are lots of people willing to advise you. This includes just about everyone in the Regional Director program, and probably a bunch of MVPs. Heck, I bet the guys at Engadget could help out too. Imagine if you had let them see the Kin before it shipped. Talk to high-end gear consumers. Talk to Best Buy and Costco customers too.

    Signing Off
    I hope this was of value to you. As I wrote this, I kept telling myself how obvious, even trite, some of these pieces of advice were, and then, because of that, doubting they'd really help. But I decided that they must not be obvious to Microsoft. Sometimes when you get wrapped up in stuff, it's hard to clear your head. I think my head's pretty clear here, though (I'm wrapped up in other stuff), so maybe my perspective can help. If not, well, then, I guess they all can't be super nice.

  • Ten - oh, wait, eleven - Eleven things you should know about the ASP.NET Fall 2012 Update

    - by Jon Galloway
    Today, just a little over two months after the big ASP.NET 4.5 / ASP.NET MVC 4 / ASP.NET Web API / Visual Studio 2012 / Web Matrix 2 release, the first preview of the ASP.NET Fall 2012 Update is out. Here's what you need to know:

    1. There are no new framework bits in this release - there's no change or update to the core ASP.NET, ASP.NET MVC or Web Forms features. This means that you can start using it without any updates to your server, upgrade concerns, etc.
    2. This update is really an update to the project templates and Visual Studio tooling, conceptually similar to the ASP.NET MVC 3 Tools Update.
    3. It's a relatively lightweight install. It's a 41MB download. I've installed it many times; it usually takes 5-7 minutes and has never required a reboot.
    4. It adds some new project templates to ASP.NET MVC: Facebook Application and Single Page Application templates.
    5. It adds a lot of cool enhancements to ASP.NET Web API.
    6. It adds some tooling that makes it easy to take advantage of features like SignalR, Friendly URLs, and Windows Azure Authentication.
    7. Most of the new features are installed via NuGet packages.
    8. Since ASP.NET is open source, nightly NuGet packages are available, and the roadmap is published, most of this has really been publicly available for a while.
    9. The official name of this drop is the ASP.NET Fall 2012 Update BUILD Prerelease. Please do not attempt to say that ten times fast. While the EULA doesn't prohibit it, it WILL legally change your first name to Scott.
    10. As with all new releases, you can find out everything you need to know about the Fall Update at http://asp.net/vnext (especially the release notes!)
    11. I'm going to be showing all of this off, assisted by special guest code monkey Scott Hanselman, this Friday at BUILD: Bleeding edge ASP.NET: See what is next for MVC, Web API, SignalR and more… (and I've heard it will be livestreamed).

    Let's look at some of those things in more detail.

    No new bits
    ASP.NET 4.5, MVC 4 and Web API have a lot of great core features. I see the goal of this update release as making it easier to put those features to use to solve some useful scenarios by taking advantage of NuGet packages and template code. If you create a new ASP.NET MVC application using one of the new templates, you'll see that it's using the ASP.NET MVC 4 RTM NuGet package (4.0.20710.0). This means you can install and use the Fall Update without any impact on your existing projects and no worries about upgrading or compatibility.

    New Facebook Application Template
    ASP.NET MVC 4 (and ASP.NET 4.5 Web Forms) included the ability to authenticate your users via OAuth and OpenID, so you could let users log in to your site using a Facebook account. One of the new changes in the Fall Update is a new template that makes it really easy to create full Facebook applications. You could create a Facebook application in ASP.NET already; you'd just need to go through a few steps:

    1. Search around to find a good Facebook NuGet package, like the Facebook C# SDK (written by my friend Nathan Totten and some other Facebook SDK brainiacs).
    2. Read the Facebook developer documentation to figure out how to authenticate and integrate with them.
    3. Write some code, debug it and repeat until you get something working.
    4. Get started with the application you'd originally wanted to write.

    What this template does for you: eliminate steps 1-3. Erik Porter, Nathan and some other experts built out the Facebook Application template so it automatically pulls in and configures the Facebook NuGet package and makes it really easy to take advantage of it in an ASP.NET MVC application. One great example is the way you access a Facebook user's information. Take a look at the following code in a File / New / MVC / Facebook Application site. First, the Home Controller Index action:

        [FacebookAuthorize(Permissions = "email")]
        public ActionResult Index(MyAppUser user, FacebookObjectList<MyAppUserFriend> userFriends)
        {
            ViewBag.Message = "Modify this template to jump-start your Facebook application using ASP.NET MVC.";
            ViewBag.User = user;
            ViewBag.Friends = userFriends.Take(5);
            return View();
        }

    First, notice that there's a FacebookAuthorize attribute which requires that the user is authenticated via Facebook and requires permission to access their e-mail address. It binds to two things: a custom MyAppUser object and a list of friends. Let's look at the MyAppUser code:

        using Microsoft.AspNet.Mvc.Facebook.Attributes;
        using Microsoft.AspNet.Mvc.Facebook.Models;

        // Add any fields you want to be saved for each user and specify the field name
        // in the JSON coming back from Facebook
        // https://developers.facebook.com/docs/reference/api/user/
        namespace MvcApplication3.Models
        {
            public class MyAppUser : FacebookUser
            {
                public string Name { get; set; }

                [FacebookField(FieldName = "picture", JsonField = "picture.data.url")]
                public string PictureUrl { get; set; }

                public string Email { get; set; }
            }
        }

    You can add in other custom fields if you want, but you can also just bind to a FacebookUser and it will automatically pull in the available fields. You can even just bind directly to a FacebookUser and check what's available in debug mode, which makes it really easy to explore. For more information and some walkthroughs on creating Facebook applications, see:

    • Deploying your first Facebook App on Azure using ASP.NET MVC Facebook Template (Yao Huang Lin)
    • Facebook Application Template Tutorial (Erik Porter)

    Single Page Application template
    Early releases of ASP.NET MVC 4 included a Single Page Application template, but it was removed for the official release. There was a lot of interest in it, but it was kind of complex, as it handled features for things like data management. The new Single Page Application template that ships with the Fall Update is more lightweight. It uses Knockout.js on the client and ASP.NET Web API on the server, and it includes a sample application that shows how they all work together. I think the real benefit of this application is that it shows a good pattern for using ASP.NET Web API and Knockout.js. For instance, it's easy to end up with a mess of JavaScript when you're building out a client-side application. This template uses three separate JavaScript files (delivered via a Bundle, of course):

    • todoList.js - this is where the main client-side logic lives
    • todoList.dataAccess.js - this defines how the client-side application interacts with the back-end services
    • todoList.bindings.js - this is where you set up events and overrides for the Knockout bindings - for instance, hooking up jQuery validation and defining some client-side events

    This is a fun one to play with, because you can just create a new Single Page Application and hit F5.

    Quick, easy install (with one gotcha)
    One of the cool engineering changes for this release is a big update to the installer to make it more lightweight and efficient. I've been running nightly builds of this for a few weeks to prep for my BUILD demos, and the install has been really quick and easy to use. The install takes about 5 minutes, has never required a reboot for me, and the uninstall is just as simple. There's one gotcha, though. In this preview release, you may hit an issue that will require you to uninstall and re-install the NuGet VSIX package. The problem comes up when you create a new MVC application and see an error dialog. The solution, as explained in the release notes, is to uninstall and re-install the NuGet VSIX package:

    1. Start Visual Studio 2012 as an Administrator.
    2. Go to Tools -> Extensions and Updates and uninstall NuGet.
    3. Close Visual Studio.
    4. Navigate to the ASP.NET Fall 2012 Update installation folder:
       • For Visual Studio 2012: Program Files\Microsoft ASP.NET\ASP.NET Web Stack\Visual Studio 2012
       • For Visual Studio 2012 Express for Web: Program Files\Microsoft ASP.NET\ASP.NET Web Stack\Visual Studio Express 2012 for Web
    5. Double-click on the NuGet.Tools.vsix to reinstall NuGet.

    This took me under a minute to do, and I was up and running.

    ASP.NET Web API Update Extravaganza!
    Uh, the Web API team is out of hand. They added a ton of new stuff: OData support, tracing, and API help page generation.

    OData support
    Some people like OData. Some people start twitching when you mention it. If you're in the first group, this is for you. You can add a [Queryable] attribute to an API that returns an IQueryable<Whatever> and you get OData query support from your clients. Then, without any extra changes to your client or server code, your clients can send filters like this:

        /Suppliers?$filter=Name eq 'Microsoft'

    For more information about OData support in ASP.NET Web API, see Alex James' mega-post about it: OData support in ASP.NET Web API.

    ASP.NET Web API Tracing
    Tracing makes it really easy to leverage the .NET tracing system from within your ASP.NET Web APIs. If you look at the \App_Start\WebApiConfig.cs file in a new ASP.NET Web API project, you'll see a call to TraceConfig.Register(config). That calls into some code in the new \App_Start\TraceConfig.cs file:

        public static void Register(HttpConfiguration configuration)
        {
            if (configuration == null)
            {
                throw new ArgumentNullException("configuration");
            }

            SystemDiagnosticsTraceWriter traceWriter = new SystemDiagnosticsTraceWriter()
            {
                MinimumLevel = TraceLevel.Info,
                IsVerbose = false
            };

            configuration.Services.Replace(typeof(ITraceWriter), traceWriter);
        }

    As you can see, this is using the standard trace system, so you can extend it to any other trace listeners you'd like. To see how it works with the built-in diagnostics trace writer, just run the application, call some APIs, and look at the Visual Studio Output window:

        iisexpress.exe Information: 0 : Request, Method=GET, Url=http://localhost:11147/api/Values, Message='http://localhost:11147/api/Values'
        iisexpress.exe Information: 0 : Message='Values', Operation=DefaultHttpControllerSelector.SelectController
        iisexpress.exe Information: 0 : Message='WebAPI.Controllers.ValuesController', Operation=DefaultHttpControllerActivator.Create
        iisexpress.exe Information: 0 : Message='WebAPI.Controllers.ValuesController', Operation=HttpControllerDescriptor.CreateController
        iisexpress.exe Information: 0 : Message='Selected action 'Get()'', Operation=ApiControllerActionSelector.SelectAction
        iisexpress.exe Information: 0 : Operation=HttpActionBinding.ExecuteBindingAsync
        iisexpress.exe Information: 0 : Operation=QueryableAttribute.ActionExecuting
        iisexpress.exe Information: 0 : Message='Action returned 'System.String[]'', Operation=ReflectedHttpActionDescriptor.ExecuteAsync
        iisexpress.exe Information: 0 : Message='Will use same 'JsonMediaTypeFormatter' formatter', Operation=JsonMediaTypeFormatter.GetPerRequestFormatterInstance
        iisexpress.exe Information: 0 : Message='Selected formatter='JsonMediaTypeFormatter', content-type='application/json; charset=utf-8'', Operation=DefaultContentNegotiator.Negotiate
        iisexpress.exe Information: 0 : Operation=ApiControllerActionInvoker.InvokeActionAsync, Status=200 (OK)
        iisexpress.exe Information: 0 : Operation=QueryableAttribute.ActionExecuted, Status=200 (OK)
        iisexpress.exe Information: 0 : Operation=ValuesController.ExecuteAsync, Status=200 (OK)
        iisexpress.exe Information: 0 : Response, Status=200 (OK), Method=GET, Url=http://localhost:11147/api/Values, Message='Content-type='application/json; charset=utf-8', content-length=unknown'
        iisexpress.exe Information: 0 : Operation=JsonMediaTypeFormatter.WriteToStreamAsync
        iisexpress.exe Information: 0 : Operation=ValuesController.Dispose

    API Help Page
    When you create a new ASP.NET Web API project, you'll see an API link in the header. Clicking the API link shows generated help documentation for your ASP.NET Web API controllers, and clicking on any of those APIs shows specific information. What's great is that this information is dynamically generated, so if you add your own new APIs it will automatically show useful and up-to-date help. This system is also completely extensible, so you can generate documentation in other formats or customize the HTML help as much as you'd like. The help generation code is all included in an ASP.NET MVC Area.

    SignalR
    SignalR is a really slick open source project that was started by some ASP.NET team members in their spare time to add real-time communications capabilities to ASP.NET - and .NET applications in general. It allows you to handle long-running communications channels between your server and multiple connected clients using the best communications channel they can both support - WebSockets if available, falling back all the way to old technologies like long polling if necessary for old browsers. SignalR remains an open source project, but now it's being included in ASP.NET (also open source, hooray!). That means there's real, official ASP.NET engineering work being put into SignalR, and it's even easier to use in an ASP.NET application. Now in any ASP.NET project type, you can right-click / Add / New Item... SignalR Hub or Persistent Connection.
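    To give a feel for the SignalR hub model just described, here is a hedged sketch of the client side (the hub name chatHub and its methods are invented for illustration; the $.connection object is the SignalR jQuery client):

        // Assumes jquery, jquery.signalR, and the generated /signalr/hubs proxy
        // script are referenced on the page.
        var chat = $.connection.chatHub;

        // A method the server-side hub can invoke on every connected client.
        chat.client.addMessage = function (name, message) {
            $('#messages').append($('<li/>').text(name + ': ' + message));
        };

        // Open the connection, then call a method on the server-side hub.
        $.connection.hub.start().done(function () {
            chat.server.send('Jon', 'Hello from the browser!');
        });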
    And much more...
    There's quite a bit more. You can find more info at http://asp.net/vnext, and we'll be adding more content as fast as we can. Watch my BUILD talk to see me demonstrate these and other features in the ASP.NET Fall 2012 Update, as well as some other even futurey-er stuff!

  • HTML5 Form Validation

    - by Stephen.Walther
    The latest versions of Google Chrome (16+), Mozilla Firefox (8+), and Internet Explorer (10+) all support HTML5 client-side validation. It is time to take HTML5 validation seriously. The purpose of this blog post is to describe how you can take advantage of HTML5 client-side validation regardless of the type of application that you are building. You learn how to use the HTML5 validation attributes, how to perform custom validation using the JavaScript validation constraint API, and how to simulate HTML5 validation on older browsers by taking advantage of a jQuery plugin. Finally, we discuss the security issues related to using client-side validation.

    Using Client-Side Validation Attributes
    The HTML5 specification discusses several attributes which you can use with INPUT elements to perform client-side validation, including the required, pattern, min, max, step, and maxlength attributes. For example, you use the required attribute to require a user to enter a value for an INPUT element. The following form demonstrates how you can make the firstName and lastName form fields required:

        <!DOCTYPE html>
        <html>
        <head>
            <title>Required Demo</title>
        </head>
        <body>
            <form>
                <label>
                    First Name:
                    <input required title="First Name is Required!" />
                </label>
                <label>
                    Last Name:
                    <input required title="Last Name is Required!" />
                </label>
                <button>Register</button>
            </form>
        </body>
        </html>

    If you attempt to submit this form without entering a value for firstName or lastName then you get a validation error message. Notice that the value of the title attribute is used to display the validation error message "First Name is Required!". The title attribute does not work this way with the current version of Firefox. If you want to display a custom validation error message with Firefox then you need to include an x-moz-errormessage attribute like this:

        <input required title="First Name is Required!" x-moz-errormessage="First Name is Required!" />

    The pattern attribute enables you to validate the value of an INPUT element against a regular expression. For example, the following form includes a social security number field which includes a pattern attribute:

        <!DOCTYPE html>
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>Pattern</title>
        </head>
        <body>
            <form>
                <label>
                    Social Security Number:
                    <input required pattern="^\d{3}-\d{2}-\d{4}$" title="###-##-####" />
                </label>
                <button>Register</button>
            </form>
        </body>
        </html>

    The regular expression in the form above requires the social security number to match the pattern ###-##-####. Notice that the input field includes both a pattern and a required validation attribute. If you don't enter a value then the regular expression is never triggered. You need to include the required attribute to force a user to enter a value and cause the value to be validated against the regular expression.

    Custom Validation
    You can take advantage of the HTML5 constraint validation API to perform custom validation. You can perform any custom validation that you need. The only requirement is that you write a JavaScript function. For example, when booking a hotel room, you might want to validate that the Arrival Date is in the future instead of the past:

        <!DOCTYPE html>
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>Constraint Validation API</title>
        </head>
        <body>
            <form>
                <label>
                    Arrival Date:
                    <input id="arrivalDate" type="date" required />
                </label>
                <button>Submit Reservation</button>
            </form>
            <script type="text/javascript">
                var arrivalDate = document.getElementById("arrivalDate");
                arrivalDate.addEventListener("input", function () {
                    var value = new Date(arrivalDate.value);
                    if (value < new Date()) {
                        arrivalDate.setCustomValidity("Arrival date must be after now!");
                    } else {
                        arrivalDate.setCustomValidity("");
                    }
                });
            </script>
        </body>
        </html>

    The form above contains an input field named arrivalDate. Entering a value into the arrivalDate field triggers the input event. The JavaScript code adds an event listener for the input event and checks whether the date entered is greater than the current date. If validation fails then the validation error message "Arrival date must be after now!" is assigned to the arrivalDate input field by calling the setCustomValidity() method of the validation constraint API. Otherwise, the validation error message is cleared by calling setCustomValidity() with an empty string.

    HTML5 Validation and Older Browsers
    But what about older browsers? For example, what about Apple Safari and versions of Microsoft Internet Explorer older than Internet Explorer 10? What the world really needs is a jQuery plugin which provides backwards compatibility for the HTML5 validation attributes. If a browser supports the HTML5 validation attributes then the plugin would do nothing. Otherwise, the plugin would add support for the attributes. Unfortunately, as far as I know, this plugin does not exist. I have not been able to find any plugin which supports both the required and pattern attributes for older browsers, but does not get in the way of these attributes in the case of newer browsers. There are several jQuery plugins which provide partial support for the HTML5 validation attributes, including:

    • jQuery Validation — http://docs.jquery.com/Plugins/Validation
    • html5Form — http://www.matiasmancini.com.ar/jquery-plugin-ajax-form-validation-html5.html
    • h5Validate — http://ericleads.com/h5validate/

    The jQuery Validation plugin – the most popular JavaScript validation library – supports the HTML5 required attribute, but it does not support the HTML5 pattern attribute. Likewise, the html5Form plugin does not support the pattern attribute. The h5Validate plugin provides the best support for the HTML5 validation attributes. The following page illustrates how this plugin supports both the required and pattern attributes:

        <!DOCTYPE html>
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>h5Validate</title>
            <style type="text/css">
                .validationError { border: solid 2px red; }
                .validationValid { border: solid 2px green; }
            </style>
        </head>
        <body>
            <form id="customerForm">
                <label>
                    First Name:
                    <input id="firstName" required />
                </label>
                <label>
                    Social Security Number:
                    <input id="ssn" required pattern="^\d{3}-\d{2}-\d{4}$" title="Expected pattern is ###-##-####" />
                </label>
                <input type="submit" />
            </form>
            <script type="text/javascript" src="Scripts/jquery-1.4.4.min.js"></script>
            <script type="text/javascript" src="Scripts/jquery.h5validate.js"></script>
            <script type="text/javascript">
                // Enable h5Validate plugin
                $("#customerForm").h5Validate({
                    errorClass: "validationError",
                    validClass: "validationValid"
                });

                // Prevent form submission when errors
                $("#customerForm").submit(function (evt) {
                    if ($("#customerForm").h5Validate("allValid") === false) {
                        evt.preventDefault();
                    }
                });
            </script>
        </body>
        </html>

    When an input field fails validation, the validationError CSS class is applied to the field and the field appears with a red border. When an input field passes validation, the validationValid CSS class is applied to the field and the field appears with a green border. From the perspective of HTML5 validation, the h5Validate plugin is the best of the plugins. It adds support for the required and pattern attributes to browsers which do not natively support these attributes, such as IE9. However, this plugin does not include everything in my wish list for a perfect HTML5 validation plugin. Here's my wish list for the perfect back-compat HTML5 validation plugin:

    1. The plugin would disable itself when used with a browser which natively supports the HTML5 validation attributes. The plugin should not be too greedy – it should not handle validation when a browser could do the work itself.
    2. The plugin should simulate the same user interface for displaying validation error messages as the user interface displayed by browsers which natively support HTML5 validation. Chrome, Firefox, and Internet Explorer all display validation errors in a popup. The perfect plugin would also display a popup.
    3. Finally, the plugin would add support for the setCustomValidity() method and the other methods of the HTML5 validation constraint API. That way, you could implement custom validation in a standards-compatible way and you would know that it worked across all browsers, both old and new.

    Security
    It would be irresponsible of me to end this blog post without mentioning the issue of security. It is important to remember that any client-side validation — including HTML5 validation — can be bypassed. You should use client-side validation with the intention of creating a better user experience. Client validation is great for providing a user with immediate feedback when the user is in the process of completing a form. However, client-side validation cannot prevent an evil hacker from submitting unexpected form data to your web server. You should always enforce your validation rules on the server. The only way to ensure that a required field has a value is to verify that the required field has a value on the server. The HTML5 required attribute does not guarantee anything.

    Summary
    The goal of this blog post was to describe the support for validation contained in the HTML5 standard. You learned how to use both the required and the pattern attributes in an HTML5 form. We also discussed how you can implement custom validation by taking advantage of the setCustomValidity() method. Finally, I discussed the available jQuery plugins for adding support for the HTML5 validation attributes to older browsers. Unfortunately, I am unaware of any jQuery plugin which provides a perfect solution to the problem of backwards compatibility.
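    On the first wish-list item above, the standard feature-detection trick deserves a quick illustration. This is a minimal sketch of my own (not taken from any particular plugin) showing how a back-compat script could stand down when the browser has native support:

        // Detect native support for the required and pattern attributes and for
        // the constraint validation API. Unsupported attributes do not show up as
        // properties on a DOM input element, which distinguishes older browsers.
        function supportsNativeValidation() {
            var input = document.createElement("input");
            return ("required" in input) &&
                   ("pattern" in input) &&
                   (typeof input.checkValidity === "function");
        }

        if (!supportsNativeValidation()) {
            // Only wire up the fallback (e.g., h5Validate) on older browsers such as IE9.
            $("#customerForm").h5Validate();
        }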

  • Unable to boot Windows 7 after installing Ubuntu

    - by Devendra
    I have Windows 7 on my machine and then installed Ubuntu 12.04 using a live CD. I can see both Windows 7 and Ubuntu in the grub menu, but when I select Windows 7 it shows a black screen for about 2 seconds and the returns to the Grub menu. But if I select Ubuntu it's working fine. This is the contents of the boot-repair log: Boot Info Script 0.61.full + Boot-Repair extra info [Boot-Info November 20th 2012] ============================= Boot Info Summary: =============================== => Grub2 (v2.00) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks in partition 1 for (,msdos6)/boot/grub. sda1: __________________________________________________________________________ File system: ntfs Boot sector type: Grub2 (v1.99-2.00) Boot sector info: Grub2 (v2.00) is installed in the boot sector of sda1 and looks at sector 388911128 of the same hard drive for core.img. core.img is at this location and looks in partition 1 for (,msdos6)/boot/grub. No errors found in the Boot Parameter Block. Operating System: Windows 7 Boot files: /bootmgr /Boot/BCD /Windows/System32/winload.exe sda2: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sda3: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sda4: __________________________________________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sda5: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: According to the info in the boot sector, sda5 starts at sector 2048. 
    Operating System:
    Boot files:

sda6: __________________________________________________________________________
    File system: ext4
    Boot sector type: -
    Boot sector info:
    Operating System: Ubuntu 12.10
    Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/i386-pc/core.img

sda7: __________________________________________________________________________
    File system: swap
    Boot sector type: -
    Boot sector info:

============================ Drive/Partition Info: =============================

Drive: sda _____________________________________________________________________
Disk /dev/sda: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes

Partition  Boot   Start Sector    End Sector     # of Sectors  Id  System
/dev/sda1    *         206,848    146,802,687     146,595,840   7  NTFS / exFAT / HPFS
/dev/sda2          147,007,488    293,623,807     146,616,320   7  NTFS / exFAT / HPFS
/dev/sda3          293,623,808    332,820,613      39,196,806   7  NTFS / exFAT / HPFS
/dev/sda4          332,822,526  1,465,145,343   1,132,322,818   f  W95 Extended (LBA)
/dev/sda5          461,342,720  1,465,145,343   1,003,802,624   7  NTFS / exFAT / HPFS
/dev/sda6          332,822,528    453,171,199     120,348,672  83  Linux
/dev/sda7          453,173,248    461,338,623       8,165,376  82  Linux swap / Solaris

"blkid" output: ________________________________________________________________

Device     UUID                                  TYPE  LABEL
/dev/sda1  F6AE2C13AE2BCB47                      ntfs
/dev/sda2  DC2273012272DFC6                      ntfs
/dev/sda3  1E76E43376E40D79                      ntfs  New Volume
/dev/sda5  5ED60ACDD60AA57D                      ntfs
/dev/sda6  9e70fd16-b48b-4f88-adcf-e443aef83124  ext4
/dev/sda7  52f3dd94-6be7-4a7b-a3ae-f43eb8810483  swap

================================ Mount points: =================================

Device     Mount_Point  Type  Options
/dev/sda6  /            ext4  (rw,errors=remount-ro)

=========================== sda6/boot/grub/grub.cfg: ===========================
--------------------------------------------------------------------------------
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
  set have_grubenv=true
  load_env
fi
set default="0"

if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
else
  menuentry_id_option=""
fi

export menuentry_id_option

if [ "${prev_saved_entry}" ]; then
  set saved_entry="${prev_saved_entry}"
  save_env saved_entry
  set prev_saved_entry=
  save_env prev_saved_entry
  set boot_once=true
fi

function savedefault {
  if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}

function recordfail {
  set recordfail=1
  if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi
}

function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}

if [ x$feature_default_font_path = xy ] ; then
  font=unicode
else
  insmod part_msdos
  insmod ext2
  set root='hd0,msdos6'
  if [ x$feature_platform_search_hint = xy ]; then
    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124
  else
    search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124
  fi
  font="/usr/share/grub/unicode.pf2"
fi

if loadfont $font ; then
  set gfxmode=auto
  load_video
  insmod gfxterm
  set locale_dir=$prefix/locale
  set lang=en_IN
  insmod gettext
fi
terminal_output gfxterm
if [ "${recordfail}" = 1 ]; then
  set timeout=10
else
  set timeout=10
fi
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/05_debian_theme ###
set menu_color_normal=white/black
set menu_color_highlight=black/light-gray
if background_color 44,0,30; then
  clear
fi
### END /etc/grub.d/05_debian_theme ###

### BEGIN /etc/grub.d/10_linux ###
function gfxmode {
  set gfxpayload="${1}"
  if [ "${1}" = "keep" ]; then
    set vt_handoff=vt.handoff=7
  else
    set vt_handoff=
  fi
}
if [ "${recordfail}" != 1 ]; then
  if [ -e ${prefix}/gfxblacklist.txt ]; then
    if hwmatch ${prefix}/gfxblacklist.txt 3; then
      if [ ${match} = 0 ]; then
        set linux_gfx_mode=keep
      else
        set linux_gfx_mode=text
      fi
    else
      set linux_gfx_mode=text
    fi
  else
    set linux_gfx_mode=keep
  fi
else
  set linux_gfx_mode=text
fi
export linux_gfx_mode
if [ "${linux_gfx_mode}" != "text" ]; then load_video; fi
menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-9e70fd16-b48b-4f88-adcf-e443aef83124' {
  recordfail
  gfxmode $linux_gfx_mode
  insmod gzio
  insmod part_msdos
  insmod ext2
  set root='hd0,msdos6'
  if [ x$feature_platform_search_hint = xy ]; then
    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124
  else
    search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124
  fi
  linux /boot/vmlinuz-3.5.0-17-generic root=UUID=9e70fd16-b48b-4f88-adcf-e443aef83124 ro quiet splash $vt_handoff
  initrd /boot/initrd.img-3.5.0-17-generic
}
submenu 'Advanced options for Ubuntu' $menuentry_id_option 'gnulinux-advanced-9e70fd16-b48b-4f88-adcf-e443aef83124' {
  menuentry 'Ubuntu, with Linux 3.5.0-17-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.5.0-17-generic-advanced-9e70fd16-b48b-4f88-adcf-e443aef83124' {
    recordfail
    gfxmode $linux_gfx_mode
    insmod gzio
    insmod part_msdos
    insmod ext2
    set root='hd0,msdos6'
    if [ x$feature_platform_search_hint = xy ]; then
      search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124
    else
      search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124
    fi
    echo 'Loading Linux 3.5.0-17-generic ...'
    linux /boot/vmlinuz-3.5.0-17-generic root=UUID=9e70fd16-b48b-4f88-adcf-e443aef83124 ro quiet splash $vt_handoff
    echo 'Loading initial ramdisk ...'
    initrd /boot/initrd.img-3.5.0-17-generic
  }
  menuentry 'Ubuntu, with Linux 3.5.0-17-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.5.0-17-generic-recovery-9e70fd16-b48b-4f88-adcf-e443aef83124' {
    recordfail
    insmod gzio
    insmod part_msdos
    insmod ext2
    set root='hd0,msdos6'
    if [ x$feature_platform_search_hint = xy ]; then
      search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124
    else
      search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124
    fi
    echo 'Loading Linux 3.5.0-17-generic ...'
    linux /boot/vmlinuz-3.5.0-17-generic root=UUID=9e70fd16-b48b-4f88-adcf-e443aef83124 ro recovery nomodeset
    echo 'Loading initial ramdisk ...'
    initrd /boot/initrd.img-3.5.0-17-generic
  }
}
### END /etc/grub.d/10_linux ###

### BEGIN /etc/grub.d/20_linux_xen ###
### END /etc/grub.d/20_linux_xen ###

### BEGIN /etc/grub.d/20_memtest86+ ###
menuentry "Memory test (memtest86+)" {
  insmod part_msdos
  insmod ext2
  set root='hd0,msdos6'
  if [ x$feature_platform_search_hint = xy ]; then
    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124
  else
    search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124
  fi
  linux16 /boot/memtest86+.bin
}
menuentry "Memory test (memtest86+, serial console 115200)" {
  insmod part_msdos
  insmod ext2
  set root='hd0,msdos6'
  if [ x$feature_platform_search_hint = xy ]; then
    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124
  else
    search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124
  fi
  linux16 /boot/memtest86+.bin console=ttyS0,115200n8
}
### END /etc/grub.d/20_memtest86+ ###

### BEGIN /etc/grub.d/30_os-prober ###
menuentry 'Windows 7 (loader) (on /dev/sda1)' --class windows --class os $menuentry_id_option 'osprober-chain-F6AE2C13AE2BCB47' {
  insmod part_msdos
  insmod ntfs
  set root='hd0,msdos1'
  if [ x$feature_platform_search_hint = xy ]; then
    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 F6AE2C13AE2BCB47
  else
    search --no-floppy --fs-uuid --set=root F6AE2C13AE2BCB47
  fi
  chainloader +1
}
### END /etc/grub.d/30_os-prober ###

### BEGIN /etc/grub.d/30_uefi-firmware ###
### END /etc/grub.d/30_uefi-firmware ###

### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###

### BEGIN /etc/grub.d/41_custom ###
if [ -f ${config_directory}/custom.cfg ]; then
  source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then
  source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ###
--------------------------------------------------------------------------------

=============================== sda6/etc/fstab: ================================
--------------------------------------------------------------------------------
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system>                            <mount point> <type> <options>         <dump> <pass>
# / was on /dev/sda6 during installation
UUID=9e70fd16-b48b-4f88-adcf-e443aef83124  /             ext4   errors=remount-ro 0      1
# swap was on /dev/sda7 during installation
UUID=52f3dd94-6be7-4a7b-a3ae-f43eb8810483  none          swap   sw                0      0
--------------------------------------------------------------------------------

=================== sda6: Location of files loaded by Grub: ====================
GiB - GB  File  Fragment(s)
162.831275940 = 174.838751232  boot/grub/grub.cfg  1
163.036647797 = 175.059267584  boot/initrd.img-3.5.0-17-generic  1
206.871749878 = 222.126850048  boot/vmlinuz-3.5.0-17-generic  1
163.036647797 = 175.059267584  initrd.img  1
163.036647797 = 175.059267584  initrd.img.old  1
206.871749878 = 222.126850048  vmlinuz  1

=============================== StdErr Messages: ===============================
cat: write error: Broken pipe
cat: write error: Broken pipe

ADDITIONAL INFORMATION:

=================== log of boot-repair 2012-12-11__00h59 ===================
boot-repair version : 3.195~ppa28~quantal
boot-sav version : 3.195~ppa28~quantal
glade2script version : 3.2.2~ppa45~quantal
boot-sav-extra version : 3.195~ppa28~quantal
boot-repair is executed in installed-session (Ubuntu 12.10, quantal, Ubuntu, x86_64)
CPU op-mode(s): 32-bit, 64-bit
BOOT_IMAGE=/boot/vmlinuz-3.5.0-17-generic root=UUID=9e70fd16-b48b-4f88-adcf-e443aef83124 ro quiet splash vt.handoff=7

=================== os-prober:
/dev/sda6:The OS now in use - Ubuntu 12.10 CurrentSession:linux
/dev/sda1:Windows 7 (loader):Windows:chain

=================== blkid:
/dev/sda1: UUID="F6AE2C13AE2BCB47" TYPE="ntfs"
/dev/sda2: UUID="DC2273012272DFC6" TYPE="ntfs"
/dev/sda3: LABEL="New Volume" UUID="1E76E43376E40D79" TYPE="ntfs"
/dev/sda5: UUID="5ED60ACDD60AA57D" TYPE="ntfs"
/dev/sda6: UUID="9e70fd16-b48b-4f88-adcf-e443aef83124" TYPE="ext4"
/dev/sda7: UUID="52f3dd94-6be7-4a7b-a3ae-f43eb8810483" TYPE="swap"

1 disks with OS, 2 OS : 1 Linux, 0 MacOS, 1 Windows, 0 unknown type OS.
Warning: extended partition does not start at a cylinder boundary. DOS and Linux will interpret the contents differently.

=================== /etc/default/grub :
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
# info -f grub -n 'Simple configuration'
GRUB_DEFAULT=0
#GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""
# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console
# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480
# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true
# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_RECOVERY="true"
# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"

=================== /etc/grub.d/ :
drwxr-xr-x 2 root root 4096 Oct 17 20:29 grub.d
total 72
-rwxr-xr-x 1 root root  7541 Oct 14 23:06 00_header
-rwxr-xr-x 1 root root  5488 Oct  4 15:00 05_debian_theme
-rwxr-xr-x 1 root root 10891 Oct 14 23:06 10_linux
-rwxr-xr-x 1 root root 10258 Oct 14 23:06 20_linux_xen
-rwxr-xr-x 1 root root  1688 Oct 11 19:40 20_memtest86+
-rwxr-xr-x 1 root root 10976 Oct 14 23:06 30_os-prober
-rwxr-xr-x 1 root root  1426 Oct 14 23:06 30_uefi-firmware
-rwxr-xr-x 1 root root   214 Oct 14 23:06 40_custom
-rwxr-xr-x 1 root root   216 Oct 14 23:06 41_custom
-rw-r--r-- 1 root root   483 Oct 14 23:06 README

=================== UEFI/Legacy mode:
This installed-session is not in EFI-mode.
EFI in dmesg. Please report this message to [email protected]
[ 0.000000] ACPI: UEFI 00000000bafe7000 0003E (v01 DELL QA09 00000002 PTL 00000002)
[ 0.000000] ACPI: UEFI 00000000bafe6000 00042 (v01 PTL COMBUF 00000001 PTL 00000001)
[ 0.000000] ACPI: UEFI 00000000bafe3000 00256 (v01 DELL QA09 00000002 PTL 00000002)
SecureBoot disabled.

=================== PARTITIONS & DISKS:
sda6 : sda, not-sepboot, grubenv-ok grub2, grub-pc , update-grub, 64, with-boot, is-os, not--efi--part, fstab-without-boot, fstab-without-efi, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, apt-get, grub-install, with--usr, fstab-without-usr, not-sep-usr, standard, farbios, .
sda1 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, is-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, haswinload, no-recov-nor-hid, bootmgr, is-winboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, not-far, /mnt/boot-sav/sda1.
sda2 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, farbios, /mnt/boot-sav/sda2.
sda3 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, farbios, /mnt/boot-sav/sda3.
sda5 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, farbios, /mnt/boot-sav/sda5.
sda : not-GPT, BIOSboot-not-needed, has-no-EFIpart, not-usb, has-os, 2048 sectors * 512 bytes

=================== parted -l:
Model: ATA WDC WD7500BPKT-7 (scsi)
Disk /dev/sda: 750GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos

Number  Start   End     Size    Type      File system     Flags
 1      106MB   75.2GB  75.1GB  primary   ntfs            boot
 2      75.3GB  150GB   75.1GB  primary   ntfs
 3      150GB   170GB   20.1GB  primary   ntfs
 4      170GB   750GB   580GB   extended                  lba
 6      170GB   232GB   61.6GB  logical   ext4
 7      232GB   236GB   4181MB  logical   linux-swap(v1)
 5      236GB   750GB   514GB   logical   ntfs

=================== parted -lm:
BYT;
/dev/sda:750GB:scsi:512:4096:msdos:ATA WDC WD7500BPKT-7;
1:106MB:75.2GB:75.1GB:ntfs::boot;
2:75.3GB:150GB:75.1GB:ntfs::;
3:150GB:170GB:20.1GB:ntfs::;
4:170GB:750GB:580GB:::lba;
6:170GB:232GB:61.6GB:ext4::;
7:232GB:236GB:4181MB:linux-swap(v1)::;
5:236GB:750GB:514GB:ntfs::;

=================== mount:
/dev/sda6 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
gvfsd-fuse on /run/user/dev/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,user=dev)
/dev/sda1 on /mnt/boot-sav/sda1 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096)
/dev/sda2 on /mnt/boot-sav/sda2 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096)
/dev/sda3 on /mnt/boot-sav/sda3 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096)
/dev/sda5 on /mnt/boot-sav/sda5 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096)

=================== ls:
/sys/block/sda (filtered): alignment_offset bdi capability dev device discard_alignment events events_async events_poll_msecs ext_range holders inflight power queue range removable ro sda1 sda2 sda3 sda4 sda5 sda6 sda7 size slaves stat subsystem trace uevent
/sys/block/sr0 (filtered): alignment_offset bdi capability dev device discard_alignment events events_async events_poll_msecs ext_range holders inflight power queue range removable ro size slaves stat subsystem trace uevent
/dev (filtered): alarm ashmem autofs binder block bsg btrfs-control bus cdrom cdrw char console core cpu cpu_dma_latency disk dri dvd dvdrw ecryptfs fb0 fb1 fd full fuse hpet input kmsg kvm log mapper mcelog mei mem net network_latency network_throughput null oldmem port ppp psaux ptmx pts random rfkill rtc rtc0 sda sda1 sda2 sda3 sda4 sda5 sda6 sda7 sg0 sg1 shm snapshot snd sr0 stderr stdin stdout uinput urandom v4l vga_arbiter vhost-net video0 zero
ls /dev/mapper: control

=================== df -Th:
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda6      ext4       57G  2.7G   51G   6% /
udev           devtmpfs  1.9G   12K  1.9G   1% /dev
tmpfs          tmpfs     770M  892K  769M   1% /run
none           tmpfs     5.0M     0  5.0M   0% /run/lock
none           tmpfs     1.9G  260K  1.9G   1% /run/shm
none           tmpfs     100M   44K  100M   1% /run/user
/dev/sda1      fuseblk    70G   36G   35G  51% /mnt/boot-sav/sda1
/dev/sda2      fuseblk    70G   66G  4.8G  94% /mnt/boot-sav/sda2
/dev/sda3      fuseblk    19G   87M   19G   1% /mnt/boot-sav/sda3
/dev/sda5      fuseblk   479G  436G   44G  92% /mnt/boot-sav/sda5

=================== fdisk -l:
Disk /dev/sda: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x1dc69d0b

Device    Boot      Start        End     Blocks  Id System
/dev/sda1   *      206848  146802687   73297920   7 HPFS/NTFS/exFAT
/dev/sda2       147007488  293623807   73308160   7 HPFS/NTFS/exFAT
/dev/sda3       293623808  332820613   19598403   7 HPFS/NTFS/exFAT
/dev/sda4       332822526 1465145343  566161409   f W95 Ext'd (LBA)
Partition 4 does not start on physical sector boundary.
/dev/sda5       461342720 1465145343  501901312   7 HPFS/NTFS/exFAT
/dev/sda6       332822528  453171199   60174336  83 Linux
/dev/sda7       453173248  461338623    4082688  82 Linux swap / Solaris
Partition table entries are not in disk order

=================== Recommended repair
Recommended-Repair: this setting will reinstall the grub2 of sda6 into the MBR of sda.
Additional repair will be performed: unhide-bootmenu-10s
grub-install (GRUB) 2.00-7ubuntu11, grub-install (GRUB) 2.
Reinstall the GRUB of sda6 into the MBR of sda
Installation finished. No error reported.
grub-install /dev/sda: exit code of grub-install /dev/sda: 0
update-grub
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.5.0-17-generic
Found initrd image: /boot/initrd.img-3.5.0-17-generic
Found memtest86+ image: /boot/memtest86+.bin
Found Windows 7 (loader) on /dev/sda1
Unhide GRUB boot menu in sda6/boot/grub/grub.cfg
Boot successfully repaired. You can now reboot your computer.
The boot files of [The OS now in use - Ubuntu 12.10] are far from the start of the disk. Your BIOS may not detect them. You may want to retry after creating a /boot partition (EXT4, >200MB, start of the disk). This can be performed via tools such as gParted. Then select this partition via the [Separate /boot partition:] option of [Boot Repair]. (https://help.ubuntu.com/community/BootPartition)

    Read the article

  • SQLAuthority Book Review – DBA Survivor: Become a Rock Star DBA

    - by pinaldave
DBA Survivor: Become a Rock Star DBA – Thomas LaRock. Link to Amazon. Link to Flipkart. First of all, I thank all my readers who, when I wrote that I could not get this book in any local book store, offered to send me a copy of this good book. A very special mention goes to Sripada and Jayesh for the effort they put into finding my home address and sending me the hard copy. Before, I did not have a copy of the book, but now I already have two! It surprises me how my readers were able to find my home address, which I have not publicly shared. Quick Review: This is indeed an easy-to-read and fun book. We all work day and night with technology, yet we should not forget to show our love and care for our family at home. For our souls that starve for peace and guidance, this is the "it" book for all technology enthusiasts. Though this book was specifically written for DBAs, its reach is not limited to DBAs, because the lessons it incorporates apply to everyone. This is one of the most motivating technical books I have read. Detailed Review: Let us go over a few questions first: Who wants to be as famous as a rockstar in the field of database administration? How can one learn what it takes to become a top-notch software developer? If you are a beginner in your field, how will you get to the next level? Your boss may be very kind, or he may be like Dilbert's boss; what will you do? How do you keep growing when the ecosystem around you does not support you? You are almost at the top, but there is someone else at the TOP; what do you do, and how do you avoid office politics? As a database developer, what should be your basic responsibility? And many more... I was able to read the book completely in one sitting, and I loved it. Before I continue with my opinion, I want to echo the opinion of Kevin Kline, who wrote the foreword of the book. He has truly suggested that "You hold in your hands a collection of insights and wisdom on the topic of database administration gained through many years of hard-won experience, long nights of study, and direct mentorship under some of the industry's most talented database professionals and information technology (IT) experts." Today, the IT field is getting bigger and better, and talking about terabytes of data becomes more normal every single day. The gods and demigods of database professionals are taking care of these large-scale databases and carefully maintaining them. In this world, there are also many who are just taking their first step. There are many experts in different technology fields who are asked to address issues with databases, and there are YOU and ME, who are just new to this work. So we ask ourselves WHERE to begin and HOW to begin. We adore and follow the religion of our rockstars, but oftentimes we really have no idea about their background and their struggles. Every rockstar has a success story which needs to be digested before learning his tricks and tips. This book starts on the same note and teaches the two most important lessons for anybody who wants to be a DBA rockstar: to focus on the single goal of learning and to excel at the technology. The story starts with three simple guidelines: Get Prepared, Get Trained, Get Certified. Once a person learns the skills, it is about time to enrich or improve those skills. I am sure that the right opportunities will come find them, and they will not have to run after them.
However, the real challenge for any person is the first day or first week. A new employee, no matter how experienced, sometimes has no clue about what one should do at a new job. Chapters 2 and 3 talk precisely about what one should do as soon as the new job begins. They are also written keeping in focus the fact that each job can be very different, but there are a few infrastructure setups and programming concepts that stay the same. Learning the basics of databases was really interesting. I like to focus on the roots of any technology. It is important to understand the structure of a database before suggesting what indexes need to be created; in the same way, this book covers the most essential knowledge that must be learned by most database developers. I think the title of the fourth chapter is my favorite sentence in this book. I can see that I will be saying this again and again in the future: "A Development Server Is a Production Server to a Developer". I have worked in the software industry for almost 8 years now, and I have seen so many developers sitting in their chairs and waiting for instructions from their lead about how to improve the code or what to do next. When I talk to them, I suggest that they experiment with their server and try various techniques. I think they all should understand that, for them, a development server is their production server, and they need to pay proper attention to the code from the beginning. There should be NO inappropriate code from the beginning. One has to fully focus and give their best; if they are not sure, they should ask, but they should do something and stay active. Chapters 5 and 6 talk about two essential skills for any developer and database administrator: the ethics of working with a production server, and how to support software running on the production server. I have met many people who know the theory by heart, but when put in front of a keyboard they do not know where to start. The first thing they do is open the browser and search online, instead of opening SQL Server Management Studio. This can very well happen to anybody, experienced or not. Chapters 5 and 6 address that situation as well, and include handy scripts which can solve almost all basic troubleshooting issues. "Where's the Buffet?" By far, this is the best chapter in this book. If you have ever met me, you would know that I love food. After reading this chapter, I felt Thomas had written it just with me in mind. I think there will be many other people who feel the same way, too. Even my wife, who read this chapter, thought it was specifically written for me. I will not talk any more about this chapter, as it is a must-read chapter. And of course, it is about real 'FOOD'. I am an SQL Server Trainer and Consultant, and I totally agree with the point made in chapter 8 of this book about the necessity of training employees and people. Millions of dollars' worth of labor is continuously performed around the world with faults and errors. Once something goes wrong, a very expensive consultant comes in and fixes the problem. This whole cycle can be stopped and improved if proper training is done. There is plenty of free training available as well, if one cannot afford paid training. "Connect. Learn. Share" – I think this is a great summary and bird's-eye view of this book. Networking is the key.
Everything discussed in this book can be taken to the next level if one properly uses these tips and continuously grows with them. Connecting with others, helping each other learn, and building a good knowledge-sharing environment should be everyone's goal. Before I end the review, I want to share a real experience. I have personally met a DBA who had worked in a single department of a company for so long that when that department was closed and he was put in a different one, he could not adjust and quit the job, despite having the same people and company around him. Adjusting to a new environment gets much tougher as a person gets more and more experienced. This book precisely addresses that issue, along with its solutions. I just cannot stop comparing the book with my personal journey. I found so many things in the book that coincide with how we developers and DBAs think. I must express special thanks to Thomas for taking time out of his personal life to write this book for us. This book is indeed a book for everybody who wants to grow healthy in a tough and competitive environment. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • Adventures in MVVM – My ViewModel Base

    - by Brian Genisio's House Of Bilz
More Adventures in MVVM

First, I'd like to say: THIS IS NOT A NEW MVVM FRAMEWORK. I tend to believe that MVVM support code should be specific to the system you are building and the developers working on it. I have yet to find an MVVM framework that does everything I want it to without doing too much. Don't get me wrong... there are some good frameworks out there. I just like to pick and choose things that make sense for me. I'd also like to add that some of these features only work in WPF. As of Silverlight 4, Silverlight doesn't support binding to dynamic properties, so some of the capabilities are lost.

That being said, I want to share my ViewModel base class with the world. I have had several conversations with people about the problems I have solved using this ViewModel base. A while back, I posted an article about some experiments with a "Rails Inspired ViewModel". What followed from those ideas was a ViewModel base class that I take with me and use in my projects. It has a lot of features, all designed to reduce the friction in writing view models. I have put the code out on Codeplex under the project: ViewModelSupport.

Finally, this article focuses on the ViewModel and only glosses over the View and the Model. Without all three, you don't have MVVM. But this base class is for the ViewModel, so that is what I am focusing on.

Features:
- Automatic Command Plumbing
- Property Change Notification
- Strongly Typed Property Getters/Setters
- Dynamic Properties
- Default Property Values
- Derived Properties
- Automatic Method Execution
- Command CanExecute Change Notification
- Design-Time Detection
- What about Silverlight?

Automatic Command Plumbing

This feature takes the plumbing out of creating commands. The common pattern for commands in a ViewModel is to have an Execute method as well as an optional CanExecute method. To plumb that together, you create an ICommand property and set it in the constructor like so:

Before:

    public class AutomaticCommandViewModel
    {
        public AutomaticCommandViewModel()
        {
            MyCommand = new DelegateCommand(Execute_MyCommand, CanExecute_MyCommand);
        }

        public void Execute_MyCommand()
        {
            // Do something
        }

        public bool CanExecute_MyCommand()
        {
            // Are we in a state to do something?
            return true;
        }

        public DelegateCommand MyCommand { get; private set; }
    }

With the base class, this plumbing is automatic and the property (MyCommand of type ICommand) is created for you. The base class uses the convention that methods be prefixed with Execute_ and CanExecute_ in order to be plumbed into commands with the property name after the prefix. You are left to be expressive with your behavior without the plumbing. If you are wondering how CanExecuteChanged is raised, see the later section "Command CanExecute Change Notification".

After:

    public class AutomaticCommandViewModel : ViewModelBase
    {
        public void Execute_MyCommand()
        {
            // Do something
        }

        public bool CanExecute_MyCommand()
        {
            // Are we in a state to do something?
            return true;
        }
    }

Property Change Notification

One thing that always kills me when implementing ViewModels is how to make properties that notify when they change (via the INotifyPropertyChanged interface). There have been many attempts to make this more automatic. My base class includes one option. There are others, but I feel like this works best for me. The common pattern (without my base class) is to create a private backing store for the variable and specify a getter that returns the private field. The setter will set the private field and fire an event that notifies the change, only if the value has changed.

Before:

    public class PropertyHelpersViewModel : INotifyPropertyChanged
    {
        private string text;

        public string Text
        {
            get { return text; }
            set
            {
                if (text != value)
                {
                    text = value;
                    RaisePropertyChanged("Text");
                }
            }
        }

        protected void RaisePropertyChanged(string propertyName)
        {
            var handlers = PropertyChanged;
            if (handlers != null)
                handlers(this, new PropertyChangedEventArgs(propertyName));
        }

        public event PropertyChangedEventHandler PropertyChanged;
    }

This way of defining properties is error-prone and tedious. Too much plumbing. My base class eliminates much of that plumbing with the same functionality:

After:

    public class PropertyHelpersViewModel : ViewModelBase
    {
        public string Text
        {
            get { return Get<string>("Text"); }
            set { Set("Text", value); }
        }
    }

Strongly Typed Property Getters/Setters

It turns out that we can do better than that. We are using a strongly typed language where the use of "magic strings" is often frowned upon. Let's make the names in the getters and setters strongly typed:

A refinement:

    public class PropertyHelpersViewModel : ViewModelBase
    {
        public string Text
        {
            get { return Get(() => Text); }
            set { Set(() => Text, value); }
        }
    }

Dynamic Properties

In C# 4.0, we have the ability to program statically OR dynamically. This base class lets us leverage the powerful dynamic capabilities in our ecosystem. (This is how the automatic commands are implemented, BTW.) By calling Set("Foo", 1), you have now created a dynamic property called Foo. It can be bound against like any static property. The opportunities are endless. One great way to exploit this behavior is if you have a customizable view engine with templates that bind to properties defined by the user. The base class just needs to create the dynamic properties at runtime from information in the model, and the custom template can bind even though the static properties do not exist. All dynamic properties still benefit from the notifiable capabilities that static properties do. For any nay-sayers out there who don't like using the dynamic features of C#, just remember this: the act of binding the View to a ViewModel is dynamic already. Why not exploit it? Get over it :)

Just declare the property dynamically:

    public class DynamicPropertyViewModel : ViewModelBase
    {
        public DynamicPropertyViewModel()
        {
            Set("Foo", "Bar");
        }
    }

Then reference it normally:

    <TextBlock Text="{Binding Foo}" />

Default Property Values

The Get() method also allows for default values to be set. Don't set them in the constructor. Set them in the property and keep the related code together:

    public string Text
    {
        get { return Get(() => Text, "This is the default value"); }
        set { Set(() => Text, value); }
    }

Derived Properties

This is something I blogged about a while back in more detail. This feature came from the chaining of property notifications when one property affects the results of another, like this:

Before:

    public class DependantPropertiesViewModel : ViewModelBase
    {
        public double Score
        {
            get { return Get(() => Score); }
            set
            {
                Set(() => Score, value);
                RaisePropertyChanged("Percentage");
                RaisePropertyChanged("Output");
            }
        }

        public int Percentage
        {
            get { return (int)(100 * Score); }
        }

        public string Output
        {
            get { return "You scored " + Percentage + "%."; }
        }
    }

The problem is: the setter for Score has to be responsible for notifying the world that Percentage and Output have also changed. This, to me, is backwards. It certainly violates the Single Responsibility Principle. I have been bitten in the rear more than once by problems created from code like this. What we really want to do is invert the dependency. Let the Percentage property declare that it changes when the Score property changes.

After:

    public class DependantPropertiesViewModel : ViewModelBase
    {
        public double Score
        {
            get { return Get(() => Score); }
            set { Set(() => Score, value); }
        }

        [DependsUpon("Score")]
        public int Percentage
        {
            get { return (int)(100 * Score); }
        }

        [DependsUpon("Percentage")]
        public string Output
        {
            get { return "You scored " + Percentage + "%."; }
        }
    }

Automatic Method Execution

This one is extremely similar to the previous, but it deals with method execution as opposed to properties. When you want to execute a method triggered by property changes, let the method declare the dependency instead of the other way around.

Before:

    public class DependantMethodsViewModel : ViewModelBase
    {
        public double Score
        {
            get { return Get(() => Score); }
            set
            {
                Set(() => Score, value);
                WhenScoreChanges();
            }
        }

        public void WhenScoreChanges()
        {
            // Handle this case
        }
    }

After:

    public class DependantMethodsViewModel : ViewModelBase
    {
        public double Score
        {
            get { return Get(() => Score); }
            set { Set(() => Score, value); }
        }

        [DependsUpon("Score")]
        public void WhenScoreChanges()
        {
            // Handle this case
        }
    }

Command CanExecute Change Notification

Back to commands. One of the responsibilities of commands that implement ICommand is that they must fire an event declaring that CanExecute() needs to be re-evaluated. I wanted to wait until we got past a few concepts before explaining this behavior. You can use the same mechanism here to fire off the change. In the CanExecute_ method, declare the property that it depends upon. When that property changes, the command will fire a CanExecuteChanged event, telling the View to re-evaluate the state of the command. The View will make appropriate adjustments, like disabling the button.

DependsUpon works on CanExecute methods as well:

    public class CanExecuteViewModel : ViewModelBase
    {
        public void Execute_MakeLower()
        {
            Output = Input.ToLower();
        }

        [DependsUpon("Input")]
        public bool CanExecute_MakeLower()
        {
            return !string.IsNullOrWhiteSpace(Input);
        }

        public string Input
        {
            get { return Get(() => Input); }
            set { Set(() => Input, value); }
        }

        public string Output
        {
            get { return Get(() => Output); }
            set { Set(() => Output, value); }
        }
    }

Design-Time Detection

If you want to add design-time data to your ViewModel, the base class has a property that lets you ask if you are in the designer. You can then set some default values that let your designer see what things might look like at runtime.

Use the IsInDesignMode property:

    public DependantPropertiesViewModel()
    {
        if (IsInDesignMode)
        {
            Score = .5;
        }
    }

What About Silverlight?

Some of the features in this base class only work in WPF. As of version 4, Silverlight does not support binding to dynamic properties. This, in my opinion, is a HUGE limitation. Not only does it keep you from using many of the features in this ViewModel, it also keeps you from binding to ViewModels designed in IronRuby. Does this mean that the base class will not work in Silverlight? No. Many of the features outlined in this article WILL work. All of the property abstractions are functional, as long as you refer to them statically in the View. This, of course, means that the automatic command hook-up doesn't work in Silverlight. You need to plumb it to a static property in order for the Silverlight View to bind to it. Can I has a dynamic property in SL5?

Good to go?

So, that concludes the feature explanation of my ViewModel base class. Feel free to take it, fork it, whatever. It is hosted on CodePlex. When I find other useful additions, I will add them to the public repository. I use this base class every day. It is mature and well tested. If, however, you find any problems with it, please let me know! Also, feel free to suggest patches to me via the CodePlex site. :)
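
For readers curious how convention-based plumbing like this can work under the hood, here is a minimal sketch, assuming a simple DelegateCommand class and reflection over the Execute_/CanExecute_ naming convention. This is a hypothetical illustration, not the actual ViewModelSupport source; in particular, GetCommand is an assumed accessor, whereas the real base class exposes the commands as bindable dynamic properties.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Reflection;
    using System.Windows.Input;

    // A minimal ICommand that delegates to the supplied handlers.
    public class DelegateCommand : ICommand
    {
        private readonly Action execute;
        private readonly Func<bool> canExecute;

        public DelegateCommand(Action execute, Func<bool> canExecute = null)
        {
            this.execute = execute;
            this.canExecute = canExecute;
        }

        public bool CanExecute(object parameter) { return canExecute == null || canExecute(); }
        public void Execute(object parameter) { execute(); }
        public event EventHandler CanExecuteChanged;

        public void RaiseCanExecuteChanged()
        {
            var handlers = CanExecuteChanged;
            if (handlers != null) handlers(this, EventArgs.Empty);
        }
    }

    public abstract class ConventionViewModelBase
    {
        private readonly Dictionary<string, DelegateCommand> commands =
            new Dictionary<string, DelegateCommand>();

        protected ConventionViewModelBase()
        {
            // Find every Execute_Xxx method and pair it with an optional CanExecute_Xxx.
            var methods = GetType().GetMethods(BindingFlags.Instance | BindingFlags.Public);
            foreach (var method in methods.Where(m => m.Name.StartsWith("Execute_")))
            {
                var name = method.Name.Substring("Execute_".Length);
                var executeInfo = method;
                var canExecuteInfo = GetType().GetMethod("CanExecute_" + name);

                Func<bool> canExecute = canExecuteInfo == null
                    ? (Func<bool>)null
                    : () => (bool)canExecuteInfo.Invoke(this, null);

                commands[name] = new DelegateCommand(() => executeInfo.Invoke(this, null), canExecute);
            }
        }

        // A lookup the View (or a dynamic property wrapper) can use to resolve a command by name.
        public ICommand GetCommand(string name) { return commands[name]; }
    }

A scan like this runs once in the base constructor, so a derived ViewModel pays the reflection cost only at construction time; wiring the resulting commands up to [DependsUpon]-driven CanExecuteChanged notification is then a matter of calling RaiseCanExecuteChanged when the depended-upon property changes.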

    Read the article

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
Quick guide to Oracle IRM 11g index

This is the final article in the quick guide to Oracle IRM. If you've followed everything prior, you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide, as this next article is common to both.

Contents:
- Why this is the most important part...
- Understanding the classification and standard rights model
- Identifying business use cases
- Creating an effective IRM classification model
- One single classification across the entire business
- A context for each and every possible granular use case
- What makes a good context?
- Deciding on the use of roles in the context
- Reviewing the features and security for context roles
- Summary

Why this is the most important part...

Now the real work begins. Installing and getting an IRM system running is as simple as following instructions. However, to actually have IRM technology easily protecting your most sensitive information without interfering with your users' existing daily workflows, and to be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle has over 10 years of experience in helping customers, and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports, and no matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve:

- Secure the content at the earliest point possible, and do it automatically. Removing the dependency on the user to decide to secure the content significantly reduces the risk of mistakes and therefore results in a more secure deployment.
- K.I.S.S. (Keep It Simple, Stupid). Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured, which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience.
- Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them of the value of the technology to both your business and them.

Before getting into the detail, I must pay homage to Martin White, Vice President of client services in SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple-to-use IRM deployments.
No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in solutions that are difficult to use and manage, and ultimately insecure. The advice and information that follows was born with Martin, and he's still delivering IRM consulting with customers; he can be found at www.thinkers.co.uk. It is thanks to Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but that Oracle and their partners have the most experience in delivering successful document security solutions.

Understanding the classification and standard rights model

The goal of any successful IRM deployment is to gain the increase in security the technology brings without overcomplicating the way people use secured content or significantly increasing administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware that the document is any different to an insecure one. That is, until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application, or try to print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between:

- a group of related documents
- the people that use the documents
- the roles that these people perform
- the rights that these people need to perform their role

The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts, but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification, and a user needs only one right assigned to be able to access all documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts against which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails.

Identifying business use cases

Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, or financial documents. For this and every subsequent use case, you must understand how users create and work with documents, to whom they are distributed and how the recipients should interact with them.
A successful IRM deployment should start with one well-identified use case (we go through some examples towards the end of this article); then, after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked, and whether the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further.

Creating an effective IRM classification model

Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built-in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates.

The first task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents, and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context, it is wise to decide on a philosophy which will dictate the level of granularity. The question is: where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum...

One single classification across the entire business

Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in considering this:

- Document security classification decisions are simple. You only have one context to choose from!
- User provisioning is simple: just make sure everyone has a role in the only context in the business.
- Administration overhead is very low; if you assign rights to groups from the business user repository, you probably never have to touch IRM administration again.

There are however some obvious downsides to this model:

- All users have access to all IRM-secured content. So potentially a salesperson could access sensitive mergers and acquisitions documents, if they can get their hands on a copy, that is.
- You cannot delegate control of different documents to different parts of the business; this may not satisfy your regulatory requirements for the separation and delegation of duties.
- Changing a user's role affects every single document ever secured.

Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value.
Once secured, IRM-protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today; imagine if you could ensure that only the people you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file.

A context for each and every possible granular use case

Now let's think about the total opposite of a single-context design. What if you created a context for each and every single defined business need, and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity: restricted, confidential and top secret. Then let's say that each group and department needs to define access to information from both internal and external users. Finally, add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge number of contexts. For example, let's just look at the resulting contexts for one engineering group:

Q1FY2010 Restricted Internal - Engineering Group 1 - Research
Q1FY2010 Restricted Internal - Engineering Group 1 - Design
Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing
Q1FY2010 Restricted External - Engineering Group 1 - Research
Q1FY2010 Restricted External - Engineering Group 1 - Design
Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing
Q1FY2010 Confidential Internal - Engineering Group 1 - Research
Q1FY2010 Confidential Internal - Engineering Group 1 - Design
Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing
Q1FY2010 Confidential External - Engineering Group 1 - Research
Q1FY2010 Confidential External - Engineering Group 1 - Design
Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing
Q1FY2010 Top Secret Internal - Engineering Group 1 - Research
Q1FY2010 Top Secret Internal - Engineering Group 1 - Design
Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing
Q1FY2010 Top Secret External - Engineering Group 1 - Research
Q1FY2010 Top Secret External - Engineering Group 1 - Design
Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing

Now multiply the above by 6, one set for each engineering group. That is 18 contexts per group, and you are then creating/reviewing another 18 every 3 months; after a year you've got 72 contexts for a single group. What would be the advantages of such a complex classification model?

- You can satisfy very granular rights requirements; for example, only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis.
- Your business may have very complex rights requirements, and mapping these directly to IRM may be an obvious exercise.

The disadvantages of such a classification model are significant:

- Huge administrative overhead. Someone in the business must manage, review and administrate each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to preside over each year.
- From an end user's perspective, life will be very confusing. Imagine if a user has rights in just 6 of these contexts.
They may be able to print content from one but not another, or be able to edit content in 2 contexts but not the other 4. Such confusion at the end-user level causes frustration and resistance to the use of the technology.

- Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all their offline rights.
- Hard to understand who can do what with what. Imagine being the VP of engineering, and as part of an internal security audit you are asked the question, "What rights do researchers have to our top secret information?" In this complex model the answer is not simple; it would depend on many roles in many contexts.

Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable, and too few contexts do not give fine enough granularity.

What makes a good context?

Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However, there are some customers who know only of the government regulation that requires them to control access to certain types of information; they don't actually know where the documents are, how they are created, or understand exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which helps you define a context. First ask these questions about a set of documents:

- What is the topic?
- Who are the legitimate contributors on this topic?
- Who is the authorized readership?

If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such they cannot leak to your competitors; therefore it is better for a document to be sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address it first with the business. Don't try to tackle many different problems from the outset. Do one, learn from the process, refine it, and then take what you have learned into the next use case; refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business.

Deciding on the use of roles in the context

Once you have decided on that first initial use case and a context to create, let's look at the details you need to decide upon.
For each context, identify the following.

Administrative roles:
- Business owner: the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are usually the person with the most at risk when sensitive information is lost.
- Point of contact: the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator.
- Context administrator: the person who will enact the decisions of the business owner. Sometimes the point of contact, sometimes a trusted IT person.

Document-related roles:
- Contributors: the people who create and edit documents in this context.
- Reviewers: the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See the later discussion on Published-work and Work-in-Progress.)
- Readers: the people who read documents from this context.

Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles obviously map directly to roles available in Oracle IRM.

Reviewing the features and security for context roles

At this point we have decided on a classification of information, and we understand what roles people in the business will play when administrating this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First, think about why you are protecting the documents in the first place: to prevent the loss or leaking of information to the wrong people, and to control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent authorized people from doing legitimate work. This is an important point: with IRM you can erect many barriers to prevent access to content, yet with too many restrictions, authorized users will often find ways to circumvent the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However, I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content; remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance, and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works, you can start to introduce more restrictions and complexity. Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information and sales can only access sales data, act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community.
These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology, which simplifies the process of assigning users the correct business roles.
The big print issue...
Printing is often an issue of contention: users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing businesses pain, and it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However, it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding whether to give print rights.
Oracle IRM sealed documents can contain watermarks that expose information about the user, the time and location of access, and the classification of the document. This information would reside in the printed copy, making it easier to trace who printed it.
Printed documents are slower to distribute than their digital counterparts, so time-sensitive information in printed format may present a lower risk.
Print activity is audited, so you can monitor and react to users abusing print rights.
Summary
In summary, it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires that all information in the group be secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is to introduce the complexity needed to deliver them at the right level. In another article I'm working on, I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.
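To make the context and role structure above concrete, here is a toy sketch in C#. These types are purely illustrative; they are NOT part of any Oracle IRM API, and the member names are made up for this article's vocabulary.

// Toy model of the context design described above (illustrative only).
using System.Collections.Generic;

public class IrmContext
{
    public string Topic;                 // what the classification covers
    public string BusinessOwner;         // decides who may or may not see content
    public string PointOfContact;        // handles requests for access
    public string ContextAdministrator;  // enacts the owner's decisions
    public List<string> Contributors = new List<string>(); // create and edit
    public List<string> Reviewers = new List<string>();    // review only
    public List<string> Readers = new List<string>();      // read only
}

If two candidate sets of documents would differ significantly in Topic, Contributors, or Readers when modeled this way, that is the signal, per the questions above, that they merit separate contexts.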

    Read the article

  • C# Neural Networks with Encog

    - by JoshReuben
Neural Networks
· I recently read the book Introduction to Neural Networks for C#, by Jeff Heaton (http://www.amazon.com/Introduction-Neural-Networks-C-2nd/dp/1604390093/ref=sr_1_2?ie=UTF8&s=books&qid=1296821004&sr=8-2-spell). Not the 1st ANN book I've perused, but a nice revision.
· Artificial Neural Networks (ANNs) are a mechanism of machine learning – see http://en.wikipedia.org/wiki/Artificial_neural_network , http://en.wikipedia.org/wiki/Category:Machine_learning
· Problems not suited to a neural network solution: programs that are easily written out as flowcharts consisting of well-defined steps, program logic that is unlikely to change, and problems in which you must know exactly how the solution was derived.
· Problems suited to a neural network: pattern recognition, classification, series prediction, and data mining. Pattern recognition - the network attempts to determine if the input data matches a pattern that it has been trained to recognize. Classification - take input samples and classify them into fuzzy groups.
· As far as machine learning approaches go, I think SVMs are superior (see http://en.wikipedia.org/wiki/Support_vector_machine ) - a neural network has certain disadvantages in comparison: an ANN can be overtrained, different training sets can produce non-deterministic weights, and it is not possible to discern the underlying decision function of an ANN from its weight matrix – they are a black box.
· In this post, I'm not going to go into internals (believe me, I know them). An autoassociative network (e.g. a Hopfield network) will echo back a pattern if it is recognized.
· Under the hood, there is very little maths. In a nutshell, some simple matrix operations occur during training: the input array is processed (normalized into bipolar values of 1, -1) and transposed from an input column vector into a row vector; these are subject to matrix multiplication and then subtraction of the identity matrix to get a contribution matrix. The dot product is taken against the weight matrix to yield a Boolean match result. For backpropagation training, a derivative function is required. In learning, hill climbing mechanisms such as Genetic Algorithms and Simulated Annealing are used to escape local minima. For unsupervised training, such as found in the Self Organizing Maps used for OCR, Hebb's rule is applied.
· The purpose of this post is not to mire you in technical and conceptual details, but to show you how to leverage neural networks via an abstraction API - Encog.
Encog
· Encog is a neural network API.
· Links to Encog: http://www.encog.org , http://www.heatonresearch.com/encog , http://www.heatonresearch.com/forum
· Encog requires .Net 3.5 or higher – there is also a Silverlight version. Third-party libraries: log4net and nunit.
· Encog supports feedforward, recurrent, self-organizing map, radial basis function and Hopfield neural networks.
· Encog neural networks, and related data, can be stored in .EG XML files.
· Encog Workbench allows you to edit, train and visualize neural networks. The Encog Workbench can also generate code.
Synapses and layers
· These are the primary building blocks - almost every neural network will have, at a minimum, an input and output layer. In some cases, the same layer will function as both input and output layer.
· To adapt a problem to a neural network, you must determine how to feed the problem into the input layer of a neural network, and receive the solution through the output layer of a neural network.
· The input layer - for each input neuron, one double value is stored. An array is passed as input to a layer. Encog uses the interface INeuralData to hold these arrays. The class BasicNeuralData implements the INeuralData interface. Once the neural network processes the input, an INeuralData based class will be returned from the neural network's output layer.
· To convert a double array into an INeuralData object: INeuralData data = new BasicNeuralData(new double[10]);
· The output layer - the neural network outputs an array of doubles, wrapped in a class based on the INeuralData interface.
· The real power of a neural network comes from its pattern recognition capabilities. The neural network should be able to produce the desired output even if the input has been slightly distorted.
· Hidden layers - optional, between the input and output layers; very much a "black box". If the structure of the hidden layer is too simple it may not learn the problem. If the structure is too complex, it will learn the problem but will be very slow to train and execute. Some neural networks have no hidden layers; the input layer may be directly connected to the output layer. Further, some neural networks have only a single layer; a single layer neural network has the single layer self-connected.
· Connections, called synapses, contain individual weight matrixes. These values are changed as the neural network learns.
Constructing a Neural Network
· The XOR operator is a frequent "first example" - the "Hello World" application for neural networks.
· The XOR operator only returns true when both inputs differ:
0 XOR 0 = 0
1 XOR 0 = 1
0 XOR 1 = 1
1 XOR 1 = 0
· Structuring a neural network for XOR - two inputs to the XOR operator and one output.
· Input: 0.0,0.0 1.0,0.0 0.0,1.0 1.0,1.0
· Expected output: 0.0 1.0 1.0 0.0
· A perceptron - a simple feedforward neural network to learn the XOR operator.
· Because the XOR operator has two inputs and one output, the neural network follows suit. Additionally, the neural network will have a single hidden layer, with two neurons to help process the data. The choice of 2 neurons in the hidden layer is arbitrary, and often comes down to trial and error.
· [Figure: Neuron Diagram for the XOR Network]
· The Encog Workbench displays neural networks on a layer-by-layer basis.
· [Figure: Encog Layer Diagram for the XOR Network]
· Create a BasicNetwork - three layers are added to this network. The FinalizeStructure method must be called to inform the network that no more layers are to be added. The call to Reset randomizes the weights in the connections between these layers.
var network = new BasicNetwork();
network.AddLayer(new BasicLayer(2));
network.AddLayer(new BasicLayer(2));
network.AddLayer(new BasicLayer(1));
network.Structure.FinalizeStructure();
network.Reset();
· Neural networks frequently start with a random weight matrix. This provides a starting point for the training methods. These random values will be tested and refined into an acceptable solution. However, sometimes the initial random values are too far off, and it may be necessary to reset the weights again if training is ineffective. These weights make up the long-term memory of the neural network.
Additionally, some layers have threshold values that also contribute to the long-term memory of the neural network. Some neural networks also contain context layers, which give the neural network a short-term memory as well. The neural network learns by modifying these weight and threshold values.
· Now that the neural network has been created, it must be trained.
Training a Neural Network
· Construct an INeuralDataSet object - it contains the input array and the expected output array (of corresponding range). Even though there is only one output value, we must still use a two-dimensional array to represent the output.
public static double[][] XOR_INPUT = {
    new double[2] { 0.0, 0.0 },
    new double[2] { 1.0, 0.0 },
    new double[2] { 0.0, 1.0 },
    new double[2] { 1.0, 1.0 } };

public static double[][] XOR_IDEAL = {
    new double[1] { 0.0 },
    new double[1] { 1.0 },
    new double[1] { 1.0 },
    new double[1] { 0.0 } };

INeuralDataSet trainingSet = new BasicNeuralDataSet(XOR_INPUT, XOR_IDEAL);
· Training is the process where the neural network's weights are adjusted to better produce the expected output. Training will continue for many iterations, until the error rate of the network is below an acceptable level. Encog supports many different types of training; Resilient Propagation (RPROP) is a general-purpose training algorithm. All training classes implement the ITrain interface. The RPROP algorithm is implemented by the ResilientPropagation class. Training the neural network involves calling the Iteration method on the ITrain class until the error is below a specific value. The code loops through as many iterations, or epochs, as it takes to get the error rate for the neural network below 1%. Once the neural network has been trained, it is ready for use.
ITrain train = new ResilientPropagation(network, trainingSet);

for (int epoch = 0; epoch < 10000; epoch++)
{
    train.Iteration();
    Debug.Print("Epoch #" + epoch + " Error:" + train.Error);
    if (train.Error < 0.01) break;
}
Executing a Neural Network
· Call the Compute method on the BasicNetwork class.
Console.WriteLine("Neural Network Results:");
foreach (INeuralDataPair pair in trainingSet)
{
    INeuralData output = network.Compute(pair.Input);
    Console.WriteLine(pair.Input[0] + "," + pair.Input[1] + ", actual=" + output[0] + ",ideal=" + pair.Ideal[0]);
}
· The Compute method accepts an INeuralData class and also returns an INeuralData object.
Neural Network Results:
0.0,0.0, actual=0.002782538818034049,ideal=0.0
1.0,0.0, actual=0.9903741937121177,ideal=1.0
0.0,1.0, actual=0.9836807956566187,ideal=1.0
1.0,1.0, actual=0.0011646072586172778,ideal=0.0
· The network has not been trained to give the exact results. This is normal. Because the network was trained to 1% error, each of the results will generally also be within 1% of the expected value.
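Putting the fragments above together, here is a consolidated, end-to-end sketch of the XOR example. It uses only the types shown in this post (BasicNetwork, BasicLayer, BasicNeuralDataSet, ResilientPropagation); the namespace layout is assumed from the Encog 2.x C# distribution and may differ between Encog versions.

using System;
using Encog.Neural.Data;
using Encog.Neural.Data.Basic;
using Encog.Neural.Networks;
using Encog.Neural.Networks.Layers;
using Encog.Neural.Networks.Training;
using Encog.Neural.Networks.Training.Propagation.Resilient;

public static class XorExample
{
    public static void Main()
    {
        double[][] input = { new[] { 0.0, 0.0 }, new[] { 1.0, 0.0 },
                             new[] { 0.0, 1.0 }, new[] { 1.0, 1.0 } };
        double[][] ideal = { new[] { 0.0 }, new[] { 1.0 },
                             new[] { 1.0 }, new[] { 0.0 } };

        // 2 input neurons, 2 hidden neurons, 1 output neuron.
        var network = new BasicNetwork();
        network.AddLayer(new BasicLayer(2));
        network.AddLayer(new BasicLayer(2));
        network.AddLayer(new BasicLayer(1));
        network.Structure.FinalizeStructure();
        network.Reset(); // randomize the initial weight matrix

        INeuralDataSet trainingSet = new BasicNeuralDataSet(input, ideal);
        ITrain train = new ResilientPropagation(network, trainingSet);

        // Train until the error drops below 1%, capped at 10000 epochs.
        int epoch = 0;
        do
        {
            train.Iteration();
            epoch++;
        } while (train.Error > 0.01 && epoch < 10000);

        foreach (INeuralDataPair pair in trainingSet)
        {
            INeuralData output = network.Compute(pair.Input);
            Console.WriteLine(pair.Input[0] + "," + pair.Input[1] +
                              ", actual=" + output[0] + ",ideal=" + pair.Ideal[0]);
        }
    }
}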

    Read the article

  • Linux Kernel not upgraded (from Ubuntu 12.04 to 12.10) - can't remove old kernels and can't install new apps

    - by Tony Breyal
    Question: How do I remove old kernel images which refuse to be removed? Context: Yesterday I upgraded Ubuntu from 12.04 to 12.10. However, the linux kernel has not upgraded from 3.2 to 3.5 as I would have expected. $ uname -r 3.2.0-32-generic $ uname -a Linux tony-b 3.2.0-32-generic #51-Ubuntu SMP Wed Sep 26 21:33:09 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux $ cat /proc/version Linux version 3.2.0-32-generic (buildd@batsu) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #51-Ubuntu SMP Wed Sep 26 21:33:09 UTC 2012 Not sure why that happened there. I wanted to install Audacity (v2.0.1-1_amd64) to edit a lecture audio file. When trying this operation through Ubuntu Software Center, it says that to install audacity, four items will need to be removed: linux-image-3.2.0-27-generic linux-image-3.2.0-29-generic linux-image-3.2.0-30-generic linux-image-3.2.0-31-generic So I click "Install Anyway" but it fails with the following output: installArchives() failed: (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 259675 files and directories currently installed.) Removing linux-image-3.2.0-27-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-27-generic /boot/vmlinuz-3.2.0-27-generic update-initramfs: Deleting /boot/initrd.img-3.2.0-27-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-27-generic /boot/vmlinuz-3.2.0-27-generic Generating grub.cfg ... run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-27-generic.postrm line 328. dpkg: error processing linux-image-3.2.0-27-generic (--remove): subprocess installed post-removal script returned error exit status 1 No apport report written because MaxReports is reached already Removing linux-image-3.2.0-29-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-29-generic /boot/vmlinuz-3.2.0-29-generic update-initramfs: Deleting /boot/initrd.img-3.2.0-29-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-29-generic /boot/vmlinuz-3.2.0-29-generic Generating grub.cfg ... run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-29-generic.postrm line 328. dpkg: error processing linux-image-3.2.0-29-generic (--remove): subprocess installed post-removal script returned error exit status 1 No apport report written because MaxReports is reached already Removing linux-image-3.2.0-30-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic update-initramfs: Deleting /boot/initrd.img-3.2.0-30-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic Generating grub.cfg ... 
run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-30-generic.postrm line 328. dpkg: error processing linux-image-3.2.0-30-generic (--remove): subprocess installed post-removal script returned error exit status 1 No apport report written because MaxReports is reached already Removing linux-image-3.2.0-31-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-31-generic /boot/vmlinuz-3.2.0-31-generic update-initramfs: Deleting /boot/initrd.img-3.2.0-31-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-31-generic /boot/vmlinuz-3.2.0-31-generic Generating grub.cfg ... run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-31-generic.postrm line 328. dpkg: error processing linux-image-3.2.0-31-generic (--remove): subprocess installed post-removal script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: linux-image-3.2.0-27-generic linux-image-3.2.0-29-generic linux-image-3.2.0-30-generic linux-image-3.2.0-31-generic Error in function: Setting up grub-pc (2.00-7ubuntu11) ... /usr/sbin/grub-bios-setup: warning: Sector 32 is already in use by the program `FlexNet'; avoiding it. This software may cause boot or other problems in future. Please ask its authors not to store data in the boot track. Installation finished. No error reported. Generating grub.cfg ... dpkg: error processing grub-pc (--configure): subprocess installed post-installation script returned error exit status 1 It seems I need to remove the old linux images somehow. I have tried this through (1) Synaptic, (2) Ubuntu Tweak, and (3) Computer Janitor. The first two fail, whilst Computer Janitor won't even open. The output from Synaptic is: E: linux-image-3.2.0-27-generic: subprocess installed post-removal script returned error exit status 1 E: linux-image-3.2.0-29-generic: subprocess installed post-removal script returned error exit status 1 E: linux-image-3.2.0-30-generic: subprocess installed post-removal script returned error exit status 1 E: linux-image-3.2.0-31-generic: subprocess installed post-removal script returned error exit status 1 How do I remove these old images? Thank you kindly in advance for any help on this matter. P.S. Further information: $ dpkg --list | grep linux-image rH linux-image-3.2.0-27-generic 3.2.0-27.43 amd64 Linux kernel image for version 3.2.0 on 64 bit x86 SMP rH linux-image-3.2.0-29-generic 3.2.0-29.46 amd64 Linux kernel image for version 3.2.0 on 64 bit x86 SMP rH linux-image-3.2.0-30-generic 3.2.0-30.48 amd64 Linux kernel image for version 3.2.0 on 64 bit x86 SMP rH linux-image-3.2.0-31-generic 3.2.0-31.50 amd64 Linux kernel image for version 3.2.0 on 64 bit x86 SMP ii linux-image-3.2.0-32-generic 3.2.0-32.51 amd64 Linux kernel image for version 3.2.0 on 64 bit x86 SMP ii linux-image-3.5.0-17-generic 3.5.0-17.28 amd64 Linux kernel image for version 3.5.0 on 64 bit x86 SMP ii linux-image-extra-3.5.0-17-generic 3.5.0-17.28 amd64 Linux kernel image for version 3.5.0 on 64 bit x86 SMP ii linux-image-generic 3.5.0.17.19 amd64 Generic Linux kernel image But trying to remove using the command line fails too e.g.: $ sudo apt-get purge linux-image-3.2.0-27-generic Reading package lists... Done Building dependency tree Reading state information... 
Done The following packages will be REMOVED linux-image-3.2.0-27-generic linux-image-3.2.0-29-generic linux-image-3.2.0-30-generic linux-image-3.2.0-31-generic 0 upgraded, 0 newly installed, 4 to remove and 1 not upgraded. 5 not fully installed or removed. After this operation, 597 MB disk space will be freed. Do you want to continue [Y/n]? Y (Reading database ... 259675 files and directories currently installed.) Removing linux-image-3.2.0-27-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-27-generic /boot/vmlinuz-3.2.0-27-generic update-initramfs: Deleting /boot/initrd.img-3.2.0-27-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-27-generic /boot/vmlinuz-3.2.0-27-generic Generating grub.cfg ... run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-27-generic.postrm line 328. dpkg: error processing linux-image-3.2.0-27-generic (--remove): subprocess installed post-removal script returned error exit status 1 No apport report written because MaxReports has already been reached Removing linux-image-3.2.0-29-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-29-generic /boot/vmlinuz-3.2.0-29-generic update-initramfs: Deleting /boot/initrd.img-3.2.0-29-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-29-generic /boot/vmlinuz-3.2.0-29-generic Generating grub.cfg ... run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-29-generic.postrm line 328. dpkg: error processing linux-image-3.2.0-29-generic (--remove): subprocess installed post-removal script returned error exit status 1 No apport report written because MaxReports has already been reached Removing linux-image-3.2.0-30-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic update-initramfs: Deleting /boot/initrd.img-3.2.0-30-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic Generating grub.cfg ... run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-30-generic.postrm line 328. dpkg: error processing linux-image-3.2.0-30-generic (--remove): subprocess installed post-removal script returned error exit status 1 No apport report written because MaxReports has already been reached Removing linux-image-3.2.0-31-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-31-generic /boot/vmlinuz-3.2.0-31-generic update-initramfs: Deleting /boot/initrd.img-3.2.0-31-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-31-generic /boot/vmlinuz-3.2.0-31-generic Generating grub.cfg ... run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-31-generic.postrm line 328. 
dpkg: error processing linux-image-3.2.0-31-generic (--remove): subprocess installed post-removal script returned error exit status 1 No apport report written because MaxReports has already been reached Errors were encountered while processing: linux-image-3.2.0-27-generic linux-image-3.2.0-29-generic linux-image-3.2.0-30-generic linux-image-3.2.0-31-generic E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • 2 way SSL between SOA and OSB

    - by Johnny Shum
If you have a need to use two-way SSL between SOA composites and external partner links, you can follow these steps:
1. Create the identity keystores, trust keystores, and server certificates
2. Setup keystores and SSL on WebLogic
3. Setup the server to use two-way SSL
4. Configure your SOA composite's partner link to use two-way SSL
5. Configure the SOA engine for two-way SSL
In this case, I use SOA and OSB for the test. I started with separate OSB and SOA domains. I deployed two SOAP-based proxies on OSB and two composites on SOA. In SOA, one composite invokes an OSB proxy service and the other is invoked by OSB. Similarly, in OSB, one proxy invokes a SOA composite and the other is invoked by SOA.
1. Create the identity keystores, trust keystores and the server certificates
Since this is a development environment, I use the JDK's keytool to create the stores and use self-signed certificates. For a production environment, you should use certificates from a trusted certificate authority like Verisign. I created the script below to show what is needed in this step. The only requirement is that when creating the SOA identity certificate, you MUST use the alias mykey.
STOREPASS=welcome1
KEYPASS=welcome1
# generate identity keystores for soa and osb.  Note: For SOA, you MUST use alias mykey
echo "creating stores"
keytool -genkey -alias mykey -keyalg "RSA" -sigalg "SHA1withRSA" -dname "CN=soa, C=US" -keystore soa-default-keystore.jks -storepass $STOREPASS -keypass $KEYPASS
keytool -genkey -alias osbkey -keyalg "RSA" -sigalg "SHA1withRSA" -dname "CN=osb, C=US" -keystore osb-default-keystore.jks -storepass $STOREPASS -keypass $KEYPASS
# listing keystore contents
echo "listing stores contents"
keytool -list -alias mykey -keystore soa-default-keystore.jks -storepass $STOREPASS
keytool -list -alias osbkey -keystore osb-default-keystore.jks -storepass $STOREPASS
# exporting certs from stores
echo "export certs from stores"
keytool -exportcert -alias mykey -keystore soa-default-keystore.jks -storepass $STOREPASS -file soacert.der
keytool -exportcert -alias osbkey -keystore osb-default-keystore.jks -storepass $STOREPASS -file osbcert.der
# import certs to trust stores
echo "import certs"
keytool -importcert -alias osbkey -keystore soa-trust-keystore.jks -storepass $STOREPASS -file osbcert.der -keypass $KEYPASS
keytool -importcert -alias mykey -keystore osb-trust-keystore.jks -storepass $STOREPASS -file soacert.der -keypass $KEYPASS
SOA Suite uses the JDK's SSL implementation for outbound traffic instead of WebLogic's implementation. You will need to import the partner's public cert into the trusted keystore used by SOA. The default trusted keystore for SOA is DemoTrust.jks and it is located in $MW_HOME/wlserver_10.3/server/lib (this is set in the startup script with -Djavax.net.ssl.trustStore). If you use your own trusted keystore, then you will need to import the partner's cert into that keystore instead.
keytool -importcert -alias osbkey -keystore $MW_HOME/wlserver_10.3/server/lib/DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase -file osbcert.der -keypass $KEYPASS
If you do not perform this step, you will encounter this exception at runtime when SOA invokes the OSB service using two-way SSL:
Message send failed: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
2. Setup keystores and SSL on WebLogic
First, you will need to log in to the WebLogic console and navigate to the server's Configuration->Keystores tab.
Change the keystore type to Custom Identity and Custom Trust and enter the rest of the fields. Then navigate to the SSL tab, enter the fields in the identity section and expand the Advanced section. Since I am using self-signed certs in my VM environment, I disabled hostname verification; in a real production system, this should not be the case. I also enabled the option "Use Server Certs", so that the application uses the server cert to initiate HTTPS traffic (it is important to enable this on OSB). Last, enable the SSL listening port in the server's Configuration->General tab.
3. Setup the server to use two-way SSL
If you follow the screenshot in the previous step, you can see that in the Server->Configuration->SSL->Advanced section there is an option for Two Way Client Cert Behavior; you should set this to Client Certs Requested and Enforced. Repeat steps 2 and 3 on OSB. After all these configurations, you have to restart all the servers.
4. Configure your SOA composite's partner link to use two-way SSL
You do this by modifying the composite.xml in your project: locate the partner link reference and add the property oracle.soa.two.way.ssl.enabled.
<reference name="callosb" ui:wsdlLocation="helloword.wsdl">
  <interface.wsdl interface="http://www.examples.com/wsdl/HelloService.wsdl#wsdl.interface(Hello_PortType)"/>
  <binding.ws port="http://www.examples.com/wsdl/HelloService.wsdl#wsdl.endpoint(Hello_Service/Hello_Port)"
              location="helloword.wsdl" soapVersion="1.1">
    <property name="weblogic.wsee.wsat.transaction.flowOption"
              type="xs:string" many="false">WSDLDriven</property>
    <property name="oracle.soa.two.way.ssl.enabled">true</property>
  </binding.ws>
</reference>
In OSB, you should have checked the HTTPS required flag in the proxy's transport configuration. After this, rebuild the composite jar file, ready to deploy in the EM console later.
5. Configure the SOA engine for two-way SSL
Oracle SOA Suite uses both the Oracle WebLogic Server and Sun Secure Socket Layer (SSL) stacks for two-way SSL configurations. For the inbound web service bindings, Oracle SOA Suite uses the Oracle WebLogic Server infrastructure and, therefore, the Oracle WebLogic Server libraries for SSL; this is already done by steps 2 and 3 above. For the outbound web service bindings, Oracle SOA Suite uses JRF HttpClient and, therefore, the Sun JDK libraries for SSL. You configure this for the SOA engine in the Enterprise Manager console: select soa-infra->SOA Administration->Common Properties, then click the link at the bottom of the page, "More SOA Infra Advanced Infrastructure Configuration Properties", and enter the full path of the SOA identity keystore in the value field of the KeyStoreLocation attribute. Click Apply and Return, then navigate to the domain->Security->Credentials. Here, you provide the password to the keystore. Note: the alias of the certificate must be mykey as described in step 1, so you only need to provide the password to the identity keystore. You accomplish this by: Click Create Map; in the Map Name field, enter SOA, and click OK. Click Create Key and enter the following details, where the password is the password for the SOA identity keystore.
6. Test and Troubleshooting
Once the setup is complete and the servers restarted, you can deploy the composite in the EM console and test it. In case of error, you can read the server log file to determine the cause of the error.
For example, if you have not set up step 5 and you test two-way SSL, you will see this in the log when invoking OSB from BPEL:
java.lang.Exception: oracle.sysman.emSDK.webservices.wsdlapi.SoapTestException: oracle.fabric.common.FabricInvocationException: Unable to access the following endpoint(s): https://localhost.localdomain:7002/default/helloword
####<Sep 22, 2012 2:07:37 PM CDT> <Error> <oracle.soa.bpel.engine.ws> <rhel55> <AdminServer> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0AFDAEF20610F8FD89C5> ............ <11d1def534ea1be0:-4034173:139ef56d9f0:-8000-00000000000002ec> <1348340857956> <BEA-000000> <got FabricInvocationException sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
If you have not enabled WebLogic SSL to use the server certificate in the console and you invoke a SOA composite from OSB using two-way SSL, you will see this error:
####<Sep 22, 2012 2:07:37 PM CDT> <Warning> <Security> <rhel55> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <11d1def534ea1be0:-51f5c76a:139ef5e1e1a:-8000-00000000000000e2> <1348340857776> <BEA-090485> <CERTIFICATE_UNKNOWN alert was received from localhost.localdomain - 127.0.0.1. The peer has an unspecified issue with the certificate. SSL debug tracing should be enabled on the peer to determine what the issue is.>
####<Sep 22, 2012 2:07:37 PM CDT> <Warning> <Security> <rhel55> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <11d1def534ea1be0:-51f5c76a:139ef5e1e1a:-8000-00000000000000e4> <1348340857786> <BEA-090485> <CERTIFICATE_UNKNOWN alert was received from localhost.localdomain - 127.0.0.1. The peer has an unspecified issue with the certificate. SSL debug tracing should be enabled on the peer to determine what the issue is.>
####<Sep 22, 2012 2:27:21 PM CDT> <Warning> <Security> <rhel55> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <11d1def534ea1be0:-51f5c76a:139ef5e1e1a:-8000-0000000000000124> <1348342041926> <BEA-090497> <HANDSHAKE_FAILURE alert received from localhost - 127.0.0.1. Check both sides of the SSL configuration for mismatches in supported ciphers, supported protocol versions, trusted CAs, and hostname verification settings.>
References:
http://docs.oracle.com/cd/E23943_01/admin.1111/e10226/soacompapp_secure.htm#CHDCFABB (Section 5.6.4)
http://docs.oracle.com/cd/E23943_01/web.1111/e13707/ssl.htm#i1200848
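As one more optional sanity check from outside the WebLogic stack, the sketch below (in C#, to match the rest of this digest) calls the OSB endpoint from the error log above while presenting the SOA identity certificate. It is a hypothetical smoke test, not part of the setup: the endpoint URL, keystore name and password come from the steps above, and since .NET cannot read JKS keystores directly, it assumes you first export the identity to PKCS#12, e.g. with keytool -importkeystore -srckeystore soa-default-keystore.jks -destkeystore soa-identity.p12 -deststoretype PKCS12.

// Hypothetical two-way SSL smoke test against the OSB endpoint configured above.
using System;
using System.Net;
using System.Security.Cryptography.X509Certificates;

public static class TwoWaySslCheck
{
    public static void Main()
    {
        // The server cert is self-signed in this setup: accept it for this test only.
        ServicePointManager.ServerCertificateValidationCallback =
            (sender, cert, chain, errors) => true;

        var request = (HttpWebRequest)WebRequest.Create(
            "https://localhost:7002/default/helloword?WSDL");
        // Present the client (SOA identity) certificate for the mutual handshake.
        request.ClientCertificates.Add(
            new X509Certificate2("soa-identity.p12", "welcome1"));

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Handshake OK, HTTP " + (int)response.StatusCode);
        }
    }
}

If the handshake fails here with a certificate error, recheck steps 1 to 3 before suspecting the composite configuration.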

    Read the article

  • No sound after video card replaced (AMD Radeon HD 7770)

    - by Sean
Issue: no sound. System: dual boot, Windows 7 (sda), Ubuntu 12.04 (sdb), 2 hard drives, Dell XPS 730. Video card: AMD Radeon HD 7770, Diamond Multimedia. Sound card: Creative Labs SB X-Fi. Additional info: My sound used to work. Then, my old video card (NVIDIA GeForce 280) died. I bought and installed a new video card: Radeon HD 7770. After this, my sound no longer worked in Ubuntu (Win7 audio still works). Everything else in Ubuntu, such as video, works fine. I suspect it has something to do with the fact that the Radeon card includes sound capability. Problem details: If I click on System Settings - Sound, the panel freezes and stops responding indefinitely. The sound volume icon at the top of the screen (by the clock) shows 3 dashes beside it "---", and an empty drop-down box shows if I click on it. (Possibly related to 1.) When I reboot my machine, I get the message: "gnome settings daemon not responding". I have to force the reboot. I reinstalled Ubuntu (preserving my home directory) and the problem persists. Diagnostics info: following the procedure outlined here: https://help.ubuntu.com/community/SoundTroubleshooting - the following is a list of terminal commands, and their output:
$ aplay -l
List of PLAYBACK Hardware Devices
There is no listing beyond that, and the command freezes until I hit control-c.
$ lspci -v | grep -A7 -i "audio"
00:0f.1 Audio device: NVIDIA Corporation MCP55 High Definition Audio (rev a2)
Subsystem: Dell Device 0224
Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 23
Memory at dfff0000 (32-bit, non-prefetchable) [size=16K]
Capabilities: <access denied>
Kernel driver in use: snd_hda_intel
Kernel modules: snd-hda-intel
--
01:00.1 Audio device: Advanced Micro Devices [AMD] nee ATI Device aab0
Subsystem: Diamond Multimedia Systems Device aab0
Flags: bus master, fast devsel, latency 0, IRQ 43
Memory at dfefc000 (64-bit, non-prefetchable) [size=16K]
Capabilities: <access denied>
Kernel driver in use: snd_hda_intel
Kernel modules: snd-hda-intel
--
03:0a.0 Audio device: Creative Labs SB X-Fi
Subsystem: Creative Labs Device 6002
Flags: bus master, medium devsel, latency 32, IRQ 18
Memory at dbff4000 (32-bit, non-prefetchable) [size=16K]
Memory at dbc00000 (64-bit, non-prefetchable) [size=2M]
Memory at d4000000 (64-bit, non-prefetchable) [size=64M]
I/O ports at 8c00 [size=32]
Capabilities: <access denied>
Notice the Diamond Multimedia Systems Device - that seems to be my video card sound. My video card is Diamond Multimedia. Also there's the weird NVIDIA device in there. That must either be a remnant of my now removed NVIDIA graphics card, or else some kind of on-board thing. Not sure which.
$ killall pulseaudio
This allows me to open System Settings - Sound, but the "Test Sound" button makes no sound, and the output volume + mute controls are greyed / disabled at 0 volume. It also allows me to click on the sound control in the "task bar" (beside the clock), and a volume slider drops down, but it is disabled / greyed at 0 volume.
$ find /lib/modules/`uname -r` | grep snd /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-88pm860x.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-tlv320aic3x.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8900.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8978.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-tlv320dac33.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm9090.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-sta32x.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-max98088.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-max9850.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-rt5631.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8903.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8580.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8523.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-max9877.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-ads117x.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8955.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8804.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-sgtl5000.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8750.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm2000.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-tlv320aic32x4.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-ak4642.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-ad193x.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8753.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-ak4535.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8985.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8350.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-dfbmcs320.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-cs42l51.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-tlv320aic26.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8737.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-uda1380.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8776.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8995.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-tpa6130a2.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8727.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm5100.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8991.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8510.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-jz4740-codec.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8400.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-lm4857.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8960.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-alc5623.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-cs4270.ko
/lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-tlv320aic23.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8993.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8961.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8940.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-uda134x.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-ad1836.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8994.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8782.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-cs4271.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8974.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8983.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8962.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-ak4641.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm-hubs.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8971.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8996.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wl1273.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-adav80x.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-spdif.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-pcm3008.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-cx20442.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-ak4671.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8711.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-ad73311.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-max98095.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm9081.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8741.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm1250-ev1.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8988.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-adau1373.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8731.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-l3.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-ssm2602.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-da7210.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-ak4104.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8904.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8728.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8770.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8990.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/snd-soc-core.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/synth/emux/snd-emux-synth.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/synth/snd-util-mem.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/snd-hrtimer.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/snd-hwdep.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/snd-pcm.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/snd-rawmidi.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/oss/snd-mixer-oss.ko 
/lib/modules/3.2.0-29-generic-pae/kernel/sound/core/snd-page-alloc.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/seq/snd-seq-midi.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/seq/snd-seq-dummy.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/seq/snd-seq-virmidi.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/seq/snd-seq-device.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/seq/snd-seq-midi-event.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/seq/snd-seq.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/seq/snd-seq-midi-emul.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/snd.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/snd-timer.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pcmcia/pdaudiocf/snd-pdaudiocf.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pcmcia/vx/snd-vxpocket.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/usb/6fire/snd-usb-6fire.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/usb/snd-usbmidi-lib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/usb/caiaq/snd-usb-caiaq.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/usb/usx2y/snd-usb-usx2y.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/usb/usx2y/snd-usb-us122l.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/usb/snd-usb-audio.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/usb/misc/snd-ua101.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/opl3/snd-opl3-synth.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/opl3/snd-opl3-lib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/opl4/snd-opl4-lib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/opl4/snd-opl4-synth.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/snd-portman2x4.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/snd-serial-u16550.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/snd-mts64.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/snd-mtpav.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/mpu401/snd-mpu401.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/mpu401/snd-mpu401-uart.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/vx/snd-vx-lib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/snd-dummy.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/snd-aloop.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/pcsp/snd-pcsp.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/snd-virmidi.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/firewire/snd-firewire-lib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/firewire/snd-firewire-speakers.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/firewire/snd-isight.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/i2c/snd-tea6330t.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/i2c/other/snd-tea575x-tuner.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/i2c/other/snd-ak4113.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/i2c/other/snd-pt2258.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/i2c/other/snd-ak4117.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/i2c/other/snd-ak4xxx-adda.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/i2c/other/snd-ak4114.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/i2c/snd-cs8427.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/i2c/snd-i2c.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/emu10k1/snd-emu10k1-synth.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/emu10k1/snd-emu10k1.ko 
/lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/emu10k1/snd-emu10k1x.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/korg1212/snd-korg1212.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/au88x0/snd-au8830.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/au88x0/snd-au8820.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/au88x0/snd-au8810.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/aw2/snd-aw2.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-sis7019.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-ens1371.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/vx222/snd-vx222.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-via82xx.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-es1968.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-atiixp-modem.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-cs4281.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-sonicvibes.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-intel8x0.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-maestro3.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/ac97/snd-ac97-codec.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-es1938.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-fm801.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/nm256/snd-nm256.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-realtek.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-cmedia.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-conexant.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-intel.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-analog.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-hdmi.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-idt.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-ca0110.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-cirrus.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-via.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-ca0132.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-si3054.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/riptide/snd-riptide.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-ens1370.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-als4000.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-intel8x0m.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/ca0106/snd-ca0106.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-cs5530.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/cs5535audio/snd-cs5535audio.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-rme32.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/ymfpci/snd-ymfpci.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/ctxfi/snd-ctxfi.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-azt3328.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/cs46xx/snd-cs46xx.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/lx6464es/snd-lx6464es.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/ice1712/snd-ice1712.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/ice1712/snd-ice17xx-ak4xxx.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/ice1712/snd-ice1724.ko 
/lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/mixart/snd-mixart.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/ali5451/snd-ali5451.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/lola/snd-lola.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/oxygen/snd-oxygen-lib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/oxygen/snd-oxygen.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/oxygen/snd-virtuoso.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-via82xx-modem.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/pcxhr/snd-pcxhr.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-indigo.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-echo3g.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-mona.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-layla20.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-gina20.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-layla24.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-mia.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-indigoiox.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-darla24.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-indigoio.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-indigodjx.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-gina24.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-darla20.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-indigodj.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-cmipci.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/asihpi/snd-asihpi.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-ad1889.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/rme9652/snd-rme9652.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/rme9652/snd-hdspm.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/rme9652/snd-hdsp.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/trident/snd-trident.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-atiixp.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-als300.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-bt87x.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-rme96.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/opti9xx/snd-miro.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/opti9xx/snd-opti92x-ad1848.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/opti9xx/snd-opti93x.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/opti9xx/snd-opti92x-cs4231.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/gus/snd-gusextreme.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/gus/snd-interwave.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/gus/snd-gusmax.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/gus/snd-interwave-stb.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/gus/snd-gus-lib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/gus/snd-gusclassic.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/sb/snd-emu8000-synth.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/sb/snd-sb16-dsp.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/sb/snd-sbawe.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/sb/snd-sb8-dsp.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/sb/snd-sb-common.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/sb/snd-sb16.ko 
/lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/sb/snd-sb16-csp.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/sb/snd-sb8.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/sb/snd-jazz16.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/snd-es18xx.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/snd-azt2320.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/snd-cmi8330.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/snd-als100.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/msnd /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/msnd/snd-msnd-classic.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/msnd/snd-msnd-pinnacle.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/msnd/snd-msnd-lib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/cs423x/snd-cs4231.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/cs423x/snd-cs4236.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/es1688/snd-es1688-lib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/es1688/snd-es1688.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/snd-adlib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/ad1848/snd-ad1848.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/ad1816a/snd-ad1816a.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/galaxy/snd-azt1605.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/galaxy/snd-azt2316.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/wavefront/snd-wavefront.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/wss/snd-wss-lib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/snd-sc6000.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/snd-sscape.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/snd-opl3sa2.ko

    Read the article

  • ASP.NET Web Forms Extensibility: Providers

    - by Ricardo Peres
Introduction
This will be the first of a number of posts on ASP.NET extensibility. At this moment I don't know exactly how many there will be, and I only know a couple of subjects that I want to talk about, so more will come in the next days. I have the sensation that the providers offered by ASP.NET are not widely known; although everyone uses, for example, sessions, they may not be aware of the extensibility points that Microsoft included. This post won't go into details of how to configure and extend each of the providers, but will hopefully give some pointers in that direction.
Canonical
These are the most widely known and used providers, coming from ASP.NET 1; chances are you have used them already. There is good support for invoking them client side, either from a .NET application or from JavaScript, and lots of server-side controls use them, such as the Login control for example.
Membership
The Membership provider is responsible for managing registered users, including creating new ones, authenticating them, changing passwords, etc. ASP.NET comes with two implementations, one that uses a SQL Server database and another that uses Active Directory. The base class is MembershipProvider, and new providers are registered in the membership section of the Web.config file, along with parameters for specifying minimum password lengths, complexities, maximum age, etc. One reason for creating a custom provider would be, for example, storing membership information in a different database engine.
<membership defaultProvider="MyProvider">
  <providers>
    <add name="MyProvider" type="MyClass, MyAssembly"/>
  </providers>
</membership>
Role
The Role provider assigns roles to authenticated users. The base class is RoleProvider and there are three out of the box implementations: XML-based, SQL Server and Windows-based. It is also registered in Web.config through the roleManager section, where you can also say if your roles should be cached in a cookie. If you want your roles to come from a different place, implement a custom provider (a minimal sketch appears after the Profile provider below).
<roleManager defaultProvider="MyProvider">
  <providers>
    <add name="MyProvider" type="MyClass, MyAssembly" />
  </providers>
</roleManager>
Profile
The Profile provider allows defining a set of properties that will be tied to and made available to authenticated users, or even anonymous ones, who must then be tracked through anonymous identification. The base class is ProfileProvider and the only included implementation stores these settings in a SQL Server database. It is configured through the profile section, where you also specify the properties to make available; a custom provider would allow storing these properties in different locations.
<profile defaultProvider="MyProvider">
  <providers>
    <add name="MyProvider" type="MyClass, MyAssembly"/>
  </providers>
</profile>
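As an example of extending one of these canonical providers, here is a minimal sketch of a custom Role provider backed by a hardcoded in-memory map; a real provider would query your own store. The class name, user names and role names are made up for illustration, and it would be registered through the roleManager section shown above.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Security;

// Minimal custom Role provider sketch: roles come from an in-memory map.
public class MyRoleProvider : RoleProvider
{
    private static readonly Dictionary<string, string[]> RoleMap =
        new Dictionary<string, string[]>(StringComparer.OrdinalIgnoreCase)
        {
            { "alice", new[] { "Admin", "User" } },
            { "bob",   new[] { "User" } }
        };

    public override string ApplicationName { get; set; }

    public override string[] GetRolesForUser(string username)
    {
        string[] roles;
        return RoleMap.TryGetValue(username, out roles) ? roles : new string[0];
    }

    public override bool IsUserInRole(string username, string roleName)
    {
        return GetRolesForUser(username)
            .Contains(roleName, StringComparer.OrdinalIgnoreCase);
    }

    public override string[] GetAllRoles()
    {
        return RoleMap.Values.SelectMany(r => r).Distinct().ToArray();
    }

    public override bool RoleExists(string roleName)
    {
        return GetAllRoles().Contains(roleName, StringComparer.OrdinalIgnoreCase);
    }

    // This read-only sketch does not support role management.
    public override void CreateRole(string roleName) { throw new NotSupportedException(); }
    public override bool DeleteRole(string roleName, bool throwOnPopulatedRole) { throw new NotSupportedException(); }
    public override void AddUsersToRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
    public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
    public override string[] GetUsersInRole(string roleName) { throw new NotSupportedException(); }
    public override string[] FindUsersInRole(string roleName, string usernameToMatch) { throw new NotSupportedException(); }
}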
Basic OK, I didn’t know what to call these, so Basic is probably as good a name as anything else. They are not supported client-side (it wouldn’t even make sense).

Session The Session provider allows storing data tied to the current “session”, which is normally created when a user first accesses the site, even before being authenticated, and remains for the whole visit. The base class is SessionStateStoreProviderBase, and the included implementations can store data in one of three locations: in the process memory (the default, not suitable for web farms or increased reliability); a SQL Server database (best for reliability and clustering); or the ASP.NET State Service, a Windows Service that is installed with the .NET Framework (OK for clustering). The configuration is made through the sessionState section. By adding a custom Session provider, you can store the data in different locations – think, for example, of a distributed cache.

<sessionState customProvider="MyProvider">
  <providers>
    <add name="MyProvider" type="MyClass, MyAssembly" />
  </providers>
</sessionState>

Resource A not so well known provider, it allows you to change the origin of localized resource elements. By default, these come from RESX files and are used whenever you use the Resources expression builder or the GetGlobalResourceObject and GetLocalResourceObject methods, but if you implement a custom provider, you can have these elements come from somewhere else, such as a database. The base class is ResourceProviderFactory and there’s only one internal implementation, which uses the RESX files. Configuration is through the globalization section.

<globalization resourceProviderFactoryType="MyClass, MyAssembly" />

Health Monitoring Health Monitoring is also probably not so well known, and actually not a good name for what it does. First, in order to understand it, you have to know that ASP.NET fires “events” at specific times and when specific things happen, such as when a user logs in or an exception is raised. These are not user interface events; you can create your own and fire them, and nothing visible will happen, but the Health Monitoring provider will detect them. You can configure it to do things when certain conditions are met, such as a number of events being fired in a certain amount of time. You define these rules and route them to a specific provider, which must inherit from WebEventProvider. Out of the box implementations include sending mails, logging to a SQL Server database, writing to the Windows Event Log, Windows Management Instrumentation, the IIS 7 Trace infrastructure or the debugger Trace. Its configuration is achieved through the healthMonitoring section, and a reason for implementing a custom provider would be, for example, locking down a web application in the event of a significant number of failed login attempts occurring in a small period of time.

<healthMonitoring>
  <providers>
    <add name="MyProvider" type="MyClass, MyAssembly"/>
  </providers>
</healthMonitoring>
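As a sketch of how small such a provider can be, the following hypothetical Health Monitoring provider simply routes every event it receives to the debugger trace (the class name is made up, not part of the framework):

using System.Diagnostics;
using System.Web.Management;

// Minimal Health Monitoring provider sketch: writes each routed event
// to the debugger trace.
public class MyWebEventProvider : WebEventProvider
{
    public override void ProcessEvent(WebBaseEvent raisedEvent)
    {
        Trace.WriteLine(string.Format("[{0}] {1}: {2}",
            raisedEvent.EventTime, raisedEvent.GetType().Name, raisedEvent.Message));
    }

    // Nothing is buffered in this sketch, so there is nothing to flush or release.
    public override void Flush() { }
    public override void Shutdown() { }
}

Once registered as in the healthMonitoring example above, rules in that same section decide which events get routed to it.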
Sitemap The Sitemap provider allows defining the site’s navigation structure, and the permissions required for each node, in a tree-like fashion. Usually this is statically defined, and the included provider supports that by taking the structure from a Web.sitemap XML file. The base class is SiteMapProvider and you can extend it in order to supply your own source for the site’s structure, which may even be dynamic. Its configuration is done through the siteMap section.

<siteMap defaultProvider="MyProvider">
  <providers>
    <add name="MyProvider" type="MyClass, MyAssembly" />
  </providers>
</siteMap>

Web Part Personalization Web Parts are better known to SharePoint users, but since ASP.NET 2.0 they are included in the core Framework. Web Parts are server-side controls that offer visiting clients certain configuration possibilities on the page where they are located. The infrastructure handles this configuration per user, or globally for all users, and this provider is responsible for just that. The base class is PersonalizationProvider and the only included implementation stores settings in SQL Server. Add new providers through the personalization section.

<webParts>
  <personalization defaultProvider="MyProvider">
    <providers>
      <add name="MyProvider" type="MyClass, MyAssembly"/>
    </providers>
  </personalization>
</webParts>

Build The Build provider is responsible for compiling whatever files are present in your web folder. There’s a base class, BuildProvider, and, as can be expected, internal implementations for building pages (ASPX), master pages (Master), user web controls (ASCX), handlers (ASHX), themes (Skin), XML schemas (XSD), web services (ASMX, SVC), resources (RESX), browser capabilities files (Browser) and so on. You would write a build provider if you wanted to generate code from any kind of non-code file, so that you get strong typing at development time. Configuration goes in the buildProviders section, and it is per extension.

<buildProviders>
  <add extension=".ext" type="MyClass, MyAssembly" />
</buildProviders>

New in ASP.NET 4 These are not exactly new, since they have existed since 2010, but in ASP.NET terms they are still new.

Output Cache The Output Cache for ASPX pages and ASCX user controls is now extensible, through the Output Cache provider, which means you can implement a custom mechanism for storing and retrieving cached data, for example, in a distributed fashion. The base class is OutputCacheProvider and the only included implementation is internal. Configuration goes in the outputCache section, and on each page and web user control you can choose the provider you want to use.

<caching>
  <outputCache defaultProvider="MyProvider">
    <providers>
      <add name="MyProvider" type="MyClass, MyAssembly"/>
    </providers>
  </outputCache>
</caching>

Request Validation A big change introduced in ASP.NET 4 (and refined in 4.5, by the way) is the introduction of extensible request validation, by means of a Request Validation provider. This means we are no longer limited to either enabling or disabling request validation for all pages or for a specific page; we now have fine control over each of the elements of the request, including cookies, headers, query string and form values. The base provider class is RequestValidator and the configuration goes in the httpRuntime section.

<httpRuntime requestValidationType="MyClass, MyAssembly" />
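To give an idea of what such a provider looks like, here is a minimal sketch of a custom Request Validation provider that adds one extra check on top of the built-in rules; the class name and the blocked marker are hypothetical:

using System.Web;
using System.Web.Util;

// Minimal Request Validation provider sketch: rejects one extra pattern,
// then defers to the default validation rules.
public class MyRequestValidator : RequestValidator
{
    protected override bool IsValidRequestString(HttpContext context, string value,
        RequestValidationSource requestValidationSource, string collectionKey,
        out int validationFailureIndex)
    {
        // Reject any value containing a suspicious marker, in addition to the defaults.
        int index = value == null
            ? -1
            : value.IndexOf("<script", System.StringComparison.OrdinalIgnoreCase);
        if (index >= 0)
        {
            validationFailureIndex = index;
            return false;
        }
        return base.IsValidRequestString(context, value, requestValidationSource,
            collectionKey, out validationFailureIndex);
    }
}

It would be registered with requestValidationType="MyRequestValidator, MyAssembly" in the httpRuntime section shown above.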
Browser Capabilities The Browser Capabilities provider is new in ASP.NET 4, although the concept has existed since ASP.NET 2. The idea is to map a browser brand and version to its supported capabilities, such as JavaScript version, Flash support, ActiveX support, and so on. Previously, this was all hardcoded in .Browser files located in %WINDIR%\Microsoft.NET\Framework(64)\vXXXXX\Config\Browsers, but now you can have a class inherit from HttpCapabilitiesProvider and implement your own mechanism. Register it in the browserCaps section.

<browserCaps provider="MyClass, MyAssembly" />

Encoder The Encoder provider is responsible for encoding every string that is sent to the browser in a page or header. This includes, for example, converting special characters to their standard codes, and is implemented by the base class HttpEncoder. Another implementation takes care of Anti Cross Site Scripting (XSS) attacks. Build your own by inheriting from one of these classes if you want to add some additional processing to these strings. The configuration goes in the httpRuntime section.

<httpRuntime encoderType="MyClass, MyAssembly" />

Conclusion That’s about it for ASP.NET providers. It was by no means a thorough description, but I hope I managed to raise your interest in this subject. There are lots of pointers on the Internet, so I only included direct references to the Framework classes and configuration sections. Stay tuned for more extensibility!

    Read the article

  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
We introduced a new ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes them, using material originally written during their development, the PSARC case, and the Solaris Linker and Libraries Manual.

ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file.

There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out: customers with very large 32-bit objects who are not ready or able to make the transition to 64-bits. We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited, but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64-bits. However, consider environments where: Hundreds of users may be executing the code on large shared systems (32-bits use less memory and bus bandwidth, and on sparc run just as fast as 64-bit code otherwise); the code is complex and finely tuned, and the original authors may no longer be available; the code is critical production code, which was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue. Users in these risk averse and/or high scale categories have good reasons to push 32-bit objects to the limit before moving on. Ancillary objects offer these users a longer runway.

Design The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them. The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files). A single added section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other. Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable sections, notably the symbol table, are copied into both objects. As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SHF_SUNW_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary one. If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature.
This is important, because they exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a single logical object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files.)

Among the benefits of this approach are: There is no need for per-file symbol tables to reflect the contents of each file; the same symbol table that would be produced for a standard object can be used. The section contents are identical in either case; there is no need to alter data to accommodate multiple files. It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines; most of the existing debugger implementation applies without modification. The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) of multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted, however, that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful.

Using Ancillary Objects (From the Solaris Linker and Libraries Guide) By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so they have no impact on memory use or other aspects of runtime performance, no matter their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections: to reduce the size of objects in order to improve the speed at which they can be copied across wide area networks; to support fine grained debugging of highly optimized code, which requires considerable debug data (in modern systems, the debugging data can easily be larger than the code it describes); to stay below the 4 Gbyte size limit of a 32-bit object, since in very large 32-bit objects the debug data can cause this limit to be exceeded and prevent the creation of the object; and to limit the exposure of internal implementation details.

Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option.

$ ld ... -z ancillary[=outfile] ...
By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be given by supplying an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions: All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object. All remaining non-allocable sections are written to the ancillary object. The following non-allocable sections are written to both the primary object and the ancillary object.

.shstrtab The section name string table.
.symtab The full non-dynamic symbol table.
.symtab_shndx The symbol table extended index section associated with .symtab.
.strtab The non-dynamic string table associated with .symtab.
.SUNW_ancillary Contains the information required to identify the primary and ancillary objects, and to identify the object being examined.

The primary object and all ancillary objects contain the same array of section headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either the primary or the ancillary objects.

The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc.

$ cat hello.c
#include <stdio.h>

int
main(int argc, char **argv)
{
    (void) printf("hello, world\n");
    return (0);
}
$ cc -g -zancillary hello.c
$ file a.out a.out.anc
a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc
a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out
$ ./a.out
hello, world

The resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and the ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example.
Index  Section Name     Type            Primary Flags           Ancillary Flags                Primary Size  Ancillary Size
13     .text            PROGBITS        ALLOC EXECINSTR         ALLOC EXECINSTR SUNW_ABSENT    0x131         0
20     .data            PROGBITS        WRITE ALLOC             WRITE ALLOC SUNW_ABSENT        0x4c          0
21     .symtab          SYMTAB          0                       0                              0x450         0x450
22     .strtab          STRTAB          STRINGS                 STRINGS                        0x1ad         0x1ad
24     .debug_info      PROGBITS        SUNW_ABSENT             0                              0             0x1a7
28     .shstrtab        STRTAB          STRINGS                 STRINGS                        0x118         0x118
29     .SUNW_ancillary  SUNW_ancillary  0                       0                              0x30          0x30

The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime is found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime is placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table .symtab and its associated string table .strtab.

It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbols contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together.

$ elfdump -T SUNW_ancillary a.out a.out.anc

a.out:
Ancillary Section:  .SUNW_ancillary
     index  tag                  value
       [0]  ANC_SUNW_CHECKSUM    0x8724
       [1]  ANC_SUNW_MEMBER      0x1        a.out
       [2]  ANC_SUNW_CHECKSUM    0x8724
       [3]  ANC_SUNW_MEMBER      0x1a3      a.out.anc
       [4]  ANC_SUNW_CHECKSUM    0xfbe2
       [5]  ANC_SUNW_NULL        0

a.out.anc:
Ancillary Section:  .SUNW_ancillary
     index  tag                  value
       [0]  ANC_SUNW_CHECKSUM    0xfbe2
       [1]  ANC_SUNW_MEMBER      0x1        a.out
       [2]  ANC_SUNW_CHECKSUM    0x8724
       [3]  ANC_SUNW_MEMBER      0x1a3      a.out.anc
       [4]  ANC_SUNW_CHECKSUM    0xfbe2
       [5]  ANC_SUNW_NULL        0

The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined. The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section of any one of the files. It is possible to determine which file is being examined, from the larger set of files, by comparing the first checksum value to the checksum of each member that follows.

Debugger Access and Use of Ancillary Objects Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and by the single symbol table. The following steps can be used by a debugger to assemble the information contained in these files.
1. Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group and contains information that can be used to obtain a complete list of the files and determine which of those files is the one currently being examined.
2. Create a section header array in memory, using the section header array from the object being examined as an initial template.
3. Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set.
4. The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program.

Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects.
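To make the shape of that merge loop concrete, here is a purely schematic sketch (written as C# for concreteness, not a real ELF reader): every file in the group shares one section header array, and a section's data lives in whichever file does not mark it SHF_SUNW_ABSENT. The Section type and the loadSections delegate are hypothetical stand-ins for real ELF parsing.

using System;
using System.Collections.Generic;

// Simplified stand-in for a parsed section header plus its data.
class Section
{
    public string Name;     // section name, e.g. ".debug_info"
    public bool Absent;     // SHF_SUNW_ABSENT set in this file
    public byte[] Data;     // section contents, when present
}

static class AncillaryMerge
{
    // memberFiles: the file list recovered from the .SUNW_ancillary section.
    public static Section[] Merge(IEnumerable<string> memberFiles,
                                  Func<string, Section[]> loadSections)
    {
        Section[] merged = null;
        foreach (string file in memberFiles)
        {
            Section[] sections = loadSections(file);
            if (merged == null)
                merged = new Section[sections.Length];  // same headers in every file
            for (int i = 0; i < sections.Length; i++)
                if (!sections[i].Absent)
                    merged[i] = sections[i];            // take the file that has the data
        }
        return merged;
    }
}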
ELF Implementation Details (From the Solaris Linker and Libraries Guide) To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and 2 new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual.

Part IV ELF Application Binary Interface
Chapter 13: Object File Format

Object File Format Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type.

e_type Identifies the object file type, as listed in the following table.

Name                 Value   Meaning
ET_NONE              0       No file type
ET_REL               1       Relocatable file
ET_EXEC              2       Executable file
ET_DYN               3       Shared object file
ET_CORE              4       Core file
ET_LOSUNW            0xfefe  Start operating system specific range
ET_SUNW_ANCILLARY    0xfefe  Ancillary object file
ET_HISUNW            0xfefd  End operating system specific range
ET_LOPROC            0xff00  Start processor-specific range
ET_HIPROC            0xffff  End processor-specific range

Sections Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section.

... sh_type Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5. sh_flags Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8. ...

Table 13-5 ELF Section Types, sh_type
Name                  Value
. . .
SHT_LOSUNW            0x6fffffee
SHT_SUNW_ancillary    0x6fffffee
. . .

... SHT_LOSUNW - SHT_HISUNW Values in this inclusive range are reserved for Oracle Solaris OS semantics. SHT_SUNW_ANCILLARY Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section. ...

Table 13-8 ELF Section Attribute Flags
Name                  Value
. . .
SHF_MASKOS            0x0ff00000
SHF_SUNW_NODISCARD    0x00100000
SHF_SUNW_ABSENT       0x00200000
SHF_SUNW_PRIMARY      0x00400000
SHF_MASKPROC          0xf0000000
. . .

... SHF_SUNW_ABSENT Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files.

SHF_SUNW_PRIMARY The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one or more input sections with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status. ...

Two members in the section header, sh_link and sh_info, hold special information, depending on section type.

Table 13-9 ELF sh_link and sh_info Interpretation
sh_type               sh_link                                                     sh_info
. . .
SHT_SUNW_ANCILLARY    The section header index of the associated string table.   0
. . .

Special Sections Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section.

Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes.

Table 13-10 ELF Special Sections
Name               Type                  Attribute
. . .
.SUNW_ancillary    SHT_SUNW_ancillary    None
. . .

... .SUNW_ancillary Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details. ...

Ancillary Section Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF.

In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h.
typedef struct {
    Elf32_Word      a_tag;
    union {
        Elf32_Word  a_val;
        Elf32_Addr  a_ptr;
    } a_un;
} Elf32_Ancillary;

typedef struct {
    Elf64_Xword     a_tag;
    union {
        Elf64_Xword a_val;
        Elf64_Addr  a_ptr;
    } a_un;
} Elf64_Ancillary;

For each object with this type, a_tag controls the interpretation of a_un. a_val These objects represent integer values with various interpretations. a_ptr These objects represent file offsets or addresses. The following ancillary tags exist.

Table 13-NEW1 ELF Ancillary Array Tags
Name                 Value   a_un
ANC_SUNW_NULL        0       Ignored
ANC_SUNW_CHECKSUM    1       a_val
ANC_SUNW_MEMBER      2       a_ptr

ANC_SUNW_NULL Marks the end of the ancillary section. ANC_SUNW_CHECKSUM Provides the checksum for a file in the a_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member. ANC_SUNW_MEMBER Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string that provides the file name.

An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as:

Tag                  Meaning
ANC_SUNW_CHECKSUM    Checksum of this object
ANC_SUNW_MEMBER      Name of object #1
ANC_SUNW_CHECKSUM    Checksum for object #1
. . .
ANC_SUNW_MEMBER      Name of object N
ANC_SUNW_CHECKSUM    Checksum for object N
ANC_SUNW_NULL

An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match.
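That self-identification scan is a short loop; the following sketch (again C# for concreteness) models the ancillary array as a flat list of (tag, value, name) entries, where AncTag and AncEntry are hypothetical simplifications of the Elf32/Elf64_Ancillary records above.

// Simplified tag set mirroring ANC_SUNW_NULL / _CHECKSUM / _MEMBER.
enum AncTag { Null, Checksum, Member }

struct AncEntry
{
    public AncTag Tag;
    public ulong Value;     // a_val for CHECKSUM entries
    public string Name;     // resolved string for MEMBER entries
}

static class AncillaryIdentify
{
    // Returns the member whose checksum matches the leading ANC_SUNW_CHECKSUM,
    // i.e. the file this section was read from, or null if nothing matches.
    public static string WhoAmI(AncEntry[] entries)
    {
        ulong self = entries[0].Value;  // initial checksum: the current object
        for (int i = 1; i + 1 < entries.Length; i++)
        {
            if (entries[i].Tag == AncTag.Member &&
                entries[i + 1].Tag == AncTag.Checksum &&
                entries[i + 1].Value == self)
                return entries[i].Name;
        }
        return null;
    }
}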
Related Other Work The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post-processing step. Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor. It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU style separate debug files even on Solaris. Although this is interesting in terms of giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections when using this approach. The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them. The details of modifying the DWARF data to be usable in this form are involved; please see the above URL for details.

    Read the article

  • Trying to install realtek drivers for ASUS USB-N13, encountering "Compile make driver error: 2"

    - by limp_chimp
    I'm trying to put an ASUS USB-N13 wireless adapter in my desktop running Ubuntu 12.04. The details of my problem are identical to the one described in this question: Connecting Asus USB-N13 Wireless Adapter. As such, I'm running through the exact steps laid out in the top-rated answer to that question. All was going well until I get to building the drivers. sudo bash install.sh produces the following output: ################################################## Realtek Wi-Fi driver Auto installation script Novembor, 21 2011 v1.1.0 ################################################## Decompress the driver source tar ball: rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405.tar.gz rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/ rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/autoconf_rtl8712_usb_linux.h rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/clean rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/cmd/ rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/cmd/rtl8712_cmd.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/cmd/rtl871x_cmd.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/config rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/crypto/ rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/crypto/rtl871x_security.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/debug/ rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/debug/rtl871x_debug.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/eeprom/ rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/eeprom/rtl871x_eeprom.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/efuse/ rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/efuse/rtl8712_efuse.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/hal/ rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/hal/rtl8712/ rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/hal/rtl8712/hal_init.c [...truncated for space...] 
rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/ioctl/rtl871x_ioctl_linux.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/ioctl/rtl871x_ioctl_query.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/ioctl/rtl871x_ioctl_rtl.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/ioctl/rtl871x_ioctl_set.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/Kconfig rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/led/ rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/led/rtl8712_led.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/Makefile rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/mlme/ rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/mlme/ieee80211.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/mlme/rtl871x_mlme.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/mp/ rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/mp/rtl871x_mp.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/mp/rtl871x_mp_ioctl.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/os_dep/ rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/os_dep/linux/ rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/os_dep/linux/cmd_linux.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/os_dep/linux/ioctl_cfg80211.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/os_dep/linux/io_linux.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/os_dep/linux/mlme_linux.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/os_dep/linux/recv_linux.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/os_dep/linux/rtw_android.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/os_dep/linux/xmit_linux.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/os_intf/ rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/os_intf/linux/ rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/os_intf/linux/os_intfs.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/os_intf/linux/usb_intf.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/os_intf/osdep_service.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/pwrctrl/ rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/pwrctrl/rtl871x_pwrctrl.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/recv/ rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/recv/rtl8712_recv.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/recv/rtl871x_recv.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/rf/ rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/rf/rtl8712_rf.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/rf/rtl871x_rf.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/runwpa rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/sta_mgt/ rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/sta_mgt/rtl871x_sta_mgt.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/wlan0dhcp rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/wpa1.conf rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/xmit/ rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/xmit/rtl8712_xmit.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/xmit/rtl871x_xmit.c rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405 Authentication requested [root] for make clean: rm -fr *.mod.c *.mod *.o .*.cmd *.ko *~ rm .tmp_versions -fr ; rm Module.symvers -fr cd cmd ; rm -fr *.mod.c *.mod *.o .*.cmd *.ko cd crypto ; rm -fr *.mod.c *.mod *.o .*.cmd *.ko cd debug ; rm -fr *.mod.c *.mod *.o .*.cmd *.ko cd eeprom ; rm -fr *.mod.c *.mod *.o .*.cmd *.ko cd hal/rtl8712 ; rm -fr *.mod.c *.mod *.o .*.cmd *.ko cd io ; rm -fr *.mod.c *.mod 
*.o .*.cmd *.ko cd ioctl ; rm -fr *.mod.c *.mod *.o .*.cmd *.ko cd led ; rm -fr *.mod.c *.mod *.o .*.cmd *.ko cd mlme ; rm -fr *.mod.c *.mod *.o .*.cmd *.ko cd mp ; rm -fr *.mod.c *.mod *.o .*.cmd *.ko cd os_dep/linux ; rm -fr *.mod.c *.mod *.o .*.cmd *.ko cd os_intf ; rm -fr *.mod.c *.mod *.o .*.cmd *.ko cd os_intf/linux ; rm -fr *.mod.c *.mod *.o .*.cmd *.ko cd pwrctrl ; rm -fr *.mod.c *.mod *.o .*.cmd *.ko cd recv ; rm -fr *.mod.c *.mod *.o .*.cmd *.ko cd rf ; rm -fr *.mod.c *.mod *.o .*.cmd *.ko cd sta_mgt ; rm -fr *.mod.c *.mod *.o .*.cmd *.ko cd xmit; rm -fr *.mod.c *.mod *.o .*.cmd *.ko cd efuse; rm -fr *.mod.c *.mod *.o .*.cmd *.ko Authentication requested [root] for make driver: make ARCH=x86_64 CROSS_COMPILE= -C /lib/modules/3.2.0-23-generic/build M=/home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405 modules make[1]: Entering directory `/usr/src/linux-headers-3.2.0-23-generic' CC [M] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/cmd/rtl871x_cmd.o In file included from /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/cmd/rtl871x_cmd.c:23:0: /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/osdep_service.h: In function ‘_init_timer’: /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/osdep_service.h:151:17: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] In file included from /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/rtl871x_ht.h:25:0, from /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/drv_types.h:67, from /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/cmd/rtl871x_cmd.c:24: /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h: In function ‘get_da’: /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:350:9: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:350:9: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:353:9: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:353:9: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast] 
/home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:356:9: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:356:9: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:359:9: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:359:9: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h: In function ‘get_sa’: /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:374:9: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:374:9: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:377:9: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:377:9: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:380:9: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:380:9: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:383:9: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:383:9: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h: In function ‘get_hdr_bssid’: /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:397:9: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] 
/home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:397:9: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:400:9: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:400:9: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:403:9: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/wifi.h:403:9: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast] In file included from /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/drv_types.h:70:0, from /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/cmd/rtl871x_cmd.c:24: /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/rtl871x_cmd.h: At top level: /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/rtl871x_cmd.h:107:25: error: field ‘event_tasklet’ has incomplete type In file included from /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/drv_types.h:72:0, from /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/cmd/rtl871x_cmd.c:24: /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/rtl871x_xmit.h:355:24: error: field ‘xmit_tasklet’ has incomplete type In file included from /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/drv_types.h:73:0, from /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/cmd/rtl871x_cmd.c:24: /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/rtl871x_recv.h:205:24: error: field ‘recv_tasklet’ has incomplete type In file included from /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/drv_types.h:73:0, from /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/cmd/rtl871x_cmd.c:24: 
/home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/rtl871x_recv.h: In function ‘rxmem_to_recvframe’: /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/rtl871x_recv.h:435:30: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/include/rtl871x_recv.h:435:9: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/cmd/rtl871x_cmd.c: In function ‘_init_cmd_priv’: /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/cmd/rtl871x_cmd.c:93:75: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/cmd/rtl871x_cmd.c:101:60: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/cmd/rtl871x_cmd.c: In function ‘_init_evt_priv’: /home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/cmd/rtl871x_cmd.c:135:59: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] make[2]: *** [/home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/cmd/rtl871x_cmd.o] Error 1 make[1]: *** [_module_/home/thinkpad20/Downloads/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405/driver/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-23-generic' make: *** [modules] Error 2 ################################################## Compile make driver error: 2 Please check error Mesg ################################################## I'm not a superuser, only a hobbyist. I really just want this to work ~.~ so I can get on with my life. Sigh. Anyway, grumbling aside, I hope people can help.

    Read the article

  • Project Navigation and File Nesting in ASP.NET MVC Projects

    - by Rick Strahl
More and more I’m finding myself getting lost in the files in some of my larger Web projects. There’s so much freaking content to deal with – HTML Views, several derived CSS pages, page level CSS, script libraries, application wide scripts and page specific script files etc. etc. Thankfully I use Resharper and the Ctrl-T Go to Anything, which rapidly autocompletes to any file, type or member. Awesome – except when I forget, or when I’m not quite sure of the name of what I’m looking for. Project navigation is still important. Sometimes while working on a project I seem to have 30 or more files open, and trying to locate another new file to open in the solution often ends up being a mental exercise – “where did I put that thing?” It’s those little hesitations that tend to get in the way of workflow frequently.

To make things worse, most NuGet packages for client side frameworks and scripts dump stuff into folders that I generally don’t use. I’ve never been a fan of the ‘Content’ folder in MVC, which is just an empty layer that doesn’t serve much of a purpose. It’s usually the first thing I nuke in every MVC project. To me the project root is where the actual content for a site goes – is there really a need to add another folder to force another path into every resource you use? It’s ugly and also inefficient, as it adds additional bytes to every resource link you embed into a page.

Alternatives I’ve been playing around with different folder layouts recently and found that moving my cheese around has actually made project navigation much easier. In this post I show a couple of things I’ve found useful; maybe you will find some of these useful as well, or at least get some ideas about what can be changed to provide better project flow.

The first thing I’ve been doing is adding a root Code folder and putting all server code into that. I’m a big fan of treating the Web project root folder as my Web root folder, so all content comes from the root without unneeded nesting like the Content folder. By moving all server code out of the root tree (except for Code) the root tree becomes a lot cleaner immediately, as you remove Controllers, App_Start, Models etc. and move them underneath Code. Yes, this adds another folder level for server code, but it leaves only code related things in one place that’s easier to jump back and forth in. Additionally I find myself doing a lot less with server side code these days, and more with client side code, so I want the server code separated from that.

The root folder itself then serves as the root content folder. Specifically I have the Views folder below it, as well as the Css and Scripts folders, which serve to hold only common libraries and global CSS and script code. In these days of building SPA style applications, I also tend to have an App folder there where I keep my application specific JavaScript files, as well as HTML view templates for client SPA apps like Angular. Here’s an example of what this looks like in a relatively small project:

The goal is to keep things that are related together, so I don’t end up jumping around so much in the solution to get to specific project items. The Code folder may irk some of you and hark back to the days of the App_Code folder in non Web-Application projects, but these days I find myself messing with a lot less server side code and much more with client side files – HTML, CSS and JavaScript. Generally I work on a single controller at a time – once that’s open, it’s typically the only server code I work with regularly.
Business logic lives in another project altogether, so other than the controller and maybe ViewModels there’s not a lot of code being accessed in the Code folder. So throwing that off the root and isolating it seems like an easy win.

Nesting Page specific content In a lot of my existing applications that are pure server side MVC applications, perhaps with some JavaScript associated with them, I tend to have page level JavaScript and CSS files. For these types of pages I actually prefer the local files stored in the same folder as the parent view. So typically I have .css and .js files with the same name as the view in the same folder. This looks something like this:

In order for this to work you also have to make a configuration change inside of the /Views/web.config file, as the Views folder is blocked with the BlockViewHandler that prohibits access to content from that folder. It’s easy to fix by changing the path from * to *.cshtml or *.vbhtml, so that only view retrieval is blocked:

<system.webServer>
  <handlers>
    <remove name="BlockViewHandler"/>
    <add name="BlockViewHandler" path="*.cshtml" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler" />
  </handlers>
</system.webServer>

With this in place, from inside of your Views you can then reference those same resources like this:

<link href="~/Views/Admin/QuizPrognosisItems.css" rel="stylesheet" />

and

<script src="~/Views/Admin/QuizPrognosisItems.js"></script>

which works fine. JavaScript and CSS files in the Views folder deploy just like the .cshtml files do, and can be referenced from this folder as well.

Making this happen is not really as straightforward as it should be with just Visual Studio, unfortunately, as there’s no easy way to get the file nesting from the VS IDE directly (you have to modify the .csproj file). However, Mads Kristensen has a nice Visual Studio Add-in that provides file nesting via a shortcut menu option. Using this you can select each of the ‘child’ files and then nest them under a parent file. In the case above I select the .js and .css files and nest them underneath the .cshtml view.

I was even toying with the idea of throwing the controller.cs files into the Views folder, but that’s maybe going a little too far :-) It would work, however, as Visual Studio doesn’t publish .cs files and the compiler doesn’t care where the files live. There are lots of options, and if you think that would make life easier it’s another option to help group related things together.

Are there any downsides to this? Possibly – if you’re using automated minification/packaging tools like ASP.NET Bundling or Grunt/Gulp with Uglify, it becomes a little harder to group script and css files for minification, as you may end up looking in multiple folders instead of a single folder. But – again – that’s a one time configuration step that’s easily handled, and much less intrusive than constantly having to search for files in your project.
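For the ASP.NET Bundling case, that one time configuration step can be as simple as pointing a bundle at the Views tree. Here’s a minimal sketch, assuming the stock System.Web.Optimization bundling is in use; the class and bundle names are made up:

using System.Web.Optimization;

// Collects the nested per-view script and CSS files into two bundles by
// searching the Views tree recursively.
public class ViewBundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        bundles.Add(new ScriptBundle("~/bundles/viewscripts")
            .IncludeDirectory("~/Views", "*.js", searchSubdirectories: true));

        bundles.Add(new StyleBundle("~/bundles/viewcss")
            .IncludeDirectory("~/Views", "*.css", searchSubdirectories: true));
    }
}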
For applications that are more client centric and have a lot more script and HTML template based content I tend to use the same layout for the server components, but the client side code can often be broken out differently. In SPA type applications I tend to follow the App folder approach, where all the application pieces that make up the SPA application end up below the App folder. Here's what that looks like for me – this is an AngularJS project: In this case the App folder holds both the application specific .js files and the partial HTML views that get loaded into this single page SPA application. In this particular Angular SPA application, which has controllers linked to particular partial views, I prefer to keep the script files that are associated with the views – AngularJS controllers in this case – with the actual partials. Again I like the proximity of the view to the main code associated with the view, because 90% of the UI application code that gets written is handled between these two files. This approach works well, but only if controllers are fairly closely aligned with the partials. If you have many smaller sub-controllers or lots of directives where the alignment between views and code is more segmented, this approach starts falling apart and you'll probably be better off with separate folders in a js folder. Following Angular conventions you'd have controllers/directives/services etc. folders. Please note that I'm not saying any of these ways are right or wrong – this is just what has worked for me and why!

Skipping Project Navigation altogether with Resharper
I've talked a bit about project navigation in the project tree, which is a common way to navigate and which we all use at least some of the time, but if you use a tool like Resharper – which has Ctrl-T to jump to anything – you can quickly navigate with a shortcut key and autocomplete search. Here's what Resharper's jump to anything looks like: Resharper's Goto Anything box lets you type and quick-search over files, classes and members of the entire solution, which is a very fast and powerful way to find what you're looking for in your project, bypassing the Solution Explorer altogether. As long as you remember to use it (which I sometimes don't) and you know what you're looking for, it's by far the quickest way to find things in a project. It's a shame that this sort of simple search interface isn't part of the native Visual Studio IDE.

Work how you like to work
Ultimately it all comes down to workflow, how you like to work, and what makes *you* more productive. Following pre-defined patterns is great for consistency, as long as they don't get in the way of how you work. A lot of the default folder structures in Visual Studio for ASP.NET MVC were defined when things were done differently. These days we're dealing with a lot more diverse project content than when ASP.NET MVC was originally introduced, and project organization definitely is something that can get in the way if it doesn't fit your workflow. So take a look and see what works well and what might benefit from organizing files differently. As with so many things in ASP.NET, as things evolve and tend to get more complex I've found that I end up fighting some of the conventions. The good news is that you don't have to follow the conventions, and you have the freedom to do just about anything that works for you.
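To make the earlier bundling caveat concrete, here is a minimal sketch of how page-level scripts and styles nested under the Views tree can still be grouped for minification. It assumes the standard ASP.NET Web Optimization package (System.Web.Optimization); the bundle virtual paths and the Admin folder are illustrative assumptions, not something from the original article:

using System.Web.Optimization;

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // Pick up the per-view .js files that live next to their .cshtml views.
        bundles.Add(new ScriptBundle("~/bundles/adminViews")
            .IncludeDirectory("~/Views/Admin", "*.js"));

        // Same idea for the per-view .css files.
        bundles.Add(new StyleBundle("~/bundles/adminCss")
            .IncludeDirectory("~/Views/Admin", "*.css"));
    }
}

Wired up once from Application_Start via BundleConfig.RegisterBundles(BundleTable.Bundles), this is the 'one time configuration step' mentioned above – the bundles follow the nested layout without any per-file bookkeeping.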
Even though what I've shown here diverges from conventions, I don't think anybody would stumble over these relatively minor changes and not immediately figure out where things live, even in larger projects. But nevertheless think long and hard before breaking those conventions – if there isn't a good reason to break them or the changes don't provide improved workflow then it's not worth it. Break the rules, but only if there's a quantifiable benefit. You may not agree with how I've chosen to divert from the standard project structures in this article, but maybe it gives you some ideas of how you can mix things up to make your existing project flow a little nicer and make it easier to navigate for your environment.

© Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET, MVC.

    Read the article

  • C#: Adding Functionality to 3rd Party Libraries With Extension Methods

    - by James Michael Hare
    Ever have one of those third party libraries that you love but it's missing that one feature or one piece of syntactical candy that would make it so much more useful? This, I truly think, is one of the best uses of extension methods. I began discussing extension methods in my last post (which you can find here), where I expounded upon what I thought were some rules of thumb for using extension methods correctly. As long as you keep in line with those (or similar) rules, they can often be useful for adding that little extra functionality or syntactical simplification for a library that you have little or no control over. Oh sure, you could take an open source project, download the source and add the methods you want, but then every time the library is updated you have to re-add your changes, which can be cumbersome and error prone. And yes, you could possibly extend a class in a third party library and override features, but that's only if the class is not sealed, static, or constructed via factories. This is the perfect place to use an extension method! And the best part is, you and your development team don't need to change anything! Simply add the using for the namespace the extensions are in.

So let's consider this example. I love log4net! Of all the logging libraries I've played with, it is, to me, one of the most flexible and configurable, and it performs great. But this isn't about log4net – well, not directly. So why would I want to add functionality? Well, it's missing one thing I really want in the ILog interface: the ability to specify the logging level at runtime. For example, let's say I declare my ILog instance like so:

    using log4net;

    public class LoggingTest
    {
        private static readonly ILog _log = LogManager.GetLogger(typeof(LoggingTest));
        ...
    }

If you don't know log4net the details aren't important; it just shows that the field _log is the logger I have gotten from log4net. So now that I have that, I can log to it like so:

    _log.Debug("This is the lowest level of logging and just for debugging output.");
    _log.Info("This is an informational message.  Usual normal operation events.");
    _log.Warn("This is a warning, something suspect but not necessarily wrong.");
    _log.Error("This is an error, some sort of processing problem has happened.");
    _log.Fatal("Fatals usually indicate the program is dying hideously.");

And there are many flavors of each of these to log using string formatting, to log exceptions, etc. But one thing there isn't: the ability to easily choose the logging level at runtime. Notice, the logging levels above are chosen at compile time. Of course, you could do some fun stuff with lambdas and wrap it, but that would obscure the simplicity of the interface. And yes there is a Logger property you can dive down into where you can specify a Level, but the Level properties don't really match the ILog interface exactly, and then you have to manually build a LogEvent and... well, it gets messy. I want something simple and sexy so I can say:

    _log.Log(someLevel, "This will be logged at whatever level I choose at runtime!");

Now, some purists out there might say you should always know what level you want to log at, and for the most part I agree with them. For the most part the ILog interface satisfies 99% of my needs.
In fact, for most application logging yes, you do always know the level you will be logging at; but when writing a utility class, you may not always know what level your user wants. I'll tell you, one of my favorite things is to write reusable components. If I had my druthers I'd write framework libraries and shared components all day! And being able to easily log at a runtime-chosen level is a big need for me. After all, if I want my code to really be re-usable, I shouldn't force a user to deal with the logging level I choose. One of my favorite uses for this is in Interceptors – I'll describe Interceptors in my next post and some of my favorites – for now just know that an Interceptor wraps a class and allows you to add functionality to an existing method without changing its signature. At the risk of over-simplifying, it's a very generic implementation of the Decorator design pattern.

So, say for example that you were writing an Interceptor that would time method calls and emit a log message if the method call execution time went beyond a certain threshold. For instance, maybe if your database calls take more than 5,000 ms you want to log a warning, or if a web method call takes over 1,000 ms you want to log an informational message. This would be an excellent use of logging at a generic level. So here was my personal wish-list of requirements for my task:
- Be able to determine if a runtime-specified logging level is enabled.
- Be able to log generically at a runtime-specified logging level.
- Have the same look-and-feel of the existing Debug, Info, Warn, Error, and Fatal calls.

Having the ability to determine if logging for a level is on at runtime is also important, so you don't spend time building a potentially expensive logging message if that level is off. Consider an Interceptor that may log parameters on entrance to the method. If you choose to log those parameters at DEBUG level and DEBUG is not on, you don't want to spend the time serializing those parameters.

Now, mine may not be the most elegant solution, but it performs really well since the enum I provide uses contiguous values – while it's never guaranteed, contiguous switch values usually get compiled into a jump table in IL, which is VERY performant – O(1) – but even if it doesn't, it's still so fast you'd never need to worry about it. So first, I need a way to let users pass in logging levels. Sure, log4net has a Level class, but it's a class with static members, it provides way too many options compared to the ILog interface itself, and it wouldn't perform as well in my level-check – so I define an enum like below.

    namespace Shared.Logging.Extensions
    {
        // enum to specify available logging levels.
        public enum LoggingLevel
        {
            Debug,
            Informational,
            Warning,
            Error,
            Fatal
        }
    }

Now, once I have this, writing the extension methods I need is trivial. Once again, I would typically /// comment fully, but I'm eliminating that here for blogging brevity (note the usings for System and log4net, which the extensions need):

    using System;
    using log4net;

    namespace Shared.Logging.Extensions
    {
        // the extension methods to add functionality to the ILog interface
        public static class LogExtensions
        {
            // Determines if logging is enabled at a given level.
            public static bool IsLogEnabled(this ILog logger, LoggingLevel level)
            {
                switch (level)
                {
                    case LoggingLevel.Debug:
                        return logger.IsDebugEnabled;
                    case LoggingLevel.Informational:
                        return logger.IsInfoEnabled;
                    case LoggingLevel.Warning:
                        return logger.IsWarnEnabled;
                    case LoggingLevel.Error:
                        return logger.IsErrorEnabled;
                    case LoggingLevel.Fatal:
                        return logger.IsFatalEnabled;
                }
                return false;
            }

            // Logs a simple message - uses same signature except adds LoggingLevel
            public static void Log(this ILog logger, LoggingLevel level, object message)
            {
                switch (level)
                {
                    case LoggingLevel.Debug:
                        logger.Debug(message);
                        break;
                    case LoggingLevel.Informational:
                        logger.Info(message);
                        break;
                    case LoggingLevel.Warning:
                        logger.Warn(message);
                        break;
                    case LoggingLevel.Error:
                        logger.Error(message);
                        break;
                    case LoggingLevel.Fatal:
                        logger.Fatal(message);
                        break;
                }
            }

            // Logs a message and exception to the log at specified level.
            public static void Log(this ILog logger, LoggingLevel level, object message, Exception exception)
            {
                switch (level)
                {
                    case LoggingLevel.Debug:
                        logger.Debug(message, exception);
                        break;
                    case LoggingLevel.Informational:
                        logger.Info(message, exception);
                        break;
                    case LoggingLevel.Warning:
                        logger.Warn(message, exception);
                        break;
                    case LoggingLevel.Error:
                        logger.Error(message, exception);
                        break;
                    case LoggingLevel.Fatal:
                        logger.Fatal(message, exception);
                        break;
                }
            }

            // Logs a formatted message to the log at the specified level.
            public static void LogFormat(this ILog logger, LoggingLevel level, string format,
                                         params object[] args)
            {
                switch (level)
                {
                    case LoggingLevel.Debug:
                        logger.DebugFormat(format, args);
                        break;
                    case LoggingLevel.Informational:
                        logger.InfoFormat(format, args);
                        break;
                    case LoggingLevel.Warning:
                        logger.WarnFormat(format, args);
                        break;
                    case LoggingLevel.Error:
                        logger.ErrorFormat(format, args);
                        break;
                    case LoggingLevel.Fatal:
                        logger.FatalFormat(format, args);
                        break;
                }
            }
        }
    }

So there it is! I didn't have to modify the log4net source code, so if a new version comes out, I can just add the new assembly with no changes. I didn't have to subclass and worry about developers not calling my sub-class instead of the original. I simply provide the extension methods, and it's as if the long lost extension methods were always a part of the ILog interface! Consider a very contrived example using the original interface:

    // using the original ILog interface
    public class DatabaseUtility
    {
        private static readonly ILog _log = LogManager.GetLogger(typeof(DatabaseUtility));

        // some theoretical method to time
        IDataReader Execute(string statement)
        {
            var timer = System.Diagnostics.Stopwatch.StartNew();

            // do DB magic

            // this is hard-coded to warn; if you want to change it at runtime, tough luck!
            if (timer.ElapsedMilliseconds > 5000 && _log.IsWarnEnabled)
            {
                _log.WarnFormat("Statement {0} took too long to execute.", statement);
            }
            ...
        }
    }

Now consider this alternate version, where the logging level is a property of the class:

    // using the new extension methods
    public class DatabaseUtility
    {
        private static readonly ILog _log = LogManager.GetLogger(typeof(DatabaseUtility));

        // allow logging level to be specified by user of class instead
        public LoggingLevel ThresholdLogLevel { get; set; }

        // some theoretical method to time
        IDataReader Execute(string statement)
        {
            var timer = System.Diagnostics.Stopwatch.StartNew();

            // do DB magic

            // now the level can be chosen at runtime by the user of the class
            if (timer.ElapsedMilliseconds > 5000 && _log.IsLogEnabled(ThresholdLogLevel))
            {
                _log.LogFormat(ThresholdLogLevel, "Statement {0} took too long to execute.",
                    statement);
            }
            ...
        }
    }

Next time, I'll show one of my favorite uses for these extension methods in an Interceptor.
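To round this out, here is a minimal consuming sketch. The LoggingLevel enum and the extension class are the ones defined above in Shared.Logging.Extensions; the PaymentProcessor class and its AuditLevel property are hypothetical names invented purely for illustration:

    using log4net;
    using Shared.Logging.Extensions;

    public class PaymentProcessor // hypothetical consumer class, for illustration only
    {
        private static readonly ILog _log = LogManager.GetLogger(typeof(PaymentProcessor));

        // The caller of this class decides how chatty it should be.
        public LoggingLevel AuditLevel { get; set; }

        public void Process(string orderId)
        {
            // Guard with IsLogEnabled so no message is built when the
            // runtime-chosen level is switched off.
            if (_log.IsLogEnabled(AuditLevel))
            {
                _log.LogFormat(AuditLevel, "Processing order {0}", orderId);
            }
        }
    }

A caller would simply do something like new PaymentProcessor { AuditLevel = LoggingLevel.Informational }.Process("A-100") – exactly the reusable-component scenario the post describes, with the level chosen by the user of the class rather than its author.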

    Read the article

  • Quick guide to Oracle IRM 11g: Server installation

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index
This is the first of a set of articles designed to assist with the successful installation, configuration and deployment of a document security solution using Oracle IRM. This article goes through a set of simple instructions which detail how to download, install and configure the IRM server, the starting point for building a document security solution. This article contains a subset of information from the official documentation and is focused on installing the server on Oracle Enterprise Linux. If you are planning to deploy on a non-Linux platform, you will need to reference the documentation for platform specific information.

Contents
- Introduction
- Downloading the software
- Preparing a database
- Creating the schema
- WebLogic Server installation
- Installing Oracle IRM

Introduction
Because we are using Oracle Enterprise Linux in this guide, and before we get into the detail of IRM, I'd like to share some Linux tips to make life a bit easier. Use a 64-bit platform; IRM 11g runs just fine on a 32-bit server, but with 64-bit you will build a more future-proof service. Download and install the latest Java JDK package, and make sure you get the 64-bit version if you are on a 64-bit server. Configure Linux to use a good Yum server to simplify installing packages; for Oracle Enterprise Linux we maintain a great public Yum here. Have at least 20GB of free disk space on the partition where you intend to install the IRM server. The downloads are big; then you extract them and then install. This quickly consumes disk space, which you can easily recover by deleting the downloaded and extracted files afterwards. But it's nice to have the disk space spare to keep these around in case you need to restart any part of the installation process again.

Downloading the software
OK, so before you can do anything, you need the software install kits. Luckily Oracle allows you to freely download every technology we create. You'll need to get the following: Oracle WebLogic Server, Oracle Database, Oracle Repository Creation Utility (RCU), and the Oracle IRM server. You can use Microsoft SQL Server 2005 or 2008; in this guide I've used Oracle RDBMS 11gR2 for Linux.

Preparing the database
I'm not going to go through the finer points of installing the database; there are many very good guides on installing the Oracle Database. However one thing I would suggest you think about is enabling TDE, network encryption and using Database Vault. These Oracle database security technologies are excellent for creating a complete end to end security solution. There is no point in going to all the effort to secure document access with IRM when someone can go directly to the database and assign themselves rights to documents. To understand this further, you can see a video of the IRM service using these database security technologies here. With a database up and running we need to create a schema to hold the IRM data. This schema contains the rights model, cryptographic keys, user account IDs and associated rights, etc.

Creating the IRM database schema
Oracle uses the Repository Creation Utility, which builds your schema. Extract the files from the RCU zip, then in a terminal window:

    cd /oracle/install/rcu/bin
    ./rcu

This will launch the Repository Creation Utility and you will be presented with the image to the right. Hit next and continue on to the next dialog. You are asked if you are going to be creating a new schema or wish to drop an existing one; you obviously just need to click next at this point to create a new schema.
The RCU next needs to know where your database is, so you'll need the following details of your database instance. Below, for reference, is the information for my installation.
Hostname: irm.oracle.demo
Port: 1521 (this is the default TCP port for the Oracle Database)
Service Name: irm.oracle.demo (note this is not the SID, but the service name)
Username: sys
Password: ********
Role: SYSDBA

And then select next. Because the RCU contains schemas for many of the Oracle technologies, you now need to select to deploy just the Oracle IRM schema. Open the section under "Enterprise Content Management" and tick the "Oracle Information Rights Management" component. Note that you also get the chance to select a prefix, which defaults to "DEV" (for development). I usually change this to something that reflects my own install: PROD for a production system, INT for internal only, etc. The next step asks for the passwords for the schema users. We are only creating one schema here, so you just enter one password. Some brave souls store this password in an Excel spreadsheet, which is then secured against the IRM server you're about to install in this guide. Nearing the end of the schema creation is the mapping of the tablespaces to the schema. Note I had already set up a tablespace that was encrypted using TDE, and at this point I was able to select that tablespace by clicking in the "Default Tablespace" column. The next dialog confirms your actions, and clicking next causes it to create the schema and default data. After this you are presented with the completion summary.

WebLogic Server installation
The database is now ready, and the next step is to install the application server. Oracle IRM 11g is a JEE application currently supported only on Oracle WebLogic Server, so the next step is to get WebLogic Server installed, which is pretty easy. Depending on the version you download, you either run the binary or, for a 64-bit platform (like mine), run the following command:

    java -d64 -jar wls1033_generic.jar

And in the resulting dialog hit next to start walking through the install. Next choose a directory into which you will install WebLogic Server. I like to change from the default and install into /oracle/; then all my software goes into this one folder, all owned by the "oracle" user. The next dialog asks for your Oracle support information to ensure you are kept up to date. If you have an Oracle support account, enter your details, but for most evaluation systems I leave these fields blank. Again, for evaluation or development systems, I usually stick with the "Typical" install type which you are next asked for. Next you are asked for the JDK which will be used for the server. When installing from the generic jar on a 64-bit platform like in this guide, no JDK is bundled with the installer, but as you can see in the image on the right, it does a good job of detecting the one you've got installed. Defaults for the install directories are usually taken – no changes here, just click next. And finally we are ready to install; hit next, sit back and relax. Typically this takes about 10 minutes. After the install, do not run the quick start; we need to deploy the IRM install itself, from which we will create a new WebLogic domain. For now just hit done, and let's move to the final step of the installation process.

Installing Oracle IRM
The last piece of the puzzle to getting your environment ready is to deploy the IRM files themselves.
Unzip the Oracle Enterprise Content Management 11g zip file and it will create a Disk1 directory. Switch to this folder and in the console run ./runInstaller. This will launch the installer, which will also ask for the location of the JDK; look at the image on the right for the detail. You should now see the first stage of the IRM installation. The dialog warns that you need to have a WebLogic server installed and have created the schemas, but you've just done all that above (I hope), so we are ready to go. The installer now checks that you have all the required libraries installed and that other system parameters are correct. Because nearly all of my development and evaluation installations have the database server on the same system, the installer passes these checks without issue... Next... Now choose where to install the IRM files. You must install into the same Middleware Home as the WebLogic Server installation you just performed; usually the installer already defaults to this location anyway. I also tend to change the Oracle Home Directory to Oracle_IRM so it's clear this is just an IRM install. The summary page tells you about the space needed to deploy the files. Unfortunately the IRM install comes with all of the other Oracle ECM software; you can't just select the IRM files, so everything gets deployed to disk and uses 1.6GB of space! Not fun, but Oracle has to package up similar technologies, otherwise we would have a very large number of installers to QA and manage – again, not fun. Hit Install; time for another drink, maybe a piece of cake or a donut... on a half decent system this part of the install took under 10 minutes. Finally the installation of your IRM server is complete. Click on finish; the next phase is to create the WebLogic domain and start configuring your server. Now move on to the next article in this guide... configuring your IRM server ready to seal your first document.

    Read the article

  • Metro: Query Selectors

    - by Stephen.Walther
    The goal of this blog entry is to explain how to perform queries using selectors when using the WinJS library. In particular, you learn how to use the WinJS.Utilities.query() method and the QueryCollection class to retrieve and modify the elements of an HTML document.

Introduction to Selectors
When you are building a Web application, you need some way of easily retrieving elements from an HTML document. For example, you might want to retrieve all of the input elements which have a certain class. Or, you might want to retrieve the one and only element with an id of favoriteColor. The standard way of retrieving elements from an HTML document is by using a selector. Anyone who has ever created a Cascading Style Sheet has already used selectors. You use selectors in Cascading Style Sheets to apply formatting rules to elements in a document. For example, the following Cascading Style Sheet rule changes the background color of every INPUT element with a class of .red in a document to the color red:

    input.red { background-color: red }

The "input.red" part is the selector which matches all INPUT elements with a class of red. The W3C standard for selectors (technically, their recommendation) is entitled "Selectors Level 3" and the standard is located here: http://www.w3.org/TR/css3-selectors/ Selectors are not only useful for adding formatting to the elements of a document. Selectors are also useful when you need to apply behavior to the elements of a document. For example, you might want to select a particular BUTTON element with a selector and add a click handler to the element so that something happens whenever you click the button. Selectors are not specific to Cascading Style Sheets. You can use selectors in your JavaScript code to retrieve elements from an HTML document. jQuery is famous for its support for selectors. Using jQuery, you can use a selector to retrieve matching elements from a document and modify the elements. The WinJS library enables you to perform the same types of queries as jQuery using the W3C selector syntax.

Performing Queries with the WinJS.Utilities.query() Method
When using the WinJS library, you perform a query using a selector by using the WinJS.Utilities.query() method. The following HTML document contains a BUTTON and a DIV element:

    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="utf-8">
        <title>Application1</title>

        <!-- WinJS references -->
        <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet">
        <script src="//Microsoft.WinJS.0.6/js/base.js"></script>
        <script src="//Microsoft.WinJS.0.6/js/ui.js"></script>

        <!-- Application1 references -->
        <link href="/css/default.css" rel="stylesheet">
        <script src="/js/default.js"></script>
    </head>
    <body>
        <button>Click Me!</button>
        <div style="display:none">
            <h1>Secret Message</h1>
        </div>
    </body>
    </html>

The document contains a reference to the following JavaScript file named \js\default.js:

    (function () {
        "use strict";

        var app = WinJS.Application;

        app.onactivated = function (eventObject) {
            if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
                WinJS.Utilities.query("button").listen("click", function () {
                    WinJS.Utilities.query("div").clearStyle("display");
                });
            }
        };

        app.start();
    })();

The default.js script uses the WinJS.Utilities.query() method to retrieve all of the BUTTON elements in the page. The listen() method is used to wire an event handler to the BUTTON click event. When you click the BUTTON, the secret message contained in the hidden DIV element is displayed.
The clearStyle() method is used to remove the display:none style attribute from the DIV element. Under the covers, the WinJS.Utilities.query() method uses the standard querySelectorAll() method. This means that you can use any selector which is compatible with the querySelectorAll() method when using the WinJS.Utilities.query() method. The querySelectorAll() method is defined in the W3C Selectors API Level 1 standard located here: http://www.w3.org/TR/selectors-api/ Unlike the querySelectorAll() method, the WinJS.Utilities.query() method returns a QueryCollection. We talk about the methods of the QueryCollection class below.

Retrieving a Single Element with the WinJS.Utilities.id() Method
If you want to retrieve a single element from a document, instead of matching a set of elements, then you can use the WinJS.Utilities.id() method. For example, the following line of code changes the background color of an element to the color red:

    WinJS.Utilities.id("message").setStyle("background-color", "red");

The statement above matches the one and only element with an Id of message. For example, the statement matches the following DIV element:

    <div id="message">Hello!</div>

Notice that you do not use a hash when matching a single element with the WinJS.Utilities.id() method. You would need to use a hash when using the WinJS.Utilities.query() method to do the same thing, like this:

    WinJS.Utilities.query("#message").setStyle("background-color", "red");

Under the covers, the WinJS.Utilities.id() method calls the standard document.getElementById() method. The WinJS.Utilities.id() method returns the result as a QueryCollection. If no element matches the identifier passed to WinJS.Utilities.id() then you do not get an error. Instead, you get a QueryCollection with no elements (length=0).

Using the WinJS.Utilities.children() Method
The WinJS.Utilities.children() method enables you to retrieve a QueryCollection which contains all of the children of a DOM element. For example, imagine that you have a DIV element which contains children DIV elements like this:

    <div id="discussContainer">
        <div>Message 1</div>
        <div>Message 2</div>
        <div>Message 3</div>
    </div>

You can use the following code to add borders around all of the child DIV elements and not the container DIV element:

    var discussContainer = WinJS.Utilities.id("discussContainer").get(0);
    WinJS.Utilities.children(discussContainer).setStyle("border", "2px dashed red");

It is important to understand that the WinJS.Utilities.children() method only works with a DOM element and not a QueryCollection. Notice that the get() method is used to retrieve the DOM element which represents the discussContainer.

Working with the QueryCollection Class
Both the WinJS.Utilities.query() method and the WinJS.Utilities.id() method return an instance of the QueryCollection class. The QueryCollection class derives from the base JavaScript Array class and adds several useful methods for working with HTML elements:
addClass(name) – Adds a class to every element in the QueryCollection.
clearStyle(name) – Removes a style from every element in the QueryCollection.
controls(ctor, options) – Enables you to create controls.
get(index) – Retrieves the element from the QueryCollection at the specified index.
getAttribute(name) – Retrieves the value of an attribute for the first element in the QueryCollection.
hasClass(name) – Returns true if the first element in the QueryCollection has a certain class.
include(items) – Includes a collection of items in the QueryCollection.
listen(eventType, listener, capture) – Adds an event listener to every element in the QueryCollection.
query(query) – Performs an additional query on the QueryCollection and returns a new QueryCollection.
removeClass(name) – Removes a class from every element in the QueryCollection.
removeEventListener(eventType, listener, capture) – Removes an event listener from every element in the QueryCollection.
setAttribute(name, value) – Adds an attribute to every element in the QueryCollection.
setStyle(name, value) – Adds a style attribute to every element in the QueryCollection.
template(templateElement, data, renderDonePromiseContract) – Renders a template using the supplied data.
toggleClass(name) – Toggles the specified class for every element in the QueryCollection.

Because the QueryCollection class derives from the base Array class, it also contains all of the standard Array methods like forEach() and slice().

Summary
In this blog post, I've described how you can perform queries using selectors within a Windows Metro style application written with JavaScript. You learned how to return an instance of the QueryCollection class by using the WinJS.Utilities.query(), WinJS.Utilities.id(), and WinJS.Utilities.children() methods. You also learned about the methods of the QueryCollection class.

    Read the article

  • CodePlex Daily Summary for Sunday, April 15, 2012

CodePlex Daily Summary for Sunday, April 15, 2012

Popular Releases
AssaultCube Reloaded: 2.4.1 Valor: POSSIBLE KNIFE CRASH FIX Codename Valor as suggested by LMFAO! on the forums Weapon tweaks Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, download the Linux package. The server pack is ready for both Windows and Linux, but you might need to compile your own for Linux (source included)
callisto: callisto 2.0.25: removed 2 IP ranges from hotspot shield black list.
KBCsv: KBCsv V1.4.0.0: #11872 (skipping records with delimited text can break parser) #11873 (globalization support in CsvWriter) #11185 (more versatile constructor and method overloads)
National Geographic Photo of the Day Wallpaper Changer: Photo of the Day Wallpaper Changer v2.0: National Geographic - Photo of the Day Wallpaper Changer v2.0 is an improved version. It has some new features like improved GUI, automatic update and date to date photo archiver etc. Please check out the user guide for more information. Please copy the exe in a directory and run. It's that simple to use :).
Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.5.6: Bug Fixes: nuget was broken for Timespan and complete build due to how I did the target. Corrected and made all 3 match. Color Slider (and by default Color Picker) didn't respect view state for being disabled depending on how it was set. Test application now has test cases.
Json.NET: Json.NET 4.5 Release 3: Change - DefaultContractResolver.IgnoreSerializableAttribute is now true by default. Fix - Fixed MaxDepth on JsonReader recursively throwing an error. Fix - Fixed SerializationBinder.BindToName not being called with full assembly names.
Visual Studio Team Foundation Server Branching and Merging Guide: v2 - For Visual Studio 11: Welcome to the BETA of the Branching and Merging Guide preview. As this is a BETA release and the quality bar for the final Release has not been achieved, we value your candid feedback and recommend that you do not use or deploy these BETA artifacts in a production environment. Quality-Bar Details: Documentation has been reviewed by Visual Studio ALM Rangers. Documentation has not been through an independent technical review. Documentation has been reviewed by the quality and recording te...
Media Companion: MC 3.435b Release: This release should be the last beta for 3.4xx. A handful of problems have been sorted out since last week's release. If there are no major problems this time, it will be upgraded to 3.500 Stable at the end of the week! General: The .NET Framework has been modified to use the Client profile, as provided by normal Windows updates; no longer is there a requirement to download and install the Full profile! mc_com.exe has been worked on to mimic proper Media Companion output (a big thanks to vbat99...
Wholemy.LinkedLists: wholemy.linkedlists.2012.04.12.38: libs and src
THE NVL Maker: The NVL Maker Ver 3.12: SIM??????,TRA??????,ZIP????。 ????????????????,??????~(??????????????????) ??????? simpatch1440x900 trapatch1440x900 ?????1400x900??1440x900,?????????????Data.xp3。 ???? ?????3.12?EXE????????????????, ??????????????,??Tool/krkrconf.exe,??Editor.exe, ???????????????「??????」。 ?????Editor.exe??????。 ???? ???? http://etale.us/gameupload/THE_NVL_Maker_ver3.12_sim.zip ???? http://www.mediafire.com/?je51683g22bz8vo ??Infinite Creation?? http://bbs.etale.us/forum.php ?????? ???? 3.12 ???
???、????...
Quick Performance Monitor: Version 1.8.2: Version 1.8.2. Adds the ability for qpmset files to also store the window location/size, so predefined 'sets' can be forced to always open in the same place on the screen.
SnmpMessenger: 0.1.1.1: Project Description: SnmpMessenger, a messenger using the SNMP protocol to exchange messages. It's developed in C#. SnmpMessenger is for .Net 4.0 and Mono 2.8. Supports SNMP V1, V2, V3. Features: Send get, set and other requests and get the response. Send and receive traps. Handle requests and return the response. Note: This library is compliant with the Common Language Specification (CLS). The latest version is 0.1.1.1. It is only a messenger; it does not involve VACM. Any problems, please mailto: wa...
Python Tools for Visual Studio: 1.1.1: We're pleased to announce the release of Python Tools for Visual Studio 1.1.1. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including: • Supports CPython and IronPython • Python editor with advanced member and signature intellisense • Code navigation: "Find all refs", goto definition, and object browser • Local and remote debugging • Profiling with multiple view...
Supporting Guidance and Whitepapers: v1 - Team Foundation Service Whitepapers: Welcome to the BETA release of the Team Foundation Service Whitepapers preview. As this is a BETA release and the quality bar for the final Release has not been achieved, we value your candid feedback and recommend that you do not use or deploy these BETA artifacts in a production environment. Quality-Bar Details: Documentation has been reviewed by Visual Studio ALM Rangers. Documentation has been through an independent technical review. All critical bugs have been resolved. Known Issue...
LINQ to Twitter: LINQ to Twitter Beta v2.0.24: Supports .NET 3.5, .NET 4.0, Silverlight 4.0, Windows Phone 7.1, and Client Profile. 100% Twitter API coverage. Also available via NuGet.
Kendo UI ASP.NET Sample Applications: Sample Applications (2012-04-11): Sample application(s) demonstrating the use of Kendo UI in ASP.NET applications.
SCCM Client Actions Tool: SCCM Client Actions Tool v1.12: SCCM Client Actions Tool v1.12 is the latest version. It comes with the following changes since the last version: Improved WMI date conversion to be aware of timezone differences and DST. Fixed new version check. The tool is downloadable as a ZIP file that contains four files: ClientActionsTool.hta – The tool itself. Cmdkey.exe – command line tool for managing cached credentials. This is needed for the alternate credentials feature when running the HTA on Windows XP. Cmdkey.exe is natively availab...
Dual Browsing: Dual Browser: Please note the following: I set up the address bar temporarily to only accept http:// .com addresses. Just type in the name of the website excluding http://, www., and .com (e.g. for www.youtube.com just type youtube, then click OK). The page splitter can be grabbed by holding down your left mouse button and moving left or right. By right clicking on the page background, you can choose to refresh, go back a page and so on.
Demo video: http://youtu.be/L7NTFVM3JUY
Liberty: v3.2.0.1 Release 9th April 2012: Change Log - Fixed - Reach: Fixed a bug where the object editor did not work on non-English operating systems.
Path Copy Copy: 10.1: This release addresses the following work items: 11357 11358 11359. This release is a recommended upgrade, especially for users who didn't install the 10.0.1 version.

New Projects
ADENA: This project consists of the development of a Graphical User Interface (GUI) for a proof assistant for Propositional Classical Logic based on the Natural Deduction method.
Alternity Warships Editor: This is a tool designed to ease the process of creating a new spaceship for Alternity using the Warship rules.
BigBallz: A betting-pool site for various championships, initially conceived for the 2010 soccer World Cup.
Crescent: bunch of mac scripts
DemoVasquez: Demo Project 1
Didrotuos: Didrotuos
Eat Out Advocate: Eat Out Advocate
Kuttiflow: A simple implementation of Workflow for .Net
Luan Van Cuoi Khoa: Final-year thesis, Tay Nguyen University
mapaconwindowsphone: try out a Bing Maps map
Movie Renamer: Is your movie collection a mess? No idea which movie is which? This tool easily renames movies based on the title and the year of the movie. Helps you sort out that movie collection! It will download from IMDB the closest matches based on the name of the file. Very simple, written in WPF, so it will need the .NET 4 framework. At the moment you cannot set how you want the movie to be renamed; it will always be: "MovieTitle (Year).ext".
Online Math Calculator: This is an Online Math Calculator. What this does is take your equation in the usual form and convert every variable to x, y, and every constant to a, b, c, d, e, etc., use Wolfram|Alpha, and then convert back to the input format. This allows you to input most equations.
Orchard on Windows Azure with Dynamic Deploy: Dynamic Deploy is a cloud deployment platform. This project includes the Orchard source code that was used to create a Windows Azure build. The original source has not been modified. We have just added more themes and modules and modified web.config with a machine key. visit http
pcvvpes: pcvvpes
Pesquisa de Satisfacao: Satisfaction survey
Pit of Despair: An XNA 4.0 game in C# focused on learning to write overhead dungeon crawl games. Inspired by games such as Zelda, Wizardry, and perhaps some original Final Fantasy.
Prova Branquinho: Prova Branquinho
RibHat: RibHat is a framework for building websites, forums, blogs, and web-based information systems. It is a set of libraries that help the programmer get rid of the immobility of CMS, obtaining a maximum level of customization.
SetupWizard: a SetupWizard; install a windows service, create an IIS site, and create a database during installation.
SharkOS: This operating system is the system Arkadia OS but with a GUI. It is created in C#; it will be fast and without lag.
Shopping List: Shopping List is a simple WP7 application that enables tracking of items to buy when going for groceries.
Sim Cricket: Cricket simulation game. For cricket and C# fans.
Simple InterNET Daemon: Simple InterNET Daemon
Steggy: Steganography project.
SychevIgor Win8 Apps Source Code: SychevIgor Win8 Apps Source Code
test53768492156478: nothing
Tool to change monitor display frequency on HTPC: This is a small application that can change the monitor's refresh rate by simply running one of the applications for the desired refresh rate.
It was developed to make it easy to change the refresh rate when launching external media players from XBMC and so on... And it was published here, so that others can see how easily it can be done.
TravelSaver: Hajj Umrah USA, Travel Saver
UpdateBot: UpdateBot is a GUI application that simplifies and automates the downloading and parsing of FileHippo.com's Update Checker result pages.

    Read the article

  • SOA Suite Integration: Part 2: A basic BPEL process

    - by Anthony Shorten
    This is the next in the series about SOA Suite integration with Oracle Utilities Application Framework. One of the first scenarios I am going to illustrate in this series is building a basic BPEL process using Web Service calls to the Oracle Utilities Application Framework. The scenario is this: I will pass in the userid, and the BPEL process will call out to the AS-User Web Service we created in Part 1. This is just a basic test that illustrates how to import the Web Service into SOA Suite. To use this scenario, you will need access to Oracle SOA Suite, access to a copy of any Oracle Utilities Application Framework based product, and Oracle JDeveloper (to build the process).

First of all you need to start Oracle JDeveloper and create a new SOA Project to house the BPEL process in. For the purposes of this example I will call the project simpleBPEL and verify that SOA is part of the project. I will select "Composite with BPEL" to denote it as a BPEL process. I can also use the same process to create a Mediator or OSB project (refer to the JDeveloper documentation on these technologies). For this example I will use BPEL 1.1 as my specification standard (BPEL 2.0 can also be used if desired). I give the individual BPEL process the name simpleBPEL (you can use a different name, but I wanted to keep the project and process the same for this example). I will also build a Synchronous BPEL Process, as I want a response from the Web Service. I will leave the defaults to save time. I now have a blank canvas to build my BPEL process against. Note: for simplicity I am going to use as much defaulting as possible. In fact I am not going to specify an input schema for the incoming call, as I will use the basic single field used by BPEL as default.

The first step is to import the AS-User Web Service into my BPEL project. To do this I use the standard Web Service BPEL component from the Component Palette to import the WSDL into the BPEL project. Now the tricky part (just kidding): you drag and drop the component from the Palette onto the right side of the canvas, in the Partner Links swim lane. This swim lane is reserved for Partner Links that have a Partner Role (i.e. being called rather than calling). When you drop the Web Service onto the canvas, the Create Web Service wizard is invoked to ask for details of the Web Service. At this point you give the BPEL node a name; I have used RetrieveUser as the name. I placed the WSDL URL from the XAI Inbound Service screen in the WSDL URL field. Once you specify the URL you can press the "Find existing WSDLs" button to load the information into BPEL from the call. You will notice the Port Type is prefilled with the port from the WSDL. I also suggest that you check "copy wsdl and its dependent artifacts into the project" if you are intending to work on the BPEL process offline. If you do not check this, your target application must be accessible whenever you work on the BPEL process (and that is not always convenient). Note: the perceptive among you will notice that the URL specified in this example is different from the URL in the last post. The reason is that for these demonstrations I shifted to a new server and did not redo all of the past screen captures. If you copy the WSDL into the project you will get an information screen about Localize Files; it is just a confirmation screen. The last confirmation screen is a summary of the partner link (the main tab is locked for editing at this stage). At this stage you have successfully imported the Web Service.
To complete the setup of the Web Service you need to set the credentials for the Web Service to use. Refer to the past post on how to do that. Now to use the Web Service. To call the Web Service (as it is just imported, not yet connected to the BPEL process), you must add an Invoke action to your BPEL process. To do this, select the Invoke action from the BPEL Constructs zone on the Component Palette and drop it between the receiveInput and replyOutput nodes in the editor. This will create an empty Invoke action. You will notice some connectors on the Invoke node. Grab the node closest to your Web Service and drag it to connect the Invoke to your Web Service. This instructs BPEL to use the Invoke to call the Web Service. Once the Invoke action is connected to the Web Service, an Edit Invoke dialog is displayed. At this point I suggest you name the Invoke node. It is important to name the nodes straightaway, and name them appropriately, for you to trace the logic. I used InvokeUser as the name in this example. To complete the node configuration you must create variables to hold the input and output for the call. To do this click on Automatically Create Input Variable on the Edit Invoke dialog. You will be presented with a default variable name. It uses the node name (that is why it is important to name the node before hitting this button) as a prefix. You can name the variable anything, but I usually take the default. Repeat the same for the output variable. You now have a completed node for invoking the service.

You have a very basic BPEL process which contains an input, invoke and output node. It is not complete yet though. You need to tell the BPEL process how to pass data from the input to the invoke step, and how to take the output from the service call and pass it back to the client. You now need to add an Assign node to assign the input to the Web Service. To do this select the Assign activity from the BPEL Constructs zone in the Component Palette. Drag and drop the Assign activity between the receiveInput and InvokeUser nodes, as you want to pass data between these two nodes. You have now added a new Assign node to your BPEL process. Double clicking the node allows you to specify the name of the node; I use AssignUser to describe that I am assigning user data. On the Copy Rules tab you can specify the mapping between the input variable InputVariable/payload/process/input string and the input variable for the Web Service call. We are passing data from the input to BPEL to the relevant input variable on the Web Service. This is simply drag and drop between the two data structures. In the example, I am using the input to pass to the user element in my Web Service, as the user is the primary key for the object. The fields become linked (which means data from source will be copied to target).

Almost there. You now need to process the output from the Web Service call into the outputVariable of the client call. I have decided to pass back one piece of data – the name associated with the user – by concatenating the firstName and lastName elements from the Web Service call. To do this I will use a Transform, as it is not just a matter of an Assign action; it is a concatenation operation. This also illustrates how you can use BPEL functionality to transform data from a Web Service call. As with the other components, you drag and drop the Transform component to the appropriate place in the BPEL process.
In this case we want to transform the output from the Web Service call, so we want it after the InvokeUser action and before the replyOutput action. The Transform component is actually part of the Oracle extensions to the BPEL specification. Double clicking the Transform node will allow you to name the node; in this example I used TransformName. To complete the transform I need to tell the product the source of the transformation and the target of the transform. In the example this is the InvokeUser output variable. I also named the mapper file TransformName. By clicking the + or pencil icon next to the map I can create the map. The mapping screen shows the source and target schemas for me to map across. As with the assign, I can map the relevant elements. In my example, I first map the firstName from the Web Service to the result element. As I want to concatenate the names, I drop the concat function on the call line. I now attach the last name to the function to indicate the concatenation of the fields. By default the names will be concatenated with no space; to make the name legible I add a space between the fields by clicking the function and adding a space in the call. I now have a completed mapping. I can now save the whole project, as my BPEL process is now complete. As you can see, the following happens:
- We accept input from the client (the userid for the call) in the receiveInput step.
- We assign that value to the input parameters for the Web Service call in the AssignUser step.
- We invoke the Web Service call to retrieve the data from the product in the InvokeUser step.
- We take the output from the InvokeUser step and concatenate the names in the TransformName step.
- We pass back the data in the replyOutput step.

At this point we can deploy the BPEL process to the SOA Suite server. I will not cover this aspect, as it is really all SOA Suite specific (it is all done via Oracle JDeveloper). Now we need to test the service in SOA Suite. We will use the Fusion Middleware Control test facility. I will assume that credentials have also been set up as per our previous post (else you will get a 401 error). You navigate to the deployed BPEL process within Fusion Middleware Control and select the Test Service option. Specify some test data in the payload at the bottom of the Test Service screen. In my case I am returning my own userid information. On the Response tab you will see the result. It works. You can verify the steps using the Audit trace facility on individual calls. As you can see this is a basic BPEL process, but you get the idea: importing the Web Service is pretty straightforward. You can create more sophisticated BPEL processes using the full facilities in Oracle SOA Suite. I just showed you the basic principles.
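For readers who want to sanity-check the same data flow outside of BPEL, here is a minimal C# sketch of what the process does end to end. The types below are stand-ins for what a WSDL-to-code tool would generate from the AS-User WSDL; the real class and member names depend on the WSDL, so treat them purely as illustrative assumptions:

    using System;

    // Stand-in for the proxy a tool like svcutil would generate from the AS-User WSDL.
    class AsUserResponse
    {
        public string FirstName = "Anthony";
        public string LastName = "Shorten";
    }

    class AsUserServiceClient
    {
        public AsUserResponse ReadUser(string user)
        {
            // A real proxy would perform the SOAP call here; this stub just echoes data.
            return new AsUserResponse();
        }
    }

    class RetrieveUserSample
    {
        static void Main()
        {
            var client = new AsUserServiceClient();

            // Equivalent of the AssignUser step: pass the userid (the object's primary key).
            AsUserResponse response = client.ReadUser("SYSUSER");

            // Equivalent of the TransformName step: concat(firstName, ' ', lastName).
            Console.WriteLine(response.FirstName + " " + response.LastName);
        }
    }

The BPEL process adds the orchestration, auditing and deployment machinery around exactly this call-assign-transform-reply sequence.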

    Read the article

  • Your Day-by-Day Guide to Agile PLM at Oracle OpenWorld 2012

    - by Kerrie Foy
    This year's Oracle OpenWorld conference is nearly here, and we're all excited about what we have planned! With five days of activities and customer presenters from market leaders and top innovators like The Coca-Cola Company, Starbucks, JDSU, Facebook, GlobalFoundries, and more, this is an event you don't want to miss. I've compiled this day-by-day guide to help anyone keep track of all the "Product Lifecycle Management and Product Value Chain" sessions and activities at OpenWorld 2012, September 30 – October 4 in San Francisco, California.

Monday, October 1
There are great networking activities on Sunday September 30, but PLM specific sessions start after general conference keynotes on Monday, October 1 at 10:45 a.m. at the InterContinental Hotel in room Telegraph Hill. In fact, most of our sessions this year will be held in this room, which is still close to the conference keynotes in Moscone, but just far enough away to allow some focused networking and discussions. This first session, 10:45 – 11:45 a.m., is a joint session with the Agile and AutoVue teams, entitled "Streamline PLM Design-to-Manufacturing Processes with AutoVue Visualization Solutions", featuring presenters from Oracle as well as joint AutoVue and Agile PLM customer GlobalFoundries. In the following 12:15 – 1:15 p.m. slot, there are two sessions to choose from, so if you have a team of representatives attending OpenWorld, you may consider splitting up to catch both of these:
a) Our General Session will be held in the InterContinental Hotel Ballroom C, which will cover our complete enterprise PLM strategy, product updates, and roadmaps. It's our pleasure to feature a customer keynote presentation from Chris Bedi, CIO, and Rajeev Sethi, Director IT Business Engagement, of JDSU.
b) A focused session on integrating PLM with Engineering and Supply Chain Systems will be held on the second floor of Moscone West (next to the InterContinental) in room 2022. Join to discover how these types of integrations help companies manage common and integrated design information across all MCAD, ECAD, and software components.

After a lunch break and perhaps a visit to the Demogrounds in Moscone West, select from two product roadmap sessions in the next time slot (3:15 – 4:15 p.m.): an Agile 9.3.x session located in the InterContinental's Ballroom C, and an Agile PLM for Process session located back in the InterContinental's Telegraph Room. Both sessions will have strong content around each product line's latest releases, vision, and customer examples. We are very pleased to feature Daniel Soosai of Facebook in the A9 session and Vinnie D'Agostino of The Coca-Cola Company in the PLM for Process session. Afterwards, hang in there for one last session of the day from 4:45 – 5:45 p.m.; it's an insightful discussion on leveraging Agile PLM as the Foundation for Enterprise Quality Management, and it's sure to be one of the best. In the Telegraph Room, this session will feature Oracle experts, partner co-presenter David Bartlett from CPG Solutions, and customer co-presenter Thomas Crowe, CIO of PL Developments. Hear their experience around implementing collaborative, integrated solutions to ensure effective knowledge transfer throughout an organization, and how to perform analysis in real time to resolve product quality issues swiftly and efficiently.

On Monday evening there will be plenty of industry, product, and partner dinners, so take advantage of all the networking opportunities and catch some great tunes at the 5 day Oracle OpenWorld Music Festival!

Tuesday, October 2
On Monday evening there will be plenty of industry, product, and partner dinners, so take advantage of all the networking opportunities and catch some great tunes at the five-day Oracle OpenWorld Music Festival!

Tuesday, October 2

Tuesday starts early with a special PLM Networking Brunch, sponsored by several partners, from 8:30 a.m. – 10:30 a.m. at the B Restaurant that sits atop Yerba Buena Gardens. You’ll have the unique opportunity to meet with like-minded industry peers and a PLM partner to discuss a topic of your choosing while enjoying a delicious meal. Registration is required, so to inquire about attending this brunch, please email Terri.Hiskey-AT-oracle.com.

After wrapping up your conversations over brunch, head over to the Marriott Marquis in the Nob Hill CD room for a chance to experience the Oracle Product Lifecycle Analytics solution in a Hands-On Lab, open from 10:15 a.m. – 12:45 p.m. Experts will be there to answer your questions.

Back in the InterContinental Hotel’s Telegraph room, the session on “Ideation and Requirements Management: Capturing the Voice of the Customer” runs from 11:45 a.m. – 12:45 p.m. This may be the session for you if you’re struggling with challenges like too many repositories of customer needs, requests, and ideas; limited visibility into which ideas are being advanced by customers and field resources; or an inability to leverage internal expertise to expose effort and potential risks. This session will discuss how Agile PLM can help you overcome ideation challenges to deliver the right products to your targeted markets and fulfill customer desires.

Next, from 1:15 – 2:15 p.m., join us for a session on Managing Profitable Innovation with Oracle Product Lifecycle Analytics. If you missed the Hands-On Lab, have more questions, or simply want to be inspired by the product’s forward-thinking vision and capabilities, this is a great opportunity to meet the progressive-minded executives behind the application.

After this session, it may be a good opportunity to swing by the Demogrounds in Moscone West and visit the Agile PLM demos at exhibit booths #81 for Agile PLM for Discrete Manufacturing, #70 for Agile PLM for Process, and #82 for AutoVue and Agile PLM Enterprise Visualization. Check out the related Supply Chain Management booths close by if you’re interested - here's the map. There’s always lots to see and do around the exhibit area.

But don’t forget the last session of the day from 5:00 p.m. – 6:00 p.m. in Telegraph Hill on Managing Product Innovation and Compliance in Life Science Companies, a “must-see” if you’re in this industry. Launching innovative products quickly is already a high-stakes challenge, but companies in the life sciences industry face uniquely severe consequences when new products don’t perform or comply as required. In recent years, more and more regulations have become mandatory, and new ones, such as REACH, are currently going into effect for several companies. Customer presenters from pharmaceutical leader Eli Lilly will share how they’ve leveraged Agile PLM to deliver high-quality, innovative products in a fast-paced, heavily regulated market environment.

Tuesday evening, unwind at the Supply Chain Management Reception from 6:00 – 8:00 p.m. at the premier boutique Roe Nightclub and Lounge, which is located about three blocks down on Howard Street (on the other side of Moscone from the InterContinental Hotel). Registration is required. Click here for the details.   
Wednesday, October 3

We have another full line-up on Wednesday, so be ready for an action-packed day. We start with a session at 10:15 – 11:15 a.m. in the Telegraph Room on “PLM for Consumer Products: Building an Engine for Quality and Innovation,” with featured presenters from Starbucks and partner Kalypso. This is a rare opportunity to learn directly from Starbucks how they instill quality and innovation throughout their organization, products, and processes, leveraging PLM disciplines with strong support from their partner.

If you’re not in the consumer products industry, we recommend attending another session at 10:15 – 11:15 a.m. in Moscone West room 3005: “Eco-Enterprise Innovation Awards and the Business Case for Sustainability,” featuring Jeff Henley, Oracle’s Chairman of the Board, and Jon Chorley, Chief Sustainability Officer. Oracle will honor select customers with Oracle’s Eco-Enterprise Innovation award, which recognizes customers and their respective partners who rely on Oracle products to support their green business practices to reduce their environmental impact while improving business efficiencies and reducing costs. The awards presentation is followed by a panel discussion with customers and Oracle executives, who describe how these award-winning organizations are embracing environmental initiatives as a central part of their business strategy and how information technology plays a pivotal role.

Next, at 11:45 a.m. – 12:45 p.m. in Telegraph Hill, attend our session devoted to exploring Product Lifecycle Management’s role in Software Lifecycle Management. This is a thought leadership session with Oracle experts in the field on the importance of change management, and we’ll discuss how Oracle has for years leveraged Agile PLM to develop Agile PLM. If software lifecycle management doesn’t apply to your business or you’d rather engage in some lively one-on-one discussions, we also have a “Supply Chain Meet the Experts” session in Moscone West Room 2001A. Product experts, thought leaders, and executives will be on hand to discuss your questions/topics, so come prepared. This session tends to fill up fast, so try to get in early.

At 1:15 – 2:15 p.m., join us back in Telegraph Hill for a session focused on leveraging the Agile Product Portfolio Management application as the Product Development Master Schedule to improve efficiencies, optimize resources, and gain visibility across projects enterprise-wide to improve portfolio profitability. Customer presenters from Broadcom will explain how they’ve leveraged the product to enable a master schedule with enterprise-level, phase-gate program and project collaboration and resource optimization.

Again in Telegraph Hill, from 3:30 – 4:30 p.m. we have an interesting session with leading semiconductor customer LSI and partner Kalypso on how LSI leveraged Agile PLM to advance from homegrown applications to complete Product Value Chain Management. That type of transition can be challenging, and LSI details how they were able to achieve their goals and the value they gained along the journey – a fascinating account for any company interested in leveraging best practices to innovate their business processes and even end products.

Lastly, we’ll wrap up in Telegraph Hill from 5:00 – 6:00 p.m. 
with a session on “Ensuring New Product Success by Achieving Excellence in New Product Introduction.” This is a cross-industry session, guaranteed to deliver insight into the often elusive practice of creating winning products, and one we’re very excited about. According to IDC Manufacturing Insights analyst Joe Barkai, “Product Failures are not necessarily a result of bad ideas…they are a result of suboptimal decisions.” We’ll show you how to wire your business processes to enhance decision-making and maximize product potential. Now, quickly hit your hotel room to freshen up and then catch one of the many complimentary shuttles to the much-anticipated Oracle Customer Appreciation Event on Treasure Island. We have a very exciting show planned – check out what’s in store here.

Thursday, October 4

PLM has a light schedule on Thursday this year with just one session, but this again is one of our best sessions on managing the Product Value Chain: at 11:15 a.m. – 12:15 p.m. in Telegraph Hill, it’s a customer- and partner-driven session with Sonoco Products and Deloitte telling their story about how to achieve integrated change control by interfacing Agile PLM with Oracle E-Business Suite. Sonoco Products, a global manufacturer of consumer and industrial packaging materials, with its systems integrator, Deloitte, is doing this by implementing prebuilt integration (Oracle Design-to-Release Integration Pack for Agile Product Lifecycle Management for Process and Oracle Process) to integrate Agile with Oracle Product Hub/Oracle Product Information Management and Oracle E-Business Suite. This session presents a case study of how Sonoco is leveraging this solution to improve data quality and build a framework for stronger master data governance.

Even though that ends our PLM line-up at OpenWorld, there will still be many sessions and activities at the conference, so visit the Oracle OpenWorld website to review agendas and build your schedule. And of course, download and bring this guide and the latest version of the Agile PLM Focus-On Document (available soon!). San Francisco is a wonderful city to explore, and we’re glad you’re considering joining the Agile PLM team at Oracle OpenWorld! I hope to see you there! Follow me before the conference and on site for real-time updates about #OOW12 on Twitter @Kerrie_Foy or @AgilePLM.

    Read the article

  • OWB 11gR2 - Early Arriving Facts

    - by Dawei Sun
    A common challenge when building ETL components for a data warehouse is how to handle early arriving facts. OWB 11gR2 introduced a new feature to address this for dimensional objects, entitled Orphan Management. An orphan record is one that does not have a corresponding existing parent record. Orphan management automates the process of handling source rows that do not meet the requirements necessary to form a valid dimension or cube record. In this article, a simple example will show you how to use Orphan Management in OWB. We first import a sample MDL file that contains all the objects we need, then take some time to examine those objects. After that, we prepare the source data and deploy the target tables and the dimension/cube loading maps. Finally, we run the loading maps and check the data in the target dimension/cube tables. OK, let’s start…

1. Import the MDL file and examine the sample project

First, download the zip file from here, which includes an MDL file and three source data files. Then we open OWB Design Center and import orphan_management.mdl by using the menu File->Import->Warehouse Builder Metadata. Now we have several objects in the BI_DEMO project, as below:

- Mapping LOAD_CHANNELS_OM: The mapping for dimension loading.
- Mapping LOAD_SALES_OM: The mapping for cube loading.
- Dimension CHANNELS_OM: The dimension that contains channels data.
- Cube SALES_OM: The cube that contains sales data.
- Table CHANNELS_OM: The star implementation table of dimension CHANNELS_OM.
- Table SALES_OM: The star implementation table of cube SALES_OM.
- Table SRC_CHANNELS: The source table of channels data that will be loaded into dimension CHANNELS_OM.
- Tables SRC_ORDERS and SRC_ORDER_ITEMS: The source tables of sales data that will be loaded into cube SALES_OM.
- Sequence CLASS_OM_DIM_SEQ: The sequence used for loading dimension CHANNELS_OM.

Dimension CHANNELS_OM

This dimension has a hierarchy with three levels: TOTAL, CLASS and CHANNEL. Each level has three attributes: ID (surrogate key), NAME and SOURCE_ID (business key). It has a standard star implementation. The orphan management policy and the default parent setting are shown in the following screenshots. The orphan management policy options that you can set for loading are:

- Reject Orphan: The record is not inserted.
- Default Parent: You can specify a default parent record. This default record is used as the parent record for any record that does not have an existing parent record. If the default parent record does not exist, Warehouse Builder creates it. You specify the attribute values of the default parent record at the time of defining the dimensional object. If any ancestor of the default parent does not exist, Warehouse Builder also creates that record.
- No Maintenance: This is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records.

While removing data from a dimension, you can select one of the following orphan management policies:

- Reject Removal: Warehouse Builder does not allow you to delete the record if it has existing child records.
- No Maintenance: This is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records.

(More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#insertedID1)

Cube SALES_OM

This cube references the dimension CHANNELS_OM. It has three measures: AMOUNT, QUANTITY and COST. 
The orphan management policy settings are shown in the following screenshot. The orphan management policy options that you can set for loading are:

- No Maintenance: Warehouse Builder does not actively detect, reject, or fix orphan rows.
- Default Dimension Record: Warehouse Builder assigns a default dimension record for any row that has an invalid or null dimension key value. Use the Settings button to define the default parent row.
- Reject Orphan: Warehouse Builder does not insert the row if it does not have an existing dimension record.

(More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#BABEACDG)

Mapping LOAD_CHANNELS_OM

This mapping loads source data from table SRC_CHANNELS into dimension CHANNELS_OM. The operator CHANNELS_IN is bound to table SRC_CHANNELS; CHANNELS_OUT is bound to dimension CHANNELS_OM. The TOTALS operator is used for generating a constant value for the top level in the dimension. The CLASS_FILTER operator is used to filter out the “invalid” class name, so we can see what happens when channel records with an “invalid” parent are loaded into the dimension.

Some properties of the dimension operator in this mapping are important to orphan management. See the screenshot below:

- Create Default Level Records: If YES, then default level records will be created. This property must be set to YES for dimensions and cubes if one of their orphan management policies is “Default Parent” or “Default Dimension Record”. This property is set to NO by default, so the user may need to set it to YES manually.
- LOAD policy for INVALID keys / LOAD policy for NULL keys: These two properties have the same meaning as in the dimension editor. The values are set to match the dimension when the user drops the dimension into the mapping. The user does not need to modify these properties.
- Record Error Rows: If YES, error rows will be inserted into the error table when loading the dimension.
- REMOVE Orphan Policy: This property is used when removing data from a dimension. Since the dimension loading type is set to LOAD in this example, this property is disabled.

Mapping LOAD_SALES_OM

This mapping loads source data from tables SRC_ORDERS and SRC_ORDER_ITEMS into cube SALES_OM. This mapping may seem a little complicated, but the operators in the red rectangle are simply used to filter out and generate the records with “invalid” or “null” dimension keys.

Some properties of the cube operator in a mapping are important to orphan management. See the screenshot below:

- Enable Source Aggregation: Should be checked in this example. If the default dimension record orphan policy is set for the cube operator, then it is recommended that source aggregation also be enabled. Otherwise, the orphan management processing may produce multiple fact rows with the same default dimension references, which will cause an “unstable rowset” execution error in the database, since the dimension references are used as update match attributes for updating the fact table.
- LOAD policy for INVALID keys / LOAD policy for NULL keys: These two properties have the same meaning as in the cube editor. The values are set to match the cube editor when the user drops the cube into the mapping. The user does not need to modify these properties.
- Record Error Rows: If YES, error rows will be inserted into the error table when loading the cube.

2. Deploy objects and mappings

We can now deploy the objects. First, make sure the location SALES_WH_LOCAL has been correctly configured. 
Then open the Control Center Manager by using the menu Tools->Control Center Manager. Expand BI_DEMO->SALES_WH_LOCAL and click the SALES_WH node on the project tree to see the objects. Deploy all the objects in the following order:

1. Sequence CLASS_OM_DIM_SEQ
2. Tables CHANNELS_OM, SALES_OM, SRC_CHANNELS, SRC_ORDERS, SRC_ORDER_ITEMS
3. Dimension CHANNELS_OM
4. Cube SALES_OM
5. Mappings LOAD_CHANNELS_OM, LOAD_SALES_OM

Note that we deployed the source tables as well. Normally, we would import source tables from a database instead of deploying them to the target schema. However, in this example, we designed the source tables in OWB and deployed them to the database for the purpose of this demonstration.

3. Prepare and examine source data

Before running the mappings, we need to populate and examine the source data. Run SRC_CHANNELS.sql, SRC_ORDERS.sql and SRC_ORDER_ITEMS.sql as the target user, then check the data in these three tables.

Table SRC_CHANNELS

SQL> select rownum, id, class, name from src_channels;

Records 1–5 are correct; they should be loaded into the dimension without error. Records 6, 7 and 8 have null parents; they should be loaded into the dimension with a default parent value, and inserted into the error table at the same time. Records 9, 10 and 11 have “invalid” parents; they should be rejected by the dimension and inserted into the error table.

Tables SRC_ORDERS and SRC_ORDER_ITEMS

SQL> select rownum, a.id, a.channel, b.amount, b.quantity, b.cost from src_orders a, src_order_items b where a.id = b.order_id;

Record 178 has a null dimension reference; it should be loaded into the cube with a default dimension reference, and inserted into the error table at the same time. Record 179 has an “invalid” dimension reference; it should be rejected by the cube and inserted into the error table. The other records should be aggregated and loaded into the cube correctly.

4. Run the mappings and examine the target data

In the Control Center Manager, expand BI_DEMO->SALES_WH_LOCAL->SALES_WH->Mappings, right-click the LOAD_CHANNELS_OM node, and click Start. Run mapping LOAD_SALES_OM the same way. When they have successfully finished, we can check the data in the target tables.

Table CHANNELS_OM

SQL> select rownum, total_id, total_name, total_source_id, class_id, class_name, class_source_id, channel_id, channel_name, channel_source_id from channels_om order by abs(dimension_key);

Records 1, 2 and 3 are the default dimension records for the three levels. Records 8, 10 and 15 are the loaded records that originally had null parents; we can see their parent name (class_name) is set to DEF_CLASS_NAME. The records whose CHANNEL_NAME is Special_4, Special_5 or Special_6 are not loaded into this table because of their invalid parents.

Error Table CHANNELS_OM_ERR

SQL> select rownum, class_source_id, channel_id, channel_name, channel_source_id, err$$$_error_reason from channels_om_err order by channel_name;

We can see that all the records with a null or invalid parent are inserted into this error table. The error reason is “Default parent used for record” for the first three records, and “No parent found for record” for the last three.

Table SALES_OM

SQL> select a.*, b.channel_name from sales_om a, channels_om b where a.channels=b.channel_id;

We can see the order record with a null channel_name has been loaded into the target table with a default channel_name. The one with an “invalid” channel_name is not loaded. 
Error Table SALES_OM_ERR

SQL> select a.amount, a.cost, a.quantity, a.channels, b.channel_name, a.err$$$_error_reason from sales_om_err a, channels_om b where a.channels=b.channel_id(+);

We can see the order records with a null or invalid channel_name are inserted into the error table. If the dimension reference column is null, the error reason is “Default dimension record used for fact”. If it is invalid, the error reason is “Dimension record not found for fact”.

Summary

In summary, this article illustrated the Orphan Management feature in OWB 11gR2. Automated orphan management policies improve ETL developer and administrator productivity by addressing an important cause of cube and dimension load failures, without requiring developers to explicitly build logic to handle these orphan rows.
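
As a closing aside, you can also spot prospective orphans in the source before ever running a load. The query below is only a sketch against the sample schema used above; it assumes the CLASS column of SRC_CHANNELS holds the parent business key that is matched against CLASS_SOURCE_ID in CHANNELS_OM:

-- List source rows that would be treated as orphans:
-- either the parent key is null, or no matching parent exists yet in the dimension.
select s.id, s.name, s.class
from   src_channels s
where  s.class is null
   or  not exists (select 1
                   from   channels_om d
                   where  d.class_source_id = s.class);

Rows returned here are exactly the ones the Default Parent and Reject Orphan policies will have to deal with at load time.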

    Read the article

  • MVC Automatic Menu

    - by Nuri Halperin
    An ex-colleague of mine used to call his SQL script generator "Super-Scriptmatic 2000". It impressed our then boss little, but was fun to say and use. We called every batch job and script "something 2000" from that day on. I'm tempted to call this one Menu-Matic 2000, except it's waaaay past 2000. Oh well.

The problem: I'm developing a bunch of stuff in MVC. There's no PM to generate mounds of requirements and there's no Ux Architect to create wireframes. During development, things change. Specifically, actions get renamed, moved from controller x to y etc. Well, as the site grows, it becomes a major pain to keep a static menu up to date, because the links change. The HtmlHelper doesn't live up to its name and provides little help. How do I keep this growing list of pesky little forgotten actions reined in? The general plan is:

1. Decorate every action you want as a menu item with a custom attribute
2. Reflect out all menu items into a structure at load time
3. Render the menu as CSS-friendly <ul><li> HTML

The MvcMenuItemAttribute decorates an action, designating it to be included as a menu item:

[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public class MvcMenuItemAttribute : Attribute
{
    public string MenuText { get; set; }
    public int Order { get; set; }
    public string ParentLink { get; set; }
    internal string Controller { get; set; }
    internal string Action { get; set; }

    #region ctor
    public MvcMenuItemAttribute(string menuText) : this(menuText, 0) { }

    public MvcMenuItemAttribute(string menuText, int order)
    {
        MenuText = menuText;
        Order = order;
    }

    internal string Link
    {
        get { return string.Format("/{0}/{1}", Controller, this.Action); }
    }

    internal MvcMenuItemAttribute ParentItem { get; set; }
    #endregion
}

The MenuText allows overriding the text displayed on the menu. The Order allows the items to be ordered. The ParentLink allows you to make this item a child of another menu item. An example action could then be decorated thusly: [MvcMenuItem("Tracks", Order = 20, ParentLink = "/Session/Index")]. All pretty straightforward, methinks.

The challenge with menu hierarchy becomes fairly apparent when you try to render a menu and highlight the "current" item or render a breadcrumb control. Both encounter an ambiguity if you allow a data source to have more than one menu item with the same URL link. The issue is that there is no great way to tell which link a person clicked. Using the referring URL will fail if a user bookmarked the page. Using some extra query string to disambiguate duplicate URLs essentially changes the links, and also adds a chance of collision with other query parameters. Besides, that smells. The stock ASP.Net sitemap provider simply disallows duplicate URLs. I decided not to, and simply pick the first one encountered as the "current". Although it doesn't solve the issue completely – one might say they wanted the second of the two links to be "current" – it allows one to include a link twice (home->deals and products->deals etc.), and the logic of deciding "current" is easy enough to explain to the customer. 
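
To make the parent/child wiring concrete, here's a hypothetical controller using the attribute above (the controller and action names are made up for illustration; only MvcMenuItemAttribute comes from the code we just built):

using System.Web.Mvc;

public class SessionController : Controller
{
    // Top-level menu item; no ParentLink, so it renders at the root.
    [MvcMenuItem("Sessions", Order = 10)]
    public ActionResult Index() { return View(); }

    // Child item: ParentLink points at the parent action's /Controller/Action link.
    [MvcMenuItem("Tracks", Order = 20, ParentLink = "/Session/Index")]
    public ActionResult Tracks() { return View(); }
}

Because Controller and Action are inferred by reflection, renaming an action automatically updates its own menu link; note, though, that any ParentLink strings pointing at it are plain text and still need updating by hand.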
Now that we got that out of the way, let's build the menu data structure:

public static List<MvcMenuItemAttribute> ListMenuItems(Assembly assembly)
{
    var result = new List<MvcMenuItemAttribute>();
    foreach (var type in assembly.GetTypes())
    {
        if (!type.IsSubclassOf(typeof(Controller)))
        {
            continue;
        }
        foreach (var method in type.GetMethods())
        {
            var items = method.GetCustomAttributes(typeof(MvcMenuItemAttribute), false) as MvcMenuItemAttribute[];
            if (items == null)
            {
                continue;
            }
            foreach (var item in items)
            {
                if (String.IsNullOrEmpty(item.Controller))
                {
                    item.Controller = type.Name.Substring(0, type.Name.Length - "Controller".Length);
                }
                if (String.IsNullOrEmpty(item.Action))
                {
                    item.Action = method.Name;
                }
                result.Add(item);
            }
        }
    }
    return result.OrderBy(i => i.Order).ToList();
}

Using reflection, the ListMenuItems method takes an assembly (you will hand it your MVC web assembly) and generates a list of menu items. It digs up all the types, and for each one that is an MVC Controller, digs up the methods. Methods decorated with the MvcMenuItemAttribute get plucked and added to the output list. Again, pretty simple. To make the structure hierarchical, a LINQ expression matches up all the items to their parent:

public static void RegisterMenuItems(List<MvcMenuItemAttribute> items)
{
    _MenuItems = items;
    _MenuItems.ForEach(i => i.ParentItem =
        items.FirstOrDefault(p => String.Equals(p.Link, i.ParentLink, StringComparison.InvariantCultureIgnoreCase)));
}

The _MenuItems is simply an internal list to keep things around for later rendering. Finally, to package the menu building for easy consumption:

public static void RegisterMenuItems(Type mvcApplicationType)
{
    RegisterMenuItems(ListMenuItems(Assembly.GetAssembly(mvcApplicationType)));
}

To bring this puppy home, a call in Global.asax.cs Application_Start() registers the menu. Notice the ugliness of reflection is tucked away from the innocent developer. All they have to do is call RegisterMenuItems() and pass in the type of the application. When you use the new project template, global.asax declares a class public class MvcApplication : HttpApplication, and that is why the Register call passes in that type.

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RegisterRoutes(RouteTable.Routes);

    MvcMenu.RegisterMenuItems(typeof(MvcApplication));
}

What else is left to do? Oh, right, render! 
public static void ShowMenu(this TextWriter output)
{
    var writer = new HtmlTextWriter(output);
    renderHierarchy(writer, _MenuItems, null);
}

public static void ShowBreadCrumb(this TextWriter output, Uri currentUri)
{
    var writer = new HtmlTextWriter(output);
    string currentLink = "/" + currentUri.GetComponents(UriComponents.Path, UriFormat.Unescaped);

    var menuItem = _MenuItems.FirstOrDefault(m => m.Link.Equals(currentLink, StringComparison.CurrentCultureIgnoreCase));
    if (menuItem != null)
    {
        renderBreadCrumb(writer, _MenuItems, menuItem);
    }
}

private static void renderBreadCrumb(HtmlTextWriter writer, List<MvcMenuItemAttribute> menuItems, MvcMenuItemAttribute current)
{
    if (current == null)
    {
        return;
    }
    var parent = current.ParentItem;
    renderBreadCrumb(writer, menuItems, parent);
    writer.Write(current.MenuText);
    writer.Write(" / ");
}

static void renderHierarchy(HtmlTextWriter writer, List<MvcMenuItemAttribute> hierarchy, MvcMenuItemAttribute root)
{
    if (!hierarchy.Any(i => i.ParentItem == root)) return;

    writer.RenderBeginTag(HtmlTextWriterTag.Ul);
    foreach (var current in hierarchy.Where(element => element.ParentItem == root).OrderBy(i => i.Order))
    {
        if (ItemFilter == null || ItemFilter(current))
        {
            writer.RenderBeginTag(HtmlTextWriterTag.Li);
            writer.AddAttribute(HtmlTextWriterAttribute.Href, current.Link);
            writer.AddAttribute(HtmlTextWriterAttribute.Alt, current.MenuText);
            writer.RenderBeginTag(HtmlTextWriterTag.A);
            writer.WriteEncodedText(current.MenuText);
            writer.RenderEndTag(); // link
            renderHierarchy(writer, hierarchy, current);
            writer.RenderEndTag(); // li
        }
    }
    writer.RenderEndTag(); // ul
}

The ShowMenu method renders the menu out to the provided TextWriter. In previous posts I've discussed my partiality to using the well-debugged, time-tested HtmlTextWriter to render HTML rather than writing out angled brackets by hand. In addition, writing out using the actual writer on the actual stream rather than generating string and byte intermediaries (yes, StringBuilder being no exception) disturbs me. To carry out the rendering of a hierarchical menu, the recursive renderHierarchy() is used. You may notice that an ItemFilter is called before rendering each item. I figured that at some point one might want to exclude certain items from the menu based on security role or context or something. That delegate is the hook for such a future feature.

To carry out rendering of a breadcrumb, recursion is used again, this time simply to unwind the parent hierarchy from the leaf node, rendering on the return from the recursion rather than as we go deeper. I guess I was stuck in LISP that day... recursion is fun though.

Now all that is left is some usage! Open your Site.Master or wherever you'd like to place a menu or breadcrumb, and plant one of these calls:

<% MvcMenu.ShowBreadCrumb(this.Writer, Request.Url); %> to show a breadcrumb trail (notice the lack of "=" after <% and the semicolon).

<% MvcMenu.ShowMenu(Writer); %> to show the menu.

As mentioned before, the HTML output is nested <UL> <LI> tags, which should make it easy to style using abundant CSS to produce anything from static horizontal or vertical menus to dynamic drop-downs.

This has been quite a fun little implementation and I was pleased that the code size remained low. The main crux was figuring out how to pass parent information from the attribute to the hierarchy builder, because attributes have restricted parameter types. Once I settled on that implementation, the rest fell into place quite easily.
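
For the two hypothetical actions decorated earlier, the rendered menu would come out roughly like this (indentation shown here for readability):

<ul>
  <li>
    <a href="/Session/Index" alt="Sessions">Sessions</a>
    <ul>
      <li><a href="/Session/Tracks" alt="Tracks">Tracks</a></li>
    </ul>
  </li>
</ul>

Since the nesting mirrors the ParentLink hierarchy, a pure-CSS dropdown (for example, a li:hover > ul rule) needs no extra markup.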

    Read the article
