Search Results

Search found 11567 results on 463 pages for 'map provider'.


  • 14+ Real Estate WordPress Themes

    - by Aditi
    If you are looking for a great WordPress real estate theme, the list below covers some of the best ones available, so you can find the theme best suited to you and keep pace with the growing demands of the real estate business. We have covered only the best themes available. The themes are flexible and can be used by anybody in the real estate business: whether you are a realtor, agent, appraiser or realty firm, they can be adapted to your needs.
    Estate: An immensely powerful and simple-to-manage business theme. It offers advanced SEO control, clean code and styling modification features, and adds a new "Properties" management facility when installed – proving it's far more than just a WordPress theme. It offers flexible page templates, an advanced search facility that allows you to drill down into properties based on very specific criteria, Google Maps integration and smart property image management. It is a complete web solution. It also has IDX functionality through dsIDXpress plugin integration, which allows multi-listing services. Price: $200 | View Demo | Download
    ElegantEstate: Turns your WordPress blog into a full-featured real estate website. The theme makes browsing your listings easy and adds special integration features for property info, photos, Google Maps and more, helping you increase sales by establishing an elegant and professional online presence. It is compatible with Opera, Netscape, Safari, Firefox, IE6/IE7/IE8 and WordPress 3.0. It comes with five color schemes, threaded comments, an optional blog-style structure, Gravatar support, advertisement readiness, widget-ready sidebars, a theme options page (ePanel), custom thumbnail images, PSD files, valid XHTML + CSS, a smooth table-less design, page templates, complete localization and many more features. Price: $39 (package includes more than 55 themes) | View Demo | Download
    Open House: A highly customizable real estate theme, fully compatible with WordPress 3.0+. It has Google Maps integration with Street View and a professional look for both agents and realtors. It suits all markets and countries thanks to theme localization, translation and internationalization, with English, Spanish and Portuguese language files in the Developer Package. Custom scripts make it easy to add, delete and modify listings, and it includes a photo gallery with a lightbox effect, gorgeous photo fade animations and automatic Google Maps integration. The theme can be used as a single- or multi-agent website with individual agent/realtor pages (listings and biography information), an agent photo uploader and a financing calculator. Multi-category search helps potential customers locate the house they want. Price: $39.95 essential | $69.95 standard | $99.95 premium | View Demo | Download
    Residence Real Estate: A stunning WordPress 3.0+ compatible real estate theme. It has a dynamic real estate framework management module for easily adding, editing and deleting features, which makes this theme very easy to customize to market needs. You can add your own labels and values in your own language and switch the theme to your own language, with English and Spanish files included and the ability to add others. It offers multi-category search with breadcrumb-filtered results and easy photo gallery management with drag-and-drop sorting of images. You can also build your own multi-category search menu with custom labels, choices and unlimited dropdown menus, presented in a professional module with search results in breadcrumb navigation. Price: $39.95 essential | $69.95 standard | $99.95 premium | View Demo | Download
    Smooth: A complete WordPress real estate theme with multi-category search, Google Maps integration, and agent photo and logo uploaders, offering a professional and extremely affordable solution for realtors and agents to showcase their properties with ease. You can add your listings with the easy and flexible Dynamic Real Estate Framework, edit, add, modify and delete all features, labels and values within the WordPress administration area, and upload unlimited photos to your galleries using the latest WordPress 3.0+ features. It is a complete solution for real estate sites. Price: $39.95 essential | $69.95 standard | $99.95 premium | View Demo | Download
    Homeowners: A fast-loading, optimized WordPress real estate theme with Google Maps integration, fully compatible with WordPress 3.0 features and all real estate markets. It has a clean, professional look, is full of features that are extremely easy to modify, and ships with 12 styles. English, Spanish and Portuguese language files are provided in the Developer Package. Custom scripts make adding, deleting and modifying listings an easy task, and it includes a photo gallery with a lightbox effect and automatic Google Maps integration with Street View (new). Agents have access only to their own listings and can manage listings from their own account, making this theme an ideal affordable solution for realtors and real estate agencies. The theme can be used as a single- or multi-agent website with individual agent/realtor pages (listings and biography information), an agent photo uploader and a financing calculator. Multi-category search is also provided. Price: $39.95 essential | $69.95 standard | $99.95 premium | View Demo | Download
    Real Agent Real Estate: A clean, grid-based, WordPress 3.0+ compatible real estate theme. It has a dynamic real estate framework management module for easily adding, editing and deleting features, and is easy to customize for your market. You can add your own labels and values in your own language and switch the theme to your own language, with English and Spanish files included and the ability to add others. It offers multi-category search with breadcrumb-filtered results and easy photo gallery management with drag-and-drop sorting of images. You can upload property photos in bulk with the native WordPress uploader and the new image editing and resizing options in WordPress 3.0+. The theme features five color styles (blue, black, red, green and purple) with professional layouts and logo and agent photo uploaders. It is well suited to both individual and multiple agents. Price: $39.95 essential | $69.95 standard | $99.95 premium | View Demo | Download
    Agent Press: The AgentPress theme is an ideal solution for real estate agents. It offers multiple page templates that can be used to create a complete real estate website, from single-property templates to a custom homepage. It is compatible with WordPress 3.0 and 3.1 and has a custom background/header, a property template, 6 layout options, a fixed width, threaded comments and many more features. Price: $99.95 | View Demo | Download
    Real Estate: One of the best real estate themes. It offers single-click auto install of the site, lets users pay for and submit properties on your site, supports a multi-agent site with profiles, and provides a strategically built real estate site with professional design, a user dashboard to edit and renew submissions, auto-generated Google Maps and image slideshows, and many more unique features. Once users search for property matching their criteria, the properties are listed with all the necessary parameters so they can select the property of their choice. Users can also add a property to their favorites and check it later from their member-area dashboard. The admin may display a different sidebar on this page and add widgets of their choice. The theme is full of custom, dynamic widgets such as top agents, a finance calculator, user login, advert blocks, testimonials and so on. There is a property details page where users can see the actual property; the agent details are displayed with full contact details and appropriate links, so the visitor can get all the information about the property being sold and the seller, and may contact them by filling out a simple form. The email is sent directly to the person who listed the property. Price: $89.95 Single | $159.95 Developer | View Demo | Download
    Broker Real Estate: Another WordPress 3.0+ compatible real estate theme. It has a featured property slideshow and a dynamic real estate framework management module for easily adding, editing and deleting features. You can add your own labels and values in your own language. It offers multi-category search with breadcrumb-filtered results and easy photo gallery management with drag-and-drop sorting of images. You can also build your own multi-category search menu with custom labels, choices and unlimited dropdown menus. Price: $39.95 essential | $69.95 standard | $99.95 premium | View Demo | Download
    Decasa: Has a custom search panel that lets your users easily browse your properties by keyword search or category select dropdowns. It offers a property exposé, a user-friendly overview of the most important details of each real estate object, which you can easily add through a post settings meta box on the post edit screen. You can easily create a real estate image gallery, and its theme options panel makes the basic theme settings simple. It supports the new WordPress post thumbnail feature; when uploading an image file the theme automatically creates all the necessary image sizes. You can also create your own custom menu quickly with drag and drop, without touching any code. Price: €39 | View Demo | Download
    RealtorPress: A premium real estate WordPress theme from PremiumPress. A versatile theme that can be used by individual agents or real estate companies, it allows you to easily add property listings via the custom backend admin area or import CSV spreadsheets. It features customisable search options, Google Maps integration, a real estate data custom field creator, image management tools and more. Price: $79 | Premium Collection: $259 (all PremiumPress themes) | View Demo | Download
    Related posts: 21+ WordPress Photo Blog & Portfolio Themes | 14+ WordPress Portfolio Themes | Professional WordPress Business Themes

    Read the article

  • Top things web developers should know about the Visual Studio 2013 release

    - by Jon Galloway
    ASP.NET and Web Tools for Visual Studio 2013 Release Notes. Summary for lazy readers: Visual Studio 2013 is now available for download (on the Visual Studio site and on MSDN subscriber downloads). Visual Studio 2013 installs side by side with Visual Studio 2012 and supports round-tripping between Visual Studio versions, so you can try it out without committing to a switch. Visual Studio 2013 ships with the new version of ASP.NET, which includes ASP.NET MVC 5, ASP.NET Web API 2, Razor 3, Entity Framework 6 and SignalR 2.0. The new ASP.NET release focuses on One ASP.NET, so core features and web tools work the same across the platform (e.g. adding ASP.NET MVC controllers to a Web Forms application). New core features include new templates based on Bootstrap, a new scaffolding system, and a new identity system. Visual Studio 2013 is an incredible editor for web files, including HTML, CSS, JavaScript, Markdown, LESS, CoffeeScript, Handlebars, Angular, Ember, Knockout, etc. Top links: Visual Studio 2013 content on the ASP.NET site is in the standard new releases area: http://www.asp.net/vnext ASP.NET and Web Tools for Visual Studio 2013 Release Notes Short intro videos on the new Visual Studio web editor features from Scott Hanselman and Mads Kristensen Announcing release of ASP.NET and Web Tools for Visual Studio 2013 post on the official .NET Web Development and Tools Blog Scott Guthrie's post: Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework Okay, for those of you who are still with me, let's dig in a bit. Quick web dev notes on downloading and installing Visual Studio 2013 I found Visual Studio 2013 to be a pretty fast install. According to Brian Harry's release post, installing over pre-release versions of Visual Studio is supported. I've installed the release version over pre-release versions, and it worked fine. If you're only going to be doing web development, you can speed up the install if you just select Web Developer tools. Of course, as a good Microsoft employee, I'll mention that you might also want to install some of those other features, like the Store apps for Windows 8 and the Windows Phone 8.0 SDK, but they do download and install a lot of other stuff (e.g. the Windows Phone SDK sets up Hyper-V and downloads several GBs of VMs). So if you're planning just to do web development for now, you can pick just the Web Developer Tools and install the other stuff later. If you've got a fast internet connection, I recommend using the web installer instead of downloading the ISO. The ISO includes all the features, whereas the web installer just downloads what you're installing. Visual Studio 2013 development settings and color theme When you start up Visual Studio, it'll prompt you to pick some defaults. These are totally up to you - whatever suits your development style - and you can change them later. As I said, these are completely up to you. I recommend either the Web Development or Web Development (Code Only) settings. The only real difference is that Code Only hides the toolbars, and you can switch between them using Tools / Import and Export Settings / Reset. Web Development settings Web Development (code only) settings I've usually gone with Web Development (code only) in the past because I just want to focus on the code, although the Standard toolbar does make it easier to switch default web browsers. More on that later. Color theme Sigh. 
Okay, everyone's got their favorite colors. I alternate between Light and Dark depending on my mood, and I personally like how the low contrast on the window chrome in those themes puts the emphasis on my code rather than the tabs and toolbars. I know some people got pretty worked up over that, though, and wanted the blue theme back. I personally don't like it - it reminds me of ancient versions of Visual Studio that I don't want to think about anymore. So here's the thing: if you install Visual Studio Ultimate, it defaults to Blue. The other versions default to Light. If you use Blue, I won't criticize you - out loud, that is. You can change themes really easily - either Tools / Options / Environment / General, or the smart way: ctrl+q for quick launch, then type Theme and hit enter. Signing in During the first run, you'll be prompted to sign in. You don't have to - you can click the "Not now, maybe later" link at the bottom of that dialog. I recommend signing in, though. It's not hooked in with licensing or tracking the kind of code you write to sell you components. It is doing good things, like  syncing your Visual Studio settings between computers. More about that here. So, you don't have to, but I sure do. Overview of shiny new things in ASP.NET land There are a lot of good new things in ASP.NET. I'll list some of my favorite here, but you can read more on the ASP.NET site. One ASP.NET You've heard us talk about this for a while. The idea is that options are good, but choice can be a burden. When you start a new ASP.NET project, why should you have to make a tough decision - with long-term consequences - about how your application will work? If you want to use ASP.NET Web Forms, but have the option of adding in ASP.NET MVC later, why should that be hard? It's all ASP.NET, right? Ideally, you'd just decide that you want to use ASP.NET to build sites and services, and you could use the appropriate tools (the green blocks below) as you needed them. So, here it is. When you create a new ASP.NET application, you just create an ASP.NET application. Next, you can pick from some templates to get you started... but these are different. They're not "painful decision" templates, they're just some starting pieces. And, most importantly, you can mix and match. I can pick a "mostly" Web Forms template, but include MVC and Web API folders and core references. If you've tried to mix and match in the past, you're probably aware that it was possible, but not pleasant. ASP.NET MVC project files contained special project type GUIDs, so you'd only get controller scaffolding support in a Web Forms project if you manually edited the csproj file. Features in one stack didn't work in others. Project templates were painful choices. That's no longer the case. Hooray! I just did a demo in a presentation last week where I created a new Web Forms + MVC + Web API site, built a model, scaffolded MVC and Web API controllers with EF Code First, add data in the MVC view, viewed it in Web API, then added a GridView to the Web Forms Default.aspx page and bound it to the Model. In about 5 minutes. Sure, it's a simple example, but it's great to be able to share code and features across the whole ASP.NET family. Authentication In the past, authentication was built into the templates. So, for instance, there was an ASP.NET MVC 4 Intranet Project template which created a new ASP.NET MVC 4 application that was preconfigured for Windows Authentication. 
All of that authentication stuff was built into each template, so they varied between the stacks, and you couldn't reuse them. You didn't see a lot of changes to the authentication options, since they required big changes to a bunch of project templates. Now, the new project dialog includes a common authentication experience. When you hit the Change Authentication button, you get some common options that work the same way regardless of the template or reference settings you've made. These options work on all ASP.NET frameworks, and all hosting environments (IIS, IIS Express, or OWIN for self-host) The default is Individual User Accounts: This is the standard "create a local account, using username / password or OAuth" thing; however, it's all built on the new Identity system. More on that in a second. The one setting that has some configuration to it is Organizational Accounts, which lets you configure authentication using Active Directory, Windows Azure Active Directory, or Office 365. Identity There's a new identity system. We've taken the best parts of the previous ASP.NET Membership and Simple Identity systems, rolled in a lot of feedback and made big enhancements to support important developer concerns like unit testing and extensiblity. I've written long posts about ASP.NET identity, and I'll do it again. Soon. This is not that post. The short version is that I think we've finally got just the right Identity system. Some of my favorite features: There are simple, sensible defaults that work well - you can File / New / Run / Register / Login, and everything works. It supports standard username / password as well as external authentication (OAuth, etc.). It's easy to customize without having to re-implement an entire provider. It's built using pluggable pieces, rather than one large monolithic system. It's built using interfaces like IUser and IRole that allow for unit testing, dependency injection, etc. You can easily add user profile data (e.g. URL, twitter handle, birthday). You just add properties to your ApplicationUser model and they'll automatically be persisted. Complete control over how the identity data is persisted. By default, everything works with Entity Framework Code First, but it's built to support changes from small (modify the schema) to big (use another ORM, store your data in a document database or in the cloud or in XML or in the EXIF data of your desktop background or whatever). It's configured via OWIN. More on OWIN and Katana later, but the fact that it's built using OWIN means it's portable. You can find out more in the Authentication and Identity section of the ASP.NET site (and lots more content will be going up there soon). New Bootstrap based project templates The new project templates are built using Bootstrap 3. Bootstrap (formerly Twitter Bootstrap) is a front-end framework that brings a lot of nice benefits: It's responsive, so your projects will automatically scale to device width using CSS media queries. For example, menus are full size on a desktop browser, but on narrower screens you automatically get a mobile-friendly menu. The built-in Bootstrap styles make your standard page elements (headers, footers, buttons, form inputs, tables etc.) look nice and modern. Bootstrap is themeable, so you can reskin your whole site by dropping in a new Bootstrap theme. Since Bootstrap is pretty popular across the web development community, this gives you a large and rapidly growing variety of templates (free and paid) to choose from. 
Bootstrap also includes a lot of very useful things: components (like progress bars and badges), useful glyphicons, and some jQuery plugins for tooltips, dropdowns, carousels, etc.). Here's a look at how the responsive part works. When the page is full screen, the menu and header are optimized for a wide screen display: When I shrink the page down (this is all based on page width, not useragent sniffing) the menu turns into a nice mobile-friendly dropdown: For a quick example, I grabbed a new free theme off bootswatch.com. For simple themes, you just need to download the boostrap.css file and replace the /content/bootstrap.css file in your project. Now when I refresh the page, I've got a new theme: Scaffolding The big change in scaffolding is that it's one system that works across ASP.NET. You can create a new Empty Web project or Web Forms project and you'll get the Scaffold context menus. For release, we've got MVC 5 and Web API 2 controllers. We had a preview of Web Forms scaffolding in the preview releases, but they weren't fully baked for RTM. Look for them in a future update, expected pretty soon. This scaffolding system wasn't just changed to work across the ASP.NET frameworks, it's also built to enable future extensibility. That's not in this release, but should also hopefully be out soon. Project Readme page This is a small thing, but I really like it. When you create a new project, you get a Project_Readme.html page that's added to the root of your project and opens in the Visual Studio built-in browser. I love it. A long time ago, when you created a new project we just dumped it on you and left you scratching your head about what to do next. Not ideal. Then we started adding a bunch of Getting Started information to the new project templates. That told you what to do next, but you had to delete all of that stuff out of your website. It doesn't belong there. Not ideal. This is a simple HTML file that's not integrated into your project code at all. You can delete it if you want. But, it shows a lot of helpful links that are current for the project you just created. In the future, if we add new wacky project types, they can create readme docs with specific information on how to do appropriately wacky things. Side note: I really like that they used the internal browser in Visual Studio to show this content rather than popping open an HTML page in the default browser. I hate that. It's annoying. If you're doing that, I hope you'll stop. What if some unnamed person has 40 or 90 tabs saved in their browser session? When you pop open your "Thanks for installing my Visual Studio extension!" page, all eleventy billion tabs start up and I wish I'd never installed your thing. Be like these guys and pop stuff Visual Studio specific HTML docs in the Visual Studio browser. ASP.NET MVC 5 The biggest change with ASP.NET MVC 5 is that it's no longer a separate project type. It integrates well with the rest of ASP.NET. In addition to that and the other common features we've already looked at (Bootstrap templates, Identity, authentication), here's what's new for ASP.NET MVC. Attribute routing ASP.NET MVC now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your routes by annotating your actions and controllers. This supports some pretty complex, customized routing scenarios, and it allows you to keep your route information right with your controller actions if you'd like. 
Here's a controller that includes an action whose method name is Hiding, but I've used AttributeRouting to configure it to /spaghetti/with-nesting/where-is-waldo public class SampleController : Controller { [Route("spaghetti/with-nesting/where-is-waldo")] public string Hiding() { return "You found me!"; } } I enable that in my RouteConfig.cs, and I can use that in conjunction with my other MVC routes like this: public class RouteConfig { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapMvcAttributeRoutes(); routes.MapRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } ); } } You can read more about Attribute Routing in ASP.NET MVC 5 here. Filter enhancements There are two new additions to filters: Authentication Filters and Filter Overrides. Authentication filters are a new kind of filter in ASP.NET MVC that run prior to authorization filters in the ASP.NET MVC pipeline and allow you to specify authentication logic per-action, per-controller, or globally for all controllers. Authentication filters process credentials in the request and provide a corresponding principal. Authentication filters can also add authentication challenges in response to unauthorized requests. Override filters let you change which filters apply to a given action method or controller. Override filters specify a set of filter types that should not be run for a given scope (action or controller). This allows you to configure filters that apply globally but then exclude certain global filters from applying to specific actions or controllers. ASP.NET Web API 2 ASP.NET Web API 2 includes a lot of new features. Attribute Routing ASP.NET Web API supports the same attribute routing system that's in ASP.NET MVC 5. You can read more about the Attribute Routing features in Web API in this article. OAuth 2.0 ASP.NET Web API picks up OAuth 2.0 support, using security middleware running on OWIN (discussed below). This is great for features like authenticated Single Page Applications. OData Improvements ASP.NET Web API now has full OData support. That required adding in some of the most powerful operators: $select, $expand, $batch and $value. You can read more about OData operator support in this article by Mike Wasson. Lots more There's a huge list of other features, including CORS (cross-origin request sharing), IHttpActionResult, IHttpRequestContext, and more. I think the best overview is in the release notes. OWIN and Katana I've written about OWIN and Katana recently. I'm a big fan. OWIN is the Open Web Interfaces for .NET. It's a spec, like HTML or HTTP, so you can't install OWIN. The benefit of OWIN is that it's a community specification, so anyone who implements it can plug into the ASP.NET stack, either as middleware or as a host. Katana is the Microsoft implementation of OWIN. It leverages OWIN to wire up things like authentication, handlers, modules, IIS hosting, etc., so ASP.NET can host OWIN components and Katana components can run in someone else's OWIN implementation. Howard Dierking just wrote a cool article in MSDN magazine describing Katana in depth: Getting Started with the Katana Project. He had an interesting example showing an OWIN based pipeline which leveraged SignalR, ASP.NET Web API and NancyFx components in the same stack. If this kind of thing makes sense to you, that's great. If it doesn't, don't worry, but keep an eye on it. 
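    To make the OWIN and Katana discussion a bit more concrete, here is a minimal self-host sketch. This is my own illustration rather than anything from the article, and it assumes the Microsoft.Owin, Microsoft.Owin.Hosting, Microsoft.Owin.Host.HttpListener and Owin NuGet packages are installed:

    // Minimal, illustrative Katana self-host sketch (not from the original post).
    using System;
    using Microsoft.Owin.Hosting;
    using Owin;

    class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Plug one simple piece of OWIN middleware into the pipeline.
            app.Run(context =>
            {
                context.Response.ContentType = "text/plain";
                return context.Response.WriteAsync("Hello from OWIN middleware");
            });
        }
    }

    class Program
    {
        static void Main()
        {
            // Katana hosts the OWIN pipeline on a plain HttpListener - no IIS required.
            using (WebApp.Start<Startup>("http://localhost:9000"))
            {
                Console.WriteLine("Listening on http://localhost:9000 - press Enter to quit.");
                Console.ReadLine();
            }
        }
    }

    The same Configuration method could just as easily wire up SignalR, Web API or authentication middleware, which is the pluggability point being made here.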
You're going to see some cool things happen as a result of ASP.NET becoming more and more pluggable. Visual Studio Web Tools Okay, this stuff's just crazy. Visual Studio has been adding some nice web dev features over the past few years, but they've really cranked it up for this release. Visual Studio is by far my favorite code editor for all web files: CSS, HTML, JavaScript, and lots of popular libraries. Stop thinking of Visual Studio as a big editor that you only use to write back-end code. Stop editing HTML and CSS in Notepad (or Sublime, Notepad++, etc.). Visual Studio starts up in under 2 seconds on a modern computer with an SSD. Misspelling HTML attributes or your CSS classes or jQuery or Angular syntax is stupid. It doesn't make you a better developer, it makes you a silly person who wastes time. Browser Link Browser Link is a real-time, two-way connection between Visual Studio and all connected browsers. It's only attached when you're running locally, in debug, but it applies to any and all connected browsers, including emulators. You may have seen demos that showed the browsers refreshing based on changes in the editor, and I'll agree that's pretty cool. But it's really just the start. It's a two-way connection, and it's built for extensibility. That means you can write extensions that push information from your running application (in IE, Chrome, a mobile emulator, etc.) back to Visual Studio. Mads and team have shown off some demonstrations where they enabled edit mode in the browser, which updated the source HTML back in Visual Studio. It's also possible to look at how the rendered HTML performs, check for compatibility issues, watch for unused CSS classes - the sky's the limit. New HTML editor The previous HTML editor had a lot of old code that didn't allow for improvements. The team rewrote the HTML editor to take advantage of the new(ish) extensibility features in Visual Studio, which then allowed them to add in all kinds of features - things like CSS class and ID IntelliSense (so you type style="" and get a list of classes and IDs for your project), smart indent based on how your document is formatted, JavaScript reference auto-sync, etc. Here's a 3 minute tour from Mads Kristensen. Lots more Visual Studio web dev features That's just a sampling - there's a ton of great features for JavaScript editing, CSS editing, publishing, and Page Inspector (which shows real-time rendering of your page inside Visual Studio). Here are some more short videos showing those features. Lots, lots more Okay, that's just a summary, and it's still quite a bit. Head on over to http://asp.net/vnext for more information, and download Visual Studio 2013 now to get started!
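    Circling back to the Identity point above ("you just add properties to your ApplicationUser model and they'll automatically be persisted"), here is a minimal sketch of what that looks like in the default MVC 5 Individual User Accounts template. The specific property names are my own examples, not part of the template:

    // Sketch of extending the template's ApplicationUser with profile data.
    // Assumes the default MVC 5 Individual User Accounts template, where
    // ApplicationUser derives from IdentityUser (ASP.NET Identity).
    using System;
    using Microsoft.AspNet.Identity.EntityFramework;

    public class ApplicationUser : IdentityUser
    {
        // Extra profile properties are picked up and persisted by the
        // Entity Framework-backed identity store - no extra mapping needed.
        public string Url { get; set; }
        public string TwitterHandle { get; set; }
        public DateTime? Birthday { get; set; }
    }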

    Read the article

  • Entity Association Mapping with Code First Part 1 : Mapping Complex Types

    - by mortezam
    Last week the CTP5 build of the new Entity Framework Code First has been released by data team at Microsoft. Entity Framework Code-First provides a pretty powerful code-centric way to work with the databases. When it comes to associations, it brings ultimate flexibility. I’m a big fan of the EF Code First approach and am planning to explain association mapping with code first in a series of blog posts and this one is dedicated to Complex Types. If you are new to Code First approach, you can find a great walkthrough here. In order to build a solid foundation for our discussion, we will start by learning about some of the core concepts around the relationship mapping.   What is Mapping?Mapping is the act of determining how objects and their relationships are persisted in permanent data storage, in our case, relational databases. What is Relationship mapping?A mapping that describes how to persist a relationship (association, aggregation, or composition) between two or more objects. Types of RelationshipsThere are two categories of object relationships that we need to be concerned with when mapping associations. The first category is based on multiplicity and it includes three types: One-to-one relationships: This is a relationship where the maximums of each of its multiplicities is one. One-to-many relationships: Also known as a many-to-one relationship, this occurs when the maximum of one multiplicity is one and the other is greater than one. Many-to-many relationships: This is a relationship where the maximum of both multiplicities is greater than one. The second category is based on directionality and it contains two types: Uni-directional relationships: when an object knows about the object(s) it is related to but the other object(s) do not know of the original object. To put this in EF terminology, when a navigation property exists only on one of the association ends and not on the both. Bi-directional relationships: When the objects on both end of the relationship know of each other (i.e. a navigation property defined on both ends). How Object Relationships Are Implemented in POCO domain models?When the multiplicity is one (e.g. 0..1 or 1) the relationship is implemented by defining a navigation property that reference the other object (e.g. an Address property on User class). When the multiplicity is many (e.g. 0..*, 1..*) the relationship is implemented via an ICollection of the type of other object. How Relational Database Relationships Are Implemented? Relationships in relational databases are maintained through the use of Foreign Keys. A foreign key is a data attribute(s) that appears in one table and must be the primary key or other candidate key in another table. With a one-to-one relationship the foreign key needs to be implemented by one of the tables. To implement a one-to-many relationship we implement a foreign key from the “one table” to the “many table”. We could also choose to implement a one-to-many relationship via an associative table (aka Join table), effectively making it a many-to-many relationship. Introducing the ModelNow, let's review the model that we are going to use in order to implement Complex Type with Code First. It's a simple object model which consist of two classes: User and Address. Each user could have one billing address. The Address information of a User is modeled as a separate class as you can see in the UML model below: In object-modeling terms, this association is a kind of aggregation—a part-of relationship. 
Aggregation is a strong form of association; it has some additional semantics with regard to the lifecycle of objects. In this case, we have an even stronger form, composition, where the lifecycle of the part is fully dependent upon the lifecycle of the whole. Fine-grained domain models The motivation behind this design was to achieve Fine-grained domain models. In crude terms, fine-grained means “more classes than tables”. For example, a user may have both a billing address and a home address. In the database, you may have a single User table with the columns BillingStreet, BillingCity, and BillingPostalCode along with HomeStreet, HomeCity, and HomePostalCode. There are good reasons to use this somewhat denormalized relational model (performance, for one). In our object model, we can use the same approach, representing the two addresses as six string-valued properties of the User class. But it’s much better to model this using an Address class, where User has the BillingAddress and HomeAddress properties. This object model achieves improved cohesion and greater code reuse and is more understandable. Complex Types: Splitting a Table Across Multiple Types Back to our model, there is no difference between this composition and other weaker styles of association when it comes to the actual C# implementation. But in the context of ORM, there is a big difference: A composed class is often a candidate Complex Type. But C# has no concept of composition—a class or property can’t be marked as a composition. The only difference is the object identifier: a complex type has no individual identity (i.e. no AddressId defined on Address class) which make sense because when it comes to the database everything is going to be saved into one single table. How to implement a Complex Types with Code First Code First has a concept of Complex Type Discovery that works based on a set of Conventions. The convention is that if Code First discovers a class where a primary key cannot be inferred, and no primary key is registered through Data Annotations or the fluent API, then the type will be automatically registered as a complex type. Complex type detection also requires that the type does not have properties that reference entity types (i.e. all the properties must be scalar types) and is not referenced from a collection property on another type. Here is the implementation: public class User{    public int UserId { get; set; }    public string FirstName { get; set; }    public string LastName { get; set; }    public string Username { get; set; }    public Address Address { get; set; }} public class Address {     public string Street { get; set; }     public string City { get; set; }            public string PostalCode { get; set; }        }public class EntityMappingContext : DbContext {     public DbSet<User> Users { get; set; }        } With code first, this is all of the code we need to write to create a complex type, we do not need to configure any additional database schema mapping information through Data Annotations or the fluent API. Database SchemaThe mapping result for this object model is as follows: Limitations of this mappingThere are two important limitations to classes mapped as Complex Types: Shared references is not possible: The Address Complex Type doesn’t have its own database identity (primary key) and so can’t be referred to by any object other than the containing instance of User (e.g. a Shipping class that also needs to reference the same User Address). 
No elegant way to represent a null reference: There is no elegant way to represent a null reference to an Address. When reading from the database, EF Code First always initializes the Address object even if the values in all mapped columns of the complex type are null. This means that if you store a complex type object with all null property values, EF Code First returns an initialized complex type when the owning entity object is retrieved from the database. Summary: In this post we learned about fine-grained domain models, of which complex types are just one example. Fine-grained modeling is fully supported by EF Code First and is one of the most important requirements for a rich domain model. A complex type is usually the simplest way to represent a one-to-one relationship, and because the lifecycle is almost always dependent in such a case, it's either an aggregation or a composition in UML. In the next posts we will revisit the same domain model and learn about other ways to map a one-to-one association that do not have the limitations of complex types. References ADO.NET team blog Mapping Objects to Relational Databases Java Persistence with Hibernate
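    As a side note to the convention-based discovery described above, EF Code First also lets you register a complex type explicitly, either with a data annotation or through the model builder. The sketch below uses the EF 4.1+ API shapes ([ComplexType], DbModelBuilder.ComplexType<T>(), OnModelCreating); exact class and namespace names differed slightly in the CTP5 build this post describes, so treat it as illustrative:

    using System.ComponentModel.DataAnnotations;
    using System.Data.Entity;

    // Marking the type explicitly is redundant when convention-based discovery
    // already applies, but it documents the intent and guards against the type
    // accidentally being treated as an entity if a key-like property is added.
    [ComplexType]
    public class Address
    {
        public string Street { get; set; }
        public string City { get; set; }
        public string PostalCode { get; set; }
    }

    public class EntityMappingContext : DbContext
    {
        public DbSet<User> Users { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Fluent-API equivalent of the [ComplexType] attribute above.
            modelBuilder.ComplexType<Address>();
        }
    }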

    Read the article

  • Cutting edge technology, a lone Movember ranger and a 5-a-side football club ...meet the team at Oracle’s Belfast Offices.

    - by user10729410
    By Olivia O’Connell To see what’s in store at Oracle’s next Open Day which comes to Belfast this week, I visited the offices with some colleagues to meet the team and get a feel for what‘s in store on November 29th. After being warmly greeted by Frances and Francesca, who make sure Front of House and Facilities run smoothly, we embarked on a quick tour of the 2 floors Oracle occupies, led by VP Bo. Then it was time to seek out some willing volunteers to be interviewed/photographed - what a shy bunch! A bit of coaxing from the social media team was needed here! In a male-dominated environment, the few women on the team caught my eye immediately. I got chatting to Susan, a business analyst, and Bronagh, a tech writer. It becomes clear during our chat that the male/female divide is not an issue – “everyone here just gets on with the job,” says Susan. “We’re all around the same age and have similar priorities and luckily everyone is really friendly so there are no problems.” A graduate of Queen’s University in Belfast majoring in maths & computer science, Susan works closely with product management and the development teams to ensure that the final project delivered to clients meets and exceeds their expectations. Bronagh, who joined after working for a tech company in Montreal and gaining her post-grad degree at University of Ulster, agrees that the work is challenging but “the environment is so relaxed and friendly”. 
    Software developer David is taking the Movember challenge for the first time to raise vital funds and awareness for men’s health. Like other colleagues in the office, he is a University of Ulster graduate and works on Reference applications and Merchandising Tools, which enable customers to establish e-shops using Oracle technologies. The social activities are headed up by Gordon, a software engineer on the commerce team who joined 4 years ago after graduating from the University of Strathclyde at Glasgow with a degree in Computer Science. Everyone is unanimous that the best things about working at Oracle’s Belfast offices are the casual, friendly environment and the opportunity to be at the cutting edge of technology. We’re looking forward to our next trip to Belfast for some cool demos and to meet candidates. And as for the camera-shyness? Look who came out to have their picture taken at the end of the day! The Oracle offices in Belfast are located on the 6th floor, Victoria House, Gloucester Street, Belfast BT1 4LS, UK. Open day takes place on Thursday, 29th November, 4pm – 8pm. Visit the 5 Demo Stations to find out more about each team's activities and projects to date. 
See live demos including "Engaging the Customer", "Managing Your Store", "Helping the Customer", "Shopping on-line" and "The Commerce Experience" processes. The "Working @Oracle" stand will give you the chance to connect with our recruitment team and get information about the Recruitment process and making your career path in Oracle. Register here.

    Read the article

  • ASP.NET MVC ‘Extendable-hooks’ – ControllerActionInvoker class

    - by nmarun
    There’s a class ControllerActionInvoker in ASP.NET MVC. This can be used as one of an hook-points to allow customization of your application. Watching Brad Wilsons’ Advanced MP3 from MVC Conf inspired me to write about this class. What MSDN says: “Represents a class that is responsible for invoking the action methods of a controller.” Well if MSDN says it, I think I can instill a fair amount of confidence into what the class does. But just to get to the details, I also looked into the source code for MVC. Seems like the base class Controller is where an IActionInvoker is initialized: 1: protected virtual IActionInvoker CreateActionInvoker() { 2: return new ControllerActionInvoker(); 3: } In the ControllerActionInvoker (the O-O-B behavior), there are different ‘versions’ of InvokeActionMethod() method that actually call the action method in question and return an instance of type ActionResult. 1: protected virtual ActionResult InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary<string, object> parameters) { 2: object returnValue = actionDescriptor.Execute(controllerContext, parameters); 3: ActionResult result = CreateActionResult(controllerContext, actionDescriptor, returnValue); 4: return result; 5: } I guess that’s enough on the ‘behind-the-screens’ of this class. Let’s see how we can use this class to hook-up extensions. Say I have a requirement that the user should be able to get different renderings of the same output, like html, xml, json, csv and so on. The user will type-in the output format in the url and should the get result accordingly. For example: http://site.com/RenderAs/ – renders the default way (the razor view) http://site.com/RenderAs/xml http://site.com/RenderAs/csv … and so on where RenderAs is my controller. There are many ways of doing this and I’m using a custom ControllerActionInvoker class (even though this might not be the best way to accomplish this). For this, my one and only route in the Global.asax.cs is: 1: routes.MapRoute("RenderAsRoute", "RenderAs/{outputType}", 2: new {controller = "RenderAs", action = "Index", outputType = ""}); Here the controller name is ‘RenderAsController’ and the action that’ll get called (always) is the Index action. The outputType parameter will map to the type of output requested by the user (xml, csv…). I intend to display a list of food items for this example. 1: public class Item 2: { 3: public int Id { get; set; } 4: public string Name { get; set; } 5: public Cuisine Cuisine { get; set; } 6: } 7:  8: public class Cuisine 9: { 10: public int CuisineId { get; set; } 11: public string Name { get; set; } 12: } Coming to my ‘RenderAsController’ class. I generate an IList<Item> to represent my model. 1: private static IList<Item> GetItems() 2: { 3: Cuisine cuisine = new Cuisine { CuisineId = 1, Name = "Italian" }; 4: Item item = new Item { Id = 1, Name = "Lasagna", Cuisine = cuisine }; 5: IList<Item> items = new List<Item> { item }; 6: item = new Item {Id = 2, Name = "Pasta", Cuisine = cuisine}; 7: items.Add(item); 8: //... 9: return items; 10: } My action method looks like 1: public IList<Item> Index(string outputType) 2: { 3: return GetItems(); 4: } There are two things that stand out in this action method. The first and the most obvious one being that the return type is not of type ActionResult (or one of its derivatives). Instead I’m passing the type of the model itself (IList<Item> in this case). 
We’ll convert this to some type of an ActionResult in our custom controller action invoker class later. The second thing (a little subtle) is that I’m not doing anything with the outputType value that is passed on to this action method. This value will be in the RouteData dictionary and we’ll use this in our custom invoker class as well. It’s time to hook up our invoker class. First, I’ll override the Initialize() method of my RenderAsController class. 1: protected override void Initialize(RequestContext requestContext) 2: { 3: base.Initialize(requestContext); 4: string outputType = string.Empty; 5:  6: // read the outputType from the RouteData dictionary 7: if (requestContext.RouteData.Values["outputType"] != null) 8: { 9: outputType = requestContext.RouteData.Values["outputType"].ToString(); 10: } 11:  12: // my custom invoker class 13: ActionInvoker = new ContentRendererActionInvoker(outputType); 14: } Coming to the main part of the discussion – the ContentRendererActionInvoker class: 1: public class ContentRendererActionInvoker : ControllerActionInvoker 2: { 3: private readonly string _outputType; 4:  5: public ContentRendererActionInvoker(string outputType) 6: { 7: _outputType = outputType.ToLower(); 8: } 9: //... 10: } So the outputType value that was read from the RouteData, which was passed in from the url, is being set here in  a private field. Moving to the crux of this article, I now override the CreateActionResult method. 1: protected override ActionResult CreateActionResult(ControllerContext controllerContext, ActionDescriptor actionDescriptor, object actionReturnValue) 2: { 3: if (actionReturnValue == null) 4: return new EmptyResult(); 5:  6: ActionResult result = actionReturnValue as ActionResult; 7: if (result != null) 8: return result; 9:  10: // This is where the magic happens 11: // Depending on the value in the _outputType field, 12: // return an appropriate ActionResult 13: switch (_outputType) 14: { 15: case "json": 16: { 17: JavaScriptSerializer serializer = new JavaScriptSerializer(); 18: string json = serializer.Serialize(actionReturnValue); 19: return new ContentResult { Content = json, ContentType = "application/json" }; 20: } 21: case "xml": 22: { 23: XmlSerializer serializer = new XmlSerializer(actionReturnValue.GetType()); 24: using (StringWriter writer = new StringWriter()) 25: { 26: serializer.Serialize(writer, actionReturnValue); 27: return new ContentResult { Content = writer.ToString(), ContentType = "text/xml" }; 28: } 29: } 30: case "csv": 31: controllerContext.HttpContext.Response.AddHeader("Content-Disposition", "attachment; filename=items.csv"); 32: return new ContentResult 33: { 34: Content = ToCsv(actionReturnValue as IList<Item>), 35: ContentType = "application/ms-excel" 36: }; 37: case "pdf": 38: string filePath = controllerContext.HttpContext.Server.MapPath("~/items.pdf"); 39: controllerContext.HttpContext.Response.AddHeader("content-disposition", 40: "attachment; filename=items.pdf"); 41: ToPdf(actionReturnValue as IList<Item>, filePath); 42: return new FileContentResult(StreamFile(filePath), "application/pdf"); 43:  44: default: 45: controllerContext.Controller.ViewData.Model = actionReturnValue; 46: return new ViewResult 47: { 48: TempData = controllerContext.Controller.TempData, 49: ViewData = controllerContext.Controller.ViewData 50: }; 51: } 52: } A big method there! The hook I was talking about kinda above actually is here. This is where different kinds / formats of output get returned based on the output type requested in the url. 
When the _outputType is not set (string.Empty, as set in the Global.asax.cs file), the razor view gets rendered (lines 45-50). This is the default behavior in most MVC applications, wherein a view (webform/razor) gets rendered in the browser. As you see here, this gets returned as a ViewResult. But then, for an outputType of json/xml/csv, a ContentResult gets returned, while for pdf, a FileContentResult is returned. Here is how the different kinds of output look: This is how we can leverage this feature of ASP.NET MVC to develop a better application. I’ve used the iTextSharp library to convert to a pdf format. Mike gives quite a bit of detail regarding this library here. You can download the sample code here. (You’ll get an option to download once you open the link). Verdict: Hot chocolate: $3; Reebok shoes: $50; Your first car: $3000; Being able to extend a web application: Priceless.
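    One gap in the excerpt above: the ToCsv, ToPdf and StreamFile helpers are called but never shown. Purely as an illustration (this is not the author's code, and CSV quoting/escaping is omitted for brevity), a ToCsv helper for the Item model could look something like this:

    // Hypothetical ToCsv helper for the Item/Cuisine model defined earlier in the post.
    using System.Collections.Generic;
    using System.Text;

    public static class CsvHelper
    {
        public static string ToCsv(IList<Item> items)
        {
            StringBuilder builder = new StringBuilder();
            builder.AppendLine("Id,Name,Cuisine");

            foreach (Item item in items)
            {
                // One row per item; values are written as-is, with no escaping.
                builder.AppendFormat("{0},{1},{2}", item.Id, item.Name, item.Cuisine.Name);
                builder.AppendLine();
            }

            return builder.ToString();
        }
    }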

    Read the article

  • CodePlex Daily Summary for Monday, March 15, 2010

    CodePlex Daily Summary for Monday, March 15, 2010New ProjectsAT Accounts: AT Accounts helps developers to intergrate accounting functionality in their applications. It has both the WPF userinterface and SilverlightChild page list(for dnn4/5): A free module which can display sub pages list for a selected tab. It is template based and support options like Recursive/Child tab prefix/link...dashCommerce: dashCommerce is the leading ASP.NET e-commerce platform.Fire Utilities: My Development Utiltites and base classes: New Zealand Bank Account ValidatorFlyCatch (Bugtracking System): A simple webbased Bugtracking System.fracback: Fractal feedback concepts, based on video feedbackftc3650: code for ftc 3650Google AJAX Search Services for jQuery: This plug-in encapsulates part of the Google AJAX Search API to streamline the process of Google Search integration.Little Black Book DB: This is the Database for the following Projects: SQL Azure PHP Connection SQL Azure Ruby Connection SQL Azure Python Connection SQL Azure .NE...MediaCommMVC: MediaCommMVC is a community platform focusing on photos, videos and discussions. It's based on ASP.NET MVC and uses (fluent) nhibernate, jquery an...Miracle OS: The Miracle OS is an OS from Fox. We work on it, but it isn't ready. Do you want help us? Please send a mail to victor@fox.fi.stMultiwfn: (1)Plotting various graph(filled color/contour/relief map...) (2)Generate Cube file (3)Manipulate & analyze wavefunction Supportting lots of proper...MySpace DataRelay: Data Relay is the foundation of MySpace's middle tier. At its heart, it is a messaging system for relaying information both between clients and ser...NinjaCMS: Ninja CMS is an asp.net based content management system which provides a designer friendly, developer friendly interface to work with. It's flexibl...open gaze and mouse analyzer: Ogama allows recording and analyzing eye- and mouse-tracking data from slideshow eyetracking experiments in parallel. It´s developed in C#.NET and ...Özkasoft.Net | E-Commerce: Özkasoft's E-Commerce ProjectProfiCV: Profi CVpyTarget: Implement a powerful iscsi target in python, and easily use under most popular systems. It also includes the following features: multi-target, mult...SharePoint Platform Extensions: SharePoint Platform Extensions by Espora. Sorting Algorithm Visualization: Sorting Algorithm Visualization Displays Bead Sort, Binary Tree Sort, Bubble Sort, Bucket Sort, Cocktail Sort, Counting Sort, Gnome Sort, In Place ...Specify: A framework for creating executable specifications in .NET. Spell Corrector: A spell corrector that uses Bayes algorithm and BK (Burkhard-Keller) tree.SQL Azure Ruby Connection: This is a demo to show how to connect to SQL Azure with Ruby on Rails.uManage - AD Self-Service Portal: uManage is an Active Directory Self-Service Portal as well as Help Desk web application designed for use on intranet systems. It allows users to u...Winforms Rounded Group Box Control: Rounded Group Box - A Grouping control with Rounded Corners, Gradients, and Drop ShadowWizard Engine: Host application agnostic wizard engine platform, that allows you to fluently define complex conditional flows and provides means for execution of ...WS-Transfer based File Upload: WS-Transer based upload of large files in multiple partsXAMLStylePad: XAMLStylePad - is a simple in use styles and templates XAML-editor. 
It designed for comfortable coding in XAML with real-time preview result on aut...Your Twitt Engine: Ovo je aplikacija za sve ljude koji su na svom radnom mjestu pod prismotrom poslodavca ili sefa, koji kontroliraju njihov monitor. Tako uz ovu apl...New ReleasesAmiBroker Plug-ins with C#. A non official AmiBroker Plug-in SDK: AniBroker Plug-in SDK v0.0.5: Removed dependency on .NET 4.0, now it works fine with .NET 2.0BeerMath.net: 0.1: Version 0.1Initial set of calculations supported: IBUs Color ABV/ABWChild page list(for dnn4/5): Child Page List 2.6: Source code is also include in module package.dashCommerce: dashCommerce Releases: You can download both Source and WebReady packages at http://www.dashcommerce.org. If you wish to submit patches, then use the Source Code tab her...ExcelDna: ExcelDna Version 0.23: ExcelDna Version 0.23 2010/03/14 - Packing and other features This release adds a number of features to ExcelDna: Add ExplicitExports attribute to ...Family Tree Analyzer: Version 1.0.7.1: Version 1.0.7.0 Update Census form to show family totals Fix England and Wales Lost Cousins reports to be England OR Wales Problems with Gedcom in...Foursquare BlogEngine Widget: foursquare widget for BlogEngine.NET Version 0.2: To see the changes which have been made, visit http://philippkueng.ch/post/Foursquare-BlogEngineNET-Widget-Version-02.aspx For installation instruc...GLB Virtual Player Builder: 0.4.0 Official Archetypes Release: Updated for new archetypes. The builder still includes the old player formats, and you can still import your old players' builds. Please PM me an...Home Access Plus+: v3.1.4.0: Version 3.1.3.1 Release Change Log: Added Breadcrumbs to My Computer File Changes: ~/bin/CHS Extranet.dll ~/bin/CHS Extranet.pdb ~/images/arro...Little Black Book DB: Little Black Book R1: This is the first release of the Little black book presentation I presented at Confoo. I decided to package the Database along with the Windows Az...mite.net - .NET API for mite: Version 1.2.1: Added Support for budget type Modified TimerMapper to return timers Fixed Encoding issue in xml conversionMultiwfn: multiwfn1.0: multiwfn1.0Multiwfn: multiwfn1.0_source: multiwfn1.0_sourceMultiwfn: multiwfn1.1: multiwfn1.1Multiwfn: multiwfn1.1_source: multiwfn1.1_sourceMultiwfn: multiwfn1.2: 1.2 2010-FEB-9 *加入了对10f型轨道的支持。 *新支持非限制性Post-HF波函数用以计算自旋密度。 *新增加直接读入高斯03/09的fch文件的支持,可以观看NBO轨道,详见readme实例4.10。 *绘制平面图时允许通过输入三个点坐标定义平面,允许自定义平面的原点与平移向...Multiwfn: multiwfn1.2_source: Include all the file that needed by compilation in CVF6.5PowerShell Community Extensions: 2.0 Beta 2: Release NotesThis is a pretty close to final release. We have eliminated all of the names that ran afound of the module loading mechanism which me...pyTarget: pyTarget.binary-for-windows-x86.rar: pyTarget.binary-for-windows-x86.rarpyTarget: pyTarget.src.tar.bz2: pyTarget.src.tar.bz2RedBulb for XNA Framework: RedBulbConsole (Console, Menu and TrackHUD Sample): http://bayimg.com/image/jalhmaacd.jpgScrum Sprint Monitor: 1.0.0.45262 (.NET 4.0 RC): Tested against TFS 2010 RC. For the .NET 3.5 SP1 platform, use the .NET 3.5 SP1 download. What is new in this release? 
Major performance increase ...sELedit: sELedit v1.1: Removed: Clone and Delete Button Added: Context Menu to Item List Added: Clone and Delete button to Context Menu Added: Export / Import Item ...Sorting Algorithm Visualization: Beta 1: Sorting Algorithm VisualizationSpecify: Version 1.0: Version 1.0Spell Corrector: Spell Corrector 0.1: A basic version that supports basic functionality.Spell Corrector: Spell Corrector 0.1 Source Code: Source code of version 0.1Spiral Architecture Driven Development (SADD): SADD v.0.9: Pre-final release with the NEW materials now all in English ! The Final release is coming soon. After guest column for SADD publication in MS Ar...Spiral Architecture Driven Development (SADD) for Russian: SADD v.0.9: Pre-final release with the NEW materials now all in English ! The Final release is coming soon. After guest column for SADD publication in MS Ar...SQL Azure Ruby Connection: Little Black Book Ruby R1: This is the Ruby Demo that I demostrated at Confoo. Special Thanks to Tony Thompson for putting this demo together. To check out Tony's Portfolio ...The Scrum Factory: The Scrum Factory Server - V1a: This is the newest version of the server. Some minor bugs from version v1 were fixed, and some slighted changed were made some database views.twNowplaying: twNowplaying 1.0.0.4: Please note that the user has to press the Twitter logo to log in the first time the application is started.uManage - AD Self-Service Portal: uManage - v1.0 (.NET 4.0 RC): Initial Release of uManage. NOTE: Designed for ASP.NET and .NET 4.0 RC ONLY! This is the initial release of uManage and covers the first phase of ...Virtu: Virtu 0.8: Source Requirements.NET Framework 3.5 with Service Pack 1 Visual Studio 2008 with Service Pack 1, or Visual C# 2008 Express Edition with Service Pa...Visual Studio DSite: Speech Synthesizer (Text to Speech) in Visual C++: A very simple text to speech program written in visual c 2008.White Tiger: 0.0.4.0: *now you can disable the file security checks *winforms aplications created to manage tablesWinforms Rounded Group Box Control: Release 1.0: To use this control simply add the class to your project and compile it. It will then show up in the projects components section in the toolbox. ...WS-Transfer based File Upload: 0.5: Implements the binary file transfer mechanism onlyXsltDb - DotNetNuke XSLT module: 01.00.89: Super modules configuration names. 16767 - Fixed more bug fixes...Yakiimo3D: DirectX11 Rheinhard Tonemapping Source and Binary: DirectX11 Rheinhard tonemapping source and binary.Your Twitt Engine: test: Slobodno probajte sa vasim twitter korisničkim računomMost Popular ProjectsMetaSharpWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitASP.NET Ajax LibraryWindows Presentation Foundation (WPF)ASP.NETLiveUpload to FacebookMost Active ProjectsLINQ to TwitterRawrN2 CMSBlogEngine.NETpatterns & practices – Enterprise LibrarySharePoint Team-MailerjQuery Library for SharePoint Web ServicesCaliburn: An Application Framework for WPF and SilverlightFarseer Physics EngineCalcium: A modular application toolset leveraging Prism

    Read the article

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview ·        With HPC 2008 R2 SP1 You can add Azure worker roles as compute nodes in a local Windows HPC Server cluster. ·        The subscription for Windows Azure like any other Azure Service - charged for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes. ·        Win-Win ? - Azure charges the computer hour cost (according to vm size) amortized over a month – so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud. ·        Blob storage is used to hold input & output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC Job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues. ·        this is not the end of the story for Azure Grid Computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a 3rd party from doing this and encapsulating exposing HPC WCF Broker Service to you for managing compute nodes? If MS doesn’t  provide head node as a service, someone else will! Process ·        requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure + an availability policy for the worker nodes. ·        After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs. ·        A Windows Azure worker role instance runs a HPC compatible Azure guest operating system which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released - All role instances defined by your service will run on the guest operating system version that you specify. see Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549). ·        use the hpcpack command to upload file packages and install files to run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Requirements ·        assuming you have an azure subscription account and the HPC head node installed and configured. ·        Install HPC Pack 2008 R2 SP 1 -  see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812). ·        Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported). ·        Configure the firewall - allow outbound TCP traffic on the following ports: 80,       443, 5901, 5902, 7998, 7999 ·        Note: HPC Server  uses Admin Mode (Elevated Privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes. ·        Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate -a valid X.509 certificate with a key size of at least 2048 bits. 
Generate a self-sign certificate & upload a .cer file to the Windows Azure Portal Account page > Manage my API Certificates link. see Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526). ·        import the certificate with an associated private key on the HPC cluster head node - into the trusted root store of the local computer account. Obtain Windows Azure Connection Information for HPC Server ·        required for each worker node template ·        copy from azure portal - Get from: navigation pane > Hosted Services > Storage Accounts & CDN ·        Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In Properties pane. ·        Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane. ·        Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane. ·        Blob Storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane. Import the Azure Subscription Certificate on the HPC Head Node ·        enable the services for Windows HPC Server  to authenticate properly with the Windows Azure subscription. ·        use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password. ·        see Certificates (http://go.microsoft.com/fwlink/?LinkId=163918). ·        To open the certificates snapin: Run > mmc. File > Add/Remove Snap-in > certificates > Computer account > Local Computer ·        To import the certificate via wizard - Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import ·        After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status. Configure a Proxy Client on the HPC Head Node ·        the following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker. ·        Create a Windows Azure Worker Node Template ·        Edit HPC node templates in HPC Node Template Editor. ·        Specify: 1) Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster + 2)worker node availability policy – rules for deploying / removing worker role instances in Windows Azure o   HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New à Create Node Template Wizard or Edit à Node Template Editor o   Choose Node Template Type page - Windows Azure worker node template o   Specify Template Name page – template name & description o   Provide Connection Information page – Azure Subscription ID (text) & Subscription certificate (browse) o   Provide Service Information page - Azure service name + blob storage account name (optionally click Retrieve Connection Information to get list of available from azure – possible LRT). 
o   Configure Azure Availability Policy page - how Windows Azure worker nodes start / stop (online / offline the worker role instance -  add / remove) – manual / automatic o   for automatic - In the Configure Windows Azure Worker Availability Policy dialog -select days and hours for worker nodes to start / stop. ·        To validate the Windows Azure connection information, on the template's Connection Information tab > Validate connection information. ·        You can upload a file package to the storage account that is specified in the template - eg upload application or service files that will run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Add Azure Worker Nodes to the HPC Cluster ·        Use the Add Node Wizard – specify: 1) the worker node template, 2) The number of worker nodes   (within the quota of role instances in the azure subscription), and 3)           The VM size of the worker nodes : ExtraSmall, Small, Medium, Large, or ExtraLarge.  ·        to add worker nodes of different sizes, must run the Add Node Wizard separately for each size. ·        All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes. ·        To add Windows Azure worker nodes o   In HPC Cluster Manager: Node Management > Actions pane > Add Node à Add Node Wizard o   Select Deployment Method page - Add Azure Worker nodes o   Specify New Nodes page - select a worker node template, specify the number and size of the worker nodes ·        After you add worker nodes to the cluster, they are in the Not-Deployed state, and they have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online. ·        Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN – this is non-configurable. Deploying Windows Azure Worker Nodes ·        To deploy the role instances in Windows Azure - start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure Worker Node Template – Azure Availability Policy -  to be automatic or manual. ·        The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates). ·        ·          Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure. ·        To start worker nodes manually and bring them online o   In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view - select one or more worker nodes. o   Actions pane > Start – in the Start Azure Worker Nodes dialog, select a node template. o   the state of the worker nodes changes from Not Deployed to track the provisioning progress – worker node Details Pane > Provisioning Log tab. 
o   If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes. o   After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online. ·        Troubleshooting o   check node template. o   use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999 o   check node status - Deployment status information appears in the service account information in the Windows Azure Portal - HPC queries this -  see  node status information for any failed nodes in HPC Node Management. ·        When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). ·        to remove a set of role instances in Windows Azure - stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed. ·        Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.
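    As a side note to the certificate import step described earlier in this article (importing the subscription certificate into the Trusted Root Certification Authorities store of the local computer account), the same step can also be scripted. Below is a hedged C# sketch using the standard X509Store API; the PFX path and password are placeholders, the program must run elevated, and the MMC snap-in walkthrough above remains the documented route.

    using System.Security.Cryptography.X509Certificates;

    class ImportSubscriptionCertificate
    {
        static void Main()
        {
            // Placeholder path and password - substitute the PFX exported for your Azure subscription.
            var certificate = new X509Certificate2(
                @"C:\certs\azure-subscription.pfx",
                "pfx-password",
                X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet);

            // Trusted Root Certification Authorities store of the local computer account,
            // as required for the Windows Azure worker node template. Run elevated.
            var store = new X509Store(StoreName.Root, StoreLocation.LocalMachine);
            store.Open(OpenFlags.ReadWrite);
            store.Add(certificate);
            store.Close();
        }
    }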

    Read the article

  • Day 2 - Game Design Documentation

    - by dapostolov
    So yesterday I didn't cut any code for my game but I was able to do a tiny bit of research on the XNA Game Development Technology and the communities out there and do you know what? I feel I'm a bit closer to my goal. The bad news is today I didn't cut code either. However, not all is lost because I wanted to get my ideas on paper and today I just did that.  Today, I began to jot down notes about the game and how I felt the visual elements would interact with each other. Unlike my workplace, my personal level of documentation is nothing more than a task list or a mind map of my ideas; it helps me streamline my solutions quiet effectively and circumvent the long process of articulating each thought to the n-th degree. I truly dislike documentation (because I have an extremely hard time articulating my thought and solutions); however, because I tend to do a really good job with documentation I tend to get stuck writing the buggers. But as a generalist remark: 'No Developer likes documentation.' For now let's stick with my basic notes and call this post a living document. Here are my notes, fresh, from after watching the new first episode of Merlin second season! Actually, a quick recommendation to anyone who is reading this (if anyone is): I truly recommend you envelope yourself in the medium or task you're trying to tackle. Be one with moment and feel it! For instance: Are you writing a fantasy script / game? What would the music of the genre sound like? For me the Conan the Barbarian soundtrack by Basil Poledouris is frackin awesome. There are many other good CD's out there, which I listen to (some who even use medival instruments, but Conan I keep returning to. It's a creative trigger for me. Ask yourself what would the imagery look like? Time to surf google for artist renditions of fantasy! What would the game feel like? Start playing some of your favorite games that inspire you, be wary though, have some self control and don't let it absorb your time. Anyhow, onto the documentation... Screens, Scenes, and Sprites. Oh My! (groan...) The first thing that came to mind were the screens, I thought the following would suffice: Menu Screen Character Customisation Screen Loading Screen? Battle Ground The Menu Screen Ok. So, the thought here is when the game loads a huge title is displayed: Wizard Wars. The player is prompted with 3 menu items: 1 Player Game, 2 Player Game, and Exit. Since I'm targetting the PC platform, as a non-networked game to start, I picture myself running my mouse over each menu option and the visual element of the menu item changes, along with a sound to indicate that I am over a curent menu item. And as I move my mouse away, it changes back, and possibly an exit mouse sound. Maybe on the screen somewhere is a brazier alit with a magical tome open right beside it, OR, maybe the tome is the menu! I hear the menu music as mellow, not obtrusive or piercing. On a menu item select, a confirmation sound bellows to indicate the players selection. The Esc key will always return me to the previous screens or desktop. The menu screen must feel...dark, like a really important ritual is about to happen and thus the music should build up. 
1 Player Game - > Customize Character(s) 2 Player Game - > Customize Character(s) Exit - > Back to Windows Notes: So the first thing I pick up here are a couple things: First and foremost, my artistic abilities suck crap, so I may have to hire an artist (now that i've said that, lets get techy) graphical objects will be positioned within a scene on each screen / window. Menu items will be represented grapically, possibly animated, and have sound / animation effects triggered by user input or a time line. I have an animated scene involving a brazier or fire on a stick IF I was to move this game to the xbox, I'd have to track which menu item is currently selected (unless I do a mouse pointer type thing.) WindowObject has a scene A Scene has many GameObjects GameObject has a position graphic or animation MenuObject is a GameObject which has a mouse in, mouse out, and click event which either does something graphically (animation), does something with sound, or moves to another screen.  Character Customisation Screen With either the 1 or 2 player option selected, both selections will come to this screen; a wizard requires a name, powers, and vestements of course! Player one will configure his character first and then player two. I considered a split screen for PC but to have two people fighting over a keyboard would probably suck. For XBox, a split screen could work; maybe when I get into the networking portion (phase 2 blog?) of this game I will remove the 2 player option for PC and provide only multiplayer and I will leave 2 player for xbox...hmm... Anyhow...I picture the creation process as follows: Name: (textbox / keyboard entry) - for xbox, this would have to be different. Robe Color: (color box, or something) Stats: Speed, Oomph, and Health. (as sliders) 1 as minimum and 10 as maximum. Ok, Back, and Cancel buttons / options. Each stat has a benefit which are listed below. The idea is the player decides if he wants his wizard to run fast, be a tank and ... hit with a purse.Regardless, the player will have a pool of 12 points to use. Ideally, A balanced wizard will have 5 in each attribute. Spells? The only spell of choice is a ball of fire which comes without question. The music and screen should still feel like a ritual. The Character Speed Basically, how fast your character moves and casts. Oomph (Best Monster Truck Voice): PURE POWAH!!! The damage output of your fireball. Health How much damage you can take. Notes: I realise the game dynamics may sound uninteresting at the moment; but I think after a couple releases, we could have some other grand ideas such as: saved profiles, gold to upgrade arsenal of spells, talents, etc...but for now...a vanilla fireball thrower mage will suffice for this experiment. OK. So... a MenuObject  may need to be loosely coupled to allow future items such as networking? may be a button? a CharacterObject has a name speed oomph health and a funky robe color. cap on the three stats (1-10) an arsenal of 1 spell (possibly could expand this) The Loading Screen As is. The Battleground Screen For now, I'm keeping the screen as max resolution for the PC. The screen isn't going to move or even be a split screen. I'm not aiming high here because I want to see what level of change is involved when new features / concepts are added to game content. I'm interested to find out if we could apply techniques such as MVC or MVVM to this type of development or is it too tightly coupled? 
    This reminds me of when my best friend and I were brainstorming our game idea (this is going back a while... 1994, 6?) and he cringed at the thought of bringing business technology into games, especially when I suggested a database to store character information and COM / DCOM as the medium, but it seems I wasn't far off (reflecting); just like his implementation of an xml "config file" for dynamic direct-x menus back before .net in 1999... anyhow... I digress... The Battle One screen, two characters lobbing balls of fire at each other... It doesn't get better than that. Every so often a scroll appears... and the fireballs bounce off walls, or the wizard has rapid fire, or even scrolls of healing! The scroll options are endless. Two bars at the top, each the color of the wizard (with their name beside the bar), indicate how much health they have. Possibly the appearance of the scrolls means the battle is taking too long? I'm thinking 1 player controls: up, down, left, right and the space button to fire. Or even possibly, mouse click and shift - mouse button to fire a spell in the direction they are facing. Two player controls: a, s, d, f and space AND arrows (up, down, left, right) and Del key or Ctrl. The game ends when a player has 0 health and a dialog box appears asking for a rematch / reconfigure / exit. Health goes down when a fireball (friendly or not) connects with a wizard. When a wizard connects with a scroll, a countdown clock / icon appears near the health bar and the wizard begins to glow. For the most part, a wizard can have only 1 scroll effect on him at a time. Notes: Ok, there's a lot to cover here (a rough class sketch of this object model follows at the end of this post).
    A CharacterObject is a GameObject: it travels at a set velocity, it travels in a direction, it has sounds (walking, running, casting, impact, dying, laughing, whistling, other?), it has animations (walking, running, casting, impact, dying, laughing, idle, other?), it has a lifespan (determined by health), it is alive or dead, it has a position.
    A ScrollObject is a GameObject: it carries a transference of points "damage" (or healing, bad scroll effect?) (determined by caster), it carries a transference of "other", it is stationary, it has a sound on impact, it has a stationary animation, it has an impact animation / or transfers an impact animation, it has a fade animation?, it has a lifespan (determined by game), it is alive or dead, it has a position.
    A WallObject is a GameObject: it has a sound on fireball impact?, it is a still image / stationary, it has an impact animation / or transfers an impact animation, it is dead, it has a position.
    A FireBall is a GameObject: it carries a transference of points "damage" (or healing, bad scroll effect?) (determined by caster), it travels at a set velocity, it travels in a direction, it has a sound, it has a travel animation, it has an impact animation / or transfers an impact animation, it has a fade animation?, it has a lifespan (determined by caster), it is alive or dead, it has a position.
    As I look at this, I can see some common attributes in each object that I can carry up to the GameObject. I think I'm going to end the documentation here; it's taken me a bit of time to type this all out. Tomorrow I'll load up my IDE and my paint studio to get some good old fashioned cowboy hacking going!   D.
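    Purely as an illustration of the notes above, here is a rough C# sketch of how that object model might translate into XNA classes. The class names and stats are taken from the notes; everything else (the Update signature, Vector2 positions, the LifespanSeconds and EffectPoints fields) is an assumption for the sketch, not the author's actual design.

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics; // Color lives here in XNA 3.x, in Microsoft.Xna.Framework in 4.0

    // Base type: every visual element in a scene has a position, a velocity and an alive/dead flag.
    public abstract class GameObject
    {
        public Vector2 Position { get; set; }
        public Vector2 Velocity { get; set; }   // zero for stationary objects such as walls and scrolls
        public bool IsAlive { get; set; }

        public virtual void Update(GameTime gameTime)
        {
            // Default behaviour: drift along the current velocity.
            Position += Velocity * (float)gameTime.ElapsedGameTime.TotalSeconds;
        }
    }

    // A wizard: name, robe color and the three stats (capped by the 12-point pool at creation time).
    public class CharacterObject : GameObject
    {
        public string Name { get; set; }
        public Color RobeColor { get; set; }
        public int Speed { get; set; }   // 1-10, movement and casting speed
        public int Oomph { get; set; }   // 1-10, fireball damage
        public int Health { get; set; }  // 1-10, how much damage the wizard can take
    }

    // A fireball: carries damage determined by the caster and a limited lifespan.
    public class FireBall : GameObject
    {
        public int Damage { get; set; }
        public float LifespanSeconds { get; set; }

        public override void Update(GameTime gameTime)
        {
            base.Update(gameTime);
            LifespanSeconds -= (float)gameTime.ElapsedGameTime.TotalSeconds;
            if (LifespanSeconds <= 0f) IsAlive = false;
        }
    }

    // A scroll: stationary pickup that transfers an effect to the wizard that touches it.
    public class ScrollObject : GameObject
    {
        public int EffectPoints { get; set; }        // healing, rapid fire, etc., determined by the game
        public float LifespanSeconds { get; set; }   // determined by the game
    }

    Pulling Position, Velocity and IsAlive up into GameObject is exactly the "common attributes" observation the notes end with.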

    Read the article

  • Add Free Google Apps to Your Website or Blog

    - by Matthew Guay
    Would you like to have an email address from your own domain, but prefer Gmail’s interface and integration with Google Docs?  Here’s how you can add the free Google Apps Standard to your site and get the best of both worlds. Note: To signup for Google Apps and get it setup on your domain, you will need to be able to add info to your WordPress blog or change Domain settings manually. Getting Started Head to the Google Apps signup page (link below), and click the Get Started button on the right.  Note that we are signing up for the free Google Apps which allows a max of 50 users; if you need more than 50 email addresses for your domain, you can choose Premiere Edition instead for $50/year. Select that you are the Administrator of the domain, and enter the domain or subdomain you want to use with Google Apps.  Here we’re adding Google Apps to the techinch.com site, but we could instead add Apps to mail.techinch.com if needed…click Get Started. Enter your name, phone number, an existing email address, and other Administrator information.  The Apps signup page also includes some survey questions about your organization, but you only have to fill in the required fields. On the next page, enter a username and password for the administrator account.  Note that the user name will also be the administrative email address as [email protected]. Now you’re ready to authenticate your Google Apps account with your domain.  The steps are slightly different depending on whether your site is on WordPress.com or on your own hosting service or server, so we’ll show how to do it both ways.   Authenticate and Integrate Google Apps with WordPress.com To add Google Apps to a domain you have linked to your WordPress.com blog, select Change yourdomain.com CNAME record and click Continue. Copy the code under #2, which should be something like googleabcdefg123456.  Do not click the button at the bottom; wait until we’ve completed the next step.   Now, in a separate browser window or tab, open your WordPress Dashboard.  Click the arrow beside Upgrades, and select Domains from the menu. Click the Edit DNS link beside the domain name you’re adding to Google Apps. Scroll down to the Google Apps section, and paste your code from Google Apps into the verification code field.  Click Generate DNS records when you’re done. This will add the needed DNS settings to your records in the box above the Google Apps section.  Click Save DNS records. Now, go back to the Google Apps signup page, and click I’ve completed the steps above. Authenticate Google Apps on Your Own Server If your website is hosted on your own server or hosting account, you’ll need to take a few more steps to add Google Apps to your domain.  You can add a CNAME record to your domain host using the same information that you would use with a WordPress account, or you can upload an HTML file to your site’s main directory.  In this test we’re going to upload an HTML file to our site for verification. Copy the code under #1, which should be something like googleabcdefg123456.  Do not click the button at the bottom; wait until we’ve completed the next step first. Create a new HTML file and paste the code in it.  You can do this easily in Notepad: create a new document, paste the code, and then save as googlehostedservice.html.  Make sure to select the type as All Files or otherwise the file will have a .txt extension. Upload this file to your web server via FTP or a web dashboard for your site.  
Make sure it is in the top level of your site’s directory structure, and try visiting it at yoursite.com/googlehostedservice.html. Now, go back to the Google Apps signup page, and click I’ve completed the steps above. Setup Your Email on Google Apps When this is done, your Google Apps account should be activated and ready to finish setting up.  Google Apps will offer to launch a guide to step you through the rest of the process; you can click Launch guide if you want, or click Skip this guide to continue on your own and go directly to the Apps dashboard.   If you choose to open the guide, you’ll be able to easily learn the ropes of Google Apps administration.  Once you’ve completed the tutorial, you’ll be taken to the Google Apps dashboard. Most of the Google Apps will be available for immediate use, but Email may take a bit more setup.  Click Activate email to get your Gmail-powered email running on your domain.    Add Google MX Records to Your Server You will need to add Google MX records to your domain registrar in order to have your mail routed to Google.  If your domain is hosted on WordPress.com, you’ve already made these changes so simply click I have completed these steps.  Otherwise, you’ll need to manually add these records before clicking that button.   Adding MX Entries is fairly easy, but the steps may depend on your hosting company or registrar.  With some hosts, you may have to contact support to have them add the MX records for you.  Our site’s host uses the popular cPanel for website administration, so here’s how we added the MX Entries through cPanel. Add MX Entries through cPanel Login to your site’s cPanel, and click the MX Entry link under Mail. Delete any existing MX Records for your domain or subdomain first to avoid any complications or interactions with Google Apps.  If you think you may want to revert to your old email service in the future, save a copy of the records so you can switch back if you need. Now, enter the MX Records that Google listed.  Here’s our account after we added all of the entries to our account. Finally, return to your Google Apps Dashboard and click the I have completed these steps button at the bottom of the page. Activating Service You’re now officially finished activating and setting up your Google Apps account.  Google will first have to check the MX records for your domain; this only took around an hour in our test, but Google warns it can take up to 48 hours in some cases. You may then see that Google is updating its servers with your account information.  Once again, this took much less time than Google’s estimate. When everything’s finished, you can click the link to access the inbox of your new Administrator email account in Google Apps. Welcome to Gmail … at your own domain!  All of the Google Apps work just the same in this version as they do in the public @gmail.com version, so you should feel right at home. You can return to the Google Apps dashboard from the Administrative email account by clicking the Manage this domain at the top right. In the Dashboard, you can easily add new users and email accounts, as well as change settings in your Google Apps account and add your site’s branding to your Apps. Your Google Apps will work just like their standard @gmail.com counterparts.  Here’s an example of an inbox customized with the techinch logo and a Gmail theme. Links to Remember Here are the common links to your Google Apps online.  Substitute your domain or subdomain for yourdomain.com. 
    Dashboard: https://www.google.com/a/cpanel/yourdomain.com
    Email: https://mail.google.com/a/yourdomain.com
    Calendar: https://www.google.com/calendar/hosted/yourdomain.com
    Docs: https://docs.google.com/a/yourdomain.com
    Sites: https://sites.google.com/a/yourdomain.com
    Conclusion Google Apps offers you great webapps and webmail for your domain, and lets you take advantage of Google’s services while still maintaining the professional look of your own domain.  Setting up your account can be slightly complicated, but once it’s finished, it will run seamlessly and you’ll never have to worry about email or collaboration with your team again. Signup for the free Google Apps Standard

    Read the article

  • CodePlex Daily Summary for Wednesday, March 10, 2010

    CodePlex Daily Summary for Wednesday, March 10, 2010New ProjectsASP.NET jQuery MessageBox: The ASP.NET jQuery it's an Web User Control that uses jQuery framework to enable diferent ways to present information to the user, by using these ...CommentRemover: Utility for removing comments from source codes. Support PL/SQL, Delphi, C/C#/C++ Developed in C# Requirement Microsoft .NET Framework 3.5DotNetNuke® RadMenu: DNNRadMenu makes it easy to create skins which use telerik RadMenu functionality. Licensing permits anyone (including designers) to use the compon...DotNetNuke® Skin AlphaBrisk: A DotNetNuke Design Challenge skin package submitted to the "Web Standards" category by dnnskin.net. Eight themes using transparent png, div, CSS, ...DotNetNuke® Skin Collaborate: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by Cuong Dang of R2Integrated. This package is 100% XHTML an...DotNetNuke® Skin TR: A DotNetNuke Design Challenge skin package submitted to the "Out of the box" category by Tracy Wittenkeller of T-Worx. This package is 100% XHTML, ...Encrypted Notes: Encrypted Notes is similar to Notes, but uses Triple DES to encrypt text and files. It has a random key generator, and can save the key. It is deve...FalconLobby: FalconLobby is an authorized AddOn for Falcon 4.0 Allied Force which was created to support the multiplayer experience. FalconLobby retrieves the l...INETA Europe WebSite: Website for INETA EuropeInsert a Favorite (Bookmark) plugin for Windows Live Writer: This Windows Live Writer plugin allows you to select a Favorite (Bookmark) and insert it into your blog entry.Javascript Lib: an javascript libraryjqGrid ASP.Net MVC Control: A fully integrated ASP.Net MVC (2.0) grid control based on the successfull jqGrid plugin for the jQuery jscript framework. Among the features of...Mosaictor: Mosaictor is a per project of mine that I started halfway my education. It is a photo mosaic creator using locally saved files and files obtained t...Notes: Notes is a simple but fast text editor. It can save in many text formats, and includes many features, such as templates (soon to be customizable), ...notmuchweb: A web frontend for notmuchPervasiveID: The PID is actively involved in Open Source ID community-building and education. PID members frequently travel the world to attend ID conferences a...Proyect Electronica: Proyecto de electronicaRapidshare Downloader 2: Rapidshare Downloader 2ROAD is Rapid Oberon Application Development: A suite of integrated tools for the develpment of Oberon-2 applicationSDNTFSIntegration: TFS Integration.SilverlightImageUpload: SilverlightImageUploadSMIL - SharePoint Map Integration Layer: .Useful SharePoint Site Workflow Utilities: This project aims to make it easy use SharePoint 2010's Site Workflows as "event handlers" for various back end systems by providing ways to start ...Windows Media Autorization: Windows Media Autorizaton PlugIn for windows media 9 WinMo Twitter Widget StarterKit: This project will allow you to quickly create Widgets that run on a Windows Mobile 6.5 phone to allow you to view Tweets designated by a hash tag. 
...XNA 3D World Studio Content Pipeline: XNA 3D World Studio Content Pipeline New ReleasesAPSales - CRM Software as a Service: APSales 0.1.2: This version add some interesting features to the project: Implements a Grid Control Custom View Query Use lastest version(2.0.2) of APEnnead.net ...ASP.NET jQuery MessageBox: ASP.NET jQuery MessageBox 0.1: Project Description The ASP.NET jQuery it's an Web User Control que uses jQuery framework to enable diferent ways to present information to the use...BTP Tools: CSBC+CUVC+HCSBC.dict files 2010-03-09: a space character should be only between <Strong Number Pattern> and <Count> like: <Text><Strong Number pattern><space character> <Count> The abov...Citrix HDX MediaStream for Flash System Verifier: HDX Flash Verifier Beta (v1.20): Reduced the number of exceptions that terminate the verification process.Code examples, utilities and misc from Lars Wilhelmsen [MVP]: LarsW.MexEdmxFixer 1.5: Added some missing sub elements from the EDMX file's Designer element; Connection and Output. Without them, some of the properties in the designer ...CommonLibrary.NET: CommonLibrary.NET 0.9.4 - Beta 2: A collection of very reusable code and components in C# 3.5 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars,...Encrypted Notes: Source Code: This has the all the code for Encrypted Notes in a Text file.Hybrid Windows Service: Release Assembly: Main Assembly. Usage: 1. Add reference to this dll in your 'Windows Service' project. 2. Replace references to ServiceBase to HybridServiceBase in...jqGrid ASP.Net MVC Control: Version 1.0.0.0: Initial Versionkdar: KDAR 0.0.16: KDAR - Kernel Debugger Anti Rootkit - KINTERRUPT object check added - load image notifier check addedlatex2mathml: 1.0 alpha: This is the first public release of Latex2MathML. Lots are left to add and fix. I encourage you to test it. If something goes wrong, send me the lo...MapWindow GIS: MapWindow 6.0 msi (March 9): This fixes a bug with saving and opening maps.Microsoft Research Biology Extension for Excel: MSR Biology Extension for Excel - Beta 2 (Update): This is an updated release for the Beta 2 Installer for the MSR Biology Extension for Excel. A couple of identified issues with the installation f...Notes: Notes 5.2: This is the latest version of Notes (5.2). It has an installer - it will create a directory 'CPascoe' in My Documents. Once you have extracted the...Notes: Source Code: This has the all the code for Notes in a Text file.RedBulb for XNA Framework: Tree Massacre XMAS Edition (Sample): Tree Massacre XMAS Edition Source Code and Creators Club Package http://bayimg.com/image/jalkiaacb.jpgRoTwee: RoTwee (7.0.2.0): Now color mode is introduced to RoTwee. Push change color button and you can change color mode of RoTwee. Recommended mode is active rainbow mode :)SharePoint Team-Mailer: SharePoint Team-Mailer v1.0: Recommended versionsPWadmin: pwAdmin v0.7_nightly: Nightly Build --------------------- + Target JRE -> 1.5.0_21 + Target ApplicationServer -> Apache Tomcat 5.5.28 + Added xml editor (only working fo...SQL Server PowerShell Extensions: 2.1 Production: Release 2.1 re-implements SQLPSX as PowersShell version 2.0 modules. SQLPSX consists of 9 modules with 133 advanced functions, 2 cmdlets and 7 scri...TMap for VS2010: TMap for VS2010 (MSF Agile) RC Release: Release of the TMap process template for VS2010 combined with the MSF Agile process template basd on the Release Candidate. 
The references to the g...TS3QueryLib.Net: TS3QueryLib.Net Version 0.19.14.0: Changelog Added property "IsClientRecording" to class "ClientListEntry" which is used in method "GetClientList" of QueryRunner class. (Change of Be...VCC: Latest build, v2.1.30309.0: Automatic drop of latest buildWinMo Twitter Widget StarterKit: Tweet Viewer Files: Files necessary to create your own Tweet ViewerWPF AutoComplete TextBox Control: Version 1.1: This release includes accumulated bug fixes since the initial release. Besides, adds experimental asynchronous support. Sample application gets...XNA 3D World Studio Content Pipeline: XNA 3DWS Content Pipeline: This is an rar file containing the latest content importer codeMost Popular ProjectsMetaSharpWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesASP.NET Ajax LibraryMost Active ProjectsUmbraco CMSRawrSDS: Scientific DataSet library and toolsjQuery Library for SharePoint Web ServicesBlogEngine.NETN2 CMSFasterflect - A Fast and Simple Reflection APIFarseer Physics Enginepatterns & practices – Enterprise LibraryCaliburn: An Application Framework for WPF and Silverlight

    Read the article

  • Node.js Adventure - Node.js on Windows

    - by Shaun
    Two weeks ago I had a talk with Wang Tao, a C# MVP in China who is currently running his startup company and its product, worktile. He asked me to figure out a synchronization solution that could help his product in the future, and he preferred that I implement the service in Node.js, since worktile itself is written in Node.js. Even though I have some experience in ASP.NET MVC, HTML, CSS and JavaScript, I don’t consider myself an expert in JavaScript. In fact I’m very new to it, so it scared me a bit when he asked me to use Node.js. But after about one week of investigation I have to say that Node.js is very easy to learn, use and deploy, even if you have very limited JavaScript skill. And I think I have fallen in love with Node.js. Hence I decided to start a series named “Node.js Adventure”, where I will tell the story of learning and using Node.js on Windows and Windows Azure. This is the first post.   (Brief) Introduction of Node.js I don’t want to give a fully detailed introduction of Node.js; there are many resources we can find on the internet, and the best one is its homepage. Node.js was created by Ryan Dahl and is sponsored by Joyent. It consists of roughly 80% C/C++ for the core and 20% JavaScript for the API. It uses CommonJS as its module system, which we will explain later. The official definition of Node.js is: Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices. First of all, Node.js uses JavaScript as its development language and runs on top of the V8 engine, the same engine used by Chrome. It brings JavaScript, a client-side language, into the backend service world. That is why many people say, not entirely accurately, that “Node.js is server-side JavaScript”. Additionally, Node.js uses an event-driven, non-blocking IO model. This means that in Node.js there is no way to block the currently working thread; every operation in Node.js executes asynchronously. This is a huge benefit, especially if our code needs IO operations such as reading from disk, connecting to a database or consuming a web service. Unlike IIS or Apache, Node.js does not use a multi-threaded model. In Node.js there is only one working thread serving all user requests and resource responses, shown as the ST star in the figure below. There is also a POSIX async thread pool in Node.js which contains many async threads (the AT stars) for IO operations. When a user makes an IO request, the ST serves it but does not perform the IO operation itself. Instead the ST goes to the POSIX async thread pool, picks up an AT, hands the operation to it, and then goes back to serve any other requests. The AT actually performs the IO operation asynchronously. Suppose another user's request arrives before the AT completes the IO operation: the ST serves this new request, picks up another AT from the pool, and goes back again. When the first AT finishes its IO operation, it takes the result back and waits for the ST; the ST takes the response, returns the AT to the pool, and then responds to the user. When the second AT finishes its job, the ST responds to the second user in the same way. As you can see, in Node.js there is only one thread serving clients’ requests and POSIX results. This thread loops between the users and POSIX, passing data back and forth, while the async jobs are handled by POSIX. 
    This is the event-driven, non-blocking IO model. The performance of this model is much better than that of the multi-threaded blocking model. For example, Apache is built on the multi-threaded blocking model while Nginx uses the event-driven non-blocking model. Below is the performance comparison between them, followed by the memory usage comparison. These charts are captured from the video NodeJS Basics: An Introductory Training, presented by a Cloud Foundry Developer Advocate.   Node.js on Windows Executing a Node.js application on Windows is very simple. First of all we need to download the latest Node.js platform from its website. Once installed, it registers its folder in the system path variable so that we can execute Node.js from anywhere. To confirm the Node.js installation, just open up a command window and type “node”; it will show the Node.js console. As you can see this is a JavaScript interactive console, where we can type some simple JavaScript code and commands. To run a Node.js JavaScript application, just specify the source code file name as the argument of the “node” command. For example, let’s create a Node.js source code file named “helloworld.js” and copy in a sample from the Node.js website.
    var http = require("http");

    http.createServer(function (req, res) {
      res.writeHead(200, {"Content-Type": "text/plain"});
      res.end("Hello World\n");
    }).listen(1337, "127.0.0.1");

    console.log("Server running at http://127.0.0.1:1337/");
    This code creates a web server listening on port 1337 that returns “Hello World” for any request. Run it in the command window, then open a browser and navigate to http://localhost:1337/. As you can see, when using Node.js we are not so much creating a web application as creating a web server: we need to deal with the request, the response and the related headers, status codes, etc. This is one of the benefits of using Node.js - it is lightweight and straightforward. But creating a website from scratch again and again is not acceptable. The good news is that Node.js uses CommonJS as its module system, so we can leverage existing modules to simplify the job. Furthermore, there are about ten thousand modules available on the internet, covering almost all areas of server-side application development.   NPM and Node.js Modules Node.js uses CommonJS as its module system. A module is a set of JavaScript files. In Node.js, if we have an entry file named “index.js”, then all the modules it needs will be located in the “node_modules” folder, and in “index.js” we can import modules by specifying the module name. For example, in the code we’ve just created we imported a module named “http”, which is a built-in module installed along with Node.js, so we can use the code in this “http” module. Besides the built-in modules there are many modules available at the NPM website. Thousands of developers are contributing and downloading modules there, and this is another benefit of using Node.js: there are many modules we can use, the number of modules is growing very fast, and we can publish our own modules to the community. When I wrote this post there were 14,608 modules on NPM in total, with about 10 thousand downloads per day. Installing a module is very simple. Let’s go back to our command window and type the command “npm install express”. This command installs a module named “express”, which is an MVC framework on top of Node.js. 
    Now let’s create another JavaScript file named “helloweb.js” and copy the code below into it. I imported the “express” module. When the user browses the home page it responds with a text message. If the incoming URL matches “/Echo/:value”, where “value” is whatever the user specified, it passes that value back together with the current date and time in JSON format. Finally, the website listens on port 12345.
    var express = require("express");
    var app = express();

    app.get("/", function(req, res) {
      res.send("Hello Node.js and Express.");
    });

    app.get("/Echo/:value", function(req, res) {
      var value = req.params.value;
      res.json({
        "Value" : value,
        "Time" : new Date()
      });
    });

    console.log("Web application opened.");
    app.listen(12345);
    For more information about the “express” API, please have a look here. Start the application from the command window with the command “node helloweb.js”, then navigate to the home page and we can see the response in the browser. And if we go to, for example, http://localhost:12345/Echo/Hello Shaun, we can see the JSON result. The “express” module is very popular on NPM. It makes the job simple when we need to build an MVC website. There are many other useful modules on NPM. - underscore: a utility module covering many common functions such as for each, map, reduce, select, etc. - request: a very simple HTTP request client. - async: a library for coordinating async operations. - wind: a library which enables us to control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps.   Node.js and IIS I demonstrated how to run a Node.js application from the console. Since we are on Windows, another common requirement would be: “can I host Node.js in IIS?” The answer is “Yes”. Tomasz Janczuk created a project named IISNode in his GitHub space, which we can find here. And Scott Hanselman has published a blog post introducing it.   Summary In this post I provided a very brief introduction to Node.js, including its official definition, its architecture and how it implements the event-driven non-blocking model. I then described how to install and run a Node.js application from the Windows console. I also described the Node.js module system and the NPM command. At the end I referred to some links about IISNode, an IIS extension that allows Node.js applications to run on IIS. Node.js has become a very popular server-side application platform, especially this year. By leveraging its non-blocking IO model and async features it is very useful for building highly scalable, asynchronous services. I think Node.js will be used widely in cloud application development in the near future.   In the next post I will explain how to use SQL Server from Node.js.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Wireless will not connect

    - by azz0r
    Hello, I have installed Ubuntu 10.10 on the same machine as my windows setup. However, it will not connect to my wireless network. It can see its there, it can attempt to connect, yet it will never connect. It will keep bringing up the password prompt everyso often. I have tried turning my security to WEP, I ended up turning it back to WPA2. It is set to AES (noted a few threads on google about that). Can you assist? I would love to dive into Ubuntu, but without the internet its pointless. --- lshw -C network --- *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: 02 serial: 00:1d:92:ea:cc:62 capacity: 1GB/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8168 driverversion=8.020.00-NAPI duplex=half latency=0 link=no multicast=yes port=twisted pair resources: irq:29 ioport:e800(size=256) memory:feaff000-feafffff memory:f8ff0000-f8ffffff(prefetchable) memory:feac0000-feadffff(prefetchable) *-network description: Wireless interface physical id: 1 logical name: wlan0 serial: 00:15:af:72:a4:38 capabilities: ethernet physical wireless configuration: broadcast=yes multicast=yes wireless=IEEE 802.11bgn --- iwconfig ---- lo no wireless extensions. eth0 no wireless extensions. wlan0 IEEE 802.11bgn ESSID:"Wuggawoo" Mode:Managed Frequency:2.437 GHz Access Point: Not-Associated Tx-Power=9 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:on --- cat /etc/network/interfaces ---- auto lo iface lo inet loopback logs deamon.log --- Jan 19 04:17:09 ubuntu wpa_supplicant[1289]: Authentication with 94:44:52:0d:22:0d timed out. Jan 19 04:17:09 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: associating -> disconnected Jan 19 04:17:09 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: disconnected -> scanning Jan 19 04:17:11 ubuntu wpa_supplicant[1289]: WPS-AP-AVAILABLE Jan 19 04:17:11 ubuntu wpa_supplicant[1289]: Trying to associate with 94:44:52:0d:22:0d (SSID='Wuggawoo' freq=2437 MHz) Jan 19 04:17:11 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: scanning -> associating Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0/wireless): association took too long. Jan 19 04:17:12 ubuntu NetworkManager: <info> (wlan0): device state change: 5 -> 6 (reason 0) Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0/wireless): asking for new secrets Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0) Stage 1 of 5 (Device Prepare) scheduled... Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0) Stage 1 of 5 (Device Prepare) started... Jan 19 04:17:12 ubuntu NetworkManager: <info> (wlan0): device state change: 6 -> 4 (reason 0) Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0) Stage 2 of 5 (Device Configure) scheduled... Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0) Stage 1 of 5 (Device Prepare) complete. Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0) Stage 2 of 5 (Device Configure) starting... 
Jan 19 04:17:12 ubuntu NetworkManager: <info> (wlan0): device state change: 4 -> 5 (reason 0)
Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0/wireless): connection 'Wuggawoo' has security, and secrets exist. No new secrets needed.
Jan 19 04:17:12 ubuntu NetworkManager: <info> Config: added 'ssid' value 'Wuggawoo'
Jan 19 04:17:12 ubuntu NetworkManager: <info> Config: added 'scan_ssid' value '1'
Jan 19 04:17:12 ubuntu NetworkManager: <info> Config: added 'key_mgmt' value 'WPA-PSK'
Jan 19 04:17:12 ubuntu NetworkManager: <info> Config: added 'psk' value '<omitted>'
Jan 19 04:17:12 ubuntu NetworkManager: nm_setting_802_1x_get_pkcs11_engine_path: assertion `NM_IS_SETTING_802_1X (setting)' failed
Jan 19 04:17:12 ubuntu NetworkManager: nm_setting_802_1x_get_pkcs11_module_path: assertion `NM_IS_SETTING_802_1X (setting)' failed
Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0) Stage 2 of 5 (Device Configure) complete.
Jan 19 04:17:12 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: associating -> disconnected
Jan 19 04:17:12 ubuntu NetworkManager: <info> Config: set interface ap_scan to 1
Jan 19 04:17:12 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: disconnected -> scanning
Jan 19 04:17:13 ubuntu wpa_supplicant[1289]: WPS-AP-AVAILABLE
Jan 19 04:17:13 ubuntu wpa_supplicant[1289]: Trying to associate with 94:44:52:0d:22:0d (SSID='Wuggawoo' freq=2437 MHz)
Jan 19 04:17:13 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: scanning -> associating
Jan 19 04:17:23 ubuntu wpa_supplicant[1289]: Authentication with 94:44:52:0d:22:0d timed out.
Jan 19 04:17:23 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: associating -> disconnected
Jan 19 04:17:23 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: disconnected -> scanning
Jan 19 04:17:24 ubuntu AptDaemon: INFO: Initializing daemon
Jan 19 04:17:25 ubuntu wpa_supplicant[1289]: WPS-AP-AVAILABLE
Jan 19 04:17:25 ubuntu wpa_supplicant[1289]: Trying to associate with 94:44:52:0d:22:0d (SSID='Wuggawoo' freq=2437 MHz)
Jan 19 04:17:25 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: scanning -> associating
Jan 19 04:17:27 ubuntu NetworkManager: <info> wlan0: link timed out.

--- kern.log ---
Jan 19 04:18:11 ubuntu kernel: [ 142.420024] wlan0: direct probe to AP 94:44:52:0d:22:0d timed out
Jan 19 04:18:13 ubuntu kernel: [ 144.333847] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 1)
Jan 19 04:18:13 ubuntu kernel: [ 144.539996] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 2)
Jan 19 04:18:13 ubuntu kernel: [ 144.750027] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 3)
Jan 19 04:18:14 ubuntu kernel: [ 144.940022] wlan0: direct probe to AP 94:44:52:0d:22:0d timed out
Jan 19 04:18:25 ubuntu kernel: [ 155.832995] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 1)
Jan 19 04:18:25 ubuntu kernel: [ 156.030046] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 2)
Jan 19 04:18:25 ubuntu kernel: [ 156.230039] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 3)
Jan 19 04:18:25 ubuntu kernel: [ 156.430039] wlan0: direct probe to AP 94:44:52:0d:22:0d timed out

--- syslog ---
Jan 19 04:18:46 ubuntu wpa_supplicant[1289]: Authentication with 94:44:52:0d:22:0d timed out.
Jan 19 04:18:46 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: associating -> disconnected
Jan 19 04:18:46 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: disconnected -> scanning
Jan 19 04:18:48 ubuntu wpa_supplicant[1289]: WPS-AP-AVAILABLE
Jan 19 04:18:48 ubuntu wpa_supplicant[1289]: Trying to associate with 94:44:52:0d:22:0d (SSID='Wuggawoo' freq=2437 MHz)
Jan 19 04:18:48 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: scanning -> associating
Jan 19 04:18:48 ubuntu kernel: [ 178.833905] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 1)
Jan 19 04:18:48 ubuntu kernel: [ 179.030035] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 2)
Jan 19 04:18:48 ubuntu kernel: [ 179.230020] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 3)
Jan 19 04:18:48 ubuntu kernel: [ 179.433634] wlan0: direct probe to AP 94:44:52:0d:22:0d timed out

lspci and lsusb
-- lspci --
00:00.0 Host bridge: Advanced Micro Devices [AMD] RS780 Host Bridge
00:02.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (ext gfx port 0)
00:05.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 1)
00:06.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 2)
00:11.0 SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [AHCI mode]
00:12.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller
00:12.1 USB Controller: ATI Technologies Inc SB700 USB OHCI1 Controller
00:12.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller
00:13.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller
00:13.1 USB Controller: ATI Technologies Inc SB700 USB OHCI1 Controller
00:13.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller
00:14.0 SMBus: ATI Technologies Inc SBx00 SMBus Controller (rev 3a)
00:14.1 IDE interface: ATI Technologies Inc SB700/SB800 IDE Controller
00:14.2 Audio device: ATI Technologies Inc SBx00 Azalia (Intel HDA)
00:14.3 ISA bridge: ATI Technologies Inc SB700/SB800 LPC host controller
00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge
00:14.5 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI2 Controller
00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration
00:18.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map
00:18.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller
00:18.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control
00:18.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control
01:00.0 VGA compatible controller: nVidia Corporation G80 [GeForce 8800 GTS] (rev a2)
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02)
03:00.0 FireWire (IEEE 1394): JMicron Technology Corp. IEEE 1394 Host Controller
-- lsusb --
Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 003: ID 046d:c517 Logitech, Inc. LX710 Cordless Desktop Laser
Bus 004 Device 002: ID 045e:0730 Microsoft Corp.
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 003: ID 13d3:3247 IMC Networks 802.11 n/g/b Wireless LAN Adapter
Bus 002 Device 002: ID 0718:0628 Imation Corp.
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 003: ID 046d:08c2 Logitech, Inc. QuickCam PTZ
Bus 001 Device 002: ID 0424:2228 Standard Microsystems Corp. 9-in-2 Card Reader
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

With no security on my router I still can't connect, I get:
Jan 19 15:58:01 ubuntu wpa_supplicant[1165]: Authentication with 94:44:52:0d:22:0d timed out.
Jan 19 15:58:01 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: associating -> disconnected
Jan 19 15:58:01 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: disconnected -> scanning
Jan 19 15:58:02 ubuntu wpa_supplicant[1165]: WPS-AP-AVAILABLE
Jan 19 15:58:02 ubuntu wpa_supplicant[1165]: Trying to associate with 94:44:52:0d:22:0d (SSID='Wuggawoo' freq=2437 MHz)
Jan 19 15:58:02 ubuntu wpa_supplicant[1165]: Association request to the driver failed
Jan 19 15:58:02 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: scanning -> associating
Jan 19 15:58:05 ubuntu NetworkManager: <info> wlan0: link timed out.
Jan 19 15:58:07 ubuntu wpa_supplicant[1165]: Authentication with 94:44:52:0d:22:0d timed out.
Jan 19 15:58:07 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: associating -> disconnected
Jan 19 15:58:07 ubuntu NetworkManager: <info> (wlan0): supplicant connec

    Read the article

  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
    This year I embarked on a journey to migrate a group of ASP.NET web applications, developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0, to instead use the Oracle WebCenter WSRP Producer for .NET, integrating with WebLogic Portal 10.3.4. It has been a very winding path, and this blog entry is intended to share both the lessons learned and the relevant approaches that led to those learnings. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there.

For the Curious
From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those that are actually interested in these types of sections. Those in a hurry may skip past and be none the worse for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy.

A Brief (and edited) History of the WSRP for .NET Technologies (as Relevant to this Post)
Note: This section is for those who are curious about why the migration path is not as simple as many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

The currently deployed architecture that was to be migrated and upgraded achieved initial integration between .NET and J2EE over the WSRP protocol through the AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed applications written in ASP.NET and deployed on a Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) Server (both version 9.2, the state of the art at the time of its creation). At the time this architectural decision for the application was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual technology support, an adapter was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities. Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction of migrating users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle's strategic technology roadmap, older portal products are being scheduled for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time, with features driven by users of that product and developed under three different vendors (each a direct competitor in the same solution space prior to merger).
The Oracle WebCenter WSRP Producer for .NET was introduced much more recently, with the key objective of specifically addressing the needs of WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle Product Development Team also provides these insights on the drivers for developing the WSRP Producer:
*****************************************
- Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET.
- Support 2-way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET.
- Allow developers to code portlets (Web Parts) using the .NET framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers.
- The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET.
*****************************************
The advantage of creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support, which in turn improves quality, reliability and maintainability for their customers. No changes to J2EE applications consuming the WSRP portlets previously rendered by the .NET Accelerator are required to migrate from the AquaLogic WSRP solution. For some customers using the .NET Accelerator, the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter, as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as children of the WSRP Producer web application as root.

Differences between .NET Accelerator and WSRP Producer
Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

The basic terminology used to describe the participating applications in a WSRP environment is the same when applied to either the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases the .NET application serves as what is referred to in a WSRP environment as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application. The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer. A WSRP request is sent from the consumer to the .NET Accelerator, the .NET Accelerator transforms this request into an ASP.NET request, receives the response, then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer is deployed as a parent application on IIS, and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application.
In this manner, the WSRP Producer acts more as a request filter than a proxy in the WSRP transactions between Producer and Consumer.

Highly Recommended

Enabling Logging
Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now? Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets. To enable WSRP Producer logging, update the Application_Start method in the Global.asax.cs of your .NET application by adding log4net.Config.XmlConfigurator.Configure(); (a minimal sketch appears at the end of this article). IIS logs will usually (in a standard configuration) be in a subfolder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log. InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations.

Things You Must Do

Merge Web.Config
Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention. Because the existing .NET application will become a sub-application of the WSRP Producer, you will want to merge required settings from the existing Web.Config into the one in the WSRP Producer.

Use the WSRP Producer Master Page
The Master Page installed for the WSRP Producer provides common, hidden form fields and JavaScripts to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including it in the <%@ Page declaration with MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master". You then replace:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >
<HTML>
<HEAD>

with

<asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server">

and

</HEAD>
<body>
<form id="theForm" method="post" runat="server">

with

</asp:Content>
<asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server">

and finally

</form>
</body>
</HTML>

with

</asp:Content>

In the event you already use Master Pages, adapt your existing Master Pages to be sub-masters. See Nested ASP.NET Master Pages for a detailed reference on how to do this.

It Happened to Me, It Might Happen to You…Or Not

Watch for Use of Session or Request in OnInit
If the .NET application being modified has pages developed on the assumption that the user has been authenticated in an earlier page request, there may be direct or indirect references in the OnInit method to Request or Session objects that may not have been created yet. This will vary from application to application, so the recommended approach is to test first. If there is an issue with a page running as a WSRP portlet, check for potential references in the OnInit method (including references by methods called within OnInit) to Session or Request objects. If there are any, the simplest solution is to create a new method and call it once the necessary object(s) are fully available. I find doing this at the start of the Page_Load method to be the simplest solution.

Case Sensitivity
Markup produced by .NET applications is often inconsistent in case, while the Java-based rewriter rules are case sensitive. This means it is possible to have many variations of SRC= and src=, or .JPG and .jpg. The preferred solution is to make these markup instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is.
If this is not practical, then make duplicates of any rules where an issue is occurring due to upper or mixed case usage in the .NET application markup, and match the case in use in the duplicate rule. For example:

<RewriterRule>
  <LookFor>(href=\"([^\"]+)</LookFor>
  <ChangeToAbsolute>true</ChangeToAbsolute>
  <ApplyTo>.axd,.css</ApplyTo>
  <MakeResource>true</MakeResource>
</RewriterRule>

may need to be duplicated as:

<RewriterRule>
  <LookFor>(HREF=\"([^\"]+)</LookFor>
  <ChangeToAbsolute>true</ChangeToAbsolute>
  <ApplyTo>.axd,.css</ApplyTo>
  <MakeResource>true</MakeResource>
</RewriterRule>

While it is possible to write a regular expression that handles mixed case usage, it would be long and strenuous to test and maintain, so the recommendation is to use duplicate rules.

Is it Still Relative?
Some .NET applications base relative paths on a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets, and many ../ paths will need another ../ in front.

I Can See You But I Can't Find You
This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or id. The initial error presented itself as a run-time error that was difficult to interpret over WSRP but seemed clear when run as straight ASP.NET, as it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to be simply updating the references in the code. However, as the WSRP Master Page is external to the code, a compile-time error resulted:

Error 155: The name 'form1' does not exist in the current context. C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens\Reporting.aspx.cs 51 52 legacywebsite.module

Much hair-pulling research later, it was discovered that the use of the FindControl method was causing the issue. FindControl doesn't work quite as expected once a Master Page has been introduced, as the controls become embedded in other controls, requiring a recursion to find them that is not part of the FindControl method. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced generically in code with Page.Form. For example, this:

ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles));

becomes this:

ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles));

Generally, the form id is referenced in most ASP.NET applications as a path to a control on the form. Reaching the control once a Master Page has been added requires an additional method to recurse through the control collections within the form and find the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely:

public static Control FindControlRecursive(Control Root, string Id)
{
    if (Root.ID == Id)
        return Root;
    foreach (Control Ctl in Root.Controls)
    {
        Control FoundCtl = FindControlRecursive(Ctl, Id);
        if (FoundCtl != null)
            return FoundCtl;
    }
    return null;
}

Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl will be all that is necessary.
Following the second part of the example referenced earlier, the method called with Page.Form changes its value extraction code block from this:

Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg"

to this:

Label lblErrMsg = (Label) FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg"

The Master That Won't Step Aside
In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient route I took was updating it to take the place of the WSRP Master Page. The changes are highlighted below:

…
<asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder>
</head>
<body leftMargin="0" topMargin="0">
<form id="TheForm" runat="server">
<input type="hidden" name="key" id="key" value="" />
<input type="hidden" name="formactionurl" id="formactionurl" value="" />
<input type="hidden" name="handle" id="handle" value="" />
<asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" >
</asp:ScriptManager>

This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I have run across worked fine as sub-Masters to the WSRP Master Page.

Moving On
In enterprise portals, even after you get everything working, the work is not finished. Next you need to get it to where everyone will work with it.

Migration Planning
Provided that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other. If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined whether this cutover should include a new producer, updating all of the portlets in the consumer, or whether the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator, simply changing the WSRP endpoint configured in the consumer. In some enterprises it may even be necessary to maintain the same WSDL endpoint, at which point the IIS configuration is where the updates occur. The downside to such a requirement is that it makes rolling back very difficult, should the need arise.

Location, Location, Location
Not everyone wants the web application to have the descriptively obvious wsrpdefault location, or needs to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted towards making a second site, will work for creating a site with a different name and then removing the old site. You can also change just the name in IIS.

Manually Creating a WSRP Producer Site
Instructions (NOTE: all executables used are the same ones used by the installer, and "wsrpdev" will be the name of the new instance):
1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev.
2. Bring up a command window as an administrator.
3.
Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727
4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1
5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1
6. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev
7. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev

Tests:
1. Bring up a browser on the host itself and go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6.
2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx
3. Register the producer in WLP and test the portlet.

Changing the Port used by WSRP Producer
The pre-configured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl.

Do You Need to Migrate?
Oracle Premier Support for AquaLogic Interaction .NET Application Accelerator 1.x ended in November of 2010, and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies, then migration is certainly one activity you should be planning for.
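As a small appendix to the "Enabling Logging" section above (referenced there as well): the only call the article prescribes is log4net.Config.XmlConfigurator.Configure(), so a minimal, hypothetical Global.asax.cs might look like the sketch below. The class name and using directives are assumptions, not taken from the product.

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Read the log4net settings from the XML configuration so that the
        // WSRP Producer writes to Logs\WSRPProducer.log as described above.
        log4net.Config.XmlConfigurator.Configure();
    }
}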

    Read the article

  • CodePlex Daily Summary for Wednesday, March 17, 2010

    CodePlex Daily Summary for Wednesday, March 17, 2010

New Projects
chaosreader: A simple RSS reader.
CRM 4.0 Customization GUID Update: The CRM 4.0 customization GUID update is an open source C# console application that automatically replaces GUID values in your exported workflow cu...
DotNetNuke® Skin Bright: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by Noel Jerke of SiteToolset. This simple and clean business...
DotNetNuke® Skin Go: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by DnnGo Corporation. The skin uses web standard DIV+CSS tec...
DotNetNuke® Skin J10blend: A DotNetNuke Design Challenge skin package submitted to the "Out of the box" category by Timthy Maler of 2M Studio Design. J10-Black v01.00.00 inc...
DotNetNuke® Skin Recipe: A DotNetNuke Design Challenge skin package submitted to the "Standards" category by dnnprofis.at. For mobile devices the skin changes to a mobile...
DotNetNuke® Skin SpaceSmurfs: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by Eric Johnson of Personify Design. This fun personal skin was ins...
ERDOS6 - Web: A Web Project about ERDOS 6
Flickrlight: Flickrlight is a personal fun project out of love of Flickr and Silverlight. You can experience it here: http://www.flickrlight.net.
GsGrid: Extracting data from Gaussian grid files and grid file calculation
iLocator: iLocator is a collaborative educational mapping game for children developed on Microsoft Surface. This game encourages players to collaborate with ...
Javascript CallObject SOAP AJAX Helper: CallObject is a Javascript-based AJAX helper; it facilitates wrapping of basic SOAP calls (as long as simple data types are used), asynchronous ret...
kbTrainer: kbTrainer is a simple to use HTML application for typing speed training. A lot of features completed in basic. 2 learning keyword layouts -- engli...
Laboratório de Engenharia de Software - Projeto: Created to study and apply new web technologies.
Maxilds Powershell Scripts: Repo of my powershell scripts
Namespacifier: Namespacifier is a C#.NET library and console application to fix XML documents containing multiple default namespaces. It gives prefixes to defaul...
OData SDK for Objective-C: This is a CTP of the OData SDK for Objective-C. The library targets iPhone devices and Mac OS X and it is designed to facilitate the connection wit...
Open Data App Framework (ODAF): The Open Data Application Framework (ODAF) is a framework that allows cities to easily map existing civic Open Data landmarks, and allow users to r...
QuickieB2B: QuickieB2B is a web application whose main target is to provide quick info about products. It's designed for small companies who have a big number of...
RayView: Rayview is an easy-to-use raytracing framework based on Microsoft XNA.
Robotics Studio application to navigate Lego Mindstorms robot through labyrinth: A project for the Software Systems Analysis and Design Tools subject at the Kaunas University of Technology. The main point of the project is to code L...
SharePoint Icon Integration: SharePoint Icon Integration makes it easier for SharePoint Administrators / Developers to add an icon (pdf) to the SharePoint farm.
You will no long...
TestVersion: Testing versioning
Timecard: SoftSource Timecard project.
T-shirt Cannon: So the Coding4Fun team had two weeks to build two robots able to drive, aim, and shoot t-shirts with a Windows Phone during a MIX10 Keynote demo of...
USTF: This project is a bit secretive right now.
Windows Azure Command-line Tools for PHP Developers: The "Windows Azure Command-line tools for PHP" provide a command-line experience to developers who wish to develop, package, and deploy PHP applic...

New Releases
Caramel Engine: CaramelEngine Alpha Build 0.0.0.1a: This is an early alpha release of the Engine and its functionality. Be sure to have the using CaramelEngine statement. This release is for people...
Coot: Preview: Basic preview. On the first use you have to click Create New Session and Login. After this you can just click Screen Saver each time. Settings sho...
CycleMania Starter Kit EAP - ASP.NET 4.0 Problem - Design - Solution: Cyclemania 0.08.32: The latest alpha release.
DeepZoomContainer, Expanded DeepZoom for Silverlight & Windows Phone 7 Series: Release ver. 1.20 for Windows Phone 7 Series: SolutionMerge PathAnimation solution into one MouseWheel elimination PathAnimationWP7 Port DeepZoomContainerProject rebuilt for WP7 support De...
Desktop Google Reader: 1.3 (the social release): NewsSharing Liking Mail item Labels / Tags Send to Twitter Read It Later http://readitlaterlist.com/ Instapaper http://www.instapaper.com/ Favicons...
DotNetNuke® Blog: 04.00.00 RC 2: PLEASE NOTE: Please do not upgrade previous versions of the Beta releases - please start from 03.05.01. This is a RELEASE CANDIDATE, and as such ...
DotNetNuke® Community Edition: 05.03.00: New Features: Templated User Profiles - user profile pages are now publicly viewable; Photo field in User Profile - users can upload a photo to thei...
DotNetNuke® Skin Bright: Bright Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by Noel Jerke of SiteToolset. This simple and clean business...
DotNetNuke® Skin Go: Go Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by DnnGo Corporation. The skin uses web standard DIV+CSS tec...
DotNetNuke® Skin J10blend: J10 Blend Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Out of the box" category by Timthy Maler of 2M Studio Design. J10-Black v01.00.00 incl...
DotNetNuke® Skin Recipe: Recipe Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Standards" category by dnnprofis.at. For mobile devices the skin changes to a mobile f...
DotNetNuke® Skin SpaceSmurfs: Space Smurfs Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by Eric Johnson of Personify Design. This fun personal skin was ins...
Dynamo: Dynamo v0.1 Beta: The following is included: Dynamo dlls, Antlr dlls, Hello world, Simple Plugin example Application, Dependency injection, Singleton Management ...
ExtremeML: ExtremeML v1.0 Beta 1: Timed to accompany the RTM release of the OpenXML SDK v2.0, this is the first Beta release of ExtremeML (it was previously classified as a preview ...
Family Tree Analyzer: Version 1.1.1.1: Version 1.1.1.1: Lots of Gedcom parsing fixes; it should crash a whole lot less often and tolerate more "interesting" or "quirky" Gedcom entries.
Add...
Family Tree Analyzer: Version 1.2.0.1: Version 1.2.0.1: Added option to treat residence facts as Census Facts; IGI Search now permits default country selection, ie: what to use if it doesn...
Flickrlight: Flickrlight: Current release is for idea sharing. There are not many design patterns being used. Please bear with the mess. :-) In order to run the applicat...
Gherkin editor: Alpha 0.1: Most of the code at this point is the same as the Avalon.Sample from Code Project; just changed the name, removed extra languages and added syntax ...
GsGrid: gsgrid1.6.4: gsgrid1.6.4
GsGrid: gsgrid1.6.4-src: gsgrid1.6.4-src
HTML Template Repeater Module: Version 01.00.02: General: The HTML Template Repeater Module is a direct replacement for the Core DotNetNuke Text/HTML module. Use it where you need to repeat the form...
Images Compiler: Release 0.1: Last alpha build
Javascript CallObject SOAP AJAX Helper: Beta Release, 0.2.1: Beta Release, 0.2.1. Contains only core objects
kbTrainer: kbtrainer 1.25u: kbTrainer is a simple to use speed typing training HTML application. A lot of features. All other info available on http://code.google.com/p/kbtr...
MapWindow6: MapWindow 6.0 msi (March 16): This version fixes a bug where selected points were not drawing correctly.
Mesopotamia Experiment: Mesopotamia 1.2.43: Release Notes: New Features - Scenario Name on title bar; Show organisms in Scenarios with simple stats. Bug Fixes - Removed app domain recycling an...
MFCMAPI: March 2010 Release: If you just want to run the tool, get the executable. If you want to debug it, get the symbol file and the source. Build: 6.0.0.1018. The 64 bit bu...
MVVM Light Toolkit: MVVM Light Toolkit V3: Download the Zip file and extract it to a local folder. Then, follow the instructions on the Installation page http://www.galasoft.ch/mvvm/installi...
NETXPF: 1.0.2: Changes: - Added a class "IOUtils" with methods for reading streams and GZip-compressing HTTP responses - Fixed a bug in the size formatter (excep...
OData SDK for Objective-C: OData SDK for Objective-C CTP: The current release supports read-only operations only and it has been tested on a limited set of scenarios. The download includes a sample iPhone a...
Open Data App Framework (ODAF): ODAF 1.0: Initial beta release.
Selection Maker: Selection Maker 1.1: New Features: Context Menu for ListView added. Bug Fixes: Fixed: if the user presses the Copy/Cut button when no item is selected in ListView, the ListView cl...
Selection Maker: Selection Maker 1.2: Bug Fixes: a minor bug fixed
Simple.NET: Simple.Mocking 1.0.0.5: Initial version of a new mocking framework for .NET. Revision 1: Expect.AnyInocationOn<T>(T target) changed to Expect.AnyInocationOn(object target...
SQL Server Extended Properties Quick Editor: New release 1.5.4: What's new: Move preferences to application settings and add a form to edit preferences. Support to add, modify and delete operations could be made ...
SuperModel - A Dynamic View-Model Generator: 1.0.0.0 - Tyra: The final 1.0 release, now less intrusive! If you don't want to implement ISuperModel, simply implement INotifyPropertyChanged.
Timecard: Timecard Initial Release: The zipped version of the Initial Checkin.
Transparent Persistence.Net: TP.Net 0.1.0: This is the initial alpha release.
It's working for a small set of use-cases (basic access to Cassandra).
VCC: Latest build, v2.1.30316.0: Automatic drop of latest build
VFPnfe: Projeto Ajuda PAF-ECF: This project aims to help developers with PAF-ECF homologation, under the GNU/GPL public license; for more details, watch the vi...
Visual Studio DSite: Gif Animator: This program will make an animated gif. (Program written in Vb.Net 2008)

Most Popular Projects
MetaSharp
LiveUpload to Facebook
Skype Voice Changer
LiveUpload to YouTube
SIPSorcery
ChartPart for SharePoint
TFS Branching Guide 2.0
TouchFlo Detacher
NPanday
Snippet Editor

Most Active Projects
LINQ to Twitter
Rawr
OData SDK for PHP
DirectQ
patterns & practices – Enterprise Library
BlogEngine.NET
N2 CMS
Open Data App Framework (ODAF)
NB_Store - Free DotNetNuke Ecommerce Catalog Module
MapWindow6

    Read the article

  • Oracle Coherence & Oracle Service Bus: REST API Integration

    - by Nino Guarnacci
    This post aims to highlight one of the features found in Oracle Coherence that allows it to be easily added to and integrated within a wide variety of projects. The feature in question is the REST API exposed by Coherence nodes, through which you can interact with the in-memory data grid.

Oracle Coherence and Oracle Service Bus are natively integrated through a feature of the Oracle Service Bus that allows you to use the Coherence grid cache when configuring a business service. This feature provides an intermediate cache layer for retrieving the answers from previous invocations of the same service, without necessarily having to invoke the real business service again. Directly from the web console of Oracle Service Bus, you can decide the eviction policies of the objects/answers and define the discriminating parameters that identify their uniqueness.

The Coherence REST APIs, however, allow you to integrate both products for other needs, enabling the realization of new architectural designs. Consider a Coherence node as a simple service that interoperates through standard services, in particular REST (with JSON and XML). Think of Coherence as a company-wide shared service, an implementation of a centralized "map and reduce" that you can access through a huge variety of protocols (transports and envelopes). An amazing step forward for those who still imagine connectors and code. This type of integration does not require writing custom code or a complex implementation to be self-supported. The added value comes from the incredible value of both products independently, and still more from their simple and robust integration.

As already mentioned, this scenario reveals a hidden new door behind the columns of these two products. The door leads to new ideas and perspectives for enterprise architectures that increasingly wink at next-generation applications: simple and dynamic, perhaps towards mobile and Web 2.0.

Below is a small and simple demo that shows how easy it is to integrate these two products using the Coherence REST API. The demo is also intended to help you imagine new enterprise architectures using this approach. The idea is to create a centralized alerting system, fed easily from any of the company's applications, regardless of the technology with which they were built, and then to use a standard representation protocol, RSS, through a service exposed by the service bus, so you can browse and search only the alerts you are interested in, by category, author, title, date, and so on. The steps needed to implement this system are simple and few; they are listed below and described so they can be easily replicated within your environment. I would remind you that the demo is only meant to demonstrate how easy it is to integrate Oracle Coherence and Oracle Service Bus, and to stimulate your imagination towards new technological approaches.

1) Install the two products. In this demo I used (if necessary, consult the installation guides of the two products):
- Oracle Service Bus ver. 11.1.1.5.0 http://www.oracle.com/technetwork/middleware/service-bus/downloads/index.html
- Oracle Coherence ver. 3.7.1 http://www.oracle.com/technetwork/middleware/coherence/downloads/index.html

2) Because we chose to create a centralized alerting system, we need to define a structure type containing some alert attributes, useful for preserving and organizing the information of the various alerts sent by the different applications.
To that end, a Java class named Alert was built, containing the canonical properties of an alert:
- Title
- Description
- System
- Time
- Severity

3) Next, we need to create two configuration files for the Coherence node, in order to save the Alert objects within the grid through the REST/HTTP protocol (in addition to the native APIs for Java, C++, and .NET). The two minimal configuration files for Coherence are coherence-rest-config.xml and resty-server-config.xml. This minimal configuration lets me use a distributed cache named "alerts" that can also be accessed via HTTP/REST on the host "localhost" over port "8080"; objects are of type "oracle.cohsb.Alert".

4) Below, a simple Java class that represents the type of alert messages (the original listing was a screenshot; a hypothetical reconstruction appears after step 8 below).

5) At this point we just need to start up our Coherence node, able to listen over HTTP to manage the "alerts" cache, which will receive incoming XML or JSON objects of type Alert. Remember to include in the classpath of the Coherence node the Alert Java class and the required Coherence libraries and configuration files. Then just run the Coherence node class "com.tangosol.net.DefaultCacheServer", setting the following parameters:
-Dtangosol.coherence.log.level=9 -Dtangosol.coherence.log=stdout -Dtangosol.coherence.cacheconfig=[PATH_TO_THE_FILE]\resty-server-config.xml

6) Let's create a procedure to test our Coherence configuration and insert some custom alerts into our cache. The technology with which you implement this functionality hardly matters (JavaScript, Python, Ruby, Scala, C++, Java, ...), because the protocol used to communicate with Coherence is simply HTTP with JSON or XML. For this little demo I chose Java: a method to send/put the alert to the cache, a method to query and view the content of the cache, and finally a main method that executes both (again shown as screenshots in the original; see the reconstruction below). No special library is added to the classpath for this class (the JSON structure is statically defined); when executed, it asks for some information, such as title and description, composes and sends an alert to the cache, and then queries the same cache. A good exercise at this point would be to create the same procedure using other technologies, such as a simple HTML page with some JavaScript code, or Python, Ruby, and so on.

7) Now we are ready to start configuring the Oracle Service Bus in order to integrate the two products. First, we integrate the internal alerting system of Oracle Service Bus with our centralized alerting system based on the Coherence node. This ensures that from monitoring, or directly from within our proxy message flow, we can throw alerts and save them directly into the Coherence node. To do this I chose JMS, natively present inside Oracle WebLogic / Service Bus. Access the Oracle WebLogic Administration console and create and configure a new JMS connection factory and a new JMS destination (queue). Now we create a new resource of type "alert destination" within our Oracle Service Bus project, configured to use the newly created JMS connection factory and JMS destination.
Finally, in order to withdraw the alert messages enqueued in our JMS destination and send them to our Coherence node, we just need to create a new business service and a new proxy service within our Oracle Service Bus project. Our business service is responsible for sending a message to our Coherence REST service, using PUT as the HTTP method. Our proxy service collects all messages enqueued on the destination and executes an XQuery transformation on them, translating them into valid XML alert objects to be sent to our Coherence service through the newly created business service (the original post shows the message flow pipeline containing this XQuery transformation as a screenshot). Incredibly, we have just completed a basic first integration between the native alerting system of Oracle Service Bus and our centralized alerting system simply by configuring our Coherence node, without developing anything.

It's time to test it out. To do this I created a proxy service that generates an alert through our "alert destination" whenever it is invoked. After a few invocations of this proxy to generate fake alerts, we can open an Internet browser and go to the URL http://localhost:8080/alerts/ to see what has been inserted into the Coherence node.

8) We are ready for the final step. We will create a new message flow that can be used to search and display the results in a standard way. For this I chose RSS as the standard representation, so that a formatted result can be displayed on a huge variety of devices, such as readers for the iPhone and Android. The query may be defined at request time, returning only the feed items related to our needs. To do this we need to create a new business service, a new proxy service, and finally a new XQuery transformation that takes care of translating the collection of alerts returned from our Coherence node into a nicely formatted, standard RSS document.

So we start right from this resource (the XQuery), which has the task of transforming a collection of alerts (XML) returned from the Coherence node into a well-formatted RSS 2.0 feed; then our new business service, which searches the alerts on our Coherence node using the REST API; and finally our last resource, the proxy service, which is exposed as an RSS feed to various mobile devices and traditional web readers, and in which we intercept any search query and transform the result returned by the business service into an RSS 2.0 feed. The message flow includes the transformation phase (Alert TO Feed Items).

Finally, some little tricks to apply while routing to the business service:
- check for any query present in the URL, to request only a subset of alerts;
- set the HTTP header "Accept", to get an XML answer instead of JSON.

In our little demo we also statically added some Coherence parameters to the request: sort=time:desc;start=0;count=100, asking Coherence to return the results sorted by time descending, starting from the first result up to a maximum of 100.

Done!! Just incredible: our centralized alerting system is ready.
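Since the class and test-client listings in the original post were screenshots, here is a minimal, hypothetical reconstruction of what steps 2-6 describe. The field names come from the list in step 2 and the endpoint from step 3; the class and method names, the JSON property casing, and the error handling are assumptions, not the author's exact code. The grid-side oracle.cohsb.Alert class itself is just a serializable bean with those five fields.

// AlertClient.java - a sketch of a test client for the REST-enabled "alerts" cache.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class AlertClient {

    private static final String CACHE_URL = "http://localhost:8080/alerts/";

    // PUT one alert (as statically composed JSON) into the cache under the given key.
    static void putAlert(String key, String title, String description,
                         String system, String severity) throws Exception {
        String json = "{\"title\":\"" + title + "\","
                    + "\"description\":\"" + description + "\","
                    + "\"system\":\"" + system + "\","
                    + "\"time\":" + System.currentTimeMillis() + ","
                    + "\"severity\":\"" + severity + "\"}";
        HttpURLConnection conn =
                (HttpURLConnection) new URL(CACHE_URL + key).openConnection();
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        OutputStream out = conn.getOutputStream();
        out.write(json.getBytes("UTF-8"));
        out.close();
        System.out.println("PUT " + key + " -> HTTP " + conn.getResponseCode());
    }

    // GET the cache content as JSON and print it, mirroring what the browser
    // shows at http://localhost:8080/alerts/.
    static void dumpCache() throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(CACHE_URL).openConnection();
        conn.setRequestProperty("Accept", "application/json");
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
        for (String line; (line = in.readLine()) != null; ) {
            System.out.println(line);
        }
        in.close();
    }

    public static void main(String[] args) throws Exception {
        putAlert("1", "Javascript error", "Unhandled exception on page load",
                 "Web Browser", "Fatal");
        dumpCache();
    }
}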
It inherits all the qualities and capabilities of the two products involved, Oracle Coherence and Oracle Service Bus: RASP (Reliability, Availability, Scalability, Performance). Now try to use your mobile device, or a normal Internet browser, to access the RSS feed just published.

Some URLs you may test:

Search for the last 100 alerts:
http://localhost:7001/alarms

Search for alerts whose time is not null (time is not null):
http://localhost:7001/alarms?q=time+is+not+null

Search for alerts whose system property is "Web Browser" (system = 'Web Browser'):
http://localhost:7001/alarms?q=system+%3D+%27Web+Browser%27

Search for alerts whose system property is "Web Browser", whose severity is "Fatal", and whose title contains the word "Javascript" (system = 'Web Browser' and severity = 'Fatal' and title like '%Javascript%'):
http://localhost:8080/alerts?q=system+%3D+%27Web+Browser%27+AND+severity+%3D+%27Fatal%27+AND+title+LIKE+%27%25Javascript%25%27

To compose more complex queries for your needs, I suggest reading the chapter in the Coherence documentation on CohQL (Coherence Query Language): http://download.oracle.com/docs/cd/E24290_01/coh.371/e22837/api_cq.htm .

Some useful links:
- Oracle Coherence REST API Documentation http://download.oracle.com/docs/cd/E24290_01/coh.371/e22839/rest_intro.htm
- Oracle Service Bus Documentation http://download.oracle.com/docs/cd/E21764_01/soa.htm#osb
- REST explanation from Wikipedia http://en.wikipedia.org/wiki/Representational_state_transfer

The whole material for this demo can be downloaded from http://blogs.oracle.com/slc/resource/cosb/coh-sb-demo.zip

Author: Nino Guarnacci.

    Read the article

  • Understanding LINQ to SQL (11) Performance

    - by Dixin
    [LINQ via C# series] LINQ to SQL has a lot of great features, like strong typing, query compilation, deferred execution, a declarative paradigm, etc., which are very productive. Of course, these cannot be free, and one price is performance.

O/R mapping overhead
Because LINQ to SQL is based on O/R mapping, one obvious overhead is that data changing usually requires data retrieving:

private static void UpdateProductUnitPrice(int id, decimal unitPrice)
{
    using (NorthwindDataContext database = new NorthwindDataContext())
    {
        Product product = database.Products.Single(item => item.ProductID == id); // SELECT...
        product.UnitPrice = unitPrice; // UPDATE...
        database.SubmitChanges();
    }
}

Before updating an entity, that entity has to be retrieved by an extra SELECT query. This is slower than a direct data update via ADO.NET:

private static void UpdateProductUnitPrice(int id, decimal unitPrice)
{
    using (SqlConnection connection = new SqlConnection(
        "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True"))
    using (SqlCommand command = new SqlCommand(
        @"UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID",
        connection))
    {
        command.Parameters.Add("@ProductID", SqlDbType.Int).Value = id;
        command.Parameters.Add("@UnitPrice", SqlDbType.Money).Value = unitPrice;
        connection.Open();
        command.Transaction = connection.BeginTransaction();
        command.ExecuteNonQuery(); // UPDATE...
        command.Transaction.Commit();
    }
}

The above imperative code specifies the "how to do" details and has better performance. For the same reason, some articles on the Internet insist that, when updating data via LINQ to SQL, the above declarative code should be replaced by:

private static void UpdateProductUnitPrice(int id, decimal unitPrice)
{
    using (NorthwindDataContext database = new NorthwindDataContext())
    {
        database.ExecuteCommand(
            "UPDATE [dbo].[Products] SET [UnitPrice] = {0} WHERE [ProductID] = {1}",
            unitPrice, id);
    }
}

Or just create a stored procedure:

CREATE PROCEDURE [dbo].[UpdateProductUnitPrice]
(
    @ProductID INT,
    @UnitPrice MONEY
)
AS
BEGIN
    BEGIN TRANSACTION
    UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID
    COMMIT TRANSACTION
END

and map it as a method of NorthwindDataContext (explained in this post):

private static void UpdateProductUnitPrice(int id, decimal unitPrice)
{
    using (NorthwindDataContext database = new NorthwindDataContext())
    {
        database.UpdateProductUnitPrice(id, unitPrice);
    }
}

As a normal trade-off for O/R mapping, a decision has to be made between performance overhead and programming productivity, according to the case. From a developer's perspective, if O/R mapping is chosen, I consistently choose the declarative LINQ code, unless this kind of overhead is unacceptable.

Data retrieving overhead
After the O/R mapping specific issue, now look into the LINQ to SQL specific issues, for example, performance in the data retrieving process. The previous post explained that the SQL translating and executing is complex. Actually, the LINQ to SQL pipeline is similar to a compiler pipeline.
It consists of about 15 steps to translate a C# expression tree to a SQL statement, which can be categorized as:
- Convert: Invoke SqlProvider.BuildQuery() to convert the tree of Expression nodes into a tree of SqlNode nodes;
- Bind: Use the visitor pattern to figure out the meanings of names according to the mapping info, like a property for a column, etc.;
- Flatten: Figure out the hierarchy of the query;
- Rewrite: For SQL Server 2000, if needed;
- Reduce: Remove the unnecessary information from the tree;
- Format: Generate the SQL statement string;
- Parameterize: Figure out the parameters; for example, a reference to a local variable should be a parameter in SQL;
- Materialize: Execute the reader and convert the result back into typed objects.

So for each data retrieval, even one which looks simple:

private static Product[] RetrieveProducts(int productId)
{
    using (NorthwindDataContext database = new NorthwindDataContext())
    {
        return database.Products.Where(product => product.ProductID == productId)
            .ToArray();
    }
}

LINQ to SQL goes through the above steps to translate and execute the query. Fortunately, there is a built-in way to cache the translated query.

Compiled query
When such a LINQ to SQL query is executed repeatedly, CompiledQuery can be used to translate the query once and execute it multiple times:

internal static class CompiledQueries
{
    private static readonly Func<NorthwindDataContext, int, Product[]> _retrieveProducts =
        CompiledQuery.Compile((NorthwindDataContext database, int productId) =>
            database.Products.Where(product => product.ProductID == productId).ToArray());

    internal static Product[] RetrieveProducts(
        this NorthwindDataContext database, int productId)
    {
        return _retrieveProducts(database, productId);
    }
}

The new version of RetrieveProducts() gets better performance, because only the first time _retrieveProducts is invoked does it internally invoke SqlProvider.Compile() to translate the query expression. It also uses a lock to make sure the translation happens only once in multi-threading scenarios.

Static SQL / stored procedures without translating
Another way to avoid the translating overhead is to use static SQL or stored procedures, just as in the above examples. Because this is a functional programming series, this article does not dive into them. For the details, Scott Guthrie already has some excellent articles:
- LINQ to SQL (Part 6: Retrieving Data Using Stored Procedures)
- LINQ to SQL (Part 7: Updating our Database using Stored Procedures)
- LINQ to SQL (Part 8: Executing Custom SQL Expressions)

Data changing overhead
Looking into the data updating process, it also needs a lot of work:
- Begins a transaction
- Processes the changes (ChangeProcessor):
  - Walks through the objects to identify the changes
  - Determines the order of the changes
  - Executes the changes (LINQ queries may be needed to execute the changes; as in the first example in this article, an object needs to be retrieved before being changed, so the above whole process of data retrieving is gone through)
  - If there is user customization, it is executed; for example, a table's INSERT / UPDATE / DELETE can be customized in the O/R designer

It is important to keep these overheads in mind.
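Given the SELECT-before-UPDATE overhead described earlier, one commonly suggested workaround keeps the declarative style but avoids the extra round trip: attach a detached stub entity and then modify it, so that only an UPDATE is generated. The following is a minimal hedged sketch, not from the original article; it assumes the Products table either carries a ROWVERSION/TIMESTAMP column or has UpdateCheck set to Never on its members, otherwise the optimistic concurrency check will compare against the stub's default values:

private static void UpdateProductUnitPriceViaAttach(int id, decimal unitPrice)
{
    using (NorthwindDataContext database = new NorthwindDataContext())
    {
        // Build a stub with only the primary key populated; no SELECT is issued.
        Product stub = new Product { ProductID = id };
        database.Products.Attach(stub);   // Track the stub as an unmodified original.
        stub.UnitPrice = unitPrice;       // Now mark UnitPrice as changed.
        database.SubmitChanges();         // Generates a single UPDATE statement.
    }
}

The trade-off is weaker optimistic concurrency checking, since the original column values are not known to the DataContext.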
Bulk deleting / updating
Another thing to be aware of is bulk deleting:

private static void DeleteProducts(int categoryId)
{
    using (NorthwindDataContext database = new NorthwindDataContext())
    {
        database.Products.DeleteAllOnSubmit(
            database.Products.Where(product => product.CategoryID == categoryId));
        database.SubmitChanges();
    }
}

The expected SQL would be something like:

BEGIN TRANSACTION
exec sp_executesql N'DELETE FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9
COMMIT TRANSACTION

However, as mentioned before, the actual SQL retrieves the entities and then deletes them one by one:

-- Retrieves the entities to be deleted:
exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9

-- Deletes the retrieved entities one by one:
BEGIN TRANSACTION
exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=78,@p1=N'Optimus Prime',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=79,@p1=N'Bumble Bee',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
-- ...
COMMIT TRANSACTION

The same applies to bulk updating. This is really not efficient and needs to be kept in mind. There are already some solutions on the Internet, like this one. The idea is to wrap the above SELECT statement into an INNER JOIN:

exec sp_executesql N'DELETE [dbo].[Products] FROM [dbo].[Products] AS [j0]
INNER JOIN (
    SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
    FROM [dbo].[Products] AS [t0]
    WHERE [t0].[CategoryID] = @p0) AS [j1]
ON ([j0].[ProductID] = [j1].[ProductID])', -- The primary key
N'@p0 int',@p0=9

Query plan overhead
The last thing is about the SQL Server query plan. Before .NET 4.0, LINQ to SQL had an issue (not sure if it is a bug): LINQ to SQL internally uses ADO.NET, but it does not set the SqlParameter.Size for variable-length arguments, like arguments of NVARCHAR type, etc. So for two queries with the same SQL but different argument lengths:

using (NorthwindDataContext database = new NorthwindDataContext())
{
    database.Products.Where(product => product.ProductName == "A")
        .Select(product => product.ProductID).ToArray();

    // The same SQL and argument type, different argument length.
database.Products.Where(product => product.ProductName == "AA") .Select(product => product.ProductID).ToArray(); }
Pay attention to the argument length in the translated SQL:
exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(1)',@p0=N'A' exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(2)',@p0=N'AA'
Here is the overhead: the first query’s query plan cache is not reused by the second one:
SELECT sys.syscacheobjects.cacheobjtype, sys.dm_exec_cached_plans.usecounts, sys.syscacheobjects.[sql] FROM sys.syscacheobjects INNER JOIN sys.dm_exec_cached_plans ON sys.syscacheobjects.bucketid = sys.dm_exec_cached_plans.bucketid;
They actually use different query plans. Again, pay attention to the argument length in the [sql] column (@p0 nvarchar(2) / @p0 nvarchar(1)). Fortunately, in .NET 4.0 this is fixed:
internal static class SqlTypeSystem { private abstract class ProviderBase : TypeSystemProvider { protected int? GetLargestDeclarableSize(SqlType declaredType) { SqlDbType sqlDbType = declaredType.SqlDbType; if (sqlDbType <= SqlDbType.Image) { switch (sqlDbType) { case SqlDbType.Binary: case SqlDbType.Image: return 8000; } return null; } if (sqlDbType == SqlDbType.NVarChar) { return 4000; // Max length for NVARCHAR. } if (sqlDbType != SqlDbType.VarChar) { return null; } return 8000; } } }
In the above example, the translated SQL becomes:
exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'A' exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'AA'
So they reuse the same query plan cache: now the [usecounts] column is 2.
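Coming back to the bulk deleting issue above: as a rough sketch (my own simplified illustration, not the specific solution linked earlier), one pragmatic workaround is to skip change tracking entirely and issue the set-based DELETE yourself through DataContext.ExecuteCommand. The trade-off is that no entities are tracked, so no optimistic concurrency checks are performed:
private static int DeleteProductsInBulk(int categoryId)
{
    using (NorthwindDataContext database = new NorthwindDataContext())
    {
        // One parameterized DELETE statement instead of one DELETE per retrieved entity.
        return database.ExecuteCommand(
            "DELETE FROM [dbo].[Products] WHERE [CategoryID] = {0}",
            categoryId);
    }
}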

    Read the article

  • Your Day-by-Day Guide to Agile PLM at Oracle OpenWorld 2012

    - by Kerrie Foy
This year’s Oracle OpenWorld conference is nearly here, and we’re all excited about what we have planned! With five days of activities and customer presenters from market leaders and top innovators like The Coca-Cola Company, Starbucks, JDSU, Facebook, GlobalFoundries, and more, this is an event you don't want to miss. I've compiled this day-by-day guide to help anyone keep track of all the “Product Lifecycle Management and Product Value Chain” sessions and activities at OpenWorld 2012, September 30 – October 4 in San Francisco, California. Monday, October 1 There are great networking activities on Sunday September 30, but PLM-specific sessions start after general conference keynotes on Monday, October 1 at 10:45 a.m. at the InterContinental Hotel in room Telegraph Hill. In fact, most of our sessions this year will be held in this room, which is still close to the conference keynotes in Moscone, but just far enough away to allow some focused networking and discussions. This first session, 10:45 – 11:45 a.m., is a joint session with the Agile and AutoVue teams, entitled “Streamline PLM Design-to-Manufacturing Processes with AutoVue Visualization Solutions” featuring presenters from Oracle as well as joint AutoVue and Agile PLM customer GlobalFoundries. In the following 12:15 – 1:15 p.m. slot, there are two sessions to choose from, so if you have a team of representatives attending OpenWorld, you may consider splitting up to catch both of these: a) Our General Session will be held in the InterContinental Hotel Ballroom C, which will cover our complete enterprise PLM strategy, product updates, and roadmaps. It’s our pleasure to feature a customer keynote presentation from Chris Bedi, CIO, and Rajeev Sethi, Director IT Business Engagement, of JDSU. b) A focused session on integrating PLM with Engineering and Supply Chain Systems will be held on the second floor of Moscone West (next to the InterContinental) in room 2022. Join to discover how these types of integrations help companies manage common and integrated design information across all MCAD, ECAD, and software components. After a lunch break and perhaps a visit to the Demogrounds in Moscone West, select from two product roadmap sessions in the next time slot (3:15 – 4:15 p.m.): an Agile 9.3.x session located in the InterContinental’s Ballroom C, and an Agile PLM for Process session located back in the InterContinental’s Telegraph Room. Both sessions will have strong content around each product line’s latest releases, vision, and customer examples. We are very pleased to feature Daniel Soosai of Facebook in the A9 session and Vinnie D’Agostino of The Coca-Cola Company in the PLM for Process session. Afterwards, hang in there for one last session of the day from 4:45 – 5:45 p.m.; it’s an insightful discussion on leveraging Agile PLM as the Foundation for Enterprise Quality Management, and it’s sure to be one of the best. In the Telegraph Room, this session will feature Oracle experts, partner co-presenter David Bartlett from CPG Solutions, and customer co-presenter Thomas Crowe, CIO of PL Developments. Hear their experience around implementing collaborative, integrated solutions to ensure effective knowledge transfer throughout an organization, and how to perform analysis in real time to resolve product quality issues swiftly and efficiently. 
On Monday evening there will be plenty of industry, product, and partner dinners, so take advantage of all the networking opportunities and catch some great tunes at the 5 day Oracle OpenWorld Music Festival! Tuesday, October 2 Tuesday starts early with a special PLM Networking Brunch, sponsored by several partners, from 8:30 a.m. – 10:30 a.m. at the B Restaurant that sits atop Yerba Buena Gardens. You’ll have the unique opportunity to meet with like-minded industry peers and a PLM partner to discuss a topic of your choosing while enjoying a delicious meal. Registration is required, so to inquire about attending this brunch, please email Terri.Hiskey-AT-oracle.com. After wrapping up your conversations over brunch, head over to the Marriott Marquis in the Nob Hill CD room for a chance to experience the Oracle Product Lifecycle Analytics solution in a Hands-On Lab, open from 10:15 a.m. – 12:45 p.m. Experts will be there to answer your questions. Back in the InterContinental Hotel’s Telegraph room, the session on “Ideation and Requirements Management: Capturing the Voice of the Customer” begins at 11:45 a.m. – 12:45 p.m. This may be the session for you if you’re struggling with challenges like too many repositories of customer needs, requests, and ideas; limited visibility into which ideas are being advanced by customers and field resources; or if you’re unable to leverage internal expertise to expose effort and potential risks. This session will discuss how Agile PLM can help you overcome ideation challenges to deliver the right products to their targeted markets and fulfill customer desires. Next, from 1:15 – 2:15 p.m. join us for a session on Managing Profitable Innovation with Oracle Product Lifecycle Analytics. If you missed the Hands-on Lab, have more questions, or simply want to be inspired by the product’s forward-thinking vision and capabilities, this is a great opportunity to meet the progressive-minded executives behind the application. After this session, it may be a good opportunity to swing by the Demogrounds in Moscone West and visit the Agile PLM demos at exhibit booths #81 for Agile PLM for Discrete Manufacturing, #70 for Agile PLM for Process, and #82 for AutoVue and Agile PLM Enterprise Visualization. Check out the related Supply Chain Management booths close by if you’re interested - here's the map. There’s always lots to see and do around the exhibit area. But don’t forget the last session of the day from 5:00 p.m. – 6:00 p.m. in Telegraph Hill on Managing Product Innovation and Compliance in Life Science Companies, a “must-see” if you’re in this industry. Launching innovative products quickly is already a high-stakes challenge, but companies in the life sciences industry face uniquely severe consequences when new products don’t perform or comply as required. In recent years, more and more regulations have become mandatory, and new ones, such as REACH, are currently going into effect for several companies. Customer presenters from pharmaceutical leader Eli Lilly will share how they’ve leveraged Agile PLM to deliver high-quality, innovative products in a fast-paced, heavily regulated market environment. Tuesday evening unwind at the Supply Chain Management Reception from 6:00 – 8:00 p.m. at the premier boutique Roe Nightclub and Lounge, which is located about three blocks down on Howard Street (on the other side of Moscone from the InterContinental Hotel). Registration is required. Click here for the details.   
Wednesday, October 3 We have another full line-up on Wednesday, so be ready for an action-packed day. We start with a session at 10:15 – 11:15 a.m. in the Telegraph Room where we have a session on “PLM for Consumer Products: Building an Engine for Quality and Innovation” with featured presenters from Starbucks and partner Kalypso. This is a rare opportunity to learn directly from Starbucks how they instill quality and innovation throughout their organization, products, and processes, leveraging PLM disciplines with strong support from their partner.  If you’re not in the consumer products industry, we recommend attending another session at 10:15 – 11:15 a.m. in Moscone West room 3005: “Eco-Enterprise Innovation Awards and the Business Case for Sustainability” featuring Jeff Henley, Oracle’s Chairman of the Board and Jon Chorley, Chief Sustainability Officer. Oracle will honor select customers with Oracle’s Eco-Enterprise Innovation award, which recognizes customers and their respective partners who rely on Oracle products to support their green business practices to reduce their environmental impact while improving business efficiencies and reducing costs. The awards presentation is followed by a panel discussion with customers and Oracle executives, who describe how these award-winning organizations are embracing environmental initiatives as a central part of their business strategy and how information technology plays a pivotal role. Next at 11:45 a.m. – 12:45 p.m. in Telegraph Hill attend our session devoted to exploring Product Lifecycle Management’s role in Software Lifecycle Management. This is a thought leadership session with Oracle experts in the field on the importance of change management, and we’ll discuss how Oracle has for years leveraged Agile PLM to develop Agile PLM. If software lifecycle management doesn’t apply to your business or you’d rather engage in some lively one-on-one discussions, we also have a “Supply Chain Meet the Experts” session in Moscone West Room 2001A. Product experts, thought leaders and executives will be on hand to discuss your questions/topics, so come prepared. This session tends to fill up fast so try to get in early. At 1:15 – 2:15 p.m. join us back in Telegraph Hill for a session focused on leveraging the Agile Product Portfolio Management application as the Product Development Master Schedule to improve efficiencies, optimize resources, and gain visibility across projects enterprise-wide to improve portfolio profitability. Customer presenters from Broadcom will explain how they’ve leveraged the product to enable a master schedule with enterprise-level, phase-gate program and project collaboration and resource optimization. Again in Telegraph Hill from 3:30 – 4:30 p.m. we have an interesting session with leading semiconductor customer LSI and partner Kalypso on how LSI leveraged Agile PLM to advance from homegrown applications to complete Product Value Chain Management. That type of transition can be challenging, and LSI details how they were able to achieve their goals and the value they gained along the journey – a fascinating account for any company interested in leveraging best practices to innovate their business processes and even end products. Lastly, we’ll wrap up in Telegraph Hill from 5:00 – 6:00 p.m. 
with a session on “Ensuring New Product Success by Achieving Excellence in New Product Introduction.” This is a cross-industry session, guaranteed to deliver insight into the often elusive practice of creating winning products, and one we’re very excited about. According to IDC Manufacturing Insights analyst Joe Barkai, “Product Failures are not necessarily a result of bad ideas…they are a result of suboptimal decisions.” We’ll show you how to wire your business processes to enhance decision-making and maximize product potential. Now, quickly hit your hotel room to freshen up and then catch one of the many complimentary shuttles to the much-anticipated Oracle Customer Appreciation Event on Treasure Island. We have a very exciting show planned – check out what’s in store here. Thursday, October 4 PLM has a light schedule on Thursday this year with just one session, but this again is one of our best sessions on managing the Product Value Chain: at 11:15 a.m. – 12:15 p.m. in Telegraph Hill, it’s a customer- and partner-driven session with Sonoco Products and Deloitte telling their story about how to achieve integrated change control by interfacing Agile PLM with Oracle E-Business Suite. Sonoco Products, a global manufacturer of consumer and industrial packaging materials, with its systems integrator, Deloitte, is doing this by implementing prebuilt integration (Oracle Design-to-Release Integration Pack for Agile Product Lifecycle Management for Process and Oracle Process) to integrate Agile with Oracle Product Hub/Oracle Product Information Management and Oracle E-Business Suite. This session presents a case study of how Sonoco is leveraging this solution to improve data quality and build a framework for stronger master data governance. Even though that ends our PLM line-up at OpenWorld, there will still be many sessions and activities at the conference, so visit the Oracle OpenWorld website to review agendas and build your schedule. And of course, download and bring this guide and the latest version of the Agile PLM Focus-On Document (available soon!). San Francisco is a wonderful city to explore, and we’re glad you’re considering joining the Agile PLM team at Oracle OpenWorld! I hope to see you there! Follow me before the conference and on site for real-time updates about #OOW12 on Twitter @Kerrie_Foy or @AgilePLM.

    Read the article

  • WLS MBeans

    - by Jani Rautiainen
WLS provides a set of Managed Beans (MBeans) to configure, monitor and manage WLS resources. We can use the WLS MBeans to automate some of the tasks related to the configuration and maintenance of the WLS instance. The MBeans can be accessed in a number of ways: through various UIs, or programmatically using Java or WLST Python scripts. For customization development we can use these features to, for example, manage deployed customizations in MDS, control logging levels, automate deployment of dependent libraries, etc. This article is an introduction to how to access and use the WLS MBeans. The goal is to illustrate the various access methods in a single article; the details of the features are left to the linked documentation. This article covers a Windows-based environment; the steps for Linux would be similar, though with some differences, e.g. in how the file paths are defined. MBeans The WLS MBeans can be categorized into runtime and configuration MBeans. The Runtime MBeans can be used to access runtime information about the server and its resources. The data from runtime beans is only available while the server is running. The runtime beans can be used, for example, to check the state of the server or of a deployment. The Configuration MBeans contain information about the configuration of servers and resources. The configuration of the domain is stored in the config.xml file, and the configuration MBeans can be used to access and modify the configuration data. For more information on the WLS MBeans refer to: Understanding WebLogic Server MBeans, WLS MBean reference. Java Management Extensions (JMX) We can use JMX APIs to access the WLS MBeans. This allows us to create Java programs to configure, monitor, and manage WLS resources. In order to use the WLS MBeans we need to add the following library to the class-path: WL_HOME\lib\wljmxclient.jar Connecting to a WLS MBean server The WLS MBeans are contained in an MBean server; depending on the requirement we can connect to one of (MBean Server / JNDI Name): Domain Runtime MBean Server weblogic.management.mbeanservers.domainruntime Runtime MBean Server weblogic.management.mbeanservers.runtime Edit MBean Server weblogic.management.mbeanservers.edit To connect to the WLS MBean server, first we need to create a map containing the credentials: Hashtable<String, String> param = new Hashtable<String, String>(); param.put(Context.SECURITY_PRINCIPAL, "weblogic"); param.put(Context.SECURITY_CREDENTIALS, "weblogic1"); param.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote"); These define the user, the password and the package containing the protocol. Next we create the connection: JMXServiceURL serviceURL = new JMXServiceURL("t3","127.0.0.1",7101, "/jndi/weblogic.management.mbeanservers.domainruntime"); JMXConnector connector = JMXConnectorFactory.connect(serviceURL, param); MBeanServerConnection connection = connector.getMBeanServerConnection(); With the connection we can now access the MBeans for the WLS instance. For a complete example see Appendix A of this post. For more details refer to Accessing WebLogic Server MBeans with JMX. Accessing WLS MBeans The WLS MBeans are structured hierarchically; in order to access content we need to know the path to the MBean we are interested in. The MBean is accessed using the “MBeanServerConnection.getAttribute” API.  
WLS provides entry points to the hierarchy, allowing us to navigate all the WLS MBeans in the hierarchy (MBean Server / JMX object name): Domain Runtime MBean Server com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean Runtime MBean Servers com.bea:Name=RuntimeService,Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean Edit MBean Server com.bea:Name=EditService,Type=weblogic.management.mbeanservers.edit.EditServiceMBean For example we can access the Domain Runtime MBean using: ObjectName service = new ObjectName( "com.bea:Name=DomainRuntimeService," + "Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean"); The same syntax works for any “child” WLS MBean, e.g. to find all application deployments we can use: ObjectName domainConfig = (ObjectName)connection.getAttribute(service,"DomainConfiguration"); ObjectName[] appDeployments = (ObjectName[])connection.getAttribute(domainConfig,"AppDeployments"); Alternatively we could access the same MBean using the full syntax: ObjectName domainConfig = new ObjectName("com.bea:Location=DefaultDomain,Name=DefaultDomain,Type=Domain"); ObjectName[] appDeployments = (ObjectName[])connection.getAttribute(domainConfig,"AppDeployments"); For more details refer to Accessing WebLogic Server MBeans with JMX. Invoking operations on WLS MBeans The WLS MBean operations can be invoked with the MBeanServerConnection.invoke API; in the following example we query the state of the “AppsLoggerService” application: ObjectName appRuntimeStateRuntime = new ObjectName("com.bea:Name=AppRuntimeStateRuntime,Type=AppRuntimeStateRuntime"); Object[] parameters = { "AppsLoggerService", "DefaultServer" }; String[] signature = { "java.lang.String", "java.lang.String" }; String result = (String)connection.invoke(appRuntimeStateRuntime,"getCurrentState",parameters, signature); The result returned should be "STATE_ACTIVE", assuming the "AppsLoggerService" application is up and running. WebLogic Scripting Tool (WLST) The WebLogic Scripting Tool (WLST) is a command-line scripting environment from which we can access the same WLS MBeans. The tool is located under: $MW_HOME\oracle_common\common\bin\wlst.bat Do note that there are several instances of the wlst script under $MW_HOME; each of them works, however the available commands vary, so we want to use the one under “oracle_common”. The tool is started in offline mode. In offline mode we can access and manipulate the domain configuration. In online mode we can access the runtime information. We connect to the Administration Server: connect("weblogic","weblogic1", "t3://127.0.0.1:7101") In both online and offline modes we can navigate the WLS MBeans using commands like "ls" to print content and "cd" to navigate between objects, for example: All the commands available can be obtained with: help('all') For details of the tool refer to WebLogic Scripting Tool, and for the commands available see the WLST Command and Variable Reference. Also do note that the WLST tool can be invoked from Java code in Embedded Mode. Running Scripts The WLST tool allows us to automate tasks using Python scripts in Script Mode. The script can be manually created or recorded by the WLST tool. 
Example commands of recording a script: startRecording("c:/temp/recording.py") <commands that we want to record> stopRecording() We can run the script from WLST: execfile("c:/temp/recording.py") We can also run the script from the command line: C:\apps\Oracle\Middleware\oracle_common\common\bin\wlst.cmd c:/temp/recording.py There are various sample scripts are provided with the WLS instance. UI to Access the WLS MBeans There are various UIs through which we can access the WLS MBeans. Oracle Enterprise Manager Fusion Middleware Control Oracle WebLogic Server Administration Console Fusion Middleware Control MBean Browser In the integrated JDeveloper environment only the Oracle WebLogic Server Administration Console is available to us. For more information refer to the documentation, one noteworthy feature in the console is the ability to record WLST scripts based on the navigation. In addition to the UIs above the JConsole included in the JDK can be used to access the WLS MBeans. The JConsole needs to be started with specific parameter to force WLS objects to be used and jar files in the classpath: "C:\apps\Oracle\Middleware\jdk160_24\bin\jconsole" -J-Djava.class.path=C:\apps\Oracle\Middleware\jdk160_24\lib\jconsole.jar;C:\apps\Oracle\Middleware\jdk160_24\lib\tools.jar;C:\apps\Oracle\Middleware\wlserver_10.3\server\lib\wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote For more details refer to the Accessing Custom MBeans from JConsole. Summary In this article we have covered various ways we can access and use the WLS MBeans in context of integrated WLS in JDeveloper to be used for Fusion Application customization development. References Developing Custom Management Utilities With JMX for Oracle WebLogic Server Accessing WebLogic Server MBeans with JMX WebLogic Server MBean Reference WebLogic Scripting Tool WLST Command and Variable Reference Appendix A package oracle.apps.test; import java.io.IOException;import java.net.MalformedURLException;import java.util.Hashtable;import javax.management.MBeanServerConnection;import javax.management.MalformedObjectNameException;import javax.management.ObjectName;import javax.management.remote.JMXConnector;import javax.management.remote.JMXConnectorFactory;import javax.management.remote.JMXServiceURL;import javax.naming.Context;/** * This class contains simple examples on how to access WLS MBeans using JMX. */public class BlogExample {    /**     * Connection to the WLS MBeans     */    private MBeanServerConnection connection;    /**     * Constructor that takes in the connection information for the      * domain and obtains the resources from WLS MBeans using JMX.     * @param hostName host name to connect to for the WLS server     * @param port port to connect to for the WLS server     * @param userName user name to connect to for the WLS server     * @param password password to connect to for the WLS server     */    public BlogExample(String hostName, String port, String userName,                       String password) {        super();        try {            initConnection(hostName, port, userName, password);        } catch (Exception e) {            throw new RuntimeException("Unable to connect to the domain " +                                       hostName + ":" + port);        }    }    /**     * Default constructor.     * Tries to create connection with default values. Runtime exception will be     * thrown if the default values are not used in the local instance.     
*/    public BlogExample() {        this("127.0.0.1", "7101", "weblogic", "weblogic1");    }    /**     * Initializes the JMX connection to the WLS Beans     * @param hostName host name to connect to for the WLS server     * @param port port to connect to for the WLS server     * @param userName user name to connect to for the WLS server     * @param password password to connect to for the WLS server     * @throws IOException error connecting to the WLS MBeans     * @throws MalformedURLException error connecting to the WLS MBeans     * @throws MalformedObjectNameException error connecting to the WLS MBeans     */    private void initConnection(String hostName, String port, String userName,                                String password)                                 throws IOException, MalformedURLException,                                        MalformedObjectNameException {        String protocol = "t3";        String jndiroot = "/jndi/";        String mserver = "weblogic.management.mbeanservers.domainruntime";        JMXServiceURL serviceURL =            new JMXServiceURL(protocol, hostName, Integer.valueOf(port),                              jndiroot + mserver);        Hashtable<String, String> h = new Hashtable<String, String>();        h.put(Context.SECURITY_PRINCIPAL, userName);        h.put(Context.SECURITY_CREDENTIALS, password);        h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,              "weblogic.management.remote");        JMXConnector connector = JMXConnectorFactory.connect(serviceURL, h);        connection = connector.getMBeanServerConnection();    }    /**     * Main method used to invoke the logic for testing     * @param args arguments passed to the program     */    public static void main(String[] args) {        BlogExample blogExample = new BlogExample();        blogExample.testEntryPoint();        blogExample.testDirectAccess();        blogExample.testInvokeOperation();    }    /**     * Example of using an entry point to navigate the WLS MBean hierarchy.     */    public void testEntryPoint() {        try {            System.out.println("testEntryPoint");            ObjectName service =             new ObjectName("com.bea:Name=DomainRuntimeService,Type=" +"weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean");            ObjectName domainConfig =                (ObjectName)connection.getAttribute(service,                                                    "DomainConfiguration");            ObjectName[] appDeployments =                (ObjectName[])connection.getAttribute(domainConfig,                                                      "AppDeployments");            for (ObjectName appDeployment : appDeployments) {                String resourceIdentifier =                    (String)connection.getAttribute(appDeployment,                                                    "SourcePath");                System.out.println(resourceIdentifier);            }        } catch (Exception e) {            throw new RuntimeException(e);        }    }    /**     * Example of accessing WLS MBean directly with a full reference.     * This does the same thing as testEntryPoint in slightly difference way.     
*/    public void testDirectAccess() {        try {            System.out.println("testDirectAccess");            ObjectName appDeployment =                new ObjectName("com.bea:Location=DefaultDomain,"+                               "Name=AppsLoggerService,Type=AppDeployment");            String resourceIdentifier =                (String)connection.getAttribute(appDeployment, "SourcePath");            System.out.println(resourceIdentifier);        } catch (Exception e) {            throw new RuntimeException(e);        }    }    /**     * Example of invoking operation on a WLS MBean.     */    public void testInvokeOperation() {        try {            System.out.println("testInvokeOperation");            ObjectName appRuntimeStateRuntime =                new ObjectName("com.bea:Name=AppRuntimeStateRuntime,"+                               "Type=AppRuntimeStateRuntime");            String identifier = "AppsLoggerService";            String serverName = "DefaultServer";            Object[] parameters = { identifier, serverName };            String[] signature = { "java.lang.String", "java.lang.String" };            String result =                (String)connection.invoke(appRuntimeStateRuntime, "getCurrentState",                                          parameters, signature);            System.out.println("State of " + identifier + " = " + result);        } catch (Exception e) {            throw new RuntimeException(e);        }    }}

    Read the article

  • OWB 11gR2 - Early Arriving Facts

    - by Dawei Sun
    A common challenge when building ETL components for a data warehouse is how to handle early arriving facts. OWB 11gR2 introduced a new feature to address this for dimensional objects entitled Orphan Management. An orphan record is one that does not have a corresponding existing parent record. Orphan management automates the process of handling source rows that do not meet the requirements necessary to form a valid dimension or cube record. In this article, a simple example will be provided to show you how to use Orphan Management in OWB. We first import a sample MDL file that contains all the objects we need. Then we take some time to examine all the objects. After that, we prepare the source data, deploy the target table and dimension/cube loading map. Finally, we run the loading maps, and check the data in target dimension/cube tables. OK, let’s start… 1. Import MDL file and examine sample project First, download zip file from here, which includes a MDL file and three source data files. Then we open OWB design center, import orphan_management.mdl by using the menu File->Import->Warehouse Builder Metadata. Now we have several objects in BI_DEMO project as below: Mapping LOAD_CHANNELS_OM: The mapping for dimension loading. Mapping LOAD_SALES_OM: The mapping for cube loading. Dimension CHANNELS_OM: The dimension that contains channels data. Cube SALES_OM: The cube that contains sales data. Table CHANNELS_OM: The star implementation table of dimension CHANNELS_OM. Table SALES_OM: The star implementation table of cube SALES_OM. Table SRC_CHANNELS: The source table of channels data, that will be loaded into dimension CHANNELS_OM. Table SRC_ORDERS and SRC_ORDER_ITEMS: The source tables of sales data that will be loaded into cube SALES_OM. Sequence CLASS_OM_DIM_SEQ: The sequence used for loading dimension CHANNELS_OM. Dimension CHANNELS_OM This dimension has a hierarchy with three levels: TOTAL, CLASS and CHANNEL. Each level has three attributes: ID (surrogate key), NAME and SOURCE_ID (business key). It has a standard star implementation. The orphan management policy and the default parent setting are shown in the following screenshots: The orphan management policy options that you can set for loading are: Reject Orphan: The record is not inserted. Default Parent: You can specify a default parent record. This default record is used as the parent record for any record that does not have an existing parent record. If the default parent record does not exist, Warehouse Builder creates the default parent record. You specify the attribute values of the default parent record at the time of defining the dimensional object. If any ancestor of the default parent does not exist, Warehouse Builder also creates this record. No Maintenance: This is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records. While removing data from a dimension, you can select one of the following orphan management policies: Reject Removal: Warehouse Builder does not allow you to delete the record if it has existing child records. No Maintenance: This is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records. (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#insertedID1) Cube SALES_OM This cube is references to dimension CHANNELS_OM. It has three measures: AMOUNT, QUANTITY and COST. 
The orphan management policy setting are shown as following screenshot: The orphan management policy options that you can set for loading are: No Maintenance: Warehouse Builder does not actively detect, reject, or fix orphan rows. Default Dimension Record: Warehouse Builder assigns a default dimension record for any row that has an invalid or null dimension key value. Use the Settings button to define the default parent row. Reject Orphan: Warehouse Builder does not insert the row if it does not have an existing dimension record. (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#BABEACDG) Mapping LOAD_CHANNELS_OM This mapping loads source data from table SRC_CHANNELS to dimension CHANNELS_OM. The operator CHANNELS_IN is bound to table SRC_CHANNELS; CHANNELS_OUT is bound to dimension CHANNELS_OM. The TOTALS operator is used for generating a constant value for the top level in the dimension. The CLASS_FILTER operator is used to filter out the “invalid” class name, so then we can see what will happen when those channel records with an “invalid” parent are loading into dimension. Some properties of the dimension operator in this mapping are important to orphan management. See the screenshot below: Create Default Level Records: If YES, then default level records will be created. This property must be set to YES for dimensions and cubes if one of their orphan management policies is “Default Parent” or “Default Dimension Record”. This property is set to NO by default, so the user may need to set this to YES manually. LOAD policy for INVALID keys/ LOAD policy for NULL keys: These two properties have the same meaning as in the dimension editor. The values are set to the same as the dimension value when user drops the dimension into the mapping. The user does not need to modify these properties. Record Error Rows: If YES, error rows will be inserted into error table when loading the dimension. REMOVE Orphan Policy: This property is used when removing data from a dimension. Since the dimension loading type is set to LOAD in this example, this property is disabled. Mapping LOAD_SALES_OM This mapping loads source data from table SRC_ORDERS and SRC_ORDER_ITEMS to cube SALES_OM. This mapping seems a little bit complicated, but operators in the red rectangle are used to filter out and generate the records with “invalid” or “null” dimension keys. Some properties of the cube operator in a mapping are important to orphan management. See the screenshot below: Enable Source Aggregation: Should be checked in this example. If the default dimension record orphan policy is set for the cube operator, then it is recommended that source aggregation also be enabled. Otherwise, the orphan management processing may produce multiple fact rows with the same default dimension references, which will cause an “unstable rowset” execution error in the database, since the dimension refs are used as update match attributes for updating the fact table. LOAD policy for INVALID keys/ LOAD policy for NULL keys: These two properties have the same meaning as in the cube editor. The values are set to the same as in the cube editor when the user drops the cube into the mapping. The user does not need to modify these properties. Record Error Rows: If YES, error rows will be inserted into error table when loading the cube. 2. Deploy objects and mappings We now can deploy the objects. First, make sure location SALES_WH_LOCAL has been correctly configured. 
Then open Control Center Manager by using the menu Tools->Control Center Manager. Expand BI_DEMO->SALES_WH_LOCAL, click SALES_WH node on the project tree. We can see the following objects: Deploy all the objects in the following order: Sequence CLASS_OM_DIM_SEQ Table CHANNELS_OM, SALES_OM, SRC_CHANNELS, SRC_ORDERS, SRC_ORDER_ITEMS Dimension CHANNELS_OM Cube SALES_OM Mapping LOAD_CHANNELS_OM, LOAD_SALES_OM Note that we deployed source tables as well. Normally, we import source table from database instead of deploying them to target schema. However, in this example, we designed the source tables in OWB and deployed them to database for the purpose of this demonstration. 3. Prepare and examine source data Before running the mappings, we need to populate and examine the source data first. Run SRC_CHANNELS.sql, SRC_ORDERS.sql and SRC_ORDER_ITEMS.sql as target user. Then we check the data in these three tables. Table SRC_CHANNELS SQL> select rownum, id, class, name from src_channels; Records 1~5 are correct; they should be loaded into dimension without error. Records 6,7 and 8 have null parents; they should be loaded into dimension with a default parent value, and should be inserted into error table at the same time. Records 9, 10 and 11 have “invalid” parents; they should be rejected by dimension, and inserted into error table. Table SRC_ORDERS and SRC_ORDER_ITEMS SQL> select rownum, a.id, a.channel, b.amount, b.quantity, b.cost from src_orders a, src_order_items b where a.id = b.order_id; Record 178 has null dimension reference; it should be loaded into cube with a default dimension reference, and should be inserted into error table at the same time. Record 179 has “invalid” dimension reference; it should be rejected by cube, and inserted into error table. Other records should be aggregated and loaded into cube correctly. 4. Run the mappings and examine the target data In the Control Center Manager, expand BI_DEMO-> SALES_WH_LOCAL-> SALES_WH-> Mappings, right click on LOAD_CHANNELS_OM node, click Start. Use the same way to run mapping LOAD_SALES_OM. When they successfully finished, we can check the data in target tables. Table CHANNELS_OM SQL> select rownum, total_id, total_name, total_source_id, class_id,class_name, class_source_id, channel_id, channel_name,channel_source_id from channels_om order by abs(dimension_key); Records 1,2 and 3 are the default dimension records for the three levels. Records 8, 10 and 15 are the loaded records that originally have null parents. We see their parents name (class_name) is set to DEF_CLASS_NAME. Those records whose CHANNEL_NAME are Special_4, Special_5 and Special_6 are not loaded to this table because of the invalid parent. Error Table CHANNELS_OM_ERR SQL> select rownum, class_source_id, channel_id, channel_name,channel_source_id, err$$$_error_reason from channels_om_err order by channel_name; We can see all the record with null parent or invalid parent are inserted into this error table. Error reason is “Default parent used for record” for the first three records, and “No parent found for record” for the last three. Table SALES_OM SQL> select a.*, b.channel_name from sales_om a, channels_om b where a.channels=b.channel_id; We can see the order record with null channel_name has been loaded into target table with a default channel_name. The one with “invalid” channel_name are not loaded. 
Error Table SALES_OM_ERR SQL> select a.amount, a.cost, a.quantity, a.channels, b.channel_name, a.err$$$_error_reason from sales_om_err a, channels_om b where a.channels=b.channel_id(+); We can see the order records with null or invalid channel_name are inserted into error table. If the dimension reference column is null, the error reason is “Default dimension record used for fact”. If it is invalid, the error reason is “Dimension record not found for fact”. Summary In summary, this article illustrated the Orphan Management feature in OWB 11gR2. Automated orphan management policies improve ETL developer and administrator productivity by addressing an important cause of cube and dimension load failures, without requiring developers to explicitly build logic to handle these orphan rows.

    Read the article

  • CodePlex Daily Summary for Sunday, December 19, 2010

    CodePlex Daily Summary for Sunday, December 19, 2010Popular ReleasesTeam Foundation Server Administration Tool: 2.1: TFS Administration Tool 2.1, is the first version of the TFS Administration Tool which is built on top of the Team Foundation Server 2010 object model. TFS Administration Tool 2.1 can be installed on machines that are running either Team Explorer 2010, or Team Foundation Server 2010.SQL Monitor: SQL Monitor 3.0 alpha 5: 1. fix a problem with not expanding nodes in object explorer, sorryHacker Passwords: HackerPasswords.zip: Source code, executable and documentationWatchersNET.SiteMap: WatchersNET.SiteMap 01.03.03: Whats NewSkin Object: You can now filter by Terms for Example use: <object id="dnnSITEMAPSL" codetype="dotnetnuke/server" codebase="SITEMAPSL"> <param name="TaxMode" value="terms" /> <param name="TaxTerms" value="TermName1,TermName2" /> </object> changes Tax Term Filter should work correct nowSubtitleTools: SubtitleTools 1.3: - Added .srt FileAssociation & Win7 ShowRecentCategory feature. - Applied UnifiedYeKe to fix Persian search problems. - Reduced file size of Persian subtitles for uploading @OSDB.EnhSim: EnhSim 2.2.3 ALPHA: 2.2.3 ALPHAThis release adds in the changes for 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in th...Facebook C# SDK: 4.1.0: - Lots of bug fixes - Removed Dynamic Runtime Language dependencies from non-dynamic platforms. - Samples included in release for ASP.NET, MVC, Silverlight, Windows Phone 7, WPF, WinForms, and one Visual Basic Sample - Changed internal serialization to use Json.net - BREAKING CHANGE: Canvas Session is no longer support. Use Signed Request instead. Canvas Session has been deprecated by Facebook. - BREAKING CHANGE: Some renames and changes with Authorizer, CanvasAuthorizer, and Authorization ac...NuGet (formerly NuPack): NuGet 1.0 build 11217.102: Note: this release is slightly newer than RC1, and fixes a couple issues relating to updating packages to newer versions. NuGet is a free, open source developer focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the the Package Manager Console and the Add Package Dialog. This new build targets the newer feed (h...WCF Community Site: WCF Web APIs 10.12.17: Welcome to the second release of WCF Web APIs on codeplex Here is what is new in this release. WCF Support for jQuery - create WCF web services that are easy to consume from JavaScript clients, in particular jQuery. Better support for using JsonValue as dynamic Support for JsonValue change notification events for databinding and other purposes Support for going between JsonValue and CLR types WCF HTTP - create HTTP / REST based web services. 
This is a minor release which contains fixe...Orchard Project: Orchard 0.9: Orchard Release Notes Build: 0.9.253 Published: 12/16/2010 How to Install OrchardTo install the Orchard tech preview using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard-Using-Web-PI.ashx Web PI will detect your hardware environment and install the application. --OR-- Alternatively, to install the release manually, download the Orchard.Web.0.9.253.zip file. The zip contents are pre-built and ready-to-run. Simply extract the contents of the Orch...Pyxis 2: Beta 2.2: An issue stopping the Checkbox from working properly has been resolved (gradient background has also been added) A TabDialog control has been added NumericUpDown control added A driver for the VS1053 has been added (not yet in use) PyxisAPI.OpenFile now works with all its features PyxisAPI.SaveFile now works with all features Settings window is now available from the Pyxis menu You can now connect using Static IP or DHCP You can now set your system time from the Settings windo...sgMotion Animation Library: sgMotion v1.3 for SunBurn 2.0.9 [Includes Sample]: sgMotion 1.3 release for use with SunBurn 2.0.9 (all editions). Includes example project with assets, and full Windows and Xbox support. (tested on all platforms)DotSpatial: DotSpatial 12-15-2010: This release contains a few minor bug fixes and hopefully the GDAL libraries for the 3.5 x86 build actually built to the correct directory this time.DotNetNuke® Community Edition: 05.06.01 Beta: This is the initial Beta of DotNetNuke 5.6.1. See the DotNetNuke Roadmap a full list of changes in this release.MSBuild Extension Pack: December 2010: Release Blog Post The MSBuild Extension Pack December 2010 release provides a collection of over 380 MSBuild tasks. A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GU...TweetSharp: TweetSharp v2.0.0.0 - Preview 5: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 5 ChangesMaintenance release with user reported fixes Preview 4 ChangesReintroduced fluent interface support via satellite assembly Added entities support, entity segmentation, and ITweetable/ITweeter interfaces for client development Numerous fixes reported by preview users Preview 3 ChangesNumerous ...Silverlight Contrib: Silverlight Contrib 2010.1.0: 2010.1.0 New FeaturesCompatibility Release for Silverlight 4 and Visual Studio 2010FlickrNet API Library: 3.1.4000: Newest release. Now contains dedicated Windows Phone 7 DLL as well as all previous DLLs. Also contains Windows Help file documentation now as standard.mojoPortal: 2.3.5.8: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2358-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. 
To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-12-13: Code samples for Visual Studio 2010New ProjectsANLP - Another .NET Lexer Parser: This project aims to have a lexer/parser working in Silverlight and help people to write their own grammar and make the lexer/parser available in Silverlight.Better App Tab Shortcuts Firefox Extension: An extension for Firefox 4.0 which improves the shortcuts for selecting tabs.BizTalk Map Converter: Converts BizTalk Maps into ExcelCampo Minado: Campo minado em SilverlightCSharpHASH: Hash project is an college project conserning business software development. Written in C#.DS_HW1: hw1DynamicViewModel: MVVM using POCOs with .NET 4.0: This project aims to provide a way to implement the Model View ViewModel (MVVM) architectural pattern using Plain Old CLR Objects (POCOs) while taking full advantage of .NET 4.0 DynamicObject Class.Evolution Game: Evolution game that uses genetic algorithms to visualize life of different species.Fluent Parser: Fluent Parser is a library allowing to build a syntaxic analyser in C#. The grammar is built using only .NET objects into the form of a Parsing Expression Grammar (PEG).GoogleMusic Player: Choose and play music from newly launched google Music http://www.google.co.in/music create and save playlists Using C#Gracefully update SharePoint 2010 document metadata: Users want to update the metadata for a document in a document library, for which they don’t have access or if they wanted to update some read-only fields. Hacker Passwords: Hacker Passwords generator - free open sourceHTML5 Canvas FlowView: FlowView is a simple to use API for creating/implementing a flowview style element in HTML5/Canvas. The FlowView looks much like something from Apple's cover view.L#: A F# program to provide similar functionalities offered by Prolog for F# oriented applications.MapInfo Tools: Utilities for working with MapInfo files.Notepad X: Notepad X is an alternative open source text editor for Microsoft Windows, with a lot of customization options created to help users managing text documents, featuring tab navigation.OpenFlyClient: An educational open source of a client written in C# for FlyFF. This is for educational and evaluation purposes only. Using this for private servers is not allowed.Orkut API Library: Orkut API Library (more of a wrapper) for .NET makes it easier for .NET developers creating web and/or desktop applications to leverage Orkut OpenSocial API from within the familiar Visual Studio environment. PasteHtml.NET: PasteHtml.NET is a .NET API for using the PasteHtml.com service.Photon: Silverlight MVVM Framework RSATasync: Running analyses in RiskSpectrum PSA Professional software is a time-consuming task. RSATasync is a middleware, which uses .NET4 Parallel Extensions to parallelize analyses sent by the editor to the analyzer in the RiskSpectrum software.The Killie01 Project: all killie01 projects (opensource)United Nations News for Windows Phone 7: Open source project for United Nations News for Windows Phone 7. WCF SIP Stack: WCF SIPWordpress .NET: WPDotnet is a library to ease the Wordpress integration with ASP.NET. Some Server controls are implemented. If you don't want to use them, you can just use the classes.

    Read the article

  • First toe in the water with Object Databases : DB4O

    - by REA_ANDREW
I have been wanting to have a play with Object Databases for a while now, and today I have done just that. One of the obvious choices I had to make was which one to use. My criterion for choosing one today was simple: I wanted one which I could literally whack in and start using, which means I wanted one which either had a .NET API or was designed/ported to .NET. My decision was between two: db4o and MongoDb. I went for db4o for the single reason that it looked like I could get it running and integrated the quickest. I am making a Blogging application and front end as a project with which I can test and learn these object databases. Another requirement which I thought I would mention is that I also want to be able to use the said database in a shared hosting environment where I cannot install, run and maintain a server instance of said object database. I can do exactly this with db4o. I have not tried to do this with MongoDb at the time of writing. There are quite a few in the industry now, and you can read an interesting post about different ones and how they are used by some of the heavyweights in the industry here: http://blog.marcua.net/post/442594842/notes-from-nosql-live-boston-2010 In the example which I am building I am using StructureMap as my IOC. To inject the object for db4o I went with a Singleton instance scope, as I am using a single file and I need this to be available to any thread in the process, as opposed to using the server implementation where I could open and close client connections with the server handling each one respectively. Again I want to point out that I have chosen to stick with the non-server implementation of db4o as I wanted to use this in a shared hosting environment where I cannot have such servers installed and run.     public static class Bootstrapper    {        public static void ConfigureStructureMap()        {            ObjectFactory.Initialize(x => x.AddRegistry(new MyApplicationRegistry()));        }    }    public class MyApplicationRegistry : Registry    {        public const string DB4O_FILENAME = "blog123";        public string DbPath        {            get            {                return Path.Combine(Path.GetDirectoryName(Assembly.GetAssembly(typeof(IBlogRepository)).Location), DB4O_FILENAME);            }        }        public MyApplicationRegistry()        {            For<IObjectContainer>().Singleton().Use(                () => Db4oEmbedded.OpenFile(Db4oEmbedded.NewConfiguration(), DbPath));            Scan(assemblyScanner =>            {                assemblyScanner.TheCallingAssembly();                assemblyScanner.WithDefaultConventions();            });        }    } So my code above is the StructureMap plumbing which I use for the application. I am doing this simply as a quick scratch pad to play around with different things, so I am simply segregating logical layers with folder structure as opposed to different assemblies. It will be easy if I want to do this with any segment, but for the purposes of example I have literally just whacked everything into the one assembly. You can see an example file structure I have on the right. 
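Since the registry above scans for implementations of IBlogRepository but the interface itself is never shown, here is a hypothetical sketch of the shape it might take, inferred only from the repository methods used later in this post; the exact member list is a guess, not the author's actual code:
public interface IBlogRepository
{
    // Persists a new or updated domain object.
    void Put(DomainObject obj);

    // Removes an object from the store.
    void Delete(DomainObject obj);

    // Looks a post up by its business key.
    Post GetByKey(object key);
}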
I am planning on testing out a few implementations of the object databases out there, so I can program to an interface of IBlogRepository. One of the things I was unsure about was how it performed in a multi-threaded environment, which is how it will undoubtedly be used 9 times out of 10, and because I am using the db context as a singleton, I assumed that the library was of course thread safe, but I did not know, as I had not read anything about it in the documentation - again, this is probably me not reading things correctly. In short, though, I threw together a simple test where I simply iterate to a limit, each time kicking a common task off with a thread from the thread pool. This task simply created a random Post and added it to the storage. The execution of the threads I put inside the Setup of the Test, and then I simply ensure the number of posts committed to the database is equal to the number of iterations I made; here is the code I used to do the multi-threaded jobs: [TestInitialize] public void Setup() { var sw = new System.Diagnostics.Stopwatch(); sw.Start(); var resetEvent = new ManualResetEvent(false); ThreadPool.SetMaxThreads(20, 20); for (var i = 0; i < MAX_ITERATIONS; i++) { ThreadPool.QueueUserWorkItem(delegate(object state) { var eventToReset = (ManualResetEvent)state; var post = new Post { Author = MockUser, Content = "Mock Content", Title = "Title" }; Repository.Put(post); var counter = Interlocked.Decrement(ref _threadCounter); if (counter == 0) eventToReset.Set(); }, resetEvent); } WaitHandle.WaitAll(new[] { resetEvent }); sw.Stop(); Console.WriteLine("{0:00}.{1:00} seconds", sw.Elapsed.Seconds, sw.Elapsed.Milliseconds); }   I was not doing this to test out the speed performance of db4o, but while I was doing this I could not help but put in a StopWatch and see, out of sheer interest, how fast it would insert a number of Posts. I tested it out in this case with 10000 inserts of a small, simple POCO and it resulted in an average of 899.36 object inserts / second. Again, this is just a simple, crude test which came out of my curiosity at how it performed under many threads when using the non-server implementation of db4o. The spec summary of the computer I used is as follows: With regards to the actual Repository implementation itself, it really is quite straightforward, and I have to say I am very surprised at how easy it was to integrate and get up and running. One thing I have noticed in the exposure I have had so far is that Query returns IList<T> as opposed to IQueryable<T>, but again I have not looked into this in depth; this could be there already, and if not, they have provided everything one needs to make their own repository. An example of a couple of methods from my db4o implementation of the BlogRepository is below: public class BlogRepository : IBlogRepository { private readonly IObjectContainer _db; public BlogRepository(IObjectContainer db) { _db = db; } public void Put(DomainObject obj) { _db.Store(obj); } public void Delete(DomainObject obj) { _db.Delete(obj); } public Post GetByKey(object key) { return _db.Query<Post>(post => post.Key == key).FirstOrDefault(); } … Anyway, I hope to get a few more implementations going of the object databases and literally just get familiarized with them and the concept of NoSQL databases. Cheers for now, Andrew
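To round this off, here is a minimal sketch (my own illustration, not code from the original post) of how a consumer might resolve the repository through StructureMap and exercise the same Put/GetByKey methods shown above. I assume here that Post exposes an application-assigned Key (as implied by GetByKey and the Query predicate) and that it is a string; the real domain model may well differ:
public static class RepositoryUsageExample
{
    public static void Run()
    {
        // Resolve the repository via StructureMap, matching the registry configured earlier.
        var repository = ObjectFactory.GetInstance<IBlogRepository>();

        // Store a post; Title, Content and Key are illustrative values only.
        var post = new Post { Title = "Hello db4o", Content = "First post", Key = "hello-db4o" };
        repository.Put(post);

        // Read it back using the key-based lookup the repository exposes.
        var saved = repository.GetByKey("hello-db4o");
        Console.WriteLine(saved != null ? saved.Title : "not found");
    }
}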

    Read the article

  • Software Engineering Practices – Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, whether what we are doing is appropriate, and what factors we should be considering when determining which practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go:

If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access database, an Excel spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS).

I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over, so I won’t bother going into them again here, but there are also (unnecessary) costs to over-engineering a solution. Sometimes putting in multiple layers, and IoC containers, and abstracting out the persistence, etc. is complete overkill when a one-time-use Access database could solve the problem perfectly well. A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, and then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point… maybe even multiple times).

My thoughts are that we should be making a conscious decision around the start of each project approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors:

1. Importance – How important to the business is this application? What is the impact if the application were to suddenly stop working?
2. Complexity – How complex is the application functionality?
3. Life-Expectancy – How long is this application expected to be in use? Is this a one-time-use application, does it fill a short-term need, or is it more strategic and expected to be in use for many years to come?

Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at and then translate that into some prescriptive set of practices and techniques you should be using.
Rather, my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum; indeed a large number of projects should probably fall somewhere within the middle, and different projects should adopt a different level of software engineering practices and maturity based on the needs of that project.

To give an example of this way of thinking from my day job: every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event all these various pieces of data were tracked in separate spreadsheets, and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event’s logistics. The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so, for example, if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS, meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining its place within the SEMS we can see why:

Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice since it isn’t a customer-facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss).

Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data and link them together appropriately. Precisely the task that Access (and/or Excel) can do with minimal custom development required.

Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below).

Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the Access database and re-writing it as an actual web or Windows application. In this case the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much.
We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an Access database and continue using it; this is a decision that needs to be made on a case-by-case basis).

At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple of years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also, as we hold focus groups and gather feedback from customers and potential customers, there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company. In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch.

The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time, but rather that you should be aware of when and how the context and objectives around a project change, and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs.

Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed, since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay).

Developer Skills

Another important aspect to this whole discussion is the skill sets of your architects and lead developers. When talking about the progression of a developer’s skills from junior->intermediate->senior->… they generally start by only being able to write code that belongs on the left side of the SEMS, and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS.
We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right end of the SEMS (the exact practices and techniques that translates to are constantly changing, but that’s not the point here). A critical skill that I’d love to see more evidence of in our industry is the most senior guys not only being able to work at the right end of the SEMS, but, more importantly, being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill would be being able to make the conscious decision to move a project’s code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch. An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.

    Read the article

  • Ubuntu 14.04 Failed to load module udlfb

    - by jar276705
    DisplayLink doesn't load and run. The adapter is recognized and /dev/FB1 is created. USB bus info: Bus 001 Device 006: ID 17e9:0198 DisplayLink Xorg.0.log: X.Org X Server 1.15.1 Release Date: 2014-04-13 [ 44708.386] X Protocol Version 11, Revision 0 [ 44708.389] Build Operating System: Linux 3.2.0-37-generic i686 Ubuntu [ 44708.392] Current Operating System: Linux rrl 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10 19:08:14 UTC 2014 i686 [ 44708.392] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.13.0-24-generic root=UUID=6b719a77-29e0-4668-8f16-57d0d3a73a3f ro quiet splash vt.handoff=7 [ 44708.399] Build Date: 16 April 2014 01:40:08PM [ 44708.402] xorg-server 2:1.15.1-0ubuntu2 (For technical support please see http://www.ubuntu.com/support) [ 44708.405] Current version of pixman: 0.30.2 [ 44708.412] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 44708.412] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 44708.427] (==) Log file: "/var/log/Xorg.0.log", Time: Thu May 1 09:38:27 2014 [ 44708.431] (==) Using config file: "/etc/X11/xorg.conf" [ 44708.434] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 44708.435] (==) ServerLayout "X.org Configured" [ 44708.435] (**) |-->Screen "DisplayLinkScreen" (0) [ 44708.435] (**) | |-->Monitor "DisplayLinkMonitor" [ 44708.435] (**) | |-->Device "DisplayLinkDevice" [ 44708.435] (**) |-->Screen "Screen0" (1) [ 44708.435] (**) | |-->Monitor "Monitor0" [ 44708.435] (**) | |-->Device "Card0" [ 44708.435] (**) |-->Input Device "Mouse0" [ 44708.435] (**) |-->Input Device "Keyboard0" [ 44708.435] (==) Automatically adding devices [ 44708.435] (==) Automatically enabling devices [ 44708.435] (==) Automatically adding GPU devices [ 44708.435] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 44708.435] Entry deleted from font path. [ 44708.435] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist. [ 44708.435] Entry deleted from font path. [ 44708.435] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist. [ 44708.435] Entry deleted from font path. [ 44708.435] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 44708.435] Entry deleted from font path. [ 44708.435] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist. [ 44708.435] Entry deleted from font path. [ 44708.435] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist. [ 44708.435] Entry deleted from font path. [ 44708.435] (**) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/100dpi/:unscaled, /usr/share/fonts/X11/Type1, /usr/share/fonts/X11/100dpi, built-ins, /usr/share/fonts/X11/misc, /usr/share/fonts/X11/100dpi/:unscaled, /usr/share/fonts/X11/Type1, /usr/share/fonts/X11/100dpi, built-ins [ 44708.435] (**) ModulePath set to "/usr/lib/xorg/modules" [ 44708.435] (WW) Hotplugging is on, devices using drivers 'kbd', 'mouse' or 'vmmouse' will be disabled. 
[ 44708.435] (WW) Disabling Mouse0 [ 44708.435] (WW) Disabling Keyboard0 [ 44708.435] (II) Loader magic: 0xb77106c0 [ 44708.435] (II) Module ABI versions: [ 44708.435] X.Org ANSI C Emulation: 0.4 [ 44708.435] X.Org Video Driver: 15.0 [ 44708.435] X.Org XInput driver : 20.0 [ 44708.435] X.Org Server Extension : 8.0 [ 44708.436] (II) xfree86: Adding drm device (/dev/dri/card0) [ 44708.436] (II) xfree86: Adding drm device (/dev/dri/card1) [ 44708.437] (--) PCI:*(0:1:5:0) 1002:9616:105b:0e26 rev 0, Mem @ 0xf0000000/134217728, 0xfeae0000/65536, 0xfe900000/1048576, I/O @ 0x0000b000/256 [ 44708.441] Initializing built-in extension Generic Event Extension [ 44708.444] Initializing built-in extension SHAPE [ 44708.448] Initializing built-in extension MIT-SHM [ 44708.452] Initializing built-in extension XInputExtension [ 44708.456] Initializing built-in extension XTEST [ 44708.460] Initializing built-in extension BIG-REQUESTS [ 44708.464] Initializing built-in extension SYNC [ 44708.468] Initializing built-in extension XKEYBOARD [ 44708.471] Initializing built-in extension XC-MISC [ 44708.475] Initializing built-in extension SECURITY [ 44708.479] Initializing built-in extension XINERAMA [ 44708.483] Initializing built-in extension XFIXES [ 44708.487] Initializing built-in extension RENDER [ 44708.491] Initializing built-in extension RANDR [ 44708.494] Initializing built-in extension COMPOSITE [ 44708.498] Initializing built-in extension DAMAGE [ 44708.502] Initializing built-in extension MIT-SCREEN-SAVER [ 44708.506] Initializing built-in extension DOUBLE-BUFFER [ 44708.510] Initializing built-in extension RECORD [ 44708.513] Initializing built-in extension DPMS [ 44708.517] Initializing built-in extension Present [ 44708.521] Initializing built-in extension DRI3 [ 44708.525] Initializing built-in extension X-Resource [ 44708.528] Initializing built-in extension XVideo [ 44708.532] Initializing built-in extension XVideo-MotionCompensation [ 44708.535] Initializing built-in extension SELinux [ 44708.539] Initializing built-in extension XFree86-VidModeExtension [ 44708.542] Initializing built-in extension XFree86-DGA [ 44708.546] Initializing built-in extension XFree86-DRI [ 44708.549] Initializing built-in extension DRI2 [ 44708.549] (II) "glx" will be loaded. This was enabled by default and also specified in the config file. [ 44708.549] (WW) "xmir" is not to be loaded by default. Skipping. 
[ 44708.549] (II) LoadModule: "glx" [ 44708.549] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so [ 44708.550] (II) Module glx: vendor="X.Org Foundation" [ 44708.550] compiled for 1.15.1, module version = 1.0.0 [ 44708.550] ABI class: X.Org Server Extension, version 8.0 [ 44708.550] (==) AIGLX enabled [ 44708.553] Loading extension GLX [ 44708.553] (II) LoadModule: "udlfb" [ 44708.554] (WW) Warning, couldn't open module udlfb [ 44708.554] (II) UnloadModule: "udlfb" [ 44708.554] (II) Unloading udlfb [ 44708.554] (EE) Failed to load module "udlfb" (module does not exist, 0) [ 44708.554] (II) LoadModule: "modesetting" [ 44708.554] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so [ 44708.554] (II) Module modesetting: vendor="X.Org Foundation" [ 44708.554] compiled for 1.15.0, module version = 0.8.1 [ 44708.554] Module class: X.Org Video Driver [ 44708.554] ABI class: X.Org Video Driver, version 15.0 [ 44708.554] (==) Matched fglrx as autoconfigured driver 0 [ 44708.554] (==) Matched ati as autoconfigured driver 1 [ 44708.554] (==) Matched fglrx as autoconfigured driver 2 [ 44708.554] (==) Matched ati as autoconfigured driver 3 [ 44708.554] (==) Matched modesetting as autoconfigured driver 4 [ 44708.554] (==) Matched fbdev as autoconfigured driver 5 [ 44708.554] (==) Matched vesa as autoconfigured driver 6 [ 44708.554] (==) Assigned the driver to the xf86ConfigLayout [ 44708.554] (II) LoadModule: "fglrx" [ 44708.554] (WW) Warning, couldn't open module fglrx [ 44708.554] (II) UnloadModule: "fglrx" [ 44708.554] (II) Unloading fglrx [ 44708.554] (EE) Failed to load module "fglrx" (module does not exist, 0) [ 44708.554] (II) LoadModule: "ati" [ 44708.554] (II) Loading /usr/lib/xorg/modules/drivers/ati_drv.so [ 44708.554] (II) Module ati: vendor="X.Org Foundation" [ 44708.554] compiled for 1.15.0, module version = 7.3.0 [ 44708.554] Module class: X.Org Video Driver [ 44708.554] ABI class: X.Org Video Driver, version 15.0 [ 44708.554] (II) LoadModule: "radeon" [ 44708.555] (II) Loading /usr/lib/xorg/modules/drivers/radeon_drv.so [ 44708.555] (II) Module radeon: vendor="X.Org Foundation" [ 44708.555] compiled for 1.15.0, module version = 7.3.0 [ 44708.555] Module class: X.Org Video Driver [ 44708.555] ABI class: X.Org Video Driver, version 15.0 [ 44708.555] (II) LoadModule: "modesetting" [ 44708.555] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so [ 44708.555] (II) Module modesetting: vendor="X.Org Foundation" [ 44708.555] compiled for 1.15.0, module version = 0.8.1 [ 44708.555] Module class: X.Org Video Driver [ 44708.555] ABI class: X.Org Video Driver, version 15.0 [ 44708.555] (II) UnloadModule: "modesetting" [ 44708.555] (II) Unloading modesetting [ 44708.555] (II) Failed to load module "modesetting" (already loaded, 0) [ 44708.555] (II) LoadModule: "fbdev" [ 44708.555] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 44708.555] (II) Module fbdev: vendor="X.Org Foundation" [ 44708.555] compiled for 1.15.0, module version = 0.4.4 [ 44708.555] Module class: X.Org Video Driver [ 44708.555] ABI class: X.Org Video Driver, version 15.0 [ 44708.555] (II) LoadModule: "vesa" [ 44708.555] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 44708.555] (II) Module vesa: vendor="X.Org Foundation" [ 44708.555] compiled for 1.15.0, module version = 2.3.3 [ 44708.555] Module class: X.Org Video Driver [ 44708.555] ABI class: X.Org Video Driver, version 15.0 [ 44708.555] (II) modesetting: Driver for Modesetting Kernel Drivers: kms [ 44708.555] (II) RADEON: Driver for 
ATI Radeon chipsets: [ 44708.560] (II) FBDEV: driver for framebuffer: fbdev [ 44708.560] (II) VESA: driver for VESA chipsets: vesa [ 44708.560] (--) using VT number 7 [ 44708.578] (II) modesetting(0): using drv /dev/dri/card0 [ 44708.578] (II) modesetting(G0): using drv /dev/dri/card1 [ 44708.578] (WW) Falling back to old probe method for fbdev [ 44708.578] (II) Loading sub module "fbdevhw" [ 44708.578] (II) LoadModule: "fbdevhw" [ 44708.578] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so [ 44708.578] (II) Module fbdevhw: vendor="X.Org Foundation" [ 44708.578] compiled for 1.15.1, module version = 0.0.2 [ 44708.578] ABI class: X.Org Video Driver, version 15.0 [ 44708.578] (WW) Falling back to old probe method for vesa [ 44708.578] (**) modesetting(0): Depth 16, (--) framebuffer bpp 16 [ 44708.578] (==) modesetting(0): RGB weight 565 [ 44708.578] (==) modesetting(0): Default visual is TrueColor [ 44708.578] (II) modesetting(0): ShadowFB: preferred YES, enabled YES [ 44708.608] (II) modesetting(0): Output VGA-0 using monitor section DisplayLinkMonitor [ 44708.610] (II) modesetting(0): Output DVI-0 has no monitor section [ 44708.640] (II) modesetting(0): EDID for output VGA-0 [ 44708.640] (II) modesetting(0): Manufacturer: ACR Model: 74 Serial#: 2483090993 [ 44708.640] (II) modesetting(0): Year: 2009 Week: 40 [ 44708.640] (II) modesetting(0): EDID Version: 1.3 [ 44708.640] (II) modesetting(0): Analog Display Input, Input Voltage Level: 0.700/0.700 V [ 44708.640] (II) modesetting(0): Sync: Separate [ 44708.640] (II) modesetting(0): Max Image Size [cm]: horiz.: 53 vert.: 29 [ 44708.640] (II) modesetting(0): Gamma: 2.20 [ 44708.640] (II) modesetting(0): DPMS capabilities: StandBy Suspend Off; RGB/Color Display [ 44708.641] (II) modesetting(0): First detailed timing is preferred mode [ 44708.641] (II) modesetting(0): redX: 0.649 redY: 0.338 greenX: 0.289 greenY: 0.609 [ 44708.641] (II) modesetting(0): blueX: 0.146 blueY: 0.070 whiteX: 0.313 whiteY: 0.329 [ 44708.641] (II) modesetting(0): Supported established timings: [ 44708.641] (II) modesetting(0): 720x400@70Hz [ 44708.641] (II) modesetting(0): 640x480@60Hz [ 44708.641] (II) modesetting(0): 640x480@72Hz [ 44708.641] (II) modesetting(0): 640x480@75Hz [ 44708.641] (II) modesetting(0): 800x600@56Hz [ 44708.641] (II) modesetting(0): 800x600@60Hz [ 44708.641] (II) modesetting(0): 800x600@72Hz [ 44708.641] (II) modesetting(0): 800x600@75Hz [ 44708.641] (II) modesetting(0): 1024x768@60Hz [ 44708.641] (II) modesetting(0): 1024x768@70Hz [ 44708.641] (II) modesetting(0): 1024x768@75Hz [ 44708.641] (II) modesetting(0): 1280x1024@75Hz [ 44708.641] (II) modesetting(0): Manufacturer's mask: 0 [ 44708.641] (II) modesetting(0): Supported standard timings: [ 44708.641] (II) modesetting(0): #0: hsize: 1280 vsize 1024 refresh: 60 vid: 32897 [ 44708.641] (II) modesetting(0): #1: hsize: 1152 vsize 864 refresh: 75 vid: 20337 [ 44708.641] (II) modesetting(0): #2: hsize: 1440 vsize 900 refresh: 60 vid: 149 [ 44708.641] (II) modesetting(0): #3: hsize: 1440 vsize 900 refresh: 75 vid: 3989 [ 44708.641] (II) modesetting(0): #4: hsize: 1600 vsize 1200 refresh: 60 vid: 16553 [ 44708.641] (II) modesetting(0): #5: hsize: 1680 vsize 1050 refresh: 60 vid: 179 [ 44708.641] (II) modesetting(0): Supported detailed timing: [ 44708.641] (II) modesetting(0): clock: 138.5 MHz Image Size: 531 x 298 mm [ 44708.641] (II) modesetting(0): h_active: 1920 h_sync: 1968 h_sync_end 2000 h_blank_end 2080 h_border: 0 [ 44708.641] (II) modesetting(0): v_active: 1080 v_sync: 1083 v_sync_end 1088 
v_blanking: 1111 v_border: 0 [ 44708.641] (II) modesetting(0): Monitor name: H243H [ 44708.641] (II) modesetting(0): Ranges: V min: 56 V max: 76 Hz, H min: 31 H max: 83 kHz, PixClock max 185 MHz [ 44708.641] (II) modesetting(0): Serial No: LEW0C0044002 [ 44708.641] (II) modesetting(0): EDID (in hex): [ 44708.641] (II) modesetting(0): 00ffffffffffff000472740031f60094 [ 44708.641] (II) modesetting(0): 2813010368351d78ea6085a6564a9c25 [ 44708.641] (II) modesetting(0): 125054afcf008180714f9500950fa940 [ 44708.641] (II) modesetting(0): b300010101011a3680a070381f403020 [ 44708.641] (II) modesetting(0): 3500132a2100001a000000fc00483234 [ 44708.642] (II) modesetting(0): 33480a20202020202020000000fd0038 [ 44708.642] (II) modesetting(0): 4c1f5312000a202020202020000000ff [ 44708.642] (II) modesetting(0): 004c45573043303034343030320a003c [ 44708.642] (II) modesetting(0): Printing probed modes for output VGA-0 [ 44708.642] (II) modesetting(0): Modeline "1280x1024"x75.0 135.00 1280 1296 1440 1688 1024 1025 1028 1066 +hsync +vsync (80.0 kHz UeP) [ 44708.642] (II) modesetting(0): Modeline "1920x1080"x59.9 138.50 1920 1968 2000 2080 1080 1083 1088 1111 +hsync -vsync (66.6 kHz eP) [ 44708.642] (II) modesetting(0): Modeline "1600x1200"x60.0 162.00 1600 1664 1856 2160 1200 1201 1204 1250 +hsync +vsync (75.0 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1680x1050"x60.0 146.25 1680 1784 1960 2240 1050 1053 1059 1089 -hsync +vsync (65.3 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1280x1024"x60.0 108.00 1280 1328 1440 1688 1024 1025 1028 1066 +hsync +vsync (64.0 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1440x900"x75.0 136.75 1440 1536 1688 1936 900 903 909 942 -hsync +vsync (70.6 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1440x900"x59.9 106.50 1440 1520 1672 1904 900 903 909 934 -hsync +vsync (55.9 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1152x864"x75.0 108.00 1152 1216 1344 1600 864 865 868 900 +hsync +vsync (67.5 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1024x768"x75.1 78.80 1024 1040 1136 1312 768 769 772 800 +hsync +vsync (60.1 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1024x768"x70.1 75.00 1024 1048 1184 1328 768 771 777 806 -hsync -vsync (56.5 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1024x768"x60.0 65.00 1024 1048 1184 1344 768 771 777 806 -hsync -vsync (48.4 kHz e) [ 44708.642] (II) modesetting(0): Modeline "800x600"x72.2 50.00 800 856 976 1040 600 637 643 666 +hsync +vsync (48.1 kHz e) [ 44708.642] (II) modesetting(0): Modeline "800x600"x75.0 49.50 800 816 896 1056 600 601 604 625 +hsync +vsync (46.9 kHz e) [ 44708.642] (II) modesetting(0): Modeline "800x600"x60.3 40.00 800 840 968 1056 600 601 605 628 +hsync +vsync (37.9 kHz e) [ 44708.642] (II) modesetting(0): Modeline "800x600"x56.2 36.00 800 824 896 1024 600 601 603 625 +hsync +vsync (35.2 kHz e) [ 44708.642] (II) modesetting(0): Modeline "640x480"x75.0 31.50 640 656 720 840 480 481 484 500 -hsync -vsync (37.5 kHz e) [ 44708.642] (II) modesetting(0): Modeline "640x480"x72.8 31.50 640 664 704 832 480 489 491 520 -hsync -vsync (37.9 kHz e) [ 44708.642] (II) modesetting(0): Modeline "640x480"x60.0 25.20 640 656 752 800 480 490 492 525 -hsync -vsync (31.5 kHz e) [ 44708.642] (II) modesetting(0): Modeline "720x400"x70.1 28.32 720 738 846 900 400 412 414 449 -hsync +vsync (31.5 kHz e) [ 44708.645] (II) modesetting(0): EDID for output DVI-0 [ 44708.645] (II) modesetting(0): Output VGA-0 connected [ 44708.645] (II) modesetting(0): Output DVI-0 disconnected [ 44708.645] (II) modesetting(0): 
Using user preference for initial modes [ 44708.645] (II) modesetting(0): Output VGA-0 using initial mode 1280x1024 [ 44708.645] (II) modesetting(0): Using default gamma of (1.0, 1.0, 1.0) unless otherwise stated. [ 44708.645] (==) modesetting(0): DPI set to (96, 96) [ 44708.645] (II) Loading sub module "fb" [ 44708.645] (II) LoadModule: "fb" [ 44708.645] (II) Loading /usr/lib/xorg/modules/libfb.so [ 44708.645] (II) Module fb: vendor="X.Org Foundation" [ 44708.645] compiled for 1.15.1, module version = 1.0.0 [ 44708.645] ABI class: X.Org ANSI C Emulation, version 0.4 [ 44708.645] (II) Loading sub module "shadow" [ 44708.645] (II) LoadModule: "shadow" [ 44708.646] (II) Loading /usr/lib/xorg/modules/libshadow.so [ 44708.646] (II) Module shadow: vendor="X.Org Foundation" [ 44708.646] compiled for 1.15.1, module version = 1.1.0 [ 44708.646] ABI class: X.Org ANSI C Emulation, version 0.4 [ 44708.646] (**) modesetting(G0): Depth 16, (--) framebuffer bpp 16 [ 44708.646] (==) modesetting(G0): RGB weight 565 [ 44708.646] (==) modesetting(G0): Default visual is TrueColor [ 44708.646] (II) modesetting(G0): ShadowFB: preferred NO, enabled NO [ 44708.727] (II) modesetting(G0): Output DVI-1-0 using monitor section DisplayLinkMonitor [ 44708.808] (II) modesetting(G0): EDID for output DVI-1-0 [ 44708.808] (II) modesetting(G0): Manufacturer: WDE Model: 1702 Serial#: 0 [ 44708.808] (II) modesetting(G0): Year: 2005 Week: 14 [ 44708.808] (II) modesetting(G0): EDID Version: 1.3 [ 44708.808] (II) modesetting(G0): Analog Display Input, Input Voltage Level: 0.700/0.700 V [ 44708.808] (II) modesetting(G0): Sync: Separate [ 44708.808] (II) modesetting(G0): Max Image Size [cm]: horiz.: 34 vert.: 27 [ 44708.808] (II) modesetting(G0): Gamma: 2.20 [ 44708.808] (II) modesetting(G0): DPMS capabilities: StandBy Suspend Off; RGB/Color Display [ 44708.808] (II) modesetting(G0): Default color space is primary color space [ 44708.808] (II) modesetting(G0): First detailed timing is preferred mode [ 44708.808] (II) modesetting(G0): GTF timings supported [ 44708.808] (II) modesetting(G0): redX: 0.643 redY: 0.352 greenX: 0.283 greenY: 0.608 [ 44708.808] (II) modesetting(G0): blueX: 0.147 blueY: 0.102 whiteX: 0.313 whiteY: 0.329 [ 44708.808] (II) modesetting(G0): Supported established timings: [ 44708.808] (II) modesetting(G0): 720x400@70Hz [ 44708.808] (II) modesetting(G0): 640x480@60Hz [ 44708.808] (II) modesetting(G0): 640x480@67Hz [ 44708.808] (II) modesetting(G0): 640x480@72Hz [ 44708.808] (II) modesetting(G0): 640x480@75Hz [ 44708.808] (II) modesetting(G0): 800x600@56Hz [ 44708.808] (II) modesetting(G0): 800x600@60Hz [ 44708.808] (II) modesetting(G0): 800x600@72Hz [ 44708.808] (II) modesetting(G0): 800x600@75Hz [ 44708.808] (II) modesetting(G0): 832x624@75Hz [ 44708.808] (II) modesetting(G0): 1024x768@60Hz [ 44708.808] (II) modesetting(G0): 1024x768@70Hz [ 44708.808] (II) modesetting(G0): 1024x768@75Hz [ 44708.809] (II) modesetting(G0): 1280x1024@75Hz [ 44708.809] (II) modesetting(G0): Manufacturer's mask: 0 [ 44708.809] (II) modesetting(G0): Supported standard timings: [ 44708.809] (II) modesetting(G0): #0: hsize: 1280 vsize 1024 refresh: 60 vid: 32897 [ 44708.809] (II) modesetting(G0): #1: hsize: 1152 vsize 864 refresh: 75 vid: 20337 [ 44708.809] (II) modesetting(G0): Supported detailed timing: [ 44708.809] (II) modesetting(G0): clock: 108.0 MHz Image Size: 338 x 270 mm [ 44708.809] (II) modesetting(G0): h_active: 1280 h_sync: 1328 h_sync_end 1440 h_blank_end 1688 h_border: 0 [ 44708.809] (II) modesetting(G0): v_active: 
1024 v_sync: 1025 v_sync_end 1028 v_blanking: 1066 v_border: 0 [ 44708.809] (II) modesetting(G0): Ranges: V min: 50 V max: 75 Hz, H min: 30 H max: 82 kHz, PixClock max 145 MHz [ 44708.809] (II) modesetting(G0): Monitor name: WDE LCM-17v2 [ 44708.809] (II) modesetting(G0): Serial No: 0 [ 44708.809] (II) modesetting(G0): EDID (in hex): [ 44708.809] (II) modesetting(G0): 00ffffffffffff005c85021700000000 [ 44708.809] (II) modesetting(G0): 0e0f010368221b78ef8bc5a45a489b25 [ 44708.809] (II) modesetting(G0): 1a5054bfef008180714f010101010101 [ 44708.809] (II) modesetting(G0): 010101010101302a009851002a403070 [ 44708.809] (II) modesetting(G0): 1300520e1100001e000000fd00324b1e [ 44708.809] (II) modesetting(G0): 520e000a202020202020000000fc0057 [ 44708.809] (II) modesetting(G0): 4445204c434d2d313776320a000000ff [ 44708.809] (II) modesetting(G0): 00300a202020202020202020202000e7 [ 44708.809] (II) modesetting(G0): Printing probed modes for output DVI-1-0 [ 44708.809] (II) modesetting(G0): Modeline "1280x1024"x60.0 108.00 1280 1328 1440 1688 1024 1025 1028 1066 +hsync +vsync (64.0 kHz UeP) [ 44708.809] (II) modesetting(G0): Modeline "1280x1024"x75.0 135.00 1280 1296 1440 1688 1024 1025 1028 1066 +hsync +vsync (80.0 kHz e) [ 44708.809] (II) modesetting(G0): Modeline "1280x960"x60.0 108.00 1280 1376 1488 1800 960 961 964 1000 +hsync +vsync (60.0 kHz e) [ 44708.809] (II) modesetting(G0): Modeline "1280x800"x74.9 106.50 1280 1360 1488 1696 800 803 809 838 -hsync +vsync (62.8 kHz e) [ 44708.809] (II) modesetting(G0): Modeline "1280x800"x59.8 83.50 1280 1352 1480 1680 800 803 809 831 +hsync -vsync (49.7 kHz e) [ 44708.809] (II) modesetting(G0): Modeline "1152x864"x75.0 108.00 1152 1216 1344 1600 864 865 868 900 +hsync +vsync (67.5 kHz e) [ 44708.809] (II) modesetting(G0): Modeline "1280x768"x74.9 102.25 1280 1360 1488 1696 768 771 778 805 +hsync -vsync (60.3 kHz e) [ 44708.809] (II) modesetting(G0): Modeline "1280x768"x59.9 79.50 1280 1344 1472 1664 768 771 778 798 -hsync +vsync (47.8 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "1024x768"x75.1 78.80 1024 1040 1136 1312 768 769 772 800 +hsync +vsync (60.1 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "1024x768"x70.1 75.00 1024 1048 1184 1328 768 771 777 806 -hsync -vsync (56.5 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "1024x768"x60.0 65.00 1024 1048 1184 1344 768 771 777 806 -hsync -vsync (48.4 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "1024x576"x60.0 46.97 1024 1064 1168 1312 576 577 580 597 -hsync +vsync (35.8 kHz) [ 44708.810] (II) modesetting(G0): Modeline "832x624"x74.6 57.28 832 864 928 1152 624 625 628 667 -hsync -vsync (49.7 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "800x600"x72.2 50.00 800 856 976 1040 600 637 643 666 +hsync +vsync (48.1 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "800x600"x75.0 49.50 800 816 896 1056 600 601 604 625 +hsync +vsync (46.9 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "800x600"x60.3 40.00 800 840 968 1056 600 601 605 628 +hsync +vsync (37.9 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "800x600"x56.2 36.00 800 824 896 1024 600 601 603 625 +hsync +vsync (35.2 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "848x480"x60.0 33.75 848 864 976 1088 480 486 494 517 +hsync +vsync (31.0 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "640x480"x75.0 31.50 640 656 720 840 480 481 484 500 -hsync -vsync (37.5 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "640x480"x72.8 31.50 640 664 704 832 480 489 491 520 -hsync -vsync (37.9 kHz e) [ 44708.810] (II) modesetting(G0): 
Modeline "640x480"x66.7 30.24 640 704 768 864 480 483 486 525 -hsync -vsync (35.0 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "640x480"x60.0 25.20 640 656 752 800 480 490 492 525 -hsync -vsync (31.5 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "720x400"x70.1 28.32 720 738 846 900 400 412 414 449 -hsync +vsync (31.5 kHz e) [ 44708.810] (II) modesetting(G0): Using default gamma of (1.0, 1.0, 1.0) unless otherwise stated. [ 44708.810] (==) modesetting(G0): DPI set to (96, 96) [ 44708.810] (II) Loading sub module "fb" [ 44708.810] (II) LoadModule: "fb" [ 44708.810] (II) Loading /usr/lib/xorg/modules/libfb.so [ 44708.810] (II) Module fb: vendor="X.Org Foundation" [ 44708.810] compiled for 1.15.1, module version = 1.0.0 [ 44708.811] ABI class: X.Org ANSI C Emulation, version 0.4 [ 44708.811] (II) UnloadModule: "radeon" [ 44708.811] (II) Unloading radeon [ 44708.811] (II) UnloadModule: "fbdev" [ 44708.811] (II) Unloading fbdev [ 44708.811] (II) UnloadSubModule: "fbdevhw" [ 44708.811] (II) Unloading fbdevhw [ 44708.811] (II) UnloadModule: "vesa" [ 44708.811] (II) Unloading vesa [ 44708.811] (==) modesetting(G0): Backing store enabled [ 44708.811] (==) modesetting(G0): Silken mouse enabled [ 44708.812] (II) modesetting(G0): RandR 1.2 enabled, ignore the following RandR disabled message. [ 44708.812] (==) modesetting(G0): DPMS enabled [ 44708.812] (WW) modesetting(G0): Option "fbdev" is not used [ 44708.812] (==) modesetting(0): Backing store enabled [ 44708.812] (==) modesetting(0): Silken mouse enabled [ 44708.812] (II) modesetting(0): RandR 1.2 enabled, ignore the following RandR disabled message. [ 44708.812] (==) modesetting(0): DPMS enabled [ 44708.812] (WW) modesetting(0): Option "fbdev" is not used [ 44708.856] (--) RandR disabled [ 44708.867] (II) SELinux: Disabled on system [ 44708.868] (II) AIGLX: Screen 0 is not DRI2 capable [ 44708.868] (EE) AIGLX: reverting to software rendering [ 44708.878] (II) AIGLX: Loaded and initialized swrast [ 44708.878] (II) GLX: Initialized DRISWRAST GL provider for screen 0 [ 44708.879] (II) modesetting(G0): Damage tracking initialized [ 44708.879] (II) modesetting(0): Damage tracking initialized [ 44708.879] (II) modesetting(0): Setting screen physical size to 338 x 270 [ 44708.900] (II) XKB: generating xkmfile /tmp/server-B20D7FC79C7F597315E3E501AEF10E0D866E8E92.xkm [ 44708.918] (II) config/udev: Adding input device Power Button (/dev/input/event1) [ 44708.918] (**) Power Button: Applying InputClass "evdev keyboard catchall" [ 44708.918] (II) LoadModule: "evdev" [ 44708.918] (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so [ 44708.918] (II) Module evdev: vendor="X.Org Foundation" [ 44708.918] compiled for 1.15.0, module version = 2.8.2 [ 44708.918] Module class: X.Org XInput Driver [ 44708.918] ABI class: X.Org XInput driver, version 20.0 [ 44708.918] (II) Using input driver 'evdev' for 'Power Button' [ 44708.918] (**) Power Button: always reports core events [ 44708.918] (**) evdev: Power Button: Device: "/dev/input/event1" [ 44708.918] (--) evdev: Power Button: Vendor 0 Product 0x1 [ 44708.918] (--) evdev: Power Button: Found keys [ 44708.918] (II) evdev: Power Button: Configuring as keyboard [ 44708.918] (**) Option "config_info" "udev:/sys/devices/LNXSYSTM:00/LNXPWRBN:00/input/input1/event1" [ 44708.918] (II) XINPUT: Adding extended input device "Power Button" (type: KEYBOARD, id 6) [ 44708.918] (**) Option "xkb_rules" "evdev" [ 44708.918] (**) Option "xkb_model" "pc105" [ 44708.918] (**) Option "xkb_layout" "us" [ 44708.919] (II) 
config/udev: Adding input device Power Button (/dev/input/event0) [ 44708.919] (**) Power Button: Applying InputClass "evdev keyboard catchall" [ 44708.919] (II) Using input driver 'evdev' for 'Power Button' [ 44708.919] (**) Power Button: always reports core events [ 44708.919] (**) evdev: Power Button: Device: "/dev/input/event0" [ 44708.919] (--) evdev: Power Button: Vendor 0 Product 0x1 [ 44708.919] (--) evdev: Power Button: Found keys [ 44708.919] (II) evdev: Power Button: Configuring as keyboard [ 44708.919] (**) Option "config_info" "udev:/sys/devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0/event0" Is there anything I can do to fix this problem.

    Read the article

  • VLC 2.0.3 on Lubuntu 12.04: No audio?

    - by drezabek
    I am on Lubuntu 12.04, and I have installed VLC media player version 2.0.3. When I try and play an audio file, it appears to load fine, and the media position bar displays the progress, and it says it is playing, but I can't here any thing through my speakers. I can hear game audio, web audio, and audio from SMPlayer just fine, but with VLC, I can't here anything. Below is the "Messages" output with the verbosity option set to "2 (debug)" main debug: processing request item: The Bottom, node: Playlist, skip: 0 main debug: resyncing on The Bottom main debug: The Bottom is at 0 main debug: starting playback of the new playlist item main debug: resyncing on The Bottom main debug: The Bottom is at 0 main debug: creating new input thread main debug: Creating an input for 'The Bottom' main debug: TIMER input launching for 'Floex - Machinarium Soundtrack - 01 The Bottom.flac' : 23.706 ms - Total 23.706 ms / 1 intvls (Avg 23.706 ms) main debug: using timeshift granularity of 50 MiB, in path '/tmp' main debug: `file:///home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' gives access `file' demux `' path `/home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' main debug: creating demux: access='file' demux='' location='/home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' file='/home/doug/Music/unsorted/Floex - Machinarium Soundtrack/Floex - Machinarium Soundtrack - 01 The Bottom.flac' main debug: looking for access_demux module: 3 candidates main debug: no access_demux module matching "file" could be loaded main debug: TIMER module_need() : 2.332 ms - Total 2.332 ms / 1 intvls (Avg 2.332 ms) main debug: creating access 'file' location='/home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac', path='/home/doug/Music/unsorted/Floex - Machinarium Soundtrack/Floex - Machinarium Soundtrack - 01 The Bottom.flac' main debug: looking for access module: 2 candidates filesystem debug: opening file `/home/doug/Music/unsorted/Floex - Machinarium Soundtrack/Floex - Machinarium Soundtrack - 01 The Bottom.flac' main debug: using access module "filesystem" main debug: TIMER module_need() : 0.762 ms - Total 0.762 ms / 1 intvls (Avg 0.762 ms) main debug: Using stream method for AStream* main debug: starting pre-buffering main debug: received first data after 0 ms main debug: pre-buffering done 1024 bytes in 0s - 43478 KiB/s main debug: looking for stream_filter module: 7 candidates main debug: no stream_filter module matching "any" could be loaded main debug: TIMER module_need() : 0.236 ms - Total 0.236 ms / 1 intvls (Avg 0.236 ms) main debug: looking for stream_filter module: 1 candidate main debug: using stream_filter module "stream_filter_record" main debug: TIMER module_need() : 0.156 ms - Total 0.156 ms / 1 intvls (Avg 0.156 ms) main debug: creating demux: access='file' demux='' location='/home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' file='/home/doug/Music/unsorted/Floex - Machinarium Soundtrack/Floex - Machinarium Soundtrack - 01 The Bottom.flac' main debug: looking for demux module: 54 candidates flacsys debug: Picture type=3 mime=image/png description='' file length=679371 qt4 debug: IM: Setting an input main debug: looking 
for packetizer module: 21 candidates main debug: using packetizer module "packetizer_flac" main debug: TIMER module_need() : 0.211 ms - Total 0.211 ms / 1 intvls (Avg 0.211 ms) main debug: using demux module "flacsys" main debug: TIMER module_need() : 4.023 ms - Total 4.023 ms / 1 intvls (Avg 4.023 ms) main debug: looking for a subtitle file in /home/doug/Music/unsorted/Floex - Machinarium Soundtrack/ main debug: looking for meta reader module: 2 candidates main debug: using meta reader module "taglib" main debug: TIMER module_need() : 5.245 ms - Total 5.245 ms / 1 intvls (Avg 5.245 ms) main debug: removing module "taglib" main debug: `file:///home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' successfully opened main debug: selecting program id=0 main debug: looking for decoder module: 30 candidates main debug: using decoder module "flac" main debug: TIMER module_need() : 0.442 ms - Total 0.442 ms / 1 intvls (Avg 0.442 ms) main debug: Buffering 0% flac debug: decode STREAMINFO flac debug: channels:2 samplerate:44100 bitspersamples:16 flac debug: STREAMINFO decoded main debug: Buffering 30% main debug: recycling audio output main debug: looking for audio output module: 3 candidates main debug: Buffering 61% pulse debug: using stereo channel map pulse debug: using library version 1.1.0 pulse debug: (compiled with version 1.1.0, protocol 26) main debug: Buffering 92% main debug: Stream buffering done (371 ms in 2 ms) pulse debug: connected locally to unix:/home/doug/.pulse/dce22254e867f905188a2ce200000003-runtime/native as client #14 pulse debug: using protocol 26, server protocol 26 pulse debug: using buffer metrics: maxlength=4194304, tlength=9880, prebuf=0, minreq=3528 pulse debug: connected to sink 0: alsa_output.pci-0000_00_14.2.analog-stereo main debug: using audio output module "pulse" main debug: TIMER module_need() : 4.571 ms - Total 4.571 ms / 1 intvls (Avg 4.571 ms) main debug: output 's16l' 44100 Hz Stereo frame=1 samples/4 bytes main debug: mixer 'f32l' 44100 Hz Stereo frame=1 samples/8 bytes main debug: filter(s) 'f32l'->'s16l' 44100 Hz->44100 Hz Stereo->Stereo main debug: looking for audio filter module: 14 candidates audio_format debug: f32l->s16l, bits per sample: 32->16 main debug: using audio filter module "audio_format" main debug: TIMER module_need() : 0.187 ms - Total 0.187 ms / 1 intvls (Avg 0.187 ms) main debug: conversion pipeline completed main debug: looking for audio mixer module: 2 candidates main debug: using audio mixer module "float32_mixer" main debug: TIMER module_need() : 0.125 ms - Total 0.125 ms / 1 intvls (Avg 0.125 ms) main debug: input 's16l' 44100 Hz Stereo frame=1 samples/4 bytes main debug: looking for audio filter module: 1 candidate scaletempo debug: format: 44100 rate, 2 nch, 4 bps, fl32 scaletempo debug: params: 30 stride, 0.200 overlap, 14 search scaletempo debug: 1.000 scale, 1323.000 stride_in, 1323 stride_out, 1059 standing, 264 overlap, 617 search, 2204 queue, fl32 mode main debug: using audio filter module "scaletempo" main debug: TIMER module_need() : 0.233 ms - Total 0.233 ms / 1 intvls (Avg 0.233 ms) main debug: filter(s) 's16l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo pulse debug: listing sink alsa_output.pci-0000_00_14.2.analog-stereo (0): Built-in Audio Analog Stereo main debug: looking for audio filter module: 14 candidates audio_format debug: s16l->f32l, bits per sample: 16->32 main debug: using audio filter module "audio_format" main debug: TIMER 
module_need() : 0.147 ms - Total 0.147 ms / 1 intvls (Avg 0.147 ms) main debug: conversion pipeline completed pulse debug: base volume: 65536 main debug: looking for audio filter module: 1 candidate equalizer debug: equalizer loaded for 44100 Hz with 10 bands 2 pass equalizer debug: 60 Hz -> factor:0.000000 alpha:0.003013 beta:0.993973 gamma:1.993901 equalizer debug: 170 Hz -> factor:0.000000 alpha:0.008490 beta:0.983019 gamma:1.982437 equalizer debug: 310 Hz -> factor:0.000000 alpha:0.015374 beta:0.969252 gamma:1.967331 equalizer debug: 600 Hz -> factor:0.000000 alpha:0.029328 beta:0.941343 gamma:1.934254 equalizer debug: 1000 Hz -> factor:0.000000 alpha:0.047918 beta:0.904163 gamma:1.884869 equalizer debug: 3000 Hz -> factor:0.000000 alpha:0.130408 beta:0.739184 gamma:1.582718 equalizer debug: 6000 Hz -> factor:0.000000 alpha:0.226555 beta:0.546889 gamma:1.015267 equalizer debug: 12000 Hz -> factor:0.000000 alpha:0.344937 beta:0.310127 gamma:-0.181410 equalizer debug: 14000 Hz -> factor:0.000000 alpha:0.366438 beta:0.267123 gamma:-0.521151 equalizer debug: 16000 Hz -> factor:0.000000 alpha:0.379009 beta:0.241981 gamma:-0.808451 main debug: using audio filter module "equalizer" main debug: TIMER module_need() : 0.353 ms - Total 0.353 ms / 1 intvls (Avg 0.353 ms) main debug: filter(s) 'f32l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo main debug: conversion pipeline completed main debug: looking for visualization2 module: 1 candidate main debug: looking for text renderer module: 2 candidates freetype debug: Building font databases. freetype debug: Took 0 microseconds freetype debug: Using Serif Bold as font from file /usr/share/fonts/truetype/ttf-dejavu/DejaVuSans.ttf freetype debug: using fontsize: 2 main debug: using text renderer module "freetype" main debug: TIMER module_need() : 3.278 ms - Total 3.278 ms / 1 intvls (Avg 3.278 ms) main debug: looking for video filter2 module: 18 candidates swscale debug: 32x32 chroma: YUVA -> 16x16 chroma: RGBA with scaling using Bicubic (good quality) main debug: using video filter2 module "swscale" main debug: TIMER module_need() : 1.037 ms - Total 1.037 ms / 1 intvls (Avg 1.037 ms) main debug: looking for video filter2 module: 18 candidates yuvp debug: YUVP to YUVA converter main debug: using video filter2 module "yuvp" main debug: TIMER module_need() : 0.156 ms - Total 0.156 ms / 1 intvls (Avg 0.156 ms) main debug: Deinterlacing available main debug: deinterlace 0, mode blend, is_needed 0 main debug: Opening vout display wrapper main debug: looking for vout display module: 6 candidates main debug: looking for vout window xid module: 4 candidates qt4 debug: requesting video... 
qt4 debug: Video was requested 0, 0 main debug: using vout window xid module "qt4" main debug: TIMER module_need() : 61.671 ms - Total 61.671 ms / 1 intvls (Avg 61.671 ms) main debug: looking for inhibit module: 2 candidates main debug: using inhibit module "xdg_screensaver" main debug: TIMER module_need() : 0.336 ms - Total 0.336 ms / 1 intvls (Avg 0.336 ms) xdg_screensaver debug: started xdg-screensaver (PID = 6682) xcb_xv debug: connected to X11.0 server xcb_xv debug: vendor : The X.Org Foundation xcb_xv debug: version: 11103000 xcb_xv debug: using screen 0x15a xcb_xv debug: using XVideo extension v2.2 xcb_xv debug: using adaptor NV17 Video Texture xcb_xv debug: using port 310 xcb_xv debug: using image format 0x30323449 xcb_xv debug: using X11 visual ID 0x21 (depth: 24) xcb_xv debug: using X11 window 0x03400000 xcb_xv debug: using X11 graphic context 0x03400002 main debug: VoutDisplayEvent 'fullscreen' 0 main debug: VoutDisplayEvent 'resize' 800x500 window main debug: using vout display module "xcb_xv" main debug: TIMER module_need() : 69.890 ms - Total 69.890 ms / 1 intvls (Avg 69.890 ms) main debug: original format sz 800x500, of (0,0), vsz 800x500, 4cc I420, sar 1:1, msk r0x0 g0x0 b0x0 main debug: removing module "freetype" main debug: looking for text renderer module: 2 candidates freetype debug: Building font databases. freetype debug: Took 0 microseconds freetype debug: Using Serif Bold as font from file /usr/share/fonts/truetype/ttf-dejavu/DejaVuSans.ttf freetype debug: using fontsize: 2 main debug: using text renderer module "freetype" main debug: TIMER module_need() : 4.552 ms - Total 4.552 ms / 1 intvls (Avg 4.552 ms) main debug: using visualization2 module "visual" main debug: TIMER module_need() : 84.104 ms - Total 84.104 ms / 1 intvls (Avg 84.104 ms) main debug: filter(s) 'f32l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo main debug: conversion pipeline completed main debug: filter(s) 'f32l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo main debug: conversion pipeline completed main debug: filter(s) 'f32l'->'f32l' 48510 Hz->44100 Hz Stereo->Stereo main debug: looking for audio filter module: 14 candidates main debug: using audio filter module "samplerate" main debug: TIMER module_need() : 0.375 ms - Total 0.375 ms / 1 intvls (Avg 0.375 ms) main debug: conversion pipeline completed main debug: End of audio preroll main debug: Decoder buffering done in 91 ms main warning: PTS is out of range (-9269), dropping buffer pulse debug: deferring start (190703 us) main debug: looking for video blending module: 1 candidate main debug: using video blending module "blend" main debug: TIMER module_need() : 0.275 ms - Total 0.275 ms / 1 intvls (Avg 0.275 ms) main debug: Detected interlaced video main debug: deinterlace 0, mode blend, is_needed 1 xcb_xv debug: display is visible pulse debug: starting deferred pulse warning: too late by 93760 us pulse debug: changed sample rate to 44186 Hz pulse debug: started pulse warning: too late by 94474 us pulse debug: changed sample rate to 44229 Hz pulse warning: too late by 93532 us pulse debug: changed sample rate to 44272 Hz pulse warning: too late by 92829 us pulse debug: changed sample rate to 44315 Hz pulse warning: too late by 92132 us pulse debug: changed sample rate to 44358 Hz xcb_xv debug: display is visible pulse warning: too late by 91534 us pulse debug: changed sample rate to 44401 Hz xcb_xv debug: display is visible pulse warning: too late by 89482 us pulse debug: changed sample rate to 44440 Hz xcb_xv debug: display is visible xcb_xv 
debug: display is visible pulse warning: too late by 87529 us pulse debug: changed sample rate to 44479 Hz pulse warning: too late by 84577 us pulse debug: changed sample rate to 44504 Hz main debug: auto hiding mouse cursor pulse warning: too late by 78562 us pulse debug: changed sample rate to 44492 Hz pulse warning: too late by 68015 us pulse debug: changed sample rate to 44422 Hz xcb_xv debug: display is visible xcb_xv debug: display is visible xcb_xv debug: display is visible xcb_xv debug: display is visible main debug: auto hiding mouse cursor pulse debug: changed sample rate to 44336 Hz xcb_xv debug: display is visible xcb_xv debug: display is visible xcb_xv debug: display is visible main debug: auto hiding mouse cursor I have had issues with VLC in the past- the audio quality was extremely crackly, as if the headphone jack was plugged in only half way, and the sounds were extremely sharp and caused my speakers to make a ringing/vibrating noise... It would eventually start working after I messed around with the audio settings, but it happened every restart. I eventually switched to SMPlayer, but now I need some of the features that VLC offers, but I still can't use VLC. At this point, the audio can not be heard at all, and the method I used before, messing around with the audio settings, isn't getting me anywhere. (note, I reposted this on VideoLan's forums, link is here: http://forum.videolan.org/viewtopic.php?f=13&t=104726) Please let me know if you need more information, or are confused by something I posted! Thanks!

    Read the article

  • Windows 8 Will be Here Tomorrow; but Should Silverlight be Gone Today?

    - by andrewbrust
    The software industry lives within an interesting paradox. IT in the enterprise moves slowly and cautiously, upgrading only when safe and necessary. IT interests intentionally live in the past. On the other hand, developers and Independent Software Vendors (ISVs) not only want to use the latest and greatest technologies, but this constituency prides itself on gauging tech’s future and basing its present-day strategy upon it. Normally, we as an industry manage this paradox with a shrug of the shoulder and musings along the lines of “it takes all kinds.” Different subcultures have different tendencies. So be it.

Microsoft, with its Windows operating system (OS), can’t take such a laissez-faire view of the world though. Redmond relies on IT to deploy Windows and (at the very least) influence its procurement, but it also relies on developers to build software for Windows, especially software that has a dependency on features in new versions of the OS. It must indulge and nourish developers’ fetish for an early birthing of the next generation of software, even as it acknowledges the IT reality that the next wave will arrive on-schedule in Redmond and will travel very slowly to end users.

With the move to Windows 8, and the corresponding shift in application development models, this paradox is certainly in place. On the one hand, the next version of Windows is widely expected sometime in 2012, and its full-scale deployment will likely push into 2014 or even later. Meanwhile, there’s a technology that runs on today’s Windows 7, will continue to run in the desktop mode of Windows 8 (the next version’s codename), and provides absolutely the best architectural bridge to the Windows 8 Metro-style application development stack. That technology is Silverlight. And given what we now know about Windows 8, one might think, as I do, that Microsoft ecosystem developers should be flocking to it.

But because developers are trying to get a jump on the future, and since many of them believe the impending v5.0 release of Silverlight will be the technology’s last, not everyone is flocking to it; in fact some are fleeing from it. Is this sensible? Is it not unprecedented? What options does it lead to? What’s the right way to think about the situation?

Is v5.0 really the last major version of the technology called Silverlight? We don’t know. But Scott Guthrie, the “father” and champion of the technology, left the Developer Division of Microsoft months ago to work on the Windows Azure team, and he took his people with him. John Papa, who was a very influential Redmond-based evangelist for Silverlight (and is a Visual Studio Magazine author), left Microsoft completely. About a year ago, when initial suspicion of Silverlight’s demise reached significant magnitude, Papa interviewed Guthrie on video and their discussion served to dispel developers’ fears; but now they’ve moved on.

So read into that what you will, and let’s suppose, for the sake of argument, that the speculation that Silverlight’s days of major revision and iteration are over is correct. Let’s assume the shine and glimmer have dimmed. Let’s assume that any Silverlight application written today, and therefore any investment of financial and human resources made in Silverlight development today, is destined for rework and extra investment in a few years if the application’s platform needs to stay current. Is this really so different from any technology investment we make?
    Every framework, language, runtime and operating system is subject to change, to improvement, to flux and, yes, to obsolescence. What differs from project to project is how near-term that obsolescence is and how disruptive the change will be. The shift from .NET 1.1 to 2.0 was incremental. Some of the further changes were too. But the switch from Windows Forms to WPF was major, and the change from ASP.NET Web Services (ASMX) to Windows Communication Foundation (WCF) was downright fundamental.

    Meanwhile, the transition to the .NET development model for Windows 8 Metro-style applications is actually quite gentle. The finer points of this subject are covered nicely in Magenic's excellent white paper "Assessing the Windows 8 Development Platform." As the authors of that paper (including Rocky Lhotka) point out, Silverlight code won't just "port" to Windows 8. And, no, Silverlight user interfaces won't either; Metro also supports XAML, but that relationship is not commutative. But the concepts, the syntax, the architecture and developers' skills map from Silverlight to Windows 8 Metro and the Windows Runtime (WinRT) very nicely (a small sketch of that mapping appears at the end of this excerpt). That's not a coincidence. It's not an accident. This is a protected transition. It's not a slap in the face.

    There are a few things that are unnerving about this transition, which make it seem markedly different from others:

    1. The assumed end of the road for Silverlight is something many think they can see. Instead of being ignorant of the technology's expiration date, we believe we know it. If ignorance is bliss, it would seem our situation lacks it.
    2. The new technology involving WinRT and Metro involves a name change from Silverlight.
    3. .NET, which underlies both Silverlight and the XAML approach to WinRT development, has just about reached 10 years of age. That's equivalent to 80 in human years, or so many fear.

    My take is that the combination of these three factors has contributed to what for many is a psychologically compelling case that Silverlight should be abandoned today and HTML 5 (the agnostic kind, not the Windows RT variety) should be embraced in its stead. I understand the logic behind that. I appreciate the preemptive, proactive, vigilant conscientiousness involved in its calculus. But for a great many scenarios, I don't agree with it.

    HTML 5 clients, no matter how impressive their interactivity and their emulation of native application interfaces may be, are still second-class clients. They are getting better, especially when hardware acceleration and fast processors are involved. But they still lag. They still feel like they're emulating something, like they're prototypes, like they're not comfortable in their own skins. They are based on compromise, and they feel compromised too.

    HTML 5/JavaScript development tools are getting better, and will get better still, but they are not as productive as tools for other environments, like Flash, like Silverlight, or even the more primitive tooling for iOS or Android. HTML's roots as a document markup language, rather than an application interface, create a disconnect that impedes productivity. I do not necessarily think that problem is insurmountable, but it's here today.

    If you're building line-of-business applications, you need a first-class client and you need productivity. Lack of productivity increases your costs and worsens your backlog. A second-class client will erode user satisfaction, which is never good.
    Worse yet, this erosion will be inconspicuous, rather than easily identified and diagnosed, because the inferiority of an HTML 5 client to a native one is hard to identify and, notably, doing so at this juncture in the industry is unpopular. Why would you fault a technology that everyone believes is revolutionary? Instead, user disenchantment will remain latent, and yet it will add to the malaise caused by slower development.

    If you're an ISV and you're coveting the reach of running multi-platform, it's a different story. You've likely wanted to move to HTML 5 already, and the uncertainty around Silverlight may be the only remaining momentum or pretext you need to make the shift. You're deploying many more copies of your application than a line-of-business developer is anyway; this makes the economic hit from lower productivity less impactful, and the wider potential installed base might even make it profitable.

    But no matter who you are, it's important to take stock of the situation, and to do it accurately. Continued but merely incremental changes in a development model lead to conservatism and a general lack of innovation in the underlying platform. Periods of stability and equilibrium are necessary, but permanence in that equilibrium leads to loss of platform relevance, market share and utility. Arguably, that's already happened to Windows. The change Windows 8 brings is necessary and overdue. The marked changes in how we use .NET, if we're to build applications for the new OS, are inevitable. We will ultimately benefit from the change, and what we can reasonably hope for in the interim is a migration path for our code and skills that is navigable, logical and conceptually comfortable.

    That path takes us to a place called WinRT, rather than a place called Silverlight. But considering everything that is changing for the good, the number of disruptive changes is impressively minimal. The name may be changing, and there may even be some significance to that in terms of Microsoft's internal management of products and technologies. But as the consumer, you should care about the ingredients, not the name.

    Turkish coffee and Greek coffee are much the same. Although you'll find plenty of interested parties who find the names significant, drinkers of the beverage should enjoy either one. It's all coffee, it's all sweet, and you can tell your fortune from the grounds left at the end. Back on the software side, it's all XAML, and C# or VB .NET, and you can make your fortune from the product that comes out at the end. Coffee drinkers wouldn't switch to tea. Why should XAML developers switch to HTML?
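
    To make the Silverlight-to-WinRT mapping concrete, here is a minimal, hypothetical sketch (it is not taken from the article or the Magenic paper): the same code-behind pattern written first against Silverlight's System.Windows namespaces and then against WinRT's Windows.UI.Xaml namespaces. The MainPage class, the ResultText TextBlock and the GoButton_Click handler are invented names, and each version assumes a matching XAML file in its own project that declares those elements; the Silverlight variant is left as comments so the snippet stays a single compilable file.

        // --- Silverlight 5 code-behind (its own project, shown for comparison) ---
        // using System.Windows;                  // RoutedEventArgs
        // using System.Windows.Controls;         // UserControl, Button, TextBlock
        //
        // public partial class MainPage : UserControl
        // {
        //     public MainPage() { InitializeComponent(); }
        //
        //     private void GoButton_Click(object sender, RoutedEventArgs e)
        //     {
        //         ResultText.Text = "Clicked";    // handler body is identical below
        //     }
        // }

        // --- WinRT / Metro-style code-behind (its own project) ---
        using Windows.UI.Xaml;                    // RoutedEventArgs
        using Windows.UI.Xaml.Controls;           // Page, Button, TextBlock

        public sealed partial class MainPage : Page
        {
            public MainPage() { InitializeComponent(); }

            private void GoButton_Click(object sender, RoutedEventArgs e)
            {
                ResultText.Text = "Clicked";      // same body, different namespace
            }
        }

    Under those assumptions, only the using directives and the base class (UserControl versus Page) change; the XAML vocabulary, the event-handler signature and the overall structure carry over, which is what makes this a protected transition rather than a rewrite.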

    Read the article
