Search Results

Search found 792 results on 32 pages for 'sparc t4'.


  • CodePlex Daily Summary for Wednesday, March 09, 2011

    Popular Releases:

    - DirectQ: Release 1.8.7 (RC2): More fixes and improvements. Note for multiplayer: you may need to set r_waterwarp to 0 or 2 before connecting to a server, otherwise you will get a "Mod_PointInLeaf: bad model" error and not be able to connect. You can set it back to 1 after you connect, of course. This only came to light after releasing, and will be fixed in the next one.
    - Microsoft All-In-One Code Framework: Visual Studio 2008 Code Samples 2011-03-09: Code samples for Visual Studio 2008.
    - myCollections: Version 1.3: New in version 1.3: added Editor management for Books; added Amazon API for Books (US, FR, DE); added Amazon US, FR, DE for Movies; added The MovieDB for FR and DE; added Author for Books; added Editor and Platform for Games; added Amazon US, DE for Games; added Studio and Background for XXX; bug fixes for the Softonic API and IMDB; UI improvements; removed GraceNote; added Amazon US, FR, DE for Series; added TVDB FR and DE for Series; added Tracks for Musi...
    - Facebook Graph Toolkit: Facebook Graph Toolkit 1.1: Version 1.1 (8 Mar 2011): new Dialog class for redirecting users to Facebook dialogs; new async publishing methods; new Check for Extended Permissions option; fixed bug: inappropriate condition for redirecting to login in the Api class; fixed bug: IframeRedirect method not working.
    - patterns & practices: Composite Services: Composite Services Guidance - CTP2: This is the second CTP of the p&p Composite Service Guidance.
    - Python Tools for Visual Studio: 1.0 Beta 1: Known issues in Beta 1: you can't install IronPython Tools for Visual Studio side-by-side with Python Tools for Visual Studio; a race condition sometimes causes local MPI debugging to miss breakpoints; when MPI jobs on a cluster fail they don't get cleaned up correctly, which can cause debugging to stall because the associated MPI job is stuck in the queue; the "Threads" view has a race condition which can cause it not to display properly at times; VS2010 shortcuts that are pinned to the taskbar are so...
    - DotNetAge - a lightweight Mvc jQuery CMS: DotNetAge 2: What is new in DotNetAge 2.0? Completely updated DJME to DJME2 for an enhanced, more beautiful and more interactive user experience (visit the DJME project home to learn more: http://www.dotnetage.com/sites/home/djme.html). A new widget engine has arrived, faster and easier. Runtime performance enhanced. SEO enhanced. UI Designer enhanced. A new web resources explorer. Page manager enhanced. BlogML support added that allows you to import/export your blog data to/from dotnetage publishi...
    - Kooboo CMS: Kooboo CMS 3.0 Beta: Files in this download: kooboo_CMS.zip - the Kooboo application files; Content_DBProvider.zip - additional content database implementations for MSSQL, SQLCE, RavenDB and MongoDB (the default is an XML based database; to use them, copy the related dlls into the web root bin folder and remove the old content provider dlls; a content provider has a name like "Kooboo.CMS.Content.Persistence.SQLServer.dll"); View_Engines.zip - support for the Razor, webform and NVelocity view engines (copy the dlls into the web root bin folder t...)
    - ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7.2: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager; added fullscreen for the popup and popupform.
    - IronPython: 2.7 Release Candidate 2: On behalf of the IronPython team, I am pleased to announce IronPython 2.7 Release Candidate 2. The release contains a few minor bug fixes, including a working webbrowser module. Please see the release notes for 61395 for what was fixed in previous releases.
    - LINQ to Twitter: LINQ to Twitter Beta v2.0.20: Mono 2.8, Silverlight, OAuth, 100% Twitter API coverage, streaming, extensibility via Raw Queries, and added documentation.
    - IIS Tuner: IIS Tuner 1.0: IIS and ASP.NET performance optimization tool.
    - Minemapper: Minemapper v0.1.6: Once again supports biomes, thanks to an updated Minecraft Biome Extractor, which added support for the new Minecraft beta v1.3 map format. Updated mcmap to support the new biome format.
    - Sandcastle Help File Builder: SHFB v1.9.3.0 Release: This release supports the Sandcastle June 2010 Release (v2.6.10621.1). It includes full support for generating, installing, and removing MS Help Viewer files. This new release is compiled under .NET 4.0, supports Visual Studio 2010 solutions and projects as documentation sources, and adds support for projects targeting the Silverlight Framework. This release uses the Sandcastle Guided Installation package used by Sandcastle Styles. Download and extract to a folder and then run SandcastleI...
    - AutoLoL: AutoLoL v1.6.4: It is now possible to run the clicker anyway when it can't detect the Masteries Window; fixed a critical bug in the open file dialog; removed the resize button; some UI changes; 3D camera movement is now more intuitive (trackball rotation); when an error occurs on the clicker it will attempt to focus AutoLoL.
    - YAF.NET (aka Yet Another Forum.NET): v1.9.5.5 RTW: YAF v1.9.5.5 RTM (Date: 3/4/2011, Rev: 4742). Official discussion thread: http://forum.yetanotherforum.net/yaf_postsm47149_v1-9-5-5-RTW--Date-3-4-2011-Rev-4742.aspx Changes in v1.9.5.5: Rev. #4661 - added a "Copy" function to forum administration; now instead of having to manually re-enter all the access masks, etc., you can just duplicate an existing forum and modify it after the fact. Rev. #4642 - new setting to enable/disable Last Unread posts links. Rev. #4641 - added Arabic language t...
    - Snippet Designer: Snippet Designer 1.3.1: Snippet Designer 1.3.1 for Visual Studio 2010. This is a bug fix release. Change log: fixed bug where Snippet Designer would fail if you had the most recent Productivity Power Tools installed; fixed bug where "Export as Snippet" was failing in non-English locales; fixed bug where opening a new .snippet file would fail in non-English locales.
    - Chiave File Encryption: Chiave 1.0: Final release for Chiave 1.0 Stable: an application for file encryption and decryption using the 512-bit Rijndael encryption algorithm with a simple to use UI. It is written in C# and compiled for .NET 3.5. It incorporates features of Windows 7 like Jumplists, Taskbar progress and Aero Glass. Now with added support for Windows XP! Change log from 0.9.2 to 1.0: added an icon overlay for the Windows 7 taskbar icon; added thumbnail toolbar buttons to make the navigation easier...
    - Chirpy - VS Add In For Handling Js, Css, DotLess, and T4 Files: Margogype Chirpy (ver 2.0): Chirpy loves Americans. Chirpy hates Americanos.
    - ASP.NET: Sprite and Image Optimization Preview 3: The ASP.NET Sprite and Image Optimization framework is designed to decrease the amount of time required to request and display a page from a web server by performing a variety of optimizations on the page's images. This is the third preview of the feature and works with ASP.NET Web Forms 4, ASP.NET MVC 3, and ASP.NET Web Pages (Razor) projects. The binaries are also available via NuGet: AspNetSprites-Core, AspNetSprites-WebFormsControl, AspNetSprites-MvcAndRazorHelper. It includes the foll...

    New Projects:

    - A-Inventory: Inventory management system: purchase orders, sales orders, multiple warehouses, stock transfers, financial transaction tracking, reports.
    - Async Execution Lib: This library simplifies the process of executing code on a different thread and separating the caller from the actual command logic. To do this, messages are put into an execution module and the library automatically calls the target message handlers.
    - Bing Wallpaper Downloader: Downloads wallpapers from Bing and displays them as the desktop wallpaper. Based on the UI and concepts of Bing4Free.
    - CloudBox: This is a custom storage controller for DropBox. It lets you create multiple DropBox accounts and will then treat them as one large storage.
    - Controller2: A project to develop the system for the Integrating Project of the Systems Analysis and Development course at Cesumar.
    - Curso_Virtual_FPSEP: This project builds the system for running a virtual course planned to be taught at the CFE. The course is being developed under the direction of engineer Earl Amazurrutia Carson and is aimed at the protections staff.
    - DBSJ: test
    - EveTools: EveTools is a set of classes to aid in the development of programs that access the EVE Online API. It is written with a very event-driven model; all normally blocking, non-compute-bound workloads will instead run asynchronously, freeing up your program to do as it pleases!
    - GeoIp: .NET MaxMind GeoIP client library.
    - KieuHungProject: Doan Vien management.
    - Mimoza
    - mmoss: Medical Marijuana Open Source System, to manage point-of-sale, inventory, grow, and compliance issues related to the sale of MMJ.
    - NetCassa: .NET Cassandra client library.
    - Neudesic Pulse SDK: The Neudesic Pulse SDK allows developers to quickly and efficiently build solutions that interact with the Neudesic Pulse social framework APIs.
    - Nuget Package Creation and Publishing Wizard: Simplifies the creation and publishing of a NuGet package.
    - Petscareinlondon: This project is all about pet care.
    - Pool based Batch Processing: A simple framework that allows pool based processing of batches. A new batch is picked up when pools are empty. The framework exposes simple events that allow users to process jobs at the back end (Windows service).
    - Project Nonnon: Keep-it-simple software for Win32, MinGW GCC 3.x, C language + batch files; a POSIX-based base layer library and Win32 applications; Easy2Compile, Easy2Make, Easy2Use.
    - Python Tools for Visual Studio: Python Tools for Visual Studio adds support for IntelliSense, debugging, profiling, IPython (.11+), and cluster & cloud computing to Visual Studio. It supports both CPython (2.4-3.1) and IronPython (2.7).
    - python_lib: Like protobuf: parses xml definitions of C++ structs, and develops lots of usages.
    - RapidMEF: A collection of tools to help developers author and debug applications that use MEF.
    - Reflective: Reflective adds lots of new extension methods related to reflection and Reflection.Emit, to make it easier to build code dynamically at runtime.
    - Remote Desktop Organizer: This is a fun little application that lets you easily manage lots of different Remote Desktops. It allows the user to apply custom aliases and descriptions so that it is easy to denote which desktop is which, and allows for easy customization and management.
    - SharePoint data population: SharePoint data population.
    - SpriteEditor: Basic sprite editor.
    - Stock3243254635254325435: 345234324324324324
    - Time Management Application: Based upon Stephen Covey's 7 Habits of Highly Effective People. I was looking for a place to digitally record my time; when I could not find one I liked, I set out to build my own. This also covers several of the coding practices and patterns that I have been putting together.
    - Tuned N: Tuned N is a Playlist.com based media application. Allows listening to playlists via a desktop app and allows downloading of tracks in playlists.
    - Winforms BetterBindingSource: A better Windows Forms BindingSource control which enables you to add class based datasources without the hassle of adding datasource files and using the slow wizard to add data sources.
    - WinShutdown: Just a small application to count down to a Windows shutdown/restart, for when you want Windows to shut down after some time or after some application finishes its work. If you have a better project for this purpose, or an update for my code, let me know.


  • CodePlex Daily Summary for Thursday, March 03, 2011

    Popular Releases:

    - DDRMenu: 01.99.00 (aka 2.0 beta 1): First beta of version 2.0.
    - AutoLoL: AutoLoL v1.6.0: Implemented a 3D model / skin viewer; added JarvanIV and Maokai; fixed Renekton and Karma image sizes; added more hotkeys to Auto Chat; fix: update information is now cached daily (instead of forever).
    - Chirpy - VS Add In For Handling Js, Css, DotLess, and T4 Files: Margogype Chirpy (ver 2.0): Chirpy loves Americans. Chirpy hates Americanos.
    - Document.Editor: 2011.9: What's new for Document.Editor 2011.9: new templates system; new plug-in system; new Replace dialog; new reset settings; minor bug fixes, improvements and speed-ups.
    - TortoiseHg: TortoiseHg 2.0: TortoiseHg 2.0 is a complete rewrite of TortoiseHg 1.1, switching from PyGtk to PyQt.
    - Sandcastle Help File Builder: SHFB v1.9.2.0 Release: This release supports the Sandcastle June 2010 Release (v2.6.10621.1). It includes full support for generating, installing, and removing MS Help Viewer files. This new release is compiled under .NET 4.0, supports Visual Studio 2010 solutions and projects as documentation sources, and adds support for projects targeting the Silverlight Framework. NOTE: The included help file and the online help have not been completely updated to reflect all changes in this release. A refresh will be issue...
    - Network Monitor Open Source Parsers: Microsoft Network Monitor Parsers 3.4.2554: The Network Monitor Parsers packages contain parsers for more than 400 network protocols, including RFC based public protocols and protocols for Microsoft products defined in the Microsoft Open Specifications for Windows and SQL Server. NetworkMonitor_Parsers.msi is the base parser package which defines parsers for commonly used public protocols and protocols for Microsoft Windows. In this release, we have added 4 new protocol parsers and updated 79 existing parsers in the NetworkMonitor_Pa...
    - Image Resizer for Windows: Image Resizer 3 Preview 1: Prepare to have your minds blown. This is the first preview of what will eventually become 39613. There are still a lot of rough edges and plenty of areas still under construction, but for your basic needs, it should be relatively stable. Note: you will need the .NET Framework 4 installed to use this version. Below is a status report of where this release is in terms of the overall goal for version 3. If you're feeling a bit technically ambitious and want to check out some of the features th...
    - JSON Toolkit: JSON Toolkit 1.1: Updated the GetAllJsonObjects() and GetAllProperties() methods to JsonObject and Properties properties.
    - Facebook Graph Toolkit: Facebook Graph Toolkit 1.0: Refer to http://computerbeacon.net for documentation and a tutorial. New features: added FQL support; added an Expires property to the Api object; added support for publishing to a user's friend / Facebook Page; added support for posting and removing comments on posts; added support for adding and removing likes on posts and comments; added static methods for the Page class; added support for the Iframe Application Tab of a Facebook Page; added support for obtaining the user's country, locale and age in If...
    - ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7.1: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager; small improvements for some helpers; and AjaxDropdown now has Data like the Lookup, except that its value gets reset and the list refilled if any element of the data gets changed.
    - Managed Extensibility Framework: MEF 2 Preview 3: This release targets .NET 4.0 and Silverlight 4.0; accordingly, there are two solution files. The assemblies are named System.ComponentModel.Composition.Codeplex.dll as a way to avoid clashing with the version shipped with the 4th version of the framework. Introduced CompositionOptions to container instantiation; CompositionOptions.DisableSilentRejection makes MEF throw an exception on composition errors (useful for diagnostics); support for open generics; support for attribute-less registr...
    - PHPExcel: PHPExcel 1.7.6 Production: Donations: donate via PayPal. If you want to, we can also add your name / company on our Donation Acknowledgements page. PEAR channel: we now also have a full PEAR channel! Here's how to use it. New installation: pear channel-discover pear.pearplex.net, then pear install pearplex/PHPExcel. Or if you've already installed PHPExcel before: pear upgrade pearplex/PHPExcel. The official page can be found at http://pearplex.net. Want to contribute? Please refer to the Contribute page.
    - WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.4: Version 2.0.0.4 (Milestone 4): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements: .NET Framework 4.0 (the package contains a solution file for Visual Studio 2010); the unit test projects require Visual Studio 2010 Professional. Remark: the sample applications use Microsoft's IoC container MEF; however, the WPF Application Framework (WAF) doesn't force you to use the same IoC container in your application. You can use ...
    - VidCoder: 0.8.2: Updated auto-naming to handle seconds and frames ranges as well. Deprecated the {chapters} token for auto-naming in favor of {range}. Allowing file drag to the preview window and enabling main window shortcut keys to work no matter what window is focused. Added an option in config to enable giving custom names to audio tracks. (Note that these names will only show up in certain players like iTunes or on the iPod; players that support custom track names normally may not show them.) Added tooltips ...
    - SQL Server Compact Toolbox: Standalone version 2.0 for SQL Server Compact 4.0: Download the Visual Studio add-in for SQL Server Compact 4.0 and 3.5 from here. A standalone version of (most of) the same functionality as the add-in, for SQL Server Compact 4.0. Useful for anyone not having Visual Studio Professional or higher installed. Requires .NET 4.0. Any feedback much appreciated.
    - Claims Based Identity & Access Control Guide: Drop 1 - Claims Identity Guide V2: Highlights of drop #1: this is the first drop of the new "Claims Identity Guide" edition. In this release you will find: all previous samples updated and enhanced; all code upgraded to .NET 4 and Visual Studio 2010; extensive cleanup; refactored simulated issuers (each solution now gets its own issuers, which results in much cleaner and simpler to understand code); added Single Sign Out support; added the first sample using ACS ("ACS as a Federation Provider"). This sample extends the ori...
    - Simple Notify: Simple Notify Beta 2011-02-25: Features: host the service with a single click in a console; host the service as a Windows service; notification client application; push client application; push notifications from your PowerShell script; C# wrapper libraries for your applications.
    - Mono.Addins: Mono.Addins 0.6: The 0.6 release of Mono.Addins includes many improvements, bug fixes and new features. Add-in engine: add-in name and description can now be localized; there are new custom attributes for defining them, and they can also be specified as xml elements in an add-in manifest instead of attributes. Support for custom add-in properties: it is now possible to specify arbitrary properties in add-ins, which can be queried at install time (using the Mono.Addins.Setup API) or at run-time. Custom extensio...
    - Minemapper: Minemapper v0.1.5: Now supports the new Minecraft beta v1.3 map format, thanks to updated mcmap. Disabled biomes until Minecraft Biome Extractor supports the new format.

    New Projects:

    - ARCHFORMEMVC - Just a reference MVC architecture for me and perhaps you: ARCHFORMEMVC stands for giving me (and perhaps you) a default architecture for handling cross-cutting concerns and a quick way to start a new project using MVC.
    - BackupLib: SingularityShift.BackupLib provides a framework for an application's backup needs. It offers common interfaces, abstract implementations, and many base classes ready for use in any .NET application. It can be easily extended to use any backup method you might require.
    - ComponentModel: ComponentModel provides a simple API for building an application based on a hierarchy of various types of components, which are then extended to fill essentially any need. It attempts to simplify much of the work of building such a system.
    - DataModels: DataModels is a project which aims to allow easy reuse of specific data models using a very simple API.
    - euler 14 problem: euler 14 problem
    - GameLib: GameLib is a library for rapid game tool development. It offers an API and many abstract/concrete implementations for referencing and managing a game and its modifications. It is the driving force behind FOMS (Fallout Mod Studio).
    - GSH Reasoner: GSHR (Gloriously Slow Haskell Reasoner) is a simple, partially incomplete and very slow reasoner for OWL 2 ontologies which uses rules for inference and consistency checking. Written in Haskell.
    - internal DataBase for C# and .Net/Mono: This project will create a database which can only be used internally in a program, so there is no need to have a DB installed on the user's PC. SQL support will come too, so you can easily migrate existing projects.
    - Morro.VPN: morro bay various net
    - MovieManager: MovieManager is a tool to keep you updated about the movie series you watch. The first step is to create your movie series database; next you select the episodes you have already watched. The program will automatically show you new episodes and the air dates of upcoming episodes.
    - NHL.App: NHL.App makes it easier for people to browse news in the NHL and current standings in a format that is nice and easy on the eyes. It is developed in C# with Visual Studio 2010 Express for Windows Phone.
    - RegexRenamer: RegexRenamer allows file moving and renaming using .NET regular expressions. It supports regular expression chains, so files can be renamed/moved in multiple steps. It is developed in C# using .NET Framework 4 and WPF.
    - Research: Personal research project
    - SAB BizTalk Archiving Pipeline Component: The SAB BizTalk Archiving Pipeline Component can be added to any stage of receive and send ports to archive processed messages to file locations.
    - Simple Configuration Facade: SingularityShift.Preferences is an abstraction layer over any configuration library, allowing you to keep whichever configuration library you choose out of your main code entirely, so you are free to change it later without repercussion or the need to make extensive code changes.
    - Simple Dependency Injection Facade: SDIF allows decoupling not only from your dependencies, but also from the framework that injects them. It is a lightweight layer between your code and the DI framework that wires up your dependencies, so that you can easily adapt to other frameworks as necessary.
    - Simple Json: A simple set of utilities for JSON and REST.
    - SingularityShift.Common: SingularityShift.Common is where we put all of the code that we use in many or all of our other libraries and applications. This obviously helps us, but is also designed to help other developers by providing these common interfaces and classes for use in other projects as well.
    - TLDRML: A Python/JSON inspired markup language designed to be extremely terse.
    - TripleA for Silverlight: Axis and Allies port from the Java version.
    - WCF Credentials Manager in WPF (MVVM): This is a WPF (MVVM) version of the IDesign WCF credentials manager. It provides a more responsive user interface for managing users and roles in applications that use the ASP.NET membership and role providers.
    - Windawesome: A highly customizable dynamic window manager for Windows, which works on top of explorer.exe and NOT as a shell replacement. Written in C# and scriptable and pluggable in IronRuby, IronPython and any .NET language that compiles down to an assembly.
    - Zack's Fiasco - eWallet to KeePass: Convert an eWallet export to a KeePass 2.x XML file, which can be imported into KeePass 2.x.


  • CodePlex Daily Summary for Monday, February 21, 2011

    Popular Releases:

    - A2Command: 2011-02-21 - Version 1.0: This is the full release version of A2Command 1.0, dated February 21, 2011. These notes supersede any prior version's notes. All prior releases may be found on the project's website at http://a2command.codeplex.com/releases/ where you can read the release notes for older versions as well as download them. This version of A2Command is intended to replace any previous version you may have downloaded in the past. There were several bug fixes made after Release Candidate 2 and all...
    - Chiave File Encryption: Chiave 0.9: An application for file encryption and decryption using the 512-bit Rijndael encryption algorithm with a simple to use UI. It is written in C# and compiled for .NET 3.5. It incorporates features of Windows 7 like Jumplists, Taskbar progress and Aero Glass. Feedback is welcome!
    - Rawr: Rawr 4.0.20 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.php This is the Cataclysm beta release; more details can be found at the following link: http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new downloadable WPF version of Rawr! This is a pre-alpha release of the WPF version; there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...
    - Azure Storage Samples: Version 1.0 (February 2011): These downloads contain source code. Each is a complete sample that fully exercises Windows Azure Storage across blobs, queues, and tables. The difference between the downloads is the implementation approach. Storage DotNet CS.zip is a .NET StorageClient library implementation in the C# language (this library comes with the Windows Azure SDK and contains helper classes for accessing blobs, queues, and tables). Storage REST CS.zip is a REST implementation in the C# language. The code to implement R...
    - MiniTwitter: 1.66: MiniTwitter 1.66 (Japanese release notes; mentions User Streams).
    - View Layout Replicator for Microsoft Dynamics CRM 2011: View Layout Replicator (1.0.119.18): Initial release.
    - Windows Phone 7 Isolated Storage Explorer: WP7 Isolated Storage Explorer v1.0 Beta: Current release features: WPF desktop explorer client; Visual Studio integrated tool window explorer client (Visual Studio 2010 Professional and above); supported operations: Refresh (isolated storage information), Add Folder, Add Existing Item, Download File, Delete Folder, Delete File; the explorer supports operations running on multiple remote applications at the same time; the explorer detects application disconnect (1-2 second delay); the explorer confirms operation completed status; explorer d...
    - Advanced Explorer for Wp7: Advanced Explorer for Wp7 Version 1.4 Test8: Added an option to run under the lock screen; fixed a bug when you open a pdf/mobi file without starting Adobe Reader/Amazon Kindle first; boosted loading time for folders; added the \Windows directory (all devices); you can now interact with the filesystem while it is loading!
    - Game Files Open - Map Editor: Game Files Open - Map Editor Beta 2 v1.0.0.0: The second beta release of the Map Editor; we have fixed a big bug in the file regeneration.
    - Document.Editor: 2011.6: What's new for Document.Editor 2011.6: new Left to Right and Right to Left support; new Indent more/less support; improved Home tab; improved tooltips/shortcut keys; minor bug fixes, improvements and speed-ups.
    - Catel - WPF and Silverlight MVVM library: 1.2: Catel history legend: (+) Added, (*) Changed, (-) Removed, (x) Error / bug (fix). For more information about issues or new feature requests, please visit: http://catel.codeplex.com Version 1.2, release date 2011/02/17. Added/fixed: (+) DataObjectBase now supports Isolated Storage out of the box: Person.Save(myStream) stores a whole object graph in Silverlight; (+) DataObjectBase can now be converted to Json via Person.ToJson(); (+)...
    - All-In-One Code Framework (Chinese samples): 2011-02-18: The 2011 release of the All-In-One Code Framework Chinese samples: http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 The release includes the AzureBingMaps sample, covering Azure, WCF, Silverlight and Windows Phone. Covered technologies: Windows Azure, SQL Azure, Windows Azure AppFabric, Windows Live Messenger Connect, Bing Maps. Clients: HTML, Silverlight on Windows PC and Mac, and Silverlight on Windows Phone. Details: http://blog.csdn.net/sjb5201/archive/2011...
    - Image.Viewer: 2011: First version of 2011.
    - Silverlight Toolkit: Silverlight for Windows Phone Toolkit - Feb 2011: Silverlight for Windows Phone Toolkit offers developers additional controls for Windows Phone application development, designed to match the rich user experience of the Windows Phone 7. Suggestions? Features? Questions? Ask questions in the Create.msdn.com forum. Add bugs or feature requests to the Issue Tracker. Help us shape the Silverlight Toolkit with your feedback! Please clearly indicate that the work items and issues are for the phone t...
    - VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 29 Beta: Note: this release does not work with custom VsTortoise toolbars; these get removed every time you shut down Visual Studio (#7940). Build 29 (beta). New: added VsTortoise Solution Explorer integration for Web Project Folder, Web Folder and Web Item. Fix: TortoiseProc was called with invalid parameters when using TSVN 1.4.x or older (#7338, thanks psifive). Fix: add-in does not work when "TortoiseSVN/bin" is not added to the PATH environment variable (#7357). Fix: missing error message when ...
    - Sense/Net CMS - Enterprise Content Management: SenseNet 6.0.3 Community Edition: We are happy to introduce the latest version of Sense/Net, with integrated ECM workflow capabilities! In the past weeks we have been working hard to migrate the product to .NET 4 and include a workflow framework in Sense/Net built upon Windows Workflow Foundation 4. This brand new feature enables developers to define and develop workflows, and supports users when building and setting up complicated business processes involving content creation and response...
    - thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.11): Features: added a new option that allows properties on data contract types to be marked as virtual. Bug fixes: fixed a bug caused by certain project properties not being available on Web Service Software Factory projects; fixed a bug that could result in the WrapperName value of the MessageContractAttribute being incorrect when the Adjust Casing option is used; the menu item code now caters for CommandBar instances that are not available (for example, the Web Item CommandBar does not exist ...)
    - AllNewsManager.NET: AllNewsManager.NET 1.3: This new version provides several new features, improvements and bug fixes. Some new features: online users, avatars, a copy function (to create a new article from another one), SEO improvements (friendly urls), new admin buttons, and more...
    - Facebook Graph Toolkit: Facebook Graph Toolkit 0.8: Version 0.8 (15 Feb 2011): moved to beta stage; publish photo feature; "email" field of the User object added; new Graph Api objects: Group, Event; new Graph Api connections: likes, groups, events.
    - DJME - The jQuery extensions for ASP.NET MVC: DJME2 - The jQuery extensions for ASP.NET MVC beta2: The source code and runtime library for DJME2. For more product info you can go to http://www.dotnetage.com/djme.html What is new? The Grid extension added; the ModelBinder added, which helps you create bindable data actions; the DnaFor() control factory added, which enables Model bindable extensions; enhanced ListBox and ComboBox data binding.

    New Projects:

    - Auto-mobile Forum: Automobileforum is a website regarding cars where users can share their ideas about cars and get whatever information they want about a car. The language used for development of this website is ASP.NET.
    - Azure Storage Samples: Azure Storage Samples are Windows Azure Storage code samples. They show how to perform every Windows Azure Storage operation for blobs, queues, and tables. There are 2 identical implementations, one using REST and the other using the .NET Storage Client Library (both in C#).
    - b1234: b1234 is b1234
    - betrayal: bahoth
    - Bungee Jumper: This is a community website for a sport known as bungee jumping, which is one of the most dangerous adventure sports.
    - Chiave File Encryption: An application for file encryption and decryption using the 512-bit Rijndael encryption algorithm with a simple UI. It is written in C# and compiled for .NET 3.5, and incorporates features of Windows 7 like Jumplists, Taskbar progress and Aero Glass.
    - DecEncIT: DecEncIT makes the average user's experience with cryptography (encryption) easier. No worries about settings or vague, hard-to-understand technical terms; it also includes an option to automate all recommended tasks. VB.Net/3.5.
    - ELearningTutorial: A list of tutorials.
    - Extjs Practice - Open our extjs journey: Learning extjs: 1. demo, 2. practice, 3. work.
    - Functional: This class library for .NET 3.5 and 4.0 provides extension methods for delegates, to help with functional style programming.
    - HauntedNetworking: HauntedNetworking is a site where people can share their paranormal experiences and related photographs. They can also share pictures of haunted and scary places across the world.
    - KharaSoft Nexus: Nexus is the last inbox you'll ever need.
    - Kojax: kojax project
    - LEAD - Learn Execute And Develop: It's a portal to share your knowledge: learn from blogs and wikis, post your questions on the query board, learn new things through video training, then execute and develop them, helping you gain knowledge.
    - Luac For 5.1
    - Lunar Lander: Simple WPF Lunar Lander simulator, with a reactive intelligent agent autopilot.
    - MasterSchool: Creating school certificates.
    - movies blogspot: The Movies blog-spot is an interactive site for information related to movies, actors, personnel and fictional character features.
    - MvcTemplates: MvcTemplates is a project devoted to creating a complete suite of Razor Editor and Display templates that utilize attributes, built-in models, and jQuery. It is developed in C# and packages the templates into an easy to use dll.
    - MySearcher: Quickly and easily search the major search engines with MySearcher. No clicking around, no switching windows; no matter how many windows you have open, or what you're doing, searching the major search engines is as simple as moving your mouse to the side of the screen!
    - Photography in Kerala: Photography in Kerala is a website which makes it easier for people who are into nature photography to share their photos and other info about Kerala (also called God's own country). You'll no longer have to search for information about Kerala; share your own info and pics here.
    - Pixels(Storage Engine): Pixels(Storage Engine)
    - PortsManager: A C# project to simplify working with ports. Currently it works with COM ports in receive-only mode; soon it will be updated to support sending, and the library will be expanded to handle sending information even over a network.
    - Public Web Browser: Public Web Browser is a web browser client with a lot of features.
    - RealStatus - MS Communicator 2007 & A Traffic Light Indicator Hardware: A demonstration of traffic light indicator hardware responding to status events of MS Communicator in real time. A demo video can be found here: http://www.youtube.com/watch?v=GwgfSVeXVks
    - SABnzbdNet: A SABnzbd monitor and administrator. Built in C# WPF.
    - Sendill: A system for delivery-van dispatch stations.
    - T4 Metadata and Data Annotations Template: This T4 template handles generating metadata classes from an Entity Framework 4 model and decorates primitive and navigation properties with data annotation attributes such as [Required] and [StringLength]. The [DataType] attribute is also applied when appropriate.
    - urlshorten service: A URL-shortening service; needs storoom (http://storoom.codeplex.com/). Developed in VB.NET.
    - uVersionClientCache: uVersionClientCache is a custom macro to always automatically version (via URL querystring parameter) your files, based on an MD5 hash of the file contents or the file's last-modified date, to prevent issues with client browsers caching an old file after you have changed it.
    - Video Game Network Library: A lightweight network library written in C++ for video games.
    - View Layout Replicator for Microsoft Dynamics CRM 2011: View Layout Replicator makes it easier for Microsoft Dynamics CRM 2011 customizers to copy the layout of a view and paste it into the layout of other views in the same entity.
    - VLC MakeIT: This is a virtual learning center.
    - yogaasana4you: The website is on Yoga Asana, for the community who want to feel the amazing experience of yoga asana - yoga exercises and postures.


  • Common Live Upgrade problems

    - by user12611829
    As I have worked with customers deploying Live Upgrade in their environments, several problems seem to surface over and over. With this blog article, I will try to collect these troubles, as well as suggest some workarounds. If this sounds like the beginnings of a wiki, you would be right. At present, there is not enough material for one, so we will use this blog for the time being. I do expect new material to be posted on occasion, so if you wish to bookmark it for future reference, a permanent link can be found here.

    Live Upgrade copies over ZFS root clone

    This was introduced in Solaris 10 10/09 (u8), and the root of the problem is a duplicate entry in the source boot environment's ICF configuration file. Prior to u8, a ZFS root file system was not included in /etc/vfstab, since the mount is implicit at boot time. Starting with u8, the root file system is included in /etc/vfstab, and when the boot environment is scanned to create the ICF file, a duplicate entry is recorded. Here's what the error looks like.

        # lucreate -n s10u9-baseline
        Checking GRUB menu...
        System has findroot enabled GRUB
        Analyzing system configuration.
        Comparing source boot environment file systems with the file
        system(s) you specified for the new boot environment.
        Determining which file systems should be in the new boot environment.
        Updating boot environment description database on all BEs.
        Updating system configuration files.
        Creating configuration for boot environment .
        Source boot environment is .
        Creating boot environment .
        Creating file systems on boot environment .
        Creating file system for in zone on .

        The error indicator ----- /usr/lib/lu/lumkfs: test: unknown operator zfs

        Populating file systems on boot environment .
        Checking selection integrity.
        Integrity check OK.
        Populating contents of mount point .

        This should not happen ------ Copying. Ctrl-C and cleanup

    If you weren't paying close attention, you might not even know this is an error. The symptoms are lucreate times that are way too long due to the extraneous copy, or the one that alerted me to the problem: the root file system is filling up, again thanks to a redundant copy. This problem has already been identified and corrected, and a patch (121431-58 or later for x86, 121430-57 for SPARC) is available. Unfortunately, this patch has not yet made it into the Solaris 10 Recommended Patch Cluster. Applying the prerequisite patches from the latest cluster is a recommendation from the Live Upgrade Survival Guide blog, so an additional step will be required until the patch is included. Let's see how this works.

        # patchadd -p | grep 121431
        Patch: 121429-13 Obsoletes: Requires: 120236-01 121431-16 Incompatibles: Packages: SUNWluzone
        Patch: 121431-54 Obsoletes: 121436-05 121438-02 Requires: Incompatibles: Packages: SUNWlucfg SUNWluu SUNWlur
        # unzip 121431-58
        # patchadd 121431-58
        Validating patches...
        Loading patches installed on the system... Done!
        Loading patches requested to install. Done!
        Checking patches that you specified for installation. Done!
        Approved patches will be installed in this order: 121431-58
        Checking installed patches...
        Executing prepatch script...
        Installing patch packages...
        Patch 121431-58 has been successfully installed.
        See /var/sadm/patch/121431-58/log for details
        Executing postpatch script...
        Patch packages installed: SUNWlucfg SUNWlur SUNWluu

        # lucreate -n s10u9-baseline
        Checking GRUB menu...
        System has findroot enabled GRUB
        Analyzing system configuration.
        INFORMATION: Unable to determine size or capacity of slice .
        Comparing source boot environment file systems with the file
        system(s) you specified for the new boot environment.
        Determining which file systems should be in the new boot environment.
        INFORMATION: Unable to determine size or capacity of slice .
        Updating boot environment description database on all BEs.
        Updating system configuration files.
        Creating configuration for boot environment .
        Source boot environment is .
        Creating boot environment .
        Cloning file systems from boot environment to create boot environment .
        Creating snapshot for on .
        Creating clone for on .
        Setting canmount=noauto for in zone on .
        Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
        Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
        Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
        File propagation successful
        Copied GRUB menu from PBE to ABE
        No entry for BE in GRUB menu
        Population of boot environment successful.
        Creation of boot environment successful.

    This time it took just a few seconds. A cursory examination of the offending ICF file (/etc/lu/ICF.3 in this case) shows that the duplicate root file system entry is now gone.

        # cat /etc/lu/ICF.3
        s10u8-baseline:-:/dev/zvol/dsk/panroot/swap:swap:8388608
        s10u8-baseline:/:panroot/ROOT/s10u8-baseline:zfs:0
        s10u8-baseline:/vbox:pandora/vbox:zfs:0
        s10u8-baseline:/setup:pandora/setup:zfs:0
        s10u8-baseline:/export:pandora/export:zfs:0
        s10u8-baseline:/pandora:pandora:zfs:0
        s10u8-baseline:/panroot:panroot:zfs:0
        s10u8-baseline:/workshop:pandora/workshop:zfs:0
        s10u8-baseline:/export/iso:pandora/iso:zfs:0
        s10u8-baseline:/export/home:pandora/home:zfs:0
        s10u8-baseline:/vbox/HardDisks:pandora/vbox/HardDisks:zfs:0
        s10u8-baseline:/vbox/HardDisks/WinXP:pandora/vbox/HardDisks/WinXP:zfs:0
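    If you want to check whether an existing boot environment is affected before patching, you can look for the telltale duplicate root entry yourself. A minimal sketch, assuming the ICF files live in /etc/lu and use the BEname:mountpoint:device:fstype:size format shown above (the file numbers and BE names will differ on your system):

        #!/bin/sh
        # Flag any ICF file that lists the root file system more than once;
        # a healthy boot environment lists "/" exactly once.
        for icf in /etc/lu/ICF.*; do
            n=`awk -F: '$2 == "/"' $icf | wc -l`
            [ $n -gt 1 ] && echo "$icf: root file system listed $n times"
        done

    Any file the loop flags belongs to a boot environment that will trigger the extraneous copy described above when lucreate runs.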
    Solaris 10 9/10 introduces new autoregistration file

    This one is actually mentioned in the Oracle Solaris 9/10 release notes. I know, I hate it when that happens too. Here's what the "error" looks like.

        # luupgrade -u -s /mnt -n s10u9-baseline
        System has findroot enabled GRUB
        No entry for BE in GRUB menu
        Copying failsafe kernel from media.
        61364 blocks
        miniroot filesystem is
        Mounting miniroot at
        ERROR: The auto registration file does not exist or incomplete.
        The auto registration file is mandatory for this upgrade.
        Use -k argument along with luupgrade command.
        autoreg_file is path to auto registration information file.
        See sysidcfg(4) for a list of valid keywords for use in this file.
        The format of the file is as follows.
        oracle_user=xxxx
        oracle_pw=xxxx
        http_proxy_host=xxxx
        http_proxy_port=xxxx
        http_proxy_user=xxxx
        http_proxy_pw=xxxx
        For more details refer "Oracle Solaris 10 9/10 Installation Guide: Planning for Installation and Upgrade".

    As with the previous problem, this is also easy to work around. Assuming that you don't want to use the auto-registration feature at upgrade time, create a file that contains just autoreg=disable and pass the filename on to luupgrade. Here is an example.

        # echo "autoreg=disable" > /var/tmp/no-autoreg
        # luupgrade -u -s /mnt -k /var/tmp/no-autoreg -n s10u9-baseline
        System has findroot enabled GRUB
        No entry for BE in GRUB menu
        Copying failsafe kernel from media.
        61364 blocks
        miniroot filesystem is
        Mounting miniroot at
        #######################################################################
        NOTE: To improve products and services, Oracle Solaris communicates
        configuration data to Oracle after rebooting.
        You can register your version of Oracle Solaris to capture this data
        for your use, or the data is sent anonymously.
        For information about what configuration data is communicated and how
        to control this facility, see the Release Notes or
        www.oracle.com/goto/solarisautoreg.
        INFORMATION: After activated and booted into new BE,
        Auto Registration happens automatically with the following Information
        autoreg=disable
        #######################################################################
        Validating the contents of the media .
        The media is a standard Solaris media.
        The media contains an operating system upgrade image.
        The media contains version .
        Constructing upgrade profile to use.
        Locating the operating system upgrade program.
        Checking for existence of previously scheduled Live Upgrade requests.
        Creating upgrade profile for BE .
        Checking for GRUB menu on ABE .
        Saving GRUB menu on ABE .
        Checking for x86 boot partition on ABE.
        Determining packages to install or upgrade for BE .
        Performing the operating system upgrade of the BE .
        CAUTION: Interrupting this process may leave the boot environment
        unstable or unbootable.

    The Live Upgrade operation now proceeds as expected. Once the system upgrade is complete, we can manually register the system. If you want to do a hands-off registration during the upgrade, see the Oracle Solaris Auto Registration section of the Oracle Solaris Release Notes for instructions on how to do that.
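    Once luupgrade finishes, it is worth confirming the state of the new boot environment before switching to it. A minimal sketch of that last step, assuming the BE name s10u9-baseline used in the examples above (output abbreviated; your BE names will differ):

        # lustatus
        Boot Environment           Is       Active Active    Can    Copy
        Name                       Complete Now    On Reboot Delete Status
        -------------------------- -------- ------ --------- ------ ------
        s10u8-baseline             yes      yes    yes       no     -
        s10u9-baseline             yes      no     no        yes    -

        # luactivate s10u9-baseline    # make the new BE the one to boot
        # init 6                       # use init or shutdown, not "reboot", after luactivate

    The comment on the last line is deliberate: luactivate arms a fallback mechanism that is only set up correctly when the system is brought down through init or shutdown.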


  • CodePlex Daily Summary for Sunday, November 28, 2010

    Popular Releases:

    - MDownloader: MDownloader-0.15.25.7002: Fixed the updater; fixed FileServe; fixed LetItBit.
    - Notepad.NET: Notepad.NET 0.7 Preview 1: What's new? Optimized code generation, which means it will run significantly faster. A preview of syntax highlighting: only VB.NET highlighting is supported; C# and Ruby will come in Preview 2. Improved editing updates (when the line number, etc. updates) to be more graceful. Recent Documents works! Images can be inserted, but they're extremely large. Known bugs: the update process hangs; this is a bug apparently present since 0.5. It will be fixed in Preview 2. Until then, perform a fr...
    - Flickr Schedulr: Schedulr v2.2.3984: This is v2.2 of Flickr Schedulr, a Windows desktop application that automatically uploads pictures and videos to Flickr based on a schedule (e.g. to post a new picture every day at a certain time). What's new in this release? Videos are now supported as well. The application can now automatically add pictures to the queue when it starts up, from specific monitored folders. Sets and groups can now be displayed as text only (without icons) to speed up the application. Fixed issue that ta...
    - Cropper: 1.9.4: Mostly fixes for issues, with a few feature requests. Fixed issues: 2730, 3638, 14467, 11044, 11447, 11448, 11449, 14665. Implemented features: 6123, 11581.
    - PFC: PFC for PB 11.5: This is just a migration from the 11.0 code. No changes have been made yet (and they are needed) for it to work properly with 11.5.
    - PDF Rider: PDF Rider 0.5: This release does not add any new feature for pdf manipulation, but enables automatic update checking, so it is recommended to install it in order to stay updated with the next releases. Prerequisites: Microsoft Windows operating systems (XP - Vista - 7); Microsoft .NET Framework 3.5 runtime; a PDF rendering software (e.g. Adobe Reader) that can be opened inside Internet Explorer. Installation instructions: choose one of the following methods: 1. Download and run the "pdfRider0...
    - XamlQuery/WPF - The Write Less, Do More, WPF Library: XamlQuery-WPF v1.2 (Runtime, Source): This is the first release of the popular XamlQuery library for WPF. XamlQuery has already gained recognition among Silverlight developers.
    - Math.NET Numerics: Beta 1: First beta of Math.NET Numerics. Only contains the managed linear algebra provider. Beta 2 will include the native linear algebra providers, along with better documentation and examples.
    - Minecraft GPS: Minecraft GPS 1.1: 1.1 release. New features: Compass! New style. Set opacity on the main window to allow overlay of Minecraft.
    - Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-11-25: Code samples for Visual Studio 2010.
    - Wii Backup Fusion: Wii Backup Fusion 0.8.5 Beta: WBFS repair (default) options fixed; transfer to image fixed; settings UI widget names fixed; some little bug fixes. You need to reset the settings! Delete WiiBaFu's config file, or the registry entries on Windows. Linux: ~/.config/WiiBaFu/wiibafu.conf; Windows: HKEY_CURRENT_USER\Software\WiiBaFu\wiibafu; Mac OS X: ~/Library/Preferences/com.wiibafu.wiibafu.plist Caution: this is a BETA version! Errors, crashes and data loss are not impossible! Use in test environments only, not on productive syste...
    - Minemapper: Minemapper v0.1.3: Added process count and world size calculation progress to the status bar. Added a View -> 'Status Bar' menu item to show/hide the status bar. The status bar is automatically shown when loading a world. Added a prompt, when loading a world, to use or clear cached images.
    - Sexy Select: sexy select v0.4: Changes in v0.4: added method: elements - this returns all the option elements that are currently added to the select list; added method: selectOption - this method accepts two values, the element to be modified and the selected state (true/false).
    - Deep Zoom for WPF: First Release: This first release of the Deep Zoom control has the same source code, binaries and demos as the CodeProject article (http://www.codeproject.com/KB/WPF/DeepZoom.aspx).
    - BlogEngine.NET: BlogEngine.NET 2.0 RC: This is a Release Candidate version for BlogEngine.NET 2.0. The most current, stable version of BlogEngine.NET is version 1.6. Find out more about the BlogEngine.NET 2.0 RC here. If you want to extend or modify BlogEngine.NET, you should download the source code. To get started, be sure to check out our installation documentation and the installation screencast. If you are upgrading from a previous version, please take a look at the Upgrading to BlogEngine.NET 2.0 instructions. As this ...
    - NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.156: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's new: this release adds a feature for aggregating the overall metrics in a folder full of NodeXL workbooks, adds geographical coordinates to the Twitter import features, and fixes a memory-related bug. See the Complete NodeXL Release History for details. Please note: there is a new option in the setup program to install for "Just Me" or "Everyone." Most people...
    - VFPX: FoxBarcode v.0.11: FoxBarcode v.0.11 - released 2010.11.22. FoxBarcode is a 100% Visual FoxPro class that provides a tool for generating images with different bar code symbologies, to be used in VFP forms and reports, or exported to other applications. Its use and distribution is free for the whole Visual FoxPro community. What is new? Added a third parameter to the BarcodeImage() method; fixed some minor bugs. History: FoxBarcode v.0.10 - released 2010.11.19 - 85 downloads. Project page: FoxBarcode.
    - DotNetAge - a lightweight Mvc jQuery CMS: DotNetAge 1.1.0.5: What is new in DotNetAge 1.1.0.5? Document Library features and template added; resolved template issues; improved publishing service performance; OPML support added. What is new in DotNetAge 1.1? D.N.A core updates: improved runtime performance, more stable; the DNA core objects model added; personalization features added that allow users to create a personal website, manage their resources, and store personal data. DynamicUI: fixed the bug where the PageManager could not move page nodes. ...
    - ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.3.1 and demos: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form and Pager; tested on Mozilla, Safari, Chrome, Opera, IE 9b/8/7/6.
    - DotSpatial: DotSpatial 11-21-2010: This release introduces the following: fixed bugs related to dispose, which caused issues when reordering layers in the legend; fixed bugs related to assigning categories where NULL values are in the fields; new fast-acting resize using a bitmap "prediction" of what the final resize content will look like; ImageData.ReadBlock and ImageData.WriteBlock, which allow direct file access for reading or writing a rectangular window, with bitmaps used for holding the values; removed the need to stor...

    New Projects:

    - AC: Just a simple AC project.
    - Availability Manager Project (CMPE290): Availability Manager Project (CMPE290)
    - BCLExtensions: BCLExtensions is an extension method library for .NET 3.5 or higher which offers various extension methods for classes in the .NET Base Class Library (BCL).
    - Command Browser 2.0 for Visual Studio: The Command Browser is a Visual Studio add-in that lets you browse and execute Visual Studio commands on the fly. It's very handy for people that don't tend to use bindings on a daily basis and simply want to learn new commands out of interest.
    - EDM And T4: Generate a WCF service from your edmx.
    - Emperor of the Multiverse: Software Architectures rocks :)
    - KINECT CALL: Kinect Call
    - MultiSafepay for nopCommerce: This is the MultiSafepay shop plugin for nopCommerce. The current version supported is nopCommerce 1.60.
    - MyGadgebydeleted: MyGadgebydeleted
    - NHashcash: NHashcash is a .NET implementation of the Hashcash mechanism for paying for e-mail with burnt CPU cycles.
    - OhYeah!: WP7 note demo application.
    - OMax - Online Market and Exchange for Real Estate: OMax stands for Online Market and eXchange and is a real estate management system focusing on both individuals (brokers, agents) and organizations (real estate companies) looking for online property management solutions.
    - Opus - Silverlight UI Framework: The Opus framework provides a simple way of creating UIs for Silverlight by leveraging your current MVVM structure.
    - Outliner, the edge detection utility: Outliner is a vectorizer of the edges in raster pictures. A new edge-detection algorithm is used. It is developed in Visual C++ 2010 Express Edition, using WinAPI and STL.
    - SharpDropBox Client for .NET: SharpDropBox Client is a DropBox client for (at first) WP7. It caches files/directories in isolated storage, and also uses Sterling DB to store metadata info so that it can easily do hash checks against DropBox, making it download only when there are changes. It can also be used offline.
    - SicunSimpleDiary
    - Silverlight Navigation With Caliburn.Micro: Bring the power of the Caliburn.Micro MVVM framework to Silverlight 4 navigation applications. This sample application for CM shows how it can be used to create MVVM-friendly navigation applications, which formerly required a view-first approach.
    - SilverTrace - a tracer and logger for Silverlight applications: SilverTrace - a tracer and logger for Silverlight applications.
    - StackDeck: StackDeck is a StackOverflow analysis tool written in WPF.
    - SVNHelper -- a small clean tool for Visual Studio: A small tool used to clean your bin directory and temporary directory for the whole solution. I usually use it before committing source to an SVN server.
    - System.Xml.Serialization: Expressions.Compiler is a lightweight xml serialization framework. It is an addition to the standard .NET XML serialization. It is designed to support, out of the box: interface-driven Xml serialization; class-driven Xml serialization; runtime serialization.
    - VoXign: Offer an introductory e-learning course in sign language: self-managed scheduling and pace; learning a lexicon of "first necessity"; an introduction to deaf culture and an approach to the linguistic specificities of LSF (French Sign Language).
    - Weyland: A .NET metaprogramming framework.


  • Oracle Solaris Zones Physical to virtual (P2V)

    - by user939057
    Introduction

    This document describes the process of creating a Solaris 10 image from a physical system and installing it into a virtualized operating system environment using the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability. Using an example and various scenarios, it describes how to combine P2V with other Oracle Solaris features: optimizing performance with Solaris 10 resource management, advanced storage management with Solaris ZFS, and improved operating system observability with Solaris DTrace.

    The most common use for this tool is consolidating existing systems onto virtualization-enabled platforms. Beyond that, the P2V capability can be used for other tasks, for example backing up physical systems and moving them into a virtualized operating system environment hosted on a Disaster Recovery (DR) site, or building an Oracle Solaris 10 image repository with various configurations and different software packages in order to reduce provisioning time.

    Oracle Solaris Zones

    Oracle Solaris Zones is a virtualization and partitioning technology supported on Oracle Sun servers powered by SPARC and Intel processors. This technology provides an isolated and secure environment for running applications. A zone is a virtualized operating system environment created within a single instance of the Solaris 10 Operating System. Each virtual system is called a zone and runs a unique and distinct copy of the Solaris 10 operating system.

    Oracle Solaris Zones Physical-to-Virtual (P2V)

    A new feature for Solaris 10 9/10, P2V provides the ability to build a Solaris 10 image from a physical system and migrate it into a virtualized operating system environment. There are three main steps when using this tool:

    1. Image creation on the source system; this image includes the operating system and, optionally, the software we want to include within the image.
    2. Preparing the target system by configuring a new zone that will host the new image.
    3. Image installation on the target system using the image created in step 1.

    The host where the image is built is referred to as the source system, and the host where the image is installed is referred to as the target system.

    Benefits of Oracle Solaris Zones Physical-to-Virtual (P2V)

    Here are some benefits of this new feature:

    - Simple: easy build process using Oracle Solaris 10 built-in commands.
    - Robust: based on Oracle Solaris Zones, a robust and well-known virtualization technology.
    - Flexible: supports migration from V-series servers to T- or M-series systems. For the latest server information, refer to the Sun Servers web page.

    Prerequisites

    The target Oracle Solaris system should be running the latest recommended patch cluster, and the minimum Solaris version on the target system is Solaris 10 9/10. Refer to the latest Administration Guide for Oracle Solaris for a complete procedure on how to download and install Oracle Solaris.

    NOTE: If the source system used to build the image runs an older version than the target system, then during the process the operating system will be upgraded to Solaris 10 9/10 (update on attach).
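    Before starting, it is worth checking both machines against the version prerequisite. A minimal sketch (the release strings below are illustrative; /etc/release shows the exact update on your systems):

        Source # cat /etc/release
                 Oracle Solaris 10 9/10 s10x_u9wos_14a X86
        Target # cat /etc/release
                 Oracle Solaris 10 9/10 s10x_u9wos_14a X86

    The target must be at Solaris 10 9/10 or later; if the source is older, the image will be brought up to the target's release during the attach, as noted above.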
We can create the image on the local file system and then transfer it to the target machine, or build it on NFS shared storage and mount the NFS file system from the target machine. Optionally, before creating the image, we complete the installation of any software that we want to include with the Solaris 10 image.

An image is created by using the flarcreate command:

Source # flarcreate -S -n s10-system -L cpio /var/tmp/solaris_10_up9.flar

The command does the following:
-S specifies that we skip the disk space check and do not write archive size data to the archive (faster).
-n specifies the image name.
-L specifies the archive format (i.e., cpio).

Optionally, we can add descriptions to the archive identification section, which can help to identify the archive later:

Source # flarcreate -S -n s10-system -e "Oracle Solaris with Oracle DB 10.2.0.4" -a "oracle" -L cpio /var/tmp/solaris_10_up9.flar

You can see an example of the archive identification section in Appendix A: archive identification section. We can compress the flar image using the gzip command, or by adding the -c option to the flarcreate command:

Source # gzip /var/tmp/solaris_10_up9.flar

An md5 checksum can be created for the image in order to ensure no data tampering:

Source # digest -v -a md5 /var/tmp/solaris_10_up9.flar

Moving the image onto the target system

If we created the image on the local file system, we need to transfer the flar archive from the source machine to the target machine:

Source # scp /var/tmp/solaris_10_up9.flar target:/var/tmp

Configuring the zone on the target system

After copying the software to the target machine, we need to configure a new zone that will host the new image. To install the new zone on the target machine, first we need to configure the zone (for the full zone creation options see the following link: http://docs.oracle.com/cd/E18752_01/html/817-1592/index.html ).

ZFS integration

A flash archive can be created on a system that is running a UFS or a ZFS root file system. NOTE: If you create a Solaris Flash archive of a Solaris 10 system that has a ZFS root, then by default the flar will actually be a ZFS send stream, which can be used to recreate the root pool. This image cannot be used to install a zone. You must create the flar with an explicit cpio or pax archive when the system has a ZFS root. Use the flarcreate command with the -L archiver option, specifying cpio or pax as the method to archive the files.
(For example, see Step 1 in the previous section.) Optionally, on the target system you can create the zone root folder on a ZFS file system in order to benefit from the ZFS features (clones, snapshots, etc...):

Target # zpool create zones c2t2d0

Create the zone root folder:

Target # chmod 700 /zones
Target # zonecfg -z solaris10-up9-zone
solaris10-up9-zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:solaris10-up9-zone> create
zonecfg:solaris10-up9-zone> set zonepath=/zones
zonecfg:solaris10-up9-zone> set autoboot=true
zonecfg:solaris10-up9-zone> add net
zonecfg:solaris10-up9-zone:net> set address=192.168.0.1
zonecfg:solaris10-up9-zone:net> set physical=nxge0
zonecfg:solaris10-up9-zone:net> end
zonecfg:solaris10-up9-zone> verify
zonecfg:solaris10-up9-zone> commit
zonecfg:solaris10-up9-zone> exit

Installing the zone on the target system using the image

Install the configured zone solaris10-up9-zone by using the zoneadm command with the install -a option and the path to the archive. The following example shows how to install the image and sys-unconfig the zone:

Target # zoneadm -z solaris10-up9-zone install -u -a /var/tmp/solaris_10_up9.flar
Log File: /var/tmp/solaris10-up9-zone.install_log.AJaGve
Installing: This may take several minutes...

The following example shows how we can preserve the system identity:

Target # zoneadm -z solaris10-up9-zone install -p -a /var/tmp/solaris_10_up9.flar

Resource management

Some applications are sensitive to the number of CPUs on the target zone. You need to match the number of CPUs on the zone using the zonecfg command:

zonecfg:solaris10-up9-zone> add dedicated-cpu
zonecfg:solaris10-up9-zone> set ncpus=16

DTrace integration

Some applications might need to be analyzed using DTrace on the target zone. You can add DTrace support on the zone using the zonecfg command:

zonecfg:solaris10-up9-zone> set limitpriv="default,dtrace_proc,dtrace_user"

Exclusive IP stack

An Oracle Solaris Container running in Oracle Solaris 10 can have a shared IP stack with the global zone, or it can have an exclusive IP stack (which was released in Oracle Solaris 10 8/07). An exclusive IP stack provides a complete, tunable, manageable and independent networking stack to each zone. A zone with an exclusive IP stack can configure Scalable TCP (STCP), IP routing, IP multipathing, or IPsec. For an example of how to configure an Oracle Solaris zone with an exclusive IP stack, see the following:

zonecfg:solaris10-up9-zone> set ip-type=exclusive
zonecfg:solaris10-up9-zone> add net
zonecfg:solaris10-up9-zone> set physical=nxge0

When the installation completes, use the zoneadm list -i -v options to list the installed zones and verify the status:

Target # zoneadm list -i -v

See that the new zone's status is installed:

ID NAME               STATUS     PATH    BRAND   IP
0  global             running    /       native  shared
-  solaris10-up9-zone installed  /zones  native  shared

Now boot the zone:

Target # zoneadm -z solaris10-up9-zone boot

We need to log in to the zone in order to complete the zone setup, or insert a sysidcfg file before booting the zone for the first time; see the example sysidcfg file in Appendix B: sysidcfg file section.

Target # zlogin -C solaris10-up9-zone

Troubleshooting

If an installation fails, review the log file. On success, the log file is in /var/log inside the zone. On failure, the log file is in /var/tmp in the global zone. If a zone installation is interrupted or fails, the zone is left in the incomplete state.
Use uninstall -F to reset the zone to the configured state:

Target # zoneadm -z solaris10-up9-zone uninstall -F
Target # zonecfg -z solaris10-up9-zone delete -F

Conclusion

The Oracle Solaris Zones P2V tool provides the flexibility to build pre-configured images with different software configurations for faster deployment and server consolidation. In this document, I demonstrated how to build and install images and how to integrate the images with other Oracle Solaris features like ZFS and DTrace.

Appendix A: archive identification section

We can use the head -n 20 /var/tmp/solaris_10_up9.flar command in order to access the identification section that contains the detailed description:

Target # head -n 20 /var/tmp/solaris_10_up9.flar
FlAsH-aRcHiVe-2.0
section_begin=identification
archive_id=e4469ee97c3f30699d608b20a36011be
files_archived_method=cpio
creation_date=20100901160827
creation_master=mdet5140-1
content_name=s10-system
creation_node=mdet5140-1
creation_hardware_class=sun4v
creation_platform=SUNW,T5140
creation_processor=sparc
creation_release=5.10
creation_os_name=SunOS
creation_os_version=Generic_142909-16
files_compressed_method=none
content_architectures=sun4v
type=FULL
section_end=identification
section_begin=predeployment
begin 755 predeployment.cpio.Z

Appendix B: sysidcfg file section

Target # cat sysidcfg
system_locale=C
timezone=US/Pacific
terminal=xterms
security_policy=NONE
root_password=HsABA7Dt/0sXX
timeserver=localhost
name_service=NONE
network_interface=primary {
hostname=solaris10-up9-zone
netmask=255.255.255.0
protocol_ipv6=no
default_route=192.168.0.1
}
name_service=NONE
nfs4_domain=dynamic

We need to copy this file before booting the zone:

Target # cp sysidcfg /zones/solaris10-up9-zone/root/etc/
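Condensed, a minimal end-to-end P2V run looks like the sketch below. This is only a recap of the commands shown above under stated assumptions (the same source/target hosts, archive path, and zone name used in the examples), not a definitive procedure; adapt the names to your systems.

# Sketch: minimal P2V flow, assuming the example names used in this article.
# On the source system: create a cpio-format flash archive and copy it over.
source # flarcreate -S -n s10-system -L cpio /var/tmp/solaris_10_up9.flar
source # scp /var/tmp/solaris_10_up9.flar target:/var/tmp

# On the target system: configure a zone to host the image
# (zonecfg also accepts its subcommands as a single command string).
target # zonecfg -z solaris10-up9-zone 'create; set zonepath=/zones; set autoboot=true; commit'

# Install the zone from the archive, discarding the old system identity (-u).
target # zoneadm -z solaris10-up9-zone install -u -a /var/tmp/solaris_10_up9.flar

# Boot and finish the configuration on the zone console.
target # zoneadm -z solaris10-up9-zone boot
target # zlogin -C solaris10-up9-zone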


  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    originally published in CoDe Magazine Editorial Microsoft recently released the first CTP of a new development environment called WebMatrix, which along with some of its supporting technologies are squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers. Let’s face it: ASP.NET development isn’t exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it’s not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft’s attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server. WebMatrix is made up of several components and technologies: IIS Developer Express IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that’s been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server such as the inability to serve custom ISAPI extensions (i.e., things like PHP or ASP classic for example), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set providing much better compatibility between development and live deployment scenarios. SQL Server Compact 4.0 Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new-it’s been around for a few years, but it’s been severely hobbled in the past by terrible tool support and the inability to support more than a single connection in Microsoft’s attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don’t have to install a special system configuration to run SQL Compact as it is a drop-in database engine: Copy the small assembly into your BIN folder (or from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests. Additionally WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill. 
Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature.

ASP.NET Razor View Engine

What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There's no support for a "page model" or any of the other Web Forms features of the full-page framework, but just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here's a very simple example of what Razor markup looks like along with some comment annotations:

<!DOCTYPE html>
<html>
    <head>
        <title></title>
    </head>
    <body>
    <h1>Razor Test</h1>

    <!-- simple expressions -->
    @DateTime.Now
    <hr />

    <!-- method expressions -->
    @DateTime.Now.ToString("T")

    <!-- code blocks -->
    @{
        List<string> names = new List<string>();
        names.Add("Rick");
        names.Add("Markus");
        names.Add("Claudio");
        names.Add("Kevin");
    }

    <!-- structured block statements -->
    <ul>
    @foreach(string name in names){
            <li>@name</li>
    }
    </ul>

    <!-- Conditional code -->
    @if(true) {
        <!-- Literal Text embedding in code -->
        <text>
        true
        </text>;
    }
    else
    {
        <!-- Literal Text embedding in code -->
        <text>
        false
        </text>;
    }
    </body>
</html>

Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a "page" becomes a code file, with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn't look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine minus the overhead of the large Page object model. However, there are differences: Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC's view implementation as well as many of the helper classes found in MVC. It's not hard to guess the motivation for this sort of view engine: For beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content.
The syntax is easier to read and grok and much shorter to type than ASP.NET alligator tags (<% %>), and it's also easier to understand aesthetically what's happening in the markup code. Razor also is a better fit for Microsoft's vision of ASP.NET MVC: It's a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn't carry all the features and object model of Web Forms with it, and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well - it makes it much easier to create script- or meta-driven output generators for many types of applications, from code/screen generators to simple form letters to data-merging applications with user customizability. For me personally this is a very useful side effect, and who knows, maybe Microsoft will actually standardize their scripting engines (die T4 die!) on this engine. Razor also better fits the "view-based" approach where the view is supposed to be mostly a visual representation that doesn't hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn't be surprised if Razor becomes the new standard view engine for MVC in the future - and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that's not working currently unless you manually configure the script mappings and add the appropriate assemblies. It's possible to do it, but it's probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages.

WebMatrix Development Environment

To tie all of these three technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There's no debugging support for code at this time. Note that Razor pages don't require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that's needed to see changes while testing an application locally. It's essentially using the auto-compiling Web Project that was introduced with .NET 2.0. All code is compiled during run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with wizards taking you through installation of tools, dependencies, and configuration of the database as needed.
WebMatrix leverages the Web Platform installer to pull the pieces down from websites in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes and fill in a few simple configuration options and you end up with a running application that's ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool also can handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step.

Simplified Database Access

The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods:

@{
    var db = Database.OpenFile("FirstApp.sdf");
    string sql = "select * from customers where Id > @0";
}
<ul>
@foreach(var row in db.Query(sql,1)){
        <li>@row.FirstName @row.LastName</li>
}
</ul>

The query function takes a SQL statement plus any number of positional (@0, @1, etc.) SQL parameters by simple values. The result is returned as a collection of rows which in turn have a row object with dynamic properties for each of the columns, giving easy (though untyped) access to each of the fields. Likewise, Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter passing schemes. Note these queries use string-based queries rather than LINQ or Entity Framework's strongly typed LINQ queries. While this may seem like a step back, it's also in line with the expectations of non-.NET script developers who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why something like this wasn't included in .NET from the beginning, instead of Microsoft making developers build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this is a common approach in scripting languages. This type of syntax, using simple, static data object methods to perform simple data tasks with one line of code, is common in scripting languages and is a good match for folks working in PHP/Python, etc. Seems like Microsoft has taken great advantage of .NET 4.0's dynamic typing to provide this sort of interface for row iteration where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files - I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn't accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code or even LINQ or Entity Framework models that are created outside of WebMatrix in separate assemblies as required.

The good, the bad, the obnoxious - it's still .NET

The beauty (or curse, depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it's still .NET. Although the syntax may look foreign, it's still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder.
In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a set of HTML Helpers. If you've used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy - there's a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference

Who needs WebMatrix? Uhm… good question

Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity in providing a minimal development environment and an easy-to-use script engine/language that makes it easy to get started with. There's also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice, and something that would be nice to eventually see in Visual Studio as well. The question is whether this is too little too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for other simpler platforms like PHP or Python, which are catering to the down-and-dirty developer. Microsoft will be hard pressed to win those folks - and other hardcore PHP developers - back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it's still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC-style helpers Microsoft provides are a good step in the right direction, but I suspect it's not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers are trying to make .NET more accessible, but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers you still are going to have to write some C# or VB or other .NET code. If the target is a hobbyist/amateur/non-programmer, the learning curve isn't made any easier by WebMatrix; it's just been shifted a tad bit further along in your development endeavor, to the point where you run out of canned components that are supplied either by Microsoft or the community. The database helpers are interesting, and actually I've heard a lot of discussion from various developers who've been resisting .NET for a really long time perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it's practically trivial to create helpers like these or pick them up from countless libraries available), but there it is. It also shows that there are plenty of developers out there who are more interested in 'getting stuff done' easily than necessarily following the latest and greatest practices, which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big life-changing issues of development, rather than the bread-and-butter scenarios that many developers are interested in to get their work accomplished.
And that in the end may be WebMatrix's main raison d'être: To bring some focus back at Microsoft that simpler and more high-level solutions are actually needed to appeal to the non-high-end developers, as well as providing the necessary tools for the high-end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it really can be a tool that a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components) which would be a great addition for Visual Studio as well, the rest of the development environment just feels like crippleware with required functionality missing - especially debugging and IntelliSense, but also general editor support. It's not clear whether these are missing because the product is still in an early alpha release, or whether it's simply designed that way to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support, and WebMatrix just has that left-out feeling to it. If anything, WebMatrix's technology pieces (which are really independent of the WebMatrix product) are what are interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios, and SQL Compact 4.0 seems to address a lot of concerns that people have had and have complained about for some time with previous SQL Compact implementations. By far the most interesting and useful technology, though, seems to be the Razor view engine, for its lightweight implementation and its decoupling from the ASP.NET/HTTP pipeline to provide a standalone scripting/view engine that is pluggable. The first winner of this is going to be ASP.NET MVC, which can now have a cleaner view model that isn't inconsistent due to the baggage of non-implemented WebForms features that don't work in MVC. But I expect that Razor will end up in many other applications as a scripting and code generation engine eventually. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there's some synergy flowing both ways between Razor and MVC. As an existing ASP.NET developer who's already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn't give you anything that you want. The tools provided are minimal and provide nothing that you can't get in Visual Studio today, except the minimal Razor syntax highlighting, so there's little need to take a step back. With Visual Studio integration coming later, there's little reason to look at WebMatrix for tooling. It's good to see that Microsoft is giving some thought to the ease of use of .NET as a platform. For so many years, we've been piling on more and more new features without trying to take a step back and see how complicated the development/configuration/deployment process has become. Sometimes it's good to take a step - or several steps - back and take another look and realize just how far we've come.
WebMatrix is one of those reminders, and one that likely will result in some positive changes on the platform as a whole. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, IIS7


  • Building packages and publishing them to an IPS repository in Solaris 11

    - by Roman Ivanov
    Some time ago I was experimenting with Oracle Solaris 11 and with automating tasks around Oracle VM Server for SPARC (aka LDoms). I write in Python/GTK/NetBeans. For these experiments I needed the pylibssh2 package, so that I could connect from Python over ssh to remote machines and run commands there. I downloaded the sources of pylibssh2 and libssh2, built and installed them, and then decided it would be better to wrap everything up as Solaris IPS packages so that the libraries could be reinstalled easily later. The build itself is the usual routine of running configure and make as described in the README ;) The resulting libraries live outside the default linker search path, so LD_LIBRARY_PATH has to be extended; to avoid doing that in every session, I set it in /etc/profile. Below are the steps I used to package the result.

Step 0. Create a local package repository and register it as a publisher. This only needs to be done once; afterwards, new packages and new versions can be published into the same repository.

# zfs create rpool/export/repo
# zfs set atime=off rpool/export/repo
# chown roivanov:staff /export/repo
$ pkgrepo create /export/repo
$ pkgrepo set -s /export/repo publisher/prefix=tools
# pkg set-publisher -g /export/repo tools

Step 1. Build the software. I build each project under $HOME/Projects/IPS/<project name>, but that is not mandatory. Each build is done as an ordinary user in its own directory, so that it does not disturb the installed system. The compiler can be SunStudio cc or gcc.

$ export PKGREPO=/export/repo
$ mkdir -p $HOME/Projects/IPS/libssh2
$ cd $HOME/Projects/IPS/libssh2
$ export PKGROOT=`pwd`
$ unset LDFLAGS
$ PATH=$PATH:/opt/solarisstudio12.3/bin
$ export CC=cc
or
$ export CC=gcc

During the build, the resulting (compiled) files are installed into ../root instead of /usr:

$ export DESTDIR=$PKGROOT/root

The ../root directory will end up containing the installed files, laid out the same way as they would be under /usr.

$ [ -d root ] && rm -rf root
$ cp ~/??????????/libssh2-1.4.2.tar.gz .
$ tar xzf libssh2-1.4.2.tar.gz
$ cd libssh2-1.4.2

If the build needs libraries from /usr/local/lib, set LDFLAGS instead of relying on LD_LIBRARY_PATH:

$ export LDFLAGS="-L/usr/local/lib -R/usr/local/lib"
$ ./configure
$ gmake && gmake install
$ cd ..

The Python part of the package (pylibssh2) is installed with the following command:

$ python setup.py install --root=../root

Step 2. Create the package manifest:

$ cat > MANIFEST.files.mog << EOF
set name=pkg.fmri value=library/[email protected],0.5.11-11
set name=pkg.description \
    value="libssh2 is a client-side C library implementing the SSH2 protocol"
set name=pkg.summary value="libssh2 library"
set name=maintainer value="First Last <[email protected]>"
set name=info.upstream-url value=http://www.libssh2.org/
set name=variant.arch value=$(ARCH)
license ../libssh2-1.4.2/COPYING license=BSD
<transform dir path=usr$ -> edit group bin sys>
EOF

Here library/libssh2 is the package name, 1.4.2 is the version of the software, 0.5.11 is the OS build version, and 11 is the package revision. description is the full description and summary is the short description of the package. variant.arch records the architecture the package is built for. license points to the license file. The transform directive makes sure that the /usr directory in the resulting manifest gets the correct group attribute.

Generate the list of files that go into the package:
$ pkgsend generate root > MANIFEST.files.1

Substitute the architecture variable and apply the transforms from the .mog file:

$ pkgmogrify -DARCH=`uname -p` MANIFEST.files.1 MANIFEST.files.mog > MANIFEST.files.2

Generate the list of dependencies:

$ pkgdepend generate -md root MANIFEST.files.2 | pkgfmt > MANIFEST.files.3

Resolve the dependencies to package names. This step can take several minutes:

$ pkgdepend resolve -m MANIFEST.files.3

The output is the file MANIFEST.files.3.res with the resolved dependencies. The next step can strictly speaking be skipped, but it is better to verify the manifest with pkglint:

$ pkglint -c ../lint-cache -r http://pkg.oracle.com/solaris/release/ MANIFEST.files.3.res
$ pkglint -c ../lint-cache-local -r /export/repo MANIFEST.files.3.res

And finally, publish the package:

$ pkgsend publish -s $PKGREPO -d `pwd`/root MANIFEST.files.3.res

Checking the package in the repository. To list the packages that have been published:

$ pkgrepo list -s /export/repo/

To remove a package from the repository:

$ pkgrepo remove -s /export/repo/ [email protected],0.5.11-8:*

To view information about a package in the repository:

$ pkg info -r libssh2

To see exactly what would happen during installation, without installing:

$ sudo pkg install -nv libssh2

To install the package:

$ sudo pkg install libssh2

To update packages:

$ sudo pkg refresh
$ sudo pkg update

Useful links:
[1] How to Create and Publish Packages to an IPS Repository on Oracle Solaris 11, http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-097-create-pkg-ips-524496.html
[2] Publishing your own packages with IPS - getting started. https://blogs.oracle.com/barts/entry/publishing_your_own_packages_with
[3] How to create your own IPS packages (Ghost Busting) http://blogs.oracle.com/cwb/entry/how_to_create_your_own
[4] Introduction to IPS for Developers http://www.oracle.com/technetwork/systems/hands-on-labs/introduction-to-ips-1534596.html
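Condensed, the whole packaging loop above fits in a handful of commands. The sketch below is only a recap under the same assumptions as the article (repository at /export/repo, build root in ./root, the MANIFEST.files.mog transform file already written); it is a template to adapt, not a definitive recipe.

# Sketch: minimal manifest-to-publish loop, reusing the names from the steps above.
$ pkgsend generate root > m.1                                # list the package files
$ pkgmogrify -DARCH=`uname -p` m.1 MANIFEST.files.mog > m.2  # apply metadata/transforms
$ pkgdepend generate -md root m.2 | pkgfmt > m.3             # discover dependencies
$ pkgdepend resolve -m m.3                                   # resolve them to packages (writes m.3.res)
$ pkgsend publish -s /export/repo -d `pwd`/root m.3.res      # publish to the local repo
$ pkg info -r libssh2                                        # confirm the package is visible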


  • GRUB doesn't recognize partitions on one hard disk

    - by knizz
    I have a dual-boot computer with Windows Vista (on hd0) and Ubuntu 9.10. The bootloader is GRUB, and the Windows bootloader lets me decide between Vista and the Ubuntu installation (broken Wubi). But now (I don't know why that changed) I can't start the Windows bootloader anymore. I tried "ls" at the grub prompt and it gave me a list like: (hd0) (hd1) (hd1,0) (hd1,1) (hd1,2) ... (fd0) It recognizes all partitions of hd1 (the Ubuntu hard disk) but not of hd0 (the Windows disk). Why? Here is the result of the "boot info script" for the technical details:

Boot Info Script 0.55 dated February 15th, 2010

============================= Boot Info Summary: ==============================

=> Grub 2 is installed in the MBR of /dev/sda and looks for (UUID=a7c510e3-2399-437b-ab92-fa609e48d63f)/boot/grub.
=> No boot loader is installed in the MBR of /dev/sdb

sda1: _________________________________________________________________________
    File system: ntfs
    Boot sector type: Windows Vista/7
    Boot sector info: No errors found in the Boot Parameter Block.
    Operating System: Windows Vista
    Boot files/dirs: /bootmgr /Boot/BCD /Windows/System32/winload.exe /wubildr.mbr /wubildr

sda2: _________________________________________________________________________
    File system: ntfs
    Boot sector type: Windows Vista/7
    Boot sector info: No errors found in the Boot Parameter Block.
    Operating System:
    Boot files/dirs:

sdb1: _________________________________________________________________________
    File system:
    Boot sector type: Unknown
    Boot sector info:
    Mounting failed: mount: unbekannter Dateisystemtyp „“

sdb2: _________________________________________________________________________
    File system: ntfs
    Boot sector type: Windows Vista/7
    Boot sector info: No errors found in the Boot Parameter Block.
    Operating System:
    Boot files/dirs:

sdb3: _________________________________________________________________________
    File system: Bios Boot Partition
    Boot sector type: -
    Boot sector info:

sdb4: _________________________________________________________________________
    File system: ext4
    Boot sector type: -
    Boot sector info:
    Operating System: Ubuntu 9.10
    Boot files/dirs: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img

sdb5: _________________________________________________________________________
    File system: swap
    Boot sector type: -
    Boot sector info:

=========================== Drive/Partition Info: =============================

Drive: sda ___________________ _____________________________________________________
Platte /dev/sda: 640.1 GByte, 640135028736 Byte
255 Köpfe, 63 Sektoren/Spuren, 77825 Zylinder, zusammen 1250263728 Sektoren
Einheiten = Sektoren von 1 × 512 = 512 Bytes
Disk identifier: 0x52554d66

Partition  Boot  Start        End            Size         Id  System
/dev/sda1  *     2,048        307,202,047    307,200,000  7   HPFS/NTFS
/dev/sda2        307,202,048  1,250,258,943  943,056,896  7   HPFS/NTFS

Drive: sdb ___________________ _____________________________________________________
Platte /dev/sdb: 640.1 GByte, 640135028736 Byte
255 Köpfe, 63 Sektoren/Spuren, 77825 Zylinder, zusammen 1250263728 Sektoren
Einheiten = Sektoren von 1 × 512 = 512 Bytes
Disk identifier: 0x00000000

Partition  Boot  Start  End            Size           Id  System
/dev/sdb1        1      1,250,263,727  1,250,263,727  ee  GPT

GUID Partition Table detected.
Partition Start End Size System /dev/sdb1 34 262,177 262,144 Microsoft Windows /dev/sdb2 262,178 1,131,253,933 1,130,991,756 Linux or Data /dev/sdb3 1,131,253,934 1,131,255,887 1,954 Bios Boot Partition /dev/sdb4 1,131,255,888 1,245,312,528 114,056,641 Linux or Data /dev/sdb5 1,245,312,529 1,250,263,694 4,951,166 Linux Swap blkid -c /dev/null: ____________________________________________________________ Device UUID TYPE LABEL /dev/sda1 AE1440441440122F ntfs /dev/sda2 3AE66E4DE66E0A09 ntfs data /dev/sdb2 5419D16119DAA4DE ntfs LaufwerkD /dev/sdb4 a7c510e3-2399-437b-ab92-fa609e48d63f ext4 /dev/sdb5 60a0143a-e01b-450a-bbd1-f22059e47b65 swap ============================ "mount | grep ^/dev output: =========================== Device Mount_Point Type Options /dev/sdb4 / ext4 (rw,errors=remount-ro) =========================== sdb4/boot/grub/grub.cfg: =========================== # # DO NOT EDIT THIS FILE # # It is automatically generated by /usr/sbin/grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s /boot/grub/grubenv ]; then have_grubenv=true load_env fi set default="0" if [ ${prev_saved_entry} ]; then saved_entry=${prev_saved_entry} save_env saved_entry prev_saved_entry= save_env prev_saved_entry fi insmod ext2 set root=(hd1,4) search --no-floppy --fs-uuid --set a7c510e3-2399-437b-ab92-fa609e48d63f if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=640x480 insmod gfxterm insmod vbe if terminal_output gfxterm ; then true ; else # For backward compatibility with versions of terminal.mod that don't # understand terminal_output terminal gfxterm fi fi if [ ${recordfail} = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/white ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### menuentry "Ubuntu, Linux 2.6.31-20-generic" { recordfail=1 if [ -n ${have_grubenv} ]; then save_env recordfail; fi set quiet=1 insmod ext2 set root=(hd1,4) search --no-floppy --fs-uuid --set a7c510e3-2399-437b-ab92-fa609e48d63f linux /boot/vmlinuz-2.6.31-20-generic root=UUID=a7c510e3-2399-437b-ab92-fa609e48d63f ro quiet splash initrd /boot/initrd.img-2.6.31-20-generic } menuentry "Ubuntu, Linux 2.6.31-20-generic (recovery mode)" { recordfail=1 if [ -n ${have_grubenv} ]; then save_env recordfail; fi insmod ext2 set root=(hd1,4) search --no-floppy --fs-uuid --set a7c510e3-2399-437b-ab92-fa609e48d63f linux /boot/vmlinuz-2.6.31-20-generic root=UUID=a7c510e3-2399-437b-ab92-fa609e48d63f ro single initrd /boot/initrd.img-2.6.31-20-generic } menuentry "Ubuntu, Linux 2.6.31-14-generic" { recordfail=1 if [ -n ${have_grubenv} ]; then save_env recordfail; fi set quiet=1 insmod ext2 set root=(hd1,4) search --no-floppy --fs-uuid --set a7c510e3-2399-437b-ab92-fa609e48d63f linux /boot/vmlinuz-2.6.31-14-generic root=UUID=a7c510e3-2399-437b-ab92-fa609e48d63f ro quiet splash initrd /boot/initrd.img-2.6.31-14-generic } menuentry "Ubuntu, Linux 2.6.31-14-generic (recovery mode)" { recordfail=1 if [ -n ${have_grubenv} ]; then save_env recordfail; fi insmod ext2 set root=(hd1,4) search --no-floppy --fs-uuid --set a7c510e3-2399-437b-ab92-fa609e48d63f linux /boot/vmlinuz-2.6.31-14-generic root=UUID=a7c510e3-2399-437b-ab92-fa609e48d63f ro single initrd /boot/initrd.img-2.6.31-14-generic } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { linux16 
/boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Windows Vista (loader) (on /dev/sda1)" { insmod ntfs set root=(hd0,1) search --no-floppy --fs-uuid --set ae1440441440122f chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### =============================== sdb4/etc/fstab: =============================== # /etc/fstab: static file system information. # # Use 'blkid -o value -s UUID' to print the universally unique identifier # for a device; this may be used with UUID= as a more robust way to name # devices that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc defaults 0 0 # / was on /dev/sdb4 during installation UUID=a7c510e3-2399-437b-ab92-fa609e48d63f / ext4 errors=remount-ro 0 1 # swap was on /dev/sdb5 during installation UUID=60a0143a-e01b-450a-bbd1-f22059e47b65 none swap sw 0 0 /dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0 /dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0 =================== sdb4: Location of files loaded by Grub: =================== 583.8GB: boot/grub/core.img 583.8GB: boot/grub/grub.cfg 579.7GB: boot/initrd.img-2.6.31-14-generic 580.0GB: boot/initrd.img-2.6.31-20-generic 579.7GB: boot/vmlinuz-2.6.31-14-generic 579.8GB: boot/vmlinuz-2.6.31-20-generic 580.0GB: initrd.img 579.7GB: initrd.img.old 579.8GB: vmlinuz 579.7GB: vmlinuz.old =========================== Unknown MBRs/Boot Sectors/etc ======================= Unknown BootLoader on sdb1 00000000 54 34 dc 3b 8b ff 6c fa 3e 59 3d 24 25 af 5f 9b |T4.;..l.>Y=$%._.| 00000010 72 f8 36 3d 56 30 22 fd c6 08 5e 39 7f dc 29 48 |r.6=V0"...^9..)H| 00000020 48 e5 24 52 77 b0 fc 64 b6 ce 48 c3 07 ce b5 81 |H.$Rw..d..H.....| 00000030 06 68 60 4f 6e fb 83 92 df 3a 54 b9 df 21 2a cd |.h`On....:T..!*.| 00000040 1e 2f e2 49 fe cf 81 2d 52 17 1a 4e 66 b4 f3 f0 |./.I...-R..Nf...| 00000050 41 25 e3 96 26 28 fe 19 61 72 75 f8 40 a3 b7 ef |A%..&(..aru.@...| 00000060 5f 79 dc cb 28 44 44 7c 9b 9a 7b 6c 4b 4b 60 0f |_y..(DD|..{lKK`.| 00000070 a9 97 87 bc 85 9f db bb d2 1a 88 9f aa 49 18 d5 |.............I..| 00000080 92 2d db 7e fe f7 8d 7a 18 c0 33 c5 bd 7a 46 07 |.-.~...z..3..zF.| 00000090 c8 27 13 66 94 49 62 9f bc 99 56 55 25 fb 94 a9 |.'.f.Ib...VU%...| 000000a0 3f b2 a7 0a 87 d0 a4 4e 51 f1 09 02 c4 29 eb ff |?......NQ....)..| 000000b0 26 3b 51 3e 5a 0c db ee a6 57 a7 c3 ba a1 74 90 |&;Q>Z....W....t.| 000000c0 ee 70 08 18 cc b8 d0 22 ce 96 c7 cb 68 40 98 20 |.p....."....h@. 
| 000000d0 49 3d 07 ec df d1 8d cf 19 bc 42 90 70 24 01 b4 |I=........B.p$..| 000000e0 28 cf c6 50 d3 95 5a 1b 18 15 33 c7 b2 a8 95 92 |(..P..Z...3.....| 000000f0 bb 93 fe 18 2b 81 c1 6b 9c 30 f1 65 50 6a 80 3d |....+..k.0.ePj.=| 00000100 74 37 a8 59 a6 51 8a 63 b6 d8 16 9f a9 47 2a 7c |t7.Y.Q.c.....G*|| 00000110 04 a7 fe 69 47 02 bf e9 b7 1b 7a ea 60 5c 3c 53 |...iG.....z.`\<S| 00000120 5b 10 78 dc 4d d2 a8 22 30 45 37 fb 56 06 9f 06 |[.x.M.."0E7.V...| 00000130 aa df cf 87 3a 3e cf 72 f2 e5 a6 c6 aa e2 7c 1c |....:>.r......|.| 00000140 64 c2 fc 80 ce 02 fc 7f 0f c6 60 81 bf cd 3b 5a |d.........`...;Z| 00000150 37 a5 38 1b 0c 1b 39 2e d6 f6 3d a2 36 e5 87 c3 |7.8...9...=.6...| 00000160 17 b5 fd ee 33 c7 ce a3 d9 c2 57 dc ee 85 48 9d |....3.....W...H.| 00000170 33 60 02 cd c5 83 44 44 ea b6 07 25 0a 4b a6 6e |3`....DD...%.K.n| 00000180 fc 51 42 cd 84 0b 65 b6 19 a1 e5 b2 eb 14 0c fa |.QB...e.........| 00000190 24 77 f5 44 6e 5d 39 dd b6 8e cc f8 30 fe 21 46 |$w.Dn]9.....0.!F| 000001a0 9c ff 95 c6 c7 b5 0a df 54 ca d2 ac bc 64 d0 97 |........T....d..| 000001b0 94 54 d9 29 0f 91 60 20 c3 e4 53 c2 b0 e4 40 72 |.T.)..` ..S...@r| 000001c0 7e 25 bc 81 06 ad 05 46 14 a7 e6 71 6b 5c db 9c |~%.....F...qk\..| 000001d0 0a 5e 76 23 ae 06 01 36 98 21 65 2c 90 e7 4b 1a |.^v#...6.!e,..K.| 000001e0 2a 2d 80 a5 48 db 9e 14 e0 9f e9 aa 00 e3 77 32 |*-..H.........w2| 000001f0 0f fd 94 db 55 a6 64 46 be ae ca de da ee 89 68 |....U.dF.......h| 00000200 =======Devices which don't seem to have a corresponding hard drive============== sdc sdd sde
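If you just need to get into Vista while investigating, one low-risk thing to try from the GRUB prompt is to chainload the Vista partition by hand, using the same commands GRUB's own os-prober entry (shown in the grub.cfg dump above) would run. This is a diagnostic sketch, not a fix; the device name and UUID are the ones reported by the boot info script:

grub> insmod ntfs
grub> set root=(hd0,1)
grub> search --no-floppy --fs-uuid --set ae1440441440122f
grub> chainloader +1
grub> boot

If "set root=(hd0,1)" fails because GRUB still cannot see the hd0 partitions, that narrows the problem down to GRUB's view of that disk rather than to the Windows boot files.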


  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
We introduced a new ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes them, using material originally written during their development, the PSARC case, and the Solaris Linker and Libraries Manual. ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file. There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out: customers with very large 32-bit objects who are not ready or able to make the transition to 64-bits. We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited, but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64-bits. However, consider environments where: Hundreds of users may be executing the code on large shared systems (32-bit code uses less memory and bus bandwidth, and on sparc runs just as fast as 64-bit code otherwise). Complex, finely tuned code, where the original authors may no longer be available. Critical production code that was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue. Users in these risk-averse and/or high-scale categories have good reasons to push 32-bit objects to the limit before moving on. Ancillary objects offer these users a longer runway.

Design

The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them. The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files). A single section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other. Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable sections, notably the symbol table, are copied into both objects. As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SHF_SUNW_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary. If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature.
This is important, because they exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a single logical object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files.) Among the benefits of this approach are:

There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used.
The section contents are identical in either case; there is no need to alter data to accommodate multiple files.
It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification.
The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) to support multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted, however, that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful.

Using Ancillary Objects (From the Solaris Linker and Libraries Guide)

By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so they have no impact on memory use or other aspects of runtime performance no matter their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections:

To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks.
To support fine-grained debugging of highly optimized code, which requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes.
The size of a 32-bit object is limited to 4 Gbytes. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object.
To limit the exposure of internal implementation details.

Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option. $ ld ...
-z ancillary[=outfile] ...

By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be supplied by providing an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions:

All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object.
All remaining non-allocable sections are written to the ancillary object.
The following non-allocable sections are written to both the primary object and ancillary object:
.shstrtab - the section name string table.
.symtab - the full non-dynamic symbol table.
.symtab_shndx - the symbol table extended index section associated with .symtab.
.strtab - the non-dynamic string table associated with .symtab.
.SUNW_ancillary - contains the information required to identify the primary and ancillary objects, and to identify the object being examined.

The primary object and all ancillary objects contain the same array of section headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either of the primary or ancillary objects. The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc.

$ cat hello.c
#include <stdio.h>

int
main(int argc, char **argv)
{
        (void) printf("hello, world\n");
        return (0);
}
$ cc -g -zancillary hello.c
$ file a.out a.out.anc
a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc
a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out
$ ./a.out
hello, world

The resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects, and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example.
Index  Section Name     Type            Primary Flags            Ancillary Flags                Primary Size  Ancillary Size
13     .text            PROGBITS        ALLOC EXECINSTR          ALLOC EXECINSTR SUNW_ABSENT    0x131         0
20     .data            PROGBITS        WRITE ALLOC              WRITE ALLOC SUNW_ABSENT        0x4c          0
21     .symtab          SYMTAB          0                        0                              0x450         0x450
22     .strtab          STRTAB          STRINGS                  STRINGS                        0x1ad         0x1ad
24     .debug_info      PROGBITS        SUNW_ABSENT              0                              0             0x1a7
28     .shstrtab        STRTAB          STRINGS                  STRINGS                        0x118         0x118
29     .SUNW_ancillary  SUNW_ancillary  0                        0                              0x30          0x30

The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime are found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime are placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table .symtab and its associated string table .strtab. It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbols contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together.

$ elfdump -T SUNW_ancillary a.out a.out.anc

a.out:
Ancillary Section:  .SUNW_ancillary
     index  tag                value
       [0]  ANC_SUNW_CHECKSUM  0x8724
       [1]  ANC_SUNW_MEMBER    0x1        a.out
       [2]  ANC_SUNW_CHECKSUM  0x8724
       [3]  ANC_SUNW_MEMBER    0x1a3      a.out.anc
       [4]  ANC_SUNW_CHECKSUM  0xfbe2
       [5]  ANC_SUNW_NULL      0

a.out.anc:
Ancillary Section:  .SUNW_ancillary
     index  tag                value
       [0]  ANC_SUNW_CHECKSUM  0xfbe2
       [1]  ANC_SUNW_MEMBER    0x1        a.out
       [2]  ANC_SUNW_CHECKSUM  0x8724
       [3]  ANC_SUNW_MEMBER    0x1a3      a.out.anc
       [4]  ANC_SUNW_CHECKSUM  0xfbe2
       [5]  ANC_SUNW_NULL      0

The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined. The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section from any one of the files. It is possible to determine which file is being examined from the larger set of files by comparing the first checksum value to the checksum of each member that follows.

Debugger Access and Use of Ancillary Objects

Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and a single symbol table. The following steps can be used by a debugger to assemble the information contained in these files.
    1. Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, and contains information that can be used to obtain a complete list of the files and determine which of those files is the one currently being examined.

    2. Create a section header array in memory, using the section header array from the object being examined as an initial template.

    3. Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set.

The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program.

Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects.

ELF Implementation Details (From the Solaris Linker and Libraries Guide)

To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and two new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual.

Part IV ELF Application Binary Interface

Chapter 13: Object File Format

Object File Format

Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type.

e_type
Identifies the object file type, as listed in the following table.

    Name                Value   Meaning
    ET_NONE             0       No file type
    ET_REL              1       Relocatable file
    ET_EXEC             2       Executable file
    ET_DYN              3       Shared object file
    ET_CORE             4       Core file
    ET_LOSUNW           0xfefe  Start operating system specific range
    ET_SUNW_ANCILLARY   0xfefe  Ancillary object file
    ET_HISUNW           0xfefd  End operating system specific range
    ET_LOPROC           0xff00  Start processor-specific range
    ET_HIPROC           0xffff  End processor-specific range

Sections

Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section.

...

sh_type
Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5.

sh_flags
Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8.

...

Table 13-5 ELF Section Types, sh_type

    Name                 Value
    . . .
    SHT_LOSUNW           0x6fffffee
    SHT_SUNW_ancillary   0x6fffffee
    . . .

...

SHT_LOSUNW - SHT_HISUNW
Values in this inclusive range are reserved for Oracle Solaris OS semantics.

SHT_SUNW_ANCILLARY
Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section.

...

Table 13-8 ELF Section Attribute Flags

    Name                  Value
    . . .
    SHF_MASKOS            0x0ff00000
    SHF_SUNW_NODISCARD    0x00100000
    SHF_SUNW_ABSENT       0x00200000
    SHF_SUNW_PRIMARY      0x00400000
    SHF_MASKPROC          0xf0000000
    . . .

...

SHF_SUNW_ABSENT
Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object, while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files.

SHF_SUNW_PRIMARY
The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one or more input sections with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status.

...

Two members in the section header, sh_link and sh_info, hold special information, depending on section type.

Table 13-9 ELF sh_link and sh_info Interpretation

    sh_type              sh_link                                                    sh_info
    . . .
    SHT_SUNW_ANCILLARY   The section header index of the associated string table.   0
    . . .

Special Sections

Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section.

Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes.

Table 13-10 ELF Special Sections

    Name              Type                 Attribute
    . . .
    .SUNW_ancillary   SHT_SUNW_ancillary   None
    . . .

...

.SUNW_ancillary
Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details.

...

Ancillary Section

Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF.

In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h.
    typedef struct {
            Elf32_Word      a_tag;
            union {
                    Elf32_Word      a_val;
                    Elf32_Addr      a_ptr;
            } a_un;
    } Elf32_Ancillary;

    typedef struct {
            Elf64_Xword     a_tag;
            union {
                    Elf64_Xword     a_val;
                    Elf64_Addr      a_ptr;
            } a_un;
    } Elf64_Ancillary;

For each object with this type, a_tag controls the interpretation of a_un.

a_val
These objects represent integer values with various interpretations.

a_ptr
These objects represent file offsets or addresses.

The following ancillary tags exist.

Table 13-NEW1 ELF Ancillary Array Tags

    Name                Value   a_un
    ANC_SUNW_NULL       0       Ignored
    ANC_SUNW_CHECKSUM   1       a_val
    ANC_SUNW_MEMBER     2       a_ptr

ANC_SUNW_NULL
Marks the end of the ancillary section.

ANC_SUNW_CHECKSUM
Provides the checksum for a file in the a_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member.

ANC_SUNW_MEMBER
Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string that provides the file name.

An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as:

    Tag                 Meaning
    ANC_SUNW_CHECKSUM   Checksum of this object
    ANC_SUNW_MEMBER     Name of object #1
    ANC_SUNW_CHECKSUM   Checksum for object #1
    . . .
    ANC_SUNW_MEMBER     Name of object N
    ANC_SUNW_CHECKSUM   Checksum for object N
    ANC_SUNW_NULL

An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match; a small sketch of this walk appears at the end of this article.

Related Other Work

The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post-processing step. Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor.

It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU style separate debug files even on Solaris. Although this is interesting in terms of giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections, when using this approach.

The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them.
The details of modifying the DWARF data to be usable in this form are involved — please see the above URL for details.
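To make the self-identification walk concrete, here is a minimal C sketch of the algorithm described in the format reference above. It is illustrative only: the ancillary_t structure, the tag constants, and the anc_self_name helper are local stand-ins modeled on Table 13-NEW1 (the real definitions live in sys/elf.h on systems with this feature), and the toy string table and checksums are taken from the a.out example earlier in this article.

    #include <stdio.h>

    /* Tag values from Table 13-NEW1 above. */
    #define ANC_SUNW_NULL           0
    #define ANC_SUNW_CHECKSUM       1
    #define ANC_SUNW_MEMBER         2

    /* Simplified stand-in for Elf32_Ancillary; a_un holds a_val or a_ptr. */
    typedef struct {
            unsigned int    a_tag;
            unsigned int    a_un;
    } ancillary_t;

    /*
     * Walk the tag array: remember the leading checksum, then return the
     * name of the member whose checksum matches it.
     */
    static const char *
    anc_self_name(const ancillary_t *anc, const char *strtab)
    {
            unsigned int    self = 0;
            const char      *member = NULL;

            for (; anc->a_tag != ANC_SUNW_NULL; anc++) {
                    switch (anc->a_tag) {
                    case ANC_SUNW_MEMBER:
                            member = strtab + anc->a_un;
                            break;
                    case ANC_SUNW_CHECKSUM:
                            if (member == NULL)
                                    self = anc->a_un;   /* leading checksum */
                            else if (anc->a_un == self)
                                    return (member);    /* found ourselves */
                            break;
                    }
            }
            return (NULL);
    }

    int
    main(void)
    {
            /* Toy data modeled on the a.out.anc elfdump output above. */
            static const char strtab[] = "\0a.out\0a.out.anc";
            static const ancillary_t anc[] = {
                    { ANC_SUNW_CHECKSUM,    0xfbe2 },
                    { ANC_SUNW_MEMBER,      1 },        /* "a.out" */
                    { ANC_SUNW_CHECKSUM,    0x8724 },
                    { ANC_SUNW_MEMBER,      7 },        /* "a.out.anc" */
                    { ANC_SUNW_CHECKSUM,    0xfbe2 },
                    { ANC_SUNW_NULL,        0 }
            };
            const char *name = anc_self_name(anc, strtab);

            (void) printf("examining: %s\n", name != NULL ? name : "unknown");
            return (0);
    }

Run against the a.out data instead, where the leading checksum is 0x8724, the same walk would report a.out.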


  • Much Ado About Nothing: Stub Objects

    - by user9154181
The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object.

You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

The Long Road To Stubs

This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

    4631488 lib/Makefile is too patient: .WAITs should be reduced

This CR encapsulates a number of chronic issues with Solaris builds:

    We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.

    Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.

    To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy:

        It's really hard to get right.

        It's really easy to get it wrong and never know it because things build anyway.

        Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting.

        You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other.

From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

    To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.

    The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.

    By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach.

Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years.

We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations:

    The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.

    In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object?

    It ought to be very fast to build stub objects, as there are no input objects to process.

    Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel.

    When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization.

    We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out.

Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

    Present the same set of global symbols, with the same ELF versioning, as the real object.

    Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.

    Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.

    For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.

    If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

We imagined the stub library feature working as follows:

    A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.

    The extra information needed (function or data, size, and bss details) would be added to the mapfile.

    When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

    # DATA(i386)            __iob   0x3c0
    # DATA(amd64,sparcv9)   __iob   0xa00
    # DATA(sparc)           __iob   0x140
            __iob;

A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

    A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.

    Another perl script is used after both objects have been built, to compare the real and stub objects, using data from elfdump, and validate that they present the same linking interface.

By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
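To make that prototype mechanism concrete, the following is a hypothetical sketch of the sort of C glue such a script might generate from the __iob annotation above, for a 32-bit x86 build. Only the symbol name and size come from the example annotation; the function stub and its name are invented for illustration, and the actual generated code is not shown in this article.

    /*
     * Hypothetical generated glue for a stub object (i386 case).
     * The data stub needs only the right name and size; its type and
     * contents are irrelevant at link time.
     */
    char __iob[0x3c0];

    /*
     * A function stub: a symbol of the right ELF type whose body is
     * never executed. The name is illustrative, not from the real
     * libc mapfile.
     */
    void
    some_libc_function(void)
    {
    }

Compiled and linked with the real object's mapfile, glue like this yields a shared object that presents __iob with the correct size; the actual scripts also had to cover the ELF versioning, aliasing, and bss placement requirements described above.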
Ultimately though, the result was unsatisfactory as a basis for real product. There were so many issues:

    The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern.

    The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code.

    A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.

    A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stubs in mind as I moved forward.

The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

    6916788 ld version 2 mapfile syntax
    PSARC/2009/688 Human readable and extensible ld mapfile syntax

In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

    6916796 OSnet mapfiles should use version 2 link-editor syntax

That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

    We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

    % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
        -ztext -zdefs -Bdirect ...

    real        0.019708910
    user        0.010101680
    sys         0.008528431

In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens.

And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features...

...and so, I backed away, put it down for a few months and did other work...

...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

Without stubs, the following gives a simplified high level view of how Solaris is built:

    An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.

    A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.

    Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.

    Subsequent passes run lint, and do packaging.

Given this structure, the additions to use stub objects are:

    A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    A new target called stub is added to library Makefiles. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.

    A new target called stubinstall is added to the Makefiles, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the plumbing used by the existing install rule.

    The setup rule runs stubinstall over the entire lib subtree as part of its initialization.

    All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld.

With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do.

I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult.

At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

    Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.

    It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.

    Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

    6993877 ld should produce stub objects
    PSARC/2010/397 ELF Stub Objects

followed by the work to convert the ON consolidation in snv_161 (February 2011) with

    7009826 OSnet should use stub objects
    4631488 lib/Makefile is too patient: .WAITs should be reduced

This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

Conclusions and Looking Forward

Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.


  • Using Stub Objects

    - by user9154181
Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects:

    -z stub
    This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies.

    STUB_OBJECT Mapfile Directive
    In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible.

    ASSERT Mapfile Directive
    All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub.

Although ASSERT was added to the link-editor in order to support stub objects, it is a general purpose feature that can be used independently of stub objects. For instance, you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case.

The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case, and of serving as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form.

Stub Objects

A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime.

When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly.

Stub objects can be used to solve a variety of build problems:

    Speed
    Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built.
    Complexity/Correctness
    In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures.

    Dependency Cycles
    It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built.

Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded.

Stub objects are built from a mapfile, which must satisfy the following requirements:

    The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces.

    All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile.

    The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface.

    All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol.

Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object.

To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface.

    % cat idx5.c
    int _idx5[5] = { 0, 1, 2, 3, 4 };
    #pragma weak idx5 = _idx5

    int
    idx5_func(int index)
    {
            if ((index < 0) || (index > 4))
                    return (-1);
            return (_idx5[index]);
    }

A mapfile is required to describe the interface provided by this shared object.

    % cat mapfile
    $mapfile_version 2

    STUB_OBJECT;

    SYMBOL_SCOPE {
            _idx5 {
                    ASSERT { TYPE=data; SIZE=4[5] };
            };
            idx5 {
                    ASSERT { BINDING=weak; ALIAS=_idx5 };
            };
            idx5_func;
        local:
            *;
    };

The following main program is used to print all the index values available from the idx5 shared object.
    % cat main.c
    #include <stdio.h>

    extern int _idx5[5], idx5[5], idx5_func(int);

    int
    main(int argc, char **argv)
    {
            int i;

            for (i = 0; i < 5; i++)
                    (void) printf("[%d] %d %d %d\n",
                        i, _idx5[i], idx5[i], idx5_func(i));
            return (0);
    }

The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile.

    % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub
    % ln -s libidx5.so.1 stublib/libidx5.so
    % elfdump -d stublib/libidx5.so | grep STUB
         [11]  FLAGS_1           0x4000000         [ STUB ]

The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute.

    % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc
    % ./a.out
    ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory
    Killed
    % LD_PRELOAD=stublib/libidx5.so.1 ./a.out
    ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime
    Killed

We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file.

    % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1

Once the real object has been built in the lib subdirectory, the program can be run.

    % ./a.out
    [0] 0 0 0
    [1] 1 1 1
    [2] 2 2 2
    [3] 3 3 3
    [4] 4 4 4

Mapfile Changes

The version 2 mapfile syntax was extended in a number of places to accommodate stub objects.

Conditional Input

The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc.

The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need:

    Name       Meaning
    . . .
    _ET_DYN    shared object
    _ET_EXEC   executable object
    _ET_REL    relocatable object
    . . .

STUB_OBJECT Directive

The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object.

    STUB_OBJECT;

A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents:

    All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile.

    The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface.

    All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BINDING attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS.

    All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced.

These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface.

SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute

The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows:

    ASSERT {
            ALIAS = symbol_name;
            BINDING = symbol_binding;
            TYPE = symbol_type;
            SH_ATTR = section_attributes;

            SIZE = size_value;
            SIZE = size_value[count];
    };

The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created.

In normal use, the link-editor evaluates ASSERT attributes when present, but does not require them, or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details.

When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object.

ASSERT accepts the following:

    ALIAS
    Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol.

    BINDING
    Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK).

    TYPE
    Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively.
    TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it.

    SH_ATTR
    Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table:

        Section Attribute   Meaning
        BITS                Section is not of type SHT_NOBITS
        NOBITS              Section is of type SHT_NOBITS

    SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it.

    SIZE
    Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below.

SIZE

The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition.

Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows:

    $if _ELF32
    foo { ASSERT { TYPE=data; SIZE=4 } };
    bar { ASSERT { TYPE=data; SIZE=20 } };
    $elif _ELF64
    foo { ASSERT { TYPE=data; SIZE=8 } };
    bar { ASSERT { TYPE=data; SIZE=40 } };
    $else
    $error UNKNOWN ELFCLASS
    $endif

I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE and ASSERT SIZE attributes support this syntax:

    The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input.

    The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value.

Using this feature, the example above can be written more naturally as:

    foo { ASSERT { TYPE=data; SIZE=addrsize } };
    bar { ASSERT { TYPE=data; SIZE=addrsize[5] } };

Exported Global Data Is Still A Bad Idea

As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item.
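As a hypothetical illustration based on the idx5 example above (idx5_data is an invented name, not part of the article's interface), such a functional interface might look like:

    /* The array is no longer exported; only the accessor function is. */
    static int _idx5[5] = { 0, 1, 2, 3, 4 };

    /*
     * Callers obtain the address of the data at runtime, rather than
     * binding to a global data symbol at link-edit time.
     */
    int *
    idx5_data(void)
    {
            return (_idx5);
    }

With an interface like this, the mapfile needs no data ASSERT attributes at all: the only exported symbol is a function, which is the easy case for stubbing.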
However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday...

Conclusions

It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!


  • Issue 15: Oracle Exadata Marketing Campaigns

    - by rituchhibber
PARTNER FOCUS

Oracle Exadata Marketing Campaign

Steve McNickle, VP Europe, cVidya

Steve McNickle is VP Europe for cVidya, an innovative provider of revenue intelligence solutions for telecom, media and entertainment service providers including AT&T, BT, Deutsche Telecom and Vodafone. The company's product portfolio helps operators and service providers maximise margins, improve customer experience and optimise ecosystem relationships through revenue assurance, fraud and security management, sales performance management, pricing analytics, and inter-carrier services. cVidya has partnered with Oracle for more than a decade.

RESOURCES

    Oracle PartnerNetwork (OPN)
    Oracle Exastack Program
    Oracle Exastack Optimized
    Oracle Exastack Labs and Enablement Resources
    Oracle Engineered Systems
    Oracle Communications
    cVidya

Please could you tell us a little about cVidya's partnering history with Oracle, and expand on your Oracle Exastack accreditations?

"cVidya was established just over ten years ago and we've had a strong relationship with Oracle almost since the very beginning. Through our Revenue Intelligence work with some of the world's largest service providers we collect tremendous amounts of information, amounting to billions of records per day. We help our clients to collect, store and analyse that data to ensure that their end customers are getting the best levels of service, are billed correctly, and are happy that they are on the correct price plan. We have been an Oracle Gold level partner for seven years, and crucially just two months ago we were also accredited as Oracle Exastack Optimized for MoneyMap, our core Revenue Assurance solution. Very soon we also expect DRMap, our Data Retention solution, to be Oracle Exastack Optimized."

What unique capabilities and customer benefits does Oracle Exastack add to your applications?

"Oracle Exastack enables us to deliver radical benefits to our customers. A typical mobile operator in the UK might handle between 500 million and two billion call data record details daily. Each transaction needs to be validated, billed correctly and fraud checked. Because of the enormous volumes involved, our clients demand scalable infrastructure that allows them to efficiently acquire, store and process all that data within controlled cost, space and environmental constraints. We have proved that the Oracle Exadata system can process data up to seven times faster and load it as much as 20 times faster than other standard best-of-breed server approaches. With the Oracle Exadata Database Machine they can reduce their datacentre equipment from say, the six or seven cabinets that they needed in the past, down to just one. This dramatic simplification delivers incredible value to the customer by cutting down enormously on all of their significant cost, space, energy, cooling and maintenance overheads.

"The Oracle Exastack Program has given our clients the ability to switch their focus from reactive to proactive. Traditionally they may have spent 80 percent of their day processing, and just 20 percent enabling end customers to see advanced analytics, and avoiding issues before they occur. With our solutions and Oracle Exadata they can now switch that balance around entirely, resulting not only in reduced revenue leakage, but a far higher focus on proactive leakage prevention."

How has the Oracle Exastack Program transformed your customer business?
"We can already see the impact. Oracle solutions allow our delivery teams to achieve successful deployments, happy customers and self-satisfaction, and the power of Oracle's Exa solutions is easy to measure in terms of their transformational ability. We gained our first sale into a major European telco by demonstrating the major performance gains that would transform their business. Clients can measure the ease of organisational change, the early prevention of business issues, the reduction in manpower required to provide protection and coverage across all their products and services, plus of course end customer satisfaction. If customers know that that service is provided accurately and that their bills are calculated correctly, then over time this satisfaction can be attributed to revenue intelligence and the underlying systems which provide it. Combine this with the further integration we have with the other layers of the Oracle stack, including the telecommunications offerings such as NCC, OCDM and BRM, and the result is even greater customer value—not to mention the increased speed to market and the reduced project risk." What does the Oracle Exastack community bring to cVidya, both in terms of general benefits, and also tangible new opportunities and partnerships? "A great deal. We have participated in the Oracle Exastack community heavily over the past year, and have had lots of meetings with Oracle and our peers around the globe. It brings us into contact with like-minded, innovative partners, who like us are not happy to just stand still and want to take fresh technology to their customer base in order to gain enhanced value. We identified three new partnerships in each of two recent meetings, and hope these will open up new opportunities, not only in areas that exactly match where we operate today, but also in some new associative areas that will expand our reach into new business sectors. Notably, thanks to the Exastack community we were invited on stage at last year's Oracle OpenWorld conference. Appearing so publically with Oracle senior VP Judson Althoff elevated awareness and visibility of cVidya and has enabled us to participate in a number of other events with Oracle over the past eight months. We've been involved in speaking opportunities, forums and exhibitions, providing us with invaluable opportunities that we wouldn't otherwise have got close to." How has Exastack differentiated cVidya as an ISV, and helped you to evolve your business to the next level? "When we are selling to our core customer base of Tier 1 telecommunications providers, we know that they want more than just software. They want an enduring partnership that will last many years, they want innovation, and a forward thinking partner who knows how to guide them on where they need to be to meet market demand three, five or seven years down the line. Membership of respected global bodies, such as the Telemanagement Forum enables us to lead standard adherence in our area of business, giving us a lot of credibility, but Oracle is also involved in this forum with its own telecommunications portfolio, strengthening our position still further. When we approach CEOs, CTOs and CIOs at the very largest Tier 1 operators, not only can we easily show them that our technology is fantastic, we can also talk about our strong partnership with Oracle, and our joint embracing of today's standards and tomorrow's innovation." Where would you like cVidya to be in one year's time? 
"We want to get all of our relevant products Oracle Exastack Optimized. Our MoneyMap Revenue Assurance solution is already Exastack Optimised, our DRMAP Data Retention Solution should be Exastack Optimised within the next month, and our FraudView Fraud Management solution within the next two to three months. We'd then like to extend our Oracle accreditation out to include other members of the Oracle Engineered Systems family. We are moving into the 'Big Data' space, and so we're obviously very keen to work closely with Oracle to conduct pilots, map new technologies onto Oracle Big Data platforms, and embrace and measure the benefits of other Oracle systems, namely Oracle Exalogic Elastic Cloud, the Oracle Exalytics In-Memory Machine and the Oracle SPARC SuperCluster. We would also like to examine how the Oracle Database Appliance might benefit our Tier 2 service provider customers. Finally, we'd also like to continue working with the Oracle Communications Global Business Unit (CGBU), furthering our integration with Oracle billing products so that we are able to quickly deploy fraud solutions into Oracle's Engineered System stack, give operational benefits to our clients that are pre-integrated, more cost-effective, and can be rapidly deployed rapidly and producing benefits in three months, not nine months." Chris Baker ,Senior Vice President, Oracle Worldwide ISV-OEM-Java Sales Chris Baker is the Global Head of ISV/OEM Sales responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners' business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills. Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA? "Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straight-forward. We want all of our ISV partners and OEMs to concentrate on the things that they do the best – building applications to meet the unique industry and functional requirements of their customer. We want to ensure that we deliver a best in class application platform so the ISV is free to concentrate their effort on their application functionality and user experience We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. 
It's really important that the architecture is the same, in order to keep cost and time overheads to a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success."

How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need?

"We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems, including Exadata and Exalogic. So, for example, they can become 'Database Ready', which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready or Oracle Solaris Ready, which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic. Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies, which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes, which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions, and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems. We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized', for partners whose applications run best on the Oracle stack and who have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use of our technology and the standardisation it delivers. To date, several hundred organisations have successfully worked through our Exastack Optimized programme."

How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly?

"One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build into our products to help them fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to implement in an on-premise environment.
But it wasn't just the configuration of the application that took the time; it was actually the infrastructure: the different hardware configurations, operating systems, and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry."

What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships?

"My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities - for example, the Machine-to-Machine (M2M) market or 'The Internet of Things'. Over the next few years, many millions, indeed billions, of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications. The only way that our partners will be able to provide a single vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data: a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected], or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world to be based on Oracle."

What opportunities are immediately opened to new ISV partners joining the OPN?

"As you know, OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation."

Finally, are there any other messages that you would like to share with the Oracle ISV community?

"The crucial message that I always like to reinforce is architecture, architecture and architecture!
The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: 'Will I be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud?' The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that: to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous - just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value, all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them."

--

Gergely Strbik is Oracle Hardware and Software Product Manager for Avnet in Hungary. Avnet Technology Solutions is an Oracle Value Added Distributor focused on the development of the existing Oracle channel. This includes the recruitment and enablement of Oracle partners, as well as driving deeper adoption of Oracle's technology and application products within the IT channel.

"The main business benefits of ODA for our customers and partners are scalability, flexibility, a great price point for the high performance delivered, and the easily configurable embedded Linux operating system. People welcome a lower point of entry and the ability to grow capacity on demand as their business expands."

"Marketing and selling the ODA requires another way of thinking, because it is an appliance. We have to transform the ways in which our partners and customers think, from buying hardware and software independently to buying complete solutions. Successful early adopters and satisfied customer reactions will certainly help us to sell the ODA. We will have more experience with the product after the first deliveries and installations; end users need to see the power and benefits for themselves."

"Our typical ODA customers will be those looking for complete solutions from a single reseller partner who is also able to manage the appliance. They will have enjoyed using Oracle Database but now want a new product that is able to unlock new levels of performance. A higher proportion of potential customers will come from our existing Oracle base, with around 30% from new business, but we intend to evangelise the ODA on the market to see how we can change this balance as all our customers adjust to the concept of 'Hardware and Software, Engineered to Work Together'."


  • How to shoot yourself in the foot (DO NOT Read in the office)

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/06/21/how-to-shoot-yourself-in-the-foot-do-not-read.aspx

Let me make it absolutely clear - the following is:
- merely collated by your Geek from http://www.codeproject.com/Lounge.aspx?msg=3917012#xx3917012xx
- very, very, very funny, so you read it in the presence of others at your own risk

So here is the list - you have been warned!

C: You shoot yourself in the foot.

C++: You accidentally create a dozen instances of yourself and shoot them all in the foot. Providing emergency medical assistance is impossible since you can't tell which are bitwise copies and which are just pointing at others and saying, "That's me, over there."

FORTRAN: You shoot yourself in each toe, iteratively, until you run out of toes, then you read in the next foot and repeat. If you run out of bullets, you continue anyway because you have no exception-handling facility.

Modula-2: After realizing that you can't actually accomplish anything in this language, you shoot yourself in the head.

COBOL: USEing a COLT 45 HANDGUN, AIM gun at LEG.FOOT, THEN place ARM.HAND.FINGER on HANDGUN.TRIGGER and SQUEEZE. THEN return HANDGUN to HOLSTER. CHECK whether shoelace needs to be retied.

Lisp: You shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds...

BASIC: Shoot yourself in the foot with a water pistol. On big systems, continue until entire lower body is waterlogged.

Forth: Foot yourself in the shoot.

APL: You shoot yourself in the foot; then spend all day figuring out how to do it in fewer characters.

Pascal: The compiler won't let you shoot yourself in the foot.

Snobol: If you succeed, shoot yourself in the left foot. If you fail, shoot yourself in the right foot.

HyperTalk: Put the first bullet of the gun into foot left of leg of you. Answer the result.

Prolog: You tell your program you want to be shot in the foot. The program figures out how to do it, but the syntax doesn't allow it to explain.

370 JCL: You send your foot down to MIS with a 4000-page document explaining how you want it to be shot. Three years later, your foot comes back deep-fried.

FORTRAN-77: You shoot yourself in each toe, iteratively, until you run out of toes, then you read in the next foot and repeat. If you run out of bullets, you continue anyway because you still can't do exception-processing.

Modula-2 (alternative): You perform a shooting on what might be currently a foot with what might be currently a bullet shot by what might currently be a gun.

BASIC (compiled): You shoot yourself in the foot with a BB using a SCUD missile launcher.

Visual Basic: You'll really only appear to have shot yourself in the foot, but you'll have so much fun doing it that you won't care.

Forth (alternative): BULLET DUP3 * GUN LOAD FOOT AIM TRIGGER PULL BANG! EMIT DEAD IF DROP ROT THEN (This takes about five bytes of memory, executes in two to ten clock cycles on any processor and can be used to replace any existing function of the language as well as in any future words.) (Welcome to bottom-up programming - where you, too, can perform compiler pre-processing instead of writing code.)

APL (alternative): You hear a gunshot and there's a hole in your foot, but you don't remember enough linear algebra to understand what happened. Or: @#&^$%&%^ foot

Pascal (alternative): Same as Modula-2, except that the bullet is not the right type for the gun and your hand is blown off.

Snobol (alternative): You grab your foot with your hand, then rewrite your hand to be a bullet. The act of shooting the original foot then changes your hand/bullet into yet another foot (a left foot).

Prolog (alternative): You attempt to shoot yourself in the foot, but the bullet, failing to find its mark, backtracks to the gun, which then explodes in your face.

COMAL: You attempt to shoot yourself in the foot with a water pistol, but the bore is clogged, and the pressure build-up blows apart both the pistol and your hand. Or: draw_pistol aim_at_foot(left) pull_trigger hop(swearing)

Scheme: As Lisp, but none of the other appendages are aware of this happening.

Algol: You shoot yourself in the foot with a musket. The musket is aesthetically fascinating, and the wound baffles the adolescent medic in the emergency room.

Ada: If you are dumb enough to actually use this language, the United States Department of Defense will kidnap you, stand you up in front of a firing squad and tell the soldiers, "Shoot at the feet." Or: The Department of Defense shoots you in the foot after offering you a blindfold and a last cigarette. Or: After correctly packaging your foot, you attempt to concurrently load the gun, pull the trigger, scream and shoot yourself in the foot. When you try, however, you discover that your foot is of the wrong type. Or: After correctly packing your foot, you attempt to concurrently load the gun, pull the trigger, scream, and confidently aim at your foot knowing it is safe. However, the cordite in the round does an Unchecked Conversion, fires and shoots you in the foot anyway.

Eiffel: You create a GUN object, two FOOT objects and a BULLET object. The GUN passes both the FOOT objects a reference to the BULLET. The FOOT objects increment their hole counts and forget about the BULLET. A little demon then drives a garbage truck over your feet and grabs the bullet (both of it) on the way.

Smalltalk: You spend so much time playing with the graphics and windowing system that your boss shoots you in the foot, takes away your workstation and makes you develop in COBOL on a character terminal. Or: You send the message shoot to gun, with selectors bullet and myFoot. A window pops up saying Gunpowder doesNotUnderstand: spark. After several fruitless hours spent browsing the methods for Trigger, FiringPin and IdealGas, you take the easy way out and create ShotFoot, a subclass of Foot with an additional instance variable bulletHole.

Object Oriented Pascal: You perform a shooting on what might currently be a foot with what might currently be a bullet fired from what might currently be a gun.

PL/I: You consume all available system resources, including all the offline bullets. The Data Processing & Payroll Department doubles its size, triples its budget, acquires four new mainframes and drops the original one on your foot.

Postscript: foot bullets 6 locate loadgun aim gun shoot showpage Or: It takes the bullet ten minutes to travel from the gun to your foot, by which time you're long since gone out to lunch. The text comes out great, though.

PERL: You stab yourself in the foot repeatedly with an incredibly large and very heavy Swiss Army knife. Or: You pick up the gun and begin to load it. The gun and your foot begin to grow to huge proportions and the world around you slows down, until the gun fires. It makes a tiny hole, which you don't feel.

Assembly Language: You crash the OS and overwrite the root disk. The system administrator arrives and shoots you in the foot. After a moment of contemplation, the administrator shoots himself in the foot and then hops around the room rabidly shooting at everyone in sight. Or: You try to shoot yourself in the foot only to discover you must first reinvent the gun, the bullet, and your foot. Or: The bullet travels to your foot instantly, but it took you three weeks to load the round and aim the gun.

BCPL: You shoot yourself somewhere in the leg -- you can't get any finer resolution than that.

Concurrent Euclid: You shoot yourself in somebody else's foot.

Motif: You spend days writing a UIL description of your foot, the trajectory, the bullet and the intricate scrollwork on the ivory handles of the gun. When you finally get around to pulling the trigger, the gun jams.

Powerbuilder: While attempting to load the gun you discover that the LoadGun system function is buggy; as a workaround you tape the bullet to the outside of the gun and unsuccessfully attempt to fire it with a nail. In frustration, you club your foot with the butt of the gun and explain to your client that this approximates the functionality of shooting yourself in the foot, and that the next version of Powerbuilder will fix it.

Standard ML: By the time you get your code to typecheck, you're using a shoot to foot yourself in the gun.

MUMPS: You shoot 583149 AK-47 teflon-tipped, hollow-point, armour-piercing bullets into even-numbered toes on odd-numbered feet of everyone in the building -- with one line of code. Three weeks later you shoot yourself in the head rather than try to modify that line.

Java: You locate the Gun class, but discover that the Bullet class is abstract, so you extend it and write the missing part of the implementation. Then you implement the ShootAble interface for your foot, and recompile the Foot class. The interface lets the bullet call the doDamage method on the Foot, so the Foot can damage itself in the most effective way. Now you run the program, and call the doShoot method on the instance of the Gun class. First the Gun creates an instance of Bullet, which calls the doFire method on the Gun. The Gun calls the hit(Bullet) method on the Foot, and the instance of Bullet is passed to the Foot. But this causes an IllegalHitByBullet exception to be thrown, and you die.

Unix: You shoot yourself in the foot. Or:
% ls
foot.c foot.h foot.o toe.c toe.o
% rm * .o
rm: .o: No such file or directory
% ls
%

370 JCL (alternative): You shoot yourself in the head just thinking about it.

DOS JCL: You first find the building you're in in the phone book, then find your office number in the corporate phone book. Then you have to write this down, then describe, in cubits, your exact location, in relation to the door (right hand side thereof). Then you need to write down the location of the gun (loading it is a proprietary utility), then you load it, and the COBOL program, and run them, and, with luck, it may be run tonight.

VMS:
$ MOUNT/DENSITY=.45/LABEL=BULLET/MESSAGE="BYE" BULLET::BULLET$GUN SYS$BULLET
$ SET GUN/LOAD/SAFETY=OFF/SIGHT=NONE/HAND=LEFT/CHAMBER=1/ACTION=AUTOMATIC/LOG/ALL/FULL SYS$GUN_3$DUA3:[000000]GUN.GNU
$ SHOOT/LOG/AUTO SYS$GUN SYS$SYSTEM:[FOOT]FOOT.FOOT
%DCL-W-ACTIMAGE, error activating image GUN
-CLI-E-IMGNAME, image file $3$DUA240:[GUN]GUN.EXE;1
-IMGACT-F-NOTNATIVE, image is not an OpenVMS Alpha AXP image
Or: %SYS-F-FTSHT, foot shot (fifty lines of traceback omitted)

sh, csh, etc.: You can't remember the syntax for anything, so you spend five hours reading manual pages, then your foot falls asleep. You shoot the computer and switch to C.

Apple System 7: Double click the gun icon and a window opens, giving a selection for guns, target areas, plus balloon help with medical remedies, and assorted sound effects. Click the "shoot" button and a small bomb appears with the note "Error of Type 1 has occurred."

Windows 3.1: Double click the gun icon and wait. Eventually a window opens giving a selection for guns, target areas, plus balloon help with medical remedies, and assorted sound effects. Click the "shoot" button and a small box appears with the note "Unable to open Shoot.dll, check that path is correct."

Windows 95: Your gun is not compatible with this OS, and you must buy an upgrade and install it before you can continue. Then you will be informed that you don't have enough memory.

CP/M: I remember when shooting yourself in the foot with a BB gun was a big deal.

DOS: You finally found the gun, but can't locate the file with the foot for the life of you.

MSDOS: You shoot yourself in the foot, but can unshoot yourself with add-on software.

Access: You try to point the gun at your foot, but it shoots holes in all your Borland distribution diskettes instead.

Paradox: Not only can you shoot yourself in the foot, your users can too.

dBase: You squeeze the trigger, but the bullet moves so slowly that by the time your foot feels the pain, you've forgotten why you shot yourself anyway. Or: You buy a gun. Bullets are only available from another company and are promised to work, so you buy them. Then you find out that the next version of the gun is the one scheduled to actually shoot bullets.

DBase IV, V1.0: You pull the trigger, but it turns out that the gun was a poorly designed hand grenade and the whole building blows up.

SQL: You cut your foot off, send it out to a service bureau and when it returns, it has a hole in it but will no longer fit the attachment at the end of your leg. Or: Insert into Foot Select Bullet From Gun.Hand Where Chamber = 'LOADED' And Trigger = 'PULLED'

Clipper: You grab a bullet, get ready to insert it in the gun so that you can shoot yourself in the foot, and discover that the gun that the bullet fits has not yet been built, but should be arriving in the mail _REAL_SOON_NOW_.

Oracle: The menus for coding foot_shooting have not been implemented yet, and you can't do foot shooting in SQL.

English: You put your foot in your mouth, then bite it off. (For those who don't know, English is a McDonnell Douglas/PICK query language which allegedly requires 110% of system resources to run happily.)

Revelation [an implementation of the PICK Operating System]: You'll be able to shoot yourself in the foot just as soon as you figure out what all these bullets are for.

FlagShip: Starting at the top of your head, you aim the gun at yourself repeatedly until, half an hour later, the gun is finally pointing at your foot and you pull the trigger. A new foot with a hole in it appears, but you can't work out how to get rid of the old one, and your gun doesn't work anymore.

FidoNet: You put your foot in your mouth, then echo it internationally.

PicoSpan [a UNIX-based computer conferencing system]: You can't shoot yourself in the foot because you're not a host. Or (host variation): Whenever you shoot yourself in the foot, someone opens a topic in policy about it.

Internet: You put your foot in your mouth, shoot it, then spam the bullet so that everybody gets shot in the foot.

troff:
rmtroff -ms -Hdrwp | lpr -Pwp2 &
.*place bullet in footer
.B
.NR FT +3i
.in 4
.bu
Shoot!
.br
.sp
.in -4
.br
.bp
NR HD -2i
.*

Genetic Algorithms: You create 10,000 strings describing the best way to shoot yourself in the foot. By the time the program produces the optimal solution, humans have evolved wings and the problem is moot.

CSP (Communicating Sequential Processes): You only fail to shoot everything that isn't your foot.

MS-SQL Server: MS-SQL Server's gun comes pre-loaded with an unlimited supply of Teflon-coated bullets, and it only has two discernible features: the muzzle and the trigger. If that wasn't enough, MS-SQL Server also puts the gun in your hand, applies local anesthetic to the skin of your forefinger and stitches it to the gun's trigger. Meanwhile, another process has set up a spinal block to numb your lower body. It will then proceed to surgically remove your foot, cryogenically freeze it for preservation, and attach it to the muzzle of the gun so that no matter where you aim, you will shoot your foot. In order to avoid shooting yourself in the foot, you need to unstitch your trigger finger, remove your foot from the muzzle of the gun, and have it surgically reattached. Then you probably want to get some crutches and go out to buy a book on SQL Server performance tuning.

Sybase: Sybase's gun requires assembly, and you need to go out and purchase your own clip and bullets to load the gun. Assembly is complicated by the fact that Sybase has hidden the gun behind a big stack of reference manuals, but it hasn't told you where that stack is. While you were off finding the gun, assembling it, buying bullets, etc., Sybase was also busy surgically removing your foot and cryogenically freezing it for preservation. Instead of attaching it to the muzzle of the gun, though, it packed your foot on dry ice and sent it UPS-Ground to an unnamed hookah bar somewhere in the Middle East. In order to shoot your foot, you must modify your gun with a GPS system for targeting and hire some guy named "Indy" to find the hookah bar and wire the coordinates back to you. By this time, you've probably become so daunted at the tasks that stand between you and shooting your foot that you hire a guy who's read all the books on Sybase to help you shoot your foot. If you're lucky, he'll be smart enough both to find your foot and to stop you from shooting it.

Magic software: You spend 1 week looking up the correct syntax for GUN. When you find it, you realise that GUN will not let you shoot in your own foot. It will allow you to shoot almost anything but your foot. You then decide to build your own gun. You can't use the standard barrel, since this will only allow for standard bullets, which will not fire if the barrel is pointed at your foot. After four weeks, you have created your own custom gun. It blows up in your hand without warning, because you failed to initialise the safety catch and it doesn't know whether the initial state is "0", 0, NULL, "ZERO", 0.0, 0,0, "0.0", or "0,00". You fix the problem with your remaining hand by nesting 12 safety catches, and then decide to build the gun without a safety catch. You then shoot the management and retire to a happy life where you code in languages that will allow you to shoot your foot in under 10 days.

Firefox: Lets you shoot yourself in as many feet as you'd like, while using multiple great addons!

IE: A moving target in terms of standard ammunition size, and doesn't always work properly with non-Microsoft ammunition, so sometimes you shoot something other than your foot. However, it's the corporate world's standard foot-shooting apparatus. Hackers seem to enjoy rigging websites up to trigger cascading foot-shooting failures.

Windows 98: About the same as Windows 95 in terms of overall bullet capacity and triggering mechanisms. Includes the updated DirectShot API. A new version, Windows 98 SE, was released later on to support USB guns.

WPF: You get your baseball glove and a ball and you head out to your backyard, where you throw balls to your pitchback. Then your unkempt-haired-cargo-shorts-and-sandals-with-white-socks-wearing neighbor uses XAML to sculpt your arm into a gun, the ball into a bullet and the pitchback into your foot. By now, however, only the neighbor can get it to work, and he's only around from 6:30 PM - 3:30 AM.

LOGO: You very carefully lay out the trajectory of the bullet. Then you start the gun, which fires very slowly. You walk precisely to the point where the bullet will travel and wait, but just before it gets to you, your class time is up and one of the other kids has already used the system to hack into Sony's PS3 network.

Flash: Someone has designed a beautiful-looking gun that anyone can shoot their feet with for free. It weighs six hundred pounds. All kinds of people are shooting themselves in the feet, and sending the link to everyone else so that they can too. That is, except for the criminals, who are all stealing iOS devices that the gun won't work with.

APL (again): It's (mostly) all Greek to me.

Lisp (again): Place ((gun in ((hand sight (foot then shoot))))) (Lots of Insipid Stupid Parentheses)

Apple OS/X and iOS: Once a year, Steve Jobs returns from sick leave to tell millions of unwavering fans how they will be able to shoot themselves in the foot differently this year. They retweet and blog about it ad nauseam, and wait in line to be the first to experience "shoot different".

Windows ME: Usually fails, even at shooting you in the foot. Yo dawg, I heard you like shooting yourself in the foot. So I put a gun in your gun, so you can shoot yourself in the foot while you shoot yourself in the foot. (Okay, I'm not especially proud of this joke.)

Windows 2000: Now you really do have to log in before you are allowed to shoot yourself in the foot.

Windows XP: You thought you learned your lesson: don't use Windows ME. Then along came this new creature, built on top of Windows NT! So you spend the next couple of days installing antivirus software, patches and service packs, just so you can get that driver to install, and then proceed to shoot yourself in the foot.

Windows Vista: Newer! Glossier! Shootier!

Windows 7: The bullets come out a lot smoother.

Active Directory: Each bullet now has an attached Bullet Identifier, and can be uniquely identified. Policies can be applied to dictate fragmentation, and the gun will occasionally have a confusing delay after the trigger has been pulled.

Python: You try to use import foot; foot.shoot() only to realize that's only available in 3.0, to which you can't yet upgrade from 2.7 because of all those extension libs lacking support.

Solaris: Shoots best when used on SPARC hardware, but still runs the trigger GUI under Java. After weeks of learning the appropriate STOP command to prevent the trigger from automatically being pressed on boot, you think you've got it under control. Then the one time you ever use dtrace, it hits a bug that fires the gun.

MySQL: The feature that allows you to shoot yourself in the foot has been in development for about 6 years, and they are adding it into the next version, which is coming out REAL SOON NOW, promise! But you can always check it out of source control and try it yourself (just not in any environment where data integrity is important, because it will probably explode).

PostgreSQL: Allows you to have a smug look on your face while you shoot yourself in the foot, because those MySQL guys STILL don't have that feature.

NoSQL: Barrel? Who needs a barrel? Just put the bullet on your foot, and strike it with a hammer. See? It's so much simpler and more efficient that way. You can even strike multiple bullets in one swing if you swing with a good enough arc, because hammers are easy to use. Getting them to synchronize is a little difficult, though.

Eclipse: There are about a dozen different packages for shooting yourself in the foot, with weird interdependencies on outdated components. Once you finally navigate the morass and get one installed, you then have something to look at while you shoot yourself in the foot with that package: you can watch the screen redraw.

Outlook: Makes it really easy to let everyone know you shot yourself in the foot!

Shooting yourself in the foot using delegates: You really need to shoot yourself in the foot but you hate firearms (you don't want any dependency on the specifics of shooting), so you delegate it to somebody else. You don't care how it is done as long as it is shooting your foot. You can do it asynchronously in case you know you may faint, so you are called back/slapped in the face by your shooter/friend (or background worker) when everything is done.

C#: You prepare the gun and the bullet, carefully modeling all of the physics of a bullet traveling through a foot. Just before you're about to pull the trigger, you stumble on System.Windows.BodyParts.Foot.ShootAt(System.Windows.Firearms.IGun gun) in the extended framework, realize you just wasted the entire afternoon, and shoot yourself in the head.

PHP:
<?php require("foot_safety_check.php"); ?>
<!DOCTYPE HTML>
<html>
<head> <!--Lower!-->
<title>Shooting me in the foot</title>
</head>
<body> <!--LOWER!!!-->
<leg> <!--OK, I made this one up...-->
<footer>
<?php echo (dungSift($_SERVER['HTTP_USER_AGENT'], "ie"))?("Your foot is safe, but you might want to wear a hard hat!"):("<div class=\"shot\">BANG!</div>"); ?>
</footer>
</leg>
</body>
</html>


  • ZFS for Database Log Files

    - by user12620111
    I've been troubled by drop outs in CPU usage in my application server, characterized by the CPUs suddenly going from close to 90% CPU busy to almost completely CPU idle for a few seconds. Here is an example of a drop out as shown by a snippet of vmstat data taken while the application server is under a heavy workload.

# vmstat 1
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr s3 s4 s5 s6   in   sy   cs us sy id
 1 0 0 130160176 116381952 0 16 0 0 0 0  0  0  0  0  0 207377 117715 203884 70 21 9
 12 0 0 130160160 116381936 0 25 0 0 0 0 0  0  0  0  0 200413 117162 197250 70 20 9
 11 0 0 130160176 116381920 0 16 0 0 0 0 0  0  1  0  0 203150 119365 200249 72 21 7
 8 0 0 130160176 116377808 0 19 0 0 0 0  0  0  0  0  0 169826 96144 165194 56 17 27
 0 0 0 130160176 116377800 0 16 0 0 0 0  0  0  0  0  1 10245 9376 9164 2  1 97
 0 0 0 130160176 116377792 0 16 0 0 0 0  0  0  0  0  2 15742 12401 14784 4 1 95
 0 0 0 130160176 116377776 2 16 0 0 0 0  0  0  1  0  0 19972 17703 19612 6 2 92
 14 0 0 130160176 116377696 0 16 0 0 0 0 0  0  0  0  0 202794 116793 199807 71 21 8
 9 0 0 130160160 116373584 0 30 0 0 0 0  0  0 18  0  0 203123 117857 198825 69 20 11

This behavior occurred consistently while the application server was processing synthetic transactions: HTTP requests from JMeter running on an external machine. I explored many theories trying to explain the drop outs, including:

- Unexpected JMeter behavior
- Network contention
- Java Garbage Collection
- Application Server thread pool problems
- Connection pool problems
- Database transaction processing
- Database I/O contention

Graphing the CPU %idle led to a breakthrough: several of the drop outs were 30 seconds apart. With that insight, I went digging through the data again, looking for other outliers that were 30 seconds apart. In the database server statistics, I found spikes in the iostat "asvc_t" (average response time of disk transactions, in milliseconds) for the disk drive that was being used for the database log files. Here is an example:

                    extended device statistics
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 2053.6    0.0  8234.3  0.0  0.2    0.0    0.1   0  24 c3t60080E5...F4F6d0s0
    0.0 2162.2    0.0  8652.8  0.0  0.3    0.0    0.1   0  28 c3t60080E5...F4F6d0s0
    0.0 1102.5    0.0 10012.8  0.0  4.5    0.0    4.1   0  69 c3t60080E5...F4F6d0s0
    0.0   74.0    0.0  7920.6  0.0 10.0    0.0  135.1   0 100 c3t60080E5...F4F6d0s0
    0.0  568.7    0.0  6674.0  0.0  6.4    0.0   11.2   0  90 c3t60080E5...F4F6d0s0
    0.0 1358.0    0.0  5456.0  0.0  0.6    0.0    0.4   0  55 c3t60080E5...F4F6d0s0
    0.0 1314.3    0.0  5285.2  0.0  0.7    0.0    0.5   0  70 c3t60080E5...F4F6d0s0

Here is a little more information about my database configuration:

- The database and application server were running on two different SPARC servers.
- Storage for the database was on a storage array connected via 8 gigabit Fibre Channel.
- Data storage and log files were on different physical disk drives.
- Reliable low latency I/O is provided by battery backed NVRAM.
- Highly available: two Fibre Channel links accessed via MPxIO, and two mirrored cache controllers.
- The log file physical disks were mirrored in the storage device.
- Database log files were on a ZFS filesystem with cutting-edge technologies, such as copy-on-write and end-to-end checksumming.

Why would I be getting service time spikes in my high-end storage?
First, I wanted to verify that the database log disk service time spikes aligned with the application server CPU drop outs, and they did. At first, I guessed that the disk service time spikes might be related to flushing the write through cache on the storage device, but I was unable to validate that theory. After searching the WWW for a while, I decided to try using a separate log device:

# zpool add ZFS-db-41 log c3t60080E500017D55C000015C150A9F8A7d0

The ZFS log device is configured in a similar manner as described above: two physical disks mirrored in the storage array. This change to the database storage configuration eliminated the application server CPU drop outs. Here is the zpool configuration:

# zpool status ZFS-db-41
  pool: ZFS-db-41
 state: ONLINE
 scan: none requested
config:

        NAME                   STATE
        ZFS-db-41              ONLINE
          c3t60080E5...F4F6d0  ONLINE
        logs
          c3t60080E5...F8A7d0  ONLINE

Now, the I/O spikes look like this:

                    extended device statistics
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 1053.5    0.0  4234.1  0.0  0.8    0.0    0.7   0  75 c3t60080E5...F8A7d0s0
                    extended device statistics
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 1131.8    0.0  4555.3  0.0  0.8    0.0    0.7   0  76 c3t60080E5...F8A7d0s0
                    extended device statistics
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 1167.6    0.0  4682.2  0.0  0.7    0.0    0.6   0  74 c3t60080E5...F8A7d0s0
    0.0  162.2    0.0 19153.9  0.0  0.7    0.0    4.2   0  12 c3t60080E5...F4F6d0s0
                    extended device statistics
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 1247.2    0.0  4992.6  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0
    0.0   41.0    0.0    70.0  0.0  0.1    0.0    1.6   0   2 c3t60080E5...F4F6d0s0
                    extended device statistics
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 1241.3    0.0  4989.3  0.0  0.8    0.0    0.6   0  75 c3t60080E5...F8A7d0s0
                    extended device statistics
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 1193.2    0.0  4772.9  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0

We can see the steady flow of 4k writes to the ZIL device from O_SYNC database log file writes. The spikes are from flushing the transaction group. Like almost all problems that I run into, once I thoroughly understand the problem, I find that other people have documented similar experiences. Thanks to all of you who have documented alternative approaches. Saved for another day: now that the problem is obvious, I should try "zfs:zfs_immediate_write_sz" as recommended in the ZFS Evil Tuning Guide.

References:
- The ZFS Intent Log
- Solaris ZFS, Synchronous Writes and the ZIL Explained
- ZFS Evil Tuning Guide: Cache Flushes
- ZFS Evil Tuning Guide: Tuning ZFS for Database Performance
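For completeness, here is a minimal sketch of that deferred experiment, assuming the standard Solaris /etc/system mechanism for ZFS tunables described in the ZFS Evil Tuning Guide; the value shown is the documented default and is illustrative only, not a recommendation:

# Inspect the current value via the Solaris kernel debugger (mdb -k):
echo 'zfs_immediate_write_sz/D' | mdb -k
# Persistently change it (illustrative value; takes effect after a reboot):
echo 'set zfs:zfs_immediate_write_sz = 0x8000' >> /etc/system

Lowering this threshold changes which synchronous writes are logged immediately to the ZIL rather than written indirectly, so it is worth measuring with the same iostat methodology used above before and after.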


  • Zend/PHP: Problem uploading/downloading file to/from MySQL's BLOB field.

    - by NAVEED
    I am uploading a file (of any type) like this (it uploads the content of the file into a BLOB field of MySQL):

$organizationModel = new Model_Organization_Object( $organizationId );
$myFile = file_get_contents( '../path/to/my/file/filename.ext' );
$organizationModel->setOrganizationProfile( $myFile );
$organizationModel->save();

Now I want to get that file from the database and download it. I am doing this in the controller's action (I am expecting a PDF file here, therefore it is hardcoded below, but in future I want to download any file type from the BLOB field):

$organizationModel = new Model_Organization_Object( $organizationId );
$content = $organizationModel->getOrganizationProfile();
header('Content-Type: application/octet-stream');
header("Content-Length: " . strlen($content) );
header('Content-Disposition: attachment; filename=orgProfile.pdf');
$this->view->organizationProfile = $content;

Now in the view file I am doing this:

echo $this->organizationProfile;

But the above download process prints (echoes) the content of the file in Firebug and does not download the file in its original format. My echo output in Firebug looks like this:

%PDF-1.3 %???? 84 0 obj << /Linearized 1 /O 86 /H [ 541 212 ] /L 958398 /E 11238 /N 27 /T 956600 >> endobj xref 84 7 0000000016 00000 n ...
[several kilobytes of raw PDF stream data omitted]

Can someone help with how to download the file, or am I doing something wrong in the uploading process? The setOrganizationProfile() and getOrganizationProfile() functions were created by me and are working fine while storing/getting data to/from the database. Thanks
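For what it's worth, the usual cause of this symptom in Zend Framework 1 is that the layout and view still render around the binary output, so the browser receives HTML plus the raw PDF bytes. A minimal sketch of sending the BLOB straight from the controller, assuming the standard ZF1 layout and viewRenderer action helpers (downloadAction is a hypothetical action name; the model calls are the question's own):

public function downloadAction()
{
    $organizationModel = new Model_Organization_Object( $organizationId );
    $content = $organizationModel->getOrganizationProfile();

    // Skip layout and view rendering so nothing but the BLOB bytes are sent.
    $this->_helper->layout()->disableLayout();
    $this->_helper->viewRenderer->setNoRender(true);

    header('Content-Type: application/octet-stream');
    header('Content-Length: ' . strlen($content));
    header('Content-Disposition: attachment; filename="orgProfile.pdf"');
    echo $content;
}

If the layout or view does run, any whitespace or markup it emits is prepended to the stream, which is exactly the "PDF source printed in Firebug" behaviour described above.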


  • Problem compiling hive with ant

    - by conandor
    I compiling with Solaris 10 SPARC, jdk 1.6 from Sun, Ant 1.7.1 from OpenCSW. I have no problem running hadoop 0.17.2.1 However, I have problem compiling/integrating hive with the error 'cannot find symbol', although I followed the tutorial. I have the hive source code from SVN exactly from tutorial. How can I know the hive version I compiling and how can I compile against hadoop 0.17.2.1? Please advice. Thank you. -bash-3.00$ export PATH=/usr/jdk/instances/jdk1.6.0/bin:/usr/bin:/opt/csw/bin:/opt/webstack/bin -bash-3.00$ export JAVA_HOME=/usr/jdk/instances/jdk1.6.0 -bash-3.00$ export HADOOP=/export/home/mywork/hadoop-0.17.2.1/bin/hadoop -bash-3.00$ /opt/csw/bin/ant package -Dhadoop.version=0.17.2.1 Buildfile: build.xml jar: create-dirs: compile-ant-tasks: create-dirs: init: compile: [echo] Compiling: anttasks deploy-ant-tasks: create-dirs: init: compile: [echo] Compiling: anttasks jar: init: compile: ivy-init-dirs: ivy-download: [get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar [get] To: /export/home/mywork/hive/build/ivy/lib/ivy-2.1.0.jar [get] Not modified - so not downloaded ivy-probe-antlib: ivy-init-antlib: ivy-init: ivy-retrieve-hadoop-source: [ivy:retrieve] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ :: [ivy:retrieve] :: loading settings :: file = /export/home/mywork/hive/ivy/ivysettings.xml [ivy:retrieve] :: resolving dependencies :: org.apache.hadoop.hive#shims;working@kaili [ivy:retrieve] confs: [default] [ivy:retrieve] found hadoop#core;0.17.2.1 in hadoop-source [ivy:retrieve] found hadoop#core;0.18.3 in hadoop-source [ivy:retrieve] found hadoop#core;0.19.0 in hadoop-source [ivy:retrieve] found hadoop#core;0.20.0 in hadoop-source [ivy:retrieve] :: resolution report :: resolve 25878ms :: artifacts dl 37ms --------------------------------------------------------------------- | | modules || artifacts | | conf | number| search|dwnlded|evicted|| number|dwnlded| --------------------------------------------------------------------- | default | 4 | 0 | 0 | 0 || 4 | 0 | --------------------------------------------------------------------- [ivy:retrieve] :: retrieving :: org.apache.hadoop.hive#shims [ivy:retrieve] confs: [default] [ivy:retrieve] 0 artifacts copied, 4 already retrieved (0kB/82ms) install-hadoopcore-internal: build_shims: [echo] Compiling shims against hadoop 0.17.2.1 (/export/home/mywork/hive/build/hadoopcore/hadoop-0.17.2.1) ivy-init-dirs: ivy-download: [get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar [get] To: /export/home/mywork/hive/build/ivy/lib/ivy-2.1.0.jar [get] Not modified - so not downloaded ivy-probe-antlib: ivy-init-antlib: ivy-init: ivy-retrieve-hadoop-source: [ivy:retrieve] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ :: [ivy:retrieve] :: loading settings :: file = /export/home/mywork/hive/ivy/ivysettings.xml [ivy:retrieve] :: resolving dependencies :: org.apache.hadoop.hive#shims;working@kaili [ivy:retrieve] confs: [default] [ivy:retrieve] found hadoop#core;0.17.2.1 in hadoop-source [ivy:retrieve] found hadoop#core;0.18.3 in hadoop-source [ivy:retrieve] found hadoop#core;0.19.0 in hadoop-source [ivy:retrieve] found hadoop#core;0.20.0 in hadoop-source [ivy:retrieve] :: resolution report :: resolve 12041ms :: artifacts dl 30ms --------------------------------------------------------------------- | | modules || artifacts | | conf | number| search|dwnlded|evicted|| number|dwnlded| --------------------------------------------------------------------- | default 
| 4 | 0 | 0 | 0 || 4 | 0 | --------------------------------------------------------------------- [ivy:retrieve] :: retrieving :: org.apache.hadoop.hive#shims [ivy:retrieve] confs: [default] [ivy:retrieve] 0 artifacts copied, 4 already retrieved (0kB/39ms) install-hadoopcore-internal: build_shims: [echo] Compiling shims against hadoop 0.18.3 (/export/home/mywork/hive/build/hadoopcore/hadoop-0.18.3) ivy-init-dirs: ivy-download: [get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar [get] To: /export/home/mywork/hive/build/ivy/lib/ivy-2.1.0.jar [get] Not modified - so not downloaded ivy-probe-antlib: ivy-init-antlib: ivy-init: ivy-retrieve-hadoop-source: [ivy:retrieve] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ :: [ivy:retrieve] :: loading settings :: file = /export/home/mywork/hive/ivy/ivysettings.xml [ivy:retrieve] :: resolving dependencies :: org.apache.hadoop.hive#shims;working@kaili [ivy:retrieve] confs: [default] [ivy:retrieve] found hadoop#core;0.17.2.1 in hadoop-source [ivy:retrieve] found hadoop#core;0.18.3 in hadoop-source [ivy:retrieve] found hadoop#core;0.19.0 in hadoop-source [ivy:retrieve] found hadoop#core;0.20.0 in hadoop-source [ivy:retrieve] :: resolution report :: resolve 11107ms :: artifacts dl 36ms --------------------------------------------------------------------- | | modules || artifacts | | conf | number| search|dwnlded|evicted|| number|dwnlded| --------------------------------------------------------------------- | default | 4 | 0 | 0 | 0 || 4 | 0 | --------------------------------------------------------------------- [ivy:retrieve] :: retrieving :: org.apache.hadoop.hive#shims [ivy:retrieve] confs: [default] [ivy:retrieve] 0 artifacts copied, 4 already retrieved (0kB/49ms) install-hadoopcore-internal: build_shims: [echo] Compiling shims against hadoop 0.19.0 (/export/home/mywork/hive/build/hadoopcore/hadoop-0.19.0) ivy-init-dirs: ivy-download: [get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar [get] To: /export/home/mywork/hive/build/ivy/lib/ivy-2.1.0.jar [get] Not modified - so not downloaded ivy-probe-antlib: ivy-init-antlib: ivy-init: ivy-retrieve-hadoop-source: [ivy:retrieve] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ :: [ivy:retrieve] :: loading settings :: file = /export/home/mywork/hive/ivy/ivysettings.xml [ivy:retrieve] :: resolving dependencies :: org.apache.hadoop.hive#shims;working@kaili [ivy:retrieve] confs: [default] [ivy:retrieve] found hadoop#core;0.17.2.1 in hadoop-source [ivy:retrieve] found hadoop#core;0.18.3 in hadoop-source [ivy:retrieve] found hadoop#core;0.19.0 in hadoop-source [ivy:retrieve] found hadoop#core;0.20.0 in hadoop-source [ivy:retrieve] :: resolution report :: resolve 9969ms :: artifacts dl 33ms --------------------------------------------------------------------- | | modules || artifacts | | conf | number| search|dwnlded|evicted|| number|dwnlded| --------------------------------------------------------------------- | default | 4 | 0 | 0 | 0 || 4 | 0 | --------------------------------------------------------------------- [ivy:retrieve] :: retrieving :: org.apache.hadoop.hive#shims [ivy:retrieve] confs: [default] [ivy:retrieve] 0 artifacts copied, 4 already retrieved (0kB/57ms) install-hadoopcore-internal: build_shims: [echo] Compiling shims against hadoop 0.20.0 (/export/home/mywork/hive/build/hadoopcore/hadoop-0.20.0) jar: [echo] Jar: shims create-dirs: compile-ant-tasks: create-dirs: init: compile: [echo] Compiling: anttasks 
deploy-ant-tasks:
create-dirs:
init:
compile:
     [echo] Compiling: anttasks
jar:
init:
install-hadoopcore:
install-hadoopcore-default:
[... ivy block repeats, this time resolving only hadoop#core;0.20.0 for org.apache.hadoop.hive#common (1 artifact, 0 copied, 1 already retrieved) ...]
install-hadoopcore-internal:
setup:
compile:
     [echo] Compiling: common
jar:
     [echo] Jar: common
create-dirs:
compile-ant-tasks:
create-dirs:
init:
compile:
     [echo] Compiling: anttasks
deploy-ant-tasks:
create-dirs:
init:
compile:
     [echo] Compiling: anttasks
jar:
init:
dynamic-serde:
compile:
     [echo] Compiling: hive
    [javac] Compiling 167 source files to /export/home/mywork/hive/build/serde/classes
    [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ObjectInspectorFactory.java:30: cannot find symbol
    [javac] symbol  : class PrimitiveObjectInspectorFactory
    [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
    [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
    [javac]                                                                ^
    [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ObjectInspectorFactory.java:31: cannot find symbol
    [javac] symbol  : class PrimitiveObjectInspectorUtils
    [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
    [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils;
    [javac]                                                                ^
    [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/MetadataTypedColumnsetSerDe.java:31: cannot find symbol
    [javac] symbol  : class MetadataListStructObjectInspector
    [javac] location: package org.apache.hadoop.hive.serde2.objectinspector
    [javac] import org.apache.hadoop.hive.serde2.objectinspector.MetadataListStructObjectInspector;
    [javac]                                                       ^
    [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyBoolean.java:20: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist
    [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyBooleanObjectInspector;
    [javac]                                                                     ^
[... some thirty further errors of the same two kinds follow: "cannot find symbol" for Boolean/Double/Float/Short/StringObjectInspector, PrimitiveObjectInspectorFactory and PrimitiveObjectInspectorUtils (package org.apache.hadoop.hive.serde2.objectinspector.primitive), for LazySimpleStructObjectInspector and LazyObjectInspectorFactory (package org.apache.hadoop.hive.serde2.lazy.objectinspector), and "package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist" for the Lazy*ObjectInspector imports in SerDeUtils, BinarySortableSerDe, LazySimpleSerDe, ColumnarSerDe, LazyStruct, DynamicSerDe, the DynamicSerDeType* classes, LazyBoolean, LazyByte, LazyDouble and LazyFactory ...]
    [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFloat.java:
[log truncated here]
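In case it helps anyone answer the first question: here is how I believe I can check which Hive version I am actually building, assuming a standard SVN checkout (svn info shows the branch URL and revision, and Hive's build.properties at the top of the checkout normally carries a version property that the build stamps on the jars):

-bash-3.00$ cd /export/home/mywork/hive
-bash-3.00$ svn info | egrep 'URL|Revision'      # which branch and revision was checked out
-bash-3.00$ grep '^version' build.properties      # version string the build will use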
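And for the second question, a sketch of what I plan to try next. This assumes that ant clean is a target in Hive's build.xml and that my checkout may simply be stale or incomplete, since the classes javac cannot find look like ordinary serde sources rather than anything Hadoop-version specific; the -Dhadoop.version flag itself appears to be picked up, judging by the "Compiling shims against hadoop 0.17.2.1" line above:

-bash-3.00$ cd /export/home/mywork/hive
-bash-3.00$ svn update                            # bring the checkout to one consistent revision
-bash-3.00$ svn status | head                     # look for missing or conflicted files
-bash-3.00$ /opt/csw/bin/ant clean                # wipe build/ so no stale classes are picked up
-bash-3.00$ /opt/csw/bin/ant package -Dhadoop.version=0.17.2.1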
