Search Results

Search found 2086 results on 84 pages for 'pixel shader'.

Page 83/84

  • CodePlex Daily Summary for Friday, January 07, 2011

    CodePlex Daily Summary for Friday, January 07, 2011Popular ReleasesAutoLoL: AutoLoL v1.5.2: Implemented the Auto Updater Fix: Your settings will no longer be cleared with new releases of AutoLoL The mastery Editor and Browser now have their own tabs instead of nested tabs The Browser tab will only show the masteries matching ALL filters instead of just one Added a 'Browse' button in the Mastery Editor tab to open the Masteries Directory The Browser tab now shows a message when there are no mastery files in the Masteries Directory Fix: Fixed the Save As dialog again, for ...Ionics Isapi Rewrite Filter: 2.1 latest stable: V2.1 is stable, and is in maintenance mode. This is v2.1.1.25. It is a bug-fix release. There are no new features. 28629 29172 28722 27626 28074 29164 27659 27900 many documentation updates and fixes proper x64 build environment. This release includes x64 binaries in zip form, but no x64 MSI file. You'll have to manually install x64 servers, following the instructions in the documentation.StyleCop for ReSharper: StyleCop for ReSharper 5.1.14980.000: A considerable amount of work has gone into this release: Huge focus on performance around the violation scanning subsystem: - caching added to reduce IO operations around reading and merging of settings files - caching added to reduce creation of expensive objects Users should notice condsiderable perf boost and a decrease in memory usage. Bug Fixes: - StyleCop's new ObjectBasedEnvironment object does not resolve the StyleCop installation path, thus it does not return the correct path ...VivoSocial: VivoSocial 7.4.1: New release with bug fixes and updates for performance.SSH.NET Library: 2011.1.6: Fixes CommandTimeout default value is fixed to infinite. Port Forwarding feature improvements Memory leaks fixes New Features Add ErrorOccurred event to handle errors that occurred on different thread New and improve SFTP features SftpFile now has more attributes and some operations Most standard operations now available Allow specify encoding for command execution KeyboardInteractiveConnectionInfo class added for "keyboard-interactive" authentication. Add ability to specify bo....NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.03: Added lot's of new extensions and new projects for MVC and Entity Framework. object.FindTypeByRecursion Int32.InRange String.RemoveAllSpecialCharacters String.IsEmptyOrWhiteSpace String.IsNotEmptyOrWhiteSpace String.IfEmptyOrWhiteSpace String.ToUpperFirstLetter String.GetBytes String.ToTitleCase String.ToPlural DateTime.GetDaysInYear DateTime.GetPeriodOfDay IEnumberable.RemoveAll IEnumberable.Distinct ICollection.RemoveAll IList.Join IList.Match IList.Cast Array.IsNullOrEmpty Array.W...VidCoder: 0.8.0: Added x64 version. Made the audio output preview more detailed and accurate. If the chosen encoder or mixdown is incompatible with the source, the fallback that will be used is displayed. Added "Auto" to the audio mixdown choices. Reworked non-anamorphic size calculation to work better with non-standard pixel aspect ratios and cropping. Reworked Custom anamorphic to be more intuitive and allow display width to be set automatically (Thanks, Statick). Allowing higher bitrates for 6-ch....NET Voice Recorder: Auto-Tune Release: This is the source code and binaries to accompany the article on the Coding 4 Fun website. 
It is the Auto Tuner release of the .NET Voice Recorder application.BloodSim: BloodSim - 1.3.2.0: - Simulation Log is now automatically disabled and hidden when running 10 or more iterations - Hit and Expertise are now entered by Rating, and include option for a Racial Expertise bonus - Added option for boss to use a periodic magic ability (Dragon Breath) - Added option for boss to periodically Enrage, gaining a Damage/Attack Speed buffASP.NET MVC CMS ( Using CommonLibrary.NET ): CommonLibrary.NET CMS 0.9.5 Alpha: CommonLibrary CMSA simple yet powerful CMS system in ASP.NET MVC 2 using C# 4.0. ActiveRecord based components for Blogs, Widgets, Pages, Parts, Events, Feedback, BlogRolls, Links Includes several widgets ( tag cloud, archives, recent, user cloud, links twitter, blog roll and more ) Built using the http://commonlibrarynet.codeplex.com framework. ( Uses TDD, DDD, Models/Entities, Code Generation ) Can run w/ In-Memory Repositories or Sql Server Database See Documentation tab for Ins...AllNewsManager.NET: AllNewsManager.NET 1.2.1: AllNewsManager.NET 1.2.1 It is a minor update from version 1.2EnhSim: EnhSim 2.2.9 BETA: 2.2.9 BETAThis release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in the Gobl...xUnit.net - Unit Testing for .NET: xUnit.net 1.7 Beta: xUnit.net release 1.7 betaBuild #1533 Important notes for Resharper users: Resharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: If you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. This release adds the following new features: Added support for ASP.NET MVC 3 Added Assert.Equal(double expected, double actual, int precision)...Json.NET: Json.NET 4.0 Release 1: New feature - Added Windows Phone 7 project New feature - Added dynamic support to LINQ to JSON New feature - Added dynamic support to serializer New feature - Added INotifyCollectionChanged to JContainer in .NET 4 build New feature - Added ReadAsDateTimeOffset to JsonReader New feature - Added ReadAsDecimal to JsonReader New feature - Added covariance to IJEnumerable type parameter New feature - Added XmlSerializer style Specified property support New feature - Added ...DbDocument: DbDoc Initial Version: DbDoc Initial versionASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.2: Atomic CMS 2.1.2 release notes Atomic CMS installation guide N2 CMS: 2.1: N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. Major Changes Support for auto-implemented properties ({get;set;}, based on contribution by And Poulsen) All-round improvements and bugfixes File manager improvements (multiple file upload, resize images to fit) New image gallery Infinite scroll paging on news Content templates First time with N2? Try the demo site Download one of the template packs (above) and open the proj...Mobile Device Detection and Redirection: 0.1.11.10: IMPORTANT CHANGESThis release changes the way some WURFL capabilities and attributes are exposed to .NET developers. 
If you cast MobileCapabilities to return some values then please read the Release Note before implementing this release. The following code snippet can be used to access any WURFL capability. For instance, if the device is a tablet: string capability = Request.Browser["is_tablet"]; SummaryNew attributes have been added to the redirect section: originalUrlAsQueryString If se...Wii Backup Fusion: Wii Backup Fusion 1.0: - Norwegian translation - French translation - German translation - WBFS dump for analysis - Scalable full HQ cover - Support for log file - Load game images improved - Support for image splitting - Diff for images after transfer - Support for scrubbing modes - Search functionality for log - Recurse depth for Files/Load - Show progress while downloading game cover - Supports more databases for cover download - Game cover loading routines improvedBlogEngine.NET: BlogEngine.NET 2.0: Get DotNetBlogEngine for 3 Months Free! Click Here for More Info 3 Months FREE – BlogEngine.NET Hosting – Click Here! If you want to set up and start using BlogEngine.NET right away, you should download the Web project. If you want to extend or modify BlogEngine.NET, you should download the source code. If you are upgrading from a previous version of BlogEngine.NET, please take a look at the Upgrading to BlogEngine.NET 2.0 instructions. To get started, be sure to check out our installatio...New Projects9192631770: This project is created for learning .net 3.5 personally. However it may not suffice for anyone to give a start point. (9192631770) is equivalent to 1 sec in atomic clock.AGS: AGSAll-In-One Code Framework Prerelease: All-In-One Code Framework PrereleaseAwait Events with "yield": This is a library that allows you to stop running the code wherever you want in order to await an event using the functionality of "yield" sentence. It's useful when you want to await asynchronous events or when you have to deal with many events in a sequential way.Battle.net SDK: This is a SDK that retrieves it's information from the Battle.Net community site. At the moment blizzard only supports this for World of Warcraft, so that's what our main aim is at the momeen.t C++ Hash Container Benchmark: C++ Hash Container Benchmark for STL map, C++0x unordered map, Boost unordered map, ATL map and ATL hash map for STL wide string and ATL CString.Colour Lovers .NET: A .NET library for the Colour Lovers API.DatingGame: Course to teach high-school aged girls basic T-SQL using a fun scenario - querying to find the hottest boys! Used at Microsoft DigiGirlz and TKP events. Included DDL script, CSV for bcp with data, PPTX, T-SQL Cheat Sheet and teaching tips. Enjoy!do-Dots open .NET SDK: The do-Dots open SDK brings developers a full set of classes that allow to build applications based on do-Dots, a framework for M2M communication. It's developed in C#. EFMVC - ASP.NET MVC 3 and EF Code First: Demo web app using ASP.NET MVC 3 and EF Code FirstGS1: D is a 2D game demo written in C++ and using an API : HAPI for the graphic part and the audio part. All the xml files are handled with tinyXML. It is a vertical scrolling shoot'em up where the player controls a dragon flying in Central Park.GS2: In Zombies, you are a wizard, the most powerful wizard in the world, and two days ago, the Devil forces began to attack our world. The only person capable of stopping them is you, this is why the Devil himself came to you and took your powers. 
You're now alone, without any weaponIPProvider: DFGiwtfly: ????iwtfly26050: iwtfly2Knowledge Exchange .Net: This is my learning experience with creating an enterprise scale .NET application with tools such as Tortoise SVN, NANT, and Linq to SQLLinqPad Data Context Driver for SharePoint: The SharePoint Data Context Driver for LinqPad makes it easer for SharePoint 2010 Developers to develop, maintain and just play around with Linq To SharePoint statements via LinqPad. It is developed in C# and enables SharePoint 2010 Support to LinqPad.MaxLeafWebSiteK3: MaxLeafWebSiteK3Open ASP.NET CMS: Open ASP.NET 3.5 CMS Plug 'N Play Settings Manager: Plug 'N Play Settings Manager will be an application to configure settings on a windows computer by waiting for a usb thumbstick with a configuration file to be inserted, the application would then read and apply those settings. The early focus will be applying network settings.project windy: Windy - enhanced window manager. windy does window management a breeze. It started as a windows alternative to divvy, but now it has evolved with into its own. Thanks to the generous feedback from you folks. whats different from divvy? - first - its free. - has divvy likeRiaMVVM : MVVM Friendly WCF Ria Services: Simple, light-weight, MVVM friendly access to WCF Ria Services. Written in C# for use with Silverlight 4.SharePoint Designer 2007 Policy: Enable or Disable SharePoint Designer 2007 per site web application and per site colleciton. Spruckus - SharePoint ReUsable Content Keystamp Usage Search: Adds a keystamp to all html type items in the SharePoint Reusable Content list and adds a context item to the reusable content list that will find usages of that reusable content in your site using search.Student Insiders: Student InsidersTea: Tea Web Operator SystemVegas.NET: Projeto teste de TransportadoraXNA 4 Game state management system: XNA 4 Game State Management??????: aa

    Read the article

  • CodePlex Daily Summary for Friday, June 01, 2012

    CodePlex Daily Summary for Friday, June 01, 2012Popular ReleasesASP.Net Client Dependency Framework: v1.5: This release brings you many bug fixes and some new features Install via Nuget:Install-Package ClientDependency Install-Package ClientDependency-Mvc New featuresNew PlaceHolderProvider for webforms which will now let you specify exactly where the CSS and JS is rendered, so you can now separate them Better API support for runtime changes & registration Allows for custom formatting of composite file URLs new config option: pathUrlFormat="{dependencyId}/{version}/{type}" to have full contr...Silverlight 5 Multi-Window Controls: May 2012: This release introduces a new context menu type for desktop apps that can overflow the parent window: http://trelford.com/ContextMenu_SL5_Native.png Code snippet: <TextBlock Text="Right click on me to show the context menu"> <multiwindow:ContextMenuService.ContextMenu> <multiwindow:ContextMenuWindow> <multiwindow:MenuItem Header="Menu Item"/> </multiwindow:ContextMenuWindow> </multiwindow:ContextMenuService.Co...Better Explorer: Better Explorer Beta 1: Finally, the first Beta is here! There were a lot of changes, including: Translations into 10 different languages (the translations are not complete and will be updated soon) Conditional Select new tools for managing archives Folder Tools tab new search bar and Search Tab new image editing tools update function many bug fixes, stability fixes, and memory leak fixes other new features as well! Please check it out and if there are any problems, let us know. :) Also, do not forge...myManga: myManga v1.0.0.3: Will include MangaPanda as a default option. ChangeLog Updating from Previous Version: Extract contents of Release - myManga v1.0.0.3.zip to previous version's folder. Replaces: myManga.exe BakaBox.dll CoreMangaClasses.dll Manga.dll Plugins/MangaReader.manga.dll Plugins/MangaFox.manga.dll Plugins/MangaHere.manga.dll Plugins/MangaPanda.manga.dllPlayer Framework by Microsoft: Player Framework for Windows 8 Metro (Preview 3): Player Framework for HTML/JavaScript and XAML/C# Metro Style Applications. Additional DownloadsIIS Smooth Streaming Client SDK for Windows 8 Microsoft PlayReady Client SDK for Metro Style Apps Release notes:Support for Windows 8 Release Preview (released 5/31/12) Advertising support (VAST, MAST, VPAID, & clips) Miscellaneous improvements and bug fixesConfuser: Confuser 1.8: Changelog: +New UI...again. +New project system, replacing the previous declarative obfuscation and XML configuration. *Improve the protection strength... *Improve the compatibility. Now Confuser can obfuscate itself and even some real-life application like Paint.NET and ILSpy! (of course with some small adjustment)Naked Objects: Naked Objects Release 4.1.0: Corresponds to the packaged version 4.1.0 available via NuGet. Note that the versioning has moved to SemVer (http://semver.org/) This is a bug fix release with no new functionality. Please note that the easiest way to install and run the Naked Objects Framework is via the NuGet package manager: just search the Official NuGet Package Source for 'nakedobjects'. It is only necessary to download the source code (from here) if you wish to modify or re-build the framework yourself. 
If you do wi...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.54: Fix for issue #18161: pretty-printing CSS @media rule throws an exception due to mismatched Indent/Unindent pair.Silverlight Toolkit: Silverlight 5 Toolkit Source - May 2012: Source code for December 2011 Silverlight 5 Toolkit release.Json.NET: Json.NET 4.5 Release 6: New feature - Added IgnoreDataMemberAttribute support New feature - Added GetResolvedPropertyName to DefaultContractResolver New feature - Added CheckAdditionalContent to JsonSerializer Change - Metro build now always uses late bound reflection Change - JsonTextReader no longer returns no content after consecutive underlying content read failures Fix - Fixed bad JSON in an array with error handling creating an infinite loop Fix - Fixed deserializing objects with a non-default cons...DotNetNuke® Community Edition CMS: 06.02.00: Major Highlights Fixed issue in the Site Settings when single quotes were being treated as escape characters Fixed issue loading the Mobile Premium Data after upgrading from CE to PE Fixed errors logged when updating folder provider settings Fixed the order of the mobile device capabilities in the Site Redirection Management UI The User Profile page was completely rebuilt. We needed User Profiles to have multiple child pages. This would allow for the most flexibility by still f...Thales Simulator Library: Version 0.9.6: The Thales Simulator Library is an implementation of a software emulation of the Thales (formerly Zaxus & Racal) Hardware Security Module cryptographic device. This release fixes a problem with the FK command and a bug in the implementation of PIN block 05 format deconstruction. A new 0.9.6.Binaries file has been posted. This includes executable programs without an installer, including the GUI and console simulators, the key manager and the PVV clashing demo. Please note that you will need ...????: ????2.0.1: 1、?????。WiX Toolset: WiX v3.6 RC: WiX v3.6 RC (3.6.2928.0) provides feature complete Burn with VS11 support. For more information see Rob's blog post about the release: http://robmensching.com/blog/posts/2012/5/28/WiX-v3.6-Release-Candidate-availableJavascript .NET: Javascript .NET v0.7: SetParameter() reverts to its old behaviour of allowing JavaScript code to add new properties to wrapped C# objects. The behavior added briefly in 0.6 (throws an exception) can be had via the new SetParameterOptions.RejectUnknownProperties. TerminateExecution now uses its isolate to terminate the correct context automatically. Added support for converting all C# integral types, decimal and enums to JavaScript numbers. (Previously only the common types were handled properly.) 
Bug fixe...Phalanger - The PHP Language Compiler for the .NET Framework: 3.0 (May 2012): Fixes: unserialize() of negative float numbers fix pcre possesive quantifiers and character class containing ()[] array deserilization when the array contains a reference to ISerializable parsing lambda function fix round() reimplemented as it is in PHP to avoid .NET rounding errors filesize bypass for FileInfo.Length bug in Mono New features: Time zones reimplemented, uses Windows/Linux databaseSharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.1): New fetures:Admin enable / disable match Hide/Show Euro 2012 SharePoint lists (3 lists) Installing SharePoint Euro 2012 PredictorSharePoint Euro 2012 Predictor has been developed as a SharePoint Sandbox solution to support SharePoint Online (Office 365) Download the solution havivi.euro2012.wsp from the download page: Downloads Upload this solution to your Site Collection via the solutions area. Click on Activate to make the web parts in the solution available for use in the Site C...????SDK for .Net 4.0+(OAuth2.0+??V2?API): ??V2?SDK???: ?????????API?? ???????OAuth2.0?? ????:????????????,??????????“SOURCE CODE”?????????Changeset,http://weibosdk.codeplex.com/SourceControl/list/changesets ???:????????,DEMO??AppKey????????????????,?????AppKey,????AppKey???????????,?????“????>????>????>??????”.Net Code Samples: Code Samples: Code samples (SLNs).LINQ_Koans: LinqKoans v.02: Cleaned up a bitNew ProjectsAntiXSS Experimental: Welcome to AntiXSS Experimental. AntiXSS Experimental contains code for common encoders auto-generated using Microsoft Research's BEK project.atfone: atfoneBango Adobe Air Application Analytics SDK: Bango application analytics is an analytics solution for mobile applications. This SDK provides a framework you can use in your application to add analytics capabilities to your mobile applications. It's developed in Actionscript with Native Extensions to target iOS,Android and the Blackberry Playbook.bbinjest: bbinjestdevMobile.NET Library for Windows Phone 7.1: devMobile.NET Library intends to offer a set of commons and not so commons controls for developing Windows Phone 7.5 applications. It offers classic controls like both pie and column charts for simple scenarios as well as other not so classic controls like SignalAccuracy control for displaying, for instance, the GPS accuracy like WP7 built-in GPRS/3G signal coverage indicator does. It also provide a tag cloud control for displaying item in the way common web based tag clouds usually offer. ...dnnFiddle: dnnFiddle is a DotNetNuke module that aims to make it easier to add rich content to your DotNetNuke website.DotNetNuke Task Manager: Test Project to learn DotNetNuke and CodePlex IntegrationDSIB - TireService: Project for handling Tiresets/Wheels - looking up dimensions, loadindex, speedindex, etc. for wheelsflyabroad: flyabroadhphai: My ProjectIIS File Manager - Editor: IIS File Manager provides ability to upload files faster through HTTP and it requires no extra installation, just one website with windows authentication lets users upload files easily.KeypItSafe Password Vault: KeypItSafe Password Vault Easily and safely store your website passwords on your computer - or go mobile in just a few clicks! What is KeypItSafe? KeypItSafe is a free open source password manager that helps you store and manage all of your passwords securely on your computer or a USB/Removable Media drive. 
With only a few clicks you can transfer all of your saved passwords to a USB drive and immediately have access to them on virtually any computer. You only have to remember one mas...litwaredk: Sourcecode for the projects on www.litware.dkMVC Pattern Toolkit (Sample): This is a sample MVC pattern toolkit that helps web developers create ASP.NET MVC 2 web applications using advanced tooling and automation, with integrated guidance. This toolkit is provided as a *sample* soley to demonstrate how easy pattern toolkits can be created that provide custom automation and tooling in Visual Studio to speed development. *NOTE*: This pattern toolkit is not intended to demonstrate THE official way to build ASP.NET MVC applications. It is intended to demonstrate ...MyDeveloperCareer: willwymydevelopercarMyPomodoroWatch: My personal Pomodoro watch.NRails: NRailsPayment Gateways: Open source project for integrating payment gateways all over the world into .NET websites and desktop applications. Developers are requested to submit their code and check out the project to start contributing.Preactor Object Model: pom is a Preactor library which provides easy access and manipulation of Preactor data.RgC: RuCSecondPong: blablablaSharp Home: Sharphome is designed to run on Windows and Linux (via the Mono Project) and is designed to be useful in home automation and home security.SharpPTC: SharpPTC is a framebuffer library designed for creating retro games and applications, built on DirectDraw targeting the .NET platform. It provides a simple pixel buffer and methods to ease drawing (line, rectangle, clear etc). SharpPTC also comes with limited keyboard support.simplesocxs: simplesocxsSQL Process Viewer: View all of the processes (that you have security to see) currently running on a SQL database.testtom05312012git02: fdsfdText-To-Speech with Microsoft Translator Service: With this library, you can easily add Text-To-Speech capabilities to your .NET applications. It uses the Microsoft Translator Service to obtain streams of file speaking text in the desired language. At the moment of writing, there are 44 supported languages, including English, Italian, German, French, Spanish, Japanese and Chinese.ultsvn: The description.umbracoCssZenGarden: This is a learning package, It's not supposed to be a best practice for content management, just a fun test to see what a content managed cssZenGarden might be like.Visual Studio Solution Code Format AddIn: 0 people following this project (follow) VS???????? ????????:http://www.cnblogs.com/viter/addin ??Visual Studio 2008?2010,???????????,???????????"version“????。 ????namespace , class , struct , enum , property ,?????????(??????Function)??????Function,????????。 ???????、????。 ????Function????????????xml???,???????BUG,????Function???。 ??? class , struct , enum , property , Function??#region #endregion?????。 ????Property ? Function ???????,?Property????“?????? ”??。 ?????...Viz, Simple 3D Control inspired by Processing.orgVolunteerManager: Its a small app that manages volunteers in a volunteer organization.WebLearningFS: Dokan Based Web Learning File SystemWindowsPhonePusherSLService: Silverlight Pusher for WPhoneWindowsPhonePusherWcfService: for pushing same to wphone tooWPF Encryption: This develop a application for encryption/encode a string or checksum a fileXNAGameFeatures: XNAGameFeatures Project XNAGameFeatures is a XNA 4.0 library which permit to create and manage a XNA game easily. 
The project is separate in five part : BasicGame library Widget library Shapes library Input library Features library Each part is thinking to offer a easy way to the creation of a game in XNA 4.0

    Read the article

  • CodePlex Daily Summary for Sunday, June 05, 2011

    CodePlex Daily Summary for Sunday, June 05, 2011Popular ReleasesSizeOnDisk: 1.0.8.1: Toolbar: Show total of selected folder and hard drive usage of the entire disk Fix: Command binding do not work Execution State Toolbar state by selection Progress task barEnhSim: EnhSim 2.4.6 BETA: 2.4.6 BETAThis release supports WoW patch 4.1 at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in the proper...patterns & practices: Project Silk: Project Silk Community Drop 10 - June 3, 2011: Changes from previous drop: Many code changes: please see the readme.mht for details. New "Application Notifications" chapter. Updated "Server-Side Implementation" chapter. Guidance Chapters Ready for Review The Word documents for the chapters are included with the source code in addition to the CHM to help you provide feedback. The PDF is provided as a separate download for your convenience. Installation Overview To install and run the reference implementation, you must perform the fol...Claims Based Identity & Access Control Guide: Release Candidate: Highlights of this release This is the release candidate drop of the new "Claims Identity Guide" edition. In this release you will find: All code samples, including all ACS v2: ACS as a Federation Provider - Showing authentication with LiveID, Google, etc. ACS as a FP with Multiple Business Partners. ACS and REST endpoints. Using a WP7 client with REST endpoints. All ACS specific chapters. Two new chapters on SharePoint (SSO and Federation) All revised v1 chapters We are now ...Media Companion: MC 3.404b Weekly: Extract the entire archive to a folder which has user access rights, eg desktop, documents etc. Refer to the documentation on this site for the Installation & Setup Guide Important! *** Due to an issue where the date added & the full genre information was not being read into the Movie cache, it is highly recommended that you perform a Rebuild Movies when first running this latest version. This will read in the information from the nfo's & repopulate the cache used by MC during operation. Fi...Terraria Map Generator: TerrariaMapTool 1.0.0.4 Beta: 1) Fixed the generated map.html file so that the file:/// is included in the base path. 2) Added the ability to use parallelization during generation. This will cause the program to use as many threads as there are physical cores. 3) Fixed some background overdraw.DotRas: DotRas v1.2 (Version 1.2.4168): This release includes compiled (and signed) versions of the binaries, PDBs, CHM help documentation, along with both C# and VB.NET examples. Please don't forget to rate the release! If you find a bug, please open a work item and give as much description as possible. Stack traces, which operating system(s) you're targeting, and build type is also very helpful for bug hunting. If you find something you believe to be a bug but are not sure, create a new discussion on the discussions board. 
Thank...Caliburn Micro: WPF, Silverlight and WP7 made easy.: Caliburn.Micro v1.1 RTW: Download ContentsDebug and Release Assemblies Samples Changes.txt License.txt Release Highlights For WP7A new Tombstoning API based on ideas from Fluent NHibernate A new Launcher/Chooser API Strongly typed Navigation SimpleContainer included The full phone lifecycle is made easy to work with ie. we figure out whether your app is actually Resurrecting or just Continuing for you For WPFSupport for the Client Profile Better support for WinForms integration All PlatformsA power...VidCoder: 0.9.1: Added color coding to the Log window. Errors are highlighted in red, HandBrake logs are in black and VidCoder logs are in dark blue. Moved enqueue button to the right with the other control buttons. Added logic to report failures when errors are logged during the encode or when the encode finishes prematurely. Added Copy button to Log window. Adjusted audio track selection box to always show the full track name. Changed encode job progress bar to also be colored yellow when the enco...AutoLoL: AutoLoL v2.0.1: - Fixed a small bug in Auto Login - Fixed the updaterEPPlus-Create advanced Excel 2007 spreadsheets on the server: EPPlus 2.9.0.1: EPPlus-Create advanced Excel 2007 spreadsheets on the server This version has been updated to .Net Framework 3.5 New Features Data Validation. PivotTables (Basic functionalliy...) Support for worksheet data sources. Column, Row, Page and Data fields. Date and Numeric grouping Build in styles. ...and more And some minor new features... Ranges Text-Property|Get the formated value AutofitColumns-method to set the column width from the content of the range LoadFromCollection-metho...jQuery ASP.Net MVC Controls: Version 1.4.0.0: Version 1.4.0.0 contains the following additions: Upgraded to MVC 3.0 Upgraded to jQuery 1.6.1 (Though the project supports all jQuery version from 1.4.x onwards) Upgraded to jqGrid 3.8 Better Razor View-Engine support Better Pager support, includes support for custom pagers Added jqGrid toolbar buttons support Search module refactored, with full suport for multiple filters and ordering And Code cleanup, bug-fixes and better controller configuration support.Nearforums - ASP.NET MVC forum engine: Nearforums v6.0: Version 6.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: Authentication using Membership Provider for SQL Server and MySql Spam prevention: Flood Control Moderation: Flag messages Content management: Pages: Create pages (about us/contact/texts) through web administration Allow nearforums to run as an IIS subapp Migrated Facebook Connect to OAuth 2.0 Visit the project Roadmap for more details.NetOffice - The easiest way to use Office in .NET: NetOffice Release 0.8b: Changes: - fix critical issue 15922(AccessViolationException) once and for all ...update is strongly recommended Known Issues: - some addin ribbon examples has a COM Register function with missing codebase entry(win32 registry) ...the problem is only affected to c# examples. fixed in latest source code. NetOffice\ReleaseTags\NetOffice Release 0.8.rar Includes: - Runtime Binaries and Source Code for .NET Framework:......v2.0, v3.0, v3.5, v4.0 - Tutorials in C# and VB.Net:...................Serviio for Windows Home Server: Beta Release 0.5.2.0: Ready for widespread beta. Synchronized build number to Serviio version to avoid confusion.AcDown????? - Anime&Comic Downloader: AcDown????? v3.0 Beta4: ??AcDown?????????????,??????????????,????、????。?????Acfun????? ????32??64? 
Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??v3.0 Beta4 2011-5-31?? ???Bilibili.us????? ???? ?? ???"????" ???Bilibili.us??? ??????? ?? ??????? ?? ???????? ?? ?? ???Bilibili.us?????(??????????????????) ??????(6.cn)?????(????) ?? ?????Acfun?????????? ?????????????? ???QQ???????? ????????????Discussion...CodeCopy Auto Code Converter: Code Copy v0.1: Full add-in, setup project source code and setup fileWordpress Theme 'Windows Metro': v1.00.0017: v1.00.0017 Dies ist eine Vorab-Version aus der Entwickler-Phase. This is a preview version from the development phase.TerrariViewer: TerrariViewer v2.4.1: Added Piggy Bank editor and fixed some minor bugs.Kooboo CMS: Kooboo CMS 3.02: Updated the Kooboo_CMS.zip at 2011-06-02 11:44 GMT 1.Fixed: Adding data rule issue on page. 2.Fixed: Caching issue for higher performance. Updated the Kooboo_CMS.zip at 2011-06-01 10:00 GMT 1. Fixed the published package throwed a compile error under WebSite mode. 2. Fixed the ContentHelper.NewMediaFolderObject return TextFolder issue. 3. Shorten the name of ContentHelper API. NewMediaFolderObject=>MediaFolder, NewTextFolderObject=> TextFolder, NewSchemaObject=>Schema. Also update the C...New ProjectsbuddyAgent: buddyAgent is a Classified PortalDataReaderMapping: use linq expression make mapping sqldatareader to DTO model easyForeign eXchange Center: Foreign eXchange CenterGoogleAuthCLONE: Use this like the Google Authenticator you may have on your Android device to interact with Google's Two Factor Authentication scheme. This allows you to log into Google without having to whip out your phone. Stores account information securely!Grammar and Spell Checking Plugin for Windows Live Writer: This project is a spelling and grammar checker plugin for Windows Live Writer. It uses the "After the Deadline" web service to provide advanced grammar checking abilities.GUNDEMAS: Gundemas.comInstantMods: LolLig4 => Análise e Desenvolvimento de Sistemas: Beta teste App_SERVERLight Time Sheet: Silverlight Time SheetMort8088s XmlHelper: XmlHelper makes it easier for developers to create and read data in XML documents by condensing common tasks into a simple static helper class. XmlHelper's developed in c#. Feel free to suggest new methods or better ways to achieve the existing methods.NeuroForecast: This project is designed to study neural network technologies on the example of predicting currency quotes.Orchard Forums: a forums moduleRules Engine: Rules Engine is a C# project that makes it easier for developers to define business rules on domain objects without coupling the domain object to the business rule. The rules engine supports cross-field validation and conditional validation. Rules are interface-based and are easily extensible. Rules can be added using a fluent interface.Super Byte: Super Byte makes it easier for everyone to use your computer .You'll no longer have to strugle. We are always looking for new devlopers. It's developed in C#. Need a fast and reliable "Operating System"? Well SuperByte can give that to you. We have spent many hours of work to bring this project to you. Enjoy! Taygeta - The video shadering library: This library allows rendering YUV and RGB pixel formats using Direct3D 9 runtime. You can also apply shaders and add text, bitmap and line overlays.Test Driver Program: Testing driver programTowerDefense: ????,??ogre??YAPE: Yet Another Pacman Emulator.

    Read the article

  • Tips for XNA WP7 Developers

    - by Michael B. McLaughlin
    There are several things any XNA developer should know/consider when coming to the Windows Phone 7 platform. This post assumes you are familiar with the XNA Framework and with the changes between XNA 3.1 and XNA 4.0. It’s not exhaustive; it’s simply a list of things I’ve gathered over time. I may come back and add to it over time, and I’m happy to add anything anyone else has experienced or learned as well. Display · The screen is either 800x480 or 480x800. · But you aren’t required to use only those resolutions. · The hardware scaler on the phone will scale up from 240x240. · One dimension will be capped at 800 and the other at 480; which depends on your code, but you cannot have, e.g., an 800x600 back buffer – that will be created as 800x480. · The hardware scaler will not normally change aspect ratio, though, so no unintended stretching. · Any dimension (width, height, or both) below 240 will be adjusted to 240 (without any aspect ratio adjustment such that, e.g. 200x240 will be treated as 240x240). · Dimensions below 240 will be honored in terms of calculating whether to use portrait or landscape. · If dimensions are exactly equal or if height is greater than width then game will be in portrait. · If width is greater than height, the game will be in landscape. · Landscape games will automatically flip if the user turns the phone 180°; no code required. · Default landscape is top = left. In other words a user holding a phone who starts a landscape game will see the first image presented so that the “top” of the screen is along the right edge of his/her phone, such that the natural behavior would be to turn the phone 90° so that the top of the phone will be held in the user’s left hand and the bottom would be held in the user’s right hand. · The status bar (where the clock, battery power, etc., are found) is hidden when the Game-derived class sets GraphicsDeviceManager.IsFullScreen = true. It is shown when IsFullScreen = false. The default value is false (i.e. the status bar is shown). · You should have a good reason for hiding the status bar. Users find it helpful to know what time it is, how much charge their battery has left, and whether or not their phone is in service range. This is especially true for casual games that you expect someone to play for a few minutes at a time, e.g. while waiting for some event to start, for a phone call to come in, or for a train, bus, or subway to arrive. · In portrait mode, the status bar occupies 32 pixels of space. This means that a game with a back buffer of 480x800 will be scaled down to occupy approximately 461x768 screen pixels. Setting the back buffer to 480x768 (or some resolution with the same 0.625 aspect ratio) will avoid this scaling. · In landscape mode, the status bar occupies 72 pixels of space. This means that a game with a back buffer of 800x480 will be scaled down to occupy approximately 728x437 screen pixels. Setting the back buffer to 728x480 (or some resolution with the same 1.51666667 aspect ratio) will avoid this scaling. Input · Touch input is scaled with screen size. · So if your back buffer is 600x360, a tap in the bottom right corner will come in as (599,359). You don’t need to do anything special to get this automatic scaling of touch behavior. · If you do not use full area of the screen, any touch input outside the area you use will still register as a touch input. For example, if you set a portrait resolution of 240x240, it would be scaled up to occupy a 480x480 area, centered in the screen. 
If you touch anywhere above this area, you will get a touch input of (X,0) where X is a number from 0 to 239 (in accordance with your 240 pixel wide back buffer). Any touch below this area will give a touch input of (X,239). · If you keep the status bar visible, touches within its area will not be passed to your game. · In general, a screen measurement is the diagonal. So a 3.5” screen is 3.5” long from the bottom right corner to the top left corner. With an aspect ratio of 0.6 (480/800 = 0.6), this means that a phone with a 3.5” screen is only approximately 1.8” wide by 3” tall. So there are approximately 267 pixels in an inch on a 3.5” screen. · Again, this time in metric! 3.5 inches is approximately 8.89 cm. So an 8.89 cm screen is 8.89 cm long from the bottom right corner to the top left corner. With an aspect ratio of 0.6, this means that a phone with an 8.89 cm screen is only approximately 4.57 cm wide by 7.62 cm tall. So there are approximately 105 pixels in a centimeter on an 8.89 cm screen. · Think about the size of your finger tip. If you do not have large hands, think about the size of the fingertip of someone with large hands. Consider that when you are sizing your touch input. Especially consider that when you are spacing two touch targets near one another. You need to judge it for yourself, but items that are next to each other and are each 100x100 should be fine when it comes to selecting items individually. Smaller targets than that are ok provided that you leave space between them. · You want your users to have a pleasant experience. Making touch controls too small or too close to one another will make them nervous about whether they will touch the right target. Take this into account when you plan out your game initially. If possible, do some quick size mockups on an actual phone using colored rectangles that you position and size where you plan to have your game controls. Adjust as necessary. · People do not have transparent hands! Nor are their hands the size of a mouse pointer icon. Consider leaving a dedicated space for input rather than forcing the user to cover up to one-third of the screen with a finger just to play the game. · Another benefit of designing your controls to use a dedicated area is that you’re less likely to have players moving their finger(s) so frantically that they accidentally hit the back button, start button, or search button (many phones have one or more of these on the screen itself – it’s easy to hit one by accident and really annoying if you hit, e.g., the search button and then quickly tap back only to find out that the game didn’t save your progress such that you just wasted all the time you spent playing). · People do not like doing somersaults in order to move something forward with accelerometer-based controls. Test your accelerometer-based controls extensively and get a lot of feedback. Very well-known games from noted publishers have created really bad accelerometer controls and been virtually unplayable as a result. Also be wary of exceptions and other possible failures that the documentation warns about. · When done properly, the accelerometer can add a nice touch to your game (see, e.g. ilomilo where the accelerometer was used to move the background; it added a nice touch without frustrating the user; I also think CarniVale does direct accelerometer controls very well). However, if done poorly, it will make your game an abomination unto the Marketplace. 
Days, weeks, perhaps even months of development time that you will never get back. I won’t name names; you can search the marketplace for games with terrible reviews and you’ll find them. Graphics · The maximum frame rate is 30 frames per second. This was set as a compromise between battery life and quality. · At least one model of phone is known to have a screen refresh rate that is between 59 and 60 hertz. Because of this, using a fixed time step with a target frame rate of 30 will cause a slight internal delay to build up as the framework is forced to wait slightly for the next refresh. Eventually the delay will get to the point where a draw is skipped in order to recover from the delay. (See Nick's comment below for clarification.) · To deal with that delay, you can either stay with a fixed time step and set the frame rate slightly lower or else you can go to a variable time step and make sure to adjust all of your update data (e.g. player movement distance) to take into account the elapsed time from the last update. A variable time step makes your update logic slightly more complicated but will avoid frame skips entirely. · Currently there are no custom shaders. This might change in the future (there is no hardware limitation preventing it; it simply wasn’t a feature that could be implemented in the time available before launch). · There are five built-in shaders. You can create a lot of nice effects with the built-in shaders. · There is more power on the CPU than there is on the GPU so things you might typically off-load to the GPU will instead make sense to do on the CPU side. · This is a phone. It is not a PC. It is not an Xbox 360. The emulator runs on a PC and uses the full power of your PC. It is very good for testing your code for bugs and doing early prototyping and layout. You should not use it to measure performance. Use actual phone hardware instead. · There are many phone models, each of which has slightly different performance levels for I/O, screen blitting, CPU performance, etc. Do not take your game right to the performance limit on your phone since for some other phones you might be crossing their limits and leaving players with a bad experience. Leave a cushion to account for hardware differences. · Smaller screened phones will have slightly more dots per inch (dpi). Larger screened phones will have slightly less. Either way, the dpi will be much higher than the typical 96 found on most computer screens. Make sure that whoever is doing art for your game takes this into account. · Screens are only required to have 16 bit color (65,536 colors). This is common among smart phones. Using gradients on a 16 bit display can produce an ugly artifact known as banding. Banding is when, rather than a smooth transition from one color to another, you instead see distinct lines. Be careful to avoid this when possible. Banding can be avoided through careful art creation. Its effects can be minimized and even unnoticeable when the texture in question is always moving. You should be careful not to rely on “looks good on my phone” since some phones do have 32-bit displays and thus you’ll find yourself wondering why you’re getting bad reviews that complain about the graphics. Avoid gradients; if you can’t, make sure they are 16-bit safe. Audio · Never rely on sounds as your sole signal to the player that something is happening in the game. They might have the sound off. They might be playing somewhere loud. Etc. · You have to provide controls to disable sound & music. 
These should be separate. · On at least one model of phone, the volume control API currently has no effect. Players can adjust sound with their hardware volume buttons, but in game selectors simply won’t work. As such, it may not be worth the effort of providing anything beyond on/off switches for sound and music. · MediaPlayer.GameHasControl will return true when a game is hooked up to a PC running Zune. When Zune is running, any attempts to do anything (beyond check GameHasControl) with MediaPlayer will cause an exception to be thrown. If this exception is thrown, catch it and disable music. Exceptions take time to propagate; you don’t want one popping up in every single run of your game’s Update method. · Remember that players can already be listening to music or using the FM radio. In this case GameHasControl will be false and you should handle this appropriately. You can, alternately, ask the player for permission to stop their current music and play your music instead, but the (current) requirement that you restore their music when done is very hard (if not impossible) to deal with. · You can still play sound effects even when the game doesn’t have control of the music, but don’t think this is a backdoor to playing music. Your game will fail certification if your “sound effect” seems to be more like music in scope and length.
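
    To make the Display, Graphics and Audio points above concrete, here is a minimal C# sketch of a WP7 Game subclass. It is not from the article: the 480x768 back buffer, the visible status bar, the slightly-under-30-fps fixed step, the elapsed-time scaling and the GameHasControl guard follow the text above, while the class and field names, the 34 ms step value and the broad catch are illustrative assumptions.

        using System;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;
        using Microsoft.Xna.Framework.Media;

        public class SampleWp7Game : Game
        {
            GraphicsDeviceManager graphics;
            Song backgroundSong;              // hypothetical; would be loaded in LoadContent
            Vector2 playerPosition;
            const float PlayerSpeedPixelsPerSecond = 200f;
            bool musicEnabled = true;

            public SampleWp7Game()
            {
                graphics = new GraphicsDeviceManager(this);

                // Keep the status bar visible (IsFullScreen defaults to false) and use the
                // 0.625-aspect back buffer the article recommends (480x768) so the hardware
                // scaler does not shrink the game around the 32-pixel status bar.
                graphics.IsFullScreen = false;
                graphics.PreferredBackBufferWidth = 480;
                graphics.PreferredBackBufferHeight = 768;
                graphics.SupportedOrientations = DisplayOrientation.Portrait;

                // Fixed step slightly under 30 fps (34 ms here, an illustrative value) to
                // avoid the refresh-rate drift described above. The alternative is
                // IsFixedTimeStep = false plus the elapsed-time scaling done in Update.
                IsFixedTimeStep = true;
                TargetElapsedTime = TimeSpan.FromMilliseconds(34);
            }

            protected override void Update(GameTime gameTime)
            {
                // Scale movement by elapsed time so behaviour stays correct whether the
                // time step is fixed or variable.
                float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;
                playerPosition.X += PlayerSpeedPixelsPerSecond * dt;

                // Only drive MediaPlayer when the game owns the music channel; if Zune or
                // the FM radio is active, calls can throw, so back off on failure.
                if (musicEnabled && MediaPlayer.GameHasControl && backgroundSong != null)
                {
                    try
                    {
                        if (MediaPlayer.State != MediaState.Playing)
                            MediaPlayer.Play(backgroundSong);
                    }
                    catch (Exception)
                    {
                        musicEnabled = false;   // don't retry (and throw) every frame
                    }
                }

                base.Update(gameTime);
            }
        }

    A landscape game would use a 728x480 back buffer and DisplayOrientation.LandscapeLeft | DisplayOrientation.LandscapeRight instead; everything else stays the same.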

    Read the article

  • Android: Strange out of memory issue

    - by Chrispix
    I am not sure where to start to explain this one. I have a list view with a couple image buttons on each row. When you click the list row, it launches a new activity. If you review some of my other posts, I have had to build my own tabs because of an issue w/ the camera layout. The activity that gets launched for result is a map. If I click on my button to launch the image preview (load an image off the sd card) the application returns from the activity back to the listview activity to the result handler to relaunch my new activity which is nothing more than an image widget. So here is the issue, the image preview on the list view is being done w/ the cursor & listadapter. This makes it pretty simple, but I am not sure how I can put a resized (i.e. smaller bit size not pixel) image as the src for the imgbutton on the fly. So I just resized the image that came off the phone camera. The issue is that I get an out of memory error when it tries to go back and re-launch the 2nd activity. ** My question : is there a way I can build the list adapter easily row by row, where I can resize on the fly (bit wise)? - this would be preferable as I also need to make some changes to the properties of the widgets/elements in each row as I am unable to select a row w/ touch screen b/c of focus issue. (I can use roller ball). ** I know I can do an out of band resize and save of my image, but that is not really what I want to do, but some sample code for that would be nice if that is your suggestion. As soon as I disabled the image on the listview it worked fine again. FYI : This is how I was doing it : String[] from = new String[] { DBHelper.KEY_BUSINESSNAME, DBHelper.KEY_ADDRESS, DBHelper.KEY_CITY, DBHelper.KEY_GPSLONG, DBHelper.KEY_GPSLAT, DBHelper.KEY_IMAGEFILENAME + ""}; to = new int[] { R.id.businessname, R.id.address, R.id.city, R.id.gpslong, R.id.gpslat, R.id.imagefilename }; notes = new SimpleCursorAdapter(this, R.layout.notes_row, c, from, to); setListAdapter(notes); Where R.id.imagefilename is a ButtonImage Here is my LogCat 01-25 05:05:49.877: ERROR/dalvikvm-heap(3896): 6291456-byte external allocation too large for this process. 
01-25 05:05:49.877: ERROR/(3896): VM won't let us allocate 6291456 bytes 01-25 05:05:49.877: ERROR/AndroidRuntime(3896): Uncaught handler: thread main exiting due to uncaught exception 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): java.lang.OutOfMemoryError: bitmap size exceeds VM budget 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.graphics.BitmapFactory.nativeDecodeStream(Native Method) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.graphics.BitmapFactory.decodeStream(BitmapFactory.java:304) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.graphics.BitmapFactory.decodeFile(BitmapFactory.java:149) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.graphics.BitmapFactory.decodeFile(BitmapFactory.java:174) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.graphics.drawable.Drawable.createFromPath(Drawable.java:729) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.ImageView.resolveUri(ImageView.java:484) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.ImageView.setImageURI(ImageView.java:281) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.SimpleCursorAdapter.setViewImage(SimpleCursorAdapter.java:183) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.SimpleCursorAdapter.bindView(SimpleCursorAdapter.java:129) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.CursorAdapter.getView(CursorAdapter.java:150) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.AbsListView.obtainView(AbsListView.java:1057) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.ListView.makeAndAddView(ListView.java:1616) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.ListView.fillSpecific(ListView.java:1177) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.ListView.layoutChildren(ListView.java:1454) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.AbsListView.onLayout(AbsListView.java:937) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.view.View.layout(View.java:5611) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1119) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.LinearLayout.layoutHorizontal(LinearLayout.java:1108) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.LinearLayout.onLayout(LinearLayout.java:922) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.view.View.layout(View.java:5611) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.FrameLayout.onLayout(FrameLayout.java:294) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.view.View.layout(View.java:5611) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1119) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.LinearLayout.layoutVertical(LinearLayout.java:999) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.LinearLayout.onLayout(LinearLayout.java:920) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.view.View.layout(View.java:5611) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.FrameLayout.onLayout(FrameLayout.java:294) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.view.View.layout(View.java:5611) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.view.ViewRoot.performTraversals(ViewRoot.java:771) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at 
android.view.ViewRoot.handleMessage(ViewRoot.java:1103) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.os.Handler.dispatchMessage(Handler.java:88) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.os.Looper.loop(Looper.java:123) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.app.ActivityThread.main(ActivityThread.java:3742) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at java.lang.reflect.Method.invokeNative(Native Method) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at java.lang.reflect.Method.invoke(Method.java:515) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:739) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:497) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at dalvik.system.NativeStart.main(Native Method) 01-25 05:10:01.127: ERROR/AndroidRuntime(3943): ERROR: thread attach failed I also have a new error when displaying an image : 01-25 22:13:18.594: DEBUG/skia(4204): xxxxxxxxxxx jpeg error 20 Improper call to JPEG library in state %d 01-25 22:13:18.604: INFO/System.out(4204): resolveUri failed on bad bitmap uri: 01-25 22:13:18.694: ERROR/dalvikvm-heap(4204): 6291456-byte external allocation too large for this process. 01-25 22:13:18.694: ERROR/(4204): VM won't let us allocate 6291456 bytes 01-25 22:13:18.694: DEBUG/skia(4204): xxxxxxxxxxxxxxxxxxxx allocPixelRef failed
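
    Not part of the original question, but one common way to get the "resize on the fly (bit wise)" behaviour it asks about is to bind the image column manually with a SimpleCursorAdapter.ViewBinder and decode with BitmapFactory.Options.inSampleSize, so only a thumbnail-sized bitmap is ever allocated per row. A sketch under those assumptions (R.id.imagefilename and the notes adapter come from the question; TARGET_WIDTH and the class name are made up):

        import android.database.Cursor;
        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;
        import android.view.View;
        import android.widget.ImageButton;
        import android.widget.SimpleCursorAdapter;

        // Binds the image-filename column itself so the full-size camera JPEG is never
        // decoded. R.id.imagefilename comes from the question's layout.
        public class ThumbnailBinder implements SimpleCursorAdapter.ViewBinder {

            private static final int TARGET_WIDTH = 100; // rough on-screen width in pixels

            public boolean setViewValue(View view, Cursor cursor, int columnIndex) {
                if (view.getId() != R.id.imagefilename) {
                    return false;                 // let the adapter bind the text columns
                }
                String path = cursor.getString(columnIndex);

                // Pass 1: read only the dimensions of the file on the SD card.
                BitmapFactory.Options opts = new BitmapFactory.Options();
                opts.inJustDecodeBounds = true;
                BitmapFactory.decodeFile(path, opts);

                // Pass 2: decode at a power-of-two fraction of the original size.
                int sample = 1;
                while ((opts.outWidth / (sample * 2)) >= TARGET_WIDTH) {
                    sample *= 2;
                }
                opts.inJustDecodeBounds = false;
                opts.inSampleSize = sample;
                Bitmap thumb = BitmapFactory.decodeFile(path, opts);

                ((ImageButton) view).setImageBitmap(thumb);
                return true;                      // this column is handled
            }
        }

    It would be hooked up with notes.setViewBinder(new ThumbnailBinder()); right after the SimpleCursorAdapter is created. Decoding still happens on the UI thread here, so a longer list would normally cache the thumbnails or move the decode off-thread.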

    Read the article

  • Windows 2008 RenderFarm Service: CreateProcessAsUser "Session 0 Isolation" and OpenGL

    - by holtavolt
    Hello, I have a legacy Windows server service and (spawned) application that works fine on XP-64 and W2K3, but fails on W2K8. I believe this is because of the new "Session 0 isolation" feature. Consequently, I'm looking for code samples/security settings that let a Windows 2008 service create a new process in a way that restores (and possibly surpasses) the previous behavior. I need a solution that:
    1. Creates the new process in a non-zero session, to get around the Session 0 isolation restrictions (no access to graphics hardware from Session 0). The official MS line on this is: "Because Session 0 is no longer a user session, services that are running in Session 0 do not have access to the video driver. This means that any attempt that a service makes to render graphics fails. Querying the display resolution and color depth in Session 0 reports the correct results for the system up to a maximum of 1920x1200 at 32 bits per pixel."
    2. Gives the new process a window station/desktop (e.g. winsta0\default) that can be used to create window DCs. I've found a solution (that launches OK in an interactive session) for this in "Starting an Interactive Client Process in C++" (link 2 below).
    3. Produces a window DC that, when used as the basis for an OpenGL DescribePixelFormat enumeration (link 3 below), is able to find and use the hardware-accelerated format (on a system appropriately equipped with OpenGL hardware).
    Note that our current solution works OK on XP-64 and W2K3, except when a terminal services session is running (VNC works fine). A solution that also allowed the process to run with OpenGL hardware acceleration even while a terminal services session is open would be fantastic, although not required.
    I'm stuck at item #1 currently. There are some similar postings that discuss this (links 4 and 5 below), but they are not suitable solutions: there is no guarantee that a user session is already logged in to "take" a session id from, nor am I running from a LocalSystem account (the service runs under a domain account whose privileges I can adjust, within reason, although I'd prefer not to escalate them to include SeTcbPrivilege).
    For instance, here's a stub that I think should work, but it always returns error 1314 on the SetTokenInformation call (even though AdjustTokenPrivileges reported no errors). I've also tried alternate strategies involving LogonUser (instead of opening the existing process token), but I can't seem to swap out the session id either way. I'm also dubious about using WTSGetActiveConsoleSessionId in all cases (for instance, if no interactive user is logged in), although a quick test of the service running with no sessions logged in returned a reasonable session value (1). Error handling has been removed for readability (still a bit messy - apologies):
    //Also tried using LogonUser(..) here
    OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY | TOKEN_ADJUST_PRIVILEGES | TOKEN_ADJUST_SESSIONID | TOKEN_ADJUST_DEFAULT | TOKEN_ASSIGN_PRIMARY | TOKEN_DUPLICATE, &hToken)
    GetTokenInformation(hToken, TokenSessionId, &logonSessionId, sizeof(DWORD), &dwTokenLength)
    DWORD consoleSessionId = WTSGetActiveConsoleSessionId();
    /* Can't use this - requires very elevated privileges (LOCAL only, SeTcbPrivilege as well)
    if (!WTSQueryUserToken(consoleSessionId, &hToken)) ...
    */
    DuplicateTokenEx(hToken, (TOKEN_QUERY | TOKEN_ADJUST_PRIVILEGES | TOKEN_ADJUST_SESSIONID | TOKEN_ADJUST_DEFAULT | TOKEN_ASSIGN_PRIMARY | TOKEN_DUPLICATE), NULL, SecurityIdentification, TokenPrimary, &hDupToken)
    // Look up the LUID for the TCB Name privilege.
    LookupPrivilegeValue(NULL, SE_TCB_NAME, &tp.Privileges[0].Luid)
    // Enable the TCB Name privilege in the token.
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    if (!AdjustTokenPrivileges(hDupToken, FALSE, &tp, sizeof(TOKEN_PRIVILEGES), NULL, 0)) { DisplayError("AdjustTokenPrivileges"); ... }
    if (GetLastError() == ERROR_NOT_ALL_ASSIGNED) { DEBUG("Token does not have the necessary privilege.\n"); } else { DEBUG("No error reported from AdjustTokenPrivileges!\n"); } // Never errors here
    DEBUG(LM_INFO, "Attempting setting of sessionId to: %d\n", consoleSessionId);
    if (!SetTokenInformation(hDupToken, TokenSessionId, &consoleSessionId, sizeof(DWORD)))
    *** ALWAYS FAILS WITH 1314 HERE ***
    All the debug output looks fine up until the SetTokenInformation call - I see session 0 is my current process session, and in my case it's trying to set session 1 (the result of WTSGetActiveConsoleSessionId). (Note that I'm logged into the W2K8 box via VNC, not RDC.)
    So, the questions: Is this approach valid, or are all service-initiated processes intentionally restricted to session 0? Is there a better approach (short of "launch on logon" plus auto-logon for the servers)? Is there something wrong with this code, or a different way to create a process token where I can swap out the session id to indicate I want to spawn the process in a new session? I did try using LogonUser instead of OpenProcessToken, but that didn't work either. (I don't care whether all spawned processes share the same non-zero session at this point.) Any help much appreciated - thanks!
    2: http://msdn.microsoft.com/en-us/library/aa379608(VS.85).aspx
    3: http://www.opengl.org/resources/faq/technical/mswindows.htm
    4: http://stackoverflow.com/questions/2237696/creating-a-process-in-a-non-zero-session-from-a-service-in-windows-2008-server
    5: http://stackoverflow.com/questions/1602996/how-can-i-lauch-a-process-which-has-a-ui-from-windows-service
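    For what it's worth, error 1314 is ERROR_PRIVILEGE_NOT_HELD: changing a token's session id with SetTokenInformation(TokenSessionId, ...) requires the calling process to hold SeTcbPrivilege, so enabling the privilege on the duplicated token alone does not help when the service account never had it. The documented route is to run the service as LocalSystem (which has SeTcbPrivilege) and use WTSQueryUserToken plus CreateProcessAsUser. The sketch below only illustrates that route - it assumes LocalSystem and an interactive user being logged on, two constraints this question is trying to avoid - and is not a drop-in fix:

        #include <windows.h>
        #include <wtsapi32.h>
        #include <userenv.h>
        #pragma comment(lib, "wtsapi32.lib")
        #pragma comment(lib, "userenv.lib")

        // Minimal sketch: launch cmdLine in the active console session from a service
        // running as LocalSystem (which already holds SeTcbPrivilege).
        BOOL LaunchInConsoleSession(LPWSTR cmdLine)
        {
            DWORD sessionId = WTSGetActiveConsoleSessionId();
            if (sessionId == 0xFFFFFFFF) return FALSE;        // no console session attached

            HANDLE hUserToken = NULL;
            if (!WTSQueryUserToken(sessionId, &hUserToken))   // fails if nobody is logged on
                return FALSE;

            LPVOID env = NULL;
            CreateEnvironmentBlock(&env, hUserToken, FALSE);

            STARTUPINFOW si = { sizeof(si) };
            wchar_t desktop[] = L"winsta0\\default";
            si.lpDesktop = desktop;
            PROCESS_INFORMATION pi = { 0 };

            BOOL ok = CreateProcessAsUserW(hUserToken, NULL, cmdLine, NULL, NULL, FALSE,
                                           CREATE_UNICODE_ENVIRONMENT, env, NULL, &si, &pi);
            if (ok) { CloseHandle(pi.hThread); CloseHandle(pi.hProcess); }
            if (env) DestroyEnvironmentBlock(env);
            CloseHandle(hUserToken);
            return ok;
        }

    Note that WTSQueryUserToken fails when no user is logged on to the target session, so the "no guarantee of a logged-in user" caveat above still applies to this approach.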

    Read the article

  • Firefox throwing an exception with HTML Canvas putImageData

    - by mr.doob
    So I was working on this little javascript experiment and I needed a widget to track the FPS of it. I ported a widget I've been using with Actionscript 3 to Javascript and it seems to be working fine with Chrome/Safari but on Firefox is throwing an exception. This is the experiment: Depth of Field This is the error: [Exception... "An invalid or illegal string was specified" code: "12" nsresult: "0x8053000c (NS_ERROR_DOM_SYNTAX_ERR)" location: "http://mrdoob.com/projects/chromeexperiments/depth_of_field__debug/js/net/hires/debug/Stats.js Line: 105"] The line that is complaning about is this one: graph.putImageData(graphData, 1, 0, 0, 0, 69, 50); Which is a crappy code to "scroll" the bitmap pixels. The idea is that I only draw a few pixels on the left of the bitmap and then on the next frame I copy the whole bitmap and paste it on pixel to the right. This error usually is thrown because you're pasting a bitmap bigger than the source and it's going off the limits, but in theory that shouldn't be the case as I'm defining 69 as the width of the rectangle to paste (being the bitmap 70px wide). And this is full code: var Stats = { baseFps: null, timer: null, timerStart: null, timerLast: null, fps: null, ms: null, container: null, fpsText: null, msText: null, memText: null, memMaxText: null, graph: null, graphData: null, init: function(userfps) { baseFps = userfps; timer = 0; timerStart = new Date() - 0; timerLast = 0; fps = 0; ms = 0; container = document.createElement("div"); container.style.fontFamily = 'Arial'; container.style.fontSize = '10px'; container.style.backgroundColor = '#000033'; container.style.width = '70px'; container.style.paddingTop = '2px'; fpsText = document.createElement("div"); fpsText.style.color = '#ffff00'; fpsText.style.marginLeft = '3px'; fpsText.style.marginBottom = '-3px'; fpsText.innerHTML = "FPS:"; container.appendChild(fpsText); msText = document.createElement("div"); msText.style.color = '#00ff00'; msText.style.marginLeft = '3px'; msText.style.marginBottom = '-3px'; msText.innerHTML = "MS:"; container.appendChild(msText); memText = document.createElement("div"); memText.style.color = '#00ffff'; memText.style.marginLeft = '3px'; memText.style.marginBottom = '-3px'; memText.innerHTML = "MEM:"; container.appendChild(memText); memMaxText = document.createElement("div"); memMaxText.style.color = '#ff0070'; memMaxText.style.marginLeft = '3px'; memMaxText.style.marginBottom = '3px'; memMaxText.innerHTML = "MAX:"; container.appendChild(memMaxText); var canvas = document.createElement("canvas"); canvas.width = 70; canvas.height = 50; container.appendChild(canvas); graph = canvas.getContext("2d"); graph.fillStyle = '#000033'; graph.fillRect(0, 0, canvas.width, canvas.height ); graphData = graph.getImageData(0, 0, canvas.width, canvas.height); setInterval(this.update, 1000/baseFps); return container; }, update: function() { timer = new Date() - timerStart; if ((timer - 1000) > timerLast) { fpsText.innerHTML = "FPS: " + fps + " / " + baseFps; timerLast = timer; graph.putImageData(graphData, 1, 0, 0, 0, 69, 50); graph.fillRect(0,0,1,50); graphData = graph.getImageData(0, 0, 70, 50); var index = ( Math.floor(Math.min(50, (fps / baseFps) * 50)) * 280 /* 70 * 4 */ ); graphData.data[index] = graphData.data[index + 1] = 256; index = ( Math.floor(Math.min(50, 50 - (timer - ms) * .5)) * 280 /* 70 * 4 */ ); graphData.data[index + 1] = 256; graph.putImageData (graphData, 0, 0); fps = 0; } ++fps; msText.innerHTML = "MS: " + (timer - ms); ms = timer; } } Any ideas? Thanks in advance.
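    The seven-argument "dirty rectangle" form of putImageData is the only unusual call on the line Firefox complains about, so one pragmatic workaround (offered as a sketch, assuming the canvas element is kept in a variable reachable from update()) is to avoid that form entirely and scroll the pixels with drawImage, which every canvas implementation supports:

        // inside update(), instead of graph.putImageData(graphData, 1, 0, 0, 0, 69, 50):
        // copy columns 0..68 of the canvas onto itself, shifted one pixel to the right
        graph.drawImage(canvas, 0, 0, 69, 50, 1, 0, 69, 50);
        // clear the leftmost column for the new samples
        graph.fillRect(0, 0, 1, 50);
        // re-read the pixels only because the existing code pokes graphData.data directly
        graphData = graph.getImageData(0, 0, 70, 50);
        // ...set the FPS/MS pixels as before, then:
        graph.putImageData(graphData, 0, 0);   // the 3-argument form works everywhere

    If drawing a canvas onto itself misbehaves in some browser, copying through a small off-screen canvas first achieves the same scroll.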

    Read the article

  • Please help me correct the small bugs in this image editor

    - by Alex
    Hi, I'm working on a website that will sell hand made jewelry and I'm finishing the image editor, but it's not behaving quite right. Basically, the user uploads an image which will be saved as a source and then it will be resized to fit the user's screen and saved as a temp. The user will then go to a screen that will allow them to crop the image and then save it to it's final versions. All of that works fine, except, the final versions have 3 bugs. First is some black horizontal line on the very bottom of the image. Second is an outline of sorts that follows the edges. I thought it was because I was reducing the quality, but even at 100% it still shows up... And lastly, I've noticed that the cropped image is always a couple of pixels lower than what I'm specifying... Anyway, I'm hoping someone whose got experience in editing images with C# can maybe take a look at the code and see where I might be going off the right path. Oh, by the way, this in an ASP.NET MVC application. Here's the code: using System; using System.Drawing; using System.Drawing.Drawing2D; using System.Drawing.Imaging; using System.IO; using System.Linq; using System.Web; namespace Website.Models.Providers { public class ImageProvider { private readonly ProductProvider ProductProvider = null; private readonly EncoderParameters HighQualityEncoder = new EncoderParameters(); private readonly ImageCodecInfo JpegCodecInfo = ImageCodecInfo.GetImageEncoders().Single( c => (c.MimeType == "image/jpeg")); private readonly string Path = HttpContext.Current.Server.MapPath("~/Resources/Images/Products"); private readonly short[][] Dimensions = new short[3][] { new short[2] { 640, 480 }, new short[2] { 240, 0 }, new short[2] { 80, 60 } }; ////////////////////////////////////////////////////////// // Constructor ////////////////////////////////////////// ////////////////////////////////////////////////////////// public ImageProvider( ProductProvider ProductProvider) { this.ProductProvider = ProductProvider; HighQualityEncoder.Param[0] = new EncoderParameter(Encoder.Quality, 100L); } ////////////////////////////////////////////////////////// // Crop ////////////////////////////////////////////// ////////////////////////////////////////////////////////// public void Crop( string FileName, Image Image, Crop Crop) { using (Bitmap Source = new Bitmap(Image)) { using (Bitmap Target = new Bitmap(Crop.Width, Crop.Height)) { using (Graphics Graphics = Graphics.FromImage(Target)) { Graphics.InterpolationMode = InterpolationMode.HighQualityBicubic; Graphics.SmoothingMode = SmoothingMode.HighQuality; Graphics.PixelOffsetMode = PixelOffsetMode.HighQuality; Graphics.CompositingQuality = CompositingQuality.HighQuality; Graphics.DrawImage(Source, new Rectangle(0, 0, Target.Width, Target.Height), new Rectangle(Crop.Left, Crop.Top, Crop.Width, Crop.Height), GraphicsUnit.Pixel); }; Target.Save(FileName, JpegCodecInfo, HighQualityEncoder); }; }; } ////////////////////////////////////////////////////////// // Crop & Resize ////////////////////////////////////// ////////////////////////////////////////////////////////// public void CropAndResize( Product Product, Crop Crop) { using (Image Source = Image.FromFile(String.Format("{0}/{1}.source", Path, Product.ProductId))) { using (Image Temp = Image.FromFile(String.Format("{0}/{1}.temp", Path, Product.ProductId))) { float Percent = ((float)Source.Width / (float)Temp.Width); short Width = (short)(Temp.Width * Percent); short Height = (short)(Temp.Height * Percent); Crop.Height = (short)(Crop.Height * 
Percent); Crop.Left = (short)(Crop.Left * Percent); Crop.Top = (short)(Crop.Top * Percent); Crop.Width = (short)(Crop.Width * Percent); Img Img = new Img(); this.ProductProvider.AddImageAndSave(Product, Img); this.Crop(String.Format("{0}/{1}.cropped", Path, Img.ImageId), Source, Crop); using (Image Cropped = Image.FromFile(String.Format("{0}/{1}.cropped", Path, Img.ImageId))) { this.Resize(this.Dimensions[0], String.Format("{0}/{1}-L.jpg", Path, Img.ImageId), Cropped, HighQualityEncoder); this.Resize(this.Dimensions[1], String.Format("{0}/{1}-T.jpg", Path, Img.ImageId), Cropped, HighQualityEncoder); this.Resize(this.Dimensions[2], String.Format("{0}/{1}-S.jpg", Path, Img.ImageId), Cropped, HighQualityEncoder); }; }; }; this.Purge(Product); } ////////////////////////////////////////////////////////// // Queue ////////////////////////////////////////////// ////////////////////////////////////////////////////////// public void QueueFor( Product Product, HttpPostedFileBase PostedFile) { using (Image Image = Image.FromStream(PostedFile.InputStream)) { this.Resize(new short[2] { 1152, 0 }, String.Format("{0}/{1}.temp", Path, Product.ProductId), Image, HighQualityEncoder); }; PostedFile.SaveAs(String.Format("{0}/{1}.source", Path, Product.ProductId)); } ////////////////////////////////////////////////////////// // Purge ////////////////////////////////////////////// ////////////////////////////////////////////////////////// private void Purge( Product Product) { string Source = String.Format("{0}/{1}.source", Path, Product.ProductId); string Temp = String.Format("{0}/{1}.temp", Path, Product.ProductId); if (File.Exists(Source)) { File.Delete(Source); }; if (File.Exists(Temp)) { File.Delete(Temp); }; foreach (Img Img in Product.Imgs) { string Cropped = String.Format("{0}/{1}.cropped", Path, Img.ImageId); if (File.Exists(Cropped)) { File.Delete(Cropped); }; }; } ////////////////////////////////////////////////////////// // Resize ////////////////////////////////////////////// ////////////////////////////////////////////////////////// public void Resize( short[] Dimensions, string FileName, Image Image, EncoderParameters EncoderParameters) { if (Dimensions[1] == 0) { Dimensions[1] = (short)(Image.Height / ((float)Image.Width / (float)Dimensions[0])); }; using (Bitmap Bitmap = new Bitmap(Dimensions[0], Dimensions[1])) { using (Graphics Graphics = Graphics.FromImage(Bitmap)) { Graphics.InterpolationMode = InterpolationMode.HighQualityBicubic; Graphics.SmoothingMode = SmoothingMode.HighQuality; Graphics.PixelOffsetMode = PixelOffsetMode.HighQuality; Graphics.CompositingQuality = CompositingQuality.HighQuality; Graphics.DrawImage(Image, 0, 0, Dimensions[0], Dimensions[1]); }; Bitmap.Save(FileName, JpegCodecInfo, EncoderParameters); }; } } } Here's one of the images this produces:
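    One thing worth trying for the faint outline (not necessarily this code's exact bug, just a common GDI+ remedy): when DrawImage resamples right up to the bitmap edge, the bicubic filter samples "outside" the source and blends in transparent black, which shows up as a dark line or halo along the borders. Passing an ImageAttributes with WrapMode.TileFlipXY makes it sample mirrored pixels instead. A sketch against the Crop method above, using its variable names (needs using System.Drawing.Drawing2D; and using System.Drawing.Imaging; at the top of the file):

        // inside Crop(...), in place of the existing Graphics.DrawImage call
        using (ImageAttributes attributes = new ImageAttributes())
        {
            // avoid sampling transparent black beyond the source edges
            attributes.SetWrapMode(WrapMode.TileFlipXY);
            Graphics.DrawImage(Source,
                new Rectangle(0, 0, Target.Width, Target.Height),
                Crop.Left, Crop.Top, Crop.Width, Crop.Height,
                GraphicsUnit.Pixel, attributes);
        }

    The consistent couple-of-pixel offset in the crop is also worth checking against the (short)(... * Percent) truncations in CropAndResize; using Math.Round before casting keeps the scaled coordinates from drifting.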

    Read the article

  • How to get rid of previous reflection when reflecting a UIImageView (with changing pictures)?

    - by epale
    Hi everyone, I have managed to use the reflection sample app from apple to create a reflection from a UIImageView. But the problem is that when I change the picture inside the UIImageView, the reflection from the previous displayed picture remains on the screen. The new reflection on the next picture then overlaps the previous reflection. How do I ensure that the previous reflection is removed when I change to the next picture? Thank you so much. I hope my question is not too basic. Here is the codes which i used so far: //reflection self.view.autoresizesSubviews = YES; self.view.userInteractionEnabled = YES; // create the reflection view CGRect reflectionRect = currentView.frame; // the reflection is a fraction of the size of the view being reflected reflectionRect.size.height = reflectionRect.size.height * kDefaultReflectionFraction; // and is offset to be at the bottom of the view being reflected reflectionRect = CGRectOffset(reflectionRect, 0, currentView.frame.size.height); reflectionView = [[UIImageView alloc] initWithFrame:reflectionRect]; // determine the size of the reflection to create NSUInteger reflectionHeight = currentView.bounds.size.height * kDefaultReflectionFraction; // create the reflection image, assign it to the UIImageView and add the image view to the containerView reflectionView.image = [self reflectedImage:currentView withHeight:reflectionHeight]; reflectionView.alpha = kDefaultReflectionOpacity; [self.view addSubview:reflectionView]; //reflection */ Then the codes below are used to form the reflection: CGImageRef CreateGradientImage(int pixelsWide, int pixelsHigh) { CGImageRef theCGImage = NULL; // gradient is always black-white and the mask must be in the gray colorspace CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray(); // create the bitmap context CGContextRef gradientBitmapContext = CGBitmapContextCreate(nil, pixelsWide, pixelsHigh, 8, 0, colorSpace, kCGImageAlphaNone); // define the start and end grayscale values (with the alpha, even though // our bitmap context doesn't support alpha the gradient requires it) CGFloat colors[] = {0.0, 1.0, 1.0, 1.0}; // create the CGGradient and then release the gray color space CGGradientRef grayScaleGradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2); CGColorSpaceRelease(colorSpace); // create the start and end points for the gradient vector (straight down) CGPoint gradientStartPoint = CGPointZero; CGPoint gradientEndPoint = CGPointMake(0, pixelsHigh); // draw the gradient into the gray bitmap context CGContextDrawLinearGradient(gradientBitmapContext, grayScaleGradient, gradientStartPoint, gradientEndPoint, kCGGradientDrawsAfterEndLocation); CGGradientRelease(grayScaleGradient); // convert the context into a CGImageRef and release the context theCGImage = CGBitmapContextCreateImage(gradientBitmapContext); CGContextRelease(gradientBitmapContext); // return the imageref containing the gradient return theCGImage; } CGContextRef MyCreateBitmapContext(int pixelsWide, int pixelsHigh) { CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); // create the bitmap context CGContextRef bitmapContext = CGBitmapContextCreate (nil, pixelsWide, pixelsHigh, 8, 0, colorSpace, // this will give us an optimal BGRA format for the device: (kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst)); CGColorSpaceRelease(colorSpace); return bitmapContext; } (UIImage *)reflectedImage:(UIImageView *)fromImage withHeight:(NSUInteger)height { if (!height) return nil; // create a bitmap graphics context 
the size of the image CGContextRef mainViewContentContext = MyCreateBitmapContext(fromImage.bounds.size.width, height); // offset the context - // This is necessary because, by default, the layer created by a view for caching its content is flipped. // But when you actually access the layer content and have it rendered it is inverted. Since we're only creating // a context the size of our reflection view (a fraction of the size of the main view) we have to translate the // context the delta in size, and render it. // CGFloat translateVertical= fromImage.bounds.size.height - height; CGContextTranslateCTM(mainViewContentContext, 0, -translateVertical); // render the layer into the bitmap context CALayer *layer = fromImage.layer; [layer renderInContext:mainViewContentContext]; // create CGImageRef of the main view bitmap content, and then release that bitmap context CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext); CGContextRelease(mainViewContentContext); // create a 2 bit CGImage containing a gradient that will be used for masking the // main view content to create the 'fade' of the reflection. The CGImageCreateWithMask // function will stretch the bitmap image as required, so we can create a 1 pixel wide gradient CGImageRef gradientMaskImage = CreateGradientImage(1, height); // create an image by masking the bitmap of the mainView content with the gradient view // then release the pre-masked content bitmap and the gradient bitmap CGImageRef reflectionImage = CGImageCreateWithMask(mainViewContentBitmapContext, gradientMaskImage); CGImageRelease(mainViewContentBitmapContext); CGImageRelease(gradientMaskImage); // convert the finished reflection image to a UIImage UIImage *theImage = [UIImage imageWithCGImage:reflectionImage]; // image is retained by the property setting above, so we can release the original CGImageRelease(reflectionImage); return theImage; } */
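    The lingering reflection is most simply explained by the fact that the code above allocates a fresh UIImageView and adds it as another subview every time, so the previous one stays behind. A small sketch of a fix (assuming reflectionView is the instance variable shown above, and pre-ARC memory management as in the rest of the code):

        // run this before the "create the reflection view" block whenever the picture changes
        if (reflectionView != nil) {
            [reflectionView removeFromSuperview];
            [reflectionView release];   // drop this line if the project uses ARC
            reflectionView = nil;
        }
        // ...then the existing code can allocate and add the new reflectionView...

    Alternatively, keep a single reflectionView around and only reassign its image property each time the displayed picture changes.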

    Read the article

  • Java image conversion to RGB565

    - by Vladimir
    I try to convert image to RGB565 format. I read this image: BufferedImage bufImg = ImageIO.read(imagePathFile); sendImg = new BufferedImage(CONTROLLER_LCD_WIDTH/*320*/, CONTROLLER_LCD_HEIGHT/*240*/, BufferedImage.TYPE_USHORT_565_RGB); sendImg .getGraphics().drawImage(bufImg, 0, 0, CONTROLLER_LCD_WIDTH/*320*/, CONTROLLER_LCD_HEIGHT/*240*/, null); Here is it: Then I convert it to RGB565: int numByte=0; byte[] OutputImageArray = new byte[CONTROLLER_LCD_WIDTH*CONTROLLER_LCD_HEIGHT*2]; int i=0; int j=0; int len = OutputImageArray.length; for (i=0;i<CONTROLLER_LCD_WIDTH;i++) { for (j=0;j<CONTROLLER_LCD_HEIGHT;j++) { Color c = new Color(sendImg.getRGB(i, j)); int aRGBpix = sendImg.getRGB(i, j); int alpha; int red = c.getRed(); int green = c.getGreen(); int blue = c.getBlue(); //RGB888 red = (aRGBpix >> 16) & 0x0FF; green = (aRGBpix >> 8) & 0x0FF; blue = (aRGBpix >> 0) & 0x0FF; alpha = (aRGBpix >> 24) & 0x0FF; //RGB565 red = red >> 3; green = green >> 2; blue = blue >> 3; //A pixel is represented by a 4-byte (32 bit) integer, like so: //00000000 00000000 00000000 11111111 //^ Alpha ^Red ^Green ^Blue //Converting to RGB565 short pixel_to_send = 0; int pixel_to_send_int = 0; pixel_to_send_int = (red << 11) | (green << 5) | (blue); pixel_to_send = (short) pixel_to_send_int; //dividing into bytes byte byteH=(byte)((pixel_to_send >> 8) & 0x0FF); byte byteL=(byte)(pixel_to_send & 0x0FF); //Writing it to array - High-byte is second OutputImageArray[numByte]=byteH; OutputImageArray[numByte+1]=byteL; numByte+=2; } } Then I try to restore this from resulting array OutputImageArray: i=0; j=0; numByte=0; BufferedImage NewImg = new BufferedImage(CONTROLLER_LCD_WIDTH, CONTROLLER_LCD_HEIGHT, BufferedImage.TYPE_USHORT_565_RGB); for (i=0;i<CONTROLLER_LCD_WIDTH;i++) { for (j=0;j<CONTROLLER_LCD_HEIGHT;j++) { int curPixel=0; int alpha=0x0FF; int red; int green; int blue; byte byteL=0; byte byteH=0; byteH = OutputImageArray[numByte]; byteL = OutputImageArray[numByte+1]; curPixel= (byteH << 8) | (byteL); //RGB565 red = (curPixel >> (6+5)) & 0x01F; green = (curPixel >> 5) & 0x03F; blue = (curPixel) & 0x01F; //RGB888 red = red << 3; green = green << 2; blue = blue << 3; //aRGB curPixel = 0; curPixel = (alpha << 24) | (red << 16) | (green << 8) | (blue); NewImg.setRGB(i, j, curPixel); numByte+=2; } } I output this restored image. But I see that it looks very poor. I expected the lost of pictures quality. But as I thought, this picture has to have almost the same quality as the previous picture. - Is it right? Is my code right?
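    One likely culprit, assuming the rest of the pipeline is exactly as shown: Java bytes are signed, so (byteH << 8) | (byteL) sign-extends any byte value above 0x7F and sets all the upper bits of curPixel, which corrupts the reconstructed RGB565 value and makes the restored image look noisy. Masking both bytes to unsigned before recombining - a small sketch of the first lines of the restore loop - should make the round trip nearly lossless apart from the genuine 888-to-565 quantization:

        // mask to unsigned before recombining; otherwise negative byte values sign-extend
        int curPixel = ((byteH & 0xFF) << 8) | (byteL & 0xFF);

        // RGB565 -> RGB888
        int red   = ((curPixel >> 11) & 0x1F) << 3;
        int green = ((curPixel >> 5)  & 0x3F) << 2;
        int blue  = ( curPixel        & 0x1F) << 3;

        int argb = (0xFF << 24) | (red << 16) | (green << 8) | blue;
        NewImg.setRGB(i, j, argb);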

    Read the article

  • Anchor bug on IE9, anchor hitbox overlapped

    - by user1456399
    The menu below works just fine in Firefox and Chrome, but in IE9, but hitbox for the "HOME" anchor is being covered partially by the list item behind it except for the actual test, and a single horizontal pixel line at the top and bottom of the button. Can anyone see what I'm missing? Code to follow: index.html <html lang="en"> <head> <meta http-equiv="Content-Type" content="text/html;charset=utf-8" > <title>Navigation</title> <style> .menu-container { width:960px; height:45px; margin:0 auto; position:relative; z-index:2; background-color: #3d3d3d; margin-bottom: 20px; } .menu-navigation { font-family:Arial, Helvetica, sans-serif; font-size:13px; list-style:none; padding:0; margin:0; text-align: right; font-size: 0; /* Removes whitespace between <li> */ } .menu-navigation > li { display:inline; margin:0; border-left:solid 1px #242424; background-color:transparent; font-size: 12px; padding: 15px 0 15px 0; } .menu-navigation > li:hover, .menu-navigation > li.current-menu-item { background-color:#202020; } .menu-navigation > li.active, .menu-navigation > li.active:hover { background-color:#131313; } .menu-navigation > li a { text-decoration:none; color:#bbbbbb; } .menu-navigation > li a:hover { color:#efefef; } .menu-navigation > li span a { color:#ffffff; } .menu-navigation > li a:focus { outline:none; } .menu-navigation > li.menu-item-parent > span, .menu-navigation > li > span > a { text-transform:uppercase; outline:0; text-decoration:none; color:#ffffff; text-shadow:1px 1px 1px #000000; line-height: 45px; display: inline-block; height: 45px; padding: 0 20px 0 20px; } .menu-navigation > li.menu-item-parent > span { background:url("../img/down.png") no-repeat right center; } .menu-navigation > li.menu-item-parent > span:hover, .menu-navigation > li > span > a:hover { cursor:pointer; } </style> </head> <body> <div class="menu-container"> <ul class="menu-navigation"> <li><span><a href="#">This is a link</a></span></li> <li><span><a href="#">This is a link</a></span></li> <li class="menu-item-parent"><span>Not a link</span></li> <li class="menu-item-parent"><span>Not a link</span></li> <li class="menu-item-parent"><span>Not a link</span></li> </ul> </div> </body> </html> EDIT: Alright, I've changed the code to isolate the issue, and narrowed down the problem CSS. Simply adding a background image fixes the issue. The code is set up so it styles so the hit boxes for all the menu items are the same regardless if there's a link in them or not to facilitate an "onClick" drop down on those menu items without anchors. Refer to lines 81 - 86. Remove those lines (just a background image), and the hit box issue is seen in the menu items "NOT A LINK" as well. Further, the act of adding a background image fixes the hit box issue of "THIS IS A LINK" menu items. Simply add the following snippet to the CSS: .menu-navigation > li > span > a { background:url("../img/down.png") no-repeat right center; } Adding a background image, even if there is no actual image, is changing something with regards to the hit box... I'm just not sure what...
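    Since the asker's own edit shows that adding a background image is what fixes the hit-testing, a low-impact workaround (only a sketch that restates that finding; positioning the image off-screen keeps the arrow graphic from actually showing on link items) is to give the anchor spans a background as well:

        .menu-navigation > li > span > a {
            /* any background makes IE9 hit-test the whole padded box; position it
               off-screen so down.png never becomes visible on these items */
            background: url("../img/down.png") no-repeat -9999px -9999px;
        }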

    Read the article

  • Getting a NullPointerException at seemingly random intervals, not sure why

    - by Miles
    I'm running an example from a Kinect library for Processing (http://www.shiffman.net/2010/11/14/kinect-and-processing/) and sometimes get a NullPointerException pointing to this line: int rawDepth = depth[offset]; The depth array is created in this line: int[] depth = kinect.getRawDepth(); I'm not exactly sure what a NullPointerException is, and much googling hasn't really helped. It seems odd to me that the code compiles 70% of the time and returns the error unpredictably. Could the hardware itself be affecting it? Here's the whole example if it helps: // Daniel Shiffman // Kinect Point Cloud example // http://www.shiffman.net // https://github.com/shiffman/libfreenect/tree/master/wrappers/java/processing import org.openkinect.*; import org.openkinect.processing.*; // Kinect Library object Kinect kinect; float a = 0; // Size of kinect image int w = 640; int h = 480; // We'll use a lookup table so that we don't have to repeat the math over and over float[] depthLookUp = new float[2048]; void setup() { size(800,600,P3D); kinect = new Kinect(this); kinect.start(); kinect.enableDepth(true); // We don't need the grayscale image in this example // so this makes it more efficient kinect.processDepthImage(false); // Lookup table for all possible depth values (0 - 2047) for (int i = 0; i < depthLookUp.length; i++) { depthLookUp[i] = rawDepthToMeters(i); } } void draw() { background(0); fill(255); textMode(SCREEN); text("Kinect FR: " + (int)kinect.getDepthFPS() + "\nProcessing FR: " + (int)frameRate,10,16); // Get the raw depth as array of integers int[] depth = kinect.getRawDepth(); // We're just going to calculate and draw every 4th pixel (equivalent of 160x120) int skip = 4; // Translate and rotate translate(width/2,height/2,-50); rotateY(a); for(int x=0; x<w; x+=skip) { for(int y=0; y<h; y+=skip) { int offset = x+y*w; // Convert kinect data to world xyz coordinate int rawDepth = depth[offset]; PVector v = depthToWorld(x,y,rawDepth); stroke(255); pushMatrix(); // Scale up by 200 float factor = 200; translate(v.x*factor,v.y*factor,factor-v.z*factor); // Draw a point point(0,0); popMatrix(); } } // Rotate a += 0.015f; } // These functions come from: http://graphics.stanford.edu/~mdfisher/Kinect.html float rawDepthToMeters(int depthValue) { if (depthValue < 2047) { return (float)(1.0 / ((double)(depthValue) * -0.0030711016 + 3.3309495161)); } return 0.0f; } PVector depthToWorld(int x, int y, int depthValue) { final double fx_d = 1.0 / 5.9421434211923247e+02; final double fy_d = 1.0 / 5.9104053696870778e+02; final double cx_d = 3.3930780975300314e+02; final double cy_d = 2.4273913761751615e+02; PVector result = new PVector(); double depth = depthLookUp[depthValue];//rawDepthToMeters(depthValue); result.x = (float)((x - cx_d) * depth * fx_d); result.y = (float)((y - cy_d) * depth * fy_d); result.z = (float)(depth); return result; } void stop() { kinect.quit(); super.stop(); } And here are the errors: processing.app.debug.RunnerException: NullPointerException at processing.app.Sketch.placeException(Sketch.java:1543) at processing.app.debug.Runner.findException(Runner.java:583) at processing.app.debug.Runner.reportException(Runner.java:558) at processing.app.debug.Runner.exception(Runner.java:498) at processing.app.debug.EventThread.exceptionEvent(EventThread.java:367) at processing.app.debug.EventThread.handleEvent(EventThread.java:255) at processing.app.debug.EventThread.run(EventThread.java:89) Exception in thread "Animation Thread" java.lang.NullPointerException at 
org.openkinect.processing.Kinect.enableDepth(Kinect.java:70) at PointCloud.setup(PointCloud.java:48) at processing.core.PApplet.handleDraw(PApplet.java:1583) at processing.core.PApplet.run(PApplet.java:1503) at java.lang.Thread.run(Thread.java:637)
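    Two notes that may help. A NullPointerException simply means a reference that was expected to point at an object is null at the moment it is dereferenced. Also, the pasted stack trace actually blames kinect.enableDepth() inside setup() (Kinect.java:70), which suggests the library sometimes fails to open the device rather than anything in the drawing code. For the depth[offset] case specifically, a defensive check in draw() at least keeps the sketch alive while the Kinect warms up - a sketch, not a root-cause fix:

        // at the top of draw(), right after fetching the depth data
        int[] depth = kinect.getRawDepth();
        if (depth == null || depth.length < w * h) {
          return;   // no frame available yet - skip this frame until the Kinect delivers data
        }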

    Read the article

  • Unable to Calculate Position within Owner-Draw Text

    - by Jonathan Wood
    I'm trying to use Visual Studio 2012 to create a Windows Forms application that can place the caret at the current position within a owner-drawn string. However, I've been unable to find a way to accurately calculate that position. I've done this successfully before in C++. I've now tried numerous methods in C#. Originally, I tried using .NET classes to determine the correct position, but then I tried accessing the Windows API directly. In some cases, I came close, but after some time I still cannot place the caret accurately. I've created a small test program and posted key parts below. I've also posted the entire project here. The exact font used is not important to me; however, my application assumes a mono-spaced font. Any help is appreciated. Form1.cs This is my main form. public partial class Form1 : Form { private string TestString; private int AveCharWidth; private int Position; public Form1() { InitializeComponent(); TestString = "123456789012345678901234567890123456789012345678901234567890"; AveCharWidth = GetFontWidth(); Position = 0; } private void Form1_Load(object sender, EventArgs e) { Font = new Font(FontFamily.GenericMonospace, 12, FontStyle.Regular, GraphicsUnit.Pixel); } protected override void OnGotFocus(EventArgs e) { Windows.CreateCaret(Handle, (IntPtr)0, 2, (int)Font.Height); Windows.ShowCaret(Handle); UpdateCaretPosition(); base.OnGotFocus(e); } protected void UpdateCaretPosition() { Windows.SetCaretPos(Padding.Left + (Position * AveCharWidth), Padding.Top); } protected override void OnLostFocus(EventArgs e) { Windows.HideCaret(Handle); Windows.DestroyCaret(); base.OnLostFocus(e); } protected override void OnPaint(PaintEventArgs e) { e.Graphics.DrawString(TestString, Font, SystemBrushes.WindowText, new PointF(Padding.Left, Padding.Top)); } protected override bool IsInputKey(Keys keyData) { switch (keyData) { case Keys.Right: case Keys.Left: return true; } return base.IsInputKey(keyData); } protected override void OnKeyDown(KeyEventArgs e) { switch (e.KeyCode) { case Keys.Left: Position = Math.Max(Position - 1, 0); UpdateCaretPosition(); break; case Keys.Right: Position = Math.Min(Position + 1, TestString.Length); UpdateCaretPosition(); break; } base.OnKeyDown(e); } protected int GetFontWidth() { int AverageCharWidth = 0; using (var graphics = this.CreateGraphics()) { try { Windows.TEXTMETRIC tm; var hdc = graphics.GetHdc(); IntPtr hFont = this.Font.ToHfont(); IntPtr hOldFont = Windows.SelectObject(hdc, hFont); var a = Windows.GetTextMetrics(hdc, out tm); var b = Windows.SelectObject(hdc, hOldFont); var c = Windows.DeleteObject(hFont); AverageCharWidth = tm.tmAveCharWidth; } catch { } finally { graphics.ReleaseHdc(); } } return AverageCharWidth; } } Windows.cs Here are my Windows API declarations. 
public static class Windows { [Serializable, StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)] public struct TEXTMETRIC { public int tmHeight; public int tmAscent; public int tmDescent; public int tmInternalLeading; public int tmExternalLeading; public int tmAveCharWidth; public int tmMaxCharWidth; public int tmWeight; public int tmOverhang; public int tmDigitizedAspectX; public int tmDigitizedAspectY; public short tmFirstChar; public short tmLastChar; public short tmDefaultChar; public short tmBreakChar; public byte tmItalic; public byte tmUnderlined; public byte tmStruckOut; public byte tmPitchAndFamily; public byte tmCharSet; } [DllImport("user32.dll")] public static extern bool CreateCaret(IntPtr hWnd, IntPtr hBitmap, int nWidth, int nHeight); [DllImport("User32.dll")] public static extern bool SetCaretPos(int x, int y); [DllImport("User32.dll")] public static extern bool DestroyCaret(); [DllImport("User32.dll")] public static extern bool ShowCaret(IntPtr hWnd); [DllImport("User32.dll")] public static extern bool HideCaret(IntPtr hWnd); [DllImport("gdi32.dll", CharSet = CharSet.Auto)] public static extern bool GetTextMetrics(IntPtr hdc, out TEXTMETRIC lptm); [DllImport("gdi32.dll")] public static extern IntPtr SelectObject(IntPtr hdc, IntPtr hgdiobj); [DllImport("GDI32.dll")] public static extern bool DeleteObject(IntPtr hObject); }
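    One approach that tends to line up exactly (offered as a sketch, not the project's original code): draw and measure the string with the same text API, instead of drawing with GDI+ Graphics.DrawString while deriving positions from GDI's TEXTMETRIC. Using TextRenderer for both, the caret offset is simply the measured width of the substring to the left of the caret (System.Drawing and System.Windows.Forms namespaces assumed):

        protected override void OnPaint(PaintEventArgs e)
        {
            TextRenderer.DrawText(e.Graphics, TestString, Font,
                new Point(Padding.Left, Padding.Top), ForeColor, TextFormatFlags.NoPadding);
        }

        protected void UpdateCaretPosition()
        {
            // measure the text left of the caret with the same flags used for drawing
            Size size = TextRenderer.MeasureText(TestString.Substring(0, Position), Font,
                new Size(int.MaxValue, int.MaxValue), TextFormatFlags.NoPadding);
            Windows.SetCaretPos(Padding.Left + size.Width, Padding.Top);
        }

    With a mono-spaced font this reduces to caret x = Padding.Left + Position * charWidth, where charWidth is the measured width of a single character rather than tmAveCharWidth.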

    Read the article

  • TextField breaking MOUSE_OVER in as3: only after moving it!

    - by Franz
    I'm having a really weird problem with the MOUSE_OVER event. I'm building dynamic tabs representing mp3 songs containing textfields with info and a dynamic image for the cover art. I am trying to get a simple MOUSE_OVER working over the whole tab, such that you can select the next song to play. I am using a Sprite with alpha 0 that overlays my whole tab (incl. the textFields) as a Listener for MOUSE_OVER and _OUT... I've checked by setting the alpha to something visible and it indeed covers my tab and follows it around as I move it (just making sure I'm not moving the tab without moving the hotspot). Also, I only create it once my cover art is loaded, ensuring that it will cover that too. Now, when the tab is in the top position, everything is dandy. As soon as I move the tab to make space for the next tab, the textFields break my roll behaviour... just like that noob mistake of overlaying a sprite over the one that you're listening for MouseEvents on. But... the roll area is still on top of the field, I've set selectable and mouseEnabled to false on the textFields... nothing. It is as if the mere fact of moving the whole tab now puts the textField on top of everything in my tab (whereas visually, it's still in its expected layer). I'm using pixel fonts but tried it with system fonts, same thing... at my wits end here. public function Tab(tune:Tune) { _tune = tune; mainSprite = new Sprite(); addChild(mainSprite); drawBorder(); createFormat(); placeArtist(); placeTitle(); placeAlbum(); coverArt(); } private function placeButton():void { _button = new Sprite(); _button.graphics.beginFill(0xFF000,0); _button.graphics.drawRect(0,0,229,40); _button.graphics.endFill(); _button.addEventListener(MouseEvent.MOUSE_OVER, mouseListener); _button.addEventListener(MouseEvent.MOUSE_OUT, mouseListener); _button.buttonMode = true; mainSprite.addChild(_button); } private function mouseListener(event:MouseEvent):void { switch(event.type){ case MouseEvent.MOUSE_OVER : hilite(true); break; case MouseEvent.MOUSE_OUT : hilite(false); break; } } private function createFormat():void { _format = new TextFormat(); _format.font = "FFF Neostandard"; _format.size = 8; _format.color = 0xFFFFFF; } private function placeArtist():void { var artist : TextField = new TextField(); artist.selectable = false; artist.defaultTextFormat = _format; artist.x = 41; artist.y = 3; artist.width = 135; artist.text = _tune.artist; artist.mouseEnabled = false; mainSprite.addChild(artist); } private function placeTitle():void { var title : TextField = new TextField(); title.selectable = false; title.defaultTextFormat = _format; title.x = 41; title.y = 14; title.width = 135; title.text = _tune.title; title.mouseEnabled = false; mainSprite.addChild(title); } private function placeAlbum():void { var album : TextField = new TextField(); album.selectable = false; album.defaultTextFormat = _format; album.x = 41; album.y = 25; album.width = 135; album.text = _tune.album; album.mouseEnabled = false; mainSprite.addChild(album); } private function drawBorder():void { _border = new Sprite(); _border.graphics.lineStyle(1, 0x545454); _border.graphics.drawRect (0,0,229,40); mainSprite.addChild(_border); } private function coverArt():void { _image = new Sprite(); var imageLoader : Loader = new Loader(); _loaderInfo = imageLoader.contentLoaderInfo; _loaderInfo.addEventListener(Event.COMPLETE, coverLoaded) var image:URLRequest = new URLRequest(_tune.coverArt); imageLoader.load(image); _image.x = 1.5; _image.y = 2; _image.addChild(imageLoader); } private 
function coverLoaded(event:Event):void { _loaderInfo.removeEventListener(Event.COMPLETE, coverLoaded); var scaling : Number = IMAGE_SIZE / _image.width; _image.scaleX = scaling; _image.scaleY = scaling; mainSprite.addChild (_image); placeButton(); } public function hilite(state:Boolean):void{ var col : ColorTransform = new ColorTransform(); if(state){ col.color = 0xFFFFFF; } else { col.color = 0x545454; } _border.transform.colorTransform = col; }
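    One thing that sidesteps the stacking problem entirely (a sketch, not a diagnosis of why moving the tab changes the event routing): let the tab's container own the mouse events and turn off mouseChildren, so the TextFields and the loaded cover art can never intercept MOUSE_OVER/MOUSE_OUT no matter how they end up layered. The invisible hit-area sprite then becomes unnecessary:

        // in the Tab constructor, after mainSprite is created
        mainSprite.mouseChildren = false;   // children never receive mouse events
        mainSprite.buttonMode = true;
        mainSprite.addEventListener(MouseEvent.MOUSE_OVER, mouseListener);
        mainSprite.addEventListener(MouseEvent.MOUSE_OUT, mouseListener);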

    Read the article

  • Toorcon14

    - by danx
    Toorcon 2012 Information Security Conference San Diego, CA, http://www.toorcon.org/ Dan Anderson, October 2012 It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcon's I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed some of the talks, below, which I attended and interested me. MalAndroid—the Crux of Android Infections, Aditya K. Sood Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro Privacy at the Handset: New FCC Rules?, Valkyrie Hacking Measured Boot and UEFI, Dan Griffin You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold Accessibility and Security, Anna Shubina Stop Patching, for Stronger PCI Compliance, Adam Brand McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall MalAndroid—the Crux of Android Infections Aditya K. Sood, IOActive, Michigan State PhD candidate Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most have unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, security scanner, app permissions, and screened Android app market. The Android permission checker has fine-grain resource control, policy enforcement. Android static analysis also includes a static analysis app checker (bouncer), and a vulnerablity checker. What security problems does Android have? User-centric security, which depends on the user to grant permission and make smart decisions. But users don't care or think about malware (the're not aware, not paranoid). All they want is functionality, extensibility, mobility Android had no "proper" encryption before Android 3.0 No built-in protection against social engineering and web tricks Alternative Android app markets are unsafe. Simply visiting some markets can infect Android Aditya classified Android Malware types as: Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app. Or Android Gold Dream (game), which uploads user files stealthy manner to a remote location. Type K—Kernel. Exploits underlying Linux libraries or kernel Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors What are the threats from Android malware? These incude leak info (contacts), banking fraud, corporate network attacks, malware advertising, malware "Hackivism" (the promotion of social causes. For example, promiting specific leaders of the Tunisian or Iranian revolutions. Android malware is frequently "masquerated". That is, repackaged inside a legit app with malware. To avoid detection, the hidden malware is not unwrapped until runtime. 
The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there's not many around. What they do is hijack the Android init framework—alteering system programs and daemons, then deletes itself. For example, the DKF Bootkit (China). Android App Problems: no code signing! all self-signed native code execution permission sandbox — all or none alternate market places no robust Android malware detection at network level delayed patch process Programming Weird Machines with ELF Metadata Rebecca "bx" Shapiro, Dartmouth College, NH https://github.com/bx/elf-bf-tools @bxsays on twitter Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). "Weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that: return to weird location, does SQL injection, corrupts the heap. Bx then talked about using ELF metadata as (an uintended) "weird machine". Some ELF background: A compiler takes source code and generates a ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another one at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Tables—works with GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory) For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses, by tricking runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements: instructions: Add, move, jump if not 0 (jnz) Think of symbol table entries as "registers" symbol table value is "contents" immediate values are constants direct values are addresses (e.g., 0xdeadbeef) move instruction: is a relocation table entry add instruction: relocation table "addend" entry jnz instruction: takes multiple relocation table entries The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on stack at "end" and uses IFUNC table entries (containing function pointer address). The ELF weird machine, called "Brainfu*k" (BF) has: 8 instructions: pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print. Three registers - 3 registers Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call dropping privilege, so it remained as root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example: $ whoami bx $ ping localhost -t backdoor.sh # executes backdoor $ whoami root $ The modified code increased from 285948 bytes to 290209 bytes. 
A BF tool compiles "executable" by modifying the symbol table in an existing ELF executable. The tool modifies .dynsym and .rela.dyn table, but not code or data. Privacy at the Handset: New FCC Rules? "Valkyrie" (Christie Dudley, Santa Clara Law JD candidate) Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as an aggregation workaround to "recollate" data to other databases to identify customers indirectly. The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is a FTC responsibility (not FCC). FTC is trying to improve mobile app privacy, but FTC has no authority over carrier / customer relationships. As a side note, Apple iPhones are unique as carriers have extra control over iPhones they don't have with other smartphones. As a result iPhones may be more regulated. Who are the consumer advocates? Everyone knows EFF, but EPIC (Electrnic Privacy Info Center), although more obsecure, is more relevant. What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need incentive to grant users control for those who want it, by holding them liable and responsible for breeches on their clock. Location information should be added current CPNI privacy protection, and require "Pen/trap" judicial order to obtain (and would still be a lower standard than 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the Whitehouse. There will probably be new regulation soon, and enforcement will be a problem, but consumers will still have some benefit. Hacking Measured Boot and UEFI Dan Griffin, JWSecure, Inc., Seattle, @JWSdan Dan talked about hacking measured UEFI boot. First some terms: UEFI is a boot technology that is replacing BIOS (has whitelisting and blacklisting). UEFI protects devices against rootkits. TPM - hardware security device to store hashs and hardware-protected keys "secure boot" can control at firmware level what boot images can boot "measured boot" OS feature that tracks hashes (from BIOS, boot loader, krnel, early drivers). "remote attestation" allows remote validation and control based on policy on a remote attestation server. Microsoft pushing TPM (Windows 8 required), but Google is not. Intel TianoCore is the only open source for UEFI. Dan has Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support already on enterprise-class machines. UEFI Weaknesses. 
UEFI toolkits are evolving rapidly, but UEFI has weaknesses: assume user is an ally trust TPM implicitly, and attached to computer hibernate file is unprotected (disk encryption protects against this) protection migrating from hardware to firmware delays in patching and whitelist updates will UEFI really be adopted by the mainstream (smartphone hardware support, bank support, apathetic consumer support) You Can't Buy Security: Building the Open Source InfoSec Program Boris Sverdlik, ISDPodcast.com co-host Boris talked about problems typical with current security audits. "IT Security" is an oxymoron—IT exists to enable buiness, uptime, utilization, reporting, but don't care about security—IT has conflict of interest. There's no Magic Bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not solution (security not a checklist)—what's needed is experience and adaptability—need soft skills. Step 1: First thing is to Google and learn the company end-to-end before you start. Get to know the management team (not IT team), meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitive risk assessment is a myth (e.g. AV*EF-SLE). Learn different Business Units, legal/regulatory obligations, learn the business and where the money is made, verify company is protected from script kiddies (easy), learn sensitive information (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering). Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, physical security. Step 3: Implementation: keep it simple stupid. Open source, although useful, is not free (implementation cost). Access controls with authentication & authorization for local and remote access. MS Windows has it, otherwise use OpenLDAP, OpenIAM, etc. Application security Everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run different risk level apps on same system. Assume host/client compromised and use app-level security control. Network security VLAN != segregated because there's too many workarounds. Use explicit firwall rules, active and passive network monitoring (snort is free), disallow end user access to production environment, have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth and SSL does not mean "safe." Operational Controls Have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down log (as it has everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVas and Seccubus are free. Security awareness The reality is users will always click everything. Build real awareness, not compliance driven checkbox, and have it integrated into the culture. 
Pen test by crowd sourcing—test with logging COSSP http://www.cossp.org/ - Comprehensive Open Source Security Project What Journalists Want: The Investigative Reporters' Perspective on Hacking Dave Maas, San Diego CityBeat Jason Leopold, Truthout.org The difference between hackers and investigative journalists: For hackers, the motivation varies, but method is same, technological specialties. For investigative journalists, it's about one thing—The Story, and they need broad info-gathering skills. J-School in 60 Seconds: Generic formula: Person or issue of pubic interest, new info, or angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence. Media awareness of hackers and trends: journalists becoming extremely aware of hackers with congressional debates (privacy, data breaches), demand for data-mining Journalists, use of coding and web development for Journalists, and Journalists busted for hacking (Murdock). Info gathering by investigative journalists include Public records laws. Federal Freedom of Information Act (FOIA) is good, but slow. California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. Often need to sue (especially FBI). CPRA is faster, and requests can be vague. Dumps and leaks (a la Wikileaks) Journalists want: leads, protecting ourselves, our sources, and adapting tools for news gathering (Google hacking). Anonomity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web log). They don't trust encryption, want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it. Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem Anna Shubina, Dartmouth College Anna talked about how accessibility and security are related. Accessibility of digital content (not real world accessibility). mostly refers to blind users and screenreaders, for our purpose. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example MS Word has executable format—it's not a document exchange format—more dangerous than PDF or HTML. Accessibility is often the first and maybe only sanity check with parsing. They have no choice because someone may want to read what you write. Google, for example, is very particular about web browser you use and are bad at supporting other browsers. Uses JavaScript instead of links, often requiring mouseover to display content. PDF is a security nightmare. Executible format, embedded flash, JavaScript, etc. 15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging, to help with accessibility. But no PDF checker checks for incorrect tags, untagged content, or validates lists or tables. None check executable content at all. The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic Security says complicated data formats are hard to parse and cannot be solved due to the Halting Problem. 
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust" Not much help though, except for "Robust", but here's some gems: * all information should be parsable (paraphrasing) * if not parsable, cannot be converted to alternate formats * maximize compatibility in new document formats Executible webpages are bad for security and accessibility. They say it's for a better web experience. But is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input. Solutions: Accessibility and security problems come from same source Expose "better user experience" myth Keep your corner of Internet parsable Remember "Halting Problem"—recognize false solutions (checking and verifying tools) Stop Patching, for Stronger PCI Compliance Adam Brand, protiviti @adamrbrand, http://www.picfun.com/ Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (a IT guy), 960 hours/year was spent patching POSs in 850 restaurants. Often applying some patches make no sense (like fixing a browser vulnerability on a server). "Scanner worship" is overuse of vulnerability scanners—it gives a warm and fuzzy and it's simple (red or green results—fix reds). Scanners give a false sense of security. In reality, breeches from missing patches are uncommon—more common problems are: default passwords, cleartext authentication, misconfiguration (firewall ports open). Patching Myths: Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead). Myth 2: vendor decides what's critical (also PCI §6.1). But §6.2 requires user ranking of vulnerabilities instead. Myth 3: scan and rescan until it passes. But PCI §11.2.1b says this applies only to high-risk vulnerabilities. Adam says good recommendations come from NIST 800-40. Instead use sane patching and focus on what's really important. From NIST 800-40: Proactive: Use a proactive vulnerability management process: use change control, configuration management, monitor file integrity. Monitor: start with NVD and other vulnerability alerts, not scanner results. Evaluate: public-facing system? workstation? internal server? (risk rank) Decide:on action and timeline Test: pre-test patches (stability, functionality, rollback) for change control Install: notify, change control, tickets McAfee Secure & Trustmarks — a Hacker's Best Friend Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada "McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if they pass their remote scanning. The problem is a removal of trustmarks act as flags that you're vulnerable. Easy to view status change by viewing McAfee list on website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If their certification image changes to a 1x1 pixel image, then they are longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is change in TrustGuard status is a flag for hackers to attack your site. Entire idea of seals is silly—you're raising a flag saying if you're vulnerable.

    Read the article

  • Rounded Corners and Shadows – Dialogs with CSS

    - by Rick Strahl
    Well, it looks like we’ve finally arrived at a place where at least all of the latest versions of main stream browsers support rounded corners and box shadows. The two CSS properties that make this possible are box-shadow and box-radius. Both of these CSS Properties now supported in all the major browsers as shown in this chart from QuirksMode: In it’s simplest form you can use box-shadow and border radius like this: .boxshadow { -moz-box-shadow: 3px 3px 5px #535353; -webkit-box-shadow: 3px 3px 5px #535353; box-shadow: 3px 3px 5px #535353; } .roundbox { -moz-border-radius: 6px 6px 6px 6px; -webkit-border-radius: 6px; border-radius: 6px 6px 6px 6px; } box-shadow: horizontal-shadow-pixels vertical-shadow-pixels blur-distance shadow-color box-shadow attributes specify the the horizontal and vertical offset of the shadow, the blur distance (to give the shadow a smooth soft look) and a shadow color. The spec also supports multiple shadows separated by commas using the attributes above but we’re not using that functionality here. box-radius: top-left-radius top-right-radius bottom-right-radius bottom-left-radius border-radius takes a pixel size for the radius for each corner going clockwise. CSS 3 also specifies each of the individual corner elements such as border-top-left-radius, but support for these is much less prevalent so I would recommend not using them for now until support improves. Instead use the single box-radius to specify all corners. Browser specific Support in older Browsers Notice that there are two variations: The actual CSS 3 properties (box-shadow and box-radius) and the browser specific ones (-moz, –webkit prefixes for FireFox and Chrome/Safari respectively) which work in slightly older versions of modern browsers before official CSS 3 support was added. The goal is to spread support as widely as possible and the prefix versions extend the range slightly more to those browsers that provided early support for these features. Notice that box-shadow and border-radius are used after the browser specific versions to ensure that the latter versions get precedence if the browser supports both (last assignment wins). Use the .boxshadow and .roundbox Styles in HTML To use these two styles create a simple rounded box with a shadow you can use HTML like this: <!-- Simple Box with rounded corners and shadow --> <div class="roundbox boxshadow" style="width: 550px; border: solid 2px steelblue"> <div class="boxcontenttext"> Simple Rounded Corner Box. </div> </div> which looks like this in the browser: This works across browsers and it’s pretty sweet and simple. Watch out for nested Elements! There are a couple of things to be aware of however when using rounded corners. Specifically, you need to be careful when you nest other non-transparent content into the rounded box. For example check out what happens when I change the inside <div> to have a colored background: <!-- Simple Box with rounded corners and shadow --> <div class="roundbox boxshadow" style="width: 550px; border: solid 2px steelblue"> <div class="boxcontenttext" style="background: khaki;"> Simple Rounded Corner Box. </div> </div> which renders like this:   If you look closely you’ll find that the inside <div>’s corners are not rounded and so ‘poke out’ slightly over the rounded corners. It looks like the rounded corners are ‘broken’ up instead of a solid rounded line around the corner, which his pretty ugly. The bigger the radius the more drastic this effect becomes . 
    To fix this issue the inner <div> also has to have rounded corners at the same or slightly smaller radius than the outer <div>. The simple fix for this is to simply also apply the roundbox style to the inner <div> in addition to the boxcontenttext style already applied: <div class="boxcontenttext roundbox" style="background: khaki;"> The fixed display now looks proper: Separate Top and Bottom Elements This gets even a little more tricky if you have an element at the top or bottom only of the rounded box. What if you need to add something like a header or footer <div> that has a non-transparent background, which is a pretty common scenario? In those cases you want only the top or bottom corners rounded and not both. To make this work a couple of additional styles to round only the top and bottom corners can be created: .roundbox-top { -moz-border-radius: 4px 4px 0 0; -webkit-border-radius: 4px 4px 0 0; border-radius: 4px 4px 0 0; } .roundbox-bottom { -moz-border-radius: 0 0 4px 4px; -webkit-border-radius: 0 0 4px 4px; border-radius: 0 0 4px 4px; } Notice that the radius used for the ‘inside’ rounding is smaller (4px) than the outside radius (6px). This is so the inner radius fills into the outer border – if you use the same size you may have some white space showing between the inner and outer rounded corners. Experiment with values to see what works – in my experimenting the behavior across browsers here is consistent (thankfully). These styles can be applied in addition to other styles to make only the top or bottom portions of an element rounded. For example imagine I have styles like this: .gridheader, .gridheaderbig, .gridheaderleft, .gridheaderright { padding: 4px 4px 4px 4px; background: #003399 url(images/vertgradient.png) repeat-x; text-align: center; font-weight: bold; text-decoration: none; color: khaki; } .gridheaderleft { text-align: left; } .gridheaderright { text-align: right; } .gridheaderbig { font-size: 135%; } If I just apply say gridheader by itself in HTML like this: <div class="roundbox boxshadow" style="width: 550px; border: solid 2px steelblue"> <div class="gridheaderleft">Box with a Header</div> <div class="boxcontenttext" style="background: khaki;"> Simple Rounded Corner Box. </div> </div> This results in a pretty funky display – again due to the fact that the inner elements render square rather than rounded corners: If you look closely again you can see that both the header and the main content have square edges which jump out at the eye. To fix this you can now apply the roundbox-top and roundbox-bottom to the header and content respectively: <div class="roundbox boxshadow" style="width: 550px; border: solid 2px steelblue"> <div class="gridheaderleft roundbox-top">Box with a Header</div> <div class="boxcontenttext roundbox-bottom" style="background: khaki;"> Simple Rounded Corner Box. </div> </div> Which now gives the proper display with rounded corners both on the top and bottom: All of this is sweet to be supported – at least by the newest browsers – without having to resort to images and nasty JavaScript solutions. While this is not yet a mainstream feature for the majority of actually installed browsers, the majority of browser users are very likely to have this support as most browsers other than IE are actively pushing users to upgrade to newer versions. Since this is a ‘visual display only’ feature it degrades reasonably well in non-supporting browsers: You get an uninteresting square and non-shadowed browser box, but the display is still overall functional.
The main sticking point – as always is Internet Explorer versions 8.0 and down as well as older versions of other browsers. With those browsers you get a functional view that is a little less interesting to look at obviously: but at least it’s still functional. Maybe that’s just one more incentive for people using older browsers to upgrade to a  more modern browser :-) Creating Dialog Related Styles In a lot of my AJAX based applications I use pop up windows which effectively work like dialogs. Using the simple CSS behaviors above, it’s really easy to create some fairly nice looking overlaid windows with nothing but CSS. Here’s what a typical ‘dialog’ I use looks like: The beauty of this is that it’s plain CSS – no plug-ins or images (other than the gradients which are optional) required. Add jQuery-ui draggable (or ww.jquery.js as shown below) and you have a nice simple inline implementation of a dialog represented by a simple <div> tag. Here’s the HTML for this dialog: <div id="divDialog" class="dialog boxshadow" style="width: 450px;"> <div class="dialog-header"> <div class="closebox"></div> User Sign-in </div> <div class="dialog-content"> <label>Username:</label> <input type="text" name="txtUsername" value=" " /> <label>Password</label> <input type="text" name="txtPassword" value=" " /> <hr /> <input type="button" id="btnLogin" value="Login" /> </div> <div class="dialog-statusbar">Ready</div> </div> Most of this behavior is driven by the ‘dialog’ styles which are fairly basic and easy to understand. They do use a few support images for the gradients which are provided in the sample I’ve provided. Here’s what the CSS looks like: .dialog { background: White; overflow: hidden; border: solid 1px steelblue; -moz-border-radius: 6px 6px 4px 4px; -webkit-border-radius: 6px 6px 4px 4px; border-radius: 6px 6px 3px 3px; } .dialog-header { background-image: url(images/dialogheader.png); background-repeat: repeat-x; text-align: left; color: cornsilk; padding: 5px; padding-left: 10px; font-size: 1.02em; font-weight: bold; position: relative; -moz-border-radius: 4px 4px 0px 0px; -webkit-border-radius: 4px 4px 0px 0px; border-radius: 4px 4px 0px 0px; } .dialog-top { -moz-border-radius: 4px 4px 0px 0px; -webkit-border-radius: 4px 4px 0px 0px; border-radius: 4px 4px 0px 0px; } .dialog-bottom { -moz-border-radius: 0 0 3px 3px; -webkit-border-radius: 0 0 3px 3px; border-radius: 0 0 3px 3px; } .dialog-content { padding: 15px; } .dialog-statusbar, .dialog-toolbar { background: #eeeeee; background-image: url(images/dialogstrip.png); background-repeat: repeat-x; padding: 5px; padding-left: 10px; border-top: solid 1px silver; border-bottom: solid 1px silver; font-size: 0.8em; } .dialog-statusbar { -moz-border-radius: 0 0 3px 3px; -webkit-border-radius: 0 0 3px 3px; border-radius: 0 0 3px 3px; padding-right: 10px; } .closebox { position: absolute; right: 2px; top: 2px; background-image: url(images/close.gif); background-repeat: no-repeat; width: 14px; height: 14px; cursor: pointer; opacity: 0.60; filter: alpha(opacity="80"); } .closebox:hover { opacity: 1; filter: alpha(opacity="100"); } The main style is the dialog class which is the outer box. It has the rounded border that serves as the outline. Note that I didn’t add the box-shadow to this style because in some situations I just want the rounded box in an inline display that doesn’t have a shadow so it’s still applied separately. dialog-header, then has the rounded top corners and displays a typical dialog heading format. 
    dialog-bottom and dialog-top then provide the same functionality as roundbox-top and roundbox-bottom described earlier but are provided mainly in the stylesheet for consistency to match the dialog’s round edges, and to make it easier to remember and find in IntelliSense as they show up in the same dialog- group. dialog-statusbar and dialog-toolbar are two elements I use a lot for floating windows – the toolbar typically serves for buttons, options and filters, while the status bar provides information specific to the floating window. Since the status bar is always on the bottom of the dialog it automatically handles the rounding of the bottom corners. Finally there’s the closebox style which is to be applied to an empty <div> tag in the header typically. What this does is render a close image that is by default low-lighted with a low opacity value, and then highlights when hovered over. All you’d have to do to handle the close operation is handle the onclick of the <div>. Note that the <div> right aligns so typically you should specify it before any other content in the header. Speaking of closable – some time ago I created a closable jQuery plug-in that basically automates this process and can be applied against ANY element in a page, automatically removing or closing the element with some simple script code. Using this you can leave out the <div> tag for closable and just do the following: To make the above dialog closable (and draggable), which effectively makes it an overlay window, you’d add jQuery.js and ww.jquery.js to the page: <script type="text/javascript" src="../../scripts/jquery.min.js"></script> <script type="text/javascript" src="../../scripts/ww.jquery.min.js"></script> and then simply call: <script type="text/javascript"> $(document).ready(function () { $("#divDialog") .draggable({ handle: ".dialog-header" }) .closable({ handle: ".dialog-header", closeHandler: function () { alert("Window about to be closed."); return true; // true closes - false leaves open } }); }); </script> * ww.jquery.js emulates base features in jQuery-ui’s draggable. If jQuery-ui is loaded its draggable version will be used instead and voila, you now have a draggable and closable window – here in mid-drag: The dragging and closable behaviors are of course optional, but it’s the final touch that provides dialog-like window behavior. Relief for older Internet Explorer Versions with CSS Pie If you want to get these features to work with older versions of Internet Explorer all the way back to version 6 you can check out CSS Pie. CSS Pie provides an Internet Explorer behavior file that attaches to specific CSS rules and simulates these behaviors using script code in IE (mostly by implementing filters). You can simply add the behavior to each CSS style that uses box-shadow and border-radius like this: .boxshadow { -moz-box-shadow: 3px 3px 5px #535353; -webkit-box-shadow: 3px 3px 5px #535353; box-shadow: 3px 3px 5px #535353; behavior: url(scripts/PIE.htc); } .roundbox { -moz-border-radius: 6px 6px 6px 6px; -webkit-border-radius: 6px; border-radius: 6px 6px 6px 6px; behavior: url(scripts/PIE.htc); } CSS Pie requires PIE.htc to be on your server and referenced from each CSS style that needs it.
    Note that the url() for IE behaviors is NOT CSS-file relative like other CSS resources, but rather PAGE relative, so if you have more than one folder you probably need to reference the HTC file with a fixed path like this: behavior: url(/MyApp/scripts/PIE.htc); in the style. Small price to pay, but a royal pain if you have a common CSS file you use in many applications. Once the PIE.htc file has been copied and you have applied the behavior to each style that uses these new features, Internet Explorer will render rounded corners and box shadows! Yay! Hurray for box-shadow and border-radius All of this functionality is very welcome natively in the browser. If you think this is all frivolous visual candy, you might be right :-), but if you take a look on the Web and search for rounded corner solutions that predate these CSS attributes you’ll find a boatload of stuff from image files, to custom drawn content, to JavaScript solutions that play tricks with a few images. It’s sooooo much easier to have this functionality built in and I for one am glad to see that it’s finally becoming standard in the box. Still remember that when you use these new CSS features, they are not universal, and are not going to be any time soon. Legacy browsers, especially old versions of Internet Explorer that can’t be updated, will continue to be around and won’t work with this shiny new stuff. I say screw ‘em: Let them get a decent recent browser or see a degraded and ugly UI. We have the luxury with this functionality in that it doesn’t typically affect usability – it just doesn’t look as nice. Resources Download the Sample The sample includes the styles and images and sample page as well as ww.jquery.js for the draggable/closable example. Online Sample Check out the sample described in this post online. Closable and Draggable Documentation Documentation for the closable and draggable plug-ins in ww.jquery.js. You can also check out the full documentation for all the plug-ins contained in ww.jquery.js here. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in HTML, CSS

    Read the article

  • header confusion. Compiler not recognizing datatypes

    - by numerical25
    I am getting confused on why the compiler is not recognizing my classes. So I am just going to show you my code and let you guys decide. My error is this error C2653: 'RenderEngine' : is not a class or namespace name and it's pointing to this line std::vector<RenderEngine::rDefaultVertex> m_verts; Here is the code for rModel, in its entirety. It contains the varible. the class that holds it is further down. #ifndef _MODEL_H #define _MODEL_H #include "stdafx.h" #include <vector> #include <string> //#include "RenderEngine.h" #include "rTri.h" class rModel { public: typedef tri<WORD> sTri; std::vector<sTri> m_tris; std::vector<RenderEngine::rDefaultVertex> m_verts; std::wstring m_name; ID3D10Buffer *m_pVertexBuffer; ID3D10Buffer *m_pIndexBuffer; rModel( const TCHAR *filename ); rModel( const TCHAR *name, int nVerts, int nTris ); ~rModel(); float GenRadius(); void Scale( float amt ); void Draw(); //------------------------------------ Access functions. int NumVerts(){ return m_verts.size(); } int NumTris(){ return m_tris.size(); } const TCHAR *Name(){ return m_name.c_str(); } RenderEngine::cDefaultVertex *VertData(){ return &m_verts[0]; } sTri *TriData(){ return &m_tris[0]; } }; #endif at the very top of the code there is a header file #include "stdafx.h" that includes this // stdafx.h : include file for standard system include files, // or project specific include files that are used frequently, but // are changed infrequently // #include "targetver.h" #define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers // Windows Header Files: #include <windows.h> // C RunTime Header Files #include <stdlib.h> #include <malloc.h> #include <memory.h> #include <tchar.h> #include "resource.h" #include "d3d10.h" #include "d3dx10.h" #include "dinput.h" #include "RenderEngine.h" #include "rModel.h" // TODO: reference additional headers your program requires here as you can see, RenderEngine.h comes before rModel.h #include "RenderEngine.h" #include "rModel.h" According to my knowledge, it should recognize it. But on the other hand, I am not really that great with organizing headers. Here my my RenderEngine Declaration. 
#pragma once #include "stdafx.h" #define MAX_LOADSTRING 100 #define MAX_LIGHTS 10 class RenderEngine { public: class rDefaultVertex { public: D3DXVECTOR3 m_vPosition; D3DXVECTOR3 m_vNormal; D3DXCOLOR m_vColor; D3DXVECTOR2 m_TexCoords; }; class rLight { public: rLight() { } D3DXCOLOR m_vColor; D3DXVECTOR3 m_vDirection; }; static HINSTANCE m_hInst; HWND m_hWnd; int m_nCmdShow; TCHAR m_szTitle[MAX_LOADSTRING]; // The title bar text TCHAR m_szWindowClass[MAX_LOADSTRING]; // the main window class name void DrawTextString(int x, int y, D3DXCOLOR color, const TCHAR *strOutput); //static functions static LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam); static INT_PTR CALLBACK About(HWND hDlg, UINT message, WPARAM wParam, LPARAM lParam); bool InitWindow(); bool InitDirectX(); bool InitInstance(); int Run(); void ShutDown(); void AddLight(D3DCOLOR color, D3DXVECTOR3 pos); RenderEngine() { m_screenRect.right = 800; m_screenRect.bottom = 600; m_iNumLights = 0; } protected: RECT m_screenRect; //direct3d Members ID3D10Device *m_pDevice; // The IDirect3DDevice10 // interface ID3D10Texture2D *m_pBackBuffer; // Pointer to the back buffer ID3D10RenderTargetView *m_pRenderTargetView; // Pointer to render target view IDXGISwapChain *m_pSwapChain; // Pointer to the swap chain RECT m_rcScreenRect; // The dimensions of the screen ID3D10Texture2D *m_pDepthStencilBuffer; ID3D10DepthStencilState *m_pDepthStencilState; ID3D10DepthStencilView *m_pDepthStencilView; //transformation matrixs system D3DXMATRIX m_mtxWorld; D3DXMATRIX m_mtxView; D3DXMATRIX m_mtxProj; //pointers to shaders matrix varibles ID3D10EffectMatrixVariable* m_pmtxWorldVar; ID3D10EffectMatrixVariable* m_pmtxViewVar; ID3D10EffectMatrixVariable* m_pmtxProjVar; //Application Lights rLight m_aLights[MAX_LIGHTS]; // Light array int m_iNumLights; // Number of active lights //light pointers from shader ID3D10EffectVectorVariable* m_pLightDirVar; ID3D10EffectVectorVariable* m_pLightColorVar; ID3D10EffectVectorVariable* m_pNumLightsVar; //Effect members ID3D10Effect *m_pDefaultEffect; ID3D10EffectTechnique *m_pDefaultTechnique; ID3D10InputLayout* m_pDefaultInputLayout; ID3DX10Font *m_pFont; // The font used for rendering text // Sprites used to hold font characters ID3DX10Sprite *m_pFontSprite; ATOM RegisterEngineClass(); void DoFrame(float); bool LoadEffects(); void UpdateMatrices(); void UpdateLights(); }; The classes are defined within the class class rDefaultVertex { public: D3DXVECTOR3 m_vPosition; D3DXVECTOR3 m_vNormal; D3DXCOLOR m_vColor; D3DXVECTOR2 m_TexCoords; }; class rLight { public: rLight() { } D3DXCOLOR m_vColor; D3DXVECTOR3 m_vDirection; }; Not sure if thats good practice, but I am just going by the book. In the end, I just need a good way to organize it so that rModel recognizes RenderEngine. and if possible, the other way around.
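    A likely culprit here is the include cycle: RenderEngine.h starts with #include "stdafx.h", and stdafx.h in turn includes RenderEngine.h and then rModel.h. If a translation unit enters the chain through RenderEngine.h (for example RenderEngine.cpp), the include guards suppress the re-inclusion and rModel.h ends up being parsed before the RenderEngine class definition, which produces exactly this "not a class or namespace name" error. Below is a minimal sketch of one way to restructure the headers so each is self-sufficient; file and class names are taken from the question and the member lists are trimmed for brevity.

```cpp
// RenderEngine.h -- does not include stdafx.h; pulls in only what it needs.
#pragma once
#include <d3d10.h>
#include <d3dx10.h>

class RenderEngine
{
public:
    class rDefaultVertex            // nested type that rModel refers to
    {
    public:
        D3DXVECTOR3 m_vPosition;
        D3DXVECTOR3 m_vNormal;
        D3DXCOLOR   m_vColor;
        D3DXVECTOR2 m_TexCoords;
    };
    // ... rest of RenderEngine unchanged ...
};

// rModel.h -- includes RenderEngine.h directly; a forward declaration is not
// enough here because rDefaultVertex is a nested class.
#pragma once
#include <vector>
#include <string>
#include "RenderEngine.h"
#include "rTri.h"

class rModel
{
public:
    std::vector<RenderEngine::rDefaultVertex> m_verts;
    // ... rest of rModel unchanged ...
};
```

    stdafx.h can still list both headers (RenderEngine.h before rModel.h) for the precompiled header, but once each header includes what it actually uses, the order in stdafx.h no longer matters and the cycle disappears.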

    Read the article

  • Run-Time Check Failure #2 - Stack around the variable 'indices' was corrupted.

    - by numerical25
    well I think I know what the problem is. I am just having a hard time debugging it. I am working with the directx api and I am trying to generate a plane along the x and z axis according to a book I have. The problem is when I am creating my indices. I think I am setting values out of the bounds of the indices array. I am just having a hard time figuring out what I did wrong. I am unfamiliar with the this method of generating a plane. so its a little difficult for me. below is my code. Take emphasis on the indices loop. #include "MyGame.h" //#include "CubeVector.h" /* This code sets a projection and shows a turning cube. What has been added is the project, rotation and a rasterizer to change the rasterization of the cube. The issue that was going on was something with the effect file which was causing the vertices not to be rendered correctly.*/ typedef struct { ID3D10Effect* pEffect; ID3D10EffectTechnique* pTechnique; //vertex information ID3D10Buffer* pVertexBuffer; ID3D10Buffer* pIndicesBuffer; ID3D10InputLayout* pVertexLayout; UINT numVertices; UINT numIndices; }ModelObject; ModelObject modelObject; // World Matrix D3DXMATRIX WorldMatrix; // View Matrix D3DXMATRIX ViewMatrix; // Projection Matrix D3DXMATRIX ProjectionMatrix; ID3D10EffectMatrixVariable* pProjectionMatrixVariable = NULL; //grid information #define NUM_COLS 16 #define NUM_ROWS 16 #define CELL_WIDTH 32 #define CELL_HEIGHT 32 #define NUM_VERTSX (NUM_COLS + 1) #define NUM_VERTSY (NUM_ROWS + 1) bool MyGame::InitDirect3D() { if(!DX3dApp::InitDirect3D()) { return false; } D3D10_RASTERIZER_DESC rastDesc; rastDesc.FillMode = D3D10_FILL_WIREFRAME; rastDesc.CullMode = D3D10_CULL_FRONT; rastDesc.FrontCounterClockwise = true; rastDesc.DepthBias = false; rastDesc.DepthBiasClamp = 0; rastDesc.SlopeScaledDepthBias = 0; rastDesc.DepthClipEnable = false; rastDesc.ScissorEnable = false; rastDesc.MultisampleEnable = false; rastDesc.AntialiasedLineEnable = false; ID3D10RasterizerState *g_pRasterizerState; mpD3DDevice->CreateRasterizerState(&rastDesc, &g_pRasterizerState); mpD3DDevice->RSSetState(g_pRasterizerState); // Set up the World Matrix D3DXMatrixIdentity(&WorldMatrix); D3DXMatrixLookAtLH(&ViewMatrix, new D3DXVECTOR3(0.0f, 10.0f, -20.0f), new D3DXVECTOR3(0.0f, 0.0f, 0.0f), new D3DXVECTOR3(0.0f, 1.0f, 0.0f)); // Set up the projection matrix D3DXMatrixPerspectiveFovLH(&ProjectionMatrix, (float)D3DX_PI * 0.5f, (float)mWidth/(float)mHeight, 0.1f, 100.0f); if(!CreateObject()) { return false; } return true; } //These are actions that take place after the clearing of the buffer and before the present void MyGame::GameDraw() { static float rotationAngle = 0.0f; // create the rotation matrix using the rotation angle D3DXMatrixRotationY(&WorldMatrix, rotationAngle); rotationAngle += (float)D3DX_PI * 0.0f; // Set the input layout mpD3DDevice->IASetInputLayout(modelObject.pVertexLayout); // Set vertex buffer UINT stride = sizeof(VertexPos); UINT offset = 0; mpD3DDevice->IASetVertexBuffers(0, 1, &modelObject.pVertexBuffer, &stride, &offset); mpD3DDevice->IASetIndexBuffer(modelObject.pIndicesBuffer, DXGI_FORMAT_R32_UINT, 0); // Set primitive topology mpD3DDevice->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST); // Combine and send the final matrix to the shader D3DXMATRIX finalMatrix = (WorldMatrix * ViewMatrix * ProjectionMatrix); pProjectionMatrixVariable->SetMatrix((float*)&finalMatrix); // make sure modelObject is valid // Render a model object D3D10_TECHNIQUE_DESC techniqueDescription; 
modelObject.pTechnique->GetDesc(&techniqueDescription); // Loop through the technique passes for(UINT p=0; p < techniqueDescription.Passes; ++p) { modelObject.pTechnique->GetPassByIndex(p)->Apply(0); // draw the cube using all 36 vertices and 12 triangles mpD3DDevice->DrawIndexed(modelObject.numIndices,0,0); } } //Render actually incapsulates Gamedraw, so you can call data before you actually clear the buffer or after you //present data void MyGame::Render() { DX3dApp::Render(); } bool MyGame::CreateObject() { VertexPos vertices[NUM_VERTSX * NUM_VERTSY]; for(int z=0; z < NUM_VERTSY; ++z) { for(int x = 0; x < NUM_VERTSX; ++x) { vertices[x + z * NUM_VERTSX].pos.x = (float)x * CELL_WIDTH; vertices[x + z * NUM_VERTSX].pos.z = (float)z * CELL_HEIGHT; vertices[x + z * NUM_VERTSX].pos.y = 0.0f; vertices[x + z * NUM_VERTSX].color = D3DXVECTOR4(1.0, 0.0f, 0.0f, 0.0f); } } DWORD indices[NUM_VERTSX * NUM_VERTSY]; int curIndex = 0; for(int z=0; z < NUM_ROWS; ++z) { for(int x = 0; x < NUM_COLS; ++x) { int curVertex = x + (z * NUM_VERTSX); indices[curIndex] = curVertex; indices[curIndex + 1] = curVertex + NUM_VERTSX; indices[curIndex + 2] = curVertex + 1; indices[curIndex + 3] = curVertex + 1; indices[curIndex + 4] = curVertex + NUM_VERTSX; indices[curIndex + 5] = curVertex + NUM_VERTSX + 1; curIndex += 6; } } //Create Layout D3D10_INPUT_ELEMENT_DESC layout[] = { {"POSITION",0,DXGI_FORMAT_R32G32B32_FLOAT, 0 , 0, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"COLOR",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 12, D3D10_INPUT_PER_VERTEX_DATA, 0} }; UINT numElements = (sizeof(layout)/sizeof(layout[0])); modelObject.numVertices = sizeof(vertices)/sizeof(VertexPos); //Create buffer desc D3D10_BUFFER_DESC bufferDesc; bufferDesc.Usage = D3D10_USAGE_DEFAULT; bufferDesc.ByteWidth = sizeof(VertexPos) * modelObject.numVertices; bufferDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER; bufferDesc.CPUAccessFlags = 0; bufferDesc.MiscFlags = 0; D3D10_SUBRESOURCE_DATA initData; initData.pSysMem = vertices; //Create the buffer HRESULT hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pVertexBuffer); if(FAILED(hr)) return false; modelObject.numIndices = sizeof(indices)/sizeof(DWORD); bufferDesc.ByteWidth = sizeof(DWORD) * modelObject.numIndices; bufferDesc.BindFlags = D3D10_BIND_INDEX_BUFFER; initData.pSysMem = indices; hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pIndicesBuffer); if(FAILED(hr)) return false; ///////////////////////////////////////////////////////////////////////////// //Set up fx files LPCWSTR effectFilename = L"effect.fx"; modelObject.pEffect = NULL; hr = D3DX10CreateEffectFromFile(effectFilename, NULL, NULL, "fx_4_0", D3D10_SHADER_ENABLE_STRICTNESS, 0, mpD3DDevice, NULL, NULL, &modelObject.pEffect, NULL, NULL); if(FAILED(hr)) return false; pProjectionMatrixVariable = modelObject.pEffect->GetVariableByName("Projection")->AsMatrix(); //Dont sweat the technique. Get it! LPCSTR effectTechniqueName = "Render"; modelObject.pTechnique = modelObject.pEffect->GetTechniqueByName(effectTechniqueName); if(modelObject.pTechnique == NULL) return false; //Create Vertex layout D3D10_PASS_DESC passDesc; modelObject.pTechnique->GetPassByIndex(0)->GetDesc(&passDesc); hr = mpD3DDevice->CreateInputLayout(layout, numElements, passDesc.pIAInputSignature, passDesc.IAInputSignatureSize, &modelObject.pVertexLayout); if(FAILED(hr)) return false; return true; }
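    The sizing of the indices array looks like the actual problem: it is declared as DWORD indices[NUM_VERTSX * NUM_VERTSY], i.e. 17 * 17 = 289 elements, but the nested loops emit six indices per cell, i.e. NUM_ROWS * NUM_COLS * 6 = 16 * 16 * 6 = 1536 entries, so the writes run well past the end of the stack buffer and corrupt it. Below is a minimal sketch of the corrected sizing, reusing the #defines from the question and a heap-allocated vector to keep large buffers off the stack.

```cpp
#include <vector>

// Two triangles (6 indices) are generated per grid cell.
const UINT indexCount = NUM_ROWS * NUM_COLS * 6;        // 16 * 16 * 6 = 1536
std::vector<DWORD> indices(indexCount);

int curIndex = 0;
for (DWORD z = 0; z < NUM_ROWS; ++z)
{
    for (DWORD x = 0; x < NUM_COLS; ++x)
    {
        DWORD curVertex = x + z * NUM_VERTSX;
        indices[curIndex + 0] = curVertex;
        indices[curIndex + 1] = curVertex + NUM_VERTSX;
        indices[curIndex + 2] = curVertex + 1;
        indices[curIndex + 3] = curVertex + 1;
        indices[curIndex + 4] = curVertex + NUM_VERTSX;
        indices[curIndex + 5] = curVertex + NUM_VERTSX + 1;
        curIndex += 6;
    }
}

// When filling the D3D10 buffer description, size it from indexCount rather
// than sizeof(indices):
//   modelObject.numIndices = indexCount;
//   bufferDesc.ByteWidth   = sizeof(DWORD) * indexCount;
//   initData.pSysMem       = indices.data();
```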

    Read the article

  • Conversion from YUV444 to RGB888

    - by Abhi
    I am new in this field and i desperately need some guidance from u all. I have to support yuv444 to rgb 888 in display driver module. There is one test which i have done for yv12 → rgb565 in wince 6.0 r3 which is mentioned below. //------------------------------------------------------------------------------ // // Function: PP_CSC_YV12_RGB565Test // // This function tests the Post-processor // // // // Parameters: // uiMsg // [in] Ignored. // // tpParam // [in] Ignored. // // lpFTE // [in] Ignored. // // Returns: // Specifies if the test passed (TPR_PASS), failed (TPR_FAIL), or was // skipped (TPR_SKIP). // // TESTPROCAPI PP_CSC_YV12_RGB565Test(UINT uMsg, TPPARAM tpParam, LPFUNCTION_TABLE_ENTRY lpFTE) { LogEntry(L"%d : In %s Function \r\n",++abhineet,__WFUNCTION__); UNREFERENCED_PARAMETER(tpParam); UNREFERENCED_PARAMETER(lpFTE); DWORD dwResult= TPR_SKIP; ppConfigData ppData; DWORD iInputBytesPerFrame, iOutputBytesPerFrame; UINT32 iInputStride, iOutputStride; UINT16 iOutputWidth, iOutputHeight, iOutputBPP; UINT16 iInputWidth, iInputHeight, iInputBPP; int iOption; PP_TEST_FUNCTION_ENTRY(); // Validate that the shell wants the test to run if (uMsg != TPM_EXECUTE) { return TPR_NOT_HANDLED; } PPTestInit(); iInputWidth = PP_TEST_FRAME_WIDTH; //116 iInputHeight = PP_TEST_FRAME_HEIGHT; //160 iInputBPP = PP_TEST_FRAME_BPP; //2 iInputStride = iInputWidth * 3/2; // YV12 is 12 bits per pixel iOutputWidth = PP_TEST_FRAME_WIDTH; iOutputHeight = PP_TEST_FRAME_HEIGHT; iOutputBPP = PP_TEST_FRAME_BPP; iOutputStride = iOutputWidth * iOutputBPP; // Allocate buffers for input and output frames iInputBytesPerFrame = iInputStride * iInputHeight; pInputFrameVirtAddr = (UINT32 *) AllocPhysMem(iInputBytesPerFrame, PAGE_EXECUTE_READWRITE, 0, 0, (ULONG *) &pInputFramePhysAddr); iOutputBytesPerFrame = iOutputStride * iOutputHeight; pOutputFrameVirtAddr = (UINT32 *) AllocPhysMem(iOutputBytesPerFrame, PAGE_EXECUTE_READWRITE, 0, 0, (ULONG *) &pOutputFramePhysAddr); if ((NULL == pInputFrameVirtAddr) || (NULL == pOutputFrameVirtAddr)) { dwResult = TPR_FAIL; goto PP_CSC_YV12_RGB565Test_clean_up; } //----------------------------- // Configure PP //----------------------------- // Set up post-processing configuration data memset(&ppData, 0 , sizeof(ppData)); // Set up input format and data width ppData.inputIDMAChannel.FrameFormat = icFormat_YUV420; ppData.inputIDMAChannel.DataWidth = icDataWidth_8BPP; // dummy value for YUV ppData.inputIDMAChannel.PixelFormat.component0_offset = 0; ppData.inputIDMAChannel.PixelFormat.component1_offset = 8; ppData.inputIDMAChannel.PixelFormat.component2_offset = 16; ppData.inputIDMAChannel.PixelFormat.component3_offset = 24; ppData.inputIDMAChannel.PixelFormat.component0_width = 8-1; ppData.inputIDMAChannel.PixelFormat.component1_width = 8-1; ppData.inputIDMAChannel.PixelFormat.component2_width = 8-1; ppData.inputIDMAChannel.PixelFormat.component3_width = 8-1; ppData.inputIDMAChannel.FrameSize.height = iInputHeight; ppData.inputIDMAChannel.FrameSize.width = iInputWidth; ppData.inputIDMAChannel.LineStride = iInputWidth; // Set up output format and data width ppData.outputIDMAChannel.FrameFormat = icFormat_RGB; ppData.outputIDMAChannel.DataWidth = icDataWidth_16BPP; ppData.outputIDMAChannel.PixelFormat.component0_offset = RGB_COMPONET0_OFFSET; ppData.outputIDMAChannel.PixelFormat.component1_offset = RGB_COMPONET1_OFFSET; ppData.outputIDMAChannel.PixelFormat.component2_offset = RGB_COMPONET2_OFFSET; ppData.outputIDMAChannel.PixelFormat.component3_offset = RGB_COMPONET3_OFFSET; 
ppData.outputIDMAChannel.PixelFormat.component0_width = RGB_COMPONET0_WIDTH -1; ppData.outputIDMAChannel.PixelFormat.component1_width = RGB_COMPONET1_WIDTH -1; ppData.outputIDMAChannel.PixelFormat.component2_width = RGB_COMPONET2_WIDTH -1; ppData.outputIDMAChannel.PixelFormat.component3_width = RGB_COMPONET3_WIDTH; ppData.outputIDMAChannel.FrameSize.height = iOutputHeight; ppData.outputIDMAChannel.FrameSize.width = iOutputWidth; ppData.outputIDMAChannel.LineStride = iOutputStride; // Set up post-processing channel CSC parameters // based on input and output ppData.CSCEquation = CSCY2R_A1; ppData.inputIDMAChannel.UBufOffset = iInputHeight * iInputWidth + (iInputHeight * iInputWidth)/4; ppData.inputIDMAChannel.VBufOffset = iInputHeight * iInputWidth; ppData.FlipRot.verticalFlip = FALSE; ppData.FlipRot.horizontalFlip = FALSE; ppData.FlipRot.rotate90 = FALSE; if (!PPConfigure(hPP, &ppData)) { dwResult = TPR_FAIL; goto PP_CSC_YV12_RGB565Test_clean_up; } //----------------------------- // Read first input buffer //----------------------------- // Read Input file for new frame if (!ReadImage(PP_TEST_YV12_FILENAME,pInputFrameVirtAddr,iInputBytesPerFrame,PP_TEST_FRAME_WIDTH,PP_TEST_FRAME_HEIGHT)) { g_pKato->Log(PP_ZONE_ERROR, (TEXT("fail to ReadImage()!\r\n"))); dwResult = TPR_FAIL; goto PP_CSC_YV12_RGB565Test_clean_up; } //----------------------------- // Start PP //----------------------------- if (!PPStart(hPP)) { dwResult = TPR_FAIL; goto PP_CSC_YV12_RGB565Test_clean_up; } if (!PPInterruptEnable(hPP, FRAME_INTERRUPT)) { dwResult = TPR_FAIL; goto PP_CSC_YV12_RGB565Test_clean_up; } //----------------------------- // Queue Input/Output Buffers //----------------------------- UINT32 starttime = GetTickCount(); // Add input and output buffers to PP queues. if (!PPAddInputBuffer(hPP, (UINT32) pInputFramePhysAddr)) { dwResult = TPR_FAIL; goto PP_CSC_YV12_RGB565Test_clean_up; } if (!PPAddOutputBuffer(hPP,(UINT32) pOutputFramePhysAddr)) { dwResult = TPR_FAIL; goto PP_CSC_YV12_RGB565Test_clean_up; } if (!PPWaitForNotBusy(hPP, FRAME_INTERRUPT)) { dwResult = TPR_FAIL; goto PP_CSC_YV12_RGB565Test_clean_up; } RETAILMSG(1, (TEXT("===========FLIP TIME: %dms====== \r\n"), GetTickCount()-starttime)); //----------------------------- // Stop PP //----------------------------- if (!PPStop(hPP)) { dwResult = TPR_FAIL; goto PP_CSC_YV12_RGB565Test_clean_up; } if (!PPClearBuffers(hPP)) { dwResult = TPR_FAIL; goto PP_CSC_YV12_RGB565Test_clean_up; } ShowRGBContent((UINT8 *) pOutputFrameVirtAddr, PP_TEST_FRAME_WIDTH, PP_TEST_FRAME_HEIGHT); iOption = MessageBox( NULL,TEXT("After CSC(YV12->RGB565). Is it correct?"),TEXT("Test result"),MB_YESNO ); if ( IDNO == iOption ) { dwResult = TPR_FAIL; } else { dwResult = TPR_PASS; } PP_CSC_YV12_RGB565Test_clean_up: if(NULL != pInputFrameVirtAddr) { FreePhysMem( pInputFrameVirtAddr ); pInputFrameVirtAddr = NULL; } if(NULL != pOutputFrameVirtAddr) { FreePhysMem( pOutputFrameVirtAddr ); pOutputFrameVirtAddr = NULL; } PPTestDeInit(); LogEntry(L"%d :Out %s Function \r\n",++abhineet,__WFUNCTION__); return dwResult; } The below is the flow for this function. It tells the start and end of this test. 
*** vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv *** TEST STARTING *** *** Test Name: PP CSC(YV12-RGB565) Test *** Test ID: 500 *** Library Path: pp_test.dll *** Command Line: *** Kernel Mode: Yes *** Random Seed: 24421 *** Thread Count: 0 *** vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv *******Abhineet-PPTEST : 338 : In ShellProc Function *******Abhineet-PPTEST : 339 : In Debug Function PP_TEST: ShellProc(SPM_BEGIN_TEST, ...) called *******Abhineet-PPTEST : 340 :Out Debug Function BEGIN TEST: "PP CSC(YV12-RGB565) Test", Threads=0, Seed=24421 *******Abhineet-PPTEST : 341 :Out ShellProc Function *******Abhineet-PPTEST : 342 : In PP_CSC_YV12_RGB565Test Function PP_CSC_YV12_RGB565Test *******Abhineet-PPTEST : 343 : In PPTestInit Function *******Abhineet-PPTEST : 344 : In GetPanelDimensions Function *******Abhineet-PPTEST : 345 :Out GetPanelDimensions Function GetPanelDimensions: width=1024 height=768 bpp=16 *******Abhineet-PPTEST : 346 :Out PPTestInit Function *******Abhineet-PPTEST : 347 : In ReadImage Function RELFSD: Opening file flags_112x160.yv12 from desktop *******Abhineet-PPTEST : 348 :Out ReadImage Function ===========FLIP TIME: 1ms====== *******Abhineet-PPTEST : 349 : In ShowRGBContent Function *******Abhineet-PPTEST : 350 :Out ShowRGBContent Function *******Abhineet-PPTEST : 351 : In PPTestDeInit Function *******Abhineet-PPTEST : 352 :Out PPTestDeInit Function *******Abhineet-PPTEST : 353 :Out PP_CSC_YV12_RGB565Test Function *******Abhineet-PPTEST : 354 : In DllMain Function *******Abhineet-PPTEST : 355 :Out DllMain Function *******Abhineet-PPTEST : 356 : In ShellProc Function *******Abhineet-PPTEST : 357 : In Debug Function PP_TEST: ShellProc(SPM_END_TEST, ...) called *******Abhineet-PPTEST : 358 :Out Debug Function END TEST: "PP CSC(YV12-RGB565) Test", PASSED, Time=6.007 *******Abhineet-PPTEST : 359 :Out ShellProc Function *** ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ *** TEST COMPLETED *** *** Test Name: PP CSC(YV12-RGB565) Test *** Test ID: 500 *** Library Path: pp_test.dll *** Command Line: *** Kernel Mode: Yes *** Result: Passed *** Random Seed: 24421 *** Thread Count: 1 *** Execution Time: 0:00:06.007 *** ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Please help me out to make changes to the above function for yuv444-rgb888.
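    For reference, the per-pixel arithmetic for YUV444 to RGB888 is simple. The sketch below is a plain software conversion using the fixed-point BT.601 (video-range) coefficients published by Microsoft; it is independent of the post-processor path above and assumes packed 3-bytes-per-pixel YUV444 input and packed RGB888 output. If the i.MX post-processor exposes a 4:4:4 input format, the same PPConfigure flow should apply with the input FrameFormat, component offsets, stride and U/V buffer offsets adjusted for packed 4:4:4 data; the exact enum names depend on the BSP headers.

```cpp
#include <stdint.h>

static inline uint8_t Clamp255(int v)
{
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

// Convert one YUV444 pixel (one byte each of Y, U, V) to RGB888
// using fixed-point BT.601 video-range coefficients.
static void Yuv444PixelToRgb888(uint8_t y, uint8_t u, uint8_t v,
                                uint8_t* r, uint8_t* g, uint8_t* b)
{
    const int c = (int)y - 16;
    const int d = (int)u - 128;
    const int e = (int)v - 128;

    *r = Clamp255((298 * c           + 409 * e + 128) >> 8);
    *g = Clamp255((298 * c - 100 * d - 208 * e + 128) >> 8);
    *b = Clamp255((298 * c + 516 * d           + 128) >> 8);
}

// Convert a packed frame: in = Y0 U0 V0 Y1 U1 V1 ..., out = R0 G0 B0 R1 G1 B1 ...
static void Yuv444FrameToRgb888(const uint8_t* in, uint8_t* out,
                                int width, int height)
{
    for (int i = 0; i < width * height; ++i)
    {
        Yuv444PixelToRgb888(in[3 * i + 0], in[3 * i + 1], in[3 * i + 2],
                            &out[3 * i + 0], &out[3 * i + 1], &out[3 * i + 2]);
    }
}
```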

    Read the article

  • How to capture live camera frames in RGB with DirectShow

    - by Jonny Boy
    I'm implementing live video capture through DirectShow for live processing and display (Augmented Reality app). I can access the pixels easily enough, but it seems I can't get the SampleGrabber to provide RGB data. The device (an iSight -- running VC++ Express in VMWare) only reports MEDIASUBTYPE_YUY2. After extensive Googling, I still can't figure out whether DirectShow is supposed to provide built-in color space conversion for this sort of thing. Some sites report that there is no YUV<-RGB conversion built in, others report that you just have to call SetMediaType on your ISampleGrabber with an RGB subtype. Any advice is greatly appreciated, I'm going nuts on this one. Code provided below. Please note that the code works, except that it doesn't provide RGB data. I'm aware that I can implement my own conversion filter, but this is not feasible because I'd have to anticipate every possible device format, and this is a relatively small project. // Playback IGraphBuilder *pGraphBuilder = NULL; ICaptureGraphBuilder2 *pCaptureGraphBuilder2 = NULL; IMediaControl *pMediaControl = NULL; IBaseFilter *pDeviceFilter = NULL; IAMStreamConfig *pStreamConfig = NULL; BYTE *videoCaps = NULL; AM_MEDIA_TYPE **mediaTypeArray = NULL; // Device selection ICreateDevEnum *pCreateDevEnum = NULL; IEnumMoniker *pEnumMoniker = NULL; IMoniker *pMoniker = NULL; ULONG nFetched = 0; HRESULT hr = CoInitializeEx(NULL, COINIT_MULTITHREADED); // Create CreateDevEnum to list device hr = CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER, IID_ICreateDevEnum, (PVOID *)&pCreateDevEnum); if (FAILED(hr)) goto ReleaseDataAndFail; // Create EnumMoniker to list devices hr = pCreateDevEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory, &pEnumMoniker, 0); if (FAILED(hr)) goto ReleaseDataAndFail; pEnumMoniker->Reset(); // Find desired device while (pEnumMoniker->Next(1, &pMoniker, &nFetched) == S_OK) { IPropertyBag *pPropertyBag; TCHAR devname[256]; // bind to IPropertyBag hr = pMoniker->BindToStorage(0, 0, IID_IPropertyBag, (void **)&pPropertyBag); if (FAILED(hr)) { pMoniker->Release(); continue; } VARIANT varName; VariantInit(&varName); HRESULT hr = pPropertyBag->Read(L"DevicePath", &varName, 0); if (FAILED(hr)) { pMoniker->Release(); pPropertyBag->Release(); continue; } char devicePath[DeviceInfo::STRING_LENGTH_MAX] = ""; wcstombs(devicePath, varName.bstrVal, DeviceInfo::STRING_LENGTH_MAX); if (strcmp(devicePath, deviceId) == 0) { // Bind Moniker to Filter pMoniker->BindToObject(0, 0, IID_IBaseFilter, (void**)&pDeviceFilter); break; } pMoniker->Release(); pPropertyBag->Release(); } if (pDeviceFilter == NULL) goto ReleaseDataAndFail; // Create sample grabber IBaseFilter *pGrabberF = NULL; hr = CoCreateInstance(CLSID_SampleGrabber, NULL, CLSCTX_INPROC_SERVER, IID_IBaseFilter, (void**)&pGrabberF); if (FAILED(hr)) goto ReleaseDataAndFail; hr = pGrabberF->QueryInterface(IID_ISampleGrabber, (void**)&pGrabber); if (FAILED(hr)) goto ReleaseDataAndFail; // Create FilterGraph hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC, IID_IGraphBuilder, (LPVOID *)&pGraphBuilder); if (FAILED(hr)) goto ReleaseDataAndFail; // create CaptureGraphBuilder2 hr = CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL, CLSCTX_INPROC, IID_ICaptureGraphBuilder2, (LPVOID *)&pCaptureGraphBuilder2); if (FAILED(hr)) goto ReleaseDataAndFail; // set FilterGraph hr = pCaptureGraphBuilder2->SetFiltergraph(pGraphBuilder); if (FAILED(hr)) goto ReleaseDataAndFail; // get MediaControl interface hr = 
    pGraphBuilder->QueryInterface(IID_IMediaControl, (LPVOID *)&pMediaControl); if (FAILED(hr)) goto ReleaseDataAndFail; // Add filters hr = pGraphBuilder->AddFilter(pDeviceFilter, L"Device Filter"); if (FAILED(hr)) goto ReleaseDataAndFail; hr = pGraphBuilder->AddFilter(pGrabberF, L"Sample Grabber"); if (FAILED(hr)) goto ReleaseDataAndFail; // Set sample grabber options AM_MEDIA_TYPE mt; ZeroMemory(&mt, sizeof(AM_MEDIA_TYPE)); mt.majortype = MEDIATYPE_Video; mt.subtype = MEDIASUBTYPE_RGB32; hr = pGrabber->SetMediaType(&mt); if (FAILED(hr)) goto ReleaseDataAndFail; hr = pGrabber->SetOneShot(FALSE); if (FAILED(hr)) goto ReleaseDataAndFail; hr = pGrabber->SetBufferSamples(TRUE); if (FAILED(hr)) goto ReleaseDataAndFail; // Get stream config interface hr = pCaptureGraphBuilder2->FindInterface(NULL, &MEDIATYPE_Video, pDeviceFilter, IID_IAMStreamConfig, (void **)&pStreamConfig); if (FAILED(hr)) goto ReleaseDataAndFail; int streamCapsCount = 0, capsSize, bestFit = -1, bestFitPixelDiff = 1000000000, desiredPixelCount = _width * _height, bestFitWidth = 0, bestFitHeight = 0; float desiredAspectRatio = (float)_width / (float)_height; hr = pStreamConfig->GetNumberOfCapabilities(&streamCapsCount, &capsSize); if (FAILED(hr)) goto ReleaseDataAndFail; videoCaps = (BYTE *)malloc(capsSize * streamCapsCount); mediaTypeArray = (AM_MEDIA_TYPE **)malloc(sizeof(AM_MEDIA_TYPE *) * streamCapsCount); for (int i = 0; i < streamCapsCount; i++) { hr = pStreamConfig->GetStreamCaps(i, &mediaTypeArray[i], videoCaps + capsSize * i); if (FAILED(hr)) continue; VIDEO_STREAM_CONFIG_CAPS *currentVideoCaps = (VIDEO_STREAM_CONFIG_CAPS *)(videoCaps + capsSize * i); int closestWidth = MAX(currentVideoCaps->MinOutputSize.cx, MIN(currentVideoCaps->MaxOutputSize.cx, width)); int closestHeight = MAX(currentVideoCaps->MinOutputSize.cy, MIN(currentVideoCaps->MaxOutputSize.cy, height)); int pixelDiff = ABS(desiredPixelCount - closestWidth * closestHeight); if (pixelDiff < bestFitPixelDiff && ABS(desiredAspectRatio - (float)closestWidth / (float)closestHeight) < 0.1f) { bestFit = i; bestFitPixelDiff = pixelDiff; bestFitWidth = closestWidth; bestFitHeight = closestHeight; } } if (bestFit == -1) goto ReleaseDataAndFail; AM_MEDIA_TYPE *mediaType; hr = pStreamConfig->GetFormat(&mediaType); if (FAILED(hr)) goto ReleaseDataAndFail; VIDEOINFOHEADER *videoInfoHeader = (VIDEOINFOHEADER *)mediaType->pbFormat; videoInfoHeader->bmiHeader.biWidth = bestFitWidth; videoInfoHeader->bmiHeader.biHeight = bestFitHeight; //mediaType->subtype = MEDIASUBTYPE_RGB32; hr = pStreamConfig->SetFormat(mediaType); if (FAILED(hr)) goto ReleaseDataAndFail; pStreamConfig->Release(); pStreamConfig = NULL; free(videoCaps); videoCaps = NULL; free(mediaTypeArray); mediaTypeArray = NULL; // Connect pins IPin *pDeviceOut = NULL, *pGrabberIn = NULL; if (FindPin(pDeviceFilter, PINDIR_OUTPUT, 0, &pDeviceOut) && FindPin(pGrabberF, PINDIR_INPUT, 0, &pGrabberIn)) { hr = pGraphBuilder->Connect(pDeviceOut, pGrabberIn); if (FAILED(hr)) goto ReleaseDataAndFail; } else { goto ReleaseDataAndFail; } // start playing hr = pMediaControl->Run(); if (FAILED(hr)) goto ReleaseDataAndFail; hr = pGrabber->GetConnectedMediaType(&mt); // Set dimensions width = bestFitWidth; height = bestFitHeight; _width = bestFitWidth; _height = bestFitHeight; // Allocate pixel buffer pPixelBuffer = (unsigned *)malloc(width * height * 4); // Release objects pGraphBuilder->Release(); pGraphBuilder = NULL; pEnumMoniker->Release(); pEnumMoniker = NULL; pCreateDevEnum->Release(); pCreateDevEnum = 
NULL; return true;
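    One thing worth trying (behaviour varies by driver, so treat this as a sketch rather than a guaranteed fix): request MEDIASUBTYPE_RGB24 on the Sample Grabber and set the format type before the pins are connected. IGraphBuilder::Connect performs an intelligent connect, so when the device only offers YUY2, DirectShow can usually insert a stock converter (the AVI Decompressor or Color Space Converter, depending on the system) to satisfy the RGB type. The fragment below uses only interfaces already present in the code above.

```cpp
// Ask the Sample Grabber for RGB *before* connecting the pins.
AM_MEDIA_TYPE mt;
ZeroMemory(&mt, sizeof(AM_MEDIA_TYPE));
mt.majortype  = MEDIATYPE_Video;
mt.subtype    = MEDIASUBTYPE_RGB24;   // RGB24 is more widely convertible than RGB32
mt.formattype = FORMAT_VideoInfo;     // request a VIDEOINFOHEADER-based format

hr = pGrabber->SetMediaType(&mt);
if (FAILED(hr)) goto ReleaseDataAndFail;

// ... connect pDeviceOut to pGrabberIn exactly as before ...

// After the graph is connected, check what was actually negotiated and read
// the final dimensions from the VIDEOINFOHEADER (RGB frames arrive bottom-up).
AM_MEDIA_TYPE connectedType;
hr = pGrabber->GetConnectedMediaType(&connectedType);
if (SUCCEEDED(hr) && connectedType.formattype == FORMAT_VideoInfo)
{
    VIDEOINFOHEADER* vih = (VIDEOINFOHEADER*)connectedType.pbFormat;
    LONG rgbWidth  = vih->bmiHeader.biWidth;
    LONG rgbHeight = vih->bmiHeader.biHeight;   // positive height => bottom-up rows
    // ISampleGrabber::GetCurrentBuffer now returns packed 24-bit BGR rows.
}
```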

    Read the article

  • how to export bind and keyframe bone poses from blender to use in OpenGL

    - by SaldaVonSchwartz
    EDIT: I decided to reformulate the question in much simpler terms to see if someone can give me a hand with this. Basically, I'm exporting meshes, skeletons and actions from blender into an engine of sorts that I'm working on. But I'm getting the animations wrong. I can tell the basic motion paths are being followed but there's always an axis of translation or rotation which is wrong. I think the problem is most likely not in my engine code (OpenGL-based) but rather in either my misunderstanding of some part of the theory behind skeletal animation / skinning or the way I am exporting the appropriate joint matrices from blender in my exporter script. I'll explain the theory, the engine animation system and my blender export script, hoping someone might catch the error in either or all of these. The theory: (I'm using column-major ordering since that's what I use in the engine cause it's OpenGL-based) Assume I have a mesh made up of a single vertex v, along with a transformation matrix M which takes the vertex v from the mesh's local space to world space. That is, if I was to render the mesh without a skeleton, the final position would be gl_Position = ProjectionMatrix * M * v. Now assume I have a skeleton with a single joint j in bind / rest pose. j is actually another matrix. A transform from j's local space to its parent space which I'll denote Bj. if j was part of a joint hierarchy in the skeleton, Bj would take from j space to j-1 space (that is to its parent space). However, in this example j is the only joint, so Bj takes from j space to world space, like M does for v. Now further assume I have a a set of frames, each with a second transform Cj, which works the same as Bj only that for a different, arbitrary spatial configuration of join j. Cj still takes vertices from j space to world space but j is rotated and/or translated and/or scaled. Given the above, in order to skin vertex v at keyframe n. I need to: take v from world space to joint j space modify j (while v stays fixed in j space and is thus taken along in the transformation) take v back from the modified j space to world space So the mathematical implementation of the above would be: v' = Cj * Bj^-1 * v. Actually, I have one doubt here.. I said the mesh to which v belongs has a transform M which takes from model space to world space. And I've also read in a couple textbooks that it needs to be transformed from model space to joint space. But I also said in 1 that v needs to be transformed from world to joint space. So basically I'm not sure if I need to do v' = Cj * Bj^-1 * v or v' = Cj * Bj^-1 * M * v. Right now my implementation multiples v' by M and not v. But I've tried changing this and it just screws things up in a different way cause there's something else wrong. Finally, If we wanted to skin a vertex to a joint j1 which in turn is a child of a joint j0, Bj1 would be Bj0 * Bj1 and Cj1 would be Cj0 * Cj1. But Since skinning is defined as v' = Cj * Bj^-1 * v , Bj1^-1 would be the reverse concatenation of the inverses making up the original product. That is, v' = Cj0 * Cj1 * Bj1^-1 * Bj0^-1 * v Now on to the implementation (Blender side): Assume the following mesh made up of 1 cube, whose vertices are bound to a single joint in a single-joint skeleton: Assume also there's a 60-frame, 3-keyframe animation at 60 fps. The animation essentially is: keyframe 0: the joint is in bind / rest pose (the way you see it in the image). 
keyframe 30: the joint translates up (+z in blender) some amount and at the same time rotates pi/4 rad clockwise. keyframe 59: the joint goes back to the same configuration it was in keyframe 0. My first source of confusion on the blender side is its coordinate system (as opposed to OpenGL's default) and the different matrices accessible through the python api. Right now, this is what my export script does about translating blender's coordinate system to OpenGL's standard system: # World transform: Blender -> OpenGL worldTransform = Matrix().Identity(4) worldTransform *= Matrix.Scale(-1, 4, (0,0,1)) worldTransform *= Matrix.Rotation(radians(90), 4, "X") # Mesh (local) transform matrix file.write('Mesh Transform:\n') localTransform = mesh.matrix_local.copy() localTransform = worldTransform * localTransform for col in localTransform.col: file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3])) file.write('\n') So if you will, my "world" matrix is basically the act of changing blenders coordinate system to the default GL one with +y up, +x right and -z into the viewing volume. Then I also premultiply (in the sense that it's done by the time we reach the engine, not in the sense of post or pre in terms of matrix multiplication order) the mesh matrix M so that I don't need to multiply it again once per draw call in the engine. About the possible matrices to extract from Blender joints (bones in Blender parlance), I'm doing the following: For joint bind poses: def DFSJointTraversal(file, skeleton, jointList): for joint in jointList: bindPoseJoint = skeleton.data.bones[joint.name] bindPoseTransform = bindPoseJoint.matrix_local.inverted() file.write('Joint ' + joint.name + ' Transform {\n') translationV = bindPoseTransform.to_translation() rotationQ = bindPoseTransform.to_3x3().to_quaternion() scaleV = bindPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) DFSJointTraversal(file, skeleton, joint.children) file.write('}\n') Note that I'm actually grabbing the inverse of what I think is the bind pose transform Bj. This is so I don't need to invert it in the engine. Also note I went for matrix_local, assuming this is Bj. The other option is plain "matrix", which as far as I can tell is the same only that not homogeneous. For joint current / keyframe poses: for kfIndex in keyframes: bpy.context.scene.frame_set(kfIndex) file.write('keyframe: {:d}\n'.format(int(kfIndex))) for i in range(0, len(skeleton.data.bones)): file.write('joint: {:d}\n'.format(i)) currentPoseJoint = skeleton.pose.bones[i] currentPoseTransform = currentPoseJoint.matrix translationV = currentPoseTransform.to_translation() rotationQ = currentPoseTransform.to_3x3().to_quaternion() scaleV = currentPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) file.write('\n') Note that here I go for skeleton.pose.bones instead of data.bones and that I have a choice of 3 matrices: matrix, matrix_basis and matrix_channel. 
From the descriptions in the python API docs I'm not super clear which one I should choose, though I think it's the plain matrix. Also note I do not invert the matrix in this case. The implementation (Engine / OpenGL side): My animation subsystem does the following on each update (I'm omitting parts of the update loop where it's figured out which objects need update and time is hardcoded here for simplicity): static double time = 0; time = fmod((time + elapsedTime),1.); uint16_t LERPKeyframeNumber = 60 * time; uint16_t lkeyframeNumber = 0; uint16_t lkeyframeIndex = 0; uint16_t rkeyframeNumber = 0; uint16_t rkeyframeIndex = 0; for (int i = 0; i < aClip.keyframesCount; i++) { uint16_t keyframeNumber = aClip.keyframes[i].number; if (keyframeNumber <= LERPKeyframeNumber) { lkeyframeIndex = i; lkeyframeNumber = keyframeNumber; } else { rkeyframeIndex = i; rkeyframeNumber = keyframeNumber; break; } } double lTime = lkeyframeNumber / 60.; double rTime = rkeyframeNumber / 60.; double blendFactor = (time - lTime) / (rTime - lTime); GLKMatrix4 bindPosePalette[aSkeleton.jointsCount]; GLKMatrix4 currentPosePalette[aSkeleton.jointsCount]; for (int i = 0; i < aSkeleton.jointsCount; i++) { F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.joints[i]; F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.joints[i]; GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor); GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor); GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor); GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation); currentTransform = GLKMatrix4TranslateWithVector3(currentTransform, LERPTranslation); currentTransform = GLKMatrix4ScaleWithVector3(currentTransform, LERPScaling); GLKMatrix4 inverseBindTransform = GLKMatrix4MakeWithQuaternion(aSkeleton.joints[i].inverseBindTransform.q); inverseBindTransform = GLKMatrix4TranslateWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.t); inverseBindTransform = GLKMatrix4ScaleWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.s); if (aSkeleton.joints[i].parentIndex == -1) { bindPosePalette[i] = inverseBindTransform; currentPosePalette[i] = currentTransform; } else { bindPosePalette[i] = GLKMatrix4Multiply(inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]); currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform); } aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]); } Finally, this is my vertex shader: #version 100 uniform mat4 modelMatrix; uniform mat3 normalMatrix; uniform mat4 projectionMatrix; uniform mat4 skinningPalette[6]; uniform lowp float skinningEnabled; attribute vec4 position; attribute vec3 normal; attribute vec2 tCoordinates; attribute vec4 jointsWeights; attribute vec4 jointsIndices; varying highp vec2 tCoordinatesVarying; varying highp float lIntensity; void main() { tCoordinatesVarying = tCoordinates; vec4 skinnedVertexPosition = vec4(0.); for (int i = 0; i < 4; i++) { skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position; } vec4 skinnedNormal = vec4(0.); for (int i = 0; i < 4; i++) { skinnedNormal += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * vec4(normal, 0.); } vec4 finalPosition = mix(position, skinnedVertexPosition, skinningEnabled); vec4 finalNormal = mix(vec4(normal, 0.), skinnedNormal, 
skinningEnabled); vec3 eyeNormal = normalize(normalMatrix * finalNormal.xyz); vec3 lightPosition = vec3(0., 0., 2.); lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition))); gl_Position = projectionMatrix * modelMatrix * finalPosition; } The result is that the animation displays wrong in terms of orientation. That is, instead of bobbing up and down it bobs in and out (along what I think is the Z axis according to my transform in the export clip). And the rotation angle is counterclockwise instead of clockwise. If I try with a more than one joint, then it's almost as if the second joint rotates in it's own different coordinate space and does not follow 100% its parent's transform. Which I assume it should from my animation subsystem which I assume in turn follows the theory I explained for the case of more than one joint. Any thoughts?
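    One detail worth double-checking on the engine side (an observation, not a definitive diagnosis): GLKit's GLKMatrix4TranslateWithVector3 and GLKMatrix4ScaleWithVector3 post-multiply, so building the pose as MakeWithQuaternion, then Translate, then Scale yields R * T * S rather than the usual T * R * S, and the same applies to the inverse-bind reconstruction. Symptoms like translation happening along a rotated or mirrored axis often come from exactly this. Below is a small sketch of composing the matrices in an explicit T * R * S order with the same GLKit calls; globalPosePalette, inverseBindGlobal and parentIndex are illustrative names, not identifiers from the original code.

```cpp
// Local pose as T * R * S (column-vector convention).
GLKMatrix4 T = GLKMatrix4MakeTranslation(LERPTranslation.x,
                                         LERPTranslation.y,
                                         LERPTranslation.z);
GLKMatrix4 R = GLKMatrix4MakeWithQuaternion(SLERPRotation);
GLKMatrix4 S = GLKMatrix4MakeScale(LERPScaling.x, LERPScaling.y, LERPScaling.z);
GLKMatrix4 localPose = GLKMatrix4Multiply(T, GLKMatrix4Multiply(R, S));

// Global pose: parent's global pose times this joint's local pose
// (joints must be processed parent-before-child).
GLKMatrix4 globalPose = (parentIndex == -1)
    ? localPose
    : GLKMatrix4Multiply(globalPosePalette[parentIndex], localPose);
globalPosePalette[i] = globalPose;

// Skinning matrix: current global pose times the inverse global bind pose.
// The exporter above already writes the inverted bind matrix, so it only
// needs to be rebuilt here with the same T * R * S convention.
aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(globalPose, inverseBindGlobal[i]);
```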

    Read the article

  • Access violation writing location, in my loop

    - by numerical25
    The exact error I am getting is: First-chance exception at 0x0096234a in chp2.exe: 0xC0000005: Access violation writing location 0x002b0000. Windows has triggered a breakpoint in chp2.exe. The breakpoint stops here:

for(DWORD i = 0; i < m; ++i)
{
    // we start at the top of z
    float z = halfDepth - i*dx;
    for(DWORD j = 0; j < n; ++j)
    {
        // to the left of us
        float x = -halfWidth + j*dx;
        float y = 0.0f;
        vertices[i*n+j].pos = D3DXVECTOR3(x, y, z); // <----- Right here
        vertices[i*n+j].color = D3DXVECTOR4(1.0f, 0.0f, 0.0f, 0.0f);
    }
}

I am not sure what I am doing wrong. Below is the code in its entirety:

#include "MyGame.h"
//#include "CubeVector.h"

/* This code sets a projection and shows a turning cube. What has been added is the projection, rotation and a rasterizer to change the rasterization of the cube. The issue that was going on was something with the effect file which was causing the vertices not to be rendered correctly. */

typedef struct
{
    ID3D10Effect* pEffect;
    ID3D10EffectTechnique* pTechnique;

    // vertex information
    ID3D10Buffer* pVertexBuffer;
    ID3D10Buffer* pIndicesBuffer;
    ID3D10InputLayout* pVertexLayout;

    UINT numVertices;
    UINT numIndices;
} ModelObject;

ModelObject modelObject;

// World Matrix
D3DXMATRIX WorldMatrix;
// View Matrix
D3DXMATRIX ViewMatrix;
// Projection Matrix
D3DXMATRIX ProjectionMatrix;
ID3D10EffectMatrixVariable* pProjectionMatrixVariable = NULL;

// grid information
#define NUM_COLS 16
#define NUM_ROWS 16
#define CELL_WIDTH 32
#define CELL_HEIGHT 32
#define NUM_VERTSX (NUM_COLS + 1)
#define NUM_VERTSY (NUM_ROWS + 1)

bool MyGame::InitDirect3D()
{
    if(!DX3dApp::InitDirect3D())
    {
        return false;
    }

    D3D10_RASTERIZER_DESC rastDesc;
    rastDesc.FillMode = D3D10_FILL_WIREFRAME;
    rastDesc.CullMode = D3D10_CULL_FRONT;
    rastDesc.FrontCounterClockwise = true;
    rastDesc.DepthBias = false;
    rastDesc.DepthBiasClamp = 0;
    rastDesc.SlopeScaledDepthBias = 0;
    rastDesc.DepthClipEnable = false;
    rastDesc.ScissorEnable = false;
    rastDesc.MultisampleEnable = false;
    rastDesc.AntialiasedLineEnable = false;

    ID3D10RasterizerState *g_pRasterizerState;
    mpD3DDevice->CreateRasterizerState(&rastDesc, &g_pRasterizerState);
    mpD3DDevice->RSSetState(g_pRasterizerState);

    // Set up the World Matrix
    // The first line of code creates your identity matrix.
    // The second combines your camera position, target location, and which way is up, respectively.
    D3DXMatrixIdentity(&WorldMatrix);
    D3DXMatrixLookAtLH(&ViewMatrix,
                       new D3DXVECTOR3(200.0f, 60.0f, -20.0f),
                       new D3DXVECTOR3(200.0f, 50.0f, 0.0f),
                       new D3DXVECTOR3(0.0f, 1.0f, 0.0f));

    // Set up the projection matrix
    D3DXMatrixPerspectiveFovLH(&ProjectionMatrix, (float)D3DX_PI * 0.5f, (float)mWidth/(float)mHeight, 0.1f, 100.0f);

    if(!CreateObject())
    {
        return false;
    }

    return true;
}

// These are actions that take place after the clearing of the buffer and before the present
void MyGame::GameDraw()
{
    static float rotationAngle = 0.0f;

    // create the rotation matrix using the rotation angle
    D3DXMatrixRotationY(&WorldMatrix, rotationAngle);
    rotationAngle += (float)D3DX_PI * 0.0f;

    // Set the input layout
    mpD3DDevice->IASetInputLayout(modelObject.pVertexLayout);

    // Set vertex buffer
    UINT stride = sizeof(VertexPos);
    UINT offset = 0;
    mpD3DDevice->IASetVertexBuffers(0, 1, &modelObject.pVertexBuffer, &stride, &offset);
    mpD3DDevice->IASetIndexBuffer(modelObject.pIndicesBuffer, DXGI_FORMAT_R32_UINT, 0);

    // Set primitive topology
    mpD3DDevice->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    // Combine and send the final matrix to the shader
    D3DXMATRIX finalMatrix = (WorldMatrix * ViewMatrix * ProjectionMatrix);
    pProjectionMatrixVariable->SetMatrix((float*)&finalMatrix);

    // make sure modelObject is valid

    // Render a model object
    D3D10_TECHNIQUE_DESC techniqueDescription;
    modelObject.pTechnique->GetDesc(&techniqueDescription);

    // Loop through the technique passes
    for(UINT p=0; p < techniqueDescription.Passes; ++p)
    {
        modelObject.pTechnique->GetPassByIndex(p)->Apply(0);

        // draw the cube using all 36 vertices and 12 triangles
        mpD3DDevice->DrawIndexed(modelObject.numIndices, 0, 0);
    }
}

// Render actually encapsulates GameDraw, so you can call data before you actually clear the buffer or after you present data
void MyGame::Render()
{
    DX3dApp::Render();
}

bool MyGame::CreateObject()
{
    // dx will represent the width and the height of the spacing of each vector
    float dx = 1;

    // Below are the numbers of vertices
    // m is the vertices of each row; n is the columns
    DWORD m = 30;
    DWORD n = 30;

    // This gets the width of the entire land
    // 30 - 1 = 29 rows * 1 = 29 * 0.5 = 14.5
    float halfWidth = (n-1)*dx*0.5f;
    float halfDepth = (m-1)*dx*0.5f;

    float vertexsize = m * n;
    VertexPos vertices[80];

    for(DWORD i = 0; i < m; ++i)
    {
        // we start at the top of z
        float z = halfDepth - i*dx;
        for(DWORD j = 0; j < n; ++j)
        {
            // to the left of us
            float x = -halfWidth + j*dx;
            float y = 0.0f;
            vertices[i*n+j].pos = D3DXVECTOR3(x, y, z);
            vertices[i*n+j].color = D3DXVECTOR4(1.0f, 0.0f, 0.0f, 0.0f);
        }
    }

    int k = 0;
    DWORD indices[540];
    for(DWORD i = 0; i < n-1; ++i)
    {
        for(DWORD j = 0; j < n-1; ++j)
        {
            indices[k]     = (i * n) + j;
            indices[k + 1] = (i * n) + j + 1;
            indices[k + 2] = (i + 1) * n + j;
            indices[k + 3] = (i + 1) * n + j;
            indices[k + 4] = (i * n) + j + 1;
            indices[k + 5] = (i + 1) * n + j + 1;
            k += 6;
        }
    }

    // Create Layout
    D3D10_INPUT_ELEMENT_DESC layout[] =
    {
        {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D10_INPUT_PER_VERTEX_DATA, 0},
        {"COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0}
    };
    UINT numElements = (sizeof(layout)/sizeof(layout[0]));
    modelObject.numVertices = sizeof(vertices)/sizeof(VertexPos);

    // Create buffer desc
    D3D10_BUFFER_DESC bufferDesc;
    bufferDesc.Usage = D3D10_USAGE_DEFAULT;
    bufferDesc.ByteWidth = sizeof(VertexPos) * modelObject.numVertices;
    bufferDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER;
    bufferDesc.CPUAccessFlags = 0;
    bufferDesc.MiscFlags = 0;

    D3D10_SUBRESOURCE_DATA initData;
    initData.pSysMem = vertices;

    // Create the buffer
    HRESULT hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pVertexBuffer);
    if(FAILED(hr)) return false;

    modelObject.numIndices = sizeof(indices)/sizeof(DWORD);
    bufferDesc.ByteWidth = sizeof(DWORD) * modelObject.numIndices;
    bufferDesc.BindFlags = D3D10_BIND_INDEX_BUFFER;
    initData.pSysMem = indices;

    hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pIndicesBuffer);
    if(FAILED(hr)) return false;

    /////////////////////////////////////////////////////////////////////////////
    // Set up fx files
    LPCWSTR effectFilename = L"effect.fx";
    modelObject.pEffect = NULL;

    hr = D3DX10CreateEffectFromFile(effectFilename,
                                    NULL, NULL,
                                    "fx_4_0",
                                    D3D10_SHADER_ENABLE_STRICTNESS,
                                    0,
                                    mpD3DDevice,
                                    NULL, NULL,
                                    &modelObject.pEffect,
                                    NULL, NULL);
    if(FAILED(hr)) return false;

    pProjectionMatrixVariable = modelObject.pEffect->GetVariableByName("Projection")->AsMatrix();

    // Don't sweat the technique. Get it!
    LPCSTR effectTechniqueName = "Render";
    modelObject.pTechnique = modelObject.pEffect->GetTechniqueByName(effectTechniqueName);
    if(modelObject.pTechnique == NULL) return false;

    // Create Vertex layout
    D3D10_PASS_DESC passDesc;
    modelObject.pTechnique->GetPassByIndex(0)->GetDesc(&passDesc);
    hr = mpD3DDevice->CreateInputLayout(layout, numElements,
                                        passDesc.pIAInputSignature,
                                        passDesc.IAInputSignatureSize,
                                        &modelObject.pVertexLayout);
    if(FAILED(hr)) return false;

    return true;
}
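One likely cause of the access violation: the loops above write m * n = 900 vertices and (n - 1) * (n - 1) * 6 = 5046 indices, but the arrays are declared as vertices[80] and indices[540], so the marked assignment writes far past the end of the array. A minimal sketch of one possible fix, sizing the CPU-side buffers from m and n; the use of std::vector here is an assumption, not part of the original code:

    // requires <vector>
    std::vector<VertexPos> vertices(m * n);                  // 30 * 30 = 900 vertices
    std::vector<DWORD>     indices((n - 1) * (n - 1) * 6);   // 29 * 29 * 6 = 5046 indices

    // ... fill vertices[i*n + j] and indices[k] exactly as in the loops above ...

    modelObject.numVertices = (UINT)vertices.size();   // instead of sizeof(vertices)/sizeof(VertexPos)
    modelObject.numIndices  = (UINT)indices.size();    // instead of sizeof(indices)/sizeof(DWORD)

    bufferDesc.ByteWidth = sizeof(VertexPos) * modelObject.numVertices;
    initData.pSysMem     = &vertices[0];                // the vector's contiguous storage

With the buffers sized this way, the counts have to come from the vectors rather than from sizeof tricks, since sizeof no longer reflects the element count.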

    Read the article

  • Create a class that inherits DrawableGameComponent in XNA as a CLASS with custom functions

    - by user3675013
    using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Media; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Content; namespace TileEngine { class Renderer : DrawableGameComponent { public Renderer(Game game) : base(game) { } SpriteBatch spriteBatch ; protected override void LoadContent() { base.LoadContent(); } public override void Draw(GameTime gameTime) { base.Draw(gameTime); } public override void Update(GameTime gameTime) { base.Update(gameTime); } public override void Initialize() { base.Initialize(); } public RenderTarget2D new_texture(int width, int height) { Texture2D TEX = new Texture2D(GraphicsDevice, width, height); //create the texture to render to RenderTarget2D Mine = new RenderTarget2D(GraphicsDevice, width, height); GraphicsDevice.SetRenderTarget(Mine); //set the render device to the reference provided //maybe base.draw can be used with spritebatch. Idk. We'll see if the order of operation //works out. Wish I could call base.draw here. return Mine; //I'm hoping that this returns the same instance and not a copy. } public void draw_texture(int width, int height, RenderTarget2D Mine) { GraphicsDevice.SetRenderTarget(null); //Set the renderer to render to the backbuffer again Rectangle drawrect = new Rectangle(0, 0, width, height); //Set the rendering size to what we want spriteBatch.Begin(); //This uses spritebatch to draw the texture directly to the screen spriteBatch.Draw(Mine, drawrect, Color.White); //This uses the color white spriteBatch.End(); //ends the spritebatch //Call base.draw after this since it doesn't seem to recognize inside the function //maybe base.draw can be used with spritebatch. Idk. We'll see if the order of operation //works out. Wish I could call base.draw here. } } } I solved a previous issue where I wasn't allowed to access GraphicsDevice outside the main Default 'main' class Ie "Game" or "Game1" etc. Now I have a new issue. FYi no one told me that it would be possible to use GraphicsDevice References to cause it to not be null by using the drawable class. (hopefully after this last bug is solved it doesn't still return null) Anyways at present the problem is that I can't seem to get it to initialize as an instance in my main program. Ie Renderer tileClipping; and I'm unable to use it such as it is to be noted i haven't even gotten to testing these two steps below but before it compiled but when those functions of this class were called it complained that it can't render to a null device. Which meant that the device wasn't being initialized. I had no idea why. It took me hours to google this. I finally figured out the words I needed.. which were "do my rendering in XNA in a seperate class" now I haven't used the addcomponent function because I don't want it to only run these functions automatically and I want to be able to call the custom ones. In a nutshell what I want is: *access to rendering targets and graphics device OUTSIDE default class *passing of Rendertarget2D (which contain textures and textures should automatically be passed with a rendering target? ) *the device should be passed to this function as well OR the device should be passed to this function as a byproduct of passing the rendertarget (which is automatically associated with the render device it was given originally) *I'm assuming I'm dealing with abstracted pointers here so when I pass a class object or instance, I should be recieving the SAME object , I referenced, and not a copy that has only the lifespan of the function running. 
*the purpose for all these options: I want to initialize new 2d textures on the fly to customize tileclipping and even the X , y Offsets of where a WHOLE texture will be rendered, and the X and Y offsets of where tiles will be rendered ON that surface. This is why. And I'll be doing region based lighting effects per tile or even per 8X8 pixel spaces.. we'll see I'll also be doing sprite rotations on the whole texture then copying it again to a circular masked texture, and then doing a second copy for only solid tiles for masked rotated collisions on sprites. I'll be checking the masked pixels for my collision, and using raycasting possibly to check for collisions on those areas. The sprite will stay in the center, when this rotation happens. Here is a detailed diagram: http://i.stack.imgur.com/INf9K.gif I'll be using texture2D for steps 4-6 I suppose for steps 1 as well. Now ontop of that, the clipping size (IE the sqaure rendered) will be able to be shrunk or increased, on a per frame basis Therefore I can't use the same static size for my main texture2d and I can't use just the backbuffer Or we get the annoying flicker. Also I will have multiple instances of the renderer class so that I can freely pass textures around as if they are playing cards (in a sense) layering them ontop of eachother, cropping them how i want and such. and then using spritebatch to simply draw them at the locations I want. Hopefully this makes sense, and yes I will be planning on using alpha blending but only after all tiles have been drawn.. The masked collision is important and Yes I am avoiding using math on the tile rendering and instead resorting to image manipulation in video memory which is WHY I need this to work the way I'm intending it to work and not in the default way that XNA seems to handle graphics. Thanks to anyone willing to help. I hate the code form offered, because then I have to rely on static presence of an update function. What if I want to kill that update function or that object, but have it in memory, but just have it temporarily inactive? I'm making the assumption here the update function of one of these gamecomponents is automatic ? Anyways this is as detailed as I can make this post hopefully someone can help me solve the issue. Instead of tell me "derrr don't do it this wayyy" which is what a few people told me (but they don't understand the actual goal I have in mind) I'm trying to create basically a library where I can copy images freely no matter the size, i just have to specify the size in the function then as long as a reference to that object exists it should be kept alive? right? :/ anyways.. Anything else? I Don't know. I understand object oriented coding but I don't understand this XNA It's beggining to feel impossible to do anything custom in it without putting ALL my rendering code into the draw function of the main class tileClipping.new_texture(GraphicsDevice, width, height) tileClipping.Draw_texture(...)
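Two details in the setup above tend to produce exactly the null-device symptom described: the component is never added to Game.Components, so Initialize and LoadContent never run and GraphicsDevice stays null, and the spriteBatch field is never created, so draw_texture would fail even with a valid device. A minimal sketch of the wiring, assuming a standard Game1 class alongside the Renderer above (the 256x256 sizes are placeholders):

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    namespace TileEngine
    {
        public class Game1 : Game
        {
            GraphicsDeviceManager graphics;
            Renderer tileClipping;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                tileClipping = new Renderer(this);
                Components.Add(tileClipping); // without this, Initialize/LoadContent never run
                                              // and the component's GraphicsDevice stays null
            }

            protected override void Draw(GameTime gameTime)
            {
                // The custom methods can still be called by hand whenever needed.
                RenderTarget2D target = tileClipping.new_texture(256, 256);
                tileClipping.draw_texture(256, 256, target);
                base.Draw(gameTime);
            }
        }
    }

    // In Renderer.LoadContent(), the SpriteBatch used by draw_texture also needs creating:
    //     spriteBatch = new SpriteBatch(GraphicsDevice);

To make a registered component temporarily inactive without removing it, DrawableGameComponent already exposes the Enabled and Visible properties, which switch its automatic Update and Draw calls off while keeping the object alive.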

    Read the article

  • value types in the vm

    - by john.rose
    Or, enduring values for a changing world.

Introduction

A value type is a data type which, generally speaking, is designed for being passed by value in and out of methods, and stored by value in data structures. The only value types which the Java language directly supports are the eight primitive types. Java indirectly and approximately supports value types, if they are implemented in terms of classes. For example, both Integer and String may be viewed as value types, especially if their usage is restricted to avoid operations appropriate to Object. In this note, we propose a definition of value types in terms of a design pattern for Java classes, accompanied by a set of usage restrictions. We also sketch the relation of such value types to tuple types (which are a JVM-level notion), and point out JVM optimizations that can apply to value types.

This note is a thought experiment to extend the JVM's performance model in support of value types. The demonstration has two phases. Initially the extension can simply use design patterns, within the current bytecode architecture, and in today's Java language. But if the performance model is to be realized in practice, it will probably require new JVM bytecode features, changes to the Java language, or both. We will look at a few possibilities for these new features.

An Axiom of Value

In the context of the JVM, a value type is a data type equipped with construction, assignment, and equality operations, and a set of typed components, such that, whenever two variables of the value type produce equal corresponding values for their components, the values of the two variables cannot be distinguished by any JVM operation. Here are some corollaries:

A value type is immutable, since otherwise a copy could be constructed and the original could be modified in one of its components, allowing the copies to be distinguished.
Changing the component of a value type requires construction of a new value.
The equals and hashCode operations are strictly component-wise.
If a value type is represented by a JVM reference, that reference cannot be successfully synchronized on, and cannot be usefully compared for reference equality.

A value type can be viewed in terms of what it doesn't do. We can say that a value type omits all value-unsafe operations, which could violate the constraints on value types.
These operations, which are ordinarily allowed for Java object types, are pointer equality comparison (the acmp instruction), synchronization (the monitor instructions), all the wait and notify methods of class Object, and non-trivial finalize methods. The clone method is also value-unsafe, although for value types it could be treated as the identity function. Finally, and most importantly, any side effect on an object (however visible) also counts as an value-unsafe operation. A value type may have methods, but such methods must not change the components of the value. It is reasonable and useful to define methods like toString, equals, and hashCode on value types, and also methods which are specifically valuable to users of the value type. Representations of Value Value types have two natural representations in the JVM, unboxed and boxed. An unboxed value consists of the components, as simple variables. For example, the complex number x=(1+2i), in rectangular coordinate form, may be represented in unboxed form by the following pair of variables: /*Complex x = Complex.valueOf(1.0, 2.0):*/ double x_re = 1.0, x_im = 2.0; These variables might be locals, parameters, or fields. Their association as components of a single value is not defined to the JVM. Here is a sample computation which computes the norm of the difference between two complex numbers: double distance(/*Complex x:*/ double x_re, double x_im,         /*Complex y:*/ double y_re, double y_im) {     /*Complex z = x.minus(y):*/     double z_re = x_re - y_re, z_im = x_im - y_im;     /*return z.abs():*/     return Math.sqrt(z_re*z_re + z_im*z_im); } A boxed representation groups component values under a single object reference. The reference is to a ‘wrapper class’ that carries the component values in its fields. (A primitive type can naturally be equated with a trivial value type with just one component of that type. In that view, the wrapper class Integer can serve as a boxed representation of value type int.) The unboxed representation of complex numbers is practical for many uses, but it fails to cover several major use cases: return values, array elements, and generic APIs. The two components of a complex number cannot be directly returned from a Java function, since Java does not support multiple return values. The same story applies to array elements: Java has no ’array of structs’ feature. (Double-length arrays are a possible workaround for complex numbers, but not for value types with heterogeneous components.) By generic APIs I mean both those which use generic types, like Arrays.asList and those which have special case support for primitive types, like String.valueOf and PrintStream.println. Those APIs do not support unboxed values, and offer some problems to boxed values. Any ’real’ JVM type should have a story for returns, arrays, and API interoperability. The basic problem here is that value types fall between primitive types and object types. Value types are clearly more complex than primitive types, and object types are slightly too complicated. Objects are a little bit dangerous to use as value carriers, since object references can be compared for pointer equality, and can be synchronized on. Also, as many Java programmers have observed, there is often a performance cost to using wrapper objects, even on modern JVMs. Even so, wrapper classes are a good starting point for talking about value types. 
If there were a set of structural rules and restrictions which would prevent value-unsafe operations on value types, wrapper classes would provide a good notation for defining value types. This note attempts to define such rules and restrictions. Let’s Start Coding Now it is time to look at some real code. Here is a definition, written in Java, of a complex number value type. @ValueSafe public final class Complex implements java.io.Serializable {     // immutable component structure:     public final double re, im;     private Complex(double re, double im) {         this.re = re; this.im = im;     }     // interoperability methods:     public String toString() { return "Complex("+re+","+im+")"; }     public List<Double> asList() { return Arrays.asList(re, im); }     public boolean equals(Complex c) {         return re == c.re && im == c.im;     }     public boolean equals(@ValueSafe Object x) {         return x instanceof Complex && equals((Complex) x);     }     public int hashCode() {         return 31*Double.valueOf(re).hashCode()                 + Double.valueOf(im).hashCode();     }     // factory methods:     public static Complex valueOf(double re, double im) {         return new Complex(re, im);     }     public Complex changeRe(double re2) { return valueOf(re2, im); }     public Complex changeIm(double im2) { return valueOf(re, im2); }     public static Complex cast(@ValueSafe Object x) {         return x == null ? ZERO : (Complex) x;     }     // utility methods and constants:     public Complex plus(Complex c)  { return new Complex(re+c.re, im+c.im); }     public Complex minus(Complex c) { return new Complex(re-c.re, im-c.im); }     public double abs() { return Math.sqrt(re*re + im*im); }     public static final Complex PI = valueOf(Math.PI, 0.0);     public static final Complex ZERO = valueOf(0.0, 0.0); } This is not a minimal definition, because it includes some utility methods and other optional parts.  The essential elements are as follows: The class is marked as a value type with an annotation. The class is final, because it does not make sense to create subclasses of value types. The fields of the class are all non-private and final.  (I.e., the type is immutable and structurally transparent.) From the supertype Object, all public non-final methods are overridden. The constructor is private. Beyond these bare essentials, we can observe the following features in this example, which are likely to be typical of all value types: One or more factory methods are responsible for value creation, including a component-wise valueOf method. There are utility methods for complex arithmetic and instance creation, such as plus and changeIm. There are static utility constants, such as PI. The type is serializable, using the default mechanisms. There are methods for converting to and from dynamically typed references, such as asList and cast. The Rules In order to use value types properly, the programmer must avoid value-unsafe operations.  A helpful Java compiler should issue errors (or at least warnings) for code which provably applies value-unsafe operations, and should issue warnings for code which might be correct but does not provably avoid value-unsafe operations.  No such compilers exist today, but to simplify our account here, we will pretend that they do exist. A value-safe type is any class, interface, or type parameter marked with the @ValueSafe annotation, or any subtype of a value-safe type.  If a value-safe class is marked final, it is in fact a value type.  
All other value-safe classes must be abstract.  The non-static fields of a value class must be non-public and final, and all its constructors must be private. Under the above rules, a standard interface could be helpful to define value types like Complex.  Here is an example: @ValueSafe public interface ValueType extends java.io.Serializable {     // All methods listed here must get redefined.     // Definitions must be value-safe, which means     // they may depend on component values only.     List<? extends Object> asList();     int hashCode();     boolean equals(@ValueSafe Object c);     String toString(); } //@ValueSafe inherited from supertype: public final class Complex implements ValueType { … The main advantage of such a conventional interface is that (unlike an annotation) it is reified in the runtime type system.  It could appear as an element type or parameter bound, for facilities which are designed to work on value types only.  More broadly, it might assist the JVM to perform dynamic enforcement of the rules for value types. Besides types, the annotation @ValueSafe can mark fields, parameters, local variables, and methods.  (This is redundant when the type is also value-safe, but may be useful when the type is Object or another supertype of a value type.)  Working forward from these annotations, an expression E is defined as value-safe if it satisfies one or more of the following: The type of E is a value-safe type. E names a field, parameter, or local variable whose declaration is marked @ValueSafe. E is a call to a method whose declaration is marked @ValueSafe. E is an assignment to a value-safe variable, field reference, or array reference. E is a cast to a value-safe type from a value-safe expression. E is a conditional expression E0 ? E1 : E2, and both E1 and E2 are value-safe. Assignments to value-safe expressions and initializations of value-safe names must take their values from value-safe expressions. A value-safe expression may not be the subject of a value-unsafe operation.  In particular, it cannot be synchronized on, nor can it be compared with the “==” operator, not even with a null or with another value-safe type. In a program where all of these rules are followed, no value-type value will be subject to a value-unsafe operation.  Thus, the prime axiom of value types will be satisfied, that no two value type will be distinguishable as long as their component values are equal. 
More Code

To illustrate these rules, here are some usage examples for Complex:

Complex pi = Complex.valueOf(Math.PI, 0);
Complex zero = pi.changeRe(0);  //zero = pi; zero.re = 0;
ValueType vtype = pi;
@SuppressWarnings("value-unsafe")
  Object obj = pi;
@ValueSafe Object obj2 = pi;
obj2 = new Object();  // ok
List<Complex> clist = new ArrayList<Complex>();
clist.add(pi);  // (ok assuming List.add param is @ValueSafe)
List<ValueType> vlist = new ArrayList<ValueType>();
vlist.add(pi);  // (ok)
List<Object> olist = new ArrayList<Object>();
olist.add(pi);  // warning: "value-unsafe"
boolean z = pi.equals(zero);
boolean z1 = (pi == zero);  // error: reference comparison on value type
boolean z2 = (pi == null);  // error: reference comparison on value type
boolean z3 = (pi == obj2);  // error: reference comparison on value type
synchronized (pi) { }  // error: synch of value, unpredictable result
synchronized (obj2) { }  // unpredictable result
Complex qq = pi;
qq = null;  // possible NPE; warning: "null-unsafe"
qq = (Complex) obj;  // warning: "null-unsafe"
qq = Complex.cast(obj);  // OK
@SuppressWarnings("null-unsafe")
  Complex empty = null;  // possible NPE
qq = empty;  // possible NPE (null pollution)

The Payoffs

It follows from this that either the JVM or the Java compiler can replace boxed value-type values with unboxed ones, without affecting normal computations. Fields and variables of value types can be split into their unboxed components. Non-static methods on value types can be transformed into static methods which take the components as value parameters.

Some common questions arise around this point in any discussion of value types. Why burden the programmer with all these extra rules? Why not detect programs automagically and perform unboxing transparently? The answer is that it is easy to break the rules accidentally unless they are agreed to by the programmer and enforced. Automatic unboxing optimizations are a tantalizing but (so far) unreachable ideal. In the current state of the art, it is possible to exhibit benchmarks in which automatic unboxing provides the desired effects, but it is not possible to provide a JVM with a performance model that assures the programmer when unboxing will occur. This is why I'm writing this note, to enlist help from, and provide assurances to, the programmer. Basically, I'm shooting for a good set of user-supplied "pragmas" to frame the desired optimization.

Again, the important thing is that the unboxing must be done reliably, or else programmers will have no reason to work with the extra complexity of the value-safety rules. There must be a reasonably stable performance model, wherein using a value type has approximately the same performance characteristics as writing the unboxed components as separate Java variables.

There are some rough corners to the present scheme. Since Java fields and array elements are initialized to null, value-type computations which incorporate uninitialized variables can produce null pointer exceptions. One workaround for this is to require such variables to be null-tested, and the result replaced with a suitable all-zero value of the value type. That is what the "cast" method does above.

Generically typed APIs like List<T> will continue to manipulate boxed values always, at least until we figure out how to do reification of generic type instances. Use of such APIs will elicit warnings until their type parameters (and/or relevant members) are annotated or typed as value-safe.
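To make the transformation mentioned under "The Payoffs" concrete, here is one possible shape it could take for the Complex example, written out by hand; the class and method names are illustrative, not a specified JVM output:

    final class ComplexOps {
        // Complex.abs() rewritten as a static method over the unboxed components,
        // in the spirit of the distance() example earlier in the note.
        static double abs(/*Complex c:*/ double c_re, double c_im) {
            return Math.sqrt(c_re * c_re + c_im * c_im);
        }

        // Complex.plus() returns a Complex, so its unboxed form needs one method
        // per result component until the JVM grows multiple return values.
        static double plusRe(double x_re, double x_im, double y_re, double y_im) { return x_re + y_re; }
        static double plusIm(double x_re, double x_im, double y_re, double y_im) { return x_im + y_im; }
    }

A caller that keeps its complex numbers as pairs of doubles never allocates a Complex box at all, which is exactly the performance characteristic the note asks the JVM to guarantee.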
Retrofitting List<T> is likely to expose flaws in the present scheme, which we will need to engineer around.  Here are a couple of first approaches: public interface java.util.List<@ValueSafe T> extends Collection<T> { … public interface java.util.List<T extends Object|ValueType> extends Collection<T> { … (The second approach would require disjunctive types, in which value-safety is “contagious” from the constituent types.) With more transformations, the return value types of methods can also be unboxed.  This may require significant bytecode-level transformations, and would work best in the presence of a bytecode representation for multiple value groups, which I have proposed elsewhere under the title “Tuples in the VM”. But for starters, the JVM can apply this transformation under the covers, to internally compiled methods.  This would give a way to express multiple return values and structured return values, which is a significant pain-point for Java programmers, especially those who work with low-level structure types favored by modern vector and graphics processors.  The lack of multiple return values has a strong distorting effect on many Java APIs. Even if the JVM fails to unbox a value, there is still potential benefit to the value type.  Clustered computing systems something have copy operations (serialization or something similar) which apply implicitly to command operands.  When copying JVM objects, it is extremely helpful to know when an object’s identity is important or not.  If an object reference is a copied operand, the system may have to create a proxy handle which points back to the original object, so that side effects are visible.  Proxies must be managed carefully, and this can be expensive.  On the other hand, value types are exactly those types which a JVM can “copy and forget” with no downside. Array types are crucial to bulk data interfaces.  (As data sizes and rates increase, bulk data becomes more important than scalar data, so arrays are definitely accompanying us into the future of computing.)  Value types are very helpful for adding structure to bulk data, so a successful value type mechanism will make it easier for us to express richer forms of bulk data. Unboxing arrays (i.e., arrays containing unboxed values) will provide better cache and memory density, and more direct data movement within clustered or heterogeneous computing systems.  They require the deepest transformations, relative to today’s JVM.  There is an impedance mismatch between value-type arrays and Java’s covariant array typing, so compromises will need to be struck with existing Java semantics.  It is probably worth the effort, since arrays of unboxed value types are inherently more memory-efficient than standard Java arrays, which rely on dependent pointer chains. It may be sufficient to extend the “value-safe” concept to array declarations, and allow low-level transformations to change value-safe array declarations from the standard boxed form into an unboxed tuple-based form.  Such value-safe arrays would not be convertible to Object[] arrays.  Certain connection points, such as Arrays.copyOf and System.arraycopy might need additional input/output combinations, to allow smooth conversion between arrays with boxed and unboxed elements. Alternatively, the correct solution may have to wait until we have enough reification of generic types, and enough operator overloading, to enable an overhaul of Java arrays. 
Implicit Method Definitions The example of class Complex above may be unattractively complex.  I believe most or all of the elements of the example class are required by the logic of value types. If this is true, a programmer who writes a value type will have to write lots of error-prone boilerplate code.  On the other hand, I think nearly all of the code (except for the domain-specific parts like plus and minus) can be implicitly generated. Java has a rule for implicitly defining a class’s constructor, if no it defines no constructors explicitly.  Likewise, there are rules for providing default access modifiers for interface members.  Because of the highly regular structure of value types, it might be reasonable to perform similar implicit transformations on value types.  Here’s an example of a “highly implicit” definition of a complex number type: public class Complex implements ValueType {  // implicitly final     public double re, im;  // implicitly public final     //implicit methods are defined elementwise from te fields:     //  toString, asList, equals(2), hashCode, valueOf, cast     //optionally, explicit methods (plus, abs, etc.) would go here } In other words, with the right defaults, a simple value type definition can be a one-liner.  The observant reader will have noticed the similarities (and suitable differences) between the explicit methods above and the corresponding methods for List<T>. Another way to abbreviate such a class would be to make an annotation the primary trigger of the functionality, and to add the interface(s) implicitly: public @ValueType class Complex { … // implicitly final, implements ValueType (But to me it seems better to communicate the “magic” via an interface, even if it is rooted in an annotation.) Implicitly Defined Value Types So far we have been working with nominal value types, which is to say that the sequence of typed components is associated with a name and additional methods that convey the intention of the programmer.  A simple ordered pair of floating point numbers can be variously interpreted as (to name a few possibilities) a rectangular or polar complex number or Cartesian point.  The name and the methods convey the intended meaning. But what if we need a truly simple ordered pair of floating point numbers, without any further conceptual baggage?  Perhaps we are writing a method (like “divideAndRemainder”) which naturally returns a pair of numbers instead of a single number.  Wrapping the pair of numbers in a nominal type (like “QuotientAndRemainder”) makes as little sense as wrapping a single return value in a nominal type (like “Quotient”).  What we need here are structural value types commonly known as tuples. For the present discussion, let us assign a conventional, JVM-friendly name to tuples, roughly as follows: public class java.lang.tuple.$DD extends java.lang.tuple.Tuple {      double $1, $2; } Here the component names are fixed and all the required methods are defined implicitly.  The supertype is an abstract class which has suitable shared declarations.  The name itself mentions a JVM-style method parameter descriptor, which may be “cracked” to determine the number and types of the component fields. The odd thing about such a tuple type (and structural types in general) is it must be instantiated lazily, in response to linkage requests from one or more classes that need it.  
The JVM and/or its class loaders must be prepared to spin a tuple type on demand, given a simple name reference, $xyz, where the xyz is cracked into a series of component types.  (Specifics of naming and name mangling need some tasteful engineering.) Tuples also seem to demand, even more than nominal types, some support from the language.  (This is probably because notations for non-nominal types work best as combinations of punctuation and type names, rather than named constructors like Function3 or Tuple2.)  At a minimum, languages with tuples usually (I think) have some sort of simple bracket notation for creating tuples, and a corresponding pattern-matching syntax (or “destructuring bind”) for taking tuples apart, at least when they are parameter lists.  Designing such a syntax is no simple thing, because it ought to play well with nominal value types, and also with pre-existing Java features, such as method parameter lists, implicit conversions, generic types, and reflection.  That is a task for another day. Other Use Cases Besides complex numbers and simple tuples there are many use cases for value types.  Many tuple-like types have natural value-type representations. These include rational numbers, point locations and pixel colors, and various kinds of dates and addresses. Other types have a variable-length ‘tail’ of internal values. The most common example of this is String, which is (mathematically) a sequence of UTF-16 character values. Similarly, bit vectors, multiple-precision numbers, and polynomials are composed of sequences of values. Such types include, in their representation, a reference to a variable-sized data structure (often an array) which (somehow) represents the sequence of values. The value type may also include ’header’ information. Variable-sized values often have a length distribution which favors short lengths. In that case, the design of the value type can make the first few values in the sequence be direct ’header’ fields of the value type. In the common case where the header is enough to represent the whole value, the tail can be a shared null value, or even just a null reference. Note that the tail need not be an immutable object, as long as the header type encapsulates it well enough. This is the case with String, where the tail is a mutable (but never mutated) character array. Field types and their order must be a globally visible part of the API.  The structure of the value type must be transparent enough to have a globally consistent unboxed representation, so that all callers and callees agree about the type and order of components  that appear as parameters, return types, and array elements.  This is a trade-off between efficiency and encapsulation, which is forced on us when we remove an indirection enjoyed by boxed representations.  A JVM-only transformation would not care about such visibility, but a bytecode transformation would need to take care that (say) the components of complex numbers would not get swapped after a redefinition of Complex and a partial recompile.  Perhaps constant pool references to value types need to declare the field order as assumed by each API user. This brings up the delicate status of private fields in a value type.  It must always be possible to load, store, and copy value types as coordinated groups, and the JVM performs those movements by moving individual scalar values between locals and stack.  
If a component field is not public, what is to prevent hostile code from plucking it out of the tuple using a rogue aload or astore instruction? Nothing but the verifier, so we may need to give it more smarts, so that it treats value types as inseparable groups of stack slots or locals (something like long or double). My initial thought was to make the fields always public, which would make the security problem moot. But public is not always the right answer; consider the case of String, where the underlying mutable character array must be encapsulated to prevent security holes. I believe we can win back both sides of the tradeoff, by training the verifier never to split up the components in an unboxed value. Just as the verifier encapsulates the two halves of a 64-bit primitive, it can encapsulate the header and body of an unboxed String, so that no code other than that of class String itself can take apart the values.

Similar to String, we could build an efficient multi-precision decimal type along these lines:

public final class DecimalValue implements ValueType {
    protected final long header;
    protected final BigInteger digits;
    private DecimalValue(long header, BigInteger digits) {
        this.header = header; this.digits = digits;
    }
    public static DecimalValue valueOf(int value, int scale) {
        assert(scale >= 0);
        return new DecimalValue(((long)value << 32) + scale, null);
    }
    public static DecimalValue valueOf(long value, int scale) {
        if (value == (int) value)
            return valueOf((int)value, scale);
        return new DecimalValue(-scale, BigInteger.valueOf(value));
    }
}

Values of this type would be passed between methods as two machine words. Small values (those with a significand which fits into 32 bits) would be represented without any heap data at all, unless the DecimalValue itself were boxed. (Note the tension between encapsulation and unboxing in this case. It would be better if the header and digits fields were private, but depending on where the unboxing information must "leak", it is probably safer to make a public revelation of the internal structure.)

Note that, although an array of Complex can be faked with a double-length array of double, there is no easy way to fake an array of unboxed DecimalValues. (Either an array of boxed values or a transposed pair of homogeneous arrays would be reasonable fallbacks, in a current JVM.) Getting the full benefit of unboxing and arrays will require some new JVM magic.

Although the JVM emphasizes portability, system-dependent code will benefit from using machine-level types larger than 64 bits. For example, the back end of a linear algebra package might benefit from value types like Float4 which map to stock vector types. This is probably only worthwhile if the unboxing arrays can be packed with such values.

More Daydreams

A more finely-divided design for dynamic enforcement of value safety could feature separate marker interfaces for each invariant. An empty marker interface Unsynchronizable could cause suitable exceptions for monitor instructions on objects in marked classes. More radically, an Interchangeable marker interface could cause JVM primitives that are sensitive to object identity to raise exceptions; the strangest result would be that the acmp instruction would have to be specified as raising an exception.
@ValueSafe
public interface ValueType extends java.io.Serializable,
        Unsynchronizable, Interchangeable { …

public class Complex implements ValueType {
    // inherits Serializable, Unsynchronizable, Interchangeable, @ValueSafe
    …

It seems possible that Integer and the other wrapper types could be retro-fitted as value-safe types. This is a major change, since wrapper objects would be unsynchronizable and their references interchangeable. It is likely that code which violates value-safety for wrapper types exists but is uncommon. It is less plausible to retro-fit String, since the prominent operation String.intern is often used with value-unsafe code.

We should also reconsider the distinction between boxed and unboxed values in code. The design presented above obscures that distinction. As another thought experiment, we could imagine making a first-class distinction in the type system between boxed and unboxed representations. Since only primitive types are named with a lower-case initial letter, we could define that the capitalized version of a value type name always refers to the boxed representation, while the initial lower-case variant always refers to the unboxed one. For example:

complex pi = complex.valueOf(Math.PI, 0);
Complex boxPi = pi;  // convert to boxed
myList.add(boxPi);
complex z = myList.get(0);  // unbox

Such a convention could perhaps absorb the current difference between int and Integer, double and Double. It might also allow the programmer to express a helpful distinction among array types.

As said above, array types are crucial to bulk data interfaces, but are limited in the JVM. Extending arrays beyond the present limitations is worth thinking about; for example, the Maxine JVM implementation has a hybrid object/array type. Something like this which can also accommodate value type components seems worthwhile. On the other hand, does it make sense for value types to contain short arrays? And why should random-access arrays be the end of our design process, when bulk data is often sequentially accessed, and it might make sense to have heterogeneous streams of data as the natural "jumbo" data structure? These considerations must wait for another day and another note.

More Work

It seems to me that a good sequence for introducing such value types would be as follows: Add the value-safety restrictions to an experimental version of javac. Code some sample applications with value types, including Complex and DecimalValue. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so. Ensure the feasibility of the performance model for the sample applications. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes. A staggered roll-out like this would decouple language changes from bytecode changes, which is always a convenient thing.

A similar investigation should be applied (concurrently) to array types. In this case, it seems to me that the starting point is in the JVM: Add an experimental unboxing array data structure to a production JVM, perhaps along the lines of Maxine hybrids. No bytecode or language support is required at first; everything can be done with encapsulated unsafe operations and/or method handles. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so.
Ensure the feasibility of the performance model for the sample applications. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes. That's enough musing from me for now. Back to work!

    Read the article

  • Debian squeeze keyboard and touchpad not working / detected on laptop

    - by Esa
    They work before gdm3 starts. a connected mouse also stops working, but functions after removal and re-plug. no xorg.conf. log doesn't show any loading of drivers for kbd/touchpad [ 33.783] X.Org X Server 1.10.4 Release Date: 2011-08-19 [ 33.783] X Protocol Version 11, Revision 0 [ 33.783] Build Operating System: Linux 3.0.0-1-amd64 x86_64 Debian [ 33.783] Current Operating System: Linux sus 3.2.0-0.bpo.2-amd64 #1 SMP Sun Mar 25 10:33:35 UTC 2012 x86_64 [ 33.783] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-0.bpo.2-amd64 root=UUID=8686f840-d165-4d1e-b995-2ebbd94aa3d2 ro quiet [ 33.783] Build Date: 28 August 2011 09:39:43PM [ 33.783] xorg-server 2:1.10.4-1~bpo60+1 (Cyril Brulebois <[email protected]>) [ 33.783] Current version of pixman: 0.16.4 [ 33.783] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 33.783] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 33.783] (==) Log file: "/var/log/Xorg.0.log", Time: Wed Mar 28 09:34:04 2012 [ 33.837] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 33.936] (==) No Layout section. Using the first Screen section. [ 33.936] (==) No screen section available. Using defaults. [ 33.936] (**) |-->Screen "Default Screen Section" (0) [ 33.936] (**) | |-->Monitor "<default monitor>" [ 33.936] (==) No monitor specified for screen "Default Screen Section". Using a default monitor configuration. [ 33.936] (==) Automatically adding devices [ 33.936] (==) Automatically enabling devices [ 34.164] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 34.164] Entry deleted from font path. [ 34.226] (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/100dpi/:unscaled, /usr/share/fonts/X11/75dpi/:unscaled, /usr/share/fonts/X11/Type1, /usr/share/fonts/X11/100dpi, /usr/share/fonts/X11/75dpi, /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, built-ins [ 34.226] (==) ModulePath set to "/usr/lib/xorg/modules" [ 34.226] (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices. 
[ 34.226] (II) Loader magic: 0x7d3ae0 [ 34.226] (II) Module ABI versions: [ 34.226] X.Org ANSI C Emulation: 0.4 [ 34.226] X.Org Video Driver: 10.0 [ 34.226] X.Org XInput driver : 12.2 [ 34.226] X.Org Server Extension : 5.0 [ 34.227] (--) PCI:*(0:1:5:0) 1002:9712:103c:1661 rev 0, Mem @ 0xd0000000/268435456, 0xf1400000/65536, 0xf1300000/1048576, I/O @ 0x00008000/256 [ 34.227] (--) PCI: (0:2:0:0) 1002:6760:103c:1661 rev 0, Mem @ 0xe0000000/268435456, 0xf0300000/131072, I/O @ 0x00004000/256, BIOS @ 0x????????/131072 [ 34.227] (II) Open ACPI successful (/var/run/acpid.socket) [ 34.227] (II) LoadModule: "extmod" [ 34.249] (II) Loading /usr/lib/xorg/modules/extensions/libextmod.so [ 34.277] (II) Module extmod: vendor="X.Org Foundation" [ 34.277] compiled for 1.10.4, module version = 1.0.0 [ 34.277] Module class: X.Org Server Extension [ 34.277] ABI class: X.Org Server Extension, version 5.0 [ 34.277] (II) Loading extension SELinux [ 34.277] (II) Loading extension MIT-SCREEN-SAVER [ 34.277] (II) Loading extension XFree86-VidModeExtension [ 34.277] (II) Loading extension XFree86-DGA [ 34.277] (II) Loading extension DPMS [ 34.277] (II) Loading extension XVideo [ 34.277] (II) Loading extension XVideo-MotionCompensation [ 34.277] (II) Loading extension X-Resource [ 34.277] (II) LoadModule: "dbe" [ 34.277] (II) Loading /usr/lib/xorg/modules/extensions/libdbe.so [ 34.299] (II) Module dbe: vendor="X.Org Foundation" [ 34.299] compiled for 1.10.4, module version = 1.0.0 [ 34.299] Module class: X.Org Server Extension [ 34.299] ABI class: X.Org Server Extension, version 5.0 [ 34.299] (II) Loading extension DOUBLE-BUFFER [ 34.299] (II) LoadModule: "glx" [ 34.299] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so [ 34.477] (II) Module glx: vendor="X.Org Foundation" [ 34.477] compiled for 1.10.4, module version = 1.0.0 [ 34.477] ABI class: X.Org Server Extension, version 5.0 [ 34.477] (==) AIGLX enabled [ 34.477] (II) Loading extension GLX [ 34.477] (II) LoadModule: "record" [ 34.478] (II) Loading /usr/lib/xorg/modules/extensions/librecord.so [ 34.481] (II) Module record: vendor="X.Org Foundation" [ 34.481] compiled for 1.10.4, module version = 1.13.0 [ 34.481] Module class: X.Org Server Extension [ 34.481] ABI class: X.Org Server Extension, version 5.0 [ 34.481] (II) Loading extension RECORD [ 34.481] (II) LoadModule: "dri" [ 34.481] (II) Loading /usr/lib/xorg/modules/extensions/libdri.so [ 34.512] (II) Module dri: vendor="X.Org Foundation" [ 34.512] compiled for 1.10.4, module version = 1.0.0 [ 34.512] ABI class: X.Org Server Extension, version 5.0 [ 34.512] (II) Loading extension XFree86-DRI [ 34.512] (II) LoadModule: "dri2" [ 34.512] (II) Loading /usr/lib/xorg/modules/extensions/libdri2.so [ 34.515] (II) Module dri2: vendor="X.Org Foundation" [ 34.515] compiled for 1.10.4, module version = 1.2.0 [ 34.515] ABI class: X.Org Server Extension, version 5.0 [ 34.515] (II) Loading extension DRI2 [ 34.515] (==) Matched ati as autoconfigured driver 0 [ 34.515] (==) Matched vesa as autoconfigured driver 1 [ 34.515] (==) Matched fbdev as autoconfigured driver 2 [ 34.515] (==) Assigned the driver to the xf86ConfigLayout [ 34.515] (II) LoadModule: "ati" [ 34.706] (II) Loading /usr/lib/xorg/modules/drivers/ati_drv.so [ 34.724] (II) Module ati: vendor="X.Org Foundation" [ 34.724] compiled for 1.10.3, module version = 6.14.2 [ 34.724] Module class: X.Org Video Driver [ 34.724] ABI class: X.Org Video Driver, version 10.0 [ 34.724] (II) LoadModule: "radeon" [ 34.725] (II) Loading 
/usr/lib/xorg/modules/drivers/radeon_drv.so [ 34.923] (II) Module radeon: vendor="X.Org Foundation" [ 34.923] compiled for 1.10.3, module version = 6.14.2 [ 34.923] Module class: X.Org Video Driver [ 34.923] ABI class: X.Org Video Driver, version 10.0 [ 34.945] (II) LoadModule: "vesa" [ 34.945] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 34.988] (II) Module vesa: vendor="X.Org Foundation" [ 34.988] compiled for 1.10.3, module version = 2.3.0 [ 34.988] Module class: X.Org Video Driver [ 34.988] ABI class: X.Org Video Driver, version 10.0 [ 34.988] (II) LoadModule: "fbdev" [ 34.988] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 35.020] (II) Module fbdev: vendor="X.Org Foundation" [ 35.020] compiled for 1.10.3, module version = 0.4.2 [ 35.020] ABI class: X.Org Video Driver, version 10.0 [ 35.020] (II) RADEON: Driver for ATI Radeon chipsets: <snip> [ 35.023] (II) VESA: driver for VESA chipsets: vesa [ 35.023] (II) FBDEV: driver for framebuffer: fbdev [ 35.023] (++) using VT number 7 [ 35.033] (II) Loading /usr/lib/xorg/modules/drivers/radeon_drv.so [ 35.033] (II) [KMS] Kernel modesetting enabled. [ 35.033] (WW) Falling back to old probe method for vesa [ 35.034] (WW) Falling back to old probe method for fbdev [ 35.034] (II) Loading sub module "fbdevhw" [ 35.034] (II) LoadModule: "fbdevhw" [ 35.034] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so [ 35.185] (II) Module fbdevhw: vendor="X.Org Foundation" [ 35.185] compiled for 1.10.4, module version = 0.0.2 [ 35.185] ABI class: X.Org Video Driver, version 10.0 [ 35.288] (II) RADEON(0): Creating default Display subsection in Screen section "Default Screen Section" for depth/fbbpp 24/32 [ 35.288] (==) RADEON(0): Depth 24, (--) framebuffer bpp 32 [ 35.288] (II) RADEON(0): Pixel depth = 24 bits stored in 4 bytes (32 bpp pixmaps) [ 35.288] (==) RADEON(0): Default visual is TrueColor [ 35.288] (==) RADEON(0): RGB weight 888 [ 35.288] (II) RADEON(0): Using 8 bits per RGB (8 bit DAC) [ 35.288] (--) RADEON(0): Chipset: "ATI Mobility Radeon HD 4200" (ChipID = 0x9712) [ 35.288] (II) RADEON(0): PCI card detected [ 35.288] drmOpenDevice: node name is /dev/dri/card0 [ 35.288] drmOpenDevice: open result is 9, (OK) [ 35.288] drmOpenByBusid: Searching for BusID pci:0000:01:05.0 [ 35.288] drmOpenDevice: node name is /dev/dri/card0 [ 35.288] drmOpenDevice: open result is 9, (OK) [ 35.288] drmOpenByBusid: drmOpenMinor returns 9 [ 35.288] drmOpenByBusid: drmGetBusid reports pci:0000:01:05.0 [ 35.288] (II) Loading sub module "exa" [ 35.288] (II) LoadModule: "exa" [ 35.288] (II) Loading /usr/lib/xorg/modules/libexa.so [ 35.335] (II) Module exa: vendor="X.Org Foundation" [ 35.335] compiled for 1.10.4, module version = 2.5.0 [ 35.335] ABI class: X.Org Video Driver, version 10.0 [ 35.335] (II) RADEON(0): KMS Color Tiling: disabled [ 35.335] (II) RADEON(0): KMS Pageflipping: enabled [ 35.335] (II) RADEON(0): SwapBuffers wait for vsync: enabled [ 35.360] (II) RADEON(0): Output VGA-0 has no monitor section [ 35.360] (II) RADEON(0): Output LVDS has no monitor section [ 35.364] (II) RADEON(0): Output HDMI-0 has no monitor section [ 35.388] (II) RADEON(0): EDID for output VGA-0 [ 35.388] (II) RADEON(0): EDID for output LVDS [ 35.388] (II) RADEON(0): Manufacturer: LGD Model: 2ac Serial#: 0 [ 35.388] (II) RADEON(0): Year: 2010 Week: 0 [ 35.388] (II) RADEON(0): EDID Version: 1.3 [ 35.388] (II) RADEON(0): Digital Display Input [ 35.388] (II) RADEON(0): Max Image Size [cm]: horiz.: 34 vert.: 19 [ 35.388] (II) RADEON(0): Gamma: 2.20 [ 35.388] (II) RADEON(0): 
No DPMS capabilities specified
[ 35.388] (II) RADEON(0): Supported color encodings: RGB 4:4:4 YCrCb 4:4:4
[ 35.388] (II) RADEON(0): First detailed timing is preferred mode
[ 35.388] (II) RADEON(0): redX: 0.616 redY: 0.371 greenX: 0.355 greenY: 0.606
[ 35.388] (II) RADEON(0): blueX: 0.152 blueY: 0.100 whiteX: 0.313 whiteY: 0.329
[ 35.388] (II) RADEON(0): Manufacturer's mask: 0
[ 35.388] (II) RADEON(0): Supported detailed timing:
[ 35.388] (II) RADEON(0): clock: 69.3 MHz Image Size: 344 x 194 mm
[ 35.388] (II) RADEON(0): h_active: 1366 h_sync: 1398 h_sync_end 1430 h_blank_end 1486 h_border: 0
[ 35.388] (II) RADEON(0): v_active: 768 v_sync: 770 v_sync_end 774 v_blanking: 782 v_border: 0
[ 35.388] (II) RADEON(0): LG Display
[ 35.388] (II) RADEON(0): LP156WH2-TLQB
[ 35.388] (II) RADEON(0): EDID (in hex):
[ 35.388] (II) RADEON(0): 00ffffffffffff0030e4ac0200000000
[ 35.388] (II) RADEON(0): 00140103802213780ac1259d5f5b9b27
[ 35.388] (II) RADEON(0): 19505400000001010101010101010101
[ 35.388] (II) RADEON(0): 010101010101121b567850000e302020
[ 35.388] (II) RADEON(0): 240058c2100000190000000000000000
[ 35.388] (II) RADEON(0): 00000000000000000000000000fe004c
[ 35.388] (II) RADEON(0): 4720446973706c61790a2020000000fe
[ 35.388] (II) RADEON(0): 004c503135365748322d544c514200c1
[ 35.388] (II) RADEON(0): Printing probed modes for output LVDS
[ 35.388] (II) RADEON(0): Modeline "1366x768"x59.6 69.30 1366 1398 1430 1486 768 770 774 782 -hsync -vsync (46.6 kHz)
[ 35.388] (II) RADEON(0): Modeline "1280x720"x59.9 74.50 1280 1344 1472 1664 720 723 728 748 -hsync +vsync (44.8 kHz)
[ 35.388] (II) RADEON(0): Modeline "1152x768"x59.8 71.75 1152 1216 1328 1504 768 771 781 798 -hsync +vsync (47.7 kHz)
[ 35.388] (II) RADEON(0): Modeline "1024x768"x59.9 63.50 1024 1072 1176 1328 768 771 775 798 -hsync +vsync (47.8 kHz)
[ 35.388] (II) RADEON(0): Modeline "800x600"x59.9 38.25 800 832 912 1024 600 603 607 624 -hsync +vsync (37.4 kHz)
[ 35.388] (II) RADEON(0): Modeline "848x480"x59.7 31.50 848 872 952 1056 480 483 493 500 -hsync +vsync (29.8 kHz)
[ 35.388] (II) RADEON(0): Modeline "720x480"x59.7 26.75 720 744 808 896 480 483 493 500 -hsync +vsync (29.9 kHz)
[ 35.388] (II) RADEON(0): Modeline "640x480"x59.4 23.75 640 664 720 800 480 483 487 500 -hsync +vsync (29.7 kHz)
[ 35.392] (II) RADEON(0): EDID for output HDMI-0
[ 35.392] (II) RADEON(0): Output VGA-0 disconnected
[ 35.392] (II) RADEON(0): Output LVDS connected
[ 35.392] (II) RADEON(0): Output HDMI-0 disconnected
[ 35.392] (II) RADEON(0): Using exact sizes for initial modes
[ 35.392] (II) RADEON(0): Output LVDS using initial mode 1366x768
[ 35.392] (II) RADEON(0): Using default gamma of (1.0, 1.0, 1.0) unless otherwise stated.
[ 35.392] (II) RADEON(0): mem size init: gart size :1fdff000 vram size: s:10000000 visible:fba0000
[ 35.392] (II) RADEON(0): EXA: Driver will allow EXA pixmaps in VRAM
[ 35.392] (==) RADEON(0): DPI set to (96, 96)
[ 35.392] (II) Loading sub module "fb"
[ 35.392] (II) LoadModule: "fb"
[ 35.392] (II) Loading /usr/lib/xorg/modules/libfb.so
[ 35.492] (II) Module fb: vendor="X.Org Foundation"
[ 35.492] compiled for 1.10.4, module version = 1.0.0
[ 35.492] ABI class: X.Org ANSI C Emulation, version 0.4
[ 35.492] (II) Loading sub module "ramdac"
[ 35.492] (II) LoadModule: "ramdac"
[ 35.492] (II) Module "ramdac" already built-in
[ 35.492] (II) UnloadModule: "vesa"
[ 35.492] (II) Unloading vesa
[ 35.492] (II) UnloadModule: "fbdev"
[ 35.492] (II) Unloading fbdev
[ 35.492] (II) UnloadModule: "fbdevhw"
[ 35.492] (II) Unloading fbdevhw
[ 35.492] (--) Depth 24 pixmap format is 32 bpp
[ 35.492] (II) RADEON(0): [DRI2] Setup complete
[ 35.492] (II) RADEON(0): [DRI2] DRI driver: r600
[ 35.492] (II) RADEON(0): Front buffer size: 4224K
[ 35.492] (II) RADEON(0): VRAM usage limit set to 228096K
[ 35.615] (==) RADEON(0): Backing store disabled
[ 35.615] (II) RADEON(0): Direct rendering enabled
[ 35.658] (II) RADEON(0): Setting EXA maxPitchBytes
[ 35.658] (II) EXA(0): Driver allocated offscreen pixmaps
[ 35.658] (II) EXA(0): Driver registered support for the following operations:
[ 35.658] (II) Solid
[ 35.658] (II) Copy
[ 35.658] (II) Composite (RENDER acceleration)
[ 35.658] (II) UploadToScreen
[ 35.658] (II) DownloadFromScreen
[ 35.687] (II) RADEON(0): Acceleration enabled
[ 35.687] (==) RADEON(0): DPMS enabled
[ 35.687] (==) RADEON(0): Silken mouse enabled
[ 35.721] (II) RADEON(0): Set up textured video
[ 35.721] (II) RADEON(0): RandR 1.2 enabled, ignore the following RandR disabled message.
[ 35.721] (--) RandR disabled
[ 35.721] (II) Initializing built-in extension Generic Event Extension
[ 35.721] (II) Initializing built-in extension SHAPE
[ 35.721] (II) Initializing built-in extension MIT-SHM
[ 35.721] (II) Initializing built-in extension XInputExtension
[ 35.721] (II) Initializing built-in extension XTEST
[ 35.721] (II) Initializing built-in extension BIG-REQUESTS
[ 35.721] (II) Initializing built-in extension SYNC
[ 35.721] (II) Initializing built-in extension XKEYBOARD
[ 35.721] (II) Initializing built-in extension XC-MISC
[ 35.721] (II) Initializing built-in extension SECURITY
[ 35.721] (II) Initializing built-in extension XINERAMA
[ 35.721] (II) Initializing built-in extension XFIXES
[ 35.721] (II) Initializing built-in extension RENDER
[ 35.721] (II) Initializing built-in extension RANDR
[ 35.721] (II) Initializing built-in extension COMPOSITE
[ 35.721] (II) Initializing built-in extension DAMAGE
[ 35.721] (II) SELinux: Disabled on system
[ 35.982] (II) AIGLX: enabled GLX_MESA_copy_sub_buffer
[ 35.982] (II) AIGLX: enabled GLX_INTEL_swap_event
[ 35.982] (II) AIGLX: enabled GLX_SGI_swap_control and GLX_MESA_swap_control
[ 35.982] (II) AIGLX: enabled GLX_SGI_make_current_read
[ 35.982] (II) AIGLX: GLX_EXT_texture_from_pixmap backed by buffer objects
[ 35.982] (II) AIGLX: Loaded and initialized /usr/lib/dri/r600_dri.so
[ 35.982] (II) GLX: Initialized DRI2 GL provider for screen 0
[ 35.999] (II) RADEON(0): Setting screen physical size to 361 x 203
[ 43.896] (II) RADEON(0): EDID vendor "LGD", prod id 684
[ 43.896] (II) RADEON(0): Printing DDC gathered Modelines:
[ 43.896] (II) RADEON(0): Modeline "1366x768"x0.0 69.30 1366 1398 1430 1486 768 770 774 782 -hsync -vsync (46.6 kHz)
[ 43.924] (II) RADEON(0): EDID vendor "LGD", prod id 684
[ 43.924] (II) RADEON(0): Printing DDC gathered Modelines:
[ 43.924] (II) RADEON(0): Modeline "1366x768"x0.0 69.30 1366 1398 1430 1486 768 770 774 782 -hsync -vsync (46.6 kHz)
[ 43.988] (II) RADEON(0): EDID vendor "LGD", prod id 684
[ 43.988] (II) RADEON(0): Printing DDC gathered Modelines:
[ 43.988] (II) RADEON(0): Modeline "1366x768"x0.0 69.30 1366 1398 1430 1486 768 770 774 782 -hsync -vsync (46.6 kHz)
[ 67.375] (II) config/udev: Adding input device Logitech USB Optical Mouse (/dev/input/event1)
[ 67.376] (**) Logitech USB Optical Mouse: Applying InputClass "evdev pointer catchall"
[ 67.376] (II) LoadModule: "evdev"
[ 67.376] (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so
[ 67.392] (II) Module evdev: vendor="X.Org Foundation"
[ 67.392] compiled for 1.10.3, module version = 2.6.0
[ 67.392] Module class: X.Org XInput Driver
[ 67.392] ABI class: X.Org XInput driver, version 12.2
[ 67.392] (II) Using input driver 'evdev' for 'Logitech USB Optical Mouse'
[ 67.392] (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so
[ 67.392] (**) Logitech USB Optical Mouse: always reports core events
[ 67.392] (**) Logitech USB Optical Mouse: Device: "/dev/input/event1"
[ 67.392] (--) Logitech USB Optical Mouse: Found 12 mouse buttons
[ 67.392] (--) Logitech USB Optical Mouse: Found scroll wheel(s)
[ 67.392] (--) Logitech USB Optical Mouse: Found relative axes
[ 67.392] (--) Logitech USB Optical Mouse: Found x and y relative axes
[ 67.392] (II) Logitech USB Optical Mouse: Configuring as mouse
[ 67.392] (II) Logitech USB Optical Mouse: Adding scrollwheel support
[ 67.392] (**) Logitech USB Optical Mouse: YAxisMapping: buttons 4 and 5
[ 67.392] (**) Logitech USB Optical Mouse: EmulateWheelButton: 4, EmulateWheelInertia: 10, EmulateWheelTimeout: 200
[ 67.392] (**) Option "config_info" "udev:/sys/devices/pci0000:00/0000:00:13.0/usb5/5-1/5-1:1.0/input/input14/event1"
[ 67.392] (II) XINPUT: Adding extended input device "Logitech USB Optical Mouse" (type: MOUSE)
[ 67.392] (II) Logitech USB Optical Mouse: initialized for relative axes.
[ 67.392] (**) Logitech USB Optical Mouse: (accel) keeping acceleration scheme 1
[ 67.392] (**) Logitech USB Optical Mouse: (accel) acceleration profile 0
[ 67.392] (**) Logitech USB Optical Mouse: (accel) acceleration factor: 2.000
[ 67.392] (**) Logitech USB Optical Mouse: (accel) acceleration threshold: 4
[ 67.392] (II) config/udev: Adding input device Logitech USB Optical Mouse (/dev/input/mouse0)
[ 67.392] (II) No input driver/identifier specified (ignoring)
[ 78.692] (II) Logitech USB Optical Mouse: Close
[ 78.692] (II) UnloadModule: "evdev"
[ 78.692] (II) Unloading evdev
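A quick way to summarize a log like the one above is to pull out the "Output ... connected/disconnected" and "Modeline ..." entries rather than reading it end to end. The following is a minimal sketch, not part of the original post: the log path, the regular expressions, and the output format are assumptions based on the standard Xorg.0.log layout shown here.

#!/usr/bin/env python3
# Minimal sketch: list outputs and probed Modelines from an Xorg log.
# Assumption: the log lives at /var/log/Xorg.0.log (or is passed as argv[1])
# and uses the "[ timestamp] (II) DRIVER(0): ..." layout seen above.
import re
import sys

LOG_PATH = sys.argv[1] if len(sys.argv) > 1 else "/var/log/Xorg.0.log"

output_re = re.compile(r"Output (\S+) (connected|disconnected)")
modeline_re = re.compile(r'Modeline "(\d+x\d+)"x([\d.]+)\s+([\d.]+)')

outputs = {}      # e.g. {"LVDS": "connected", "HDMI-0": "disconnected"}
modelines = []    # e.g. [("1366x768", "59.6", "69.30"), ...]

with open(LOG_PATH, errors="replace") as log:
    for line in log:
        m = output_re.search(line)
        if m:
            outputs[m.group(1)] = m.group(2)   # last reported state wins
        m = modeline_re.search(line)
        if m:
            modelines.append(m.groups())

for name, state in sorted(outputs.items()):
    print(f"{name}: {state}")
for res, refresh, clock in modelines:
    print(f"  {res} @ {refresh} Hz (pixel clock {clock} MHz)")

Run against the log above, such a script would report LVDS as connected, VGA-0 and HDMI-0 as disconnected, and list the eight probed modes for the 1366x768 LG Display panel.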

