Search Results

Search found 8902 results on 357 pages for 'hardware virtualization'.


  • Matrix Multiplication with C++ AMP

    - by Daniel Moth
    As part of our API tour of C++ AMP, we looked recently at parallel_for_each. I ended that post by saying we would revisit parallel_for_each after introducing array and array_view. Now is the time, so this is part 2 of parallel_for_each, and also a post that brings together everything we've seen until now.

    The code for serial and accelerated

    Consider a naïve (or brute force) serial implementation of matrix multiplication:

       0: void MatrixMultiplySerial(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W)
       1: {
       2:   for (int row = 0; row < M; row++)
       3:   {
       4:     for (int col = 0; col < N; col++)
       5:     {
       6:       float sum = 0.0f;
       7:       for(int i = 0; i < W; i++)
       8:         sum += vA[row * W + i] * vB[i * N + col];
       9:       vC[row * N + col] = sum;
      10:     }
      11:   }
      12: }

    We notice that each iteration of the two outer loops is independent of the others and so can be parallelized. If in addition we have really large amounts of data, then this is a good candidate to offload to an accelerator. First, I'll just show you an example of what that code may look like with C++ AMP, and then we'll analyze it. It is assumed that you have included #include <amp.h> at the top of your file.

      13: void MatrixMultiplySimple(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W)
      14: {
      15:   concurrency::array_view<const float,2> a(M, W, vA);
      16:   concurrency::array_view<const float,2> b(W, N, vB);
      17:   concurrency::array_view<concurrency::writeonly<float>,2> c(M, N, vC);
      18:   concurrency::parallel_for_each(c.grid,
      19:     [=](concurrency::index<2> idx) restrict(direct3d) {
      20:       int row = idx[0]; int col = idx[1];
      21:       float sum = 0.0f;
      22:       for(int i = 0; i < W; i++)
      23:         sum += a(row, i) * b(i, col);
      24:       c[idx] = sum;
      25:     });
      26: }

    First a visual comparison, just for fun: the beginning and end are the same, i.e. lines 0, 1, 12 are identical to lines 13, 14, 26. The double nested loop (lines 2, 3, 4, 5 and 10, 11) has been transformed into a parallel_for_each call (18, 19, 20 and 25). The core algorithm (lines 6, 7, 8, 9) is essentially the same (lines 21, 22, 23, 24). We have extra lines in the C++ AMP version (15, 16, 17). Now let's dig in deeper.

    Using array_view and extent

    When we decided to convert this function to run on an accelerator, we knew we couldn't use the std::vector objects in the restrict(direct3d) function. So we had a choice of copying the data to the concurrency::array<T,N> object, or wrapping the vector container (and hence its data) with a concurrency::array_view<T,N> object from amp.h – here we used the latter (lines 15, 16, 17). Now we can access the same data through the array_view objects (a and b) instead of the vector objects (vA and vB), and the added benefit is that we can capture the array_view objects in the lambda (lines 19-25) that we pass to the parallel_for_each call (line 18), and the data will get copied on demand for us to the accelerator. Note that line 15 (and ditto for 16 and 17) could have been written as two lines instead of one:

      extent<2> e(M, W);
      array_view<const float, 2> a(e, vA);

    In other words, we could have explicitly created the extent object instead of letting the array_view create it for us under the covers through the constructor overload we chose. The benefit of the extent object in this instance is that we can express that the data is indeed two dimensional, i.e. a matrix. When we were using a vector object we could not do that, and instead we had to track the dimensions of the matrix via additional, unrelated variables (i.e. the integers M and W) – aren't you loving C++ AMP already?

    Note that the const before the float when creating a and b will result in the underlying data only being copied to the accelerator and not being copied back – a nice optimization. A similar thing is happening on line 17 when creating array_view c, where we have indicated that we do not need to copy the data to the accelerator, only copy it back.

    The kernel dispatch

    On line 18 we make the call to the C++ AMP entry point (parallel_for_each) to invoke our parallel loop or, as some may say, dispatch our kernel. The first argument we need to pass describes how many threads we want for this computation. For this algorithm we decided that we want exactly the same number of threads as the number of elements in the output matrix, i.e. in array_view c, which will eventually update the vector vC. So each thread will compute exactly one result. Since the elements in c are organized in a two-dimensional manner, we can organize our threads in a two-dimensional manner too. We don't have to think too much about how to create the first argument (a grid) since the array_view object helpfully exposes that as a property. Note that instead of c.grid we could have written grid<2>(c.extent) or grid<2>(extent<2>(M, N)) – the result is the same in that we have specified M*N threads to execute our lambda. The second argument is a restrict(direct3d) lambda that accepts an index object. Since we elected to use a two-dimensional extent as the first argument of parallel_for_each, the index will also be two-dimensional, and as covered in the previous posts it represents the thread ID, which in our case maps perfectly to the index of each element in the resulting array_view.

    The kernel itself

    The lambda body (lines 20-24), or as some may say, the kernel, is the code that will actually execute on the accelerator. It will be called by M*N threads, and we can use those threads to index into the two input array_views (a, b) and write results into the output array_view (c). The four lines (21-24) are essentially identical to the four lines of the serial algorithm (6-9). The only difference is how we index into a, b, c versus how we index into vA, vB, vC. The code we wrote with C++ AMP is much nicer in its indexing, because the dimensionality is a first-class concept, so you don't have to do funny arithmetic calculating the index of where the next row starts, which you have to do when working with vectors directly (since they store all the data in a flat manner). I skipped over describing line 20. Note that we didn't really need to read the two components of the index into temporary local variables. This mostly reflects my personal choice, in some algorithms, to break down the index into local variables with names that make sense for the algorithm, i.e. in this case row and col. In other cases it may be i,j,k or x,y,z, or M,N or whatever. Also note that we could have written line 24 as c(idx[0], idx[1]) = sum or c(row, col) = sum instead of the simpler c[idx] = sum.

    Targeting a specific accelerator

    Imagine that we had more than one hardware accelerator on a system and we wanted to pick a specific one to execute this parallel loop on. So there would be some code like this somewhere before line 18:

      vector<accelerator> accs = MyFunctionThatChoosesSuitableAccelerators();
      accelerator acc = accs[0];

    …and then we would modify line 18 so we would be calling another overload of parallel_for_each that accepts an accelerator_view as the first argument, so it would become:

      concurrency::parallel_for_each(acc.default_view, c.grid,

    …and the rest of your code remains the same… how simple is that?

    Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • CodePlex Daily Summary for Sunday, December 02, 2012

    Popular Releases

    D3 Loot Tracker: 1.5.6: Updated to work with D3 version 1.0.6.13300

    DirectQ: DirectQ II 2012-11-29: A (slightly) modernized port of Quake II to D3D9. You need SM3 or better hardware to run this - if you don't have it, then don't even bother. It should work on Windows Vista, 7 or 8; it may also work on XP but I haven't tested. Known bugs include: Some mods may not work. This is unfortunately due to the nature of Quake II's game DLLs; sometimes a recompile of the game DLL is all that's needed. In any event, ensure that the game DLL is compatible with the last release of Quake II first (...

    Magelia WebStore Open-source Ecommerce software: Magelia WebStore 2.2: new UI for the Administration console; bug fixes and improvements; version 2.2.215.3

    JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.5: What's new in JayData 1.2.5 (for details, check the release notes). Handlebars template engine support: implement data manager applications with JayData using Handlebars.js for templating. Include JayDataModules/handlebars.js and begin typing the mustaches :) Blogpost: Handlebars templates in JayData; Handlebars helpers and model driven commanding in JayData. Easy JayStorm cloud data management: manage cloud data using the same syntax and data management concept just like any other data ...

    nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.70: Highlight features & improvements: • Performance optimization. • Search engine optimization. ID-less URLs for products, categories, and manufacturers. • Added ACL support (access control list) on products and categories. • Minify and bundle JavaScript files. • Allow a store owner to decide which billing/shipping address fields are enabled/disabled/required (like it's already done for the registration page). • Moved to MVC 4 (.NET 4.5 is required). • Now Visual Studio 2012 is required to work ...

    SQL Server Partition Management: Partition Management Release 3.0: Release 3.0 adds support for SQL Server 2012 and is backward compatible with SQL Server 2008 and 2005. The release consists of: • A Readme file • The Executable • The source code (Visual Studio project) Enhancements include: -- Support for Columnstore indexes in SQL Server 2012 -- Ability to create TSQL scripts for staging table and index creation operations -- Full support for global date and time formats, locale independent -- Support for binary partitioning column types -- Fixes to is...

    NHook - A debugger API: NHook 1.0: x86 debugger. Resolve symbol from MS Public server. Resolve RVA from executable's image. Add breakpoints. Assemble / disassemble target process assembly. More information here; you can also check the unit tests, which are real sample code.

    PDF Library: PDFLib v2.0: Release notes: This new version includes many bug fixes and support for stream objects and cross-reference object streams. New feature: extract images from the PDF.

    Document.Editor: 2013.5: What's new for Document.Editor 2013.5: New read-only file support. New check-for-updates support. Minor bug fixes, improvements and speed-ups.

    MCEBuddy 2.x: MCEBuddy 2.3.10: Critical update to 2.3.9. Changelog for 2.3.10 (32bit and 64bit): 1. AsfBin executable missing from build 2. Removed extra references from build to avoid conflict 3. Showanalyzer installation now checked on remote engine machine. Changelog for 2.3.9 (32bit and 64bit): 1. Added support for WTV output profile 2. Added support for minimizing MCEBuddy to the system tray 3. Added support for custom archive folder 4. Added support to disable subdirectory monitoring 5. Added support for better TS fil...

    DotNetNuke® Community Edition CMS: 07.00.00: Major highlights: Fixed issue that caused profiles of deleted users to be available. Removed the postback after checkboxes are selected in Page Settings > Taxonomy. Implemented the functionality required to edit security role names and social group names. Fixed JavaScript error when using a ";" semicolon as a profile property. Fixed issue when using DateTime properties in profiles. Fixed viewstate error when using Facebook authentication in conjunction with "require valid profile fo...

    CODE Framework: 4.0.21128.0: See change notes in the documentation section for details on what's new.

    Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.76: Fixed a typo in ObjectLiteralProperty.IsConstant that caused all object literals to be treated like they were constants, and possibly moved around in the code when they shouldn't be.

    Kooboo CMS: Kooboo CMS 3.3.0: New features: Dropdown/Radio/Checkbox lists no longer reference the userkey; instead they refer to the UUID field for input value. You can now delete, export, import content from database in the site settings. Labels can now be imported and exported. You can now set the required password strength and maximum number of incorrect login attempts. Child sites can inherit plugins from their parent sites. The view parameter can be changed through the page_context.current value. Addition of c...

    Facebook Windows 8 Sample: Facebook Windows 8 Sample: The current drop holds two versions of the sample: a basic version that uses a Facebook application to list the content of a Facebook page, and a full version including the use of the Bing Maps SDK for positioning the restaurant on a map and showing how to get there. See Developing a Windows Store App to learn how to use the Bing Maps AJAX Control to add Bing Maps to your Windows Store app.

    Commerce Server Tools: Delete a Site (CS10): Updated from the Commerce Server 2002 version (Delete Site) to work with CS10. The tool will delete the CS Site, all associated resources, databases, IIS Sites, and folders on disk.

    RaptorDB - The Document Store: v1.9.0: v1.9.0 - speed increase writing bitmap indexes to disk - bug fix hoot search with wildcards - bug fix datetime indexing with UTC time (all times are localtime) - upgrade to fastJSON v2.0.9 - upgrade to fastBinaryJSON v1.3.5 - changed CodeDOM to Reflection.Emit for MonoDroid compatibility - more optimized bitmap storage format (save offsets if smaller than WAH) - fixed path separator character for monodroid and windows compatibility, changed to Path.DirectorySeparatorChar - new generic Query i...

    Antenna Tracking Unit - Projet Tuteuré /w RFTronic: Présentation Projet: Attached is the project presentation by the company RFTronic.

    Distributed Publish/Subscribe (Pub/Sub) Event System: Distributed Pub Sub Event System Version 3.0: Important: Wsp 3.0 is NOT backward compatible with Wsp 2.1. Prerequisites: You need to install the Microsoft Visual C++ 2010 Redistributable Package. You can find it at: x64 http://www.microsoft.com/download/en/details.aspx?id=14632 x86 http://www.microsoft.com/download/en/details.aspx?id=5555 Wsp now uses Rx (Reactive Extensions) and .Net 4.0. 3.0 enhancements: I changed the topology from a hierarchy to peer-to-peer groups. This should provide much greater scalability and more fault-resi...

    Team Foundation Server Administration Tool: 2.2: TFS Administration Tool 2.2 supports the Team Foundation Server 2012 Object Model. Visual Studio 2012 or Team Explorer 2012 must be installed before you can install this tool. You can download and install Team Explorer 2012 from http://aka.ms/TeamExplorer2012. There are no functional changes between the previous release (2.1) and this release.

    New Projects

    AppWebStore: Application Store in ASP.NET MVC
    BaseCrafter: BaseCrafter Project
    Branding SharePoint 2013: A how-to project for branding in SharePoint 2013
    Clrizr: A set of LESS files that automatically define color shades and font colors for use in LESS CSS enabled web applications. Code is located in /Styles/Clrizr/
    didxaza: a project for learning to use Team Foundation Server
    FastMapper - CONVENTION BASED MAPPER: Coming Soon
    Google Earth Wrapper: Google Earth c# wrapper
    Kirikiri (TVP) 2 Core: Simplified Chinese Translation: Simplified Chinese translated version of Kirikiri (TVP) 2 Core (http://kikyou.info/tvp), based on the SVN Head development version.
    Linq to CRM for Silverlight: The Linq to CRM framework allows Silverlight applications to communicate with MS Dynamics CRM 2011 through the "LINQ to CRM" ORM layer.
    MovieBuddy: a simple movie UI
    MyPractice: Some small applications which mainly use Microsoft technology: Win8 Metro, JavaScript, WCF, C#, etc.
    MySubstitutionCipher: Substitution cipher educational program
    Omega Game Engine: Engine for the creation of 2D and 3D games, DirectX 9/10
    OpenXML PowerPoint Generation: Sample project for the use of OpenXML API 2.5 with PowerPoint
    Riksdagsappen: This project aims to build a Windows 8 Store App whose goal is to spread awareness of the Swedish Riksdag, its members and their work.
    Service Stack Docs: Maintain and generate documentation for Service Stack services directly from code.
    SharpDX for Rastertek tutorials: This project is an introduction to SharpDX, following the Rastertek tutorials, which are written in C++.
    Simple Set for .net: A simple set class that makes it easier for students of computer science to manage sets and set related operations. It's developed in C#.
    Spk.Controls: Spk.Controls is a visual control library for .NET Windows Forms.
    Stateless Designer: Visual Studio extension to support visual design of stateless state machines
    TeamWorkProject: a project for teamwork training outside the company
    Using the Microsoft Kinect to control GoogleMap: The project is a WPF application that uses the Microsoft Kinect to control Google Maps. Feel free to learn the WPF MVVM pattern and Kinect development from it!
    VR Player: VR Player is an experimental Virtual Reality Media Player for Head-Mounted Display devices like the Oculus Rift.
    VS Tool for WSS 3.0: Visual Studio (2005 and 2008) add-ons for WSS. Included: schema.xml explorer

    Read the article

  • CodePlex Daily Summary for Tuesday, June 10, 2014

    Popular Releases

    Bell Open Imaging package: 1.0: First release at CodePlex.

    Australia Income and Tax Calculator: Australia Income and Tax Calculator 1.4: 1. added 2014 financial year 2. refactored the code

    CS-Script Source: Release v3.8.1: Improved ConfigConsole; advanced shell extensions support. cs-script.7z - CS-Script Suite (binaries, documentation, samples); cs-script.ExtensionPack.7z - CS-Script Extension Pack (additional binaries and samples); cs-scriptDocs.7z - CS-Script Documentation

    Papercut: Papercut v3.0.0.0: Papercut has switched to semantic versioning! That means you will have to uninstall old "clickonce" versions to get the latest, as it will see it as an older version. The latest has tons of new features: Modern UI; MVVM architecture; watch directories for new messages; optional backend Papercut service; load on Windows startup; attachments/MIME sections

    Experfwiz (Exchange Performance Data Collection tool): Experfwiz 1.3.8: List of updates in 1.3.8: Added support for Windows 2012 & 2012 R2 (for future use). Added support for Exchange 2013. Full is now enabled by default; to disable full mode, use 'nofull' (Exchange 2013 requires full). Added "\Processor Information()\" counters to Exchange 2010 full. Blocked Exmon execution on Exchange 2013.

    BugNET Issue Tracker: BugNET 1.6: Version 1.6 is a major upgrade to the latest frameworks and components by Microsoft. It includes a major UI overhaul using the bootstrap framework to have a modern, mobile friendly and easily customized layout. Upgrade to .NET 4.5; ASP.NET social auth; script bundling and optimization; improvements for mobile devices; Bootstrap (UI overhaul); rewritten / friendly URLs. Please read our release notes for BugNET 1.6: http://blog.bugnetproject.com/2014/06/08/bugnet-1-6-and-bugnet-pro-1...

    NDataTable: NDataTable Source Code: Source code

    Windows System API in Visual Basic: DataTools Merged Beta 2.4: Added FileSystemWatcher and demonstration to this release.

    .NET Memory Tools: DataTools Merged Beta 2.4: Added FileSystemWatcher and demonstration to this release.

    freeasyExplorer: 3. Alpha: This is the 3rd alpha release. Please inform me if you find some issues or problems. Fixes: - add assembly information - set list view to list mode and add custom icon - add settings window - update MahApps.Metro framework - add drag&drop functionality - change target .net version

    kbvault-mysql: V0.16a: Additions for SEO friendly pages, printer friendly view option and tag cloud on landing page

    SFDL.NET: SFDL.NET (2.2.9.3): Changelog: Retry bug fix (the error counter was not being reset correctly). New setting: the retry wait time is now configurable.

    babelua: 1.5.7.0: V1.5.7.0 - 2014.6.6 Stability improvement: use "lua scripts folder" as lua search path when debugging;

    SEToolbox: SEToolbox 01.033.007 Release 1: Fixed breaking changes in Space Engineers in latest update. Installation of this version will replace the older version.

    Virto Commerce Enterprise Open Source eCommerce Platform (asp.net mvc): Virto Commerce 1.10: Virto Commerce Community Edition version 1.10. To install the SDK package, please refer to the SDK getting started documentation. To configure the source code package, please refer to the source code getting started documentation. This release includes bug fixes and improvements (including Commerce Manager localization and https support). More details about this release can be found on our blog at http://blog.virtocommerce.com.

    NPOI: NPOI 2.1: Assembly version: 2.1.0. New features: a. XSSFSheet.CopySheet b. Excel2Html for XSSF c. insert picture in Word 2007 d. implement IfError function in formula engine. Bug fixes: a. fix conditional formatting issue b. fix ctFont order issue c. fix vertical alignment issue in XSSF d. add IndexedColors to NPOI.SS.UserModel e. fix decimal point issue in non-English culture f. fix SetMargin issue in XSSF g. fix multiple images insert issue in XSSF h. fix rich text style missing issue in XSSF i. fix cell...

    51Degrees - Device Detection and Redirection: 3.1.2.3: Version 3.1 highlights: Device detection algorithm is over 100 times faster. Regular expressions and Levenshtein distance calculations are no longer used. The device detection algorithm performance is no longer limited by the number of device combinations contained in the dataset. Two modes of operation are available: Memory – the detection data set is loaded into memory and there is no continuous connection to the source data file; slower initialisation time but faster detection performanc...

    CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.27.0: CodeMap now indicates the type name for all members. Implemented running scripts 'as administrator': just add '//css_npp asadmin' to the script and run it as usual. 'Prepare script for distribution' now aggregates script dependency assemblies. Various improvements in CodeSnippet, Autocompletion and MethodInfo interactions with each other. Added printing line number for the entries in CodeMap (subject of configuration value). Improved debugging step indication for classless scripts ...

    ClosedXML - The easy way to OpenXML: ClosedXML 0.72.3: 70426e13c415 ClosedXML for .Net 4.0 now uses Open XML SDK 2.5 b9ef53a6654f Merge branch 'master' of https://git01.codeplex.com/forks/vbjay/closedxml 727714e86416 Fix range.Merge(Boolean) for .Net 3.5 eb1ed478e50e Make public range.Merge(Boolean checkIntersects) 6284cf3c3991 More performance improvements when saving.

    DNN Blog: 06.00.07: Highlights: Enhancement: option to show management panel on child modules. Fix: changed SPROC and code to ensure the right people are notified of pending comments (CP-24018). Fix: notification actions authentication now assumes the right module id so these will work from the messaging center (CP-24019). Fix: issue where categories in a post would not save when no categories and no tags were selected.

    New Projects

    ATC FSX Console: Flight Simulator Code
    CRM 2011 Duplicate Detection Sample: CRM 2011 Duplicate Detection Plug-in Sample
    Customisation tool for Visual Studio projects: This tool is used to speed up the setting up of Visual Studio projects and suppress any repetitive manual tasks.
    D3 Focus: A basic tool for people with trouble deciding what to do with their time in Diablo 3.
    EPS - PowerShell templating: EPS (Embedded PowerShell), inspired by erb, is a templating system that embeds PowerShell code into a text document such as HTML files.
    Export Bugs: Export Bugs is an application using the TFS API.
    Killer Bytes Pack: a Visual Studio class project
    MagickViewer: An application that uses the Magick.NET library to display images.
    Mario Card: https://drive.google.com/?authuser=0#folders/0B0VMM5klwOkwLXptMVV6b3NhRVE
    OZone: OZone
    PlexApp PowerShell Module: PoSH4PlexApp is an in-development PowerShell module for querying and managing PlexApp libraries via Plex Media Server's HTTP listener. Contributions welcome
    Product Usage Tracker: This project is about creating a tool/mechanism to capture and record the usage volume and usage patterns of any application.
    Project C²: A simple lightweight C-style programming language for near-hardware system development using GNU GAS as a back-end.
    Random Execute: Random Execute is a command wrapper that randomizes the start time of a process's execution.
    RuLaw.NET: .NET library for the official Russian State Duma API web service, for querying and searching information on laws, deputies, authorities, sessions, votings, etc.
    Test_DB_Name_Check: Latest
    web_gioman_proyecto: Lucas Gioia's project
    WPF Globalization and Multi-language Application: Multilingual User Interfaces providing support for switching UIs from one language to another.

    Read the article

  • Building Enterprise Smartphone App – Part 4: Application Development Considerations

    - by Tim Murphy
    This is the final part in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group.  Feel free to leave feedback.

    Application Development Considerations

    Now we get to the actual building of your solutions.  What are the skills and resources that will be needed in order to develop a smartphone application in the enterprise?

    Language Knowledge

    One of the first things you need to consider when deciding on a platform is its language: which language do you either have the largest in-house skill base in, or can most easily acquire?  If you already have developers who know Java or C# you may want to use either Android or Windows Phone.  You should also take into consideration the market availability of developers.  If your key developer leaves, how easy is it to find a knowledgeable replacement?

    A second consideration when it comes to programming languages is the qualities exposed by the languages of a particular platform.  How well does that development language and its associated frameworks support things like security and access to the features of the smartphone hardware?  This will play into your overall cost of ownership if you have to create this infrastructure on your own.

    Manage Limited Resources

    Everything is limited on a smartphone: battery, memory, processing power, network bandwidth.  When developing your applications you will have to keep your footprint as small as possible in every way.  This means not running unnecessary processes in the background that will drain the battery, or pulling more data over the airwaves than you have to.  You also want to keep your on-device data in as compact a format as possible.

    Mobile Design Patterns

    There are a number of design patterns that have either come to life because of smartphone development or have been adapted for this use.  The main pattern in the Windows Phone environment is MVVM (Model-View-ViewModel); a minimal code sketch follows below.  This is great for overall application structure and separation of concerns.  The fun part is trying to keep that separation as pure as possible.  Many of the other patterns may or may not have strict definitions, but some that you need to be concerned with are push notification, asynchronous communication and offline data storage.

    Real estate is limited on smartphones and even tablets.  You are also limited in the type of controls that can be represented in the UI.  This means rethinking how you modularize your application.  Typing is also much harder to do, so you want to reduce it as much as possible.  This leads to UI patterns.  While not what we would traditionally think of as design patterns, the guidance each platform has for UI design is critical to the success of your application.  If users find the application difficult to navigate, they will not use it.

    Development Process

    Because of the differences in development tools required, test devices, and certification and deployment processes, your teams will need to learn new ways of working together.  This will include the need to integrate service contracts of back-end systems with mobile applications.  You will also want to make sure that you present consistency across different access points to corporate data.  Your web site may have more functionality than your smartphone application, but it should have a consistent core set of functionality.  This all requires greater communication between sub-teams of your developers.
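    As promised above, here is a minimal MVVM sketch in C#.  This is only an illustration of the pattern; the class and property names are hypothetical and not tied to any particular platform SDK:

      using System.ComponentModel;

      // Minimal MVVM sketch: the view data-binds to these properties and is
      // notified of changes; the view-model never references the view directly.
      public class ItemViewModel : INotifyPropertyChanged
      {
          private string _title;

          public string Title
          {
              get { return _title; }
              set
              {
                  if (_title == value) return;
                  _title = value;
                  OnPropertyChanged("Title");
              }
          }

          public event PropertyChangedEventHandler PropertyChanged;

          private void OnPropertyChanged(string propertyName)
          {
              PropertyChangedEventHandler handler = PropertyChanged;
              if (handler != null)
                  handler(this, new PropertyChangedEventArgs(propertyName));
          }
      }

    The view binds to Title in XAML, and the view-model can be unit tested without any UI at all, which is a large part of the pattern's appeal.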
    Testing Process

    Testing of smartphone apps has a lot more to do with what happens when you lose connectivity or when the user navigates away from your application.  There are a lot more opportunities for the user or the device to perform disruptive acts.  This should be your main testing concentration aside from the main business requirements.  You will need to do things like setting the phone to airplane mode and seeing what the application does, in order to weed out any gaps in your handling of communication interruptions.

    Need For Outside Experts

    Since this is a development area that is new to most companies, the need for experts is a lot greater.  Whether these are consultants, vendor representatives or just development community forums, you will need to establish expert contacts.  Nothing is more dangerous for your project timelines than a lack of knowledge.  Make sure you know who to call to avoid lengthy delays in your project because of knowledge gaps.

    Security

    Security has to be a major concern for enterprise applications.  You aren't dealing with just someone's game standings.  You are dealing with a company's intellectual property and competitive advantage.  As such you need to start by limiting access to the application itself.  Once the user is in the app you need to ensure that the data is secure at all times.  This includes both local storage and across the wire.  This means if a platform doesn't natively support encryption for these functions you will need to find alternatives to secure your data.  You also need to keep secret (encryption) keys obfuscated, or locked away outside of the application.  People can disassemble the code otherwise and break your encryption.

    Offline Capabilities

    As we discussed earlier, one of your biggest concerns is not having connectivity.  Because of this, a good portion of your code may be dedicated to handling loss of connection and reconnection situations.  What do you do if you lose the network?  Back up all your transactions and store any supporting data so that operations can continue offline.  In order to support this you will need to determine the available flat file or local database capabilities of the platform.  Any failed transactions will need to support a retry mechanism, whether it is automatic or user initiated (a sketch follows at the end of this post).  This also includes your services, since they will need to be able to roll back partially completed transactions.  Whatever you do, don't ignore this area when you are designing your system.

    Deployment

    Each platform has different deployment capabilities.  Some are more suited to enterprise situations than others.  Apple's approach is probably the most mature at the moment.  Prior to the current generation of smartphone platforms it would have been Windows CE.  Windows Phone 7 has the limitation that the app has to be distributed through the same network as public facing applications.  You mark them as private, which means that they are only accessible by a direct URL.  Unfortunately this does not make them undiscoverable (although it is very difficult).  This will change with Windows Phone 8, where companies will be able to certify their own applications and distribute them.  Given this, Windows Phone applications need to be more diligent with application access in order to keep them restricted to the company's employees.  My understanding of the Android deployment schemes is that they are much less standardized than either iOS or Windows Phone.  Someone would have to confirm or deny that for me though, since I have not yet put the time into researching this platform further.  Given my limited exposure to the iOS and Android platforms I have not been able to confirm this, but there are varying degrees of user involvement to install and keep applications updated.  At one extreme the user just goes to a website to do the install, and in other cases they may need to download files and perform steps to install them.

    Future

    Bluetooth: Today we use Bluetooth for keyboards, mice and headsets.  In the future it could be used to interrogate car computers or manufacturing systems, or possibly retail machines by service techs.  This would open smartphones to greater use, almost as a Star Trek Tricorder.  You would get all your data as well as being able to use it as a universal remote for just about any device or machine.

    Better corporation-controlled deployment: At least in the Windows Phone world, the upcoming release of Windows Phone 8 will include a private certification and deployment option that is currently not available with Windows Phone 7 (Mango).  We currently have to run the apps through the Marketplace certification process and use a targeted distribution method.

    Platform independent approaches: HTML5 and JavaScript with Web Services has become a popular topic lately, not only for creating flexible web sites but also for creating cross-platform mobile applications.  I'm not yet convinced that this lowest common denominator approach is viable in most cases, but it does have its place and seems to be growing.  Be sure to keep an eye on it.

    Summary

    From my perspective, enterprise smartphone applications can offer a great competitive advantage to many companies.  They are not cheap to build and should be approached cautiously.  Understand the factors I have outlined in this series, do your due diligence, and see if there is a portion of your business that can benefit from the mobile experience.

    del.icio.us Tags: Architecture,Smartphones,Windows Phone,iOS,Android
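    Postscript: as mentioned under Offline Capabilities, here is one minimal sketch of a retry queue for failed transactions.  All of the type and member names are hypothetical; a real implementation would also persist the queue using whatever flat file or local database support the platform offers:

      using System;
      using System.Collections.Generic;

      // Hypothetical sketch: failed operations are queued and retried later.
      public class PendingTransaction
      {
          public string Operation { get; set; }
          public string Payload { get; set; }
      }

      public class OfflineTransactionQueue
      {
          private readonly Queue<PendingTransaction> _pending = new Queue<PendingTransaction>();
          private readonly Func<bool> _isConnected;          // e.g. a network status check
          private readonly Action<PendingTransaction> _send; // the real service call

          public OfflineTransactionQueue(Func<bool> isConnected, Action<PendingTransaction> send)
          {
              _isConnected = isConnected;
              _send = send;
          }

          // Try to send immediately; queue for retry on failure or no network.
          public void Submit(PendingTransaction transaction)
          {
              if (_isConnected())
              {
                  try { _send(transaction); return; }
                  catch (Exception) { /* fall through and queue for retry */ }
              }
              _pending.Enqueue(transaction);
          }

          // Call this on reconnect, on a timer, or from a user-initiated retry.
          public void RetryPending()
          {
              int count = _pending.Count;
              for (int i = 0; i < count && _isConnected(); i++)
              {
                  PendingTransaction transaction = _pending.Dequeue();
                  try { _send(transaction); }
                  catch (Exception) { _pending.Enqueue(transaction); }
              }
          }
      }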

    Read the article

  • Navigation in Win8 Metro Style applications

    - by Dennis Vroegop
    In Windows 8, Touch is, as they say, a first class citizen. Now, to be honest: they also said that in Windows 7. However, in Win8 this is actually true. Applications are meant to be used by touch. Yes, you can still use mouse, keyboard and pen, and your apps should take that into account, but touch is what you should focus on initially. Will all users have touch-enabled devices? No, not at first. I don't think touchscreens will be on every device sold next year. But in 5 years? Who knows? Don't forget: if your app is successful it will be around for a long time, and by that time touchscreens will be everywhere. Another reason to embrace touch is that it's easier to develop a touch-oriented app and then make sure that keyboard, mouse and pen work than to do it the other way around. Porting a mouse-based application to a touch-based application almost never works. The reverse gives you a much better chance of success.

    That being said, there are some things that you need to think about. Most people have more than one finger, while most users only use one mouse at a time. Still, most touch-developers translate their mouse-knowledge to touch and think they did a good job. Martin Tirion from Microsoft said that since Touch is a new language, people face the same challenges they do when learning a new real spoken language. The first thing people try when learning a new language is simply to replace the words in their native language with the newly learned words. At first they don't care about grammar. To a native speaker of that other language this sounds all wrong, but they will still be able to understand what the intention was. If you don't believe me: try Google Translate to translate something from your language to another and then back, and see what happens.

    The same thing happens with Touch. Most developers translate a mouse-click into a tap-event and think they're done. Well matey, you're not done. Not by far. There are things you can do with a mouse that you cannot do with touch. Think hover. A mouse has the ability to 'slide' over UI elements. Touch doesn't (I know: with Pen you can do this, but I'm talking about actual fingers here). A touch is either there or it isn't. And right-click? Forget about it. A click is a click.  Yes, you have more than one finger, but the machine doesn't know which finger you use…

    The other way around is also true. Like I said: most users only have one mouse, but they are likely to have more than one finger. So how do we take that into account? Thinking about this is really worth the time: you might come up with some surprisingly good ideas! Still: don't forget that not every user has touch-enabled hardware, so make sure your app is usable for both groups. Keep this in mind: we're going to need it later on!

    Now. Apps should be easy to use. You don't want your user to read through pages and pages of documentation before they can use the app. Imagine that spotter next to an airfield suddenly seeing a prototype of a Concorde 2 landing on the nearby runway. He probably wants to enter that information in our app NOW, not after he's taken a 3-day course. Even if he still has to download the app, install it for the first time and then run it, he should be on his way immediately. At least, fast enough to note down the details of that unique, rare and possibly exciting sighting he just did.

    So… how do we do this? Well, I am not talking about games here. Games are in a league of their own. They fall outside the scope of the apps I am describing. But all the others can roughly be characterized as being one of two flavors: the navigation is either flat or hierarchical. That's it. And if it's hierarchical, it's no more than three levels deep. Not more. Your users will get lost otherwise, and we don't want that.

    Flat is simple. Just imagine we have one screen that is as high as our physical screen is and as wide as you need it to be. Don't worry if it doesn't fit on the screen: people can scroll to the right and left. Don't combine up/down and left/right scrolling: it's confusing. Next to that, since most users will hold their device in landscape mode, it's very natural to scroll horizontally. So let's use that when we have a flat model.

    The same applies to the hierarchical model. Try to have at most three levels. If you need more space, find a way to group the items in such a way that you can fit them in three, very wide lanes. At the highest level we have the so-called hub level. This is the entry point of the app, and as such it should give the user an immediate feeling of what the app is all about. If your app has categories of items then you might show these categories here. And while you're at it: also show 2 or 3 of the items themselves here, to give the user a taste of what lies beneath. If the user selects a category you go to the section part. Here you show several sections (again, go as wide as you need) with, again, some detail examples. After that, the details layer shows each item. By giving some samples of the underlying layer you achieve several things: you make the layer attractive by showing several different things, you show some highlights so the user sees actual content, and you provide a shortcut to the layers underneath. (The image that originally illustrated this layout is borrowed from the http://design.windows.com website, which has tons and tons of examples.)

    For our app we'll use this layout. So what will we show? Well, let's see what sorts of features our app has to offer. I'll repeat them here:

    - Note planes
    - Add pictures of that plane
    - Notify friends of new spots
    - Share new spots on social media
    - Write down arrival times
    - Write down departure times
    - Write down the runway they take

    I am sure you can think of some more items, but for now we'll use these. In the hub we'll show something that represents "Spots", "Friends", "Social". Apparently we have an inner list of spotter-friends that are in the app, while we also have the whole world in Social. In the layer below we show something else, depending on what the user chose. When they choose "Spots" we'll display the last spots, last spots by our friends (so we can actually jump from this category to the one next to it) and so on. When they choose a "spot" (or press the + icon in the App bar, which I'll talk about next time) they go to the lowest and final level, which shows details about that spot, including a picture, date and time, and the notes belonging to that entry.

    You'd be amazed at how easy it is to organize your app this way. If you don't have enough room in these three layers you probably could easily get away with grouping items. Take a look at our hub: we have three completely different things in one place. If you still can't fit everything in a logical and consistent way, chances are you are trying to do too much in this app. Go back to your mission statement, determine if it is specific enough and if your feature list helps that statement or makes it unclear. Go ahead. Give it a go! Next time we'll talk about the look and feel, the charms and the app-bar…
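    P.S. To give you an idea of how little code the hub/section/detail navigation itself takes, here's a rough sketch. SectionPage, DetailPage and the Tag-carried ids are made up for this example; Frame.Navigate and OnNavigatedTo are the standard WinRT navigation members:

      // Hub page: the user taps a category tile; pass its id to the section level.
      private void CategoryTile_Tapped( object sender, TappedRoutedEventArgs e ) {
          string categoryId = (string)( (FrameworkElement)sender ).Tag;
          Frame.Navigate( typeof( SectionPage ), categoryId );
      }

      // Section page: the user picks an individual spot; go one level deeper.
      private void SpotItem_Tapped( object sender, TappedRoutedEventArgs e ) {
          int spotId = (int)( (FrameworkElement)sender ).Tag;
          Frame.Navigate( typeof( DetailPage ), spotId );
      }

      // Detail page: read the parameter that was passed in.
      protected override void OnNavigatedTo( NavigationEventArgs e ) {
          base.OnNavigatedTo( e );
          int spotId = (int)e.Parameter;
          // Load and display the spot with this id...
      }

    Three levels, three calls: that's all the plumbing the model needs.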

    Read the article

  • Processing Kinect v2 Color Streams in Parallel

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2014/08/20/processing-kinect-v2-color-streams-in-parallel.aspx

    I've really been enjoying being a part of the Kinect for Windows Developer's Preview. The new hardware has some really impressive capabilities. However, with great power comes great system specs. Unfortunately, my little laptop that could is not 100% up to the task; I've had to get a little creative. The most disappointing thing I've run into is that I can't always cleanly display the color camera stream in managed code. I managed to strip the code down to what I believe is the bare minimum:

      using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) {
          if( null == _ColorFrame ) return;

          BitmapToDisplay.Lock();
          _ColorFrame.CopyConvertedFrameDataToIntPtr(
              BitmapToDisplay.BackBuffer,
              Convert.ToUInt32( BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight ),
              ColorImageFormat.Bgra );
          BitmapToDisplay.AddDirtyRect(
              new Int32Rect( 0, 0, _ColorFrame.FrameDescription.Width, _ColorFrame.FrameDescription.Height ) );
          BitmapToDisplay.Unlock();
      }

    With this snippet, I'm placing the converted Bgra32 color stream directly on the BackBuffer of the WriteableBitmap. This gives me pretty smooth playback, but I still get the occasional freeze for half a second. After a bit of profiling, I discovered there were a few problems. The first problem is the size of the buffer along with the conversion on the buffer. At this time, the raw image format of the data from the Kinect is Yuy2. This is great for direct video processing. It would be ideal if I had a WriteableVideo object in WPF. However, this is not the case.

    Further digging led me to the real problem. It appears that the SDK is converting the input serially. Let's think about this for a second. The color camera is a 1080p camera. As we should all know, this gives us a native resolution of 1920 x 1080. This produces 2,073,600 pixels. Yuy2 uses 4 bytes per 2 pixels, for a buffer size of 4,147,200 bytes. Bgra32 uses 4 bytes per pixel, for a buffer size of 8,294,400 bytes. The SDK appears to be doing this on one thread. I started wondering if I could do this better myself. I mean, I have 8 cores in my system. Why can't I use them all?

    The first problem is converting a Yuy2 frame into a Bgra32 frame. It is NOT trivial. I spent a day of research on just how to do this. In the end, I didn't even produce the best algorithm possible, but it did work. After I managed to get that to work, I knew my next step was to get the conversion operation off the UI Thread. This was a simple process of throwing the work into a Task. Of course, this meant I had to marshal the final write to the WriteableBitmap back to the UI thread. Finally, I needed to vectorize the operation so I could run it safely in parallel. This was, mercifully, not quite as hard as I thought it would be. I had my loop return an index to a pair of pixels. From there, I had to tell the loop to do everything for this pair of pixels. If you're wondering why I did it for pairs of pixels, look back above at the specification for the Yuy2 format. I won't go into full detail on why each 4 bytes contains 2 pixels of information, but rest assured that there is a reason why the format is described in that way.

    The first working attempt at this algorithm successfully turned my poor laptop into a space heater. I very quickly brought and maintained all 8 cores up to about 97% usage. That's when I remembered that obscure option in the Task Parallel Library where you can limit the amount of parallelism used. After a little trial and error, I discovered 4 parallel tasks were enough for most cases. This yielded the following code:

      private byte ClipToByte( int p_ValueToClip ) {
          return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue :
              ( ( p_ValueToClip > byte.MaxValue ) ? byte.MaxValue : p_ValueToClip ) );
      }

      private void ColorFrameArrived( object sender, ColorFrameArrivedEventArgs e ) {
          if( null == e.FrameReference ) return;

          // If you do not dispose of the frame, you never get another one...
          using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) {
              if( null == _ColorFrame ) return;

              byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel];
              byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight];
              _ColorFrame.CopyRawFrameDataToArray( _InputImage );

              Task.Factory.StartNew( () => {
                  ParallelOptions _ParallelOptions = new ParallelOptions();
                  _ParallelOptions.MaxDegreeOfParallelism = 4;

                  Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => {
                      // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx
                      int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16;
                      int _U = _InputImage[( _Index << 2 ) + 1] - 128;
                      int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16;
                      int _V = _InputImage[( _Index << 2 ) + 3] - 128;

                      byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 );
                      byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 );
                      byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );

                      _OutputImage[( _Index << 3 ) + 0] = _B;
                      _OutputImage[( _Index << 3 ) + 1] = _G;
                      _OutputImage[( _Index << 3 ) + 2] = _R;
                      _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A

                      _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 );
                      _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 );
                      _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );

                      _OutputImage[( _Index << 3 ) + 4] = _B;
                      _OutputImage[( _Index << 3 ) + 5] = _G;
                      _OutputImage[( _Index << 3 ) + 6] = _R;
                      _OutputImage[( _Index << 3 ) + 7] = 0xFF;
                  } );

                  Application.Current.Dispatcher.Invoke( () => {
                      BitmapToDisplay.WritePixels(
                          new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ),
                          _OutputImage, BitmapToDisplay.BackBufferStride, 0 );
                  } );
              } );
          }
      }

    This seemed to yield the results I wanted, but there was still the occasional stutter. This led to what I realized was the second problem. There is a race condition between the UI Thread and me locking the WriteableBitmap so I can write the next frame. Again, I'm writing approximately 8MB to the back buffer. Then, I started thinking I could cheat. The Kinect is running at 30 frames per second. The WPF UI Thread runs at 60 frames per second. This made me not feel bad about exploiting the Composition Thread. I moved the bulk of the code from the FrameArrived handler into CompositionTarget.Rendering. Once I was in there, I polled for a frame, and rendered it if it existed. Since, in theory, I'm only killing the Composition Thread every other hit, I decided I was ok with this for cases where silky smooth video performance REALLY mattered. This code looked like this:

      private byte ClipToByte( int p_ValueToClip ) {
          return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue :
              ( ( p_ValueToClip > byte.MaxValue ) ? byte.MaxValue : p_ValueToClip ) );
      }

      void CompositionTarget_Rendering( object sender, EventArgs e ) {
          using( ColorFrame _ColorFrame = FrameReader.AcquireLatestFrame() ) {
              if( null == _ColorFrame ) return;

              byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel];
              byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight];
              _ColorFrame.CopyRawFrameDataToArray( _InputImage );

              ParallelOptions _ParallelOptions = new ParallelOptions();
              _ParallelOptions.MaxDegreeOfParallelism = 4;

              Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => {
                  // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx
                  int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16;
                  int _U = _InputImage[( _Index << 2 ) + 1] - 128;
                  int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16;
                  int _V = _InputImage[( _Index << 2 ) + 3] - 128;

                  byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 );
                  byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 );
                  byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );

                  _OutputImage[( _Index << 3 ) + 0] = _B;
                  _OutputImage[( _Index << 3 ) + 1] = _G;
                  _OutputImage[( _Index << 3 ) + 2] = _R;
                  _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A

                  _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 );
                  _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 );
                  _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );

                  _OutputImage[( _Index << 3 ) + 4] = _B;
                  _OutputImage[( _Index << 3 ) + 5] = _G;
                  _OutputImage[( _Index << 3 ) + 6] = _R;
                  _OutputImage[( _Index << 3 ) + 7] = 0xFF;
              } );

              BitmapToDisplay.WritePixels(
                  new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ),
                  _OutputImage, BitmapToDisplay.BackBufferStride, 0 );
          }
      }
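    For completeness, here is roughly how the pieces referenced above (Sensor, FrameReader, BitmapToDisplay and the Rendering hook) might be wired up. This is a sketch based on the released Kinect v2 SDK, so the Developer Preview builds may differ slightly; ImageToDisplay is assumed to be an Image element in the XAML:

      private KinectSensor Sensor;
      private ColorFrameReader FrameReader;
      private WriteableBitmap BitmapToDisplay;

      private void Window_Loaded( object sender, RoutedEventArgs e ) {
          Sensor = KinectSensor.GetDefault();
          FrameReader = Sensor.ColorFrameSource.OpenReader();

          BitmapToDisplay = new WriteableBitmap(
              Sensor.ColorFrameSource.FrameDescription.Width,
              Sensor.ColorFrameSource.FrameDescription.Height,
              96.0, 96.0, PixelFormats.Bgra32, null );
          ImageToDisplay.Source = BitmapToDisplay;

          // Drive rendering from the Composition Thread, as described above.
          CompositionTarget.Rendering += CompositionTarget_Rendering;
          Sensor.Open();
      }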

    Read the article

  • Help with Design for Vacation Tracking System (C#/.NET/Access/WebServices/SOA/Excel) [closed]

    - by Aaronaught
    I have been tasked with developing a system for tracking our company's paid time-off (vacation, sick days, etc.). At the moment we are using an Excel spreadsheet on a shared network drive, and it works pretty well, but we are concerned that we won't be able to "trust" employees forever and sometimes we run into locking issues when two people try to open the spreadsheet at once. So we are trying to build something a little more robust. I would like some input on this design in terms of maintainability, scalability, extensibility, etc. It's a pretty simple workflow we need to represent right now.

    I started with a basic MS Access schema like this:

      Employees (EmpID int, EmpName varchar(50), AllowedDays int)
      Vacations (VacationID int, EmpID int, BeginDate datetime, EndDate datetime)

    But we don't want to spend a lot of time building a schema and database like this and have to change it later, so I think I am going to go with something that will be easier to expand through configuration. Right now the vacation table has this schema:

      Vacations (VacationID int, PropName varchar(50), PropValue varchar(50))

    And the table will be populated with data like this:

      VacationID | PropName    | PropValue
      -----------+-------------+------------------
      1          | EmpID       | 4
      1          | EmpName     | James Jones
      1          | Reason      | Vacation
      1          | BeginDate   | 2/24/2010
      1          | EndDate     | 2/30/2010
      1          | Destination | Spectate Swamp
      2          | ...         | ...

    I think this is a pretty good, extensible design; we can easily add new properties to the vacation like the destination or maybe approval status, etc. I wasn't too sure how to go about managing the database of valid properties. I thought of putting them in a separate PropNames table, but it gets complicated to manage all the different data types, and people say that you shouldn't put CLR type names into a SQL database, so I decided to use XML instead. Here is the schema:

      <VacationProperties>
        <PropertyNames>EmpID,EmpName,Reason,BeginDate,EndDate,Destination</PropertyNames>
        <PropertyTypes>System.Int32,System.String,System.String,System.DateTime,System.DateTime,System.String</PropertyTypes>
        <PropertiesRequired>true,true,false,true,true,false</PropertiesRequired>
      </VacationProperties>

    I might need more fields than that, I'm not completely sure.
I'm parsing the XML like this (would like some feedback on the parsing code): string xml = File.ReadAllText("properties.xml"); Match m = Regex.Match(xml, "<(PropertyNames)>(.*?)</PropertyNames>"); string[] pn = m.Value.Split(','); // do the same for PropertyTypes, PropertiesRequired Then I use the following code to persist configuration changes to the database: string sql = "DROP TABLE VacationProperties"; sql = sql + " CREATE TABLE VacationProperties "; sql = sql + "(PropertyName varchar(100), PropertyType varchar(100), "; sql = sql + "IsRequired varchar(100))"; for (int i = 0; i < pn.Length; i++) { sql = sql + " INSERT VacationProperties VALUES (" + pn[i] + "," + pt[i] + "," + pv[i] + ")"; } // GlobalConnection is a singleton new SqlCommand(sql, GlobalConnection.Instance).ExecuteReader(); So far so good, but after a few days of this I then realized that a lot of this was just a more specific kind of a generic workflow which could be further abstracted, and instead of writing all of this boilerplate plumbing code I could just come up with a workflow and plug it into a workflow engine like Windows Workflow Foundation and have the users configure it: In order to support routing these configurations through the workflow system, it seemed natural to implement generic XML Web Services for this instead of just using an XML file as above. I've used this code to implement the Web Services: public class VacationConfigurationService : WebService { [WebMethod] public void UpdateConfiguration(string xml) { // Above code goes here } } Which was pretty easy, although I'm still working on a way to validate that XML against some kind of schema as there's no error-checking yet. I also created a few different services for other operations like VacationSubmissionService, VacationReportService, VacationDataService, VacationAuthenticationService, etc. The whole Service Oriented Architecture looks like this: And because the workflow itself might change, I have been working on a way to integrate the WF workflow system with MS Visio, which everybody at the office already knows how to use so they could make changes pretty easily. We have a diagram that looks like the following (it's kind of hard to read but the main items are Activities, Authenticators, Validators, Transformers, Processors, and Data Connections, they're all analogous to the services in the SOA diagram above). The requirements for this system are: (Note - I don't control these, they were given to me by management) Main workflow must interface with Excel spreadsheet, probably through VBA macros (to ease the transition to the new system) Alerts should integrate with MS Outlook, Lotus Notes, and SMS (text messages). We also want to interface it with the company Voice Mail system but that is not a "hard" requirement. Performance requirements: Must handle 250,000 Transactions Per Second Should be able to handle up to 20,000 employees (right now we have 3) 99.99% uptime ("four nines") expected Must be secure against outside hacking, but users cannot be required to enter a username/password. Platforms: Must support Windows XP/Vista/7, Linux, iPhone, Blackberry, DOS 2.0, VAX, IRIX, PDP-11, Apple IIc. Time to complete: 6 to 8 weeks. My questions are: Is this a good design for the system so far? Am I using all of the recommended best practices for these technologies? How do I integrate the Visio diagram above with the Windows Workflow Foundation to call the ConfigurationService and persist workflow changes? Am I missing any important components? 
Will this be extensible enough to support any scenario via end-user configuration? Will the system scale to the above performance requirements? Will we need any expensive hardware to run it? Are there any "gotchas" I should know about with respect to cross-platform compatibility? For example, would it be difficult to convert this to an iPhone app? How long would you expect this to take? (We've dedicated 1 week for testing so I'm thinking maybe 5 weeks?) Many thanks for your advice, Aaron
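As an aside on the parsing and persistence code above: System.Xml can read those comma-separated lists without the regex pitfalls (the Match shown captures the tags as well as the values), and parameterized commands avoid concatenating raw values into INSERT statements. A minimal sketch only, reusing the question's own properties.xml layout and GlobalConnection singleton, with error handling omitted:

    // using System.Data; using System.Data.SqlClient; using System.Xml;
    XmlDocument doc = new XmlDocument();
    doc.Load( "properties.xml" );
    string[] pn = doc.SelectSingleNode( "//PropertyNames" ).InnerText.Split( ',' );
    string[] pt = doc.SelectSingleNode( "//PropertyTypes" ).InnerText.Split( ',' );
    string[] pr = doc.SelectSingleNode( "//PropertiesRequired" ).InnerText.Split( ',' );

    using ( SqlCommand cmd = new SqlCommand(
        "INSERT VacationProperties VALUES (@name, @type, @required)",
        GlobalConnection.Instance ) )
    {
        cmd.Parameters.Add( "@name", SqlDbType.VarChar, 100 );
        cmd.Parameters.Add( "@type", SqlDbType.VarChar, 100 );
        cmd.Parameters.Add( "@required", SqlDbType.VarChar, 100 );
        for ( int i = 0; i < pn.Length; i++ )
        {
            cmd.Parameters["@name"].Value = pn[i];
            cmd.Parameters["@type"].Value = pt[i];
            cmd.Parameters["@required"].Value = pr[i];
            cmd.ExecuteNonQuery(); // these are not queries, so no ExecuteReader
        }
    }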

    Read the article

  • Batch breaks off after calling another batch file

    - by Sven Arno Jopen
    I have a problem: my batch process breaks off each time after calling another batch file. The batch files are used to run a make process from IBM Rhapsody; they translate the call from Rhapsody to the Visual Studio tools, so nmake is called from the batch after various settings are made. The scripts weren't written entirely by me; I only adapted them to run under both Windows architectures, x86 and x64. The first script (vs2005_make.bat) is called from Rhapsody and runs up to the "call" statement. The second script (Vcvars_VisualStudio2005.bat) runs to the end. But the first script doesn't resume working; at this point the process breaks off without an error message. I'm not very familiar with batch files; this is the first time I've written more than simple console commands in a batch file. I hope I have given all the information needed; otherwise, ask me. Here is the start script (vs2005_make.bat): :: parameter 1 - Makefile which should be used :: parameter 2 - The make target mark @echo off IF "%2"=="" set target=all IF "%2"=="all" set target=all IF "%2"=="build" set target=all IF "%2"=="rebuild" set target=clean all IF "%2"=="clean" set target=clean set RegQry="HKLM\Hardware\Description\System\CentralProcessor\0" REG.exe Query %RegQry% > checkOS.txt Find /i "x86" < CheckOS.txt > StringCheck.txt IF %ERRORLEVEL%==0 ( set arch=x86 ) ELSE ( set arch=x64 ) call "%ProgramFiles%\IBM\Rhapsody752\Share\etc\Vcvars_VisualStudio2005.bat" %arch% IF %ERRORLEVEL%==0 ( set makeflags= nmake /nologo /S /F %1 %target% ) del checkOS.txt del StringCheck.txt exit and here is the called script (Vcvars_VisualStudio2005.bat): :: param 1 - Processor architecture @echo off ECHO param 1 = %1 IF %1==x86 ( SET ProgrammPath=%ProgramFiles% ) ELSE IF %1==x64 ( SET ProgrammPath=%ProgramFiles(x86)% ) ELSE ( ECHO Unknowen architectur EXIT /B 1 ) SET VSINSTALLDIR="%ProgrammPath%\Microsoft Visual Studio 8\Common7\IDE" SET VCINSTALLDIR="%ProgrammPath%\Microsoft Visual Studio 8" SET FrameworkDir=C:\WINDOWS\Microsoft.NET\Framework SET FrameworkVersion=v2.0.50727 SET FrameworkSDKDir="%ProgrammPath%\Microsoft Visual Studio 8\SDK\v2.0" rem Root of Visual Studio common files. IF %VSINSTALLDIR%=="" GOTO Usage IF %VCINSTALLDIR%=="" SET VCINSTALLDIR=%VSINSTALLDIR% rem rem Root of Visual Studio ide installed files. rem SET DevEnvDir=%VSINSTALLDIR% rem rem Root of Visual C++ installed files. rem SET MSVCDir=%VCINSTALLDIR%\VC SET PATH=%DevEnvDir%;%MSVCDir%\BIN;%VCINSTALLDIR%\Common7\Tools;%VCINSTALLDIR%\Common7\Tools\bin\prerelease;%VCINSTALLDIR%\Common7\Tools\bin;%FrameworkSDKDir%\bin;%FrameworkDir%\%FrameworkVersion%;%PATH%; SET INCLUDE=%MSVCDir%\ATLMFC\INCLUDE;%MSVCDir%\INCLUDE;%MSVCDir%\PlatformSDK\include\gl;%MSVCDir%\PlatformSDK\include;%FrameworkSDKDir%\include;%INCLUDE% SET LIB=%MSVCDir%\ATLMFC\LIB;%MSVCDir%\LIB;%MSVCDir%\PlatformSDK\lib;%FrameworkSDKDir%\lib;%LIB% GOTO end :Usage ECHO. VSINSTALLDIR variable is not set. ECHO. ECHO SYNTAX: %0 GOTO end :end Here is the console output. What I don't understand, and find very suspicious, is that after the "IF %ERRORLEVEL%" statement in the first script everything is printed even though echo is set to off… Executing: "C:\Programme\IBM\Rhapsody752\Share\etc\vs2005_make.bat" Simulation.mak build IF %ERRORLEVEL%==0 ( Mehr? set arch=x86 Mehr? ) ELSE ( Mehr? set arch=x64 Mehr? ) call "%ProgramFiles%\IBM\Rhapsody752\Share\etc\Vcvars_VisualStudio2005.bat" %arch% :: param 1 - Processor architecture @echo off ECHO param 1 = %1 param 1 = x86 IF %1==x86 ( Mehr? 
SET ProgrammPath=%ProgramFiles% Mehr? ) ELSE IF %1==x64 ( Mehr? SET ProgrammPath=%ProgramFiles(x86)% Mehr? ) ELSE ( Mehr? ECHO Unknowen architectur Mehr? EXIT /B 1 Mehr? ) SET VSINSTALLDIR="%ProgrammPath%\Microsoft Visual Studio 8\Common7\IDE" SET VCINSTALLDIR="%ProgrammPath%\Microsoft Visual Studio 8" SET FrameworkDir=C:\WINDOWS\Microsoft.NET\Framework SET FrameworkVersion=v2.0.50727 SET FrameworkSDKDir="%ProgrammPath%\Microsoft Visual Studio 8\SDK\v2.0" rem Root of Visual Studio common files. IF %VSINSTALLDIR%=="" GOTO Usage IF %VCINSTALLDIR%=="" SET VCINSTALLDIR=%VSINSTALLDIR% rem rem Root of Visual Studio ide installed files. rem SET DevEnvDir=%VSINSTALLDIR% rem rem Root of Visual C++ installed files. rem SET MSVCDir=%VCINSTALLDIR%\VC SET PATH=%DevEnvDir%;%MSVCDir%\BIN;%VCINSTALLDIR%\Common7\Tools;%VCINSTALLDIR%\Common7\Tools\bin\prerelease;%VCINSTALLDIR%\Common7\Tools\bin;%FrameworkSDKDir%\bin;%FrameworkDir%\%FrameworkVersion%;%PATH%; SET INCLUDE=%MSVCDir%\ATLMFC\INCLUDE;%MSVCDir%\INCLUDE;%MSVCDir%\PlatformSDK\include\gl;%MSVCDir%\PlatformSDK\include;%FrameworkSDKDir%\include;%INCLUDE% SET LIB=%MSVCDir%\ATLMFC\LIB;%MSVCDir%\LIB;%MSVCDir%\PlatformSDK\lib;%FrameworkSDKDir%\lib;%LIB% GOTO end Build Done I hope someone has an idea; I have been working on this for two days now and can't find the error... Thanking you in anticipation. Note: The word "Mehr?" in the output is German and means "more". I don't know where it comes from; it is possible that it is a translation of the English output into German.

    Read the article

  • What does a Software Developer actually do?

    - by chobo2
    Hi, I am graduating with my Computer Science degree in a few weeks!! I have started to look for my first job. For the last couple of years I've gotten really into web programming (ASP.NET). My first choice would be a junior ASP.NET MVC developer position, but I don't think any companies in my area use MVC yet, or if they do, they are not hiring. So my second choice would be a junior ASP.NET WebForms developer. My other choices after that would be forms applications and mobile applications using .NET and C#. As you can see, I am looking for something with .NET. I spent the last couple of years doing .NET projects for school and in my free time, I love the language, and it would pain me right now to switch to something like PHP. Now I have found a posting in my area for an Entry Software Developer. I like the fact that they are using .NET and that it is an entry-level job (I have never worked in this industry and never had more than a tutoring job, so I don't want to go for intermediate jobs). Posting: Are you looking for an exciting challenge within a dynamic, people-oriented culture where you can launch your technical career? Company Name Inc. is a technology consulting company, located in Canada, that designs, develops, and delivers real-time interactive applications accessed via the Internet as well as back-end tools to support these applications. Company Name provides a combination of out-of-the-box and customized solutions to an expanding list of partners and customers. POSITION SUMMARY As a member of our team, the successful candidate will be responsible for helping us increase the quality and stability of our software systems by working jointly and directly with both the Software Development teams and the QA Team. The primary mission of this role will be to substantially enhance our test automation suite. The incumbent will design and program automated tests (unit, integration, system, stress and load) in Visual Studio using C# and will develop sound processes that help us identify and resolve defects as early as possible. The successful incumbent will help us improve and enhance system functionality, reliability, performance and scalability. This role is specifically designed for an eager, bright, new graduate who is looking for a stepping stone into a software engineering role. We promote from within and invite new graduates to apply for this important position - which may lead to new opportunities. We also offer a generous professional development plan to help you on your way. You will be a key part of a team of experts that is responsible for improving the quality of our software by: • Designing, writing, and executing test plans and programmatic tests in Visual Studio using C# and NUnit for functional testing of our code, new features, regression, and performance test procedures. • Working with the engineers to design and build the stress and load testing framework which emulates tens and even hundreds of thousands of concurrent users via a distributed network interfacing with our Load Testing Lab. • Interfacing with both the Development Team and the QA Team to ensure risks are identified and managed. • Mentoring and leading the QA Team in programmatic test automation technologies and tools. MUST HAVE SKILLS / QUALIFICATIONS: • Diploma or higher Degree in Computer Science, or equivalent formal training. • Fundamental C# programming skills. • Knowledge of Internet technologies and Microsoft Windows platforms. • Knowledge of PC hardware. • Excellent communication skills (both oral and written). 
• Self-starter who takes initiative, requires minimal supervision, can handle multiple simultaneous tasks. • Detail-oriented, able to concentrate, and work quickly. • Proven diagnostic, analytical, and problem solving skills. NICE TO HAVE SKILLS: • Exposure to Visual Studio Team System or Visual Studio Test Edition. • Exposure in C# using NUnit. • Exposure to NUnit, HTTPUnit, and other automation tool suites. • Exposure to Performance/Stress/Load Testing. • Good understanding of relational databases (MS SQL Server). • Familiar with video and online multi-player games. As part of our team you will have the opportunity to work with a supportive team of experts, drive your own success, and ride the wave as we continually expand our team of experts. If you are interested in this opportunity, please send your resume to [email protected] with "Entry Level Software Developer" in the subject line. So that is the posting. To me it sounds like it is a QA job. I don't have anything against QA jobs, but a lot of them seem to be just clicking buttons and running scripts. Is this what a typical software developer does? I am on the fence about applying for this job. On one side, I am not sure how much programming I would be doing; I want to be programming at least half the time, otherwise my skills will never improve, since I will never be programming in teams. At the same time I have no experience in the industry, so on the other side I am thinking I should just go for it and then maybe a year later try to get a full programming job (provided that I get this one). Yet if I am not programming in that job, then that experience will not help me with the next job I look for, as I will be back at square one.

    Read the article

  • DirectX 10 Primitive is not displayed

    - by pypmannetjies
    I am trying to write my first DirectX 10 program that displays a triangle. Everything compiles fine, and the render function is called, since the background changes to black. However, the triangle I'm trying to draw with a triangle strip primitive is not displayed at all. The Initialization function: bool InitDirect3D(HWND hWnd, int width, int height) { //****** D3DDevice and SwapChain *****// DXGI_SWAP_CHAIN_DESC swapChainDesc; ZeroMemory(&swapChainDesc, sizeof(swapChainDesc)); swapChainDesc.BufferCount = 1; swapChainDesc.BufferDesc.Width = width; swapChainDesc.BufferDesc.Height = height; swapChainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; swapChainDesc.BufferDesc.RefreshRate.Numerator = 60; swapChainDesc.BufferDesc.RefreshRate.Denominator = 1; swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; swapChainDesc.OutputWindow = hWnd; swapChainDesc.SampleDesc.Count = 1; swapChainDesc.SampleDesc.Quality = 0; swapChainDesc.Windowed = TRUE; if (FAILED(D3D10CreateDeviceAndSwapChain( NULL, D3D10_DRIVER_TYPE_HARDWARE, NULL, 0, D3D10_SDK_VERSION, &swapChainDesc, &pSwapChain, &pD3DDevice))) return fatalError(TEXT("Hardware does not support DirectX 10!")); //***** Shader *****// if (FAILED(D3DX10CreateEffectFromFile( TEXT("basicEffect.fx"), NULL, NULL, "fx_4_0", D3D10_SHADER_ENABLE_STRICTNESS, 0, pD3DDevice, NULL, NULL, &pBasicEffect, NULL, NULL))) return fatalError(TEXT("Could not load effect file!")); pBasicTechnique = pBasicEffect->GetTechniqueByName("Render"); pViewMatrixEffectVariable = pBasicEffect->GetVariableByName( "View" )->AsMatrix(); pProjectionMatrixEffectVariable = pBasicEffect->GetVariableByName( "Projection" )->AsMatrix(); pWorldMatrixEffectVariable = pBasicEffect->GetVariableByName( "World" )->AsMatrix(); //***** Input Assembly Stage *****// D3D10_INPUT_ELEMENT_DESC layout[] = { {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0} }; UINT numElements = 2; D3D10_PASS_DESC PassDesc; pBasicTechnique->GetPassByIndex(0)->GetDesc(&PassDesc); if (FAILED( pD3DDevice->CreateInputLayout( layout, numElements, PassDesc.pIAInputSignature, PassDesc.IAInputSignatureSize, &pVertexLayout))) return fatalError(TEXT("Could not create Input Layout.")); pD3DDevice->IASetInputLayout( pVertexLayout ); //***** Vertex buffer *****// UINT numVertices = 100; D3D10_BUFFER_DESC bd; bd.Usage = D3D10_USAGE_DYNAMIC; bd.ByteWidth = sizeof(vertex) * numVertices; bd.BindFlags = D3D10_BIND_VERTEX_BUFFER; bd.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE; bd.MiscFlags = 0; if (FAILED(pD3DDevice->CreateBuffer(&bd, NULL, &pVertexBuffer))) return fatalError(TEXT("Could not create vertex buffer!"));; UINT stride = sizeof(vertex); UINT offset = 0; pD3DDevice->IASetVertexBuffers( 0, 1, &pVertexBuffer, &stride, &offset ); //***** Rasterizer *****// // Set the viewport viewPort.Width = width; viewPort.Height = height; viewPort.MinDepth = 0.0f; viewPort.MaxDepth = 1.0f; viewPort.TopLeftX = 0; viewPort.TopLeftY = 0; pD3DDevice->RSSetViewports(1, &viewPort); D3D10_RASTERIZER_DESC rasterizerState; rasterizerState.CullMode = D3D10_CULL_NONE; rasterizerState.FillMode = D3D10_FILL_SOLID; rasterizerState.FrontCounterClockwise = true; rasterizerState.DepthBias = false; rasterizerState.DepthBiasClamp = 0; rasterizerState.SlopeScaledDepthBias = 0; rasterizerState.DepthClipEnable = true; rasterizerState.ScissorEnable = false; rasterizerState.MultisampleEnable = false; rasterizerState.AntialiasedLineEnable = true; 
ID3D10RasterizerState* pRS; pD3DDevice->CreateRasterizerState(&rasterizerState, &pRS); pD3DDevice->RSSetState(pRS); //***** Output Merger *****// // Get the back buffer from the swapchain ID3D10Texture2D *pBackBuffer; if (FAILED(pSwapChain->GetBuffer(0, __uuidof(ID3D10Texture2D), (LPVOID*)&pBackBuffer))) return fatalError(TEXT("Could not get back buffer.")); // create the render target view if (FAILED(pD3DDevice->CreateRenderTargetView(pBackBuffer, NULL, &pRenderTargetView))) return fatalError(TEXT("Could not create the render target view.")); // release the back buffer pBackBuffer->Release(); // set the render target pD3DDevice->OMSetRenderTargets(1, &pRenderTargetView, NULL); return true; } The render function: void Render() { if (pD3DDevice != NULL) { pD3DDevice->ClearRenderTargetView(pRenderTargetView, D3DXCOLOR(0.0f, 0.0f, 0.0f, 0.0f)); //create world matrix static float r; D3DXMATRIX w; D3DXMatrixIdentity(&w); D3DXMatrixRotationY(&w, r); r += 0.001f; //set effect matrices pWorldMatrixEffectVariable->SetMatrix(w); pViewMatrixEffectVariable->SetMatrix(viewMatrix); pProjectionMatrixEffectVariable->SetMatrix(projectionMatrix); //fill vertex buffer with vertices UINT numVertices = 3; vertex* v = NULL; //lock vertex buffer for CPU use pVertexBuffer->Map(D3D10_MAP_WRITE_DISCARD, 0, (void**) &v ); v[0] = vertex( D3DXVECTOR3(-1,-1,0), D3DXVECTOR4(1,0,0,1) ); v[1] = vertex( D3DXVECTOR3(0,1,0), D3DXVECTOR4(0,1,0,1) ); v[2] = vertex( D3DXVECTOR3(1,-1,0), D3DXVECTOR4(0,0,1,1) ); pVertexBuffer->Unmap(); // Set primitive topology pD3DDevice->IASetPrimitiveTopology( D3D10_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP ); //get technique desc D3D10_TECHNIQUE_DESC techDesc; pBasicTechnique->GetDesc(&techDesc); for(UINT p = 0; p < techDesc.Passes; ++p) { //apply technique pBasicTechnique->GetPassByIndex(p)->Apply(0); //draw pD3DDevice->Draw(numVertices, 0); } pSwapChain->Present(0,0); } }

    Read the article

  • Users being forced to re-login randomly, before session and auth ticket timeout values are reached

    - by Don
    I'm having reports and complaints from my users that they will be using a screen and get kicked back to the login screen immediately on their next request. It doesn't happen all the time, but randomly. After looking at the Web server, the error that shows up in the application event log is: Event code: 4005 Event message: Forms authentication failed for the request. Reason: The ticket supplied has expired. Everything that I read starts out with people asking about web gardens or load balancing. We are not using either of those. We're a single Windows 2003 (32-bit OS, 64-bit hardware) server with IIS6. This is the only website on this server too. This behavior does not generate any application exceptions or visible issues to the user. They just get booted back to the login screen and are forced to log in. As you can imagine, this is extremely annoying and counter-productive for our users. Here's what I have set in my web.config for the application in the root: <authentication mode="Forms"> <forms name=".TcaNet" protection="All" timeout="40" loginUrl="~/Login.aspx" defaultUrl="~/MyHome.aspx" path="/" slidingExpiration="true" requireSSL="false" /> </authentication> I have also read that if you have some locations set up that no longer exist or are bogus you could have issues. My path attributes are all valid directories, so that shouldn't be the problem: <location path="js"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> <location path="images"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> <location path="anon"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> <location path="App_Themes"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> <location path="NonSSL"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> The only thing I'm not clear on is whether my timeout value in the forms property for the auth ticket has to be the same as my session timeout value (defined in the app's configuration in IIS). I've read some things that say you should have the authentication timeout shorter (40) than the session timeout (45) to avoid possible complications. Either way, we have users that get kicked to the login screen a minute or two after their last action. So the session definitely should not be expiring. Update 2/23/09: I've since set the session timeout and authentication ticket timeout values to both be 45 and the problem still seems to be happening. The only other web.config in the application is in 1 virtual directory that hosts Community Server. That web.config's authentication settings are as follows: <authentication mode="Forms"> <forms name=".TcaNet" protection="All" timeout="40" loginUrl="~/Login.aspx" defaultUrl="~/MyHome.aspx" path="/" slidingExpiration="true" requireSSL="true" /> </authentication> And while I don't believe it applies unless you're in a web garden, I have both of the machine key values set in both web.config files to be the same (removed for convenience): <machineKey validationKey="<MYVALIDATIONKEYHERE>" decryptionKey="<MYDECRYPTIONKEYHERE>" validation="SHA1" /> <machineKey validationKey="<MYVALIDATIONKEYHERE>" decryptionKey="<MYDECRYPTIONKEYHERE>" validation="SHA1"/> Any help with this would be greatly appreciated. This seems to be one of those problems that yields a ton of Google results, none of which seem to fit my situation so far.
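One way to confirm whether the ticket itself is expiring early, rather than the session: decrypt the forms cookie on each request and log its timestamps. A diagnostic sketch only, assuming a handler added to Global.asax.cs (the Trace target is a placeholder for whatever logging the site already uses):

    // using System.Web; using System.Web.Security;
    protected void Application_AuthenticateRequest( object sender, EventArgs e )
    {
        HttpCookie cookie = Request.Cookies[FormsAuthentication.FormsCookieName];
        if ( cookie == null ) return; // anonymous request, nothing to inspect

        try
        {
            FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt( cookie.Value );
            if ( ticket.Expired )
            {
                System.Diagnostics.Trace.WriteLine( string.Format(
                    "Expired ticket for {0}: issued {1}, expires {2}, now {3}",
                    ticket.Name, ticket.IssueDate, ticket.Expiration, DateTime.Now ) );
            }
        }
        catch ( Exception )
        {
            // Decrypt throws when the ticket cannot be validated -- e.g. a
            // machineKey mismatch -- which also bounces users to the login page.
            System.Diagnostics.Trace.WriteLine( "Forms ticket failed to decrypt" );
        }
    }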

    Read the article

  • linux thread synchronization

    - by johnnycrash
    I am new to Linux and Linux threads. I have spent some time googling to try to understand the differences between all the functions available for thread synchronization. I still have some questions. I have found all of these different types of synchronization, each with a number of functions for locking, unlocking, testing the lock, etc. gcc atomic operations futexes mutexes spinlocks seqlocks rculocks conditions semaphores My current (but probably flawed) understanding is this: semaphores are process wide, involve the filesystem (virtually I assume), and are probably the slowest. Futexes might be the base locking mechanism used by mutexes, spinlocks, seqlocks, and rculocks. Futexes might be faster than the locking mechanisms that are based on them. Spinlocks don't block and thus avoid context switches. However, they avoid the context switch at the expense of consuming all the cycles on a CPU until the lock is released (spinning). They should only be used on multiprocessor systems for obvious reasons. Never sleep in a spinlock. The seq lock just tells you, once you have finished your work, whether a writer changed the data the work was based on. You have to go back and repeat the work in this case. Atomic operations are the fastest sync call, and probably are used in all the above locking mechanisms. You do not want to use atomic operations on all the fields in your shared data. You want to use a lock (mutex, futex, spin, seq, rcu) or a single atomic operation on a lock flag when you are accessing multiple data fields. My questions go like this: Am I right so far with my assumptions? Does anyone know the cpu cycle cost of the various options? I am adding parallelism to the app so we can get better wall time response at the expense of running fewer app instances per box. Performance is the utmost consideration. I don't want to consume cpu with context switching, spinning, or lots of extra cpu cycles to read and write shared memory. I am absolutely concerned with the number of cpu cycles consumed. Which (if any) of the locks prevent interruption of a thread by the scheduler or interrupt... or am I just an idiot and all synchronization mechanisms do this? What kinds of interruption are prevented? Can I block all threads, or just threads on the locking thread's CPU? This question stems from my fear of interrupting a thread holding a lock for a very commonly used function. I expect that the scheduler might schedule any number of other workers who will likely run into this function and then block because it was locked. A lot of context switching would be wasted until the thread with the lock gets rescheduled and finishes. I can re-write this function to minimize lock time, but still it is so commonly called I would like to use a lock that prevents interruption... across all processors. I am writing user code... so I get software interrupts, not hardware ones... right? I should stay away from any functions (spin/seq locks) that have the word "irq" in them. Which locks are for writing kernel or driver code and which are meant for user mode? Does anyone think using an atomic operation to have multiple threads move through a linked list is nuts? I am thinking of atomically changing the current item pointer to the next item in the list. If the attempt works, then the thread can safely use the data the current item pointed to before it was moved. Other threads would now be moved along the list. futexes? Any reason to use them instead of mutexes? 
Is there a better way than using a condition to sleep a thread when there is no work? When using gcc atomic ops, specifically test_and_set, can I get a performance increase by doing a non-atomic test first and then using test_and_set to confirm? *I know this will be case specific, so here is the case. There is a large collection of work items, say thousands. Each work item has a flag that is initialized to 0. When a thread has exclusive access to the work item, the flag will be one. There will be lots of worker threads. Any time a thread is looking for work, it can non-atomically test for 1. If it reads a 1, we know for certain that the work is unavailable. If it reads a zero, it needs to perform the atomic test_and_set to confirm. So if the atomic test_and_set is 500 cpu cycles because it is disabling pipelining, causes cpus to communicate and L2 caches to flush/fill... and a simple test is 1 cycle... then as long as I had a better ratio of 500 to 1 when it came to stumbling upon already completed work items... this would be a win.* I hope to use mutexes or spinlocks to sparingly protect sections of code that I want only one thread on the SYSTEM (not just the CPU) to access at a time. I hope to sparingly use gcc atomic ops to select work and minimize use of mutexes and spinlocks. For instance: a flag in a work item can be checked to see if a thread has worked it (0=no, 1=yes or in progress). A simple test_and_set tells the thread if it has work or needs to move on. I hope to use conditions to wake up threads when there is work. Thanks!
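For what it's worth, the cheap-test-then-test_and_set pattern described in the starred paragraph is easy to sketch. This is a rough C# analogue only (the question is about gcc builtins, but Interlocked.CompareExchange plays the same role here; the WorkItem type and field are made up for illustration):

    // Claim flag per work item: 0 = unclaimed, 1 = claimed / in progress.
    class WorkItem { public int Claimed; }

    static bool TryClaim( WorkItem item )
    {
        // Cheap non-atomic pre-check: if the item already looks claimed,
        // skip the expensive atomic operation entirely.
        if ( item.Claimed == 1 ) return false;

        // Atomic test-and-set: write 1 only if the flag is still 0, and
        // report whether this thread is the one that made the transition.
        return System.Threading.Interlocked.CompareExchange( ref item.Claimed, 1, 0 ) == 0;
    }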

    Read the article

  • Getting a NullPointerException at seemingly random intervals, not sure why

    - by Miles
    I'm running an example from a Kinect library for Processing (http://www.shiffman.net/2010/11/14/kinect-and-processing/) and sometimes get a NullPointerException pointing to this line: int rawDepth = depth[offset]; The depth array is created in this line: int[] depth = kinect.getRawDepth(); I'm not exactly sure what a NullPointerException is, and much googling hasn't really helped. It seems odd to me that the code compiles 70% of the time and returns the error unpredictably. Could the hardware itself be affecting it? Here's the whole example if it helps: // Daniel Shiffman // Kinect Point Cloud example // http://www.shiffman.net // https://github.com/shiffman/libfreenect/tree/master/wrappers/java/processing import org.openkinect.*; import org.openkinect.processing.*; // Kinect Library object Kinect kinect; float a = 0; // Size of kinect image int w = 640; int h = 480; // We'll use a lookup table so that we don't have to repeat the math over and over float[] depthLookUp = new float[2048]; void setup() { size(800,600,P3D); kinect = new Kinect(this); kinect.start(); kinect.enableDepth(true); // We don't need the grayscale image in this example // so this makes it more efficient kinect.processDepthImage(false); // Lookup table for all possible depth values (0 - 2047) for (int i = 0; i < depthLookUp.length; i++) { depthLookUp[i] = rawDepthToMeters(i); } } void draw() { background(0); fill(255); textMode(SCREEN); text("Kinect FR: " + (int)kinect.getDepthFPS() + "\nProcessing FR: " + (int)frameRate,10,16); // Get the raw depth as array of integers int[] depth = kinect.getRawDepth(); // We're just going to calculate and draw every 4th pixel (equivalent of 160x120) int skip = 4; // Translate and rotate translate(width/2,height/2,-50); rotateY(a); for(int x=0; x<w; x+=skip) { for(int y=0; y<h; y+=skip) { int offset = x+y*w; // Convert kinect data to world xyz coordinate int rawDepth = depth[offset]; PVector v = depthToWorld(x,y,rawDepth); stroke(255); pushMatrix(); // Scale up by 200 float factor = 200; translate(v.x*factor,v.y*factor,factor-v.z*factor); // Draw a point point(0,0); popMatrix(); } } // Rotate a += 0.015f; } // These functions come from: http://graphics.stanford.edu/~mdfisher/Kinect.html float rawDepthToMeters(int depthValue) { if (depthValue < 2047) { return (float)(1.0 / ((double)(depthValue) * -0.0030711016 + 3.3309495161)); } return 0.0f; } PVector depthToWorld(int x, int y, int depthValue) { final double fx_d = 1.0 / 5.9421434211923247e+02; final double fy_d = 1.0 / 5.9104053696870778e+02; final double cx_d = 3.3930780975300314e+02; final double cy_d = 2.4273913761751615e+02; PVector result = new PVector(); double depth = depthLookUp[depthValue];//rawDepthToMeters(depthValue); result.x = (float)((x - cx_d) * depth * fx_d); result.y = (float)((y - cy_d) * depth * fy_d); result.z = (float)(depth); return result; } void stop() { kinect.quit(); super.stop(); } And here are the errors: processing.app.debug.RunnerException: NullPointerException at processing.app.Sketch.placeException(Sketch.java:1543) at processing.app.debug.Runner.findException(Runner.java:583) at processing.app.debug.Runner.reportException(Runner.java:558) at processing.app.debug.Runner.exception(Runner.java:498) at processing.app.debug.EventThread.exceptionEvent(EventThread.java:367) at processing.app.debug.EventThread.handleEvent(EventThread.java:255) at processing.app.debug.EventThread.run(EventThread.java:89) Exception in thread "Animation Thread" java.lang.NullPointerException at 
org.openkinect.processing.Kinect.enableDepth(Kinect.java:70) at PointCloud.setup(PointCloud.java:48) at processing.core.PApplet.handleDraw(PApplet.java:1583) at processing.core.PApplet.run(PApplet.java:1503) at java.lang.Thread.run(Thread.java:637)

    Read the article

  • Database file is inexplicably locked during SQLite commit

    - by sweeney
    Hello, I'm performing a large number of INSERTS to a SQLite database. I'm using just one thread. I batch the writes to improve performance and have a bit of security in case of a crash. Basically I cache up a bunch of data in memory and then, when I deem appropriate, I loop over all of that data and perform the INSERTS. The code for this is shown below: public void Commit() { using (SQLiteConnection conn = new SQLiteConnection(this.connString)) { conn.Open(); using (SQLiteTransaction trans = conn.BeginTransaction()) { using (SQLiteCommand command = conn.CreateCommand()) { command.CommandText = "INSERT OR IGNORE INTO [MY_TABLE] (col1, col2) VALUES (?,?)"; command.Parameters.Add(this.col1Param); command.Parameters.Add(this.col2Param); foreach (Data o in this.dataTemp) { this.col1Param.Value = o.Col1Prop; this.col2Param.Value = o.Col2Prop; command.ExecuteNonQuery(); } } this.TryHandleCommit(trans); } conn.Close(); } } I now employ the following gimmick to get the thing to eventually work: private void TryHandleCommit(SQLiteTransaction trans) { try { trans.Commit(); } catch (Exception e) { Console.WriteLine("Trying again..."); this.TryHandleCommit(trans); } } I create my DB like so: public DataBase(String path) { //build connection string SQLiteConnectionStringBuilder connString = new SQLiteConnectionStringBuilder(); connString.DataSource = path; connString.Version = 3; connString.DefaultTimeout = 5; connString.JournalMode = SQLiteJournalModeEnum.Persist; connString.UseUTF16Encoding = true; using (connection = new SQLiteConnection(connString.ToString())) { //check for existence of db FileInfo f = new FileInfo(path); if (!f.Exists) //build new blank db { SQLiteConnection.CreateFile(path); connection.Open(); using (SQLiteTransaction trans = connection.BeginTransaction()) { using (SQLiteCommand command = connection.CreateCommand()) { command.CommandText = DataBase.CREATE_MATCHES; command.ExecuteNonQuery(); command.CommandText = DataBase.CREATE_STRING_DATA; command.ExecuteNonQuery(); //TODO add logging } trans.Commit(); } connection.Close(); } } } I then export the connection string and use it to obtain new connections in different parts of the program. At seemingly random intervals, though at far too great a rate to ignore or otherwise work around this problem, I get unhandled SQLiteException: Database file is locked. This occurs when I attempt to commit the transaction. No errors seem to occur prior to then. This does not always happen. Sometimes the whole thing runs without a hitch. No reads are being performed on these files before the commits finish. I have the very latest SQLite binary. I'm compiling for .NET 2.0. I'm using VS 2008. The db is a local file. All of this activity is encapsulated within one thread / process. Virus protection is off (though I think that was only relevant if you were connecting over a network?). As per Scotsman's post I have implemented the following changes: Journal Mode set to Persist DB files stored in C:\Docs + Settings\ApplicationData via System.Windows.Forms.Application.AppData windows call No inner exception Witnessed on two distinct machines (albeit very similar hardware and software) Have been running Process Monitor - no extraneous processes are attaching themselves to the DB files - the problem is definitely in my code... Does anyone have any idea what's going on here? I know I just dropped a whole mess of code, but I've been trying to figure this out for way too long. My thanks to anyone who makes it to the end of this question! 
brian UPDATES: Thanks for the suggestions so far! I've implemented many of the suggested changes. I feel that we are getting closer to the answer... however... The code above technically works, but it is non-deterministic! It is not guaranteed to do anything aside from spin in neutral forever. In practice it seems to work somewhere between the 1st and 10th iteration. If I batch my commits at a reasonable interval the damage will be mitigated, but I really do not want to leave things in this state... More suggestions welcome!
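One hardening worth making regardless of the root cause (a sketch, not a diagnosis): the recursive TryHandleCommit above retries forever with no delay, so a lock that never clears becomes an infinite spin and, eventually, a stack overflow. A bounded loop with a short backoff surfaces the real failure instead:

    // Bounded retry with linear backoff: either the commit succeeds within
    // maxRetries attempts or the original exception reaches the caller.
    private void TryHandleCommit( SQLiteTransaction trans )
    {
        const int maxRetries = 10;
        for ( int attempt = 1; ; attempt++ )
        {
            try
            {
                trans.Commit();
                return;
            }
            catch ( SQLiteException )
            {
                if ( attempt == maxRetries )
                    throw; // give up and let the caller see the failure
                Console.WriteLine( "Commit failed, retrying ({0}/{1})...", attempt, maxRetries );
                System.Threading.Thread.Sleep( 100 * attempt );
            }
        }
    }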

    Read the article

  • Cocos2d-iPhone: CCSprite positions differ between Retina & non-Retina Screens

    - by bobwaycott
    I have a fairly simple app built using cocos2d-iphone, but a strange positioning problem that I've been unable to resolve. The app uses sprite sheets, and there is a Retina and non-Retina sprite sheet within the app that use the exact same artwork (except for resolution, of course). There are other artwork within the app used for CCSprites that are both standard and -hd suffixed. Within the app, a group of sprites are created when the app starts. These initially created CCSprites always position identically (and correctly) on Retina & non-Retina screens. // In method called to setup sprites when app launches // Cache & setup app sprites [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile: @"sprites.plist"]; sprites = [CCSpriteBatchNode batchNodeWithFile: @"sprites.png"]; hill = [CCSprite spriteWithSpriteFrameName: @"hill.png"]; hill.position = ccp( 160, 75 ); [sprites addChild: hill z: 1]; // ... [create more sprites in same fashion] // NOTE: All sprites created here have correct positioning on Retina & non-Retina screens When a user taps the screen a certain way, a method is called that creates another group of CCSprites (on- and off-screen), animating them all in. One of these sprites, hand, is always positioned identically (and correctly) on Retina & non-Retina screens. The others (a group of clouds) successfully create & animate, but their positions are correct only on Retina displays. On a non-Retina display, each cloud has incorrect starting positions (incorrect for x, y, or sometimes both), and their ending positions after animation are also wrong. I've included the responsible code below from the on-touch method that creates the new sprites and animates them in. Again, it works as expected on a Retina display, but incorrectly on non-Retina screens. The CCSprites used are created in the same way at app-start to setup all the initial sprites in the app, which always have correct positions. // Elsewhere, in a method called on touch // Create clouds cloud1 = [CCSprite spriteWithSpriteFrameName: @"cloud_1.png"]; cloud1.position = ccp(-150, 320); cloud1.scale = 1.2f; cloud2 = [CCSprite spriteWithSpriteFrameName: @"cloud_2.png"]; cloud2.position = ccp(-150, 335); cloud2.scale = 1.3f; cloud3 = [CCSprite spriteWithSpriteFrameName: @"cloud_4.png"]; cloud3.position = ccp(-150, 400); cloud4 = [CCSprite spriteWithSpriteFrameName: @"cloud_5.png"]; cloud4.position = ccp(-150, 420); cloud5 = [CCSprite spriteWithSpriteFrameName: @"cloud_3.png"]; cloud5.position = ccp(400, 350); cloud6 = [CCSprite spriteWithSpriteFrameName: @"cloud_1.png"]; cloud6.position = ccp(400, 335); cloud6.scale = 1.1f; cloud7 = [CCSprite spriteWithSpriteFrameName: @"cloud_2.png"]; cloud7.flipY = YES; cloud7.flipX = YES; cloud7.position = ccp(400, 380); // Create hand hand = [CCSprite spriteWithSpriteFrameName:@"hand.png"]; hand.position = ccp(160, 650); [sprites addChild: cloud1 z: 10]; [sprites addChild: cloud2 z: 9]; [sprites addChild: cloud3 z: 8]; [sprites addChild: cloud4 z: 7]; [sprites addChild: cloud5 z: 6]; [sprites addChild: cloud6 z: 10]; [sprites addChild: cloud7 z: 8]; [sprites addChild: hand z: 10]; // ACTION!! 
[cloud1 runAction:[CCMoveTo actionWithDuration: 1.0f position: ccp(70, 320)]]; [cloud2 runAction:[CCMoveTo actionWithDuration: 1.0f position: ccp(60, 335)]]; [cloud3 runAction:[CCMoveTo actionWithDuration: 1.0f position: ccp(100, 400)]]; [cloud4 runAction:[CCMoveTo actionWithDuration: 1.0f position: ccp(80, 420)]]; [cloud5 runAction:[CCMoveTo actionWithDuration: 1.0f position: ccp(250, 350)]]; [cloud6 runAction:[CCMoveTo actionWithDuration: 1.0f position: ccp(250, 335)]]; [cloud7 runAction:[CCMoveTo actionWithDuration: 1.0f position: ccp(270, 380)]]; [hand runAction: handIn]; It may be worth mentioning that I see this incorrect positioning behavior in the iOS Simulator when running the app and switching between the standard iPhone and iPhone (Retina) hardware options. I have not been able to verify this occurs or does not occur on an actual non-Retina iPhone because I do not have one. However, this is the only time I see this odd positioning behavior occur (the incorrect results obtained after user touch), and since I'm creating all sprites in exactly the same way (i.e., [CCSprite spriteWithSpriteFrameName:] and then setting position with cpp()), I would be especially grateful for any help in tracking down why this single group of sprites are always incorrect on non-Retina screens. Thank you.

    Read the article

  • Symfony2 same form, different entities NOT related

    - by user1381537
    I'm trying to write one form that submits against a MySQL DB, but I can't get it working. I've tried a lot of things (separate forms, adding a field with ->add('foo', new foo())), and so far submitting plain SQL from a normal HTML form is my only working solution, which is obviously not the best. This is my DB structure: As you can see, I need to insert the comments textarea into ticketcomments along with the user who wrote it, etc.; on crmentity, the description field. Then on ticketcf, these are the fields I need to submit from the form (the field names alone won't tell you, so here they are): tcf.cf594 AS Type, tcf.cf675 AS Suscription, tcf.cf770 AS ID_PRODUCT, tcf.cf746 AS NotificationDate, tcf.cf747 AS ResponseDate, tcf.cf748 AS ResolutionDate, And, of course, every table needs to get the same ticketid for the submitted form, so we can retrieve it all with one simple query. It would be easy to do with plain SQL instead of DQL and Symfony2 forms, but that is not a good way to do it. Also, here's my "Ticket list" query, in case it makes things clearer... SELECT t.ticketNo AS Ticket, t.title AS Asunto, t.status AS Estado, t.updateLog AS LOG, t.hours AS Horas, t.solution AS Solucion, t.priority AS Prioridad, tcf.cf594 AS Tipo, tcf.cf675 AS Suscripcion, tcf.cf770 AS IDPROD, tcf.cf746 AS F_Noti, tcf.cf747 AS F_Resp, tcf.cf748 AS F_Reso, CONCAT (cd.firstname, cd.lastname) AS Contacto, crm.description AS Descripcion, crm.crmid AS id FROM WbsGoclientsBundle:VtigerTroubletickets t INNER JOIN WbsGoclientsBundle:VtigerTicketcf tcf WITH t.ticketid = tcf.ticketid INNER JOIN WbsGoclientsBundle:VtigerContactdetails cd WITH t.parentId = cd.contactid INNER JOIN WbsGoclientsBundle:VtigerCrmentity crm WITH t.ticketid = crm.crmid WHERE t.parentId IN ( SELECT cd1.contactid FROM WbsGoclientsBundle:VtigerContactdetails cd1 WHERE cd1.accountid = ( SELECT cd2.accountid FROM WbsGoclientsBundle:VtigerContactdetails cd2 WHERE cd2.contactid = :contactid)) AND t.status <> \'Closed\' And the "Ticket details" query (which is not in DQL format yet, only SQL) is very simple; it only retrieves the comments field and createdtime from ticketcomments, appended to this query so we have all the fields... Thank you. This is a test form, using troubletickets and ticketcomments. It's returning errors because I can't set a comments field, since troubletickets doesn't have it, but I need that field to be submitted to ticketcomments ... 
VtigerTicketcommentsType <?php namespace WbsGo\clientsBundle\Form\Type; use Symfony\Component\Form\AbstractType, Symfony\Component\Form\FormBuilderInterface; use Symfony\Component\OptionsResolver\OptionsResolverInterface; class VtigerTicketcommentsType extends AbstractType { public function buildForm(FormBuilderInterface $builder, array $options) { $builder ->add('ticketid') ->add('comments') ->add('ownerid') ->add('ownertype') ->add('createdtype') ; } public function setDefaultOptions(OptionsResolverInterface $resolver) { $resolver->setDefaults(array( 'data_class' => 'WbsGo\clientsBundle\Entity\VtigerTicketcomments' )); } public function getName() { return 'comments'; } } OpenTicketType.php <?php namespace WbsGo\clientsBundle\Form; use Symfony\Component\Form\AbstractType, Symfony\Component\Form\FormBuilderInterface ; use WbsGo\clientsBundle\Form\Type\VtigerTicketcommentsType; use Symfony\Component\OptionsResolver\OptionsResolverInterface; class OpenTicketType extends AbstractType { public function buildForm(FormBuilderInterface $builder, array $options) { $builder ->add('title') ->add('priority') ->add('solution') ->add('comments', 'collection', array( 'type' => new VtigerTicketcommentsType() )) ; } public function setDefaultOptions(OptionsResolverInterface $resolver) { $resolver->setDefaults(array( 'data_class' => 'WbsGo\clientsBundle\Entity\VtigerTroubletickets' )); } public function getName() { return 'ticket'; } } TicketController.php <?php namespace WbsGo\clientsBundle\Controller; use Symfony\Bundle\FrameworkBundle\Controller\Controller; use WbsGo\clientsBundle\Entity\VtigerTroubletickets; use WbsGo\clientsBundle\Entity\VtigerTicketcomments; use WbsGo\clientsBundle\Form\OpenTicketType; use Symfony\Component\HttpFoundation\Request; class TicketController extends Controller { public function indexAction() { $em = $this->getDoctrine()->getManager(); $tickets = $em ->getRepository('WbsGoclientsBundle:VtigerTroubletickets') ->findAllOpenByCustomerId($this->getUser()->getId()); $userdata = $this->getDoctrine()->getManager() ->getRepository('WbsGoclientsBundle:VtigerContactdetails') ->findContact($this->getUser()->getId()); return $this ->render('WbsGoclientsBundle:Ticket:index.html.twig', array('tickets' => $tickets, 'userdata' => $userdata)); } public function addAction() { $assets = $this->getDoctrine()->getManager() ->getRepository('WbsGoclientsBundle:VtigerAssets') ->findAssetByAccountId($this->getUser()->getId()); $assetlist = array(); foreach ($assets as $key => $v) { $assetlist[$key] = $key; } $form = $this->createForm(new OpenTicketType(), new VtigerTroubletickets()); return $this ->render('WbsGoclientsBundle:Ticket:add.html.twig', array('form' => $form->createView(), 'assets' => $assets,)); } } This is the error Symfony2 is returning Neither the property "comments" nor one of the methods "getComments()", "isComments()", "hasComments()", "_get()" or "_call()" exist and have public access in class "WbsGo\clientsBundle\Entity\VtigerTroubletickets". EDIT 2 This code is actually rendering my forms, but I need help in order to submit each XXXType form to its corresponding table. 
public function buildForm(FormBuilderInterface $builder, array $options) { $builder ->add('descripcion') ->add('prioridad') ->add('solucion') ->add('comment', new VtigerTicketcommentsType() ) ->add('contacto') ->add('suscripcion') ->add('producto', 'entity', array( 'class' => 'WbsGo\clientsBundle\Entity\VtigerAssets', 'property' => 'assetname', 'empty_value' => '--SELECT--', 'query_builder' => function(\WbsGo\clientsBundle\Entity\VtigerAssetsRepository $repository) { //return $repository->findAssetByAccountId($this->customerId); return $repository->createQueryBuilder('a') ->select('a') ->where('a.account = (SELECT cd.accountid FROM WbsGoclientsBundle:VtigerContactdetails cd WHERE cd.contactid = ?1)') ->setParameter(1, $this->customerId); } ) ) ->add('hardware') ->add('backup') ->add('web') ->add('restore') ->add('customerId') ; } I also removed ->add('ticketid') from VtigerTicketcommentsType.php because it is a relation and is not needed; it's auto-increment and will be generated once everything is submitted.

    Read the article

  • error about ACPI _OSC request failed (AE_NOT_FOUND)

    - by Yavuz Maslak
    I have Ubuntu Server 11.10 64-bit and I see an error in kernel.log. This error appears when the server reboots. Some part of the grep for ACPI in kernel.log: Dec 5 09:08:51 www kernel: [ 0.588605] pci0000:00: Requesting ACPI _OSC control (0x1d) Dec 5 09:08:51 www kernel: [ 0.588667] pci0000:00: ACPI _OSC request failed (AE_NOT_FOUND), returned control mask: 0x1d Dec 5 09:08:51 www kernel: [ 0.588746] ACPI _OSC control for PCIe not granted, disabling ASPM Which hardware may be causing this error? root@www:# grep -r ACPI /var/log/kern.log Dec 5 09:08:51 www kernel: [ 0.000000] BIOS-e820: 00000000bf780000 - 00000000bf798000 (ACPI data) Dec 5 09:08:51 www kernel: [ 0.000000] BIOS-e820: 00000000bf798000 - 00000000bf7dc000 (ACPI NVS) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: RSDP 00000000000fb1a0 00014 (v00 ACPIAM) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: RSDT 00000000bf780000 00040 (v01 022410 RSDT1405 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: FACP 00000000bf780200 00084 (v01 022410 FACP1405 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: DSDT 00000000bf7804b0 0C359 (v01 A1279 A1279001 00000001 INTL 20060113) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: FACS 00000000bf798000 00040 Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: APIC 00000000bf780390 000D8 (v01 022410 APIC1405 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: MCFG 00000000bf780470 0003C (v01 022410 OEMMCFG 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: OEMB 00000000bf798040 00072 (v01 022410 OEMB1405 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: HPET 00000000bf78f4b0 00038 (v01 022410 OEMHPET 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: OSFR 00000000bf78f4f0 000B0 (v01 022410 OEMOSFR 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: SSDT 00000000bf798fe0 00363 (v01 DpgPmm CpuPm 00000012 INTL 20060113) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: Local APIC address 0xfee00000 Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: PM-Timer IO Port: 0x808 Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: Local APIC address 0xfee00000 Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x84] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x85] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x86] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x87] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x09] lapic_id[0x88] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0a] lapic_id[0x89] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0b] lapic_id[0x8a] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0c] lapic_id[0x8b] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0d] lapic_id[0x8c] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0e] lapic_id[0x8d] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0f] lapic_id[0x8e] disabled) Dec 5 09:08:51 www kernel: [ 
0.000000] ACPI: LAPIC (acpi_id[0x10] lapic_id[0x8f] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: IOAPIC (id[0x01] address[0xfec00000] gsi_base[0]) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: IOAPIC (id[0x03] address[0xfec8a000] gsi_base[24]) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: IRQ0 used by override. Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: IRQ2 used by override. Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: IRQ9 used by override. Dec 5 09:08:51 www kernel: [ 0.000000] Using ACPI (MADT) for SMP configuration information Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: HPET id: 0x8086a301 base: 0xfed00000 Dec 5 09:08:51 www kernel: [ 0.009507] ACPI: Core revision 20110413 Dec 5 09:08:51 www kernel: [ 0.499129] PM: Registering ACPI NVS region at bf798000 (278528 bytes) Dec 5 09:08:51 www kernel: [ 0.500749] ACPI: bus type pci registered Dec 5 09:08:51 www kernel: [ 0.502747] ACPI: EC: Look up EC in DSDT Dec 5 09:08:51 www kernel: [ 0.503788] ACPI: Executed 1 blocks of module-level executable AML code Dec 5 09:08:51 www kernel: [ 0.520435] ACPI: SSDT 00000000bf7980c0 00F20 (v01 DpgPmm P001Ist 00000011 INTL 20060113) Dec 5 09:08:51 www kernel: [ 0.520863] ACPI: Dynamic OEM Table Load: Dec 5 09:08:51 www kernel: [ 0.520990] ACPI: SSDT (null) 00F20 (v01 DpgPmm P001Ist 00000011 INTL 20060113) Dec 5 09:08:51 www kernel: [ 0.521308] ACPI: Interpreter enabled Dec 5 09:08:51 www kernel: [ 0.521366] ACPI: (supports S0 S1 S3 S4 S5) Dec 5 09:08:51 www kernel: [ 0.521611] ACPI: Using IOAPIC for interrupt routing Dec 5 09:08:51 www kernel: [ 0.522622] PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in ACPI motherboard resources Dec 5 09:08:51 www kernel: [ 0.554150] ACPI: No dock devices found. 
Dec 5 09:08:51 www kernel: [ 0.554267] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 5 09:08:51 www kernel: [ 0.555231] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 5 09:08:51 www kernel: [ 0.588224] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT] Dec 5 09:08:51 www kernel: [ 0.588398] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P1._PRT] Dec 5 09:08:51 www kernel: [ 0.588451] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P4._PRT] Dec 5 09:08:51 www kernel: [ 0.588473] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P6._PRT] Dec 5 09:08:51 www kernel: [ 0.588492] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P7._PRT] Dec 5 09:08:51 www kernel: [ 0.588512] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P8._PRT] Dec 5 09:08:51 www kernel: [ 0.588540] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.NPE1._PRT] Dec 5 09:08:51 www kernel: [ 0.588559] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.NPE3._PRT] Dec 5 09:08:51 www kernel: [ 0.588579] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.NPE7._PRT] Dec 5 09:08:51 www kernel: [ 0.588605] pci0000:00: Requesting ACPI _OSC control (0x1d) Dec 5 09:08:51 www kernel: [ 0.588667] pci0000:00: ACPI _OSC request failed (AE_NOT_FOUND), returned control mask: 0x1d Dec 5 09:08:51 www kernel: [ 0.588746] ACPI _OSC control for PCIe not granted, disabling ASPM Dec 5 09:08:51 www kernel: [ 0.597666] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 6 7 10 11 12 14 *15) Dec 5 09:08:51 www kernel: [ 0.598142] ACPI: PCI Interrupt Link [LNKB] (IRQs *5) Dec 5 09:08:51 www kernel: [ 0.598336] ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 6 7 10 *11 12 14 15) Dec 5 09:08:51 www kernel: [ 0.598810] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 6 7 *10 11 12 14 15) Dec 5 09:08:51 www kernel: [ 0.599284] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 6 7 10 11 12 *14 15) Dec 5 09:08:51 www kernel: [ 0.599762] ACPI: PCI Interrupt Link [LNKF] (IRQs *3 4 6 7 10 11 12 14 15) Dec 5 09:08:51 www kernel: [ 0.600236] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 6 *7 10 11 12 14 15) Dec 5 09:08:51 www kernel: [ 0.600709] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 *4 6 7 10 11 12 14 15) Dec 5 09:08:51 www kernel: [ 0.601931] PCI: Using ACPI for IRQ routing Dec 5 09:08:51 www kernel: [ 0.628146] pnp: PnP ACPI init Dec 5 09:08:51 www kernel: [ 0.628211] ACPI: bus type pnp registered Dec 5 09:08:51 www kernel: [ 0.628417] pnp 00:00: Plug and Play ACPI device, IDs PNP0a08 PNP0a03 (active) Dec 5 09:08:51 www kernel: [ 0.628859] system 00:01: Plug and Play ACPI device, IDs PNP0c01 (active) Dec 5 09:08:51 www kernel: [ 0.628915] pnp 00:02: Plug and Play ACPI device, IDs PNP0200 (active) Dec 5 09:08:51 www kernel: [ 0.628951] pnp 00:03: Plug and Play ACPI device, IDs PNP0b00 (active) Dec 5 09:08:51 www kernel: [ 0.628975] pnp 00:04: Plug and Play ACPI device, IDs PNP0800 (active) Dec 5 09:08:51 www kernel: [ 0.629004] pnp 00:05: Plug and Play ACPI device, IDs PNP0c04 (active) Dec 5 09:08:51 www kernel: [ 0.629229] system 00:06: Plug and Play ACPI device, IDs PNP0c02 (active) Dec 5 09:08:51 www kernel: [ 0.629779] system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active) Dec 5 09:08:51 www kernel: [ 0.629849] pnp 00:08: Plug and Play ACPI device, IDs PNP0103 (active) Dec 5 09:08:51 www kernel: [ 0.629901] pnp 00:09: Plug and Play ACPI device, IDs INT0800 (active) Dec 5 09:08:51 www kernel: [ 0.630030] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active) Dec 5 09:08:51 www kernel: [ 0.630254] system 00:0b: Plug and Play ACPI device, 
IDs PNP0c02 (active) Dec 5 09:08:51 www kernel: [ 0.630304] pnp 00:0c: Plug and Play ACPI device, IDs PNP0303 PNP030b (active) Dec 5 09:08:51 www kernel: [ 0.630359] pnp 00:0d: Plug and Play ACPI device, IDs PNP0f03 PNP0f13 (active) Dec 5 09:08:51 www kernel: [ 0.630492] system 00:0e: Plug and Play ACPI device, IDs PNP0c02 (active) Dec 5 09:08:51 www kernel: [ 0.630986] system 00:0f: Plug and Play ACPI device, IDs PNP0c01 (active) Dec 5 09:08:51 www kernel: [ 0.631078] pnp: PnP ACPI: found 16 devices Dec 5 09:08:51 www kernel: [ 0.631135] ACPI: ACPI bus type pnp unregistered Dec 5 09:08:51 www kernel: [ 0.726291] ACPI: Power Button [PWRB] Dec 5 09:08:51 www kernel: [ 0.726452] ACPI: Power Button [PWRF] Dec 5 09:08:51 www kernel: [ 0.726527] ACPI: acpi_idle yielding to intel_idle Dec 7 21:45:22 www kernel: [ 0.000000] BIOS-e820: 00000000bf780000 - 00000000bf798000 (ACPI data) Dec 7 21:45:22 www kernel: [ 0.000000] BIOS-e820: 00000000bf798000 - 00000000bf7dc000 (ACPI NVS) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: RSDP 00000000000fb1a0 00014 (v00 ACPIAM) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: RSDT 00000000bf780000 00040 (v01 022410 RSDT1405 20100224 MSFT 00000097) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: FACP 00000000bf780200 00084 (v01 022410 FACP1405 20100224 MSFT 00000097) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: DSDT 00000000bf7804b0 0C359 (v01 A1279 A1279001 00000001 INTL 20060113) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: FACS 00000000bf798000 00040 Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: APIC 00000000bf780390 000D8 (v01 022410 APIC1405 20100224 MSFT 00000097) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: MCFG 00000000bf780470 0003C (v01 022410 OEMMCFG 20100224 MSFT 00000097) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: OEMB 00000000bf798040 00072 (v01 022410 OEMB1405 20100224 MSFT 00000097) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: HPET 00000000bf78f4b0 00038 (v01 022410 OEMHPET 20100224 MSFT 00000097) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: OSFR 00000000bf78f4f0 000B0 (v01 022410 OEMOSFR 20100224 MSFT 00000097) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: SSDT 00000000bf798fe0 00363 (v01 DpgPmm CpuPm 00000012 INTL 20060113) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: Local APIC address 0xfee00000 Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: PM-Timer IO Port: 0x808 Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: Local APIC address 0xfee00000 Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x84] disabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x85] disabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x86] disabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x87] disabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x09] lapic_id[0x88] disabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0a] lapic_id[0x89] disabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0b] lapic_id[0x8a] disabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0c] lapic_id[0x8b] disabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: 
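    The failing _OSC call above is a handshake between the kernel and the motherboard firmware's ACPI tables, so the "hardware" behind the message is the mainboard BIOS rather than a failing device; the only consequence is that the kernel disables PCIe power management (ASPM). Below is a minimal sketch of how one might inspect and, at your own risk, override that decision (the GRUB file location assumes a stock Ubuntu install):
    # See what the kernel decided about ASPM after the failed _OSC handshake
    dmesg | grep -i aspm
    cat /sys/module/pcie_aspm/parameters/policy   # present when ASPM support is built in
    # Optional workaround: force ASPM on despite the firmware (test stability afterwards)
    sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&pcie_aspm=force /' /etc/default/grub
    sudo update-grub
    sudo reboot
    A BIOS update from the board vendor is the cleaner fix, if one is available.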

    Read the article

  • Unexpected start of already-running processes on the primary server when heartbeat on the secondary is stopped.

    - by vorik
    Hi, I've got an active-passive Heartbeat cluster with Apache, MySQL, ActiveMQ and DRBD. Today, I wanted to perform hardware maintenance on the secondary node (node04), so I stopped the heartbeat service before shutting it down. Then, the primary node (node03) received a shutdown notice from the secondary node (node04). This logging comes from the primary node:
    node03 heartbeat[4458]: 2010/03/08_08:52:56 info: Received shutdown notice from 'node04.companydomain.nl'.
    heartbeat[4458]: 2010/03/08_08:52:56 info: Resources being acquired from node04.companydomain.nl.
    harc[27522]: 2010/03/08_08:52:56 info: Running /etc/ha.d/rc.d/status status
    heartbeat[27523]: 2010/03/08_08:52:56 info: Local Resource acquisition completed.
    mach_down[27567]: 2010/03/08_08:52:56 info: /usr/share/heartbeat/mach_down: nice_failback: foreign resources acquired
    mach_down[27567]: 2010/03/08_08:52:56 info: mach_down takeover complete for node node04.companydomain.nl.
    heartbeat[4458]: 2010/03/08_08:52:56 info: mach_down takeover complete.
    harc[27620]: 2010/03/08_08:52:56 info: Running /etc/ha.d/rc.d/ip-request-resp ip-request-resp
    ip-request-resp[27620]: 2010/03/08_08:52:56 received ip-request-resp drbddisk OK yes
    ResourceManager[27645]: 2010/03/08_08:52:56 info: Acquiring resource group: node03.companydomain.nl drbddisk Filesystem::/dev/drbd0::/data::ext3 mysql apache::/etc/httpd/conf/httpd.conf LVSSyncDaemonSwap::master monitor activemq tivoli-cluster MailTo::[email protected]::DRBDFailureDrisAcc MailTo::[email protected]::DRBDFailureDrisAcc 1.2.3.212
    ResourceManager[27645]: 2010/03/08_08:52:56 info: Running /etc/ha.d/resource.d/drbddisk start
    Filesystem[27700]: 2010/03/08_08:52:57 INFO: Running OK
    ResourceManager[27645]: 2010/03/08_08:52:57 info: Running /etc/ha.d/resource.d/mysql start
    mysql[27783]: 2010/03/08_08:52:57 Starting MySQL[ OK ]
    apache[27853]: 2010/03/08_08:52:57 INFO: Running OK
    ResourceManager[27645]: 2010/03/08_08:52:57 info: Running /etc/ha.d/resource.d/monitor start
    monitor[28160]: 2010/03/08_08:52:58
    ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/activemq start
    activemq[28210]: 2010/03/08_08:52:58 Starting ActiveMQ Broker... ActiveMQ Broker is already running.
    ResourceManager[27645]: 2010/03/08_08:52:58 ERROR: Return code 1 from /etc/ha.d/resource.d/activemq
    ResourceManager[27645]: 2010/03/08_08:52:58 CRIT: Giving up resources due to failure of activemq
    ResourceManager[27645]: 2010/03/08_08:52:58 info: Releasing resource group: node03.companydomain.nl drbddisk Filesystem::/dev/drbd0::/data::ext3 mysql apache::/etc/httpd/conf/httpd.conf LVSSyncDaemonSwap::master monitor activemq tivoli-cluster MailTo::[email protected]::DRBDFailureDrisAcc MailTo::[email protected]::DRBDFailureDrisAcc 1.2.3.212
    ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/IPaddr 1.2.3.212 stop
    IPaddr[28329]: 2010/03/08_08:52:58 INFO: ifconfig eth0:0 down
    IPaddr[28312]: 2010/03/08_08:52:58 INFO: Success
    ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/MailTo [email protected] DRBDFailureDrisAcc stop
    MailTo[28378]: 2010/03/08_08:52:58 INFO: Success
    ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/MailTo [email protected] DRBDFailureDrisAcc stop
    MailTo[28433]: 2010/03/08_08:52:58 INFO: Success
    ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/tivoli-cluster stop
    ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/activemq stop
    activemq[28503]: 2010/03/08_08:53:01 Stopping ActiveMQ Broker... Stopped ActiveMQ Broker.
    ResourceManager[27645]: 2010/03/08_08:53:01 info: Running /etc/ha.d/resource.d/monitor stop
    monitor[28681]: 2010/03/08_08:53:01
    ResourceManager[27645]: 2010/03/08_08:53:01 info: Running /etc/ha.d/resource.d/LVSSyncDaemonSwap master stop
    LVSSyncDaemonSwap[28714]: 2010/03/08_08:53:02 info: ipvs_syncmaster down
    LVSSyncDaemonSwap[28714]: 2010/03/08_08:53:02 info: ipvs_syncbackup up
    LVSSyncDaemonSwap[28714]: 2010/03/08_08:53:02 info: ipvs_syncmaster released
    ResourceManager[27645]: 2010/03/08_08:53:02 info: Running /etc/ha.d/resource.d/apache /etc/httpd/conf/httpd.conf stop
    apache[28782]: 2010/03/08_08:53:03 INFO: Killing apache PID 18390
    apache[28782]: 2010/03/08_08:53:03 INFO: apache stopped.
    apache[28771]: 2010/03/08_08:53:03 INFO: Success
    ResourceManager[27645]: 2010/03/08_08:53:03 info: Running /etc/ha.d/resource.d/mysql stop
    mysql[28851]: 2010/03/08_08:53:24 Shutting down MySQL.....................[ OK ]
    ResourceManager[27645]: 2010/03/08_08:53:24 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd0 /data ext3 stop
    Filesystem[29010]: 2010/03/08_08:53:25 INFO: Running stop for /dev/drbd0 on /data
    Filesystem[29010]: 2010/03/08_08:53:25 INFO: Trying to unmount /data
    Filesystem[29010]: 2010/03/08_08:53:25 ERROR: Couldn't unmount /data; trying cleanup with SIGTERM
    Filesystem[29010]: 2010/03/08_08:53:25 INFO: Some processes on /data were signalled
    Filesystem[29010]: 2010/03/08_08:53:27 INFO: unmounted /data successfully
    Filesystem[28999]: 2010/03/08_08:53:27 INFO: Success
    ResourceManager[27645]: 2010/03/08_08:53:27 info: Running /etc/ha.d/resource.d/drbddisk stop
    heartbeat[4458]: 2010/03/08_08:53:29 WARN: node node04.companydomain.nl: is dead
    heartbeat[4458]: 2010/03/08_08:53:29 info: Dead node node04.companydomain.nl gave up resources.
    heartbeat[4458]: 2010/03/08_08:53:29 info: Link node04.companydomain.nl:eth0 dead.
    heartbeat[4458]: 2010/03/08_08:53:29 info: Link node04.companydomain.nl:eth1 dead.
    hb_standby[29193]: 2010/03/08_08:53:57 Going standby [foreign].
    heartbeat[4458]: 2010/03/08_08:53:57 info: node03.companydomain.nl wants to go standby [foreign]
    Soo... What just happened here???
    Heartbeat on node04 stopped and told node03, which was the active node at the time. Somehow, node03 decided to start the cluster processes that were already running. (For the processes that are not critical, I always return 0 from the startup script so a failing non-essential part does not stop the entire cluster.) When starting ActiveMQ, it returns status 1 because it is already running. This fails the node and everything is shut down. As heartbeat is not running on the secondary node, there is nothing to fail over to. When I tried to run ha_takeover to restart the resources, absolutely nothing happened. Only after I restarted heartbeat on the primary node could the resources be started (after a delay of 2 minutes).
    These are my questions:
    Why does heartbeat on the primary node try to start the cluster processes again?
    Why did ha_takeover not work?
    What can I do to prevent this from happening?
    Server configuration:
    DRBD:
    version: 8.3.7 (api:88/proto:86-91)
    GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by [email protected], 2010-01-20 09:14:48
    0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate B r----
    ns:0 nr:6459432 dw:6459432 dr:0 al:0 bm:301 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
    uname -a:
    Linux node04 2.6.18-164.11.1.el5 #1 SMP Wed Jan 6 13:26:04 EST 2010 x86_64 x86_64 x86_64 GNU/Linux
    haresources:
    node03.companydomain.nl \
    drbddisk \
    Filesystem::/dev/drbd0::/data::ext3 \
    mysql \
    apache::/etc/httpd/conf/httpd.conf \
    LVSSyncDaemonSwap::master \
    monitor \
    activemq \
    tivoli-cluster \
    MailTo::[email protected]::DRBDFailureDrisAcc \
    MailTo::[email protected]::DRBDFailureDrisAcc \
    1.2.3.212
    ha.cf:
    debugfile /var/log/ha-debug
    logfile /var/log/ha-log
    keepalive 500ms
    deadtime 30
    warntime 10
    initdead 120
    udpport 694
    mcast eth0 225.0.0.3 694 1 0
    mcast eth1 225.0.0.4 694 1 0
    auto_failback off
    node node03.companydomain.nl
    node node04.companydomain.nl
    respawn hacluster /usr/lib64/heartbeat/dopd
    apiauth dopd gid=haclient uid=hacluster
    Thank you very much in advance, Ger Apeldoorn
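    One way to keep this failure mode from cascading is to make every non-critical resource script idempotent, i.e. report success when asked to start something that is already running. A minimal sketch of such a wrapper for ActiveMQ follows; the init script location and its support for a "status" action are assumptions about this setup:
    #!/bin/sh
    # /etc/ha.d/resource.d/activemq (sketch): idempotent start/stop for Heartbeat.
    # Assumes a conventional init script at /etc/init.d/activemq with a "status" action.
    INIT=/etc/init.d/activemq
    case "$1" in
      start)
        # If the broker is already up, report success so ResourceManager does not
        # treat a takeover of an already-running node as a resource failure.
        if "$INIT" status >/dev/null 2>&1; then
          exit 0
        fi
        exec "$INIT" start
        ;;
      stop)
        "$INIT" stop
        exit 0   # stopping an already-stopped service is not an error either
        ;;
      status)
        exec "$INIT" status
        ;;
      *)
        echo "Usage: $0 {start|stop|status}" >&2
        exit 1
        ;;
    esac
    This does not explain why heartbeat re-acquired resources it already held, but it does keep a redundant start from tearing the whole resource group down.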

    Read the article

  • Compat Wireless Drivers Centrino N-2230

    - by user2699451
    So I am using Linux and am having trouble installing the compat-wireless drivers.
    Hardware: Intel Centrino N-2230
    OS: Linux Mint 64-bit (kernel 3.8.0-19-generic)
    I followed this link http://www.mathyvanhoef.com/2012/09/compat-wireless-injection-patch-for.html
    Output:
    apt-get install linux-headers-$(uname -r)
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    linux-headers-3.8.0-19-generic is already the newest version.
    0 upgraded, 0 newly installed, 0 to remove and 19 not upgraded.
    charles-W55xEU compat-wireless-2010-10-16 # cd ~
    charles-W55xEU ~ # dir
    adt-bundle-linux-x86_64-20130917.zip Desktop known_hosts_backup
    charles-W55xEU ~ # wget http://www.orbit-lab.org/kernel/compat-wireless-3-stable/v3.6/compat-wireless-3.6.2-1-snp.tar.bz2
    --2013-10-29 10:28:23-- http://www.orbit-lab.org/kernel/compat-wireless-3-stable/v3.6/compat-wireless-3.6.2-1-snp.tar.bz2
    Resolving www.orbit-lab.org (www.orbit-lab.org)... 128.6.192.131
    Connecting to www.orbit-lab.org (www.orbit-lab.org)|128.6.192.131|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 4443700 (4,2M) [application/x-bzip2]
    Saving to: ‘compat-wireless-3.6.2-1-snp.tar.bz2’
    100%[======================================>] 4 443 700 13,5KB/s in 11m 3s
    2013-10-29 10:39:27 (6,55 KB/s) - ‘compat-wireless-3.6.2-1-snp.tar.bz2’ saved [4443700/4443700]
    charles-W55xEU ~ # tar -xf compat-wireless-3.6.2-1-snp.tar.bz2
    charles-W55xEU ~ # cd compat-wireless-3.6-rc6-1
    bash: cd: compat-wireless-3.6-rc6-1: No such file or directory
    charles-W55xEU ~ # dir
    adt-bundle-linux-x86_64-20130917.zip Desktop compat-wireless-3.6.2-1-snp known_hosts_backup compat-wireless-3.6.2-1-snp.tar.bz2
    charles-W55xEU ~ # cd compat-wireless-3.6.2-1-snp/
    charles-W55xEU compat-wireless-3.6.2-1-snp # dir
    code-metrics.txt defconfigs linux-next-pending pending-stable compat drivers MAINTAINERS README config.mk enable-older-kernels Makefile scripts COPYRIGHT include net udev crap linux-next-cherry-picks patches
    charles-W55xEU compat-wireless-3.6.2-1-snp # wget http://patches.aircrack-ng.org/mac80211.compat08082009.wl_frag+ack_v1.patch
    --2013-10-29 10:40:52-- http://patches.aircrack-ng.org/mac80211.compat08082009.wl_frag+ack_v1.patch
    Resolving patches.aircrack-ng.org (patches.aircrack-ng.org)... 213.186.33.2, 2001:41d0:1:1b00:213:186:33:2
    Connecting to patches.aircrack-ng.org (patches.aircrack-ng.org)|213.186.33.2|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 1049 (1,0K) [text/plain]
    Saving to: ‘mac80211.compat08082009.wl_frag+ack_v1.patch’
    100%[======================================>] 1 049 --.-K/s in 0s
    2013-10-29 10:40:56 (180 MB/s) - ‘mac80211.compat08082009.wl_frag+ack_v1.patch’ saved [1049/1049]
    charles-W55xEU compat-wireless-3.6.2-1-snp # patch -p1 < mac80211.compat08082009.wl_frag+ack_v1.patch
    patching file net/mac80211/tx.c
    Hunk #1 succeeded at 792 (offset 115 lines).
    charles-W55xEU compat-wireless-3.6.2-1-snp # wget -Ocompatwireless_chan_qos_frag.patch http://pastie.textmate.org/pastes/4882675/download
    --2013-10-29 10:43:18-- http://pastie.textmate.org/pastes/4882675/download
    Resolving pastie.textmate.org (pastie.textmate.org)... 178.79.137.125
    Connecting to pastie.textmate.org (pastie.textmate.org)|178.79.137.125|:80... connected.
    HTTP request sent, awaiting response... 301 Moved Permanently
    Location: http://pastie.org/pastes/4882675/download [following]
    --2013-10-29 10:43:20-- http://pastie.org/pastes/4882675/download
    Resolving pastie.org (pastie.org)... 96.126.119.119
    Connecting to pastie.org (pastie.org)|96.126.119.119|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 2036 (2,0K) [application/octet-stream]
    Saving to: ‘compatwireless_chan_qos_frag.patch’
    100%[======================================>] 2 036 --.-K/s in 0,001s
    2013-10-29 10:43:21 (3,35 MB/s) - ‘compatwireless_chan_qos_frag.patch’ saved [2036/2036]
    charles-W55xEU compat-wireless-3.6.2-1-snp # patch -p1 < compatwireless_chan_qos_frag.patch
    patching file drivers/net/wireless/rtl818x/rtl8187/dev.c
    patching file net/mac80211/tx.c
    Hunk #1 succeeded at 1495 (offset 8 lines).
    patching file net/wireless/chan.c
    charles-W55xEU compat-wireless-3.6.2-1-snp # make
    ./scripts/gen-compat-autoconf.sh /root/compat-wireless-3.6.2-1-snp/.config /root/compat-wireless-3.6.2-1-snp/config.mk > include/linux/compat_autoconf.h
    make -C /lib/modules/3.8.0-19-generic/build M=/root/compat-wireless-3.6.2-1-snp modules
    make[1]: Entering directory `/usr/src/linux-headers-3.8.0-19-generic'
    CC [M] /root/compat-wireless-3.6.2-1-snp/compat/main.o
    LD [M] /root/compat-wireless-3.6.2-1-snp/compat/compat.o
    CC [M] /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.o
    In file included from /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma.h:8:0, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:8, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8:
    /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma_driver_pci.h:217:23: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_core_pci_init’
    In file included from /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma.h:10:0, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:8, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8:
    /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma_driver_gmac_cmn.h:95:23: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_core_gmac_cmn_init’
    In file included from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8:0:
    /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:25:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_bus_register’
    /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:152:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_bus_register’
    /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:17:21: warning: ‘bcma_bus_next_num’ defined but not used [-Wunused-variable]
    /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:93:12: warning: ‘bcma_register_cores’ defined but not used [-Wunused-function]
    make[3]: *** [/root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.o] Error 1
    make[2]: *** [/root/compat-wireless-3.6.2-1-snp/drivers/bcma] Error 2
    make[1]: *** [_module_/root/compat-wireless-3.6.2-1-snp] Error 2
    make[1]: Leaving directory `/usr/src/linux-headers-3.8.0-19-generic'
    make: *** [modules] Error 2
    charles-W55xEU compat-wireless-3.6.2-1-snp # make install
    Warning: You may or may not need to update your initframfs, you should if any of the modules installed are part of your initramfs. To add support for your distribution to do this automatically send a patch against ./scripts/update-initramfs. If your distribution does not require this send a patch against the '/usr/bin/lsb_release -i -s': LinuxMint tag for your distribution to avoid this warning.
    make -C /lib/modules/3.8.0-19-generic/build M=/root/compat-wireless-3.6.2-1-snp modules
    make[1]: Entering directory `/usr/src/linux-headers-3.8.0-19-generic'
    CC [M] /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.o
    In file included from /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma.h:8:0, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:8, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8:
    /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma_driver_pci.h:217:23: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_core_pci_init’
    In file included from /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma.h:10:0, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:8, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8:
    /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma_driver_gmac_cmn.h:95:23: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_core_gmac_cmn_init’
    In file included from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8:0:
    /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:25:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_bus_register’
    /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:152:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_bus_register’
    /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:17:21: warning: ‘bcma_bus_next_num’ defined but not used [-Wunused-variable]
    /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:93:12: warning: ‘bcma_register_cores’ defined but not used [-Wunused-function]
    make[3]: *** [/root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.o] Error 1
    make[2]: *** [/root/compat-wireless-3.6.2-1-snp/drivers/bcma] Error 2
    make[1]: *** [_module_/root/compat-wireless-3.6.2-1-snp] Error 2
    make[1]: Leaving directory `/usr/src/linux-headers-3.8.0-19-generic'
    make: *** [modules] Error 2
    charles-W55xEU compat-wireless-3.6.2-1-snp #
    It keeps giving these errors, and I get the same errors with instructions from other sites. I am lost; help needed.
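    Note that both builds die in drivers/bcma, a Broadcom bus driver that the Intel Centrino N-2230 (served by the iwlwifi driver) does not need. compat-wireless ships a scripts/driver-select helper that restricts the build to a single driver family, so one workaround sketch is to skip bcma entirely; this assumes the driver-select script in this snapshot accepts an iwlwifi target:
    cd ~/compat-wireless-3.6.2-1-snp
    make clean
    ./scripts/driver-select restore    # undo any earlier selection first
    ./scripts/driver-select iwlwifi    # build only the Intel iwlwifi driver, skipping bcma
    make
    sudo make install
    driver-select rewrites the Makefiles in place, which is why the restore step comes first.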

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure. These new capabilities include:
    Backup Services: General Availability of Windows Azure Backup Services
    Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager
    Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration
    Active Directory: Securely manage hundreds of SaaS applications
    Enterprise Management: Use Active Directory to Better Manage Windows Azure
    Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support
    All of these improvements are now available to use immediately. Below are more details about them.
    Backup Service: General Availability Release of Windows Azure Backup
    Today we are releasing Windows Azure Backup Service as a general availability service. This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and ready to use for production scenarios.
    Windows Azure Backup is a cloud-based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service gives IT administrators and developers the option to back up and protect critical data in an easily recoverable way from any location, with no upfront hardware cost.
    Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, the Windows Azure Backup Service also provides the capability to back up data from System Center Data Protection Manager and Windows Server Essentials to the cloud.
    All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secure and can't be decrypted by anyone but you).
    Getting Started
    To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal: click New->Data Services->Recovery Services->Backup Vault.
    Once the backup vault is created you'll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it.
    Once the servers you want to back up are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this:
    Tutorial: Schedule Backups Using the Windows Azure Backup Agent. This tutorial helps you set up a backup schedule for your registered Windows Servers, and also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule.
    Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent. This tutorial helps you recover data from a backup, and also explains how to use Windows PowerShell cmdlets to do the same tasks.
    Below are some of the key benefits the Windows Azure Backup Service provides:
    Simple configuration and management. Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center, and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk or to the cloud.
    Block-level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block-level changes and only transferring the changed blocks, hence reducing storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions.
    Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can set up throttling and configure how the Windows Azure Backup Service utilizes the network bandwidth when backing up or restoring information.
    Data integrity verified in the cloud. In addition to the secure backups, the backed-up data is also automatically checked for integrity once the backup is done. As a result, any corruption which may arise due to data transfer can be easily identified and fixed automatically.
    Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs.
    Hyper-V Recovery Manager: Now Available in Public Preview
    I'm excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM).
    Windows Azure Hyper-V Recovery Manager helps protect your business-critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime.
    Application data in Hyper-V Recovery Manager scenarios always travels on your on-premises replication channel. Only metadata (such as names of logical clouds, virtual machines, networks, etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted.
    You can begin using Windows Azure Hyper-V Recovery Manager today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal. You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson's 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager, follow our detailed step-by-step guide.
    Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn
    Today's Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.
    These improvements include:
    Ability to Delete both VM Instances + Attached Disks in One Operation
    Prior to today's release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM. You had to manually delete these yourself from the storage account. With today's update we've added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM.
    We've also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines. To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the "Delete" button.
    Warnings on Availability Sets with Only One Virtual Machine in Them
    One of the nice features that Windows Azure Virtual Machines supports is the concept of "Availability Sets". An "availability set" allows you to define a tier/role (e.g. web frontends, database servers, etc.) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations. This enables you to deploy applications in a highly available way.
    One issue we've seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set). With today's release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set, to help highlight this. You can learn more about configuring the availability of your virtual machines here.
    Configuring SQL Server AlwaysOn
    SQL Server AlwaysOn is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today's Windows Azure release makes it even easier to configure SQL Server AlwaysOn by enabling "Direct Server Return" endpoints to be configured and managed within the Windows Azure Management Portal. Previously, setting this up required using PowerShell to complete the endpoint configuration. Starting today you can enable it simply by checking the "Direct Server Return" checkbox. You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here.
    Active Directory: Application Access Enhancements
    This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory. This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc.) as well as LOB-based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013).
    Since the initial preview we've enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS applications.
    Earlier this month we published an update on dates and pricing for when the service will be released in general availability form. In that blog post we announced our intention to release the service in general availability form by the end of the year. We also announced that the below features would be available in a free tier with it:
    SSO to every SaaS app we integrate with – Users can single sign on to any app we are integrated with at no charge. This includes all the top SaaS apps and every app in our application gallery, whether they use federation or password vaulting.
    Application access assignment and removal – IT admins can assign access privileges to web applications to the users in their active directory, assuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss.
    User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd-party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems.
    Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them, and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time.
    Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS, and Android. Not all of these devices handle authentication in the same manner, but the user doesn't care: they need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device and anywhere.
    You can learn more about our plans for application management with Windows Azure Active Directory here. Try out the preview and start using it today.
    Enterprise Management: Use Active Directory to Better Manage Windows Azure
    Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have). With today's Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering. Specifically:
    1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them. You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users.
    2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory. Both options are free. The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources. It also ensures that if an employee leaves an organization, his or her access control rights to the company's Windows Azure resources are immediately revoked.
    3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations. Prior to today's release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations. We still support this management certificate approach (don't worry – nothing will stop working), but we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.
    4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials. This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure.
    Below are some details on how all of this works:
    Subscriptions within a Directory
    As part of today's update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don't already have one). When you log in to the Windows Azure Management Portal you'll now see the directory name in the URI of the browser. For example, I have a "scottgu" directory that my subscriptions are hosted within.
    Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign in to Windows Azure. These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don't want to. In the scenario above I'm actually logged in using my @hotmail.com-based Microsoft ID, which is now mapped to a "scottgu" active directory that was created for me. By default everything will continue to work just like it used to before.
    Manage your Directory
    You can manage an Active Directory (including the one we now create for you by default) by clicking the "Active Directory" tab on the left-hand side of the portal. This will list all of the directories in your account. Clicking one for the first time will display a getting-started page that provides documentation and links to perform common tasks with it.
    You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this). You can also set up the directory to automatically sync with an on-premises Active Directory using the "Directory Integration" tab.
    Note that users within a directory by default do not have admin rights to log in to or manage Windows Azure based resources. You still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure. You can do this by clicking the Settings tab on the left-hand side of the portal and then clicking the administrators tab within it.
    Sign-In Integration within Visual Studio
    If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates. You can now just right-click on the "Windows Azure" icon within the Server Explorer and choose the "Connect to Windows Azure" context menu option to do so.
    Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription). You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email. The dialog will update with an appropriate login prompt depending on which type of email address you enter.
    Once you sign in you'll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer and be available to start using. No downloading of management certificates required: all of the authentication was handled using your Windows Azure Active Directory!
    Manage Subscriptions across Multiple Directories
    If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today's update. If you don't like the default subscription-to-directory mapping we have done, you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it.
    If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the "Edit Directory" button to choose which directory to map it to. Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working. We've made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want.
    Filtering By Directory and Subscription
    Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions). If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant. This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure account.
    Windows Azure SDK 2.2
    Today we are also releasing a major update of our Windows Azure SDK. The Windows Azure SDK 2.2 release adds some great new features including:
    Visual Studio 2013 Support
    Integrated Windows Azure Sign-In support within Visual Studio
    Remote Debugging Cloud Services with Visual Studio
    Firewall Management support within Visual Studio for SQL Databases
    Visual Studio 2013 RTM VM Images for MSDN Subscribers
    Windows Azure Management Libraries for .NET
    Updated Windows Azure PowerShell Cmdlets and ScriptCenter
    I'll post a follow-up blog shortly with more details about all of the above.
    Additional Updates
    In addition to the above enhancements, today's release also includes a number of additional improvements:
    AutoScale: Richer time and date based scheduling support (set different rules on different dates)
    AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios)
    AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules
    Operation Logs: Auditing support for Service Bus management operations
    Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2. It has so much goodness in it that I have a whole second blog post coming shortly on it! :-)
    Summary
    Today's Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering.
    If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.
    Hope this helps,
    Scott
    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo
    Talks covered in this post:
    Introduction to Toorcon 15 (2013)
    A Tale of One Software Bypass of MS Windows 8 Secure Boot
    Breaching SSL, One Byte at a Time
    Running at 99%: Surviving an Application DoS
    Security Response in the Age of Mass Customized Attacks
    x86 Rewriting: Defeating RoP and other Shinanighans
    Clowntown Express: interesting bugs and running a bug bounty program
    Active Fingerprinting of Encrypted VPNs
    Making Attacks Go Backwards
    Mask Your Checksums—The Gorry Details
    Adventures with weird machines thirty years after "Reflections on Trusting Trust"
    Introduction to Toorcon 15 (2013)
    Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here, starting in 2003. As always, I've only summarized the talks I attended and that interested me enough to write about. Be aware that I may have misrepresented the speakers' remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information.
    A Tale of One Software Bypass of MS Windows 8 Secure Boot
    Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak
    Yuri and Alex talked about UEFI and bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference.
    MS Windows 8 Secure Boot Overview
    UEFI (Unified Extensible Firmware Interface) is the interface between hardware and OS. UEFI is processor- and architecture-independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi); once replaced, it can modify the kernel. It is trivial to replace the bootloader. Today there are many legacy bootkits—UEFI replaces most of them.
    MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed updates). You would think Secure Boot would rely on ROM (such as is used for phones), but you can't do that for PCs—PCs use writable memory with signatures. The DXE core verifies the UEFI boot loader(s), and the OS loader (winload.efi, winresume.efi) verifies the OS kernel.
    A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db) and a "forbidden" database (dbx): X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (they can't be accessed once the OS starts, which uses Run-Time (RT) services). The root cert uses RSA-2048 public keys and PKCS#7 format signatures. The relevant variables:
    SecureBoot — enable/disable image signature checks
    SetupMode — update keys, self-signed keys, and secure boot variables
    CustomMode — allows updating keys
    Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation.
    Attacking MS Windows 8 Secure Boot
    Secure Boot does NOT protect from physical access; it can be disabled from the console. Each BIOS vendor implements Secure Boot differently, and there are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly.
Vendor recommendations: allow only signed UEFI firmware updates; protect UEFI firmware from direct modification in flash memory; protect firmware update components; program the SPI controller securely; protect Secure Boot policy settings in NVRAM; protect the runtime API; and disable the Compatibility Support Module, which allows unsigned legacy boot.

An attacker can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If the PK is not found, the firmware enters setup mode with Secure Boot turned off. TPM can be exploited in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, though. But they found a bug that they can exploit from user mode (undisclosed) and demoed the exploit: it loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations: allow only signed updates; protect UEFI firmware in ROM; protect the EFI variable store in ROM.

Breaching SSL, One Byte at a Time
Angelo Prado and Yoel Gluck, Salesforce.com
CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME sends requests containing every possible character and measures the ciphertext length; the guess whose plaintext compresses the most is correct, so the cookie can be recovered one byte at a time. SSL compression uses LZ77 to reduce redundancy and Huffman coding to replace common byte sequences with shorter codes. US CERT thinks the SSL compression problem is fixed, but it isn't; the speakers convinced CERT that it wasn't fixed and they issued a CVE.

BREACH, breachattack.com
BREACH exploits compression of the HTTP response body (Accept-Encoding request header, Content-Encoding response header): it works even where TLS-level compression is disabled, because the response body is still compressed. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key. It can use compression to guess contents one byte at a time. For example, "Supersecret SupersecreX" (a wrong guess) compresses away 10 bytes, while "Supersecret Supersecret" (a correct guess) compresses away 11, so each character can be found by guessing every character and watching the length. To start the guessing, BREACH needs at least three known initial characters in the response sequence; compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include: lookahead (guess 2 or 3 characters at a time instead of 1, which is expensive); rollback to the last known conflict; checking the compression ratio; and brute-forcing the first 3 "bootstrap" characters if needed (expensive). Block ciphers hide the exact plaintext length; the solution is to align the response in advance to the block size. Mitigations: for length, use variable padding; for secrets, use dynamic CSRF tokens per request, change secrets over time, and separate secrets into input-less servlets. Future work: either a better understanding of DEFLATE/GZIP, or HTTPS extensions.
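To make the compression side channel concrete, here is a toy sketch (mine, not the speakers' code) that compares local gzip output sizes; a real attack would measure TLS ciphertext lengths on the wire instead:

using System;
using System.IO;
using System.IO.Compression;
using System.Text;

class CompressionOracleDemo
{
    static int CompressedLength(string s)
    {
        var ms = new MemoryStream();
        using (var gz = new GZipStream(ms, CompressionMode.Compress))
        {
            byte[] bytes = Encoding.ASCII.GetBytes(s);
            gz.Write(bytes, 0, bytes.Length);
        }
        return ms.ToArray().Length; // ToArray is valid even after the stream is closed
    }

    static void Main()
    {
        const string page = "secret=Supersecret; other page content";
        // The attacker reflects a guess into the same compressed response.
        foreach (var guess in new[] { "secret=SupersecreX", "secret=Supersecret" })
            Console.WriteLine("{0}: {1} bytes", guess, CompressedLength(page + guess));
        // The correct guess typically compresses to fewer bytes, leaking one
        // byte of the secret per round of guessing.
    }
}

The same length comparison, applied to ciphertext, is all CRIME and BREACH need, since neither SSL nor gzip hides the plaintext length.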
Running at 99%: Surviving an Application DoS
Ryan Huber, Risk I/O
Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets, or to download large files. Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use Apache alternatives such as nginx.

How to identify malicious hosts: short, sudden web requests; an obvious user-agent (curl, python); the same URL requested repeatedly; no referer header (not normal); hidden links (hide a link and see if a bot follows it); access from outside your geo IP range (unless the website is global); missing common headers in the request; suspiciously regular timing; an IP first seen at the beginning of the attack; and a high count of requests per host (usually a very large number; see the sketch after this summary). Use of captcha can mitigate attacks, but you'll lose a lot of genuine users.

Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer
Bouncer is software written by Ryan that works from netflow data. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it is not nice about it; there is no proper connection close). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers so Bouncer gets clean timing data. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, a consumer library, multitasking, and logging destroyed connections. Takeaways: DoS mitigation is easier with a complete picture. Bouncer is designed to make it easier to detect and defend against DoS; it is not a complete cure.
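Bouncer's internals weren't shown, but the last heuristic in the list above, counting requests per host, can be sketched in a few lines (my illustration; the window size and threshold are hypothetical):

using System;
using System.Collections.Generic;

class RequestRateTracker
{
    private readonly Dictionary<string, Queue<DateTime>> _hits = new Dictionary<string, Queue<DateTime>>();
    private static readonly TimeSpan Window = TimeSpan.FromMinutes(1);
    private const int Threshold = 600; // hypothetical: ~10 requests/second sustained

    // Record one request and report whether the source now looks suspicious.
    public bool RecordAndCheck(string sourceIp, DateTime now)
    {
        Queue<DateTime> times;
        if (!_hits.TryGetValue(sourceIp, out times))
            _hits[sourceIp] = times = new Queue<DateTime>();
        times.Enqueue(now);
        while (times.Count > 0 && now - times.Peek() > Window)
            times.Dequeue(); // drop hits that fell out of the window
        return times.Count > Threshold;
    }
}

A real tool would combine several of the listed signals (user-agent, referer, header set, timing regularity) rather than rely on any single counter.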
Security Response in the Age of Mass Customized Attacks
Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/
Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass-production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world; or you can design your own PC from commodity parts. Exploit kits are another example of mass customization: the kits support multiple browsers and plugins and allow new modules. Exploit kits are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits.

Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now, so the drive with malware is towards mass-customized exploits, to avoid detection. There's plenty of evidence that exploit development has Project Manager bureaucracy. From the malware they infer edicts such as: support all versions of Reader; support all versions of Windows; support all versions of Flash; support all browsers; write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example). Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software/OS/browsers and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs. However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks.

x86 Rewriting: Defeating RoP and other Shinanighans
Richard Wartell
The attack vector addressed here is: first, some malware causes a buffer overflow. The malware has no access to the program, only to its input, and it overflows a buffer to place code on the stack. Later, the stack became non-executable; the workaround malware used was to write a bogus return address to the stack, jumping to the malware. Later came ASLR (Address Space Layout Randomization) to randomize the memory layout and make addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways. "RoP" is Return-oriented Programming: RoP attacks use the program's own code, writing return addresses on the stack that point to (existing) exploitable code found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is anti-RoP compilers that compile source code with NO return instructions. ASLR randomizes only base addresses, not the location of individual "gadgets" within the code. IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine.

Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code; the STIR disassembler is conservative in what it disassembles. Each basic block is moved to a random location in memory: STIR writes new code sections with copies of the basic blocks in randomized locations, the old code is copied and rewritten with jumps to the new code, and the original code sections in the file are marked non-executable. STIR has better entropy than ASLR in the location of code, which makes brute-force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of "gadgets" (i.e., moved their addresses). Overhead is usually 5-10% on MS Windows and about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is that it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies; for example, don't create a *.{exe,msi,bat} file, or don't connect to the network after reading from the disk.

Clowntown Express: interesting bugs and running a bug bounty program
Collin Greene, Facebook
Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). FB uses security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring people in with different perspectives, and are paid only for success. A bug bounty is a better use of a fixed amount of time and money than just code review or static code analysis. The bounty program started July 2011 and has paid out $1.5 million to date.
14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small percentage of submitters (as with everything else)—the top submitters are paid 6 figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired. Bug bounties also surface bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.

Active Fingerprinting of Encrypted VPNs
Anna Shubina, Dartmouth Institute for Security, Technology, and Society
(I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping round trips (such as what router is used). Delayed packets are not informative about a network, especially if you are far away from the network. More exploration is needed of how TCP works in real life with respect to timing.

Making Attacks Go Backwards
FuzzyNop, Mandiant
This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group—they have diverse skill levels, and there are a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first: "hiding" malware in the temp, recycle, or root directories; creation of unnamed scheduled tasks; obvious names of files and syscalls ("ClearEventLog"); uncleared event logs. Clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system. Reverse engineering is hard; disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks; they are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" or "[RightShift]"), as do timestamp printf strings; look for these in files. Presence is not proof they are used; absence is not proof they are not used. Java exploits: one can parse a jar file with idxparser.py and decompile the Java files. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware; malware also typically needs command and control.

Application of Artificial Intelligence in Ad-Hoc Static Code Analysis
John Ashaman, Security Innovation
Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but this fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. However, the problem is that making a call graph is really hard; for example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST), with the nodes created from method declarations and the edges created from method use. Then the tool generated a control-flow graph, with the goal of finding a path through the AST (a maze) from source to sink. The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order.
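A simplified sketch of that source-to-sink search (my reconstruction of the idea, not Scat's code; the node names and the "scary" predicate are hypothetical):

using System;
using System.Collections.Generic;

class SourceToSinkSearch
{
    // Depth-first search from a source node toward any node flagged as "scary".
    static bool PathToSink(string node, Dictionary<string, List<string>> graph,
                           Func<string, bool> isScary, HashSet<string> visited, List<string> path)
    {
        if (!visited.Add(node)) return false;   // already explored
        path.Add(node);
        if (isScary(node)) return true;         // reached a vulnerable sink
        List<string> next;
        if (graph.TryGetValue(node, out next))
            foreach (var n in next)
                if (PathToSink(n, graph, isScary, visited, path)) return true;
        path.RemoveAt(path.Count - 1);          // backtrack
        return false;
    }

    static void Main()
    {
        var graph = new Dictionary<string, List<string>>
        {
            { "HandleRequest", new List<string> { "ParseInput", "RenderPage" } },
            { "ParseInput",    new List<string> { "BuildQuery" } },
            { "BuildQuery",    new List<string> { "ExecuteSql" } } // unsanitized input reaches SQL
        };
        var path = new List<string>();
        if (PathToSink("HandleRequest", graph, n => n == "ExecuteSql", new HashSet<string>(), path))
            Console.WriteLine(string.Join(" -> ", path));
    }
}

Heuristic search order matters in practice because real call graphs explode combinatorially; a plain DFS like this is only the naive baseline.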
The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information, see his posts on the Security Innovation blog and NRefactory on GitHub.

Mask Your Checksums—The Gorry Details
Eric (XlogicX) Davisson
Sometimes when emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data), and other parts of the packet may leak more data about the IP. The TCP and IP checksums both cover some of the same data, so one can get more bits of information out of using both checksums than just one. Also, one can usually determine the OS from the TTL field and the ports in a packet header. If we get hundreds of possible results (16 possibilities for each unknown masked nibble), one can do other things to narrow them down, such as looking at the packet contents for domain or geo information. With hundreds of results, one can import them as CSV into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached; he was able to find the exact IP address, given the geo data and university of the sender. The point is: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if you do it at all, so as not to create a false impression of security.
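A small sketch of the leak (my illustration, not from the talk): compute the RFC 1071 Internet checksum over a fabricated IPv4 header, "mask" one source-address byte, and brute-force it back using the unmasked checksum:

using System;
using System.Collections.Generic;

class ChecksumLeakDemo
{
    // RFC 1071 Internet checksum: one's-complement sum of 16-bit words.
    static ushort Checksum(byte[] data)
    {
        uint sum = 0;
        for (int i = 0; i < data.Length; i += 2)
        {
            int lo = (i + 1 < data.Length) ? data[i + 1] : 0;
            sum += (uint)((data[i] << 8) | lo);
            sum = (sum & 0xFFFF) + (sum >> 16); // fold the carry back in
        }
        return (ushort)~sum;
    }

    static void Main()
    {
        var header = new byte[20];
        header[0] = 0x45; header[8] = 64; header[9] = 6;                   // version/IHL, TTL, protocol
        header[12] = 192; header[13] = 0; header[14] = 2; header[15] = 77; // source IP 192.0.2.77
        ushort published = Checksum(header); // the checksum left unmasked in the report

        // The analyst masks the last source byte but not the checksum;
        // an attacker just tries every value against the published checksum.
        var candidates = new List<byte>();
        for (int guess = 0; guess <= 255; guess++)
        {
            header[15] = (byte)guess;
            if (Checksum(header) == published) candidates.Add((byte)guess);
        }
        Console.WriteLine("Source bytes consistent with the checksum: " + candidates.Count);
    }
}

With only one unknown byte, the checksum pins it down exactly; with several masked nibbles you get the "hundreds of results" Eric describes, which geo and content clues then narrow.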
Adventures with weird machines thirty years after "Reflections on Trusting Trust"
Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present)
"Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper: "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs," bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author: can you trust the code? Hell no! There are too many factors—it's Babylonian in nature. Why not? Input is not well defined/recognized, so code's assumptions about "checked" input will be violated (a bug/vulnerability); for example, HTML is recursive, but regex checking is not. Input can be well formed but so complex there's no telling what it does; for example, ELF file parsing is complex and has multiple ways of parsing. Input is seen differently by different pieces of a program or toolchain. Any input is a program: input executes on input handlers (it drives state changes and transitions), and only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler is either a "recognizer" for the inputs as a well-defined language (see langsec.org) or a "virtual machine" for inputs to drive into pwn-age.

ELF ABI (the UNIX/Linux executable file format) case study: problems can arise from these steps (without planting bugs): compiler, linker, loader, ld.so/rtld, relocator, DWARF (debugger info), exceptions. The problem is you can't really automatically analyze code (it's the "halting problem" and undecidable). The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading—you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing-complete machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file; for more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. DWARF exception-handling data works too: .eh_frame + glibc == Turing machine. The x86 MMU (IDT, GDT, TSS) can also be used: address translation creates a Turing machine, where the page handler reads and writes memory on page faults and the page table serves as the Turing machine's byte code. There is an example on GitHub using this TM that flies a glider across the screen.

Next Sergey talked about "parser differentials": having one input format but two parsers creates confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF—several parsers in the OS toolchain, which are all different. One can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs; the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC.

Conclusions: trusting computers is not only about bugs! Bugs are part of the problem, but by no means all of it. Complex data formats mean bugs. There is no "chain of trust" in Babylon (that is, with parser differentials). We need to squeeze complexity out of data until data stops being "code equivalent". For further information, see langsec.org and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.

    Read the article

  • C#/.NET Little Wonders: The Concurrent Collections (1 of 3)

    - by James Michael Hare
Once again we consider some of the lesser known classes and keywords of C#. In the next few weeks, we will discuss the concurrent collections and how they have changed the face of concurrent programming. This week's post will begin with a general introduction and discuss the ConcurrentStack<T> and ConcurrentQueue<T>. Then in the following post we'll discuss the ConcurrentDictionary<TKey,TValue> and ConcurrentBag<T>. Finally, we shall close on the third post with a discussion of the BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here.

A brief history of collections
In the beginning was the .NET 1.0 Framework. And out of this framework emerged the System.Collections namespace, and it was good. It contained all the basic things a growing programming language needs, like the ArrayList and Hashtable collections. The main problem, of course, with these original collections is that they held items of type object, which means you had to be disciplined enough to use them correctly or you could end up with runtime errors if you got an object of a type you weren't expecting.

Then came .NET 2.0 and generics, and our world changed forever! With generics the C# language finally got an equivalent of the very powerful C++ templates. As such, the System.Collections.Generic namespace was born and we got type-safe versions of all our favorite collections. The List<T> succeeded the ArrayList, the Dictionary<TKey,TValue> succeeded the Hashtable, and so on. The new versions of the library were not only safer because they checked types at compile time, but in many cases they were more performant as well. So much so that it's Microsoft's recommendation that the original System.Collections collections be used only for backwards compatibility.

So we as developers came to know and love the generic collections and took them into our hearts and embraced them. The problem is, thread safety in both the original collections and the generic collections can be problematic, for very different reasons. Now, if you are only doing single-threaded development you may not care – after all, no locking is required. Even if you do have multiple threads, if a collection is "load-once, read-many" you don't need to do anything to protect that container from multi-threaded access, as illustrated below:

public static class OrderTypeTranslator
{
    // because this dictionary is loaded once before it is ever accessed, we don't need to synchronize
    // multi-threaded read access
    private static readonly Dictionary<string, char> _translator = new Dictionary<string, char>
    {
        {"New", 'N'},
        {"Update", 'U'},
        {"Cancel", 'X'}
    };

    // the only public interface into the dictionary is for reading, so inherently thread-safe
    public static char? Translate(string orderType)
    {
        char charValue;
        if (_translator.TryGetValue(orderType, out charValue))
        {
            return charValue;
        }

        return null;
    }
}

Unfortunately, most of our computer science problems cannot get by with just single-threaded applications or with multi-threading in a load-once manner. Looking at today's trends, it's clear to see that computers are not so much getting faster because of faster processor speeds -- we've nearly reached the limits we can push through with today's technologies -- but more because we're adding more cores to the boxes.
With this new hardware paradigm, it is even more important to use multi-threaded applications to take full advantage of parallel processing and achieve higher application speeds. So let's look at how to use collections in a thread-safe manner.

Using historical collections in a concurrent fashion
The early .NET collections (System.Collections) had a Synchronized() static method that could be used to wrap the early collections to make them completely thread-safe. This paradigm was dropped in the generic collections (System.Collections.Generic) because having a synchronized wrapper resulted in atomic locks for all operations, which could prove overkill in many multithreading situations. Thus the paradigm shifted to having the user of the collection specify their own locking, usually with an external object:

public class OrderAggregator
{
    private static readonly Dictionary<string, List<Order>> _orders = new Dictionary<string, List<Order>>();
    private static readonly object _orderLock = new object();

    public void Add(string accountNumber, Order newOrder)
    {
        List<Order> ordersForAccount;

        // a complex operation like this should all be protected
        lock (_orderLock)
        {
            if (!_orders.TryGetValue(accountNumber, out ordersForAccount))
            {
                _orders.Add(accountNumber, ordersForAccount = new List<Order>());
            }

            ordersForAccount.Add(newOrder);
        }
    }
}

Notice how we're performing several operations on the dictionary under one lock. With the Synchronized() static methods of the early collections, you wouldn't be able to specify this level of locking (a more macro level). So in the generic collections, it was decided that if a user needed synchronization, they could implement their own locking scheme instead, so that they could provide synchronization as needed.

The need for better concurrent access to collections
Here's the problem: it's relatively easy to write a collection that locks itself down completely for access, but anything more complex than that can be difficult and error-prone to write, much less to make perform efficiently! For example, what if you have a Dictionary that has frequent reads but infrequent updates? Do you want to lock down the entire Dictionary for every access? This would be overkill and would prevent concurrent reads. In such cases you could use something like a ReaderWriterLockSlim, which allows multiple readers in a lock, and then once a writer grabs the lock it blocks all further readers until the writer is done (in a nutshell). This is all very complex stuff to consider.

Fortunately, this is where the Concurrent Collections come in. The Parallel Computing Platform team at Microsoft went to great pains to determine how to make a set of concurrent collections that would have the best performance characteristics for general-case multi-threaded use. Now, as in all things involving threading, you should always make sure you evaluate all your container options based on the particular usage scenario and the degree of parallelism you wish to achieve. This article should not be taken to mean that these collections are always superior to the generic collections. Each fills a particular need for a particular situation. Understanding what each container is optimized for is key to the success of your application, whether it be single-threaded or multi-threaded.
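As a concrete illustration of the ReaderWriterLockSlim approach just mentioned (a sketch of mine, not code from the original post), readers share the lock while writers take it exclusively:

using System.Collections.Generic;
using System.Threading;

public class ReadMostlyCache
{
    private readonly Dictionary<string, string> _cache = new Dictionary<string, string>();
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    // Many readers may hold the read lock at once.
    public bool TryGet(string key, out string value)
    {
        _lock.EnterReadLock();
        try { return _cache.TryGetValue(key, out value); }
        finally { _lock.ExitReadLock(); }
    }

    // Writers get exclusive access; readers block until the write completes.
    public void Set(string key, string value)
    {
        _lock.EnterWriteLock();
        try { _cache[key] = value; }
        finally { _lock.ExitWriteLock(); }
    }
}

Even this small class shows why hand-rolled synchronization is easy to get wrong (leak one read lock and writers stall forever), which is exactly the pain the concurrent collections remove.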
General points to consider with the concurrent collections
The MSDN documentation points out that the concurrent collections all support the ICollection interface. However, since the collections are already synchronized, the IsSynchronized property always returns false, and SyncRoot always returns null. Thus you should not attempt to use these properties for synchronization purposes.

Note that the concurrent collections may also have different operations than the traditional data structures you are used to. You may ask why, but it was done out of necessity to keep operations safe and atomic. For example, in order to do a Pop() on a stack you have to know the stack is non-empty, but between the time you check the stack's IsEmpty property and the time you do the Pop(), another thread may have come in and made the stack empty! This is why some of the traditional operations have been changed to make them safe for concurrent use. In addition, some properties and methods in the concurrent collections achieve concurrency by creating a snapshot of the collection, which means that some operations that were traditionally O(1) may now be O(n) in the concurrent models. I'll try to point these out as we talk about each collection so you can be aware of any potential performance impacts. Finally, all the concurrent containers are safe for enumeration even while being modified, but some of the containers support this in different ways (snapshot vs. dirty iteration). Once again I'll highlight how thread-safe enumeration works for each collection.

ConcurrentStack<T>: The thread-safe LIFO container
The ConcurrentStack<T> is the thread-safe counterpart to the System.Collections.Generic.Stack<T>, which as you may remember is your standard last-in-first-out container. If you think of algorithms that favor stack usage (for example, depth-first searches of graphs and trees) then you can see how using a thread-safe stack would be of benefit. The ConcurrentStack<T> achieves thread-safe access by using System.Threading.Interlocked operations. This means that multi-threaded access to the stack requires no traditional locking and is very, very fast! For the most part, the ConcurrentStack<T> behaves like its Stack<T> counterpart, with a few differences:

Pop() was removed in favor of TryPop(), which returns true if an item existed and was popped, and false if empty.
PushRange() and TryPopRange() were added, which allow you to push multiple items and pop multiple items atomically.
Count takes a snapshot of the stack and then counts the items, making it an O(n) operation; if you just want to check for an empty stack, call IsEmpty instead, which is O(1).
ToArray() and GetEnumerator() both also take snapshots, so iteration over a stack will give you a static view at the time of the call and will not reflect updates.

Pushing on a ConcurrentStack<T> works just like you'd expect, except for the aforementioned PushRange() method that was added to allow you to push a range of items concurrently.

var stack = new ConcurrentStack<string>();

// adding to stack is much the same as before
stack.Push("First");

// but you can also push multiple items in one atomic operation (no interleaves)
stack.PushRange(new [] { "Second", "Third", "Fourth" });

For looking at the top item of the stack (without removing it), the Peek() method has been removed in favor of a TryPeek().
This is because in order to do a peek the stack must be non-empty, but between the time you check for empty and the time you execute the peek, the stack contents may have changed. Thus TryPeek() was created to be an atomic check for empty, and then a peek if not empty:

// to look at top item of stack without removing it, can use TryPeek.
// Note that there is no Peek(), this is because you need to check for empty first. TryPeek does.
string item;
if (stack.TryPeek(out item))
{
    Console.WriteLine("Top item was " + item);
}
else
{
    Console.WriteLine("Stack was empty.");
}

Finally, to remove items from the stack, we have TryPop() for single items and TryPopRange() for multiple items. Just like TryPeek(), these operations replace Pop(), since we need to ensure atomically that the stack is non-empty before we pop from it:

// to remove items, use TryPop or TryPopRange to get multiple items atomically (no interleaves)
if (stack.TryPop(out item))
{
    Console.WriteLine("Popped " + item);
}

// TryPopRange will only pop up to the number of spaces in the array, the actual number popped is returned.
var poppedItems = new string[2];
int numPopped = stack.TryPopRange(poppedItems);

foreach (var theItem in poppedItems.Take(numPopped))
{
    Console.WriteLine("Popped " + theItem);
}

Finally, note that as stated before, GetEnumerator() and ToArray() get a snapshot of the data at the time of the call. That means if you are enumerating the stack you will get a snapshot of the stack at the time of the call. This is illustrated below:

var stack = new ConcurrentStack<string>();

// adding to stack is much the same as before
stack.Push("First");

var results = stack.GetEnumerator();

// but you can also push multiple items in one atomic operation (no interleaves)
stack.PushRange(new [] { "Second", "Third", "Fourth" });

while(results.MoveNext())
{
    Console.WriteLine("Stack only has: " + results.Current);
}

The only item that will be printed out in the above code is "First", because the snapshot was taken before the other items were added. This may sound like an issue, but it's really for safety and is more correct. You don't want to enumerate a stack and have half a view of the stack before an update and half a view of the stack after an update, after all. In addition, note that this is still thread-safe, whereas iterating through a non-concurrent collection while updating it in the old collections would cause an exception.

ConcurrentQueue<T>: The thread-safe FIFO container
The ConcurrentQueue<T> is the thread-safe counterpart of the System.Collections.Generic.Queue<T> class. The concurrent queue uses an underlying list of small arrays and lock-free System.Threading.Interlocked operations on the head and tail arrays. Once again, this allows us to do thread-safe operations without the need for heavy locks! The ConcurrentQueue<T> (like the ConcurrentStack<T>) has some departures from the non-concurrent counterpart. Most notably:

Dequeue() was removed in favor of TryDequeue(), which returns true if an item existed and was dequeued, and false if empty.
Count does not take a snapshot; it subtracts the head and tail index to get the count. This results overall in O(1) complexity, which is quite good. It's still recommended, however, that for empty checks you call IsEmpty instead of comparing Count to zero.
ToArray() and GetEnumerator() both take snapshots.
This means that iteration over a queue will give you a static view at the time of the call and will not reflect updates. The Enqueue() method on the ConcurrentQueue<T> works much the same as on the generic Queue<T>:

var queue = new ConcurrentQueue<string>();

// adding to queue is much the same as before
queue.Enqueue("First");
queue.Enqueue("Second");
queue.Enqueue("Third");

For front-item access, the TryPeek() method must be used to attempt to see the first item of the queue. There is no Peek() method since, as you'll remember, we can only peek on a non-empty queue, so we must have an atomic TryPeek() that checks for empty and then returns the first item if the queue is non-empty.

// to look at first item in queue without removing it, can use TryPeek.
// Note that there is no Peek(), this is because you need to check for empty first. TryPeek does.
string item;
if (queue.TryPeek(out item))
{
    Console.WriteLine("First item was " + item);
}
else
{
    Console.WriteLine("Queue was empty.");
}

Then, to remove items you use TryDequeue(). Once again this is for the same reason we have TryPeek() and not Peek():

// to remove items, use TryDequeue. If queue is empty returns false.
if (queue.TryDequeue(out item))
{
    Console.WriteLine("Dequeued first item " + item);
}

Just like the concurrent stack, the ConcurrentQueue<T> takes a snapshot when you call ToArray() or GetEnumerator(), which means that subsequent updates to the queue will not be seen when you iterate over the results. Thus once again the code below will only show the first item, since the other items were added after the snapshot.

var queue = new ConcurrentQueue<string>();

// adding to queue is much the same as before
queue.Enqueue("First");

var iterator = queue.GetEnumerator();

queue.Enqueue("Second");
queue.Enqueue("Third");

// only shows First
while (iterator.MoveNext())
{
    Console.WriteLine("Dequeued item " + iterator.Current);
}

Using collections concurrently
You'll notice in the examples above I stuck to using single-threaded examples so as to make them deterministic and the results obvious. Of course, if we used these collections in a truly multi-threaded way the results would be less deterministic, but would still be thread-safe and with no locking on your part required! For example, say you have an order processor that takes an IEnumerable<Order> and handles each order in a multi-threaded fashion, then groups the responses together in a concurrent collection for aggregation. This can be done easily with the TPL's Parallel.ForEach():

public static IEnumerable<OrderResult> ProcessOrders(IEnumerable<Order> orderList)
{
    var proxy = new OrderProxy();
    var results = new ConcurrentQueue<OrderResult>();

    // notice that we can process all these in parallel and put the results
    // into our concurrent collection without needing any external locking!
    Parallel.ForEach(orderList,
        order =>
        {
            var result = proxy.PlaceOrder(order);

            results.Enqueue(result);
        });

    return results;
}

Summary
Obviously, if you do not need multi-threaded safety, you don't need to use these collections, but when you do need multi-threaded collections these are just the ticket! The plethora of features (I always think of the movie The Three Amigos when I say plethora) built into these containers, and the amazing way they achieve thread-safe access in an efficient manner, is wonderful to behold.
Stay tuned next week, where we'll continue our discussion with the ConcurrentBag<T> and the ConcurrentDictionary<TKey,TValue>. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here.

Technorati Tags: C#,.NET,Concurrent Collections,Collections,Multi-Threading,Little Wonders,BlackRabbitCoder,James Michael Hare

    Read the article

  • CodePlex Daily Summary for Tuesday, January 11, 2011

CodePlex Daily Summary for Tuesday, January 11, 2011

Popular Releases

ArcGIS Editor for OpenStreetMap: ArcGIS Editor for OpenStreetMap 1.1 beta2: This is the beta2 release for the ArcGIS Editor for OpenStreetMap version 1.1. Changes from version 1.0: Multi-part geometries are now supported. Homogeneous relations (consisting of only lines or only polygons) are converted into the appropriate multi-part geometry. Mixed relations and super relations are maintained and tracked in a stand-alone relation table. The underlying editing logic has changed. As opposed to tracking the editing changes upon "Save edit" or "Stop edit" the changes a...

VSSpeedster - Parallel Builds for VS: VSSpeedster 1.2 (beta): - Improved Parallel Builds - Cancel running Parallel Build using Ctrl+Break

ASP.NET Comet Ajax Library (Reverse Ajax - Server Push): Multiple server ASP.NET Reverse Ajax: This sample project demonstrates how easy it is to scale your web applications via PokeIn

Hawkeye - The .Net Runtime Object Editor: Hawkeye 1.2.5: In the case you are running an x86 Windows and you installed Release 1.2.4, you should consider upgrading to this release (1.2.5) as it appears Hawkeye is broken on x86 OS. I apologize for the inconvenience, but it appears Hawkeye 1.2.4 (and probably previous versions) doesn't run on x86 Windows (See issue http://hawkeye.codeplex.com/workitem/7791). This maintenance release fixes this broken behavior. This release comes in two flavors: Hawkeye.125.N2 is the standard .NET 2 build, was compile...

Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (January 2011): Another release build for daily use; it contains many new features, enhanced compatibility with the latest PHP open-source applications and several issue fixes. To improve the performance of your application using MySQL, please use the Managed MySQL Extension for Phalanger. Changes made within this release include the following: New features available only in Phalanger. Full support of Multi-Script-Assemblies was implemented; you can build your application into several DLLs now. Deploy them separately t...

EnhSim: EnhSim 2.3.0: This release supports WoW patch 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Changed how flame shoc...

AutoLoL: AutoLoL v1.5.3: A message will be displayed when there's an update available. Shows a list of recent mastery files in the Editor Tab (requested by quite a few people). Updater: Update information is now scrollable. Added a button to launch AutoLoL after updating is finished. Updated the UI to match that of AutoLoL. Fix: Detects and resolves 'Read Only' state on Version.xml

Extended WPF Toolkit: Extended WPF Toolkit - 1.3.0: What's in the 1.3.0 Release? BusyIndicator ButtonSpinner ChildWindow ColorPicker - Updated (Breaking Changes) DateTimeUpDown - New Control Magnifier - New Control MaskedTextBox - New Control MessageBox NumericUpDown RichTextBox RichTextBoxFormatBar - Updated .NET 3.5 binaries and Source. Please note: The Extended WPF Toolkit 3.5 is dependent on .NET Framework 3.5 and the WPFToolkit.
You must install .NET Framework 3.5 and the WPFToolkit in order to use any features in the To...

sNPCedit: sNPCedit v0.9d: Added an elementclient coordinate catcher to catch coordinates: select a target (ingame), i.e. your char, npc or monster, then click the button, and the coordinates+direction will be transferred to the selected row in the table. Corrected labels from Rot to Direction (because it is a vector).

Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.7 beta Released: Hi, Today Visifire is released along with one new feature: the Inlines property has been implemented in Title. Now onwards you can customize the text content in Title. Please check out the Visifire documentation for more information. This release contains fixes for the following bugs: Styles for chart elements were not working as expected. Bar chart was not drawn properly if the AxisMinimum property was set to a value above the zero base line. In DateTime axis, AxisLabels were no...

Ionics Isapi Rewrite Filter: 2.1 latest stable: V2.1 is stable, and is in maintenance mode. This is v2.1.1.25. It is a bug-fix release. There are no new features. 28629 29172 28722 27626 28074 29164 27659 27900 Many documentation updates and fixes; a proper x64 build environment. This release includes x64 binaries in zip form, but no x64 MSI file. You'll have to manually install x64 servers, following the instructions in the documentation.

StyleCop for ReSharper: StyleCop for ReSharper 5.1.14980.000: A considerable amount of work has gone into this release: Huge focus on performance around the violation scanning subsystem: - caching added to reduce IO operations around reading and merging of settings files - caching added to reduce creation of expensive objects. Users should notice a considerable perf boost and a decrease in memory usage. Bug Fixes: - StyleCop's new ObjectBasedEnvironment object does not resolve the StyleCop installation path, thus it does not return the correct path ...

VivoSocial: VivoSocial 7.4.1: New release with bug fixes and updates for performance.

UltimateJB: Ultimate JB 2.03 PL3 KAKAROTO + HERMES + Spoof 3.5: Here is a release that many have been waiting for impatiently: the PL3 KAKAROTO version integrates his latest modifications and now supports firmware 2.43!!! Conclusion: UltimateJB203PSXXXDEFAULTKAKAROTO => no spoof, but available for the following PS3s: 3.41_kiosk 3.41 3.40 3.30 3.21 3.15 3.10 3.01 2.76 2.70 2.60 2.53 2.43. UltimateJB203PS341_HERMES => no spoof, but the hermes 4b version. UltimateJB203PS341HERMESSPOOF35X => hermes 4b + spoof of firmwares 3.50 and 3.55 au li...

.NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.03: Added lots of new extensions and new projects for MVC and Entity Framework. object.FindTypeByRecursion Int32.InRange String.RemoveAllSpecialCharacters String.IsEmptyOrWhiteSpace String.IsNotEmptyOrWhiteSpace String.IfEmptyOrWhiteSpace String.ToUpperFirstLetter String.GetBytes String.ToTitleCase String.ToPlural DateTime.GetDaysInYear DateTime.GetPeriodOfDay IEnumberable.RemoveAll IEnumberable.Distinct ICollection.RemoveAll IList.Join IList.Match IList.Cast Array.IsNullOrEmpty Array.W...

EFMVC - ASP.NET MVC 3 and EF Code First: EFMVC 0.5 - ASP.NET MVC 3 and EF Code First: Demo web app using ASP.NET MVC 3, Razor and EF Code First

VidCoder: 0.8.0: Added x64 version. Made the audio output preview more detailed and accurate. If the chosen encoder or mixdown is incompatible with the source, the fallback that will be used is displayed.
Added "Auto" to the audio mixdown choices. Reworked non-anamorphic size calculation to work better with non-standard pixel aspect ratios and cropping. Reworked Custom anamorphic to be more intuitive and allow display width to be set automatically (Thanks, Statick). Allowing higher bitrates for 6-ch...

.NET Voice Recorder: Auto-Tune Release: This is the source code and binaries to accompany the article on the Coding 4 Fun website. It is the Auto Tuner release of the .NET Voice Recorder application.

BloodSim: BloodSim - 1.3.2.0: - Simulation Log is now automatically disabled and hidden when running 10 or more iterations - Hit and Expertise are now entered by Rating, and include an option for a Racial Expertise bonus - Added option for boss to use a periodic magic ability (Dragon Breath) - Added option for boss to periodically Enrage, gaining a Damage/Attack Speed buff

ASP.NET MVC CMS ( Using CommonLibrary.NET ): CommonLibrary.NET CMS 0.9.5 Alpha: CommonLibrary CMS: A simple yet powerful CMS system in ASP.NET MVC 2 using C# 4.0. ActiveRecord-based components for Blogs, Widgets, Pages, Parts, Events, Feedback, BlogRolls, Links. Includes several widgets (tag cloud, archives, recent, user cloud, links, twitter, blog roll and more). Built using the http://commonlibrarynet.codeplex.com framework. (Uses TDD, DDD, Models/Entities, Code Generation.) Can run with In-Memory Repositories or a Sql Server Database. See the Documentation tab for ins...

New Projects

.NET Event Spy: Full information available here: http://martincarolan.blogspot.com/2011/01/secret-project.html Simple development/debugging tool that hooks into and monitors events raised on any .NET object

3DTweet: 3Dtweet is an effort to make tweets appear in an aesthetic manner to users of Windows Phone. It is developed using VS2010 Express.

Agile .NET with SCRUM and XP: Source code for the book Apress Professional Agile .NET Development with SCRUM and XP

Beskid Niski Agroturystyka: Travel Poland; tourism in the Beskid Niski. Agrotourism in the village of Losie on the Klimkowka reservoir.

Coding better: A better coding lab for new .NET features.

FBApp: A simple facebook app I was busy with over the holidays as an experiment to try out the facebook api. It is currently not complete, but I wanted to get some criticism on it for my 1st web app. It is developed using WPF and C#.

Freemium Helper for WebMatrix: The Freemium Helper for WebMatrix provides an easy way to apply the Freemium model to your WebMatrix site. Using different user groups (or roles), it allows you to easily enable or disable features on your pages depending on the stock-keeping unit the user has paid for.

Haversine Distance Calculation: A very small project that implements the Haversine formula, which calculates the great circle distance between two points on the earth's surface. The points are latitude / longitude coordinates in DD. The formula is implemented client side with javascript and server side with C#.

Hexa.Core: Hexa.Core is our implementation of the Domain Driven Design Architecture.
It also provides a set of helper classes for ASP.Net and WCF development.

Minecraft NBT reader: A simple Minecraft NBT reader.

MobSoft: MobSoft is a Silverlight-based news application designed to test the new functionalities in Silverlight 4

netduino Helpers: The 'netduino Helpers' is a C# driver set for common hardware components and features convenient wrappers around complex .Net Micro Framework features such as: analog joysticks, real-time clock, 8*8 LED matrix, shift register, runtime assembly & resource loader, bitmaps, etc.

NewsGator Social Connectors for Sharepoint 2010: This project contains social connectors for the NewsGator SharePoint platform and supports sending messages to Twitter and LinkedIn just by putting tags in the text: #li for sending to LinkedIn, #tw for sending to Twitter

Non Profit Contact Relationship Management: Non-profit contact relationship management software intended to help those in the non-profit arena manage donors, sponsors, and prospects.

OpenAGE: OpenAGE, short for Open Advance Game Engine, is aimed at developing a new advanced game engine strictly for the PC and Xbox 360 gaming systems using XNA 4.0 and Visual Studio 2010

OpenAutoPoster: OpenAutoPoster automates some of the boring everyday tasks of aggregating, linking and posting that haunt content creators.

Phefer WoodTurning Sketcher: Draw out your own turnings before you hit the wood. Import images and trace around them; print them out with the length and width measurements.

Simple Script Interpreter - A simple GPLEX/GPPG (Lex/Yacc) Primer: Simple Script is a simple implementation of an interpreter language built with GPlex and Gppg (Lex/Yacc). It's developed in C#.

SP2010Tutorials: Code for learning SharePoint 2010

The Social Developer: This is a social developer tool for programmers to create and share projects using the .Net framework and other technologies, and to integrate it into a socialistic approach of sharing the work load and the resources needed to develop high-level applications.

Traffic-sign Classification: Traffic sign shape classification and localization.

unnamedyet: Experimental! For systems researchers. Objective: to develop a praxis such that, with a finite and discrete set of descriptive terms, it is possible to self-demonstrate and execute any given proposition.

VSSpeedster - Parallel Builds for VS: Improve the performance of your Visual Studio: - Parallel Builds integrated in Visual Studio

Webservice Xslt Transformer WebPart for SharePoint 2010: The Dynamic Webservice Xslt Transformer WebPart makes it much easier for SharePoint developers and administrators to call the webservice and transform the results directly to HTML by providing their own custom xslt. The properties can be set on the webpart by using the UI.

WilWaNet.HASH: An ASP.NET MVC web site designed for tracking nutrition for the purposes of losing weight. Tracks calories, fat calories, fat grams and saturated fat along with daily weight and exercise. Includes daily Basic Metabolic Rate calculation and graphing functions.

WP7 Try it 01: The first try in wp7

WPF TryIt 01: Household registration management WPF Application

WX Alerter CAP/XML: NWS Alerter using the CAP 1.1 alerting protocol. The goal of this project is to consume weather alerts from the NWS site. The user will select the city or SAME code/zone to watch. As alerts trigger, notices will display and info will fill the Alert Tab.

    Read the article

  • More CPU cores may not always lead to better performance – MAXDOP and query memory distribution in spotlight

    - by sqlworkshops
More hardware normally delivers better performance, but there are exceptions where it can hinder performance. Understanding these exceptions and working around them is a major part of SQL Server performance tuning.

When a memory-allocating query executes in parallel, SQL Server distributes memory to each task that is executing part of the query in parallel. In our example the sort operator that executes in parallel divides the memory across all tasks, assuming an even distribution of rows. Common memory-allocating queries are those that perform Sort and Hash Match operations, like Hash Join, Hash Aggregation, or Hash Union.

In reality, how often are column values evenly distributed? Think about an example: are the employees working for your company distributed evenly across all Zip codes, or mainly concentrated in the headquarters? What happens when you sort a result set based on Zip codes? Do all products in the catalog sell equally, or are a few products hot-selling items?

One of my customers tested the below example on a 24-core server with various MAXDOP settings, and here are the results:
MAXDOP 1: CPU time = 1185 ms, elapsed time = 1188 ms
MAXDOP 4: CPU time = 1981 ms, elapsed time = 1568 ms
MAXDOP 8: CPU time = 1918 ms, elapsed time = 1619 ms
MAXDOP 12: CPU time = 2367 ms, elapsed time = 2258 ms
MAXDOP 16: CPU time = 2540 ms, elapsed time = 2579 ms
MAXDOP 20: CPU time = 2470 ms, elapsed time = 2534 ms
MAXDOP 0: CPU time = 2809 ms, elapsed time = 2721 ms (all 24 cores)
In the above test, when the data was evenly distributed, the elapsed time of the parallel query was always lower than the serial query.

Why does the query get slower and slower with more CPU cores / higher MAXDOP? Maybe you can answer this question after reading the article; let me know: [email protected].

Well, you get the point; let's see an example. The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script.

Let's update the Employees table with 49 out of 50 employees located in Zip code 2001.

update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
update Employees set Zip = 2001 where EmployeeID % 50 != 1
go
update statistics Employees with fullscan
go

Let's create the temporary table #FireDrill with all possible Zip codes.

drop table #FireDrill
go
create table #FireDrill (Zip int primary key)
insert into #FireDrill select distinct Zip from Employees
update statistics #FireDrill with fullscan
go

Let's execute the query serially with MAXDOP 1.

--Example provided by www.sqlworkshops.com
--Execute query with uneven Zip code distribution
--First serially with MAXDOP 1
set statistics time on
go
declare @EmployeeID int, @EmployeeName varchar(48), @zip int
select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
      inner join #FireDrill fd on (e.Zip = fd.Zip)
      order by e.Zip
option (maxdop 1)
go

The query took 1011 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 799624. No Sort Warnings in SQL Server Profiler. Now let's execute the query in parallel with MAXDOP 0.
--Example provided by www.sqlworkshops.com
--Execute query with uneven Zip code distribution
--In parallel with MAXDOP 0
set statistics time on
go
declare @EmployeeID int, @EmployeeName varchar(48), @zip int
select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
      inner join #FireDrill fd on (e.Zip = fd.Zip)
      order by e.Zip
option (maxdop 0)
go

The query took 1912 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 799624. The estimated number of rows between the serial and parallel plans is the same; the parallel plan has slightly more memory granted due to additional overhead. The Sort properties show the rows are unevenly distributed over the 4 threads. Sort Warnings appear in SQL Server Profiler.

Intermediate Summary: The reason for the higher duration with the parallel plan was a sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001. The arithmetic makes the spill clear: the 79360 KB grant is divided evenly across the 4 threads (roughly 19840 KB each), but the thread that receives the Zip 2001 rows gets about 98% of the data with only a quarter of the memory, so it has to spill to tempdb.

Now let's update the Employees table and distribute employees evenly across all Zip codes.

update Employees set Zip = EmployeeID / 400 + 1
go
update statistics Employees with fullscan
go

Let's execute the query serially with MAXDOP 1.

--Example provided by www.sqlworkshops.com
--Execute query with even Zip code distribution
--Serially with MAXDOP 1
set statistics time on
go
declare @EmployeeID int, @EmployeeName varchar(48), @zip int
select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
      inner join #FireDrill fd on (e.Zip = fd.Zip)
      order by e.Zip
option (maxdop 1)
go

The query took 751 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.

Now let's execute the query in parallel with MAXDOP 0.

--Example provided by www.sqlworkshops.com
--Execute query with even Zip code distribution
--In parallel with MAXDOP 0
set statistics time on
go
declare @EmployeeID int, @EmployeeName varchar(48), @zip int
select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
      inner join #FireDrill fd on (e.Zip = fd.Zip)
      order by e.Zip
option (maxdop 0)
go

The query took 661 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 784707. The Sort properties show the rows are evenly distributed over the 4 threads. No Sort Warnings in SQL Server Profiler.

Intermediate Summary: When employees were distributed unevenly, concentrated in 1 Zip code, the parallel sort spilled while the serial sort performed well without spilling to tempdb. When the employees were distributed evenly across all Zip codes, neither the parallel sort nor the serial sort spilled to tempdb. This shows uneven data distribution may affect the performance of some parallel queries negatively. For a detailed discussion of memory allocation, refer to the webcasts available at www.sqlworkshops.com/webcasts.

Some of you might conclude from the above execution times that the parallel query is not faster even when there is no spill. Below you can see that when we are joining a limited number of Zip codes, the parallel query will be fastest, since it can use Bitmap Filtering.

Let's update the Employees table with 49 out of 50 employees located in Zip code 2001.
Let's update the Employees table with 49 out of 50 employees located in Zip code 2001.

update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
update Employees set Zip = 2001 where EmployeeID % 50 != 1
go
update statistics Employees with fullscan
go

Let's create the temporary table #FireDrill with a limited set of Zip codes.

drop table #FireDrill
go
create table #FireDrill (Zip int primary key)
insert into #FireDrill select distinct Zip
      from Employees where Zip between 1800 and 2001
update statistics #FireDrill with fullscan
go

Let's execute the query serially with MAXDOP 1.

--Example provided by www.sqlworkshops.com
--Execute query with uneven Zip code distribution
--Serially with MAXDOP 1
set statistics time on
go
declare @EmployeeID int, @EmployeeName varchar(48), @zip int
select @EmployeeName = e.EmployeeName, @zip = e.Zip
from Employees e
      inner join #FireDrill fd on (e.Zip = fd.Zip)
order by e.Zip
option (maxdop 1)
go

The query took 989 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 785594. No Sort Warnings in SQL Server Profiler.

Now let's execute the query in parallel with MAXDOP 0.

--Example provided by www.sqlworkshops.com
--Execute query with uneven Zip code distribution
--In parallel with MAXDOP 0
set statistics time on
go
declare @EmployeeID int, @EmployeeName varchar(48), @zip int
select @EmployeeName = e.EmployeeName, @zip = e.Zip
from Employees e
      inner join #FireDrill fd on (e.Zip = fd.Zip)
order by e.Zip
option (maxdop 0)
go

The query took 1799 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 785594. Sort Warnings appear in SQL Server Profiler. The estimated number of rows is the same in the serial and parallel plans; the parallel plan has slightly more memory granted due to additional overhead.

Intermediate Summary: The reason for the higher duration with the parallel plan, even with a limited set of Zip codes, was the sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001.

Now let's update the Employees table and distribute employees evenly across all Zip codes.

update Employees set Zip = EmployeeID / 400 + 1
go
update statistics Employees with fullscan
go

Let's execute the query serially with MAXDOP 1.

--Example provided by www.sqlworkshops.com
--Execute query with even Zip code distribution
--Serially with MAXDOP 1
set statistics time on
go
declare @EmployeeID int, @EmployeeName varchar(48), @zip int
select @EmployeeName = e.EmployeeName, @zip = e.Zip
from Employees e
      inner join #FireDrill fd on (e.Zip = fd.Zip)
order by e.Zip
option (maxdop 1)
go

The query took 250 ms to complete. The execution plan shows that 9016 KB of memory was granted while the estimated rows were 79973.8. No Sort Warnings in SQL Server Profiler.

Now let's execute the query in parallel with MAXDOP 0.

--Example provided by www.sqlworkshops.com
--Execute query with even Zip code distribution
--In parallel with MAXDOP 0
set statistics time on
go
declare @EmployeeID int, @EmployeeName varchar(48), @zip int
select @EmployeeName = e.EmployeeName, @zip = e.Zip
from Employees e
      inner join #FireDrill fd on (e.Zip = fd.Zip)
order by e.Zip
option (maxdop 0)
go

The query took 85 ms to complete. The execution plan shows that 13152 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.

Here you see that the parallel query is much faster than the serial query, since SQL Server uses Bitmap Filtering to eliminate rows before the hash join.
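If you want to confirm that the plan really contains a Bitmap operator, you can open the graphical plan, or, as a rough alternative that is my addition and not part of the original article, search the plan cache for showplan XML that mentions the Bitmap physical operator. Scanning the whole plan cache is expensive, so treat this as a test-server-only sketch:

--Minimal sketch (not from the original article): find cached plans that
--contain a Bitmap physical operator. Casting the plan XML to nvarchar and
--using LIKE is crude but sidesteps the showplan XML namespace.
select st.text, qp.query_plan
from sys.dm_exec_cached_plans cp
      cross apply sys.dm_exec_query_plan(cp.plan_handle) qp
      cross apply sys.dm_exec_sql_text(cp.plan_handle) st
where cast(qp.query_plan as nvarchar(max)) like N'%PhysicalOp="Bitmap"%'
go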
Parallel queries are very good for performance, but in some cases they can hinder it. If you identify the reasons for these hindrances, you can get the best out of parallelism. I covered many aspects of monitoring and tuning parallel queries in webcasts (www.sqlworkshops.com/webcasts) and articles (www.sqlworkshops.com/articles). I suggest you watch the webcasts and read the articles to better understand how to identify and tune parallel query performance issues.

Summary: One has to avoid sort spills to tempdb, and the chances of spills are higher when a query executes in parallel over unevenly distributed data. Parallel query brings its own advantages: reduced elapsed time, and reduced work through Bitmap Filtering. So it is important to understand how to avoid spills to tempdb and when to execute a query in parallel.

I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend watching them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.
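Throughout this article the OPTION (MAXDOP n) hint was the per-query control over parallelism. If you need an instance-wide cap instead, the usual knob is the max degree of parallelism configuration option. The sketch below is my addition, not from the original article; it assumes sysadmin rights, and the value 4 is purely illustrative, not a recommendation:

--Minimal sketch (not from the original article): cap parallelism
--instance-wide; individual queries can still override the cap
--with an OPTION (MAXDOP n) hint.
exec sp_configure 'show advanced options', 1
reconfigure
go
exec sp_configure 'max degree of parallelism', 4
reconfigure
go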

Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011, click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

R Meyyappan
[email protected]
LinkedIn: http://at.linkedin.com/in/rmeyyappan