Search Results

Search found 1650 results on 66 pages for 'indexes'.


  • CodePlex Daily Summary for Sunday, October 30, 2011

    CodePlex Daily Summary for Sunday, October 30, 2011Popular ReleasesKoober: Koober - The Ebook Creator 0.2: The official release of Koober as Open source. Koober is a ebook creator for Windows, and Koob Reader is the reader.?????????????: ??????????????: ??????????????VidCoder: 1.2.0: Updated to HandBrake svn 4311. Refactored to read in list of encoders from HandBrake itself. Added ffaac, FLAC audio and ffmpeg2 video. Reworked audio encoding UI: removed grid and gave each encoding more vertical space. Added audio quality targeting and compression. Added status messages. Added messages for a few events (encode start/stop, playing a preview clip, update available, update download finished). Made source and destination path UI elements smarter about what parts they cut ...patterns & practices: Enterprise Library Contrib: Enterprise Library Contrib - 5.0 (Oct 2011): This release of Enterprise Library Contrib is based on the Microsoft patterns & practices Enterprise Library 5.0 core and contains the following: Common extensionsTypeConfigurationElement<T> - A Polymorphic Configuration Element without having to be part of a PolymorphicConfigurationElementCollection. AnonymousConfigurationElement - A Configuration element that can be uniquely identified without having to define its name explicitly. Data Access Application Block extensionsMySql Provider - ...Network Monitor Open Source Parsers: Network Monitor Parsers 3.4.2748: The Network Monitor Parsers packages contain parsers for more than 400 network protocols, including RFC based public protocols and protocols for Microsoft products defined in the Microsoft Open Specifications for Windows and SQL Server. NetworkMonitor_Parsers.msi is the base parser package which defines parsers for commonly used public protocols and protocols for Microsoft Windows. In this release, NetowrkMonitor_Parsers.msi continues to improve quality and fix bugs. It has included the fo...Duckworth Lewis Professional Edition Calculator: DLcalc 3.0: DLcalc 3.0 can perform Duckworth/Lewis Professional Edition calculations 100% accurately. It also produces over-by-over and ball-by-ball PAR score tables.Folder Bookmarks: Folder Bookmarks 2.2.0.1: In this version: Custom Icons - now you can change the icons of the bookmarks. By default, whenever an image is added, the icon is automatically changed to a thumbnail of the picture. This can be turned off in the settings (Options... > Settings) Ability to remove items from the 'Recent' category Bugfixes - 'Choose' button in 'Edit Bookmark' now works Another bug fix: another problem in the 'Edit Bookmark' windowMedia Companion: MC 3.420b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) Movies Fixed: Fanart and poster scraping issues TV Shows (Re)Added: Rebuild single show Fixed: Issue when shows are moved from original location Ability to handle " for actor nicknames Crash when episode name contains "<" (does not scrape yet) Clears fanart when switch...patterns & practices - Unity: Unity 3.0 for .NET4.5 Preview: The Unity 3.0.1026.0 Preview enables Unity to work on .NET 4.5 with both the WinRT and desktop profiles. The major changes include: Unity projects updated to target .NET 4.5. Dynamic build plans modified to use compiled lambda expressions instead of Reflection.Emit Converting reflection to use the new TypeInfo for reflection. 
Projects updated to work with the Microsoft Visual Studio 2011 Preview Notes/Known Issues: The Microsoft.Practices.Unity.UnityServiceLocator class cannot be use...Managed Extensibility Framework: MEF 2 Preview 4: Detailed information on this release is available on the BCL team blog.Image Converter: Image Converter 0.3: New Features: - English and German support Technical Improvements: - Microsoft All Rules using Code Analysis Planned Features for future release: 1. Unit testing 2. Command line interface 3. Automatic UpdatesAcDown????? - Anime&Comic Downloader: AcDown????? v3.6: ?? ● AcDown??????????、??????,??????????????????????,???????Acfun、Bilibili、???、???、???、Tucao.cc、SF???、?????80????,???????????、?????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ?? v3.6?? ??“????”...Path Copy Copy: 8.0: New version that mostly adds lots of requested features: 11340 11339 11338 11337 This version also features a more elaborate Settings UI that has several tabs. I tried to add some notes to better explain the use and purpose of the various options. The Path Copy Copy documentation is also on the way, both to explain how to develop custom plugins and to explain how to pre-configure options if you're a network admin. Stay tuned.MVC Controls Toolkit: Mvc Controls Toolkit 1.5.0: Added: The new Client Blocks feaure of Views A new "move" js method for the TreeViews The NewHtmlCreated js event to the DataGrid Improved the ChoiceList structure that now allows also the selection list of a dropdown to be chosen with a lambda expression Improved the AcceptViewHintAttribute controller filter. Now a client can specify not only the name of a View or Partial View it prefers, but also to receive just the rough data in Json format. Now the the SMinimum and SMaximum par...Free SharePoint Master Pages: Buried Alive (Halloween) Theme: Release Notes *Created for Halloween, you will find theme file, custom css file and images. *Created by Al Roome @AlstarRoome Features: Custom styling for web part Custom background *Screenshot https://s3.amazonaws.com/kkhipple/post/sharepoint-showcase-halloween.pngDevForce Application Framework: DevForce AF 2.0.3 RTW: PrerequisitesWPF 4.0 Silverlight 4.0 DevForce 2010 6.1.3.1 Download ContentsDebug and Release Assemblies API Documentation Source code License.txt Requirements.txt Release HighlightsNew: EventAggregator event forwarding New: EntityManagerInterceptor<T> to intercept EntityManger events New: IHarnessAware to allow for ViewModel setup when executed inside of the Development Harness New: Improved design time stability New: Support for add-in development New: CoroutineFns.To...NicAudio: NicAudio 2.0.5: Minor change to accept special DTS stereo modes (LtRt, AB,...)NDepend TFS 2010 integration: version 0.5.0 beta 1: Only the activity and the VS plugin are avalaible right now. They basically work. Data types that are logged into tfs reports are subject to change. 
This is no big deal since data is not yet sent into the warehouse.Windows Azure Toolkit for Windows Phone: Windows Azure Toolkit for Windows Phone v1.3.1: Upgraded Windows Azure projects to Windows Azure Tools for Microsoft Visual Studio 2010 1.5 – September 2011 Upgraded the tools tools to support the Windows Phone Developer Tools RTW Update SQL Azure only scenarios to use ASP.NET Universal Providers (through the System.Web.Providers v1.0.1 NuGet package) Changed Shared Access Signature service interface to support more operations Refactored Blobs API to have a similar interface and usage to that provided by the Windows Azure SDK Stor...DotNetNuke® FAQ: 05.00.00: FAQ (Frequently Asked Questions) 05.00.00 will work for any DNN version 5.6.1 and up. It is the first version which is rewritten in C#. The scope of this update is to fix all known issues and improve user interface. Please review and rate this release... (stars are welcome)BUG FIXESManage Categories button text was not localized Edit/Add FAQ Entry: button text was not localized ENHANCEMENTSAdded an option to select the control for category display: Listbox with checkboxes (flat category ...New ProjectsalphaCards: Digitale LernkarteiBinary Code Interpreter: The Binary Code Interpreter is a small and funny programming languaage. This interpreter works a bit like a Turing Machine.Buscar ciudades: This is the source code for the application "Buscar ciudades" published in the Windows Phone Marketplace. CamOnWeb - Componente para WebCam: Este projeto tem como objetivo o desenvolvimento um componente para captura e exibição de videos da webcam. Compare.Net: Compare.Net is a configurable application to compare and reconclie from different sources. C# WPF applcation targeting .Net 4CSGame: Academic project for our studies. Subject "Client -server technology"curbside: curbsideDanish Language Pack for Community Server: Danish Language Packs for Community Server 2.1 and 2007. At its inception, this project contains quite incomplete translations to Danish. This project provides a common resource where all interested parties can gradually improve on this language pack. Please join to improve the contents. The source code tree contains two major folders: One for Community Server 2.1 and one for Community Server 2007.Dependency Variable Lib: ??????????????????、?????????????????????。(???????、?????????????w) ???、「??×??=??」???????????????、????????、????????、???「??×??」????????????????????????????????????。 FreedBack: FreedBack allows you to quickly and easily embed a configurable feedback form into any web application. It can be a standalone application that you install and deploy, then embed a bit of script into existing apps to implement feedback, or you can make it part of your MVC App.Hageveld - NieuwsModule (Chris' Tutorial: A news article manager made for educational purposes.Idea Resto: Aplicación para restaurantes.jqgridmvcgeneration: codesmith template that creates js, html and asp.net mvc server side json server for a jqgrid -based table on a arbitratrary database table. The code includes pagination and server side sorting. Jqgrid is a great jquery extension that provides an ajax-enabled fully customizable interactive table based on server side data. However, the actual coding can be a bit tedious and cryptic. This project provides a template base approach that basically performs two tasks: 1. 
creates a client si...MyClassMapper: A simple configurable classmapper for mapping ClassA to ClassB and vice versa, the ideas are borrowed from the Dozer project (http://dozer.sourceforge.net). Most code is based on 'Dynamic Access' code from Hongju Cao <hjcao_wei@hotmail.com> (All that code is located in the "DynamicAccess" sub-directory from the project.)NavigateTo Providers: This project is a collection of NavigateTo providers for Visual Studio 2010. NObsidian: A proxy component to access Obisidian Portal API from .NetNSession: The objective of this project is to allow ASP Classic to access ASP.NET out-of-process session stores in the same way that ASP.NET accesses them, and thus share the session state with ASP.NET.NTextile: NTextile is a lightweight, humane web compatible markup generator. It generates valid XHTML output using simple easy to remember conventions and text characters (a.k.a modifiers), to markup plain text. NTextile is especially useful to provide quick and valid markup for .NET web applications. For e.g. wikis, blogs, online forums, content management systems etc. This project was inspired by Dean Allen’s original Textile features, as included in the Textpattern content management application. T...OpenTasksBehaviorsLibrary for Windows Phone: Open Tasks Behavior for Windows Phone *OpenWebBrowserTask *OpenMediaLauncherTaskPowerShell Cmdlet Sample: PowerShell Cmdlet SamplePrismEx: PrismEx is a library of helpers and extension methods to allow developers to more easily develop and test Prism applications. QuickDelete: A Windows Explorer add-on to delete files and folders very quickly, much more so than the usual Explorer command. Works on Windows XP or later. Currently in EXPERIMENTAL state. No releases have been produced yet.Quiz Module for DotNetNuke: This is a module that allows you to quickly and easily provide a quiz that can be taken by the visitors on your DotNetNuke website.RaptorDB - The Key Value Store: Smallest, fastest embedded nosql persisted dictionary using b+tree or MurMur hash indexing. (Now with Hybrid WAH bitmap indexes)Rockin CMS-LMS Combo: RockinCmsLms Combo provides a great way for users to build their content and training sites with one great tool. It will be developed with C# 4.0-4.5, Silverlight, JQuery, and ASP.Net and MVC 3.0 with Razor. This project will demonstrate how to use todays technologies to build solutions that last. We may have 2 versions - one with a local db and one with a Windows Azure storage solution. Suggest using the Web Platform Installer to get all of the components.SandKeeper - Time Tracking Addin for Outlook 2010 tasks: SandKeeper is an Outlook Addin that enables you to track time for your tasks.Share: Share is a new web-based content management system. For more information about this project, please visit http://blog.aphysoft.com.SharePoint 2010 ViewEdit Group By Content Type: A simple project that injects JavaScript into the ribbon bar area to add "Content Type" as a choice on the ViewEdit.aspx page. With this, users can once again create views that group by Content Type without having to use SharePoint Designer 2010.Simple Merge Sort: This is a simple implementation of Merge-Sort, using to sort an Array of Integers.UKag: Unity3D?Kr2/KAG3??moduleUsingLinq: TestUtec Logger: Utec Logger is meant to be used the Turbo XS UTEC and UTEC Delta. It is simply a data logger application, it will auto logged based on pre defined conditionals. 
It also takes this information and displays it in a two dimensional table, averaging the values appropriately to ease of log analysis.WOXalizer: Fork of WOX project. http://woxserializer.sourceforge.net/Zeta Backup Validator: A small .NET 3.5 console application that helps you verifying whether your backups succeed by checking files and folders with various configurable rules (folder size, file age, etc.).Zeta Links: A web application for generating and measuring shortcut hyperlinks, similar to bit.ly.Zombie Slayer v1.0: Zombie Slayer v1.0 is an action first person shooter where players can try and survive a never-ending wave of zombies for as long as they can while a timer keeps track of how long they survived and posts times online from longest to shortest.?????????????: ??????????????????,????。


  • CodePlex Daily Summary for Friday, June 15, 2012

    CodePlex Daily Summary for Friday, June 15, 2012Popular ReleasesStackBuilder: StackBuilder 1.0.8.0: + Corrected a few bugs in batch processor + Corrected a bug in layer thumbnail generatorAutoUpdaterdotNET : Autoupdate for VB.NET and C# Developer: AutoUpdater.NET 1.1: Release Notes *New feature added that allows user to select remind later interval.SharePoint KnowledgeBase (ISC 2012): ISC.KBase.Advanced: A full-trust version of the Base application. This version uses a Managed Metadata column for categories and contains other enhancements.Microsoft SQL Server Product Samples: Database: AdventureWorks 2008 OLTP Script: Install AdventureWorks2008 OLTP database from script The AdventureWorks database can be created by running the instawdb.sql DDL script contained in the AdventureWorks 2008 OLTP Script.zip file. The instawdb.sql script depends on two path environment variables: SqlSamplesDatabasePath and SqlSamplesSourceDataPath. The SqlSamplesDatabasePath environment variable is set to the default Microsoft ® SQL Server 2008 path. You will need to change the SqlSamplesSourceDataPath environment variable to th...HigLabo: HigLabo_20120613: Bug fix HigLabo.Mail Decode header encoded by CP1252WipeTouch, a jQuery touch plugin: 1.2.0: Changes since 1.1.0: New: wipeMove event, triggered while moving the mouse/finger. New: added "source" to the result object. Bug fix: sometimes vertical wipe events would not trigger correctly. Bug fix: improved tapToClick handler. General code refactoring. Windows Phone 7 is not supported, yet! Its behaviour is completely broken and would require some special tricks to make it work. Maybe in the future...Phalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3026 (June 2012): Fixes: round( 0.0 ) local TimeZone name TimeZone search compiling multi-script-assemblies PhpString serialization DocDocument::loadHTMLFile() token_get_all() parse_url()BlackJumboDog: Ver5.6.4: 2012.06.13 Ver5.6.4  (1) Web???????、???POST??????????????????Yahoo! UI Library: YUI Compressor for .Net: Version 2.0.0.0 - Ferret: - Merging both 3.5 and 2.0 codebases to a single .NET 2.0 assembly. - MSBuild Task. - NAnt Task.ExcelFileEditor: .CS File: nothingBizTalk Scheduled Task Adapter: Release 4.0: Works with BizTalk Server 2010. Compiled in .NET Framework 4.0. In this new version are available small improvements compared to the current version (3.0). We can highlight the following improvements or changes: 24 hours support in “start time” property. Previous versions had an issue with setting the start time, as it shown 12 hours watch but no AM/PM. Daily scheduler review. 
Solved a small bug on Daily Properties: unable to switch between “Every day” and “on these days” Installation e...Weapsy - ASP.NET MVC CMS: 1.0.0 RC: - Upgrade to Entity Framework 4.3.1 - Added AutoMapper custom version (by nopCommerce Team) - Added missed model properties and localization resources of Plugin Definitions - Minor changes - Fixed some bugsWebSocket4Net: WebSocket4Net 0.7: Changes included in this release: updated ClientEngine added proper exception handling code added state support for callback added property AllowUnstrustedCertificate for JsonWebSocket added properties for sending ping automatically improved JsBridge fixed a uri compatibility issueXenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.8.0 Beta: Catalog and Publication reviews and ratings Store language packs in data base Improve reporting system Improve Import/Export system A lot of WebAdmin app UI improvements Initial implementation of the WebForum app DB indexes Improve and simplify architecture Less abstractions Modernize architecture Improve, simplify and unify API Simplify and improve testing A lot of new unit tests Codebase refactoring and ReSharpering Utilize Castle Windsor Utilize NHibernate ORM ...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.55: Properly handle IE extension to CSS3 grammar that allows for multiple parameters to functional pseudo-class selectors. add new switch -braces:(new|same) that affects where opening braces are placed in multi-line output. The default, "new" puts them on their own new line; "same" outputs them at the end of the previous line. add new optional values to the -inline switch: -inline:(force|noforce), which can be combined with the existing boolean value via comma-separators; value "force" (which...Microsoft Media Platform: Player Framework: MMP Player Framework 2.7 (Silverlight and WP7): Additional DownloadsSMFv2.7 Full Installer (MSI) - This will install everything you need in order to develop your own SMF player application, including the IIS Smooth Streaming Client. It only includes the assemblies. If you want the source code please follow the link above. Smooth Streaming Sample Player - This is a pre-built player that includes support for IIS Smooth Streaming. You can configure the player to playback your content by simplying editing a configuration file - no need to co...Liberty: v3.2.1.0 Release 10th June 2012: Change Log -Added -Liberty is now digitally signed! If the certificate on Liberty.exe is missing, invalid, or does not state that it was developed by "Xbox Chaos, Open Source Developer," your copy of Liberty may have been altered in some (possibly malicious) way. -Reach Mass biped max health and shield changer -Fixed -H3/ODST Fixed all of the glitches that users kept reporting (also reverted the changes made in 3.2.0.2) -Reach Made some tag names clearer and more consistent between m...Media Companion: Media Companion 3.503b: It has been a while, so it's about time we release another build! 
Major effort has been for fixing trailer downloads, plus a little bit of work for episode guide tag in TV show NFOs.Json.NET: Json.NET 4.5 Release 7: Fix - Fixed Metro build to pass Windows Application Certification Kit on Windows 8 Release Preview Fix - Fixed Metro build error caused by an anonymous type Fix - Fixed ItemConverter not being used when serializing dictionaries Fix - Fixed an incorrect object being passed to the Error event when serializing dictionaries Fix - Fixed decimal properties not being correctly ignored with DefaultValueHandlingSP Sticky Notes: SPSticky Solution Package: Install steps:Install the sandbox solution to the site collection Activate the solution Add a SPSticky web part (from a 'Custom' group) to a page that resides on the same web as the StickyList list Initial version, supported functionality: Moving a sticky around (position is saved in list) Resizing of the sticky (size is not saved) Changing the text of the sticky (saved in list) Saving occurs when you click away from the sticky being edited Stickys are filtered based on the curre...New ProjectsAdvanced Utility Libs: This project provides many basic and advanced utility libs for developer.ATopSearchEngine: ATopSearchEngine, a class projectAudio Player 3D: A WPF 3D Audio Player.beginABC: day la project test :)BookmarkSave For VB6: An addin, written in VB.net, for VB6 that preserves Bookmarks and Breakpoints between runs of the IDE.Centos 5 & 6 Managements Packs For System Center Operations Manager 2012: Centos 5 & 6 Managements Packs For System Center Operations Manager 2012ChildProcesses - Child Process Management for .NET: ChildProcesses.NET is a child process management library for the .NET framework. It allows to create child processes, and provides bidirectional extendable interprocess communication based on WCF and NamedPipes out of the box Child and parent processes monitor each other and notify about termination or other events. Child processes are an alternative to AppDomains when parent must continue if the child crashes. It is also poossible to mix 32Bit and 64 Bit processes. Cookie Free Analytics: Cookie Free Analytics (CFA) is a free server side Google Analytics tracking solution for Windows based websites running IIS & ASP.NET. 100% javascript & cookie free - with CFA you can track your visitors & file downloads in Google Analytics without cookies or JavaScript. Ideal for mobile websites or those affected by the EU Cookie law.F# (FSharp) based Windows Phone 7.5 software to answer: Where am I?: Location Finder F# (FSharp) based Windows Phone 7.5 software to answer the simple question: - Where am I? Henyo Word: The summary is required.Houma-Thibodaux .NET User Group: The CodePlex home of the Houma-Thibodaux .NET user group.JqGridHelper: some code for using jqgrid 1. javacsript function for customer mutil-search 2. C# function for parse querystring and build sql query command 3. a sql procedure templete for searching NetFluid.IRC: Mibbit clone in NetFluid with the possibilities to use permanent eggdrop in C# nGo: nGo frameworkNinja Client-Server Game: Naruto based online client-server game.NoLinEq: NoLinEq is a mathematical tool for solving nonlinear equations using different methods.Oculus: Oculus is a real-time ETW listening service that is plugin-basedPinoy Henyo: Response.Redirect (https://henyoword.codeplex.com)Razorblade: Razorblade is intended to be a template for WebDesign/WebDevelopment. It is based on HTML5Boilerplate and uses ASP.NET wtih the Razor Syntax (C#). 
Original HTML5Boilerplate comments are intact with some added comments to explain some of the Razor Syntax.ServiceFramework: The ServiceFramework project is a framework built using Microsoft WCF to address architectural governance issues by tracking the activity of services.Sharp Console: Sharp Console is a Windows Command Line (WCL)alternative written in C#. It targets those who lack access to the WCL or simply wish to use the NET framework instead. It aims to provide the same (and more!) functionality as the WCL. Contribute anything you feel will make it better!SharpKit.on{X}: This project allows you to write on{x} rules using C#shequ01: shequ01 websiteSilena: Porta lectus ac sed scelerisque, pellentesque, cras a et urna enim ultricies, tristique magnis nec proin. Velit tristique, aliquet quis ut mid lorem eros? Dolor magna. Aliquet. Et? Sociis pellentesque, tincidunt aenean cursus, pulvinar! Placerat etiam! Mid eu? Vut a. Placerat duis, risus adipiscing lacus, sagittis magna vut etiam, sed. Et ridiculus! Et sit, enim scelerisque velit tristique vel? Adipiscing vel adipiscing eu. Enim proin egestas lorem, aenean lectus enim vel nisi, augue duis pla...Silverlight 5 Chat: Full-duplex Publish/Subscribe -pattern on TCP/IP: F-Sharp (F#) WCF "PollingDuplex" WebService with SL5 FSharp clientSimple CQRS implemented with F#: Simple CQRS on F# (F-Sharp) 3.0 SOD: sustainable office designerSolvis Control II Viewer: SolvisSC2Viewer kann die Logdaten der Solvis Heizung Steuerung SolvisControl 2 auslesen und grafisch darstellen. testdd061412012tfs01: bvtestddgit061420123: eswrtestddgit061420124: fgtestddhg061420121: sdtestgit061420121: xctesthg06142012hg3: dfsTurismo Digital: Turismo DigitlUmbraco 5 File Picker: ????Umbraco 5 ???????。????????????????????。WeatherFrame: Windows Phone weather app.


  • CodePlex Daily Summary for Monday, June 11, 2012

    CodePlex Daily Summary for Monday, June 11, 2012Popular ReleasesCasanova Language: Casanova IDE alpha release: This is the first release for the Casanova IDE. It features the major capabilities of the framework: support for rules, scripts, input management, and basic content management. The IDE is still under major development. Planned features include: multiplayer support 3D rendering syntax highlighting basic Intellisense slightly improved syntax for rules and scripts audio in-game menus Also, do not forget to download and install OpenAL: http://connect.creativelabs.com/openal/Download...Liberty: v3.2.1.0 Release 10th June 2012: Change Log -Added -Liberty is now digitally signed! If the certificate on Liberty.exe is missing, invalid, or does not state that it was developed by "Xbox Chaos, Open Source Developer," your copy of Liberty may have been altered in some (possibly malicious) way. -Reach Mass biped max health and shield changer -Fixed -H3/ODST Fixed all of the glitches that users kept reporting (also reverted the changes made in 3.2.0.2) -Reach Made some tag names clearer and more consistent between m...SVNUG.CodePlex: Cloud Development with Windows Azure: This release contains the slides for the Cloud Development with Windows Azure presentation.WCF Data Service (OData) Regression & Load Testing Tool: Latest: This is latest stable releaseSHA-1 Hash Checker: SHA-1 Hash Checker (for Windows): Fixed major bugs. Removed false negatives.AutoUpdaterdotNET: AutoUpdater.NET 1.0: Everything seems perfect if you find any problem you can report to http://www.rbsoft.org/contact.htmlMedia Companion: Media Companion 3.503b: It has been a while, so it's about time we release another build! Major effort has been for fixing trailer downloads, plus a little bit of work for episode guide tag in TV show NFOs.Microsoft SQL Server Product Samples: Database: AdventureWorks Sample Reports 2008 R2: AdventureWorks Sample Reports 2008 R2.zip contains several reports include Sales Reason Comparisons SQL2008R2.rdl which uses Adventure Works DW 2008R2 as a data source reference. For more information, go to Sales Reason Comparisons report.Json.NET: Json.NET 4.5 Release 7: Fix - Fixed Metro build to pass Windows Application Certification Kit on Windows 8 Release Preview Fix - Fixed Metro build error caused by an anonymous type Fix - Fixed ItemConverter not being used when serializing dictionaries Fix - Fixed an incorrect object being passed to the Error event when serializing dictionaries Fix - Fixed decimal properties not being correctly ignored with DefaultValueHandlingLINQ Extensions Library: 1.0.3.0: New to release 1.0.3.0:Combinatronics: Combinations (unique) Combinations (with repetition) Permutations (unique) Permutations (with repetition) Convert jagged arrays to fixed multidimensional arrays Convert fixed multidimensional arrays to jagged arrays ElementAtMax ElementAtMin ElementAtAverage New set of array extension (1.0.2.8):Rotate Flip Resize (maintaing data) Split Fuse Replace Append and Prepend extensions (1.0.2.7) IndexOf extensions (1.0.2.7) Ne...????????API for .Net SDK: SDK for .Net ??? Release 1: 6?11????? ??? - ?Entities???????????EntityBase,???ToString()???????json???,??????4.0???????。2.0?3.5???! ??? - Request????????AccessToken??????source=appkey?????。????,????????,???????public_timeline?????????。 ?? - ???ClinetLogin??????????RefreshToken???????false???。 ?? - ???RepostTimeline????Statuses???null???。 ?? - Utility?BuildPostData?,?WeiboParameter??value?NULL????????。 ??????? ??? 
- ??.Net 2.0/3.5/4.0????。??????VS2010??????????。VS2008????????,??????????。 ??? - ??.Net 4.0???SDK...Audio Pitch & Shift: Audio Pitch And Shift 4.5.0: Added Instruments tab for modules Open folder content feature Some bug fixesPython Tools for Visual Studio: 1.5 Beta 1: We’re pleased to announce the release of Python Tools for Visual Studio 1.5 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including: • Supports CPython, IronPython, Jython and PyPy • Python editor with advanced member, signature intellisense and refactoring • Code navigation: “Find all refs”, goto definition, and object browser • Local and remote debugging •...Circuit Diagram: Circuit Diagram 2.0 Beta 1: New in this release: Automatically flip components when placing Delete components using keyboard delete key Resize document Document properties window Print document Recent files list Confirm when exiting with unsaved changes Thumbnail previews in Windows Explorer for CDDX files Show shortcut keys in toolbox Highlight selected item in toolbox Zoom using mouse scroll wheel while holding down ctrl key Plugin support for: Custom export formats Custom import formats Open...Umbraco CMS: Umbraco CMS 5.2 Beta: The future of Umbracov5 represents the future architecture of Umbraco, so please be aware that while it's technically superior to v4 it's not yet on a par feature or performance-wise. What's new? For full details see our http://progress.umbraco.org task tracking page showing all items complete for 5.2. In a nutshellPackage Builder Starter Kits Dynamic Extension Methods Querying / IsHelpers Friendly alt template URLs Localization Various bug fixes / performance enhancements Gett...JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.0.5: JayData is a unified data access library for JavaScript developers to query and update data from different sources like WebSQL, IndexedDB, OData, Facebook or YQL. See it in action in this 6 minutes video New features in JayData 1.0.5http://jaydata.org/blog/jaydata-1.0.5-is-here-with-authentication-support-and-more http://jaydata.org/blog/release-notes Sencha Touch 2 module (read-only)This module can be used to bind data retrieved by JayData to Sencha Touch 2 generated user interface. (exam...32feet.NET: 3.5: This version changes the 32feet.NET library (both desktop and NETCF) to use .NET Framework version 3.5. Previously we compiled for .NET v2.0. There are no code changes from our version 3.4. See the 3.4 release for more information. Changes due to compiling for .NET 3.5Applications should be changed to use NET/NETCF v3.5. Removal of class InTheHand.Net.Bluetooth.AsyncCompletedEventArgs, which we provided on NETCF. We now just use the standard .NET System.ComponentModel.AsyncCompletedEvent...Application Architecture Guidelines: Application Architecture Guidelines 3.0.7: 3.0.7Jolt Environment: Jolt v2 Stable: Many new features. 
Follow development here for more information: http://www.rune-server.org/runescape-development/rs-503-client-server/projects/298763-jolt-environment-v2.html Setup instructions in downloadSharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.5): New fetures:Multilingual Support Max users property in Standings Web Part Games time zone change (UTC +1) bug fix - Version 1.4 locking problem http://euro2012.codeplex.com/discussions/358262 bug fix - Field Title not found (v.1.3) German SP http://euro2012.codeplex.com/discussions/358189#post844228 Bug fix - Access is denied.for users with contribute rights Bug fix - Installing on non-English version of SharePoint Bug fix - Title Rules Installing SharePoint Euro 2012 PredictorSharePoint E...New Projects2D map editor for Game Tool Development class: This project contains a basic 2D map editor, which can read a tileset (or chipset) to create a custom map. The user can then load and save maps previously created (file format .map). ArtifexCore: ArtifexCore - a compilation of unique, original, and revamped RunUO/OrbSA projects.ASP.NET MVC 4 - Sports Store using Visual Studio 2011 Beta: This is a the output of "Sports Store" exercise in Pro ASP.NET MVC 3 Framework by Adam Freeman and Steven Sanderson. Instead of MVC 3 as recommended in the book, I have used MVC 4.clabinet: clabinet is a cloud based file cabinetCopy File Location - Explorer Shortcut: This Explorer Extension adds a shortcut menu to all files and folders to copy the full location to the Clipboard.CSSSEVER: ???????????DoodleLabyrinthLogic: A starter '100 rogues' type logic library for rogue-like buildersEasyFlash Cart Builder: EasyFlash Cart Builder is a tool for linking files together into an EasyFlash cartridge. Generic enumeration: Provides string representation of C# enumerations.Hacker Typer for WP7: This is a hackertyper.net Windows Phone based application HTML Batch Logger: This is a simple Log class that can be used in any .NET C# Console Application. You can use it to log into an HTML file, console window, or database. kinect ????????????: ???、??????????????????????、????????????????。???、??????????????????。?????????、????????????????、??????????????????。laskjdfqewr131231: example omes projectnlite web libraray: Lite Web Framework,????Page,??Ndf???WebApiNMemory - an in-memory relational database for .NET: NMemory is a lightweight in-memory relational database engine that can be hosted by .NET applications. It supports traditional database features like indexes, foreign key relations, transaction handling and isolation.NoManaComponets: A attempt at a reusable library for common tasks in XNA like avatar management, text rendering and shading. Currently abandoned.NP: network client appObject Viewer: A UserControl that can display any object.PowerPoint Graph Creator: The main purpose of this PowerPoint add-in is to help people who sometimes need to draw graphs into the PPT and then for example add some animations or want to load to existing graphs into PPT but don't have a lot of time to re-draw every thing.Project Blue Tigris: This Project aims at enabling users to control and interact with Windows without the need to touch the keyboard or the mouse, through a new user interface using gestures and voice commands. This project will utilize webcam and microphones connected to the computer when installed. This is our vision. 
It would be great if you join us.QTP FT Uninstaller: KnowledgeInbox QTP/FT Uninstaller is a tool designed for uninstalling HP's QuickTest Professional or Functional Testing products in one click. The tool should only be used in cases where QTP was working fine earlier and after some update or installation it stopped working. The tool scans the system registry for all files associated with QTP and deletes them. Note: Using this uninstaller may impact tools like HP Sprinter, Quality Center. You should re-install those tools also after using t...Radius Client for Microsoft® .NET Micro Framework: Client for Remote Authentication Dial In User Service (RADIUS)Regression Suite: RegressionSuite is a software test suite that incorporates measurement of the startup lag, measurement of accurate execution times, generating execution statistics, customized input distributions, and processable regression specific details as part of the regular unit tests. Essentially, RegressionSuite provides the frame-work around which the individual unit regressors are invoked (and details and statistics collected). Unit regressors are grouped into named regressor sets (or modules), a...Sern: Nothing yet.SharePoint List Number to Text Custom Column: Number to text custom column in SharePoint is a project that is useful in any financial SharePoint implementation to automatically generate the corresponding number representation in textual format, this feature is designed to be easy extended to any language and it’s initially in Arabic and English languages.you can find all related source code in download section SharpDND: A attempt to create a open source DLL for the openSRD. Designed with customization in mind if completed the tools included could build out pathfinder and other RPG rulesets.SmartSense: SmartSense is a wearable holographic gesture and voice controlled intelligent system. Human-computer interaction (HCI) is a heavily researched area in today’s technology driven world. Most people experience HCI using a mouse and keyboard as input devices. We wanted a more natural way to interact with the computer that also allows instant access to information. We are developing a real-time system that is always on and available to collect data at any moment but also is accessible “on-the-fly...Sumzlib: sumzlib is a set of class library that provides useful algorithms in static method. this project is written in vb.netWCF Data Service (OData) Regression & Load Testing Tool: This is a tool that is especially being developed to regress OData Services. Current release only support few test like ($select, Top, etc ), more test will be added over the time. It is a Multithreaded Regression Test Tool that can generate result in Excel format including diagnostic error data and performance data like turnaround time as well. It can be used for bulk testing of several services at the same time It can be quite useful to for those who are developing several service...X.Web.Microdata: This project is intended to represent an mitcrodata entitie in the .NET Framework. (Particularly in ASP.NET) The X.Web.Microdata represent the http://www.data-vocabulary.org/ (and Google) microdata notation And X.Web.Microdata.SchemaOrg represent http://schema.org/ microdata notatio


  • CodePlex Daily Summary for Saturday, December 01, 2012

    CodePlex Daily Summary for Saturday, December 01, 2012Popular ReleasesCleverBobCat: CleverBobCat 1.1.1: Fix for 1.1.1.0 - Energy usage on back drive -> fixed - Breaking energy recharge -> fixed - Not stopped when energy disabled -> fixed - Area transfer range -> fixeddcview: DCView 1.4.2: 1.4.0?? ??/??? ??11/29 ??? ??? ??? ?? ??? ??? ?? ???? ???? ?? ???? ?? ?? ?? ??? ?? ?? ??DirectQ: DirectQ II 2012-11-29: A (slightly) modernized port of Quake II to D3D9. You need SM3 or better hardware to run this - if you don't have it, then don't even bother. It should work on Windows Vista, 7 or 8; it may also work on XP but I haven't tested. Known bugs include: Some mods may not work. This is unfortunately due to the nature of Quake II's game DLLs; sometimes a recompile of the game DLL is all that's needed. In any event, ensure that the game DLL is compatible with the last release of Quake II first (...Magelia WebStore Open-source Ecommerce software: Magelia WebStore 2.2: new UI for the Administration console Bugs fixes and improvement version 2.2.215.3JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.5: What's new in JayData 1.2.5For detailed release notes check the release notes. Handlebars template engine supportImplement data manager applications with JayData using Handlebars.js for templating. Include JayDataModules/handlebars.js and begin typing the mustaches :) Blogpost: Handlebars templates in JayData Handlebars helpers and model driven commanding in JayData Easy JayStorm cloud data managementManage cloud data using the same syntax and data management concept just like any other data ...nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.70: Highlight features & improvements: • Performance optimization. • Search engine optimization. ID-less URLs for products, categories, and manufacturers. • Added ACL support (access control list) on products and categories. • Minify and bundle JavaScript files. • Allow a store owner to decide which billing/shipping address fields are enabled/disabled/required (like it's already done for the registration page). • Moved to MVC 4 (.NET 4.5 is required). • Now Visual Studio 2012 is required to work ...SQL Server Partition Management: Partition Management Release 3.0: Release 3.0 adds support for SQL Server 2012 and is backward compatible with SQL Server 2008 and 2005. The release consists of: • A Readme file • The Executable • The source code (Visual Studio project) Enhancements include: -- Support for Columnstore indexes in SQL Server 2012 -- Ability to create TSQL scripts for staging table and index creation operations -- Full support for global date and time formats, locale independent -- Support for binary partitioning column types -- Fixes to is...Command Line Parser Library: 1.9.3.23 beta: Fixes an issue notified by github user sbambrick about parsing negative numbers.MCEBuddy 2.x: MCEBuddy 2.3.10: Critical Update to 2.3.9: Changelog for 2.3.10 (32bit and 64bit) 1. AsfBin executable missing from build 2. Removed extra references from build to avoid conflict 3. Showanalyzer installation now checked on remote engine machine Changelog for 2.3.9 (32bit and 64bit) 1. Added support for WTV output profile 2. Added support for minimizing MCEBuddy to the system tray 3. Added support for custom archive folder 4. Added support to disable subdirectory monitoring 5. 
Added support for better TS fil...DotNetNuke® Community Edition CMS: 07.00.00: Major Highlights Fixed issue that caused profiles of deleted users to be available Removed the postback after checkboxes are selected in Page Settings > Taxonomy Implemented the functionality required to edit security role names and social group names Fixed JavaScript error when using a ";" semicolon as a profile property Fixed issue when using DateTime properties in profiles Fixed viewstate error when using Facebook authentication in conjunction with "require valid profile fo...CODE Framework: 4.0.21128.0: See change notes in the documentation section for details on what's new.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.76: Fixed a typo in ObjectLiteralProperty.IsConstant that caused all object literals to be treated like they were constants, and possibly moved around in the code when they shouldn't be.Kooboo CMS: Kooboo CMS 3.3.0: New features: Dropdown/Radio/Checkbox Lists no longer references the userkey. Instead they refer to the UUID field for input value. You can now delete, export, import content from database in the site settings. Labels can now be imported and exported. You can now set the required password strength and maximum number of incorrect login attempts. Child sites can inherit plugins from its parent sites. The view parameter can be changed through the page_context.current value. Addition of c...Distributed Publish/Subscribe (Pub/Sub) Event System: Distributed Pub Sub Event System Version 3.0: Important Wsp 3.0 is NOT backward compatible with Wsp 2.1. Prerequisites You need to install the Microsoft Visual C++ 2010 Redistributable Package. You can find it at: x64 http://www.microsoft.com/download/en/details.aspx?id=14632x86 http://www.microsoft.com/download/en/details.aspx?id=5555 Wsp now uses Rx (Reactive Extensions) and .Net 4.0 3.0 Enhancements I changed the topology from a hierarchy to peer-to-peer groups. This should provide much greater scalability and more fault-resi...datajs - JavaScript Library for data-centric web applications: datajs version 1.1.0: datajs is a cross-browser and UI agnostic JavaScript library that enables data-centric web applications with the following features: OData client that enables CRUD operations including batching and metadata support using both ATOM and JSON payloads. Single store abstraction that provides a common API on top of HTML5 local storage technologies. Data cache component that allows reading data ranges from a collection and storing them locally to reduce the number of network requests. Changes...Team Foundation Server Administration Tool: 2.2: TFS Administration Tool 2.2 supports the Team Foundation Server 2012 Object Model. Visual Studio 2012 or Team Explorer 2012 must be installed before you can install this tool. You can download and install Team Explorer 2012 from http://aka.ms/TeamExplorer2012. 
There are no functional changes between the previous release (2.1) and this release.Coding Guidelines for C# 3.0, C# 4.0 and C# 5.0: Coding Guidelines for CSharp 3.0, 4.0 and 5.0: See Change History for a detailed list of modifications.Math.NET Numerics: Math.NET Numerics v2.3.0: Portable Library Build: Adds support for WP8 (.Net 4.0 and higher, SL5, WP8 and .NET for Windows Store apps) New: portable build also for F# extensions (.Net 4.5, SL5 and .NET for Windows Store apps) NuGet: portable builds are now included in the main packages, no more need for special portable packages Linear Algebra: Continued major storage rework, in this release focusing on vectors (previous release was on matrices) Thin QR decomposition (in addition to existing full QR) Static Cr...ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.1: +2012-11-25 v3.2.1 +????????。 -MenuCheckBox?CheckedChanged??????,??????????。 -???????window.IDS??????????????。 -?????(??TabCollection,ControlBaseCollection)???,????????????????。 +Grid??。 -??SelectAllRows??。 -??PageItems??,?????????????,?????、??、?????。 -????grid/gridpageitems.aspx、grid/gridpageitemsrowexpander.aspx、grid/gridpageitems_pagesize.aspx。 -???????????????????。 -??ExpandAllRowExpanders??,?????????????????(grid/gridrowexpanderexpandall2.aspx)。 -??????ExpandRowExpande...VidCoder: 1.4.9 Beta: Updated HandBrake core to SVN 5079. Fixed crashes when encoding DVDs with title gaps.New Projects4SQH: FourSQHackathon project repositoryASP4 Portscan: A online portscan deployed with ASP and PerlscriptAviSynth language service for Visual Studio: This project provides a Visual Studio language extension for the AviSynth scripting language. AviSynth is a scripted frameserver for video post-production and can be found at http://avisynth.orgBuildSlip: GUI interface application for creating slipsteam files (service pack and/or cumulative update) for SQL2008 installationC#apture the Flag AI: C#apture the Flag AI is a project designed to enable C# support for the excellent AI competition being held at http://aisandbox.com.Channel Performing (Windows Azure Media Service Example): Windows Azure Media Service solution example and portal + Tag + Comment + Search + Membership + Progress List + and other featuresCivilinsation: C# DLL C++ Wrapper Swig Civilization INSA de RENNES Département Informatique 4INFOClickOnceSxquer: awesome Combinator CoffeeScript Preprocessor: CoffeeScript preprocessor for the Combinator Orchard module (https://combinator.codeplex.com/).DineshTest: aDirectoryEnumeratorAsync: Proof of concept for using .NET 4.5 async/await for improving performance enumeration of of file system entries.Grahpic_Salon: It's a test web project.jQuery.gMenu: test projectLIve For Speed Statistics: UskoroMIDI.NET: MIDI.NET allows any .NET developer to access the power of MIDI without doing P/Invokes.Olympia Open College: AACS4134Peto: A college project for a claims submission application.QueuedSmtpClient: A C# library that provides an smtp client built on top of the standard SmtpClient class which is able to serialize email messages if no network connection is available. Serialized emails are sent later if a network connection is available.Simple Merge Sort: This is a simple implementation of Merge-Sort, using to sort an Array of Integers.Simple.Json: Easy to use framework for serializing to and from json. Supports typed dto objects, dynamic dto objects and immutable dto types like F# record types.Sosyal: Sosyal as.teamgujju: This Project is now in planing phase. 
It will be released shortlyTFS Branch Permission Removal Event Subscriber: When you create a branch in TFS, the explicit TFS source control permissions from the source branch are copied to the target branch. Let's change that...Wild Template: Simple template engine for .NETXNA in WPF: WPF Canvas component that allows hosting and presentation of an XNA window from within WPF applications. ZombieSurvival: Yo-ho-ho and the buttle og rum


  • CodePlex Daily Summary for Thursday, October 11, 2012

    CodePlex Daily Summary for Thursday, October 11, 2012Popular ReleasesOstrivDB: OstrivDB 0.1: - Storage configuration: objects serialization (Xml, Json), storage file compressing, data block size. - Caching for Select queries. - Indexing. - Batch of queries. - No special query language (LINQ used). - Integrated sorting and paging. - Multithreaded data processing.Mido: Mido v0.7: Mido is the simplest utility that helps to make watermarks on images and resize them. It has a very thin installer. The program has beta mark but it is able to draw watermark text, watermark images, resize pictures. Change list: + Opacity option + Stroke option + Bold, italic, underline options + Show progress during loading of images + Allow rotate watermart + Allow write multiline text as watermark + Add text aligment if text contains several lines + Add button 'clear custom position' + A...D3 Loot Tracker: 1.5.4: Fixed a bug where the server ip was not logged properly in the stats file.Captcha MVC: Captcha Mvc 2.1.2: v 2.1.2: Fixed problem with serialization. Made all classes from a namespace Jetbrains.Annotaions as the internal. Added autocomplete attribute and autocorrect attribute for captcha input element. Minor changes. v 2.1.1: Fixed problem with serialization. Minor changes. v 2.1: Added support for storing captcha in the session or cookie. See the updated example. Updated example. Minor changes. v 2.0.1: Added support for a partial captcha. Now you can easily customize the layout, s...DotNetNuke® Community Edition CMS: 06.02.04: Major Highlights Fixed issue where the module printing function was only visible to administrators Fixed issue where pane level skinning was being assigned to a default container for any content pane Fixed issue when using password aging and FB / Google authentication Fixed issue that was causing the DateEditControl to not load the assigned value Fixed issue that stopped additional profile properties to be displayed in the member directory after modifying the template Fixed er...Advanced DataGridView with Excel-like auto filter: 1.0.0.0: ?????? ??????Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.69: Fix for issue #18766: build task should not build the output if it's newer than all the input files. Fix for Issue #18764: build taks -res switch not working. update build task to concatenate input source and then minify, rather than minify and then concatenate. include resource string-replacement root name in the assumed globals list. Stop replacing new Date().getTime() with +new Date -- the latter is smaller, but turns out it executes up to 45% slower. add CSS support for single-...WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.3: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features Attachable Behaviors AwaitableUI extensions Controls Converters Debugging helpers Extension methods Imaging helpers IO helpers VisualTree helpers Samples Recent changes NOTE:...VidCoder: 1.4.4 Beta: Fixed inability to create new presets with "Save As".MCEBuddy 2.x: MCEBuddy 2.3.2: Changelog for 2.3.2 (32bit and 64bit) 1. Added support for generating XBMC XML NFO files for files in the conversion queue (store it along with the source video with source video name.nfo). Right click on the file in queue and select generate XML 2. 
UI bugifx, start and end trim box locations interchanged 3. Added support for removing commercials from non DVRMS/WTV files (MP4, AVI etc) 4. Now checking for Firewall port status before enabling (might help with some firewall problems) 5. User In...Sandcastle Help File Builder: SHFB v1.9.5.0 with Visual Studio Package: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release supports the Sandcastle October 2012 Release (v2.7.1.0). It includes full support for generating, installing, and removing MS Help Viewer files. This new release suppor...ClosedXML - The easy way to OpenXML: ClosedXML 0.68.0: ClosedXML now resolves formulas! Yes it finally happened. If you call cell.Value and it has a formula the library will try to evaluate the formula and give you the result. For example: var wb = new XLWorkbook(); var ws = wb.AddWorksheet("Sheet1"); ws.Cell("A1").SetValue(1).CellBelow().SetValue(1); ws.Cell("B1").SetValue(1).CellBelow().SetValue(1); ws.Cell("C1").FormulaA1 = "\"The total value is: \" & SUM(A1:B2)"; var...Json.NET: Json.NET 4.5 Release 10: New feature - Added Portable build to NuGet package New feature - Added GetValue and TryGetValue with StringComparison to JObject Change - Improved duplicate object reference id error message Fix - Fixed error when comparing empty JObjects Fix - Fixed SecAnnotate warnings Fix - Fixed error when comparing DateTime JValue with a DateTimeOffset JValue Fix - Fixed serializer sometimes not using DateParseHandling setting Fix - Fixed error in JsonWriter.WriteToken when writing a DateT...Readable Passphrase Generator: KeePass Plugin 0.7.2: Changes: Tested against KeePass 2.20.1 Tested under Ubuntu 12.10 (and KeePass 2.20) Added GenerateAsUtf8 method returning the encrypted passphrase as a UTF8 byte array.TelerikMvcGridCustomBindingHelper: Version 1.0.15.279-RC3: TelerikMvcGridCustomBindingHelper 1.0.15.279 RC3 Release notes: This is a RC version (hopefully the last one), please test and report any error or problem you encounter. Configurable null handling when filtering (AcceptNullValuesWhenFiltering, NullSubstitutes and NullAliases) Internal DynamicWhereClause improvments GridGridCustomBindingHelper.UseProjections method now acept ProjectionsOptions parameter for easely determine which properties should be projected Isolate and hide some are...JSLint for Visual Studio 2010: 1.4.2: 1.4.2patterns & practices: Prism: Prism for .NET 4.5: This is a release does not include any functionality changes over Prism 4.1 Desktop. These assemblies target .NET 4.5. These assemblies also were compiled against updated dependencies: Unity 3.0 and Common Service Locator (Portable Class Library).Snoop, the WPF Spy Utility: Snoop 2.8.0: Snoop 2.8.0Announcing Snoop 2.8.0! It's been exactly six months since the last release, and this one has a bunch of goodies in it. In particular, there is now a PowerShell scripting tab, compliments of Bailey Ling. With this tab, the possibilities are limitless. It basically lets you automate/script the application that you are Snooping. Bailey has a couple blog posts (one and two) on his tab already, and I am sure more is to come. Please note that if you do not have PowerShell installed, y...Z3: Z3 4.1.2: Minor fixes. 
Now, z3 compiles with gcc 4.7.x.NET Micro Framework: .NET MF 4.3 (Beta): This is the 4.3 Beta version of the .NET Micro Framework. Feature List for v4.3 Support for Visual Studio 2012 (including the Windows Desktop Express version) All v4.2 QFEs features and bug fixes (PWM enhancements, lwIP and network driver reliability improvements, Analog Output, WinUSB and latest GCC support) Improved diagnostic information for deployment Decreased boot time Bug fixes Work Item 1736 - Create link for MFDeploy under start menu Work Item 1504 - Customizing lwIP o...New Projects.NET 4.0 Object/Function DLL interface: A Visual Basic .NET 4.0 Object / Function interface for quick and easy updating.AzureDirectory Library for Lucene.Net: This project allows you to create Lucene Indexes via a Lucene Directory object which uses Windows Azure BlobStorage for persistent storage.Building Modern Mobile Web Apps: This project provides guidance on building mobile web experiences using HTML5, CSS3, and JavaScript. Developing web apps for mobile browsers can be less forgiviC# Singleton Base-Class: C# Singleton Base Class is a single, simple C# class used to implement the Singleton pattern through inheritance.C++ Unit Test Library for Windows Store apps with Async Helper: This Visual Studio extension will install a project template for C++ Unit Test Library for Windows Store apps that contains async helper.Citrix Mobile Application SDK Samples: This site contains our latest prototype samples for the Citrix Mobile Application SDK for you to play with.eBrain Engine: eBrain Engine is a software to administer neuropsychological tests by means of advanced I/F devices like BCI P300 and eye-tracking systemsFairycake: A game about a little witch. Java. FedEx Connector for AbleCommerce Gold: This plugin provides FedEx rating and tracking services for the AbleCommerce shopping cart.Free - Simple Phone Book (SimPB): An alternative to backup your telephone contacts. Portable, easy to use, multilingual.GPXLocalTime2UTC: Time Format Converter, from Local to UTC, for GPX Files A small utility made to open GPX files (XML), search for the "time" tag, then transform the local time Grid Solutions Framework: Core classes to manage, analyze, and visualize real-time and historical data. This project combines the time-series framework and TVA code library projects.HireMe: HireMe is a HR application that works in conjunction with the HiremeMagazine.com website. The app will run in Android, iOS and WebOS.my-sim-asdf: my greate toolOdeToFoodMvc4: Source code for "Building Applications with ASP.NET MVC 4"OrgCharts for SharePoint: Months of planning, design, development and testing have gone into a truly phenomenal Org Chart experiencePDC BW2: A pkm editor application that tries to make easier legal pkm Packed with Legality Analysis, Event downloader and more...PI Payroll system: This is a payroll system for People Index, LLC.Project13251011: fsdQuickWick.NET - Agile Pproject Management: QuickWick.NET is a simple and effective web-based solution for agile project management.SharePoint 2010 Top Nav: This project was created when a project I was on required a Navigation scheme to manage site collections. 
SisGAC: Sistema de Gerenciamento de Artigos para CongressosTriDes_project_3AO: encryptohideVB6Doc: Visual Basic 6 Documentor (VB6 Doc)VideoChat: Simple VideoChat web-application on ASP.NET MVC4, SignalR, Wowza Media Server/Windows Updater Open: This is an alternative to the Windows Update software on all versions of Windows OS and the WSUS provided for corporations to update multiple machines.WOWAddonsUpdater: WPF App to update lua addons for the Blizzard World of warcraft gameyygua: email:huliang@yahoo.cnZJUDTS2: ????Silverlight?????????

    Read the article

  • Oracle Database 12c: Information Lifecycle Management (ILM) Storage Enhancements

    - by Liu Maclean
    Oracle Database 12c????Information Lifecycle Management ILM ?????????Storage Enhancements ???????? Lifecycle Management ILM ????????? Automatic Data Placement ??????, ??ADP? ?????? 12c???????Datafile??? Online Move Datafile, ????????????????datafile???????,??????????????? ????(12.1.0.1)Automatic Data Optimization?heat map????????: ????????? (CDB)?????Automatic Data Optimization?heat map Row-level policies for ADO are not supported for Temporal Validity. Partition-level ADO and compression are supported if partitioned on the end-time columns. Row-level policies for ADO are not supported for in-database archiving. Partition-level ADO and compression are supported if partitioned on the ORA_ARCHIVE_STATE column. Custom policies (user-defined functions) for ADO are not supported if the policies default at the tablespace level. ADO does not perform checks for storage space in a target tablespace when using storage tiering. ADO is not supported on tables with object types or materialized views. ADO concurrency (the number of simultaneous policy jobs for ADO) depends on the concurrency of the Oracle scheduler. If a policy job for ADO fails more than two times, then the job is marked disabled and the job must be manually enabled later. Policies for ADO are only run in the Oracle Scheduler maintenance windows. Outside of the maintenance windows all policies are stopped. The only exceptions are those jobs for rebuilding indexes in ADO offline mode. ADO has restrictions related to moving tables and table partitions. ??????row,segment???????????ADO??,?????create table?alter table?????? ????ADO??,??????????????,???????????????? storage tier , ?????????storage tier?????????, ??????????????ADO??????????? segment?row??group? ?CREATE TABLE?ALERT TABLE???ILM???,??????????????????ADO policy? ??ILM policy???????????????? ??????? ????ADO policy, ?????alter table  ???????,?????????????? CREATE TABLE sales_ado (PROD_ID NUMBER NOT NULL, CUST_ID NUMBER NOT NULL, TIME_ID DATE NOT NULL, CHANNEL_ID NUMBER NOT NULL, PROMO_ID NUMBER NOT NULL, QUANTITY_SOLD NUMBER(10,2) NOT NULL, AMOUNT_SOLD NUMBER(10,2) NOT NULL ) ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT AFTER 6 MONTHS OF NO ACCESS; SQL> SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled 2 FROM USER_ILMPOLICIES; POLICY_NAME POLICY_TYPE ENABLED -------------------- -------------------------- -------------- P41 DATA MOVEMENT YES ALTER TABLE sales MODIFY PARTITION sales_1995 ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT AFTER 6 MONTHS OF NO ACCESS; SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled FROM USER_ILMPOLICIES; POLICY_NAME POLICY_TYPE ENABLE ------------------------ ------------- ------ P1 DATA MOVEMENT YES P2 DATA MOVEMENT YES /* You can disable an ADO policy with the following */ ALTER TABLE sales_ado ILM DISABLE POLICY P1; /* You can delete an ADO policy with the following */ ALTER TABLE sales_ado ILM DELETE POLICY P1; /* You can disable all ADO policies with the following */ ALTER TABLE sales_ado ILM DISABLE_ALL; /* You can delete all ADO policies with the following */ ALTER TABLE sales_ado ILM DELETE_ALL; /* You can disable an ADO policy in a partition with the following */ ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DISABLE POLICY P2; /* You can delete an ADO policy in a partition with the following */ ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DELETE POLICY P2; ILM ???????: ?????ILM ADP????,???????: ?????? ???? 
activity tracking, ????2????????,???????????????????: SEGMENT-LEVEL???????????????????? ROW-LEVEL????????,??????? ????????: 1??????? SEGMENT-LEVEL activity tracking ALTER TABLE interval_sales ILM  ENABLE ACTIVITY TRACKING SEGMENT ACCESS ???????INTERVAL_SALES??segment level  activity tracking,?????????????????? 2? ??????????? ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (CREATE TIME , WRITE TIME); 3????????? ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING  (READ TIME); ?12.1.0.1.0?????? ??HEAT_MAP??????????, ?????system??session?????heap_map????????????? ?????????HEAT MAP??,? ALTER SYSTEM SET HEAT_MAP = ON; ?HEAT MAP??????,??????????????????????????  ??SYSTEM?SYSAUX????????????? ???????HEAT MAP??: ALTER SYSTEM SET HEAT_MAP = OFF; ????? HEAT_MAP????, ?HEAT_MAP??? ?????????????????????? ?HEAT_MAP?????????Automatic Data Optimization (ADO)??? ??ADO??,Heat Map ?????????? ????V$HEAT_MAP_SEGMENT ??????? HEAT MAP?? SQL> select * from V$heat_map_segment; no rows selected SQL> alter session set heat_map=on; Session altered. SQL> select * from scott.emp; EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO ---------- ---------- --------- ---------- --------- ---------- ---------- ---------- 7369 SMITH CLERK 7902 17-DEC-80 800 20 7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30 7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30 7566 JONES MANAGER 7839 02-APR-81 2975 20 7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30 7698 BLAKE MANAGER 7839 01-MAY-81 2850 30 7782 CLARK MANAGER 7839 09-JUN-81 2450 10 7788 SCOTT ANALYST 7566 19-APR-87 3000 20 7839 KING PRESIDENT 17-NOV-81 5000 10 7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30 7876 ADAMS CLERK 7788 23-MAY-87 1100 20 7900 JAMES CLERK 7698 03-DEC-81 950 30 7902 FORD ANALYST 7566 03-DEC-81 3000 20 7934 MILLER CLERK 7782 23-JAN-82 1300 10 14 rows selected. SQL> select * from v$heat_map_segment; OBJECT_NAME SUBOBJECT_NAME OBJ# DATAOBJ# TRACK_TIM SEG SEG FUL LOO CON_ID -------------------- -------------------- ---------- ---------- --------- --- --- --- --- ---------- EMP 92997 92997 23-JUL-13 NO NO YES NO 0 ??v$heat_map_segment???,?v$heat_map_segment??????????????X$HEATMAPSEGMENT V$HEAT_MAP_SEGMENT displays real-time segment access information. Column Datatype Description OBJECT_NAME VARCHAR2(128) Name of the object SUBOBJECT_NAME VARCHAR2(128) Name of the subobject OBJ# NUMBER Object number DATAOBJ# NUMBER Data object number TRACK_TIME DATE Timestamp of current activity tracking SEGMENT_WRITE VARCHAR2(3) Indicates whether the segment has write access: (YES or NO) SEGMENT_READ VARCHAR2(3) Indicates whether the segment has read access: (YES or NO) FULL_SCAN VARCHAR2(3) Indicates whether the segment has full table scan: (YES or NO) LOOKUP_SCAN VARCHAR2(3) Indicates whether the segment has lookup scan: (YES or NO) CON_ID NUMBER The ID of the container to which the data pertains. Possible values include:   0: This value is used for rows containing data that pertain to the entire CDB. This value is also used for rows in non-CDBs. 1: This value is used for rows containing data that pertain to only the root n: Where n is the applicable container ID for the rows containing data The Heat Map feature is not supported in CDBs in Oracle Database 12c, so the value in this column can be ignored. ??HEAP MAP??????????????????,????DBA_HEAT_MAP_SEGMENT???????? ???????HEAT_MAP_STAT$?????? ??Automatic Data Optimization??????: ????1: SQL> alter system set heat_map=on; ?????? ????????????? scott?? http://www.askmaclean.com/archives/scott-schema-script.html SQL> grant all on dbms_lock to scott; ????? 
SQL> grant dba to scott; ????? @ilm_setup_basic C:\APP\XIANGBLI\ORADATA\MACLEAN\ilm.dbf @tktgilm_demo_env_setup SQL> connect scott/tiger ; ???? SQL> select count(*) from scott.employee; COUNT(*) ---------- 3072 ??? 1 ?? SQL> set serveroutput on SQL> exec print_compression_stats('SCOTT','EMPLOYEE'); Compression Stats ------------------ Uncmpressed : 3072 Adv/basic compressed : 0 Others : 0 PL/SQL ???????? ???????3072?????? ????????? ????policy ???????????? alter table employee ilm add policy row store compress advanced row after 3 days of no modification / SQL> set serveroutput on SQL> execute list_ilm_policies; -------------------------------------------------- Policies defined for SCOTT -------------------------------------------------- Object Name------ : EMPLOYEE Subobject Name--- : Object Type------ : TABLE Inherited from--- : POLICY NOT INHERITED Policy Name------ : P1 Action Type------ : COMPRESSION Scope------------ : ROW Compression level : ADVANCED Tier Tablespace-- : Condition type--- : LAST MODIFICATION TIME Condition days--- : 3 Enabled---------- : YES -------------------------------------------------- PL/SQL ???????? SQL> select sysdate from dual; SYSDATE -------------- 29-7? -13 SQL> execute set_back_chktime(get_policy_name('EMPLOYEE',null,'COMPRESSION','ROW','ADVANCED',3,null,null),'EMPLOYEE',null,6); Object check time reset ... -------------------------------------- Object Name : EMPLOYEE Object Number : 93123 D.Object Numbr : 93123 Policy Number : 1 Object chktime : 23-7? -13 08.13.42.000000 ?? Distnt chktime : 0 -------------------------------------- PL/SQL ???????? ?policy?chktime???6??, ????set_back_chktime???????????????“????”?,?????????,???????? ?????? alter system flush buffer_cache; alter system flush buffer_cache; alter system flush shared_pool; alter system flush shared_pool; SQL> execute set_window('MONDAY_WINDOW','OPEN'); Set Maint. Window OPEN ----------------------------- Window Name : MONDAY_WINDOW Enabled? : TRUE Active? : TRUE ----------------------------- PL/SQL ???????? SQL> exec dbms_lock.sleep(60) ; PL/SQL ???????? SQL> exec print_compression_stats('SCOTT', 'EMPLOYEE'); Compression Stats ------------------ Uncmpressed : 338 Adv/basic compressed : 2734 Others : 0 PL/SQL ???????? ??????????????? Adv/basic compressed : 2734 ??????? SQL> col object_name for a20 SQL> select object_id,object_name from dba_objects where object_name='EMPLOYEE'; OBJECT_ID OBJECT_NAME ---------- -------------------- 93123 EMPLOYEE SQL> execute list_ilm_policy_executions ; -------------------------------------------------- Policies execution details for SCOTT -------------------------------------------------- Policy Name------ : P22 Job Name--------- : ILMJOB48 Start time------- : 29-7? -13 08.37.45.061000 ?? End time--------- : 29-7? -13 08.37.48.629000 ?? ----------------- Object Name------ : EMPLOYEE Sub_obj Name----- : Obj Type--------- : TABLE ----------------- Exec-state------- : SELECTED FOR EXECUTION Job state-------- : COMPLETED SUCCESSFULLY Exec comments---- : Results comments- : --- -------------------------------------------------- PL/SQL ???????? ILMJOB48?????policy?JOB,?12.1.0.1??J00x???? ?MMON_SLAVE???M00x???15????????? select sample_time,program,module,action from v$active_session_history where action ='KDILM background EXEcution' order by sample_time; 29-7? -13 08.16.38.369000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.17.38.388000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.17.39.390000000 ?? 
ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.23.38.681000000 ?? ORACLE.EXE (M002) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.32.38.968000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.33.39.993000000 ?? ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.33.40.993000000 ?? ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.36.40.066000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.37.42.258000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.37.43.258000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.37.44.258000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.38.42.386000000 ?? ORACLE.EXE (M001) MMON_SLAVE KDILM background EXEcution select distinct action from v$active_session_history where action like 'KDILM%' KDILM background CLeaNup KDILM background EXEcution SQL> execute set_window('MONDAY_WINDOW','CLOSE'); Set Maint. Window CLOSE ----------------------------- Window Name : MONDAY_WINDOW Enabled? : TRUE Active? : FALSE ----------------------------- PL/SQL ???????? SQL> drop table employee purge ; ????? ???? ????? spool ilm_usecase_1_cleanup.lst @ilm_demo_cleanup ; spool off

    Read the article

  • NoSQL with RavenDB and ASP.NET MVC - Part 1

    - by shiju
    A while back, I blogged NoSQL with MongoDB, NoRM and ASP.NET MVC Part 1 and Part 2 on how to use MongoDB with an ASP.NET MVC application. The NoSQL movement is getting big attention, and RavenDB is the latest addition to the NoSQL and document database world. RavenDB is an Open Source (with a commercial option) document database for the .NET/Windows platform developed by Ayende Rahien. Raven stores schema-less JSON documents, allows you to define indexes using LINQ queries, and focuses on low latency and high performance. RavenDB is a .NET-focused document database which comes with a fully functional .NET client API and supports LINQ. RavenDB comes with two components, a server and a client API. RavenDB is a REST-based system, so you can write your own HTTP client API. As a .NET developer, RavenDB is becoming my favorite document database. Unlike other document databases, RavenDB supports transactions using System.Transactions. It also supports both embedded and server modes of the database. You can access the RavenDB site at http://ravendb.net

A demo App with ASP.NET MVC
Let's create a simple demo app with RavenDB and ASP.NET MVC. To work with RavenDB, do the following steps. Go to http://ravendb.net/download and download the latest build. Unzip the downloaded file. Go to the /Server directory and run RavenDB.exe. This will start the RavenDB server listening on localhost:8080. You can change the port of RavenDB by modifying the "Raven/Port" appSetting value in the RavenDB.exe.config file. When running RavenDB, it will automatically create a database in the /Data directory. You can change the data directory name by modifying the "Raven/DataDir" appSetting value in the RavenDB.exe.config file. RavenDB provides a browser-based admin tool. When the Raven server is running, you can use it to view and edit documents and indexes in your browser. The web admin tool is available at http://localhost:8080. Below are some screenshots of the web admin tool.

Working with ASP.NET MVC
To work with RavenDB in our demo ASP.NET MVC application, do the following steps.

Step 1 - Add reference to Raven Client API
In our ASP.NET MVC application, add a reference to Raven.Client.Lightweight.dll from the Client directory.

Step 2 - Create DocumentStore
The document store should be created once per application. Let's create a DocumentStore on application start-up in the Global.asax.cs:

documentStore = new DocumentStore { Url = "http://localhost:8080/" };
documentStore.Initialise();

The above code will create a RavenDB document store that talks to the server at localhost on port 8080.

Step 3 - Create DocumentSession on BeginRequest
Let's create a DocumentSession on the BeginRequest event in the Global.asax.cs. We are using a document session for every unit of work. In our demo app, every HTTP request will be a single Unit of Work (UoW).
BeginRequest += (sender, args) =>
    HttpContext.Current.Items[RavenSessionKey] = documentStore.OpenSession();

Step 4 - Destroy the DocumentSession on EndRequest

EndRequest += (o, eventArgs) =>
{
    var disposable = HttpContext.Current.Items[RavenSessionKey] as IDisposable;
    if (disposable != null)
        disposable.Dispose();
};

At the end of the HTTP request, we destroy the DocumentSession object. The code block below shows all the code in the Global.asax.cs:

private const string RavenSessionKey = "RavenMVC.Session";
private static DocumentStore documentStore;

protected void Application_Start()
{
    //Create a DocumentStore in Application_Start
    //DocumentStore should be created once per application and stored as a singleton.
    documentStore = new DocumentStore { Url = "http://localhost:8080/" };
    documentStore.Initialise();
    AreaRegistration.RegisterAllAreas();
    RegisterRoutes(RouteTable.Routes);
    //DI using Unity 2.0
    ConfigureUnity();
}

public MvcApplication()
{
    //Create a DocumentSession on BeginRequest
    //create a document session for every unit of work
    BeginRequest += (sender, args) =>
        HttpContext.Current.Items[RavenSessionKey] = documentStore.OpenSession();
    //Destroy the DocumentSession on EndRequest
    EndRequest += (o, eventArgs) =>
    {
        var disposable = HttpContext.Current.Items[RavenSessionKey] as IDisposable;
        if (disposable != null)
            disposable.Dispose();
    };
}

//Getting the current DocumentSession
public static IDocumentSession CurrentSession
{
    get { return (IDocumentSession)HttpContext.Current.Items[RavenSessionKey]; }
}

We have set up all the necessary code in the Global.asax.cs for working with RavenDB. For our demo app, let's write a domain class:

public class Category
{
    public string Id { get; set; }
    [Required(ErrorMessage = "Name Required")]
    [StringLength(25, ErrorMessage = "Must be less than 25 characters")]
    public string Name { get; set; }
    public string Description { get; set; }
}

We have created a simple domain entity, Category. Let's create a repository class for performing CRUD operations against our domain entity Category.
public interface ICategoryRepository
{
    Category Load(string id);
    IEnumerable<Category> GetCategories();
    void Save(Category category);
    void Delete(string id);
}

public class CategoryRepository : ICategoryRepository
{
    private IDocumentSession session;
    public CategoryRepository()
    {
        session = MvcApplication.CurrentSession;
    }
    //Load category based on Id
    public Category Load(string id)
    {
        return session.Load<Category>(id);
    }
    //Get all categories
    public IEnumerable<Category> GetCategories()
    {
        var categories = session.LuceneQuery<Category>()
            .WaitForNonStaleResults()
            .ToArray();
        return categories;
    }
    //Insert/Update category
    public void Save(Category category)
    {
        if (string.IsNullOrEmpty(category.Id))
        {
            //insert new record
            session.Store(category);
        }
        else
        {
            //edit record
            var categoryToEdit = Load(category.Id);
            categoryToEdit.Name = category.Name;
            categoryToEdit.Description = category.Description;
        }
        //save the document session
        session.SaveChanges();
    }
    //delete a category
    public void Delete(string id)
    {
        var category = Load(id);
        session.Delete<Category>(category);
        session.SaveChanges();
    }
}

For every CRUD operation, we take the current document session object from the HttpContext object: session = MvcApplication.CurrentSession; We are calling the static CurrentSession property from the Global.asax.cs:

public static IDocumentSession CurrentSession
{
    get { return (IDocumentSession)HttpContext.Current.Items[RavenSessionKey]; }
}

Retrieve Entities
The Load method gets a single Category object based on the Id. RavenDB works on REST principles, and the Id would look like categories/1. The Id is created automatically when a new object is inserted into the document store. The REST URI categories/1 represents a single category object with an Id representation of 1.

public Category Load(string id)
{
    return session.Load<Category>(id);
}

The GetCategories method returns all the categories by calling the session.LuceneQuery method. RavenDB uses a Lucene query syntax for querying. I will explain more details about querying and indexing in my future posts.

public IEnumerable<Category> GetCategories()
{
    var categories = session.LuceneQuery<Category>()
        .WaitForNonStaleResults()
        .ToArray();
    return categories;
}

Insert/Update entity
For inserting/updating a Category entity, we have created a Save method in the repository class. If the Id property of Category is null, we call the Store method of the DocumentSession to insert a new record. For editing an existing record, we load the Category object and assign the new values to the loaded Category object. The session.SaveChanges() call will save the changes to the document store.
//Insert/Update category
public void Save(Category category)
{
    if (string.IsNullOrEmpty(category.Id))
    {
        //insert new record
        session.Store(category);
    }
    else
    {
        //edit record
        var categoryToEdit = Load(category.Id);
        categoryToEdit.Name = category.Name;
        categoryToEdit.Description = category.Description;
    }
    //save the document session
    session.SaveChanges();
}

Delete Entity
In the Delete method, we call the document session's Delete method and then call the SaveChanges method to reflect the changes in the document store.

public void Delete(string id)
{
    var category = Load(id);
    session.Delete<Category>(category);
    session.SaveChanges();
}

Let's create an ASP.NET MVC controller and controller actions for handling CRUD operations for the domain class Category:

public class CategoryController : Controller
{
    private ICategoryRepository categoryRepository;
    //DI enabled constructor
    public CategoryController(ICategoryRepository categoryRepository)
    {
        this.categoryRepository = categoryRepository;
    }
    public ActionResult Index()
    {
        var categories = categoryRepository.GetCategories();
        if (categories == null)
            return RedirectToAction("Create");
        return View(categories);
    }
    [HttpGet]
    public ActionResult Edit(string id)
    {
        var category = categoryRepository.Load(id);
        return View("Save", category);
    }
    // GET: /Category/Create
    [HttpGet]
    public ActionResult Create()
    {
        var category = new Category();
        return View("Save", category);
    }
    [HttpPost]
    public ActionResult Save(Category category)
    {
        if (!ModelState.IsValid)
        {
            return View("Save", category);
        }
        categoryRepository.Save(category);
        return RedirectToAction("Index");
    }
    [HttpPost]
    public ActionResult Delete(string id)
    {
        categoryRepository.Delete(id);
        var categories = categoryRepository.GetCategories();
        return PartialView("CategoryList", categories);
    }
}

RavenDB is an awesome document database and I hope that it will be a winner in the .NET document database world. The source code of the demo application is available at http://ravenmvc.codeplex.com/
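The controller above receives its ICategoryRepository through a DI-enabled constructor, and Application_Start calls a ConfigureUnity() method whose body is not shown in this excerpt. Purely as a hedged sketch of how that wiring might look with Unity 2.0 (not the author's actual code), one option is a small custom controller factory; the UnityControllerFactory class name here is hypothetical:

//Hypothetical Unity wiring - a sketch only, not code from the original post
private void ConfigureUnity()
{
    var container = new UnityContainer();
    //map the repository interface to its RavenDB-backed implementation
    container.RegisterType<ICategoryRepository, CategoryRepository>();
    //let MVC build controllers (and their constructor dependencies) through Unity
    ControllerBuilder.Current.SetControllerFactory(new UnityControllerFactory(container));
}

public class UnityControllerFactory : DefaultControllerFactory
{
    private readonly IUnityContainer container;
    public UnityControllerFactory(IUnityContainer container)
    {
        this.container = container;
    }
    protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
    {
        //fall back to the default behaviour when no controller type was resolved for the route
        if (controllerType == null)
            return base.GetControllerInstance(requestContext, controllerType);
        return (IController)container.Resolve(controllerType);
    }
}

This sketch needs using directives for System, System.Web.Mvc, System.Web.Routing and Microsoft.Practices.Unity; any equivalent registration approach would work just as well.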

    Read the article

  • VS 2010 SP1 and SQL CE

    - by ScottGu
    Last month we released the Beta of VS 2010 Service Pack 1 (SP1).  You can learn more about the VS 2010 SP1 Beta from Jason Zander’s two blog posts about it, and from Scott Hanselman’s blog post that covers some of the new capabilities enabled with it.   You can download and install the VS 2010 SP1 Beta here. Last week I blogged about the new Visual Studio support for IIS Express that we are adding with VS 2010 SP1. In today’s post I’m going to talk about the new VS 2010 SP1 tooling support for SQL CE, and walkthrough some of the cool scenarios it enables.  SQL CE – What is it and why should you care? SQL CE is a free, embedded, database engine that enables easy database storage. No Database Installation Required SQL CE does not require you to run a setup or install a database server in order to use it.  You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine.  No setup or extra security permissions are required for it to run. You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application and will start-up when you first access a SQL CE database, and will automatically shutdown when your application is unloaded.  SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET Applications. Works with Existing Data APIs SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax.  This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE.  This enables you to use the same data programming skills and data APIs you know today. Supports Development, Testing and Production Scenarios SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios.  With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET).  This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments.  Starting with SQL CE 4 you can use it in a web-server as well. There are no license restrictions with SQL CE.  It is also totally free. Easy Migration to SQL Server SQL CE is an embedded database – which makes it ideal for development, testing, and light-usage scenarios.  For high-volume sites and applications you’ll probably want to migrate your database to use SQL Server Express (which is free), SQL Server or SQL Azure.  These servers enable much better scalability, more development features (including features like Stored Procedures – which aren’t supported with SQL CE), as well as more advanced data management capabilities. We’ll ship migration tools that enable you to optionally take SQL CE databases and easily upgrade them to use SQL Server Express, SQL Server, or SQL Azure.  You will not need to change your code when upgrading a SQL CE database to SQL Server or SQL Azure.  Our goal is to enable you to be able to simply change the database connection string in your web.config file and have your application just work. 
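As a quick illustration of the "works with existing data APIs" point above, here is a minimal ADO.NET sketch against a SQL CE database file. This is not code from the post: the Store.sdf file name and the Products table are assumptions, and |DataDirectory| resolves to the \App_Data folder when running under ASP.NET.

using System.Collections.Generic;
using System.Data.SqlServerCe;   // SQL CE 4 ADO.NET provider

public static class ProductQueries
{
    public static List<string> GetProductNames()
    {
        // Example connection string only; the .sdf file name is an assumption.
        const string connectionString = @"Data Source=|DataDirectory|\Store.sdf";

        var names = new List<string>();
        using (var connection = new SqlCeConnection(connectionString))
        using (var command = new SqlCeCommand("SELECT Name FROM Products", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    names.Add(reader.GetString(0));
            }
        }
        return names;
    }
}

Migrating to SQL Server or SQL Azure later would mean swapping the SqlCe* classes for their System.Data.SqlClient counterparts, or, when working through EF or another provider-agnostic layer, simply changing the connection string as described above.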
New Tooling Support for SQL CE in VS 2010 SP1 VS 2010 SP1 includes much improved tooling support for SQL CE, and adds support for using SQL CE within ASP.NET projects for the first time.  With VS 2010 SP1 you can now: Create new SQL CE Databases Edit and Modify SQL CE Database Schema and Indexes Populate SQL CE Databases within Data Use the Entity Framework (EF) designer to create model layers against SQL CE databases Use EF Code First to define model layers in code, then create a SQL CE database from them, and optionally edit the DB with VS Deploy SQL CE databases to remote servers using Web Deploy and optionally convert them to full SQL Server databases You can take advantage of all of the above features from within both ASP.NET Web Forms and ASP.NET MVC based projects. Download You can enable SQL CE tooling support within VS 2010 by first installing VS 2010 SP1 (beta). Once SP1 is installed, you’ll also then need to install the SQL CE Tools for Visual Studio download.  This is a separate download that enables the SQL CE tooling support for VS 2010 SP1. Walkthrough of Two Scenarios In this blog post I’m going to walkthrough how you can take advantage of SQL CE and VS 2010 SP1 using both an ASP.NET Web Forms and an ASP.NET MVC based application. Specifically, we’ll walkthrough: How to create a SQL CE database using VS 2010 SP1, then use the EF4 visual designers in Visual Studio to construct a model layer from it, and then display and edit the data using an ASP.NET GridView control. How to use an EF Code First approach to define a model layer using POCO classes and then have EF Code-First “auto-create” a SQL CE database for us based on our model classes.  We’ll then look at how we can use the new VS 2010 SP1 support for SQL CE to inspect the database that was created, populate it with data, and later make schema changes to it.  We’ll do all this within the context of an ASP.NET MVC based application. You can follow the two walkthroughs below on your own machine by installing VS 2010 SP1 (beta) and then installing the SQL CE Tools for Visual Studio download (which is a separate download that enables SQL CE tooling support for VS 2010 SP1). Walkthrough 1: Create a SQL CE Database, Create EF Model Classes, Edit the Data with a GridView This first walkthrough will demonstrate how to create and define a SQL CE database within an ASP.NET Web Form application.  We’ll then build an EF model layer for it and use that model layer to enable data editing scenarios with an <asp:GridView> control. Step 1: Create a new ASP.NET Web Forms Project We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET Web Forms project.  We’ll use the “ASP.NET Web Application” project template option so that it has a default UI skin implemented: Step 2: Create a SQL CE Database Right click on the “App_Data” folder within the created project and choose the “Add->New Item” menu command: This will bring up the “Add Item” dialog box.  Select the “SQL Server Compact 4.0 Local Database” item (new in VS 2010 SP1) and name the database file to create “Store.sdf”: Note that SQL CE database files have a .sdf filename extension. Place them within the /App_Data folder of your ASP.NET application to enable easy deployment. When we clicked the “Add” button above a Store.sdf file was added to our project: Step 3: Adding a “Products” Table Double-clicking the “Store.sdf” database file will open it up within the Server Explorer tab.  
Since it is a new database there are no tables within it: Right click on the “Tables” icon and choose the “Create Table” menu command to create a new database table.  We’ll name the new table “Products” and add 4 columns to it.  We’ll mark the first column as a primary key (and make it an identity column so that its value will automatically increment with each new row): When we click “ok” our new Products table will be created in the SQL CE database. Step 4: Populate with Data Once our Products table is created it will show up within the Server Explorer.  We can right-click it and choose the “Show Table Data” menu command to edit its data: Let’s add a few sample rows of data to it: Step 5: Create an EF Model Layer We have a SQL CE database with some data in it – let’s now create an EF Model Layer that will provide a way for us to easily query and update data within it. Let’s right-click on our project and choose the “Add->New Item” menu command.  This will bring up the “Add New Item” dialog – select the “ADO.NET Entity Data Model” item within it and name it “Store.edmx”. This will add a new Store.edmx item to our solution explorer and launch a wizard that allows us to quickly create an EF model: Select the “Generate From Database” option above and click next.  Choose to use the Store.sdf SQL CE database we just created and then click next again.  The wizard will then ask you what database objects you want to import into your model.  Let’s choose to import the “Products” table we created earlier: When we click the “Finish” button Visual Studio will open up the EF designer.  It will have a Product entity already on it that maps to the “Products” table within our SQL CE database: The VS 2010 SP1 EF designer works exactly the same with SQL CE as it does already with SQL Server and SQL Express.  The Product entity above will be persisted as a class (called “Product”) that we can programmatically work against within our ASP.NET application. Step 6: Compile the Project Before using your model layer you’ll need to build your project.  Do a Ctrl+Shift+B to compile the project, or use the Build->Build Solution menu command. Step 7: Create a Page that Uses our EF Model Layer Let’s now create a simple ASP.NET Web Form that contains a GridView control that we can use to display and edit our Products data (via the EF Model Layer we just created). Right-click on the project and choose the Add->New Item command.  Select the “Web Form from Master Page” item template, and name the page you create “Products.aspx”.  Base the master page on the “Site.Master” template that is in the root of the project. Add an <h2>Products</h2> heading to the new page, and add an <asp:gridview> control within it: Then click the “Design” tab to switch into design-view. Select the GridView control, and then click the top-right corner to display the GridView’s “Smart Tasks” UI: Choose the “New data source…” drop down option above.  This will bring up the below dialog which allows you to pick your Data Source type: Select the “Entity” data source option – which will allow us to easily connect our GridView to the EF model layer we created earlier.  This will bring up another dialog that allows us to pick our model layer: Select the “StoreEntities” option in the dropdown – which is the EF model layer we created earlier.  
Then click next – which will allow us to pick which entity within it we want to bind to: Select the “Products” entity in the above dialog – which indicates that we want to bind against the “Product” entity class we defined earlier.  Then click the “Enable automatic updates” checkbox to ensure that we can both query and update Products.  When you click “Finish” VS will wire-up an <asp:EntityDataSource> to your <asp:GridView> control: The last two steps we’ll do will be to click the “Enable Editing” checkbox on the Grid (which will cause the Grid to display an “Edit” link on each row) and (optionally) use the Auto Format dialog to pick a UI template for the Grid. Step 8: Run the Application Let’s now run our application and browse to the /Products.aspx page that contains our GridView.  When we do so we’ll see a Grid UI of the Products within our SQL CE database. Clicking the “Edit” link for any of the rows will allow us to edit their values: When we click “Update” the GridView will post back the values, persist them through our EF Model Layer, and ultimately save them within our SQL CE database. Learn More about using EF with ASP.NET Web Forms Read this tutorial series on the http://asp.net site to learn more about how to use EF with ASP.NET Web Forms.  The tutorial series uses SQL Express as the database – but the nice thing is that all of the same steps/concepts can also now also be done with SQL CE.   Walkthrough 2: Using EF Code-First with SQL CE and ASP.NET MVC 3 We used a database-first approach with the sample above – where we first created the database, and then used the EF designer to create model classes from the database.  In addition to supporting a designer-based development workflow, EF also enables a more code-centric option which we call “code first development”.  Code-First Development enables a pretty sweet development workflow.  It enables you to: Define your model objects by simply writing “plain old classes” with no base classes or visual designer required Use a “convention over configuration” approach that enables database persistence without explicitly configuring anything Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping Optionally auto-create a database based on the model classes you define – allowing you to start from code first I’ve done several blog posts about EF Code First in the past – I really think it is great.  The good news is that it also works very well with SQL CE. The combination of SQL CE, EF Code First, and the new VS tooling support for SQL CE, enables a pretty nice workflow.  Below is a simple example of how you can use them to build a simple ASP.NET MVC 3 application. Step 1: Create a new ASP.NET MVC 3 Project We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET MVC 3 project.  We’ll use the “Internet Project” template so that it has a default UI skin implemented: Step 2: Use NuGet to Install EFCodeFirst Next we’ll use the NuGet package manager (automatically installed by ASP.NET MVC 3) to add the EFCodeFirst library to our project.  We’ll use the Package Manager command shell to do this.  Bring up the package manager console within Visual Studio by selecting the View->Other Windows->Package Manager Console menu command.  
Then type: install-package EFCodeFirst within the package manager console to download the EFCodeFirst library and have it be added to our project: When we enter the above command, the EFCodeFirst library will be downloaded and added to our application: Step 3: Build Some Model Classes Using a “code first” based development workflow, we will create our model classes first (even before we have a database).  We create these model classes by writing code. For this sample, we will right click on the “Models” folder of our project and add the below three classes to our project: The “Dinner” and “RSVP” model classes above are “plain old CLR objects” (aka POCO).  They do not need to derive from any base classes or implement any interfaces, and the properties they expose are standard .NET data-types.  No data persistence attributes or data code has been added to them.   The “NerdDinners” class derives from the DbContext class (which is supplied by EFCodeFirst) and handles the retrieval/persistence of our Dinner and RSVP instances from a database. Step 4: Listing Dinners We’ve written all of the code necessary to implement our model layer for this simple project.  Let’s now expose and implement the URL: /Dinners/Upcoming within our project.  We’ll use it to list upcoming dinners that happen in the future. We’ll do this by right-clicking on our “Controllers” folder and select the “Add->Controller” menu command.  We’ll name the Controller we want to create “DinnersController”.  We’ll then implement an “Upcoming” action method within it that lists upcoming dinners using our model layer above.  We will use a LINQ query to retrieve the data and pass it to a View to render with the code below: We’ll then right-click within our Upcoming method and choose the “Add-View” menu command to create an “Upcoming” view template that displays our dinners.  We’ll use the “empty” template option within the “Add View” dialog and write the below view template using Razor: Step 4: Configure our Project to use a SQL CE Database We have finished writing all of our code – our last step will be to configure a database connection-string to use. We will point our NerdDinners model class to a SQL CE database by adding the below <connectionString> to the web.config file at the top of our project: EF Code First uses a default convention where context classes will look for a connection-string that matches the DbContext class name.  Because we created a “NerdDinners” class earlier, we’ve also named our connectionstring “NerdDinners”.  Above we are configuring our connection-string to use SQL CE as the database, and telling it that our SQL CE database file will live within the \App_Data directory of our ASP.NET project. Step 5: Running our Application Now that we’ve built our application, let’s run it! We’ll browse to the /Dinners/Upcoming URL – doing so will display an empty list of upcoming dinners: You might ask – but where did it query to get the dinners from? We didn’t explicitly create a database?!? One of the cool features that EF Code-First supports is the ability to automatically create a database (based on the schema of our model classes) when the database we point it at doesn’t exist.  Above we configured  EF Code-First to point at a SQL CE database in the \App_Data\ directory of our project.  When we ran our application, EF Code-First saw that the SQL CE database didn’t exist and automatically created it for us. 
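The model classes, the Upcoming action and the Razor view in this walkthrough were shown as screenshots in the original post and are not reproduced in this excerpt. The following is only a rough sketch consistent with the description above; beyond the Dinner, RSVP and NerdDinners names (and the UrlLink property added later), the property names used here are assumptions.

using System;
using System.Collections.Generic;
using System.Data.Entity;   // DbContext/DbSet from the EFCodeFirst package

public class Dinner
{
    public int DinnerID { get; set; }
    public string Title { get; set; }           // assumed property name
    public DateTime EventDate { get; set; }     // assumed property name
    public virtual ICollection<RSVP> RSVPs { get; set; }
}

public class RSVP
{
    public int RsvpID { get; set; }
    public int DinnerID { get; set; }
    public string AttendeeEmail { get; set; }   // assumed property name
    public virtual Dinner Dinner { get; set; }
}

public class NerdDinners : DbContext
{
    public DbSet<Dinner> Dinners { get; set; }
    public DbSet<RSVP> RSVPs { get; set; }
}

A sketch of the Upcoming action on the DinnersController (it also needs a using System.Linq; directive):

public ActionResult Upcoming()
{
    var nerdDinners = new NerdDinners();
    //LINQ query for dinners whose event date is in the future
    var dinners = from d in nerdDinners.Dinners
                  where d.EventDate > DateTime.Now
                  select d;
    return View(dinners.ToList());
}

And a bare-bones Upcoming view written with Razor could be as simple as:

@model IEnumerable<Dinner>
<h2>Upcoming Dinners</h2>
<ul>
@foreach (var dinner in Model)
{
    <li>@dinner.EventDate.ToShortDateString() - @dinner.Title</li>
}
</ul>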
Step 6: Using VS 2010 SP1 to Explore our newly created SQL CE Database Click the “Show all Files” icon within the Solution Explorer and you’ll see the “NerdDinners.sdf” SQL CE database file that was automatically created for us by EF code-first within the \App_Data\ folder: We can optionally right-click on the file and “Include in Project" to add it to our solution: We can also double-click the file (regardless of whether it is added to the project) and VS 2010 SP1 will open it as a database we can edit within the “Server Explorer” tab of the IDE. Below is the view we get when we double-click our NerdDinners.sdf SQL CE file.  We can drill in to see the schema of the Dinners and RSVPs tables in the tree explorer.  Notice how two tables - Dinners and RSVPs – were automatically created for us within our SQL CE database.  This was done by EF Code First when we accessed the NerdDinners class by running our application above: We can right-click on a Table and use the “Show Table Data” command to enter some upcoming dinners in our database: We’ll use the built-in editor that VS 2010 SP1 supports to populate our table data below: And now when we hit “refresh” on the /Dinners/Upcoming URL within our browser we’ll see some upcoming dinners show up: Step 7: Changing our Model and Database Schema Let’s now modify the schema of our model layer and database, and walkthrough one way that the new VS 2010 SP1 Tooling support for SQL CE can make this easier.  With EF Code-First you typically start making database changes by modifying the model classes.  For example, let’s add an additional string property called “UrlLink” to our “Dinner” class.  We’ll use this to point to a link for more information about the event: Now when we re-run our project, and visit the /Dinners/Upcoming URL we’ll see an error thrown: We are seeing this error because EF Code-First automatically created our database, and by default when it does this it adds a table that helps tracks whether the schema of our database is in sync with our model classes.  EF Code-First helpfully throws an error when they become out of sync – making it easier to track down issues at development time that you might otherwise only find (via obscure errors) at runtime.  Note that if you do not want this feature you can turn it off by changing the default conventions of your DbContext class (in this case our NerdDinners class) to not track the schema version. Our model classes and database schema are out of sync in the above example – so how do we fix this?  There are two approaches you can use today: Delete the database and have EF Code First automatically re-create the database based on the new model class schema (losing the data within the existing DB) Modify the schema of the existing database to make it in sync with the model classes (keeping/migrating the data within the existing DB) There are a couple of ways you can do the second approach above.  Below I’m going to show how you can take advantage of the new VS 2010 SP1 Tooling support for SQL CE to use a database schema tool to modify our database structure.  We are also going to be supporting a “migrations” feature with EF in the future that will allow you to automate/script database schema migrations programmatically. Step 8: Modify our SQL CE Database Schema using VS 2010 SP1 The new SQL CE Tooling support within VS 2010 SP1 makes it easy to modify the schema of our existing SQL CE database.  
To do this we’ll right-click on our “Dinners” table and choose the “Edit Table Schema” command: This will bring up the below “Edit Table” dialog.  We can rename, change or delete any of the existing columns in our table, or click at the bottom of the column listing and type to add a new column.  Below I’ve added a new “UrlLink” column of type “nvarchar” (since our property is a string): When we click ok our database will be updated to have the new column and our schema will now match our model classes. Because we are manually modifying our database schema, there is one additional step we need to take to let EF Code-First know that the database schema is in sync with our model classes.  As i mentioned earlier, when a database is automatically created by EF Code-First it adds a “EdmMetadata” table to the database to track schema versions (and hash our model classes against them to detect mismatches between our model classes and the database schema): Since we are manually updating and maintaining our database schema, we don’t need this table – and can just delete it: This will leave us with just the two tables that correspond to our model classes: And now when we re-run our /Dinners/Upcoming URL it will display the dinners correctly: One last touch we could do would be to update our view to check for the new UrlLink property and render a <a> link to it if an event has one: And now when we refresh our /Dinners/Upcoming we will see hyperlinks for the events that have a UrlLink stored in the database: Summary SQL CE provides a free, embedded, database engine that you can use to easily enable database storage.  With SQL CE 4 you can now take advantage of it within ASP.NET projects and applications (both Web Forms and MVC). VS 2010 SP1 provides tooling support that enables you to easily create, edit and modify SQL CE databases – as well as use the standard EF designer against them.  This allows you to re-use your existing skills and data knowledge while taking advantage of an embedded database option.  This is useful both for small applications (where you don’t need the scalability of a full SQL Server), as well as for development and testing scenarios – where you want to be able to rapidly develop/test your application without having a full database instance.  SQL CE makes it easy to later migrate your data to a full SQL Server or SQL Azure instance if you want to – without having to change any code in your application.  All we would need to change in the above two scenarios is the <connectionString> value within the web.config file in order to have our code run against a full SQL Server.  This provides the flexibility to scale up your application starting from a small embedded database solution as needed. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • NTFS Corruption: Files created in Linux corrupted when Windows Boots

    - by Logan Mayfield
    I'm getting some file loss and corruption on my Win7/Ubuntu 12.04 dual boot setup. I have a large shared NTFS partition. I have my Windows Docs/Music/etc. directories on that partition and have the comparable directories in Linux set up as symlinks. I'm using ntfs-3g on the Linux side of things to manage the NTFS partition. The shared partition is on a logical partition along with my Linux /home, / and swap partitions. The NTFS partition is mounted at boot time via fstab with the following options: ntfs-3g users,nls=utf8,locale=en_US.UTF-8,exec,rw The problem seems to be confined to newly created and recently edited files. I have not seen data loss or corruption when creating/editing files in Windows and then moving over to Ubuntu. I've been using the sync command aggressively in Ubuntu to try to ensure everything is getting written to the HDD. I do not use hibernate in Windows so I know it's not the usual missing-files-due-to-hibernation problem. I'm not seeing any mount related issues in dmesg. Most recently I had a set of files related to a LaTeX document go bad. Some of them show up in Ubuntu but I am unable to delete them. In the GUI file browser they are given thumbnails associated with files I created on my last boot of Windows. To be more specific: I created a few png files in Windows. The files corrupted by that Windows boot are associated with running PdfLatex on a file and are not image files. However, two of the corrupted files show up with the thumbnail image of one of the previously mentioned png files. The png files are not in the same directory as the LaTeX files but they are both within the Documents folder tree. I've had success with using NTFS for shared data in the past and am hoping there's some quirk here I'm missing and it's not just bad luck. On one hand this appears to be some kind of Windows problem as data loss occurs when I boot to Windows after having worked in Ubuntu for a while. However, I'm assuming it's more on the Ubuntu end as it requires the special NTFS drivers. Edit for more info: This is a Lenovo Thinkpad L430. Purchased new in the last month. So it's a fairly fresh install. Many of the files on the shared partition were copied over from a previous NTFS-formatted shared partition on another HDD. As requested: here's a sample chkdsk log. Some of the files it's mentioning were files that got deleted off the partition while in Ubuntu. Others were created/edited but not deleted.
Checking file system on D: Volume dismounted. All opened handles to this volume are now invalid. Volume label is Files. CHKDSK is verifying files (stage 1 of 3)... Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x789f47 for possibly 0x21 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x42 is already in use. Deleting corrupt attribute record (128, "") from file record segment 66. 86496 file records processed. File verification completed. 385 large file records processed. 0 bad file records processed. 0 EA records processed. 0 reparse records processed. CHKDSK is verifying indexes (stage 2 of 3)... Deleted invalid filename Screenshot from 2012-09-09 09:51:27.png (72) in directory 46. The NTFS file name attribute in file 0x48 is incorrect. 53 00 63 00 72 00 65 00 65 00 6e 00 73 00 68 00 S.c.r.e.e.n.s.h. 6f 00 74 00 20 00 66 00 72 00 6f 00 6d 00 20 00 o.t. .f.r.o.m. . 32 00 30 00 31 00 32 00 2d 00 30 00 39 00 2d 00 2.0.1.2.-.0.9.-. 30 00 39 00 20 00 30 00 39 00 3a 00 35 00 31 00 0.9. .0.9.:.5.1. 
3a 00 32 00 37 00 2e 00 70 00 6e 00 67 00 0d 00 :.2.7...p.n.g... 00 00 00 00 00 00 90 94 49 1f 5e 00 00 80 d4 00 ......I.^.... File 72 has been orphaned since all its filenames were invalid Windows will recover the file in the orphan recovery phase. Correcting minor file name errors in file 72. Index entry found.000 of index $I30 in file 0x5 points to unused file 0x11. Deleting index entry found.000 in index $I30 of file 5. Index entry found.001 of index $I30 in file 0x5 points to unused file 0x16. Deleting index entry found.001 in index $I30 of file 5. Index entry found.002 of index $I30 in file 0x5 points to unused file 0x15. Deleting index entry found.002 in index $I30 of file 5. Index entry DOWNLO~1 of index $I30 in file 0x28 points to unused file 0x2b6. Deleting index entry DOWNLO~1 in index $I30 of file 40. Unable to locate the file name attribute of index entry Screenshot from 2012-09-09 09:51:27.png of index $I30 with parent 0x2e in file 0x48. Deleting index entry Screenshot from 2012-09-09 09:51:27.png in index $I30 of file 46. An index entry of index $I30 in file 0x32 points to file 0x151e8 which is beyond the MFT. Deleting index entry latexsheet.tex in index $I30 of file 50. An index entry of index $I30 in file 0x58bc points to file 0x151eb which is beyond the MFT. Deleting index entry D8CZ82PK in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151f7 which is beyond the MFT. Deleting index entry EGA4QEAX in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151e9 which is beyond the MFT. Deleting index entry NGTB469M in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151fb which is beyond the MFT. Deleting index entry WU5RKXAB in index $I30 of file 22716. Index entry comp220-lab3.synctex.gz of index $I30 in file 0xda69 points to unused file 0xd098. Deleting index entry comp220-lab3.synctex.gz in index $I30 of file 55913. Unable to locate the file name attribute of index entry comp220-numberGrammars.aux of index $I30 with parent 0xda69 in file 0xa276. Deleting index entry comp220-numberGrammars.aux in index $I30 of file 55913. The file reference 0x500000000cd43 of index entry comp220-numberGrammars.out of index $I30 with parent 0xda69 is not the same as 0x600000000cd43. Deleting index entry comp220-numberGrammars.out in index $I30 of file 55913. The file reference 0x500000000cd45 of index entry comp220-numberGrammars.pdf of index $I30 with parent 0xda69 is not the same as 0xc00000000cd45. Deleting index entry comp220-numberGrammars.pdf in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15290 which is beyond the MFT. Deleting index entry gram.aux in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15291 which is beyond the MFT. Deleting index entry gram.out in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15292 which is beyond the MFT. Deleting index entry gram.pdf in index $I30 of file 55913. Unable to locate the file name attribute of index entry comp230-quiz1.synctex.gz of index $I30 with parent 0xda6f in file 0xd183. Deleting index entry comp230-quiz1.synctex.gz in index $I30 of file 55919. An index entry of index $I30 in file 0xf3cc points to file 0x15283 which is beyond the MFT. Deleting index entry require-transform.rkt in index $I30 of file 62412. An index entry of index $I30 in file 0xf3cc points to file 0x15284 which is beyond the MFT. 
Deleting index entry set.rkt in index $I30 of file 62412. An index entry of index $I30 in file 0xf497 points to file 0x15280 which is beyond the MFT. Deleting index entry logger.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf497 points to file 0x15281 which is beyond the MFT. Deleting index entry misc.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf497 points to file 0x15282 which is beyond the MFT. Deleting index entry more-scheme.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf5bf points to file 0x15285 which is beyond the MFT. Deleting index entry core-layout.rkt in index $I30 of file 62911. An index entry of index $I30 in file 0xf5e0 points to file 0x15286 which is beyond the MFT. Deleting index entry ref.scrbl in index $I30 of file 62944. An index entry of index $I30 in file 0xf6f0 points to file 0x15287 which is beyond the MFT. Deleting index entry base-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x15288 which is beyond the MFT. Deleting index entry html-properties.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x15289 which is beyond the MFT. Deleting index entry html-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528b which is beyond the MFT. Deleting index entry latex-prefix.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528c which is beyond the MFT. Deleting index entry latex-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528e which is beyond the MFT. Deleting index entry scribble.tex in index $I30 of file 63216. An index entry of index $I30 in file 0xf717 points to file 0x1528a which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63255. An index entry of index $I30 in file 0xf721 points to file 0x1528d which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63265. An index entry of index $I30 in file 0xf764 points to file 0x1528f which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63332. An index entry of index $I30 in file 0x14261 points to file 0x15270 which is beyond the MFT. Deleting index entry fddff3ae9ae2221207f144821d475c08ec3d05 in index $I30 of file 82529. An index entry of index $I30 in file 0x14621 points to file 0x15268 which is beyond the MFT. Deleting index entry FETCH_HEAD in index $I30 of file 83489. An index entry of index $I30 in file 0x14650 points to file 0x15272 which is beyond the MFT. Deleting index entry 86 in index $I30 of file 83536. An index entry of index $I30 in file 0x14651 points to file 0x15266 which is beyond the MFT. Deleting index entry pack-7f54ce9f8218d2cd8d6815b8c07461b50584027f.idx in index $I30 of file 83537. An index entry of index $I30 in file 0x14651 points to file 0x15265 which is beyond the MFT. Deleting index entry pack-7f54ce9f8218d2cd8d6815b8c07461b50584027f.pack in index $I30 of file 83537. An index entry of index $I30 in file 0x146f1 points to file 0x15275 which is beyond the MFT. Deleting index entry master in index $I30 of file 83697. An index entry of index $I30 in file 0x146f6 points to file 0x15276 which is beyond the MFT. Deleting index entry remotes in index $I30 of file 83702. An index entry of index $I30 in file 0x1477d points to file 0x15278 which is beyond the MFT. Deleting index entry pad.rkt in index $I30 of file 83837. 
An index entry of index $I30 in file 0x14797 points to file 0x1527c which is beyond the MFT. Deleting index entry pad1.rkt in index $I30 of file 83863. An index entry of index $I30 in file 0x14810 points to file 0x1527d which is beyond the MFT. Deleting index entry cm.rkt in index $I30 of file 83984. An index entry of index $I30 in file 0x14926 points to file 0x1527e which is beyond the MFT. Deleting index entry multi-file-search.rkt in index $I30 of file 84262. An index entry of index $I30 in file 0x149ef points to file 0x1527f which is beyond the MFT. Deleting index entry com.rkt in index $I30 of file 84463. An index entry of index $I30 in file 0x14b47 points to file 0x15202 which is beyond the MFT. Deleting index entry COMMIT_EDITMSG in index $I30 of file 84807. An index entry of index $I30 in file 0x14b47 points to file 0x15279 which is beyond the MFT. Deleting index entry index in index $I30 of file 84807. An index entry of index $I30 in file 0x14b4c points to file 0x15274 which is beyond the MFT. Deleting index entry master in index $I30 of file 84812. An index entry of index $I30 in file 0x14b61 points to file 0x1520b which is beyond the MFT. Deleting index entry 02 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1525a which is beyond the MFT. Deleting index entry 28 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15208 which is beyond the MFT. Deleting index entry 29 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1521f which is beyond the MFT. Deleting index entry 2c in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15261 which is beyond the MFT. Deleting index entry 2e in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151f0 which is beyond the MFT. Deleting index entry 45 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1523e which is beyond the MFT. Deleting index entry 47 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151e5 which is beyond the MFT. Deleting index entry 49 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15214 which is beyond the MFT. Deleting index entry 58 in index $I30 of file 84833. Index entry 6e of index $I30 in file 0x14b61 points to unused file 0xd182. Deleting index entry 6e in index $I30 of file 84833. Unable to locate the file name attribute of index entry a0 of index $I30 with parent 0x14b61 in file 0xd29c. Deleting index entry a0 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1521b which is beyond the MFT. Deleting index entry cd in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15249 which is beyond the MFT. Deleting index entry d6 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15242 which is beyond the MFT. Deleting index entry df in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15227 which is beyond the MFT. Deleting index entry ea in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1522e which is beyond the MFT. Deleting index entry f3 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151f2 which is beyond the MFT. Deleting index entry ff in index $I30 of file 84833. 
An index entry of index $I30 in file 0x14b62 points to file 0x15254 which is beyond the MFT. Deleting index entry 1ed39b36ad4bd48c91d22cbafd7390f1ea38da in index $I30 of file 84834. An index entry of index $I30 in file 0x14b75 points to file 0x15224 which is beyond the MFT. Deleting index entry 96260247010fe9811fea773c08c5f3a314df3f in index $I30 of file 84853. An index entry of index $I30 in file 0x14b79 points to file 0x15219 which is beyond the MFT. Deleting index entry 8f689724ca23528dd4f4ab8b475ace6edcb8f5 in index $I30 of file 84857. An index entry of index $I30 in file 0x14b7c points to file 0x15223 which is beyond the MFT. Deleting index entry 1df17cf850656be42c947cba6295d29c248d94 in index $I30 of file 84860. An index entry of index $I30 in file 0x14b7c points to file 0x15217 which is beyond the MFT. Deleting index entry 31db8a3c72a3e44769bbd8db58d36f8298242c in index $I30 of file 84860. An index entry of index $I30 in file 0x14b7c points to file 0x15267 which is beyond the MFT. Deleting index entry 8e1254d755ff1882d61c07011272bac3612f57 in index $I30 of file 84860. An index entry of index $I30 in file 0x14b82 points to file 0x15246 which is beyond the MFT. Deleting index entry f959bfaf9643c1b9e78d5ecf8f669133efdbf3 in index $I30 of file 84866. An index entry of index $I30 in file 0x14b88 points to file 0x151fe which is beyond the MFT. Deleting index entry 7e9aa15b1196b2c60116afa4ffa613397f2185 in index $I30 of file 84872. An index entry of index $I30 in file 0x14b8a points to file 0x151ea which is beyond the MFT. Deleting index entry 73cb0cd248e494bb508f41b55d862e84cdd6e0 in index $I30 of file 84874. An index entry of index $I30 in file 0x14b8e points to file 0x15264 which is beyond the MFT. Deleting index entry bd555d9f0383cc14c317120149e9376a8094c4 in index $I30 of file 84878. An index entry of index $I30 in file 0x14b96 points to file 0x15212 which is beyond the MFT. Deleting index entry 630dba40562d991bc6cbb6fed4ba638542e9c5 in index $I30 of file 84886. An index entry of index $I30 in file 0x14b99 points to file 0x151ec which is beyond the MFT. Deleting index entry 478be31ca8e538769246e22bba3330d81dc3c8 in index $I30 of file 84889. An index entry of index $I30 in file 0x14b99 points to file 0x15258 which is beyond the MFT. Deleting index entry 66c60c0a0f3253bc9a5112697e4cbb0dfc0c78 in index $I30 of file 84889. An index entry of index $I30 in file 0x14b9c points to file 0x15238 which is beyond the MFT. Deleting index entry 1c7ceeddc2953496f9ffbfc0b6fb28846e3fe3 in index $I30 of file 84892. An index entry of index $I30 in file 0x14b9c points to file 0x15247 which is beyond the MFT. Deleting index entry ae6e32ffc49d897d8f8aeced970a90d3653533 in index $I30 of file 84892. An index entry of index $I30 in file 0x14ba0 points to file 0x15233 which is beyond the MFT. Deleting index entry f71c7d874e45179a32e138b49bf007e5bbf514 in index $I30 of file 84896. Index entry 2e04fefbd794f050d45e7a717d009e39204431 of index $I30 in file 0x14ba7 points to unused file 0xd097. Deleting index entry 2e04fefbd794f050d45e7a717d009e39204431 in index $I30 of file 84903. An index entry of index $I30 in file 0x14baa points to file 0x15241 which is beyond the MFT. Deleting index entry 0dda7dec1c635cd646dfef308e403c2843d5dc in index $I30 of file 84906. An index entry of index $I30 in file 0x14baa points to file 0x151fc which is beyond the MFT. Deleting index entry 98151e654dd546edcfdec630bc82d90619ac8e in index $I30 of file 84906. 
An index entry of index $I30 in file 0x14bb1 points to file 0x151e9 which is beyond the MFT. Deleting index entry 1997c5be62ffeebc99253cced7608415e38e4e in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb1 points to file 0x1521d which is beyond the MFT. Deleting index entry 6bf3aedefd3ac62d9c49cad72d05e8c0ad242c in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb1 points to file 0x151f4 which is beyond the MFT. Deleting index entry 907b755afdca14c00be0010962d0861af29264 in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb3 points to file 0x15218 which is beyond the MFT. Deleting index entry

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific for a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I'm in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spend inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason of a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or when you assume things will be bad/slow without doing analysis will lead to the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
If I solely look at the Linq query, the code consuming the resultset of the 10 rows and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always do realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems. Isolate The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.  Analyze Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. 
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eat up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating task is essential. Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way, verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get deeper understanding of what is going on for a given task but ultimately it lacks precision and overview what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get deeper understanding about which parts contribute how much time to the total task, triggered by which other parts and for example how many times are they called. There are two different kind of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers. .NET profiling .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code and thus reveals where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.  As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. 
You navigate to the particular part which is slow, start profiling in the profiler, in your application you perform the actions which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area. O/R mapper and RDBMS profiling .NET profilers give you a good insight in the .NET side of things, but not in the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database participate how much to the total time taken for task T, we need different tools. There are two kind of tools focusing on O/R mappers and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by hibernating rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating rhinos also have profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf) and work the same as LLBLGen Prof. For RDBMS profilers, you have to look whether the RDBMS vendor has a profiler. For example for SQL Server, the profiler is shipped with SQL Server, for Oracle it's build into the RDBMS, however there are also 3rd party tools. Which tool you're using isn't really important, what's important is that you get insight in which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not all O/R mappers support it (though LLBLGen Pro and NHibernate as well do) is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight in what's going on. Interpret After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid and which parts actually take place and if the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. 
don't contribute significantly to the total time taken) can be ignored from now on, we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task is depending on the time the data is fetched from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data. Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spend inside the database and almost no time is spend in the .NET code, meaning the RDBMS part contributes the most to the total time taken, the rest is compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide that further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and in the area where the most time is contributed to the total time taken is room for improvement, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future when someone else looks at the application and starts asking questions you can answer them properly and new analysis is only necessary if situations changed. Fix After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromise memory consumption over performance) to avoid unnecessary re-querying data and re-consuming the results. After applying a change, it's key you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether it moved the problem to a different part of the application. Don't fall into the trap to do partly analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all. 
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS, these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data, which interpretation led you to the choice made. This is key for good maintainability in the years to come. Most common performance problems with O/R mappers Below is an incomplete list of common performance problems related to data-access / O/R mappers / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step. SELECT N+1: (Lazy-loading specific). Lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a Field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) for each row the Customer row. This means you'll get for the single list not 1 query (for the orders) but 1+(the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem. Prefetch paths using many path nodes or sorting, or limiting. Eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be the number of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it also can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries. Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype- and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take. Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to process through large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch server-side paging on on the datasourcecontrol used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. 
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have (e.g. due to a join): the datareader will do DISTINCT filtering on the client. this is a little slower but it does perform paging functionality on the data-reader so it won't fetch all rows even if the query suggests it does. Fetch massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spend on data-transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call. Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory, then traverse the data in-memory to calculate a value. Using .ToList() constructs inside linq queries. It might be you use .ToList() somewhere in a Linq query which makes the query be run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in-memory and do an in-memory filtering, as the linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run. Fetching all entities to delete into memory first. To delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kind of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache. Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first and then updating the entities in a loop, and afterwards saving them. It might however be a compromise you don't want to take as it is working around the idea of having an object graph in memory which is manipulated and instead makes the code fully aware there's a RDBMS somewhere. Conclusion Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success as well as knowing what's going on inside the application you built. 
I hope you'll find this guide useful in tracking down performance problems and dealing with them effectively. 
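The checklist above is illustrated in the article with LLBLGen Pro and LINQ, but the same patterns appear in any O/R mapper. Below is a minimal, hedged sketch of three of the items — SELECT N+1 versus eager fetching, client-side filtering of a fully materialized list, and a set-based delete — written in Java with JPA-style annotations and JPQL purely as a framework-neutral illustration. The entity names (PurchaseOrder, Customer), their fields, and the assumption of an EntityManager obtained from a configured persistence unit are all hypothetical and not part of the original article.

import java.time.LocalDate;
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

// Hypothetical entities -- class and field names are illustrative only.
@Entity
class Customer {
    @Id Long id;
    String companyName;
    String country;

    String getCompanyName() { return companyName; }
    String getCountry()     { return country; }
}

@Entity
class PurchaseOrder {
    @Id Long id;
    LocalDate orderDate;

    @ManyToOne(fetch = FetchType.LAZY)  // the related customer is loaded lazily
    Customer customer;

    Customer getCustomer() { return customer; }
}

public class OrmPerformanceSketch {

    // SELECT N+1: one query fetches the orders, and touching each lazily loaded
    // customer inside the loop fires one more query per (distinct) customer.
    static void companyNamesLazy(EntityManager em) {
        List<PurchaseOrder> orders = em
            .createQuery("SELECT o FROM PurchaseOrder o", PurchaseOrder.class)
            .getResultList();                                     // 1 query
        for (PurchaseOrder o : orders) {
            System.out.println(o.getCustomer().getCompanyName()); // extra query per customer
        }
    }

    // Eager alternative: a fetch join pulls the related customers in the same
    // statement -- the JPA analogue of a prefetch path.
    static void companyNamesEager(EntityManager em) {
        List<PurchaseOrder> orders = em
            .createQuery("SELECT o FROM PurchaseOrder o JOIN FETCH o.customer",
                         PurchaseOrder.class)
            .getResultList();                                     // 1 query in total
        for (PurchaseOrder o : orders) {
            System.out.println(o.getCustomer().getCompanyName()); // no extra queries
        }
    }

    // Client-side filtering pitfall, the Java analogue of the .ToList() example:
    // every customer row is materialised before the filter runs. Pushing the
    // condition into the query ("SELECT COUNT(c) FROM Customer c WHERE
    // c.country = :country") lets the database do the work instead.
    static long norwegianCustomersInMemory(EntityManager em) {
        return em.createQuery("SELECT c FROM Customer c", Customer.class)
                 .getResultList().stream()                      // fetches all rows
                 .filter(c -> "Norway".equals(c.getCountry()))  // filters in memory
                 .count();
    }

    // Set-based delete: one DELETE statement instead of fetching the entities
    // into memory first and removing them one by one.
    static int deleteOrdersBefore(EntityManager em, LocalDate cutoff) {
        return em.createQuery(
                   "DELETE FROM PurchaseOrder o WHERE o.orderDate < :cutoff")
                 .setParameter("cutoff", cutoff)
                 .executeUpdate();
    }
}

As the article notes for the set-based delete and update style, bypassing the in-memory object graph is a compromise: an O/R mapper that relies on a first-level cache or change tracking may not support it, or may not see the removed rows.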

    Read the article

  • Drupal: Exposing a module's data to Views2 using its API

    - by Sepehr Lajevardi
    I'm forking the filefield_stats module to provide it with the ability of exposing data into the Views module via the API. The filefield_stats module db table schema is as follow: <?php function filefield_stats_schema() { $schema['filefield_stats'] = array( 'fields' => array( 'fid' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE, 'description' => 'Primary Key: the {files}.fid'), 'vid' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE, 'description' => 'Primary Key: the {node}.vid'), 'uid' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE, 'description' => 'The {users}.uid of the downloader'), 'timestamp' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE, 'description' => 'The timestamp of the download'), 'hostname' => array('type' => 'varchar', 'length' => 128, 'not null' => TRUE, 'default' => '', 'description' => 'The hostname downloading the file (usually IP)'), 'referer' => array('type' => 'text', 'not null' => FALSE, 'description' => 'Referer for the download'), ), 'indexes' => array('fid_vid' => array('fid', 'vid')), ); return $schema; } ?> Well, so I implemented the hook_views_api() in filefield_stats.module & added a filefield_stats.views.inc file in the module's root directory, here it is: <?php // $Id$ /** * @file * Provide the ability of exposing data to Views2, for filefield_stats module. */ function filefield_stats_views_data() { $data = array(); $data['filefield_stats']['table']['group'] = t('FilefieldStats'); // Referencing the {node_revisions} table. $data['filefield_stats']['table']['join'] = array( 'node_revisions' => array( 'left_field' => 'vid', 'field' => 'vid', ), /*'files' => array( 'left_field' => 'fid', 'field' => 'fid', ), 'users' => array( 'left_field' => 'uid', 'field' => 'uid', ),*/ ); // Introducing filefield_stats table fields to Views2. // vid: The node's revision ID which wrapped the downloaded file $data['filefield_stats']['vid'] = array( 'title' => t('Node revision ID'), 'help' => t('The node\'s revision ID which wrapped the downloaded file'), 'relationship' => array( 'base' => 'node_revisions', 'field' => 'vid', 'handler' => 'views_handler_relationship', 'label' => t('Node Revision Reference.'), ), ); // uid: The ID of the user who downloaded the file. $data['filefield_stats']['uid'] = array( 'title' => t('User ID'), 'help' => t('The ID of the user who downloaded the file.'), 'relationship' => array( 'base' => 'users', 'field' => 'uid', 'handler' => 'views_handler_relationship', 'label' => t('User Reference.'), ), ); // fid: The ID of the downloaded file. $data['filefield_stats']['fid'] = array( 'title' => t('File ID'), 'help' => t('The ID of the downloaded file.'), 'relationship' => array( 'base' => 'files', 'field' => 'fid', 'handler' => 'views_handler_relationship', 'label' => t('File Reference.'), ), ); // hostname: The hostname which the file has been downloaded from. $data['filefield_stats']['hostname'] = array( 'title' => t('The Hostname'), 'help' => t('The hostname which the file has been downloaded from.'), 'field' => array( 'handler' => 'views_handler_field', 'click sortable' => TRUE, ), 'sort' => array( 'handler' => 'views_handler_sort', ), 'filter' => array( 'handler' => 'views_handler_filter_string', ), 'argument' => array( 'handler' => 'views_handler_argument_string', ), ); // referer: The referer address which the file download link has been triggered from. 
$data['filefield_stats']['referer'] = array( 'title' => t('The Referer'), 'help' => t('The referer which the file download link has been triggered from.'), 'field' => array( 'handler' => 'views_handler_field', 'click sortable' => TRUE, ), 'sort' => array( 'handler' => 'views_handler_sort', ), 'filter' => array( 'handler' => 'views_handler_filter_string', ), 'argument' => array( 'handler' => 'views_handler_argument_string', ), ); // timestamp: The time of the download. $data['filefield_stats']['timestamp'] = array( 'title' => t('Download Time'), 'help' => t('The time of the download.'), 'field' => array( 'handler' => 'views_handler_field_date', 'click sortable' => TRUE, ), 'sort' => array( 'handler' => 'views_handler_sort_date', ), 'filter' => array( 'handler' => 'views_handler_filter_date', ), ); return $data; } // filefield_stats_views_data() ?> According to the Views2 documentations this should work as a minimum, I think. But it doesn't! Also there is no error of any kind, when I come through the views UI, there's nothing about filefield_stats data. Any idea?

    Read the article

  • I get java.lang.NullPointerException when trying to get the contents of the database in Android

    - by ncountr
    I am using 8 EditText boxes from the NewCard.xml from which i am taking the values and when the save button is pressed i am storing the values into a database, in the same process of saving i am trying to get the values and present them into 8 different TextView boxes on the main.xml file and when i press the button i get an FC from the emulator and the resulting error is java.lang.NullPointerException. If Some 1 could help me that would be great, since i have never used databases and this is my first application for android and this is the only thing keepeng me to complete the whole thing and publish it on the market like a free app. Here's the full code from NewCard.java. public class NewCard extends Activity { private static String[] FROM = { _ID, FIRST_NAME, LAST_NAME, POSITION, POSTAL_ADDRESS, PHONE_NUMBER, FAX_NUMBER, MAIL_ADDRESS, WEB_ADDRESS}; private static String ORDER_BY = FIRST_NAME; private CardsData cards; EditText First_Name; EditText Last_Name; EditText Position; EditText Postal_Address; EditText Phone_Number; EditText Fax_Number; EditText Mail_Address; EditText Web_Address; Button New_Cancel; Button New_Save; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.newcard); cards = new CardsData(this); //Define the Cancel Button in NewCard Activity New_Cancel = (Button) this.findViewById(R.id.new_cancel_button); //Define the Cancel Button Activity/s New_Cancel.setOnClickListener ( new OnClickListener() { public void onClick(View arg0) { NewCancelDialog(); } } );//End of the Cancel Button Activity/s //Define the Save Button in NewCard Activity New_Save = (Button) this.findViewById(R.id.new_save_button); //Define the EditText Fields to Get Their Values Into the Database First_Name = (EditText) this.findViewById(R.id.new_first_name); Last_Name = (EditText) this.findViewById(R.id.new_last_name); Position = (EditText) this.findViewById(R.id.new_position); Postal_Address = (EditText) this.findViewById(R.id.new_postal_address); Phone_Number = (EditText) this.findViewById(R.id.new_phone_number); Fax_Number = (EditText) this.findViewById(R.id.new_fax_number); Mail_Address = (EditText) this.findViewById(R.id.new_mail_address); Web_Address = (EditText) this.findViewById(R.id.new_web_address); //Define the Save Button Activity/s New_Save.setOnClickListener ( new OnClickListener() { public void onClick(View arg0) { //Add Code For Saving The Attributes Into The Database try { addCard(First_Name.getText().toString(), Last_Name.getText().toString(), Position.getText().toString(), Postal_Address.getText().toString(), Integer.parseInt(Phone_Number.getText().toString()), Integer.parseInt(Fax_Number.getText().toString()), Mail_Address.getText().toString(), Web_Address.getText().toString()); Cursor cursor = getCard(); showCard(cursor); } finally { cards.close(); NewCard.this.finish(); } } } );//End of the Save Button Activity/s } //======================================================================================// //DATABASE FUNCTIONS private void addCard(String firstname, String lastname, String position, String postaladdress, int phonenumber, int faxnumber, String mailaddress, String webaddress) { // Insert a new record into the Events data source. // You would do something similar for delete and update. 
SQLiteDatabase db = cards.getWritableDatabase(); ContentValues values = new ContentValues(); values.put(FIRST_NAME, firstname); values.put(LAST_NAME, lastname); values.put(POSITION, position); values.put(POSTAL_ADDRESS, postaladdress); values.put(PHONE_NUMBER, phonenumber); values.put(FAX_NUMBER, phonenumber); values.put(MAIL_ADDRESS, mailaddress); values.put(WEB_ADDRESS, webaddress); db.insertOrThrow(TABLE_NAME, null, values); } private Cursor getCard() { // Perform a managed query. The Activity will handle closing // and re-querying the cursor when needed. SQLiteDatabase db = cards.getReadableDatabase(); Cursor cursor = db.query(TABLE_NAME, FROM, null, null, null, null, ORDER_BY); startManagingCursor(cursor); return cursor; } private void showCard(Cursor cursor) { // Stuff them all into a big string long id = 0; String firstname = null; String lastname = null; String position = null; String postaladdress = null; long phonenumber = 0; long faxnumber = 0; String mailaddress = null; String webaddress = null; while (cursor.moveToNext()) { // Could use getColumnIndexOrThrow() to get indexes id = cursor.getLong(0); firstname = cursor.getString(1); lastname = cursor.getString(2); position = cursor.getString(3); postaladdress = cursor.getString(4); phonenumber = cursor.getLong(5); faxnumber = cursor.getLong(6); mailaddress = cursor.getString(7); webaddress = cursor.getString(8); } // Display on the screen add for each textView TextView ids = (TextView) findViewById(R.id.id); TextView fn = (TextView) findViewById(R.id.firstname); TextView ln = (TextView) findViewById(R.id.lastname); TextView pos = (TextView) findViewById(R.id.position); TextView pa = (TextView) findViewById(R.id.postaladdress); TextView pn = (TextView) findViewById(R.id.phonenumber); TextView fxn = (TextView) findViewById(R.id.faxnumber); TextView ma = (TextView) findViewById(R.id.mailaddress); TextView wa = (TextView) findViewById(R.id.webaddress); ids.setText(String.valueOf(id)); fn.setText(String.valueOf(firstname)); ln.setText(String.valueOf(lastname)); pos.setText(String.valueOf(position)); pa.setText(String.valueOf(postaladdress)); pn.setText(String.valueOf(phonenumber)); fxn.setText(String.valueOf(faxnumber)); ma.setText(String.valueOf(mailaddress)); wa.setText(String.valueOf(webaddress)); } //======================================================================================// //Define the Dialog that alerts you when you press the Cancel button private void NewCancelDialog() { new AlertDialog.Builder(this) .setMessage("Are you sure you want to cancel?") .setTitle("Cancel") .setCancelable(false) .setPositiveButton("Yes", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int id) { NewCard.this.finish(); } }) .setNegativeButton("No", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int id) { dialog.cancel(); } }) .show(); }//End of the Cancel Dialog }
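One plausible reading of the crash, offered here only as a hedged guess based on the code above: setContentView(R.layout.newcard) means findViewById() searches only the newcard layout, so if the TextViews used in showCard() (R.id.id, R.id.firstname, and so on) are declared in main.xml rather than newcard.xml, every lookup returns null and the first setText() call throws the NullPointerException. A small defensive helper (the name showField is made up for this sketch) makes the failure visible instead of fatal:

// Hypothetical helper to use inside NewCard.showCard(). findViewById() only
// searches the layout passed to setContentView() -- here R.layout.newcard --
// so an ID declared only in main.xml resolves to null.
private void showField(int viewId, String value) {
    TextView field = (TextView) findViewById(viewId);
    if (field == null) {
        // The ID is not part of the inflated layout; log it instead of crashing.
        android.util.Log.w("NewCard", "No TextView " + viewId + " in current layout");
        return;
    }
    field.setText(value);
}

// Example usage inside showCard():
// showField(R.id.firstname, String.valueOf(firstname));

If the layout mismatch is indeed the cause, the structural fix is either to add those TextViews to newcard.xml or to move the display logic into the activity that actually calls setContentView(R.layout.main).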

    Read the article

  • Unable to receive any emails using postfix, dovecot, mysql, and virtual domain/mailboxes

    - by stkdev248
    I have been working on configuring my mail server for the last couple of weeks using postfix, dovecot, and mysql. I have one virtual domain and a few virtual mailboxes. Using squirrelmail I have been able to log into my accounts and send emails out (e.g. I can send to googlemail just fine), however I am not able to receive any emails--not from the outside world nor from within my own network. I am able to telnet in using localhost, my private ip, and my public ip on port 25 without any problems (I've tried it from the server itself and from another computer on my network). This is what I get in my logs when I send an email from my googlemail account to my mail server: mail.log Apr 14 07:36:06 server1 postfix/qmgr[1721]: BE01B520538: from=, size=733, nrcpt=1 (queue active) Apr 14 07:36:06 server1 postfix/pipe[3371]: 78BC0520510: to=, relay=dovecot, delay=45421, delays=45421/0/0/0.13, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied) Apr 14 07:36:06 server1 postfix/pipe[3391]: 8261B520534: to=, relay=dovecot, delay=38036, delays=38036/0.06/0/0.12, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:06 server1 postfix/pipe[3378]: 63927520532: to=, relay=dovecot, delay=38105, delays=38105/0.02/0/0.17, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:06 server1 postfix/pipe[3375]: 07F65520522: to=, relay=dovecot, delay=39467, delays=39467/0.01/0/0.17, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:06 server1 postfix/pipe[3381]: EEDE9520527: to=, relay=dovecot, delay=38361, delays=38360/0.04/0/0.15, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:06 server1 postfix/pipe[3379]: 67DFF520517: to=, relay=dovecot, delay=40475, delays=40475/0.03/0/0.16, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:06 server1 postfix/pipe[3387]: 3C7A052052E: to=, relay=dovecot, delay=38259, delays=38259/0.05/0/0.13, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:06 server1 postfix/pipe[3394]: BE01B520538: to=, relay=dovecot, delay=37682, delays=37682/0.07/0/0.11, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:07 server1 postfix/pipe[3384]: 3C7A052052E: to=, relay=dovecot, delay=38261, delays=38259/0.04/0/1.3, dsn=4.3.0, status=deferred (temporary failure. 
Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:39:23 server1 postfix/anvil[3368]: statistics: max connection rate 1/60s for (smtp:209.85.213.169) at Apr 14 07:35:32 Apr 14 07:39:23 server1 postfix/anvil[3368]: statistics: max connection count 1 for (smtp:209.85.213.169) at Apr 14 07:35:32 Apr 14 07:39:23 server1 postfix/anvil[3368]: statistics: max cache size 1 at Apr 14 07:35:32 Apr 14 07:41:06 server1 postfix/qmgr[1721]: ED6005203B7: from=, size=1463, nrcpt=1 (queue active) Apr 14 07:41:06 server1 postfix/pipe[4594]: ED6005203B7: to=, relay=dovecot, delay=334, delays=334/0.01/0/0.13, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:51:06 server1 postfix/qmgr[1721]: ED6005203B7: from=, size=1463, nrcpt=1 (queue active) Apr 14 07:51:06 server1 postfix/pipe[4604]: ED6005203B7: to=, relay=dovecot, delay=933, delays=933/0.02/0/0.12, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) mail-dovecot-log (the log I set for debugging): Apr 14 07:28:26 auth: Info: mysql(127.0.0.1): Connected to database postfixadmin Apr 14 07:28:26 auth: Debug: sql([email protected],127.0.0.1): query: SELECT password FROM mailbox WHERE username = '[email protected]' Apr 14 07:28:26 auth: Debug: client out: OK 1 [email protected] Apr 14 07:28:26 auth: Debug: master in: REQUEST 1809973249 3356 1 7cfb822db820fc5da67d0776b107cb3f Apr 14 07:28:26 auth: Debug: sql([email protected],127.0.0.1): SELECT '/home/vmail/mydomain.com/some.user1' as home, 5000 AS uid, 5000 AS gid FROM mailbox WHERE username = '[email protected]' Apr 14 07:28:26 auth: Debug: master out: USER 1809973249 [email protected] home=/home/vmail/mydomain.com/some.user1 uid=5000 gid=5000 Apr 14 07:28:26 imap-login: Info: Login: user=, method=PLAIN, rip=127.0.0.1, lip=127.0.0.1, mpid=3360, secured Apr 14 07:28:26 imap([email protected]): Debug: Effective uid=5000, gid=5000, home=/home/vmail/mydomain.com/some.user1 Apr 14 07:28:26 imap([email protected]): Debug: maildir++: root=/home/vmail/mydomain.com/some.user1/Maildir, index=/home/vmail/mydomain.com/some.user1/Maildir/indexes, control=, inbox=/home/vmail/mydomain.com/some.user1/Maildir Apr 14 07:48:31 imap([email protected]): Info: Disconnected: Logged out bytes=85/681 From the output above I'm pretty sure that my problems all stem from (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ), but I have no idea why I'm getting that error. I've have the permissions to that log set just like the other mail logs: root@server1:~# ls -l /var/log/mail* -rw-r----- 1 syslog adm 196653 2012-04-14 07:58 /var/log/mail-dovecot.log -rw-r----- 1 syslog adm 62778 2012-04-13 21:04 /var/log/mail.err -rw-r----- 1 syslog adm 497767 2012-04-14 08:01 /var/log/mail.log Does anyone have any idea what I may be doing wrong? Here are my main.cf and master.cf files: main.cf: # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. 
append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. myhostname = server1.mydomain.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all # Virtual Configs virtual_uid_maps = static:5000 virtual_gid_maps = static:5000 virtual_mailbox_base = /home/vmail virtual_mailbox_domains = mysql:/etc/postfix/mysql_virtual_mailbox_domains.cf virtual_mailbox_maps = mysql:/etc/postfix/mysql_virtual_mailbox_maps.cf virtual_alias_maps = mysql:/etc/postfix/mysql_virtual_alias_maps.cf relay_domains = mysql:/etc/postfix/mysql_relay_domains.cf smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_non_fqdn_hostname, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unauth_destination, reject_unauth_pipelining, reject_invalid_hostname smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous virtual_transport=dovecot dovecot_destination_recipient_limit = 1 master.cf: # # Postfix master process configuration file. For details on the format # of the file, see the master(5) manual page (command: "man 5 master"). # # Do not forget to execute "postfix reload" after editing this file. # # ========================================================================== # service type private unpriv chroot wakeup maxproc command + args # (yes) (yes) (yes) (never) (100) # ========================================================================== smtp inet n - - - - smtpd #smtp inet n - - - 1 postscreen #smtpd pass - - - - - smtpd #dnsblog unix - - - - 0 dnsblog #tlsproxy unix - - - - 0 tlsproxy #submission inet n - - - - smtpd # -o smtpd_tls_security_level=encrypt # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #smtps inet n - - - - smtpd # -o smtpd_tls_wrappermode=yes # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #628 inet n - - - - qmqpd pickup fifo n - - 60 1 pickup cleanup unix n - - - 0 cleanup qmgr fifo n - n 300 1 qmgr #qmgr fifo n - - 300 1 oqmgr tlsmgr unix - - - 1000? 1 tlsmgr rewrite unix - - - - - trivial-rewrite bounce unix - - - - 0 bounce defer unix - - - - 0 bounce trace unix - - - - 0 bounce verify unix - - - - 1 verify flush unix n - - 1000? 
0 flush proxymap unix - - n - - proxymap proxywrite unix - - n - 1 proxymap smtp unix - - - - - smtp # When relaying mail as backup MX, disable fallback_relay to avoid MX loops relay unix - - - - - smtp -o smtp_fallback_relay= # -o smtp_helo_timeout=5 -o smtp_connect_timeout=5 showq unix n - - - - showq error unix - - - - - error retry unix - - - - - error discard unix - - - - - discard local unix - n n - - local virtual unix - n n - - virtual lmtp unix - - - - - lmtp anvil unix - - - - 1 anvil scache unix - - - - 1 scache # # ==================================================================== # Interfaces to non-Postfix software. Be sure to examine the manual # pages of the non-Postfix software to find out what options it wants. # # Many of the following services use the Postfix pipe(8) delivery # agent. See the pipe(8) man page for information about ${recipient} # and other message envelope options. # ==================================================================== # # maildrop. See the Postfix MAILDROP_README file for details. # Also specify in main.cf: maildrop_destination_recipient_limit=1 # maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient} # # ==================================================================== # # Recent Cyrus versions can use the existing "lmtp" master.cf entry. # # Specify in cyrus.conf: # lmtp cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4 # # Specify in main.cf one or more of the following: # mailbox_transport = lmtp:inet:localhost # virtual_transport = lmtp:inet:localhost # # ==================================================================== # # Cyrus 2.1.5 (Amos Gouaux) # Also specify in main.cf: cyrus_destination_recipient_limit=1 # #cyrus unix - n n - - pipe # user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user} # # ==================================================================== # Old example of delivery via Cyrus. # #old-cyrus unix - n n - - pipe # flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user} # # ==================================================================== # # See the Postfix UUCP_README file for configuration details. # uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient) # # Other external delivery methods. # ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient) bsmtp unix - n n - - pipe flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient scalemail-backend unix - n n - 2 pipe flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension} mailman unix - n n - - pipe flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user} dovecot unix - n n - - pipe flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient}

    Read the article

  • Filtering bad requests from Apache -> logger -> rsyslog to syslog-ng on a remote logging server possible?

    - by zeyus
    EDIT: Thanks for the help Here is a quick idea of the setup: webserver X In apache httpd.conf: LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vcombined CustomLog "|/usr/bin/logger -p local6.info -t access " vcombined In rsyslog.conf: *.* @logserver Logserver syslog-ng.conf: ... parser p_apache {csv-parser(columns( "APACHE.VIRTUAL_HOST", "APACHE.CLIENT_IP", "APACHE.IDENT_NAME", "APACHE.USER_NAME", "APACHE.TIMESTAMP", "APACHE.REQUEST_URL", "APACHE.REQUEST_STATUS", "APACHE.CONTENT_LENGTH", "APACHE.REFERER", "APACHE.USER_AGENT", "APACHE.PROCESS_TIME", "APACHE.SERVER_NAME") # flags: # escape-none,escape-backslash,escape-double-char, # strip-whitespace flags(escape-double-char,strip-whitespace) delimiters(" ") quote-pairs('""[]') );}; ... source s_net { udp(ip(0.0.0.0) port(514) so_rcvbuf(1048576)); }; destination hosts_acc { file("/var/log/hosts/$HOST/${APACHE.VIRTUAL_HOST}_acc.log"); }; filter f_apacheacc { facility(local6); }; log { source(s_net); parser(p_apache); filter(f_apacheacc); destination(hosts_acc); }; ... The log's get there just fine, but there are a LOT of logs like the following: -rw------- 1 root root 5726 Apr 6 01:02 xc3\x9d\xc3\x9ed$yA;_acc.log -rw------- 1 root root 23435 Apr 6 01:06 \xc3\x9ed$yA;_acc.log -rw------- 1 root root 745 Apr 6 00:57 xc3\x9ed$yA;_acc.log -rw------- 1 root root 8440 Apr 5 22:50 \xc3\xaf_F\xc3\x95$yA;_acc.log -rw------- 1 root root 3112 Apr 6 00:58 xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA;_acc.log -rw------- 1 root root 4220 Apr 5 22:03 xe2\x80\x98\twd\xc2\xa2\xc2\xb0\xc3\x96$yA;_acc.log -rw------- 1 root root 1055 Apr 5 22:03 xe2\x80\x98\xc2\x9dw\xc3\x94\xc3\xb4T\xc5\x93$yA;_acc.log -rw------- 1 root root 1821 Apr 6 00:58 \xe2\x80\x98\xc3\x9d\xc3\x9ed$yA;_acc.log -rw------- 1 root root 2875 Apr 6 01:02 xe2\x80\x98\xc3\x9d\xc3\x9ed$yA;_acc.log -rw------- 1 root root 3165 Apr 5 22:48 \xe2\x80\x99-w\xc3\xaf_F\xc3\x95$yA;_acc.log -rw------- 1 root root 3165 Apr 5 22:40 \xe2\x80\x99\xe2\x80\x9aw\xe2\x82\xac\xc2\xbd\xe2\x80\x9d($yA;_acc.log -rw------- 1 root root 15825 Apr 5 22:50 xe2\x80\x99\xe2\x80\x9aw\xe2\x82\xac\xc2\xbd\xe2\x80\x9d($yA;_acc.log -rw------- 1 root root 1055 Apr 5 22:39 \xe2\x80\x9aw\xe2\x82\xac\xc2\xbd\xe2\x80\x9d($yA;_acc.log -rw------- 1 root root 2110 Apr 5 22:50 xe2\x80\x9aw\xe2\x82\xac\xc2\xbd\xe2\x80\x9d($yA;_acc.log -rw------- 1 root root 2034 Apr 5 22:50 \xe2\x80\x9d($yA;_acc.log -rw------- 1 root root 4066 Apr 5 22:45 xe2\x80\x9d($yA;_acc.log -rw------- 1 root root 7212 Apr 6 13:30 \xe2\x80\xb9>$yA;_acc.log -rw------- 1 root root 3000 Apr 6 13:25 xe2\x80\xb9>$yA;_acc.log My question is where, and how can I filter these out, I don't want them on the filesystem (But actually I guess it wouldn't be a bad idea to keep them logged, but in their correct VHost file) Here is an example VHost <VirtualHost *:80> ServerAdmin [email protected] ServerName xxx.xx DocumentRoot /var/www/vhosts/xxx <Directory /var/www/vhosts/xxx> AllowOverride All Options All RewriteEngine on </Directory> </VirtualHost> And the default "catch-all" vhost at the bottom of the vhosts config file: <VirtualHost *:80> ServerName default ServerAlias * ServerAlias catchall.xxx.xx DocumentRoot /var/www/vhosts/nodomain <Directory "/var/www/vhosts/nodomain"> Options Indexes FollowSymLinks AllowOverride none Allow from All </Directory> CustomLog /dev/null combined ErrorLog /dev/null </VirtualHost> I had posted this in a related question but It's better in it's own question. 
Here are some examples from inside the log files r_acc.log: Apr 7 11:16:27 xxxxx access: r PC 5.0; eSobiSubscriber 2.0.4.16; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET4.0C)" Apr 7 11:16:28 xxxxx access: r PC 5.0; eSobiSubscriber 2.0.4.16; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET4.0C)" ######################## D46-28E2-0FBC95-78798EV\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA;_acc.log: Apr 7 14:54:06 xxxxx access: D46-28E2-0FBC95-78798EV\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA; B557000E-F20D-35DD-021A-9824EC-17A4AFV\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA; 3BD03D7B-EEFD-83FF-7599-B751AD-6F0A2EV\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA; 9CAE0724-D455-0B31-3378-871C11-BBD0A4V\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA; C1E24799-3979-2452-81-3BAA0FFD361F5A; 0E701CBC-5832-5AB6-D5-CFBF9BDE863EAA; 464714B1-B3E2-774A-A4-FEA612A46CEE06; 74C817B0-D081-D2CC-6D-C4EF0F1B4F49BB; 1338B1DE-67CD-977C-B35D-1F2C4441DD6A; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 1.1.4322; .NET CLR 3.5.30729; .NET CLR 3.0.30729; OfficeLiveConnector.1.5; OfficeLivePatch.1.3; .NET4.0C; BRI/2)" ######################## V\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA;_acc.log: Apr 7 14:55:04 xxxxx access: V\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA; FEEACE4F-092A-1D46-28E2-0FBC95-78798EV\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA; B557000E-F20D-35DD-021A-9824EC-17A4AFV\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA; 3BD03D7B-EEFD-83FF-7599-B751AD-6F0A2EV\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA; 9CAE0724-D455-0B31-3378-871C11-BBD0A4V\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA; C1E24799-3979-2452-81-3BAA0FFD361F5A; 0E701CBC-5832-5AB6-D5-CFBF9BDE863EAA; 464714B1-B3E2-774A-A4-FEA612A46CEE06; 74C817B0-D081-D2CC-6D-C4EF0F1B4F49BB; 1338B1DE-67CD-977C-B35D-1F2C4441DD6A; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 1.1.4322; .NET CLR 3.5.30729; .NET CLR 3.0.30729; OfficeLiveConnector.1.5; OfficeLivePatch.1.3; .NET4.0C; BRI/2)" ################### xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA;_acc.log: Apr 7 19:48:39 xxxxx access: xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; 3C12D25C-9D40-91CF-1F40-AC-B1A083426DV-w\xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; D4713FA8-0142-A0C2-4812-BA-E03221005BV-w\xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; 199BAF2A-ECD5-39FA-65C3-E8-B107FAFF08V-w\xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; 384BDA70-9954-7744-05A0-C4-C7D9FEA685V-w\xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; EE7292A9-333C-AF70-5A7F-55-CAA7D0BA39V-w\xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; -AD7D48FA3A55-2A33-D10B-B4B66276D8B8; -166A9C6A2E71-24DF-A192-C8258AA4DE14; -00077C6C84E0-A302-4954-3D6D17C54D31; 3F56C318-EC3C-432B-680F-7E4BB2B852C4; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.5.21022; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET4.0C)" Apr 7 19:48:39 xxxxx access: xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; 3C12D25C-9D40-91CF-1F40-AC-B1A083426DV-w\xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; D4713FA8-0142-A0C2-4812-BA-E03221005BV-w\xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; 199BAF2A-ECD5-39FA-65C3-E8-B107FAFF08V-w\xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; 384BDA70-9954-7744-05A0-C4-C7D9FEA685V-w\xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; EE7292A9-333C-AF70-5A7F-55-CAA7D0BA39V-w\xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; -AD7D48FA3A55-2A33-D10B-B4B66276D8B8; -166A9C6A2E71-24DF-A192-C8258AA4DE14; -00077C6C84E0-A302-4954-3D6D17C54D31; 3F56C318-EC3C-432B-680F-7E4BB2B852C4; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.5.21022; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET4.0C)" Thanks
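One possible approach (not part of the original question; it assumes the p_apache parser and s_net source defined above) is to drop or quarantine messages whose parsed ${APACHE.VIRTUAL_HOST} does not look like a hostname, before syslog-ng expands the per-vhost file destination. A rough syslog-ng sketch:
    # Hypothetical filters: accept only vhost values that look like hostnames.
    filter f_valid_vhost { match("^[A-Za-z0-9.-]+$" value("APACHE.VIRTUAL_HOST")); };
    filter f_bad_vhost { not match("^[A-Za-z0-9.-]+$" value("APACHE.VIRTUAL_HOST")); };
    # Quarantine destination so the bad requests are still kept, just not as per-vhost files.
    destination d_bad_vhost { file("/var/log/hosts/$HOST/_bad_requests_acc.log"); };
    log { source(s_net); parser(p_apache); filter(f_apacheacc); filter(f_valid_vhost); destination(hosts_acc); };
    log { source(s_net); parser(p_apache); filter(f_apacheacc); filter(f_bad_vhost); destination(d_bad_vhost); };
If the garbage should be discarded entirely, the second log path can simply be omitted.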

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx Many people use Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which is the more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files: we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side: once you have received a big file, stream or binary content, how to upload it into blob storage in blocks through the .NET SDK. But the problem is, how can we upload these large files from the client side, for example a browser? This question came to me when I was working with a Chinese customer to help them build a network-disk product on top of Azure. The end users upload their files from the web portal, and then the files are stored in blob storage from the Web Role. My goal is to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } Then, in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit them. The code has been well blogged in the community.
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfectly if we select an image, a song or a small video to upload. But if I select a large file, say a 6GB HD movie, then after uploading for a few minutes the request fails and the upload is terminated. In ASP.NET there is a limitation on request length, and the maximum request length is defined in the web.config file; it is a number that must be less than roughly 4GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer will terminate the connection if it exceeds the timeout period; from my tests the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, the simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need to find a better way to upload large files from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break the limitations mentioned above we will upload the large file in chunks. This gives us some benefits: - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is. - No timeout problem: the size of the chunks is controlled by us, which means we can make sure the request for each chunk upload does not exceed the timeout period of either ASP.NET or the Windows Azure load balancer. It was a big challenge to upload a big file in chunks until we had HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of the file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) will read the part of this file from the 512th byte to the 768th byte and return a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob are very useful for implementing the chunked upload.
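As a brief aside on the ASP.NET request size limit mentioned above: it is configured in web.config. The fragment below is only an illustrative sketch (the values are examples, not taken from the original post), raising both the ASP.NET limit and the IIS request filtering limit:
    <!-- Hypothetical web.config fragment raising request size limits. -->
    <configuration>
      <system.web>
        <!-- maxRequestLength is in kilobytes; executionTimeout is in seconds. -->
        <httpRuntime maxRequestLength="1048576" executionTimeout="3600" />
      </system.web>
      <system.webServer>
        <security>
          <requestFiltering>
            <!-- maxAllowedContentLength is in bytes and is capped at roughly 4GB. -->
            <requestLimits maxAllowedContentLength="1073741824" />
          </requestFiltering>
        </security>
      </system.webServer>
    </configuration>
Even with these limits raised, the load balancer timeout still applies, which is why the chunked approach below is needed.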
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be in flight at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5: - File.slice: read part of the file by specifying the start and end byte index. - Blob: a file-like interface which contains part of the file content. - FormData: a temporary form element that lets us pass the chunk along with some metadata to the backend service. Then we discussed the performance considerations of chunked uploading. Sequential upload cannot maximize upload speed, while unlimited parallel upload might overwhelm the browser and the server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to use the "async.js" JavaScript library to control the asynchronous calls and the parallelism limit.   Regarding the chunk size and the parallelism limit there is no single "best" value. You need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores. I was using the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (1 core at 1.7GHz, 1.75GB memory, 100Mbps bandwidth).
The test cases were - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk format: base64 string, binaries. - Target file: 100MB. - Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel gets better performance than sequential. - No significant performance difference between parallel with 4 threads and with 8 threads. - Transferring chunks as binaries provides better performance than base64. - In all cases, a chunk size of 1MB - 2MB gets better performance.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.  How I Got 15x Improvement Without Really Trying John Feo, Sun Microsystems Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques. Introduction Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible. Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes. Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile. Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. 
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize. Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study’s results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research. # cacheperformance redundantoperations loopstructures performanceimprovement 1 x x 15.5 2 x 2.8 3 x x 2.5 4 x 2.1 5 x x 2.0 6 x 5.0 7 x 5.8 8 x 6.3 9 2.2 10 x x 3.3 Table 1 — Area of improvement and performance gains of 10 codes The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summaries the work and suggests a possible solution to the issues raised. Optimizing cache performance Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse. 
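As a small illustration of that point (this sketch is not from the article), the following C fragment contrasts the two traversal orders of the same two-dimensional array. In C the row-order version walks memory sequentially, so every loaded cache line is used fully; in Fortran the situation is reversed, because arrays are stored in column order.

#include <stddef.h>

#define N 1024
static double a[N][N];

/* Cache-friendly for C: the innermost loop walks the last index,
 * so consecutive iterations touch consecutive memory locations. */
double sum_row_order(void) {
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Cache-unfriendly for C: the innermost loop strides by N doubles,
 * so nearly every access misses in cache. */
double sum_column_order(void) {
    double s = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += a[i][j];
    return s;
}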
6 out of the 10 codes studied here benefited from such high level optimizations. Array Accesses The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing do I = 0, 1010, delta_x IM = I - delta_x IP = I + delta_x do J = 5, 995, delta_x JM = J - delta_x JP = J + delta_x T1 = CA1(IP, J) + CA1(I, JP) T2 = CA1(IM, J) + CA1(I, JM) S1 = T1 + T2 - 4 * CA1(I, J) CA(I, J) = CA1(I, J) + D * S1 end do end do In code 2, the culprit is conditionals do I = 1, N do J = 1, N If (IFLAG(I,J) .EQ. 0) then T1 = Value(I, J-1) T2 = Value(I-1, J) T3 = Value(I, J) T4 = Value(I+1, J) T5 = Value(I, J+1) Value(I,J) = 0.25 * (T1 + T2 + T5 + T4) Delta = ABS(T3 - Value(I,J)) If (Delta .GT. MaxDelta) MaxDelta = Delta endif enddo enddo I fixed both programs by inverting the loops by hand. Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10. Code 5 has four-dimensional arrays and loops nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is (outermost to innermost): L1: for i, for l, for k, for j; L2: for i, for l, for j, for k; L3: for i, for j, for k, for l. So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists. Array Strides When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes.
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes do j = 1, GZ do i = 1, GZ T1 = CA(i+0, j-1) + CA(i-1, j+0) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) S1 = T1 + T4 - 4 * CA1(i+0, j+0) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 enddo enddo where CA and CA1 are compressed arrays of size GZ. Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection. Data reuse In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3). In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4, do J = 1, GZ-2, 2 do I = 1, GZ-2, 2 T1 = CA1(i+0, j-1) + CA1(i-1, j+0) T2 = CA1(i+1, j-1) + CA1(i+0, j+0) T3 = CA1(i+0, j+0) + CA1(i-1, j+1) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) T5 = CA1(i+2, j+0) + CA1(i+1, j+1) T6 = CA1(i+1, j+1) + CA1(i+0, j+2) T7 = CA1(i+2, j+1) + CA1(i+1, j+2) S1 = T1 + T4 - 4 * CA1(i+0, j+0) S2 = T2 + T5 - 4 * CA1(i+1, j+0) S3 = T3 + T6 - 4 * CA1(i+0, j+1) S4 = T4 + T7 - 4 * CA1(i+1, j+1) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2 CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3 CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4 enddo enddo The loop body executes 12 reads, whereas as the rolled loop shown in the previous section executes 20 reads to compute the same four values. 
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before for (k = 0; k < NK[u]; k++) { sum = 0.0; for (y = 0; y < NY; y++) { sum += W[y][u][k] * delta[y]; } backprop[i++]=sum; } and after code for (k = 0; k < KK - 8; k+=8) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (y = 0; y < NY; y++) { sum0 += W[y][0][k+0] * delta[y]; sum1 += W[y][0][k+1] * delta[y]; sum2 += W[y][0][k+2] * delta[y]; sum3 += W[y][0][k+3] * delta[y]; sum4 += W[y][0][k+4] * delta[y]; sum5 += W[y][0][k+5] * delta[y]; sum6 += W[y][0][k+6] * delta[y]; sum7 += W[y][0][k+7] * delta[y]; } backprop[k+0] = sum0; backprop[k+1] = sum1; backprop[k+2] = sum2; backprop[k+3] = sum3; backprop[k+4] = sum4; backprop[k+5] = sum5; backprop[k+6] = sum6; backprop[k+7] = sum7; } for one of the loops unrolled 8 times. Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends. Reducing instruction count Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent. Memory operations The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory. Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3 for (y = 0; y < NY; y++) { i = 0; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += delta[y] * I1[i++]; } } } Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. 
In reality, dW and delta do not overlap in memory, so I rewrote the loop as for (y = 0; y < NY; y++) { i = 0; Dy = delta[y]; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += Dy * I1[i++]; } } } Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler can not determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1]) The macro is too complex for the compiler to understand and so, it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define a0 = MAT4D(a,q,0,j,k) before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i] A similar problem appears in code 6, a Fortran program. The key loop in this program is do n1 = 1, nh nx1 = (n1 - 1) / nz + 1 nz1 = n1 - nz * (nx1 - 1) do n2 = 1, nh nx2 = (n2 - 1) / nz + 1 nz2 = n2 - nz * (nx2 - 1) ndx = nx2 - nx1 ndy = nz2 - nz1 gxx = grn(1,ndx,ndy) gyy = grn(2,ndx,ndy) gxy = grn(3,ndx,ndy) balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1 balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy)*h1 end do end do The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to the entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays. Data operations Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 = i < N, 0 = j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling. for (i = 0; i < N; i+=8) { for (j = 0; j < M; j++) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (k = 0; k < K; k++) { sum0 += A[i+0][k] * B[j][k]; sum1 += A[i+1][k] * B[j][k]; sum2 += A[i+2][k] * B[j][k]; sum3 += A[i+3][k] * B[j][k]; sum4 += A[i+4][k] * B[j][k]; sum5 += A[i+5][k] * B[j][k]; sum6 += A[i+6][k] * B[j][k]; sum7 += A[i+7][k] * B[j][k]; } C[i+0][j] = sum0; C[i+1][j] = sum1; C[i+2][j] = sum2; C[i+3][j] = sum3; C[i+4][j] = sum4; C[i+5][j] = sum5; C[i+6][j] = sum6; C[i+7][j] = sum7; }} This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer. In code 5, we have the data version of the index optimization in code 6. 
Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time. The main loop in Code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index while others are a function of the outer loop index but not the inner loop index for (j = 0; j < N; j++) { for (i = 0; i < M; i++) { r = i * hrmax; R = A[j]; temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]); high = temp * kcoeff * B[j] * PRM[2] * PRM[4]; low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0)); kap = (R > PRM[6]) ? high * R * R / (1.0 + pow(PRM[4]*r, 2.0) : low * pow(R/PRM[6], PRM[5]); < rest of loop omitted > }} Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array. for (i = 0; i < M; i++) { r = i * hrmax; TEMP[i] = pow(r, PRM[3]); } [N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is for (j = 0; j < N; j++) { R = rig[j] / 1000.; tmp1 = kcoeff * par[2] * beta[j] * par[4]; tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]); tmp3 = 1.0 + (par[4] * par[4] * R * R); tmp4 = par[6] * par[6] / tmp2; tmp5 = R * R / tmp3; tmp6 = pow(R / par[6], par[5]); if ((par[3] == 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp5; } else if ((par[3] == 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp4 * tmp6; } else if ((par[3] != 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp5; } else if ((par[3] != 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6; } for (i = 0; i < M; i++) { kap = KAP[i]; r = i * hrmax; < rest of loop omitted > } } Maybe not the prettiest piece of code, but certainly much more efficient than the original loop, Copy operations Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages. Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers is not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). 
Then store the problem's initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays. The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur is in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers. Optimizing loop structures Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate or isolate conditionals to their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet MaxDelta = 0.0 do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) if (Delta > MaxDelta) MaxDelta = Delta enddo enddo if (MaxDelta .gt. 0.001) goto 200 Since the only use of MaxDelta is to control the jump to 200 and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as MaxDelta = .false. do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) MaxDelta = MaxDelta .or. (Delta .gt. 0.001) enddo enddo if (MaxDelta) goto 200 thereby eliminating the conditional expression from the inner loop. A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops eliminating the need to write and read temporary arrays.
Optimizing loop structures

Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction-level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate conditionals, or isolate them in their own loops, as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet:

MaxDelta = 0.0
do J = 1, N
  do I = 1, M
    < code omitted >
    Delta = abs(OldValue - NewValue)
    if (Delta > MaxDelta) MaxDelta = Delta
  enddo
enddo
if (MaxDelta .gt. 0.001) goto 200

Since the only use of MaxDelta is to control the jump to 200, and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as

MaxDelta = .false.
do J = 1, N
  do I = 1, M
    < code omitted >
    Delta = abs(OldValue - NewValue)
    MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
  enddo
enddo
if (MaxDelta) goto 200

thereby eliminating the conditional expression from the inner loop.

A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating-point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction-level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefitted from loop unrolling, but none benefitted from loop fusion. This observation is not too surprising, since it is the general tendency of programmers to write thick loops.

As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays. I found such an occasion in code 10, where I split the loop

do i = 1, n
  do j = 1, m
    A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
    B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
    A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
    B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
    C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
    D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
    C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
    D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
  end do
end do

into two disjoint loops

do i = 1, n
  do j = 1, m
    A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
    B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
    A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
    B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
  end do
end do

do i = 1, n
  do j = 1, m
    C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
    D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
    C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
    D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
  end do
end do

Conclusions

Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to significantly improve the single-processor performance of all the codes. Improvements range from 2x to 15.5x, with a simple average of 4.75x. The changes to the source code were at a very high level; I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel, despite the availability of parallel systems to all developers.

Clearly, we have a problem: personal scientific research codes are highly inefficient and are not running in parallel. The developers are unaware of simple optimization techniques that would make their programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and they are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three- or four-day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their own codes. Practice is the key to becoming proficient at optimization.

I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or that they can use it effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research, as well as the development of most personal scientific codes.
These agencies should require graduate schools to offer a course in optimization and parallel programming as a condition of funding.

About the Author

John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory, where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company, where he was project manager for the MTA and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • In MSSQL Server 2005 Dev Edition, I faced index corruption

    - by tranhuyhung
    Hi all, When running stored procedures in MSSQL Server, I found it failed and the DBMS (MSSQL Server 2005 Dev Edition) notified that some indexes are corrupted. Please advice me, here below is DBCC logs: DBCC results for 'itopup_dev'. Service Broker Msg 9675, State 1: Message Types analyzed: 14. Service Broker Msg 9676, State 1: Service Contracts analyzed: 6. Service Broker Msg 9667, State 1: Services analyzed: 3. Service Broker Msg 9668, State 1: Service Queues analyzed: 3. Service Broker Msg 9669, State 1: Conversation Endpoints analyzed: 0. Service Broker Msg 9674, State 1: Conversation Groups analyzed: 0. Service Broker Msg 9670, State 1: Remote Service Bindings analyzed: 0. DBCC results for 'sys.sysrowsetcolumns'. There are 1148 rows in 14 pages for object "sys.sysrowsetcolumns". DBCC results for 'sys.sysrowsets'. There are 187 rows in 2 pages for object "sys.sysrowsets". DBCC results for 'sysallocunits'. There are 209 rows in 3 pages for object "sysallocunits". DBCC results for 'sys.sysfiles1'. There are 2 rows in 1 pages for object "sys.sysfiles1". DBCC results for 'sys.syshobtcolumns'. There are 1148 rows in 14 pages for object "sys.syshobtcolumns". DBCC results for 'sys.syshobts'. There are 187 rows in 2 pages for object "sys.syshobts". DBCC results for 'sys.sysftinds'. There are 0 rows in 0 pages for object "sys.sysftinds". DBCC results for 'sys.sysserefs'. There are 209 rows in 1 pages for object "sys.sysserefs". DBCC results for 'sys.sysowners'. There are 15 rows in 1 pages for object "sys.sysowners". DBCC results for 'sys.sysprivs'. There are 135 rows in 1 pages for object "sys.sysprivs". DBCC results for 'sys.sysschobjs'. There are 817 rows in 21 pages for object "sys.sysschobjs". DBCC results for 'sys.syscolpars'. There are 2536 rows in 71 pages for object "sys.syscolpars". DBCC results for 'sys.sysnsobjs'. There are 1 rows in 1 pages for object "sys.sysnsobjs". DBCC results for 'sys.syscerts'. There are 0 rows in 0 pages for object "sys.syscerts". DBCC results for 'sys.sysxprops'. There are 12 rows in 4 pages for object "sys.sysxprops". DBCC results for 'sys.sysscalartypes'. There are 27 rows in 1 pages for object "sys.sysscalartypes". DBCC results for 'sys.systypedsubobjs'. There are 0 rows in 0 pages for object "sys.systypedsubobjs". DBCC results for 'sys.sysidxstats'. There are 466 rows in 15 pages for object "sys.sysidxstats". DBCC results for 'sys.sysiscols'. There are 616 rows in 6 pages for object "sys.sysiscols". DBCC results for 'sys.sysbinobjs'. There are 23 rows in 1 pages for object "sys.sysbinobjs". DBCC results for 'sys.sysobjvalues'. There are 1001 rows in 376 pages for object "sys.sysobjvalues". DBCC results for 'sys.sysclsobjs'. There are 14 rows in 1 pages for object "sys.sysclsobjs". DBCC results for 'sys.sysrowsetrefs'. There are 0 rows in 0 pages for object "sys.sysrowsetrefs". DBCC results for 'sys.sysremsvcbinds'. There are 0 rows in 0 pages for object "sys.sysremsvcbinds". DBCC results for 'sys.sysxmitqueue'. There are 0 rows in 0 pages for object "sys.sysxmitqueue". DBCC results for 'sys.sysrts'. There are 1 rows in 1 pages for object "sys.sysrts". DBCC results for 'sys.sysconvgroup'. There are 0 rows in 0 pages for object "sys.sysconvgroup". DBCC results for 'sys.sysdesend'. There are 0 rows in 0 pages for object "sys.sysdesend". DBCC results for 'sys.sysdercv'. There are 0 rows in 0 pages for object "sys.sysdercv". DBCC results for 'sys.syssingleobjrefs'. There are 317 rows in 2 pages for object "sys.syssingleobjrefs". 
DBCC results for 'sys.sysmultiobjrefs'. There are 3607 rows in 37 pages for object "sys.sysmultiobjrefs". DBCC results for 'sys.sysdbfiles'. There are 2 rows in 1 pages for object "sys.sysdbfiles". DBCC results for 'sys.sysguidrefs'. There are 0 rows in 0 pages for object "sys.sysguidrefs". DBCC results for 'sys.sysqnames'. There are 91 rows in 1 pages for object "sys.sysqnames". DBCC results for 'sys.sysxmlcomponent'. There are 93 rows in 1 pages for object "sys.sysxmlcomponent". DBCC results for 'sys.sysxmlfacet'. There are 97 rows in 1 pages for object "sys.sysxmlfacet". DBCC results for 'sys.sysxmlplacement'. There are 17 rows in 1 pages for object "sys.sysxmlplacement". DBCC results for 'sys.sysobjkeycrypts'. There are 0 rows in 0 pages for object "sys.sysobjkeycrypts". DBCC results for 'sys.sysasymkeys'. There are 0 rows in 0 pages for object "sys.sysasymkeys". DBCC results for 'sys.syssqlguides'. There are 0 rows in 0 pages for object "sys.syssqlguides". DBCC results for 'sys.sysbinsubobjs'. There are 0 rows in 0 pages for object "sys.sysbinsubobjs". DBCC results for 'TBL_BONUS_TEMPLATES'. There are 0 rows in 0 pages for object "TBL_BONUS_TEMPLATES". DBCC results for 'TBL_ROLE_PAGE_GROUP'. There are 18 rows in 1 pages for object "TBL_ROLE_PAGE_GROUP". DBCC results for 'TBL_BONUS_LEVELS'. There are 0 rows in 0 pages for object "TBL_BONUS_LEVELS". DBCC results for 'TBL_SUPERADMIN'. There are 1 rows in 1 pages for object "TBL_SUPERADMIN". DBCC results for 'TBL_ADMIN_ROLES'. There are 11 rows in 1 pages for object "TBL_ADMIN_ROLES". DBCC results for 'TBL_ADMIN_USER_ROLE'. There are 42 rows in 1 pages for object "TBL_ADMIN_USER_ROLE". DBCC results for 'TBL_BONUS_CALCULATION_HISTORIES'. There are 0 rows in 0 pages for object "TBL_BONUS_CALCULATION_HISTORIES". DBCC results for 'TBL_MERCHANT_MOBILES'. There are 0 rows in 0 pages for object "TBL_MERCHANT_MOBILES". DBCC results for 'TBL_ARCHIVE_EXPORTED_SOFTPINS'. There are 16030918 rows in 35344 pages for object "TBL_ARCHIVE_EXPORTED_SOFTPINS". DBCC results for 'TBL_ARCHIVE_LOGS'. There are 280 rows in 2 pages for object "TBL_ARCHIVE_LOGS". DBCC results for 'TBL_ADMIN_USERS'. There are 29 rows in 1 pages for object "TBL_ADMIN_USERS". DBCC results for 'TBL_SYSTEM_ALERT_GROUPS'. There are 4 rows in 1 pages for object "TBL_SYSTEM_ALERT_GROUPS". DBCC results for 'TBL_EXPORTED_TRANSACTIONS'. There are 7848 rows in 89 pages for object "TBL_EXPORTED_TRANSACTIONS". DBCC results for 'TBL_SYSTEM_ALERTS'. There are 968 rows in 9 pages for object "TBL_SYSTEM_ALERTS". DBCC results for 'TBL_SYSTEM_ALERT_GROUP_MEMBERS'. There are 1 rows in 1 pages for object "TBL_SYSTEM_ALERT_GROUP_MEMBERS". DBCC results for 'TBL_ESTIMATED_TIME'. There are 11 rows in 1 pages for object "TBL_ESTIMATED_TIME". DBCC results for 'TBL_SYSTEM_ALERT_MEMBERS'. There are 0 rows in 1 pages for object "TBL_SYSTEM_ALERT_MEMBERS". DBCC results for 'TBL_COMMISSIONS'. There are 10031 rows in 106 pages for object "TBL_COMMISSIONS". DBCC results for 'TBL_CATEGORIES'. There are 3 rows in 1 pages for object "TBL_CATEGORIES". DBCC results for 'TBL_SERVICE_PROVIDERS'. There are 11 rows in 1 pages for object "TBL_SERVICE_PROVIDERS". DBCC results for 'TBL_CATEGORY_SERVICE_PROVIDER'. There are 11 rows in 1 pages for object "TBL_CATEGORY_SERVICE_PROVIDER". DBCC results for 'TBL_PRODUCTS'. There are 73 rows in 6 pages for object "TBL_PRODUCTS". DBCC results for 'TBL_MERCHANT_KEYS'. There are 291 rows in 30 pages for object "TBL_MERCHANT_KEYS". DBCC results for 'TBL_POS_UNLOCK_KEYS'. 
There are 0 rows in 0 pages for object "TBL_POS_UNLOCK_KEYS". DBCC results for 'TBL_POS'. There are 0 rows in 0 pages for object "TBL_POS". DBCC results for 'TBL_IMPORT_BATCHES'. There are 3285 rows in 84 pages for object "TBL_IMPORT_BATCHES". DBCC results for 'TBL_IMPORT_KEYS'. There are 2 rows in 1 pages for object "TBL_IMPORT_KEYS". DBCC results for 'TBL_PRODUCT_COMMISSION_TEMPLATES'. There are 634 rows in 4 pages for object "TBL_PRODUCT_COMMISSION_TEMPLATES". DBCC results for 'TBL_POS_SETTLE_TRANSACTIONS'. There are 0 rows in 0 pages for object "TBL_POS_SETTLE_TRANSACTIONS". DBCC results for 'TBL_CHANGE_KEY_SOFTPINS'. There are 0 rows in 1 pages for object "TBL_CHANGE_KEY_SOFTPINS". DBCC results for 'TBL_POS_RETURN_TRANSACTIONS'. There are 0 rows in 0 pages for object "TBL_POS_RETURN_TRANSACTIONS". DBCC results for 'TBL_POS_SOFTPINS'. There are 0 rows in 0 pages for object "TBL_POS_SOFTPINS". DBCC results for 'TBL_POS_MENUS'. There are 0 rows in 0 pages for object "TBL_POS_MENUS". DBCC results for 'TBL_COMMISSION_TEMPLATES'. There are 23 rows in 1 pages for object "TBL_COMMISSION_TEMPLATES". DBCC results for 'TBL_DOWNLOAD_TRANSACTIONS'. There are 170820 rows in 1789 pages for object "TBL_DOWNLOAD_TRANSACTIONS". DBCC results for 'TBL_IMPORT_TEMP_SOFTPINS'. There are 0 rows in 1 pages for object "TBL_IMPORT_TEMP_SOFTPINS". DBCC results for 'TBL_REGIONS'. There are 2 rows in 1 pages for object "TBL_REGIONS". DBCC results for 'TBL_SOFTPINS'. There are 9723677 rows in 126611 pages for object "TBL_SOFTPINS". DBCC results for 'sysdiagrams'. There are 0 rows in 0 pages for object "sysdiagrams". DBCC results for 'TBL_SYNCHRONIZE_TRANSACTIONS'. There are 9302 rows in 53 pages for object "TBL_SYNCHRONIZE_TRANSACTIONS". DBCC results for 'TBL_SALEMEN'. There are 32 rows in 1 pages for object "TBL_SALEMEN". DBCC results for 'TBL_RESERVATION_SOFTPINS'. There are 131431 rows in 1629 pages for object "TBL_RESERVATION_SOFTPINS". DBCC results for 'TBL_SYNCHRONIZE_TRANSACTION_ITEMS'. There are 5345 rows in 16 pages for object "TBL_SYNCHRONIZE_TRANSACTION_ITEMS". DBCC results for 'TBL_ACCOUNTS'. There are 1 rows in 1 pages for object "TBL_ACCOUNTS". DBCC results for 'TBL_SYNCHRONIZE_TRANSACTION_SOFTPIN'. There are 821988 rows in 2744 pages for object "TBL_SYNCHRONIZE_TRANSACTION_SOFTPIN". *DBCC results for 'TBL_EXPORTED_SOFTPINS'. Msg 8928, Level 16, State 1, Line 1 Object ID 1716917188, index ID 1, partition ID 72057594046119936, alloc unit ID 72057594050838528 (type In-row data): Page (1:677314) could not be processed. See other errors for details. Msg 8939, Level 16, State 7, Line 1 Table error: Object ID 1716917188, index ID 1, partition ID 72057594046119936, alloc unit ID 72057594050838528 (type In-row data), page (1:677314). Test (m_freeData = PAGEHEADSIZE && m_freeData <= (UINT)PAGESIZE - m_slotCnt * sizeof (Slot)) failed. Values are 15428 and 7240. There are 2267937 rows in 6133 pages for object "TBL_EXPORTED_SOFTPINS". CHECKDB found 0 allocation errors and 2 consistency errors in table 'TBL_EXPORTED_SOFTPINS' (object ID 1716917188).* DBCC results for 'TBL_DOWNLOAD_SOFTPINS'. There are 7029404 rows in 17999 pages for object "TBL_DOWNLOAD_SOFTPINS". DBCC results for 'TBL_MERCHANT_BALANCE_CREDIT_PAID'. There are 0 rows in 0 pages for object "TBL_MERCHANT_BALANCE_CREDIT_PAID". DBCC results for 'TBL_ARCHIVE_SOFTPINS'. There are 44015040 rows in 683692 pages for object "TBL_ARCHIVE_SOFTPINS". DBCC results for 'TBL_ACCOUNT_BALANCE_LOGS'. There are 0 rows in 0 pages for object "TBL_ACCOUNT_BALANCE_LOGS". 
DBCC results for 'TBL_BLOCK_BATCHES'. There are 23 rows in 1 pages for object "TBL_BLOCK_BATCHES". DBCC results for 'TBL_BLOCK_BATCH_SOFTPIN'. There are 396 rows in 1 pages for object "TBL_BLOCK_BATCH_SOFTPIN". DBCC results for 'TBL_MERCHANTS'. There are 290 rows in 22 pages for object "TBL_MERCHANTS". DBCC results for 'TBL_DOWNLOAD_TRANSACTION_ITEMS'. There are 189296 rows in 1241 pages for object "TBL_DOWNLOAD_TRANSACTION_ITEMS". DBCC results for 'TBL_BLOCK_BATCH_CONDITIONS'. There are 23 rows in 1 pages for object "TBL_BLOCK_BATCH_CONDITIONS". DBCC results for 'TBL_SP_ADVERTISEMENTS'. There are 6 rows in 1 pages for object "TBL_SP_ADVERTISEMENTS". DBCC results for 'TBL_SERVER_KEYS'. There are 1 rows in 1 pages for object "TBL_SERVER_KEYS". DBCC results for 'TBL_ARCHIVE_DOWNLOAD_SOFTPINS'. There are 27984122 rows in 60773 pages for object "TBL_ARCHIVE_DOWNLOAD_SOFTPINS". DBCC results for 'TBL_ACCOUNT_BALANCE_REQUESTS'. There are 0 rows in 0 pages for object "TBL_ACCOUNT_BALANCE_REQUESTS". DBCC results for 'TBL_MERCHANT_TERMINALS'. There are 633 rows in 4 pages for object "TBL_MERCHANT_TERMINALS". DBCC results for 'TBL_SP_PREFIXES'. There are 6 rows in 1 pages for object "TBL_SP_PREFIXES". DBCC results for 'TBL_DIRECT_TOPUP_TRANSACTIONS'. There are 43 rows in 1 pages for object "TBL_DIRECT_TOPUP_TRANSACTIONS". DBCC results for 'TBL_MERCHANT_BALANCE_REQUESTS'. There are 19367 rows in 171 pages for object "TBL_MERCHANT_BALANCE_REQUESTS". DBCC results for 'TBL_ACTION_LOGS'. There are 133714 rows in 1569 pages for object "TBL_ACTION_LOGS". DBCC results for 'sys.queue_messages_1977058079'. There are 0 rows in 0 pages for object "sys.queue_messages_1977058079". DBCC results for 'sys.queue_messages_2009058193'. There are 0 rows in 0 pages for object "sys.queue_messages_2009058193". DBCC results for 'TBL_CODES'. There are 98 rows in 1 pages for object "TBL_CODES". DBCC results for 'TBL_MERCHANT_BALANCE_LOGS'. There are 183498 rows in 3178 pages for object "TBL_MERCHANT_BALANCE_LOGS". DBCC results for 'TBL_MERCHANT_CHANNEL_TEMPLATE'. There are 397 rows in 2 pages for object "TBL_MERCHANT_CHANNEL_TEMPLATE". DBCC results for 'sys.queue_messages_2041058307'. There are 0 rows in 0 pages for object "sys.queue_messages_2041058307". DBCC results for 'TBL_VNPTEPAY'. There are 0 rows in 0 pages for object "TBL_VNPTEPAY". DBCC results for 'TBL_PAGE_GROUPS'. There are 10 rows in 1 pages for object "TBL_PAGE_GROUPS". DBCC results for 'TBL_PAGE_GROUP_PAGE'. There are 513 rows in 2 pages for object "TBL_PAGE_GROUP_PAGE". DBCC results for 'TBL_ACCOUNT_CHANNEL_TEMPLATE'. There are 0 rows in 0 pages for object "TBL_ACCOUNT_CHANNEL_TEMPLATE". DBCC results for 'TBL_PAGES'. There are 148 rows in 3 pages for object "TBL_PAGES". CHECKDB found 0 allocation errors and 2 consistency errors in database 'itopup_dev'. repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (itopup_dev). DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    Read the article

  • Solr DataImportHandler configuration

    - by talo
    I want to get data from a MySQL database with the help of the DataImportHandler so I can create indexes. I've configured my Solr instance so that it works on Tomcat (the example admin page), but when I try to change the solrconfig.xml file I get an error. I'm working with Solr 3.6. My configuration is as follows. In solrconfig.xml I added:
    <dataDir>${solr.data.dir:/usr/share/tomcat7/solr2}</dataDir>
    to specify my working directory, and then
    <requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
      <lst name="defaults">
        <str name="config">/usr/share/tomcat7/solr2/conf/data-config.xml</str>
      </lst>
    </requestHandler>
    to specify the new request handler. These two are my lib directives for DIH. Do I need to change them?
    <lib dir="../../dist/" regex="apache-solr-dataimporthandler-\d.*\.jar" />
    <lib dir="../../contrib/dataimporthandler/lib/" regex=".*\.jar" />
    I also created a data-config.xml file with the following content:
    <?xml version="1.0" encoding="UTF-8"?>
    <dataConfig>
      <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/ethicsweb_experts" user="root" password=""/>
      <document>
        <entity name="experts" datasource="mysql" pk="mainid" query="SELECT experts.mainid as mainid FROM experts WHERE validRec = 'y'">
          <field column="mainid" name="mainid"/>
          <field column="validRec" name="validRec"/>
        </entity>
      </document>
    </dataConfig>
    I've copied the following jars to the tomcat/lib folder (the DIH jar files and the MySQL JDBC connector jar file):
    apache-solr-dataimporthandler-3.6.0.jar
    apache-solr-dataimporthandler-extras-3.6.0.jar
    mysql-connector-java-5.1.20-bin.jar
    Also, in the schema.xml file I added the following fields:
    <field name="mainid" type="int" indexed="true" stored="true" />
    <field name="validRec" type="string" indexed="true" stored="true" />
    <field name="recSource" type="string" indexed="true" stored="true" />
    <uniqueKey>mainid</uniqueKey>
    But now when I try to access http://localhost:8080/solr2/ I get the following output: HTTP Status 500 - Severe errors in solr configuration. Check your log files for more detailed information on what may be wrong. 
If you want solr to continue after configuration errors, change: false in solr.xml ------------------------------------------------------------- org.apache.solr.common.SolrException: No cores were created, please check the logs for errors at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:172) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258) at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382) at org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:103) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4638) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5294) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:649) at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1585) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) ------------------------------------------------------------- java.lang.NoClassDefFoundError: org/apache/solr/util/plugin/SolrCoreAware at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:791) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:449) at java.net.URLClassLoader.access$100(URLClassLoader.java:71) at java.net.URLClassLoader$1.run(URLClassLoader.java:361) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:423) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1698) at java.lang.ClassLoader.loadClass(ClassLoader.java:410) at java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:789) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:378) at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:419) at org.apache.solr.core.SolrCore.createRequestHandler(SolrCore.java:455) at org.apache.solr.core.RequestHandlers.initHandlersFromConfig(RequestHandlers.java:159) at org.apache.solr.core.SolrCore.(SolrCore.java:563) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:483) at 
org.apache.solr.core.CoreContainer.load(CoreContainer.java:335) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:219) at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:161) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258) at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382) at org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:103) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4638) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5294) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:649) at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1585) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) Caused by: java.lang.ClassNotFoundException: org.apache.solr.util.plugin.SolrCoreAware at java.net.URLClassLoader$1.run(URLClassLoader.java:366) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:423) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) ... 
47 more My log entries show me that: SEVERE: java.lang.NoClassDefFoundError: org/apache/solr/util/plugin/SolrCoreAware at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:791) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:449) at java.net.URLClassLoader.access$100(URLClassLoader.java:71) at java.net.URLClassLoader$1.run(URLClassLoader.java:361) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:423) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1698) at java.lang.ClassLoader.loadClass(ClassLoader.java:410) at java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:789) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:378) at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:419) at org.apache.solr.core.SolrCore.createRequestHandler(SolrCore.java:455) at org.apache.solr.core.RequestHandlers.initHandlersFromConfig(RequestHandlers.java:159) at org.apache.solr.core.SolrCore.(SolrCore.java:563) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:483) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:335) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:219) at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:161) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258) at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382) at org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:103) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4638) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5294) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:649) at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1585) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) Caused by: java.lang.ClassNotFoundException: org.apache.solr.util.plugin.SolrCoreAware at java.net.URLClassLoader$1.run(URLClassLoader.java:366) at 
java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:423) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) ... 47 more Jun 15, 2012 4:07:50 PM org.apache.solr.servlet.SolrDispatchFilter init SEVERE: Could not start Solr. Check solr/home property and the logs org.apache.solr.common.SolrException: No cores were created, please check the logs for errors at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:172) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258) at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382) at org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:103) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4638) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5294) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:649) at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1585) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) Jun 15, 2012 4:07:50 PM org.apache.solr.common.SolrException log SEVERE: org.apache.solr.common.SolrException: No cores were created, please check the logs for errors at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:172) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258) at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382) at org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:103) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4638) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5294) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:649) at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1585) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) So now I wonder where did i screw up the configuration for the DataImportHandler. Do i need to specify more jar files? Did i put jar files in the correct directory? Any help would be greatly appreciated.

    Read the article

  • Multiple full joins in Postgres are slow

    - by blast83
    I have a program to use the IMDB database and am having very slow performance on my query. It appears that it doesn't use my where condition until after it materializes everything. I looked around for hints to use but nothing seems to work. Here is my query: SELECT * FROM name as n1 FULL JOIN aka_name ON n1.id = aka_name.person_id FULL JOIN cast_info as t2 ON n1.id = t2.person_id FULL JOIN person_info as t3 ON n1.id = t3.person_id FULL JOIN char_name as t4 ON t2.person_role_id = t4.id FULL JOIN role_type as t5 ON t2.role_id = t5.id FULL JOIN title as t6 ON t2.movie_id = t6.id FULL JOIN aka_title as t7 ON t6.id = t7.movie_id FULL JOIN complete_cast as t8 ON t6.id = t8.movie_id FULL JOIN kind_type as t9 ON t6.kind_id = t9.id FULL JOIN movie_companies as t10 ON t6.id = t10.movie_id FULL JOIN movie_info as t11 ON t6.id = t11.movie_id FULL JOIN movie_info_idx as t19 ON t6.id = t19.movie_id FULL JOIN movie_keyword as t12 ON t6.id = t12.movie_id FULL JOIN movie_link as t13 ON t6.id = t13.linked_movie_id FULL JOIN link_type as t14 ON t13.link_type_id = t14.id FULL JOIN keyword as t15 ON t12.keyword_id = t15.id FULL JOIN company_name as t16 ON t10.company_id = t16.id FULL JOIN company_type as t17 ON t10.company_type_id = t17.id FULL JOIN comp_cast_type as t18 ON t8.status_id = t18.id WHERE n1.id = 2003 Very table is related to each other on the join via foreign-key constraints and have indexes for all the mentioned columns. The query plan details: "Hash Left Join (cost=5838187.01..13756845.07 rows=15579622 width=835) (actual time=146879.213..146891.861 rows=20 loops=1)" " Hash Cond: (t8.status_id = t18.id)" " -> Hash Left Join (cost=5838185.92..13542624.18 rows=15579622 width=822) (actual time=146879.199..146891.833 rows=20 loops=1)" " Hash Cond: (t10.company_type_id = t17.id)" " -> Hash Left Join (cost=5838184.83..13328403.29 rows=15579622 width=797) (actual time=146879.165..146891.781 rows=20 loops=1)" " Hash Cond: (t10.company_id = t16.id)" " -> Hash Left Join (cost=5828372.95..10061752.03 rows=15579622 width=755) (actual time=146426.483..146429.756 rows=20 loops=1)" " Hash Cond: (t12.keyword_id = t15.id)" " -> Hash Left Join (cost=5825164.23..6914088.45 rows=15579622 width=731) (actual time=146372.411..146372.529 rows=20 loops=1)" " Hash Cond: (t13.link_type_id = t14.id)" " -> Merge Left Join (cost=5825162.82..6699867.24 rows=15579622 width=715) (actual time=146372.366..146372.472 rows=20 loops=1)" " Merge Cond: (t6.id = t13.linked_movie_id)" " -> Merge Left Join (cost=5684009.29..6378956.77 rows=15579622 width=699) (actual time=144019.620..144019.711 rows=20 loops=1)" " Merge Cond: (t6.id = t12.movie_id)" " -> Merge Left Join (cost=5182403.90..5622400.75 rows=8502523 width=687) (actual time=136849.731..136849.809 rows=20 loops=1)" " Merge Cond: (t6.id = t19.movie_id)" " -> Merge Left Join (cost=4974472.00..5315778.48 rows=8502523 width=637) (actual time=134972.032..134972.099 rows=20 loops=1)" " Merge Cond: (t6.id = t11.movie_id)" " -> Merge Left Join (cost=1830064.81..2033131.89 rows=1341632 width=561) (actual time=63784.035..63784.062 rows=2 loops=1)" " Merge Cond: (t6.id = t10.movie_id)" " -> Nested Loop Left Join (cost=1417360.29..1594294.02 rows=1044480 width=521) (actual time=59279.246..59279.264 rows=1 loops=1)" " Join Filter: (t6.kind_id = t9.id)" " -> Merge Left Join (cost=1417359.22..1429787.34 rows=1044480 width=507) (actual time=59279.222..59279.224 rows=1 loops=1)" " Merge Cond: (t6.id = t8.movie_id)" " -> Merge Left Join (cost=1405731.84..1414378.65 rows=1044480 width=491) 
(actual time=59121.773..59121.775 rows=1 loops=1)" " Merge Cond: (t6.id = t7.movie_id)" " -> Sort (cost=1346206.04..1348817.24 rows=1044480 width=416) (actual time=58095.230..58095.231 rows=1 loops=1)" " Sort Key: t6.id" " Sort Method: quicksort Memory: 17kB" " -> Hash Left Join (cost=172406.29..456387.53 rows=1044480 width=416) (actual time=57969.371..58095.208 rows=1 loops=1)" " Hash Cond: (t2.movie_id = t6.id)" " -> Hash Left Join (cost=104700.38..256885.82 rows=1044480 width=358) (actual time=49981.493..50006.303 rows=1 loops=1)" " Hash Cond: (t2.role_id = t5.id)" " -> Hash Left Join (cost=104699.11..242522.95 rows=1044480 width=343) (actual time=49981.441..50006.250 rows=1 loops=1)" " Hash Cond: (t2.person_role_id = t4.id)" " -> Hash Left Join (cost=464.96..12283.95 rows=1044480 width=269) (actual time=0.071..0.087 rows=1 loops=1)" " Hash Cond: (n1.id = t3.person_id)" " -> Nested Loop Left Join (cost=0.00..49.39 rows=7680 width=160) (actual time=0.051..0.066 rows=1 loops=1)" " -> Nested Loop Left Join (cost=0.00..17.04 rows=3 width=119) (actual time=0.038..0.041 rows=1 loops=1)" " -> Index Scan using name_pkey on name n1 (cost=0.00..8.68 rows=1 width=39) (actual time=0.022..0.024 rows=1 loops=1)" " Index Cond: (id = 2003)" " -> Index Scan using aka_name_idx_person on aka_name (cost=0.00..8.34 rows=1 width=80) (actual time=0.010..0.010 rows=0 loops=1)" " Index Cond: ((aka_name.person_id = 2003) AND (n1.id = aka_name.person_id))" " -> Index Scan using cast_info_idx_pid on cast_info t2 (cost=0.00..10.77 rows=1 width=41) (actual time=0.011..0.020 rows=1 loops=1)" " Index Cond: ((t2.person_id = 2003) AND (n1.id = t2.person_id))" " -> Hash (cost=463.26..463.26 rows=136 width=109) (actual time=0.010..0.010 rows=0 loops=1)" " -> Index Scan using person_info_idx_pid on person_info t3 (cost=0.00..463.26 rows=136 width=109) (actual time=0.009..0.009 rows=0 loops=1)" " Index Cond: (person_id = 2003)" " -> Hash (cost=42697.62..42697.62 rows=2442362 width=74) (actual time=49305.872..49305.872 rows=2442362 loops=1)" " -> Seq Scan on char_name t4 (cost=0.00..42697.62 rows=2442362 width=74) (actual time=14.066..22775.087 rows=2442362 loops=1)" " -> Hash (cost=1.12..1.12 rows=12 width=15) (actual time=0.024..0.024 rows=12 loops=1)" " -> Seq Scan on role_type t5 (cost=0.00..1.12 rows=12 width=15) (actual time=0.012..0.014 rows=12 loops=1)" " -> Hash (cost=31134.07..31134.07 rows=1573507 width=58) (actual time=7841.225..7841.225 rows=1573507 loops=1)" " -> Seq Scan on title t6 (cost=0.00..31134.07 rows=1573507 width=58) (actual time=21.507..2799.443 rows=1573507 loops=1)" " -> Materialize (cost=59525.80..63203.88 rows=294246 width=75) (actual time=812.376..984.958 rows=192075 loops=1)" " -> Sort (cost=59525.80..60261.42 rows=294246 width=75) (actual time=812.363..922.452 rows=192075 loops=1)" " Sort Key: t7.movie_id" " Sort Method: external merge Disk: 24880kB" " -> Seq Scan on aka_title t7 (cost=0.00..6646.46 rows=294246 width=75) (actual time=24.652..164.822 rows=294246 loops=1)" " -> Materialize (cost=11627.38..12884.43 rows=100564 width=16) (actual time=123.819..149.086 rows=41907 loops=1)" " -> Sort (cost=11627.38..11878.79 rows=100564 width=16) (actual time=123.807..138.530 rows=41907 loops=1)" " Sort Key: t8.movie_id" " Sort Method: external merge Disk: 3136kB" " -> Seq Scan on complete_cast t8 (cost=0.00..1549.64 rows=100564 width=16) (actual time=0.013..10.744 rows=100564 loops=1)" " -> Materialize (cost=1.08..1.15 rows=7 width=14) (actual time=0.016..0.029 rows=7 loops=1)" " -> Seq Scan on 
kind_type t9 (cost=0.00..1.07 rows=7 width=14) (actual time=0.011..0.013 rows=7 loops=1)" " -> Materialize (cost=412704.52..437969.09 rows=2021166 width=40) (actual time=3420.356..4278.545 rows=1028995 loops=1)" " -> Sort (cost=412704.52..417757.43 rows=2021166 width=40) (actual time=3420.349..3953.483 rows=1028995 loops=1)" " Sort Key: t10.movie_id" " Sort Method: external merge Disk: 90960kB" " -> Seq Scan on movie_companies t10 (cost=0.00..35214.66 rows=2021166 width=40) (actual time=13.271..566.893 rows=2021166 loops=1)" " -> Materialize (cost=3144407.19..3269057.42 rows=9972019 width=76) (actual time=65485.672..70083.219 rows=5039009 loops=1)" " -> Sort (cost=3144407.19..3169337.23 rows=9972019 width=76) (actual time=65485.667..68385.550 rows=5038999 loops=1)" " Sort Key: t11.movie_id" " Sort Method: external merge Disk: 735512kB" " -> Seq Scan on movie_info t11 (cost=0.00..212815.19 rows=9972019 width=76) (actual time=15.750..15715.608 rows=9972019 loops=1)" " -> Materialize (cost=207925.01..219867.92 rows=955433 width=50) (actual time=1483.989..1785.636 rows=429401 loops=1)" " -> Sort (cost=207925.01..210313.59 rows=955433 width=50) (actual time=1483.983..1654.165 rows=429401 loops=1)" " Sort Key: t19.movie_id" " Sort Method: external merge Disk: 31720kB" " -> Seq Scan on movie_info_idx t19 (cost=0.00..15047.33 rows=955433 width=50) (actual time=7.284..221.597 rows=955433 loops=1)" " -> Materialize (cost=501605.39..537645.64 rows=2883220 width=12) (actual time=5823.040..6868.242 rows=1597396 loops=1)" " -> Sort (cost=501605.39..508813.44 rows=2883220 width=12) (actual time=5823.026..6477.517 rows=1597396 loops=1)" " Sort Key: t12.movie_id" " Sort Method: external merge Disk: 78888kB" " -> Seq Scan on movie_keyword t12 (cost=0.00..44417.20 rows=2883220 width=12) (actual time=11.672..839.498 rows=2883220 loops=1)" " -> Materialize (cost=141143.93..152995.81 rows=948150 width=16) (actual time=1916.356..2253.004 rows=478358 loops=1)" " -> Sort (cost=141143.93..143514.31 rows=948150 width=16) (actual time=1916.344..2125.698 rows=478358 loops=1)" " Sort Key: t13.linked_movie_id" " Sort Method: external merge Disk: 29632kB" " -> Seq Scan on movie_link t13 (cost=0.00..14607.50 rows=948150 width=16) (actual time=27.610..297.962 rows=948150 loops=1)" " -> Hash (cost=1.18..1.18 rows=18 width=16) (actual time=0.020..0.020 rows=18 loops=1)" " -> Seq Scan on link_type t14 (cost=0.00..1.18 rows=18 width=16) (actual time=0.010..0.012 rows=18 loops=1)" " -> Hash (cost=1537.10..1537.10 rows=91010 width=24) (actual time=54.055..54.055 rows=91010 loops=1)" " -> Seq Scan on keyword t15 (cost=0.00..1537.10 rows=91010 width=24) (actual time=0.006..14.703 rows=91010 loops=1)" " -> Hash (cost=4585.61..4585.61 rows=245461 width=42) (actual time=445.269..445.269 rows=245461 loops=1)" " -> Seq Scan on company_name t16 (cost=0.00..4585.61 rows=245461 width=42) (actual time=12.037..309.961 rows=245461 loops=1)" " -> Hash (cost=1.04..1.04 rows=4 width=25) (actual time=0.013..0.013 rows=4 loops=1)" " -> Seq Scan on company_type t17 (cost=0.00..1.04 rows=4 width=25) (actual time=0.009..0.010 rows=4 loops=1)" " -> Hash (cost=1.04..1.04 rows=4 width=13) (actual time=0.006..0.006 rows=4 loops=1)" " -> Seq Scan on comp_cast_type t18 (cost=0.00..1.04 rows=4 width=13) (actual time=0.002..0.003 rows=4 loops=1)" "Total runtime: 147055.016 ms" Is there anyway to force the name.id = 2003 before it tries to join all the tables together? 
    As you can see, the end result is only 4 tuples, so it seems like this should be a fast join if the planner used the available indexes after first narrowing things down with the name clause, however complex the query is.

    Read the article

  • Django Deploy trouble

    - by i-Malignus
    Well, i've walking around this for a couples of days now... I think is time to ask for some help, i think my installation is ok... Server OS: Centos 5 Python -v 2.6.5 Django -v (1, 1, 1, 'final', 0) my apache conf: <VirtualHost *:80> DocumentRoot /opt/workshop ServerName taller.antell.com.py WSGIScriptAlias / /opt/workshop/workshop.wsgi WSGIDaemonProcess taller.antell.com.py user=ignacio group=ignacio processes=2 threads=25 ErrorLog /opt/workshop/apache.error.log CustomLog /opt/workshop/apache.custom.log combined <Directory "/opt/workshop"> Options +ExecCGI +FollowSymLinks -Indexes -MultiViews AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> my mod_wsgi conf: import os import sys sys.path.append('/opt/workshop') os.environ['DJANGO_SETTINGS_MODULE'] = 'workshop.settings' os.environ['PYTHON_EGG_CACHE'] = '/tmp/.python-eggs' import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler( ) the error that i'm getting on my apache error log is: [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] mod_wsgi (pid=11459): Exception occurred processing WSGI script '/opt/workshop/workshop.wsgi'. [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] Traceback (most recent call last): [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/core/handlers/wsgi.py", line 241, in __call__ [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] response = self.get_response(request) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/core/handlers/base.py", line 134, in get_response [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return self.handle_uncaught_exception(request, resolver, exc_info) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/core/handlers/base.py", line 154, in handle_uncaught_exception [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return debug.technical_500_response(request, *exc_info) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/views/debug.py", line 40, in technical_500_response [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] html = reporter.get_traceback_html() [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/views/debug.py", line 114, in get_traceback_html [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return t.render(c) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/template/__init__.py", line 178, in render [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return self.nodelist.render(context) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/template/__init__.py", line 779, in render [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] bits.append(self.render_node(node, context)) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/template/debug.py", line 81, in render_node [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] raise wrapped [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] TemplateSyntaxError: Caught an exception while rendering: No module named vehicles [Wed Apr 21 15:17:48 2010] [error] 
[client 190.128.226.122] [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] Original Traceback (most recent call last): [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/template/debug.py", line 71, in render_node [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] result = node.render(context) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/template/debug.py", line 87, in render [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] output = force_unicode(self.filter_expression.resolve(context)) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/template/__init__.py", line 572, in resolve [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] new_obj = func(obj, *arg_vals) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/template/defaultfilters.py", line 687, in date [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return format(value, arg) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/dateformat.py", line 269, in format [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return df.format(format_string) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/dateformat.py", line 30, in format [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] pieces.append(force_unicode(getattr(self, piece)())) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/dateformat.py", line 175, in r [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return self.format('D, j M Y H:i:s O') [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/dateformat.py", line 30, in format [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] pieces.append(force_unicode(getattr(self, piece)())) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/encoding.py", line 71, in force_unicode [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] s = unicode(s) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/functional.py", line 201, in __unicode_cast [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return self.__func(*self.__args, **self.__kw) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/translation/__init__.py", line 62, in ugettext [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return real_ugettext(message) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/translation/trans_real.py", line 286, in ugettext [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return do_translate(message, 'ugettext') [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/translation/trans_real.py", line 276, in do_translate [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] _default = translation(settings.LANGUAGE_CODE) [Wed Apr 21 15:17:48 2010] 
[error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/translation/trans_real.py", line 194, in translation
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] default_translation = _fetch(settings.LANGUAGE_CODE)
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/translation/trans_real.py", line 180, in _fetch
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] app = import_module(appname)
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/importlib.py", line 35, in import_module
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] __import__(name)
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] ImportError: No module named vehicles
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122]
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] mod_wsgi (pid=11463): Exception occurred processing WSGI script '/opt/workshop/workshop.wsgi'.
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] Traceback (most recent call last):
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/core/handlers/wsgi.py", line 241, in __call__
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] response = self.get_response(request)
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/core/handlers/base.py", line 73, in get_response
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] response = middleware_method(request)
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/middleware/common.py", line 56, in process_request
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] if (not _is_valid_path(request.path_info) and
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/middleware/common.py", line 142, in _is_valid_path
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] urlresolvers.resolve(path)
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/core/urlresolvers.py", line 303, in resolve
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return get_resolver(urlconf).resolve(path)
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/core/urlresolvers.py", line 218, in resolve
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] sub_match = pattern.resolve(new_path)
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/core/urlresolvers.py", line 216, in resolve
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] for pattern in self.url_patterns:
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/core/urlresolvers.py", line 245, in _get_url_patterns
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/core/urlresolvers.py", line 240, in _get_urlconf_module
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] self._urlconf_module = import_module(self.urlconf_name)
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/importlib.py", line 35, in import_module
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] __import__(name)
[Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] ImportError: No module named vehicles.urls

Please give me a hand, I'm stuck... Obviously the problem is with my vehicles module (the only one in the app). The other odd thing is that when I try

[root@localhost workshop]# python manage.py runserver 0:8000

the app runs perfectly, so I think the problem is somewhere around the WSGI configuration; something is not clicking... Thanks...

Update: the workshop dir looks like this:

[root@localhost workshop]# ls -l
total 504
-rw-r--r-- 1 root root 22706 Apr 21 15:17 apache.custom.log
-rw-r--r-- 1 root root 408141 Apr 21 15:17 apache.error.log
-rw-r--r-- 1 root root 0 Apr 17 10:56 __init__.py
-rw-r--r-- 1 root root 124 Apr 21 11:09 __init__.pyc
-rw-r--r-- 1 root root 542 Apr 17 10:56 manage.py
-rw-r--r-- 1 root root 3326 Apr 17 10:56 settings.py
-rw-r--r-- 1 root root 2522 Apr 21 11:09 settings.pyc
drw-r--r-- 4 root root 4096 Apr 17 10:56 templates
-rw-r--r-- 1 root root 381 Apr 21 13:42 urls.py
-rw-r--r-- 1 root root 398 Apr 21 13:00 urls.pyc
drw-r--r-- 2 root root 4096 Apr 21 13:44 vehicles
-rw-r--r-- 1 root root 38912 Apr 17 10:56 workshop.db
-rw-r--r-- 1 root root 263 Apr 21 15:30 workshop.wsgi

and the vehicles dir:

[root@localhost vehicles]# ls -l
total 52
-rw-r--r-- 1 root root 390 Apr 17 10:56 admin.py
-rw-r--r-- 1 root root 967 Apr 21 13:00 admin.pyc
-rw-r--r-- 1 root root 732 Apr 17 10:56 forms.py
-rw-r--r-- 1 root root 2086 Apr 21 13:00 forms.pyc
-rw-r--r-- 1 root root 0 Apr 17 10:56 __init__.py
-rw-r--r-- 1 root root 133 Apr 21 11:36 __init__.pyc
-rw-r--r-- 1 root root 936 Apr 17 10:56 models.py
-rw-r--r-- 1 root root 1827 Apr 21 11:36 models.pyc
-rw-r--r-- 1 root root 514 Apr 17 10:56 tests.py
-rw-r--r-- 1 root root 989 Apr 21 13:44 tests.pyc
-rw-r--r-- 1 root root 1035 Apr 17 10:56 urls.py
-rw-r--r-- 1 root root 1935 Apr 21 13:00 urls.pyc
-rw-r--r-- 1 root root 3164 Apr 17 10:56 views.py
-rw-r--r-- 1 root root 4081 Apr 21 13:00 views.pyc

Update 2: this is my settings.py:

# Django settings for workshop project.
DEBUG = True
TEMPLATE_DEBUG = DEBUG

ADMINS = (
    ('Ignacio Rojas', '[email protected]'),
    ('Fabian Biedermann', '[email protected]'),
)
MANAGERS = ADMINS

DATABASE_ENGINE = 'sqlite3'
DATABASE_NAME = '/opt/workshop/workshop.db'
DATABASE_USER = ''
DATABASE_PASSWORD = ''
DATABASE_HOST = ''
DATABASE_PORT = ''

# Local time zone for this installation. Choices can be found here:
# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name
# although not all choices may be available on all operating systems.
# If running in a Windows environment this must be set to the same as your
# system time zone.
TIME_ZONE = 'America/Asuncion'

# Language code for this installation. All choices can be found here:
# http://www.i18nguy.com/unicode/language-identifiers.html
LANGUAGE_CODE = 'es-py'

SITE_ID = 1

# If you set this to False, Django will make some optimizations so as not
# to load the internationalization machinery.
USE_I18N = True

# Absolute path to the directory that holds media.
# Example: "/home/media/media.lawrence.com/"
MEDIA_ROOT = ''

# URL that handles the media served from MEDIA_ROOT. Make sure to use a
# trailing slash if there is a path component (optional in other cases).
# Examples: "http://media.lawrence.com", "http://example.com/media/"
MEDIA_URL = ''

# URL prefix for admin media -- CSS, JavaScript and images. Make sure to use a
# trailing slash.
# Examples: "http://foo.com/media/", "/media/".
ADMIN_MEDIA_PREFIX = '/media/'

# Make this unique, and don't share it with anybody.
SECRET_KEY = '11y0_jb=+b4^nq@2-fo#g$-ihk5*v&d5-8hg_y0i@*9$w8jalp'

MIDDLEWARE_CLASSES = (
    'django.middleware.common.CommonMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
)

ROOT_URLCONF = 'workshop.urls'

TEMPLATE_DIRS = (
    # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
    # Always use forward slashes, even on Windows.
    # Don't forget to use absolute paths, not relative paths.
    "/opt/workshop/templates"
)

INSTALLED_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'workshop.vehicles',
)

TEMPLATE_CONTEXT_PROCESSORS = (
    'django.core.context_processors.auth',
    'django.core.context_processors.debug',
    'django.core.context_processors.i18n',
    'django.core.context_processors.media',
)
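    For reference, with a layout like the one above a mod_wsgi script normally has to put both /opt and /opt/workshop on sys.path itself before importing Django; runserver works because manage.py sets that path up for you, while nothing does it under mod_wsgi unless the .wsgi file does. A minimal sketch of what such a workshop.wsgi could look like follows. The paths are taken from the listings above; everything else is an assumption about the setup, not the asker's actual file:

        # /opt/workshop/workshop.wsgi -- hypothetical sketch, not the original file
        import os
        import sys

        # Parent directory so 'workshop.settings' and 'workshop.vehicles' import,
        # plus the project directory itself so bare 'vehicles' imports also resolve.
        sys.path.insert(0, '/opt')
        sys.path.insert(0, '/opt/workshop')

        os.environ['DJANGO_SETTINGS_MODULE'] = 'workshop.settings'

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()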

    Read the article

  • Images missing after moving Django to new server

    - by miszczu
    I'm moving Django project to new server. I'm newbie in Django, and I don't know where should be upload folder. There are all images which should be displayed on website. In config file I haven't seen upload folder I could specify, so I'm guessing it always should be the same location for django projects or I just can't find it. Locations are saved in database. When I've put uploaded files into media folder, so url was like domain.co.uk/media/upload/media/images/year/month/day/image_name.ext and the same is on the old website, images on website ware still missing. All images are visible if I put url by hand, but django doesn't seems to see files. Also I check django log file: 2012-05-30 09:13:33,393 ERROR render: Thumbnail tag failed: [in /usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/templatetags/thumbnail.py (line 49)] Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/templatetags/thumbnail.py", line 45, in render return self._render(context) File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/templatetags/thumbnail.py", line 97, in _render file_, geometry, **options File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/base.py", line 50, in get_thumbnail cached = default.kvstore.get(thumbnail) File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/kvstores/base.py", line 25, in get return self._get(image_file.key) File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/kvstores/base.py", line 123, in _get value = self._get_raw(add_prefix(key, identity)) File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/kvstores/cached_db_kvstore.py", line 26, in _get_raw value = KVStoreModel.objects.get(key=key).value File "/usr/lib/python2.6/site-packages/django/db/models/manager.py", line 132, in get return self.get_query_set().get(*args, **kwargs) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 344, in get num = len(clone) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 82, in __len__ self._result_cache = list(self.iterator()) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 273, in iterator for row in compiler.results_iter(): File "/usr/lib/python2.6/site-packages/django/db/models/sql/compiler.py", line 680, in results_iter for rows in self.execute_sql(MULTI): File "/usr/lib/python2.6/site-packages/django/db/models/sql/compiler.py", line 735, in execute_sql cursor.execute(sql, params) File "/usr/lib/python2.6/site-packages/django/db/backends/util.py", line 34, in execute return self.cursor.execute(sql, params) File "/usr/lib/python2.6/site-packages/django/db/backends/mysql/base.py", line 86, in execute return self.cursor.execute(query, args) File "/usr/lib64/python2.6/site-packages/MySQLdb/cursors.py", line 174, in execute self.errorhandler(self, exc, value) File "/usr/lib64/python2.6/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler raise errorclass, errorvalue DatabaseError: (1146, "Table 'thumbnail_kvstore' doesn't exist") 2012-05-30 09:13:33,396 DEBUG execute: (0.000) SELECT `freetext_freetext`.`id`, `freetext_freetext`.`key`, `freetext_freetext`.`content`, `freetext_freetext`.`active` FROM `freetext_freetext` WHERE (`freetext_freetext`.`active` = True AND `freetext_freetext`.`key` = office-closed-message ); args=(True, u'office-closed-message') 
[in /usr/lib/python2.6/site-packages/django/db/backends/util.py (line 44)] 2012-05-30 09:13:33,399 DEBUG execute: (0.000) SELECT `menus_menu`.`id`, `menus_menu`.`name`, `menus_menu`.`slug`, `menus_menu`.`base_url`, `menus_menu`.`description`, `menus_menu`.`enabled` FROM `menus_menu` WHERE (`menus_menu`.`enabled` = True AND `menus_menu`.`slug` = about ); args=(True, u'about') [in /usr/lib/python2.6/site-packages/django/db/backends/util.py (line 44)] 2012-05-30 09:13:33,401 DEBUG execute: (0.000) SELECT `menus_menuitem`.`id`, `menus_menuitem`.`menu_id`, `menus_menuitem`.`title`, `menus_menuitem`.`url`, `menus_menuitem`.`order` FROM `menus_menuitem` INNER JOIN `menus_menu` ON (`menus_menuitem`.`menu_id` = `menus_menu`.`id`) WHERE `menus_menu`.`slug` = about ORDER BY `menus_menuitem`.`order` ASC; args=(u'about',) [in /usr/lib/python2.6/site-packages/django/db/backends/util.py (line 44)] 2012-05-30 09:13:33,404 DEBUG execute: (0.000) SELECT `freetext_freetext`.`id`, `freetext_freetext`.`key`, `freetext_freetext`.`content`, `freetext_freetext`.`active` FROM `freetext_freetext` WHERE (`freetext_freetext`.`active` = True AND `freetext_freetext`.`key` = contactdetails-footer ); args=(True, u'contactdetails-footer') [in /usr/lib/python2.6/site-packages/django/db/backends/util.py (line 44)] I checked database and there is no table calls thumbnail_kvstore, but I have database backup, and in backup files this table doesn't exist. All uploaded files I get are in media/uploads/media/. Also I'm getting errors on some pages: Syntax error. Expected: ``thumbnail source geometry [key1=val1 key2=val2...] as var`` /usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/templatetags/thumbnail.py in __init__, line 72 In template /var/www/vhosts/domain.co.uk/sites/apps/shop/products/templates/products/product_detail.html, error at line 34 {% thumbnail image.file "800x700" detail as zoom %} Maybe some modules I install are not in the right version. Dont know how to fix it. Im using, CentOS 6, mod_wsgi, apache, python 2.6. Update 1.0: On the old server was Django 1.3, on the new one is Django 1.3.1 Update 1.1: I this i know where is the problem. I tried python manage.py syncdb and this is output: Syncing... Creating tables ... The following content types are stale and need to be deleted: orders | ordercontact Any objects related to these content types by a foreign key will also be deleted. Are you sure you want to delete these content types? If you're unsure, answer 'no'. Type 'yes' to continue, or 'no' to cancel: no Installing custom SQL ... Installing indexes ... No fixtures found. Synced: > django.contrib.auth > django.contrib.contenttypes > django.contrib.sessions > django.contrib.sites > django.contrib.messages > django.contrib.admin > django.contrib.admindocs > django.contrib.markup > django.contrib.sitemaps > django.contrib.redirects > django_filters > freetext > sorl.thumbnail > django_extensions > south > currencies > pagination > tagging > honeypot > core > faq > logentry > menus > news > shop > shop.cart > shop.orders Not synced (use migrations): - dbtemplates - contactform - links - media - pages - popularity - testimonials - shop.brands - shop.collections - shop.discount - shop.pricing - shop.product_types - shop.products - shop.shipping - shop.tax (use ./manage.py migrate to migrate these) Next I run python manage.py migrate, and thats what i get: Running migrations for dbtemplates: - Migrating forwards to 0002_auto__del_unique_template_name. > dbtemplates:0001_initial ! 
Error found during real run of migration! Aborting. ! Since you have a database that does not support running ! schema-altering statements in transactions, we have had ! to leave it in an interim state between migrations. ! You *might* be able to recover with: = DROP TABLE `django_template` CASCADE; [] = DROP TABLE `django_template_sites` CASCADE; [] ! The South developers regret this has happened, and would ! like to gently persuade you to consider a slightly ! easier-to-deal-with DBMS. ! NOTE: The error which caused the migration to fail is further up. Traceback (most recent call last): File "manage.py", line 13, in <module> execute_manager(settings) File "/usr/lib/python2.6/site-packages/django/core/management/__init__.py", line 438, in execute_manager utility.execute() File "/usr/lib/python2.6/site-packages/django/core/management/__init__.py", line 379, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/lib/python2.6/site-packages/django/core/management/base.py", line 191, in run_from_argv self.execute(*args, **options.__dict__) File "/usr/lib/python2.6/site-packages/django/core/management/base.py", line 220, in execute output = self.handle(*args, **options) File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/management/commands/migrate.py", line 105, in handle ignore_ghosts = ignore_ghosts, File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/__init__.py", line 191, in migrate_app success = migrator.migrate_many(target, workplan, database) File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 221, in migrate_many result = migrator.__class__.migrate_many(migrator, target, migrations, database) File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 292, in migrate_many result = self.migrate(migration, database) File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 125, in migrate result = self.run(migration) File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 99, in run return self.run_migration(migration) File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 81, in run_migration migration_function() File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 57, in <lambda> return (lambda: direction(orm)) File "/usr/lib/python2.6/site-packages/django_dbtemplates-1.3-py2.6.egg/dbtemplates/migrations/0001_initial.py", line 18, in forwards ('last_changed', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now)), File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/db/generic.py", line 226, in create_table ', '.join([col for col in columns if col]), File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/db/generic.py", line 150, in execute cursor.execute(sql, params) File "/usr/lib/python2.6/site-packages/django/db/backends/util.py", line 34, in execute return self.cursor.execute(sql, params) File "/usr/lib/python2.6/site-packages/django/db/backends/mysql/base.py", line 86, in execute return self.cursor.execute(query, args) File "/usr/lib64/python2.6/site-packages/MySQLdb/cursors.py", line 174, in execute self.errorhandler(self, exc, value) File "/usr/lib64/python2.6/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler raise errorclass, errorvalue _mysql_exceptions.OperationalError: (1050, "Table 
'django_template' already exists")

I also ran python manage.py migrate --list, and the output is:

dbtemplates
 (*) 0001_initial
 (*) 0002_auto__del_unique_template_name
contactform
 (*) 0001_initial
 (*) 0002_auto__add_callback
 (*) 0003_auto__add_field_callback_notes
 (*) 0004_auto__add_field_callback_is_closed__add_field_callback_closed
 (*) 0005_auto__add_field_callback_url
 (*) 0006_auto__add_contact
 (*) 0007_auto__add_field_contact_category
 (*) 0008_auto__add_field_contact_url
links
 (*) 0001_initial
 (*) 0002_auto__add_field_category_enabled__add_field_category_order
media
 (*) 0001_initial
 (*) 0002_auto__del_field_image_external_url__add_field_image_link_url__del_fiel
 (*) 0003_add_model_FileAttachment
 (*) 0004_auto__chg_field_file_slug__chg_field_image_slug
 (*) 0005_auto__chg_field_image_file
 (*) 0006_auto__chg_field_file_file
pages
 (*) 0001_initial
 (*) 0002_auto__chg_field_page_meta_description__chg_field_page_meta_title__chg_
 (*) 0003_auto__add_field_page_show_in_sitemap
 (*) 0004_auto__add_field_page_changefreq__add_field_page_priority
popularity
 (*) 0001_initial
testimonials
 (*) 0001_initial
 (*) 0002_auto__add_field_testimonial_is_featured
brands
 (*) 0001_initial
 (*) 0002_auto__add_field_brand_template
 (*) 0003_auto__chg_field_brand_meta_description__chg_field_brand_meta_title__ch
 (*) 0004_auto__add_field_brand_url
 (*) 0005_auto__del_field_brand_image__add_field_brand_logo
collections
 (*) 0001_initial
 (*) 0002_auto__add_field_collection_discount
 (*) 0003_auto__chg_field_collection_meta_description__chg_field_collection_meta
 (*) 0004_auto__add_field_collection_is_featured
 (*) 0005_auto__add_field_collection_order
discount
 (*) 0001_initial
 (*) 0002_added_field_discount_description
 (*) 0003_auto__add_field_discountvoucher_automatic
 (*) 0004_auto__add_field_discountvoucher_collection
 (*) 0005_auto__del_field_discountvoucher_collection
 (*) 0006_auto__chg_field_discountvoucher_expiry_date
pricing
 (*) 0001_initial
 (*) 0002_auto__add_pricingrule
product_types
 (*) 0001_initial
 (*) 0002_auto__add_field_producttype_meta_title__add_field_producttype_meta_des
 (*) 0003_auto__add_field_producttype_summary__add_field_producttype_description
products
 (*) 0001_initial
 (*) 0002_auto__del_field_product_is_featured
 (*) 0003_auto__chg_field_product_meta_keywords__chg_field_product_meta_descript
 (*) 0004_auto
shipping
 (*) 0001_initial
 (*) 0002_auto__add_field_shippingmethod_includes_tax__add_field_shippingmethod_
 (*) 0003_auto__add_field_shippingmethod_order
 (*) 0004_auto__del_field_shippingmethod_tax_rate__del_field_shippingmethod_incl
 (*) 0005_auto__del_field_shippingrule_enabled
tax
 (*) 0001_initial
 (*) 0002_auto__add_field_taxrate_internal_name
 (*) 0003_initial_internal_names
 (*) 0004_auto__add_unique_taxrate_internal_name
 (*) 0005_force_unique_taxrate_name
 (*) 0006_auto__add_unique_taxrate_name

After that, some image sources looked like this: src="cache/1e/bd/1ebd719910aa843238028edd5fe49e71.jpg" Could anyone please help me with syncdb?
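    One possible recovery path, sketched here with Django's management API (equivalent to running the manage.py commands by hand): let syncdb create the missing non-South tables such as thumbnail_kvstore, then record South's already-applied dbtemplates migration with --fake before migrating the rest. The app label and version come from the output above; whether faking 0001 is safe depends on the existing django_template table actually matching that migration, so treat this as an assumption rather than a fix:

        # Hypothetical recovery script, run from the project directory
        # (assumes Django 1.3.x with South 0.7.3, as in the traceback above).
        import os
        os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'settings')

        from django.core.management import call_command

        # Create any missing non-South tables, e.g. sorl.thumbnail's
        # thumbnail_kvstore, which the log says does not exist.
        call_command('syncdb', interactive=False)

        # django_template already exists, so mark South's initial dbtemplates
        # migration as applied instead of trying to create the table again.
        call_command('migrate', 'dbtemplates', '0001', fake=True)

        # Then run the remaining migrations normally.
        call_command('migrate')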

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.
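    Before resizing any of the buffers above, it may help to look at what the server itself reports: the Qcache_lowmem_prunes counter mentioned in the config comments, the key cache miss rate and the number of running threads all come from SHOW GLOBAL STATUS. A small, hypothetical check script using the MySQLdb module (the connection details are placeholders, not taken from the question):

        # Hypothetical status check; host/user/passwd are placeholders.
        import MySQLdb

        conn = MySQLdb.connect(host='localhost', user='root', passwd='', db='mysql')
        cur = conn.cursor()
        cur.execute("SHOW GLOBAL STATUS")
        status = dict(cur.fetchall())

        key_reads = float(status['Key_reads'])              # index blocks read from disk
        key_requests = float(status['Key_read_requests'])   # index block read requests
        print 'key cache miss rate: %.4f' % (key_reads / max(key_requests, 1.0))
        print 'Qcache_lowmem_prunes:', status['Qcache_lowmem_prunes']
        print 'Threads_running:', status['Threads_running']
        conn.close()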

    Read the article

  • ASP.NET MVC2 custom rolemanager (webconfig problem)

    - by ile
    Structure of the web: SAMembershipProvider.cs namespace User.Membership { public class SAMembershipProvider : MembershipProvider { #region - Properties - private int NewPasswordLength { get; set; } private string ConnectionString { get; set; } //private MachineKeySection MachineKey { get; set; } //Used when determining encryption key values. public bool enablePasswordReset { get; set; } public bool enablePasswordRetrieval { get; set; } public bool requiresQuestionAndAnswer { get; set; } public bool requiresUniqueEmail { get; set; } public int maxInvalidPasswordAttempts { get; set; } public int passwordAttemptWindow { get; set; } public MembershipPasswordFormat passwordFormat { get; set; } public int minRequiredNonAlphanumericCharacters { get; set; } public int minRequiredPasswordLength { get; set; } public string passwordStrengthRegularExpression { get; set; } public override string ApplicationName { get; set; } // Indicates whether passwords can be retrieved using the provider's GetPassword method. // This property is read-only. public override bool EnablePasswordRetrieval { get { return enablePasswordRetrieval; } } // Indicates whether passwords can be reset using the provider's ResetPassword method. // This property is read-only. public override bool EnablePasswordReset { get { return enablePasswordReset; } } // Indicates whether a password answer must be supplied when calling the provider's GetPassword and ResetPassword methods. // This property is read-only. public override bool RequiresQuestionAndAnswer { get { return requiresQuestionAndAnswer; } } public override int MaxInvalidPasswordAttempts { get { return maxInvalidPasswordAttempts; } } // For a description, see MaxInvalidPasswordAttempts. // This property is read-only. public override int PasswordAttemptWindow { get { return passwordAttemptWindow; } } // Indicates whether each registered user must have a unique e-mail address. // This property is read-only. public override bool RequiresUniqueEmail { get { return requiresUniqueEmail; } } public override MembershipPasswordFormat PasswordFormat { get { return passwordFormat; } } // The minimum number of characters required in a password. // This property is read-only. public override int MinRequiredPasswordLength { get { return minRequiredPasswordLength; } } // The minimum number of non-alphanumeric characters required in a password. // This property is read-only. public override int MinRequiredNonAlphanumericCharacters { get { return minRequiredNonAlphanumericCharacters; } } // A regular expression specifying a pattern to which passwords must conform. // This property is read-only. public override string PasswordStrengthRegularExpression { get { return passwordStrengthRegularExpression; } } #endregion #region - Methods - public override void Initialize(string name, NameValueCollection config) { throw new NotImplementedException(); } public override bool ChangePassword(string username, string oldPassword, string newPassword) { throw new NotImplementedException(); } public override bool ChangePasswordQuestionAndAnswer(string username, string password, string newPasswordQuestion, string newPasswordAnswer) { throw new NotImplementedException(); } // Takes, as input, a user name, password, e-mail address, and other information and adds a new user // to the membership data source. CreateUser returns a MembershipUser object representing the newly // created user. 
It also accepts an out parameter (in Visual Basic, ByRef) that returns a // MembershipCreateStatus value indicating whether the user was successfully created or, if the user // was not created, the reason why. If the user was not created, CreateUser returns null. // Before creating a new user, CreateUser calls the provider's virtual OnValidatingPassword method to // validate the supplied password. It then creates the user or cancels the action based on the outcome of the call. public override MembershipUser CreateUser(string username, string password, string email, string passwordQuestion, string passwordAnswer, bool isApproved, object providerUserKey, out MembershipCreateStatus status) { throw new NotImplementedException(); } public override bool DeleteUser(string username, bool deleteAllRelatedData) { throw new NotImplementedException(); } public override MembershipUserCollection FindUsersByEmail(string emailToMatch, int pageIndex, int pageSize, out int totalRecords) { throw new NotImplementedException(); } // Returns a MembershipUserCollection containing MembershipUser objects representing users whose user names // match the usernameToMatch input parameter. Wildcard syntax is data source-dependent. MembershipUser objects // in the MembershipUserCollection are sorted by user name. If FindUsersByName finds no matching users, it // returns an empty MembershipUserCollection. // For an explanation of the pageIndex, pageSize, and totalRecords parameters, see the GetAllUsers method. public override MembershipUserCollection FindUsersByName(string usernameToMatch, int pageIndex, int pageSize, out int totalRecords) { throw new NotImplementedException(); } // Returns a MembershipUserCollection containing MembershipUser objects representing all registered users. If // there are no registered users, GetAllUsers returns an empty MembershipUserCollection // The results returned by GetAllUsers are constrained by the pageIndex and pageSize input parameters. pageSize // specifies the maximum number of MembershipUser objects to return. pageIndex identifies which page of results // to return. Page indexes are 0-based. // // GetAllUsers also takes an out parameter (in Visual Basic, ByRef) named totalRecords that, on return, holds // a count of all registered users. public override MembershipUserCollection GetAllUsers(int pageIndex, int pageSize, out int totalRecords) { throw new NotImplementedException(); } // Returns a count of users that are currently online-that is, whose LastActivityDate is greater than the current // date and time minus the value of the membership service's UserIsOnlineTimeWindow property, which can be read // from Membership.UserIsOnlineTimeWindow. UserIsOnlineTimeWindow specifies a time in minutes and is set using // the <membership> element's userIsOnlineTimeWindow attribute. public override int GetNumberOfUsersOnline() { throw new NotImplementedException(); } // Takes, as input, a user name and a password answer and returns that user's password. If the user name is not // valid, GetPassword throws a ProviderException. // Before retrieving a password, GetPassword verifies that EnablePasswordRetrieval is true. If // EnablePasswordRetrieval is false, GetPassword throws a NotSupportedException. If EnablePasswordRetrieval is // true but the password format is hashed, GetPassword throws a ProviderException since hashed passwords cannot, // by definition, be retrieved. 
A membership provider should also throw a ProviderException from Initialize if // EnablePasswordRetrieval is true but the password format is hashed. // // GetPassword also checks the value of the RequiresQuestionAndAnswer property before retrieving a password. If // RequiresQuestionAndAnswer is true, GetPassword compares the supplied password answer to the stored password // answer and throws a MembershipPasswordException if the two don't match. GetPassword also throws a // MembershipPasswordException if the user whose password is being retrieved is currently locked out. public override string GetPassword(string username, string answer) { throw new NotImplementedException(); } // Takes, as input, a user name or user ID (the method is overloaded) and a Boolean value indicating whether // to update the user's LastActivityDate to show that the user is currently online. GetUser returns a MembershipUser // object representing the specified user. If the user name or user ID is invalid (that is, if it doesn't represent // a registered user) GetUser returns null (Nothing in Visual Basic). public override MembershipUser GetUser(object providerUserKey, bool userIsOnline) { throw new NotImplementedException(); } // Takes, as input, a user name or user ID (the method is overloaded) and a Boolean value indicating whether to // update the user's LastActivityDate to show that the user is currently online. GetUser returns a MembershipUser // object representing the specified user. If the user name or user ID is invalid (that is, if it doesn't represent // a registered user) GetUser returns null (Nothing in Visual Basic). public override MembershipUser GetUser(string username, bool userIsOnline) { throw new NotImplementedException(); } // Takes, as input, an e-mail address and returns the first registered user name whose e-mail address matches the // one supplied. // If it doesn't find a user with a matching e-mail address, GetUserNameByEmail returns an empty string. public override string GetUserNameByEmail(string email) { throw new NotImplementedException(); } // Virtual method called when a password is created. The default implementation in MembershipProvider fires a // ValidatingPassword event, so be sure to call the base class's OnValidatingPassword method if you override // this method. The ValidatingPassword event allows applications to apply additional tests to passwords by // registering event handlers. // A custom provider's CreateUser, ChangePassword, and ResetPassword methods (in short, all methods that record // new passwords) should call this method. protected override void OnValidatingPassword(ValidatePasswordEventArgs e) { base.OnValidatingPassword(e); } // Takes, as input, a user name and a password answer and replaces the user's current password with a new, random // password. ResetPassword then returns the new password. A convenient mechanism for generating a random password // is the Membership.GeneratePassword method. // If the user name is not valid, ResetPassword throws a ProviderException. ResetPassword also checks the value of // the RequiresQuestionAndAnswer property before resetting a password. If RequiresQuestionAndAnswer is true, // ResetPassword compares the supplied password answer to the stored password answer and throws a // MembershipPasswordException if the two don't match. // // Before resetting a password, ResetPassword verifies that EnablePasswordReset is true. If EnablePasswordReset is // false, ResetPassword throws a NotSupportedException. 
If the user whose password is being changed is currently // locked out, ResetPassword throws a MembershipPasswordException. // // Before resetting a password, ResetPassword calls the provider's virtual OnValidatingPassword method to validate // the new password. It then resets the password or cancels the action based on the outcome of the call. If the new // password is invalid, ResetPassword throws a ProviderException. // // Following a successful password reset, ResetPassword updates the user's LastPasswordChangedDate. public override string ResetPassword(string username, string answer) { throw new NotImplementedException(); } // Unlocks (that is, restores login privileges for) the specified user. UnlockUser returns true if the user is // successfully unlocked. Otherwise, it returns false. If the user is already unlocked, UnlockUser simply returns true. public override bool UnlockUser(string userName) { throw new NotImplementedException(); } // Takes, as input, a MembershipUser object representing a registered user and updates the information stored for // that user in the membership data source. If any of the input submitted in the MembershipUser object is not valid, // UpdateUser throws a ProviderException. // Note that UpdateUser is not obligated to allow all the data that can be encapsulated in a MembershipUser object to // be updated in the data source. public override void UpdateUser(MembershipUser user) { throw new NotImplementedException(); } // Takes, as input, a user name and a password and verifies that they are valid-that is, that the membership data // source contains a matching user name and password. ValidateUser returns true if the user name and password are // valid, if the user is approved (that is, if MembershipUser.IsApproved is true), and if the user isn't currently // locked out. Otherwise, it returns false. // Following a successful validation, ValidateUser updates the user's LastLoginDate and fires an // AuditMembershipAuthenticationSuccess Web event. Following a failed validation, it fires an // // AuditMembershipAuthenticationFailure Web event. public override bool ValidateUser(string username, string password) { throw new NotImplementedException(); //if (string.IsNullOrEmpty(password.Trim())) return false; //string hash = EncryptPassword(password); //User user = _repository.GetByUserName(username); //if (user == null) return false; //if (user.Password == hash) //{ // User = user; // return true; //} //return false; } #endregion /// <summary> /// Procuses an MD5 hash string of the password /// </summary> /// <param name="password">password to hash</param> /// <returns>MD5 Hash string</returns> protected string EncryptPassword(string password) { //we use codepage 1252 because that is what sql server uses byte[] pwdBytes = Encoding.GetEncoding(1252).GetBytes(password); byte[] hashBytes = System.Security.Cryptography.MD5.Create().ComputeHash(pwdBytes); return Encoding.GetEncoding(1252).GetString(hashBytes); } } // End Class } SARoleProvider.cs namespace User.Membership { public class SARoleProvider : RoleProvider { #region - Properties - // The name of the application using the role provider. ApplicationName is used to scope // role data so that applications can choose whether to share role data with other applications. // This property can be read and written. 
public override string ApplicationName { get; set; } #endregion #region - Methods - public override void Initialize(string name, NameValueCollection config) { throw new NotImplementedException(); } // Takes, as input, a list of user names and a list of role names and adds the specified users to // the specified roles. // AddUsersToRoles throws a ProviderException if any of the user names or role names do not exist. // If any user name or role name is null (Nothing in Visual Basic), AddUsersToRoles throws an // ArgumentNullException. If any user name or role name is an empty string, AddUsersToRoles throws // an ArgumentException. public override void AddUsersToRoles(string[] usernames, string[] roleNames) { throw new NotImplementedException(); } // Takes, as input, a role name and creates the specified role. // CreateRole throws a ProviderException if the role already exists, the role name contains a comma, // or the role name exceeds the maximum length allowed by the data source. public override void CreateRole(string roleName) { throw new NotImplementedException(); } // Takes, as input, a role name and a Boolean value that indicates whether to throw an exception if there // are users currently associated with the role, and then deletes the specified role. // If the throwOnPopulatedRole input parameter is true and the specified role has one or more members, // DeleteRole throws a ProviderException and does not delete the role. If throwOnPopulatedRole is false, // DeleteRole deletes the role whether it is empty or not. // // When DeleteRole deletes a role and there are users assigned to that role, it also removes users from the role. public override bool DeleteRole(string roleName, bool throwOnPopulatedRole) { throw new NotImplementedException(); } // Takes, as input, a search pattern and a role name and returns a list of users belonging to the specified role // whose user names match the pattern. Wildcard syntax is data-source-dependent and may vary from provider to // provider. User names are returned in alphabetical order. // If the search finds no matches, FindUsersInRole returns an empty string array (a string array with no elements). // If the role does not exist, FindUsersInRole throws a ProviderException. public override string[] FindUsersInRole(string roleName, string usernameToMatch) { throw new NotImplementedException(); } // Returns the names of all existing roles. If no roles exist, GetAllRoles returns an empty string array (a string // array with no elements). public override string[] GetAllRoles() { throw new NotImplementedException(); } // Takes, as input, a user name and returns the names of the roles to which the user belongs. // If the user is not assigned to any roles, GetRolesForUser returns an empty string array // (a string array with no elements). If the user name does not exist, GetRolesForUser throws a // ProviderException. public override string[] GetRolesForUser(string username) { throw new NotImplementedException(); //User user = _repository.GetByUserName(username); //string[] roles = new string[user.Role.Rights.Count + 1]; //roles[0] = user.Role.Description; //int idx = 0; //foreach (Right right in user.Role.Rights) // roles[++idx] = right.Description; //return roles; } public override string[] GetUsersInRole(string roleName) { throw new NotImplementedException(); } // Takes, as input, a role name and returns the names of all users assigned to that role. 
// If no users are associated with the specified role, GetUserInRole returns an empty string array (a string array with // no elements). If the role does not exist, GetUsersInRole throws a ProviderException. public override bool IsUserInRole(string username, string roleName) { throw new NotImplementedException(); //User user = _repository.GetByUserName(username); //if (user != null) // return user.IsInRole(roleName); //else // return false; } // Takes, as input, a list of user names and a list of role names and removes the specified users from the specified roles. // RemoveUsersFromRoles throws a ProviderException if any of the users or roles do not exist, or if any user specified // in the call does not belong to the role from which he or she is being removed. public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames) { throw new NotImplementedException(); } // Takes, as input, a role name and determines whether the role exists. public override bool RoleExists(string roleName) { throw new NotImplementedException(); } #endregion } // End Class } From Web.config: <membership defaultProvider="SAMembershipProvider" userIsOnlineTimeWindow="15"> <providers> <clear/> <add name="SAMembershipProvider" type="User.Membership.SAMembershipProvider, User" /> </providers> </membership> <roleManager defaultProvider="SARoleProvider" enabled="true" cacheRolesInCookie="true"> <providers> <clear/> <add name="SARoleProvider" type="User.Membership.SARoleProvider" /> </providers> </roleManager> When running project, I get following error: Server Error in '/' Application. Configuration Error Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately. Parser Error Message: The method or operation is not implemented. Source Error: Line 71: <providers> Line 72: <clear/> Line 73: <add name="SARoleProvider" type="User.Membership.SARoleProvider" /> Line 74: </providers> Line 75: </roleManager> I tried: <add name="SARoleProvider" type="User.Membership.SARoleProvider, User" /> and <add name="SARoleProvider" type="User.Membership.SARoleProvider, SARoleProvider" /> and <add name="SARoleProvider" type="User.Membership.SARoleProvider, User.Membership" /> but none works Any idea what's wrong here? Thanks, Ile

    Read the article

  • Asp.NET custom templated datalist throws argument out of range (index) on button press

    - by MrTortoise
    I have a class BaseTemplate, declared as

public abstract class BaseTemplate : ITemplate

This adds the controls and provides abstract methods for the inheriting class to implement. The inheriting class then adds its HTML according to its data source and manages the data binding. That all works fine - I get the control appearing with properly parsed HTML. The problem is that the base class adds controls into the template that have their own CommandName arguments; the idea is that the class that uses the custom templated DataList provides the logic for setting the selected and edit indexes. That class also manages the data binding, etc. It sets all of the templates on the DataList in the Init method (which was another cause of this exception). The exception gets thrown when I hit one of these buttons. I have tried hooking up both their Click and Command events everywhere in case that was the problem, and I have also made sure their command names do not match any of the system ones. The stack trace does not include any references to my methods or objects, which is why I am so stuck - it is the most unhelpful message I can imagine. The really frustrating thing is that I cannot get a breakpoint to fire; the problem happens after I click the button but before any of my code can execute. The last time this exception happened was when I had this code in a user control and was assigning the templates to the DataList in Page_Load. I moved those into Init to fix that problem; however, this problem was there then too, and I have no idea what is causing it, let alone how to solve it ("index out of range" doesn't really help without knowing which index).

The exception details:

Exception Details: System.ArgumentOutOfRangeException: Specified argument was out of the range of valid values. Parameter name: index

The stack trace:

[ArgumentOutOfRangeException: Specified argument was out of the range of valid values.
Parameter name: index] System.Web.UI.ControlCollection.get_Item(Int32 index) +8665582 System.Web.UI.WebControls.DataList.GetItem(ListItemType itemType, Int32 repeatIndex) +8667655 System.Web.UI.WebControls.DataList.System.Web.UI.WebControls.IRepeatInfoUser.GetItemStyle(ListItemType itemType, Int32 repeatIndex) +11 System.Web.UI.WebControls.RepeatInfo.RenderVerticalRepeater(HtmlTextWriter writer, IRepeatInfoUser user, Style controlStyle, WebControl baseControl) +8640873 System.Web.UI.WebControls.RepeatInfo.RenderRepeater(HtmlTextWriter writer, IRepeatInfoUser user, Style controlStyle, WebControl baseControl) +27 System.Web.UI.WebControls.DataList.RenderContents(HtmlTextWriter writer) +208 System.Web.UI.WebControls.BaseDataList.Render(HtmlTextWriter writer) +30 System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +27 System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +99 System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +25 System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +134 System.Web.UI.Control.RenderChildren(HtmlTextWriter writer) +19 System.Web.UI.HtmlControls.HtmlForm.RenderChildren(HtmlTextWriter writer) +163 System.Web.UI.HtmlControls.HtmlContainerControl.Render(HtmlTextWriter writer) +32 System.Web.UI.HtmlControls.HtmlForm.Render(HtmlTextWriter output) +51 System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +27 System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +99 System.Web.UI.HtmlControls.HtmlForm.RenderControl(HtmlTextWriter writer) +40 System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +134 System.Web.UI.Control.RenderChildren(HtmlTextWriter writer) +19 System.Web.UI.Page.Render(HtmlTextWriter writer) +29 System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +27 System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +99 System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +25 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1266 The code Base class: public abstract class BaseTemplate : ITemplate { ListItemType _templateType; public BaseTemplate(ListItemType theTemplateType) { _templateType = theTemplateType; } public ListItemType ListItemType { get { return _templateType; } } #region ITemplate Members public void InstantiateIn(Control container) { PlaceHolder ph = new PlaceHolder(); container.Controls.Add(ph); Literal l = new Literal(); switch (_templateType) { case ListItemType.Header: { ph.Controls.Add(new LiteralControl(@"<table><tr>")); InstantiateInHeader(ph); ph.Controls.Add(new LiteralControl(@"</tr>")); break; } case ListItemType.Footer: { ph.Controls.Add(new LiteralControl(@"<tr>")); InstantiateInFooter(ph); ph.Controls.Add(new LiteralControl(@"</tr></table>")); break; } case ListItemType.Item: { ph.Controls.Add(new LiteralControl(@"<tr>")); InstantiateInItem(ph); ph.Controls.Add(new LiteralControl(@"<td>")); Button select = new Button(); select.ID = "btnSelect"; select.CommandName = "SelectRow"; select.Text = "Select"; ph.Controls.Add(select); ph.Controls.Add(new LiteralControl(@"</td>")); ph.Controls.Add(new LiteralControl(@"</tr>")); ph.DataBinding += new EventHandler(ph_DataBinding); break; } case ListItemType.AlternatingItem: { ph.Controls.Add(new LiteralControl(@"<tr>")); 
InstantiateInAlternatingItem(ph); ph.Controls.Add(new LiteralControl(@"<td>")); Button select = new Button(); select.ID = "btnSelect"; select.CommandName = "SelectRow"; select.Text = "Select"; ph.Controls.Add(select); ph.Controls.Add(new LiteralControl(@"</td>")); ph.Controls.Add(new LiteralControl(@"</tr>")); ph.DataBinding+=new EventHandler(ph_DataBinding); break; } case ListItemType.SelectedItem: { ph.Controls.Add(new LiteralControl(@"<tr>")); InstantiateInItem(ph); ph.Controls.Add(new LiteralControl(@"<td>")); Button edit = new Button(); edit.ID = "btnEdit"; edit.CommandName = "EditRow"; edit.Text = "Edit"; ph.Controls.Add(edit); Button delete = new Button(); delete.ID = "btnDelete"; delete.CommandName = "DeleteRow"; delete.Text = "Delete"; ph.Controls.Add(delete); ph.Controls.Add(new LiteralControl(@"</td>")); ph.Controls.Add(new LiteralControl(@"</tr>")); ph.DataBinding += new EventHandler(ph_DataBinding); break; } case ListItemType.EditItem: { ph.Controls.Add(new LiteralControl(@"<tr>")); InstantiateInEdit(ph); ph.Controls.Add(new LiteralControl(@"<td>")); Button save = new Button(); save.ID = "btnSave"; save.CommandName = "SaveRow"; save.Text = "Save"; ph.Controls.Add(save); Button cancel = new Button(); cancel.ID = "btnCancel"; cancel.CommandName = "CancelRow"; cancel.Text = "Cancel"; ph.Controls.Add(cancel); ph.Controls.Add(new LiteralControl(@"</td>")); ph.Controls.Add(new LiteralControl(@"</tr>")); ph.DataBinding += new EventHandler(ph_DataBinding); break; } case ListItemType.Separator: { InstantiateInSeperator(ph); break; } } } void ph_DataBinding(object sender, EventArgs e) { DataBindingOverride(sender, e); } /// <summary> /// the controls placed into the PlaceHolder will get wrapped in &lt;table&gt;&lt;tr&gt; &lt;/tr&gt;. I.e. you need to provide the column names wrapped in &lt;td&gt;&lt;/td&gt; tags. /// </summary> /// <param name="header"></param> public abstract void InstantiateInHeader(PlaceHolder ph); /// <summary> /// the controls will have a column added after them and so require each column to be properly wrapped in &lt;td&gt;&lt;/td&gt; tags. The &lt;tr&gt;&lt;/tr&gt; is handled in the base class. /// </summary> /// <param name="ph"></param> public abstract void InstantiateInItem(PlaceHolder ph); /// <summary> /// the controls will have a column added after them and so require each column to be properly wrapped in &lt;td&gt;&lt;/td&gt; tags. The &lt;tr&gt;&lt;/tr&gt; is handled in the base class. /// </summary> /// <param name="ph"></param> public abstract void InstantiateInAlternatingItem(PlaceHolder ph); /// <summary> /// the controls will have a column added after them and so require each column to be properly wrapped in &lt;td&gt;&lt;/td&gt; tags. The &lt;tr&gt;&lt;/tr&gt; is handled in the base class. /// </summary> /// <param name="ph"></param> public abstract void InstantiateInEdit(PlaceHolder ph); /// <summary> /// Any html used in the footer will have &lt;/tr&gt;&lt;table&gt; appended to the end. /// &lt;tr&gt; will be appended to the front. /// </summary> /// <param name="ph"></param> public abstract void InstantiateInFooter(PlaceHolder ph); /// <summary> /// the controls will have a column added after them and so require each column to be properly wrapped in &lt;td&gt;&lt;/td&gt; tags. The &lt;tr&gt;&lt;/tr&gt; is handled in the base class. /// Adds Delete and Edit Buttons after the table contents. 
/// </summary> /// <param name="ph"></param> public abstract void InstantiateInSelectedItem(PlaceHolder ph); /// <summary> /// The base class provides no &lt;tr&gt;&lt;/tr&gt; tags /// </summary> /// <param name="ph"></param> public abstract void InstantiateInSeperator(PlaceHolder ph); /// <summary> /// Use this method to bind the controls to their data. /// </summary> /// <param name="sender"></param> /// <param name="e"></param> public abstract void DataBindingOverride(object sender, EventArgs e); #endregion } Inheriting class: public class NominalGroupTemplate : BaseTemplate { public NominalGroupTemplate(ListItemType theListItemType) : base(theListItemType) { } public override void InstantiateInHeader(PlaceHolder ph) { ph.Controls.Add(new LiteralControl(@"<td>ID</td><td>Group</td><td>IsPositive</td>")); } public override void InstantiateInItem(PlaceHolder ph) { ph.Controls.Add(new LiteralControl(@"<td>")); Label lblID = new Label(); lblID.ID = "lblID"; ph.Controls.Add(lblID); ph.Controls.Add(new LiteralControl(@"</td><td>")); Label lblGroup = new Label(); lblGroup.ID = "lblGroup"; ph.Controls.Add(lblGroup); ph.Controls.Add(new LiteralControl(@"</td><td>")); CheckBox chkIsPositive = new CheckBox(); chkIsPositive.ID = "chkIsPositive"; chkIsPositive.Enabled = false; ph.Controls.Add(chkIsPositive); ph.Controls.Add(new LiteralControl(@"</td>")); } public override void InstantiateInAlternatingItem(PlaceHolder ph) { InstantiateInItem(ph); } public override void InstantiateInEdit(PlaceHolder ph) { ph.Controls.Add(new LiteralControl(@"<td>")); Label lblID = new Label(); lblID.ID = "lblID"; ph.Controls.Add(lblID); ph.Controls.Add(new LiteralControl(@"</td><td>")); TextBox txtGroup = new TextBox(); txtGroup.ID = "txtGroup"; txtGroup.Visible = true; txtGroup.Enabled = true ; ph.Controls.Add(txtGroup); ph.Controls.Add(new LiteralControl(@"</td><td>")); CheckBox chkIsPositive = new CheckBox(); chkIsPositive.ID = "chkIsPositive"; chkIsPositive.Visible = true; chkIsPositive.Enabled = true ; ph.Controls.Add(chkIsPositive); ph.Controls.Add(new LiteralControl(@"</td>")); } public override void InstantiateInFooter(PlaceHolder ph) { InstantiateInHeader(ph); } public override void InstantiateInSelectedItem(PlaceHolder ph) { ph.Controls.Add(new LiteralControl(@"<td>")); Label lblID = new Label(); lblID.ID = "lblID"; ph.Controls.Add(lblID); ph.Controls.Add(new LiteralControl(@"</td><td>")); TextBox txtGroup = new TextBox(); txtGroup.ID = "txtGroup"; txtGroup.Visible = true; txtGroup.Enabled = false; ph.Controls.Add(txtGroup); ph.Controls.Add(new LiteralControl(@"</td><td>")); CheckBox chkIsPositive = new CheckBox(); chkIsPositive.ID = "chkIsPositive"; chkIsPositive.Visible = true; chkIsPositive.Enabled = false; ph.Controls.Add(chkIsPositive); ph.Controls.Add(new LiteralControl(@"</td>")); } public override void InstantiateInSeperator(PlaceHolder ph) { } public override void DataBindingOverride(object sender, EventArgs e) { PlaceHolder ph = (PlaceHolder)sender; DataListItem li = (DataListItem)ph.NamingContainer; int id = Convert.ToInt32(DataBinder.Eval(li.DataItem, "ID")); string group = (string)DataBinder.Eval(li.DataItem, "Group"); bool isPositive = Convert.ToBoolean(DataBinder.Eval(li.DataItem, "IsPositive")); switch (this.ListItemType) { case ListItemType.Item: case ListItemType.AlternatingItem: { ((Label)ph.FindControl("lblID")).Text = id.ToString(); ((Label)ph.FindControl("lblGroup")).Text = group; ((CheckBox)ph.FindControl("chkIsPositive")).Text = isPositive.ToString(); break; } case 
ListItemType.EditItem: case ListItemType.SelectedItem: { ((TextBox)ph.FindControl("lblID")).Text = id.ToString(); ((TextBox)ph.FindControl("txtGroup")).Text = group; ((CheckBox)ph.FindControl("chkIsPositive")).Text = isPositive.ToString(); break; } } } } From here I added the control to a page the code behind public partial class NominalGroupbroke : System.Web.UI.UserControl { public void SetNominalGroupList(IList<BONominalGroup> theNominalGroups) { XElement data = Serialiser<BONominalGroup>.SerialiseObjectList(theNominalGroups); ViewState.Add("nominalGroups", data.ToString()); dlNominalGroup.DataSource = theNominalGroups; dlNominalGroup.DataBind(); } protected void Page_init() { dlNominalGroup.HeaderTemplate = new NominalGroupTemplate(ListItemType.Header); dlNominalGroup.ItemTemplate = new NominalGroupTemplate(ListItemType.Item); dlNominalGroup.AlternatingItemTemplate = new NominalGroupTemplate(ListItemType.AlternatingItem); dlNominalGroup.SeparatorTemplate = new NominalGroupTemplate(ListItemType.Separator); dlNominalGroup.SelectedItemTemplate = new NominalGroupTemplate(ListItemType.SelectedItem); dlNominalGroup.EditItemTemplate = new NominalGroupTemplate(ListItemType.EditItem); dlNominalGroup.FooterTemplate = new NominalGroupTemplate(ListItemType.Footer); } protected void Page_Load(object sender, EventArgs e) { dlNominalGroup.ItemCommand += new DataListCommandEventHandler(dlNominalGroup_ItemCommand); } void dlNominalGroup_Init(object sender, EventArgs e) { dlNominalGroup.HeaderTemplate = new NominalGroupTemplate(ListItemType.Header); dlNominalGroup.ItemTemplate = new NominalGroupTemplate(ListItemType.Item); dlNominalGroup.AlternatingItemTemplate = new NominalGroupTemplate(ListItemType.AlternatingItem); dlNominalGroup.SeparatorTemplate = new NominalGroupTemplate(ListItemType.Separator); dlNominalGroup.SelectedItemTemplate = new NominalGroupTemplate(ListItemType.SelectedItem); dlNominalGroup.EditItemTemplate = new NominalGroupTemplate(ListItemType.EditItem); dlNominalGroup.FooterTemplate = new NominalGroupTemplate(ListItemType.Footer); } void dlNominalGroup_DataBinding(object sender, EventArgs e) { } void deleteNominalGroup(int index) { XElement data = XElement.Parse(Convert.ToString( ViewState["nominalGroups"] )); IList<BONominalGroup> list = Serialiser<BONominalGroup>.DeserialiseObjectList(data); FENominalGroup.DeleteNominalGroup(list[index].ID); list.RemoveAt(index); data = Serialiser<BONominalGroup>.SerialiseObjectList(list); ViewState["nominalGroups"] = data.ToString(); dlNominalGroup.DataSource = list; dlNominalGroup.DataBind(); } void updateNominalGroup(DataListItem theItem) { XElement data = XElement.Parse(Convert.ToString( ViewState["nominalGroups"])); IList<BONominalGroup> list = Serialiser<BONominalGroup>.DeserialiseObjectList(data); BONominalGroup old = list[theItem.ItemIndex]; BONominalGroup n = new BONominalGroup(); byte id = Convert.ToByte(((TextBox)theItem.FindControl("lblID")).Text); string group = ((TextBox)theItem.FindControl("txtGroup")).Text; bool isPositive = Convert.ToBoolean(((CheckBox)theItem.FindControl("chkIsPositive")).Text); n.ID = id; n.Group = group; n.IsPositive = isPositive; FENominalGroup.UpdateNominalGroup(old, n); list[theItem.ItemIndex] = n; data = Serialiser<BONominalGroup>.SerialiseObjectList(list); ViewState["nominalGroups"] = data.ToString(); } void dlNominalGroup_ItemCommand(object source, DataListCommandEventArgs e) { DataList l = (DataList)source; switch (e.CommandName) { case "SelectRow": { if (l.EditItemIndex == -1) { l.SelectedIndex = 
e.Item.ItemIndex; l.EditItemIndex = -1; } break; } case "EditRow": { if (l.SelectedIndex == e.Item.ItemIndex) { l.EditItemIndex = e.Item.ItemIndex; } break; } case "DeleteRow": { deleteNominalGroup(e.Item.ItemIndex); l.EditItemIndex = -1; try { l.SelectedIndex = e.Item.ItemIndex; } catch { l.SelectedIndex = -1; } break; } case "CancelRow": { l.SelectedIndex = l.EditItemIndex; l.EditItemIndex = -1; break; } case "SaveRow": { updateNominalGroup(e.Item); try { l.SelectedIndex = e.Item.ItemIndex; } catch { l.SelectedIndex = -1; } l.EditItemIndex = -1; break; } } } Lots of code there, I'm afraid, but it should build. Thanks if anyone manages to spot my silliness. The BONominalGroup class (please ignore my crazy getHash override, I'm not proud of it). IAudit can just be an empty interface here and all will be fine. It used to inherit from another class, I have cleaned that out - so the serialization logic may be broken here. public class BONominalGroup { public BONominalGroup() #region Fields and properties private Int16 _ID; public Int16 ID { get { return _ID; } set { _ID = value; } } private string _group; public string Group { get { return _group; } set { _group = value; } } private bool _isPositve; public bool IsPositive { get { return _isPositve; } set { _isPositve = value; } } #endregion public override bool Equals(object obj) { bool retVal = false; BONominalGroup ng = obj as BONominalGroup; if (ng!=null) if (ng._group == this._group && ng._ID == this.ID && ng.IsPositive == this.IsPositive) { retVal = true; } return retVal; } public override int GetHashCode() { return ToString().GetHashCode(); } public override string ToString() { return "BONominalGroup{ID:" + this.ID.ToString() + ",Group:" + this.Group.ToString() + ",IsPositive:" + this.IsPositive.ToString() + "," + "}"; } #region IXmlSerializable Members public override void ReadXml(XmlReader reader) { reader.ReadStartElement("BONominalGroup"); this.ID = Convert.ToByte(reader.ReadElementString("id")); this.Group = reader.ReadElementString("group"); this.IsPositive = Convert.ToBoolean(reader.ReadElementString("isPositive")); base.ReadXml(reader); reader.ReadEndElement(); } public override void WriteXml(XmlWriter writer) { writer.WriteElementString("id", this.ID.ToString()); writer.WriteElementString("group", this.Group); writer.WriteElementString("isPositive", this.IsPositive.ToString()); // writer.WriteStartElement("BOBase"); // base.WriteXml(writer); writer.WriteEndElement(); } #endregion }
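    For reference, here is a minimal, self-contained sketch (not the poster's code) of the dynamic-template pattern BaseTemplate builds on: an ITemplate adds controls inside InstantiateIn, hooks their DataBinding event, and pulls row values out via NamingContainer and DataBinder.Eval. The names SingleLabelTemplate, lblName and the "Name" field are illustrative only.

    // Hypothetical minimal ITemplate for a DataList row: one Label, bound when the row data-binds.
    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public class SingleLabelTemplate : ITemplate
    {
        // Called once per row; build the row's controls here.
        public void InstantiateIn(Control container)
        {
            Label lblName = new Label();
            lblName.ID = "lblName";              // unique within the row's naming container
            lblName.DataBinding += BindLabel;    // fires while the DataList binds this row
            container.Controls.Add(lblName);
        }

        // sender is the Label; its NamingContainer is the DataListItem whose DataItem is the row object.
        private void BindLabel(object sender, EventArgs e)
        {
            Label lbl = (Label)sender;
            DataListItem row = (DataListItem)lbl.NamingContainer;
            lbl.Text = Convert.ToString(DataBinder.Eval(row.DataItem, "Name"));
        }
    }

    In code-behind such a template would be assigned before DataBind is called, e.g. dl.ItemTemplate = new SingleLabelTemplate(); during init, much as Page_init above assigns the NominalGroupTemplate instances.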

  • C++ - Conway's Game of Life & Stepping Backwards

    - by Gabe
    I was able to create a version Conway's Game of Life that either stepped forward each click, or just ran forward using a timer. (I'm doing this using Qt.) Now, I need to be able to save all previous game grids, so that I can step backwards by clicking a button. I'm trying to use a stack, and it seems like I'm pushing the old gridcells onto the stack correctly. But when I run it in QT, the grids don't change when I click BACK. I've tried different things for the last three hours, to no avail. Any ideas? gridwindow.cpp - My problem should be in here somewhere. Probably the handleBack() func. #include <iostream> #include "gridwindow.h" using namespace std; // Constructor for window. It constructs the three portions of the GUI and lays them out vertically. GridWindow::GridWindow(QWidget *parent,int rows,int cols) : QWidget(parent) { QHBoxLayout *header = setupHeader(); // Setup the title at the top. QGridLayout *grid = setupGrid(rows,cols); // Setup the grid of colored cells in the middle. QHBoxLayout *buttonRow = setupButtonRow(); // Setup the row of buttons across the bottom. QVBoxLayout *layout = new QVBoxLayout(); // Puts everything together. layout->addLayout(header); layout->addLayout(grid); layout->addLayout(buttonRow); setLayout(layout); } // Destructor. GridWindow::~GridWindow() { delete title; } // Builds header section of the GUI. QHBoxLayout* GridWindow::setupHeader() { QHBoxLayout *header = new QHBoxLayout(); // Creates horizontal box. header->setAlignment(Qt::AlignHCenter); this->title = new QLabel("CONWAY'S GAME OF LIFE",this); // Creates big, bold, centered label (title): "Conway's Game of Life." this->title->setAlignment(Qt::AlignHCenter); this->title->setFont(QFont("Arial", 32, QFont::Bold)); header->addWidget(this->title); // Adds widget to layout. return header; // Returns header to grid window. } // Builds the grid of cells. This method populates the grid's 2D array of GridCells with MxN cells. QGridLayout* GridWindow::setupGrid(int rows,int cols) { isRunning = false; QGridLayout *grid = new QGridLayout(); // Creates grid layout. grid->setHorizontalSpacing(0); // No empty spaces. Cells should be contiguous. grid->setVerticalSpacing(0); grid->setSpacing(0); grid->setAlignment(Qt::AlignHCenter); for(int i=0; i < rows; i++) //Each row is a vector of grid cells. { std::vector<GridCell*> row; // Creates new vector for current row. cells.push_back(row); for(int j=0; j < cols; j++) { GridCell *cell = new GridCell(); // Creates and adds new cell to row. cells.at(i).push_back(cell); grid->addWidget(cell,i,j); // Adds to cell to grid layout. Column expands vertically. grid->setColumnStretch(j,1); } grid->setRowStretch(i,1); // Sets row expansion horizontally. } return grid; // Returns grid. } // Builds footer section of the GUI. QHBoxLayout* GridWindow::setupButtonRow() { QHBoxLayout *buttonRow = new QHBoxLayout(); // Creates horizontal box for buttons. buttonRow->setAlignment(Qt::AlignHCenter); // Clear Button - Clears cell; sets them all to DEAD/white. QPushButton *clearButton = new QPushButton("CLEAR"); clearButton->setFixedSize(100,25); connect(clearButton, SIGNAL(clicked()), this, SLOT(handlePause())); // Pauses timer before clearing. connect(clearButton, SIGNAL(clicked()), this, SLOT(handleClear())); // Connects to clear function to make all cells DEAD/white. buttonRow->addWidget(clearButton); // Forward Button - Steps one step forward. 
QPushButton *forwardButton = new QPushButton("FORWARD"); forwardButton->setFixedSize(100,25); connect(forwardButton, SIGNAL(clicked()), this, SLOT(handleForward())); // Signals to handleForward function.. buttonRow->addWidget(forwardButton); // Back Button - Steps one step backward. QPushButton *backButton = new QPushButton("BACK"); backButton->setFixedSize(100,25); connect(backButton, SIGNAL(clicked()), this, SLOT(handleBack())); // Signals to handleBack funciton. buttonRow->addWidget(backButton); // Start Button - Starts game when user clicks. Or, resumes game after being paused. QPushButton *startButton = new QPushButton("START/RESUME"); startButton->setFixedSize(100,25); connect(startButton, SIGNAL(clicked()), this, SLOT(handlePause())); // Deletes current timer if there is one. Then restarts everything. connect(startButton, SIGNAL(clicked()), this, SLOT(handleStart())); // Signals to handleStart function. buttonRow->addWidget(startButton); // Pause Button - Pauses simulation of game. QPushButton *pauseButton = new QPushButton("PAUSE"); pauseButton->setFixedSize(100,25); connect(pauseButton, SIGNAL(clicked()), this, SLOT(handlePause())); // Signals to pause function which pauses timer. buttonRow->addWidget(pauseButton); // Quit Button - Exits program. QPushButton *quitButton = new QPushButton("EXIT"); quitButton->setFixedSize(100,25); connect(quitButton, SIGNAL(clicked()), qApp, SLOT(quit())); // Signals the quit slot which ends the program. buttonRow->addWidget(quitButton); return buttonRow; // Returns bottom of layout. } /* SLOT method for handling clicks on the "clear" button. Receives "clicked" signals on the "Clear" button and sets all cells to DEAD. */ void GridWindow::handleClear() { for(unsigned int row=0; row < cells.size(); row++) // Loops through current rows' cells. { for(unsigned int col=0; col < cells[row].size(); col++) // Loops through the rows'columns' cells. { GridCell *cell = cells[row][col]; // Grab the current cell & set its value to dead. cell->setType(DEAD); } } } /* SLOT method for handling clicks on the "start" button. Receives "clicked" signals on the "start" button and begins game simulation. */ void GridWindow::handleStart() { isRunning = true; // It is running. Sets isRunning to true. this->timer = new QTimer(this); // Creates new timer. connect(this->timer, SIGNAL(timeout()), this, SLOT(timerFired())); // Connect "timerFired" method class to the "timeout" signal fired by the timer. this->timer->start(500); // Timer to fire every 500 milliseconds. } /* SLOT method for handling clicks on the "pause" button. Receives "clicked" signals on the "pause" button and stops the game simulation. */ void GridWindow::handlePause() { if(isRunning) // If it is running... this->timer->stop(); // Stops the timer. isRunning = false; // Set to false. } void GridWindow::handleForward() { if(isRunning); // If it's running, do nothing. else timerFired(); // It not running, step forward one step. } void GridWindow::handleBack() { std::vector<std::vector<GridCell*> > cells2; if(isRunning); // If it's running, do nothing. else if(backStack.empty()) cout << "EMPTYYY" << endl; else { cells2 = backStack.peek(); for (unsigned int f = 0; f < cells.size(); f++) // Loop through cells' rows. { for (unsigned int g = 0; g < cells.at(f).size(); g++) // Loop through cells columns. { cells[f][g]->setType(cells2[f][g]->getType()); // Set cells[f][g]'s type to cells2[f][g]'s type. 
} } cout << "PRE=POP" << endl; backStack.pop(); cout << "OYYYY" << endl; } } // Accessor method - Gets the 2D vector of grid cells. std::vector<std::vector<GridCell*> >& GridWindow::getCells() { return this->cells; } /* TimerFired function: 1) 2D-Vector cells2 is declared. 2) cells2 is initliazed with loops/push_backs so that all its cells are DEAD. 3) We loop through cells, and count the number of LIVE neighbors next to a given cell. --> Depending on how many cells are living, we choose if the cell should be LIVE or DEAD in the next simulation, according to the rules. -----> We save the cell type in cell2 at the same indice (the same row and column cell in cells2). 4) After check all the cells (and save the next round values in cells 2), we set cells's gridcells equal to cells2 gridcells. --> This causes the cells to be redrawn with cells2 types (white or black). */ void GridWindow::timerFired() { backStack.push(cells); std::vector<std::vector<GridCell*> > cells2; // Holds new values for 2D vector. These are the next simulation round of cell types. for(unsigned int i = 0; i < cells.size(); i++) // Loop through the rows of cells2. (Same size as cells' rows.) { vector<GridCell*> row; // Creates Gridcell* vector to push_back into cells2. cells2.push_back(row); // Pushes back row vectors into cells2. for(unsigned int j = 0; j < cells[i].size(); j++) // Loop through the columns (the cells in each row). { GridCell *cell = new GridCell(); // Creates new GridCell. cell->setType(DEAD); // Sets cell type to DEAD/white. cells2.at(i).push_back(cell); // Pushes back the DEAD cell into cells2. } // This makes a gridwindow the same size as cells with all DEAD cells. } for (unsigned int m = 0; m < cells.size(); m++) // Loop through cells' rows. { for (unsigned int n = 0; n < cells.at(m).size(); n++) // Loop through cells' columns. { unsigned int neighbors = 0; // Counter for number of LIVE neighbors for a given cell. // We know check all different variations of cells[i][j] to count the number of living neighbors for each cell. // We check m > 0 and/or n > 0 to make sure we don't access negative indexes (ex: cells[-1][0].) // We check m < size to make sure we don't try to access rows out of the vector (ex: row 5, if only 4 rows). // We check n < row size to make sure we don't access column item out of the vector (ex: 10th item in a column of only 9 items). // If we find that the Type = 1 (it is LIVE), then we add 1 to the neighbor. // Else - we add nothing to the neighbor counter. // Neighbor is the number of LIVE cells next to the current cell. if(m > 0 && n > 0) { if (cells[m-1][n-1]->getType() == 1) neighbors += 1; } if(m > 0) { if (cells[m-1][n]->getType() == 1) neighbors += 1; if(n < (cells.at(m).size() - 1)) { if (cells[m-1][n+1]->getType() == 1) neighbors += 1; } } if(n > 0) { if (cells[m][n-1]->getType() == 1) neighbors += 1; if(m < (cells.size() - 1)) { if (cells[m+1][n-1]->getType() == 1) neighbors += 1; } } if(n < (cells.at(m).size() - 1)) { if (cells[m][n+1]->getType() == 1) neighbors += 1; } if(m < (cells.size() - 1)) { if (cells[m+1][n]->getType() == 1) neighbors += 1; } if(m < (cells.size() - 1) && n < (cells.at(m).size() - 1)) { if (cells[m+1][n+1]->getType() == 1) neighbors += 1; } // Done checking number of neighbors for cells[m][n] // Now we change cells2 if it should switch in the next simulation step. // cells2 holds the values of what cells should be on the next iteration of the game. // We can't change cells right now, or it would through off our other cell values. 
// Apply game rules to cells: Create new, updated grid with the roundtwo vector. // Note - LIVE is 1; DEAD is 0. if (cells[m][n]->getType() == 1 && neighbors < 2) // If cell is LIVE and has less than 2 LIVE neighbors -> Set to DEAD. cells2[m][n]->setType(DEAD); else if (cells[m][n]->getType() == 1 && neighbors > 3) // If cell is LIVE and has more than 3 LIVE neighbors -> Set to DEAD. cells2[m][n]->setType(DEAD); else if (cells[m][n]->getType() == 1 && (neighbors == 2 || neighbors == 3)) // If cell is LIVE and has 2 or 3 LIVE neighbors -> Set to LIVE. cells2[m][n]->setType(LIVE); else if (cells[m][n]->getType() == 0 && neighbors == 3) // If cell is DEAD and has 3 LIVE neighbors -> Set to LIVE. cells2[m][n]->setType(LIVE); } } // Now we've gone through all of cells, and saved the new values in cells2. // Now we loop through cells and set all the cells' types to those of cells2. for (unsigned int f = 0; f < cells.size(); f++) // Loop through cells' rows. { for (unsigned int g = 0; g < cells.at(f).size(); g++) // Loop through cells columns. { cells[f][g]->setType(cells2[f][g]->getType()); // Set cells[f][g]'s type to cells2[f][g]'s type. } } } stack.h - Here's my stack. #ifndef STACK_H_ #define STACK_H_ #include <iostream> #include "node.h" template <typename T> class Stack { private: Node<T>* top; int listSize; public: Stack(); int size() const; bool empty() const; void push(const T& value); void pop(); T& peek() const; }; template <typename T> Stack<T>::Stack() : top(NULL) { listSize = 0; } template <typename T> int Stack<T>::size() const { return listSize; } template <typename T> bool Stack<T>::empty() const { if(listSize == 0) return true; else return false; } template <typename T> void Stack<T>::push(const T& value) { Node<T>* newOne = new Node<T>(value); newOne->next = top; top = newOne; listSize++; } template <typename T> void Stack<T>::pop() { Node<T>* oldT = top; top = top->next; delete oldT; listSize--; } template <typename T> T& Stack<T>::peek() const { return top->data; // Returns data in top item. } #endif gridcell.cpp - Gridcell implementation #include <iostream> #include "gridcell.h" using namespace std; // Constructor: Creates a grid cell. GridCell::GridCell(QWidget *parent) : QFrame(parent) { this->type = DEAD; // Default: Cell is DEAD (white). setFrameStyle(QFrame::Box); // Set the frame style. This is what gives each box its black border. this->button = new QPushButton(this); //Creates button that fills entirety of each grid cell. this->button->setSizePolicy(QSizePolicy::Expanding,QSizePolicy::Expanding); // Expands button to fill space. this->button->setMinimumSize(19,19); //width,height // Min height and width of button. QHBoxLayout *layout = new QHBoxLayout(); //Creates a simple layout to hold our button and add the button to it. layout->addWidget(this->button); setLayout(layout); layout->setStretchFactor(this->button,1); // Lets the buttons expand all the way to the edges of the current frame with no space leftover layout->setContentsMargins(0,0,0,0); layout->setSpacing(0); connect(this->button,SIGNAL(clicked()),this,SLOT(handleClick())); // Connects clicked signal with handleClick slot. redrawCell(); // Calls function to redraw (set new type for) the cell. } // Basic destructor. GridCell::~GridCell() { delete this->button; } // Accessor for the cell type. CellType GridCell::getType() const { return(this->type); } // Mutator for the cell type. Also has the side effect of causing the cell to be redrawn on the GUI. 
void GridCell::setType(CellType type) { this->type = type; redrawCell(); // Sets type and redraws cell. } // Handler slot for button clicks. This method is called whenever the user clicks on this cell in the grid. void GridCell::handleClick() { // When clicked on... if(this->type == DEAD) // If type is DEAD (white), change to LIVE (black). type = LIVE; else type = DEAD; // If type is LIVE (black), change to DEAD (white). setType(type); // Sets new type (color). setType Calls redrawCell() to recolor. } // Method to check cell type and return the color of that type. Qt::GlobalColor GridCell::getColorForCellType() { switch(this->type) { default: case DEAD: return Qt::white; case LIVE: return Qt::black; } } // Helper method. Forces current cell to be redrawn on the GUI. Called whenever the setType method is invoked. void GridCell::redrawCell() { Qt::GlobalColor gc = getColorForCellType(); //Find out what color this cell should be. this->button->setPalette(QPalette(gc,gc)); //Force the button in the cell to be the proper color. this->button->setAutoFillBackground(true); this->button->setFlat(true); //Force QT to NOT draw the borders on the button } Thanks a lot. Let me know if you need anything else.
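    For comparison, here is a minimal, self-contained sketch of the generation step and history idea spelled out in the timerFired() comments, written against a plain value grid (an assumed std::vector<std::vector<int>> with 1 = LIVE and 0 = DEAD) instead of the GridCell* grid above; the Grid alias and the liveNeighbours/step names are illustrative only. Because the stack here stores value copies, every push is an independent snapshot, so popping genuinely restores an earlier generation regardless of later setType calls on the live widgets.

    // Standalone sketch: Game of Life step plus a history stack of value snapshots.
    #include <iostream>
    #include <stack>
    #include <vector>

    using Grid = std::vector<std::vector<int>>;   // 1 = LIVE, 0 = DEAD

    // Count the live neighbours of cell (r, c), staying inside the grid bounds.
    static int liveNeighbours(const Grid& g, int r, int c)
    {
        int rows = static_cast<int>(g.size());
        int cols = static_cast<int>(g[0].size());
        int count = 0;
        for (int dr = -1; dr <= 1; ++dr)
            for (int dc = -1; dc <= 1; ++dc)
            {
                if (dr == 0 && dc == 0) continue;
                int nr = r + dr, nc = c + dc;
                if (nr >= 0 && nr < rows && nc >= 0 && nc < cols && g[nr][nc] == 1)
                    ++count;
            }
        return count;
    }

    // One generation, using the same rules as the question: live cells survive with
    // 2 or 3 live neighbours, dead cells become live with exactly 3.
    static Grid step(const Grid& g)
    {
        Grid next(g.size(), std::vector<int>(g[0].size(), 0));
        for (int r = 0; r < static_cast<int>(g.size()); ++r)
            for (int c = 0; c < static_cast<int>(g[r].size()); ++c)
            {
                int n = liveNeighbours(g, r, c);
                next[r][c] = (g[r][c] == 1) ? ((n == 2 || n == 3) ? 1 : 0)
                                            : ((n == 3) ? 1 : 0);
            }
        return next;
    }

    int main()
    {
        Grid grid = { {0,1,0}, {0,1,0}, {0,1,0} };   // a blinker
        std::stack<Grid> history;                    // snapshots stored by value, not pointers

        history.push(grid);                          // FORWARD: save the current generation...
        grid = step(grid);                           // ...then advance

        if (!history.empty())                        // BACK: restore the saved snapshot
        {
            grid = history.top();
            history.pop();
        }
        std::cout << "restored a " << grid.size() << "x" << grid[0].size() << " grid\n";
        return 0;
    }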
