Search Results

Search found 11978 results on 480 pages for 'shallow copy'.


  • Simple GET operation with JSON data in ADF Mobile

    - by PadmajaBhat
    Use case: This sample uses a RESTful service which contains a GET method that fetches employee details for an employee with a given employee ID, along with other methods. The data is fetched in JSON format. This RESTful service is then invoked via ADF Mobile, and the JSON data thus obtained is parsed and rendered on the mobile device in a table.

    Prerequisite: Download JDev build JDEVADF_11.1.2.4.0_GENERIC_130421.1600.6436.1 or higher with mobile support.

    Steps: Run EmployeeService.java in JSONService.zip. This is a simple service with a method, getEmpById(id), that takes an employee ID as a parameter and produces the employee details in JSON format. Copy the target URL generated on running this service. The target URL will be as shown below:

    http://127.0.0.1:7101/JSONService-Project1-context-root/jersey/project1

    Now, let us invoke this service in our mobile application. For this, create an ADF Mobile application. Name the application JSON_SearchByEmpID and finish the wizard.

    Next, let us create a connection to our service. To do this, we create a URL Connection: invoke the New Gallery wizard on the ApplicationController project and select the URL Connection option. In the Create URL Connection window, enter the connection name ‘conn’. For the URL endpoint, supply the URL you copied earlier on running the service. Remember to use your system IP instead of localhost. Test the connection and click OK.

    At this point, a connection to the REST service has been created. Since JSON data is not supported directly in the WSDC wizard, we need to invoke the operation through Java code using RestServiceAdapter. For this, in the ApplicationController project, create a Java class called ‘EmployeeDC’. We will be creating a DC from this class. Add the following code to the newly created class to invoke the getEmpById method:

    1  public Employee fetchEmpDetails(){
    2    RestServiceAdapter restServiceAdapter = Model.createRestServiceAdapter();
    3    restServiceAdapter.clearRequestProperties();
    4    restServiceAdapter.setConnectionName("conn"); //URL connection created with this name
    5    restServiceAdapter.setRequestType(RestServiceAdapter.REQUEST_TYPE_GET);
    6    restServiceAdapter.addRequestProperty("Content-Type", "application/json");
    7    restServiceAdapter.addRequestProperty("Accept", "application/json; charset=UTF-8");
    8    restServiceAdapter.setRetryLimit(0);
    9    restServiceAdapter.setRequestURI("/getById/"+inputEmpID);
    10   String response = "";
    11   JSONBeanSerializationHelper jsonHelper = new JSONBeanSerializationHelper();
    12   try {
    13     response = restServiceAdapter.send(""); //Invoke the GET operation
    14     System.out.println("Response received!");
    15     Employee responseObject = (Employee) jsonHelper.fromJSON(Employee.class, response);
    16     return responseObject;
    17   } catch (Exception e) {
    18   }
    19   return null;
    20 }

    Here, in lines 2 to 9, we create the RestServiceAdapter and set the various properties required to invoke the web service. At line 4, we point to the connection ‘conn’ created previously. Since we want to invoke the getEmpById method of the service, which is defined by the URL http://IP:7101/REST_Sanity_JSON-Project1-context-root/resources/project1/getById/{id}, we update the request URI to point to this URI at line 9. inputEmpID is a variable that will hold the value input by the user for the employee ID; we will create it in a moment. As the method we are invoking is a GET operation and consumes JSON data, these properties are set in lines 5 through 7. Finally, we send the request in line 13.
    In line 15, we use jsonHelper.fromJSON to convert the received JSON data to a Java object. The required Java object's structure is defined in the class Employee.java, whose structure is provided later. Since the response from our service is a simple response consisting of attributes like employee ID, name, designation etc., we will just return this parsed response (line 16) and use it to create the DC. As mentioned previously, we would like the user to input the employee ID for which he/she wants to perform the search. So, in the same class, define a variable inputEmpID which will hold the value input by the user, and generate accessors for this variable.

    Lastly, we need to create the Employee class. The Employee class defines how we want to structure the JSON object received from the service. To design the Employee class, run the service's method in the browser or via the analyzer, using path parameter 1. This will give you the output JSON structure (a sample of the response shape is shown at the end of this walkthrough). Ours is a simple service that returns a JSONObject with a set of data. Hence, the Employee class will just contain this set of data defined with the proper data types. Create Employee.java in the same project as EmployeeDC.java and write the below code:

    package application;

    import oracle.adfmf.java.beans.PropertyChangeListener;
    import oracle.adfmf.java.beans.PropertyChangeSupport;

    public class Employee {
        private String dept;
        private String desig;
        private int id;
        private String name;
        private int salary;
        private PropertyChangeSupport propertyChangeSupport = new PropertyChangeSupport(this);

        public void setDept(String dept) {
            String oldDept = this.dept;
            this.dept = dept;
            propertyChangeSupport.firePropertyChange("dept", oldDept, dept);
        }

        public String getDept() {
            return dept;
        }

        public void setDesig(String desig) {
            String oldDesig = this.desig;
            this.desig = desig;
            propertyChangeSupport.firePropertyChange("desig", oldDesig, desig);
        }

        public String getDesig() {
            return desig;
        }

        public void setId(int id) {
            int oldId = this.id;
            this.id = id;
            propertyChangeSupport.firePropertyChange("id", oldId, id);
        }

        public int getId() {
            return id;
        }

        public void setName(String name) {
            String oldName = this.name;
            this.name = name;
            propertyChangeSupport.firePropertyChange("name", oldName, name);
        }

        public String getName() {
            return name;
        }

        public void setSalary(int salary) {
            int oldSalary = this.salary;
            this.salary = salary;
            propertyChangeSupport.firePropertyChange("salary", oldSalary, salary);
        }

        public int getSalary() {
            return salary;
        }

        public void addPropertyChangeListener(PropertyChangeListener l) {
            propertyChangeSupport.addPropertyChangeListener(l);
        }

        public void removePropertyChangeListener(PropertyChangeListener l) {
            propertyChangeSupport.removePropertyChangeListener(l);
        }
    }

    Now, let us create a DC out of EmployeeDC.java. A DC as shown below is created. Now you can design the mobile page as usual and invoke the operation of the service. To design the page, go to the ViewController project and locate adfmf-feature.xml. Create a new feature called ‘SearchFeature’ by clicking the plus icon. Go to the Content tab and add an amx page; call it SearchPage.amx. Remove the primary and secondary buttons, as we don't need them, and rename the header. Drag and drop inputEmpID from the DC palette onto the Panel Page in the structure pane as an input text with label. Next, drop the fetchEmpDetails method as an ADF button. For a change, let us display the output in a table component instead of the usual form.
    However, you will notice that if you drag and drop Employee onto the structure pane, there is no option for an ADF Mobile Table. Hence, we will need to create the table on our own. To do this, let us first drop Employee as an ADF Read-Only Form. This step is needed to get the required bindings; we will delete this form in a while. Now, from the Component palette, search for ‘Table Layout’. Drag and drop this below the command button. Within the tableLayout, insert ‘Row Layout’ and ‘Cell Format’ components. The final table structure should be as shown below. Here, we have also defined some inline styling to render the UI nicely.

    <amx:tableLayout id="tl1" borderWidth="2" halign="center" inlineStyle="vertical-align:middle;" width="100%" cellPadding="10">
      <amx:rowLayout id="rl1">
        <amx:cellFormat id="cf1" width="30%">
          <amx:outputText value="#{bindings.dept.hints.label}" id="ot7" inlineStyle="color:rgb(0,148,231);"/>
        </amx:cellFormat>
        <amx:cellFormat id="cf2">
          <amx:outputText value="#{bindings.dept.inputValue}" id="ot8"/>
        </amx:cellFormat>
      </amx:rowLayout>
      <amx:rowLayout id="rl2">
        <amx:cellFormat id="cf3" width="30%">
          <amx:outputText value="#{bindings.desig.hints.label}" id="ot9" inlineStyle="color:rgb(0,148,231);"/>
        </amx:cellFormat>
        <amx:cellFormat id="cf4">
          <amx:outputText value="#{bindings.desig.inputValue}" id="ot10"/>
        </amx:cellFormat>
      </amx:rowLayout>
      <amx:rowLayout id="rl3">
        <amx:cellFormat id="cf5" width="30%">
          <amx:outputText value="#{bindings.id.hints.label}" id="ot11" inlineStyle="color:rgb(0,148,231);"/>
        </amx:cellFormat>
        <amx:cellFormat id="cf6">
          <amx:outputText value="#{bindings.id.inputValue}" id="ot12"/>
        </amx:cellFormat>
      </amx:rowLayout>
      <amx:rowLayout id="rl4">
        <amx:cellFormat id="cf7" width="30%">
          <amx:outputText value="#{bindings.name.hints.label}" id="ot13" inlineStyle="color:rgb(0,148,231);"/>
        </amx:cellFormat>
        <amx:cellFormat id="cf8">
          <amx:outputText value="#{bindings.name.inputValue}" id="ot14"/>
        </amx:cellFormat>
      </amx:rowLayout>
      <amx:rowLayout id="rl5">
        <amx:cellFormat id="cf9" width="30%">
          <amx:outputText value="#{bindings.salary.hints.label}" id="ot15" inlineStyle="color:rgb(0,148,231);"/>
        </amx:cellFormat>
        <amx:cellFormat id="cf10">
          <amx:outputText value="#{bindings.salary.inputValue}" id="ot16"/>
        </amx:cellFormat>
      </amx:rowLayout>
    </amx:tableLayout>

    The values used in the output texts of the table come from the bindings obtained from the ADF form created earlier. As we have used the bindings and don't need the form anymore, let us delete the form. One last thing before we deploy: when the user changes the employee ID, we want to clear the table contents. For this, we associate a value change listener with the input text box. Click New in the resulting dialog to create a managed bean. Next, we create a method within the managed bean; for this, click the New button associated with the method and call it ‘empIDChange’. Open myClass.java and write the below code in empIDChange():

    public void empIDChange(ValueChangeEvent valueChangeEvent) {
        // Add event code here...
        //Resetting the values to blank when the employee ID changes
        AdfELContext adfELContext = AdfmfJavaUtilities.getAdfELContext();
        ValueExpression ve = AdfmfJavaUtilities.getValueExpression("#{bindings.dept.inputValue}", String.class);
        ve.setValue(adfELContext, "");
        ve = AdfmfJavaUtilities.getValueExpression("#{bindings.desig.inputValue}", String.class);
        ve.setValue(adfELContext, "");
        ve = AdfmfJavaUtilities.getValueExpression("#{bindings.id.inputValue}", int.class);
        ve.setValue(adfELContext, "");
        ve = AdfmfJavaUtilities.getValueExpression("#{bindings.name.inputValue}", String.class);
        ve.setValue(adfELContext, "");
        ve = AdfmfJavaUtilities.getValueExpression("#{bindings.salary.inputValue}", int.class);
        ve.setValue(adfELContext, "");
    }

    That's it. Deploy the application to an Android emulator or device. Some snippets from the app.
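    For reference, the JSON returned by getEmpById for a single employee has roughly the shape below. The field names follow the Employee class above; the values are illustrative only and not taken from the original article.

    {
        "dept": "Sales",
        "desig": "Manager",
        "id": 1,
        "name": "John",
        "salary": 50000
    }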


  • CodePlex Daily Summary for Monday, June 17, 2013

    CodePlex Daily Summary for Monday, June 17, 2013

    Popular Releases

    Kooboo CMS: Kooboo CMS 4.1.1: The stable release of Kooboo CMS 4.1.0, with fixes for the following issues: https://github.com/Kooboo/CMS/issues/1 https://github.com/Kooboo/CMS/issues/11 https://github.com/Kooboo/CMS/issues/13 https://github.com/Kooboo/CMS/issues/15 https://github.com/Kooboo/CMS/issues/19 https://github.com/Kooboo/CMS/issues/20 https://github.com/Kooboo/CMS/issues/24 https://github.com/Kooboo/CMS/issues/43 https://github.com/Kooboo/CMS/issues/45 https://github.com/Kooboo/CMS/issues/46 https://github....

    VidCoder: 1.5.0 Beta: The betas have started up again! If you were previously on the beta track you will need to install this to get back on it, because you can now run the Beta and Stable versions of VidCoder side by side. Note that the OpenCL and Intel QuickSync changes being tested by HandBrake are not in the betas yet; they will appear when HandBrake integrates them into the main branch. Updated HandBrake core to SVN 5590, which adds a new FDK AAC encoder. The FAAC encoder has been removed and now...

    Wsus Package Publisher: Release v1.2.1306.16: Date/time are displayed as local time (Last Contact, Last Report and Deadline). WPP now remembers the last used path for update publishing (see the 'Settings' form for options). Adds an option to allow users to publish an update even if the Framework has judged the certificate invalid. (Attention: using this option will NOT allow you to publish or revise an update if your certificate is really invalid.) When publishing a new update, filter update files to ensure that there is not files wi...

    Employee Info Starter Kit: v6.0 - ASP.NET MVC Edition: Release Home - Getting Started - Hands on Coding Walkthrough - Technology Stack - Design & Architecture. EISK v6.0 (ASP.NET MVC edition) bundles most of the greatest and most successful platforms, frameworks and technologies together, to enable web developers to learn and build manageable, high-performance web applications with a rich user experience effectively and quickly. User end specifications: creating a new employee record; read existing employee records; update an existing employee reco...

    OLAP PivotTable Extensions: Release 0.8.1: Use the 32-bit download for Excel 2007, Excel 2010 32-bit (even on a 64-bit operating system) and Excel 2013 32-bit (even on a 64-bit operating system). Use the 64-bit download for Excel 2010 64-bit and Excel 2013 64-bit. Just download and run the EXE; there is no need to uninstall the previous release. If you have problems getting the add-in to work, see the Troubleshooting Installation wiki page. The new features in this release are: View #VALUE! Err...

    DirectXTex texture processing library: June 2013: June 15, 2013. Custom filtering implementation for Resize & GenerateMipMaps(3D): Point, Box, Linear, Cubic, and Triangle. TEX_FILTER_TRIANGLE: finite low-pass triangle filter. TEX_FILTER_WRAP, TEX_FILTER_MIRROR: texture semantics for custom filtering. TEX_FILTER_BOX: alias for TEX_FILTER_FANT. WIC ordered and error-diffusion dithering for non-WIC conversion. sRGB gamma-correct custom filtering and conversion. DDS_FLAGS_EXPAND_LUMINANCE: reader conversion option for L8, L16, and A8L8 legacy ...

    WPF Application Framework (WAF): WPF Application Framework (WAF) 3.0.0.440: Version 3.0.0.440 (Release Candidate). This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Please build the whole solution before you start one of the sample applications. Requirements: .NET Framework 4.5 (the package contains a solution file for Visual Studio 2012). Changelog legend: [B] breaking change; [O] marked member as obsolete. Samples: use ValueConverters via StaticResource instead of x:Static.

    WatchersNET.TagCloud: WatchersNET.TagCloud 02.02.03: Changes: common words are limited to the current portal.

    SFDL.NET: SFDL.NET v1.1.0.5: Changelog: implemented SFDL Container v4 (AES encryption, set character set); added stopwatch (download time); many bug fixes and improvements.

    BlackJumboDog: Ver5.9.1: 2013.06.13 Ver5.9.1.

    Lakana - WPF Framework: Lakana V2.1 RTM: dynamic text localization; a new application-wide message bus.

    Free language translator and file converter: Free Language Translator 3.3: some bug fixes and a new link to video tutorials on YouTube.

    Pokemon Battle Online: ETV.

    Modern UI for WPF: Modern UI 1.0.4: The ModernUI assembly, including a demo app demonstrating the various features of Modern UI for WPF. Related downloads: ModernUI for WPF is also available as a NuGet package in the NuGet gallery (id: ModernUI.WPF). Modern UI for WPF Templates: a Visual Studio 2012 extension containing a collection of project and item templates for Modern UI for WPF; the extension includes the ModernUI.WPF NuGet package.

    Toolbox for Dynamics CRM 2011: XrmToolBox (v1.2013.6.11): XrmToolbox improvements: add exception handling when loading plugins; updated information panel for displaying two lines of text. Tool improvements: Metadata Document Generator (v1.2013.6.10). New tool: Web Resources Manager (v1.2013.6.11): retrieve list of unused web resources; retrieve web resources from a solution. All tools list: Access Checker (v1.2013.2.5), Attribute Bulk Updater (v1.2013.1.17), FetchXml Tester (v1.2013.3.4), Iconator (v1.2013.1.17), Metadata Document Generator (v1.2013.6.10), Privilege...

    Document.Editor: 2013.23: What's new for Document.Editor 2013.23: new Insert Emoticon support; improved Format support; minor bug fixes, improvements and speed-ups.

    Christoc's DotNetNuke Module Development Template: DotNetNuke 7 Project Templates V2.4 for VS2012: V2.4, release date 6/10/2013. Items addressed in this 2.4 release: updated MSBuild Community Tasks reference to 1.4.0.61. Setting up your DotNetNuke module development environment; installing Christoc's DotNetNuke Module Development Templates; customizing the latest DotNetNuke Module Development Project Templates.

    Layered Architecture Sample for .NET: Leave Sample - June 2013 (for .NET 4.5): Thank you for downloading Layered Architecture Sample. Please read the accompanying README.txt file for setup and installation instructions. This is the first of a series of revised samples that will be released to illustrate the layered architecture design pattern. This version is only supported on Visual Studio 2012. This set contains 2 samples that illustrate the use of: ASP.NET Web Forms, ASP.NET Model Binding, Windows Communication Foundation (WCF), Windows Workflow Foundation (W...

    Papercut: Papercut 2013-6-10: Feature: shows From, To, Date and Subject of email. Feature: async UI and loading spinner. Enhancement: improved speed when loading large attachments. Enhancement: decoupled SMTP server into a secondary assembly. Enhancement: upgraded to .NET v4. Fix: messages lost when received very fast. Fix: email encoding issues on display/automatically detect message encoding. Installation note: installation is copy and paste. Incoming messages are written to the start-up directory of Papercut. If you do n...

    Supporting Guidance and Whitepapers: v1.BETA Unit Test Generator Documentation: Welcome to the Unit Test Generator. Once you've moved to Visual Studio 2012, what's a dev to do without the Create Unit Tests feature? Based on the high demand on User Voice for this feature to be restored, the Visual Studio ALM Rangers have introduced the Unit Test Generator Visual Studio Extension. The extension adds the "create unit test" feature back, with a focus on automating project creation, adding references and generating stubs, extensibility, and targeting of multiple test framewor...

    New Projects

    Accord.NET Framework: Scientific computing, machine learning and computer vision framework.
    AllegroSharp: A library providing access to the Allegro.pl auction service, based on the publicly available Allegro Web API.
    Blindmap: A simple, handy, distraction-free mind-mapping tool for desktop/laptop Windows XP/7/8 PCs.
    Cryptosolic - The Cryptography & Software Licensing Framework for .Net: Cryptosolic is an open source cryptography and software licensing framework for .NET. More information and downloads available soon.
    DSM Web API: The "DSM Web API" project provides portable libraries to call the APIs exposed by a Synology NAS running the DSM system.
    everynetsvn: this is a summary description.
    Extension Library: Provides a library of extension methods for commonly used .NET objects.
    Finance: this is a new project.
    Flight: A simple flight simulator with basic physics, written in Java using the Processing library.
    gerencia2: This project is a test of CodePlex use with a small group of people, actually just two.
    GMFrameworkForBlog: MVC, EF, Framework, GMF.
    Had: Had.
    jean0617changbranch: jean0617changbranch.
    LCC Handler V1.0: Complete software pack to handle Local Cheque Collection activities in branches in India and abroad. This small software can handle the following activities.
    Mdelete API: Delete all files and directories in the Windows shell. Supports long paths (less than 32000 chars) and network paths (e.g. \\server\share or \\127.0.0.1\share).
    Model View ViewModel with Controller: Build applications based on the Model-View-ViewModel philosophy!
    MySQL Connect 2 ASP.NET: Example project showing how to connect a MySQL database to an ASP.NET web project. IDE: Visual Studio 2010 Pro. Programming language: C#. Detailed information in the article here: http://epavlov.net/blog/2011/11/13/connect-to-mysql-in-visual-studio/
    NEaTly Documented Code: NEaTly Documented Code helps document source code by taking comments directly from sources and formatting them in an easy-to-read way. This is perfect for a programming blog, for example. This project is inspired by (but not based on) Beautifully Documented Code.
    newsalert: this is a news alert project description.
    On-Screen Keyboard: An on-screen keyboard that accepts input from both keyboard and mouse.
    PHP DocBlock Generator: Creates missing docblocks.
    Puzzle: Puzzle game. Add your image and have fun! ;)
    QuickUIT: QuickUIT, short for 'Quick User Interface Testing', is an API for testing applications based on the Microsoft UI Automation framework. QuickUIT provides a simple, object-oriented interface for finding and interacting with automation elements. It is developed in C#.
    RapidXNA: A simple framework that aims to make starting new XNA projects for Windows, Xbox 360 and Windows Phone 7 easier. Using RapidXNA you should be able to easily port between the 3 platforms with only minor code changes to your projects.
    Send HTML Wizard: Wizard to send a local HTML page, including referenced files, as email.
    Setup for Converting OLM to PST with Outlook OLM File Exporter: You can convert files from OLM to PST to access Mac Outlook on Windows Outlook using the Outlook Mac Exporter. The tool is available as a free download.
    Signalbox: Train dispatching simulation.
    Store Front: Welcome to Store Front. This is going to be built for the community, by the community; I'm sick of not having a WordPress-style system in ASP.NET forms for developers.
    TLS: TLS.
    Torrent & Podcast plugins: My Podcasts and My Torrents plugins for MediaPortal.
    TorrentSimulation: This is a student project for academic activities.
    UnitXMLEditor: UnitXMLEditor.
    uVersionClientCache: uVersionClientCache is a custom macro to always automatically version (URL querystring parameter) your files, based on an MD5 hash of the file contents or the file's last-modified date, to prevent issues with client browsers caching an old file after you have changed it.
    Video Katalog: Managing an archive of video material.
    VpNet binding for SignalR: VPNET for SignalR is a binding to the Microsoft SignalR jQuery web socket implementation.
    Warlab.
    WPF Diagram Designer Resourse: WPF Diagram Designer Resourse.


  • CodePlex Daily Summary for Thursday, May 15, 2014

    CodePlex Daily Summary for Thursday, May 15, 2014

    Popular Releases

    Visualizers: Visualizer: This DLL is the core of the project; just copy and paste it into the following folder: %Microsoft Visual Studio%\Common7\Packages\Debugger\Visualizers, where %Microsoft Visual Studio% is the Visual Studio installation folder.

    QuickMon: Version 3.10: Adds the ability to see the 'history' of Collector states (including details of errors or warnings at that time). The history size is configurable (default is switched off), and the Windows Service completely ignores keeping history (no UI or user to access it anyway). The Collector stats window now displays this history, plus multiple Collector stats windows can be opened at the same time. Additionally fixed a bug in the event log collector that reported an 'Error' state when an 'out of bounds' ...

    xFunc: xFunc 2.15.4: Fixed bug in Processor.cs.

    TFS Planning and Disaster Recovery Avoidance Guide: v1.4.BETA - TFS, DR and Azure IaaS Planning Guides: Welcome to the TFS Planning and DR Avoidance Guidance. What is new? A new crisper, more compact style, which is easier to consume on multiple devices without sacrificing any content. Also included are the new TFS on Azure IaaS guide and supplementary guides. Note: the capacity planning workbook and posters are included in the Everything zip package. Quality bar: detail documentation has been reviewed by Visual Studio ALM Rangers; documentation has been through an independent technical review ...

    WinAudit: WinAudit Freeware v3.0: WinAudit.exe v3.0 MD5: 88750CCF49FF7418199B2645755830FA. Known issues: 1. Report creation can be very slow when right-to-left (Hebrew) characters are present. 2. Emsisoft Anti-Malware may stop and/or quarantine WinAudit. This happens when WinAudit attempts to obtain a list of running programmes. You will need to set an exception rule in Emsisoft to allow WinAudit to run.

    MVCwCMS - ASP.NET MVC CMS: MVCwCMS 2.2.2: Updated CKFinder config. For the installation instructions visit the documentation page: https://mvcwcms.codeplex.com/documentation

    TerraMap (Terraria World Map Viewer): TerraMap 1.0.4: Added support for the new Terraria v1.2.4 update: new items, walls, and tiles. Fixed Issue 35206: Highlight/Find doesn't work for Demon Altars. Fixed finding Demon Hearts/Shadow Orbs. Added ability to find Enchanted Swords (in the stone) and Water Bolt books. Fixed installer not uninstalling older versions. The setup file will make sure .NET 4 is installed, install TerraMap, create desktop and start menu shortcuts, add a .wld file association, and launch TerraMap. If you prefer the zip ...

    WPF Localization Extension: v2.2.1: Issue #9277, Issue #9292, Issue #9311, Issue #9312, Issue #9313, Issue #9314.

    CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.2.1.41167 Release: This release of the CtrlAltStudio Viewer includes the following significant features: Oculus Rift support; stereoscopic 3D display support; variable walking/flying speed; Xbox 360 Controller support; Kinect for Windows support. Based on the Firestorm viewer 4.6.5 codebase. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-2-1-41167-release. Support info: http://ctrlaltstudio.com/viewer/support. Privacy policy: http:/...

    ExtJS based ASP.NET Controls: FineUI v4.0.6: FineUI, an ASP.NET control library based on ExtJS: No JavaScript, No CSS, No UpdatePanel, No ViewState, No WebServices. Supported browsers: IE 8.0+, Chrome, Firefox, Opera, Safari. License: Apache License v2.0 (note: ExtJS itself is GPL v3, http://www.sencha.com/license). Links: http://fineui.com/, http://fineui.com/bbs/, http://fineui.com/demo/, http://fineui.com/doc/, http://fineui.codeplex.com/, http://fineui.com/bbs/forum.ph...

    Office App Model Samples: Office App Model Samples v2.0: Office App Model Samples v2.0.

    Readable Passphrase Generator: KeePass Plugin 0.13.0: Version 0.13.0: added "mutators" which add uppercase and numbers to passphrases (to help comply with upper, lower, number complexity rules); additional API methods which help consuming the generator from 3rd-party C# projects; 13,160 words in the default dictionary (~600 more than the previous release).

    CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.25.0: The MemberInfo/MethodInfo popup is now positioned properly to fit the screen. In the MethodInfo popup, method signatures are word-wrapped. Implemented Debug text value visualizer. Pinning sub-values from the Watch panel.

    Wrapper for the PAYMILL API: Paymill API Wrapper: Add Description in Preauthorization.

    How to develop an autodialer / predictive dialer in C#: VoIP AutoDialer in C Sharp: This is the downloadable source code for this example project, intended to help you develop your own VoIP autodialer application in C#.

    R.NET: R.NET 1.5.12: R.NET 1.5.12 is a beta release towards R.NET 1.6. You are encouraged to use 1.5.12 now and give feedback. See the documentation for setup and usage instructions. Main changes for R.NET 1.5.12: the C stack limit was not disabled on Windows. For reasons possibly peculiar to R, this means that non-concurrent access to R from multiple threads was not stable. This is now fixed, with the fix validated with a unit test. Thanks to Odugen, skyguy94, and previously others (evolvedmicrobe, tomasp) fo...

    SEToolbox: SEToolbox 01.029.006 Release 1: Fix to allow keyboard search in the load dialog (type the first few letters of your save). Fixed check for new release. Changed the way ship details are loaded to alleviate load time for worlds with very large ships (100,000+ blocks). Fixed image importer, which was incorrectly listing 'Asteroid' as an import option. Minor changes to menus (text and appearance) for clarity and OS consistency. Added reading of the world palette for the color dialog editor. WIP on subsystem editor. Can now multiselec...

    Thaumcraft4 Research: Alpha 2 release, lots of awesome improvements: Performance improvements: asynchronous running of the search; search result cap (useful for long routes, it won't try to find 3000 results); statistics added.

    Tiny Wifi Host: Tiny Wifi Host 3.0.0.0: Tiny Wifi Hotspot Creator (Portable) v3, size 50KB-140KB. New features: friendly names for connected devices instead of MAC addresses (double-click a selected device to enter a friendly name); saves device names to devices.xml; better error reporting and solutions; warning sound when the number of connected devices exceeds a certain number (useful when only a certain number of devices must be connected at a time); many bug fixes. NoAudio files do not include connect, disconnect and warning audio to dec...

    Media Companion: Media Companion MC3.597b: Thank you for being patient, again. There are a number of fixes in place with this release, and some new features added. Most are self-explanatory, so check out the options in Preferences. A couple of new features: Movie - allow saving Title and Sort Title in Title Case format; Movie - allow saving fanart.jpg if the movie is in a folder; TV - display episode source; get episode source from the episode filename. Fixed: Movie - added Fill Tags from plot keywords to Batch Rescraper; Movie - fixed TMDB s...

    New Projects

    BLADE View Engine - An independent RAZOR alternative: An alternative parser and view engine to ASP.NET Razor that does not depend on ASP.NET or .NET 4.5.
    BPL++: A basic object-orientated programming language built upon a virtual machine using C#.
    Caedisi - A Cellular Automata Editor and Simulator for Network Decontamination: A research tool that explores the use of cellular automata to decontaminate a network attacked or infected by a virus.
    Cosmos Software Distrobution 1.0: No content until the project is released!
    ETCQuality: ETC QUALITY STAT SYSTEM.
    F.A.Q.: F.A.Q. project about frequently asked questions.
    Help Viewer Redirector: Enables use of Help Viewer 2.0 or 2.1 with SQL Server instead of Help Viewer 1.0.
    Jet.Payment.Cielo: This project integrates with the Cielo e-commerce service, developed in C#. We created this helper for our own needs; help us improve it.
    Learning with Pati: Learning JavaScript with Pati.
    LeRenard: LeRenard is a collection of solutions (from core helpers and extension methods to libraries), all written in C#, to help build applications.
    MCMP.NET: MCMP.NET allows ASP.NET applications to be added as contexts using mod_cluster.
    PowerShell Deployment Automation Framework: This project contains resources related to my blog at www.powershellcoach.com.
    pravda-f.
    sdir: Colorful, sorted and optionally rich directory enumeration for Windows.
    SienSchoofsProject: School project for .NET Expert.
    Sorting collection of any type by several fields: Sorting a collection of any type by several fields using Reflection and Expression Trees.
    TPS BarCode Scanner: This is going to be a collaboration platform for the TPS team. We want to develop a simple, low-cost barcode generator and scanner system. This project is currentl...
    VerifyDomainOutlookAddIn.
    Ynote Packages: Ynote Packages are a collection of tools which play a crucial role in extending Ynote Classic and understanding its true potential.


  • Render To Texture Using OpenGL is not working but normal rendering works just fine

    - by Franky Rivera
    Things I initialize at the beginning of the program (I realize not all of these pertain to my issue; I just copied and pasted what I had):

    //overall initialized
    //things openGL related I initialize earlier on in the project
    glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );
    glClearDepth( 1.0f );
    glEnable(GL_ALPHA_TEST);
    glEnable( GL_STENCIL_TEST );
    glEnable(GL_DEPTH_TEST);
    glDepthFunc( GL_LEQUAL );
    glEnable(GL_CULL_FACE);
    glFrontFace( GL_CCW );
    glEnable(GL_COLOR_MATERIAL);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glHint( GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST );

    //we also initialize our shader programs
    //(i added some shader program functions for definitions)
    //this enum list is elsewhere in code; i figured it would help show you more about
    //my shader compile/creation function right under this enum list
    /*
    enum eSHADER_ATTRIB_LOCATION
    {
        VERTEX_ATTRIB = 0,
        NORMAL_ATTRIB = 2,
        COLOR_ATTRIB,
        COLOR2_ATTRIB,
        FOG_COORD,
        TEXTURE_COORD_ATTRIB0 = 8,
        TEXTURE_COORD_ATTRIB1,
        TEXTURE_COORD_ATTRIB2,
        TEXTURE_COORD_ATTRIB3,
        TEXTURE_COORD_ATTRIB4,
        TEXTURE_COORD_ATTRIB5,
        TEXTURE_COORD_ATTRIB6,
        TEXTURE_COORD_ATTRIB7
    };
    */

    //if we fail making our shader, leave
    if( !testShader.CreateShader( "SimpleShader.vp", "SimpleShader.fp", 3,
                                  VERTEX_ATTRIB, "vVertexPos",
                                  NORMAL_ATTRIB, "vNormal",
                                  TEXTURE_COORD_ATTRIB0, "vTexCoord" ) )
        return false;

    if( !testScreenShader.CreateShader( "ScreenShader.vp", "ScreenShader.fp", 3,
                                        VERTEX_ATTRIB, "vVertexPos",
                                        NORMAL_ATTRIB, "vNormal",
                                        TEXTURE_COORD_ATTRIB0, "vTexCoord" ) )
        return false;

    SHADER PROGRAM FUNCTIONS

    bool CShaderProgram::CreateShader( const char* szVertexShaderName, const char* szFragmentShaderName, ... )
    {
        //here are our handles for the openGL shaders
        int iGLVertexShaderHandle = -1, iGLFragmentShaderHandle = -1;

        //get our shader data
        char *vData = 0, *fData = 0;
        int vLength = 0, fLength = 0;
        LoadShaderFile( szVertexShaderName, &vData, &vLength );
        LoadShaderFile( szFragmentShaderName, &fData, &fLength );

        //data
        if( !vData )
            return false;

        //data
        if( !fData )
        {
            delete[] vData;
            return false;
        }

        //create both our shader objects
        iGLVertexShaderHandle = glCreateShader( GL_VERTEX_SHADER );
        iGLFragmentShaderHandle = glCreateShader( GL_FRAGMENT_SHADER );

        //well we got this far so we have dynamic data to clean up
        //load vertex shader
        glShaderSource( iGLVertexShaderHandle, 1, (const char**)(&vData), &vLength );
        //load fragment shader
        glShaderSource( iGLFragmentShaderHandle, 1, (const char**)(&fData), &fLength );

        //we are done with our data, delete it
        delete[] vData;
        delete[] fData;

        //compile them both
        glCompileShader( iGLVertexShaderHandle );

        //get shader status
        int iShaderOk;
        glGetShaderiv( iGLVertexShaderHandle, GL_COMPILE_STATUS, &iShaderOk );
        if( iShaderOk == GL_FALSE )
        {
            char* buffer;
            //get what happened with our shader
            glGetShaderiv( iGLVertexShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk );
            buffer = new char[iShaderOk];
            glGetShaderInfoLog( iGLVertexShaderHandle, iShaderOk, NULL, buffer );
            //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName );
            MessageBoxA( NULL, buffer, szVertexShaderName, MB_OK );
            //delete our dynamic data
            delete[] buffer;
            glDeleteShader(iGLVertexShaderHandle);
            return false;
        }

        glCompileShader( iGLFragmentShaderHandle );
        //get shader status
        glGetShaderiv( iGLFragmentShaderHandle, GL_COMPILE_STATUS, &iShaderOk );
        if( iShaderOk == GL_FALSE )
        {
            char* buffer;
            //get what happened with our shader
            glGetShaderiv( iGLFragmentShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk );
            buffer = new char[iShaderOk];
            glGetShaderInfoLog( iGLFragmentShaderHandle, iShaderOk, NULL, buffer );
            //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName );
            MessageBoxA( NULL, buffer, szFragmentShaderName, MB_OK );
            //delete our dynamic data
            delete[] buffer;
            glDeleteShader(iGLFragmentShaderHandle);
            return false;
        }

        //lets check to see if the vertex shader compiled
        int iCompiled = 0;
        glGetShaderiv( iGLVertexShaderHandle, GL_COMPILE_STATUS, &iCompiled );
        if( !iCompiled )
        {
            //this shader did not compile, leave
            return false;
        }

        //lets check to see if the fragment shader compiled
        glGetShaderiv( iGLFragmentShaderHandle, GL_COMPILE_STATUS, &iCompiled );
        if( !iCompiled )
        {
            char* buffer;
            //get what happened with our shader
            glGetShaderiv( iGLFragmentShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk );
            buffer = new char[iShaderOk];
            glGetShaderInfoLog( iGLFragmentShaderHandle, iShaderOk, NULL, buffer );
            //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName );
            MessageBoxA( NULL, buffer, szFragmentShaderName, MB_OK );
            //delete our dynamic data
            delete[] buffer;
            glDeleteShader(iGLFragmentShaderHandle);
            return false;
        }

        //make our new shader program
        m_iShaderProgramHandle = glCreateProgram();
        glAttachShader( m_iShaderProgramHandle, iGLVertexShaderHandle );
        glAttachShader( m_iShaderProgramHandle, iGLFragmentShaderHandle );
        glLinkProgram( m_iShaderProgramHandle );

        int iLinked = 0;
        glGetProgramiv( m_iShaderProgramHandle, GL_LINK_STATUS, &iLinked );
        if( !iLinked )
        {
            //we didn't link
            return false;
        }

        //NOW LETS CREATE ALL OUR HANDLES TO OUR PROPER LIKING
        //start from this parameter
        va_list parseList;
        va_start( parseList, szFragmentShaderName );

        //read in number of variables, if any
        unsigned uiNum = 0;
        uiNum = va_arg( parseList, unsigned );

        //for loop through our attribute pairs
        int enumType = 0;
        for( unsigned x = 0; x < uiNum; ++x )
        {
            //specify our attribute locations
            enumType = va_arg( parseList, int );
            char* name = va_arg( parseList, char* );
            glBindAttribLocation( m_iShaderProgramHandle, enumType, name );
        }
        //end our list parsing
        va_end( parseList );

        //relink, since we have custom specified our attribute locations
        glLinkProgram( m_iShaderProgramHandle );

        //fill our handles
        InitializeHandles( );

        //everything went great
        return true;
    }

    void CShaderProgram::InitializeHandles( void )
    {
        m_uihMVP = glGetUniformLocation( m_iShaderProgramHandle, "mMVP" );
        m_uihWorld = glGetUniformLocation( m_iShaderProgramHandle, "mWorld" );
        m_uihView = glGetUniformLocation( m_iShaderProgramHandle, "mView" );
        m_uihProjection = glGetUniformLocation( m_iShaderProgramHandle, "mProjection" );

        /////////////////////////////////////////////////////////////////////////////////
        //texture handles
        m_uihDiffuseMap = glGetUniformLocation( m_iShaderProgramHandle, "diffuseMap" );
        if( m_uihDiffuseMap != -1 )
        {
            //store what texture index this handle will be in the shader
            glUniform1i( m_uihDiffuseMap, RM_DIFFUSE+GL_TEXTURE0 ); // (0)+
        }

        m_uihNormalMap = glGetUniformLocation( m_iShaderProgramHandle, "normalMap" );
        if( m_uihNormalMap != -1 )
        {
            //store what texture index this handle will be in the shader
            glUniform1i( m_uihNormalMap, RM_NORMAL+GL_TEXTURE0 ); // (1)+
        }
    }

    void CShaderProgram::SetDiffuseMap( const unsigned& uihDiffuseMap )
    {
        glActiveTexture( RM_DIFFUSE+GL_TEXTURE0 ); // (0)+
        glBindTexture( GL_TEXTURE_2D, uihDiffuseMap );
    }

    void CShaderProgram::SetNormalMap( const unsigned& uihNormalMap )
    {
        glActiveTexture( RM_NORMAL+GL_TEXTURE0 ); // (1)+
        glBindTexture( GL_TEXTURE_2D, uihNormalMap );
    }

    MY 2 TEST SHADERS (also, my math order is correct; it pertains to my matrix ordering in my math library. Once again, I've tested the basic rendering, and rendering to the screen works fine.)

    ----------------------------------------SIMPLE SHADER-------------------------------------

    //vertex shader looks like this
    #version 330
    in vec3 vVertexPos;
    in vec3 vNormal;
    in vec2 vTexCoord;

    uniform mat4 mWorld;      // Model Matrix
    uniform mat4 mView;       // Camera View Matrix
    uniform mat4 mProjection; // Camera Projection Matrix

    out vec2 vTexCoordVary;   // Texture coord to the fragment program
    out vec3 vNormalColor;

    void main( void )
    {
        //pass the texture coordinate
        vTexCoordVary = vTexCoord;
        vNormalColor = vNormal;
        //calculate our model view projection matrix
        mat4 mMVP = (( mWorld * mView ) * mProjection );
        //result our position
        gl_Position = vec4( vVertexPos, 1 ) * mMVP;
    }

    //fragment shader looks like this
    #version 330
    in vec2 vTexCoordVary;
    in vec3 vNormalColor;

    uniform sampler2D diffuseMap;
    uniform sampler2D normalMap;

    out vec4 fragColor[2];

    void main( void )
    {
        //CORRECT
        fragColor[0] = texture( normalMap, vTexCoordVary );
        fragColor[1] = vec4( vNormalColor, 1.0 );
    }

    ----------------------------------------SCREEN SHADER-------------------------------------

    //vertex shader looks like this
    #version 330
    in vec3 vVertexPos;     // This is the position of the vertex coming in
    in vec2 vTexCoord;      // This is the texture coordinate....

    out vec2 vTexCoordVary; // Texture coord to the fragment program

    void main( void )
    {
        vTexCoordVary = vTexCoord;
        //set our position
        gl_Position = vec4( vVertexPos.xyz, 1.0f );
    }

    //fragment shader looks like this
    #version 330
    in vec2 vTexCoordVary;        // Incoming "varying" texture coordinate

    uniform sampler2D diffuseMap; //the tile detail texture
    uniform sampler2D normalMap;  //the normal map from earlier

    out vec4 vTheColorOfThePixel;

    void main( void )
    {
        //CORRECT
        vTheColorOfThePixel = texture( normalMap, vTexCoordVary );
    }

    Class CRenderTarget main functions:

    //here is my render target's create function
    bool CRenderTarget::Create( const unsigned uiNumTextures, unsigned uiWidth, unsigned uiHeight, int iInternalFormat, bool bDepthWanted )
    {
        if( uiNumTextures <= 0 )
            return false;

        //generate our variables
        glGenFramebuffers(1, &m_uifboHandle);

        // Initialize FBO
        glBindFramebuffer(GL_FRAMEBUFFER, m_uifboHandle);

        m_uiNumTextures = uiNumTextures;
        if( bDepthWanted )
            m_uiNumTextures += 1;

        m_uiTextureHandle = new unsigned int[uiNumTextures];
        glGenTextures( uiNumTextures, m_uiTextureHandle );

        for( unsigned x = 0; x < uiNumTextures-1; ++x )
        {
            glBindTexture( GL_TEXTURE_2D, m_uiTextureHandle[x]);
            // Reserve space for our 2D render target
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            glTexImage2D(GL_TEXTURE_2D, 0, iInternalFormat, uiWidth, uiHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
            glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + x, GL_TEXTURE_2D, m_uiTextureHandle[x], 0);
        }

        //if we need one for depth testing
        if( bDepthWanted )
        {
            /*glFramebufferTexture2D(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_uiTextureHandle[uiNumTextures-1], 0);
            glFramebufferTexture2D(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT, GL_TEXTURE_2D, m_uiTextureHandle[uiNumTextures-1], 0);*/

            // Must attach texture to framebuffer. Has Stencil and depth
            glBindRenderbuffer(GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]);
            glRenderbufferStorage(GL_RENDERBUFFER, /*GL_DEPTH_STENCIL*/GL_DEPTH24_STENCIL8, TEXTURE_WIDTH, TEXTURE_HEIGHT );
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]);
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]);
        }

        glBindFramebuffer(GL_FRAMEBUFFER, 0);

        //everything went fine
        return true;
    }

    void CRenderTarget::Bind( const int& iTargetAttachmentLoc, const unsigned& uiWhichTexture, const bool bBindFrameBuffer )
    {
        if( bBindFrameBuffer )
            glBindFramebuffer( GL_FRAMEBUFFER, m_uifboHandle );

        if( uiWhichTexture < m_uiNumTextures )
            glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + iTargetAttachmentLoc, m_uiTextureHandle[uiWhichTexture], 0);
    }

    void CRenderTarget::UnBind( void )
    {
        //default our binding
        glBindFramebuffer( GL_FRAMEBUFFER, 0 );
    }

    This is all in a test project, so here's my straightforward rendering function for testing. This render function does basic rendering steps. Keep in mind I have already tested my textures, and I have already tested the box that's being rendered; all basic rendering works fine. It's just when I try to render to a texture and then display it on a render surface that it does not work. Also, I have tested my render surface; it is bound exactly to screen coordinate space.

    void TestRenderSteps( void )
    {
        //Clear the color and the depth
        glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );
        glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

        //bind the shader program
        glUseProgram( testShader.m_iShaderProgramHandle );

        //1) grab the vertex buffer related to our rendering
        glBindBuffer( GL_ARRAY_BUFFER, CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().GetBufferHandle() );
        //2) how our stream will be split here ( 4 bytes position, ..ext )
        CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().MapVertexStride();
        //3) set the index buffer if needed
        glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, CIndexBuffer::GetInstance()->GetBufferHandle() );

        //send the needed information into the shader
        testShader.SetWorldMatrix( boxPosition );
        testShader.SetViewMatrix( Static_Camera.GetView( ) );
        testShader.SetProjectionMatrix( Static_Camera.GetProjection( ) );
        testShader.SetDiffuseMap( iTextureID );
        testShader.SetNormalMap( iTextureID2 );

        GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
        glDrawBuffers(2, buffers);

        //bind to our render target
        //RM_DIFFUSE, RM_NORMAL are enums (0 && 1)
        renderTarget.Bind( RM_DIFFUSE, 1, true );
        renderTarget.Bind( RM_NORMAL, 1, false ); //false because buffer is already bound

        //i clear here just to clear the texture to make it a default value of white
        //by doing this i can see if what im rendering to my screen is just drawing to the screen
        //or if its my render target defaulted
        glClearColor( 1.0f, 1.0f, 1.0f, 1.0f );
        glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

        //i have this box object which i draw
        testBox.Draw();
        //the draw call looks like this
        //my normal rendering works just fine so i know this draw is fine
        //glDrawElementsBaseVertex( m_sides[x].GetPrimitiveType(),
        //                          m_sides[x].GetPrimitiveCount() * 3,
        //                          GL_UNSIGNED_INT,
        //                          BUFFER_OFFSET(sizeof(unsigned int) * m_sides[x].GetStartIndex()),
        //                          m_sides[x].GetStartVertex( ) );

        //we unbind the target back to default
        renderTarget.UnBind();

        //i stop mapping my vertex format
        CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().UnMapVertexStride();

        //i go back to default in using no shader program
        glUseProgram( 0 );

        //now that everything is drawn to the textures,
        //lets draw our screen surface and pass it our 2 filled-out textures
        //NOW RENDER THE TEXTURES WE COLLECTED TO THE SCREEN QUAD

        //bind the shader program
        glUseProgram( testScreenShader.m_iShaderProgramHandle );

        //1) grab the vertex buffer related to our rendering
        glBindBuffer( GL_ARRAY_BUFFER, CVertexBufferManager::GetInstance()->GetPositionTexBuffer().GetBufferHandle() );
        //2) how our stream will be split here
        CVertexBufferManager::GetInstance()->GetPositionTexBuffer().MapVertexStride();
        //3) set the index buffer if needed
        glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, CIndexBuffer::GetInstance()->GetBufferHandle() );

        //pass our 2 filled-out textures (in the shader im just using the diffuse;
        //i wanted to see if i was rendering anything before i started getting into other techniques)
        testScreenShader.SetDiffuseMap( renderTarget.GetTextureHandle(0) ); //SetDiffuseMap definition in shader program class
        testScreenShader.SetNormalMap( renderTarget.GetTextureHandle(1) );  //SetNormalMap definition in shader program class

        //do the draw call drawing our screen rectangle
        glDrawElementsBaseVertex( m_ScreenRect.GetPrimitiveType(),
                                  m_ScreenRect.GetPrimitiveCount() * 3,
                                  GL_UNSIGNED_INT,
                                  BUFFER_OFFSET(sizeof(unsigned int) * m_ScreenRect.GetStartIndex()),
                                  m_ScreenRect.GetStartVertex( ) );

        //unbind our vertex mapping
        CVertexBufferManager::GetInstance()->GetPositionTexBuffer().UnMapVertexStride();

        //default to no shader program
        glUseProgram( 0 );
    }

    Last words:
    1) I can render my box just fine.
    2) I can render my screen rect just fine.
    3) I cannot render my box into a texture and then display it on my screen rect.
    4) This entire project is just a test project I made to try out different rendering practices, so excuse any ugly-ish, unclean code. This was made on the fly when I was trying new test cases.
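    One check that is absent from the code above and is usually the first step when an FBO renders nothing: query framebuffer completeness after the attachments are made. The sketch below uses the standard glCheckFramebufferStatus call; the helper name and the printf reporting are mine, not from the original post, and it assumes the GL headers/loader the project already uses are in scope.

    #include <cstdio>

    //Hypothetical helper: call right before glBindFramebuffer(GL_FRAMEBUFFER, 0) in CRenderTarget::Create.
    //Anything other than GL_FRAMEBUFFER_COMPLETE means draws into this FBO will silently do nothing.
    bool CheckFboComplete( void )
    {
        GLenum status = glCheckFramebufferStatus( GL_FRAMEBUFFER );
        if( status != GL_FRAMEBUFFER_COMPLETE )
        {
            //e.g. GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT, GL_FRAMEBUFFER_UNSUPPORTED, ...
            printf( "FBO incomplete: 0x%X\n", status );
            return false;
        }
        return true;
    }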


  • CodePlex Daily Summary for Thursday, November 07, 2013

    CodePlex Daily Summary for Thursday, November 07, 2013

    Popular Releases

    Compare .NET Objects: Version 1.7.4.0: Manual merge of patch 15325 from Farris to fix issues 9075 and 9076, relating to defects with ignoring the collection order. Applied patch 15263 from MariuszWojcik to support LINQ enumerators.

    Toolbox for Dynamics CRM 2011/2013: XrmToolBox (v1.2013.9.25): XrmToolbox improvement: correct changing connection from the status dropdown. Tool improvements, updated tools: Audit Center (v1.2013.9.10) -> publish entities; Iconator (v1.2013.9.27) -> optimized asynchronous loading of images and entities; MetadataDocumentGenerator (v1.2013.11.6) -> correct system entities reading with incorrect attribute type; Script Manager (v1.2013.9.27) -> retrieve only custom events; SiteMapEditor (v1.2013.11.7) -> reset of CRM 2013 SiteMap; ViewLayoutReplicator (v1.201...

    Event-Based Components AppBuilder: AB3.AppDesigner.59: Iteration 59 (feature): by selecting the center thumb of a selected wire you can add a point attribute by context menu (to redirect a wire); therefore all possible wire outlines are possible... New: AddNewWirePointAttributeFlow, AddNewWirePointAttributeAdapter, NewPointAttributeAdder. Improved: LineAdorner, WireLineDecoratorBase, WireLineSourceToTargetDecorator, WiresRenderer, ... See: https://ebcappbuilder.codeplex.com/wikipage?title=Modify%20the%20view%20of%20a%20wire. Coming soon: Iterat...

    Microsoft SQL Server Product Samples: Database: SQL Server 2014 CTP2 In-Memory OLTP Sample: This sample showcases the new In-Memory OLTP feature, which is part of SQL Server 2014 CTP2. It shows the new memory-optimized tables and natively-compiled stored procedures, and can be used to show the performance benefit of In-Memory OLTP. Installation instructions for the sample are included in the file 'awinmemsample.doc', which is part of the download. You can ask a question about this sample at the SQL Server Samples Forum.

    Composite C1 CMS - Open Source on .NET: Composite C1 4.1: Composite C1 4.1 (4.1.5058.34326). Write a review for this release; help us improve, recommend us. Getting started: if you are new to Composite C1 and want to install it, see http://docs.composite.net/Getting-started. What's new in Composite C1 4.1: the following are highlights of major changes since Composite C1 4.0. General user features: drag-and-drop images and files like PDF and Word directly from your own desktop and folders into page content; allows you to install Composite Form Builder ...

    xFunc: xFunc 2.10.1: Fixed https://github.com/sys27/xFunc/issues/60.

    Win_8 (Devel Studio 2 / 3): Win8 0.8 + Sample (.dvs): 1. Added ability to load icons. 2. Fixed bugs related to obtaining names of shapes. 3. Fixed a memory leak. 4. Fixed some defects. 5. Optimized code.

    ConEmu - Windows console with tabs: ConEmu 131105 [Alpha]: ConEmu, developer build, x86 and x64 versions. Written in C++, no additional packages required. Run "ConEmu.exe" or "ConEmu64.exe". Some useful information you may find at: http://superuser.com/questions/tagged/conemu, http://code.google.com/p/conemu-maximus5/wiki/ConEmuFAQ, http://code.google.com/p/conemu-maximus5/wiki/TableOfContents. If you want to use ConEmu in portable mode, just create an empty "ConEmu.xml" file next to "ConEmu.exe".

    CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.9.0: Implemented Recent Scripts list. Added checking for plugin updates from the AboutBox. Multiple formatting improvements/fixes. Implemented selection of the CLR version when preparing a distribution package. Added project panel button for showing the plugin shortcuts list. Added 'What's New?' panel. Fixed auto-formatting scrolling artifact. Implemented navigation to the "logical" file (vs. the auto-generated one) from the output panel. To avoid the DLLs getting locked by the OS, use the MSI file for installation.

    Home Access Plus+: v9.7: Updated: JSON.net. Fixed: issue with the Windows 8 app. Added: Windows 8.1 app. Added: Win: self-signed HAP+ install support. Added: Win: delete file support. Added: timeout for the Logon Tracker. Removed: error dialogs on the user card. Fixed: green line showing over the booking form. Note: a web.config file update is required.

    xUnit.net - Unit testing framework for C# and .NET (a successor to NUnit): xUnit.net Visual Studio Runner: A placeholder for downloading Visual Studio runner VSIX files, in case the Gallery is down (or you want to downgrade to older versions).

    Social Network Importer for NodeXL: SocialNetImporter(v.1.9.1): This new version includes: the "Include me" option is back; fixed the login bug reported lately.

    VeraCrypt: VeraCrypt version 1.0c: Changes between 1.0b and 1.0c (11 November 2013): correctly set the minimum required version in volume headers (this value must always follow the program version after any major changes). This also solves the hidden volume issue.

    Captcha MVC: Captcha MVC 2.5: v2.5: added support for MVC 5; the DefaultCaptchaManager no longer throws an error if the captcha values were entered incorrectly; minor changes. v2.4.1: fixed issues with deleting incorrect values of the captcha token in the SessionStorageProvider (this could lead to a situation where the captcha was not working with the SessionStorageProvider); minor changes. v2.4: changed the IIntelligencePolicy interface, added ICaptchaManager as a parameter for all methods; improved font size ...

    Duplica: duplica 0.2.498: this is the first stable release.

    DNN Blog: 06.00.01: 06.00.01 release. This is the first bugfix release of the new v6 blog module. These are the changes: added some robustness in v5-v6 scripts to cater for some rare upgrade scenarios; changed the name of the module definition to avoid a clash with Evoq Social; addition of a sitemap provider.

    VG-Ripper & PG-Ripper: VG-Ripper 2.9.50: Changes: NEW: added support for "ImageHostHQ.com" links; NEW: added support for "ImgMoney.net" links; NEW: added support for "ImgSavy.com" links; NEW: added support for "PixTreat.com" links; bug fixes.

    VidCoder: 1.5.11 Beta: Added Encode Details window: exposes elapsed time, ETA, current and average FPS, running file size, current pass and pass progress. Open it by going to Windows -> Encode Details while an encode is running. The subtitle dialog now disables the "Burn In" checkbox when it's either unavailable or the only option; it also disables "Forced Only" when the subtitle type doesn't support the "Forced" flag. Updated HandBrake core to SVN 5872. Fixed crash in the preview window when a source fil...

    Wsus Package Publisher: Release v1.3.1311.02: Adds three new actions in custom updates: work with files (copy, delete, rename), work with folders (add, delete, rename) and work with registry keys (add, delete, rename). Fixes a bug where, after re-signing an update, the display did not refresh. Modifies the way WPP sorts rows in the 'Updates Detail Viewer' and 'Computer List Viewer' so that dates are correctly sorted. Adds a tab in the settings form to set proxy settings for when WPP needs to go on the Internet. Fixes a bug where 'Manage Catalogs Subsc...

    uComponents: uComponents v6.0.0: This release of uComponents will compile against and support the new API in Umbraco v6.1.0. What's new in uComponents v6.0.0? New DataTypes: Image Point, XML DropDownList, XPath Templatable List. New features / resolved issues: the following work items have been implemented and/or resolved: 14781 14805 14808 14818 14854 14827 14868 14859 14790 14853 14790 DataType Grid 14788 14810 14873 14833 14864 14855 / 14860 14816 14823 drag & drop support for rows Su...

    New Projects

    Auto Mapping MVVM: A simple MVVM kit that works with Windows 8 and Windows Phone 8, with the ability to automatically map between Views and ViewModels.
    BMS Converter: BMS Converter is a converter for .bms files (Be-Music Source), which converts them to audio and/or video files.
    Car Management: The software which is going to be developed aims to give an interface to manage car bookings with the ABC University institution.
    Classic Algorithms: This is a collection of classic algorithms written in C#. I'll start with the graphs.
    Community TFS Work Item Tracking Extensions: The Community TFS WIT Extensions project is a place to share tools and extensions to the TFS work item tracking system.
    D3N: "port" of D3 to .NET.
    doinik tara client: The Daily Star client.
    LEGO MINDSTORMS EV3 API: API for the LEGO Mindstorms EV3 brick, usable from desktop, Windows Phone and WinRT.
    LudejoWcf: Playground WCF.
    My Test: This is a demo.
    Pearson API Wrapper: Pearson API Wrapper enables you to develop learning apps faster!
    Pescar2013ShopAyelenArce: Lalala.
    pescar2013-shop-ElectroShop: Already contains: EF5. Adds: layouts, Bootstrap, Bootstrap JS, JS, LINQ(?), Knockout(?), jQuery, jQuery UI.
    Pescar2013Shop-MaruMati: Sale of products through a web page.
    Pescar2013Shop-The_Future_2014: First project.
    RatatoskSMS: SMS gateway based on the GSMComm library that can handle multiple GSM modems to send and receive messages to/from a database, with an HTTP interface.
    SharePoint Enhanced New: A replacement for the "New Document" split button in document libraries, with a modal dialog displaying the available document content types.
    Task Management Application: The Eisenhower Matrix with backlog.
    Tridion: This project provides an analog to Tridion Core Services for older versions of Tridion that rely on the TOM (COM) architecture.
    Web Scripting - Assignment 2 - Website Prototype: Assignment 2 - Website Prototype.
    Windows Azure Custom Performance Counters: The project Windows Azure Custom Performance Counters is a startup task to help with working with custom performance counters on web and worker roles.

    Read the article

  • .NET Code Evolution

    - by Alois Kraus
Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/07/24/153504.aspx

At my day job I look at a lot of code written by other people. Most of the code is quite good and some is even a masterpiece. And there is also code which makes you think WTF… oh, it was written by me. Hm, not so bad after all. There are many excuses, er, reasons for bad code. Most often it is time pressure, followed by not enough ambition (who cares) or insufficient training. Normally I care about code quality quite a lot, which makes me a (perceived) slow worker who writes many tests and refines the code quite a lot because of design deficiencies. Most of the deficiencies I find by putting my design under stress while checking for invariants. It also helps a lot to step into the code with a debugger (sometimes also Windbg). I do this much more often when my tests are red. That way I get a much better understanding of what my code really does, not what I think it should be doing. This time I want to show you how code can evolve over the years with different .NET Framework versions. Once there was a time when .NET 1.1 was new and many C++ programmers switched over to get rid of uninitialized pointers and memory leaks. There were also nice new data structures available, such as the Hashtable, which is a fast lookup table with O(1) time complexity. All was good and much code was written since then. In 2005 a new version of the .NET Framework arrived which brought many new things like generics and new data structures. The "old-fashioned" Hashtable was coming to an end and everyone used the new Dictionary<xx,xx> type instead, which was type safe and faster because the object-to-type conversion (aka boxing) was no longer necessary. I think 95% of all Hashtables and dictionaries use string as key. Often it is convenient to ignore casing to make it easy to look up values which the user entered. An often followed route is to convert the string to upper case before putting it into the Hashtable.

Hashtable Table = new Hashtable();

void Add(string key, string value)
{
    Table.Add(key.ToUpper(), value);
}

This is valid and working code, but it has problems. First, we can pass the Hashtable a custom IEqualityComparer to do the string matching case-insensitively. Second, we can switch over to the (by now also old) Dictionary type to become a little faster, and we can keep the original keys (not upper-cased) in the dictionary.

Dictionary<string, string> DictTable = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

void AddDict(string key, string value)
{
    DictTable.Add(key, value);
}

Many people do not use the other ctors of Dictionary because they shy away from the overhead of writing their own comparer. They do not know that .NET already has predefined comparers for strings at hand which you can use directly. Today, in the many-core era, we use threads all over the place. Sometimes things break in subtle ways, but most of the time it is sufficient to place a lock around the offender. Threading has become so mainstream that it may sound weird that in the year 2000 some guy got a huge incentive for the idea to reduce the time to process calibration data from 12 hours to 6 hours by using two threads on a dual-core machine. Threading makes it easy to become faster at the expense of correctness. Correct and scalable multithreading can be arbitrarily hard to achieve depending on the problem you are trying to solve.
Let's suppose we want to process millions of items with two threads and count the items processed by all threads. A typical beginner's code might look like this:

int Counter;

void IJustLearnedToUseThreads()
{
    var t1 = new Thread(ThreadWorkMethod);
    t1.Start();
    var t2 = new Thread(ThreadWorkMethod);
    t2.Start();
    t1.Join();
    t2.Join();
    if (Counter != 2 * Increments)
        throw new Exception("Hmm " + Counter + " != " + 2 * Increments);
}

const int Increments = 10 * 1000 * 1000;

void ThreadWorkMethod()
{
    for (int i = 0; i < Increments; i++)
    {
        Counter++;
    }
}

It throws an exception with a message like "Hmm 10.222.287 != 20.000.000" and never completes successfully. The code fails because the assumption that Counter++ is an atomic operation is wrong. The ++ operator is just a shortcut for

Counter = Counter + 1

This involves reading the counter from a memory location into the CPU, incrementing the value on the CPU, and writing the new value back to the memory location. When we look at the generated assembly code we will see only

inc dword ptr [ecx+10h]

which is only one instruction. Yes, it is one instruction, but it is not atomic. All modern CPUs have several layers of caches (L1, L2, L3) which try to hide how slow actual main memory accesses are. Since cache is just another word for redundant copy, it can happen that one CPU reads a value from main memory into the cache, modifies it, and writes it back to main memory. The problem is that at least the L1 cache is not shared between CPUs, so it can happen that one CPU makes changes to values which have changed in the meantime in main memory. From the exception you can see that we incremented the value 20 million times, but half of the changes were lost because we overwrote the already changed value from the other thread. This is a very common case, and people learn to protect their data with proper locking.

void Intermediate()
{
    var time = Stopwatch.StartNew();
    Action acc = ThreadWorkMethod_Intermediate;
    var ar1 = acc.BeginInvoke(null, null);
    var ar2 = acc.BeginInvoke(null, null);
    ar1.AsyncWaitHandle.WaitOne();
    ar2.AsyncWaitHandle.WaitOne();
    if (Counter != 2 * Increments)
        throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));
    Console.WriteLine("Intermediate did take: {0:F1}s", time.Elapsed.TotalSeconds);
}

void ThreadWorkMethod_Intermediate()
{
    for (int i = 0; i < Increments; i++)
    {
        lock (this)
        {
            Counter++;
        }
    }
}

This is better and uses the .NET Threadpool to get rid of manual thread management. It gives the expected result, but it can result in deadlocks because you lock on this. This is in general a bad idea, since it can lead to deadlocks when other threads use your class instance as a lock object. It is therefore recommended to create a private object as lock object to ensure that nobody else can lock your lock object. When you read more about threading you will read about lock-free algorithms. They are nice and can improve performance quite a lot, but you need to pay close attention to the CLR memory model. It makes quite weak guarantees in general, but your code can still work because your CPU architecture gives you more invariants than the CLR memory model does. For a simple counter there is an easy lock-free alternative present with the Interlocked class in .NET. As a general rule you should not try to write lock-free algos, since most likely you will fail to get it right on all CPU architectures.
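As a quick illustration of the private lock object recommendation above, a minimal sketch (only Counter and Increments come from the sample; the rest is made up for illustration):

private readonly object _counterLock = new object();

void ThreadWorkMethod_PrivateLock()
{
    for (int i = 0; i < Increments; i++)
    {
        // Only this class can take _counterLock, so no outside code can
        // deadlock us by locking on our instance.
        lock (_counterLock)
        {
            Counter++;
        }
    }
}

The behavior is identical to the lock (this) version; the difference is that the lock token is not reachable from outside the class.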
void Experienced()
{
    var time = Stopwatch.StartNew();
    Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
    Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
    t1.Wait();
    t2.Wait();
    if (Counter != 2 * Increments)
        throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));
    Console.WriteLine("Experienced did take: {0:F1}s", time.Elapsed.TotalSeconds);
}

void ThreadWorkMethod_Experienced()
{
    for (int i = 0; i < Increments; i++)
    {
        Interlocked.Increment(ref Counter);
    }
}

Since time moves forward, we no longer use threads explicitly but the much nicer Task abstraction, which was introduced with .NET 4 in 2010. It is educational to look at the generated assembly code. The Interlocked.Increment method must be called, which does wondrous things, right? Let's see:

lock inc dword ptr [eax]

The first thing to note is that there is no method call at all. Why? Because the JIT compiler knows very well about CPU intrinsic functions. Atomic operations which lock the memory bus to prevent other processors from reading stale values are such things. Second: this is the same increment instruction prefixed with a lock instruction. The only reason for the existence of the Interlocked class is that the JIT compiler can compile it to the matching CPU intrinsic functions, which can not only increment by one but can also do an add, an exchange, and a combined compare-and-exchange operation. But be warned that the correct usage of its methods can be tricky. If you try to be clever, look at the generated IL code, and try to reason about its efficiency, you will fail. Only the generated machine code counts. Is this the best code we can write? Perhaps. It is nice and clean. But can we make it any faster? Let's see how well we are doing currently.

Level                     Time in s
IJustLearnedToUseThreads  flawed code (never succeeds)
Intermediate              1.5 (lock)
Experienced               0.3 (Interlocked.Increment)
Master                    0.1 (1.0 for a plain int[2])

That lock-free thing is really a nice thing. But if you read more about CPU caches, cache coherency, and false sharing, you can do even better.

int[] Counters = new int[12]; // cache line size is 64 bytes on my machine (8-way associative cache); try for yourself, e.g. 64 on more modern CPUs

void Master()
{
    var time = Stopwatch.StartNew();
    Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Master, 0);
    Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Master, Counters.Length - 1);
    t1.Wait();
    t2.Wait();
    Counter = Counters[0] + Counters[Counters.Length - 1];
    if (Counter != 2 * Increments)
        throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));
    Console.WriteLine("Master did take: {0:F1}s", time.Elapsed.TotalSeconds);
}

void ThreadWorkMethod_Master(object number)
{
    int index = (int) number;
    for (int i = 0; i < Increments; i++)
    {
        Counters[index]++;
    }
}

The key insight here is to give each core its own value. But if you simply use an integer array of two items, one for each core, and add the items at the end, you will be much slower than the lock-free version (by a factor of 3). Each CPU core has its own cache line size, which is something in the range of 16-256 bytes. When you access a value from one location, the CPU does not fetch only that one value from main memory but a complete cache line (e.g. 16 bytes). This means that you do not pay for the next 15 bytes when you access them. This can lead to dramatic performance improvements and non-obvious code which is faster although it has many more memory reads than another algorithm. So what have we done here?
We started with correct code, but it was lacking knowledge of how to use the .NET Base Class Libraries optimally. Then we tried to get fancy, used threads for the first time, and failed. Our next try was better, but it still had non-obvious issues (the lock object was exposed to the outside). Knowledge increased further and we found a lock-free version of our counter, which is a nice, clean, and perfectly valid solution. The last example is only here to show you how you can get the most out of threading by paying close attention to your data structures and to CPU cache coherency. Although we are working in a virtual execution environment, in a high-level language with automatic memory management, it pays off to know the details down to the assembly level. Only if you continue to learn and dig deeper can you come up with solutions no one else was even considering. I have studied particle physics, which helps with the digging-deeper part. Have you ever tried to solve Quantum Chromodynamics equations? Compared to that, the rest must be easy ;-). Although I am no longer working in the science field, I take pride in discovering non-obvious things. This can be a very hard-to-find bug or a new way to restructure data to make something 10 times faster. Now I need to get some sleep ….

    Read the article

  • ZFS for Database Log Files

    - by user12620111
I've been troubled by drop outs in CPU usage in my application server, characterized by the CPUs suddenly going from close to 90% CPU busy to almost completely CPU idle for a few seconds. Here is an example of a drop out as shown by a snippet of vmstat data taken while the application server is under a heavy workload.

# vmstat 1
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr s3 s4 s5 s6   in   sy   cs us sy id
 1 0 0 130160176 116381952 0 16 0 0 0 0  0  0  0  0  0 207377 117715 203884 70 21 9
 12 0 0 130160160 116381936 0 25 0 0 0 0 0  0  0  0  0 200413 117162 197250 70 20 9
 11 0 0 130160176 116381920 0 16 0 0 0 0 0  0  1  0  0 203150 119365 200249 72 21 7
 8 0 0 130160176 116377808 0 19 0 0 0 0  0  0  0  0  0 169826 96144 165194 56 17 27
 0 0 0 130160176 116377800 0 16 0 0 0 0  0  0  0  0  1 10245 9376 9164 2  1 97
 0 0 0 130160176 116377792 0 16 0 0 0 0  0  0  0  0  2 15742 12401 14784 4 1 95
 0 0 0 130160176 116377776 2 16 0 0 0 0  0  0  1  0  0 19972 17703 19612 6 2 92
 14 0 0 130160176 116377696 0 16 0 0 0 0 0  0  0  0  0 202794 116793 199807 71 21 8
 9 0 0 130160160 116373584 0 30 0 0 0 0  0  0 18  0  0 203123 117857 198825 69 20 11

This behavior occurred consistently while the application server was processing synthetic transactions: HTTP requests from JMeter running on an external machine. I explored many theories trying to explain the drop outs, including:

- Unexpected JMeter behavior
- Network contention
- Java Garbage Collection
- Application server thread pool problems
- Connection pool problems
- Database transaction processing
- Database I/O contention

Graphing the CPU %idle led to a breakthrough: several of the drop outs were 30 seconds apart. With that insight, I went digging through the data again and looking for other outliers that were 30 seconds apart. In the database server statistics, I found spikes in the iostat "asvc_t" (average response time of disk transactions, in milliseconds) for the disk drive that was being used for the database log files. Here is an example:

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 2053.6    0.0 8234.3  0.0  0.2    0.0    0.1   0  24 c3t60080E5...F4F6d0s0
    0.0 2162.2    0.0 8652.8  0.0  0.3    0.0    0.1   0  28 c3t60080E5...F4F6d0s0
    0.0 1102.5    0.0 10012.8  0.0  4.5    0.0    4.1   0  69 c3t60080E5...F4F6d0s0
    0.0   74.0    0.0 7920.6  0.0 10.0    0.0  135.1   0 100 c3t60080E5...F4F6d0s0
    0.0  568.7    0.0 6674.0  0.0  6.4    0.0   11.2   0  90 c3t60080E5...F4F6d0s0
    0.0 1358.0    0.0 5456.0  0.0  0.6    0.0    0.4   0  55 c3t60080E5...F4F6d0s0
    0.0 1314.3    0.0 5285.2  0.0  0.7    0.0    0.5   0  70 c3t60080E5...F4F6d0s0

Here is a little more information about my database configuration:

- The database and application server were running on two different SPARC servers.
- Storage for the database was on a storage array connected via 8 gigabit Fibre Channel.
- Data storage and log file were on different physical disk drives.
- Reliable low latency I/O is provided by battery backed NVRAM.
- Highly available: two Fibre Channel links accessed via MPxIO, two mirrored cache controllers, and the log file physical disks mirrored in the storage device.
- Database log files on a ZFS filesystem with cutting-edge technologies, such as copy-on-write and end-to-end checksumming.

Why would I be getting service time spikes in my high-end storage?
First, I wanted to verify that the database log disk service time spikes aligned with the application server CPU drop outs, and they did. At first, I guessed that the disk service time spikes might be related to flushing the write-through cache on the storage device, but I was unable to validate that theory. After searching the WWW for a while, I decided to try using a separate log device:

# zpool add ZFS-db-41 log c3t60080E500017D55C000015C150A9F8A7d0

The ZFS log device is configured in a similar manner as described above: two physical disks mirrored in the storage array. This change to the database storage configuration eliminated the application server CPU drop outs. Here is the zpool configuration:

# zpool status ZFS-db-41
  pool: ZFS-db-41
 state: ONLINE
 scan: none requested
config:
        NAME                   STATE
        ZFS-db-41              ONLINE
          c3t60080E5...F4F6d0  ONLINE
        logs
          c3t60080E5...F8A7d0  ONLINE

Now, the I/O spikes look like this:

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 1053.5    0.0 4234.1  0.0  0.8    0.0    0.7   0  75 c3t60080E5...F8A7d0s0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 1131.8    0.0 4555.3  0.0  0.8    0.0    0.7   0  76 c3t60080E5...F8A7d0s0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 1167.6    0.0 4682.2  0.0  0.7    0.0    0.6   0  74 c3t60080E5...F8A7d0s0
    0.0  162.2    0.0 19153.9  0.0  0.7    0.0    4.2   0  12 c3t60080E5...F4F6d0s0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 1247.2    0.0 4992.6  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0
    0.0   41.0    0.0   70.0  0.0  0.1    0.0    1.6   0   2 c3t60080E5...F4F6d0s0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 1241.3    0.0 4989.3  0.0  0.8    0.0    0.6   0  75 c3t60080E5...F8A7d0s0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 1193.2    0.0 4772.9  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0

We can see the steady flow of 4k writes to the ZIL device from O_SYNC database log file writes. The spikes are from flushing the transaction group. Like almost all problems that I run into, once I thoroughly understand the problem, I find that other people have documented similar experiences. Thanks to all of you who have documented alternative approaches. Saved for another day: now that the problem is obvious, I should try "zfs:zfs_immediate_write_sz" as recommended in the ZFS Evil Tuning Guide.

References:
- The ZFS Intent Log
- Solaris ZFS, Synchronous Writes and the ZIL Explained
- ZFS Evil Tuning Guide: Cache Flushes
- ZFS Evil Tuning Guide: Tuning ZFS for Database Performance
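A footnote on the zfs_immediate_write_sz tunable saved for another day: Solaris kernel tunables of this kind are set in /etc/system and picked up at the next reboot. A sketch only; 0x8000 (32 KB) is the commonly cited default rather than a value validated on this system:

* /etc/system fragment: writes larger than zfs_immediate_write_sz are written
* to the pool directly and only logged by reference in the ZIL
* (the value below is illustrative, not a recommendation)
set zfs:zfs_immediate_write_sz = 0x8000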

    Read the article

  • cannot access localhost using ip

    - by Robert
I have done a small web development project using Eclipse. It runs well when I open it in a browser with the URL localhost:8080/myproject/home.html. But if I want to access it on another machine (laptop, mobile, etc. using the same wifi) it is not possible; it is not able to connect. After Googling for a while I found out that I have to use the IP address instead of 'localhost'. So I tried 10.0.0.4:8080/myproject/home.html, but it still does not work. In fact, I am unable to open that URL even on the same machine (where localhost:8080/myproject/home.html works fine). I also added a new Inbound rule in the control panel firewall settings, allowing access to all ports for protocol TCP. I still have problems running the application with the URL 10.0.0.4:8080/myproject/home.html (both on the same machine and on the laptop and mobile). FYI, I am using Eclipse Indigo and Apache Tomcat 6.0, and the server.xml contents are as below:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
<!-- Note: A "Server" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/server.html -->
<Server port="8005" shutdown="SHUTDOWN">
  <!--APR library loader. Documentation at /docs/apr.html -->
  <Listener SSLEngine="on" className="org.apache.catalina.core.AprLifecycleListener"/>
  <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html -->
  <Listener className="org.apache.catalina.core.JasperListener"/>
  <!-- Prevent memory leaks due to use of particular java/javax APIs-->
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener"/>
  <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html -->
  <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener"/>
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener"/>
  <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html -->
  <GlobalNamingResources>
    <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users -->
    <Resource auth="Container" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" name="UserDatabase" pathname="conf/tomcat-users.xml" type="org.apache.catalina.UserDatabase"/>
  </GlobalNamingResources>
  <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level.
Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8080" protocol="HTTP/1.1" address="10.0.0.4" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"/> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine defaultHost="localhost" name="Catalina"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <Host appBase="webapps" autoDeploy="true" name="localhost" unpackWARs="true" xmlNamespaceAware="false" xmlValidation="false"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> <Context docBase="myproject" path="/myproject" reloadable="true" source="org.eclipse.jst.jee.server:myproject"/></Host> </Engine> </Service> </Server>
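One detail that stands out when re-reading the config above: the HTTP connector is bound to address="10.0.0.4". For comparison, the stock Tomcat 6 connector (shown here as a reference sketch, not taken from this file) has no address attribute and therefore listens on all interfaces, localhost and LAN IP alike:

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />

With the address attribute present, the connector binds only to that one interface.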

    Read the article

  • CodePlex Daily Summary for Wednesday, November 06, 2013

    CodePlex Daily Summary for Wednesday, November 06, 2013Popular ReleasesWin_8 (??? Devel Studio 2 ??? 3): Win8 0.8 + Sample (.dvs): ???????------------------------------------------ 1. ????????? ??????????? ????????? ??????. 2. ?????????? ???? , ????????? ? ?????????? ???? ????. 3. ?????????? ?????? ??????. 4. ?????????? ????????? ???????????. 5. ????????????? ??? . English----------------------------------------------------------------------- 1. Added ability to load icons. 2. Fixed bugs related to obtaining names of shapes. 3. Fixed a memory leak. 4. Fixed some defects. 5. Optimized code. ---------------------------...NWTCompiler: NWTCompiler v2.4.0: This version fixes a bug with Pinyin and adds support for the 2013 English NWT.ConEmu - Windows console with tabs: ConEmu 131105 [Alpha]: ConEmu - developer build x86 and x64 versions. Written in C++, no additional packages required. Run "ConEmu.exe" or "ConEmu64.exe". Some useful information you may found: http://superuser.com/questions/tagged/conemu http://code.google.com/p/conemu-maximus5/wiki/ConEmuFAQ http://code.google.com/p/conemu-maximus5/wiki/TableOfContents If you want to use ConEmu in portable mode, just create empty "ConEmu.xml" file near to "ConEmu.exe"CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.9.0: Implemented Recent Scripts list Added checking for plugin updates from AboutBox Multiple formatting improvements/fixes Implemented selection of the CLR version when preparing distribution package Added project panel button for showing plugin shortcuts list Added 'What's New?' panel Fixed auto-formatting scrolling artifact Implemented navigation to "logical" file (vs. auto-generated) file from output panel To avoid the DLLs getting locked by OS use MSI file for the installation.Home Access Plus+: v9.7: Updated: JSON.net Fixed: Issue with the Windows 8 App Added: Windows 8.1 App Added: Win: Self Signed HAP+ Install Support Added: Win: Delete File Support Added: Timeout for the Logon Tracker Removed: Error Dialogs on the User Card Fixed: Green line showing over the booking form Note: a web.config file update is requiredWPF Extended DataGrid: WPF Extended DataGrid 2.0.0.10 binaries: Now row summaries are updated whenever autofilter value sis modified.Community Forums NNTP bridge: Community Forums NNTP Bridge V55 (LiveConnect): This is a which can be used with the new LiveConnect authentication and the MSDN forums. It fixed a bug where the authentication does not work after 1 hour. A logfile will be stored in "%AppData%\Community\CommunityForumsNNTPServer". If you have any problems please feel free to sent me the file "LogFile.txt" or attached it to a issue.xUnit.net - Unit testing framework for C# and .NET (a successor to NUnit): xUnit.net Visual Studio Runner: A placeholder for downloading Visual Studio runner VSIX files, in case the Gallery is down (or you want to downgrade to older versions).Social Network Importer for NodeXL: SocialNetImporter(v.1.9.1): This new version includes: - Include me option is back - Fixed the login bug reported latelyVeraCrypt: VeraCrypt version 1.0c: Changes between 1.0b and 1.0c (11 November 2013) : Set correctly the minimum required version in volumes header (this value must always follow the program version after any major changes). This also solves also the hidden volume issueCaptcha MVC: Captcha MVC 2.5: v 2.5: Added support for MVC 5. The DefaultCaptchaManager is no longer throws an error if the captcha values was entered incorrectly. Minor changes. 
v 2.4.1: Fixed issues with deleting incorrect values of the captcha token in the SessionStorageProvider. This could lead to a situation when the captcha was not working with the SessionStorageProvider. Minor changes. v 2.4: Changed the IIntelligencePolicy interface, added ICaptchaManager as parameter for all methods. Improved font size ...Role Based Views in Microsoft Dynamics CRM 2011: Role Based Views in CRM 2011 - 1.0.0.0: Set the default view for a user for a particular entity based on security role Hide/ Show views for a user for a particular entity based on his security role Choose the preferred role for a user for view configuration when the user have more than one security role in the system. Ability to exclude/ include a user from view configuration as per business requirementsDuplica: duplica 0.2.498: this is first stable releaseDNN Blog: 06.00.01: 06.00.01 ReleaseThis is the first bugfix release of the new v6 blog module. These are the changes: Added some robustness in v5-v6 scripts to cater for some rare upgrade scenarios Changed the name of the module definition to avoid clash with Evoq Social Addition of sitemap providerStock Track: Version 1.2 Stable: Overhaul and re-think of the user interface in normal mode. Added stock history view in normal mode. Allows user to enter orders in normal mode. Allow advanced user to run database queries within the program. Improved sales statistics feature, able to calculate against a single category. Updated database script file. Now compatible with lower version of SQL Server.VG-Ripper & PG-Ripper: VG-Ripper 2.9.50: changes NEW: Added Support for "ImageHostHQ.com" links NEW: Added Support for "ImgMoney.net" links NEW: Added Support for "ImgSavy.com" links NEW: Added Support for "PixTreat.com" links Bug fixesVidCoder: 1.5.11 Beta: Added Encode Details window. Exposes elapsed time, ETA, current and average FPS, running file size, current pass and pass progress. Open it by going to Windows -> Encode Details while an encode is running. Subtitle dialog now disables the "Burn In" checkbox when it's either unavailable or it's the only option. It also disables the "Forced Only" when the subtitle type doesn't support the "Forced" flag. Updated HandBrake core to SVN 5872. Fixed crash in the preview window when a source fil...Wsus Package Publisher: Release v1.3.1311.02: Add three new Actions in Custom Updates : Work with Files (Copy, Delete, Rename), Work with Folders (Add, Delete, Rename) and Work with Registry Keys (Add, Delete, Rename). Fix a bug, where after resigning an update, the display is not refresh. Modify the way WPP sort rows in 'Updates Detail Viewer' and 'Computer List Viewer' so that dates are correctly sorted. Add a Tab in the settings form to set Proxy settings when WPP needs to go on Internet. Fix a bug where 'Manage Catalogs Subsc...uComponents: uComponents v6.0.0: This release of uComponents will compile against and support the new API in Umbraco v6.1.0. What's new in uComponents v6.0.0? 
New DataTypesImage Point XML DropDownList XPath Templatable List New features / Resolved issuesThe following workitems have been implemented and/or resolved: 14781 14805 14808 14818 14854 14827 14868 14859 14790 14853 14790 DataType Grid 14788 14810 14873 14833 14864 14855 / 14860 14816 14823 Drag & Drop support for rows Su...SmartStore.NET - Free ASP.NET MVC Ecommerce Shopping Cart Solution: SmartStore.NET 1.2.1: New FeaturesAdded option Limit to current basket subtotal to HadSpentAmount discount rule Items in product lists can be labelled as NEW for a configurable period of time Product templates can optionally display a discount sign when discounts were applied Added the ability to set multiple favicons depending on stores and/or themes Plugin management: multiple plugins can now be (un)installed in one go Added a field for the HTML body id to store entity (Developer) New property 'Extra...New Projects.Net PG: Just university projectA2D: 1. Cache System 2. Event System 3. IoC 4. Sql Dispatcher System 5. Session System 6. ???Command Bus 7. ????Advantage Browser: A web browser made in Visual Basic 2010 for all to add and learn from or just use.BarCoder: BarCoder is C# Web app for creating EAN-8 and EAN-16 bar codec in vector graphic image format.CLIDE .NET: CLIDE .NET The Command Line IDE for .NET Because Code is just CodeDevcken Java Library: Devcken's Java LibraryDouble Sides Flipping Control - Windows Phone: Double Sides Flipping Control The Control features the following: Two Sides Control which flipping across its center in Both Directions based on horizontal geFigTree FHMS (Funeral Home Management System): FigTree FHMS (Funeral Home Management System) application is outfitted for daily operations of your funeral home.InChatter: InChatter is a Instant messaging communication module for a desktop application programmed by C#. And the server is a WCF program.Mod.Training: Some helpful examples about Orchardy stuffPlupload MVC4 Demo: This project shows how to implement Plupload with an MVC applicationQuality Control Management System: QCMS is a web-based Test Management systemQuan Ly Nha Thuoc: Project Qu?n Lý Ti?m Thu?c Tây Email: phuoc.nh2953@sinhvien.hoasen.edu.vnRandomchaos DX11 Engine: An open source C++ DX 11 EngineService Gateway: The service gateway enables composition and agility for web sites and services.Sortable objects: Sortable objects server and client sideSTSADM ExportCrawlLog - SP Foundation 2010: STSADM extension to see/export SharePoint Fiundation 2010 MSSearch configuration and Crawl LogsTACACS Plus Extended: tacacs+ updates to support subnet specific configurations and reduced configuration with ldap access.TKinect: Framework for Testing Kinect ApplicationsWebApi Data Wrapper: This project - a collection of wrapper for WebAPI.WPF ExpressionEditor: Control representing expression editor. Allows usage of custom expression parsers.

    Read the article

  • Throttling Cache Events

    - by dxfelcey
The real-time eventing feature in Coherence is great for relaying state changes to other systems or to users. However, sometimes not all changes need to, or can, be sent to consumers. For instance:

- Rapid changes may not be consumable or interpretable as fast as they are being sent. A user looking at changing stock prices may only be able to interpret and react to 1 change per second.
- A client may be using a low bandwidth connection, so rapidly sending events will only result in them being queued and delayed.
- A large number of clients may need to be notified of state changes, and sending 100 events p/s to 1000 clients cannot be supported with the available hardware, but 10 events p/s to 1000 clients can. Note this example assumes that many of the state changes are to the same value.

One simple approach to throttling Coherence cache events is to use a cache store to capture changes to one cache (the data cache) and insert those changes periodically into another cache (the events cache). Consumers interested in state changes to entries in the first cache register an interest (an event listener) against the second, events cache. By using the cache store write-behind feature, rapid updates to the same cache entry are coalesced, so that updates are merged and written to the events cache at the configured interval. The time interval at which changes are written to the events cache can easily be configured using the write-behind delay time in the cache configuration, as shown below.

  <caching-schemes>
    <distributed-scheme>
      <scheme-name>CustomDistributedCacheScheme</scheme-name>
      <service-name>CustomDistributedCacheService</service-name>
      <thread-count>1</thread-count>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <scheme-name>CustomRWBackingMapScheme</scheme-name>
          <internal-cache-scheme>
            <local-scheme />
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <scheme-name>CustomCacheStoreScheme</scheme-name>
              <class-name>com.oracle.coherence.test.CustomCacheStore</class-name>
              <init-params>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>{cache-name}</param-value>
                </init-param>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <!-- The name of the cache to write events to -->
                  <param-value>cqc-test</param-value>
                </init-param>
              </init-params>
            </class-scheme>
          </cachestore-scheme>
          <write-delay>1s</write-delay>
          <write-batch-factor>0</write-batch-factor>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>

The cache store implementation to perform this throttling is trivial and only involves overriding the basic cache store functions.
public class CustomCacheStore implements CacheStore
{
    private String publishingCacheName;
    private String sourceCacheName;

    public CustomCacheStore(String sourceCacheName, String publishingCacheName)
    {
        this.publishingCacheName = publishingCacheName;
        this.sourceCacheName = sourceCacheName;
    }

    @Override
    public Object load(Object key)
    {
        // Read-through is not used in this pattern.
        return null;
    }

    @Override
    public Map loadAll(Collection keyCollection)
    {
        return null;
    }

    @Override
    public void erase(Object key)
    {
        // Compare the cache names by value and only forward when they differ.
        if (!sourceCacheName.equals(publishingCacheName))
        {
            CacheFactory.getCache(publishingCacheName).remove(key);
            CacheFactory.log("Erasing entry: " + key, CacheFactory.LOG_DEBUG);
        }
    }

    @Override
    public void eraseAll(Collection keyCollection)
    {
        if (!sourceCacheName.equals(publishingCacheName))
        {
            for (Object key : keyCollection)
            {
                CacheFactory.getCache(publishingCacheName).remove(key);
                CacheFactory.log("Erasing collection entry: " + key, CacheFactory.LOG_DEBUG);
            }
        }
    }

    @Override
    public void store(Object key, Object value)
    {
        if (!sourceCacheName.equals(publishingCacheName))
        {
            CacheFactory.getCache(publishingCacheName).put(key, value);
            CacheFactory.log("Storing entry (key=value): " + key + "=" + value, CacheFactory.LOG_DEBUG);
        }
    }

    @Override
    public void storeAll(Map entryMap)
    {
        if (!sourceCacheName.equals(publishingCacheName))
        {
            CacheFactory.getCache(publishingCacheName).putAll(entryMap);
            CacheFactory.log("Storing entries: " + entryMap, CacheFactory.LOG_DEBUG);
        }
    }
}

As you can see, each cache store operation on the data cache results in a similar operation on the events cache. This is a very simple pattern which has a lot of additional possibilities, but it also has a few drawbacks you should be aware of:

- This event throttling implementation will use additional memory, as a duplicate copy of the entries held in the data cache needs to be held in the events cache too (two extra copies if the events cache has backups).
- A data cache may already use a cache store, so a "multiplexing cache store pattern" must then be used to send changes to both the existing and the throttling cache store.

If you would like to try out this throttling example you can download it here. I hope it's useful, and let me know if you spot any further optimizations.
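On the consumer side, the listener registration mentioned above might look like the following sketch (the cache name comes from the configuration shown earlier; the class name and printout are illustrative):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.MapEvent;
import com.tangosol.util.MultiplexingMapListener;

public class ThrottledEventConsumer
{
    public static void main(String[] args) throws Exception
    {
        // Listen on the events cache, not the data cache; updates arrive here
        // already coalesced by the 1s write-behind delay configured above.
        NamedCache events = CacheFactory.getCache("cqc-test");
        events.addMapListener(new MultiplexingMapListener()
        {
            protected void onMapEvent(MapEvent evt)
            {
                System.out.println("Throttled event: " + evt.getKey()
                        + " -> " + evt.getNewValue());
            }
        });

        System.in.read(); // keep the JVM alive to receive events
    }
}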

    Read the article

  • Nagios plugin script not working as expected

    - by Linker3000
    I have modified an off-the-shelf Nagios plugin perl script to (in theory) return a one or zero according to the existence, or not, of a file on a remote linux server. The script runs a remote ssh session and logs in as the nagios user. The remote linux servers have private keys setup for that user, and on the bash command line the script works as expected, but when run as a plugin it always returns '1' (true) even if the file does not exist. Some help with the logic or a comment on why things are not working as expected within Nagios would be appreciated. I'd prefer to use this ssh login method rather than having to install nrpe on all the linux servers. To run from a command line (assuming remote server has a user called nagios with a valid private key): ./check_reboot_required -e ssh -H remote-servers-ip-addr -p 'filename-to-check' -v Ta. #! /usr/bin/perl -w # # # License Information: # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. # ############################################################################ use POSIX; use strict; use Getopt::Long; use lib "/usr/lib/nagios/plugins" ; use vars qw($host $opt_V $opt_h $opt_v $verbose $PROGNAME $pattern $opt_p $mmin $opt_e $opt_t $opt_H $status $state $msg $msg_q $MAILQ $SHELL $device $used $avail $percent $fs $blocks $CMD $RMTOS); use utils qw(%ERRORS &print_revision &support &usage ); sub print_help (); sub print_usage (); sub process_arguments (); $ENV{'PATH'}=''; $ENV{'BASH_ENV'}=''; $ENV{'ENV'}=''; $PROGNAME = "check_reboot_required"; Getopt::Long::Configure('bundling'); $status = process_arguments(); if ($status){ print "ERROR: processing arguments\n"; exit $ERRORS{'UNKNOWN'}; } $SIG{'ALRM'} = sub { print ("ERROR: timed out waiting for $CMD on $host\n"); exit $ERRORS{'WARNING'}; }; $host = $opt_H; $pattern = $opt_p; print "Pattern >" . $pattern . "< " if $verbose; alarm($opt_t); #$CMD = "/usr/bin/find " . $pattern . " -type f 2>/dev/null| /usr/bin/wc -l"; $CMD = "[ -f " . $pattern . " ] && echo 1 || echo 0"; alarm($opt_t); ## get cmd output from remote system if (! open (OUTPUT, "$SHELL $host $CMD|" ) ) { print "ERROR: could not open $CMD on $host\n"; exit $ERRORS{'UNKNOWN'}; } my $perfdata = ""; my $state = "3"; my $msg = "Indeterminate result"; # only first line is relevant in this iteration. while (<OUTPUT>) { my $result = chomp($_); $msg = $result; print "Shell returned >" . $result . "< length is " . length($result) . " " if $verbose; if ( $result == 1 ) { $msg = "Reboot required (NB: Result still not accurate)" . $result ; $state = $ERRORS{'WARNING'}; last; } elsif ( $result == 0 ) { $msg = "No reboot required (NB: Result still not accurate) " . 
$result ; $state = $ERRORS{'OK'}; last; } else { $msg = "Output received, but it was neither a 1 nor a 0" ; last; } } close (OUTPUT); print "$msg | $perfdata\n"; exit $state; ##################################### #### subs sub process_arguments(){ GetOptions ("V" => \$opt_V, "version" => \$opt_V, "v" => \$opt_v, "verbose" => \$opt_v, "h" => \$opt_h, "help" => \$opt_h, "e=s" => \$opt_e, "shell=s" => \$opt_e, "p=s" => \$opt_p, "pattern=s" => \$opt_p, "t=i" => \$opt_t, "timeout=i" => \$opt_t, "H=s" => \$opt_H, "hostname=s" => \$opt_H ); if ($opt_V) { print_revision($PROGNAME,'$Revision: 1.0 $ '); exit $ERRORS{'OK'}; } if ($opt_h) { print_help(); exit $ERRORS{'OK'}; } if (defined $opt_v ){ $verbose = $opt_v; } if (defined $opt_e ){ if ( $opt_e eq "ssh" ) { if (-x "/usr/local/bin/ssh") { $SHELL = "/usr/local/bin/ssh"; } elsif ( -x "/usr/bin/ssh" ) { $SHELL = "/usr/bin/ssh"; } else { print_usage(); exit $ERRORS{'UNKNOWN'}; } } elsif ( $opt_e eq "rsh" ) { $SHELL = "/usr/bin/rsh"; } else { print_usage(); exit $ERRORS{'UNKNOWN'}; } } else { print_usage(); exit $ERRORS{'UNKNOWN'}; } unless (defined $opt_t) { $opt_t = $utils::TIMEOUT ; # default timeout } unless (defined $opt_H) { print_usage(); exit $ERRORS{'UNKNOWN'}; } return $ERRORS{'OK'}; } sub print_usage () { print "Usage: $PROGNAME -e <shell> -H <hostname> -p <directory/file pattern> [-t <timeout>] [-v verbose]\n"; } sub print_help () { print_revision($PROGNAME,'$Revision: 0.1 $'); print "\n"; print_usage(); print "\n"; print " Checks for the presence of a 'reboot-required' file on a remote host via SSH or RSH\n"; print "-e (--shell) = ssh or rsh (required)\n"; print "-H (--hostname) = remote server name (required)"; print "-p (--pattern) = File pattern for find command (default = /var/run/reboot-required)\n"; print "-t (--timeout) = Plugin timeout in seconds (default = $utils::TIMEOUT)\n"; print "-h (--help)\n"; print "-V (--version)\n"; print "-v (--verbose) = debugging output\n"; print "\n\n"; support(); }
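A note for anyone comparing this against their own plugins: one plausible explanation for the always-true result is the line my $result = chomp($_); near the top of the read loop. In Perl, chomp returns the number of characters it removed (usually 1), not the chomped string, so $result is 1 on every line that ends in a newline, regardless of whether the remote command echoed 0 or 1. A minimal sketch of the difference:

# chomp() returns the count of characters removed, not the line itself:
my $result = chomp($_);     # $result == 1 whenever a trailing newline was stripped

# capturing the line's content with the newline removed instead:
chomp(my $result = $_);     # $result is now "0" or "1" from the remote echo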

    Read the article

  • "Can't Connect to Server" from 2nd virtual host on VPS

    - by chaoskreator
    I'm using Debian 7 Wheezy and Apache 2.2.22, and I'm setting up Virtual Hosts for a number of websites on my VPS. I've successfully configured the VirtualHost directives for one of the sites, but the second one continually gives "Problem Loading Page" in Firefox. I've run configtest and it has verified all my syntax is correct, and I've checked all the permissions. Everything on the 2nd domain is pretty much copy/pasted from the first, so I'm not sure what the issue is, as there are no entries into /var/log/apache2/error.log other than where I have reloaded the configurations: /# cat /var/log/apache2/error.log [Thu May 29 01:19:00 2014] [notice] Graceful restart requested, doing restart [Thu May 29 01:19:00 2014] [info] Init: Seeding PRNG with 656 bytes of entropy [Thu May 29 01:19:00 2014] [info] Init: Generating temporary RSA private keys (512/1024 bits) [Thu May 29 01:19:00 2014] [info] Init: Generating temporary DH parameters (512/1024 bits) [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(253): shmcb_init allocated 512000 bytes of shared memory [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(272): for 511920 bytes (512000 including header), recommending 32 subcaches, 133 indexes each [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(306): shmcb_init_memory choices follow [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(308): subcache_num = 32 [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(310): subcache_size = 15992 [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(312): subcache_data_offset = 3208 [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(314): subcache_data_size = 12784 [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(316): index_num = 133 [Thu May 29 01:19:00 2014] [info] Shared memory session cache initialised [Thu May 29 01:19:00 2014] [info] Init: Initializing (virtual) servers for SSL [Thu May 29 01:19:00 2014] [info] mod_ssl/2.2.22 compiled against Server: Apache/2.2.22, Library: OpenSSL/1.0.1e [Thu May 29 01:19:00 2014] [notice] Apache/2.2.22 (Debian) PHP/5.4.4-14+deb7u9 mod_ssl/2.2.22 OpenSSL/1.0.1e mod_perl/2.0.7 Perl/v5.14.2 configured -- resuming normal operations [Thu May 29 01:19:00 2014] [info] Server built: Mar 4 2013 22:05:16 [Thu May 29 01:19:00 2014] [debug] prefork.c(1023): AcceptMutex: sysvsem (default: sysvsem) I've ensured to enable each vhost with a2ensite {sitename.conf} with no errors there, either. Below are the contents of the configuration files... /etc/apache2/apache2.conf # Global configuration # LockFile ${APACHE_LOCK_DIR}/accept.lock PidFile ${APACHE_PID_FILE} Timeout 300 KeepAlive On MaxKeepAliveRequests 100 KeepAliveTimeout 5 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadLimit: ThreadsPerChild can be changed to this maximum value during a # graceful restart. 
ThreadLimit can only be changed by stopping # and starting Apache. # ThreadsPerChild: constant number of worker threads in each server process # MaxClients: maximum number of simultaneous client connections # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # event MPM # StartServers: initial number of server processes to start # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxClients: maximum number of simultaneous client connections # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_event_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> DefaultType None HostnameLookups Off ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel debug # Include module configuration: Include mods-enabled/*.load Include mods-enabled/*.conf # Include list of ports to listen on and which to use for name based vhosts Include ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent <Directory "/var/www"> Order allow,deny Allow from all Require all granted </Directory> # Include generic snippets of statements Include conf.d/ # Include the virtual host configurations: Include sites-enabled/*.conf NameVirtualHost *:80 /etc/apache2/sites-available/site1.net.conf <VirtualHost *:80> ServerName site1.net ServerAlias site1.net *.site1.net DocumentRoot "/var/www/site1" ErrorLog "/var/www/site1/logs/error.log" CustomLog "/var/www/site1/logs/access.log" vhost_combined <Directory "/var/www/site1"> Options None AllowOverride All Order allow,deny Allow from all Satisfy Any </Directory> </VirtualHost> /etc/apache2/sites-available/site2.com.conf <VirtualHost *:80> ServerName site2.com ServerAlias site2.com *.site2.com DocumentRoot "/var/www/site2" ErrorLog "/var/www/site2/logs/error.log" CustomLog "/var/www/site2/logs/access.log" vhost_combined <Directory "/var/www/site2"> Options None AllowOverride All Order allow,deny Allow from all Satisfy Any </Directory> </VirtualHost> I've also tried setting NameVirtualHost like: Listen 80 NameVirtualHost 23.88.121.82:80 NameVirtualHost 127.0.0.1:80 and the VirtualHost Directives: <VirtualHost 23.88.121.82:80> ... </VirtualHost> for both sites, but that causes the first site to fail, as well. 
I'm wondering if I need to set up individual IPs for each site. I have 2 more IPv4 and 3 IPv6 addresses available, if that would make a difference. Also, in the grand scheme of things, I will need to enable SSL for the first site. I've been reading that I'll basically need to mimic the directives for listening on port 80, only on port 443, and make sure mod_ssl is enabled? EDIT: I just ran apache2 -t to test the config files that way, and got the error: apache2: bad user name ${APACHE_RUN_USER}. However, apachectl configtest returns Syntax OK, and there are no other mentions of errors with the mutex anywhere else. I was pretty sure that if there were an error with the user Apache is supposed to run under, the server wouldn't start at all... EDIT 2: Restarting apache fixed the bad user name error.
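On the SSL question at the end: the usual Apache 2.2 pattern is indeed to mirror the port 80 setup on port 443 with mod_ssl enabled (a2enmod ssl on Debian, then restart). A minimal sketch with placeholder certificate paths, not taken from this server:

NameVirtualHost *:443
Listen 443

<VirtualHost *:443>
    ServerName site1.net
    DocumentRoot "/var/www/site1"
    SSLEngine on
    # Placeholder paths; point these at the real certificate and key
    SSLCertificateFile    /etc/ssl/certs/site1.net.crt
    SSLCertificateKeyFile /etc/ssl/private/site1.net.key
</VirtualHost>

One caveat: on Apache 2.2, name-based virtual hosting over SSL relies on SNI, so with several SSL sites on one IP, clients without SNI support will always be handed the first vhost's certificate.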

    Read the article

  • Why is iTunes starting and stopping play randomly, and how do I stop it?

    - by Chris R
    Since yesterday morning my copy of iTunes has been starting and stopping randomly. If iTunes is not running, then it opens and sometimes begins playing, other times sits idle. Eventually, after a random interval it will begin playing a song, and then stop, and so on... Needless to say, it's driving me mad. (Mac OSX, 10.6.3, on a new-ish (< 1 year old) 24" iMac) I've made five changes to my system that may or may not be connected to this: My office phone was replaced with a Linksys IP Phone, which necessitated a change to my networking; where previously my Mac was connected directly to the office network port, now it is connected through the phone. My network connection now uses auto link detection in lieu of forcing 100Mbit I unpaired my bluetooth headset. I removed the USB audio device associated with another headset. I upgraded to Safari 5. I don't use it as a primary browser, but it's often open to run web apps that I'm developing. All of these things happened in pretty close proximity to each other, so one or more of them may be the culprit. One other thing that may or may not be related; for some reason my built-in microphone is no longer picking up audio. It seems like this might be connected to the iTunes issue, because it happened around the same time. In terms of things that I've tried in order to solve this, I'm at a bit of a loss. I followed the instructions at http://developer.apple.com/mac/library/technotes/tn2004/tn2124.html#SECLAUNCHDLOGGING to enable detailed launchd logging to see if I could track down which process was asking iTunes to open (when it's not already open) but I wasn't able to make heads or tails of the output. I'm not even sure if I'm looking in the right place, to be honest; it actually acts like something is activating the application with AppleScript, but I have no processes running that are doing that, as far as I know. I'm running a few apps that have iTunes integration: Adium, iChat with Chax, Quicksilver. None of these have been changed lately, so I consider them low risks of causing this, but it's not impossible. Moreover, I'm not using any of those features intentionally. This is a snippet of launchd debug logging from around the time it just launched: 10-06-09 9:14:29 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:29 AM com.apple.launchd[1] KEVENT[0]: udata = 0x10002b230 data = 0x30 ident = 5 filter = EVFILT_READ flags = EV_ADD|EV_RECEIPT fflags = 0x0 10-06-09 9:14:29 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:29 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100802000 data = 0x0 ident = 26 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK 10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) Dispatching kevent callback. 
10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) EVFILT_PROC event for job: 10-06-09 9:14:29 AM com.apple.launchd[1] KEVENT[0]: udata = 0x1004076f0 data = 0x0 ident = 26 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK 10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) fork()ed 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave) Conceived 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Created PID 22197 anonymously by PPID 26 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Looking up per user launchd for UID: 0 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Per user launchd job found for UID: 505 10-06-09 9:14:29 AM com.apple.launchd[1] System: Looking up service com.apple.system.notification_center 10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.notification_center 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Looking up per user launchd for UID: 0 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Per user launchd job found for UID: 505 10-06-09 9:14:29 AM com.apple.launchd[1] System: Looking up service com.apple.system.DirectoryService.libinfo_v1 10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.DirectoryService.libinfo_v1 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Looking up per user launchd for UID: 0 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Per user launchd job found for UID: 505 10-06-09 9:14:29 AM com.apple.launchd[1] System: Looking up service com.apple.system.DirectoryService.membership_v1 10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.DirectoryService.membership_v1 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Looking up per user launchd for UID: 0 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Per user launchd job found for UID: 505 10-06-09 9:14:29 AM com.apple.launchd[1] System: Looking up service com.apple.CoreServices.coreservicesd 10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.CoreServices.coreservicesd 10-06-09 9:14:29 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:29 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100802000 data = 0x0 ident = 22197 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_EXIT 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Dispatching kevent callback. 
10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) EVFILT_PROC event for job: 10-06-09 9:14:29 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100401720 data = 0x0 ident = 22197 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_EXIT 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Reaping 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave) Total rusage: utime 0.000000 stime 0.000000 maxrss 0 ixrss 0 idrss 0 isrss 0 minflt 0 majflt 0 nswap 0 inblock 0 oublock 0 msgsnd 0 msgrcv 0 nsignals 0 nvcsw 0 nivcsw 0 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave) Removed 10-06-09 9:14:30 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:30 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100802000 data = 0x0 ident = 22197 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR|EV_EOF|EV_ONESHOT fflags = NOTE_REAP 10-06-09 9:14:32 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:32 AM com.apple.launchd[1] KEVENT[0]: udata = 0x10002b230 data = 0x30 ident = 5 filter = EVFILT_READ flags = EV_ADD|EV_RECEIPT fflags = 0x0 10-06-09 9:14:33 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:33 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100802000 data = 0x0 ident = 143 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Dispatching kevent callback. 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) EVFILT_PROC event for job: 10-06-09 9:14:33 AM com.apple.launchd[1] KEVENT[0]: udata = 0x10041e9a0 data = 0x0 ident = 143 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) fork()ed 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.distributed_notifications.2 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.distributed_notifications.2 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.system.notification_center 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.notification_center 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.system.DirectoryService.libinfo_v1 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.DirectoryService.libinfo_v1 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.system.DirectoryService.membership_v1 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.DirectoryService.membership_v1 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.CoreServices.coreservicesd 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.CoreServices.coreservicesd 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.SystemConfiguration.configd 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.SystemConfiguration.configd 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.audio.coreaudiod 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: 
com.apple.audio.coreaudiod 10-06-09 9:14:34 AM com.apple.launchd[1] System: Looking up service com.apple.system.logger 10-06-09 9:14:34 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.logger 10-06-09 9:14:35 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:35 AM com.apple.launchd[1] KEVENT[0]: udata = 0x10002b230 data = 0x30 ident = 5 filter = EVFILT_READ flags = EV_ADD|EV_RECEIPT fflags = 0x0 10-06-09 9:14:35 AM com.apple.launchd[1] System: Looking up service com.apple.DiskArbitration.diskarbitrationd 10-06-09 9:14:35 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.DiskArbitration.diskarbitrationd 10-06-09 9:14:35 AM com.apple.launchd[1] System: Looking up service com.apple.system.logger 10-06-09 9:14:35 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.logger 10-06-09 9:14:36 AM com.apple.launchd[1] System: Looking up service com.apple.FSEvents 10-06-09 9:14:36 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.FSEvents 10-06-09 9:14:36 AM com.apple.launchd[1] System: Looking up service com.apple.SystemConfiguration.configd 10-06-09 9:14:36 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.SystemConfiguration.configd 10-06-09 9:14:38 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:38 AM com.apple.launchd[1] KEVENT[0]: udata = 0x10002b230 data = 0x30 ident = 5 filter = EVFILT_READ flags = EV_ADD|EV_RECEIPT fflags = 0x0 10-06-09 9:14:39 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:39 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100802000 data = 0x0 ident = 26 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK 10-06-09 9:14:39 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) Dispatching kevent callback. 10-06-09 9:14:39 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) EVFILT_PROC event for job: 10-06-09 9:14:39 AM com.apple.launchd[1] KEVENT[0]: udata = 0x1004076f0 data = 0x0 ident = 26 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK 10-06-09 9:14:39 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) fork()ed 10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave) Conceived 10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22211]) Created PID 22211 anonymously by PPID 26 10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22211]) Looking up per user launchd for UID: 0 10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22211]) Per user launchd job found for UID: 505 10-06-09 9:14:39 AM com.apple.launchd[1] System: Looking up service com.apple.system.notification_center 10-06-09 9:14:39 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.notification_center 10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22211]) Looking up per user launchd for UID: 0 10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22211]) Per user launchd job found for UID: 505 10-06-09 9:14:39 AM com.apple.launchd[1] System: Looking up service com.apple.system.DirectoryService.libinfo_v1
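The hunch above that "something is activating the application with AppleScript" is easy to test, because that kind of activation can be reproduced from the shell. Purely as an illustration (these one-liners are not from the question):

    # illustrative only: any background process could trigger iTunes this way,
    # which is exactly the kind of activation the launchd log would record
    osascript -e 'tell application "iTunes" to activate'
    osascript -e 'tell application "iTunes" to play'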

    Read the article

  • saslauthd + Postfix producing password verification and authentication errors

    - by Aram Papazian
    So I'm trying to setup PostFix while using SASL (Cyrus variety preferred, I was using dovecot earlier but I'm switching from dovecot to courier so I want to use cyrus instead of dovecot) but I seem to be having issues. Here are the errors I'm receiving: ==> mail.log <== Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure ==> mail.info <== Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure ==> mail.warn <== Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure I tried $testsaslauthd -u xxxx -p xxxx 0: OK "Success." So I know that the password/user I'm using is correct. I'm thinking that most likely I have a setting wrong somewhere, but can't seem to find where. Here is my files. Here is my main.cf for postfix: # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. myorigin = /etc/mailname # This is already done in /etc/mailname #myhostname = crazyinsanoman.xxxxx.com smtpd_banner = $myhostname ESMTP $mail_name #biff = no # appending .domain is the MUA's job. #append_dot_mydomain = no readme_directory = /usr/share/doc/postfix # TLS parameters smtpd_tls_cert_file = /etc/postfix/smtpd.cert smtpd_tls_key_file = /etc/postfix/smtpd.key smtpd_use_tls = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # Relay smtp through another server or leave blank to do it yourself #relayhost = smtp.yourisp.com # Network details; Accept connections from anywhere, and only trust this machine mynetworks = 127.0.0.0/8 inet_interfaces = all #mynetworks_style = host #As we will be using virtual domains, these need to be empty local_recipient_maps = mydestination = # how long if undelivered before sending "delayed mail" warning update to sender delay_warning_time = 4h # will it be a permanent error or temporary unknown_local_recipient_reject_code = 450 # how long to keep message on queue before return as failed. # some have 3 days, I have 16 days as I am backup server for some people # whom go on holiday with their server switched off. maximal_queue_lifetime = 7d # max and min time in seconds between retries if connection failed minimal_backoff_time = 1000s maximal_backoff_time = 8000s # how long to wait when servers connect before receiving rest of data smtp_helo_timeout = 60s # how many address can be used in one message. # effective stopper to mass spammers, accidental copy in whole address list # but may restrict intentional mail shots. smtpd_recipient_limit = 16 # how many error before back off. smtpd_soft_error_limit = 3 # how many max errors before blocking it. 
smtpd_hard_error_limit = 12 # Requirements for the HELO statement smtpd_helo_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_hostname, reject_invalid_hostname, permit # Requirements for the sender details smtpd_sender_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_pipelining, permit # Requirements for the connecting server smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.njabl.org # Requirement for the recipient address smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_destination, permit smtpd_data_restrictions = reject_unauth_pipelining # require proper helo at connections smtpd_helo_required = yes # waste spammers time before rejecting them smtpd_delay_reject = yes disable_vrfy_command = yes # not sure of the difference of the next two # but they are needed for local aliasing alias_maps = hash:/etc/postfix/aliases alias_database = hash:/etc/postfix/aliases # this specifies where the virtual mailbox folders will be located virtual_mailbox_base = /var/spool/mail/vmail # this is for the mailbox location for each user virtual_mailbox_maps = mysql:/etc/postfix/mysql_mailbox.cf # and this is for aliases virtual_alias_maps = mysql:/etc/postfix/mysql_alias.cf # and this is for domain lookups virtual_mailbox_domains = mysql:/etc/postfix/mysql_domains.cf # this is how to connect to the domains (all virtual, but the option is there) # not used yet # transport_maps = mysql:/etc/postfix/mysql_transport.cf # Setup the uid/gid of the owner of the mail files - static:5000 allows virtual ones virtual_uid_maps = static:5000 virtual_gid_maps = static:5000 inet_protocols=all # Cyrus SASL Support smtpd_sasl_path = smtpd smtpd_sasl_local_domain = xxxxx.com ####################### ## OLD CONFIGURATION ## ####################### #myorigin = /etc/mailname #mydestination = crazyinsanoman.xxxxx.com, localhost, localhost.localdomain #mailbox_size_limit = 0 #recipient_delimiter = + #html_directory = /usr/share/doc/postfix/html message_size_limit = 30720000 #virtual_alias_domains = ##virtual_alias_maps = hash:/etc/postfix/virtual #virtual_mailbox_base = /home/vmail ##luser_relay = webmaster #smtpd_sasl_type = dovecot #smtpd_sasl_path = private/auth smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous broken_sasl_auth_clients = yes #smtpd_sasl_authenticated_header = yes smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination #virtual_create_maildirsize = yes #virtual_maildir_extended = yes #proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps #virtual_transport = dovecot #dovecot_destination_recipient_limit = 1 Here is my master.cf: # # Postfix master process configuration file. For details on the format # of the file, see the master(5) manual page (command: "man 5 master"). # # Do not forget to execute "postfix reload" after editing this file. 
# # ========================================================================== # service type private unpriv chroot wakeup maxproc command + args # (yes) (yes) (yes) (never) (100) # ========================================================================== smtp inet n - - - - smtpd submission inet n - - - - smtpd -o smtpd_tls_security_level=encrypt -o smtpd_sasl_auth_enable=yes -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #smtps inet n - - - - smtpd # -o smtpd_tls_wrappermode=yes # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #628 inet n - - - - qmqpd pickup fifo n - - 60 1 pickup cleanup unix n - - - 0 cleanup qmgr fifo n - n 300 1 qmgr #qmgr fifo n - - 300 1 oqmgr tlsmgr unix - - - 1000? 1 tlsmgr rewrite unix - - - - - trivial-rewrite bounce unix - - - - 0 bounce defer unix - - - - 0 bounce trace unix - - - - 0 bounce verify unix - - - - 1 verify flush unix n - - 1000? 0 flush proxymap unix - - n - - proxymap proxywrite unix - - n - 1 proxymap smtp unix - - - - - smtp # When relaying mail as backup MX, disable fallback_relay to avoid MX loops relay unix - - - - - smtp -o smtp_fallback_relay= # -o smtp_helo_timeout=5 -o smtp_connect_timeout=5 showq unix n - - - - showq error unix - - - - - error retry unix - - - - - error discard unix - - - - - discard local unix - n n - - local virtual unix - n n - - virtual lmtp unix - - - - - lmtp anvil unix - - - - 1 anvil scache unix - - - - 1 scache # # ==================================================================== # Interfaces to non-Postfix software. Be sure to examine the manual # pages of the non-Postfix software to find out what options it wants. # # Many of the following services use the Postfix pipe(8) delivery # agent. See the pipe(8) man page for information about ${recipient} # and other message envelope options. # ==================================================================== # # maildrop. See the Postfix MAILDROP_README file for details. # Also specify in main.cf: maildrop_destination_recipient_limit=1 # maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient} # # ==================================================================== # # Recent Cyrus versions can use the existing "lmtp" master.cf entry. # # Specify in cyrus.conf: # lmtp cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4 # # Specify in main.cf one or more of the following: # mailbox_transport = lmtp:inet:localhost # virtual_transport = lmtp:inet:localhost # # ==================================================================== # # Cyrus 2.1.5 (Amos Gouaux) # Also specify in main.cf: cyrus_destination_recipient_limit=1 # cyrus unix - n n - - pipe user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user} # # ==================================================================== # Old example of delivery via Cyrus. # #old-cyrus unix - n n - - pipe # flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user} # # ==================================================================== # # See the Postfix UUCP_README file for configuration details. # uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient) # # Other external delivery methods. # ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient) bsmtp unix - n n - - pipe flags=Fq. 
user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient scalemail-backend unix - n n - 2 pipe flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension} mailman unix - n n - - pipe flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user} #dovecot unix - n n - - pipe # flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient} Here is what I'm using for /etc/postfix/sasl/smtpd.conf log_level: 7 pwcheck_method: saslauthd pwcheck_method: auxprop mech_list: PLAIN LOGIN CRAM-MD5 DIGEST-MD5 allow_plaintext: true auxprop_plugin: mysql sql_hostnames: 127.0.0.1 sql_user: xxxxx sql_passwd: xxxxx sql_database: maildb sql_select: select crypt from users where id = '%u' As you can see I'm trying to use mysql as my authentication method. The password in 'users' is set through the 'ENCRYPT()' function. I also followed the methods found in http://www.jimmy.co.at/weblog/?p=52 in order to redo /var/spool/postfix/var/run/saslauthd as that seems to be a lot of people's problems, but that didn't help at all. Also, here is my /etc/default/saslauthd START=yes DESC="SASL Authentication Daemon" NAME="saslauthd" # Which authentication mechanisms should saslauthd use? (default: pam) # # Available options in this Debian package: # getpwent -- use the getpwent() library function # kerberos5 -- use Kerberos 5 # pam -- use PAM # rimap -- use a remote IMAP server # shadow -- use the local shadow password file # sasldb -- use the local sasldb database file # ldap -- use LDAP (configuration is in /etc/saslauthd.conf) # # Only one option may be used at a time. See the saslauthd man page # for more information. # # Example: MECHANISMS="pam" MECHANISMS="pam" MECH_OPTIONS="" THREADS=5 OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r" I had heard that potentially changing MECHANISM to MECHANISMS="mysql" but obviously that didn't help as is shown by the options listed above and also by trying it out anyway in case the documentation was outdated. So, I'm now at a loss... I have no idea where to go from here or what steps I need to do to get this working =/ Anyone have any ideas? EDIT: Here is the error that is coming from auth.log ... 
I don't know if this will help at all, but here you go: Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql auxprop plugin using mysql engine Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1' Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1' Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: begin transaction Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from userPassword user xxxxxx.com Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]'; Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from cmusaslsecretPLAIN user xxxxxx.com Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]'; Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: commit transaction Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1' Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1' Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1' Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1' Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: begin transaction Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from userPassword user xxxxxx.com Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]'; Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from cmusaslsecretPLAIN user xxxxxx.com Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]'; Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: commit transaction Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
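A note for readers following the sql_select line in smtpd.conf above: the auxprop path hands back whatever the query returns, and the auth.log shows it being asked for cmusaslsecretPLAIN, i.e. a plaintext secret, so a crypt()-style hash stored via ENCRYPT() will not verify there. For illustration, this is roughly how the crypt column that the query reads would be populated under the question's scheme (the address and password below are placeholders):

    # assumes the users(id, crypt) layout from the question's sql_select;
    # the address and password are placeholders
    mysql -u xxxxx -p maildb -e \
      "INSERT INTO users (id, crypt) VALUES ('user@example.com', ENCRYPT('secret'));"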

    Read the article

  • Snow Leopard doesn't repair permissions, despite saying it's fixed them?

    - by Jules
    Sometime ago, I used carbon copy as I was replacing my hard drive in my Mac Mini running Snow Leopard. Afterwards, on my new drive I had some permission problems. I've tried several times running a repair permissions / repair disk from disk util. It shows that there are problems and I think it says its correceted the problems. However the problems remain, what can I do to fix them ? It doesn't seem to cause me any problems, that I can tell EDIT Repairing permissions for “Macintosh HD” Permissions differ on "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/Italian.lproj/UIAgent.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/Italian.lproj/UIAgent.nib". Permissions differ on "System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Classes/jconsole.jar", should be -rw-r--r-- , they are lrwxr-xr-x . Repaired "System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Classes/jconsole.jar". User differs on "System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home/lib", should be 95, user is 0. Repaired "System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home/lib". User differs on "System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Libraries", should be 95, user is 0. Repaired "System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Libraries". Permissions differ on "System/Library/Frameworks/JavaVM.framework/Versions/A/Resources/Deploy.bundle/Contents/Home/lib/security/cacerts", should be -rw-r--r-- , they are lrwxr-xr-x . Repaired "System/Library/Frameworks/JavaVM.framework/Versions/A/Resources/Deploy.bundle/Contents/Home/lib/security/cacerts". Permissions differ on "System/Library/Frameworks/JavaVM.framework/Versions/A/Resources/Deploy.bundle/Contents/Resources/Java/deploy.jar", should be -rw-r--r-- , they are lrwxr-xr-x . Repaired "System/Library/Frameworks/JavaVM.framework/Versions/A/Resources/Deploy.bundle/Contents/Resources/Java/deploy.jar". Permissions differ on "System/Library/Frameworks/JavaVM.framework/Versions/A/Resources/Deploy.bundle/Contents/Resources/Java/libdeploy.jnilib", should be -rwxr-xr-x , they are lrwxr-xr-x . Repaired "System/Library/Frameworks/JavaVM.framework/Versions/A/Resources/Deploy.bundle/Contents/Resources/Java/libdeploy.jnilib". Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreenLeopard386.app/Contents/Resources/Italian.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreenLeopard386.app/Contents/Resources/Italian.lproj/MainMenu.nib". Permissions differ on "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/zh_TW.lproj/RemoteDesktopMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/zh_TW.lproj/RemoteDesktopMenu.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/zh_TW.lproj/UIAgent.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/zh_TW.lproj/UIAgent.nib". 
Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/zh_TW.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/zh_TW.lproj/MainMenu.nib". Permissions differ on "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/zh_CN.lproj/RemoteDesktopMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/zh_CN.lproj/RemoteDesktopMenu.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/zh_CN.lproj/UIAgent.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/zh_CN.lproj/UIAgent.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/zh_CN.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/zh_CN.lproj/MainMenu.nib". Permissions differ on "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/ko.lproj/RemoteDesktopMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/ko.lproj/RemoteDesktopMenu.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/ko.lproj/UIAgent.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/ko.lproj/UIAgent.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/ko.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/ko.lproj/MainMenu.nib". Permissions differ on "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/Dutch.lproj/RemoteDesktopMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/Dutch.lproj/RemoteDesktopMenu.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/Dutch.lproj/UIAgent.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/Dutch.lproj/UIAgent.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/Dutch.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/Dutch.lproj/MainMenu.nib". 
Permissions differ on "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/Italian.lproj/RemoteDesktopMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/Italian.lproj/RemoteDesktopMenu.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/Italian.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/Italian.lproj/MainMenu.nib". Permissions differ on "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/Spanish.lproj/RemoteDesktopMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/Spanish.lproj/RemoteDesktopMenu.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/Spanish.lproj/UIAgent.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/Spanish.lproj/UIAgent.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/Spanish.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/Spanish.lproj/MainMenu.nib". Permissions differ on "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/French.lproj/RemoteDesktopMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/French.lproj/RemoteDesktopMenu.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/French.lproj/UIAgent.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/French.lproj/UIAgent.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/French.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/French.lproj/MainMenu.nib". Permissions differ on "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/German.lproj/RemoteDesktopMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/German.lproj/RemoteDesktopMenu.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/German.lproj/UIAgent.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/German.lproj/UIAgent.nib". 
Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/German.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/German.lproj/MainMenu.nib". Permissions differ on "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/Japanese.lproj/RemoteDesktopMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/Japanese.lproj/RemoteDesktopMenu.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/Japanese.lproj/UIAgent.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/Japanese.lproj/UIAgent.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/Japanese.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/Japanese.lproj/MainMenu.nib". Permissions differ on "System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Resources/JavaPluginCocoa.bundle/Contents/Resources/Java/deploy.jar", should be -rw-r--r-- , they are lrwxr-xr-x . Repaired "System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Resources/JavaPluginCocoa.bundle/Contents/Resources/Java/deploy.jar". Permissions differ on "System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Resources/JavaPluginCocoa.bundle/Contents/Resources/Java/libdeploy.jnilib", should be -rwxr-xr-x , they are lrwxr-xr-x . Repaired "System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Resources/JavaPluginCocoa.bundle/Contents/Resources/Java/libdeploy.jnilib". Permissions differ on "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/English.lproj/RemoteDesktopMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/English.lproj/RemoteDesktopMenu.nib". Warning: SUID file "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/MacOS/ARDAgent" has been modified and will not be repaired. Permissions differ on "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/English.lproj/UIAgent.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/English.lproj/UIAgent.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/English.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/English.lproj/MainMenu.nib". Group differs on "private/var/log/kernel.log", should be 80, group is 0. Permissions differ on "private/var/log/kernel.log", should be -rw-r----- , they are -rw-r--r-- . Repaired "private/var/log/kernel.log". 
Group differs on "private/var/log/secure.log", should be 80, group is 0. Permissions differ on "private/var/log/secure.log", should be -rw-r----- , they are -rw-r--r-- . Repaired "private/var/log/secure.log". Group differs on "private/var/log/system.log", should be 80, group is 0. Permissions differ on "private/var/log/system.log", should be -rw-r----- , they are -rw-r--r-- . Repaired "private/var/log/system.log". Permissions repair complete

    Read the article

  • OpenVPN Client timing out

    - by Austin
    I recently installed OpenVPN on my Ubuntu VPS. Whenenver I try to connect to it, I can establish a connection just fine. However, everything I try to connect to times out. If I try to ping something, it will resolve the IP, but will time out after resolving the IP. (So DNS Server seems to be working correctly) My server.conf has this relevant information (At least I think it's relevant. I'm not sure if you need more or not) # Which local IP address should OpenVPN # listen on? (optional) ;local a.b.c.d # Which TCP/UDP port should OpenVPN listen on? # If you want to run multiple OpenVPN instances # on the same machine, use a different port # number for each one. You will need to # open up this port on your firewall. port 1194 # TCP or UDP server? ;proto tcp proto udp # "dev tun" will create a routed IP tunnel, # "dev tap" will create an ethernet tunnel. # Use "dev tap0" if you are ethernet bridging # and have precreated a tap0 virtual interface # and bridged it with your ethernet interface. # If you want to control access policies # over the VPN, you must create firewall # rules for the the TUN/TAP interface. # On non-Windows systems, you can give # an explicit unit number, such as tun0. # On Windows, use "dev-node" for this. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. ;dev tap dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel if you # have more than one. On XP SP2 or higher, # you may need to selectively disable the # Windows firewall for the TAP adapter. # Non-Windows systems usually don't need this. ;dev-node MyTap # SSL/TLS root certificate (ca), certificate # (cert), and private key (key). Each client # and the server must have their own cert and # key file. The server and all clients will # use the same ca file. # # See the "easy-rsa" directory for a series # of scripts for generating RSA certificates # and private keys. Remember to use # a unique Common Name for the server # and each of the client certificates. # # Any X509 key management system can be used. # OpenVPN can also use a PKCS #12 formatted key file # (see "pkcs12" directive in man page). ca ca.crt cert server.crt key server.key # This file should be kept secret # Diffie hellman parameters. # Generate your own with: # openssl dhparam -out dh1024.pem 1024 # Substitute 2048 for 1024 if you are using # 2048 bit keys. dh dh1024.pem # Configure server mode and supply a VPN subnet # for OpenVPN to draw client addresses from. # The server will take 10.8.0.1 for itself, # the rest will be made available to clients. # Each client will be able to reach the server # on 10.8.0.1. Comment this line out if you are # ethernet bridging. See the man page for more info. server 10.8.0.0 255.255.255.0 # Maintain a record of client <-> virtual IP address # associations in this file. If OpenVPN goes down or # is restarted, reconnecting clients can be assigned # the same virtual IP address from the pool that was # previously assigned. ifconfig-pool-persist ipp.txt # Configure server mode for ethernet bridging. # You must first use your OS's bridging capability # to bridge the TAP interface with the ethernet # NIC interface. Then you must manually set the # IP/netmask on the bridge interface, here we # assume 10.8.0.4/255.255.255.0. Finally we # must set aside an IP range in this subnet # (start=10.8.0.50 end=10.8.0.100) to allocate # to connecting clients. Leave this line commented # out unless you are ethernet bridging. 
;server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100 # Configure server mode for ethernet bridging # using a DHCP-proxy, where clients talk # to the OpenVPN server-side DHCP server # to receive their IP address allocation # and DNS server addresses. You must first use # your OS's bridging capability to bridge the TAP # interface with the ethernet NIC interface. # Note: this mode only works on clients (such as # Windows), where the client-side TAP adapter is # bound to a DHCP client. ;server-bridge # Push routes to the client to allow it # to reach other private subnets behind # the server. Remember that these # private subnets will also need # to know to route the OpenVPN client # address pool (10.8.0.0/255.255.255.0) # back to the OpenVPN server. ;push "route 192.168.10.0 255.255.255.0" ;push "route 192.168.20.0 255.255.255.0" # To assign specific IP addresses to specific # clients or if a connecting client has a private # subnet behind it that should also have VPN access, # use the subdirectory "ccd" for client-specific # configuration files (see man page for more info). # EXAMPLE: Suppose the client # having the certificate common name "Thelonious" # also has a small subnet behind his connecting # machine, such as 192.168.40.128/255.255.255.248. # First, uncomment out these lines: ;client-config-dir ccd ;route 192.168.40.128 255.255.255.248 # Then create a file ccd/Thelonious with this line: # iroute 192.168.40.128 255.255.255.248 # This will allow Thelonious' private subnet to # access the VPN. This example will only work # if you are routing, not bridging, i.e. you are # using "dev tun" and "server" directives. # EXAMPLE: Suppose you want to give # Thelonious a fixed VPN IP address of 10.9.0.1. # First uncomment out these lines: ;client-config-dir ccd ;route 10.9.0.0 255.255.255.252 # Then add this line to ccd/Thelonious: # ifconfig-push 10.9.0.1 10.9.0.2 # Suppose that you want to enable different # firewall access policies for different groups # of clients. There are two methods: # (1) Run multiple OpenVPN daemons, one for each # group, and firewall the TUN/TAP interface # for each group/daemon appropriately. # (2) (Advanced) Create a script to dynamically # modify the firewall in response to access # from different clients. See man # page for more info on learn-address script. ;learn-address ./script # If enabled, this directive will configure # all clients to redirect their default # network gateway through the VPN, causing # all IP traffic such as web browsing and # and DNS lookups to go through the VPN # (The OpenVPN server machine may need to NAT # or bridge the TUN/TAP interface to the internet # in order for this to work properly). push "redirect-gateway def1 bypass-dhcp" push "dhcp-option DNS 8.8.8.8" # Certain Windows-specific network settings # can be pushed to clients, such as DNS # or WINS server addresses. CAVEAT: # http://openvpn.net/faq.html#dhcpcaveats # The addresses below refer to the public # DNS servers provided by opendns.com. ;push "dhcp-option DNS 8.8.8.8" push "dhcp-option DNS 8.8.4.4" # Uncomment this directive to allow different # clients to be able to "see" each other. # By default, clients will only see the server. # To force clients to only see the server, you # will also need to appropriately firewall the # server's TUN/TAP interface. ;client-to-client # Uncomment this directive if multiple clients # might connect with the same certificate/key # files or common names. This is recommended # only for testing purposes. 
For production use, # each client should have its own certificate/key # pair. # # IF YOU HAVE NOT GENERATED INDIVIDUAL # CERTIFICATE/KEY PAIRS FOR EACH CLIENT, # EACH HAVING ITS OWN UNIQUE "COMMON NAME", # UNCOMMENT THIS LINE OUT. ;duplicate-cn # The keepalive directive causes ping-like # messages to be sent back and forth over # the link so that each side knows when # the other side has gone down. # Ping every 10 seconds, assume that remote # peer is down if no ping received during # a 120 second time period. keepalive 10 120 # For extra security beyond that provided # by SSL/TLS, create an "HMAC firewall" # to help block DoS attacks and UDP port flooding. # # Generate with: # openvpn --genkey --secret ta.key # # The server and each client must have # a copy of this key. # The second parameter should be '0' # on the server and '1' on the clients. ;tls-auth ta.key 0 # This file is secret # Select a cryptographic cipher. # This config item must be copied to # the client config file as well. ;cipher BF-CBC # Blowfish (default) ;cipher AES-128-CBC # AES ;cipher DES-EDE3-CBC # Triple-DES # Enable compression on the VPN link. # If you enable it here, you must also # enable it in the client config file. comp-lzo # The maximum number of concurrently connected # clients we want to allow. ;max-clients 100 # It's a good idea to reduce the OpenVPN # daemon's privileges after initialization. # # You can uncomment this out on # non-Windows systems. ;user nobody ;group nogroup # The persist options will try to avoid # accessing certain resources on restart # that may no longer be accessible because # of the privilege downgrade. persist-key persist-tun # Output a short status file showing # current connections, truncated # and rewritten every minute. status openvpn-status.log # By default, log messages will go to the syslog (or # on Windows, if running as a service, they will go to # the "\Program Files\OpenVPN\log" directory). # Use log or log-append to override this default. # "log" will truncate the log file on OpenVPN startup, # while "log-append" will append to it. Use one # or the other (but not both). ;log openvpn.log ;log-append openvpn.log # Set the appropriate level of log # file verbosity. # # 0 is silent, except for fatal errors # 4 is reasonable for general usage # 5 and 6 can help to debug connection problems # 9 is extremely verbose verb 3 # Silence repeating messages. At most 20 # sequential messages of the same message # category will be output to the log. ;mute 20 I've tried on multiple computers by the way. The same result on all of them. What could be wrong? Thanks in advance, and if you need other information I'll gladly post it. 
Information for new comments:

root@vps:~# iptables -L -n -v
Chain INPUT (policy ACCEPT 862K packets, 51M bytes)
 pkts bytes target prot opt in out source       destination

Chain FORWARD (policy ACCEPT 3 packets, 382 bytes)
 pkts bytes target prot opt in out source       destination
    0     0 ACCEPT all  --  *  *  0.0.0.0/0    0.0.0.0/0   state RELATED,ESTABLISHED
 4641  298K ACCEPT all  --  *  *  10.8.0.0/24  0.0.0.0/0
    0     0 REJECT all  --  *  *  0.0.0.0/0    0.0.0.0/0   reject-with icmp-port-unreachable

Chain OUTPUT (policy ACCEPT 1671K packets, 2378M bytes)
 pkts bytes target prot opt in out source       destination

And:

root@vps:~# iptables -t nat -L -n -v
Chain PREROUTING (policy ACCEPT 17937 packets, 2013K bytes)
 pkts bytes target prot opt in out source       destination

Chain POSTROUTING (policy ACCEPT 8975 packets, 562K bytes)
 pkts bytes target prot opt in out source       destination
 1579  103K SNAT   all  --  *  *  10.8.0.0/24  0.0.0.0/0   to:SERVERIP

Chain OUTPUT (policy ACCEPT 8972 packets, 562K bytes)
 pkts bytes target prot opt in out source       destination
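Since the server.conf above pushes redirect-gateway, the usual suspects when the tunnel comes up but all traffic times out are kernel IP forwarding and NAT for the 10.8.0.0/24 pool. A minimal sketch of the server-side checks (treating eth0 as the WAN interface is an assumption):

    # minimal server-side checks; eth0 as the WAN interface is an assumption
    sysctl net.ipv4.ip_forward             # must print 1 for client traffic to be routed
    sudo sysctl -w net.ipv4.ip_forward=1   # enable until the next reboot
    # NAT the VPN pool out of the WAN interface; the nat table above uses SNAT,
    # which is equivalent when the server's public IP is static
    sudo iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE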

    Read the article

  • Set up Linux box for secure local hosting A-Z

    - by microchasm
I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via VPN only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers, and document my progress/pitfalls. Hopefully someday this will help someone down the line.

The details: CentOS 5.5 x86_64; httpd: Apache/2.2.3; mysql: 5.0.77 (to be upgraded); php: 5.1 (to be upgraded)

The requirements: SECURITY!! Secure file transfer. Secure client access (SSL Certs and CA). Secure data storage. Virtualhosts/multiple subdomains. Local email would be nice, but not critical.

The Steps:

Download the latest CentOS DVD ISO (torrent worked great for me).

Install CentOS: While going through the install, I checked the Server Components option thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea.

Basic config: Set up users, networking/IP addresses, etc. Yum update/upgrade.

Upgrade PHP/MySQL: To upgrade PHP and MySQL to the latest versions, I had to look to another repo outside CentOS. IUS looks great and I'm happy I found it!

Add the IUS repository to our package manager:

cd /tmp
wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm
wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm
rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm
yum list | grep -w \.ius\. # list all the packages in the IUS repository; use this to find the PHP/MySQL versions and libraries you want to install

Remove the old version of PHP and install the newer version from IUS:

rpm -qa | grep php # to list all of the installed php packages we want to remove
yum shell # open an interactive yum shell
remove php-common php-mysql php-cli # remove installed PHP components
install php53 php53-mysql php53-cli php53-common # add packages you want
transaction solve # important!! checks for dependencies
transaction run # important!! does the actual installation of packages
[control+d] # exit yum shell
php -v
PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45)

Upgrade MySQL from the IUS repository:

/etc/init.d/mysqld stop
rpm -qa | grep mysql # to see installed mysql packages
yum shell
remove mysql mysql-server # remove installed MySQL components
install mysql51 mysql51-server mysql51-devel
transaction solve # important!! checks for dependencies
transaction run # important!! does the actual installation of packages
[control+d] # exit yum shell
service mysqld start
mysql -v
Server version: 5.1.42-ius Distributed by The IUS Community Project

Upgrade instructions courtesy of the IUS wiki: http://wiki.iuscommunity.org/Doc/ClientUsageGuide

Install rssh (restricted shell) to provide scp and sftp access, without allowing ssh login:

cd /tmp
wget http://dag.wieers.com/rpm/packages/rssh/rssh-2.3.2-1.2.el5.rf.x86_64.rpm
rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm
useradd -m -d /home/dev -s /usr/bin/rssh dev
passwd dev

Edit /etc/rssh.conf to grant SFTP access to rssh users: vi /etc/rssh.conf

Uncomment or add:
allowscp
allowsftp

This allows me to connect to the machine via the SFTP protocol in Transmit (my FTP program of choice; I'm sure it's similar with other FTP apps). rssh instructions appropriated (with appreciation!) from http://www.cyberciti.biz/tips/linux-unix-restrict-shell-access-with-rssh.html

Set up virtual interfaces:

ifconfig eth1:1 192.168.1.3 up # start up the virtual interface
cd /etc/sysconfig/network-scripts/
cp ifcfg-eth1 ifcfg-eth1:1 # copy the default script and match the name to our virtual interface
vi ifcfg-eth1:1 # modify the eth1:1 script

#ifcfg-eth1:1 | modify so it looks like this:
DEVICE=eth1:1
IPADDR=192.168.1.3
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
NAME=eth1:1

Add more virtual interfaces as needed by repeating these steps. Because of the ONBOOT=yes line in the ifcfg-eth1:1 file, this interface will be brought up when the system boots, or the network starts/restarts.

service network restart
Shutting down interface eth0: [ OK ]
Shutting down interface eth1: [ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface eth0: [ OK ]
Bringing up interface eth1: [ OK ]
ping 192.168.1.3
64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.105 ms

Virtualhosts: In the rssh section above I added a user to use for SFTP. In this user's home directory, I created a folder called 'https'. This is where the documents for this site will live, so I need to add a virtualhost that will point to it. I will use the above virtual interface for this site (herein called dev.site.local).

vi /etc/httpd/conf/httpd.conf

Add the following to the end of httpd.conf:

<VirtualHost 192.168.1.3:80>
ServerAdmin [email protected]
DocumentRoot /home/dev/https
ServerName dev.site.local
ErrorLog /home/dev/logs/error_log
TransferLog /home/dev/logs/access_log
</VirtualHost>

I put a dummy index.html file in the https directory just to check everything out. I tried browsing to it, and was met with permission denied errors. The logs only gave an obscure reference to what was going on:

[Mon May 17 14:57:11 2010] [error] [client 192.168.1.100] (13)Permission denied: access to /index.html denied

I tried chmod 777 et al., but to no avail. Turns out, I needed to chmod +x the https directory and its parent directories:

chmod +x /home
chmod +x /home/dev
chmod +x /home/dev/https

This solved that problem.

DNS: I'm handling DNS via our local Windows Server 2003 box. However, the CentOS documentation for BIND can be found here: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-bind.html

SSL: To get SSL working, I changed the following in httpd.conf:

NameVirtualHost 192.168.1.3:443 # make sure this line is in httpd.conf
<VirtualHost 192.168.1.3:443> # change port to 443
ServerAdmin [email protected]
DocumentRoot /home/dev/https
ServerName dev.site.local
ErrorLog /home/dev/logs/error_log
TransferLog /home/dev/logs/access_log
</VirtualHost>

Unfortunately, I kept getting (Error code: ssl_error_rx_record_too_long) errors when trying to access a page with SSL. As JamesHannah graciously pointed out below, I had not set up the locations of the certs in httpd.conf, so the page itself was being served to the browser as the certificate, making the browser balk.

So first, I needed to set up a CA and make certificate files. I found a great (if old) walkthrough on the process here: http://www.debian-administration.org/articles/284. Here are the relevant steps I took from that article:

mkdir /home/CA
cd /home/CA/
mkdir newcerts private
echo '01' > serial
touch index.txt # this and the above command are for the database that will keep track of certs

Create an openssl.cnf file in the /home/CA/ dir and edit it per the walkthrough linked above. (For reference, my finished openssl.cnf file looked like this: http://pastebin.com/raw.php?i=hnZDij4T)

openssl req -new -x509 -extensions v3_ca -keyout private/cakey.pem -out cacert.pem -days 3650 -config ./openssl.cnf # this creates the cacert.pem which gets distributed and imported to the browser(s)

Modified openssl.cnf again per walkthrough instructions.

openssl req -new -nodes -out dev.req.pem -config ./openssl.cnf # generates the certificate request, and key.pem which I renamed dev.key.pem

Modified openssl.cnf again per walkthrough instructions.

openssl ca -out dev.cert.pem -config ./openssl.cnf -infiles dev.req.pem # create and sign the certificate

cp dev.cert.pem /home/dev/certs/cert.pem
cp dev.key.pem /home/dev/certs/key.pem

I updated httpd.conf to reflect the certs and turn SSLEngine on:

NameVirtualHost 192.168.1.3:443
<VirtualHost 192.168.1.3:443>
ServerAdmin [email protected]
DocumentRoot /home/dev/https
SSLEngine on
SSLCertificateFile /home/dev/certs/cert.pem
SSLCertificateKeyFile /home/dev/certs/key.pem
ServerName dev.site.local
ErrorLog /home/dev/logs/error_log
TransferLog /home/dev/logs/access_log
</VirtualHost>

Put the CA cert.pem in a web-accessible place, and downloaded/imported it into my browser. Now I can visit https://dev.site.local with no errors or warnings.

And this is where I'm at. I will keep editing this as I make progress. Any tips on how to configure SSL email would be appreciated.
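A quick way to sanity-check the finished SSL setup from the command line — a hedged suggestion, reusing the dev.site.local hostname and the CA layout from the steps above — is to let openssl perform the handshake and verify the chain against the CA:

openssl s_client -connect dev.site.local:443 -CAfile /home/CA/cacert.pem

On a correct setup the output shows the certificate chain and ends with 'Verify return code: 0 (ok)'; a misconfiguration like the ssl_error_rx_record_too_long one above shows up here as a failed or garbled handshake instead.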


  • Unable to PPTP through NAT on Cisco 881

    - by MasterRoot24
I'm trying to connect to a PPTP server which sits behind a Cisco 881 NAT router. The server is running Ubuntu Server 12.04 and is running Poptop pptpd as the PPTP daemon listening for connections. As discussed in my other question, I'm trying to set up a Cisco 881 router to replace my old Linksys WAG320N. This same server and WAN connection worked fine with the WAG320N with no special configuration, other than allowing 1723 in through the firewall. On the Cisco 881, I'm using the newer ip nat enable or NAT NVI to set up static routes in through the firewall for the services running behind the router; my reason is that I can't run another copy of my live DNS domains internally with local IP addresses. For the purposes of this question, though, I have rebuilt the router with ip nat inside/outside style NAT'ing, but this issue is still apparent. HTTP/SMTP/IMAP etc. all work OK from both the WAN and LAN interfaces of the router. I'm only having issues with SIP (see other question) and PPTP. My issue is that GRE doesn't appear to be passing through NAT correctly, and one end of the connection is not receiving GRE traffic when it should be, so the server hangs up the connection. Here's an example of /var/log/syslog with debug enabled in /etc/pptpd.conf:

Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: MGR: Launching /usr/sbin/pptpctrl to handle client
Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: local address = 192.168.1.50
Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: remote address = 192.168.1.51
Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: pppd options file = /etc/ppp/pptpd-options
Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Client 82.132.248.216 control connection started
Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Received PPTP Control Message (type: 1)
Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Made a START CTRL CONN RPLY packet
Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: I wrote 156 bytes to the client.
Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Sent packet to client
Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Received PPTP Control Message (type: 7)
Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Set parameters to 100000000 maxbps, 64 window size
Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Made a OUT CALL RPLY packet
Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Starting call (launching pppd, opening GRE)
Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: pty_fd = 6
Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: tty_fd = 7
Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: I wrote 32 bytes to the client.
Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Sent packet to client
Dec 11 21:06:30 <HOSTNAME> pptpd[22627]: CTRL (PPPD Launcher): program binary = /usr/sbin/pppd
Dec 11 21:06:30 <HOSTNAME> pptpd[22627]: CTRL (PPPD Launcher): local address = 192.168.1.50
Dec 11 21:06:30 <HOSTNAME> pptpd[22627]: CTRL (PPPD Launcher): remote address = 192.168.1.51
Dec 11 21:06:30 <HOSTNAME> pppd[22627]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded.
Dec 11 21:06:30 <HOSTNAME> pppd[22627]: pppd 2.4.5 started by root, uid 0
Dec 11 21:06:30 <HOSTNAME> pppd[22627]: Using interface ppp0
Dec 11 21:06:30 <HOSTNAME> pppd[22627]: Connect: ppp0 <--> /dev/pts/3
Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: GRE: Bad checksum from pppd.
Dec 11 21:06:31 <HOSTNAME> pptpd[22626]: CTRL: Received PPTP Control Message (type: 15)
Dec 11 21:06:31 <HOSTNAME> pptpd[22626]: CTRL: Got a SET LINK INFO packet with standard ACCMs
Dec 11 21:07:00 <HOSTNAME> pppd[22627]: LCP: timeout sending Config-Requests
Dec 11 21:07:00 <HOSTNAME> pppd[22627]: Connection terminated.
Dec 11 21:07:00 <HOSTNAME> avahi-daemon[1042]: Withdrawing workstation service for ppp0.
Dec 11 21:07:00 <HOSTNAME> pppd[22627]: Modem hangup
Dec 11 21:07:00 <HOSTNAME> pppd[22627]: Exit.
Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: GRE: read(fd=6,buffer=6075a0,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs
Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: CTRL: Reaping child PPP[22627]
Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: CTRL: Client 82.132.248.216 control connection finished
Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: CTRL: Exiting now
Dec 11 21:07:00 <HOSTNAME> pptpd[5803]: MGR: Reaped child 22626

As far as Cisco are concerned, all I need is ip nat source static tcp <SERVER LAN IP> 1723 interface FastEthernet4 1723, but of course this doesn't seem to be helping the GRE traffic through as it should. Trying the connection to the LAN IP of the server from the same LAN as the server (behind the router), the PPTP connection works fine, so I'm confident that the server's config is ok. Furthermore, all I needed on my WAG320N was to open 1723 in the firewall. Here's my current router config:

!
! Last configuration change at 20:20:15 UTC Tue Dec 11 2012 by xxx
version 15.2
no service pad
service timestamps debug datetime msec
service timestamps log datetime msec
service password-encryption
!
hostname xxx
!
boot-start-marker
boot-end-marker
!
!
enable secret 4 xxxx
!
aaa new-model
!
!
aaa authentication login local_auth local
!
!
!
!
!
aaa session-id common
!
memory-size iomem 10
!
crypto pki trustpoint TP-self-signed-xxx
 enrollment selfsigned
 subject-name cn=IOS-Self-Signed-Certificate-xxx
 revocation-check none
 rsakeypair TP-self-signed-xxx
!
!
crypto pki certificate chain TP-self-signed-xxx
 certificate self-signed 01
  xxx
  quit
ip gratuitous-arps
ip auth-proxy max-login-attempts 5
ip admission max-login-attempts 5
!
!
!
!
!
ip domain list dmz.xxx.local
ip domain list xxx.local
ip domain name dmz.xxx.local
ip name-server 192.168.1.x
ip cef
login block-for 3 attempts 3 within 3
no ipv6 cef
!
!
multilink bundle-name authenticated
license udi pid CISCO881-SEC-K9 sn xxx
!
!
username admin privilege 15 secret 4 xxx
username joe secret 4 xxx
!
!
!
!
!
ip ssh time-out 60
!
!
!
!
!
!
!
!
!
interface FastEthernet0
 no ip address
!
interface FastEthernet1
 no ip address
!
interface FastEthernet2
 no ip address
!
interface FastEthernet3
 switchport access vlan 2
 no ip address
!
interface FastEthernet4
 ip address dhcp
 ip nat enable
 duplex auto
 speed auto
!
interface Vlan1
 ip address 192.168.1.x 255.255.255.0
 no ip redirects
 no ip unreachables
 no ip proxy-arp
 ip nat enable
!
interface Vlan2
 ip address 192.168.0.x 255.255.255.0
!
ip forward-protocol nd
ip http server
ip http access-class 1
ip http authentication local
ip http secure-server
!
!
ip nat source list 1 interface FastEthernet4 overload
ip nat source list 2 interface FastEthernet4 overload
ip nat source static tcp 192.168.1.x 1723 interface FastEthernet4 1723
!
!
access-list 1 permit 192.168.0.0 0.0.0.255
access-list 2 permit 192.168.1.0 0.0.0.255
!
!
!
!
control-plane
!
!
banner motd Authorized Access only
!
line con 0
 exec-timeout 15 0
 login authentication local_auth
line aux 0
 exec-timeout 15 0
 login authentication local_auth
line vty 0 4
 access-class 2 in
 login authentication local_auth
 length 0
 transport input all
!
!
end

UPDATE 16/12/2012: The only progress that I have been able to make on this issue is that I'm now confident the GRE tunnels (which are required for the PPTP connection to complete) are being blocked. When attempting a connection, I can see in show ip nat nvi translations that both a TCP translation on 1723 and a GRE translation are set up. I appear to be able to see GRE-related packets on the LAN that the server is on, so I am led to believe that the server is sending(?) GRE packets; however, running Wireshark on a client PC when attempting a connection shows absolutely no GRE packets. Whilst there are no configuration directives in my config posted above (that I can pinpoint) which would specifically block them, it would appear that the GRE packets are not being allowed in/out of the router's firewall, even though a NAT translation entry is set up to the server's LAN address. Would anyone be able to provide me with some help to ensure that GRE packets are not blocked by the router's firewall, so that this can be ruled out as a possible issue please?
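For reference, a port-level static entry like the ip nat source static tcp ... 1723 line only translates TCP; GRE (IP protocol 47) carries no ports, so it can never match a tcp/udp mapping on its own. One commonly suggested workaround — sketched here with placeholders: 192.168.1.50 as the server's LAN IP taken from the pptpd logs above, 203.0.113.10 as a stand-in public address, and assuming a spare static IP rather than the DHCP-assigned one on FastEthernet4 — is a one-to-one static NVI mapping, so that all protocols, GRE included, are translated to the server:

ip nat source static 192.168.1.50 203.0.113.10

Failing that (i.e., with only the single DHCP WAN address), GRE passthrough depends on the IOS PPTP ALG creating a GRE translation alongside the TCP 1723 session — which the show ip nat nvi translations output above suggests is at least partially happening.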


  • Building applications with WPF, MVVM and Prism (aka CAG)

    - by skjagini
In this article I am going to walk through an application using WPF and Prism (aka Composite Application Guidance, CAG) which simulates engaging a taxi (cab). The rules are simple; the app would have 3 screens:

A login screen to authenticate the user
An information screen
A screen to engage the cab, roam around, and calculate the total fare

Metered Rate of Fare
The meter is required to be engaged when a cab is occupied by anyone
$3.00 upon entry
$0.35 for each additional unit
The unit fare is: one-fifth of a mile, when the cab is traveling at 6 miles an hour or more; or 60 seconds when not in motion or traveling at less than 12 miles per hour.
Night surcharge of $.50 after 8:00 PM & before 6:00 AM
Peak hour Weekday Surcharge of $1.00 Monday - Friday after 4:00 PM & before 8:00 PM
New York State Tax Surcharge of $.50 per ride.

Example: Friday (2010-10-08) 5:30pm
Start at Lexington Ave & E 57th St
End at Irving Pl & E 15th St
Start = $3.00
Travels 2 miles at less than 6 mph for 15 minutes = $3.50
Travels at more than 12 mph for 5 minutes = $1.75
Peak hour Weekday Surcharge = $1.00 (ride started at 5:30 pm)
New York State Tax Surcharge = $0.50

Before we dive into the app, I would like to give a brief description of the framework. If you want to jump to the source code, scroll all the way to the end of the post.

MVVM

The MVVM pattern is in no way tied to the usage of Prism in your application and should be considered if you are using WPF, with or without Prism. Let's say you are not familiar with MVVM: your typical UI would involve adding some UI controls like text boxes and a button, double-clicking on the button, generating an event handler, calling a method from the business layer and updating the user interface. That works most of the time for developing small-scale applications. The problem with this approach is that some business logic ends up wrapped in UI-specific code, which makes it hard to unit test and mock, and MVVM helps to solve exactly that problem.

MVVM stands for Model (M) – View (V) – ViewModel (VM); based on the interactions between the three parties it should really be called VVMM (View – View Model – Model), but MVVM sounds more like MVC (Model-View-Controller), hence the name.

WPF allows you to create user interfaces using XAML, and MVVM takes it to the next level by allowing complete separation of user interface and business logic. In WPF each view has a property, DataContext, which when set to an instance of a class (which happens to be your view model) provides the data the view is interested in; i.e., the view interacts with the view model through DataContext.

Sujith, if the view and view model interact directly with each other, how does MVVM help with separation of concerns? Well, the catch is that DataContext is of type Object; since it is of type object the view doesn't know the exact type of the view model, allowing views and view models to be loosely coupled. View models aggregate data from models (data access layer, services, etc.) and make it available for views through properties, methods, etc.; i.e., view models interact with models.

PRISM

Prism is provided by the Microsoft Patterns and Practices team and it can be downloaded from CodePlex for source code, samples and documentation on MSDN. The name composite implies composing a user interface from different modules (views) without direct dependencies on each other, again allowing loosely coupled development.

Well Sujith, I can already do that with user controls, why shall I learn another framework? That's correct, you can decouple using user controls, but you still have to manage some amount of coupling: how do you communicate between the controls, how do you subscribe/unsubscribe, and how do you load/unload views dynamically? Prism is not a replacement for user controls; it provides the following features which greatly help in designing composite applications:

Dependency Injection (DI) / Inversion of Control (IoC)
Modules
Regions
Event Aggregator
Commands

Simply put, MVVM helps build a single view and Prism helps build an application using the views. There are other open source alternatives to Prism, like MVVMLight and Cinch; take a look at them as well.

Let's dig into the source code.

1. Solution

The solution is made of the following projects:

Framework: Holds the common functionality in building applications using WPF and Prism
TaxiClient: Start-up project, bootstrapping and app styling
TaxiCommon: Helps with the business logic
TaxiModules: Holds the meat of the application with views and view models
TaxiTests: To test the application

2. DI / IoC

Dependency Injection (DI), as the name implies, refers to injecting dependencies, and Inversion of Control (IoC) means the calling code has no direct control over the dependencies — the opposite of the normal way of programming where dependencies are created by the caller, i.e., inversion. Aside from some differences in terminology, the concept is the same in both cases. The idea behind the DI/IoC pattern is to reduce the amount of direct coupling between different components of the application; the higher the dependency, the more tightly coupled the application, resulting in code which is hard to modify, unit test and mock.

Initializing Dependency Injection through BootStrapper

TaxiClient is the starting project of the solution and App (App.xaml) is the starting class that gets called when you run the application. From the App's OnStartup method we will invoke BootStrapper.

namespace TaxiClient
{
    /// <summary>
    /// Interaction logic for App.xaml
    /// </summary>
    public partial class App : Application
    {
        protected override void OnStartup(StartupEventArgs e)
        {
            base.OnStartup(e);

            (new BootStrapper()).Run();
        }
    }
}

BootStrapper is your contact point for initializing the application, including dependency injection, creating the Shell and other frameworks. We are going to use Unity for DI; there are lots of open source DI frameworks like Spring.Net, StructureMap, etc. with different feature sets, and you can choose a framework based on your preferences. Note that Prism comes with built-in support for Unity — for example, we are deriving from UnityBootstrapper in our case — and for any other DI framework you have to extend Prism appropriately.

namespace TaxiClient
{
    public class BootStrapper : UnityBootstrapper
    {
        protected override IModuleCatalog CreateModuleCatalog()
        {
            return new ConfigurationModuleCatalog();
        }

        protected override DependencyObject CreateShell()
        {
            Framework.FrameworkBootStrapper.Run(Container, Application.Current.Dispatcher);

            Shell shell = new Shell();
            shell.ResizeMode = ResizeMode.NoResize;
            shell.Show();

            return shell;
        }
    }
}

Let's take a look into FrameworkBootStrapper to check out how to register with the Unity container.

namespace Framework
{
    public class FrameworkBootStrapper
    {
        public static void Run(IUnityContainer container, Dispatcher dispatcher)
        {
            UIDispatcher uiDispatcher = new UIDispatcher(dispatcher);
            container.RegisterInstance<IDispatcherService>(uiDispatcher);

            container.RegisterType<IInjectSingleViewService, InjectSingleViewService>(
                new ContainerControlledLifetimeManager());

            . . .
        }
    }
}

In the above code we are registering two components with the Unity container. You shall observe that we are following two different approaches, RegisterInstance and RegisterType. With RegisterInstance we are registering an existing instance, and the same instance will be returned for every request made for IDispatcherService. With RegisterType we are requesting the Unity container to create an instance for us when required; i.e., when I request an instance of IInjectSingleViewService, Unity will create/return an instance of the InjectSingleViewService class. With RegisterType we can also configure the lifetime of the instance being created: with ContainerControlledLifetimeManager, the Unity container caches the instance and reuses it for any subsequent requests, without recreating a new instance.

Let's take a look into FareViewModel.cs and its constructor. The constructor takes one parameter, IEventAggregator, and if you try to find all references in your solution for IEventAggregator, you will not find a single location where an instance of EventAggregator is passed directly to the constructor. The application still gets an instance at runtime because Prism, when used with the Unity container, is already configured to return an instance of EventAggregator whenever IEventAggregator is requested — in this particular case it is called constructor injection.

public class FareViewModel : ObservableBase, IDataErrorInfo
{
    ...
    private IEventAggregator _eventAggregator;

    public FareViewModel(IEventAggregator eventAggregator)
    {
        _eventAggregator = eventAggregator;
        InitializePropertyNames();
        InitializeModel();
        PropertyChanged += OnPropertyChanged;
    }
    ...

3. Shell

Shells are very similar in operation to Master Pages in ASP.NET or MDI in Windows Forms. Shells contain regions which display the views; you can have as many regions as you wish in a given view. You can also nest regions, i.e., one region can load a view which in itself may contain other regions. We have to create a shell at the start of the application, and we do it by overriding the CreateShell method of the BootStrapper. From the following Shell.xaml you shall notice that we have two content controls with the region names 'MenuRegion' and 'MainRegion'. The idea here is that you can inject any user controls into the regions dynamically, i.e., a menu user control for MenuRegion, and based on the user action you can load the appropriate view into MainRegion.

<Window x:Class="TaxiClient.Shell"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:Regions="clr-namespace:Microsoft.Practices.Prism.Regions;assembly=Microsoft.Practices.Prism"
    Title="Taxi" Height="370" Width="800">
    <Grid Margin="2">
        <ContentControl Regions:RegionManager.RegionName="MenuRegion"
            HorizontalAlignment="Stretch" VerticalAlignment="Stretch"
            HorizontalContentAlignment="Stretch" VerticalContentAlignment="Stretch" />

        <ContentControl Grid.Row="1" Regions:RegionManager.RegionName="MainRegion"
            HorizontalAlignment="Stretch" VerticalAlignment="Stretch"
            HorizontalContentAlignment="Stretch" VerticalContentAlignment="Stretch" />
        <!--<Border Grid.ColumnSpan="2" BorderThickness="2" CornerRadius="3" BorderBrush="LightBlue" />-->

    </Grid>
</Window>

4. Modules

Prism provides the ability to build composite applications, and modules play an important role in it. For example, suppose you are building a Mortgage Loan Processor application with 3 components — the customer's credit history, existing mortgages, and new home/loan information — and the customer's credit history component involves gathering data about his/her address, background information, job details, etc. The idea with Prism modules is to separate the implementation of these 3 components into their own Visual Studio projects, allowing the components to be built independently, with no dependency on each other. If we need to add another component to the application, it can be developed by an in-house team or some other team in the organization, starting with a new Visual Studio project and added to the solution at run time with very little knowledge about the application.

Prism modules are defined by implementing the IModule interface, and each Visual Studio project to be considered a module should implement the IModule interface. From the BootStrapper.cs you shall observe that we are overriding the CreateModuleCatalog method by returning a ConfigurationModuleCatalog, which returns the modules that are registered for the application in the app.config file; you can also add modules in code. Let's take a look at the configuration file.

<?xml version="1.0"?>
<configuration>
  <configSections>
    <section name="modules" type="Microsoft.Practices.Prism.Modularity.ModulesConfigurationSection, Microsoft.Practices.Prism"/>
  </configSections>
  <modules>
    <module assemblyFile="TaxiModules.dll" moduleType="TaxiModules.ModuleInitializer, TaxiModules" moduleName="TaxiModules"/>
  </modules>
</configuration>

Here we are adding the TaxiModules project to our solution, and TaxiModules.ModuleInitializer implements the IModule interface.

5. Module Mapper

With Prism modules you can dynamically add or remove modules from the regions; apart from that, Prism also provides an API to control adding/removing the views from a region within the same module.

Taxi Information Screen:
Engage the Taxi Screen:

The sample application has two screens, 'Taxi Information' and 'Engage the Taxi', and they both reside in the same module, TaxiModules. 'Engage the Taxi' is again made of two user controls, FareView on the left and TotalView on the right. We have created a Shell with two regions, MenuRegion and MainRegion, with the menu loaded into MenuRegion. We could create a wrapper user control called EngageTheTaxi made of FareView and TotalView, and load either TaxiInfo or EngageTheTaxi into MainRegion based on the user action. Though it would work, it tightly binds the user controls, and for every combination of user controls we would need to create a dummy wrapper control to contain them. Instead, we can apply the principles we learned so far from the Shell/regions and introduce another template (LeftAndRightRegionView.xaml) made of two regions, Region1 (left) and Region2 (right), and load FareView and TotalView dynamically. To help with loading the views dynamically I have introduced a helper interface, IInjectSingleViewService — an idea suggested by Mike Taulty, whose blog is a must-read for .NET developers.

using System;
using System.Collections.Generic;
using System.ComponentModel;

namespace Framework.PresentationUtility.Navigation
{
    public interface IInjectSingleViewService : INotifyPropertyChanged
    {
        IEnumerable<CommandViewDefinition> Commands { get; }
        IEnumerable<ModuleViewDefinition> Modules { get; }

        void RegisterViewForRegion(string commandName, string viewName, string regionName, Type viewType);
        void ClearViewFromRegion(string viewName, string regionName);
        void RegisterModule(string moduleName, IList<ModuleMapper> moduleMappers);
    }
}

The interface declares three methods to work with views:

RegisterViewForRegion: Registers a view with a particular region. You can register multiple views and their regions under one command. When this particular command is invoked, all the views registered under it will be loaded into their regions.
ClearViewFromRegion: Unloads a specific view from a region.
RegisterModule: The idea is that when a command is invoked you can load the UI with a set of controls in their default positions, and based on the user interaction you can load different controls into different regions on the fly. This is supported by ModuleViewDefinition and ModuleMapper, as shown below.

namespace Framework.PresentationUtility.Navigation
{
    public class ModuleViewDefinition
    {
        public string ModuleName { get; set; }
        public IList<ModuleMapper> ModuleMappers;
        public ICommand Command { get; set; }
    }

    public class ModuleMapper
    {
        public string ViewName { get; set; }
        public string RegionName { get; set; }
        public Type ViewType { get; set; }
    }
}

6. Event Aggregator

The Prism event aggregator enables messaging between components as in the Observer pattern: the notifier notifies the observer, which receives the notifications it is interested in. In the Observer pattern, the observer has to unsubscribe from notifications when it is no longer interested, which allows the notifier to remove the observer's reference from its local cache. Though .NET has managed garbage collection, it cannot collect instances that are still referenced by an active instance, resulting in a memory leak: the observers stay in memory as long as the notifier stays in memory. Developers have to be very careful to unsubscribe when necessary, and it often gets overlooked. To overcome these problems, the Prism event aggregator uses weak references to cache the reference (the observer in this case) and releases the reference (memory) once the instance goes out of scope.

Using the event aggregator is very simple: declare a generic type of CompositePresentationEvent by inheriting from it.

using Microsoft.Practices.Prism.Events;
using TaxiCommon.BAO;

namespace TaxiCommon.CompositeEvents
{
    public class TaxiOnMoveEvent : CompositePresentationEvent<TaxiOnMove>
    {
    }
}

TaxiOnMove.cs includes the properties which we want to exchange between the parties, FareView and TotalView.

using System;

namespace TaxiCommon.BAO
{
    public class TaxiOnMove
    {
        public TimeSpan MinutesAtTweleveMPH { get; set; }
        public double MilesAtSixMPH { get; set; }
    }
}

Let's take a look into FareViewModel (the notifier) and how it raises the event. Here we are raising the event by getting it through GetEvent<..>() and publishing it with the payload:

private void OnAddMinutes(object obj)
{
    TaxiOnMove payload = new TaxiOnMove();
    if (MilesAtSixMPH != null)
        payload.MilesAtSixMPH = MilesAtSixMPH.Value;
    if (MinutesAtTweleveMPH != null)
        payload.MinutesAtTweleveMPH = new TimeSpan(0, 0, MinutesAtTweleveMPH.Value, 0);

    _eventAggregator.GetEvent<TaxiOnMoveEvent>().Publish(payload);
    ResetMinutesAndMiles();
}

And TotalViewModel (the observer) subscribes to notifications by getting the event through GetEvent<..>():

namespace TaxiModules.ViewModels
{
    public class TotalViewModel : ObservableBase
    {
        ....
        private IEventAggregator _eventAggregator;

        public TotalViewModel(IEventAggregator eventAggregator)
        {
            _eventAggregator = eventAggregator;
            ...
        }

        private void SubscribeToEvents()
        {
            _eventAggregator.GetEvent<TaxiStartedEvent>()
                .Subscribe(OnTaxiStarted, ThreadOption.UIThread, false, (filter) => true);
            _eventAggregator.GetEvent<TaxiOnMoveEvent>()
                .Subscribe(OnTaxiMove, ThreadOption.UIThread, false, (filter) => true);
            _eventAggregator.GetEvent<TaxiResetEvent>()
                .Subscribe(OnTaxiReset, ThreadOption.UIThread, false, (filter) => true);
        }

        ...
        private void OnTaxiMove(TaxiOnMove taxiOnMove)
        {
            OnMoveFare fare = new OnMoveFare(taxiOnMove);
            Fares.Add(fare);
            SetTotalFare(new[] { fare });
        }

        ....

7. MVVM through example

In this section we are going to look into the MVVM implementation through the example. I have all the modules declared in a single project, TaxiModules; again, it is not necessary to have them in one project. Once the user logs into the application, they will be greeted with the 'Engage the Taxi' screen, which is made of two user controls, FareView.xaml and TotalView.xaml. As you can see from the solution explorer, each of them has its own code-behind file and view model class (FareViewModel.cs, TotalViewModel.cs).

Let's take a look into FareView and how it interacts with FareViewModel using the MVVM implementation. FareView.xaml acts as the view and FareViewModel.cs is its view model. The FareView code-behind class:

namespace TaxiModules.Views
{
    /// <summary>
    /// Interaction logic for FareView.xaml
    /// </summary>
    public partial class FareView : UserControl
    {
        public FareView(FareViewModel viewModel)
        {
            InitializeComponent();
            this.Loaded += (s, e) => { this.DataContext = viewModel; };
        }
    }
}

The FareView is bound to FareViewModel through the data context, and you shall observe that DataContext is of type Object, i.e., the FareView doesn't really know the type of the view model (FareViewModel). This helps the separation of view and view model: as they are independent of each other, you could bind FareView to a FareViewModel2 as well and the application would compile just fine.

Let's take a look into the FareView XAML file:

<UserControl x:Class="TaxiModules.Views.FareView"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:Toolkit="clr-namespace:Microsoft.Windows.Controls;assembly=WPFToolkit"
    xmlns:Commands="clr-namespace:Microsoft.Practices.Prism.Commands;assembly=Microsoft.Practices.Prism">
    <Grid Margin="10" >
    ....

    <Border Style="{DynamicResource innerBorder}" Grid.Row="0" Grid.Column="0" Grid.RowSpan="11" Grid.ColumnSpan="2" Panel.ZIndex="1"/>

    <Label Grid.Row="0" Content="Engage the Taxi" Style="{DynamicResource innerHeader}"/>
    <Label Grid.Row="1" Content="Select the State"/>
    <ComboBox Grid.Row="1" Grid.Column="1" ItemsSource="{Binding States}" Height="auto">
        <ComboBox.ItemTemplate>
            <DataTemplate>
                <TextBlock Text="{Binding Name}"/>
            </DataTemplate>
        </ComboBox.ItemTemplate>
        <ComboBox.SelectedItem>
            <Binding Path="SelectedState" Mode="TwoWay"/>
        </ComboBox.SelectedItem>
    </ComboBox>
    <Label Grid.Row="2" Content="Select the Date of Entry"/>
    <Toolkit:DatePicker Grid.Row="2" Grid.Column="1" SelectedDate="{Binding DateOfEntry, ValidatesOnDataErrors=true}" />
    <Label Grid.Row="3" Content="Enter time 24hr format"/>
    <TextBox Grid.Row="3" Grid.Column="1" Text="{Binding TimeOfEntry, TargetNullValue=''}"/>
    <Button Grid.Row="4" Grid.Column="1" Content="Start the Meter" Commands:Click.Command="{Binding StartMeterCommand}" />

    <Label Grid.Row="5" Content="Run the Taxi" Style="{DynamicResource innerHeader}"/>
    <Label Grid.Row="6" Content="Number of Miles &lt;@6mph"/>
    <TextBox Grid.Row="6" Grid.Column="1" Text="{Binding MilesAtSixMPH, TargetNullValue='', ValidatesOnDataErrors=true}"/>
    <Label Grid.Row="7" Content="Number of Minutes @12mph"/>
    <TextBox Grid.Row="7" Grid.Column="1" Text="{Binding MinutesAtTweleveMPH, TargetNullValue=''}"/>
    <Button Grid.Row="8" Grid.Column="1" Content="Add Minutes and Miles " Commands:Click.Command="{Binding AddMinutesCommand}"/>
    <Label Grid.Row="9" Content="Other Operations" Style="{DynamicResource innerHeader}"/>
    <Button Grid.Row="10" Grid.Column="1" Content="Reset the Meter" Commands:Click.Command="{Binding ResetCommand}"/>

    </Grid>
</UserControl>

The highlighted code above shows the data binding; for example, the ComboBox which displays the list of states has its ItemsSource bound to the States property, its DataTemplate bound to Name, and its SelectedItem bound to SelectedState. You might be wondering what all these properties are and how the view is able to bind to them. The answer lies in the data context: when you bind a control, WPF looks for a data context on the root object (the Grid in this case), and if it can't find one it will look into the root's root, i.e., the FareView UserControl, which is bound to FareViewModel. Each of those properties has to be declared on the view model for the view to bind correctly. Put simply, the view is bound to the view model through a data context of type object, and every control that is bound on the view actually binds to a public property on the view model.

Let's look into the view model code (the following code is not an exact copy of FareViewModel.cs; I've pasted the code relevant to this section):

namespace TaxiModules.ViewModels
{
    public class FareViewModel : ObservableBase, IDataErrorInfo
    {
        public List<USState> States
        {
            get { return USStates.StateList; }
        }

        public USState SelectedState
        {
            get { return _selectedState; }
            set
            {
                _selectedState = value;
                RaisePropertyChanged(_selectedStatePropertyName);
            }
        }

        public DateTime? DateOfEntry
        {
            get { return _dateOfEntry; }
            set
            {
                _dateOfEntry = value;
                RaisePropertyChanged(_dateOfEntryPropertyName);
            }
        }

        public TimeSpan? TimeOfEntry
        {
            get { return _timeOfEntry; }
            set
            {
                _timeOfEntry = value;
                RaisePropertyChanged(_timeOfEntryPropertyName);
            }
        }

        public double? MilesAtSixMPH
        {
            get { return _milesAtSixMPH; }
            set
            {
                _milesAtSixMPH = value;
                RaisePropertyChanged(_distanceAtSixMPHPropertyName);
            }
        }

        public int? MinutesAtTweleveMPH
        {
            get { return _minutesAtTweleveMPH; }
            set
            {
                _minutesAtTweleveMPH = value;
                RaisePropertyChanged(_minutesAtTweleveMPHPropertyName);
            }
        }

        public ICommand StartMeterCommand
        {
            get
            {
                if (_startMeterCommand == null)
                {
                    _startMeterCommand = new DelegateCommand<object>(OnStartMeter, CanStartMeter);
                }
                return _startMeterCommand;
            }
        }

        public ICommand AddMinutesCommand
        {
            get
            {
                if (_addMinutesCommand == null)
                {
                    _addMinutesCommand = new DelegateCommand<object>(OnAddMinutes, CanAddMinutes);
                }
                return _addMinutesCommand;
            }
        }

        public ICommand ResetCommand
        {
            get
            {
                if (_resetCommand == null)
                {
                    _resetCommand = new DelegateCommand<object>(OnResetCommand);
                }
                return _resetCommand;
            }
        }

        private void OnStartMeter(object obj)
        {
            _eventAggregator.GetEvent<TaxiStartedEvent>().Publish(
                new TaxiStarted()
                {
                    EngagedOn = DateOfEntry.Value.Date + TimeOfEntry.Value,
                    EngagedState = SelectedState.Value
                });

            _isMeterStarted = true;
            OnPropertyChanged(this, null);
        }
    }

Views communicate user actions like button clicks, tree view item selections, etc. using commands. When the user clicks the 'Start the Meter' button, it invokes StartMeterCommand, which calls OnStartMeter, which publishes the TaxiStartedEvent to TotalViewModel through the event aggregator.

namespace TaxiModules.ViewModels
{
    public class TotalViewModel : ObservableBase
    {
        ...
        private IEventAggregator _eventAggregator;

        public TotalViewModel(IEventAggregator eventAggregator)
        {
            _eventAggregator = eventAggregator;

            InitializePropertyNames();
            InitializeModel();
            SubscribeToEvents();
        }

        public decimal? TotalFare
        {
            get { return _totalFare; }
            set
            {
                _totalFare = value;
                RaisePropertyChanged(_totalFarePropertyName);
            }
        }
        ....
        private void SubscribeToEvents()
        {
            _eventAggregator.GetEvent<TaxiStartedEvent>().Subscribe(OnTaxiStarted, ThreadOption.UIThread, false, (filter) => true);
            _eventAggregator.GetEvent<TaxiOnMoveEvent>().Subscribe(OnTaxiMove, ThreadOption.UIThread, false, (filter) => true);
            _eventAggregator.GetEvent<TaxiResetEvent>().Subscribe(OnTaxiReset, ThreadOption.UIThread, false, (filter) => true);
        }

        private void OnTaxiStarted(TaxiStarted taxiStarted)
        {
            Fares.Add(new EntryFare());
            Fares.Add(new StateTaxFare(taxiStarted));
            Fares.Add(new NightSurchargeFare(taxiStarted));
            Fares.Add(new PeakHourWeekdayFare(taxiStarted));

            SetTotalFare(Fares);
        }

        private void SetTotalFare(IEnumerable<IFare> fares)
        {
            TotalFare = (_totalFare ?? 0) + TaxiFareHelper.GetTotalFare(fares);
        }
        ....
    }
}

TotalViewModel subscribes to TaxiStartedEvent and the rest of the events. When TaxiStartedEvent is published, it calls the OnTaxiStarted method, which sets the total fare, including the entry fee, state tax, nightly surcharge and peak hour weekday fare.

Note that TotalViewModel derives from ObservableBase, which implements the RaisePropertyChanged method that we invoke in the setter of the TotalFare property; i.e., once we update the TotalFare property it raises an event that allows the TotalFare text box to fetch the new value through the data context. The view model communicates with the view through the data context and has no knowledge of the view, helping with the loose coupling of view model and view.

I have attached the source code (.NET 4.0, Prism 4.0, VS 2010); download and play with it, and don't forget to leave your comments.
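As a side note, the snippets above call TaxiFareHelper.GetTotalFare over a collection of IFare items without showing either type; the real ones ship in the attached source. A minimal sketch of what they might look like — the Amount member is an assumption for illustration, not the article's actual definition:

using System.Collections.Generic;

namespace TaxiCommon.BAO
{
    // Hypothetical shape of the fare abstraction: each fare item
    // (entry fee, surcharge, tax, on-move fare) exposes its amount.
    public interface IFare
    {
        decimal Amount { get; }
    }

    public static class TaxiFareHelper
    {
        // Sums the individual fare components into a single total.
        public static decimal GetTotalFare(IEnumerable<IFare> fares)
        {
            decimal total = 0m;
            foreach (IFare fare in fares)
                total += fare.Amount;
            return total;
        }
    }
}

Whatever the real shape is, the design point stands: TotalViewModel stays free of any fare math, and adding a new surcharge is just another IFare implementation added to the Fares collection.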


  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
    During one of my Handlers and Modules session at DevConnections this week one of the attendees asked a question that I didn’t have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically the output should be captured from anywhere – not just a page and have this code injected into the page. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text then writing it back out. I’ve actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won’t work outside of the Page class environment and it’s not really generic – you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments] Enter Response.Filter However, ASP.NET includes a Response.Filter which can be used – well to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close as well as read operations although for a Response.Filter that’s uncommon to be hit. The Response.Filter can be programmatically replaced at runtime which allows you to effectively intercept all output generation that runs through ASP.NET. A common Example: Dynamic GZip Encoding A rather common use of Response.Filter hooking up code based, dynamic  GZip compression for requests which is dead simple by applying a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect GZip capability of the client and compress response output with a single line of code and a couple of library helper routines: WebUtils.GZipEncodePage(); which is handled with a few lines of reusable code and a couple of static helper methods: /// <summary> ///Sets up the current page or handler to use GZip through a Response.Filter ///IMPORTANT:  ///You have to call this method before any output is generated! 
/// </summary> public static void GZipEncodePage() {     HttpResponse Response = HttpContext.Current.Response;     if(IsGZipSupported())     {         stringAcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];         if(AcceptEncoding.Contains("deflate"))         {             Response.Filter = newSystem.IO.Compression.DeflateStream(Response.Filter,                                        System.IO.Compression.CompressionMode.Compress);             Response.AppendHeader("Content-Encoding", "deflate");         }         else        {             Response.Filter = newSystem.IO.Compression.GZipStream(Response.Filter,                                       System.IO.Compression.CompressionMode.Compress);             Response.AppendHeader("Content-Encoding", "gzip");                            }     }     // Allow proxy servers to cache encoded and unencoded versions separately    Response.AppendHeader("Vary", "Content-Encoding"); } /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } GZipStream and DeflateStream are streams that are assigned to Response.Filter and by doing so apply the appropriate compression on the active Response. Response.Filter content is chunked So to implement a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it’s written. At first blush this seems very simple – you capture the output in Write, transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But you see, the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than just a single Write() statement into the stream, which makes perfect sense for ASP.NET to stream data back to IIS in smaller chunks to minimize memory usage en route. Unfortunately this also makes it a more difficult to implement any filtering routines since you don’t directly get access to all of the response content which is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output as is needed for the solution the gentleman in my session asked for. So in order to address this a slightly different approach is required that basically captures all the Write() buffers passed into a cached stream and then making the stream available only when it’s complete and ready to be flushed. As I was thinking about the implementation I also started thinking about the few instances when I’ve used Response.Filter implementations. Each time I had to create a new Stream subclass and create my custom functionality but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this by creating a re-usable Stream class that can handle stream transformations that are common to Response.Filter implementations. Creating a semi-generic Response Filter Stream Class What I ended up with is a ResponseFilterStream class that provides a handful of Events that allow you to capture and/or transform Response content. 
The class implements a subclass of Stream and then overrides Write() and Flush() to handle capturing and transformation operations. By exposing events it’s easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events: CaptureStream, CaptureString Captures the output only and provides either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream. TransformStream, TransformString Allows you to transform the complete response output with events that receive a MemoryStream or String respectively and can you modify the output then return it back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first then write the entire buffer into the response. TransformWrite, TransformWriteString Allows you to transform the Response data as it is written in its original chunk size in the Stream’s Write() method. Unlike TransformStream/TransformString which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there’s no caching involved, but can cause problems due to searched content splitting over multiple chunks. Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformStream += filter_TransformStream; Response.Filter = filter; and the event handler to do the transformation: MemoryStream filter_TransformStream(MemoryStream ms) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = encoding.GetString(ms.ToArray()); output = FixPaths(output); ms = new MemoryStream(output.Length); byte[] buffer = encoding.GetBytes(output); ms.Write(buffer,0,buffer.Length); return ms; } private string FixPaths(string output) { string path = HttpContext.Current.Request.ApplicationPath; // override root path wonkiness if (path == "/") path = ""; output = output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/"); return output; } The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one that’s been modified or a brand new one – which is then sent back to as the final response. The above code can be simplified even more by using the string version events which handle the stream to string conversions for you: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; and the event handler to do the transformation calling the same FixPaths method shown above: string filter_TransformString(string output) { return FixPaths(output); } The events for capturing output and capturing and transforming chunks work in a very similar way. By using events to handle the transformations ResponseFilterStream becomes a reusable component and we don’t have to create a new stream class or subclass an existing Stream based classed. By the way, the example used here is kind of a cool trick which transforms “~/” expressions inside of the final generated HTML output – even in plain HTML controls not HTML controls – and transforms them into the appropriate application relative path in the same way that ResolveUrl would do. 
So you can write plain old HTML like this: <a href=”~/default.aspx”>Home</a>  and have it turned into: <a href=”/myVirtual/default.aspx”>Home</a>  without having to use an ASP.NET control like Hyperlink or Image or having to constantly use: <img src=”<%= ResolveUrl(“~/images/home.gif”) %>” /> in MVC applications (which frankly is one of the most annoying things about MVC especially given the path hell that extension-less and endpoint-less URLs impose). I can’t take credit for this idea. While discussing the Response.Filter issues on Twitter a hint from Dylan Beattie who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use an example for future demos of Response.Filter functionality in ASP.NET next I time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable however since there’s definitely some overhead to using a Response.Filter in general and especially on one that caches the output and the re-writes it later. Make sure to test for performance anytime you use Response.Filter hookup and make sure it' doesn’t end up killing perf on you. You’ve been warned :-}. How does ResponseFilterStream work? The big win of this implementation IMHO is that it’s a reusable  component – so for implementation there’s no new class, no subclassing – you simply attach to an event to implement an event handler method with a straight forward signature to retrieve the stream or string you’re interested in. The implementation is based on a subclass of Stream as is required in order to handle the Response.Filter requirements. What’s different than other implementations I’ve seen in various places is that it supports capturing output as a whole to allow retrieving the full response output for capture or modification. The exception are the TransformWrite and TransformWrite events which operate only active chunk of data written by the Response. For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn’t pass on the Write immediately to the filter’s internal stream. The data is cached and only when the Flush() method is called to finalize the Stream’s output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk. Here’s the implementation of ResponseFilterStream: /// <summary> /// A semi-generic Stream implementation for Response.Filter with /// an event interface for handling Content transformations via /// Stream or String. /// <remarks> /// Use with care for large output as this implementation copies /// the output into a memory stream and so increases memory usage. 
/// </remarks> /// </summary> public class ResponseFilterStream : Stream { /// <summary> /// The original stream /// </summary> Stream _stream; /// <summary> /// Current position in the original stream /// </summary> long _position; /// <summary> /// Stream that original content is read into /// and then passed to TransformStream function /// </summary> MemoryStream _cacheStream = new MemoryStream(5000); /// <summary> /// Internal pointer that that keeps track of the size /// of the cacheStream /// </summary> int _cachePointer = 0; /// <summary> /// /// </summary> /// <param name="responseStream"></param> public ResponseFilterStream(Stream responseStream) { _stream = responseStream; } /// <summary> /// Determines whether the stream is captured /// </summary> private bool IsCaptured { get { if (CaptureStream != null || CaptureString != null || TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Determines whether the Write method is outputting data immediately /// or delaying output until Flush() is fired. /// </summary> private bool IsOutputDelayed { get { if (TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Event that captures Response output and makes it available /// as a MemoryStream instance. Output is captured but won't /// affect Response output. /// </summary> public event Action<MemoryStream> CaptureStream; /// <summary> /// Event that captures Response output and makes it available /// as a string. Output is captured but won't affect Response output. /// </summary> public event Action<string> CaptureString; /// <summary> /// Event that allows you transform the stream as each chunk of /// the output is written in the Write() operation of the stream. /// This means that that it's possible/likely that the input /// buffer will not contain the full response output but only /// one of potentially many chunks. /// /// This event is called as part of the filter stream's Write() /// operation. /// </summary> public event Func<byte[], byte[]> TransformWrite; /// <summary> /// Event that allows you to transform the response stream as /// each chunk of bytep[] output is written during the stream's write /// operation. This means it's possibly/likely that the string /// passed to the handler only contains a portion of the full /// output. Typical buffer chunks are around 16k a piece. /// /// This event is called as part of the stream's Write operation. /// </summary> public event Func<string, string> TransformWriteString; /// <summary> /// This event allows capturing and transformation of the entire /// output stream by caching all write operations and delaying final /// response output until Flush() is called on the stream. /// </summary> public event Func<MemoryStream, MemoryStream> TransformStream; /// <summary> /// Event that can be hooked up to handle Response.Filter /// Transformation. Passed a string that you can modify and /// return back as a return value. The modified content /// will become the final output. 
/// </summary> public event Func<string, string> TransformString; protected virtual void OnCaptureStream(MemoryStream ms) { if (CaptureStream != null) CaptureStream(ms); } private void OnCaptureStringInternal(MemoryStream ms) { if (CaptureString != null) { string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); OnCaptureString(content); } } protected virtual void OnCaptureString(string output) { if (CaptureString != null) CaptureString(output); } protected virtual byte[] OnTransformWrite(byte[] buffer) { if (TransformWrite != null) return TransformWrite(buffer); return buffer; } private byte[] OnTransformWriteStringInternal(byte[] buffer) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = OnTransformWriteString(encoding.GetString(buffer)); return encoding.GetBytes(output); } private string OnTransformWriteString(string value) { if (TransformWriteString != null) return TransformWriteString(value); return value; } protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms) { if (TransformStream != null) return TransformStream(ms); return ms; } /// <summary> /// Allows transforming of strings /// /// Note this handler is internal and not meant to be overridden /// as the TransformString Event has to be hooked up in order /// for this handler to even fire to avoid the overhead of string /// conversion on every pass through. /// </summary> /// <param name="responseText"></param> /// <returns></returns> private string OnTransformCompleteString(string responseText) { if (TransformString != null) TransformString(responseText); return responseText; } /// <summary> /// Wrapper method form OnTransformString that handles /// stream to string and vice versa conversions /// </summary> /// <param name="ms"></param> /// <returns></returns> internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms) { if (TransformString == null) return ms; //string content = ms.GetAsString(); string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); content = TransformString(content); byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content); ms = new MemoryStream(); ms.Write(buffer, 0, buffer.Length); //ms.WriteString(content); return ms; } /// <summary> /// /// </summary> public override bool CanRead { get { return true; } } public override bool CanSeek { get { return true; } } /// <summary> /// /// </summary> public override bool CanWrite { get { return true; } } /// <summary> /// /// </summary> public override long Length { get { return 0; } } /// <summary> /// /// </summary> public override long Position { get { return _position; } set { _position = value; } } /// <summary> /// /// </summary> /// <param name="offset"></param> /// <param name="direction"></param> /// <returns></returns> public override long Seek(long offset, System.IO.SeekOrigin direction) { return _stream.Seek(offset, direction); } /// <summary> /// /// </summary> /// <param name="length"></param> public override void SetLength(long length) { _stream.SetLength(length); } /// <summary> /// /// </summary> public override void Close() { _stream.Close(); } /// <summary> /// Override flush by writing out the cached stream data /// </summary> public override void Flush() { if (IsCaptured && _cacheStream.Length > 0) { // Check for transform implementations _cacheStream = OnTransformCompleteStream(_cacheStream); _cacheStream = OnTransformCompleteStringInternal(_cacheStream); OnCaptureStream(_cacheStream); 
OnCaptureStringInternal(_cacheStream); // write the stream back out if output was delayed if (IsOutputDelayed) _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length); // Clear the cache once we've written it out _cacheStream.SetLength(0); } // default flush behavior _stream.Flush(); } /// <summary> /// /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> /// <returns></returns> public override int Read(byte[] buffer, int offset, int count) { return _stream.Read(buffer, offset, count); } /// <summary> /// Overriden to capture output written by ASP.NET and captured /// into a cached stream that is written out later when Flush() /// is called. /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> public override void Write(byte[] buffer, int offset, int count) { if ( IsCaptured ) { // copy to holding buffer only - we'll write out later _cacheStream.Write(buffer, 0, count); _cachePointer += count; } // just transform this buffer if (TransformWrite != null) buffer = OnTransformWrite(buffer); if (TransformWriteString != null) buffer = OnTransformWriteStringInternal(buffer); if (!IsOutputDelayed) _stream.Write(buffer, offset, buffer.Length); } } The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain jane passthrough stream implementation code without much consequence. I do love the way Action<t> and Func<T> make it so easy to create the event signatures for the various events – sweet. A few Things to consider Performance Response.Filter is not great for performance in general as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit as it basically duplicates the response output into the cached memory stream which is necessary since you may have to look at the entire response. If you have large pages in particular this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it’s not killing your app’s performance. Response.Filter works everywhere A few questions came up in comments and discussion as to capturing ALL output hitting the site and – yes you can definitely do that by assigning a Response.Filter inside of a module. If you do this however you’ll want to be very careful and decide which content you actually want to capture especially in IIS 7 which passes ALL content – including static images/CSS etc. through the ASP.NET pipeline. So it is important to filter only on what you’re looking for – like the page extension or maybe more effectively the Response.ContentType. Response.Filter Chaining Originally I thought that filter chaining doesn’t work at all due to a bug in the stream implementation code. But it’s quite possible to assign multiple filters to the Response.Filter property. 
Response.Filter Chaining
Originally I thought that filter chaining didn't work at all due to a bug in the stream implementation code, but it is in fact quite possible to assign multiple filters to the Response.Filter property. So the following actually works to both compress the output and apply the transformed content:

WebUtils.GZipEncodePage();

ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.TransformString += filter_TransformString;
Response.Filter = filter;

However the following does not work, resulting in invalid content encoding errors:

ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.TransformString += filter_TransformString;
Response.Filter = filter;

WebUtils.GZipEncodePage();

In other words, multiple Response filters can work together, but whether and in which order they can be chained depends entirely on the implementation. In this case the GZip/Deflate stream filters apparently rely on the original content length of the output and choke when the content is modified first. Attach the compression first, however, and it works fine - as unintuitive as that may seem.

Resources
Download example code
Capture Output from ASP.NET Pages

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET

    Read the article

  • Problem with Google Web Toolkit Maven Plugin

    - by arreche
I got an error following the PetClinic GWT application in less than 30 minutes. Any idea?

C:\Users\user\Desktop\petclinic>mvn -e gwt:run
+ Error stacktraces are turned on.
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building petclinic
[INFO] task-segment: [gwt:run]
[INFO] ------------------------------------------------------------------------
[INFO] Preparing gwt:run
[INFO] [aspectj:compile {execution: default}]
[INFO] [resources:resources {execution: default-resources}]
[WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] Copying 4 resources
[INFO] [compiler:compile {execution: default-compile}]
[INFO] Nothing to compile - all classes are up to date
Downloading: http://repository.springsource.com/maven/bundles/release/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
[INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository com.springsource.repository.bundles.release (http://repository.springsource.com/maven/bundles/release)
Downloading: http://repository.springsource.com/maven/bundles/external/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
[INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository com.springsource.repository.bundles.external (http://repository.springsource.com/maven/bundles/external)
Downloading: http://repository.springsource.com/maven/bundles/milestone/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
[INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository com.springsource.repository.bundles.milestone (http://repository.springsource.com/maven/bundles/milestone)
Downloading: http://maven.springframework.org/milestone/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
[INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository spring-maven-milestone (http://maven.springframework.org/milestone)
Downloading: http://google-web-toolkit.googlecode.com/svn/2.1.0.M1/gwt/maven/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
[INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository gwt-repo (http://google-web-toolkit.googlecode.com/svn/2.1.0.M1/gwt/maven)
Downloading: http://repo1.maven.org/maven2/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
[INFO] ------------------------------------------------------------------------
[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Error building POM (may not be this project's POM).

Project ID: org.codehaus.plexus:plexus-components:pom:1.1.6
Reason: Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6

[INFO] ------------------------------------------------------------------------
[INFO] Trace
org.apache.maven.lifecycle.LifecycleExecutionException: Unable to get dependency information: Unable to read the metadata file for artifact 'org.codehaus.plexus:plexus-compiler-api:jar': Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6
  org.codehaus.plexus:plexus-compiler-api:jar:1.5.3
from the specified remote repositories:
  com.springsource.repository.bundles.release (http://repository.springsource.com/maven/bundles/release),
  com.springsource.repository.bundles.milestone (http://repository.springsource.com/maven/bundles/milestone),
  spring-maven-snapshot (http://maven.springframework.org/snapshot),
  com.springsource.repository.bundles.external (http://repository.springsource.com/maven/bundles/external),
  spring-maven-milestone (http://maven.springframework.org/milestone),
  central (http://repo1.maven.org/maven2),
  gwt-repo (http://google-web-toolkit.googlecode.com/svn/2.1.0.M1/gwt/maven),
  codehaus.org (http://snapshots.repository.codehaus.org),
  JBoss Repo (http://repository.jboss.com/maven2)
Path to dependency:
  1) org.codehaus.mojo:gwt-maven-plugin:maven-plugin:1.3.1.google
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:711)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
Caused by: org.apache.maven.artifact.resolver.ArtifactResolutionException: Unable to get dependency information: Unable to read the metadata file for artifact 'org.codehaus.plexus:plexus-compiler-api:jar': Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6
  org.codehaus.plexus:plexus-compiler-api:jar:1.5.3
from the specified remote repositories:
  com.springsource.repository.bundles.release (http://repository.springsource.com/maven/bundles/release),
  com.springsource.repository.bundles.milestone (http://repository.springsource.com/maven/bundles/milestone),
  spring-maven-snapshot (http://maven.springframework.org/snapshot),
  com.springsource.repository.bundles.external (http://repository.springsource.com/maven/bundles/external),
  spring-maven-milestone (http://maven.springframework.org/milestone),
  central (http://repo1.maven.org/maven2),
  gwt-repo (http://google-web-toolkit.googlecode.com/svn/2.1.0.M1/gwt/maven),
  codehaus.org (http://snapshots.repository.codehaus.org),
  JBoss Repo (http://repository.jboss.com/maven2)
Path to dependency:
  1) org.codehaus.mojo:gwt-maven-plugin:maven-plugin:1.3.1.google
at org.apache.maven.artifact.resolver.DefaultArtifactCollector.recurse(DefaultArtifactCollector.java:430)
at org.apache.maven.artifact.resolver.DefaultArtifactCollector.collect(DefaultArtifactCollector.java:74)
at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolveTransitively(DefaultArtifactResolver.java:316)
at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolveTransitively(DefaultArtifactResolver.java:304)
at org.apache.maven.plugin.DefaultPluginManager.ensurePluginContainerIsComplete(DefaultPluginManager.java:835)
at org.apache.maven.plugin.DefaultPluginManager.getConfiguredMojo(DefaultPluginManager.java:647)
at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:468)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
... 17 more
Caused by: org.apache.maven.artifact.metadata.ArtifactMetadataRetrievalException: Unable to read the metadata file for artifact 'org.codehaus.plexus:plexus-compiler-api:jar': Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6
at org.apache.maven.project.artifact.MavenMetadataSource.retrieveRelocatedProject(MavenMetadataSource.java:200)
at org.apache.maven.project.artifact.MavenMetadataSource.retrieveRelocatedArtifact(MavenMetadataSource.java:94)
at org.apache.maven.artifact.resolver.DefaultArtifactCollector.recurse(DefaultArtifactCollector.java:387)
... 24 more
Caused by: org.apache.maven.project.ProjectBuildingException: Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6
at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1396)
at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1407)
at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1407)
at org.apache.maven.project.DefaultMavenProjectBuilder.buildInternal(DefaultMavenProjectBuilder.java:823)
at org.apache.maven.project.DefaultMavenProjectBuilder.buildFromRepository(DefaultMavenProjectBuilder.java:255)
at org.apache.maven.project.artifact.MavenMetadataSource.retrieveRelocatedProject(MavenMetadataSource.java:163)
... 26 more
Caused by: org.apache.maven.project.InvalidProjectModelException: Parse error reading POM. Reason: expected START_TAG or END_TAG not TEXT (position: TEXT seen ...<role>Developer</role>\n 6878/?\r</... @163:16) for project org.codehaus.plexus:plexus at C:\Users\user\.m2\repository\org\codehaus\plexus\plexus\1.0.8\plexus-1.0.8.pom
at org.apache.maven.project.DefaultMavenProjectBuilder.readModel(DefaultMavenProjectBuilder.java:1610)
at org.apache.maven.project.DefaultMavenProjectBuilder.readModel(DefaultMavenProjectBuilder.java:1571)
at org.apache.maven.project.DefaultMavenProjectBuilder.findModelFromRepository(DefaultMavenProjectBuilder.java:562)
at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1392)
... 31 more
Caused by: org.codehaus.plexus.util.xml.pull.XmlPullParserException: expected START_TAG or END_TAG not TEXT (position: TEXT seen ...<role>Developer</role>\n 6878/?\r</... @163:16)
at hidden.org.codehaus.plexus.util.xml.pull.MXParser.nextTag(MXParser.java:1095)
at org.apache.maven.model.io.xpp3.MavenXpp3Reader.parseDeveloper(MavenXpp3Reader.java:1389)
at org.apache.maven.model.io.xpp3.MavenXpp3Reader.parseModel(MavenXpp3Reader.java:1944)
at org.apache.maven.model.io.xpp3.MavenXpp3Reader.read(MavenXpp3Reader.java:3912)
at org.apache.maven.project.DefaultMavenProjectBuilder.readModel(DefaultMavenProjectBuilder.java:1606)
... 34 more
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 11 seconds
[INFO] Finished at: Fri May 21 20:28:23 BST 2010
[INFO] Final Memory: 45M/205M
[INFO] ------------------------------------------------------------------------

    Read the article

  • Including an embedded framework using a cross-project-reference: Header no such file or directory

    - by d11wtq
I'm trying to create a Cocoa framework by using a cross-project reference in Xcode. I have 2 projects: one for the framework; one for the application that will use the framework. This framework is not intended to be stored within the system; it is an embedded framework that lives within the application bundle. I have successfully made the cross-project reference, marked the framework as being a dependency of my target, added a Copy Files build phase that puts the framework in Contents/Frameworks/ and added the framework to the linker phase (I checked the little "Target" checkbox; I've also done it manually by dragging the framework into the linker phase). My framework's install directory is correctly set to @executable_path/../Frameworks. However, when I try to build my app it: a) Correctly builds the framework first b) Correctly copies the framework c) Errors because it cannot find the master header file in my framework. I have verified that the header is there. I can see it in the app product that is partially built. ls build/Debug/CioccolataTest.webapp/Contents/Frameworks/Cioccolata.framework/Headers/Cioccolata.h build/Debug/CioccolataTest.webapp/Contents/Frameworks/Cioccolata.framework/Headers/Cioccolata.h I have been able to successfully build the app by copying my framework into /Library/Frameworks (I can then delete it again after the successful build), but this is a workaround; I'm looking to find out why Xcode doesn't find the framework's master header file without it being copied to a system directory. Is copying it to the app bundle during the build not sufficient? Here's the full build transcript if it's any help (it's just a Hello World app right now, so not much going on here): Build Cioccolata of project Cioccolata with configuration Debug SymLink /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/Current A cd /Users/chris/Projects/Mac/Cioccolata /bin/ln -sf A /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/Current SymLink /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Resources Versions/Current/Resources cd /Users/chris/Projects/Mac/Cioccolata /bin/ln -sf Versions/Current/Resources /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Resources SymLink /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Headers Versions/Current/Headers cd /Users/chris/Projects/Mac/Cioccolata /bin/ln -sf Versions/Current/Headers /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Headers SymLink /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Cioccolata Versions/Current/Cioccolata cd /Users/chris/Projects/Mac/Cioccolata /bin/ln -sf Versions/Current/Cioccolata /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Cioccolata ProcessInfoPlistFile /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Resources/Info.plist Info.plist cd /Users/chris/Projects/Mac/Cioccolata builtin-infoPlistUtility Info.plist -expandbuildsettings -platform macosx -o /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Resources/Info.plist CpHeader build/Debug/Cioccolata.framework/Versions/A/Headers/CWHelloWorld.h CWHelloWorld.h cd /Users/chris/Projects/Mac/Cioccolata /Developer/Library/PrivateFrameworks/DevToolsCore.framework/Resources/pbxcp -exclude .DS_Store -exclude CVS -exclude .svn -resolve-src-symlinks /Users/chris/Projects/Mac/Cioccolata/CWHelloWorld.h
/Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Headers CpHeader build/Debug/Cioccolata.framework/Versions/A/Headers/Cioccolata.h Cioccolata.h cd /Users/chris/Projects/Mac/Cioccolata /Developer/Library/PrivateFrameworks/DevToolsCore.framework/Resources/pbxcp -exclude .DS_Store -exclude CVS -exclude .svn -resolve-src-symlinks /Users/chris/Projects/Mac/Cioccolata/Cioccolata.h /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Headers CopyStringsFile /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Resources/English.lproj/InfoPlist.strings English.lproj/InfoPlist.strings cd /Users/chris/Projects/Mac/Cioccolata setenv ICONV /usr/bin/iconv /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copystrings --validate --inputencoding utf-8 --outputencoding UTF-16 English.lproj/InfoPlist.strings --outdir /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Resources/English.lproj ProcessPCH /var/folders/Xy/Xy-bvnxtFpiYBQPED0dK1++++TI/-Caches-/com.apple.Xcode.501/SharedPrecompiledHeaders/Cioccolata_Prefix-dololiigmwjzkgenggebqtpvbauu/Cioccolata_Prefix.pch.gch Cioccolata_Prefix.pch normal i386 objective-c com.apple.compilers.gcc.4_2 cd /Users/chris/Projects/Mac/Cioccolata setenv LANG en_US.US-ASCII /Developer/usr/bin/gcc-4.2 -x objective-c-header -arch i386 -fmessage-length=0 -pipe -std=gnu99 -Wno-trigraphs -fpascal-strings -fasm-blocks -O0 -Wreturn-type -Wunused-variable -isysroot /Developer/SDKs/MacOSX10.5.sdk -mfix-and-continue -mmacosx-version-min=10.5 -gdwarf-2 -iquote /Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-generated-files.hmap -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-own-target-headers.hmap -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-all-target-headers.hmap -iquote /Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-project-headers.hmap -F/Users/chris/Projects/Mac/Cioccolata/build/Debug -I/Users/chris/Projects/Mac/Cioccolata/build/Debug/include -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/DerivedSources/i386 -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/DerivedSources -c /Users/chris/Projects/Mac/Cioccolata/Cioccolata_Prefix.pch -o /var/folders/Xy/Xy-bvnxtFpiYBQPED0dK1++++TI/-Caches-/com.apple.Xcode.501/SharedPrecompiledHeaders/Cioccolata_Prefix-dololiigmwjzkgenggebqtpvbauu/Cioccolata_Prefix.pch.gch CompileC build/Cioccolata.build/Debug/Cioccolata.build/Objects-normal/i386/CWHelloWorld.o /Users/chris/Projects/Mac/Cioccolata/CWHelloWorld.m normal i386 objective-c com.apple.compilers.gcc.4_2 cd /Users/chris/Projects/Mac/Cioccolata setenv LANG en_US.US-ASCII /Developer/usr/bin/gcc-4.2 -x objective-c -arch i386 -fmessage-length=0 -pipe -std=gnu99 -Wno-trigraphs -fpascal-strings -fasm-blocks -O0 -Wreturn-type -Wunused-variable -isysroot /Developer/SDKs/MacOSX10.5.sdk -mfix-and-continue -mmacosx-version-min=10.5 -gdwarf-2 -iquote /Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-generated-files.hmap -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-own-target-headers.hmap -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-all-target-headers.hmap 
-iquote /Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-project-headers.hmap -F/Users/chris/Projects/Mac/Cioccolata/build/Debug -I/Users/chris/Projects/Mac/Cioccolata/build/Debug/include -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/DerivedSources/i386 -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/DerivedSources -include /var/folders/Xy/Xy-bvnxtFpiYBQPED0dK1++++TI/-Caches-/com.apple.Xcode.501/SharedPrecompiledHeaders/Cioccolata_Prefix-dololiigmwjzkgenggebqtpvbauu/Cioccolata_Prefix.pch -c /Users/chris/Projects/Mac/Cioccolata/CWHelloWorld.m -o /Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Objects-normal/i386/CWHelloWorld.o Ld /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Cioccolata normal i386 cd /Users/chris/Projects/Mac/Cioccolata setenv MACOSX_DEPLOYMENT_TARGET 10.5 /Developer/usr/bin/gcc-4.2 -arch i386 -dynamiclib -isysroot /Developer/SDKs/MacOSX10.5.sdk -L/Users/chris/Projects/Mac/Cioccolata/build/Debug -F/Users/chris/Projects/Mac/Cioccolata/build/Debug -filelist /Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Objects-normal/i386/Cioccolata.LinkFileList -install_name @executable_path/../Frameworks/Cioccolata.framework/Versions/A/Cioccolata -mmacosx-version-min=10.5 -framework Foundation -single_module -compatibility_version 1 -current_version 1 -o /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Cioccolata Touch /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework cd /Users/chris/Projects/Mac/Cioccolata /usr/bin/touch -c /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework Build CioccolataTest of project CioccolataTest with configuration Debug ProcessInfoPlistFile /Users/chris/Projects/Mac/CioccolataTest/build/Debug/CioccolataTest.webapp/Contents/Info.plist Info.plist cd /Users/chris/Projects/Mac/CioccolataTest builtin-infoPlistUtility Info.plist -expandbuildsettings -platform macosx -o /Users/chris/Projects/Mac/CioccolataTest/build/Debug/CioccolataTest.webapp/Contents/Info.plist PBXCp build/Debug/CioccolataTest.webapp/Contents/Frameworks/Cioccolata.framework /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework cd /Users/chris/Projects/Mac/CioccolataTest /Developer/Library/PrivateFrameworks/DevToolsCore.framework/Resources/pbxcp -exclude .DS_Store -exclude CVS -exclude .svn -resolve-src-symlinks /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework /Users/chris/Projects/Mac/CioccolataTest/build/Debug/CioccolataTest.webapp/Contents/Frameworks CopyStringsFile /Users/chris/Projects/Mac/CioccolataTest/build/Debug/CioccolataTest.webapp/Contents/Resources/English.lproj/InfoPlist.strings English.lproj/InfoPlist.strings cd /Users/chris/Projects/Mac/CioccolataTest setenv ICONV /usr/bin/iconv /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copystrings --validate --inputencoding utf-8 --outputencoding UTF-16 English.lproj/InfoPlist.strings --outdir /Users/chris/Projects/Mac/CioccolataTest/build/Debug/CioccolataTest.webapp/Contents/Resources/English.lproj CompileC build/CioccolataTest.build/Debug/CioccolataTest.build/Objects-normal/i386/main.o main.m normal i386 objective-c com.apple.compilers.gcc.4_2 cd /Users/chris/Projects/Mac/CioccolataTest setenv LANG en_US.US-ASCII /Developer/usr/bin/gcc-4.2 -x objective-c -arch i386 -fmessage-length=0 
-pipe -std=gnu99 -Wno-trigraphs -fpascal-strings -fasm-blocks -O0 -Wreturn-type -Wunused-variable -isysroot /Developer/SDKs/MacOSX10.5.sdk -mfix-and-continue -mmacosx-version-min=10.5 -gdwarf-2 -iquote /Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/CioccolataTest-generated-files.hmap -I/Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/CioccolataTest-own-target-headers.hmap -I/Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/CioccolataTest-all-target-headers.hmap -iquote /Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/CioccolataTest-project-headers.hmap -F/Users/chris/Projects/Mac/CioccolataTest/build/Debug -I/Users/chris/Projects/Mac/CioccolataTest/build/Debug/include -I/Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/DerivedSources/i386 -I/Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/DerivedSources -include /Users/chris/Projects/Mac/CioccolataTest/prefix.pch -c /Users/chris/Projects/Mac/CioccolataTest/main.m -o /Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/Objects-normal/i386/main.o In file included from <command-line>:0: /Users/chris/Projects/Mac/CioccolataTest/prefix.pch:13:35: error: Cioccolata/Cioccolata.h: No such file or directory /Users/chris/Projects/Mac/CioccolataTest/main.m: In function 'main': /Users/chris/Projects/Mac/CioccolataTest/main.m:13: error: 'CWHelloWorld' undeclared (first use in this function) /Users/chris/Projects/Mac/CioccolataTest/main.m:13: error: (Each undeclared identifier is reported only once /Users/chris/Projects/Mac/CioccolataTest/main.m:13: error: for each function it appears in.) /Users/chris/Projects/Mac/CioccolataTest/main.m:13: error: 'hello' undeclared (first use in this function)

    Read the article

  • Custom filtering in Android using ArrayAdapter

    - by Alxandr
    I'm trying to filter my ListView which is populated with this ArrayAdapter: package me.alxandr.android.mymir.adapters; import java.util.ArrayList; import java.util.Collection; import java.util.Collections; import java.util.HashMap; import java.util.Iterator; import java.util.Set; import me.alxandr.android.mymir.R; import me.alxandr.android.mymir.model.Manga; import android.content.Context; import android.util.Log; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.ArrayAdapter; import android.widget.Filter; import android.widget.SectionIndexer; import android.widget.TextView; public class MangaListAdapter extends ArrayAdapter<Manga> implements SectionIndexer { public ArrayList<Manga> items; public ArrayList<Manga> filtered; private Context context; private HashMap<String, Integer> alphaIndexer; private String[] sections = new String[0]; private Filter filter; private boolean enableSections; public MangaListAdapter(Context context, int textViewResourceId, ArrayList<Manga> items, boolean enableSections) { super(context, textViewResourceId, items); this.filtered = items; this.items = filtered; this.context = context; this.filter = new MangaNameFilter(); this.enableSections = enableSections; if(enableSections) { alphaIndexer = new HashMap<String, Integer>(); for(int i = items.size() - 1; i >= 0; i--) { Manga element = items.get(i); String firstChar = element.getName().substring(0, 1).toUpperCase(); if(firstChar.charAt(0) > 'Z' || firstChar.charAt(0) < 'A') firstChar = "@"; alphaIndexer.put(firstChar, i); } Set<String> keys = alphaIndexer.keySet(); Iterator<String> it = keys.iterator(); ArrayList<String> keyList = new ArrayList<String>(); while(it.hasNext()) keyList.add(it.next()); Collections.sort(keyList); sections = new String[keyList.size()]; keyList.toArray(sections); } } @Override public View getView(int position, View convertView, ViewGroup parent) { View v = convertView; if(v == null) { LayoutInflater vi = (LayoutInflater)context.getSystemService(Context.LAYOUT_INFLATER_SERVICE); v = vi.inflate(R.layout.mangarow, null); } Manga o = items.get(position); if(o != null) { TextView tt = (TextView) v.findViewById(R.id.MangaRow_MangaName); TextView bt = (TextView) v.findViewById(R.id.MangaRow_MangaExtra); if(tt != null) tt.setText(o.getName()); if(bt != null) bt.setText(o.getLastUpdated() + " - " + o.getLatestChapter()); if(enableSections && getSectionForPosition(position) != getSectionForPosition(position + 1)) { TextView h = (TextView) v.findViewById(R.id.MangaRow_Header); h.setText(sections[getSectionForPosition(position)]); h.setVisibility(View.VISIBLE); } else { TextView h = (TextView) v.findViewById(R.id.MangaRow_Header); h.setVisibility(View.GONE); } } return v; } @Override public void notifyDataSetInvalidated() { if(enableSections) { for (int i = items.size() - 1; i >= 0; i--) { Manga element = items.get(i); String firstChar = element.getName().substring(0, 1).toUpperCase(); if(firstChar.charAt(0) > 'Z' || firstChar.charAt(0) < 'A') firstChar = "@"; alphaIndexer.put(firstChar, i); } Set<String> keys = alphaIndexer.keySet(); Iterator<String> it = keys.iterator(); ArrayList<String> keyList = new ArrayList<String>(); while (it.hasNext()) { keyList.add(it.next()); } Collections.sort(keyList); sections = new String[keyList.size()]; keyList.toArray(sections); super.notifyDataSetInvalidated(); } } public int getPositionForSection(int section) { if(!enableSections) return 0; String letter = sections[section]; return 
alphaIndexer.get(letter); } public int getSectionForPosition(int position) { if(!enableSections) return 0; int prevIndex = 0; for(int i = 0; i < sections.length; i++) { if(getPositionForSection(i) > position && prevIndex <= position) { prevIndex = i; break; } prevIndex = i; } return prevIndex; } public Object[] getSections() { return sections; } @Override public Filter getFilter() { if(filter == null) filter = new MangaNameFilter(); return filter; } private class MangaNameFilter extends Filter { @Override protected FilterResults performFiltering(CharSequence constraint) { // NOTE: this function is *always* called from a background thread, and // not the UI thread. constraint = constraint.toString().toLowerCase(); FilterResults result = new FilterResults(); if(constraint != null && constraint.toString().length() > 0) { ArrayList<Manga> filt = new ArrayList<Manga>(); ArrayList<Manga> lItems = new ArrayList<Manga>(); synchronized (items) { Collections.copy(lItems, items); } for(int i = 0, l = lItems.size(); i < l; i++) { Manga m = lItems.get(i); if(m.getName().toLowerCase().contains(constraint)) filt.add(m); } result.count = filt.size(); result.values = filt; } else { synchronized(items) { result.values = items; result.count = items.size(); } } return result; } @SuppressWarnings("unchecked") @Override protected void publishResults(CharSequence constraint, FilterResults results) { // NOTE: this function is *always* called from the UI thread. filtered = (ArrayList<Manga>)results.values; notifyDataSetChanged(); } } } However, when I call filter('test') on the filter, nothing happens at all (or the background thread runs, but the list isn't filtered as far as the user is concerned). How can I fix this?

    Read the article

  • How LINQ to Object statements work

    - by rajbk
This post goes into detail as to how LINQ statements work when querying a collection of objects. This topic assumes you have an understanding of how generics, delegates, implicitly typed variables, lambda expressions, object/collection initializers, extension methods and the yield statement work. I would also recommend you read my previous two posts:
Using Delegates in C# Part 1
Using Delegates in C# Part 2
We will start by writing some methods to filter a collection of data. Assume we have an Employee class like so:
1: public class Employee {
2: public int ID { get; set;}
3: public string FirstName { get; set;}
4: public string LastName {get; set;}
5: public string Country { get; set; }
6: }
and a collection of employees like so:
1: var employees = new List<Employee> {
2: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" },
3: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" },
4: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" },
5: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" },
6: };
Filtering
We wish to find all employees that have an even ID. We could start off by writing a method that takes in a list of employees and returns a filtered list of employees with an even ID.
1: static List<Employee> GetEmployeesWithEvenID(List<Employee> employees) {
2: var filteredEmployees = new List<Employee>();
3: foreach (Employee emp in employees) {
4: if (emp.ID % 2 == 0) {
5: filteredEmployees.Add(emp);
6: }
7: }
8: return filteredEmployees;
9: }
The method can be rewritten to return an IEnumerable<Employee> using the yield return keyword.
1: static IEnumerable<Employee> GetEmployeesWithEvenID(IEnumerable<Employee> employees) {
2: foreach (Employee emp in employees) {
3: if (emp.ID % 2 == 0) {
4: yield return emp;
5: }
6: }
7: }
We put these together in a console application.
1: using System;
2: using System.Collections.Generic;
3: //No System.Linq
4:
5: public class Program
6: {
7: [STAThread]
8: static void Main(string[] args)
9: {
10: var employees = new List<Employee> {
11: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" },
12: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" },
13: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" },
14: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" },
15: };
16: var filteredEmployees = GetEmployeesWithEvenID(employees);
17:
18: foreach (Employee emp in filteredEmployees) {
19: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}",
20: emp.ID, emp.FirstName, emp.LastName, emp.Country);
21: }
22:
23: Console.ReadLine();
24: }
25:
26: static IEnumerable<Employee> GetEmployeesWithEvenID(IEnumerable<Employee> employees) {
27: foreach (Employee emp in employees) {
28: if (emp.ID % 2 == 0) {
29: yield return emp;
30: }
31: }
32: }
33: }
34:
35: public class Employee {
36: public int ID { get; set;}
37: public string FirstName { get; set;}
38: public string LastName {get; set;}
39: public string Country { get; set; }
40: }
Output:
ID 2 First_Name Jim Last_Name Ashlock Country UK
ID 4 First_Name Jill Last_Name Anderson Country AUS
Our filtering method is too specific. Let us change it so that it is capable of doing different types of filtering, and let's give our method the name Where ;-) We will add another parameter to our Where method.
This additional parameter will be a delegate with the following declaration. public delegate bool Filter(Employee emp); The idea is that the delegate parameter in our Where method will point to a method that contains the logic to do our filtering thereby freeing our Where method from any dependency. The method is shown below: 1: static IEnumerable<Employee> Where(IEnumerable<Employee> employees, Filter filter) { 2: foreach (Employee emp in employees) { 3: if (filter(emp)) { 4: yield return emp; 5: } 6: } 7: } Making the change to our app, we create a new instance of the Filter delegate on line 14 with a target set to the method EmployeeHasEvenId. Running the code will produce the same output. 1: public delegate bool Filter(Employee emp); 2:  3: public class Program 4: { 5: [STAThread] 6: static void Main(string[] args) 7: { 8: var employees = new List<Employee> { 9: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 10: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 11: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 12: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 13: }; 14: var filterDelegate = new Filter(EmployeeHasEvenId); 15: var filteredEmployees = Where(employees, filterDelegate); 16:  17: foreach (Employee emp in filteredEmployees) { 18: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 19: emp.ID, emp.FirstName, emp.LastName, emp.Country); 20: } 21: Console.ReadLine(); 22: } 23: 24: static bool EmployeeHasEvenId(Employee emp) { 25: return emp.ID % 2 == 0; 26: } 27: 28: static IEnumerable<Employee> Where(IEnumerable<Employee> employees, Filter filter) { 29: foreach (Employee emp in employees) { 30: if (filter(emp)) { 31: yield return emp; 32: } 33: } 34: } 35: } 36:  37: public class Employee { 38: public int ID { get; set;} 39: public string FirstName { get; set;} 40: public string LastName {get; set;} 41: public string Country { get; set; } 42: } Lets use lambda expressions to inline the contents of the EmployeeHasEvenId method in place of the method. The next code snippet shows this change (see line 15).  For brevity, the Employee class declaration has been skipped. 1: public delegate bool Filter(Employee emp); 2:  3: public class Program 4: { 5: [STAThread] 6: static void Main(string[] args) 7: { 8: var employees = new List<Employee> { 9: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 10: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 11: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 12: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 13: }; 14: var filterDelegate = new Filter(EmployeeHasEvenId); 15: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 16:  17: foreach (Employee emp in filteredEmployees) { 18: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 19: emp.ID, emp.FirstName, emp.LastName, emp.Country); 20: } 21: Console.ReadLine(); 22: } 23: 24: static bool EmployeeHasEvenId(Employee emp) { 25: return emp.ID % 2 == 0; 26: } 27: 28: static IEnumerable<Employee> Where(IEnumerable<Employee> employees, Filter filter) { 29: foreach (Employee emp in employees) { 30: if (filter(emp)) { 31: yield return emp; 32: } 33: } 34: } 35: } 36:  The output displays the same two employees.  
Our Where method is too restricted since it works with a collection of Employees only. Lets change it so that it works with any IEnumerable<T>. In addition, you may recall from my previous post,  that .NET 3.5 comes with a lot of predefined delegates including public delegate TResult Func<T, TResult>(T arg); We will get rid of our Filter delegate and use the one above instead. We apply these two changes to our code. 1: public class Program 2: { 3: [STAThread] 4: static void Main(string[] args) 5: { 6: var employees = new List<Employee> { 7: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 8: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 9: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 10: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 11: }; 12:  13: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 14:  15: foreach (Employee emp in filteredEmployees) { 16: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 17: emp.ID, emp.FirstName, emp.LastName, emp.Country); 18: } 19: Console.ReadLine(); 20: } 21: 22: static IEnumerable<T> Where<T>(IEnumerable<T> source, Func<T, bool> filter) { 23: foreach (var x in source) { 24: if (filter(x)) { 25: yield return x; 26: } 27: } 28: } 29: } We have successfully implemented a way to filter any IEnumerable<T> based on a  filter criteria. Projection Now lets enumerate on the items in the IEnumerable<Employee> we got from the Where method and copy them into a new IEnumerable<EmployeeFormatted>. The EmployeeFormatted class will only have a FullName and ID property. 1: public class EmployeeFormatted { 2: public int ID { get; set; } 3: public string FullName {get; set;} 4: } We could “project” our existing IEnumerable<Employee> into a new collection of IEnumerable<EmployeeFormatted> with the help of a new method. We will call this method Select ;-) 1: static IEnumerable<EmployeeFormatted> Select(IEnumerable<Employee> employees) { 2: foreach (var emp in employees) { 3: yield return new EmployeeFormatted { 4: ID = emp.ID, 5: FullName = emp.LastName + ", " + emp.FirstName 6: }; 7: } 8: } The changes are applied to our app. 
1: public class Program 2: { 3: [STAThread] 4: static void Main(string[] args) 5: { 6: var employees = new List<Employee> { 7: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 8: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 9: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 10: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 11: }; 12:  13: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 14: var formattedEmployees = Select(filteredEmployees); 15:  16: foreach (EmployeeFormatted emp in formattedEmployees) { 17: Console.WriteLine("ID {0} Full_Name {1}", 18: emp.ID, emp.FullName); 19: } 20: Console.ReadLine(); 21: } 22:  23: static IEnumerable<T> Where<T>(IEnumerable<T> source, Func<T, bool> filter) { 24: foreach (var x in source) { 25: if (filter(x)) { 26: yield return x; 27: } 28: } 29: } 30: 31: static IEnumerable<EmployeeFormatted> Select(IEnumerable<Employee> employees) { 32: foreach (var emp in employees) { 33: yield return new EmployeeFormatted { 34: ID = emp.ID, 35: FullName = emp.LastName + ", " + emp.FirstName 36: }; 37: } 38: } 39: } 40:  41: public class Employee { 42: public int ID { get; set;} 43: public string FirstName { get; set;} 44: public string LastName {get; set;} 45: public string Country { get; set; } 46: } 47:  48: public class EmployeeFormatted { 49: public int ID { get; set; } 50: public string FullName {get; set;} 51: } Output: ID 2 Full_Name Ashlock, Jim ID 4 Full_Name Anderson, Jill We have successfully selected employees who have an even ID and then shaped our data with the help of the Select method so that the final result is an IEnumerable<EmployeeFormatted>.  Lets make our Select method more generic so that the user is given the freedom to shape what the output would look like. We can do this, like before, with lambda expressions. Our Select method is changed to accept a delegate as shown below. TSource will be the type of data that comes in and TResult will be the type the user chooses (shape of data) as returned from the selector delegate. 1:  2: static IEnumerable<TResult> Select<TSource, TResult>(IEnumerable<TSource> source, Func<TSource, TResult> selector) { 3: foreach (var x in source) { 4: yield return selector(x); 5: } 6: } We see the new changes to our app. On line 15, we use lambda expression to specify the shape of the data. In this case the shape will be of type EmployeeFormatted. 
1:  2: public class Program 3: { 4: [STAThread] 5: static void Main(string[] args) 6: { 7: var employees = new List<Employee> { 8: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 9: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 10: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 11: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 12: }; 13:  14: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 15: var formattedEmployees = Select(filteredEmployees, (emp) => 16: new EmployeeFormatted { 17: ID = emp.ID, 18: FullName = emp.LastName + ", " + emp.FirstName 19: }); 20:  21: foreach (EmployeeFormatted emp in formattedEmployees) { 22: Console.WriteLine("ID {0} Full_Name {1}", 23: emp.ID, emp.FullName); 24: } 25: Console.ReadLine(); 26: } 27: 28: static IEnumerable<T> Where<T>(IEnumerable<T> source, Func<T, bool> filter) { 29: foreach (var x in source) { 30: if (filter(x)) { 31: yield return x; 32: } 33: } 34: } 35: 36: static IEnumerable<TResult> Select<TSource, TResult>(IEnumerable<TSource> source, Func<TSource, TResult> selector) { 37: foreach (var x in source) { 38: yield return selector(x); 39: } 40: } 41: } The code outputs the same result as before. On line 14 we filter our data and on line 15 we project our data. What if we wanted to be more expressive and concise? We could combine both line 14 and 15 into one line as shown below. Assuming you had to perform several operations like this on our collection, you would end up with some very unreadable code! 1: var formattedEmployees = Select(Where(employees, emp => emp.ID % 2 == 0), (emp) => 2: new EmployeeFormatted { 3: ID = emp.ID, 4: FullName = emp.LastName + ", " + emp.FirstName 5: }); A cleaner way to write this would be to give the appearance that the Select and Where methods were part of the IEnumerable<T>. This is exactly what extension methods give us. Extension methods have to be defined in a static class. Let us make the Select and Where extension methods on IEnumerable<T> 1: public static class MyExtensionMethods { 2: static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> filter) { 3: foreach (var x in source) { 4: if (filter(x)) { 5: yield return x; 6: } 7: } 8: } 9: 10: static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector) { 11: foreach (var x in source) { 12: yield return selector(x); 13: } 14: } 15: } The creation of the extension method makes the syntax much cleaner as shown below. We can write as many extension methods as we want and keep on chaining them using this technique. 1: var formattedEmployees = employees 2: .Where(emp => emp.ID % 2 == 0) 3: .Select (emp => new EmployeeFormatted { ID = emp.ID, FullName = emp.LastName + ", " + emp.FirstName }); Making these changes and running our code produces the same result. 
1: using System; 2: using System.Collections.Generic; 3:  4: public class Program 5: { 6: [STAThread] 7: static void Main(string[] args) 8: { 9: var employees = new List<Employee> { 10: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 11: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 12: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 13: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 14: }; 15:  16: var formattedEmployees = employees 17: .Where(emp => emp.ID % 2 == 0) 18: .Select (emp => 19: new EmployeeFormatted { 20: ID = emp.ID, 21: FullName = emp.LastName + ", " + emp.FirstName 22: } 23: ); 24:  25: foreach (EmployeeFormatted emp in formattedEmployees) { 26: Console.WriteLine("ID {0} Full_Name {1}", 27: emp.ID, emp.FullName); 28: } 29: Console.ReadLine(); 30: } 31: } 32:  33: public static class MyExtensionMethods { 34: static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> filter) { 35: foreach (var x in source) { 36: if (filter(x)) { 37: yield return x; 38: } 39: } 40: } 41: 42: static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector) { 43: foreach (var x in source) { 44: yield return selector(x); 45: } 46: } 47: } 48:  49: public class Employee { 50: public int ID { get; set;} 51: public string FirstName { get; set;} 52: public string LastName {get; set;} 53: public string Country { get; set; } 54: } 55:  56: public class EmployeeFormatted { 57: public int ID { get; set; } 58: public string FullName {get; set;} 59: } Let’s change our code to return a collection of anonymous types and get rid of the EmployeeFormatted type. We see that the code produces the same output. 1: using System; 2: using System.Collections.Generic; 3:  4: public class Program 5: { 6: [STAThread] 7: static void Main(string[] args) 8: { 9: var employees = new List<Employee> { 10: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 11: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 12: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 13: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 14: }; 15:  16: var formattedEmployees = employees 17: .Where(emp => emp.ID % 2 == 0) 18: .Select (emp => 19: new { 20: ID = emp.ID, 21: FullName = emp.LastName + ", " + emp.FirstName 22: } 23: ); 24:  25: foreach (var emp in formattedEmployees) { 26: Console.WriteLine("ID {0} Full_Name {1}", 27: emp.ID, emp.FullName); 28: } 29: Console.ReadLine(); 30: } 31: } 32:  33: public static class MyExtensionMethods { 34: public static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> filter) { 35: foreach (var x in source) { 36: if (filter(x)) { 37: yield return x; 38: } 39: } 40: } 41: 42: public static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector) { 43: foreach (var x in source) { 44: yield return selector(x); 45: } 46: } 47: } 48:  49: public class Employee { 50: public int ID { get; set;} 51: public string FirstName { get; set;} 52: public string LastName {get; set;} 53: public string Country { get; set; } 54: } To be more expressive, C# allows us to write our extension method calls as a query expression. 
Line 16 can be rewritten as a query expression like so:
1: var formattedEmployees = from emp in employees
2: where emp.ID % 2 == 0
3: select new {
4: ID = emp.ID,
5: FullName = emp.LastName + ", " + emp.FirstName
6: };
When the compiler encounters an expression like the above, it simply rewrites it as calls to our extension methods. So far we have been using our own extension methods. The System.Linq namespace contains several extension methods for objects that implement IEnumerable<T>. You can see a listing of these methods in the Enumerable class in the System.Linq namespace. Let's get rid of our extension methods (which I purposefully wrote to be of the same signature as the ones in the Enumerable class) and use the ones provided in the Enumerable class. Our final code is shown below:
1: using System;
2: using System.Collections.Generic;
3: using System.Linq; //Added
4:
5: public class Program
6: {
7: [STAThread]
8: static void Main(string[] args)
9: {
10: var employees = new List<Employee> {
11: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" },
12: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" },
13: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" },
14: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" }
15: };
16:
17: var formattedEmployees = from emp in employees
18: where emp.ID % 2 == 0
19: select new {
20: ID = emp.ID,
21: FullName = emp.LastName + ", " + emp.FirstName
22: };
23:
24: foreach (var emp in formattedEmployees) {
25: Console.WriteLine("ID {0} Full_Name {1}",
26: emp.ID, emp.FullName);
27: }
28: Console.ReadLine();
29: }
30: }
31:
32: public class Employee {
33: public int ID { get; set;}
34: public string FirstName { get; set;}
35: public string LastName {get; set;}
36: public string Country { get; set; }
37: }
38:
39: public class EmployeeFormatted {
40: public int ID { get; set; }
41: public string FullName {get; set;}
42: }
This post has shown you a basic overview of how LINQ to Objects works by showing you how an expression is converted to a sequence of calls to extension methods when working directly with objects. It gets more interesting when working with LINQ to SQL, where an expression tree is constructed - an in-memory data representation of the expression. The C# compiler compiles these expressions into code that builds an expression tree at runtime. The provider can then traverse the expression tree and generate the appropriate SQL query. You can read more about expression trees in this MSDN article.
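To make that last point concrete, here is a small sketch (mine, not from the original post) contrasting a lambda compiled to an ordinary delegate with the same lambda compiled to an expression tree:

using System;
using System.Linq.Expressions;

public class ExpressionTreeDemo
{
    public static void Main()
    {
        // Compiled to IL: an opaque delegate that can only be invoked
        Func<int, bool> asDelegate = id => id % 2 == 0;

        // Compiled to code that builds a data structure at runtime;
        // a provider such as LINQ to SQL can traverse and translate it
        Expression<Func<int, bool>> asTree = id => id % 2 == 0;

        Console.WriteLine(asDelegate(4)); // True
        Console.WriteLine(asTree.Body);   // ((id % 2) == 0)

        // An expression tree can still be compiled into a delegate on demand
        Func<int, bool> compiled = asTree.Compile();
        Console.WriteLine(compiled(3));   // False
    }
}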

    Read the article

  • SQL Server 2012 - AlwaysOn

    - by Claus Jandausch
I was not just irritated, I was downright shocked - and for a brief moment speechless (which is rarely the case). Someone had just asked me "when Oracle would offer something comparable to AlwaysOn - and whether it would at all?" Had I landed in the wrong movie? I couldn't help but voice my displeasure and explain that the question normally runs the other way around. Granted - there may be some contentious points in a comparison between Oracle and SQL Server, where Oracle does not necessarily always have to come out ahead - but the topic of clustering for high availability (HA), disaster recovery (DR) and scalability is certainly not one of them. Afterwards I filed this experience away as an isolated incident that would surely never happen again. Until, shortly thereafter, I was taught better and heard exactly the same question a second time - this time even in an Exadata environment with an Oracle stretch cluster.

Once is never, but twice is once too often... True to this old motto, it was clear to me that this could not be left standing any longer. I have no idea how the Microsoft marketing department managed to make people suspect an innovative technology behind the AlwaysOn branding - but it apparently did its job well. Good marketing aside, however, the question naturally arises as to what is really behind it and how the whole thing compares to Oracle - and whether it compares at all? Which brings us back to the original question. So much for the background of this blog post - the rest of the blog is about my answer.

"Windows was the God ..."
To understand the real difference between Oracle and Microsoft, you first have to know the most important Microsoft dogma. It can be put plainly and simply: "Everything must be based on Windows." The heading of this paragraph is not a saying I invented, but a quote. Specifically, it comes from a longer article by Kurt Eichenwald in Vanity Fair from August 2012. It is titled Microsoft's Lost Decade, and I recommend it to anyone who wants to better understand the "Microsoft machinery" under Steve Ballmer and some of its curiosities.

"YOU TALKING TO ME?" Microsoft C.E.O. Steve Ballmer at his keynote at the 2012 International Consumer Electronics Show in Las Vegas on January 9

Some things in this article may appear exaggerated - but they are not. Much of it I already knew from my own experience and can only confirm; other parts only really fell into place for me now. The following passages in particular triggered the aha moment:

"Windows was the god—everything had to work with Windows," said Stone...
"Every little thing you want to write has to build off of Windows (or other existing products)," one software engineer said. "It can be very confusing, …"

I have always pointed out that in a SQL Server failover cluster the Microsoft database actually contributes nothing noteworthy to the proceedings, but instead relies entirely on the Windows operating system. That is also why you have to install the Windows Server Enterprise Edition if a failover cluster is to be set up for SQL Server: this is where the cluster services come from - not from SQL Server.
SQL Server is merely one more server product for which Windows can be used in failover scenarios - just like Microsoft Exchange, for instance, or Microsoft SharePoint, or any other server product hosted on Windows. Oracle can be run this way too; the keyword here is Oracle Failsafe. But why would you do that when a superior technology such as Oracle Real Application Clusters (RAC) is available at the same time - one that does not require the Windows Enterprise Edition either, since Oracle supplies its own clusterware, and that also delivers shorter failover times, because this cluster technology is integrated into the database and does not rely on a "third party"? So if a SQL Server failover cluster buys you no technical advantages, and on top of that you incur hidden license costs by having to license the Windows Server Enterprise Edition, why has Microsoft not likewise worked, in all the years since SQL Server 2000, on a new and innovative solution that can keep up with Oracle RAC? Microsoft has enough developers. It cannot be a question of money either. Just read the two quotes above once more and you will understand the reason. There is really no other way to explain why AlwaysOn consists of two different technologies, both of which are in turn based on Windows Server Failover Clustering (WSFC). Clear disadvantages follow from this - but more on that later.

To understand AlwaysOn, you should first briefly recall what Microsoft has provided so far in the way of HA/DR (high availability/disaster recovery) solutions for SQL Server.

Replication
Based on logical replication and a publisher/subscriber architecture:
- Transactional replication
- Merge replication
- Snapshot replication
Microsoft's replication is comparable to Oracle GoldenGate. Oracle GoldenGate, however, is the more comprehensive technology and offers high performance.

Log Shipping
Microsoft's log shipping is a simple technology, comparable to Oracle managed recovery in Oracle version 7. Log shipping has the following characteristics:
- Transaction log backups are shipped from the primary to the secondary/secondaries
- They are applied (e.g. restored) on each secondary individually
- An optional third server instance (monitor server) provides monitoring and alerting
- The log restore can be interrupted for read-only mode (secondary)
- Automatic failover is not supported

Database Mirroring
Microsoft's database mirroring became available with SQL Server 2005; it looked like Oracle Data Guard in Oracle9i but was functionally not as comprehensive. An HA/DR pair is a 1:1 relationship protecting the production database (principal DB). All insert, update and delete operations are replayed on the standby database (mirrored DB).
Modes:
- Synchronous (high-safety mode)
- Asynchronous (high-performance mode)
Automatic failover:
- Supported in high-safety mode (synchronous)
- Requires a witness server

On the question of continuity
The question is where these technologies now stand in connection with SQL Server 2012. Introduced with fanfare at the time, database mirroring was the declared tool of choice.
On the question of continuity

This raises the question of where these technologies stand now in connection with SQL Server 2012. Introduced with fanfare in its day, database mirroring was the declared tool of choice. I am not a product manager at Microsoft and can only offer my opinion here, but if you consult the SQL AlwaysOn team blog, things do not look good for database mirroring - at least not in the long term.

"Does AlwaysOn Availability Group replace Database Mirroring going forward?"

"The short answer is we recommend that you migrate from the mirroring configuration or even mirroring and log shipping configuration to using Availability Group. Database Mirroring will still be available in the Denali release but will be phased out over subsequent releases. Log Shipping will continue to be available in future releases."

That finally brings us to the actual topic. What is a so-called availability group, and what exactly is behind the promising-sounding AlwaysOn label?

SQL Server 2012 - AlwaysOn

Two HA features hide behind the "AlwaysOn" branding: on the one hand, AlwaysOn Failover Clustering, a.k.a. SQL Server Failover Cluster Instances (FCI); on the other, the AlwaysOn Availability Groups.

Failover Cluster Instances (FCI)
- Roughly corresponds to Oracle's stretch cluster concept
- Builds on Windows Server Failover Clustering (WSFC)
- Provides HA at the instance level

AlwaysOn Availability Groups
- Similar to the idea of consistency groups found in storage-level replication software, e.g. EMC SRDF
- Depends on Windows Server Failover Clustering (WSFC)
- Provides HA at the database level

Note: Do not confuse a SQL Server database with an Oracle database, nor an Oracle instance with a SQL Server instance. The same terms carry different meanings here - not infrequently the reason Oracle and Microsoft DBAs quickly talk past each other. Think of a SQL Server database as something closer to an Oracle schema; the SQL Server Northwind database, say, is comparable to the Oracle Scott schema. If you want to know the exact differences, you will find a detailed description in my book "Oracle10g Release 2 für Windows und .NET", available from Lehmanns, Amazon, etc.

Windows Server Failover Clustering (WSFC)

As you can see, both AlwaysOn technologies are in turn based on Windows Server Failover Clustering (WSFC) to provide high availability at the instance level on the one hand and at the database level on the other. Hence a short description of WSFC. WSFC is an infrastructure feature shipped with the Windows operating system to provide HA for server applications such as Microsoft Exchange, SharePoint, SQL Server, etc. Like any other cluster, a WSFC cluster consists of a group of independent servers working together to increase the availability of an application or service. If a cluster node or service fails, the service hosted on that node can be transferred automatically or manually to another node available in the cluster - commonly known as failover. Under SQL Server 2012, both the AlwaysOn Availability Groups and the AlwaysOn Failover Cluster Instances use WSFC as their platform technology, registering components as WSFC cluster resources. Related resources are combined into a resource group, which can be made dependent on other WSFC cluster resources.
The WSFC cluster service can then detect the need to restart the SQL Server instance, or trigger an automatic failover to another server node in the WSFC cluster.

Failover Cluster Instances (FCI)

A SQL Server Failover Cluster Instance (FCI) is a single SQL Server instance operated in a failover cluster consisting of several Windows Server Failover Clustering (WSFC) nodes, and it thus provides HA (High Availability) at the instance level. Using a multi-subnet FCI, remote DR (Disaster Recovery) can be supported as well. Another option for remote DR is to run a database hosted under an FCI in an availability group - more on this later.

FCI and the WSFC foundation

FCI, which is used for local high availability of instances, resembles the outdated architecture of a cold cluster (active-passive). Under SQL Server 2008 this technology was called SQL Server 2008 Failover Clustering, and it used the Windows Server Failover Cluster. In SQL Server 2012 Microsoft has folded this base technology under the AlwaysOn label - but it remains the classic active-passive configuration. In a failover, the sequence is as follows:

- Provided no hardware or system error has occurred, all dirty pages in the buffer cache are written to disk
- All corresponding SQL Server services in the resource group are stopped on the active node
- Ownership of the resource group is transferred to another node of the FCI
- The new owner of the resource group starts its SQL Server services
- Client application connection requests are automatically redirected to the new active node under the same virtual network name (VNN)

Depending on when the last checkpoint was taken, the number of dirty pages in the buffer cache still waiting to be written to disk can lead to unpredictably long failover times. To throttle that number, SQL Server 2012 has a new capability called indirect checkpoints. Indirect checkpoints resemble the Fast-Start MTTR Target feature of the Oracle database, which was already available with Oracle9i.
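Both knobs are one-liners. The following sketch (database name and target values purely illustrative) puts the new SQL Server 2012 option next to its much older Oracle counterpart:

-- SQL Server 2012: enable indirect checkpoints by setting a recovery-time target
ALTER DATABASE SalesDB SET TARGET_RECOVERY_TIME = 60 SECONDS;

-- Oracle (available since 9i): bound instance recovery time via fast-start checkpointing
ALTER SYSTEM SET FAST_START_MTTR_TARGET = 60;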
SQL Server Multi-Subnet Clustering

Conceptually, a SQL Server multi-subnet failover cluster corresponds to an Oracle RAC stretch cluster - but only at first glance. Unlike RAC, in a local SQL Server failover cluster only one node at a time is active for a given database. For data replication between geographically distant sites, Microsoft relies on third-party storage mirroring solutions. The improvement a SQL Server 2012 implementation brings to this scenario is simply that a VLAN (Virtual Local Area Network) configuration is no longer required, as it used to be. The following diagram shows how the process is handled with SQL Server 2012: in site A and site B, HA is ensured by a local active-passive cluster in each case.

Particular attention must be paid here to configuration and tuning, since otherwise completely unacceptable failover times result. The reason is that the downtime experienced on the client side no longer depends on the pure failover time alone, but additionally on the duration of DNS replication between the DNS servers (remember, we are talking about multi-subnet clustering). Also to be considered is how quickly the clients pick up the updated DNS information. Special configuration becomes necessary for the node heartbeat, for HostRecordTTL (host record time-to-live), and for the intersite replication frequency for Active Directory Sites and Services:

- Default TTL for Windows Server 2008 R2: 20 minutes; recommended setting: 1 minute
- DNS update replication frequency in a Windows environment: 180 minutes; recommended setting: 15 minutes (the minimum value)

Looking at these values, you have to conclude that even an optimal configuration cannot meet the rigid SLAs (Service Level Agreements) that today's business-critical applications set for HA and DR - it implies a client-side failover experience of 16 minutes in total. An excerpt from the SQL Server 2012 online documentation:

Cons: If a cross-subnet failover occurs, the client recovery time could be 15 minutes or longer, depending on your HostRecordTTL setting and the setting of your cross-site DNS/AD replication schedule.

We have now reached the point in our considerations that explains why I used the "Windows was the God ..." quotation earlier. The unconditional dependency on Windows increasingly becomes a problem, because it raises the complexity of a Microsoft-based solution instead of reducing it. And complexity is the last thing CIOs want these days. In fairness to SQL Server 2012 and AlwaysOn, failover times of this length are not a "must" but a "can". Yet even a "can" can have unforeseeable and costly consequences at the wrong moment. The unpredictability, in turn, stems from the many components involved in the implementation and their dependencies - for example, three cluster solutions (two from Microsoft, one third-party). However you turn it, there is no getting around this fact - quite independently of the length of any downtime or failover time. In contrast to AlwaysOn and the stretch cluster variant presented here, a corresponding Oracle implementation avoids such complexity born of multiple dependencies. The difference comes from database-integrated mechanisms such as Fast Application Notification (FAN) and Fast Connection Failover (FCF). For Oracle MAA configurations (Maximum Availability Architecture), inter-site failover times in the range of seconds are not uncommon. If you follow the link to the Oracle MAA, you will also find a number of customer case studies - another important differentiator from AlwaysOn, because the Oracle technology has already proven itself many times over in highly critical environments.

Availability Groups

The so-called availability groups are - alongside FCI - the other building block of AlwaysOn.

Note: Before we look at them more closely, recall once more that a SQL Server database does not mean the same thing as an Oracle database; it corresponds more to an Oracle schema. Something like the SQL Server Northwind database is comparable to the Oracle Scott schema.

An availability group is composed of a set of several user databases that are treated together as a group in the event of a failover.
An availability group supports one set of primary databases (the primary replica) and one to four sets of corresponding secondary databases (secondary replicas).

Not every SQL Server database can be placed in an AlwaysOn availability group, however. SQL Server specialist Michael Otey lists the following requirements in his SQL Server Pro article:

- Availability groups must be created with user databases; system databases cannot be used
- The databases must be in read-write mode; read-only databases are not supported
- The databases in an availability group must be multiuser databases
- They must not use the AUTO_CLOSE feature
- They must use the full recovery model, and a full backup must exist
- A given database can belong to only a single availability group, and that database must not be configured for database mirroring
- Microsoft also recommends that the directory path of a database be identical on the primary and secondary servers

As you can see, availability groups are not suited to covering HA and DR completely. The distinction between the instance level (FCI) and the database level (availability groups) is of great importance. I was recently told that availability groups let you do without shared storage and thereby save costs. So far, so good ... You can of course set up an installation purely with availability groups and without FCI - but you should then be aware of everything you have left unprotected, and what that in turn means for disaster recovery (DR) and SLAs (Service Level Agreements). In short, in practice there is probably no way around combining both AlwaysOn products, with all the complexity that entails.

Availability Groups and WSFC

AlwaysOn depends on Windows Server Failover Clustering (WSFC) to monitor and manage the current roles of the availability replicas in an availability group and to decide how a failover event affects them. The following diagram shows the relationship between availability groups and WSFC:

The availability mode is a property of each availability replica, so synchronous and asynchronous can be mixed:

Availability mode
- Asynchronous-commit mode: the primary replica completes transactions without waiting for the secondary
- Synchronous-commit mode: the primary replica waits for the commit of the secondary replica

Failover types
- Automatic, manual, or forced (with possible data loss)
- Synchronous-commit mode: planned manual failover without data loss; automatic failover without data loss
- Asynchronous-commit mode: only forced manual failover with possible data loss

SQL Server has no separate switchover concept as in Oracle Data Guard; for SQL Server, all role transitions are called failover. In fact, SQL Server supports no switchover for asynchronous connections - there is only the forced failover with possible data loss. A capability similar to switchover under Oracle Data Guard simply does not exist. The sketch below shows how these modes appear in a (hypothetical) availability group definition.
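To make the mode/failover matrix concrete, here is a hedged Transact-SQL sketch of an availability group definition. The group, database, and replica names are invented, and the WSFC cluster, mirroring endpoints, and database seeding on the secondaries are assumed to be in place already:

CREATE AVAILABILITY GROUP SalesAG
FOR DATABASE SalesDB
REPLICA ON
    'NODE1'  WITH (ENDPOINT_URL      = 'TCP://node1.example.com:5022',
                   AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                   FAILOVER_MODE     = AUTOMATIC),
    'NODE2'  WITH (ENDPOINT_URL      = 'TCP://node2.example.com:5022',
                   AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                   FAILOVER_MODE     = AUTOMATIC),
    'DRNODE' WITH (ENDPOINT_URL      = 'TCP://drnode.example.com:5022',
                   AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                   FAILOVER_MODE     = MANUAL);  -- async replicas cannot fail over automatically

-- Planned manual failover, run on a synchronized synchronous secondary - no data loss:
ALTER AVAILABILITY GROUP SalesAG FAILOVER;

-- The only role transition available to an asynchronous replica - possible data loss:
ALTER AVAILABILITY GROUP SalesAG FORCE_FAILOVER_ALLOW_DATA_LOSS;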
SQL Server FCI with Availability Groups

Alongside the availability groups, a second failover level can be set up by implementing SQL Server FCI (on shared storage) with WSFC. An availability replica can then be hosted either on a standalone instance or on an FCI instance. For clarity: the availability groups themselves require no shared storage. This combination can be used for local HA at the instance level and for DR at the database level through availability groups. The following diagram shows this scenario:

Attention! This is not a counterpart to Oracle RAC plus Data Guard, even if the picture may convey that impression - all secondary nodes in the FCI are purely passive. There is also another serious restriction: SQL Server Failover Cluster Instances (FCI) do not support automatic AlwaysOn failover for availability groups. Any availability replica hosted under an FCI can only be configured for manual failover.

Readable secondary replicas

One or more availability replicas in an availability group can be configured for read access while running as a secondary replica. This resembles Oracle Active Data Guard, but there are restrictions.
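Configuring the read access itself is straightforward - a minimal sketch, reusing the hypothetical names from above. Clients then reach the readable secondary by adding ApplicationIntent=ReadOnly to their connection string:

-- Allow read-only connections on NODE2 whenever it runs in the secondary role:
ALTER AVAILABILITY GROUP SalesAG
MODIFY REPLICA ON 'NODE2'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));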
The restrictions lie elsewhere. All queries against the secondary database are automatically mapped to the snapshot isolation level, which is based on row versioning. With it, Microsoft attempted to emulate Oracle's MVRC (Multi Version Read Consistency); in truth, SQL Server snapshot isolation is better compared with Oracle Flashback. The snapshot isolation level implementation is a feature bolted on after the fact, not an inherent part of the database kernel as in Oracle's case. (I will write another blog post on this shortly, when I look at the new SQL Server 2012 core licensing.) In practice, the mapping to the snapshot isolation level results in serious restrictions that you should be aware of before going into production:

- If an active transaction exists on the primary database at the moment a readable secondary replica joins the availability group, the row versions on the corresponding secondary database will not be fully available immediately. An active transaction on the primary replica must first complete (commit or rollback) and that transaction record must be processed on the secondary replica. Until then, the isolation level mapping on the secondary database is incomplete and queries are temporarily blocked. Microsoft's comment: "This is needed to guarantee that row versions are available on the secondary replica before executing the query under snapshot isolation as all isolation levels are implicitly mapped to snapshot isolation." (SQL Storage Engine Blog: AlwaysOn: I just enabled Readable Secondary but my query is blocked?) Fundamentally, this means an active readable replica cannot join the availability group without temporarily quiescing the primary replica.

- Because read operations are mapped to the snapshot isolation transaction level, the cleanup of ghost records on the primary replica can be blocked by transactions on one or more secondary replicas - for example, by a long-running query on the secondary replica. The cleanup is also blocked if the connection to the secondary replica drops or the data exchange is interrupted. Log truncation is likewise prevented in this state. If this state persists for a long time, Microsoft recommends removing the secondary replica from the availability group - which constitutes a serious downtime problem.

- The read-only workload on the secondary replicas can block incoming DDL changes. Although the read operations hold no shared locks thanks to row versioning, they do take Sch-S locks (schema stability locks), which can block DDL changes applied by redo operations. If DDL is blocked by competing read workload and the threshold of the 'recovery interval' setting (a SQL Server configuration option) is exceeded, SQL Server raises the event sqlserver.lock_redo_blocked, whereupon Microsoft recommends killing the blocking readers - with no regard whatsoever for the availability of the application.

None of these restrictions exists with Oracle Active Data Guard.

Backups on secondary replicas

On the secondary replicas, backups (BACKUP DATABASE via Transact-SQL) can be created only as copy-only backups of a full database, files, or filegroups. Creating incremental backups is not supported - a serious gap compared with the backup support for physical standbys under Oracle Data Guard. Note: a possible workaround via snapshots remains just that, a workaround. A further limitation of this feature compared with Oracle Data Guard is that the backup of a secondary replica cannot run if the replica cannot communicate with the primary replica; in addition, the secondary replica must be synchronized, or currently synchronizing, for the backup to be created on it.
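In Transact-SQL the limitation shows up directly in which BACKUP variants a secondary accepts - a sketch with invented names; as I read the documentation, full backups work only with COPY_ONLY there, while differential backups are rejected outright:

-- On a readable secondary replica: a full backup must be copy-only
BACKUP DATABASE SalesDB
    TO DISK = N'R:\backup\SalesDB_copyonly.bak'
    WITH COPY_ONLY;

-- Regular log backups are supported on a secondary as well:
BACKUP LOG SalesDB TO DISK = N'R:\backup\SalesDB.trn';

-- Differential backups are NOT supported on a secondary replica:
-- BACKUP DATABASE SalesDB TO DISK = ... WITH DIFFERENTIAL;  -- fails here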
Comparing Microsoft AlwaysOn with the Oracle MAA

I come back to the question mentioned at the beginning, put to me more than once - "When, if ever, will Oracle offer something comparable to AlwaysOn?" - and to my brief irritation at it. If you have read this blog post up to this point, you now know the answer I gave. Not every point applied to every questioner, nor was that the deeper purpose of my answer. If no multi-subnet setup is involved, for example, all criticism relating to it is moot for now - which does not mean it could not become relevant again tomorrow (never say never). Other pitfalls, again, you do not necessarily step into in a test environment, but only in live operation - especially if you are unaware of the potential problems and run no dedicated tests for them. And whoever wants to position AlwaysOn successfully will have no interest in drawing attention to possible weak points and the proverbial devil in the details. That is not an accusation - it is only human. It is also understandable to concentrate primarily on "what works" and "what runs well" instead of on "what can lead to problems" or "what doesn't work". Who wants to be the killjoy? Speaking for myself, I can only say that I would rather know about all possible restrictions in advance than have to experience them painfully first-hand after a short honeymoon. I am convinced you feel the same. Below, therefore, is a summary of all the points I consider absolutely worth mentioning in comparison with the Oracle MAA (Maximum Availability Architecture) if you are evaluating Microsoft AlwaysOn.

1. AlwaysOn is a complex technology

The SQL Server AlwaysOn stack is composed of three different technologies:
- Windows Server Failover Clustering (WSFC)
- SQL Server Failover Cluster Instances (FCI)
- SQL Server Availability Groups

A solution like this cannot be called seamless, as the many restrictions documented by Microsoft themselves attest. While earlier SQL Server versions were evolving towards HA/DR technologies of their own (such as database mirroring), Microsoft now recommends migrating away from them. Why this U-turn? It does not lead to a consistent and robust portfolio of HA/DR technology for business-critical environments. Does the answer lie in my thesis that "Windows was the God ..." still applies, and that nobody wants to see the drawbacks of the all-too-tight coupling with Windows? Decide for yourself ...

2. Failover Cluster Instances - no RAC counterpart

The SQL Server and Windows Server clustering technology is still based on the outdated active-passive model and wastes system resources. Looking at just two nodes, the full added value of an active-active cluster (such as Real Application Clusters), which Oracle developed ten years ago, is not immediately apparent. But once you know the benefits of scaling by simply adding further cluster nodes, which then all work together as a single logical system, you understand what is behind the motto "pay as you grow". An active-active cluster is about high availability too - and a failover completes faster than in an active-passive model - but it is about more than that. It is worth noting here that the Oracle 11g Standard Edition already includes the use of Oracle RAC on up to four sockets at no extra charge. If you want to run it on Windows, you do not need the Windows Server Enterprise Edition, since Oracle 11g ships its own clusterware: you get high availability and scalability and can use the cheaper Windows Server Standard Edition.

3. SQL Server Multi-Subnet Clustering - dependent on third-party storage mirroring

The SQL Server multi-subnet clustering architecture supports building a stretch cluster, but it is based on the active-passive model. The real problem, however, is that the database is protected by third-party storage mirroring technology, with no integration between Windows Server Failover Clustering (WSFC) and the underlying mirroring technology. If a failover occurs at the instance level in the cluster, there is no coordination with a possible failover at the storage array level.

4. Availability Groups - four, or really only two?

A primary replica allows up to four secondary replicas within an availability group, but only two in synchronous-commit mode.
While that is an advantage over the strict 1:1 model of database mirroring, it still leaves SQL Server 2012 far behind Oracle Data Guard with its up to 30 direct standby destinations - and many more possible through cascading destinations. That also makes Oracle Active Data Guard suitable for providing reader-farm scalability for internet-based businesses; with AlwaysOn availability groups this is not possible.

5. Availability Groups - no asynchronous switchover

Availability group technology is also positioned as a suitable tool for administrative tasks such as upgrades or maintenance. You must, however, be aware of a grave deficit: in asynchronous availability mode, the only option for a role transition is a forced failover with data loss! To avoid losing data through planned maintenance, you have to configure the synchronous availability mode, which in turn has serious implications for WAN deployments. Follow this thought to its conclusion and you arrive at the verdict that availability group technology cannot be used effectively for planned maintenance in such an environment.

6. Automatic failover - not always possible

Both SQL Server FCI and availability groups support automatic failover - but if you combine them, the result is no automatic failover at all. Their interplay in a failover situation leads to race conditions, which is why this configuration no longer permits an automatic failover to a replica in an availability group. Here again the deeper problem of AlwaysOn shows itself: a composite of different technologies plus the dependency on Windows.

7. Problematic RTO (Recovery Time Objective)

Microsoft positions the SQL Server multi-subnet clustering architecture as a viable HA/DR architecture. But considering the issues around DNS replication and possible client-side waits of up to 16 minutes, strict RTO requirements (Recovery Time Objectives) cannot be met. Unlike Oracle, SQL Server has no database-integrated technologies such as Oracle Fast Application Notification (FAN) or Oracle Fast Connection Failover (FCF).

8. Problematic RPO (Recovery Point Objective)

SQL Server permits forced failover, but offers no way to automatically transfer the last bits of data from an old to a new primary replica if the availability mode was asynchronous. Oracle Data Guard provides exactly this support through its flush redo feature, ensuring zero data loss and the best possible RPO even in forced failover situations.
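For completeness, a sketch of the Oracle side ('STBY' standing in for the DB_UNIQUE_NAME of an actual standby): if the failed primary can still be mounted, its unshipped redo can be salvaged before the failover completes:

-- On the old primary, mounted but not open (run in SQL*Plus):
STARTUP MOUNT;
ALTER SYSTEM FLUSH REDO TO 'STBY';  -- ships the remaining redo to the named standby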
9. Readable secondary replicas with restrictions

Because of the snapshot isolation transaction level used for readable secondary replicas, they carry restrictions that affect the primary database. The cleanup of ghost records on the primary database is influenced by long-running queries on the readable secondary. The readable secondary database cannot join the availability group while there are active transactions on the primary database. In addition, DDL changes on the primary database can be blocked by queries on the secondary, and incremental backups are not supported here.

None of these restrictions exists under Oracle Data Guard.

